I've told this story in the past, but since I'm new to this blog, it's worth repeating.
There is an inherent flaw in the IEEE floating-point representation of decimal fractions. This problem affects many languages, not just Visual Basic, and most people are astonished when they learn of it. First I will show you the problem, then I will show you how to work around it, followed by an explanation of why it happens.
In VB, open the Immediate window. Type the following:
? 2.07 * 100
? int (2.07 * 100)
Now type the following:
? int (2.07*100+.00001)
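For comparison, here is the same experiment in Python, chosen only to show that the problem is not VB-specific; the results in the VB Immediate window are analogous:

```python
# 2.07 has no exact binary representation, so the product falls
# just short of 207, and Int()-style truncation drops it to 206.
print(2.07 * 100)                 # prints 206.99999999999997
print(int(2.07 * 100))            # prints 206
print(int(2.07 * 100 + 0.00001))  # prints 207
```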
I often convert decimal numbers to "implied decimal" integers. Now that I've learned of this issue, I always add .00001 before converting to an integer.
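That workaround can be sketched as a small helper. This is my own illustration, not code from the post: the name `to_implied_decimal` and the default epsilon are assumptions, and the nudge only works for non-negative values.

```python
def to_implied_decimal(value, places=2, epsilon=0.00001):
    # Hypothetical helper: scale a float to an implied-decimal
    # integer, nudging up by a tiny epsilon so results that land
    # just below the true value (e.g. 206.999...) truncate to the
    # intended integer. Only safe for non-negative inputs, since
    # a negative value would need the nudge in the other direction.
    return int(value * 10 ** places + epsilon)

print(to_implied_decimal(2.07))   # prints 207
print(to_implied_decimal(19.99))  # prints 1999
```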
I know what your next question is: HOW and WHY does this happen?
The problem stems from computers using base 2 while we write numbers in base 10. Any decimal fraction that is not exactly representable in base 2 (such as .07) picks up a tiny rounding error when it is stored. Your choices are to accept some rounding error or to use a decimal encoding such as BCD, which stores each digit in 4 bits and wastes 6 of the 16 possible values per digit.
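Languages that need exact base-10 arithmetic usually offer a decimal type along these lines. Python's `decimal` module, used here purely as an illustration, stores base-10 digits and avoids the error entirely:

```python
from decimal import Decimal

# Constructing from the string "2.07" captures the exact decimal
# value, so scaling by 100 yields exactly 207 with no epsilon tricks.
print(Decimal("2.07") * 100)       # prints 207.00
print(int(Decimal("2.07") * 100))  # prints 207
```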