When it comes to performance, some developers jump to the conclusion that Byte and Short must be more efficient because they take up less room, but this is a common misconception. In most situations Int32 (int) is actually more efficient than both Int16 (short) and Byte (byte), because modern processors are optimized for working with 32- and 64-bit values. When a 16-bit short or an 8-bit byte is read, the processor typically reads a full 32-bit word anyway and then masks off the unused bits, so the smaller types can end up costing instructions rather than saving them.
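
This shows up in the language itself: C# promotes short and byte operands to int before doing arithmetic, so using the smaller types often forces extra narrowing casts. The following sketch is my own illustration (the class and variable names are arbitrary) of that effect:

    class PromotionDemo
    {
        static void Main()
        {
            int x = 5, y = 10;
            int intSum = x + y;               // straightforward 32-bit addition

            short a = 5, b = 10;
            short shortSum = (short)(a + b);  // a + b is evaluated as int, then narrowed back to short

            byte c = 5, d = 10;
            byte byteSum = (byte)(c + d);     // same promotion to int and narrowing for byte

            System.Console.WriteLine($"{intSum} {shortSum} {byteSum}");
        }
    }
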
Even so, there are situations where the smaller data types are beneficial, namely when memory consumption is critical to application performance. A typical example is an extremely large collection of integer values in which no value will ever need more than 8 or 16 bits. In that situation, the space saved may well be worth the extra instructions the processor executes to handle the narrower types. Because this is a special case, the general best practice is to use Int32 unless a smaller type is genuinely necessary.
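
As a rough sketch of the memory side of that trade-off (the class name, element count, and scenario are hypothetical), compare the footprint of ten million small values stored as int versus byte, roughly 4 bytes versus 1 byte per element:

    class ArraySizeDemo
    {
        static void Main()
        {
            const int count = 10_000_000;

            int[] asInts = new int[count];    // about 38 MB of element data (4 bytes each)
            byte[] asBytes = new byte[count]; // about 10 MB of element data (1 byte each)

            System.Console.WriteLine(
                $"int[]: ~{count * sizeof(int) / (1024 * 1024)} MB, " +
                $"byte[]: ~{count * sizeof(byte) / (1024 * 1024)} MB");
        }
    }
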