Question from Doran: "How can 2 chars allocate to an unsigned short? It just doesn't make sense to me.
I've heard about NUXI a few times before, and I still can't get it. Can you please explain it for me (even in C)?"
On a typical 32-bit computer, a short is composed of 16 bits (2 bytes). To set its value, you can specify the short using two bytes, which is four hex digits (in C):
short a = 0x1234;
short b = 0x5678;
So short a holds 0x1234 (4,660 decimal) and b is similar. Now, instead of using the hex digits "0-F", let's just use U, N, I, and X to represent each byte. For example, U could be 0x12, N could be 0x34, I could be 0x56, and X could be 0x78.
short a = 0xUN;
short b = 0xIX;
On any machine, these shorts would be stored consecutively in memory. Addresses 0 and 1 would hold "a", and addresses 2 and 3 would hold "b". [Again, each short takes up 2 bytes.]
On a big-endian machine, the data would look like this:
Addr 0: U
Addr 1: N
Addr 2: I
Addr 3: X
On a little-endian machine, we store the smallest part of the number first. That is, in a = 0xUN, we store "N" first, because it is the low-order byte. So in memory it would look like this:
Addr 0: N
Addr 1: U
Addr 2: X
Addr 3: I
Hence the “NUXI” problem. On a big-endian machine the data looks like UNIX, on a little-endian machine the data looks like NUXI. This isn’t a problem if you stay on the same machine (each machine knows how to convert appropriately), but can be a problem if you are exchanging binary data between machines.
Hope this helps,