I have the following code:
// Incrementer
datastores.cmtDatastores.u32Region[0] += 1;
// Decrementer
datastores.cmtDatastores.u32Region[1] = (datastores.cmtDatastores.u32Region[1] == 0) ?
    10 : datastores.cmtDatastores.u32Region[1] - 1;
// Toggler
datastores.cmtDatastores.u32Region[2] =
    (datastores.cmtDatastores.u32Region[2] == 0x0000) ?
    0xFFFF : 0x0000;
The u32Region array is an unsigned int array that is part of a struct. Later in the code I convert this array to big-endian format:
unsigned long *swapL = (unsigned long *)&datastores.cmtDatastores.u32Region[50];
for (int i = 0; i < 50; i++)
{
    swapL[i] = _byteswap_ulong(swapL[i]);
}
This entire code snippet is part of a loop that repeats indefinitely. It is a contrived program that increments one element, decrements another, and toggles a third. The array is then sent over TCP to another machine, which unpacks the data.
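In outline, each pass of the loop looks roughly like this (UpdateRegion and sock are just placeholders standing in for the update statements above and my actual socket code; they are not my real names):

while (true)
{
    // update the working values (the increment/decrement/toggle statements shown above)
    UpdateRegion(&datastores);                 // placeholder for the three statements above

    // convert the region to big endian in place
    unsigned long *swapL = (unsigned long *)&datastores.cmtDatastores.u32Region[50];
    for (int i = 0; i < 50; i++)
    {
        swapL[i] = _byteswap_ulong(swapL[i]);
    }

    // send the (now big-endian) data to the other machine
    send(sock, (const char *)swapL, 50 * sizeof(unsigned long), 0);
}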
The first iteration works fine. After that, since the data is now in big-endian format, the "increment", "decrement", and "toggle" operations produce incorrect values. Obviously, if datastores.cmtDatastores.u32Region[0] += 1; results in 1 on the first iteration, it should be 2 on the second, but it isn't: the code is adding 1 (little endian) to the value already stored in datastores.cmtDatastores.u32Region[0] (big endian).
I guess I could convert back to little endian at the start of every iteration, but it seems like there should be an easier way to do this.
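For example, I was thinking I could copy the values into a separate transmit buffer and byte-swap only the copy, so the working array always stays in little endian (txBuffer is just a placeholder name I made up):

unsigned long txBuffer[50];                    // separate send buffer (placeholder name)
const unsigned long *src = (const unsigned long *)&datastores.cmtDatastores.u32Region[50];

for (int i = 0; i < 50; i++)
{
    txBuffer[i] = _byteswap_ulong(src[i]);     // swap the copy; the working data is untouched
}

// then send txBuffer over TCP instead of the array itself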
Any thoughts?
Thanks,
Bobby