Search Results

Search found 22709 results on 909 pages for '64 bit'.


  • Bitshift in JavaScript

    - by pingvinus
    I've got a really big number: 5799218898, and I want to shift it right by 13 bits. Windows Calculator or Python gives me:

        5799218898 >> 13  |  100010100100001110011111100001 >> 13
        70791             |  10001010010000111

    as expected. But JavaScript:

        5799218898 >> 13  |  100010100100001110011111100001 >> 13
        183624            |  101100110101001000

    I think it is because of the internal integer representation in JavaScript, but I cannot find anything about that.
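    A likely explanation, sketched in C below: JavaScript's bitwise operators first convert their operands to signed 32-bit integers (the ECMAScript ToInt32 step), so the 33-bit value 5799218898 is truncated before the shift, while Python shifts the full arbitrary-precision value. (As a side note, 5799218898 >> 13 is actually 707912; the 70791 above seems to have lost its last digit.)

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            uint64_t big = 5799218898ULL;            /* needs 33 bits */

            /* Python-style shift of the full-width value: */
            uint64_t full = big >> 13;               /* 707912 */

            /* JavaScript-style shift: the operand is truncated to a
               signed 32-bit integer (ToInt32) before shifting: */
            int32_t truncated = (int32_t)(uint32_t)big;   /* 1504251602 */
            int32_t js = truncated >> 13;                 /* 183624 */

            printf("%llu %d\n", (unsigned long long)full, js);
            return 0;
        }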


  • Interface Builder error: IBXMLDecoder: The value for key is too large to fit into a 32 bit integer

    - by stdout
    I'm working with Robert Payne's fork of PSMTabBarControl that works with IB 3.2 (thanks BTW Robert!): http://codaset.com/robertjpayne/psmtabbarcontrol/. The demo application works fine on 64-bit systems, but when I try to open the XIB file in Interface Builder on a 32-bit system I get: IBXMLDecoder: The value (4654500848) for key (myTrackingRectTag) is too large to fit into a 32 bit integer. Building the app as 32-bit works, but then running it gives: PSMTabBarControlDemo[9073:80f] *** -[NSKeyedUnarchiver decodeInt32ForKey:]: value (4654500848) for key (myTrackingRectTag) too large to fit in 32-bit integer. Not sure if this is a generic IB issue that can occur when moving between 64- and 32-bit systems, or if this is a more specific issue with this code. Has anyone else run into this?
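    For reference, the offending value (plausibly a 64-bit heap address that got archived) really does overflow a signed 32-bit integer, which is all that decodeInt32ForKey: can represent. A quick C check, using the value from the error message:

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            long long value = 4654500848LL;  /* myTrackingRectTag from the error */
            printf("INT32_MAX: %lld\n", (long long)INT32_MAX);  /* 2147483647 */
            printf("fits in 32 bits? %s\n",
                   value <= INT32_MAX ? "yes" : "no");          /* no */
            return 0;
        }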


  • Better name for CHAR_BIT?

    - by Potatoswatter
    I was just checking an answer and realized that CHAR_BIT isn't defined by headers as I'd expect, not even by #include <bitset>, on newer GCC. Do I really have to #include <climits> just to get the "functionality" of CHAR_BIT?
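    For what it's worth, CHAR_BIT genuinely lives in the C library's <limits.h> (wrapped as <climits> for C++), so including that header is the portable route; other headers are not required to drag it in. A minimal C illustration:

        #include <limits.h>   /* the header that defines CHAR_BIT */
        #include <stdio.h>

        int main(void) {
            printf("bits per char: %d\n", CHAR_BIT);   /* 8 on mainstream targets */
            return 0;
        }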


  • ARM assembly puzzle

    - by ivant
    First of all, I'm not sure if a solution even exists. I spent more than a couple of hours trying to come up with one, so beware. The problem: r1 contains an arbitrary integer, and the flags are not set according to its value. Set r0 to 1 if r1 is 0x80000000, and to 0 otherwise, using only two instructions. It's easy to do in 3 instructions (there are many ways); doing it in 2 seems very hard, and may very well be impossible.
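    Not an answer to the two-instruction challenge, just a C restatement of the property such attempts usually exploit: 0x80000000 is the only 32-bit value that is nonzero yet doubles to zero, which is why attempts typically reach for a single flag-setting add followed by a conditional instruction.

        #include <stdint.h>
        #include <stdio.h>

        /* r1 == 0x80000000  <=>  (r1 + r1) wraps to 0  AND  r1 != 0 */
        static int is_0x80000000(uint32_t r1) {
            return (uint32_t)(r1 + r1) == 0 && r1 != 0;
        }

        int main(void) {
            printf("%d %d %d\n",
                   is_0x80000000(0x80000000u),   /* 1 */
                   is_0x80000000(0),             /* 0 */
                   is_0x80000000(42));           /* 0 */
            return 0;
        }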


  • Macros to set and clear bits

    - by volting
    I'm trying to write a few simple macros to simplify the task of setting and clearing bits, which should be a simple task, but I can't seem to get them to work correctly.

        #define SET_BIT(p,n) ((p) |= (1 << (n)))
        #define CLR_BIT(p,n) ((p) &= (~(1) << (n)))
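    The likely culprit is CLR_BIT: it complements the 1 before shifting, so (~(1) << (n)) has zeros at bit n and at every bit below it, and the AND clears all of them rather than just bit n. Building the single-bit mask first and then complementing it gives the usual form; a corrected sketch:

        /* Set and clear bit n of p. The mask (1u << n) is built first,
           then complemented for the clear case. The unsigned literal
           avoids shifting into the sign bit. */
        #define SET_BIT(p, n)  ((p) |=  (1u << (n)))
        #define CLR_BIT(p, n)  ((p) &= ~(1u << (n)))

        /* Example:
           unsigned x = 0;
           SET_BIT(x, 3);   // x == 0x08
           CLR_BIT(x, 3);   // x == 0x00  */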


  • Setting last N bits in an array

    - by Martin
    I'm sure this is fairly simple; however, I have a major mental block on it, so I need a little help here! I have an array of 5 integers, already filled with some data, and I want to set the last N bits of the array to be random noise:

        [int][int][int][int][int]

    Setting the last 40 bits should give:

        [unchanged][unchanged][unchanged][24 bits of old data followed by 8 bits of randomness][all random]

    This is largely language agnostic, but I'm working in C#, so bonus points for answers in C#.
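    A sketch of the masking logic in C (the question asks for C#, but the arithmetic ports directly). It assumes bit 0 is the least significant bit of the last element, which is what the example layout implies:

        #include <stdint.h>
        #include <stdlib.h>

        /* Overwrite the last n bits of arr (len 32-bit words) with
           random noise, walking backwards from the last word. */
        void set_last_bits(uint32_t *arr, size_t len, int n) {
            for (size_t i = len; i-- > 0 && n > 0; ) {
                int take = n < 32 ? n : 32;   /* bits to touch in this word */
                uint32_t mask = (take == 32) ? 0xFFFFFFFFu
                                             : ((1u << take) - 1u);
                /* rand() is weak noise, but enough for a sketch: */
                uint32_t noise = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
                arr[i] = (arr[i] & ~mask) | (noise & mask);
                n -= take;
            }
        }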


  • Why does the right-shift operator produce a zero instead of a one?

    - by mrt181
    Hi, I am teaching myself Java and I am working through the exercises in Thinking in Java. On page 116, exercise 11, you should right-shift an integer through all its binary positions and display each position with Integer.toBinaryString:

        public static void main(String[] args) {
            int i = 8;
            System.out.println(Integer.toBinaryString(i));
            int maxIterations = Integer.toBinaryString(i).length();
            int j;
            for (j = 1; j < maxIterations; j++) {
                i >>= 1;
                System.out.println(Integer.toBinaryString(i));
            }
        }

    In the solution guide the output looks like this: 1000 1100 1110 1111. When I run this code I get this: 1000 100 10 1. What is going on here? Are the digits cut off? I am using jdk1.6.0_20 64-bit. The book uses jdk1.5 32-bit.
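    The 100, 10, 1 output is correct for i = 8: zeros shift in from the left, and Integer.toBinaryString simply omits leading zeros. The guide's 1100/1110/1111 pattern is what a sign-extending right shift produces when the top bit is set, so the guide presumably shifted a value whose sign bit was 1. A C sketch of the two behaviors (assuming >> on signed values is an arithmetic shift, as on mainstream compilers):

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            int8_t  neg = (int8_t)0x80;   /* 10000000 viewed as negative */
            uint8_t pos = 0x80;           /* the same bits, unsigned */

            for (int k = 0; k < 3; k++) {
                neg >>= 1;   /* arithmetic shift: 1s enter from the left */
                pos >>= 1;   /* logical shift: 0s enter from the left */
                printf("%02X %02X\n", (uint8_t)neg, pos);
            }
            /* prints: C0 40, E0 20, F0 10 */
            return 0;
        }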


  • Overwriting a range of bits in an integer in a generic way

    - by porgarmingduod
    Given two integers X and Y, I want to overwrite the bits from position P to P+N. Example:

        int x = 0xAAAA;      // 0b1010101010101010
        int y = 0x0C30;      // 0b0000110000110000
        int result = 0xAC3A; // 0b1010110000111010

    Does this procedure have a name? If I have masks, the operation is easy enough:

        int mask_x = 0xF00F; // 0b1111000000001111
        int mask_y = 0x0FF0; // 0b0000111111110000
        int result = (x & mask_x) | (y & mask_y);

    What I can't quite figure out is how to write it in a generic way, such as in the following generic C++ function:

        template<typename IntType>
        IntType OverwriteBits(IntType dst, IntType src, int pos, int len) {
            // If:
            //   dst = 0xAAAA; // 0b1010101010101010
            //   src = 0x0C30; // 0b0000110000110000
            //   pos = 4
            //   len = 8
            // Then:
            //   result = 0xAC3A; // 0b1010110000111010
        }

    The problem is that I cannot figure out how to construct the masks properly when everything, including the width of the integer type, is variable. Does anyone know how to write the above function properly?
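    A sketch of the mask construction in C for a fixed 32-bit width; the same idea ports to the template by taking the width from std::numeric_limits<IntType>::digits instead of hard-coding 32. The one trap is that shifting by the full width of the type is undefined behavior, so a full-width len is handled separately:

        #include <stdint.h>
        #include <stdio.h>

        /* Replace len bits of dst, starting at bit pos, with the
           corresponding bits of src. */
        static uint32_t overwrite_bits(uint32_t dst, uint32_t src,
                                       int pos, int len) {
            uint32_t ones = (len >= 32) ? 0xFFFFFFFFu : ((1u << len) - 1u);
            uint32_t mask = ones << pos;        /* len ones, shifted to pos */
            return (dst & ~mask) | (src & mask);
        }

        int main(void) {
            printf("%#x\n", overwrite_bits(0xAAAA, 0x0C30, 4, 8)); /* 0xac3a */
            return 0;
        }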


  • Is It Worth Using Bitwise Operators In Methods?

    - by user1626141
    I am very new to Java (and programming in general; my previous experience is with ActionScript 2.0 and some simple JavaScript), and I am working my way slowly and methodically through Java: A Beginner's Guide by Herbert Schildt. It is an incredible book. For one thing, I finally understand more or less what bitwise operators (which I first encountered in ActionScript 2.0) do, and that they are more efficient than other methods for certain calculations. My question is: in a large program with many calculations (in this case, a sprawling RPG), is it more efficient to use a method that uses, say, a right shift to perform all your divisions by 2 (or by other even numbers), or is it more efficient to simply use standard mathematical operations, because the compiler will optimise it all for you? Or am I asking the wrong question entirely?
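    One concrete wrinkle before hand-optimizing: for unsigned values a modern compiler or JIT will typically emit the shift for a power-of-two division anyway, and for signed values the two operations are not even equivalent, which is why blindly replacing a division by 2 with >> 1 can change results. A small C illustration:

        #include <stdio.h>

        int main(void) {
            int a = -7;
            /* Division truncates toward zero; an arithmetic right shift
               (the usual behavior for signed >>) rounds toward negative
               infinity: */
            printf("%d %d\n", a / 2, a >> 1);   /* prints: -3 -4 */
            return 0;
        }

    The same -3 versus -4 split holds for Java's / and >> on ints.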


  • How do I unpack bits from a structure's stream_data in C code?

    - by Chelp
    Ex.

        typedef struct {
            bool  streamValid;
            dword dateTime;
            dword timeStamp;
            stream_data[800];
        } RadioDataA;

    where stream_data[800] contains:

        Variable       Length (in bits)
        packetID       8
        packetL        8
        versionMajor   4
        versionMinor   4
        radioID        8
        etc.

    I need to write:

        void unpackData(RadioDataA *streamData, MA_DataA *maData)
        {
            // unpack streamData (from above) and put some of the data into maData
            // How do I read in bits of data? I know it's by groups of 8,
            // but I don't understand how.
            // MA_DataA is also a struct.
        }
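    A sketch of one way to pull the fields out, assuming stream_data is an array of bytes, the fields start at byte 0 in the order listed, and versionMajor occupies the high nibble of the byte it shares with versionMinor (none of that is specified in the question, so adjust to the real layout). Whole-byte fields are plain array reads; only the two 4-bit fields need shifting and masking:

        #include <stdint.h>

        typedef struct {          /* hypothetical destination struct */
            uint8_t packetID;
            uint8_t packetL;
            uint8_t versionMajor;
            uint8_t versionMinor;
            uint8_t radioID;
        } MA_DataA;

        void unpackData(const uint8_t *stream_data, MA_DataA *maData)
        {
            maData->packetID     = stream_data[0];
            maData->packetL      = stream_data[1];
            maData->versionMajor = (stream_data[2] >> 4) & 0x0F; /* high nibble */
            maData->versionMinor =  stream_data[2]       & 0x0F; /* low nibble  */
            maData->radioID      = stream_data[3];
        }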


  • Getting the fractional part of a float without using modf()

    - by knight666
    Hi, I'm developing for a platform without a math library, so I need to build my own tools. My current way of getting the fraction is to convert the float to fixed point (multiply by (float)0xFFFF, cast to int), take only the lower part (mask with 0xFFFF), and convert it back to a float again. However, the imprecision is killing me. I'm using my Frac() and InvFrac() functions to draw an anti-aliased line. Using modf I get a perfectly smooth line; with my own method, pixels start jumping around due to precision loss. This is my code:

        const float fp_amount = (float)(0xFFFF);
        const float fp_amount_inv = 1.f / fp_amount;

        inline float Frac(float a_X)
        {
            return ((int)(a_X * fp_amount) & 0xFFFF) * fp_amount_inv;
        }

        inline float InvFrac(float a_X)
        {
            return (0xFFFF - (int)(a_X * fp_amount) & 0xFFFF) * fp_amount_inv;
        }

    Thanks in advance!
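    A minimal alternative sketch that skips the fixed-point round trip entirely: truncating through an int cast keeps the float's own precision. It assumes coordinates are non-negative and fit in an int (both typical for pixel positions); negative inputs would need an adjustment, because the cast truncates toward zero rather than flooring:

        /* Fractional part without a math library. */
        static inline float Frac(float x)    { return x - (float)(int)x; }
        static inline float InvFrac(float x) { return 1.0f - Frac(x); }

        /* Frac(3.25f)    == 0.25f
           InvFrac(3.25f) == 0.75f */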


  • Duplicate ping packets in a Linux VirtualBox machine

    - by Darkmage
    I can't seem to figure out what is going on here. The Linux machine I am using is running as a VM on a Win7 machine, using VirtualBox running as a service. If I ping the Win7 host I get an OK result:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.100
        PING 192.168.1.100 (192.168.1.100) 10(38) bytes of data.
        18 bytes from 192.168.1.100: icmp_seq=1 ttl=128 time=1.78 ms
        18 bytes from 192.168.1.100: icmp_seq=2 ttl=128 time=1.68 ms

    If I ping localhost I'm OK too:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 localhost
        PING localhost (127.0.0.1) 10(38) bytes of data.
        18 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.255 ms
        18 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.221 ms

    But if I ping the gateway I get DUP packets:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 10(38) bytes of data.
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.27 ms
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.46 ms (DUP!)
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.1 ms
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.4 ms (DUP!)

    If I ping another machine on the same LAN I still get dups. Pinging remote hosts also gives (DUP!) results:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 www.vg.no
        PING www.vg.no (195.88.55.16) 10(38) bytes of data.
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.0 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.3 ms (DUP!)
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.3 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.6 ms (DUP!)


  • Website does not resolve in browser but traceroute is successful

    - by Colum
    I am trying to figure out an issue. My internet is working fine, but this one website is not resolving. It works via a proxy, and traceroute works:

        1  192.168.1.1 (192.168.1.1)  4.205 ms  0.568 ms  0.510 ms
        2  * * *
        3  67.59.255.13 (67.59.255.13)  10.583 ms  7.949 ms  7.557 ms
        4  67.59.255.61 (67.59.255.61)  10.256 ms  9.576 ms  13.083 ms
        5  64.15.8.126 (64.15.8.126)  9.943 ms  11.929 ms  11.452 ms
        6  64.15.0.217 (64.15.0.217)  14.655 ms  14.092 ms  13.771 ms
        7  64.15.0.118 (64.15.0.118)  33.201 ms  34.875 ms  36.544 ms
        8  xe-6-0-3.ar1.ord1.us.nlayer.net (69.31.111.169)  34.027 ms  34.957 ms  34.231 ms
        9  ae1-30g.cr1.ord1.us.nlayer.net (69.31.111.133)  82.683 ms  35.138 ms  37.592 ms
        10 xe-3-0-0.cr2.iad1.us.nlayer.net (69.22.142.26)  41.657 ms  34.063 ms  34.519 ms
        11 ae2-30g.ar2.iad1.us.nlayer.net (69.31.31.186)  35.780 ms  36.361 ms  33.968 ms
        12 as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.230)  35.086 ms  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.234)  38.031 ms  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.230)  36.833 ms
        13 cr1.iad2.inforelay.net (66.231.176.246)  32.595 ms  cr2.iad1.inforelay.net (66.231.176.10)  31.771 ms  cr1.iad2.inforelay.net (66.231.176.246)  32.622 ms
        14 cr1.iad2.inforelay.net (66.231.176.246)  32.956 ms  33.625 ms !X  41.058 ms
        15 * cr1.iad2.inforelay.net (66.231.176.246)  35.312 ms !X  *
        16 * cr1.iad2.inforelay.net (66.231.176.246)  32.814 ms !X  *
        17 cr1.iad2.inforelay.net (66.231.176.246)  35.459 ms !X  *  53.137 ms !X

    Ping returns this:

        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        Request timeout for icmp_seq 2
        Request timeout for icmp_seq 3
        Request timeout for icmp_seq 4
        Request timeout for icmp_seq 5
        Request timeout for icmp_seq 6

    But what I cannot figure out is why my browsers (Firefox, Safari, Opera) cannot resolve the domain. I am on a WiFi connection. What could be the problem? BTW I am on a Mac (10.6.5).

