Search Results

Search found 19795 results on 792 pages for '8 bit'.


  • Position of least significant bit that is set

    - by peterchen
    I am looking for an efficient way to determine the position of the least significant bit that is set in an integer, e.g. for 0x0FF0 it would be 4. A trivial implementation is this:

        unsigned GetLowestBitPos(unsigned value)
        {
            assert(value != 0); // handled separately
            unsigned pos = 0;
            while (!(value & 1)) {
                value >>= 1;
                ++pos;
            }
            return pos;
        }

    Any ideas how to squeeze some cycles out of it? (Note: this question is for people that enjoy such things, not for people to tell me xyzoptimization is evil.) [edit] Thanks everyone for the ideas! I've learnt a few other things, too. Cool!

    Read the article
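
    A branch-free alternative that often comes up for this problem (a sketch of the well-known De Bruijn multiplication technique, not necessarily an answer from this thread): v & -v isolates the lowest set bit, and a table indexed by a De Bruijn multiply locates its position without looping.

        #include <assert.h>
        #include <stdio.h>

        /* Position of the least significant set bit via De Bruijn multiplication.
           The table and constant are the standard 32-bit values from the public
           "Bit Twiddling Hacks" collection. */
        static unsigned lowest_bit_pos(unsigned v)
        {
            static const unsigned table[32] = {
                 0,  1, 28,  2, 29, 14, 24,  3, 30, 22, 20, 15, 25, 17,  4,  8,
                31, 27, 13, 23, 21, 19, 16,  7, 26, 12, 18,  6, 11,  5, 10,  9
            };
            assert(v != 0);   /* zero handled separately, as in the question */
            /* (v & -v) keeps only the lowest set bit; multiplying by the
               De Bruijn constant puts a unique 5-bit index in the top bits. */
            return table[((v & -v) * 0x077CB531u) >> 27];
        }

        int main(void)
        {
            printf("%u\n", lowest_bit_pos(0x0FF0u)); /* prints 4 */
            return 0;
        }

    Where available, __builtin_ctz (GCC/Clang) or _BitScanForward (MSVC) compiles to a single instruction and is usually faster still.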

  • C/C++ - Convert 24-bit signed integer to float

    - by e-t172
    I'm programming in C++. I need to convert a 24-bit signed integer (stored in a 3-byte array) to float (normalizing to [-1.0,1.0]). The platform is MSVC++ on x86 (which means the input is little-endian). I tried this:

        float convert(const unsigned char* src)
        {
            int i = src[2];
            i = (i << 8) | src[1];
            i = (i << 8) | src[0];
            const float Q = 2.0 / ((1 << 24) - 1.0);
            return (i + 0.5) * Q;
        }

    I'm not entirely sure, but it seems the results I'm getting from this code are incorrect. So, is my code wrong and if so, why?

    Read the article
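
    The usual suspect in code like this is missing sign extension: the assembled value always lands in the low 24 bits of a non-negative int, so negative samples come out wrong. A minimal sketch of one common fix (an assumption about the intent, not the thread's accepted answer), dividing by 2^23 so full scale maps onto [-1.0, 1.0):

        #include <stdio.h>

        /* Assemble 3 little-endian bytes, sign-extend from bit 23, normalize. */
        static float convert24(const unsigned char *src)
        {
            int i = (src[2] << 16) | (src[1] << 8) | src[0];
            if (i & 0x800000)        /* sign bit of the 24-bit value set? */
                i |= ~0xFFFFFF;      /* extend the sign through the upper bits */
            return (float)i / 8388608.0f;   /* 8388608 = 2^23 */
        }

        int main(void)
        {
            unsigned char full_neg[3] = { 0x00, 0x00, 0x80 };  /* -8388608 */
            unsigned char full_pos[3] = { 0xFF, 0xFF, 0x7F };  /* +8388607 */
            printf("%f %f\n", convert24(full_neg), convert24(full_pos));
            /* -1.000000 and just under 1.0 */
            return 0;
        }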

  • Bitwise operators versus .NET abstractions for bit manipulation from a C# perspective

    - by Leron
    I'm trying to get basic skills in working with bits using C#.NET. I posted an example yesterday with a simple problem that needs bit manipulation, which led me to the fact that there are two main approaches: using bitwise operators, or using .NET abstractions such as BitArray. (Please let me know if there are more built-in tools for working with bits in .NET other than BitArray, and how to find more info on them if there are.) I understand that bitwise operators work faster, but using BitArray is much easier for me, and one thing I really try to avoid is learning bad practices. Even though my personal preference is for the .NET abstraction(s), I want to know which is actually better to learn and use in a real program. Thinking about it, I'm tempted to think that the .NET abstractions are not that bad at all; after all, there must be a reason for them to be there, and maybe, being a beginner, it's more natural to learn the abstraction and later on improve my skills with low-level operations. But these are just random thoughts.

    Read the article

  • Bit Flipping in Hex

    - by freyrs
    I have an 8-digit hexadecimal number of which I need certain digits to be either 0 or f. Given the specific places of the digits, is there a quick way to generate the hex number with those places "flipped" to f? For example:

        flip_digits(1)     = 0x0000000f
        flip_digits(1,2,4) = 0x0000f0ff
        flip_digits(1,7,8) = 0xff00000f

    I'm doing this on an embedded device so I can't call any math libraries. I suspect it can be done with just bit shifts, but I can't quite figure out the method. Any sort of solution (Python, C, pseudocode) will work. Thanks in advance.

    Read the article
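
    A sketch of one way to build the mask (assuming digit 1 means the least significant hex digit; the varargs flip_digits below is an illustrative signature, not an existing function): each named digit contributes one 0xF nibble, shifted into place.

        #include <stdarg.h>
        #include <stdio.h>

        /* OR one 0xF nibble into the mask per listed (1-based) digit position. */
        static unsigned flip_digits(int count, ...)
        {
            unsigned mask = 0;
            va_list ap;
            va_start(ap, count);
            while (count-- > 0) {
                int pos = va_arg(ap, int);
                mask |= 0xFu << (4 * (pos - 1));
            }
            va_end(ap);
            return mask;
        }

        int main(void)
        {
            printf("0x%08x\n", flip_digits(1, 1));       /* 0x0000000f */
            printf("0x%08x\n", flip_digits(3, 1, 2, 4)); /* 0x0000f0ff */
            printf("0x%08x\n", flip_digits(3, 1, 7, 8)); /* 0xff00000f */
            return 0;
        }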

  • Can I connect to a 64-bit MySQL server from a 32-bit machine with the 32-bit MySQL client lib?

    - by chenqingzhi
    Can I connect to a 64-bit MySQL server from a 32-bit machine with the 32-bit MySQL client library? I mean the server is the 64-bit version running on a 64-bit machine, and the client app is running on a 32-bit machine with the 32-bit MySQL client lib. Is that OK, or will it cause some problems? I don't have two machines so I can't do the test; can someone tell me the answer? Thank you!

    Read the article

  • Running 32-bit assembly code on 64-bit Linux with a 64-bit processor: explain the anomaly.

    - by claws
    Hello, I've run into an interesting problem. I forgot I'm using a 64-bit machine & OS and wrote 32-bit assembly code. I don't know how to write 64-bit code. This is the x86 32-bit assembly code for the GNU Assembler (AT&T syntax) on Linux:

        //hello.S
        #include <asm/unistd.h>
        #include <syscall.h>
        #define STDOUT 1

        .data
        hellostr: .ascii "hello world\n";
        helloend:

        .text
        .globl _start
        _start:
            movl $(SYS_write), %eax  //ssize_t write(int fd, const void *buf, size_t count);
            movl $(STDOUT), %ebx
            movl $hellostr, %ecx
            movl $(helloend - hellostr), %edx
            int $0x80

            movl $(SYS_exit), %eax   //void _exit(int status);
            xorl %ebx, %ebx
            int $0x80
            ret

    Now, this code should run fine on a 32-bit processor & 32-bit OS, right? As we know, 64-bit processors are backward compatible with 32-bit processors, so that also wouldn't be a problem. The problem arises because of differences in system calls & call mechanisms between the 64-bit OS and the 32-bit OS. I don't know why, but they changed the system call numbers between 32-bit Linux & 64-bit Linux.

        asm/unistd_32.h defines:
        #define __NR_write 4
        #define __NR_exit  1

        asm/unistd_64.h defines:
        #define __NR_write 1
        #define __NR_exit  60

    Anyway, using macros instead of direct numbers paid off; it ensures the correct system call numbers. When I assemble & link & run the program:

        $ cpp hello.S hello.s   # pre-processor
        $ as hello.s -o hello.o # assemble
        $ ld hello.o            # linker: converting relocatable to executable

    it's not printing helloworld. In gdb it's showing: Program exited with code 01. I don't know how to debug in gdb. Using a tutorial I tried to debug it and execute it instruction by instruction, checking registers at each step; it always shows me "program exited with 01". It would be great if someone could show me how to debug this.

        (gdb) break _start
        Note: breakpoint -10 also set at pc 0x4000b0.
        Breakpoint 8 at 0x4000b0
        (gdb) start
        Function "main" not defined.
        Make breakpoint pending on future shared library load? (y or [n]) y
        Temporary breakpoint 9 (main) pending.
        Starting program: /home/claws/helloworld
        Program exited with code 01.
        (gdb) info breakpoints
        Num Type       Disp Enb Address            What
        8   breakpoint keep y   0x00000000004000b0 <_start>
        9   breakpoint del  y   <PENDING>          main

    I tried running strace. This is its output:

        execve("./helloworld", ["./helloworld"], [/* 39 vars */]) = 0
        write(0, NULL, 12 <unfinished ... exit status 1>

    Can you explain the parameters of the write(0, NULL, 12) system call in the strace output? What exactly is happening? I want to know the reason why exactly it's exiting with exit status 1. Can someone please show me how to debug this program using gdb? Why did they change the system call numbers? Kindly change this program appropriately so that it can run correctly on this machine.

    EDIT: After reading Paul R's answer, I checked my files:

        claws@claws-desktop:~$ file ./hello.o
        ./hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
        claws@claws-desktop:~$ file ./hello
        ./hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped

    All of my questions still hold true. What exactly is happening in this case? Can someone please answer my questions and provide an x86-64 version of this code?

    Read the article
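
    One portable way to see the point about syscall numbers (a C sketch, offered as an aside rather than the thread's answer): the libc syscall() wrapper takes the number from the host's <sys/syscall.h>, so SYS_write is 4 when built as 32-bit x86 and 1 when built as x86-64, and the matching trap mechanism is used either way.

        #define _DEFAULT_SOURCE     /* for the syscall() declaration on glibc */
        #include <sys/syscall.h>
        #include <unistd.h>

        int main(void)
        {
            static const char msg[] = "hello world\n";
            /* SYS_write resolves to the correct number for the target ABI. */
            syscall(SYS_write, 1, msg, sizeof msg - 1);
            return 0;
        }

    For the original assembly, building explicitly as 32-bit (as --32, ld -m elf_i386) keeps the int $0x80 convention and the unistd_32.h numbers consistent with each other.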

  • Bit reversal of an integer, ignoring integer size and endianness

    - by ??O?????
    Given an integer typedef:

        typedef unsigned int TYPE;

    or

        typedef unsigned long TYPE;

    I have the following code to reverse the bits of an integer:

        TYPE max_bit = (TYPE)-1;

        void reverse_int_setup()
        {
            TYPE bits = (TYPE)max_bit;
            while (bits <<= 1)
                max_bit = bits;
        }

        TYPE reverse_int(TYPE arg)
        {
            TYPE bit_setter = 1, bit_tester = max_bit, result = 0;
            for (result = 0; bit_tester; bit_tester >>= 1, bit_setter <<= 1)
                if (arg & bit_tester)
                    result |= bit_setter;
            return result;
        }

    One just needs to run reverse_int_setup() first, which stores an integer with only the highest bit turned on; then any call to reverse_int(arg) returns arg with its bits reversed (to be used as a key to a binary tree, taken from an increasing counter, but that's more or less irrelevant). Is there a platform-agnostic way to have, at compile time, the value that max_bit holds after the call to reverse_int_setup()? Otherwise, is there an algorithm you consider better/leaner than the one I have for reverse_int()? Thanks.

    Read the article
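
    For the compile-time part of the question, a sketch (assuming TYPE has no padding bits, which holds on mainstream platforms): the top bit of an unsigned type follows directly from its width, so no setup call is required.

        #include <limits.h>
        #include <stdio.h>

        typedef unsigned int TYPE;

        /* Highest bit of TYPE, known at compile time: 1 shifted to the top. */
        #define MAX_BIT ((TYPE)1 << (sizeof(TYPE) * CHAR_BIT - 1))

        static TYPE reverse_int(TYPE arg)
        {
            TYPE bit_setter = 1, bit_tester = MAX_BIT, result = 0;
            for (; bit_tester; bit_tester >>= 1, bit_setter <<= 1)
                if (arg & bit_tester)
                    result |= bit_setter;
            return result;
        }

        int main(void)
        {
            printf("%#x\n", reverse_int(1));  /* 0x80000000 with 32-bit int */
            return 0;
        }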

  • Speed up bitstring/bit operations in Python?

    - by Xavier Ho
    I wrote a prime number generator using the Sieve of Eratosthenes and Python 3.1. The code runs correctly and gracefully at 0.32 seconds on ideone.com to generate prime numbers up to 1,000,000.

        # from bitstring import BitString

        def prime_numbers(limit=1000000):
            '''Prime number generator. Yields the series
            2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ...
            using Sieve of Eratosthenes.
            '''
            yield 2
            sub_limit = int(limit**0.5)
            flags = [False, False] + [True] * (limit - 2)
            # flags = BitString(limit)
            # Step through all the odd numbers
            for i in range(3, limit, 2):
                if flags[i] is False:
                # if flags[i] is True:
                    continue
                yield i
                # Exclude further multiples of the current prime number
                if i <= sub_limit:
                    for j in range(i*3, limit, i<<1):
                        flags[j] = False
                        # flags[j] = True

    The problem is, I run out of memory when I try to generate numbers up to 1,000,000,000:

        flags = [False, False] + [True] * (limit - 2)
        MemoryError

    As you can imagine, allocating 1 billion boolean values (4 or 8 bytes each in Python, not 1 byte; see the comments) is really not feasible, so I looked into bitstring. I figured using 1 bit for each flag would be much more memory-efficient. However, the program's performance dropped drastically: 24 seconds runtime for prime numbers up to 1,000,000. This is probably due to the internal implementation of bitstring. You can comment/uncomment the three lines to see what I changed to use BitString, in the code snippet above. My question is: is there a way to speed up my program, with or without bitstring?

    Read the article
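
    The bit-per-flag idea itself is sound; the slowdown is likely bitstring's generality. As a point of comparison (a C sketch of the same sieve with one bit per flag, not a Python fix), the packing is just a shift and a mask:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const size_t limit = 1000000;
            unsigned char *flags = calloc(limit / 8 + 1, 1); /* set bit = composite */
            size_t count = 0;

            for (size_t i = 2; i < limit; i++) {
                if (!(flags[i >> 3] & (1u << (i & 7)))) {    /* bit i still clear? */
                    count++;                                  /* i is prime */
                    if (i <= limit / i)                       /* avoid i*i overflow */
                        for (size_t j = i * i; j < limit; j += i)
                            flags[j >> 3] |= 1u << (j & 7);   /* mark composite */
                }
            }
            printf("%zu primes below %zu\n", count, limit);   /* 78498 */
            free(flags);
            return 0;
        }

    Within Python itself, a bytearray (one byte per flag) or slice assignment for the inner loop usually recovers most of the speed without dropping to C.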

  • Bit shift and pointer oddities in C, looking for explanations

    - by foo
    Hi all, I discovered something odd that I can't explain. If someone here can see what or why this is happening, I'd like to know. What I'm doing is taking an unsigned short containing 12 bits aligned high, like this:

        1111 1111 1111 0000

    I then want to shift the bits so that each byte in the short holds 7 bits, with the MSB as a pad. The result for what's presented above should look like this:

        0111 1111 0111 1100

    What I have done is this:

        unsigned short buf = 0xfff;
        buf <<= 4;  // align high
        buf >>= 1;
        *((char*)&buf) >>= 1;

    This gives me something that looks like it's correct, but the result of the last shift leaves the bit set, like this:

        0111 1111 1111 1100

    Very odd. If I use an unsigned char as temporary storage and shift that, then it works, like this:

        unsigned short buf = 0xfff;
        buf <<= 4;
        buf >>= 1;
        unsigned char tmp = *((char*)&buf);
        *((char*)&buf) = tmp >> 1;

    The result of this is:

        0111 1111 0111 1100

    Any ideas what is going on here?

    Read the article
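
    The usual explanation (a sketch under the assumption that plain char is signed here, as it is with MSVC and most x86 compilers): after the first two shifts the low byte is 0xf8, which as a signed char is negative, and right-shifting a negative value sign-extends on most compilers, so a 1 re-enters from the top. Shifting through an unsigned char avoids it:

        #include <stdio.h>

        int main(void)
        {
            unsigned short buf = 0xfff;
            buf <<= 4;                        /* 1111 1111 1111 0000 */
            buf >>= 1;                        /* 0111 1111 1111 1000 */
            *((unsigned char *)&buf) >>= 1;   /* low byte 0xf8 -> 0x7c, no sign */
            printf("%04x\n", buf);            /* 7f7c on little-endian x86 */
            return 0;
        }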

  • Returning a Value From the Bit.ly JS API Callback

    - by mtorbin
    Hey all, I am attempting to turn this "one shot" script into something more extensible. The problem is that I cannot figure out how to get the callback function to set a value outside of itself (please note that references to the Bit.ly API and the prototype.js framework, which are required, have been left out, along with login and apiKey information):

    CURRENTLY WORKING CODE

        var setShortUrl = function(data) {
            var resultOBJ, myURL;
            for (x in data.results) { resultOBJ = data.results[x]; }
            for (key in resultOBJ) { if (key == "shortUrl") { myURL = resultOBJ[key]; } }
            alert(myURL);
        }
        BitlyClient.shorten('http://www.thinkgeek.com', 'setShortUrl');

    PROPOSED CHANGES

        var setShortUrl = function(data) {
            var resultOBJ, myURL;
            for (x in data.results) { resultOBJ = data.results[x]; }
            for (key in resultOBJ) { if (key == "shortUrl") { myURL = resultOBJ[key]; } }
            alert(myURL);
            return myURL;
        }
        var myTEST = BitlyClient.shorten('http://www.thinkgeek.com', 'setShortUrl');
        alert(myTEST);

    As you can see, this doesn't work the way I'm expecting. If I could get a pointer in the right direction it would be most appreciated. Thanks, MT

    Read the article

  • How to produce 64-bit masks?

    - by egiakoum1984
    Based on the following simple program, the bitwise left shift operator works only on 32 bits. Is that true?

        #include <iostream>
        #include <stdlib.h>
        using namespace std;

        int main(void)
        {
            long long currentTrafficTypeValueDec;
            int input;
            cout << "Enter input:" << endl;
            cin >> input;
            currentTrafficTypeValueDec = 1 << (input - 1);
            cout << currentTrafficTypeValueDec << endl;
            cout << (1 << (input - 1)) << endl;
            return 0;
        }

    The output of the program:

        Enter input:
        30
        536870912
        536870912

        Enter input:
        62
        536870912
        536870912

    How could I produce 64-bit masks?

    Read the article
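
    What the program shows is integer promotion, not a 32-bit ceiling on the operator: in 1 << (input - 1) the literal 1 is a plain int, so the shift happens in 32 bits, and counts of 31 or more are undefined (x86 happens to mask the count, hence the repeated 536870912). A minimal sketch of the usual fix, in C: make the left operand 64-bit before shifting.

        #include <stdio.h>

        int main(void)
        {
            int input = 62;
            unsigned long long mask = 1ULL << (input - 1);  /* 64-bit shift */
            printf("%llu\n", mask);   /* 2305843009213693952, i.e. 2^61 */
            return 0;
        }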

  • What's up with this reversing bit order function?

    - by MattyW
    I'm rather ashamed to admit that I don't know as much about bits and bit manipulation as I probably should. I tried to fix that this weekend by writing some 'reverse the order of bits' and 'count the ON bits' functions. I took an example from here, but when I implemented it as below, I found I had to be looping while < 29. If I loop while < 32 (as in the example), then when I try to print the integer (using a printBits function I've written) I seem to be missing the first 3 bits. This makes no sense to me; can someone help me out?

        int reverse(int n)
        {
            int r = 0;
            int i = 0;
            for (i = 0; i < 29; i++) {
                r = (r << 1) + (n & 1);
                n >>= 1;
            }
            return r;
        }

    Read the article
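
    A sketch of the usual repair (assuming 32-bit int): do the work on unsigned values so neither the shifts nor the result involve the sign bit, and loop over all 32 bits.

        #include <stdio.h>

        static unsigned reverse(unsigned n)
        {
            unsigned r = 0;
            for (int i = 0; i < 32; i++) {
                r = (r << 1) | (n & 1);   /* append the next low bit of n */
                n >>= 1;
            }
            return r;
        }

        int main(void)
        {
            printf("%08x\n", reverse(0x00000001u)); /* 80000000 */
            printf("%08x\n", reverse(0x0000000fu)); /* f0000000 */
            return 0;
        }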

  • What is the maximum memory a process (MySQL) can consume on a 32-bit OS?

    - by mmattax
    I have MySQL running on a 32-bit RHEL box. The server itself has 4GB total memory with 2GB allocated to MySQL. I would like to know the max amount of memory I can put in the box and how much of that I can allocate to MySQL. I have heard both 2GB and 4GB as the per-process-limit on a 32-bit OS... Ultimately I'd like to know if I can increase the memory for MySQL without upgrading to a 64-bit OS.

    Read the article

  • How can I get the Android SDK working with Eclipse in Ubuntu 9.10 64-bit?

    - by user30667
    I would like to tinker with the Android software development kit, and I have found out that it only supports 32-bit versions of the Java Platform and Eclipse. I installed the ia32 Sun Java runtime environment and the 32-bit version of Eclipse. I also used the update-alternatives program to make a Java 32-bit preference. Both of these seem to run fine. I also installed the Eclipse Android plugins, but my problem lies in the SDK downloaded from Google. When I go to Eclipse preferences and try to tell it about my Android SDK location, there are no SDK targets listed. Has anyone else gotten this running on Ubuntu 9.10 64-bit? Thanks.

    Read the article

  • Can most Intel processors run 64-bit Windows 7?

    - by Jian Lin
    Can most Intel processors run 64-bit Windows 7? For example, all the i7, i5, Core 2 Duo, Dual Core, and Single Core (with or without HT) parts? I think a popular view is that if you can run 64-bit Windows 7, then you should use it. It might have driver compatibility issues, but if there is no device hooked up, then there is no problem? What about software / games that are not compatible with the 64-bit version or may run slower? Thanks. Update: a couple of my machines have 4GB of RAM, so 64-bit Win 7 can make use of the full 4GB instead of only about 3.2GB.

    Read the article

  • How to get Xvfb to work in 32-bit color

    - by Robus
    Can anybody tell me how to get Xvfb to work in 32-bit color? Vnc4server works fine, for example, but didn't fit my purpose.

        > /etc/X11# Xvfb :1 -screen 0 1600x1200x24
        error opening security policy file /etc/X11/xserver/SecurityPolicy
        (EE) XKB: Couldn't open rules file /usr/share/X11/xkb/rules/base
        Could not init font path element /usr/share/fonts/X11/cyrillic, removing from list!
        [config/hal] couldn't initialise context: (null) ((null))
        FreeFontPath: FPE "/usr/share/fonts/X11/misc" refcount is 2, should be 1; fixing.

    Aka - it works. While:

        > /etc/X11# Xvfb :1 -screen 0 1600x1200x32

        Fatal server error:
        Couldn't add screen 0

    Read the article

  • Learning C bit manipulation

    - by VaioIsBorn
    I didn't know any better name for a title of the question. For example, on one of my previous questions someone answered with (((a)-(b)) & 0x80000000) >> 31 - this is kind of too advanced for me and I can't really get what it means. I am not looking just for an answer to that; I need someone to tell me some books/sites/whatever where I can learn these cool "advanced" tricks in C - and learn how and where to use them, respectively, too.

    Read the article
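
    The two standard references for exactly this material are Sean Eron Anderson's "Bit Twiddling Hacks" page and the book "Hacker's Delight" by Henry S. Warren. As for the quoted expression, a sketch of what it computes (assuming 32-bit ints and a difference that fits in 31 bits): the top bit of a - b is the sign of the difference, so shifting it down 31 places yields 1 when a < b and 0 otherwise.

        #include <stdio.h>

        /* 1 if a < b, else 0 -- valid while |a - b| < 2^31. */
        static unsigned less_than(unsigned a, unsigned b)
        {
            return ((a - b) & 0x80000000u) >> 31;
        }

        int main(void)
        {
            printf("%u\n", less_than(3, 7)); /* 1 */
            printf("%u\n", less_than(7, 3)); /* 0 */
            return 0;
        }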

  • Fastest way to calculate an X-bit bitmask?

    - by Virtlink
    I have been trying to solve this problem for a while, but couldn't with just integer arithmetic and bitwise operators. However, I think it's possible and it should be fairly easy. What am I missing? The problem: to get an integer value of arbitrary length (this is not relevant to the problem) with its X least significant bits set to 1 and the rest to 0. For example, given the number 31, I need to get an integer value which equals 0x7FFFFFFF (31 least significant bits are 1 and the rest zeros). Of course, using a loop OR-ing a shifted 1 into an integer X times will do the job, but that's not the solution I'm looking for. It should be more in the direction of (X << Y - 1), thus using no loops.

    Read the article
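
    A loop-free sketch with the one edge case spelled out (assuming 32-bit values for concreteness): (1u << x) - 1 gives the x low bits, but shifting a 32-bit value by 32 is undefined, so x == 32 needs its own branch.

        #include <stdio.h>

        static unsigned mask_low_bits(unsigned x)  /* x in [0, 32] */
        {
            return (x >= 32) ? ~0u : (1u << x) - 1;
        }

        int main(void)
        {
            printf("%#x\n", mask_low_bits(31)); /* 0x7fffffff */
            printf("%#x\n", mask_low_bits(4));  /* 0xf */
            printf("%#x\n", mask_low_bits(32)); /* 0xffffffff */
            return 0;
        }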

  • Character problems in Bit.ly

    - by Fevos
    Hello, when I try to shorten a link containing '#' or '&' characters, I get an exception. Is there a way to handle them? This is a sample of code that works:

        String shortUrl = bitly.getShortUrl("http://z"); // works

    but if I add, for example, '&' or '%25' to the string, it will produce an exception:

        String shortUrl = bitly.getShortUrl("http://z%26"); // exception
        String shortUrl = bitly.getShortUrl("http://z&");   // exception

    The getShortUrl function is from this Java class: http://github.com/finnjohnsen/BitlyAndroid/raw/master/src/com/finnjohnsen/bitlyandroid/test/BitlyAndroid.java

    Thanks

    Read the article

  • Bit manipulation, permutate bits

    - by triktae
    Hi. I am trying to make a loop that iterates through all the different integers where exactly 10 of the last 40 bits are set high, the rest set low. The reason is that I have a map with 40 different values, and I want to sum all the different ways ten of these values can be multiplied. (This is just out of curiosity, so it's really the "bitmanip" loop that is of interest, not the sum as such.) If I were to do this with e.g. 2 out of 4 bits, it would be easy to set them all manually: 0011 = 3, 0101 = 5, 1001 = 9, 0110 = 6, 1010 = 10, 1100 = 12; but with 10 out of 40 I can't seem to find a method to generate these efficiently. I tried, starting with 1023 (= 1111111111 in binary), to find a good way to manipulate this, but with no success. I've been trying to do this in C++, but it's really the general method (if any) that is of interest. I did some googling, but with little success; if anyone has a good link, that would of course be appreciated too. :)

    Read the article
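
    The standard answer here is Gosper's hack (naming the technique; the sketch below is not from the thread): given one pattern, it computes the next larger integer with the same number of set bits, so starting from the lowest 10 bits it visits every 10-of-40 pattern exactly once, in increasing order.

        #include <stdio.h>

        int main(void)
        {
            const unsigned long long limit = 1ULL << 40;
            unsigned long long v = (1ULL << 10) - 1;   /* lowest 10 bits set */
            unsigned long long count = 0;

            while (v < limit) {
                count++;                        /* use the pattern in v here  */
                unsigned long long c = v & -v;  /* lowest set bit of v        */
                unsigned long long r = v + c;   /* ripple the low block up    */
                v = r | (((v ^ r) >> 2) / c);   /* refill remaining low bits  */
            }
            printf("%llu patterns\n", count);   /* 847660528 == C(40,10) */
            return 0;
        }

    Enumerating all C(40,10) = 847,660,528 patterns takes a few seconds, which is the floor for any method that must visit each combination.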
