Search Results

Search found 22709 results on 909 pages for '64 bit'.

Page 71 of 909

  • How to get LSB bit in MIPS?

    - by israkir
    Is there a short way to check or extract the least significant bit of a 32-bit integer in MIPS? It is obviously set for odd numbers, so an algorithm that checks whether the whole number is odd or even can decide this, but I just wonder whether there is a better way to do it...
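
    A minimal illustration in C of the usual trick (a sketch, not taken from the question): mask off the low bit, which a MIPS compiler typically turns into a single andi instruction.

        #include <stdint.h>

        /* Returns 1 if the least significant bit is set, 0 otherwise. */
        int lsb_set(uint32_t x)
        {
            return (int)(x & 1u);   /* on MIPS this typically compiles to: andi $v0, $a0, 1 */
        }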

    Read the article

  • ASP.NET - How to edit 'bit' data type?

    - by Peter
    I am coding in Visual Basic and using a CheckBox control. Depending on its Checked property I need to set or clear a bit column in a SQL Server database. Here's the code:

        Try
            conSQL.Open()
            Dim cmd As New SqlCommand("update Student set send_mail = " + _
                sendemailCheckBox.Checked.ToString + " where student_id = '" _
                + sidnolabel.Text + "'", conSQL)
            cmd.ExecuteNonQuery()
        Finally
            conSQL.Close()
        End Try

    The send_mail attribute is of the bit data type. This code is not working. How do I go about it?

    Read the article

  • Xenserver 5.5 U2 a bit unstable with an unstable W2003 VM

    - by twistedbrain
    In the last week I had to reboot the host system twice, the second time with the power button. The system is a Dell PE 6950 (4 dual-core Opterons, 2.8 GHz, 16 GB RAM, 900 GB of disk in a RAID 10 array composed of four 450 GB 15000 rpm disks) running XenServer 5.5 U2. We are still setting it up, and currently running in production on it is a Windows Server 2003 32-bit VM with 2 GB RAM, 3 vCPUs and about 200 GB of disk in 4 partitions (12 GB boot, 20 GB programs, 80 GB user data, 80 GB other data). The first time, I was compressing many Windows 2003 folders (some tens of GB, using the W2003 compressed-folder option) from a Windows remote console; some hours earlier my colleague had reinstalled the Backup Exec agent, which required a reboot that was still pending. The console stopped responding; it was no longer possible to connect via remote console or via the console in XenCenter. It was still possible over the network to use the VM's shared folders and the programs on it (2 databases and a GIS program), but the print server no longer worked and I couldn't trigger a remote reboot from the other domain controller hosts. I couldn't stop the virtual machine from XenCenter or from the host command line, even forcing the reboot, so I had to reboot the host server. Yesterday it was worse. I installed a template for another VM, CentOS 5.3, and put the install DVD in the host's drive. Before the install, after booting, I ran a defect check on the DVD, and the W2003 VM began to respond slowly (I was connected through an administration remote console). Task Manager showed only mid or low load: only the first of the 3 vCPUs was loaded (about 70%), the other two were at about 20%, and disk I/O was not heavy either. The users were unhappy because they could no longer open their MS Word documents on the server. I immediately stopped the DVD check (to do that I had to force-stop the CentOS 5.3 VM); some users could then use their documents again, but others still had problems, so I decided to reboot the VM. It would not stop, neither from XenCenter, nor from the command line, nor by forcing the reboot. I then tried to reboot the host, but that didn't work either, neither from XenCenter nor from the host prompt (shutdown -r now as root reported it was shutting down, but never did). So I had to power off the server with the power button (first I tried some magic SysRq keys, but it seems Xen isn't compiled with that option enabled). What would you suggest about my problem: what can I look at, search for and check? In the W2003 VM logs there are no errors or warnings that explain what happened.
    Some more exciting, amusing and inspiring words of poetry (the facts happened around 11 am):

        # egrep -i 'err|warning' xensource.log
        [20100219 10:32:05.597|debug|culo|6301 unix-RPC|VBD.plug R:c81bcda701f6|xenops] watch: watching xenstore paths: [ /xapi/0/frontend/vbd/51712/hotplug; /local/domain/0/backend/vbd/0/51712/tapdisk-error ] with timeout 1200.000000 seconds
        [20100219 10:32:05.597|debug|culo|6301 unix-RPC|VBD.plug R:c81bcda701f6|xenops] watch: fired on /local/domain/0/backend/vbd/0/51712/tapdisk-error
        [20100219 10:32:14.314|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] watch: watching xenstore paths: [ /local/domain/0/backend/vbd/0/51712/shutdown-done; /local/domain/0/error/device/vbd/51712/error ] with timeout 1200.000000 seconds
        [20100219 10:32:14.337|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] xenstore-rm /local/domain/0/error/backend/vbd/0
        [20100219 10:32:14.337|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] xenstore-rm /local/domain/0/error/device/vbd/51712
        [20100219 10:32:14.338|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] watch: fired on /local/domain/0/error/device/vbd/51712/error
        [20100219 10:53:48.903|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|helpers] Ignoring exception: INTERNAL_ERROR: [ Xb.Noent ] while Vmops.destroy_domain: Destroying domid 14 guest session
        [20100219 10:53:52.048|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14
        [20100219 10:53:52.048|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51744
        [20100219 10:53:52.085|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14
        [20100219 10:53:52.086|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51728
        [20100219 10:53:52.122|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14
        [20100219 10:53:52.122|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51712
        [20100219 10:53:52.127|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vbd/14
        [20100219 10:53:52.128|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51760
        [20100219 10:53:52.496|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths
        [20100219 10:53:52.497|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vif/14
        [20100219 10:53:52.497|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vif/0
        [20100219 10:53:53.385|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths
        [20100219 10:53:53.386|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vif/14
        [20100219 10:53:53.386|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vif/1
        [20100219 10:53:53.389| warn|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|hotplug] Warning, deleting 'vif' entry from /xapi/14/hotplug/vif/0
        [20100219 10:53:53.391| warn|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|hotplug] Warning, deleting 'vif' entry from /xapi/14/hotplug/vif/1
        [20100219 11:20:49.766|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|helpers] Ignoring exception: INTERNAL_ERROR: [ Xb.Noent ] while Vmops.destroy_domain: Destroying domid 11 guest session
        [20100219 11:20:50.339|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/tap/11
        [20100219 11:20:50.339|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/832
        [20100219 11:20:50.360|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/tap/11
        [20100219 11:20:50.360|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/768
        [20100219 11:20:50.366|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/vbd/11
        [20100219 11:20:50.366|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/5696
        [20100219 11:20:50.753|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths
        [20100219 11:20:50.754|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/vif/11
        [20100219 11:20:50.754|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vif/1
        [20100219 11:20:50.757| warn|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|hotplug] Warning, deleting 'vif' entry from /xapi/11/hotplug/vif/1
        [20100219 11:28:13.803|debug|culo|6610 inet-RPC|Connection to VM console R:e9f8b76e8975|console] error: INTERNAL_ERROR: [ Unix.Unix_error(63, "connect", "") ]

    Read the article

  • How to set MinWorkingSet and MaxWorkingSet in a 64-bit .NET process?

    - by Gravitas
    How do I set MinWorkingSet and MaxWorkingSet for a 64-bit .NET process? P.S. I can set MinWorkingSet and MaxWorkingSet for a 32-bit process as follows:

        [DllImport("KERNEL32.DLL", EntryPoint = "SetProcessWorkingSetSize", SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        internal static extern bool SetProcessWorkingSetSize(IntPtr pProcess, int dwMinimumWorkingSetSize, int dwMaximumWorkingSetSize);

        [DllImport("KERNEL32.DLL", EntryPoint = "GetCurrentProcess", SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        internal static extern IntPtr MyGetCurrentProcess();

        // In main():
        SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle, int.MaxValue, int.MaxValue);

    Update: Unfortunately, even if we make this call, garbage collection trims the working set down anyway, bypassing MinWorkingSet (see "Automatic GC.Collect()" in the diagram below). Question: Is there a way to lock the working set (the green line) to 1 GB, to avoid the spike in page faults (the red lines) that occurs when allocating new memory into the process? P.S. Every time a page fault occurs, it blocks the thread for 250 µs, which hurts application performance badly.

    Read the article

  • Can't edit and continue when using Visual Studio 2010 on a 64 bit machine, app targets x86

    - by Sayed Ibrahim Hashimi
    I'm having some problems with Edit and Continue when using Visual Studio 2010 on a Windows 7 64-bit machine. I've ensured the following: Edit and Continue is enabled under Tools > Options > Debugging > Edit and Continue; my solution platform is set to x86; my solution configuration is set to Debug; all my projects are building for Debug and x86; and for all projects under Project > Properties > Build, the "Optimize code" option is unchecked. When I hit a breakpoint and try to edit, I am confronted with the following message. This is happening for all projects that I create, whether they are WPF/WinForms/VB.NET/C#/.NET 4/.NET 3. Any ideas?

    Read the article

  • How does one convert 16-bit RGB565 to 24-bit RGB888?

    - by jleedev
    I've got my hands on a 16-bit rgb565 image (specifically, an Android framebuffer dump), and I would like to convert it to 24-bit rgb888 for viewing on a normal monitor. The question is, how does one convert a 5- or 6-bit channel to 8 bits? The obvious answer is to shift it. I started out by writing this:

        uint16_t buf;
        while (read(0, &buf, sizeof buf)) {
            unsigned char red   = (buf & 0xf800) >> 11;
            unsigned char green = (buf & 0x07e0) >> 5;
            unsigned char blue  =  buf & 0x001f;
            putchar(red << 3);
            putchar(green << 2);
            putchar(blue << 3);
        }

    However, this doesn't have one property I would like, which is for 0xffff to map to 0xffffff, instead of 0xf8fcf8. I need to expand the value in some way, but I'm not sure how that should work. The Android SDK comes with a tool called ddms (Dalvik Debug Monitor) that takes screen captures. As far as I can tell from reading the code, it implements the same logic; yet its screenshots are coming out different, and white maps to white. Here's the raw framebuffer, the smart conversion by ddms, and the dumb conversion by the above algorithm. (By the way, this conversion is implemented in ffmpeg, but it's just performing the dumb conversion listed above, leaving the LSBs at zero.) I guess I have two questions: What's the most sensible way to convert rgb565 to rgb888? How is DDMS converting its screenshots?
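
    A common way to get the 0xffff-to-0xffffff property (a sketch of the standard bit-replication trick, not necessarily what ddms does) is to copy the top bits of each channel into the vacated low bits, so full-scale 5- and 6-bit values expand to exactly 0xff:

        #include <stdint.h>

        /* Expand 5-/6-bit channels to 8 bits by replicating the high bits
           into the low bits: 31 -> 255 and 63 -> 255, so white stays white. */
        static uint8_t expand5(uint8_t c) { return (uint8_t)((c << 3) | (c >> 2)); }
        static uint8_t expand6(uint8_t c) { return (uint8_t)((c << 2) | (c >> 4)); }

    Applied to the loop above, that would mean putchar(expand5(red)); putchar(expand6(green)); putchar(expand5(blue)); instead of the plain shifts.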

    Read the article

  • 8 bit enum, in C

    - by oxinabox.ucc.asn.au
    I have to store instructions, commands that I will be receiving via serial. The commands will be 8 bits long. I'd like to use enumerations to deal with them in my code; only an enumeration corresponds to a ... on this platform, I think, a 16-bit integer. I need to preserve transparency between a command's name and its value, so as to avoid having to translate an 8-bit number received over serial into some other type. BTW the platform is an AVR ATmega169V microcontroller, on the Butterfly demo board. It may be being underclocked to preserve power (I'm opposed to this; I believe the ATmega169V uses no power, not next to a router, but that's getting offtopic). So I need to keep things fast, and I don't have any luxuries like file I/O or operating systems. So any suggestions as to what type I should be using to store 8-bit commands? There has got to be something better than a massive header of #defines.
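
    One sketch of a common compromise in C (the command names and values below are hypothetical placeholders, not from the question): keep the names in an enum for readability, but store and transmit the values as uint8_t so they stay one byte in RAM and on the wire. With avr-gcc the enum type itself can also be shrunk via -fshort-enums or __attribute__((packed)), though that is compiler-specific.

        #include <stdint.h>

        /* Names live in an enum; storage stays an 8-bit type. */
        enum command_name {
            CMD_NOP   = 0x00,   /* hypothetical values */
            CMD_READ  = 0x01,
            CMD_WRITE = 0x02
        };

        typedef uint8_t command_t;          /* exactly one byte */

        static void handle(command_t cmd)
        {
            switch (cmd) {
            case CMD_READ:  /* ... */ break;
            case CMD_WRITE: /* ... */ break;
            default:        /* unknown command */ break;
            }
        }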

    Read the article

  • Problems running Java SWT application. Is this something to do with 64 vs 32-bit libraries?

    - by ?????
    I'm getting started with SWT and am getting this:

        Exception in thread "main" java.lang.UnsatisfiedLinkError: no swt-cocoa-3557 or swt-cocoa in swt.library.path, java.library.path or the jar file
            at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
            at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
            at org.eclipse.swt.internal.C.<clinit>(Unknown Source)
            at org.eclipse.swt.internal.cocoa.NSThread.isMainThread(Unknown Source)
            at org.eclipse.swt.graphics.Device.<init>(Unknown Source)
            at org.eclipse.swt.widgets.Display.<init>(Unknown Source)
            at org.eclipse.swt.widgets.Display.<init>(Unknown Source)
            at org.eclipse.swt.widgets.Display.getDefault(Unknown Source)
            at com.astrobetty.getotagdesktop.MainUIWindow.main(MainUIWindow.java:110)

    I've carefully followed the directions here: http://www.eclipse.org/swt/eclipse.php. I can see libswt-awt-cocoa-3557.jnilib, libswt-cocoa-3557.jnilib and libswt-pi-cocoa-3557.jnilib clearly listed under "Referenced Libraries" in my Eclipse package explorer. Presumably, that should make them available on the classpath when I select the Java module that contains main and do a "Run As" application. What's the problem? I suspect it may be related to 64-bit vs 32-bit VMs...

    Read the article

  • Why does my 64-bit IIS app pool show 3 gigabytes more virtual memory than private memory?

    - by Brett
    I have an ASP.NET application that I am running on 64-bit IIS 6 on Windows XP x64. When I open the performance counters after one page hit of a trivial page, I see Private Bytes of about 88 MB, but Virtual Bytes of about 3 GB. When I try the same thing with a VERY trivial ASP.NET app, I get the same result. We see something similar on Windows Server 2003 in production -- there it is an issue because we recycle when the virtual memory consumed outgrows a limit. Before we make any changes to our recycling settings, we'd like to answer the following questions: Why does the app pool grab such a large hunk of virtual memory? Is the amount of virtual memory headroom the app requests configurable? Thanks! Brett

    Read the article

  • Hibernate bit array to entity mapping

    - by teabot
    I am trying to map a normalized Java model to a legacy database schema using Hibernate 3.5. One particular table encodes foreign keys in a one-to-many relationship as a bit array column. Consider tables 'person' and 'club' that describe people's affiliations to clubs:

        person:                club:
        .----.------.          .----.---------.---------.-----------------.
        | id | name |          | id | name    | members | binary(members) |
        |----+------|          |----+---------+---------+-----------------|
        | 1  | Bob  |          | 10 | Cricket |       0 |             000 |
        | 2  | Joe  |          | 11 | Tennis  |       5 |             101 |
        | 3  | Sue  |          | 12 | Cooking |       7 |             111 |
        '----'------'          | 13 | Golf    |       3 |             100 |
                               '----'---------'---------'-----------------'

    So hopefully it is clear that person.id is used as the bit index in the bit array club.members. In this example the members column tells us that: no one is a member of Cricket, Bob/Sue - Tennis, Bob/Sue/Joe - Cooking and Sue - Golf. In my Java domain I'd like to declare this with entities like so:

        class Person {
            private int id;
            private String name;
            ...
        }

        class Club {
            private Set<Person> members;
            private int id;
            private String name;
            ...
        }

    I am assuming that I must use a UserType implementation but have been unable to find any examples where the items described by the user type are references to entities - not literal field values - or composites thereof. Additionally I am aware that I'll have to consider how the person entities are fetched when a club instance is loaded. Can anyone tell me how I can tame this legacy schema with Hibernate?

    Read the article

  • Strip parity bits in C from 8 bits of data followed by 1 parity bit

    - by dubnde
    I have a buffer of bits with 8 bits of data followed by 1 parity bit. This pattern repeats itself. The buffer is currently stored as an array of octets. Example (p are parity bits):

        0001 0001 p000 0100 0p00 0001 00p0 1110 0 ...

    should become

        0001 0001 0000 1000 0000 0100 0111 00 ...

    Basically, I need to strip off every ninth bit to obtain just the data bits. How can I achieve this? This is related to another question asked here some time back. This is on a 32-bit machine, so the solution to the related question may not be applicable. The maximum possible number of bits is 45, i.e. 5 data octets. This is what I have tried so far: I created a "boolean" array and added the bits into the array based on the bits set in each octet. I then look at every ninth index of the array and throw it away, then move the remaining array down one index. Then I've got only the data bits left. I was thinking there may be better ways of doing this.
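
    A sketch in C of one way to do this without an intermediate boolean array (assuming bits are packed MSB-first within each octet, which is an assumption, not something stated in the question): walk the input bit by bit, skip every ninth bit, and pack the survivors into the output. For the stated maximum of 45 input bits this yields at most 40 data bits (5 octets).

        #include <stdint.h>
        #include <stddef.h>
        #include <string.h>

        /* Copies the data bits from 'in' (nbits total, 8 data + 1 parity repeating)
           into 'out', dropping every ninth bit. Returns the number of data bits written. */
        static size_t strip_parity(const uint8_t *in, size_t nbits, uint8_t *out)
        {
            size_t outbit = 0;
            memset(out, 0, (nbits + 7) / 8);                /* clear enough output space */
            for (size_t i = 0; i < nbits; i++) {
                if (i % 9 == 8)                             /* every ninth bit is parity */
                    continue;
                int bit = (in[i / 8] >> (7 - i % 8)) & 1;   /* MSB-first read */
                if (bit)
                    out[outbit / 8] |= (uint8_t)(0x80u >> (outbit % 8));
                outbit++;
            }
            return outbit;
        }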

    Read the article

  • Most efficient way to extract bit flags

    - by Hallik
    I have these possible bit flags: 1, 2, 4, 8, 16, 64, 128, 256, 512, 2048, 4096, 16384, 32768, 65536. So each number is like a true/false statement on the server side. If the first 3 items, and only the first 3 items, are marked "true" on the server side, the web service will return a 7. Or if all 14 items above are true, I would still get a single number back from the web service, which is the sum of all those numbers. What is the best way to handle the number I get back to find out which items are marked as "true"?
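
    A minimal sketch in C of the usual approach: AND the returned value against each flag in turn; a non-zero result means that item is "true". (The flag list simply mirrors the one in the question.)

        #include <stdio.h>
        #include <stddef.h>

        static const unsigned flags[] = { 1, 2, 4, 8, 16, 64, 128, 256, 512,
                                          2048, 4096, 16384, 32768, 65536 };

        int main(void)
        {
            unsigned value = 7;                         /* example: first three flags set */
            for (size_t i = 0; i < sizeof flags / sizeof flags[0]; i++) {
                if (value & flags[i])                   /* test one bit flag */
                    printf("flag %u is set\n", flags[i]);
            }
            return 0;
        }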

    Read the article

  • How Can I: Generate 40/64 Bit WEP Key In Python?

    - by Aktariel
    So, I've been beating my head against the wall of this issue for several months now, partly because it's a side interest and partly because I suck at programming. I've searched and researched all across the web, but have not had any luck (except one small bit of success; see below), so I thought I might try asking the experts. What I am trying to do is, as the title suggests, generate a 40/64-bit WEP key from a passphrase, according to the "de facto" standard. (A site such as http://www.powerdog.com/wepkey.cgi produces the expected outputs.) I have already written portions of the script that take inputs and write them to a file; one of the inputs would be the passphrase, sanitized to lower case. For the longest time I had no idea what the de facto standard was, much less how to go about implementing it. I finally stumbled across a paper (http://www.lava.net/~newsham/wlan/WEP_password_cracker.pdf) that sheds as much light as I've had yet on the issue (page 18 has the relevant bits). Apparently, the passphrase is "mapped to a 32-bit value with XOR", the result of which is then used as the seed for a linear congruential PRNG (which of the several PRNGs Python has would fit this description, I don't know), and then several bits of each result are taken. I have no idea how to go about implementing this, since the description is rather vague. What I need is help in writing the generator in Python, and also in understanding how exactly the key is generated. I'm not much of a programmer, so explanations are appreciated as well. (Yes, I know that WEP isn't secure.)
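
    For illustration only, a sketch in C of how the commonly described "de facto" generator works, based on one reading of the Newsham paper: XOR the passphrase bytes cyclically into a 32-bit seed, then run a linear congruential generator and keep one byte per step. The multiplier and increment constants below are the ones usually quoted for this generator, so they should be verified against a known-good tool before the output is trusted.

        #include <stdint.h>
        #include <stdio.h>

        /* Generates 4 keys x 5 bytes (40-bit WEP) from a passphrase. */
        static void wep40_keys(const char *pass, uint8_t keys[20])
        {
            uint32_t seed = 0;
            for (size_t i = 0; pass[i] != '\0'; i++)
                seed ^= (uint32_t)(unsigned char)pass[i] << ((i & 3) * 8);  /* XOR fold into 32 bits */

            for (int i = 0; i < 20; i++) {
                seed = seed * 0x343FDu + 0x269EC3u;     /* assumed LCG constants */
                keys[i] = (uint8_t)(seed >> 16);        /* keep bits 16..23 of each step */
            }
        }

        int main(void)
        {
            uint8_t k[20];
            wep40_keys("passphrase", k);
            for (int i = 0; i < 20; i++)
                printf("%02x%s", (unsigned)k[i], (i % 5 == 4) ? "\n" : "");
            return 0;
        }

    The same structure translates almost line for line into Python: XOR-fold ord(c) into the seed, then iterate the LCG while masking the seed to 32 bits.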

    Read the article

  • Add two 32-bit integers in Assembler for use in VB6

    - by Emtucifor
    I would like to come up with the byte code in assembler (assembly?) for Windows machines to add two 32-bit longs and throw away the carry bit. I realize the "Windows machines" part is a little vague, but I'm assuming that the bytes for ADD are pretty much the same in all modern Intel instruction sets. I'm just trying to abuse VB a little and make some things faster. So... if the string "8A4C240833C0F6C1E075068B442404D3E0C20800" is the assembly code for SHL that can be "injected" into a VB6 program for a fast SHL operation expecting two Long parameters (we're ignoring here that 32-bit longs in VB6 are signed; just pretend they are unsigned), what is the hex string of bytes representing assembler instructions that will do the same thing to return the sum? The hex code above for SHL is, according to the author:

        mov eax, [esp+4]
        mov cl, [esp+8]
        shl eax, cl
        ret 8

    I spat those bytes into a file and tried unassembling them in a Windows command prompt using the old debug utility, but I figured out it's not working with the newer instruction set, because it didn't like EAX when I tried assembling something, though it was happy with AX. I know from comments in the source code that SHL EAX, CL is D3E0, but I don't have any reference to know what the bytes are for the instruction ADD EAX, CL, or I'd try it. I tried flat assembler and am not getting anything I can figure out how to use. I used it to assemble the original SHL code and got a very different result, not the same bytes. Help?
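
    For comparison, a sketch of an equivalent stub that adds the two stack arguments instead of shifting. The byte encodings are worked out by hand from the x86 opcode tables, not taken from the question, so assemble and verify them before injecting anything:

        mov eax, [esp+4]    ; 8B 44 24 04 - load the first Long argument
        add eax, [esp+8]    ; 03 44 24 08 - add the second; any carry out of bit 31 is simply dropped
        ret 8               ; C2 08 00    - pop the two 4-byte arguments and return, result in EAX

    Concatenated, that would give the hex string 8B44240403442408C20800, using the same calling convention the SHL stub above assumes.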

    Read the article

  • Is there any "standard" htonl-like function for 64 bits integers in C++ ?

    - by ereOn
    Hi, I'm working on an implementation of the memcache protocol which, at some points, uses 64-bit integer values. These values must be stored in "network byte order". I wish there were some uint64_t htonll(uint64_t value) function to do the change, but unfortunately, if it exists, I couldn't find it. So I have 1 or 2 questions: Is there any portable (Windows, Linux, AIX) standard function to do this? If there is no such function, how would you implement it? I have in mind a basic implementation, but I don't know how to check the endianness at compile time to make the code portable. So your help is more than welcome here ;) Thank you.
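
    A sketch of one common fallback (assuming only that htonl() is available, as it is on all three platforms mentioned): split the value into two 32-bit halves, byte-swap each with htonl(), and swap the halves. The htonl(1) == 1 test detects a big-endian host at run time, where nothing needs to change.

        #include <stdint.h>
        #include <arpa/inet.h>      /* htonl(); on Windows include <winsock2.h> instead */

        static uint64_t my_htonll(uint64_t value)
        {
            if (htonl(1) == 1)      /* big-endian host: already in network byte order */
                return value;

            const uint32_t high = (uint32_t)(value >> 32);
            const uint32_t low  = (uint32_t)(value & 0xFFFFFFFFu);
            return ((uint64_t)htonl(low) << 32) | htonl(high);
        }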

    Read the article

  • Detection of negative integers using bit operations

    - by Nawaz
    One approach to check whether a given integer is negative or not could be this (using bit operations):

        int num_bits = sizeof(int) * 8;  // assuming 8 bits per byte!
        int sign_bit = given_int & (1 << (num_bits-1));  // sign_bit is either non-zero or 0
        if ( sign_bit ) {
            cout << "given integer is negative" << endl;
        } else {
            cout << "given integer is positive" << endl;
        }

    The problem with this solution is that the number of bits per byte might not be 8; it could be 9, 10, 11, even 16 or 40 bits per byte. A byte doesn't necessarily mean 8 bits! Anyway, this problem can be easily fixed by writing:

        // CHAR_BIT is defined in limits.h
        int num_bits = sizeof(int) * CHAR_BIT;  // no assumption

    It seems fine now. But is it really? Is this Standard-conformant? What if the negative integer is not represented as 2's complement? What if its representation is a binary numeration system that doesn't necessitate that only negative integers have 1 in the most significant bit? Can we write such code that will be both portable and Standard-conformant? Related topics: Size of Primitive data types; Why is a boolean 1 byte and not 1 bit of size?

    Read the article

  • setBit java method using bit shifting and hexadecimal code - question

    - by somewhat_confused
    I am having trouble understanding what is happening in the two lines with the 0xFF7F and the one below it. There is a link here that explains it to some degree: http://www.herongyang.com/java/Bit-String-Set-Bit-to-Byte-Array.html. I don't know if ((0xFF7F>>posBit) & oldByte) & 0x00FF is supposed to be 3 values 'AND'ed together or how this is supposed to be read. If anyone can clarify what is happening here a little better, I would greatly appreciate it.

        private static void setBit(byte[] data, final int pos, final int val) {
            int posByte = pos/8;
            int posBit = pos%8;
            byte oldByte = data[posByte];
            oldByte = (byte) (((0xFF7F>>posBit) & oldByte) & 0x00FF);
            byte newByte = (byte) ((val<<(8-(posBit+1))) | oldByte);
            data[posByte] = newByte;
        }

    This is called as setBit(out, i, val) from a selectBits method, where out is byte[] out = new byte[numOfBytes]; (numOfBytes can be 7 in this situation), i is the number [57], the original number from the PC1 int array holding the 56 integers, and val is the bit taken from the byte array by the getBit() method.

    Read the article

  • how many color combinations in a 24 bit image

    - by numerical25
    I am reading a book and I am not sure if it's a mistake or I am misunderstanding the quote. It reads: "Nowadays every PC you can buy has hardware that can render images with at least 16.7 million individual colors. Rather than have an array with thousands of color entries, the images instead contain explicit color values for each pixel. A 24-bit display, of course, uses 24 bits, or 3 bytes per pixel, for color information. This gives 1 byte, or 256 distinct values each, for red, green, and blue. This is generally called true color, because 256^3 (16.7 million)". He says 1 byte is equal to 256 distinct values. 1 byte = 8 bits. 8^2 bits = 64 distinct colors, right?? It's not adding up for me. I know it might be something simple to understand, but I don't understand.
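
    For reference, the arithmetic the book is using (the confusion is 8^2 versus 2^8: n bits give 2^n values, not n^2):

        #include <stdio.h>

        int main(void)
        {
            printf("values per 8-bit channel: %d\n", 1 << 8);      /* 2^8 = 256 */
            printf("24-bit colors: %ld\n", 256L * 256L * 256L);    /* 256^3 = 16777216 */
            return 0;
        }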

    Read the article

  • Installed Service Pack 3 for SQL Server 2005 but it does not show up when selecting @@version

    - by Blegger
    We installed Service Pack 3 for SQL Server 2005 Standard Edition (64-bit). In the Add or Remove Programs menu we have the following entries under Microsoft SQL Server 2005 (64-bit):

        Service Pack 3 for SQL Server Integration Services (64-bit) ENU (KB955706)
        Service Pack 3 for SQL Server Notification Services 2005 (64-bit) ENU (KB955706)
        Service Pack 3 for SQL Server Database Services 2005 (64-bit) ENU (KB955706)
        Service Pack 3 for SQL Server Tools and Workstation Components 2005 (64-bit) ENU (KB955706)

    But when I run the following query on the server:

        select @@version

    I get this result:

        Microsoft SQL Server 2005 - 9.00.4035.00 (X64)
        Nov 24 2008 16:17:31
        Copyright (c) 1988-2005 Microsoft Corporation
        Standard Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)

    Why does it not display that it is running Service Pack 3?
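
    As an aside (a hedged pointer, not part of the question): the "Service Pack 2" at the end of that output refers to the Windows operating system, while the SQL Server patch level is carried in the build number and can be read directly with SERVERPROPERTY, for example:

        SELECT SERVERPROPERTY('ProductVersion') AS version,       -- e.g. 9.00.4035.00
               SERVERPROPERTY('ProductLevel')   AS service_pack;  -- e.g. SP3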

    Read the article

  • MySQL on Windows-7 (64-bit) on 0.0.0.0:3306 rather than 127.0.0.1:3306

    - by Mark Baker
    I've just installed the latest production release of MySQL (64-bit) on my Windows 7 box. It was a straight vanilla install, using all defaults, but phpMyAdmin can't see it at all. MySQL is configured as a service to start automatically, and I know it's running because the MySQL GUI tools work correctly. Doing a netstat -a, I see

        TCP    0.0.0.0:3306    Marks-Netbook:0    LISTENING

    when I'd expect to see

        TCP    127.0.0.1:3306    Marks-Netbook:0    LISTENING

    I don't know if this is the reason phpMyAdmin can't connect, but suspect that it is probably the case. Can anybody confirm whether this is the likely cause, and/or suggest how I can resolve this?
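
    For what it's worth, a sketch of the relevant server setting: 0.0.0.0 means mysqld is listening on all interfaces, which already includes 127.0.0.1, so that by itself does not normally block local connections. The listening address is controlled by bind-address in my.ini, e.g.

        [mysqld]
        # Listen only on the loopback interface (illustrative example, adjust as needed)
        bind-address = 127.0.0.1
        port         = 3306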

    Read the article

  • How to reliably specialize template with intptr_t in 32 and 64 bit environments?

    - by vava
    I have a template I want to specialize for two int types, one of them plain old int and the other intptr_t. On a 64-bit platform they have different sizes and I can do that with ease, but on a 32-bit platform both types are the same and the compiler throws an error about redefinition. What can I do to fix it, other than disabling one of the definitions with the preprocessor? Some code as an example:

        template<typename T> type * convert();

        template<> type * convert<int>()      { return getProperIntType(sizeof(int)); }
        template<> type * convert<intptr_t>() { return getProperIntType(sizeof(intptr_t)); }

        // this template can be specialized with non-integral types as well,
        // so I can't just use sizeof() as template parameter.
        template<> type * convert<void>()     { return getProperVoidType(); }

    Read the article
