Search Results

Search found 22300 results on 892 pages for 'half bit'.


  • How do I load SciLexer.dll in Visual Studio 2008 Designer, on Windows 7 64-bit?

    - by Filini
    We develop a WinForms application using the Scintilla.NET (1.7) component, which wraps the unmanaged SciLexer.dll. At run time we distribute both the 32-bit and 64-bit SciLexer.dll and load the correct one when the application starts (everything works fine). On our new development environments (Windows 7 64-bit), all our solutions build and run just fine, but the WinForms visual designer does not load our forms/controls that use Scintilla.NET, because it cannot load the correct SciLexer.dll:

        Window class name is not valid.
          at System.Windows.Forms.NativeWindow.WindowClass.RegisterClass()
          at System.Windows.Forms.NativeWindow.WindowClass.Create(String className, Int32 classStyle)
          at System.Windows.Forms.NativeWindow.CreateHandle(CreateParams cp)
          at System.Windows.Forms.Control.CreateHandle()
          at System.Windows.Forms.Control.get_Handle()
          at Scintilla.ScintillaControl.SendMessageDirect(UInt32 msg, IntPtr wParam, IntPtr lParam)
          at Scintilla.ScintillaControl.SendMessageDirect(UInt32 msg)
          at Scintilla.ScintillaControl.get_CodePage()
          at Scintilla.ScintillaControl..ctor(String sciLexerDllName)
          at Scintilla.ScintillaControl..ctor()

    Where does Visual Studio 2008 look for unmanaged libraries? I tried putting the 64-bit SciLexer.dll in SysWOW64, putting it in the folder where ScintillaNET.dll is referenced, adding a folder to the PATH system variable, and adding a folder reference to the project, but I keep getting this error. Any help is appreciated.
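    Worth noting: the designer instantiates the control inside devenv.exe, which is a 32-bit process even on 64-bit Windows, so only the 32-bit SciLexer.dll can ever load there, and Windows resolves it using the DLL search order of devenv.exe (its own directory, the system directories, then PATH), not the project's output folder. A minimal Win32 sketch of forcing an explicit search directory before the first load attempt; illustrative only, not Scintilla.NET's actual loading code:

        #include <windows.h>

        // Hypothetical helper: put `dir` on this process's DLL search
        // path, then load SciLexer.dll explicitly. A bitness mismatch
        // fails with ERROR_BAD_EXE_FORMAT (193) rather than "not found".
        HMODULE loadSciLexerFrom(const wchar_t *dir) {
            SetDllDirectoryW(dir);                 // prepend dir to the search path
            return LoadLibraryW(L"SciLexer.dll");  // NULL on failure; see GetLastError()
        }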

    Read the article

  • Why is there a 2 GB memory limit when running in 64-bit Windows?

    - by Roland Bengtsson
    I'm a member of a team that develops a Delphi application. The memory requirements are huge: 500 MB is normal, but in some cases we get an out-of-memory exception, typically with 1000-1700 MB already allocated. We of course want a 64-bit compiler, but that won't happen now (and if it does, we must also convert to Unicode, but that is another story...). My question is: why is there a 2 GB memory limit per process when running in a 64-bit environment? The pointer is 32-bit, so I would think 4 GB is the right limit. I use Delphi 2007.
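    For background (a fact about Windows rather than Delphi): a 32-bit process gets a 2 GB user-mode address space by default, even under 64-bit Windows; marking the EXE large-address-aware (the IMAGE_FILE_LARGE_ADDRESS_AWARE PE flag, which Delphi can set via {$SetPEFlags ...}) raises that to 4 GB under WOW64. A small C++ sketch for checking what address space a process was actually granted:

        #include <windows.h>
        #include <cstdio>

        int main() {
            MEMORYSTATUSEX ms = { sizeof(ms) };   // dwLength must be set first
            GlobalMemoryStatusEx(&ms);
            // ullTotalVirtual is the user-mode address space granted to this
            // process: ~2 GB for a default 32-bit EXE, ~4 GB for a
            // large-address-aware 32-bit EXE running under WOW64.
            std::printf("user address space: %llu MB\n",
                        (unsigned long long)(ms.ullTotalVirtual / (1024 * 1024)));
            return 0;
        }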

    Read the article

  • Faster way to convert from 24-bit wav PCM format to float?

    - by LMO
    I need to read data in from a wav file in 24-bit PCM format and convert it to float. I'm using Python 2.7.2. The wave package reads the data in as a string, so what I've tried is:

        # read in the entire wav file
        wdata = f.readframes(nFrames)

        # unpack 3-byte samples into signed integers and convert to float
        data = array.array('f')
        for i in range(0, nFrames * 3, 3):
            data.append(float(struct.unpack('<i', '\x00' + wdata[i:i+3])[0]))

        # normalize sample values (elementwise; array.array has no / operator)
        data = [x / 0x800000 for x in data]

    This is quite a bit faster than my earlier approaches, but still quite slow. Can anyone suggest a more efficient method?
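    The usual Python speed-up here is to vectorize the loop (e.g. with numpy) rather than unpacking per sample. For reference, the core per-sample work — assembling each little-endian 3-byte sample so the sign extends, then scaling by 2^23 — looks like this in C++; a sketch independent of the wave module:

        #include <cstdint>
        #include <cstddef>
        #include <vector>

        // Convert nSamples little-endian 24-bit PCM samples to floats.
        std::vector<float> pcm24ToFloat(const uint8_t *bytes, size_t nSamples) {
            std::vector<float> out;
            out.reserve(nSamples);
            for (size_t i = 0; i < nSamples; ++i) {
                const uint8_t *p = bytes + 3 * i;
                // Pack the 3 bytes into bits 8..31, then shift back down 8 so
                // the sign bit propagates (same trick as the '\x00' prepend in
                // the Python version). Right-shifting a negative int is
                // arithmetic on mainstream compilers.
                int32_t v = (int32_t)(((uint32_t)p[0] << 8) |
                                      ((uint32_t)p[1] << 16) |
                                      ((uint32_t)p[2] << 24)) >> 8;
                out.push_back((float)v / 8388608.0f);   // 2^23 = 0x800000
            }
            return out;
        }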

    Read the article

  • NASM: Count how many bits in a 32-bit number are set to 1.

    - by citronas
    I have a 32-bit number and want to count how many bits are 1. I'm thinking of this pseudocode:

        mov eax, [number]
        while (eax != 0) {
            div eax, 2        ; quotient in eax, remainder (the low bit) in edx
            if (edx == 1) { ecx++; }
        }

    Is there a more efficient way? I'm using NASM on an x86 processor. (I'm just beginning with assembler, so please do not tell me to use code from external libraries, because I do not even know how to include them ;) ) (I just found http://stackoverflow.com/questions/109023/best-algorithm-to-count-the-number-of-set-bits-in-a-32-bit-integer which also contains my solution. There are other solutions posted, but unfortunately I can't seem to figure out how I would write them in assembler.)
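    One classic alternative is Kernighan's trick: clearing the lowest set bit each pass makes the loop run once per 1-bit rather than once per bit position. Shown as a C++ sketch of the idea (the linked question covers the assembly variants); newer x86 chips also expose this directly as the POPCNT instruction:

        // Count set bits by repeatedly clearing the lowest 1-bit:
        // x & (x - 1) removes exactly one set bit per iteration.
        unsigned popcount32(unsigned x) {
            unsigned count = 0;
            while (x != 0) {
                x &= x - 1;   // clear the lowest set bit
                ++count;
            }
            return count;
        }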

    Read the article

  • How can I access the sign bit of a number in C++?

    - by Keand64
    I want to be able to access the sign bit of a number in C++. My current code looks something like this:

        int signBit = number >> 31;

    That appears to work, giving me 0 for positive numbers and -1 for negative numbers. However, I don't see how I get -1 for negative numbers: if 12 is

        0000 0000 0000 0000 0000 0000 0000 1100

    then -12 is

        1111 1111 1111 1111 1111 1111 1111 0100

    and shifting it right 31 bits would make

        0000 0000 0000 0000 0000 0000 0000 0001

    which is 1, not -1, so why do I get -1 when I shift it?
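    The behavior hinted at here: >> on a signed int is an arithmetic shift on virtually all compilers (strictly, implementation-defined), so the sign bit is copied into every vacated position and -12 >> 31 yields all ones, i.e. -1. Shifting the value as unsigned zero-fills instead. A minimal sketch, assuming 32-bit int:

        #include <cstdio>

        int main() {
            int number = -12;
            int arithmetic = number >> 31;   // sign bit smeared across all 32 bits: -1
            unsigned logical =
                static_cast<unsigned>(number) >> 31;   // zero-filled: 1
            std::printf("%d %u\n", arithmetic, logical);   // prints: -1 1
            return 0;
        }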

    Read the article

  • How to reliably specialize template with intptr_t in 32 and 64 bit environments?

    - by vava
    I have a template I want to specialize for two int types, one of them a plain old int and the other intptr_t. On a 64-bit platform they have different sizes and I can do that with ease, but on 32-bit both types are the same and the compiler throws an error about redefinition. What can I do to fix it, other than disabling one of the definitions with the preprocessor? Some code as an example:

        template<typename T> type * convert();

        template<> type * convert<int>() { return getProperIntType(sizeof(int)); }
        template<> type * convert<intptr_t>() { return getProperIntType(sizeof(intptr_t)); }

        // this template can be specialized with non-integral types as well,
        // so I can't just use sizeof() as a template parameter
        template<> type * convert<void>() { return getProperVoidType(); }
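    One common workaround: instead of specializing per type name, route all integral types through a single definition keyed on the type's properties, so int and intptr_t share one instantiation whenever they happen to be the same type. A C++11-flavoured sketch (Boost's enable_if works the same way for C++03); type, getProperIntType and getProperVoidType are assumed from the question:

        #include <cstddef>
        #include <cstdint>
        #include <type_traits>

        struct type;                               // assumed from the question
        type * getProperIntType(std::size_t);      // assumed from the question
        type * getProperVoidType();                // assumed from the question

        // One overload handles every integral type, so there is no separate
        // specialization for int and intptr_t that could collide on 32-bit.
        template<typename T>
        typename std::enable_if<std::is_integral<T>::value, type *>::type
        convert() { return getProperIntType(sizeof(T)); }

        template<typename T>
        typename std::enable_if<std::is_void<T>::value, type *>::type
        convert() { return getProperVoidType(); }

        // usage: convert<int>(), convert<intptr_t>(), convert<void>()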

    Read the article

  • How do you specify a 64 bit unsigned int const 0x8000000000000000 in VS2008?

    - by Mark Franjione
    I read about the Microsoft-specific suffix "i64" for integer constants. I want to do an UNsigned shift of a ULONGLONG:

        ULONGLONG bigNum64 = 0x800000000000000i64 >> myval;

    In normal C I would use the suffix "U"; e.g., the similar 32-bit operation would be

        ULONG bigNum32 = 0x80000000U >> myval;

    I do NOT want the two's complement sign extension to propagate through the high bits. I want an UNSIGNED shift on a 64-bit constant. I think my first statement is going to do a SIGNED shift right. I tried 0x800000000000000i64U and 0x800000000000000u64 but got compiler errors.
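    For what it's worth, the unsigned counterpart of i64 is the Microsoft-specific ui64 suffix (a single token, not i64 plus U), and VS2008 also accepts the standard ULL. Either way the left operand is unsigned, so >> is a logical zero-fill shift. A sketch; note the top-bit constant needs all 16 hex digits:

        #include <windows.h>   // for ULONGLONG

        ULONGLONG shiftTopBit(unsigned myval) {
            // 0x8000000000000000 = bit 63 set; ui64 keeps the constant
            // unsigned, so no sign bits are shifted in.
            // 0x8000000000000000ULL works as well.
            return 0x8000000000000000ui64 >> myval;
        }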

    Read the article

  • Is there a way to have a bit bucket pointer? (C/C++)

    - by Crazy Chenz
    Is there a way to have a bit-bucket pointer? A lot of I/O (specifically input-related) system calls return data to a buffer of a specific size. Is there a trick or way to make a sort of bit-bucket pointer, so I can accept any amount of data that will be thrown away? Doing something like char tmp[INT_MAX] is crazy. The behavior I am looking for is something like /dev/null, only in a pointer world. Not too hopeful on this... just curious. Thanks, Chenz UPDATE: Perhaps mmap-ing /dev/null. Forgot about that when I asked the question.
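    There is no pointer-world /dev/null — every read has to land in real memory — but a small scratch buffer in a loop discards any amount of input without a giant allocation. A POSIX-flavoured sketch:

        #include <unistd.h>

        /* Read and discard exactly `count` bytes from fd.
           Returns 0 on success, -1 on error, 1 on premature EOF. */
        int discard_bytes(int fd, size_t count) {
            char scratch[4096];   /* reused every chunk; contents ignored */
            while (count > 0) {
                size_t chunk = count < sizeof scratch ? count : sizeof scratch;
                ssize_t n = read(fd, scratch, chunk);
                if (n < 0) return -1;   /* error */
                if (n == 0) return 1;   /* EOF before count bytes */
                count -= (size_t)n;
            }
            return 0;
        }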

    Read the article

  • For LinqToSQL is 0 true or is 1 (for type Bit)?

    - by Vaccano
    I have a column of type Bit (called BBoolVal in this example) and a LINQ to SQL statement like this:

        List<MyClass> myList = _ctx.DBList
            .Where(x => x.AGuidID == paramID)
            .Where(x => x.BBoolVal == false);

    When I look at the SQL it ends up like this (I added the spacing and changed the names):

        SELECT [t0].[Id], [t0].[AGuidID], [t0].[OtherIDID],
               [t0].[TimeColumn], [t0].[BBoolVal], [t0].[MoreID]
        FROM [dbo].[MyTable] AS [t0]
        WHERE (NOT ([t0].[BBoolVal] = 1)) AND ([t0].[AGuidID] = @p0)

    Because my x.BBoolVal == false translates to [BBoolVal] == 1, I gather that false = 1 (and thus true = 0). I am asking because this seems a bit backwards to me. I am fine to accept it; I just want to be sure.

    Read the article

  • What does "single-bit ECC errors were detected on the RAID controller" mean?

    - by jsp
    I have a Dell T7600 with a PERC H710P RAID controller and 4 attached 3 TB drives. Over the past few months the RAID controller has been intermittently reporting errors on boot: "no boot device found", "adapter at baseport is not responding", disks frequently reported as missing or failed. I have since replaced the RAID controller, the 4 hard drives, and finally the system's motherboard. After replacing the motherboard and rebooting a few times, I got the error: Single bit ECC errors were detected on the RAID controller. Please contact technical support to resolve this issue. After rebooting about 20 more times, I haven't seen the ECC error. The system seems otherwise OK, except that the disk fans will sometimes start blowing at full blast while the system is sitting completely idle and not stop until I reboot. Are the ECC errors in memory on the RAID controller? Or does the RAID controller map into system memory, and the ECC errors are really in system memory? Or are the ECC errors in the 1 GB cache that resides on the RAID controller?

    Read the article

  • Can I install two Ubuntu versions on the same machine?

    - by abh
    Hello, I have Ubuntu 10.10 32-bit already installed on my machine. I am using MongoDB and it does not work properly on a 32-bit machine, so I want to install 64-bit Ubuntu 10.10 on another partition (so that I can have both the 32-bit and 64-bit versions). Is it okay to install both 32-bit and 64-bit? I mean, will it cause any problems? And on which partition should I install the 64-bit version? My partitions are as follows:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1              37G   11G   25G  30% /
        none                  1.4G  260K  1.4G   1% /dev
        none                  1.4G  776K  1.4G   1% /dev/shm
        none                  1.4G  244K  1.4G   1% /var/run
        none                  1.4G     0  1.4G   0% /var/lock
        /dev/sda6             129G   73G   50G  60% /home
        /dev/sda7             127G   76G   45G  64% /vol

    Waiting for your replies.

    Read the article

  • Looking for best practice for version numbering of dependent software components

    - by bit-pirate
    We are trying to decide on a good way to do version numbering for software components that depend on each other. Let's be more specific: software component A is a firmware running on an embedded device, and component B is its respective driver for a normal PC (Linux/Windows machine). They communicate with each other using a custom protocol. Since our product is also targeted at developers, we will offer stable and unstable (experimental) versions of both components (the firmware is closed-source, while the driver is open-source).

    Our biggest difficulty is how to handle API changes in the communication protocol. While we were implementing a compatibility check in the driver - it checks whether the firmware version is compatible with the driver's version - we started to discuss multiple ways of version numbering. We came up with one solution, but we also felt like we were reinventing the wheel. That is why I'd like to get some feedback from the programmer/software developer community, since we think this is a common problem.

    So here is our solution: we plan to follow the widely used major.minor.patch version numbering and to use even/odd minor numbers for the stable/unstable versions. If we introduce changes in the API, we will increase the minor number. This convention leads to the following example situation: the current stable branch is 1.2.1 and unstable is 1.3.7. Now a new patch for unstable changes the API, which makes the new unstable version number 1.5.0. Once the unstable branch is considered stable, say at 1.5.3, we will release it as 1.4.0.

    I would be happy about an answer to any of the related questions below: Can you suggest a best practice for handling the issues described above? Do you think our "custom" convention is good? What changes would you apply to the described convention? Thanks a lot for your feedback!

    PS: Since I'm new here, I can't create new tags (e.g. best-practice). So I'm wondering if best-practice is just misspelled or I don't get its meaning.
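    For illustration, the driver-side compatibility check under the proposed convention reduces to comparing an "API generation" derived from major and minor, since stable 2k and unstable 2k+1 branches share an API and every API change bumps minor past the next even number. A sketch with made-up names, not the actual driver code:

        struct Version { int major, minor, patch; };

        // Under the proposed scheme, stable minor 2k and unstable minor
        // 2k+1 share an API, so integer-dividing minor by 2 yields an
        // API generation that compatible components must agree on;
        // patch never affects the API.
        bool isCompatible(const Version &firmware, const Version &driver) {
            return firmware.major == driver.major
                && firmware.minor / 2 == driver.minor / 2;
        }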

    Read the article

  • Should I be paid for time spent learning a framework?

    - by nate-bit
    To shed some light on the situation: I am currently one of two programmers working in a small startup software company. Part of my job requires me to learn a Web development framework that I am not currently familiar with. I get paid by the hour. So the question is: Is it wholly ethical to spend multiple hours of the day reading through documentation and tutorials and be paid for this time when I am not actively developing our product? Or should the bulk of this learning be done at home, or otherwise off hours, to allow for more full-on development of our application during the work day?

    Read the article

  • Will Windows 7 work at all on my old Toshiba [closed]

    - by andrew
    Windows 7 requires the following specifications:

        - 1 gigahertz (GHz) or faster 32-bit (x86) or 64-bit (x64) processor
        - 1 gigabyte (GB) RAM (32-bit) or 2 GB RAM (64-bit)
        - 16 GB available hard disk space (32-bit) or 20 GB (64-bit)
        - DirectX 9 graphics device with WDDM 1.0 or higher driver

    Will it work at all on my old Toshiba Satellite A100 PSAA8C-SK400E?

        - Intel® Core™ Solo processor T1350 (1.86 GHz, 533 MHz FSB, L1 cache 32 KB/32 KB, L2 cache 2 MB)
        - Standard memory: 2x512 MB DDR2
        - Intel® Graphics Media Accelerator 950 with 8 MB-128 MB

    The main problem I can see is that the graphics is not up to it.

    Read the article

  • Game timings and formats

    - by topright
    There are more or less standardized TV-show/movie formats and recommended timings:

    1. By the early 1960s, television companies commonly presented half-hour-long "comedy" series, or one-hour-long "dramas." Half-hour series were mostly restricted to situation comedy or family comedy, and were usually aired with either a live or artificial laugh track. One-hour dramas included genre series such as police and detective series, westerns, science fiction, and, later, serialized prime-time soap operas. Programs today still overwhelmingly conform to these half-hour and one-hour guidelines. Source

    2. In the United States, most medical dramas are one hour long. Source

    3. Traditionally serials were broadcast as fifteen-minute installments each weekday in daytime slots. In 1956 As the World Turns debuted as the first half-hour soap opera. All soap operas broadcast half-hour episodes by the end of the 1960s. With increased popularity in the 1970s most soap operas expanded to an hour (Another World even expanded to ninety minutes for a short time). More than half of the serials had expanded to one-hour episodes by 1980. As of 2010, six of the seven US serials air one-hour episodes each weekday. Source

    Interesting. Are there any standards of timing in game development? Well, 5-20 minute casual games, of course. There is even a "5-minutes-game" site. And a 1-hour-gamer site. Are there 1-week, 1-year, 1-eternity game formats? Chess and Go are deep games that you can study all your life, yet they are played in an hour or over several days (pro games). Addictive long-term online role-playing games (without a win condition) are played over months and, possibly, years. Replayability is an important factor to consider. It's good when a game design document contains a line: "This game is designed to be solved in X hours." How can that be measured before there is any prototype or demo? When you know your game format, you know your audience (and vice versa). It is a practical question. Is there psychological research on the dynamics of gaming interest and involvement? And is there a correlation between game format and game genre?

    Read the article

  • Who spotted the omission?

    - by olaf.heimburger
    In my entry OFM 11g: Install OAM 10.1.4.3 (32-bit) on 64-bit RedHat AS 5 I explained how to install OAM 10.1.4.3 (32-bit) on 64-bit RedHat. This is great and works. If you seriously want to use OAM 10.1.4.3, you should consider OHS 11g 32-bit. But this installation is a bit tricky. Nearly all the tricks needed to get it done are described in the entry mentioned above. Today I realized that I missed a small bit needed to get the installation done successfully. The missing part is within the script that creates a vital piece of the OHS 11g package. This script is called genclientsh and resides in $OHS_HOME/bin. It uses gcc to link binaries. By default the script works great, but on 64-bit Linux it fails. To get around this, find the variable LD and change its value from gcc to gcc -m32. Done.

    Caveat: On support.oracle.com you will find a note that suggests creating a small shell script named gcc that includes the -m32 switch. I actually consider this dangerous, because we are human and tend to forget things quickly. Building a globally available script that changes things for a single setup has side effects that will lead to unpredictable results.

    Read the article

  • Why does Outlook 2007 lose connection to Exchange when Windows 7 64-bit turns off display?

    - by Greg R.
    The problem: when Windows 7 puts the display to sleep, Outlook 2007 and Microsoft Office Communicator 2005 lose their connection to the Exchange server. When I unlock the computer, Outlook is logged out of Exchange and prompts me for credentials (although usually I have to restart Outlook to get it to reconnect). The network connection is still active; other applications don't lose their connection to the network or the Internet when Windows 7 puts the display to sleep. I'm using a Dell E5400 notebook running Windows 7 Enterprise 64-bit with Outlook 2007 connecting to a corporate Exchange server (not sure if it's Exchange 2007 or 2010). The Dell is typically docked and connected via DVI (through the dock) to two Dell monitors. The Power Options in Windows 7 are set as follows:

        Turn off the display: 15 minutes
        Put the computer to sleep: never

    Those are the "plugged in" settings, but the problematic behavior is the same when running on battery. When Windows 7 turns off the display, it automatically locks the computer; e.g., I have to re-enter my credentials to access the machine. This is per corporate policy. The equivalent setup on my previous Dell notebook running Windows XP SP3 did not result in this problem with Outlook 2007 or Office Communicator 2005 connecting to the very same Exchange server. The problem began when I switched to the new Dell E5400 with Windows 7.

    Read the article
