Search Results

Search found 19795 results on 792 pages for 'bit whacker'.

  • Trying to compile a VS2008 project on 64-bit Windows which is a custom PowerShell PSSnapin

    - by Boris Kleynbok
    Library project compiles fine for Any CPU in VS2008 running on Win 7 64-bit. Now, in the post-build step, the following command fails when attempting to register the library DLL:

        PS C:\Windows\Microsoft.NET\Framework64\v2.0.50727> .\installutil C:\path\Project.dll
        Exception occurred while initializing the installation:
        System.BadImageFormatException: Could not load file or assembly 'file:///C:\path\Project.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    Do I need to compile the project as x64? I was under the impression that Any CPU would take care of it. Also, my library does have dependencies. Do they also need to be compiled as x64? Any help is appreciated.

    Read the article

  • Cannot Debug Visual Basic 6 ActiveX Component On 64-bit OS

    - by Humanier
    Hi. The situation might sound a bit weird, but I have to play with what I have. There's a Win2003 64-bit server OS and a legacy application written using Visual Studio 6. The app consists of two parts: ActiveX components written in VB6, and C++ code which uses them. I need to debug the components' code. I installed Visual Studio 6 on the server and I'm able to step into the component's code. Then I get the following situation:

    1) The C++ code works until it needs to instantiate component A.
    2) At this step we switch to VB6 and start debugging component A.
    3) In the very beginning, component A creates an instance of a class C exposed by component B. At this step the VB6 debugger shows an error message with the title "OLEDB32.DLL" and the following text: "Failed to load resource DLL C:\Program Files (x86)\Common Files\System\Ole DB\OLEDB32R.DLL"

    Additional information: the last step in the initialization of class C is opening an ADO connection to SQL Server using an OLEDB provider. I'd appreciate any ideas on how to resolve this problem. Thanks in advance.

    Read the article

  • ODBC data source for DB2 on 64-bit Windows 2008

    - by Rob Vermeulen
    First of all, sorry for this non-programming question. I just finished development of some code that communicates with DB2 and want to test/deploy it on a Windows 2008 machine. I'm a bit concerned about not being able to find a working ODBC data source (DSN/client) driver for DB2 on Windows 2008 (x64). I have a 32-bit driver for XP, but that one (obviously) won't install on 2008-64. The IBM web site comes up with 1844 results when searching for "ODBC Windows 2008", but none of them are relevant. The web site's also a pain to use, btw. While googling around I found some solutions by 3rd-party vendors, but they all want money :) And the DB2 client and ODBC driver from IBM have always been free of charge. Does anyone have a solution?

    Read the article

  • Process Memory limit of 64-bit process

    - by prakash
    I currently have a 32-bit .NET application (on x86 Windows) which requires lots of memory. Recently it started throwing System.OutOfMemoryException errors, so I am planning to move it to an x64 platform as a 64-bit process. Will this help with the out-of-memory exceptions? I was reading the MSDN article Memory Limits for Windows. So, my question is: if I compile a 64-bit .NET application, will it have IMAGE_FILE_LARGE_ADDRESS_AWARE set by default (as the article suggests)? I.e., will I be able to take advantage of the 8 GB user-mode virtual address space?

    Read the article

  • Converting Bit Field to int

    - by shaharg
    Hi, I have a bit field declared this way:

        typedef struct morder {
            unsigned int targetRegister : 3;
            unsigned int targetMethodOfAddressing : 3;
            unsigned int originRegister : 3;
            unsigned int originMethodOfAddressing : 3;
            unsigned int oCode : 4;
        } bitset;

    I also have an int array, and I want to get an int value from this array that represents the actual value of this bit field (which is actually some kind of machine word that I have the parts of, and I want the int representation of the whole word). Thanks a lot.
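
    A minimal sketch of two ways to do this (note: the in-memory layout of bit fields is implementation-defined, so the first variant's result depends on the compiler/ABI; the example values are made up for illustration). It compiles as C or C++:

        #include <stdio.h>
        #include <string.h>

        typedef struct morder {
            unsigned int targetRegister : 3;
            unsigned int targetMethodOfAddressing : 3;
            unsigned int originRegister : 3;
            unsigned int originMethodOfAddressing : 3;
            unsigned int oCode : 4;
        } bitset;

        int main(void) {
            bitset b = { 5, 2, 7, 1, 9 };
            unsigned int word = 0;

            /* Variant 1: copy the raw bytes. Avoids pointer-cast aliasing
               problems, but the bit layout is whatever the compiler chose. */
            memcpy(&word, &b, sizeof word < sizeof b ? sizeof word : sizeof b);
            printf("raw word:       0x%04x\n", word);

            /* Variant 2: assemble the word explicitly, so the layout is under
               your control rather than the compiler's (here: LSB-first,
               matching the usual layout on common ABIs -- an assumption). */
            unsigned int assembled = (b.oCode << 12)
                                   | (b.originMethodOfAddressing << 9)
                                   | (b.originRegister << 6)
                                   | (b.targetMethodOfAddressing << 3)
                                   |  b.targetRegister;
            printf("assembled word: 0x%04x\n", assembled);
            return 0;
        }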

    Read the article

  • Setting Mercurial's execute bit on Windows

    - by Joe
    I work on a Mercurial repository that is checked out onto a Unix filesystem such as ext3 on some machines, and onto FAT32 on others. In Subversion, I can set the svn:executable property to control whether a file should be marked executable when checked out on a platform that supports such a bit. I can do this regardless of the platform I'm running SVN on or the filesystem containing my working copy. In Mercurial, I can chmod +x to get the same effect if the clone is on a Unix filesystem. But how can I set (or remove) the executable bit on a file in a clone on a FAT filesystem?

    Read the article

  • Issue with RegConnectRegistry connecting to 64-bit machines

    - by RA
    I'm seeing a weird thing when connecting to the performance registry on 64-bit editions of Windows. The whole program stalls and callstacks become unreadable. After a long timeout, the connection attempt aborts and everything goes back to normal. The only solution is to make sure that only one thread at a time queries the remote registry, unless the remote machine is 32-bit Windows XP, 2003, or 2000; then you can use as many threads as you like. Does anyone have a technical explanation for why this might be happening? I've spent 2-3 days searching the web without coming up with anything.

    Here is a test program. Run it first with one thread (connecting to a 64-bit Windows), then remove the comment in _tmain and run it with 4 threads. Running it with one thread works as expected; running with 4 returns ERROR_BUSY (dwRet == 170) after stalling for a while. Remember to set a remote machine correctly in RegConnectRegistry before running the program.

        #define TOTALBYTES    8192
        #define BYTEINCREMENT 4096

        void PerfmonThread(void *pData)
        {
            DWORD BufferSize = TOTALBYTES;
            DWORD cbData;
            DWORD dwRet;
            PPERF_DATA_BLOCK PerfData = (PPERF_DATA_BLOCK) malloc( BufferSize );
            cbData = BufferSize;
            printf("\nRetrieving the data...");

            HKEY hKey;
            DWORD dwAccessRet = RegConnectRegistry(L"REMOTE_MACHINE", HKEY_PERFORMANCE_DATA, &hKey);
            dwRet = RegQueryValueEx( hKey, L"global", NULL, NULL, (LPBYTE) PerfData, &cbData );
            while( dwRet == ERROR_MORE_DATA )
            {
                // Get a buffer that is big enough.
                BufferSize += BYTEINCREMENT;
                PerfData = (PPERF_DATA_BLOCK) realloc( PerfData, BufferSize );
                cbData = BufferSize;
                printf(".");
                dwRet = RegQueryValueEx( hKey, L"global", NULL, NULL, (LPBYTE) PerfData, &cbData );
            }
            if( dwRet == ERROR_SUCCESS )
                printf("\n\nFinal buffer size is %d\n", BufferSize);
            else
                printf("\nRegQueryValueEx failed (%d)\n", dwRet);
            RegCloseKey(hKey);
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            _beginthread(PerfmonThread, 0, NULL);
            /*
            _beginthread(PerfmonThread, 0, NULL);
            _beginthread(PerfmonThread, 0, NULL);
            _beginthread(PerfmonThread, 0, NULL);
            */
            while(1)
            {
                Sleep(2000);
            }
        }
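
    For reference, a minimal sketch of the one-query-at-a-time workaround described above: guard the remote query with a process-wide Win32 critical section. The wrapper name and flow are hypothetical, not from the original program; only the Win32 calls themselves (InitializeCriticalSection, EnterCriticalSection, RegConnectRegistryW, RegQueryValueExW) are standard:

        #include <windows.h>

        /* Hypothetical wrapper: serialize all remote performance-registry
           queries so only one thread talks to the remote machine at a time. */
        static CRITICAL_SECTION g_regLock;  /* InitializeCriticalSection(&g_regLock) once at startup */

        LONG QueryRemotePerfData(LPCWSTR machine, LPBYTE buffer, LPDWORD cbData)
        {
            EnterCriticalSection(&g_regLock);   /* one remote query at a time */
            HKEY hKey;
            LONG ret = RegConnectRegistryW(machine, HKEY_PERFORMANCE_DATA, &hKey);
            if (ret == ERROR_SUCCESS)
            {
                ret = RegQueryValueExW(hKey, L"global", NULL, NULL, buffer, cbData);
                RegCloseKey(hKey);
            }
            LeaveCriticalSection(&g_regLock);
            return ret;
        }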

    Read the article

  • 0xDEADBEEF equivalent for 64-bit development?

    - by Peter Mortensen
    For C++ development for 32-bit systems (be it Linux, Mac OS or Windows, PowerPC or x86) I have initialised pointers that would otherwise be undefined (e.g. they cannot immediately get a proper value) like so:

        int *pInt = reinterpret_cast<int *>(0xDEADBEEF);

    (To save typing and stay DRY, the right-hand side would normally be a constant, e.g. BAD_PTR.) If pInt is dereferenced before it gets a proper value, it will crash immediately on most systems (instead of crashing much later when some memory is overwritten, or going into a very long loop). Of course the behavior is dependent on the underlying hardware (fetching a 4-byte integer from the odd address 0xDEADBEEF in a user process may be perfectly valid), but the crashing has been 100% reliable for all the systems I have developed for so far (Mac OS 68xxx, Mac OS PowerPC, Linux Red Hat Pentium, Windows GUI Pentium, Windows console Pentium). For instance, on PowerPC it is illegal (bus fault) to fetch a 4-byte integer from an odd address. What is a good value for this on 64-bit systems?
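
    One common convention (an observation about typical platforms, not a guarantee) is simply to repeat the 32-bit pattern. On x86-64 the resulting address is odd, unmapped in practice, and non-canonical (bits 63..48 do not replicate bit 47), so dereferencing it faults immediately. A sketch:

        #include <cstdint>

        // Poison value: 0xDEADBEEF repeated to fill 64 bits. Non-canonical
        // on x86-64, so any dereference traps at once.
        #if UINTPTR_MAX == 0xFFFFFFFFFFFFFFFFull
        static const std::uintptr_t BAD_PTR = 0xDEADBEEFDEADBEEFull;
        #else
        static const std::uintptr_t BAD_PTR = 0xDEADBEEFu;
        #endif

        int *pInt = reinterpret_cast<int *>(BAD_PTR);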

    Read the article

  • C++ iostream not setting eof bit even if gcount returns 0

    - by raph.amiard
    Hi, I'm developing an application under Windows, and I'm using fstreams to read and write a file. I'm writing with an fstream opened like this:

        fs.open(this->filename.c_str(), std::ios::in | std::ios::out | std::ios::binary);

    and writing with this command:

        fs.write(reinterpret_cast<char*>(&e.element), sizeof(T));

    closing the file after each write with fs.close(). I'm reading with an ifstream opened like this:

        is.open(filename, std::ios::in);

    and reading with this command:

        is.read(reinterpret_cast<char*>(&e.element), sizeof(T));

    The write is going fine. However, I read in a loop this way:

        while(!is.eof()) {
            is.read(reinterpret_cast<char*>(&e.element), sizeof(T));
        }

    and the program keeps reading even though the end of file should be reached. is.tellg() is 0, and gcount() is equal to 0 too, but the fail bit and eof bit are both OK. I'm going crazy over this, need some help...
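
    Two things stand out in the snippets above, offered here as a sketch of the usual idiom rather than a diagnosis (file name and record type are placeholders): the reader is opened without std::ios::binary even though the writer used it, and eof() is tested before the read instead of checking whether the read itself succeeded. The conventional loop tests the stream state after each read:

        #include <fstream>
        #include <iostream>

        struct Element { int element; };   // stand-in for the e.element in the question

        int main() {
            // Open in binary mode, matching the writer.
            std::ifstream is("records.bin", std::ios::in | std::ios::binary);

            Element e;
            // read() returns the stream, which converts to false once a full
            // record can no longer be extracted (EOF or error).
            while (is.read(reinterpret_cast<char*>(&e.element), sizeof e.element)) {
                std::cout << e.element << '\n';
            }
            return 0;
        }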

    Read the article

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following:

        test eax,1000h
        jne  0064F5EC
        or   edx,1000000h
        ...

    (repeated 13 times with minor differences in the constants). I wonder whether I can make it any faster. I would like to keep my code in C, but switching to assembly language is possible. Can I use some MMX/SSE instructions to process all the bits at once? Maybe I can use multiplication (by 0x11111111 or some other magic constant)? Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me? Any other ideas on how to make it faster? Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
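
    One branch-free possibility (a sketch of the classic "binary magic numbers" bit spread, not necessarily the fastest option on any given CPU): four shift-or-mask steps spread the low 16 bits to even positions, and running the masks in the opposite order inverts the transformation:

        #include <cstdint>
        #include <cstdio>

        // Spread the low 16 bits of x so that input bit i lands at output
        // bit 2*i (even positions), zeros in between. Covers the 13-bit case.
        static uint32_t spread_even(uint32_t x) {
            x &= 0xFFFF;                       // keep the low 16 bits
            x = (x | (x << 8)) & 0x00FF00FF;
            x = (x | (x << 4)) & 0x0F0F0F0F;
            x = (x | (x << 2)) & 0x33333333;
            x = (x | (x << 1)) & 0x55555555;
            return x;
        }

        // Inverse transformation: gather the even-position bits back together.
        static uint32_t gather_even(uint32_t x) {
            x &= 0x55555555;
            x = (x | (x >> 1)) & 0x33333333;
            x = (x | (x >> 2)) & 0x0F0F0F0F;
            x = (x | (x >> 4)) & 0x00FF00FF;
            x = (x | (x >> 8)) & 0x0000FFFF;
            return x;
        }

        int main() {
            uint32_t in  = 0x1FFF;             // all 13 input bits set
            uint32_t out = spread_even(in);    // 0x01555555: bits 0,2,...,24
            std::printf("%08x -> %08x -> %08x\n",
                        (unsigned)in, (unsigned)out, (unsigned)gather_even(out));
            return 0;
        }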

    Read the article

  • 64-bit COM (ActiveX) server

    - by Velja Radenkovic
    Hello, I have an ActiveX server EXE that was building and registering fine on a 32-bit OS. I wanted to make a 64-bit version of that EXE by upgrading the project to Visual Studio 2010 and changing the platform to x64, which apparently doesn't work. The application itself works, but I don't see it registered after running That.exe /RegServer. I would appreciate any usable advice on migrating ActiveX from 32-bit to x64. The code that processes the /RegServer param is below:

        if(lstrcmpi(lpszToken, _T("RegServer")) == 0)
        {
            _Module.UpdateRegistryFromResource(IDR_OUTDISKSARG, TRUE);
            nRet = _Module.RegisterServer(TRUE);
            bRun = false;
            break;
        }

    A 32-bit ActiveX is unusable for me, since I have to load it in an x64 .NET process.

    Read the article

  • app.config and 64-bit machines

    - by Dale Lutes
    I have an app that works fine on 32-bit systems but fails on XP 64-bit systems. I've tracked it down to the connection string defined in my app.config thus:

        <connectionStrings>
            <clear/>
            <add name="IFDSConnectionString"
                 connectionString="Data Source=fdsdata;Initial Catalog=IFDS; Trusted_Connection=true;Connect Timeout=0"
                 providerName="System.Data.SqlClient" />
        </connectionStrings>

    When I try to reference it in code, I find that the ConfigurationManager.ConnectionStrings collection only contains the LocalSqlServer connection string from the machine.config file, and not my custom string. Another oddity is that it works fine when I run the app out of Visual Studio; it is only when I run out of the release folder that the connection string does not get defined. The application's .exe.config file is there in the release folder along with the .exe file and is up to date.

    Read the article

  • Compiling zlib for 64-bit on Windows

    - by Allan Hollenberg
    I am currently working on a cross-platform game for Mac OS X and Windows, and I'm having some issues with the zlib library on 64-bit Windows. My game targets a 64-bit architecture and I am unable to get zlib to work along with it. When I compile zlib itself (through make all64 in the zlib source directory) it shows no issues, but when I try to use it I get an error saying:

        /usr/local/lib/libz.a(gzread.o):gzread.c:(.text+0x28e): undefined reference to `__errno'

    I have included errno.h before I include zlib.h in my project, but that doesn't seem to matter. I am compiling my app through the Cygwin64 terminal using the x86_64-w64-mingw32-g++ command, and I am linking directly against the lib64 version (if I remove that, it compiles correctly but crashes on running because it picks up an x86 lib).

    Read the article

  • Regular expression for bit strings with an even number of 1s

    - by equilibrium
    Let L = { w in (0+1)* | w has an even number of 1s }, i.e., L is the set of all bit strings with an even number of 1s. Which one of the regular expressions below represents L?

        A) (0*10*1)*
        B) 0*(10*10*)*
        C) 0*(10*1)*0*
        D) 0*1(10*1)*10*

    As I see it, option D can never be correct, because it does not match bit strings with zero 1s. But what about the other options? We only care about the number of 1s being even; the number of 0s doesn't matter. So which is the correct option, and why?

    Read the article

  • Dividing n-bit binary integers

    - by Julian
    I was wondering if anyone could help me create pseudocode for dividing n-bit binary integers. Here is what I'm thinking could work right now; could someone correct this if I'm wrong?

        divide(x, y):
            if x = 0: return (0, 0)            // (quotient, remainder)
            (q, r) = divide(floor(x/2), y)
            q = 2q, r = 2r
            if x is odd: r = r + 1
            if r >= y: r = r - y, q = q + 1
            return (q, r)

    Would you say that this general pseudocode algorithm accomplishes the intended task of dividing n-bit numbers, or am I missing something in my pseudocode before I start coding up something that's wrong?
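
    For comparison, a direct transcription of that pseudocode into C++ (a sketch for unsigned operands, assuming y > 0). The recursion peels off one bit of x per level, exactly like binary long division:

        #include <cstdint>
        #include <cstdio>
        #include <utility>

        // Recursive binary long division: returns (quotient, remainder).
        // Assumes y > 0; recursion depth is the bit length of x.
        std::pair<uint64_t, uint64_t> divide(uint64_t x, uint64_t y) {
            if (x == 0) return {0, 0};              // base case: 0 / y
            auto [q, r] = divide(x >> 1, y);        // divide floor(x/2) by y
            q <<= 1;                                // q = 2q
            r <<= 1;                                // r = 2r
            if (x & 1) r += 1;                      // bring down the low bit of x
            if (r >= y) { r -= y; q += 1; }         // restore the invariant r < y
            return {q, r};
        }

        int main() {
            auto [q, r] = divide(1000, 7);
            std::printf("1000 / 7 = %llu remainder %llu\n",
                        (unsigned long long)q, (unsigned long long)r);  // 142 remainder 6
            return 0;
        }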

    Read the article

  • How to combine two 32-bit integers into one 64-bit integer?

    - by Bei337
    I have a count register, which is made up of two 32-bit unsigned integers: one for the higher 32 bits of the value (the most significant word), and one for the lower 32 bits of the value (the least significant word). What is the best way in C to combine these two 32-bit unsigned integers and then display the result as one large number? Specifically:

        leastSignificantWord = 4294967295; // 2^32 - 1
        printf("Counter: %u%u", mostSignificantWord, leastSignificantWord);

    This would print fine. When the number is incremented to 4294967296, I have it so that leastSignificantWord wraps to 0 and mostSignificantWord (0 initially) is now 1. The whole counter should now read 4294967296, but right now it just reads 10, because I'm just concatenating the 1 from mostSignificantWord and the 0 from leastSignificantWord. How should I make it display 4294967296 instead of 10?
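
    A minimal sketch of the usual fix: widen the high word to 64 bits before shifting (shifting a 32-bit value left by 32 is undefined behavior), OR in the low word, and print through a 64-bit conversion:

        #include <cstdint>
        #include <cstdio>

        int main() {
            uint32_t mostSignificantWord  = 1;
            uint32_t leastSignificantWord = 0;

            // Widen first, then shift into the top half, then OR in the low half.
            uint64_t counter = ((uint64_t)mostSignificantWord << 32)
                             | leastSignificantWord;

            // %llu with a cast avoids needing the PRIu64 macro.
            std::printf("Counter: %llu\n", (unsigned long long)counter);  // 4294967296
            return 0;
        }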

    Read the article

  • Thread 0 crashed with X86 Thread State (32-bit) in Cocoa application

    - by John
    I am fixing crashes in an OS X application. The crash report shows:

        Date/Time:       2012-05-01 16:05:58.004 +0200
        OS Version:      Mac OS X 10.5.8 (9L31a)
        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x00000000545f5f00
        Crashed Thread:  8

        Thread 8 crashed with X86 Thread State (32-bit):
        eax: 0x140e0850  ebx: 0x00060fc8  ecx: 0x92df0ec0  edx: 0xc0000003
        edi: 0x545f5f00  esi: 0x140e0870  ebp: 0xb0445988  esp: 0xb0445964
         ss: 0x0000001f  efl: 0x00010206  eip: 0x92dca68c   cs: 0x00000017
         ds: 0x0000001f   es: 0x0000001f   fs: 0x0000001f   gs: 0x00000037
        cr2: 0x545f5f00

    How do I trace this report back to the application code? And what does "Thread 0 crashed with X86 Thread State (32-bit)" mean? If anybody knows, please help me. Thanks in advance.

    Read the article
