Search Results

Search found 19840 results on 794 pages for 'bit blit'.

  • SQL Server 2008 (64-bit)

    - by Grace09
    I have random transaction log backup failures. The SQL Server log says 'Memory constraints resulted in reduced backup/restore buffer sizes. Proceeding with 6 buffers of size 64KB.', and the SQL Server Agent error log has quite a few errors like 'Unable to start Job Manager thread for job xxx', '[298] SQLServer Error: 768, Client unable to establish connection [SQLSTATE 08001]', and '[298] SQLServer Error: 768, SSL Provider: Not enough memory is available to complete this request [SQLSTATE 08001]'. Am I really low on memory? The machine has 32 GB in total, but I set the maximum server memory to 20 GB. Task Manager shows 99% of physical memory in use. MEMORYCLERK_SQLBUFFERPOOL shows 32 GB of Virtual Memory Reserved and 20 GB of Virtual Memory Committed. In perfmon, SQLServer:Memory Manager\Total Server Memory shows 21 GB of memory in use, which is what I set the maximum to. I don't know where the rest of the memory goes. Can anyone advise? Thanks in advance.

    Read the article

  • pysqlite2: ProgrammingError - You must not use 8-bit bytestrings

    - by rfkrocktk
    I'm currently persisting filenames in an SQLite database for my own purposes. Whenever I try to insert a filename containing a special character (like é), it throws the following error: pysqlite2.dbapi2.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings. When I do "switch my application over to Unicode strings" by wrapping the value sent to pysqlite with the unicode built-in, like unicode(filename), it throws this error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 66: ordinal not in range(128). Is there something I can do to get rid of this? Modifying all of my files to conform isn't an option.

    Read the article

  • fastest calculation of largest prime factor of 512 bit number in python

    - by miraclesoul
    I am simulating my crypto scheme in Python, and I am a new user to it. p is a 512-bit number, and I need to calculate its largest prime factor. I am looking for two things: the fastest code to perform this large prime factorization, and code that can take a 512-bit number as input and handle it. I have seen implementations in other languages, but my whole code is in Python, and this is the last point where I am stuck. So let me know if there is an implementation in Python. Kindly explain simply, as I am a new user to Python.

    Read the article

  • NHibernate CreateSQLQuery data conversion from bit to boolean error

    - by RemotecUk
    Hi, I'm being a bit lazy in NHibernate and using Session.CreateSQLQuery(...) instead of doing the whole thing with lambdas. Anyway, what struck me is that there seems to be a problem converting some of the types returned from (in this case) the MySQL DB into native .NET types. The query in question looks like this:

        IList<Client> allocatableClients = Session.CreateSQLQuery(
            "select clients.id as Id, clients.name as Name, clients.customercode as CustomerCode, clients.superclient as SuperClient, clients.clienttypeid as ClientType " +
            ...
            ...
            .SetResultTransformer(new NHibernate.Transform.AliasToBeanResultTransformer(typeof(Client)))
            .List<Client>();

    The type of SuperClient in the database is bit(1), and in the Client object it is a bool. The error received is: System.ArgumentException: Object of type 'System.UInt64' cannot be converted to type 'System.Boolean'. It seems strange that this conversion cannot be completed. I would be grateful for any ideas. Thanks.

    Read the article

  • Cannot Debug Visual Basic 6 ActiveX Component On 64-bit OS

    - by Humanier
    Hi, the situation might sound a bit weird, but I have to play with what I have. There's a Win2003 64-bit server OS and a legacy application written with Visual Studio 6. The app consists of two parts: ActiveX components written in VB6, and C++ code which uses them. I need to debug the components' code, so I installed Visual Studio 6 on the server, and I'm able to step into the components' code. Then I hit the following situation: 1) the C++ code works until it needs to instantiate component A; 2) at this step we switch to VB6 and start debugging component A; 3) at the very beginning, component A creates an instance of a class C exposed by component B, and at this step the VB6 debugger shows an error message with the title "OLEDB32.DLL" and the following text: "Failed to load resource DLL C:\Program Files (x86)\Common Files\System\Ole DB\OLEDB32R.DLL". Additional information: the last step in the initialization of class C is opening an ADO connection to SQL Server using an OLEDB provider. I'd appreciate any ideas on how to resolve this problem. Thanks in advance.

    Read the article

  • Trying to compile VS2008 project on Win 64 bit which is custom Powershell PSSnapin

    - by Boris Kleynbok
    The library project compiles fine for Any CPU in VS2008 running on Win 7 64-bit. Now, in the post-build step, the following command fails when attempting to register the library DLL:

        PS C:\Windows\Microsoft.NET\Framework64\v2.0.50727> .\installutil C:\path\Project.dll
        Exception occurred while initializing the installation:
        System.BadImageFormatException: Could not load file or assembly 'file:///C:\path\Project.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    Do I need to compile the project as x64? I was under the impression that Any CPU would take care of it. Also, my library does have dependencies. Do they also need to be compiled as x64? Any help is appreciated.

    Read the article

  • ODBC datasource for DB2 on 64 bit Windows 2008

    - by Rob Vermeulen
    First of all, sorry for this non-programming question. I just finished development of some code that communicates with DB2, and I want to test/deploy it on a Windows 2008 machine. I'm a bit concerned about not being able to find a working ODBC data source (DSN/client) driver for DB2 on Windows 2008 (x64). I have a 32-bit driver for XP, but that one (obviously) won't install on 64-bit 2008. The IBM web site comes up with 1844 results when searching for "ODBC Windows 2008", but none of them are relevant. The web site is also a pain to use, by the way. While googling around I found some solutions from third-party vendors, but they all want money :) and the DB2 client and ODBC driver from IBM have always been free of charge. Does anyone have a solution?

    Read the article

  • Process Memory limit of 64-bit process

    - by prakash
    I currently have a 32-bit .NET application (on x86 Windows) which requires lots of memory, and recently it started throwing System.OutOfMemoryExceptions. I am therefore planning to move it to an x64 platform as a 64-bit process, hoping this will help with the out-of-memory exceptions. I was reading the MSDN article "Memory Limits for Windows". My question: if I compile a 64-bit .NET application, will it have IMAGE_FILE_LARGE_ADDRESS_AWARE set by default (as the article suggests)? That is, will I be able to take advantage of the 8 TB user-mode virtual address space?

    Read the article

  • Converting Bit Field to int

    - by shaharg
    I have a bit field declared this way:

        typedef struct morder {
            unsigned int targetRegister : 3;
            unsigned int targetMethodOfAddressing : 3;
            unsigned int originRegister : 3;
            unsigned int originMethodOfAddressing : 3;
            unsigned int oCode : 4;
        } bitset;

    I also have an int array, and I want to get an int value from this array that represents the actual value of this bit field (which is really a machine word that I have the parts of; I want the int representation of the whole word). Thanks a lot.
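
    A minimal sketch of one portable way to build such a word, assuming oCode occupies the most significant 4 bits and targetRegister the least significant 3 (the field order within the word is an assumption; bit-field memory layout is implementation-defined, so explicit shifts are safer than casting the struct's bytes):

        #include <cstdio>

        typedef struct morder {
            unsigned int targetRegister : 3;
            unsigned int targetMethodOfAddressing : 3;
            unsigned int originRegister : 3;
            unsigned int originMethodOfAddressing : 3;
            unsigned int oCode : 4;
        } bitset;

        // Pack the five fields into one 16-bit machine word, low bits first.
        unsigned int toWord(const bitset &b)
        {
            return  b.targetRegister
                 | (b.targetMethodOfAddressing << 3)
                 | (b.originRegister           << 6)
                 | (b.originMethodOfAddressing << 9)
                 | (b.oCode                    << 12);
        }

        int main()
        {
            bitset b = {5, 2, 7, 1, 9};
            std::printf("word = 0x%04X\n", toWord(b));  // prints word = 0x93D5
            return 0;
        }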

    Read the article

  • Setting Mercurial's execute bit on Windows

    - by Joe
    I work on a Mercurial repository that is checked out onto a Unix filesystem such as ext3 on some machines, and onto FAT32 on others. In Subversion, I can set the svn:executable property to control whether a file should be marked executable when checked out on a platform that supports such a bit, and I can do this regardless of the platform I'm running SVN on or the filesystem containing my working copy. In Mercurial, I can chmod +x to get the same effect if the clone is on a Unix filesystem. But how can I set (or remove) the executable bit on a file on a FAT filesystem?

    Read the article

  • Issue with RegConnectRegistry connecting to 64 bit machines

    - by RA
    I'm seeing a weird thing when connecting to the performance registry on 64-bit editions of Windows. The whole program stalls and call stacks become unreadable. After a long timeout, the connection attempt aborts and everything goes back to normal. The only solution is to make sure that only one thread at a time queries the remote registry; unless the remote machine is 32-bit Windows XP, 2003, or 2000, in which case you can use as many threads as you like. Does anyone have a technical explanation for why this might be happening? I've spent 2-3 days searching the web without coming up with anything. Here is a test program. Run it first with one thread (connecting to a 64-bit Windows), then remove the comment in _tmain and run it with 4 threads. Running it with one thread works as expected; running it with 4 returns ERROR_BUSY (dwRet == 170) after stalling for a while. Remember to set the remote machine correctly in RegConnectRegistry before running the program.

        #define TOTALBYTES 8192
        #define BYTEINCREMENT 4096

        void PerfmonThread(void *pData)
        {
            DWORD BufferSize = TOTALBYTES;
            DWORD cbData;
            DWORD dwRet;
            PPERF_DATA_BLOCK PerfData = (PPERF_DATA_BLOCK) malloc( BufferSize );
            cbData = BufferSize;
            printf("\nRetrieving the data...");
            HKEY hKey;
            DWORD dwAccessRet = RegConnectRegistry(L"REMOTE_MACHINE", HKEY_PERFORMANCE_DATA, &hKey);
            dwRet = RegQueryValueEx( hKey, L"global", NULL, NULL, (LPBYTE) PerfData, &cbData );
            while( dwRet == ERROR_MORE_DATA )
            {
                // Get a buffer that is big enough.
                BufferSize += BYTEINCREMENT;
                PerfData = (PPERF_DATA_BLOCK) realloc( PerfData, BufferSize );
                cbData = BufferSize;
                printf(".");
                dwRet = RegQueryValueEx( hKey, L"global", NULL, NULL, (LPBYTE) PerfData, &cbData );
            }
            if( dwRet == ERROR_SUCCESS )
                printf("\n\nFinal buffer size is %d\n", BufferSize);
            else
                printf("\nRegQueryValueEx failed (%d)\n", dwRet);
            RegCloseKey(hKey);
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            _beginthread(PerfmonThread, 0, NULL);
            /*
            _beginthread(PerfmonThread, 0, NULL);
            _beginthread(PerfmonThread, 0, NULL);
            _beginthread(PerfmonThread, 0, NULL);
            */
            while(1)
            {
                Sleep(2000);
            }
        }
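
    A minimal sketch of the per-process serialization workaround described above, assuming it is acceptable to let only one thread at a time touch the remote performance registry (this works around the symptom rather than explaining it; g_regLock and QueryRemotePerfData are hypothetical names):

        #include <windows.h>

        static CRITICAL_SECTION g_regLock;  // InitializeCriticalSection(&g_regLock) must run once at startup

        void QueryRemotePerfData(const wchar_t *machine)
        {
            EnterCriticalSection(&g_regLock);   // one thread at a time past this point
            HKEY hKey;
            if (RegConnectRegistryW(machine, HKEY_PERFORMANCE_DATA, &hKey) == ERROR_SUCCESS)
            {
                // ... the RegQueryValueEx/ERROR_MORE_DATA loop from the test program ...
                RegCloseKey(hKey);
            }
            LeaveCriticalSection(&g_regLock);
        }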

    Read the article

  • 0xDEADBEEF equivalent for 64-bit development?

    - by Peter Mortensen
    For C++ development on 32-bit systems (be it Linux, Mac OS or Windows, PowerPC or x86), I have initialised pointers that would otherwise be undefined (e.g. they cannot immediately get a proper value) like so:

        int *pInt = reinterpret_cast<int *>(0xDEADBEEF);

    (To save typing and stay DRY, the right-hand side would normally be in a constant, e.g. BAD_PTR.) If pInt is dereferenced before it gets a proper value, it will crash immediately on most systems (instead of crashing much later when some memory is overwritten, or going into a very long loop). Of course the behaviour depends on the underlying hardware (fetching a 4-byte integer from the odd address 0xDEADBEEF in a user process may be perfectly valid), but the crashing has been 100% reliable on all the systems I have developed for so far (Mac OS 68xxx, Mac OS PowerPC, Linux Red Hat Pentium, Windows GUI Pentium, Windows console Pentium). For instance, on PowerPC it is illegal (bus fault) to fetch a 4-byte integer from an odd address. What is a good value for this on 64-bit systems?
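
    A sketch of one possible 64-bit poison value, assuming x86-64: current implementations use 48-bit virtual addresses, so any pointer whose top 16 bits do not sign-extend bit 47 is non-canonical and faults on dereference, and 0xDEADBEEFDEADBEEF has that property (the constant name below is hypothetical):

        #include <cstdint>

        const std::uintptr_t BAD_PTR = sizeof(void *) == 8
            ? static_cast<std::uintptr_t>(0xDEADBEEFDEADBEEFull)  // non-canonical on x86-64
            : static_cast<std::uintptr_t>(0xDEADBEEFul);          // odd, high address on 32-bit

        int *pInt = reinterpret_cast<int *>(BAD_PTR);
        // On x86-64, dereferencing pInt traps immediately with a
        // general-protection fault instead of corrupting memory later.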

    Read the article

  • C++ iostream not setting eof bit even if gcount returns 0

    - by raph.amiard
    Hi, I'm developing an application under Windows, and I'm using fstreams to read and write to the file. I'm writing with an fstream opened like this:

        fs.open(this->filename.c_str(), std::ios::in | std::ios::out | std::ios::binary);

    and writing with this command:

        fs.write(reinterpret_cast<char*>(&e.element), sizeof(T));

    closing the file after each write with fs.close(). I'm reading with an ifstream opened like this:

        is.open(filename, std::ios::in);

    and reading with this command:

        is.read(reinterpret_cast<char*>(&e.element), sizeof(T));

    The write is going fine. However, I read in a loop this way:

        while (!is.eof())
        {
            is.read(reinterpret_cast<char*>(&e.element), sizeof(T));
        }

    and the program keeps reading even though the end of the file should have been reached. The tellg() position is 0, and gcount() is 0 too, yet neither the fail bit nor the eof bit is set. I'm going crazy over this and need some help...
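
    A minimal sketch of the usual fix, assuming fixed-size binary records: open the reader with std::ios::binary too (the writer used it, the reader didn't), and test the result of read() itself instead of checking eof() before reading:

        #include <fstream>

        // Sketch: read fixed-size records until the stream runs out.
        template <typename T>
        void readAll(const char *filename)
        {
            std::ifstream is(filename, std::ios::in | std::ios::binary);
            T element;  // stands in for the asker's e.element
            // read() returns the stream, which converts to false once a full
            // record can no longer be extracted, so the body runs only for
            // complete, successful reads.
            while (is.read(reinterpret_cast<char *>(&element), sizeof(T)))
            {
                // ... process element ...
            }
        }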

    Read the article

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following, repeated 13 times with minor differences in the constants:

        test eax,1000h
        jne  0064F5EC
        or   edx,1000000h

    I wonder whether I can make it any faster. I would like to keep the code in C, but switching to assembly language is possible. Can I use some MMX/SSE instructions to process all the bits at once? Maybe I can use multiplication (by 0x11111111 or some other magical constant)? Would it be better to use a condition-set instruction (SETcc) instead of a conditional jump? If so, how can I make the compiler produce such code for me? Any other ideas on how to make this faster? Any idea how to do the inverse bitmap transformation (I have to implement that too, but it's less critical)?
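
    A sketch of the branch-free mask-and-shift ("binary magic numbers") approach, assuming the 13 input bits sit in the low end of a 32-bit word; each step halves the group size and doubles the gap, and the inverse simply runs the steps backwards:

        #include <cstdint>
        #include <cstdio>

        // Spread the low 13 bits so that input bit i lands at output bit 2*i.
        std::uint32_t spread13(std::uint32_t x)
        {
            x &= 0x1FFF;                        // keep the 13 input bits
            x = (x | (x << 8)) & 0x00FF00FF;    // split into byte-sized groups
            x = (x | (x << 4)) & 0x0F0F0F0F;    // bytes -> nibbles
            x = (x | (x << 2)) & 0x33333333;    // nibbles -> bit pairs
            x = (x | (x << 1)) & 0x55555555;    // one zero between every bit
            return x;
        }

        // Inverse transformation: gather every second bit back together.
        std::uint32_t gather13(std::uint32_t x)
        {
            x &= 0x55555555;
            x = (x | (x >> 1)) & 0x33333333;
            x = (x | (x >> 2)) & 0x0F0F0F0F;
            x = (x | (x >> 4)) & 0x00FF00FF;
            x = (x | (x >> 8)) & 0x0000FFFF;
            return x;
        }

        int main()
        {
            std::uint32_t in = 0x1FFF;          // all 13 bits set
            std::printf("%08X -> %08X -> %08X\n",
                        in, spread13(in), gather13(spread13(in)));
            return 0;                           // 00001FFF -> 01555555 -> 00001FFF
        }

    Five shift-and-mask steps replace the 13 test-and-branch pairs, with no conditional jumps for the branch predictor to miss.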

    Read the article

  • 64 bit COM(ActiveX) server

    - by Velja Radenkovic
    Hello, I have an ActiveX server EXE that was building and registering fine on a 32-bit OS. I wanted to make a 64-bit version of that EXE by upgrading the project to Visual Studio 2010 and changing the platform to x64, which apparently doesn't work. The application itself works, but I don't see it registered after running That.exe /RegServer. I would appreciate any usable advice on migrating the ActiveX server from 32-bit to x64. The code that processes the /RegServer param is below:

        if (lstrcmpi(lpszToken, _T("RegServer")) == 0)
        {
            _Module.UpdateRegistryFromResource(IDR_OUTDISKSARG, TRUE);
            nRet = _Module.RegisterServer(TRUE);
            bRun = false;
            break;
        }

    The 32-bit ActiveX server is unusable for me, since I have to load it in an x64 .NET process.

    Read the article

  • app.config and 64-bit machines

    - by Dale Lutes
    I have an app that works fine on 32-bit systems but fails on XP 64-bit systems. I've tracked it down to the connection string defined in my app.config, thus:

        <connectionStrings>
            <clear/>
            <add name="IFDSConnectionString"
                 connectionString="Data Source=fdsdata;Initial Catalog=IFDS; Trusted_Connection=true;Connect Timeout=0"
                 providerName="System.Data.SqlClient" />
        </connectionStrings>

    When I try to reference it in code, I find that the ConfigurationManager.ConnectionStrings collection only contains the LocalSqlServer connection string from the machine.config file, and not my custom string. Another oddity is that it works fine when I run the app from Visual Studio; it is only when I run from the release folder that the connection string is not defined. The application's .exe.config file is there in the release folder along with the .exe file, and it is up to date.

    Read the article

  • Compiling zlib for 64 bit on windows

    - by Allan Hollenberg
    I am currently working on a cross-platform game for Mac OS X and Windows, and I'm having some issues with the zlib library on Windows 64-bit. My game targets a 64-bit architecture, and I am unable to get zlib to work with it. When I compile zlib itself (with make all64 in the zlib source directory) it shows no issues, but when I try to use it I get an error saying:

        /usr/local/lib/libz.a(gzread.o):gzread.c:(.text+0x28e): undefined reference to `__errno'

    I have included errno.h before I include zlib.h in my project, but that doesn't seem to matter. I am compiling my app from the cygwin64 terminal using the x86_64-w64-mingw32-g++ command, and I am linking directly against the lib64 version (if I remove that, it compiles correctly but crashes when run, due to having an x86 lib).

    Read the article
