Search Results

Search found 949 results on 38 pages for '32bit'.


  • Solaris SPARC 10 32bit mode

    - by TM.
    I'm looking for a definitive answer: does Solaris 10 running on a SPARC machine support booting into 32bit mode? I've found one site stating that Solaris 8 was the last version that supported booting into 32bit mode on SPARC. I've read multiple articles that explain how to boot Solaris into 32bit mode, but they don't list the Solaris version. We've tried all the methods described, but the system keeps booting into 64bit mode.

    Read the article

  • 32bit 64bit referenced library

    - by bleevo
    Hi, I am developing an application that references two DLLs: one is a 32bit version, the other a 64bit version. The client is 32bit and the server is 64bit. My question is: is there a way to use the 32bit DLL when doing Debug/Release builds and the 64bit DLL when I perform a publish? I realize I can solve this problem using NAnt or MSBuild, but I was wondering if I can do any of this in Visual Studio. UPDATE: All my code will run on either 32bit or 64bit, but I am using a library that ships both a 32bit and a 64bit build. The 32bit one won't work on the server, and the 64bit one won't work on the dev machine.

    Read the article

  • Crash when attempting to install 32bit delphi service on 2008 r2

    - by Oded
    I have an old 32bit Delphi application (with no source code) that is used as a Windows service. It runs fine on Windows 2003 32bit. I do not know whether it was created as a service originally or converted to one later on. It is supposed to be installed on the server using an /install flag on the command line. When attempting to install it on a Windows 2008 R2 virtual machine, I get an APPCRASH event in the event log. The service is supposed to read a blob from a remote SQL Server instance and write it out to the local HD. It also reads some initialization data from the registry. Is there any way I can install this application as a service on Windows 2008 R2 64bit? If not, are there any workarounds I can try? What are your suggestions?

    Read the article

  • 32bit vs 64bit guest VM and RAM usage

    - by sims
    Why does a 32bit domU (Xen guest VM) use less RAM than a 64bit one? Notes: it is the same software compiled for a different arch (AMD64 vs. 686), so this is obviously Linux or BSD or something easily ported. Maybe this is also a good one for SO. I've read that this is so; I can guess why, but I'd like to hear everyone's comments.
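
    A tiny illustration of the usual explanation (a sketch only, assuming the common ILP32 vs. LP64 data models): on the 64bit build every pointer and long doubles in width, so pointer-heavy structures, and therefore the same workload, simply occupy more RAM.

        #include <stdio.h>

        struct node {
            struct node *next;
            struct node *prev;
            long key;
        };

        int main(void) {
            /* Prints 12 on a typical 32bit (ILP32) build and 24 on a 64bit
               (LP64) build: the same structure costs roughly twice the RAM
               per node once pointers and longs become 8 bytes wide. */
            printf("sizeof(struct node) = %zu\n", sizeof(struct node));
            return 0;
        }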

    Read the article

  • Compile 32bit mercurial on x86_64

    - by krashalot
    I'm using the academic version of EPD (Enthought Python Distribution), which is 32bit. My computer is Linux x86_64; platform.architecture() returns ('32bit', 'ELF'). I want to install Mercurial. The instructions in the README didn't work at first, because make gave this error: "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." I commented out that line in pyport.h and then it compiled fine. Now, after successful compilation, I get this error when running it: ImportError: /scratch/epd/lib/python2.6/site-packages/mercurial/osutil.so: wrong ELF class: ELFCLASS64. It appears that I compiled a 64bit version of hg, and it won't run with my 32bit Python. I don't see any arch flags in the Mercurial makefile. How can I force it to compile in 32bit mode?

    Read the article

  • forcing stack w/i 32bit when -m64 -mcmodel=small

    - by chaosless
    I have C sources that must compile in both 32bit and 64bit for multiple platforms. A structure takes the address of a buffer, and that address needs to fit in a 32bit value. Obviously, where possible these structures will use naturally sized void * or char * pointers; however, for some parts an API specifies the size of these pointers as 32bit. On x86_64 Linux with -m64 -mcmodel=small, both static data and malloc()'d data fit within the 2GB range. Data on the stack, however, still starts in high memory. So given a small utility _to_32() such as:

        int _to_32( long l ) { int i = l & 0xffffffff; assert( i == l ); return i; }

    then:

        char *cp = malloc( 100 );
        int a = _to_32( (long)cp );

    will work reliably, as would:

        static char buff[ 100 ];
        int a = _to_32( (long)buff );

    but:

        char buff[ 100 ];
        int a = _to_32( (long)buff );

    will fail the assert(). Does anyone have a solution for this without writing custom linker scripts? Or any ideas on how to arrange the linker section for stack data? It would appear it is being put in this section in the linker script:

        .lbss : { *(.dynlbss) *(.lbss .lbss.* .gnu.linkonce.lb.*) *(LARGE_COMMON) }

    Thanks!
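
    One possible workaround (a sketch only; the helper name and the copy-back step are assumptions, and it presumes the buffer contents may live on the heap for the duration of the call) is to copy the stack buffer into malloc()'d storage, which -m64 -mcmodel=small keeps below 2GB, and hand that address to the 32bit API field instead:

        #include <assert.h>
        #include <stdlib.h>
        #include <string.h>

        /* Hypothetical helper: returns a heap copy of a stack buffer so that
           its address fits in 32 bits under -m64 -mcmodel=small. The caller
           owns the copy and must free() it once the API call is done. */
        static void *to_low_copy( const void *stack_buf, size_t len )
        {
            void *low = malloc( len );
            assert( low != NULL );
            memcpy( low, stack_buf, len );
            return low;
        }

        /* usage:
             char buff[ 100 ];
             void *low = to_low_copy( buff, sizeof( buff ) );
             int a = _to_32( (long)low );
             ... make the API call, copy results back into buff if needed ...
             free( low );
        */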

    Read the article

  • how to find if the machine is 32bit or 64bit

    - by prabodh
    Is there any way from a C program to find out whether the OS is currently running in 32bit or 64bit mode? I am using a simple program like the one below:

        #include <stdio.h>

        int main(void){
            switch(sizeof(void*)){
                case 4: printf("32\n"); break;
                case 8: printf("64\n"); break;
            }
            return 0;
        }

    Is this a correct approach? Would this code work in all scenarios? For example, if the hardware is 64bit and the OS is 32bit, what would it return? I don't have machines to test this in different configurations. Thanks for the advice.
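
    Note that sizeof(void*) reports the pointer width the program itself was compiled for, not the bitness of the OS or hardware: a 32bit binary prints 32 even on a 64bit kernel. A sketch of one way to ask the kernel instead (POSIX-only, assuming uname() is available on the target):

        #include <stdio.h>
        #include <sys/utsname.h>

        int main(void){
            struct utsname un;
            if (uname(&un) == 0) {
                /* "x86_64" or "aarch64" indicates a 64bit kernel;
                   "i686" or "armv7l" indicates a 32bit one. */
                printf("kernel machine: %s\n", un.machine);
            }
            return 0;
        }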

    Read the article

  • 32bit to 64bit inline assembly porting

    - by Simone Margaritelli
    I have a piece of C++ code (compiled with g++ under a GNU/Linux environment) that loads a function pointer (how it does that doesn't matter), pushes some arguments onto the stack with inline assembly and then calls that function. The code looks like this:

        unsigned long stack[] = { 1, 23, 33, 43 };

        /* save all the registers and the stack pointer */
        unsigned long esp;
        asm __volatile__ ( "pusha" );
        asm __volatile__ ( "mov %%esp, %0" : "=m" (esp) );

        for( i = 0; i < sizeof(stack) / sizeof(stack[0]); i++ ){
            unsigned long val = stack[i];
            asm __volatile__ ( "push %0" :: "m"(val) );
        }

        unsigned long ret = function_pointer();

        /* restore registers and stack pointer */
        asm __volatile__ ( "mov %0, %%esp" :: "m" (esp) );
        asm __volatile__ ( "popa" );

    I'd like to add some sort of:

        #ifdef _LP64
        // 64bit inline assembly
        #else
        // 32bit version as above
        #endif

    but I don't know inline assembly for 64bit machines. Could anyone help me? Thanks
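
    A straight translation is awkward because pusha/popa do not exist in 64bit mode and the System V AMD64 ABI passes the first six integer arguments in rdi, rsi, rdx, rcx, r8 and r9, so values pushed onto the stack by hand never reach the callee the way they do on i386. A portable alternative sketch (assuming the callee takes exactly four integer-sized arguments, which is an assumption, not something stated in the question):

        /* Cast the raw pointer to a typed prototype and let the compiler
           marshal the arguments; on x86-64 it will place them in registers
           per the System V AMD64 ABI, on i386 it will push them. */
        typedef unsigned long (*fn4_t)( unsigned long, unsigned long,
                                        unsigned long, unsigned long );

        unsigned long ret = ((fn4_t) function_pointer)( stack[0], stack[1],
                                                        stack[2], stack[3] );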

    Read the article

  • MapViewOfFile shared between 32bit and 64bit processes

    - by MK
    Hi, I'm trying to use MapViewOfFile in a 64bit process on a file that is already mapped into the memory of another 32bit process. It fails with an "access denied" error. Is this a known Windows limitation, or am I doing something wrong? The same code works fine with two 32bit processes. The code looks roughly like this:

        hMapFile = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, szShmName);
        if (NULL == hMapFile) {
            /* failed to open - create new (this happens in the 32 bit app) */
            SECURITY_ATTRIBUTES sa;
            sa.nLength = sizeof(SECURITY_ATTRIBUTES);
            sa.bInheritHandle = FALSE;
            /* give access to members of administrators group */
            BOOL success = ConvertStringSecurityDescriptorToSecurityDescriptor(
                "D:(A;OICI;GA;;;BA)", SDDL_REVISION_1,
                &(sa.lpSecurityDescriptor), NULL);
            HANDLE hShmFile = CreateFile(FILE_FAXCOM_SHM, FILE_ALL_ACCESS, 0,
                                         &sa, OPEN_ALWAYS, 0, NULL);
            hMapFile = CreateFileMapping(hShmFile, &sa, PAGE_READWRITE, 0,
                                         SHM_SIZE, szShmName);
            CloseHandle(hShmFile);
        }
        // this one fails in the 64 bit app
        pShm = MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, SHM_SIZE);
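
    For what it's worth, Windows does support sharing a named section between 32bit and 64bit processes, so the bitness mismatch by itself should not produce access denied. The SDDL string above grants GENERIC_ALL to the Administrators group only, so one thing worth ruling out is whether the 64bit process actually runs with an elevated administrator token. A small diagnostic sketch (the helper name is hypothetical; name and size are assumed to match szShmName and SHM_SIZE above) that retries with narrower rights and prints the exact error code:

        #include <windows.h>
        #include <stdio.h>

        /* Open the existing mapping with only read/write rights (narrower
           than FILE_MAP_ALL_ACCESS) and report the exact failure code. */
        void *try_open_view(const char *name, size_t size)
        {
            HANDLE hMap = OpenFileMappingA(FILE_MAP_READ | FILE_MAP_WRITE,
                                           FALSE, name);
            if (hMap == NULL) {
                printf("OpenFileMapping failed: %lu\n", GetLastError());
                return NULL;
            }
            void *pShm = MapViewOfFile(hMap, FILE_MAP_READ | FILE_MAP_WRITE,
                                       0, 0, size);
            if (pShm == NULL)
                printf("MapViewOfFile failed: %lu\n", GetLastError());
            return pShm;
        }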

    Read the article

  • Can't register 32bit dll under 64bit windows

    - by Wodzu
    Hi guys. I try to create a COM object from my JS script like this:

        function main() {
            var MyApplication = new ActiveXObject("Base.Application");
        }
        main();

    I am getting the error "Automation server can't create object". This error occurs on Windows 2003 64bit. The DLL is 32bit and it works fine on 32bit systems. I've tried both versions of Regsvr32.exe on the 64bit system, and both told me that the DLL registered successfully. Unfortunately, the error message does not tell me why it cannot create the object. The reason is unknown; it might be that it can't create the object because it is still not registered, or it might be something totally different... I've also added full permissions to this DLL. I don't know what else I can do. Do you have any ideas?

    Read the article

  • Is there a performance advantage in using a 64bit version of openCV+Emgu instead of 32bit?

    - by Jelly Amma
    Hello, I am developing an application that processes images captured in real time by a Point Grey camera (http://www.ptgrey.com/). The Point Grey SDK is a .NET wrapper and can be either 32bit or 64bit. To process the captured images, I'm using a wrapper for openCV called Emgu CV (http://www.emgu.com/) that comes in both 32bit and 64bit flavors as well. Now, being on Vista64, I went for the 64bit versions of FlyCapture (Point Grey's SDK) and Emgu CV (which includes openCV in its install), hoping to maximize performance. Recently I've been wanting to call my FlyCapture+Emgu DLL code from XNA, which unfortunately only exists in 32bit, and I realize that I may have to reinstall all those components in 32bit, as I don't really want to go through IPC, remoting, etc. Apart from the obvious limit to memory space inherent to 32bit, is there also a performance loss I should be expecting? How dramatic would that be, and why? Thanks in advance for any advice or explanation.

    Read the article

  • Build 32-bit with 64-bit llvm-gcc

    - by Jay Conrod
    I have a 64-bit version of llvm-gcc, but I want to be able to build both 32-bit and 64-bit binaries. Is there a flag for this? I tried passing -m32 (which works on the regular gcc), but I get an error message like this:

        [jay@andesite]$ llvm-gcc -m32 test.c -o test
        Warning: Generation of 64-bit code for a 32-bit processor requested.
        Warning: 64-bit processors all have at least SSE2.
        /tmp/cchzYo9t.s: Assembler messages:
        /tmp/cchzYo9t.s:8: Error: bad register name `%rbp'
        /tmp/cchzYo9t.s:9: Error: bad register name `%rsp'
        ...

    This is backwards; I want to generate 32-bit code for a 64-bit processor! I'm running llvm-gcc 4.2, the one that comes with Ubuntu 9.04 x86-64. EDIT: Here is the relevant part of the output when I run llvm-gcc with the -v flag:

        [jay@andesite]$ llvm-gcc -v -m32 test.c -o test.bc
        Using built-in specs.
        Target: x86_64-linux-gnu
        Configured with: ../llvm-gcc4.2-2.2.source/configure --host=x86_64-linux-gnu --build=x86_64-linux-gnu --prefix=/usr/lib/llvm/gcc-4.2 --enable-languages=c,c++ --program-prefix=llvm- --enable-llvm=/usr/lib/llvm --enable-threads --disable-nls --disable-shared --disable-multilib --disable-bootstrap
        Thread model: posix
        gcc version 4.2.1 (Based on Apple Inc. build 5546) (LLVM build)
        /usr/lib/llvm/gcc-4.2/libexec/gcc/x86_64-linux-gnu/4.2.1/cc1 -quiet -v -imultilib . test.c -quiet -dumpbase test.c -m32 -mtune=generic -auxbase test -version -o /tmp/ccw6TZY6.s

    I looked in /usr/lib/llvm/gcc-4.2/libexec/gcc hoping to find another binary, but the only directory there is x86_64-linux-gnu. I will probably look at compiling llvm-gcc from source with appropriate options next.

    Read the article

  • FOR BOUNTY: "QFontEngine(Win) GetTextMetrics failed ()" error on 64-bit Windows

    - by David Murdoch
    I'll add a large bounty to this when Stack Overflow lets me. I'm using wkhtmltopdf to convert HTML web pages to PDFs. This works perfectly on my 32-bit dev server [unfortunately, I can't ship my machine :-p ]. However, when I deploy to the web application's 64-bit server, the following errors are displayed:

        C:\>wkhtmltopdf http://www.google.com google.pdf
        Loading pages (1/5)
        QFontEngine::loadEngine: GetTextMetrics failed ()      ] 10%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()      ] 36%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        // ...etc....

    ...and the PDF is created and saved... just WITHOUT text. All form fields, images, borders, tables, divs, spans, ps, etc. are rendered accurately... just void of any text at all. Server information:

        Windows edition: Windows Server Standard Service Pack 2
        Processor: Intel Xeon E5410 @ 2.33GHz
        Memory: 8.00 GB
        System type: 64-bit Operating System

    Can anyone give me a clue as to what is happening and how I can fix this? Also, I wasn't sure what to tag/title this question with... so if you can think of better tags/title, comment them or edit the question. :-)

    Read the article

  • What is the difference between a 32-bit and 64-bit processor?

    - by JJG
    I have been trying to read up on 32-bit and 64-bit processors (http://en.wikipedia.org/wiki/32-bit_processing). My understanding is that a 32-bit processor (like x86) has registers 32 bits wide. I'm not sure what that means. Does it have special "memory spaces" that can store integer values up to 2^32? I don't want to sound stupid, but I have no idea about processors. I'm assuming 64-bit is, in general, better than 32-bit, although my computer now (one year old, Win 7, Intel Atom) has a 32-bit processor.
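
    For a concrete sense of what the register width means (a standalone sketch, not tied to any particular machine): a 32-bit register holds values from 0 to 2^32 - 1, roughly 4.29 billion, which is also why a 32-bit pointer can directly address at most 4 GiB of memory.

        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            /* Pointer width tells you which flavor this program was built
               for; the largest value a 32-bit register can hold bounds how
               much memory a 32-bit pointer can address. */
            printf("pointer width       : %zu bits\n", 8 * sizeof(void *));
            printf("largest 32-bit value: %lu\n", (unsigned long)UINT32_MAX);
            return 0;
        }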

    Read the article

  • Migration a database from 32bit to 64bit

    - by Mike Dietrich
    Database migrations from a 32bit environment to a 64bit environment keeping the same platform architecture (e.g. moving an Oracle 10.2.0.5 database from MS Windows XP 32bit to MS Windows Server 2003 64bit) do not happen that often anymore. But we still see them getting done, and there are a few things to note when doing such a move. First of all, the important question is: will you upgrade your database as part of this move, yes or no? If you say "Yes", then you are almost done with this topic, as we will take care of the bitness move during the upgrade. The only thing you have to take care of is OLAP, in case you are using the OLAP Option with Analytic Workspaces (AWs). Those store data in binary LOBs, and in order to move AWs from 32bit to 64bit you have to export your AWs prior to the move and import them later on. People who don't use OLAP don't have to take care of this. But if you say "No" (meaning: no upgrade actions involved, you keep your database version), then you have to make sure to invalidate all packages and stored code in the database before you shut down your database in the 32bit environment and prior to moving it over. The same OLAP rule as above applies once you use the OLAP Option. In the source environment:

        startup upgrade;    -- [or startup migrate; for Oracle 9i]
        @?/rdbms/admin/utlirp.sql
        shutdown immediate

    In the destination environment:

        startup upgrade
        @?/olap/admin/xumuts.plb   -- only if the OLAP Option is installed
        @?/rdbms/admin/utlrp.sql

    The script utlirp.sql will invalidate all packages and stored code, utlrp.sql will recompile them, and xumuts.plb will rebuild the OLAP Analytic Workspaces in case you have the OLAP Option installed.

    Read the article

  • Why aren't my old DLL's running with my app pool in 32bit mode?

    - by brokkalen
    I am moving my websites from a Server 2003 x86 environment to Server 2008 x64. The 2008 server is using IIS 7.5, and the app pool I am using is configured for 32bit mode. I get the error "Server object error 'ASP 0177 : 800401f3' Server.CreateObject failed." I believe the problem is in the DLLs that all the ASP sites point to. My programmers, as usual, say it isn't the code or the DLLs. Am I missing something to make these old DLLs work? By the way, these sites are connecting to a SQL 2000 database.

    Read the article

  • Using Visual Studio 2005 (32bit) on a Windows 7 64bit machine.

    - by Krakkos
    I need to use Visual Studio 2005 (C++) on my new laptop, a Sony Vaio with Windows 7 64bit. I don't need to develop for a 64bit environment; my work is all 32bit. So how can I be sure that I can still develop/debug/test for a 32bit target environment using VS2005 on a 64bit machine? What's the best option:

    1) Just install VS2005 on Windows 7 64bit and carry on (I suspect problems with 64bit runtime libs?).
    2) Dual-boot the laptop with Windows XP 32bit.
    3) Run some kind of virtual machine with Windows XP in it (I don't have a VM yet, but would look into it).

    Thanks

    Read the article

  • PlayOnLinux Unable to find 32bit opengl libraries Dual ATI Videocards

    - by Rodolfo Pires
    I'm currently running Ubuntu 12.04 LTS 64bit. I installed League of Legends fine the first time with the open-source ATI drivers provided by Ubuntu itself, with no issues at all, but it runs very slowly (20fps at most) because those drivers don't fully support my dual graphics cards. So I restored the system and installed the Linux version of the proprietary ATI drivers from the AMD website, which do support my APU (an AMD A8-4500M with AMD Radeon 7640G + 7670M graphics), giving me full performance from the system. The problem is that to run League of Legends I need a 32bit OpenGL library, and the driver automatically detects a 64bit Linux install and loads the 64bit libraries but not the 32bit ones. I need some kind of command to force the 32bit libraries to load, or to make League of Legends run on the 64bit ones. I'm fairly new to Ubuntu; I installed the 32bit libraries through the terminal and it still doesn't work, maybe because the driver doesn't want to load them. Please help me with this; I don't want to go back to Windows just to play League. I don't know what other details to post here, so please tell me what you need.

    Read the article

  • Can XP 32bit use the Windows 7 64bit upgrade?

    - by user295734
    I purchased a new computer (actually, the parts to build one). I'm unsure which Windows 7 edition to purchase. I need at least Professional, which seems to be around $200 for the upgrade and $300 for the full version. If I put the old XP on my new machine, will I be able to use the 64bit Windows 7 Pro upgrade instead of having to buy the full version? Will it still do a clean install?

    Read the article

  • 64bit server running in 32bit mode using incorrect locale

    - by Derek Ekins
    I have a 64bit server that we are running an application on in 32bit mode. For whatever reason, the locale of the 32bit process comes through as en-US when the server is set to en-GB. I am guessing the reason is that the 64bit and WOW64 sides are not sharing the locale settings. So my question is: how do you set the locale for a 32bit process? This is Windows 2003. The application is an ASP.NET app running under IIS in 32bit mode. This setup is definitely not my choice.

    Read the article

  • 32bit Application Memory Usage on 64bit Windows 7

    - by Brian
    I have an early 2012 MacBook Pro with an Intel i7 processor and 16 GB of RAM, running Windows 7 Professional 64bit via Boot Camp. I work in Geographical Information Systems as a programmer, so most of the applications I am running are 32bit applications, but they tend to use a lot of resources (i.e. ArcGIS, SQL Server Express, Visual Studio, etc.). I have noticed that when I have multiple instances of either the same 32bit application or different 32bit applications, and they are all working on hefty processing tasks, I am still only topping out at about 30% memory use. I understand 32bit applications are limited to less than 4GB of RAM, but I assumed that one instance could use its own 4GB while another instance could use another 4GB, taking full advantage of all the memory I have installed. Can anyone explain how this works and how I can get my applications to take advantage of all my memory by running multiple instances?
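
    A rough way to see that the limit is per process rather than per machine (a sketch only; the exact ceiling depends on the OS and on whether the EXE is linked with /LARGEADDRESSAWARE) is to run several instances of a small 32bit test program and let each grab as much memory as it can:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            /* Compile as 32bit. Keep allocating and touching 64 MiB blocks
               until this process's own virtual address space runs out; each
               instance hits its own roughly 2-4 GiB ceiling independently,
               so it takes several instances running at once to actually use
               16 GB of physical RAM. Blocks are deliberately not freed; the
               process exits right after. */
            const size_t chunk = 64u * 1024 * 1024;
            size_t total_mib = 0;
            for (;;) {
                void *p = malloc(chunk);
                if (p == NULL)
                    break;
                memset(p, 0, chunk);   /* touch the pages so they are committed */
                total_mib += 64;
            }
            printf("this instance allocated about %zu MiB\n", total_mib);
            return 0;
        }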

    Read the article
