Search Results

Search found 950 results on 38 pages for 'ashy 32bit'.


  • MySQL-python 1.2.3 and OS X 10.5: 64- or 32-bit?

    - by Dave Everitt
    I've been happily using Django and MySQL in development on an existing machine running OS X 10.4 Tiger, and have set up a similar environment in 10.5 Leopard on a new 64-bit MacBook, with a working MySQL and Python 2.6.4. However, now that I want them to communicate, easy_install MySQL-python gave ld warnings that the file is not of the required architecture, which led me to test my Python 2.4.6 install (from the Mac OS X disc image):

        >>> import sys
        >>> sys.maxint
        2147483647

    Ah. So my Python install appears to be 32-bit and (I think?) won't build MySQL-python against my 64-bit MySQL. There are lots of hacks out there for MySQL-python on OS X (mostly 1.2.2), but - after hours of reading - I'm pretty sure they won't fix this architecture mismatch. So I'm stuck because I can't decide whether to: give up, remove the 64-bit MySQL install (thorough methods, please?) and use the 32-bit MySQL disc image instead; or re-install Python in 64-bit mode from the tarball, with --with-universal-archs=64-bit and --enable-universalsdk= as detailed in Python.org's 2.6 news. So my questions for anyone who has encountered this issue are: Is installing 64-bit Python on OS X 10.5 worth bothering with? If so, (naive, lazy question!) how are the two required arguments combined? If I just skip along in 32-bit (as on my working setup), what am I missing? I'm after a hassle-free install that's easy to reproduce on other machines (possible student use), so I'd really welcome your opinions, please!
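
    A quick way to see which architectures a Python binary actually contains on OS X (as opposed to which one happens to be running, which is what sys.maxint reveals) is the file command; this is a generic diagnostic, not specific to this setup:

        file $(which python)

    On a universal build this lists each embedded architecture (e.g. i386, ppc, x86_64); an interpreter that can run 64-bit must list x86_64.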

    Read the article

  • How does 64-bit code work on OS X 10.5?

    - by philcolbourn
    I initially thought that 64-bit instructions would not work on OS X 10.5. I wrote a little test program and compiled it with GCC -m64. I used long long for my 64-bit integers. The assembly instructions used look like they are 64-bit, e.g. imulq and movq 8(%rbp),%rax. It seems to work. I am only using printf to display the 64-bit values using %lld. Is this the expected behaviour? Are there any gotchas that would cause this to fail? Am I allowed to ask multiple questions in a question? Does this work on other OSes?
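
    For reference, a minimal test along the lines described (my sketch, not the poster's program): compile with gcc -m64 and check that a product too large for 32 bits comes out intact. OS X 10.5 can run 64-bit user-space processes on a 64-bit CPU even though the kernel itself is 32-bit, so this working is expected behaviour.

        #include <stdio.h>

        int main(void)
        {
            long long a = 1234567890123LL;   /* does not fit in 32 bits */
            long long b = a * 1000;          /* needs 64-bit arithmetic */
            printf("%lld\n", b);             /* expect 1234567890123000 */
            return 0;
        }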

    Read the article

  • Log off from Remote Desktop session does not close the session, shows the login screen again on Windows XP

    - by Santhosha
    Hi, As per a requirement we have written a custom GINA. I have observed one interesting behavior on Windows XP 32-bit (SP2). The customized GINA internally calls the default Windows GINA (msgina.dll) and shows one extra window as per our requirement. I log in to the XP machine from my machine over Remote Desktop. After replacing the Windows GINA with the customized GINA, I tried to log off from the XP machine (I am using a Remote Desktop Connection to log in). Log off completes successfully (after showing 'Saving your settings', 'Closing network connections', etc.) and I then get the login screen that normally appears during logon; this is not expected compared to other flavors of Windows OS. Whereas on other operating systems such as Windows XP 64-bit and Windows 2003 32/64-bit, even after replacing the Windows GINA with the custom GINA, the remote desktop session closes after logging off from the machine. I have tried installing the Novell GINA on Windows XP 32-bit and did not find any issue with it. I have tried upgrading XP SP2 to SP3, and still I am facing the same issue. Has anyone faced such issues when working with Windows GINA? Thanks in advance, Santhosha K S

    Read the article

  • Excel ODBC and 64 bit server

    - by Causas
    Using ASP.NET, I need to update an Excel template. Our server is running Windows 2008 in 64-bit mode. I am using the following code to access the Excel file: ... string connection = @"Provider=MSDASQL;Driver={Microsoft Excel Driver (*.xls)};DBQ=" + path + ";"; ... If the application pool is set to Enable 32-Bit Applications, the code works as expected; however, the Oracle driver I am using then fails, as it is 64-bit only. If Enable 32-Bit Applications is set to false, the Excel code fails with the error: Data source name not found and no default driver specified. Any suggestions?

    Read the article

  • Why is execution of code loaded from an external file not halted by the OS?

    - by menjaraz
    I've harnessed a project released on the internet a long time ago. Here are the details, all irrelevant things being stripped off for the sake of concision and clarity. A binary file whose content is described below:

        HEX DUMP:
        55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC
        3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B
        45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD
        C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02
        EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10
        48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1
        8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B
        45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C
        01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC
        03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D
        45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90

    is loaded into memory and executed using the following method snippet:

        var
          MySrcArray, MyDestArray: array [1 .. 15] of Byte;
          // ...
          MyBuffer: Pointer;
          TheProc: procedure;
          SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall;
        begin
          // Initialization of MySrcArray with random Bytes and display here ...
          // Loading of the binary file into MyBuffer using merely GetMem here ...
          @SortIt := MyBuffer;
          try
            SortIt(@MySrcArray, @MyDestArray, 15);
            // Display of MyDestArray (the outcome of the processing!)
          except
            // Invalid code error handling
          end;
          // Cleaning code here ...
        end;

    It works like a charm on my box. My question: how come it works without using VirtualAlloc and/or VirtualProtect?
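
    For contrast, the DEP-safe way to execute code from a run-time buffer is to place it in memory explicitly marked executable. A minimal C sketch (my own illustration; the original uses Delphi's GetMem):

        #include <windows.h>
        #include <string.h>

        typedef void (__stdcall *SortItFn)(void *src, void *dst, unsigned long n);

        /* Copy machine code into a page allocated with execute permission. */
        static SortItFn load_code(const unsigned char *code, size_t len)
        {
            void *buf = VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE,
                                     PAGE_EXECUTE_READWRITE);
            if (buf == NULL)
                return NULL;
            memcpy(buf, code, len);
            return (SortItFn)buf;
        }

    The GetMem version only happens to work because DEP is not being enforced for that process: on 32-bit Windows the default DEP policy is opt-in, covering only core system binaries, so ordinary heap pages remain executable.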

    Read the article

  • Can a 32-bit RHEL4 userland work with a 64-bit kernel?

    - by James
    Is there a way to change an i386 RHEL4 machine to run an amd64 kernel, but ensure that it still builds software into the same i386 binaries? On Debian this seems quite straightforward: just install an amd64 kernel (worst case, build one like this guy: http://www.debian-administration.org/users/jonesy/weblog/1) and prefix everything with "linux32". Then everything that considers uname -m will be unchanged; I just need to handle the few cases that consider uname -r. What is the Red Hat equivalent? Is the only way a full 64-bit installation on another disk and then chrooting back to the 32-bit system before anyone builds anything? (Even the best examples of that seem to be Debian-based.) Background: We make a large system that runs on (a variant of) i386 RHEL4. However, some of the larger RHEL build machines now have enough RAM that they might benefit from going 64-bit (for the kernel and maybe some of the bigger build steps). Our build system doesn't support cross-compilation.

    Read the article

  • Help me understand this "Programming pearls" bitsort program

    - by ardsrk
    Jon Bentley in Column 1 of his book Programming Pearls introduces a technique for sorting a sequence of non-zero positive integers using bit vectors. I have taken the program bitsort.c from here and pasted it below:

        /* Copyright (C) 1999 Lucent Technologies */
        /* From 'Programming Pearls' by Jon Bentley */
        /* bitsort.c -- bitmap sort from Column 1
         * Sort distinct integers in the range [0..N-1] */
        #include <stdio.h>
        #define BITSPERWORD 32
        #define SHIFT 5
        #define MASK 0x1F
        #define N 10000000
        int a[1 + N/BITSPERWORD];
        void set(int i) { int sh = i>>SHIFT; a[i>>SHIFT] |= (1<<(i & MASK)); }
        void clr(int i) { a[i>>SHIFT] &= ~(1<<(i & MASK)); }
        int test(int i) { return a[i>>SHIFT] & (1<<(i & MASK)); }
        int main()
        {
            int i;
            for (i = 0; i < N; i++)
                clr(i);
            /* Replace above 2 lines with below 3 for word-parallel init
            int top = 1 + N/BITSPERWORD;
            for (i = 0; i < top; i++)
                a[i] = 0;
            */
            while (scanf("%d", &i) != EOF)
                set(i);
            for (i = 0; i < N; i++)
                if (test(i))
                    printf("%d\n", i);
            return 0;
        }

    I understand what the functions clr, set and test are doing and explain them below (please correct me if I am wrong here): clr clears the ith bit, set sets the ith bit, and test returns the value at the ith bit. Now, I don't understand how the functions do what they do. I am unable to figure out all the bit manipulation happening in those three functions. Please help.
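
    The key is that each of the three functions splits the bit index i into a word index and a bit offset. Since each word holds 32 bits, i >> SHIFT (i.e. i / 32) selects the word, and i & MASK (i.e. i % 32) selects the bit inside that word. A small hedged illustration in C (mine, not Bentley's):

        #include <stdio.h>

        int main(void)
        {
            int i = 77;                /* example bit index              */
            int word = i >> 5;         /* 77 / 32 = 2  -> the third word */
            int bit  = i & 0x1F;       /* 77 % 32 = 13 -> bit 13 in it   */
            unsigned mask = 1u << bit; /* 0x00002000                     */

            printf("word %d, bit %d, mask 0x%08X\n", word, bit, mask);
            /* set:   a[word] |=  mask;
               clear: a[word] &= ~mask;
               test:  a[word] &   mask;   non-zero iff bit i is set */
            return 0;
        }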

    Read the article

  • .NET app won't run. When compiling on the dest machine (same sources) it does work. Why?

    - by reinier
    I have a service which needs to be deployed on a client's machine. The destination machine is 64-bit. My dev machine is also 64-bit. The app is really simple: it listens on a port and does some db things. It targets .NET 3.5. When I deploy the AnyCPU, the x64 or the x86 version, the thing won't install on the client's machine. I checked Dependency Walker and it lists: devmgr.dll, ieshims.dll, wer.dll. Anyhow... I install Visual Studio 2008 on the client's machine, check out all the sources, don't change a thing and compile. I copy the exe over to its dest location... and what do you know, it works. Dependency Walker still lists the same dependency problems. How can it be that the act of compiling it on this machine gives me a different exe?

    Read the article

  • Starting 64-bit Windows application development

    - by user173438
    I intend to start writing a 64-bit scientific computing application (signal processing) for Windows using Microsoft Visual Studio 2008. What should I have ready as far as a development platform is concerned? How would it be different from 32-bit development? What could be the porting issues for the 32-bit version that I already have (OK - this might be too early to ask... even before I start compiling)? As you might have guessed, I am looking for general directions. All pointers would be much appreciated! :) Thanks in advance..
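
    One concrete pitfall worth planning for when moving 32-bit code to 64-bit Windows: pointers grow to 8 bytes while int and long stay at 4 (LLP64), so any pointer stashed in a 32-bit integer is silently truncated. A small hedged C illustration (my own, not from the question):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            int x = 42;
            int *p = &x;

            /* Classic 32->64 bug: the upper half of the pointer is lost. */
            unsigned int bad = (unsigned int)(uintptr_t)p;

            /* Portable: uintptr_t is pointer-sized on every target. */
            uintptr_t good = (uintptr_t)p;

            printf("truncated 0x%08X vs full 0x%llX\n",
                   bad, (unsigned long long)good);
            return 0;
        }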

    Read the article

  • How do I run a VBScript in 32-bit mode on a 64-bit machine?

    - by Peter
    I have a text file ending in .vbs in which I have written the following:

        Set Conn = CreateObject("ADODB.Connection")
        Conn.Provider = "Microsoft.ACE.OLEDB.12.0"
        Conn.Properties("Data Source") = "C:\dummy.accdb"
        Conn.Properties("Jet OLEDB:Database Password") = "pass"
        Conn.Open
        Conn.Close
        Set Conn = Nothing

    When I execute this on a 32-bit Windows machine it runs and ends without any message (expected). When I execute this on a 64-bit Windows machine I get the error "Provider cannot be found. It may not be properly installed." But it is installed. I think the root of the problem is that the provider is a 32-bit provider; as far as I know it doesn't exist as 64-bit. If I run the VBScript through IIS on my 64-bit machine (as an ASP file) I can select that it should run in 32-bit mode. It can then find the provider. How can I make it find the provider on 64-bit Windows? Can I tell CScript (which executes the .vbs text file) to run in 32-bit mode somehow?
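
    On 64-bit Windows there are two copies of the script host, and, counter-intuitively, the one under SysWOW64 is the 32-bit one. So, assuming a standard Windows layout (the .vbs path is illustrative), running the script like this forces a 32-bit process that can see 32-bit providers:

        %windir%\SysWOW64\cscript.exe C:\path\to\script.vbs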

    Read the article

  • How to check that code is compatible with Windows 7

    - by Julen
    Hello, We are developing a WPF-based program with Visual C# 2008 Express on Windows XP machines (32-bit). The thing is that we have tried to run the program on two Windows 7 machines: one is 32-bit Windows 7 and the other is 64-bit Windows 7. Under Windows XP everything is fine. On the 32-bit Windows 7 machine it launches, although there is an error when running one piece of functionality (this does not happen on XP). On 64-bit Windows 7 it does not even launch. Is this normal? Is it not possible to run 32-bit programs under 64-bit Windows 7, even if they execute slower? How can we check that the code is compatible with Windows 7? Thank you very much in advance. Julen.

    Read the article

  • No-overflow cast on x64

    - by Cheeso
    I have an existing C codebase that works on x86. I'm now compiling it for x64. What I'd like to do is cast a size_t to a DWORD, and throw an exception if there's a loss of data. Q: Is there an idiom for this? Here's why I'm doing this: A bunch of Windows APIs accept DWORDs as arguments, and the code currently assumes sizeof(DWORD)==sizeof(size_t). That assumption holds for x86, but not for x64. So when compiling for x64, passing a size_t in place of a DWORD argument generates a compile-time warning. In virtually all of these cases the actual size is not going to exceed 2^32. But I want to code it defensively and explicitly. This is my first x64 project, so... be gentle.
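
    One common idiom is an explicit range check before the narrowing cast; a C sketch under the assumption that raising a structured exception is an acceptable failure path (the helper name is illustrative, not an API):

        #include <windows.h>

        /* Narrow a size_t to a DWORD, failing loudly on data loss. */
        static DWORD dword_from_size_t(size_t s)
        {
            if (s > MAXDWORD) {   /* always false on x86, so no cost there */
                RaiseException(EXCEPTION_INT_OVERFLOW,
                               EXCEPTION_NONCONTINUABLE, 0, NULL);
            }
            return (DWORD)s;
        }

    Newer Windows SDKs also ship intsafe.h, whose checked conversion helpers (SizeTToDWord and friends) perform the same test and return a failure HRESULT instead of raising.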

    Read the article

  • CreateThread() fails on 64 bit Windows, works on 32 bit Windows. Why?

    - by Stephen Kellett
    Operating System: Windows XP 64-bit, SP2. I have an unusual problem. I am porting some code from 32-bit to 64-bit. The 32-bit code works just fine. But when I call CreateThread() for the 64-bit version the call fails. I have three places where this fails. Two call CreateThread(); one calls _beginthreadex(), which calls CreateThread(). All three calls fail with error code 0x3E6, "Invalid access to memory location". The problem is all the input parameters are correct.

        HANDLE h;
        DWORD threadID;
        h = CreateThread(0,            // default security
                         0,            // default stack size
                         myThreadFunc, // valid function to call
                         myParam,      // my param
                         0,            // no flags, start thread immediately
                         &threadID);

    All three calls to CreateThread() are made from a DLL I've injected into the target program at the start of the program execution (this is before the program has got to the start of main()/WinMain()). If I call CreateThread() from the target program (same params) via, say, a menu, it works. Same parameters etc. Bizarre. If I pass NULL instead of &threadID, it still fails. If I pass NULL as myParam, it still fails. I'm not calling CreateThread from inside DllMain(), so that isn't the problem. I'm confused, and searching on Google etc. hasn't shown any relevant answers. If anyone has seen this before or has any ideas, please let me know. Thanks for reading.

    Read the article

  • .NET version with 64-bit versus 32-bit assemblies

    - by user54064
    What version of .NET (64-bit vs. 32-bit) will be loaded if some of the assemblies referenced in an app are compiled with the 32-bit-only setting (instead of Any CPU)? Will the app still run as 64-bit, or will it be forced to run as 32-bit if at least one of the referenced assemblies is compiled as 32-bit only? The app is running .NET 3.5.

    Read the article

  • Force Python to be 32 bit on OS X Lion

    - by sciencectn
    I'm trying to use CPLEX within Python on Mac OS 10.7.5. CPLEX appears to only support a 32-bit Python. I'm using this in a Python shell to check whether it's 32-bit:

        import sys, platform; print platform.architecture()[0], sys.maxsize > 2**32

    I've tried these 2 commands as suggested in man 1 python, but neither seems to force 32-bit:

        export VERSIONER_PYTHON_PREFER_32_BIT=yes
        defaults write com.apple.versioner.python Prefer-32-Bit -bool yes

    The only thing that seems to work is this:

        arch -i386 python

    However, if I run a script using arch which calls other scripts, they all seem to start up in 64-bit mode. Is there another system-wide variable to force it into 32-bit mode?

    Read the article

  • Running my web site in a 32-bit application pool on a 64-bit OS.

    - by Jeremy H
    Here is my setup:

    Dev:
    - Windows Server 2008 64-bit
    - Visual Studio 2008
    - Solution with 3 class libraries, 1 web application

    Staging web server:
    - Windows Server 2008 R2 64-bit
    - IIS 7.5 integrated application pool with 32-bit applications enabled

    In Visual Studio I have set all 4 of my projects to compile to 'Any CPU', but when I run this web application on the web server with the 32-bit application pool it times out and crashes. When I run the application pool in 64-bit mode it works fine. The production web server requires me to run a 32-bit application pool on a 64-bit OS, which is why I have it configured this way on the staging web server. (I considered posting on ServerFault, but the server part seems to be working fine. It is my code specifically that doesn't seem to want to run in a 32-bit application pool, which is why I am posting here.)

    Read the article

  • How to correctly load 32-bit DLL dependencies when running a program from a batch file

    - by neilwhitaker1
    I have written a tool that references Microsoft.TeamFoundation.VersionControl.Client.dll, which is a 32-bit DLL. When I build my tool on 64-bit Windows, I set Visual Studio to specifically target x86 in order to force a 32-bit build. Targeting x86 instead of Any CPU prevents me from getting a BadImageFormatException, as long as I invoke the tool directly (e.g. by typing "myTool.exe" on the command line). However, if I run a batch file that invokes the tool, I still get the exception. This happens even if the batch file runs in a 32-bit command prompt (%WINDIR%\SysWOW64\cmd.exe). What else can I do to make this work?
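
    When chasing a BadImageFormatException it can help to confirm what the exe being launched actually declares; the .NET SDK's CorFlags tool prints the image's bitness flags, e.g.:

        corflags myTool.exe

    A managed exe marked 32-bit-required loads as a 32-bit process regardless of whether a 32-bit or 64-bit cmd.exe launched it, so if the batch file still fails it is worth verifying that it resolves to the same myTool.exe (and not a stale Any CPU build) that works when invoked directly.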

    Read the article

  • Could not load SWT library on Windows 32-bit

    - by Firzen
    I am almost done with a Java project that I have been developing on Linux. Now I need to build and test it on Windows. So I have installed Eclipse on Windows XP 32-bit and imported my project. All dependencies of the project are in jar files in the lib folder, and on Linux everything works well, but on Windows XP I get the following error:

        Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
            no swt-pi-gtk-4234 in java.library.path
            no swt-pi-gtk in java.library.path
            Can't load library: C:\Documents and Settings\firzen\.swt\lib\win32\x86\swt-pi-gtk-4234.dll
            Can't load library: C:\Documents and Settings\firzen\.swt\lib\win32\x86\swt-pi-gtk.dll
                at org.eclipse.swt.internal.Library.loadLibrary(Library.java:331)
                at org.eclipse.swt.internal.Library.loadLibrary(Library.java:240)
                at org.eclipse.swt.internal.gtk.OS.<clinit>(OS.java:22)
                at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:63)
                at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:54)
                at org.eclipse.swt.widgets.Display.<clinit>(Display.java:133)
                at gui.Frontend.<init>(Frontend.java:51)
                at Fighter.main(Fighter.java:18)

    I have searched for these DLLs, but I have failed to find them. Where can I download these DLL files? Thanks in advance.

    Read the article

  • app.config and 64-bit machines

    - by Dale Lutes
    I have an app that works fine on 32-bit systems but fails on XP 64-bit systems. I've tracked it down to the connection string defined in my app.config thus:

        <connectionStrings>
          <clear/>
          <add name="IFDSConnectionString"
               connectionString="Data Source=fdsdata;Initial Catalog=IFDS; Trusted_Connection=true;Connect Timeout=0"
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    When I try to reference it in code, I find that the ConfigurationManager.ConnectionStrings collection only contains the LocalSqlServer connection string from the machine.config file and not my custom string. Another oddity is that it works fine when I run the app out of Visual Studio. It is only when I run out of the release folder that the connection string does not get defined. The application's .exe.config file is there in the release folder along with the .exe file and is up to date.

    Read the article

  • Find most significant bit (left-most) that is set in a bit array

    - by Claudiu
    I have a bit array implementation where the 0th index is the MSB of the first byte in an array, the 8th index is the MSB of the second byte, etc... What's a fast way to find the first bit that is set in this bit array? All the related solutions I have looked up find the first least significant bit, but I need the first most significant one. So, given 0x00A1, I want 9.
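
    Under that indexing (index 0 is the MSB of byte 0), a straightforward approach is to scan for the first non-zero byte and then walk from its MSB downward. A hedged C sketch (my own; it counts from 0, so it returns 8 for {0x00, 0xA1}; add 1 if your convention is 1-based like the example's 9):

        #include <stddef.h>
        #include <stdio.h>

        /* Index of the left-most set bit, where index 0 is the MSB of
           bits[0]; returns -1 if the array is all zero. */
        static int first_set_bit(const unsigned char *bits, size_t nbytes)
        {
            for (size_t i = 0; i < nbytes; i++) {
                unsigned char b = bits[i];
                if (b != 0) {
                    int pos = 0;
                    while ((b & 0x80) == 0) {  /* shift until the MSB is set */
                        b <<= 1;
                        pos++;
                    }
                    return (int)(8 * i + pos);
                }
            }
            return -1;
        }

        int main(void)
        {
            unsigned char data[] = { 0x00, 0xA1 };
            printf("%d\n", first_set_bit(data, sizeof data)); /* prints 8 */
            return 0;
        }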

    Read the article
