Search Results

Search found 950 results on 38 pages for 'ashy 32bit'.

Page 6/38

  • Good reasons to keep 32-bit desktop OSes

    - by Mark Henderson
    Server software has been 64-bit only for a while now (since Server 2008 R2 for Windows, and even earlier for Exchange and SharePoint), and even Ubuntu is pushing you away from 32-bit versions of its server OS. But is there any good, quantifiable reason to keep a 32-bit desktop operating system maintained? We're preparing our Windows 8 images for the (unfortunate?) few who will be early adopters. The majority of our desktop computers have 4 GB of RAM or less, but I would love not to have to bother supporting a 32-bit flavoured operating system any more. Is there any reason why I should?

    Read the article

  • Virtual machine: is it possible to run a 32-bit guest OS on a 64-bit host OS?

    - by Cédric Girard
    I am a software developer, and I need to use an old version of Borland/Embarcadero Delphi 7 for one piece of software; the other ones are PHP applications. I will soon have a 64-bit PC running Linux, but I need a 32-bit Windows virtual machine for Delphi (because Delphi 7 is a bit old, and our clients still use 32-bit Windows XP systems). I already have a VM under VirtualBox for my Delphi environment. Will it run fine, or will I run into problems?

    Read the article

  • Should I install 64-bit GCC on a new server?

    - by diederikvanliere
    I am going to rebuild my server and I am wondering whether I should install 32-bit or 64-bit GCC. I develop in Python and I use some libraries that would benefit from a 64-bit GCC installation, but I am not sure whether I am going to run into problems with other programs and libraries. What are your thoughts?

    Read the article

  • How can install scripts determine which are the correct equivalents for /usr/lib for 32- and 64-bit libraries?

    - by Randalli
    I have an SDK that must install 32-bit and 64-bit files in the correct places under /usr/lib for a variety of Linux distributions. For example, it appears that for Fedora /usr/lib64 is the 64-bit library directory, but for Debian-based systems /usr/lib is the 64-bit directory. I want to find out if there is a reliable way to determine the correct locations. More specifically, is there a way an install script can determine programmatically which are the correct equivalents of /usr/lib for 32- and 64-bit libraries on a given distribution?

    Read the article

  • What does macosx-version-min imply?

    - by Quincy
    When I pass the compiler flag "-mmacosx-version-min=10.5", what does it mean? I think it implies the resulting binary is x86, not PPC, but is it 32-bit or 64-bit? I'm compiling on Snow Leopard, so the default output binary is 64-bit. Since I'm not passing -universal, I don't think it's a 32-bit/64-bit universal binary.

    Read the article

  • Creating a 64-bit build in Visual Studio

    - by user48408
    I have an old .NET 2.0 Windows app that I need to deploy on a Windows 7 machine, and it's not going too well. I want to build a native 64-bit version rather than a 32-bit build that is merely capable of running in a 64-bit environment. I'm working with Visual Studio 2005. My question is: what settings do I need to set within each project of my solution (both the Windows app and the DLLs it references), and what settings should I set on my install project (I have a deployment project for distribution)?
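
    A small sketch (not from the question) of how to confirm the result at runtime on .NET 2.0, once each project's Platform target has been switched to x64:

        using System;

        class BitnessCheck
        {
            static void Main()
            {
                // IntPtr is 8 bytes in a 64-bit process and 4 bytes in a 32-bit one,
                // so this tells you what the build actually produced.
                Console.WriteLine(IntPtr.Size == 8
                    ? "Running as a native 64-bit process"
                    : "Running as a 32-bit process");
            }
        }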

    Read the article

  • int datatype in a 64-bit JVM: is it more "inefficient" than long?

    - by Zwei Steinen
    I heard that using shorts on a 32-bit system is simply less efficient than using ints. Is the same true for ints on a 64-bit system? Python recently(?) merged int with long, so it basically has a single long datatype, right? If you are sure that your app will only run on 64-bit systems, is it even conceivable (potentially a good idea) to use long for everything in Java?

    Read the article

  • Nautilus will not start, no right-click menu, apps will not run because 64-bit libicui18n.so.48 was overwritten with 32-bit, Ubuntu 12.04

    - by Dewb
    Yeah, so I broke my system trying to install support for 32-bit apps. Somehow the 32-bit set of libicu*.so.48 files replaced my 64-bit files, and now nothing works. I know what the problem is and how to fix it; what I need is some help. I need some kind soul running a 64-bit install of Ubuntu 12.04 to simply go into their /usr/lib directory and copy the files of the form libicu*.so.48* (8 of them, I think, maybe 16; I don't know how many there are supposed to be, since mine were overwritten). If you could put those files in an archive (zip or tar preferred) and then email/share the link, that would be great. I was able to get replacement files installed, but they're from the latest edition of Fedora, and while I can now use my computer, it's not that stable; things don't start as smoothly as they used to. It would be a simple matter to put things back the way they were if someone wouldn't mind helping out. Also, any help setting up the libraries related to 32-bit apps would be great, as I apparently screwed that up royally. Thanks for any help or advice.

    Read the article

  • 32-bit DLL importing in a 64-bit .NET application

    - by scatterbraiin
    Hello, I'm having a problem that I have been trying to solve since yesterday with no luck. I have a 32-bit Delphi DLL which I want to import into a .NET Windows application. This application has to be built in Any CPU mode. Of course, a BadImageFormatException is thrown, which means that an x86 DLL can't be loaded into an x64 process. I googled around and found a suggestion that I have to write a wrapper, but it wasn't clear to me. Can anybody tell me how to solve this problem? Is there any possible way to import a 32-bit Delphi DLL into a program built in Any CPU or x64 mode (or maybe another solution)?
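
    A minimal sketch of the usual constraint (the DLL and export names below are hypothetical, not from the question): the P/Invoke declaration itself is ordinary, but the process that executes it has to be 32-bit, for example a small helper EXE built with Platform target x86 that the 64-bit application talks to, or the main EXE itself built as x86.

        using System;
        using System.Runtime.InteropServices;

        static class DelphiWrapper
        {
            // stdcall is the calling convention Delphi exports typically use.
            [DllImport("legacy32.dll", CallingConvention = CallingConvention.StdCall)]
            public static extern int DoWork(int value);
        }

        class Program
        {
            static void Main()
            {
                // Guard against accidentally running inside a 64-bit process,
                // which would throw BadImageFormatException on the first call.
                if (IntPtr.Size != 4)
                    throw new InvalidOperationException("Run this wrapper as a 32-bit (x86) process.");

                Console.WriteLine(DelphiWrapper.DoWork(42));
            }
        }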

    Read the article

  • Portable way to deal with 64/32-bit time_t

    - by MK
    I have some code which is built both on Windows and Linux. Linux at this point is always 32-bit, but Windows is 32- and 64-bit. Windows wants time_t to be 64-bit and Linux still has it as 32-bit. I'm fine with that, except that in some places time_t values are converted to strings. So when time_t is 32-bit it should be done with %d, and when it is 64-bit with %lld... What is the smart way to do this? Also: any ideas how I might find all the places where time_t values are passed to printf-style functions, to address this issue?

    Read the article

  • Python: get Windows OS version and architecture

    - by Thorfin
    First of all, I don't think this question is a duplicate of http://stackoverflow.com/questions/2208828/detect-64bit-os-windows-in-python because, in my opinion, it has not been thoroughly answered. The only approaching answer is: use sys.getwindowsversion() or the existence of PROGRAMFILES(X86) (if 'PROGRAMFILES(X86)' in os.environ). But: can we completely rely on the Windows environment variable PROGRAMFILES(X86)? I fear that anyone can create it, even if it's not present on the system. And how can we use sys.getwindowsversion() to get the architecture? Regarding sys.getwindowsversion(): the link http://docs.python.org/library/sys.html#sys.getwindowsversion leads us to http://msdn.microsoft.com/en-us/library/ms724451%28VS.85%29.aspx, but I don't see anything related to the architecture (32-bit/64-bit) there. Moreover, the platform element of the returned tuple seems to be independent of the architecture. One last note: I'm using Python 2.5. Thanks!

    Read the article

  • How to compile a C DLL for 64-bit with Visual Studio 2010?

    - by Daren Thomas
    I have a DLL written in C, in source code; this is the code for the General Polygon Clipper (in case you are interested). I'm using it in a C# project via the C# wrapper provided on the homepage, which comes with a precompiled DLL. Since switching to a 64-bit development machine with Visual Studio 2010 and Windows 7 64-bit, the application won't run anymore. This is the error I get: "An attempt was made to load a program with an incorrect format." This is because I am DllImporting the 32-bit gpc.dll, as I have gathered from what I found on the web. I assume this will all go away if I recompile the DLL as 64-bit, but I can't for the love of me figure out how to do so. My C skills are basic, in that I can write a C program with the GNU tools, but I have no experience with various compilers / processors / IDEs etc. I believe I could port this to C#; by that I mean I trust myself to actually pull it off. But I'd prefer not to, since it is a lot of work that I'd prefer a compiler to do for me ;)
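
    If both a 32-bit and a 64-bit build of the native DLL end up being shipped, one common pattern on the managed side (a sketch only; the x86/x64 folder layout is an assumption, not something the GPC wrapper prescribes) is to point the DLL search path at the right folder before the first P/Invoke call:

        using System;
        using System.IO;
        using System.Runtime.InteropServices;

        static class NativeLoader
        {
            [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
            static extern bool SetDllDirectory(string lpPathName);

            public static void Init(string baseDir)
            {
                // An 8-byte pointer means a 64-bit process, so load the x64 build.
                string subDir = IntPtr.Size == 8 ? "x64" : "x86";
                SetDllDirectory(Path.Combine(baseDir, subDir));
            }
        }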

    Read the article

  • Using a 32-bit COM object from C# or VBS on 64-bit Vista and getting error 80004005

    - by alexandroid
    I need some mind reading here, since I am trying to do something I do not completely understand. There is a 32-bit application (an electronic trading application called CQG) which provides a COM API for external access. I have sample programs and scripts which access this API from Excel, .NET (C++, VB and C#) and shell VBScript. I have the .NET applications both as source code and as compiled executables (32-bit, compiled on Windows XP). Now I have Windows Vista Home 64-bit, which makes my head spin. The Excel examples work just fine (in Excel 2003), and the compiled .NET sample executables work as well. But when I try to run the .NET C# sample converted to and compiled by Visual C# Express, or run the VBScript script, I get error 80004005 when trying to create an object. Initially the .NET application also gave me 80040154, but then I figured out how to make it produce 32-bit code rather than 64-bit, so now the errors in the C# and VBScript applications are the same. That's all the progress I have made so far. And yes, I tried running the 32-bit versions of cscript.exe/wscript.exe from the SysWOW64 folder on my VBS, but the result is still the same (80004005). How do I solve this problem? I am almost ready to believe it is practically impossible, but the fact that Excel VBA works and the .NET executables compiled on Windows XP run just fine simply makes me angry. There should be a way to beat this thing (some secret which probably only Windows Vista developers know)! I will appreciate any help! PS: I believe code samples do not make much sense here, but this is the line of VBScript which fails:

        Set CEL = WScript.CreateObject("CQG.CQGCEL.4.0", "CEL_")

    And this is the C#:

        CQGCEL CEL = new CQGCEL();

    Update: I forgot to say that UAC is off, of course, and I am working from an account with administrator privileges. I also tried watching which registry keys are read, using Process Monitor, but everything looks OK for the GUIDs of this object. I could not recognize some other GUIDs, so I am not sure whether they were critical or not. Is there a chance that this COM object uses Internet Explorer and gets the wrong one (like the Internet Explorer 7 instead of the Internet Explorer 6 engine, or something)?

    Read the article

  • Hash a 32-bit int to a 16-bit int?

    - by dkamins
    What are some simple ways to hash a 32-bit integer (e.g. an IP address, a Unix time_t, etc.) down to a 16-bit integer? E.g. hash_32b_to_16b(0x12345678) might return 0xABCD. Let's start with this as a horrible but functional example solution:

        function hash_32b_to_16b(val32b) {
            return val32b % 0xffff;
        }

    The question is specifically about JavaScript, but feel free to add any language-neutral solutions, preferably without using library functions. Simple = good. Wacky + obfuscated = amusing.
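
    For the language-neutral bucket, one small illustrative sketch (not from the question) is to XOR-fold the high and low halves so that every input bit affects the result:

        static ushort Hash32To16(uint value)
        {
            // Mix the top 16 bits into the bottom 16 bits and truncate.
            return (ushort)((value >> 16) ^ (value & 0xFFFF));
        }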

    Read the article

  • Intercept and ignore a keyboard event in Windows 7 32-bit

    - by Sg2010
    Hi all, my hardware has a problem: from time to time it sends a "keydown" followed by a "keyup" for the event: keydown: None LButton, OemClear 255; keyup: None LButton, OemClear 255; keydown: None LButton, OemClear 255; keyup: None LButton, OemClear 255. It goes on like this forever in Windows. In general it doesn't affect most applications, because this key is not printable. I think it's a special function key, like a media key or something; it doesn't do anything. But in some applications that listen to keydown and keyup, I get undesired and unexpected behaviour. Is there a way to intercept these two events in Windows (for all applications, for Windows itself) and make the OS ignore them? This is really important to me; if you can think of any solution, I'd be forever thankful.
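
    A sketch of one way such an event could be swallowed system-wide with a low-level keyboard hook (assumption: the stray key reports virtual-key code 0xFF; adjust BlockedVk to whatever your logging shows):

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        static class KeySwallower
        {
            const int WH_KEYBOARD_LL = 13;
            const int BlockedVk = 0xFF;   // assumed virtual-key code of the stray key

            delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);
            static readonly HookProc proc = HookCallback;   // keep a reference so the delegate isn't collected
            static IntPtr hookId = IntPtr.Zero;

            [DllImport("user32.dll", SetLastError = true)]
            static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);

            [DllImport("user32.dll", SetLastError = true)]
            static extern bool UnhookWindowsHookEx(IntPtr hhk);

            [DllImport("user32.dll")]
            static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

            [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
            static extern IntPtr GetModuleHandle(string lpModuleName);

            static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
            {
                if (nCode >= 0)
                {
                    // The first field of KBDLLHOOKSTRUCT is the virtual-key code.
                    int vk = Marshal.ReadInt32(lParam);
                    if (vk == BlockedVk)
                        return (IntPtr)1;   // non-zero return discards the event for every application
                }
                return CallNextHookEx(hookId, nCode, wParam, lParam);
            }

            [STAThread]
            static void Main()
            {
                using (Process cur = Process.GetCurrentProcess())
                using (ProcessModule mod = cur.MainModule)
                    hookId = SetWindowsHookEx(WH_KEYBOARD_LL, proc, GetModuleHandle(mod.ModuleName), 0);

                Application.Run();          // a message loop is required for low-level hooks
                UnhookWindowsHookEx(hookId);
            }
        }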

    Read the article

  • Find a missing 32-bit integer in an unsorted array containing at most 4 billion ints

    - by pierr
    Hi, this is a problem described in Programming Pearls. I cannot understand the binary search method described by the author; can anyone help elaborate? Thanks. EDIT: I can understand binary search in general; I just cannot understand how to apply binary search in this special case. How do we decide whether the missing number is or is not in some range, so that we can choose the other one? English is not my native language, which is one reason I cannot follow the author well, so please use plain English :) EDIT: Thank you all for your great answers and comments! The most important lesson I learnt from solving this question is that binary search applies not only to sorted arrays!
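
    A sketch of the idea being asked about (assuming at least one 32-bit value really is absent and that the input fits in memory; Bentley's version streams a file in several passes instead): binary-search over the range of possible values rather than over the array, using counts to decide which half must contain a gap.

        using System;
        using System.Collections.Generic;

        static class MissingInt
        {
            public static uint FindMissing(IEnumerable<uint> values)
            {
                uint lo = 0, hi = uint.MaxValue;
                IEnumerable<uint> candidates = values;

                while (lo < hi)
                {
                    uint mid = lo + (hi - lo) / 2;

                    // One pass: split the remaining candidates around mid.
                    var lower = new List<uint>();
                    var upper = new List<uint>();
                    foreach (uint v in candidates)
                    {
                        if (v <= mid) lower.Add(v);
                        else upper.Add(v);
                    }

                    // The range [lo, mid] can hold (mid - lo + 1) distinct values.
                    // If fewer inputs landed there, something in [lo, mid] is missing.
                    if ((ulong)lower.Count < (ulong)(mid - lo) + 1)
                    {
                        hi = mid;
                        candidates = lower;
                    }
                    else
                    {
                        lo = mid + 1;   // the lower half is full, so the gap is above mid
                        candidates = upper;
                    }
                }
                return lo;              // lo == hi: a value that never appeared in the input
            }
        }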

    Read the article

  • Calling 32-bit unmanaged DLLs from C# randomly failing

    - by Bert
    Hi, I'm having an issue when calling 32-bit Delphi DLLs from a C# web site. The code generally runs fine, but occasionally I get the error "Unable to load DLL '': The specified module could not be found. (Exception from HRESULT: 0x8007007E)". The issue persists until I recycle the app pool for the site, and then it works fine again. On the same server there is also a web service that calls the same set of DLLs; this web service doesn't seem to have the same issue that the web site has. Both applications use .NET Framework 3.5 and separate app pools on IIS. Here is the code I'm using to wrap the DLLs:

        public sealed class Mapper
        {
            static Mapper instance = null;

            [DllImport("kernel32.dll")]
            private static extern bool SetDllDirectory(string lpPathName);

            private Mapper()
            {
                SetDllDirectory(ConfigManager.Path);
            }

            public static Mapper Instance
            {
                get
                {
                    if (instance == null)
                    {
                        instance = new Mapper();
                    }
                    return instance;
                }
            }

            public int Foo(string bar, ref double val)
            {
                return Loader.Foo(bar, ref val);
            }
        }

        public static class Loader
        {
            [DllImport("some.dll", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Unicode, EntryPoint = "foo")]
            public static extern int Foo(string bar, ref double val);
        }

    Then I call it like this:

        double val = 0.0;
        Mapper.Instance.Foo("bar", ref val);

    Any ideas as to why it would "randomly" fail with "Unable to load DLL '': The specified module could not be found. (Exception from HRESULT: 0x8007007E)"? The other problem is that I haven't been able to replicate the issue in the development environment. I thought that, with two applications calling the same DLLs, there could be some locking occurring. To replicate this, I created an app that spawned multiple threads and repeatedly called the 32-bit DLLs, and then used the web site to call the same DLLs. I still couldn't replicate the issue. Some possible fixes that I can think of:

    - Wrap the 32-bit DLLs in a web service (because the web service doesn't seem to suffer from the same problem). But this may be worthless if it turns out that the web service also fails.
    - Set up a state server for the session state and periodically recycle the app pool for the site. This isn't fixing the problem, only avoiding it.
    - Wrap the DLLs in an exe and call that exe. Then I shouldn't get the same issue, but this also seems like a hacky solution.
    - Implement the Mapper class differently? But how else should I be doing the call? The other drawback is that other applications are using this mapper, so I'd need to change their code too.

    Thanks

    Read the article

  • Memory available to a 64-bit Fedora guest under a 32-bit XP host running VirtualBox

    - by Chris Card
    I have successfully installed a 64-bit Fedora 11 guest OS using VirtualBox on a host machine (AMD64) running 32-bit Windows XP. At the moment the host machine has 2 GB of RAM installed and I've allocated 1 GB to the guest, which all works well. The host machine can hold a maximum of 4 GB of RAM, so I was wondering if it's worth buying an extra 2 GB for it. I know that 32-bit Windows XP can't use all of the 4 GB, but can the guest OS use any of the RAM that the host OS can't use?

    Read the article

  • How to generate a monochrome bit mask for a 32-bit bitmap

    - by Mordachai
    Under Win32, it is a common technique to generate a monochrome bitmask from a bitmap for transparency use by doing the following:

        SetBkColor(hdcSource, clrTransparency);
        VERIFY(BitBlt(hdcMask, 0, 0, bm.bmWidth, bm.bmHeight, hdcSource, 0, 0, SRCCOPY));

    This assumes that hdcSource is a memory DC holding the source image, and hdcMask is a memory DC holding a monochrome bitmap of the same size (so both are 32x32, but the source is 4-bit color, while the target is 1-bit monochrome). However, this seems to fail for me when the source is 32-bit color + alpha. Instead of getting a monochrome bitmap in hdcMask, I get a mask that is all black; no bits get set to white (1), whereas this works for the 4-bit color source. My search-foo is failing, as I cannot seem to find any references to this particular problem. I have isolated that this is indeed the issue in my code: if I use a source bitmap that is 16-color (4-bit), it works; if I use a 32-bit image, it produces the all-black mask. Is there an alternate method I should be using in the case of 32-bit color images? Is there an issue with the alpha channel that overrides the normal behavior of the above technique? Thanks for any help you may have to offer! ADDENDUM: I am still unable to find a technique that creates a valid monochrome bitmap for my GDI+-produced source bitmap. I have somewhat alleviated my particular issue by simply not generating a monochrome bitmask at all; instead I'm using TransparentBlt(), which seems to get it right (but I don't know what they're doing internally that's any different and allows them to correctly mask the image). It might be useful to have a really good, working function:

        HBITMAP CreateTransparencyMask(HDC hdc, HBITMAP hSource, COLORREF crTransparency);

    where it always creates a valid transparency mask, regardless of the color depth of hSource. Ideas?

    Read the article

  • Different PATH environment variable for 32-bit and 64-bit Windows - is it possible?

    - by Piotr Dobrogost
    Is it possible to have the whole or part of the PATH environment variable be specific to the image type of the running process (32-bit/64-bit)? When I run some app from within a 64-bit cmd.exe I would like it to pick up the 64-bit version of the OpenSSL library, whereas when I run some app from within a 32-bit cmd.exe I would like it to pick up the 32-bit version of the OpenSSL library. FOLLOW UP: where.exe does not find the OpenSSL libs when the %ProgramFiles% variable is used in the PATH environment variable.

    Read the article

  • Connection to .mdb file fails on Windows 2008 Core - 32-bit

    - by amritad
    Hi, I have an application in which I connect to an .mdb file. Here are the steps we are following: first, create an object for the database utils interface; then call the connect() function to connect to the database whose extension is .mdb. The above flow works correctly on all Windows platforms, but not on Windows Server 2008 Core 32-bit. I also checked whether the ODBC drivers are installed by opening regedit, and found that all the drivers, in particular the Microsoft Access Driver (*.mdb), are installed; I think they are installed by default along with the OS. Is anyone aware of anything about ODBC drivers and connectivity to a database (like .mdb) on Windows 2008 Core (which runs only from the command line and has no GUI)?

    Read the article
