Search Results

Search found 9557 results on 383 pages for 'x86 64'.


  • Orphan IBM JVM process

    - by Nicholas Key
    Hi people, I have an issue with an orphan IBM JVM process being created in the process tree. For example:

        C:\Program Files\IBM\WebSphere\AppServer\bin>wsadmin -lang jython -f "C:\Hello.py"

    Hello.py has a simple implementation:

        import time
        i = 0
        while (1):
            i = i + 1
            print "Hello World " + str(i)
            time.sleep(3.0)

    My machine reports the following JVM information:

        C:\Program Files\WebSphere\java\bin>java -verbose:sizes -version
        -Xmca32K        RAM class segment increment
        -Xmco128K       ROM class segment increment
        -Xmns0K         initial new space size
        -Xmnx0K         maximum new space size
        -Xms4M          initial memory size
        -Xmos4M         initial old space size
        -Xmox1624995K   maximum old space size
        -Xmx1624995K    memory maximum
        -Xmr16K         remembered set size
        -Xlp4K          large page size
                        available large page sizes: 4K 4M
        -Xmso256K       operating system thread stack size
        -Xiss2K         java thread stack initial size
        -Xssi16K        java thread stack increment
        -Xss256K        java thread stack maximum size
        java version "1.6.0"
        Java(TM) SE Runtime Environment (build pwi3260sr6ifix-20091015_01(SR6+152211+155930+156106))
        IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Windows Server 2003 x86-32 jvmwi3260sr6-20091001_43491 (JIT enabled, AOT enabled)
        J9VM - 20091001_043491
        JIT  - r9_20090902_1330ifx1
        GC   - 20090817_AA)
        JCL  - 20091006_01

    While the program is running, I tried to kill it, and subsequently I found an orphan IBM JVM process in the process tree. Is there a way to fix this issue? Why is there an orphan process in the first place? Is there something wrong with my code? I really don't believe that such a simplistic script is wrongly implemented. Any suggestions?


  • JAVASCRIPT ENABLED [closed]

    - by kirchoffs415
    Hi, I hope somebody can help. I keep getting the following message when I log on:

        Your Javascript is disabled. Limited functionality is available.

    It stays for maybe a day, sometimes two. I have uninstalled and reinstalled Javascript, but it is still the same. I am using Chrome. Any help would be gratefully received. Many thanks, Dominic.

    P.S. My system spec is as follows:

        OS Name: Microsoft Windows Vista Home Premium
        Version: 6.0.6002 Service Pack 2 Build 6002
        OS Manufacturer: Microsoft Corporation
        System Name: DOM-PC
        System Manufacturer: Dell Inc.
        System Model: Inspiron 1545
        System Type: X86-based PC
        Processor: Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 MHz, 2 Core(s), 2 Logical Processor(s)
        BIOS Version/Date: Dell Inc. A05, 25/02/2009
        SMBIOS Version: 2.4
        Windows Directory: C:\Windows
        System Directory: C:\Windows\system32
        Boot Device: \Device\HarddiskVolume3
        Locale: United Kingdom
        Hardware Abstraction Layer: Version = "6.0.6002.18005"
        User Name: DOM-PC\DOM
        Time Zone: GMT Standard Time
        Installed Physical Memory (RAM): 3.00 GB
        Total Physical Memory: 2.96 GB
        Available Physical Memory: 1.38 GB
        Total Virtual Memory: 5.89 GB
        Available Virtual Memory: 4.25 GB
        Page File Space: 3.00 GB
        Page File: C:\pagefile.sys


  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:

        class A {
            int  a;
            int  b;
            int* v;
        };

    where:

      - The memory for v will be allocated in the constructor.
      - v will be allocated once, when an object of type A is created, and will never need to be resized.
      - The size of v will vary across all instances of A.

    The application will potentially have a huge number of such objects and mostly needs to stream a large number of them through the CPU, performing only very simple computations on the member variables.

      - Could v being dynamically allocated mean that an instance of A and its member v are not located together in memory?
      - What tools and techniques can be used to test whether this fragmentation is a performance bottleneck?
      - If such fragmentation is a performance issue, are there any techniques that would allow A and v to be allocated in a contiguous region of memory (one is sketched below)?
      - Or are there any techniques to aid memory access, such as a prefetching scheme? For example: get an object of type A, then operate on the other member variables while prefetching v.
      - If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance?

    The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, and the GCC or MSVC compiler.
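
    One technique for the contiguity question, sketched under the assumption that a single over-allocated block is acceptable: allocate A and its array together and point v just past the object. The create/destroy helpers are illustrative, not from the original post.

        #include <cstddef>
        #include <new>

        class A {
        public:
            int* v;   // points just past the object itself

            // Allocate the object and its array in one contiguous block.
            static A* create(std::size_t n) {
                void* raw = ::operator new(sizeof(A) + n * sizeof(int));
                return new (raw) A();   // placement-new into the single block
            }
            static void destroy(A* p) {
                p->~A();
                ::operator delete(p);
            }

        private:
            A() : v(reinterpret_cast<int*>(this + 1)), a(0), b(0) {}
            int a;
            int b;
        };

    Because A and its array now share one allocation, streaming through a collection of such objects touches adjacent memory instead of chasing a second pointer.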


  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project-management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop.

    Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place.

    The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a prerequisite for White to work properly). Is that all I need to do, or are there additional steps required?


  • Powershell 2.0 Hang When Run From MsDeploy pre- post- ops using c/

    - by SonOfNun
    I am trying to invoke PowerShell during the preSync call of an MSDeploy command, but PowerShell does not exit its process after it has been called. The command (from the command line):

        "tools/MSDeploy/msdeploy.exe" -verb:sync -preSync:runCommand="powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/MyInstallPath/deploy.ps1 Set-WebAppOffline Uninstall-Service ",waitInterval=60000 -usechecksum -source:dirPath="build/for-deployment" -dest:wmsvc=BLUEPRINT-X86,username=deployer,password=deployer,dirPath=C:/MyInstallPath

    I used a hack from http://therightstuff.de/2010/02/06/How-We-Practice-Continuous-Integration-And-Deployment-With-MSDeploy.aspx that gets the PowerShell process and kills it, but that didn't work. I also tried taskkill and the Sysinternals equivalent, but nothing will kill the process so that MSDeploy errors out. The command is executed, but then just sits there. Any ideas what might be causing PowerShell to hang like this? I have found a few other similar issues around the web, but no answers. The environment is Windows Server 2003, using PowerShell 2.0.

    UPDATE: Here is a .vbs script I now use to invoke my PowerShell command (run it with 'cscript.exe path/to/script.vbs'):

        Option Explicit
        Dim oShell, appCmd, oShellExec
        Set oShell = CreateObject("WScript.Shell")
        appCmd = "powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted " & _
                 "-Command ""&{ . c:/development/materialstesting/deploy/web/deploy.ps1; Set-WebAppOffline }"" "
        Set oShellExec = oShell.Exec(appCmd)
        oShellExec.StdIn.Close()


  • How to use VC++ intrinsic functions w/o run-time library

    - by Adrian McCarthy
    I'm involved in one of those challenges where you try to produce the smallest possible binary, so I'm building my program without the C or C++ run-time library (RTL). I don't link to the DLL version or the static version. I don't even #include the header files. I have this working fine.

    For some code constructs, the compiler generates calls to memset(). For example:

        struct MyStruct {
            int foo;
            int bar;
        };

        MyStruct blah = {};  // calls memset()

    Since I don't include the RTL, this results in a missing symbol at link time. I've been getting around this by avoiding those constructs. For the given example, I'll explicitly initialize the struct:

        MyStruct blah;
        blah.foo = 0;
        blah.bar = 0;

    But memset() can be useful, so I tried adding my own implementation. It works fine in Debug builds, even for the places where the compiler generates an implicit call to memset(). But in Release builds, I get an error saying that I cannot define an intrinsic function. You see, in Release builds, intrinsic functions are enabled, and memset() is an intrinsic.

    I would love to use the intrinsic for memset() in my Release builds, since it's probably inlined and smaller and faster than my implementation. But I seem to be in a catch-22: if I don't define memset(), the linker complains that it's undefined; if I do define it, the compiler complains that I cannot define an intrinsic function. I've tried adding #pragma intrinsic(memset), with and without declarations of memset, but no luck.

    Does anyone know the right combination of definition, declaration, #pragma, and compiler and linker flags to get an intrinsic function without pulling in RTL overhead? Visual Studio 2008, x86, Windows XP+.
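
    For reference, MSVC documents #pragma function as the way to compile a user definition of an otherwise-intrinsic function, which addresses the "cannot define an intrinsic" half of the catch-22. A minimal sketch, assuming the pragma's per-translation-unit scope lets other files keep the inlined intrinsic while this definition satisfies the linker (not verified against the exact VS2008 flags in the post):

        #include <stddef.h>

        // Tell the compiler the definition below is an ordinary function,
        // not the intrinsic, so /Oi release builds stop rejecting it.
        #pragma function(memset)
        extern "C" void* memset(void* dest, int value, size_t count)
        {
            unsigned char* p = static_cast<unsigned char*>(dest);
            while (count--)
                *p++ = static_cast<unsigned char>(value);
            return dest;
        }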


  • Why do bind1st and bind2nd require constant function objects?

    - by rlbond
    So, I was writing a C++ program which would allow me to take control of the entire world. I was all done writing the final translation unit, but I got an error:

        error C3848: expression having type 'const `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>'
        would lose some const-volatile qualifiers in order to call
        'void `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>::operator ()(const point::Point &,const int &)'
        with
        [
            T=SideCounter,
            BinaryFunction=std::plus<int>
        ]
        c:\program files (x86)\microsoft visual studio 9.0\vc\include\functional(324) :
        while compiling class template member function 'void std::binder2nd<_Fn2>::operator ()(point::Point &) const'
        with
        [
            _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>>
        ]
        c:\users\****\documents\visual studio 2008\projects\TAKE_OVER_THE_WORLD\grid_divider.cpp(361) :
        see reference to class template instantiation 'std::binder2nd<_Fn2>' being compiled
        with
        [
            _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>>
        ]

    I looked in the specification of binder2nd and there it was: it takes a const Adaptable Binary Function. So, not a big deal, I thought. I'd just use boost::bind instead, right? Wrong! Now my take-over-the-world program takes too long to compile (bind is used inside a template which is instantiated quite a lot)! At this rate, my nemesis is going to take over the world first! I can't let that happen; he uses Java!

    So can someone tell me why this design decision was made? It seems like an odd one. I guess I'll have to make some of the elements of my class mutable for now...
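
    The error itself comes from binder2nd invoking the stored functor through a const wrapper, so the functor's own operator() must be const-qualified. A hedged sketch of that fix; ElementAccumulator's real members are not shown in the post, so the ones here are invented for illustration:

        #include <functional>

        namespace point { struct Point { int x, y; }; }   // stand-in for the post's type

        template <typename T, typename BinaryFunction>
        struct ElementAccumulator
        {
            T* sums;   // the pointee can still be updated inside a const call

            // const-qualified operator(): this is what binder2nd's const
            // wrapper needs in order to invoke the stored functor.
            void operator()(const point::Point& p, const int& n) const
            {
                // ... accumulate n into *sums for cell p ...
            }
        };

    With operator() const-qualified, std::bind2nd(ElementAccumulator<...>(...), x) compiles without the C3848, and no member needs to become mutable unless it genuinely changes inside the call.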


  • fesetround with MSVC x64

    - by mr grumpy
    I'm porting some code to Windows (sigh) and need to use fesetround. MSVC doesn't support C99, so for x86 I copied an implementation from MinGW and hacked it about:

        //__asm__ volatile ("fnstcw %0;": "=m" (_cw));
        __asm { fnstcw _cw }
        _cw &= ~(FE_TONEAREST | FE_DOWNWARD | FE_UPWARD | FE_TOWARDZERO);
        _cw |= mode;
        //__asm__ volatile ("fldcw %0;" : : "m" (_cw));
        __asm { fldcw _cw }
        if (has_sse) {
            unsigned int _mxcsr;
            //__asm__ volatile ("stmxcsr %0" : "=m" (_mxcsr));
            __asm { stmxcsr _mxcsr }
            _mxcsr &= ~0x6000;
            _mxcsr |= (mode << __MXCSR_ROUND_FLAG_SHIFT);
            //__asm__ volatile ("ldmxcsr %0" : : "m" (_mxcsr));
            __asm { ldmxcsr _mxcsr }
        }

    The commented lines are the originals for GCC; the uncommented ones are for MSVC. This appears to work. However, the x64 cl.exe doesn't support inline asm, so I'm stuck. Is there some code out there I can "borrow" for this? (I've spent hours with Google.) Or will I have to go on a two-week detour to learn some assembly and figure out how to get/use MASM? Any advice is appreciated. Thank you.
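
    One RTL-level alternative that avoids inline asm entirely: MSVC's _controlfp_s in <float.h> sets the rounding mode on both x86 and x64, covering the SSE control word as well. A minimal sketch, assuming the caller passes MSVC's _RC_* constants rather than the C99 FE_* ones:

        #include <float.h>

        // A sketch of fesetround over MSVC's CRT; pass one of _RC_NEAR,
        // _RC_DOWN, _RC_UP or _RC_CHOP.  _controlfp_s updates the x87 and
        // SSE rounding fields together and compiles for both x86 and x64.
        static int my_fesetround(int mode)
        {
            unsigned int current;
            return _controlfp_s(&current, (unsigned int)mode, _MCW_RC);  // 0 on success
        }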


  • How to determine values saved on the stack?

    - by Brian
    I'm doing some experimenting and would like to be able to see what is saved on the stack during a system call (the saved state of the userland process). According to http://lxr.linux.no/#linux+v2.6.30.1/arch/x86/kernel/entry_32.S, the values of the various registers are saved at particular offsets from the stack pointer. Here is the code I have been trying to use to examine what is saved on the stack (this is in a custom system call I have created):

        asm("movl 0x1C(%esp), %ecx");
        asm("movl %%ecx, %0" : "=r" (value));

    where value is an unsigned long. As of right now, this value is not what is expected (it shows 0 where the saved user value of ds should be). Am I correctly accessing the offset from the stack pointer?

    Another possibility: could I use a debugger such as GDB to examine the stack contents while in the kernel? I don't have much experience with debugging and am not sure how to debug code inside the kernel. Any help is much appreciated.


  • Using Tcl DSL in Python

    - by Sridhar Ratnakumar
    I have a bunch of Python functions. Let's call them foo, bar and baz. They accept a variable number of string arguments and do other sophisticated things (like accessing the network). I want the "user" (let's assume he is only familiar with Tcl) to write scripts in Tcl using those functions. Here's an example (taken from MacPorts) that the user might come up with:

        post-configure {
            if {[variant_isset universal]} {
                set conflags ""
                foreach arch ${configure.universal_archs} {
                    if {${arch} == "i386"} {append conflags "x86 "} else {
                        if {${arch} == "ppc64"} {append conflags "ppc_64 "} else {
                            append conflags ${arch} " "
                        }
                    }
                }
                set profiles [exec find ${worksrcpath} -name "*.pro"]
                foreach profile ${profiles} {
                    reinplace -E "s|^(CONFIG\[ \\t].*)|\\1 ${conflags}|" ${profile}
                    # Cures an isolated case
                    system "cd ${worksrcpath}/designer && \
                        ${qt_dir}/bin/qmake -spec ${qt_dir}/mkspecs/macx-g++ -macx \
                        -o Makefile python.pro"
                }
            }
        }

    Here, variant_isset, reinplace and so on (everything other than the Tcl builtins) are implemented as Python functions. if, foreach, set, etc. are normal Tcl constructs. post-configure is a Python function that accepts, well, a Tcl code block that can later be executed (which in turn would obviously end up calling the above-mentioned Python "functions"). Is this possible to do in Python? If so, how?

        from Tkinter import *
        root = Tk()
        root.tk.eval('puts [array get tcl_platform]')

    is the only integration I know of, which is obviously very limited (not to mention the fact that it starts up an X11 server on Mac).


  • Linux C debugging library to detect memory corruptions

    - by calandoa
    When working some time ago on an embedded system with a simple MMU, I used to program the MMU dynamically to detect memory corruption. For instance, at some moment at runtime, the foo variable was overwritten with some unexpected data (probably by a dangling pointer or whatever). So I added additional debugging code:

      - at init, the memory used by foo was marked as a forbidden region to the MMU;
      - each time foo was accessed on purpose, access to the region was allowed just before and forbidden again just after;
      - an MMU irq handler was added to dump the master and the address responsible for the violation.

    This was actually a kind of watchpoint, but directly self-handled by the code itself.

    Now I would like to reuse the same trick, but on an x86 platform. The problem is that I am very far from understanding how the MMU works on this platform and how it is used by Linux, but I wonder if any library/tool/system call already exists to deal with this problem. Note that I am aware that various tools exist, like Valgrind or GDB, to manage memory problems, but as far as I know, none of these tools can be dynamically reconfigured by the debugged code. I am mainly interested in user space under Linux, but any info on kernel mode or under Windows is also welcome! (A sketch of one user-space approach follows.)
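
    For the user-space Linux case, the nearest self-reconfigurable equivalent I can sketch is mprotect() plus a SIGSEGV handler: keep the guarded variable alone in its own page, revoke access by default, and re-enable it around intentional uses. A minimal sketch, assuming the variable can live in a dedicated page:

        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>
        #include <unistd.h>

        static int* foo;   // guarded variable, alone in its own page

        static void on_violation(int, siginfo_t* info, void*) {
            // dump the faulting address, like the MMU irq handler did
            fprintf(stderr, "forbidden access at %p\n", info->si_addr);
            _exit(1);
        }

        int main() {
            long page = sysconf(_SC_PAGESIZE);
            void* mem = NULL;
            posix_memalign(&mem, page, page);
            foo = static_cast<int*>(mem);

            struct sigaction sa = {};
            sa.sa_sigaction = on_violation;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, NULL);

            mprotect(foo, page, PROT_NONE);              // forbidden by default

            mprotect(foo, page, PROT_READ | PROT_WRITE); // allow on purpose
            *foo = 42;                                   // intentional access
            mprotect(foo, page, PROT_NONE);              // forbid again

            *foo = 13;  // stray access: the handler fires and reports it
            return 0;
        }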


  • How does the PATH environment variable switch my executable from using msvcr90 to msvcr80?

    - by Runner
        #include <gtk/gtk.h>

        int main( int argc, char *argv[] )
        {
            GtkWidget *window;

            gtk_init (&argc, &argv);
            window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
            gtk_widget_show (window);
            gtk_main ();
            return 0;
        }

    I tried putting various versions of MSVCR80.dll under the same directory as the generated executable (via CMake), but none matched. Is there a general solution for this kind of problem?

    UPDATE: Some answers recommend installing the VS redist, but I'm not sure whether or not it will affect my installed Visual Studio 9; can someone confirm?

    Manifest file of the executable:

        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel>
              </requestedPrivileges>
            </security>
          </trustInfo>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8"
                                processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity>
            </dependentAssembly>
          </dependency>
        </assembly>

    It seems the manifest file says it should use MSVCR90, so why does it keep reporting a missing MSVCR80.dll?

    FOUND: After spending several hours on it, I finally found it's caused by this setting in PATH:

        D:\MATLAB\R2007b\bin\win32

    After removing it, everything works fine. But why can that setting change my running executable from using msvcr90 to msvcr80?


  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors.

    Context: At about the same time desktop processors introduced out-of-order execution, which made performance hard to predict, they also (perhaps not coincidentally) introduced special instructions to get very precise timings. Examples of these instructions are rdtsc on x86 and mftb on PowerPC. These instructions gave timings that were more precise than a system call could ever allow, and let programmers micro-benchmark their hearts out, for better or for worse. On a yet more modern processor with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained.

    Question: Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer. Some systems may allow cores to be turned off, leaving only one running. I know Mac OS X Leopard does, when the right preference pane is installed from the Developer Tools. Do you think this would make rdtsc safe to use again?

    More context: Please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that an optimization whose gains cannot be measured by timing the whole application is not worth making, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.
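
    For the Linux corner of the "which API call" question, clock_gettime(CLOCK_MONOTONIC) is the usual core-stable answer (older glibc needs -lrt at link time). A minimal micro-benchmark sketch, with an invented loop standing in for the code under test:

        #include <stdio.h>
        #include <time.h>

        int main() {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);    // monotonic across cores, unlike raw rdtsc

            volatile long sink = 0;
            for (long i = 0; i < 10000000L; ++i)    // stand-in for the code under test
                sink += i;

            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            printf("%.0f ns total, %.2f ns per iteration\n", ns, ns / 1e7);
            return 0;
        }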


  • LNK 1104 error to lib file - Continues despite removing includes and links

    - by user1556594
    A link error to a lib file popped up out of the blue in a C++ application of mine, after the code was working fine in my last session:

        Error 1 error LNK1104: cannot open file
        '..........\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\lib\fmodex_vc.lib'

    I triple-checked that my project directories were set up correctly to link to the lib file, that the file existed in said directory, and that it was a working version of the .lib. My next step was to remove the includes and links to the file, to bypass the error and work on the rest of my code until the problem was solved. The error remains, however, despite:

      - commenting out absolutely every include relating to the lib;
      - commenting out absolutely every line of code dependent on the includes;
      - removing the directory from VC++ Directories in the project properties;
      - checking that the Additional Library Directories field was also clear of references.

    To my understanding this should have made the library and related code virtually non-existent to the compiler. What am I missing? The library itself is fmodex_vc.lib, part of the FMOD API for providing sound to interactive applications. Again, the application was working one session but failed to compile the next. I hadn't touched the code since, so this led me to believe some aspect of VS is at fault. I'd like to avoid the time involved in re-installing if possible, as I'm on the clock for a review tomorrow evening and there are a few more things I'd like to smooth out before then. If necessary, however, I won't hesitate. Very much appreciate the help.


  • How do I patch a Windows API at runtime so that it returns 0 in x64?

    - by Jorge Vasquez
    In x86, I get the function address using GetProcAddress() and write a simple XOR EAX,EAX; RET over it. Simple and effective. How do I do the same in x64?

        bool DisableSetUnhandledExceptionFilter()
        {
            const BYTE PatchBytes[5] = { 0x33, 0xC0, 0xC2, 0x04, 0x00 }; // XOR EAX,EAX; RET 4;

            // Obtain the address of SetUnhandledExceptionFilter
            HMODULE hLib = GetModuleHandle( _T("kernel32.dll") );
            if( hLib == NULL ) return false;
            BYTE* pTarget = (BYTE*)GetProcAddress( hLib, "SetUnhandledExceptionFilter" );
            if( pTarget == 0 ) return false;

            // Patch SetUnhandledExceptionFilter
            if( !WriteMemory( pTarget, PatchBytes, sizeof(PatchBytes) ) ) return false;

            // Ensure the patched bytes are not stale in the instruction cache
            FlushInstructionCache( GetCurrentProcess(), pTarget, sizeof(PatchBytes) );

            // Success
            return true;
        }

        static bool WriteMemory( BYTE* pTarget, const BYTE* pSource, DWORD Size )
        {
            // Check parameters
            if( pTarget == 0 ) return false;
            if( pSource == 0 ) return false;
            if( Size == 0 ) return false;
            if( IsBadReadPtr( pSource, Size ) ) return false;

            // Modify protection attributes of the target memory page
            DWORD OldProtect = 0;
            if( !VirtualProtect( pTarget, Size, PAGE_EXECUTE_READWRITE, &OldProtect ) ) return false;

            // Write memory
            memcpy( pTarget, pSource, Size );

            // Restore memory protection attributes of the target memory page
            DWORD Temp = 0;
            if( !VirtualProtect( pTarget, Size, OldProtect, &Temp ) ) return false;

            // Success
            return true;
        }

    This example is adapted from code found here: http://www.debuginfo.com/articles/debugfilters.html#overwrite
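
    A hedged sketch of the x64 patch itself: the x64 convention passes arguments in registers and the caller cleans the stack, and writing EAX zero-extends into RAX, so a plain xor/ret should return 0. The bytes below are my assumption, not taken from the linked article:

        // xor eax, eax ; ret -- writing EAX zero-extends into RAX on x64,
        // and the caller cleans the stack, so no RET imm16 is needed.
        const BYTE PatchBytes64[3] = { 0x33, 0xC0, 0xC3 };

    The rest of the routine (GetProcAddress, WriteMemory, FlushInstructionCache) should carry over unchanged.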


  • Android - Two different programs at the same time in an emulator

    - by Léa Massiot
    I'm new to Android development. My OS is WinXP. I'm trying to install two different applications on an Android emulator from the command line. I have two Android projects, "ap1" and "ap2". In the "ap1" project directory I ran "ant debug" and got an "ap1.apk" executable. In the "ap2" project directory I ran "ant debug" and got an "ap2.apk" executable.

    I created an Android Virtual Device:

        android create avd -n avd1 -t 1 --abi x86

    I launched the emulator:

        emulator -avd avd1 -verbose

    The "adb devices" command returns:

        List of devices attached
        emulator-5554   device

    I installed the first program on the emulator and ran it. It worked:

        adb -s emulator-5554 install "ap1.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1

    I installed the second program on the emulator and ran it. It worked too:

        adb -s emulator-5554 install "ap2.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg2.android/.AnotherActivity1

    All this works, except that the second executable "replaced" the first one. If I try to run the first executable, I get an error:

        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1
        Starting: Intent { act=android.intent.action.MAIN cmp=my.pkg.android/.Activity1 }
        Error type 3
        Error: Activity class {my.pkg.android/my.pkg.android.Activity1} does not exist.

    It looks like I can't have the two apps at the same time in the emulator. What do you think? What do I have to do to have the two apps available (at the same time) in the emulator? Thank you for helping. Best regards.


  • Visual Studio 2010 randomly unable to debug WCF service.

    - by rossisdead
    I'm running Visual Studio 2010 on a Windows 7 x64 machine, and occasionally VS gives me the good old "The remote procedure could not be debugged. This usually indicates that debugging has not been enabled on the server" error that a lot of people ask about. My problem, though, is that it seems to happen randomly (anywhere from a few minutes to a few hours in), and after I've already made plenty of successful calls to the service. It doesn't prevent the service from working: it still returns values and doesn't throw any errors. The only difference is that the annoying dialog pops up every time I start to debug my application.

    I should mention that I'm connecting to the WCF service from a WPF application. If I launch the web site the service is part of, I don't get the dialog.

    A few of the things I've tried that do not work:

      - killing and restarting the server;
      - compiling the web server in x86;
      - enabling tracing, but I couldn't find any problems.

    Is this just a bug in Visual Studio 2010, or is there something I'm missing?


  • How to modify the keyboard interrupt (under Windows XP) from a C++ program?

    - by rockr90
    Hi everyone! We have been given a little project (as part of my OS course) to make a Windows program that modifies keyboard input so that it transforms any lowercase character entered into an uppercase one (without using caps-lock), so when you type on the keyboard you see what you're typing transformed into uppercase!

    I have done this quite easily using Turbo C by calling geninterrupt() and using the variables _AH and _AL. I read a character without echo using:

        _AH = 0x07;          // Read a character without echo
        geninterrupt(0x21);  // DOS interrupt

    Then, to transform it into an uppercase letter, I mask the fifth bit:

        _AL = _AL & 0xDF;    // Mask the entered character with 11011111

    and then display the character using any output routine.

    Now, this solution only works under old DOS C compilers. What we intend to do is make a close or similar solution using any modern C/C++ compiler under Windows XP. I first thought of modifying the keyboard ISR so that it masks the fifth bit of any entered character to turn it uppercase, but I do not know exactly how to do this. Second, I wanted to create a Win32 console program to either do the same thing (but to no avail) or make a Windows-compatible solution, but I do not know which functions to use! Third, I thought of making a Windows program that modifies the ISR directly to suit my needs, and I'm still looking for how to do this.

    So please, if you could help me out on this, I would greatly appreciate it! Thank you in advance! (I'm using Windows XP on Intel x86 with the MinGW GCC compiler.)
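
    Under modern Windows, the user-mode stand-in for hooking the keyboard ISR is a low-level keyboard hook. A rough sketch, with the simplifying assumption that injecting a Shift press around each letter is an acceptable way to force uppercase (it ignores modifiers the user already holds):

        #include <windows.h>

        static HHOOK g_hook;

        static LRESULT CALLBACK KbdProc(int code, WPARAM wParam, LPARAM lParam)
        {
            if (code == HC_ACTION && wParam == WM_KEYDOWN) {
                const KBDLLHOOKSTRUCT* k = (const KBDLLHOOKSTRUCT*)lParam;
                bool injected = (k->flags & LLKHF_INJECTED) != 0;  // skip our own events
                if (!injected && k->vkCode >= 'A' && k->vkCode <= 'Z') {
                    // Re-send the key with Shift held, and swallow the original.
                    INPUT in[3] = {};
                    in[0].type = INPUT_KEYBOARD; in[0].ki.wVk = VK_SHIFT;
                    in[1].type = INPUT_KEYBOARD; in[1].ki.wVk = (WORD)k->vkCode;
                    in[2].type = INPUT_KEYBOARD; in[2].ki.wVk = VK_SHIFT;
                    in[2].ki.dwFlags = KEYEVENTF_KEYUP;
                    SendInput(3, in, sizeof(INPUT));
                    return 1;   // block the original lowercase keystroke
                }
            }
            return CallNextHookEx(g_hook, code, wParam, lParam);
        }

        int main()
        {
            g_hook = SetWindowsHookEx(WH_KEYBOARD_LL, KbdProc, GetModuleHandle(NULL), 0);
            MSG msg;   // a message loop is required for low-level hooks
            while (GetMessage(&msg, NULL, 0, 0) > 0) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            UnhookWindowsHookEx(g_hook);
            return 0;
        }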


  • C char question about signed/unsigned encoding

    - by drigoSkalWalker
    Hi guys. I read that C does not define whether char is signed or unsigned, and the GCC page says it can be signed on x86 and unsigned on PowerPC and ARM. Okay, I'm writing a program with GLib, which defines char as gchar (nothing more than that, just a standardization).

    My question is: what about UTF-8? Does it use more than one block of memory? Say I have a variable:

        unsigned char *string = "My string with UTF8 enconding ~ çã";

    See, if I declare my variable as signed, will I have only 127 values (so my program would need to store more blocks of memory), or does UTF-8 change to negative values too? Sorry if I can't explain it correctly, but I think it is a bit complex.

    NOTE: Thanks for all the answers. I still don't understand how it is interpreted normally, though. I think that, like ASCII, if I have signed and unsigned chars in my program, the strings have different values, and that leads to confusion. Imagine it in UTF-8, then.
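
    Signedness changes only how the same bytes compare as integers, never how many bytes the string occupies. A small sketch showing the same UTF-8 bytes read through signed and unsigned char (the sample string is invented):

        #include <stdio.h>

        int main() {
            const char* s = "a\xC3\xA7";   /* "aç": 'ç' is the two UTF-8 bytes 0xC3 0xA7 */
            for (const char* p = s; *p; ++p) {
                /* As plain (possibly signed) char, 0xC3 may print as -61; cast
                   to unsigned char to see the byte value 195 either way. */
                printf("%d -> %u\n", (int)*p, (unsigned)(unsigned char)*p);
            }
            return 0;
        }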


  • How to fix a Java Image Fetcher error?

    - by Frank
    My code looks like this:

        private static JFileChooser fc;

        if (fc == null)
        {
            fc = new JFileChooser(Image_Dir);
            // Add a custom file filter and disable the default (Accept All) file filter.
            fc.addChoosableFileFilter(new Image_Filter());
            fc.setAcceptAllFileFilterUsed(false);
            // Add the preview pane.
            fc.setAccessory(new Image_Preview(fc));
        }

        int returnVal = fc.showDialog(JFileChooser_For_Image.this, "Get Image");  // Show it.

    After I select an image from the panel, I get the following error message:

        Exception in thread "Image Fetcher 0" java.lang.UnsatisfiedLinkError:
        Native Library C:\Program Files (x86)\Java\jre6\bin\jpeg.dll already loaded in another classloader
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.loadLibrary0(Unknown Source)
            at java.lang.System.loadLibrary(Unknown Source)
            at sun.security.action.LoadLibraryAction.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.awt.image.JPEGImageDecoder.<clinit>(Unknown Source)
            at sun.awt.image.InputStreamImageSource.getDecoder(Unknown Source)
            at sun.awt.image.FileImageSource.getDecoder(Unknown Source)
            at sun.awt.image.InputStreamImageSource.doFetch(Unknown Source)
            at sun.awt.image.ImageFetcher.fetchloop(Unknown Source)
            at sun.awt.image.ImageFetcher.run(Unknown Source)

    When I run it from an executable Jar file, it works fine, but after I wrapped it into an exe file, I got the above error. Why? How can I fix it?


  • Cannot open a SQL2000 DTS package I imported into SQL2008

    - by RJ
    I am running into a problem trying to open a SQL 2000 DTS package that I imported into SQL 2008. I set up a new server and installed a fresh copy of SQL 2008. The database I need to run is a SQL 2000 database; I moved it over with no problem, but there are a few DTS packages that need to run in legacy mode on SQL 2008. I exported the DTS packages I needed out of SQL 2000 and imported them successfully into SQL 2008. My SQL 2008 server is x64.

    I can see the DTS packages under Data Transformation Services in Legacy, but when I try to open a package, I get this message:

        SQL Server 2000 DTS Designer components are required to edit DTS packages. Install the special
        web download, "SQL Server 2000 DTS Designer components" to use this feature.
        (Microsoft.SqlServer.DtsObjectExplorerUI)

    I downloaded and installed the components and still get this error. I researched and found an article about this not working on x64, so I tried an x86 machine: I installed the SQL 2008 tools on it and tried to open the package from there, but got the same error. I have spent days on this and need help. Has anyone run across this, and can you tell me what to do? If you have solved this problem, please help me out. Thanks.


  • On C++ global operator new: why it can be replaced

    - by Jimmy
    I wrote a small program in VS2005 to test whether the C++ global operator new can be overloaded. It can.

        #include "stdafx.h"
        #include "iostream"
        #include "iomanip"
        #include "string"
        #include "new"
        #include "cstdlib"   // for malloc
        using namespace std;

        class C {
        public:
            C() { cout << "CTOR" << endl; }
        };

        void* operator new(size_t size)
        {
            cout << "my overload of global plain old new" << endl;
            // try to allocate size bytes
            void* p = malloc(size);
            return (p);
        }

        int main()
        {
            C* pc1 = new C;
            cin.get();
            return 0;
        }

    In the above, my definition of operator new is called. If I remove that function from the code, then the operator new in C:\Program Files (x86)\Microsoft Visual Studio 8\VC\crt\src\new.cpp gets called. All is good.

    However, in my opinion, my implementation of operator new does NOT overload the new in new.cpp; it CONFLICTS with it and violates the one-definition rule. Why doesn't the compiler complain about it? Or does the standard say that since operator new is so special, the one-definition rule does not apply here? Thanks.
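
    For completeness, the standard singles out the global allocation functions as replaceable, which is why a user definition is not treated as an ODR violation; a replacement is normally paired with a matching operator delete. A hedged sketch of the pair:

        #include <cstdlib>
        #include <new>

        // The global allocation functions are "replaceable": the standard
        // reserves these signatures for a user-supplied definition.
        void* operator new(std::size_t size)
        {
            if (void* p = std::malloc(size))
                return p;
            throw std::bad_alloc();   // a replacement must still report failure
        }

        void operator delete(void* p)
        {
            std::free(p);
        }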


  • Setting the OpenCV configuration in an OpenGL project produces several errors

    - by GolSa
    I have a Win32 solution which is set up for OpenGL; it works well. I want to write a function which uses functions from OpenCV, so I set up the OpenCV configuration for both x86 and x64. I commented out the OpenCV function and, just to test the correctness of the configuration, I ran it; but when I ran it on x64 I was faced with the errors below:

        Error 1 error C2065: 'GWL_HINSTANCE' : undeclared identifier
            D:\matrix\matrixProjection\src\ControllerMain.cpp 35 1 matrixProjection
        Error 2 error C2664: 'CreateDialogParamW' : cannot convert parameter 4 from
            'BOOL (__cdecl *)(HWND,UINT,WPARAM,LPARAM)' to 'DLGPROC'
            D:\matrix\matrixProjection\src\DialogWindow.cpp 47 1 matrixProjection

    Error 2 points to this line of code:

        HWND DialogWindow::create()
        {
            /* --> this line */
            handle = ::CreateDialogParam(instance, MAKEINTRESOURCE(id), parentHandle,
                                         Win::dialogProcedure, (LPARAM)controller);
            return handle;
        }

    On the Debug Win32 configuration, it runs. I use opengl32 in my project; is there any probability that it is the cause? Is there an x64 version of OpenGL? I know there is something needed in x64 mode that my solution cannot handle. I googled a lot about it, but I did not find any solution. How can I solve this?
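
    Both errors look like 32-bit-only Win32 idioms rather than anything OpenCV- or OpenGL-specific. A hedged sketch of the 64-bit-clean equivalents, assuming the dialog procedure's signature is what trips C2664:

        #include <windows.h>

        // DLGPROC-compatible signature: INT_PTR CALLBACK, not BOOL __cdecl,
        // which compiles for both Win32 and x64.
        INT_PTR CALLBACK dialogProcedure(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            return FALSE;   // let the dialog manager do its default handling
        }

        void example(HWND hwnd)
        {
            // GWL_HINSTANCE does not exist in 64-bit builds; the portable
            // spelling is GWLP_HINSTANCE with GetWindowLongPtr.
            HINSTANCE inst = (HINSTANCE)GetWindowLongPtr(hwnd, GWLP_HINSTANCE);
            (void)inst;
        }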


  • Problem when compressing SWF in Linux with java.util.zip

    - by CaioToOn
    Hi! I've created a servlet that changes the binaries of a SWF file and outputs it to the user. The SWF is compressed with ZLIB by default, so I inflate it, change the binaries, deflate, and output the result.

    Everything was running right on Windows Server 2008 (and also on 2003). Currently we need to change the server to Linux, and there this servlet is somehow outputting a corrupted SWF file. What could be the problem?

    What intrigues me most is that there is no difference between the Windows and Linux servlet versions. Is there any undocumented Linux-specific behaviour in the java.util.zip package?

    My Windows server (where the servlet works):

        Windows Server 2008 (6.0 - x86)
        Apache 2.2.11
        Tomcat 6.0.16.0
        Java JDK 1.6.0_12-b04

    My CentOS server (where the servlet doesn't work):

        CentOS 5.4 (2.6.18-164.15.1.el5 - i386)
        Apache 2.2.3
        Tomcat 6.0.16.0
        Java JDK 1.6.0_12-b04

    Any lead would be appreciated! Cheers, CaioToOn!


  • Visual Studio 2010 Bootstrap Folder?

    - by Matma
    We currently run a VS2005 app/environment, and I'm investigating upgrading to 2010. But before I can say "yes, it's all good, go buy it, bossman," I have to confirm that a bootstrapper for a third-party driver install, which is included on our installation CD, can still be used/included (it's a USB security dongle, so drivers MUST be included/set up for our app to run). It is in the list of available prerequisites, but I didn't put it in the VS2010 bootstrapper folder, so is this coming from my VS2005 install? Or from the fact that I had previously selected it in the installer, and VS2010 saved/loaded that? Either way, it has a little error icon and says it's not found, so I get errors on build.

    I want to resolve this and have it in the list of available prerequisites, as we did for 2005, i.e. by copying it to the C:\Program Files (x86)\Microsoft Visual Studio 8\SDK\v2.0\BootStrapper folder. But I can't seem to find where the bootstrappers are for VS2010. Does it not use them? Naturally, I've searched the entire VS folder structure for appropriate files/folders, but found nothing. Where/how is this now done?

