Search Results

Search found 2568 results on 103 pages for 'x86'.


  • GIT clone repo across local file system

    - by Jon
    Hi all, I am a complete noob when it comes to Git; I have just been taking my first steps over the last few days. I set up a repo on my laptop and pulled down the trunk from an SVN project (I had some issues with branches and have not got them working), but all seems OK there.

    I now want to be able to pull or push between the laptop and my main desktop. The laptop is handy on the train, as I spend 2 hours a day travelling and can get some good work done, but my main machine at home is great for development. So I want to be able to push/pull between the laptop and the main computer when I get home. I thought the simplest way of doing this would be to share the code folder across the LAN and do:

        git clone file://192.168.10.51/code

    Unfortunately this doesn't seem to be working for me. I open a Git Bash prompt in C:\code (the shared folder for both machines) and type the above command; this is what I get back:

        Initialized empty Git repository in C:/code/code/.git/
        fatal: 'C:/Program Files (x86)/Git/code' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    How can I share the repository between the two machines in the simplest way? There will be other locations that act as official storage points and places where the other devs, the CI server etc. will pull from; this is just so that I can work on the same repo across two machines. Thanks

    Read the article

  • How to manage multiple python versions ?

    - by Gyom
    short version: how can I get rid of the multiple-versions-of-Python nightmare?

    long version: over the years I've used several versions of Python and, what is worse, several extensions to Python (e.g. pygame, pylab, wxPython...). Each time it was on a different setup, with different OSes, sometimes different architectures (like my old PowerPC Mac).

    Nowadays I'm using a Mac (OS X 10.6 on x86-64) and it's a dependency nightmare each time I want to revive a script older than a few months. Python itself already comes in three different flavours in /usr/bin (2.5, 2.6, 3.1), but I had to install 2.4 from MacPorts for pygame, and something else (I cannot remember what) forced me to install all three others from MacPorts as well, so at the end of the day I'm the happy owner of seven (!) instances of Python on my system.

    But that's not the problem. The problem is that none of them has the right (i.e. the same) set of libraries installed, some of them are 32-bit, some 64-bit, and now I'm pretty much lost.

    For example, right now I'm trying to run a three-year-old script (not written by me) which used matplotlib/numpy to draw a real-time plot within a rectangle of a wxWidgets window. But I'm failing miserably: py26-wxpython from MacPorts won't install, the stock Python has wxWidgets included but also has some conflict between 32 bits and 64 bits, and it doesn't have numpy... what a mess!

    Obviously, I'm doing things the wrong way. How do you usually cope with all this chaos?

    Read the article

  • Excel Automation Addin UDFs not accessible

    - by Eric
    I created the following automation add-in:

        namespace AutomationAddin
        {
            [Guid("6652EC43-B48C-428a-A32A-5F2E89B9F305")]
            [ClassInterface(ClassInterfaceType.AutoDual)]
            [ComVisible(true)]
            public class MyFunctions
            {
                public MyFunctions() { }

                #region UDFs
                public string ToUpperCase(string input) { return input.ToUpper(); }
                #endregion

                [ComRegisterFunctionAttribute]
                public static void RegisterFunction(Type type)
                {
                    Registry.ClassesRoot.CreateSubKey(GetSubKeyName(type, "Programmable"));
                    RegistryKey key = Registry.ClassesRoot.OpenSubKey(GetSubKeyName(type, "InprocServer32"), true);
                    key.SetValue("", System.Environment.SystemDirectory + @"\mscoree.dll", RegistryValueKind.String);
                }

                [ComUnregisterFunctionAttribute]
                public static void UnregisterFunction(Type type)
                {
                    Registry.ClassesRoot.DeleteSubKey(GetSubKeyName(type, "Programmable"), false);
                }

                private static string GetSubKeyName(Type type, string subKeyName)
                {
                    System.Text.StringBuilder s = new System.Text.StringBuilder();
                    s.Append(@"CLSID\{");
                    s.Append(type.GUID.ToString().ToUpper());
                    s.Append(@"}\");
                    s.Append(subKeyName);
                    return s.ToString();
                }
            }
        }

    I build it and it registers just fine. I open Excel 2003, go to Tools > Add-Ins, click the Automation button, and the add-in appears in the list. I add it and it shows up in the add-ins list, but the functions themselves don't appear. If I type one in it doesn't work, and if I look in the function wizard my add-in doesn't show up as a category and the functions are not in the list.

    I am using Excel 2003 on Windows 7 x86. I built the project with Visual Studio 2010. This add-in worked fine on Windows XP built with Visual Studio 2008.

    Read the article

  • is it possible to turn off vdso on glibc side?

    - by heroxbd
    I am aware that passing vdso=0 to the kernel can turn this feature off, and that the dynamic linker in glibc can automatically detect and use the vDSO feature from the kernel.

    Here is the problem I've met. There is a RHEL 5.6 box (kernel 2.6.18-238.el5) in my institution where I only have normal user access, probably suffering from RHEL bug 673616. As I compile a toolchain of linux-headers-3.9/gcc-4.7.2/glibc-2.17/binutils-2.23 on top of it, the gcc bootstrap fails because the stage2 cc1 cannot be run:

        Program received signal SIGSEGV, Segmentation fault.
        0x00002aaaaaaca6eb in ?? ()
        (gdb) info sharedlibrary
        From                To                  Syms Read   Shared Object Library
        0x00002aaaaaaabba0  0x00002aaaaaac3249  Yes (*)     /home/benda/gnto/lib64/ld-linux-x86-64.so.2
        0x00002aaaaacd29b0  0x00002aaaaace2480  Yes (*)     /home/benda/gnto/usr/lib/libmpc.so.3
        0x00002aaaaaef2cd0  0x00002aaaaaf36c08  Yes (*)     /home/benda/gnto/usr/lib/libmpfr.so.4
        0x00002aaaab14f280  0x00002aaaab19b658  Yes (*)     /home/benda/gnto/usr/lib/libgmp.so.10
        0x00002aaaab3b3060  0x00002aaaab3b3b50  Yes (*)     /home/benda/gnto/lib/libdl.so.2
        0x00002aaaab5b87b0  0x00002aaaab5c4bb0  Yes (*)     /home/benda/gnto/usr/lib/libz.so.1
        0x00002aaaab7d0e70  0x00002aaaab80f62c  Yes (*)     /home/benda/gnto/lib/libm.so.6
        0x00002aaaaba70d40  0x00002aaaabb81aec  Yes (*)     /home/benda/gnto/lib/libc.so.6
        (*): Shared library is missing debugging information.

    and a simple program

        #include <sys/time.h>
        #include <stdio.h>

        int main()
        {
            struct timeval tim;
            gettimeofday(&tim, NULL);
            return 0;
        }

    gets a segmentation fault in the same way if compiled against glibc-2.17 and the xgcc from stage1. Both cc1 and the test program can be run on another RHEL 5.5 box (kernel 2.6.18-194.26.1.el5) with gcc-4.7.2/glibc-2.17/binutils-2.23 as a normal user.

    I cannot simply upgrade the box to a newer RHEL version, nor can I turn the vDSO off via sysctl or /proc. The question is: is there a way to compile glibc so that it turns off the vDSO unconditionally?

    Read the article

  • why doesn't winmain set the errorlevel?

    - by Brian R. Bondy
        int APIENTRY _tWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow)
        {
            MessageBox(NULL, _T("This should return 90 no?"), _T("OK"), MB_OK);
            return 90;
        }

    Why does the above program correctly display the message box but not set the error level? I compile this program with the name a.exe. Then from a command prompt I type:

        c:\> a.exe
        (message box is displayed, I press OK)
        c:\> echo %ERRORLEVEL%
        0

    I get the same results if I do exit(90); right before the return; it still says 0. I also tried to start the program via CreateProcess and obtain the result with GetExitCodeProcess, but it also returns 0 to me. I did error checking to ensure it was all started correctly.

    I originally saw this problem in a more complex program, but made this simple program to verify it. The results are the same: both programs that have WinMain always return 0. I tried x64, x86, Unicode and MBCS compiling options; all give 0 as the error level/status code.
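    One well-known wrinkle here is that cmd.exe does not wait for GUI-subsystem programs to exit, so %ERRORLEVEL% still holds the previous command's value when it is echoed. Below is a minimal sketch of a launcher that waits before reading the exit code; the a.exe name comes from the question, everything else is illustrative:

        #include <windows.h>
        #include <stdio.h>

        int main()
        {
            STARTUPINFOW si = { sizeof(si) };
            PROCESS_INFORMATION pi = { 0 };
            wchar_t cmd[] = L"a.exe";                      // the program from the question

            if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
            {
                printf("CreateProcess failed: %lu\n", GetLastError());
                return 1;
            }

            WaitForSingleObject(pi.hProcess, INFINITE);    // wait for the GUI app to exit

            DWORD code = 0;
            GetExitCodeProcess(pi.hProcess, &code);        // now reflects the WinMain return value
            printf("exit code: %lu\n", code);

            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
            return (int)code;
        }

    From a batch file, "start /wait a.exe" followed by "echo %ERRORLEVEL%" behaves the same way: the wait is what matters.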

    Read the article

  • Orphan IBM JVM process

    - by Nicholas Key
    Hi people, I have this issue about an orphan IBM JVM process being created in the process tree. For example:

        C:\Program Files\IBM\WebSphere\AppServer\bin>wsadmin -lang jython -f "C:\Hello.py"

    Hello.py has this simple implementation:

        import time
        i = 0
        while (1):
            i = i + 1
            print "Hello World " + str(i)
            time.sleep(3.0)

    My machine has the following JVM information:

        C:\Program Files\WebSphere\java\bin>java -verbose:sizes -version
          -Xmca32K        RAM class segment increment
          -Xmco128K       ROM class segment increment
          -Xmns0K         initial new space size
          -Xmnx0K         maximum new space size
          -Xms4M          initial memory size
          -Xmos4M         initial old space size
          -Xmox1624995K   maximum old space size
          -Xmx1624995K    memory maximum
          -Xmr16K         remembered set size
          -Xlp4K          large page size
                          available large page sizes: 4K 4M
          -Xmso256K       operating system thread stack size
          -Xiss2K         java thread stack initial size
          -Xssi16K        java thread stack increment
          -Xss256K        java thread stack maximum size
        java version "1.6.0"
        Java(TM) SE Runtime Environment (build pwi3260sr6ifix-20091015_01(SR6+152211+155930+156106))
        IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Windows Server 2003 x86-32 jvmwi3260sr6-20091001_43491 (JIT enabled, AOT enabled)
        J9VM - 20091001_043491
        JIT  - r9_20090902_1330ifx1
        GC   - 20090817_AA)
        JCL  - 20091006_01

    While the program was running I tried to kill it, and subsequently I found an orphan IBM JVM process in the process tree. Is there a way to fix this issue? Why is there an orphan process in the first place? Is there something wrong with my code? I really don't believe that my simplistic code is wrongly implemented. Any suggestions?

    Read the article

  • How can you start a process from asp.net without interfering with the website?

    - by Sem Dendoncker
    Hi, we have an ASP.NET application that is able to create .air files. To do this we use the following code:

        System.Diagnostics.Process process = new System.Diagnostics.Process();
        //process.StartInfo.FileName = strBatchFile;
        if (File.Exists(@"C:\Program Files\Java\jre6\bin\java.exe"))
        {
            process.StartInfo.FileName = @"C:\Program Files\Java\jre6\bin\java.exe";
        }
        else
        {
            process.StartInfo.FileName = @"C:\Program Files (x86)\Java\jre6\bin\java.exe";
        }
        process.StartInfo.Arguments = GetArguments();
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.RedirectStandardError = true;
        process.StartInfo.UseShellExecute = false;
        process.PriorityClass = ProcessPriorityClass.Idle;
        process.Start();
        string strOutput = process.StandardOutput.ReadToEnd();
        string strError = process.StandardError.ReadToEnd();
        HttpContext.Current.Response.Write(strOutput + "<p>" + strError + "</p>");
        process.WaitForExit();

    The problem is that sometimes the CPU of the server reaches 100%, causing the application to run very slowly and even lose sessions (we think this is the cause). Is there any other solution for generating the .air files, or for running an external process without interfering with the ASP.NET application? Cheers, M.

    Read the article

  • JAVASCRIPT ENABLED [closed]

    - by kirchoffs415
    Hi, I hope somebody can help. I keep getting the following message when I log on:

        Your Javascript is disabled. Limited functionality is available.

    It will stay for maybe a day, sometimes two. I have uninstalled JavaScript and reinstalled, but it is still the same. I am using Chrome. Any help would be gratefully received; many thanks, Dominic.

    P.S. My system spec is as follows:

        OS Name: Microsoft® Windows Vista™ Home Premium
        Version: 6.0.6002 Service Pack 2 Build 6002
        Other OS Description: Not Available
        OS Manufacturer: Microsoft Corporation
        System Name: DOM-PC
        System Manufacturer: Dell Inc.
        System Model: Inspiron 1545
        System Type: X86-based PC
        Processor: Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 MHz, 2 Core(s), 2 Logical Processor(s)
        BIOS Version/Date: Dell Inc. A05, 25/02/2009
        SMBIOS Version: 2.4
        Windows Directory: C:\Windows
        System Directory: C:\Windows\system32
        Boot Device: \Device\HarddiskVolume3
        Locale: United Kingdom
        Hardware Abstraction Layer: Version = "6.0.6002.18005"
        User Name: DOM-PC\DOM
        Time Zone: GMT Standard Time
        Installed Physical Memory (RAM): 3.00 GB
        Total Physical Memory: 2.96 GB
        Available Physical Memory: 1.38 GB
        Total Virtual Memory: 5.89 GB
        Available Virtual Memory: 4.25 GB
        Page File Space: 3.00 GB
        Page File: C:\pagefile.sys

    Read the article

  • Error loading shared libraries

    - by user459811
    Hi, I'm running Eclipse on Ubuntu using the g++ compiler, and I'm trying to run a sample program that uses Xerces. The build produced no errors; however, when I attempt to run the program I receive this error:

        error while loading shared libraries: libxerces-c-3.1.so: cannot open shared object file: No such file or directory

    libxerces-c-3.1.so is in the directory /opt/lib, which I have included as a library in Eclipse. The file is there when I check the folder. When I perform an 'echo $LD_LIBRARY_PATH', /opt/lib is also listed. Any ideas where the problem lies? Thanks.

    An 'ldd libxerces-c-3.1.so' command yields the following output:

        linux-vdso.so.1 =>  (0x00007fffeafff000)
        libnsl.so.1 => /lib/libnsl.so.1 (0x00007fa3d2b83000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x00007fa3d2966000)
        libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007fa3d265f000)
        libm.so.6 => /lib/libm.so.6 (0x00007fa3d23dc000)
        libc.so.6 => /lib/libc.so.6 (0x00007fa3d2059000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00007fa3d1e42000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fa3d337d000)

    Read the article

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:

        class A
        {
            int a;
            int b;
            int* v;
        };

    where:

    - The memory for v will be allocated in the constructor.
    - v will be allocated once when an object of type A is created and will never need to be resized.
    - The size of v will vary across all instances of A.

    The application will potentially have a huge number of such objects and mostly needs to stream a large number of them through the CPU, but only needs to perform very simple computations on the member variables.

    - Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory?
    - What tools and techniques can be used to test whether this fragmentation is a performance bottleneck?
    - If such fragmentation is a performance issue, are there any techniques that would allow A and v to be allocated in a contiguous region of memory? Or are there any techniques to aid memory access, such as a prefetching scheme (for example, get an object of type A and operate on the other member variables whilst prefetching v)?
    - If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance?

    The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, compiled using either GCC or MSVC compilers.
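    For the locality part of the question, one common layout trick is to fold the array into the same allocation as its owning object. A minimal sketch, illustrative only, relying on the stated guarantee that v is sized once at creation and never resized:

        #include <cstddef>
        #include <new>

        struct A
        {
            int  a;
            int  b;
            int* v;   // points just past this object, so A and its array share one block

            static A* create(std::size_t n)
            {
                void* mem = ::operator new(sizeof(A) + n * sizeof(int));
                A* obj = new (mem) A();                      // construct A at the front of the block
                obj->v = reinterpret_cast<int*>(obj + 1);    // the array lives right behind it
                return obj;
            }

            static void destroy(A* obj)
            {
                obj->~A();
                ::operator delete(obj);
            }
        };

    The fixed-size-array variant asked about in the last bullet gives the same locality with even less bookkeeping, at the cost of sizing every instance for the worst case.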

    Read the article

  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1.

    I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop.

    Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place.

    The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a prerequisite for White to work properly). Is that all I need to do, or are there additional steps required?

    Read the article

  • Powershell 2.0 Hang When Run From MsDeploy pre- post- ops using c/

    - by SonOfNun
    I am trying to invoke PowerShell during the preSync call in an MSDeploy command, but PowerShell does not exit the process after it has been called. The command (from the command line):

        "tools/MSDeploy/msdeploy.exe" -verb:sync -preSync:runCommand="powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/MyInstallPath/deploy.ps1 Set-WebAppOffline Uninstall-Service ",waitInterval=60000 -usechecksum -source:dirPath="build/for-deployment" -dest:wmsvc=BLUEPRINT-X86,username=deployer,password=deployer,dirPath=C:/MyInstallPath

    I used a hack from here (http://therightstuff.de/2010/02/06/How-We-Practice-Continuous-Integration-And-Deployment-With-MSDeploy.aspx) that gets the PowerShell process and kills it, but that didn't work. I also tried taskkill and the Sysinternals equivalent, but nothing will kill the process so that MSDeploy errors out. The command is executed, but then just sits there. Any ideas what might be causing PowerShell to hang like this? I have found a few other similar issues around the web but no answers. The environment is Windows 2003, using PowerShell 2.0.

    UPDATE: Here is a .vbs script I now use to invoke my PowerShell command. Invoke it using 'cscript.exe path/to/script.vbs':

        Option Explicit
        Dim oShell, appCmd, oShellExec
        Set oShell = CreateObject("WScript.Shell")
        appCmd = "powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command ""&{ . c:/development/materialstesting/deploy/web/deploy.ps1; Set-WebAppOffline }"" "
        Set oShellExec = oShell.Exec(appCmd)
        oShellExec.StdIn.Close()

    Read the article

  • How to use VC++ intrinsic functions w/o run-time library

    - by Adrian McCarthy
    I'm involved in one of those challenges where you try to produce the smallest possible binary, so I'm building my program without the C or C++ run-time libraries (RTL). I don't link to the DLL version or the static version; I don't even #include the header files. I have this working fine.

    For some code constructs, the compiler generates calls to memset(). For example:

        struct MyStruct {
            int foo;
            int bar;
        };

        MyStruct blah = {};   // calls memset()

    Since I don't include the RTL, this results in a missing symbol at link time. I've been getting around this by avoiding those constructs. For the given example, I'll explicitly initialize the struct:

        MyStruct blah;
        blah.foo = 0;
        blah.bar = 0;

    But memset() can be useful, so I tried adding my own implementation. It works fine in Debug builds, even for those places where the compiler generates an implicit call to memset(). But in Release builds, I get an error saying that I cannot define an intrinsic function. You see, in Release builds intrinsic functions are enabled, and memset() is an intrinsic.

    I would love to use the intrinsic for memset() in my Release builds, since it's probably inlined and smaller and faster than my implementation. But I seem to be in a catch-22: if I don't define memset(), the linker complains that it's undefined; if I do define it, the compiler complains that I cannot define an intrinsic function. I've tried adding #pragma intrinsic(memset) with and without declarations of memset, but no luck.

    Does anyone know the right combination of definition, declaration, #pragma, and compiler and linker flags to get an intrinsic function without pulling in RTL overhead? Visual Studio 2008, x86, Windows XP+.
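    One approach often used for CRT-free builds is to keep the intrinsic enabled everywhere except the single translation unit that provides the fallback definition. A hedged sketch (it assumes Release builds use /Oi; the implementation is deliberately naive):

        #include <stddef.h>

        extern "C" void* memset(void* dest, int value, size_t count);

        #pragma function(memset)   // treat memset as a plain function in this one TU only

        extern "C" void* memset(void* dest, int value, size_t count)
        {
            // volatile keeps the optimizer from collapsing the loop back into
            // a memset call, which would recurse
            volatile unsigned char* p = static_cast<unsigned char*>(dest);
            while (count--)
                *p++ = static_cast<unsigned char>(value);
            return dest;
        }

    Every other translation unit keeps the intrinsic expansion, and the linker finds this definition for the cases the compiler chooses not to expand inline.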

    Read the article

  • Why do bind1st and bind2nd require constant function objects?

    - by rlbond
    So, I was writing a C++ program which would allow me to take control of the entire world. I was all done writing the final translation unit, but I got an error:

        error C3848: expression having type 'const `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>'
            would lose some const-volatile qualifiers in order to call
            'void `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>::operator ()(const point::Point &,const int &)'
            with [ T=SideCounter, BinaryFunction=std::plus<int> ]
        c:\program files (x86)\microsoft visual studio 9.0\vc\include\functional(324) :
            while compiling class template member function 'void std::binder2nd<_Fn2>::operator ()(point::Point &) const'
            with [ _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>> ]
        c:\users\****\documents\visual studio 2008\projects\TAKE_OVER_THE_WORLD\grid_divider.cpp(361) :
            see reference to class template instantiation 'std::binder2nd<_Fn2>' being compiled
            with [ _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>> ]

    I looked in the specification of binder2nd and there it was: it takes a const AdaptableBinaryFunction.

    So, not a big deal, I thought; I'll just use boost::bind instead, right? Wrong! Now my take-over-the-world program takes too long to compile (bind is used inside a template which is instantiated quite a lot)! At this rate my nemesis is going to take over the world first. I can't let that happen; he uses Java!

    So can someone tell me why this design decision was made? It seems like an odd decision. I guess I'll have to make some of the elements of my class mutable for now...
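    The error comes from std::binder2nd storing its functor and invoking it from a const operator(), so the wrapped operator() has to be callable on a const object. A small sketch of a functor shape that bind2nd accepts (all names illustrative; state to mutate lives behind a pointer member rather than in the functor itself):

        #include <algorithm>
        #include <functional>
        #include <vector>

        struct WeightedCount
        {
            // the typedefs make this an "adaptable" binary functor for bind2nd
            typedef int  first_argument_type;
            typedef int  second_argument_type;
            typedef void result_type;

            explicit WeightedCount(long* total) : total_(total) {}

            void operator()(const int& value, const int& weight) const   // note the trailing const
            {
                *total_ += static_cast<long>(value) * weight;
            }

            long* total_;
        };

        int main()
        {
            std::vector<int> values(10, 2);
            long total = 0;
            std::for_each(values.begin(), values.end(),
                          std::bind2nd(WeightedCount(&total), 3));
            return static_cast<int>(total);   // 10 * 2 * 3 = 60
        }

    This targets the C++03-era binders the question uses; marking members mutable, as mentioned above, is the other way to make a const operator() workable.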

    Read the article

  • fesetround with MSVC x64

    - by mr grumpy
    I'm porting some code to Windows (sigh) and need to use fesetround. MSVC doesn't support C99, so for x86 I copied an implementation from MinGW and hacked it about:

        //__asm__ volatile ("fnstcw %0;": "=m" (_cw));
        __asm { fnstcw _cw }
        _cw &= ~(FE_TONEAREST | FE_DOWNWARD | FE_UPWARD | FE_TOWARDZERO);
        _cw |= mode;
        //__asm__ volatile ("fldcw %0;" : : "m" (_cw));
        __asm { fldcw _cw }
        if (has_sse)
        {
            unsigned int _mxcsr;
            //__asm__ volatile ("stmxcsr %0" : "=m" (_mxcsr));
            __asm { stmxcsr _mxcsr }
            _mxcsr &= ~ 0x6000;
            _mxcsr |= (mode << __MXCSR_ROUND_FLAG_SHIFT);
            //__asm__ volatile ("ldmxcsr %0" : : "m" (_mxcsr));
            __asm { ldmxcsr _mxcsr }
        }

    The commented lines are the originals for gcc; the uncommented ones are for MSVC. This appears to work. However, the x64 cl.exe doesn't support inline asm, so I'm stuck.

    Is there some code out there I can "borrow" for this? (I've spent hours with Google.) Or will I have to go on a two-week detour to learn some assembly and figure out how to get/use MASM? Any advice is appreciated. Thank you.
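    If inline asm is off the table, one commonly suggested substitute on MSVC is _controlfp_s from <float.h>, which adjusts the x87 and SSE rounding state together and is available in both x86 and x64 builds. A minimal sketch (mapping the C99 FE_* constants to the MSVC selectors is left out):

        #include <float.h>

        /* Returns 0 on success, non-zero on failure. */
        static int set_rounding_upward(void)
        {
            unsigned int current = 0;
            /* _MCW_RC selects the rounding-control bits; _RC_NEAR, _RC_DOWN,
               _RC_UP and _RC_CHOP are the MSVC counterparts of the C99 modes. */
            return _controlfp_s(&current, _RC_UP, _MCW_RC);
        }

        int main(void)
        {
            return set_rounding_upward();
        }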

    Read the article

  • How to determine values saved on the stack?

    - by Brian
    I'm doing some experimenting and would like to be able to see what is saved on the stack during a system call (the saved state of the user-land process). According to http://lxr.linux.no/#linux+v2.6.30.1/arch/x86/kernel/entry_32.S, the values of the various registers are saved at particular offsets from the stack pointer.

    Here is the code I have been trying to use to examine what is saved on the stack (this is in a custom system call I have created):

        asm("movl 0x1C(%esp), %ecx");
        asm("movl %%ecx, %0" : "=r" (value));

    where value is an unsigned long. As of right now, this value is not what is expected (it shows a 0 saved for the user value of ds). Am I correctly accessing the offset from the stack pointer?

    Another possibility: could I use a debugger such as GDB to examine the stack contents while in the kernel? I don't have much experience with debugging and am not sure how to debug code inside the kernel. Any help is much appreciated.
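    One thing worth noting: two separate asm statements give the compiler no guarantee that %ecx survives between them, which by itself can produce surprising values. For reference, a sketch of reading the same information without hand-computed offsets, through the kernel's own pt_regs view of the saved user state (written against a 32-bit x86 kernel of roughly the 2.6.30 vintage; field names differ on other versions, so treat this as an assumption-laden illustration):

        /* Hypothetical custom syscall that dumps the saved user registers. */
        #include <linux/kernel.h>
        #include <linux/linkage.h>
        #include <linux/sched.h>
        #include <asm/ptrace.h>

        asmlinkage long sys_show_saved_regs(void)
        {
            struct pt_regs *regs = task_pt_regs(current);   /* registers saved on syscall entry */

            printk(KERN_INFO "saved ds=%lx ip=%lx sp=%lx ax=%lx\n",
                   regs->ds, regs->ip, regs->sp, regs->ax);
            return 0;
        }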

    Read the article

  • Using Tcl DSL in Python

    - by Sridhar Ratnakumar
    I have a bunch of Python functions. Let's call them foo, bar and baz. They accept a variable number of string arguments and do other sophisticated things (like accessing the network).

    I want the "user" (let's assume he is only familiar with Tcl) to write scripts in Tcl using those functions. Here's an example (taken from MacPorts) that a user can come up with:

        post-configure {
            if {[variant_isset universal]} {
                set conflags ""
                foreach arch ${configure.universal_archs} {
                    if {${arch} == "i386"} {append conflags "x86 "} else {
                        if {${arch} == "ppc64"} {append conflags "ppc_64 "} else {
                            append conflags ${arch} " "
                        }
                    }
                }
                set profiles [exec find ${worksrcpath} -name "*.pro"]
                foreach profile ${profiles} {
                    reinplace -E "s|^(CONFIG\[ \\t].*)|\\1 ${conflags}|" ${profile}
                    # Cures an isolated case
                    system "cd ${worksrcpath}/designer && \
                        ${qt_dir}/bin/qmake -spec ${qt_dir}/mkspecs/macx-g++ -macx \
                        -o Makefile python.pro"
                }
            }
        }

    Here, variant_isset, reinplace and so on (other than the Tcl built-ins) are implemented as Python functions. if, foreach, set, etc. are normal Tcl constructs. post-configure is a Python function that accepts, well, a Tcl code block that can later be executed (which in turn would obviously end up calling the above-mentioned Python "functions").

    Is this possible to do in Python? If so, how?

        from Tkinter import *; root = Tk(); root.tk.eval('puts [array get tcl_platform]')

    is the only integration I know of, which is obviously very limited (not to mention the fact that it starts up an X11 server on the Mac).

    Read the article

  • Linux C debugging library to detect memory corruptions

    - by calandoa
    When working some time ago on an embedded system with a simple MMU, I used to program the MMU dynamically to detect memory corruption. For instance, at some moment at runtime the foo variable was overwritten with some unexpected data (probably by a dangling pointer or whatever). So I added this additional debugging code:

    - at init, the memory used by foo was marked as a forbidden region to the MMU;
    - each time foo was accessed on purpose, access to the region was allowed just before and then forbidden just after;
    - an MMU IRQ handler was added to dump the master and the address responsible for the violation.

    This was actually some kind of watchpoint, but directly self-handled by the code itself.

    Now I would like to reuse the same trick, but on an x86 platform. The problem is that I am very far from understanding how the MMU works on this platform and how it is used by Linux, but I wonder if any library/tool/system call already exists to deal with this problem.

    Note that I am aware that various tools exist, like Valgrind or GDB, to manage memory problems, but as far as I know none of these tools can be dynamically reconfigured by the debugged code. I am mainly interested in user space under Linux, but any info on kernel mode or under Windows is also welcome!
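    In user space on Linux, the closest equivalent of that MMU trick is mprotect() plus a SIGSEGV handler that reports the faulting address. A self-contained sketch with illustrative names; the handler simply re-opens the page so the faulting write can complete (fprintf is not async-signal-safe, but is fine for a demo):

        #define _GNU_SOURCE
        #include <signal.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        static void  *guarded;      /* page holding the "foo" being watched */
        static size_t page_size;

        static void on_fault(int sig, siginfo_t *si, void *ctx)
        {
            (void)sig; (void)ctx;
            fprintf(stderr, "unexpected access at %p\n", si->si_addr);
            /* re-open the page so the faulting instruction can complete */
            mprotect(guarded, page_size, PROT_READ | PROT_WRITE);
        }

        int main(void)
        {
            struct sigaction sa;

            page_size = (size_t)sysconf(_SC_PAGESIZE);
            guarded   = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            memset(&sa, 0, sizeof(sa));
            sa.sa_sigaction = on_fault;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, NULL);

            mprotect(guarded, page_size, PROT_NONE);   /* "forbid" the region */
            *(int *)guarded = 42;                      /* stray write: handler fires */
            printf("value afterwards: %d\n", *(int *)guarded);
            return 0;
        }

    Re-protecting after each legitimate access, as in the embedded version, just means wrapping the intended reads and writes in a pair of mprotect() calls.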

    Read the article

  • How does PATH environment affect my running executable from using msvcr90 to msvcr80 ???

    - by Runner
        #include <gtk/gtk.h>

        int main( int argc, char *argv[] )
        {
            GtkWidget *window;
            gtk_init (&argc, &argv);
            window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
            gtk_widget_show (window);
            gtk_main ();
            return 0;
        }

    I tried putting various versions of MSVCR80.dll under the same directory as the generated executable (built via CMake), but none matched. Is there a general solution for this kind of problem?

    UPDATE: Some answers recommend installing the VC redist, but I'm not sure whether or not it will affect my installed Visual Studio 9; can someone confirm?

    Manifest file of the executable:

        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel>
              </requestedPrivileges>
            </security>
          </trustInfo>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity>
            </dependentAssembly>
          </dependency>
        </assembly>

    It seems the manifest file says it should use MSVCR90, so why does it keep reporting a missing MSVCR80.dll?

    FOUND: After spending several hours on it, I finally found it was caused by this setting in PATH:

        D:\MATLAB\R2007b\bin\win32

    After removing it, everything works fine. But why can that setting make my executable use msvcr80 instead of msvcr90?

    Read the article

  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors.

    Context: At about the same time desktop processors introduced out-of-order execution that made performance hard to predict, they, perhaps not coincidentally, also introduced special instructions to get very precise timings. Examples of these instructions are rdtsc on x86 and rftb on PowerPC. These instructions gave timings that were more precise than could ever be allowed by a system call, and allowed programmers to micro-benchmark their hearts out, for better or for worse. On yet more modern processors with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained.

    Question: Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer. Some systems may allow turning off cores, leaving only one running. I know Mac OS X Leopard does when the right Preference Pane is installed from the Developer Tools. Do you think that this makes rdtsc safe to use again?

    More context: Please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that an optimization whose gains cannot be measured by timing the whole application is not worth making, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.
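    One partial mitigation that is often suggested is to pin the measuring thread to a single core, so that consecutive time-stamp-counter reads come from the same counter; this does nothing about frequency scaling, only about cross-core skew. A Windows-flavoured sketch, with a placeholder workload:

        #include <windows.h>
        #include <intrin.h>
        #include <iostream>

        int main()
        {
            // Keep this thread on CPU 0 so both reads come from the same core's TSC.
            SetThreadAffinityMask(GetCurrentThread(), 1);

            unsigned __int64 start = __rdtsc();
            volatile long sink = 0;
            for (int i = 0; i < 1000000; ++i)    // stand-in for the code under test
                sink += i;
            unsigned __int64 stop = __rdtsc();

            std::cout << (stop - start) << " cycles (still subject to frequency scaling)\n";
            return 0;
        }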

    Read the article

  • How do I patch a Windows API at runtime so that it returns 0 in x64?

    - by Jorge Vasquez
    In x86, I get the function address using GetProcAddress() and write a simple XOR EAX,EAX; RET; in it. Simple and effective. How do I do the same in x64?

        bool DisableSetUnhandledExceptionFilter()
        {
            const BYTE PatchBytes[5] = { 0x33, 0xC0, 0xC2, 0x04, 0x00 };   // XOR EAX,EAX; RET;

            // Obtain the address of SetUnhandledExceptionFilter
            HMODULE hLib = GetModuleHandle( _T("kernel32.dll") );
            if( hLib == NULL )
                return false;

            BYTE* pTarget = (BYTE*)GetProcAddress( hLib, "SetUnhandledExceptionFilter" );
            if( pTarget == 0 )
                return false;

            // Patch SetUnhandledExceptionFilter
            if( !WriteMemory( pTarget, PatchBytes, sizeof(PatchBytes) ) )
                return false;

            // Ensures out of cache
            FlushInstructionCache(GetCurrentProcess(), pTarget, sizeof(PatchBytes));

            // Success
            return true;
        }

        static bool WriteMemory( BYTE* pTarget, const BYTE* pSource, DWORD Size )
        {
            // Check parameters
            if( pTarget == 0 ) return false;
            if( pSource == 0 ) return false;
            if( Size == 0 ) return false;
            if( IsBadReadPtr( pSource, Size ) ) return false;

            // Modify protection attributes of the target memory page
            DWORD OldProtect = 0;
            if( !VirtualProtect( pTarget, Size, PAGE_EXECUTE_READWRITE, &OldProtect ) )
                return false;

            // Write memory
            memcpy( pTarget, pSource, Size );

            // Restore memory protection attributes of the target memory page
            DWORD Temp = 0;
            if( !VirtualProtect( pTarget, Size, OldProtect, &Temp ) )
                return false;

            // Success
            return true;
        }

    This example is adapted from code found here: http://www.debuginfo.com/articles/debugfilters.html#overwrite
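    For the x64 case, a sketch of how the same patch usually looks there: the return value travels in RAX and an x64 callee pops nothing off the stack, so three bytes of xor eax,eax; ret are enough. This reuses the WriteMemory helper from the question (same translation unit assumed); treat it as an illustration rather than a drop-in:

        #include <windows.h>
        #include <tchar.h>

        static bool WriteMemory(BYTE* pTarget, const BYTE* pSource, DWORD Size);   // as defined in the question

        bool DisableSetUnhandledExceptionFilter64()
        {
            // xor eax,eax (33 C0) zero-extends into all of RAX; ret (C3) takes no
            // operand because x64 callees do not clean arguments off the stack.
            const BYTE PatchBytes[3] = { 0x33, 0xC0, 0xC3 };

            HMODULE hLib = GetModuleHandle(_T("kernel32.dll"));
            if (hLib == NULL)
                return false;

            BYTE* pTarget = (BYTE*)GetProcAddress(hLib, "SetUnhandledExceptionFilter");
            if (pTarget == NULL)
                return false;

            if (!WriteMemory(pTarget, PatchBytes, sizeof(PatchBytes)))
                return false;

            FlushInstructionCache(GetCurrentProcess(), pTarget, sizeof(PatchBytes));
            return true;
        }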

    Read the article

  • 32/64 bit problems with Eclipse CDT on Ubuntu

    - by waffleShirt
    I have just recently started running Linux on my PC and I am trying to start learning OpenGL. I am using the latest version of Eclipse CDT as my IDE, and my system is Ubuntu 10.10, 64-bit version.

    The problem I am having is that whenever I try to run a build from within the IDE I get the error message "Launch Failed. Binary Not Found." I've done a lot of looking around on the internet but I still can't solve the problem. I know for a fact that the binary is built; it can be run from a terminal window.

    According to posts I have seen, the problem is that Eclipse tries to run a 32-bit binary, but GCC 4.4.5 defaults to 64-bit binaries on a 64-bit system. I've seen a lot of information about using the -m32 flag in makefiles, but then I still get the following output in Eclipse:

        make all
        g++ -o HelloWorld2 main.o
        /usr/bin/ld: i386 architecture of input file `main.o' is incompatible with i386:x86-64 output
        /usr/bin/ld: final link failed: Invalid operation
        collect2: ld returned 1 exit status
        make: *** [HelloWorld2] Error 1

    What I would like to know is how to either get Eclipse to launch the 64-bit binaries, or have Eclipse correctly compile 32-bit binaries.

    Read the article

  • LNK 1104 error to lib file - Continues despite removing includes and links

    - by user1556594
    A link error to a lib file popped up out of the blue in a C++ application of mine, after the code was working fine in my last session:

        Error 1  error LNK1104: cannot open file '..........\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\lib\fmodex_vc.lib'

    I triple-checked that my project directories were set up correctly to link to the lib file, that the file existed in said directory, and that it was a working version of the .lib.

    My next step was to remove the includes to the file and the links, to bypass the error and work on the rest of my code until the problem was solved. The error remains, however, despite:

    - Commenting out absolutely every include relating to the lib.
    - Commenting out absolutely every line of code dependent on the includes.
    - Removing the directory from VC++ Directories in the project properties.
    - Checking that the Additional Library Directories field was also clear of references.

    To my understanding this should have made the library and related code virtually non-existent to the compiler. What am I missing?

    The library itself is fmodex_vc.lib, part of the FMOD API for providing sound to interactive applications. Again, the application was working one session but failed to compile the next. I hadn't touched the code since, so this led me to believe some aspect of VS is at fault. I'd like to avoid the time involved in re-installing if possible, as I'm on the clock for a review tomorrow evening and there are a few more things I'd like to smooth out before then. If necessary, however, I won't hesitate. Very much appreciate the help.

    Read the article

  • Android - Two different programs at the same time in an emulator

    - by Léa Massiot
    I'm new to Android development. My OS is WinXP. I'm trying to install two different applications on an Android device emulator from the command line.

    I have two Android projects, "ap1" and "ap2". In the "ap1" project directory I ran "ant debug" and got an "ap1.apk" executable. In the "ap2" project directory I ran "ant debug" and got an "ap2.apk" executable.

    I created an Android Virtual Device:

        android create avd -n avd1 -t 1 --abi x86

    I launched the emulator:

        emulator -avd avd1 -verbose

    The "adb devices" command returns:

        List of devices attached
        emulator-5554   device

    I installed the first program on the emulator and ran it; it worked:

        adb -s emulator-5554 install "ap1.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1

    I installed the second program on the emulator and ran it; it worked too:

        adb -s emulator-5554 install "ap2.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg2.android/.AnotherActivity1

    All this works, except that the second executable "replaced" the first one. If I try to run the first executable, I get an error:

        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1
        Starting: Intent { act=android.intent.action.MAIN cmp=my.pkg.android/.Activity1 }
        Error type 3
        Error: Activity class {my.pkg.android/my.pkg.android.Activity1} does not exist.

    It looks like I can't have the two apps at the same time in the emulator. What do you think? What do I have to do to have the two apps available (at the same time) in the emulator? Thank you for helping. Best regards.

    Read the article

  • Visual Studio 2010 randomly unable to debug WCF service.

    - by rossisdead
    I'm running Visual Studio 2010 on a Windows 7 x64 machine, and occasionally VS gives me the good old "The remote procedure could not be debugged. This usually indicates that debugging has not been enabled on the server" error that a lot of people ask about.

    My problem, though, is that it seems to do this randomly (it can be anywhere from a few minutes to a few hours), and after I've made plenty of successful calls to the service already. It doesn't prevent the service from working; it still returns values and doesn't throw any errors. The only difference is that the annoying dialog pops up every time I start to debug my application. I should mention that I'm connecting to the WCF service from a WPF application. If I launch the web site the service is part of, I don't get the dialog.

    A few of the things I've tried that do not work:

    - Killing and restarting the server.
    - Compiling the web server in x86.
    - Enabling tracing, but I couldn't find any problems.

    Is this just a bug in Visual Studio 2010, or is there something I'm missing?

    Read the article
