Search Results

Search found 3162 results on 127 pages for 'compiled'.

  • DllImport Based on OS Platform

    - by Ngu Soon Hui
    I have a mixture of unmanaged code (back end) and managed code (front end), so I need to call the unmanaged code from my managed code using interop techniques and the DllImport attribute. I've compiled two versions of the unmanaged code, for 32-bit and 64-bit Windows, named service32.dll and service64.dll respectively. So in my .NET code I have to declare a DllImport for both DLLs:

        [DllImport(@"service32.dll")] // for 32-bit invocation
        public static extern void SimpleFunction();

        [DllImport(@"service64.dll")] // for 64-bit invocation
        public static extern void SimpleFunction();

    and call one or the other depending on which platform my application is running on. The issue is that every unmanaged function has to be declared twice, once for 32-bit and once for 64-bit. This duplicates work, and every time I change the signature of an unmanaged function I have to modify it in two places. Is there any way to change the argument to DllImport so that the correct DLL is invoked automagically, depending on the platform?
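
    A minimal sketch of a common workaround (my addition, not from the original question): keep both P/Invoke stubs private under distinct names via EntryPoint, and expose one public wrapper that dispatches on the process pointer size. Class and method names here are illustrative.

        using System;
        using System.Runtime.InteropServices;

        static class NativeService
        {
            // Both stubs bind to the same exported name, "SimpleFunction".
            [DllImport("service32.dll", EntryPoint = "SimpleFunction")]
            private static extern void SimpleFunction32();

            [DllImport("service64.dll", EntryPoint = "SimpleFunction")]
            private static extern void SimpleFunction64();

            // Callers see a single method; the right DLL is chosen at run time.
            public static void SimpleFunction()
            {
                if (IntPtr.Size == 8)   // 64-bit process
                    SimpleFunction64();
                else
                    SimpleFunction32();
            }
        }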

  • #define vs enum in an embedded environment (How do they compile?)

    - by Alexander Kondratskiy
    This question has been done to death, and I would agree that enums are the way to go. However, I am curious how enums compile in the final code: #defines are just textual replacements, but do enums add anything to the compiled binary, or are the two equivalent at that stage? When writing firmware where memory is very limited, is there any advantage, no matter how small, to using #defines? Thanks! EDIT: As requested by the comment below, by embedded I mean a digital camera. Thanks for the answers! I am all for enums!
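
    A minimal sketch of why the two usually end up identical (my addition, not from the original question): an enumerator used in a constant expression is folded into the same immediate operand a #define would produce, so neither consumes data memory.

        #define LED_PIN_DEF 13
        enum { LED_PIN_ENUM = 13 };

        int pin_from_define(void) { return LED_PIN_DEF;  }  /* typically: mov eax, 13 */
        int pin_from_enum(void)   { return LED_PIN_ENUM; }  /* same instruction emitted */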

  • Why is this assembly program loaded at the address 0B3D:0000?

    - by viperchaos
    I have seen an assembly program in a book about assembly:

        assume cs:code
        code segment
            dw 0123h,0456h,0789h,0abch,0defh,0fedh,0cbah,0987h
            mov bx,0
            mov ax,0
            mov cx,8
        s:  add ax,cs:[bx]
            add bx,2
            loop s
            mov ax,4c00h
            int 21h
        code ends
        end

    The program's function is to add eight numbers. The author compiled it under DOS and used DEBUG to see how it is loaded. Running the R command gave:

        DS = 0B2D  ES = 0B2D  SS = 0B3D  CS = 0B3D  IP = 0000

    The author then said that this program is loaded at the address 0B3D:0000. I'm confused: why is it loaded at 0B3D:0000? Is this because of the Program Segment Prefix (PSP)? If the answer is the existence of the PSP, what is in the PSP?
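
    A worked check of the segment arithmetic (my addition, not from the book): DOS builds a 256-byte Program Segment Prefix immediately before the loaded image, and DS initially points at it.

        ; PSP size = 100h bytes = 10h paragraphs
        ; DS = 0B2Dh              ; segment address of the PSP
        ; 0B2Dh + 10h = 0B3Dh     ; first paragraph after the PSP
        ; so the program image begins at CS:IP = 0B3D:0000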

  • How to make pdb recognize that the source has changed between runs?

    - by user88028
    From what I can tell, pdb does not recognize when the source code has changed between "runs". That is, if I'm debugging, notice a bug, fix that bug, and rerun the program in pdb (i.e. without exiting pdb), pdb will not recompile the code. I'll still be debugging the old version of the code, even if pdb lists the new source code. So, does pdb not update the compiled code as the source changes? If not, is there a way to make it do so? I'd like to be able to stay in a single pdb session in order to keep my breakpoints and such. FWIW, gdb will notice when the program it's debugging changes underneath it, though only on a restart of that program. This is the behavior I'm trying to replicate in pdb.
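
    A hedged workaround sketch (my suggestion, not from the question): modules stay cached in sys.modules across a pdb restart, so the edited module can be recompiled from the (Pdb) prompt first; "mymodule" is a hypothetical name for the file you just fixed.

        (Pdb) import mymodule
        (Pdb) reload(mymodule)   # Python 2 builtin: re-reads and recompiles mymodule.py
        (Pdb) restart            # re-runs the program, now using the reloaded module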

  • Shortcuts and MSI updates

    - by Filip Navara
    We have an installer for an application that is built with WiX, and each version is delivered as a new setup package. The installer creates an advertised shortcut in the Start menu, and users often copy this shortcut to the desktop or another location. During an application update a major upgrade is performed and the old shortcuts are removed, which causes the copies made by users to disappear. This is a major annoyance for them. Is there a way to update advertised shortcuts when doing an MSI major upgrade (i.e. a different product code)? Or is there a way to allow minor updates by just running the setup.msi file (without passing a REINSTALLMODE option on the command line)? Or is the only way to solve this problem to use non-advertised shortcuts?
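
    A hedged sketch of the non-advertised alternative mentioned at the end (my addition; ids, GUIDs and paths are illustrative, not from the original installer): giving the WiX Shortcut element an explicit Target makes it a plain file shortcut, which survives being copied because it does not embed the product's advertising descriptor.

        <Component Id="StartMenuShortcut" Guid="PUT-GUID-HERE">
          <Shortcut Id="AppShortcut"
                    Directory="ProgramMenuFolder"
                    Name="My Application"
                    Target="[INSTALLDIR]MyApp.exe"
                    WorkingDirectory="INSTALLDIR" />
          <!-- non-advertised shortcuts need an explicit component key path -->
          <RegistryValue Root="HKCU" Key="Software\MyCompany\MyApp"
                         Name="shortcut" Type="integer" Value="1" KeyPath="yes" />
        </Component>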

  • Tool for checking source for dependencies on specific Java versions

    - by Gregor
    Is there a quick way (e.g. a tool) to detect, from the source (or maybe even from compiled classes), which parts of an application call Java API methods that are only available in a specific Java version? (e.g. which parts of my app are Java6-specific.) I don't necessarily want to hop through all the resulting class-version errors, and I'd like to avoid the trial-and-error method. Let's say I only want to document which parts of an application won't work if they were written for, e.g., Java6 and run on a version 5 JDK. Is there something like this? Google did not help this time, nor did I find any solution here (a rare case indeed:)
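
    A hedged pointer (my addition, not from the question): the Animal Sniffer Maven plugin checks compiled classes against a published JDK API signature and fails the build on calls that don't exist in that version. The coordinates and version below are illustrative; check the plugin's documentation.

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>animal-sniffer-maven-plugin</artifactId>
          <configuration>
            <signature>
              <!-- verify the code only uses the Java 5 API -->
              <groupId>org.codehaus.mojo.signature</groupId>
              <artifactId>java15</artifactId>
              <version>1.0</version>
            </signature>
          </configuration>
        </plugin>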

  • How can I create variable variables in ActionScript?

    - by Daniel Angel
    I'm doing this:

        mcomp7d101.onRelease = function() { getURL("javascript:Compartir(" + id7d101 + ");"); }
        mcomp7d102.onRelease = function() { getURL("javascript:Compartir(" + id7d101 + ");"); }
        mcomp7d103.onRelease = function() { getURL("javascript:Compartir(" + id7d101 + ");"); }
        mcomp7d150.onRelease = function() { getURL("javascript:Compartir(" + id7d101 + ");"); }

    You get the idea :) How can I use a for loop to do something like:

        for (ii = 101; ii < 150; ii++) {
            mcomp7d + ii.onRelease = function() {
                getURL("javascript:Compartir(" + id7d + ii + ");");
            }
        }

    I'm getting a syntax error. It seems that I can't create variable variables in compiled languages.
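
    A hedged sketch of the usual AS2 answer (my addition, not from the original post): movie clips and timeline variables are properties of their parent, so they can be looked up by name with bracket access; the wrapper function pins down the per-iteration id. This assumes matching id variables (id7d101 ... id7d150) exist.

        for (var ii = 101; ii <= 150; ii++) {
            var clip = this["mcomp7d" + ii];      // resolve the clip by name
            clip.onRelease = (function (theId) {  // capture this iteration's id
                return function () {
                    getURL("javascript:Compartir(" + theId + ");");
                };
            })(this["id7d" + ii]);
        }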

  • Getting closure-compiler and Node.js to play nice

    - by bukzor
    Are there any projects that have used node.js and closure-compiler (CC for short) together? The official CC recommendation is to compile all code for an application together, but when I compile some simple node.js code which contains a require("./MyLib.js"), that line is put directly into the output, where it doesn't make any sense. I see a few options:

    1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
    2. Assume that all files will be concatenated before execution. Again this avoids the problem, but makes it harder to implement an uncompiled debug mode.
    3. Get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it?
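
    A hedged sketch of a variation on option 3 (my addition): Closure can be told about require() through an externs file, which declares symbols the compiler must leave unrenamed and uncompiled, so require() calls survive in the output without editing the compiler.

        // node-externs.js -- passed to the compiler with --externs
        /**
         * @param {string} name
         * @return {?}
         */
        var require = function(name) {};

        /** @const */
        var module = {};
        module.exports = {};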

  • Tracing/profiling instructions

    - by LeChuck2k
    Hi y'all. I'd like to statistically profile my C code at the instruction level. I need to know how many additions, multiplications, divisions, etc. I'm performing. This is not your usual run-of-the-mill code-profiling requirement: I'm an algorithm developer, and I want to estimate the cost of converting my code to hardware implementations. For this I need the instruction breakdown at run time (parsing the compiled assembly isn't sufficient, since it doesn't account for loops in the code). After looking around, it seems VMware may offer a possible solution, but I still couldn't find the specific feature that would let me trace the instruction stream of my process. Are you aware of any profiling tools which enable this?
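
    A hedged pointer (my addition, not from the question): dynamic binary instrumentation frameworks can tally executed instructions per opcode. Intel Pin, for instance, ships an example tool along these lines; the paths below are illustrative and depend on the Pin kit.

        # run the program under Pin's opcode-mix example tool
        pin -t source/tools/SimpleExamples/obj-intel64/opcodemix.so -- ./my_algorithm
        # per-mnemonic dynamic counts are written to opcodemix.out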

  • Running a Java program in a Linux terminal with -classpath

    - by Arya
    Hello, I've been trying for an hour to run the following program with the PostgreSQL JAR on the classpath:

        class Test {
            public static void main(String[] args) {
                try {
                    Class.forName("org.postgresql.Driver");
                } catch (ClassNotFoundException cnfe) {
                    System.err.println("Couldn't find Postgresql driver class!");
                }
            }
        }

    The program compiled fine with the javac command, but I'm having a hard time running it with the PostgreSQL classpath. I have postgresql-9.0-801.jdbc4.jar in the same directory as the file, and I tried the following, but none of them worked:

        java -classpath ./postgresql-9.0-801.jdbc4.jar Test
        java -classpath postgresql-9.0-801.jdbc4.jar Test
        java -classpath "postgresql-9.0-801.jdbc4.jar" Test

    What am I doing wrong? Regards!
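
    A likely fix (my addition, not from the original post): passing -classpath replaces the default classpath, which is the current directory, so Test.class itself is no longer found. Listing "." alongside the JAR restores it:

        java -classpath .:postgresql-9.0-801.jdbc4.jar Test   # ":" separator on Linux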

  • How can I optimize MVC and IIS pipeline to obtain higher speed?

    - by Andy
    Hi, I am doing performance tweaking of a simple app that uses MVC on IIS 7.5. I have a Stopwatch starting up in Application_BeginRequest, and I take a snapshot at Controller.OnActionExecuting. So I measure the time spent in the entire IIS pipeline: from receipt of the request to the moment execution finally reaches my controller. I get 700 microseconds on my 3GHz quad-core (project compiled Release x64), and I wonder where the bottleneck is, especially having heard some people say that one can get up to 8000 page loads per second with MVC. How can I optimize MVC and the IIS pipeline to obtain higher speed?
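
    A minimal sketch of the measurement described above (my reconstruction; the item key and trace output are assumptions):

        // Global.asax.cs -- start timing as soon as the request enters the app
        // (needs System.Diagnostics and System.Web)
        protected void Application_BeginRequest()
        {
            HttpContext.Current.Items["pipelineTimer"] = Stopwatch.StartNew();
        }

        // Base controller -- snapshot when execution reaches MVC
        protected override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var sw = (Stopwatch)HttpContext.Current.Items["pipelineTimer"];
            Trace.WriteLine("Pipeline latency: " + sw.Elapsed.TotalMilliseconds + " ms");
            base.OnActionExecuting(filterContext);
        }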

  • Forcing GWT to assume List is implemented as ArrayList

    - by joecks
    For some reason I'm stuck with model classes that use List as the collection type, and I would like to use the model on the client side. However, GWT fails when serializing java.util.List, even though all implementations of List in this model are backed by ArrayList. So is it possible to tell GWT to assume that List is ArrayList? Edit: GWT fails at compile time, since a possible candidate for List is also java.util.Collections.SingletonList, which cannot be compiled. I'm using GWT 2.1 and Java 1.6.

  • <function> referenced from; symbol(s) not found.

    - by jfm429
    I have a piece of C code that is used from a C++ function. At the top of my C++ file I have the line:

        #include "prediction.h"

    In prediction.h I have this:

        #ifndef prediction
        #define prediction

        #include "structs.h"

        typedef struct {
            double estimation;
            double variance;
        } response;

        response runPrediction(int obs, location* positions, double* observations,
                               int targets, location* targetPositions);

        #endif

    I also have prediction.c, which has:

        #include "prediction.h"

        response runPrediction(int obs, location* positions, double* observations,
                               int targets, location* targetPositions) {
            // code here
        }

    Now, when I call that function from my C++ file and compile (through Xcode), I get this error:

        "runPrediction(int, location*, double*, int, location*)", referenced from:
        mainFrame::respondTo(char*, int) in mainFrame.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    prediction.c is marked for compilation for the current target. I don't have any problems with other .cpp files not being compiled. Any thoughts here?
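
    A likely fix (my addition, not from the original post): prediction.c is compiled as C, so its symbol is unmangled, while the C++ caller looks for a C++-mangled name. Declaring the function with C linkage when the header is read by a C++ compiler makes the two agree:

        /* prediction.h */
        #ifdef __cplusplus
        extern "C" {
        #endif

        response runPrediction(int obs, location* positions, double* observations,
                               int targets, location* targetPositions);

        #ifdef __cplusplus
        }
        #endif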

  • Sharing some info with all DLLs pulled into a process

    - by JBRWilkinson
    Hi all, We've got an Enterprise system which has many processes (EXEs, services, DCOM servers, COM+ apps, ISAPI, MMC snapins) all of which make use of many COM components. We've recently seen failures in some of the customer deployments, but are finding it hard to troubleshoot the cause. In order to track down the problem, we've augmented the entire source with logging statements where errors occur. In order to identify which logs came from what processes, the C++ logging code (compiled into all components) uses the EXE name to name the log. This is good for some cases, but not all - COM+ apps, ISAPI and MMC snapins all have system EXE names and the logs end up interleaved. I saw this post about shared data sections which might help, but what I don't understand is who decides what goes in the shared section. Is there any way I can guarantee that a particular piece of code writes into the shared section before anyone else reads it? Or is there a better solution to this problem?
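
    A hedged sketch of the MSVC shared-section mechanism being referenced (my reconstruction; the "who writes first" policy is still the open question): variables placed in a named section marked RWS are shared by every process that loads the DLL, and they must carry an explicit initializer to land in the section (some docs suggest a non-zero value to keep them out of .bss).

        // in the logging DLL (needs <windows.h>)
        #pragma data_seg(".shared")
        volatile LONG g_owner = 0;           // 0 = section not yet claimed
        char g_logContext[260] = "";         // e.g. a name chosen by the first process
        #pragma data_seg()
        #pragma comment(linker, "/SECTION:.shared,RWS")

        // at startup: the first process in wins the right to write
        if (InterlockedCompareExchange(&g_owner, 1, 0) == 0) {
            // ... this process fills in g_logContext; later loaders only read it ...
        }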

  • Test for undefined references in Linux

    - by Charles
    Is there a built-in Linux utility that I can use to test a newly compiled shared library for external undefined references? gcc seems to be intelligent enough to check for undefined symbols in my own binary, but if the symbol is a reference to another library, gcc does not check at link time. Instead I only get the message when I try to link my new library into another program. It seems a little silly to get undefined-reference messages about a library while I am compiling a different project, so I want to know if I can check all references, internal and external, when I build the library rather than when I link to it. Example error:

        make -C UnitTests debug
        make[1]: Entering directory `~/projects/Foo/UnitTests'
        g++ [ tons of objects ] -L../libbar/bin -lbar -o UnitTests
        libbar.so: undefined reference to `DoSomethingFromAnotherLibrary'
        collect2: ld returned 1 exit status
        make[1]: *** [~/projects/Foo/UnitTests] Error 1
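
    A likely answer (my addition, not from the original post): GNU ld can be told to treat unresolved symbols in the shared library itself as errors at the library's own link step, and ldd can audit an existing library:

        g++ -shared -o libbar.so bar.o -Wl,--no-undefined -lotherlib
        # or inspect an already-built library without relinking:
        ldd -r libbar.so        # reports unresolved symbol references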

  • Import module stored in a cStringIO data structure vs. physical disk file

    - by Malcolm
    Is there a way to import a Python module stored in a cStringIO data structure rather than a physical disk file? It looks like imp.load_compiled(name, pathname[, file]) is what I need, but the description of this method (and similar methods) has the following disclaimer:

        "The file argument is the byte-compiled code file, open for reading in binary mode, from the beginning. It must currently be a real file object, not a user-defined class emulating a file." [1]

    I tried using a cStringIO object instead of a real file object, but the documentation is correct: only a real file object can be used. Any ideas why these modules would impose such a restriction, or is this just a historical artifact? Are there any techniques I can use to avoid the physical-file requirement? Thanks, Malcolm

    [1] http://docs.python.org/library/imp.html#imp.load_module
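
    A hedged workaround sketch (my addition): if the buffer holds source text rather than byte-compiled code, the physical file can be bypassed entirely by compiling the string and executing it into a fresh module object.

        import imp
        import sys

        def module_from_buffer(name, buf):
            """Create a module named `name` from source held in a cStringIO buffer."""
            code = compile(buf.getvalue(), '<cStringIO>', 'exec')
            mod = imp.new_module(name)
            exec code in mod.__dict__      # Python 2 syntax, matching cStringIO
            sys.modules[name] = mod
            return mod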

  • Windows/C++: Is it possible to find the line of code where an exception was thrown, given the "Exception Offset"?

    - by Pavel
    One of our users is getting an exception on our product's startup. She has sent us the following error message from Windows:

        Problem Event Name:        APPCRASH
        Application Name:          program.exe
        Application Version:       1.0.0.1
        Application Timestamp:     4ba62004
        Fault Module Name:         agcutils.dll
        Fault Module Version:      1.0.0.1
        Fault Module Timestamp:    48dbd973
        Exception Code:            c0000005
        Exception Offset:          000038d7
        OS Version:                6.0.6002.2.2.0.768.2
        Locale ID:                 1033
        Additional Information 1:  381d
        Additional Information 2:  fdf78cd6110fd6ff90e9fff3d6ab377d
        Additional Information 3:  b2df
        Additional Information 4:  a3da65b92a4f9b2faa205d199b0aa9ef

    Is it possible to locate the exact place in the source code where the exception occurred from this information? What is the common technique for C++ programmers on Windows to locate the place of an error that occurred on a user's computer? Our project is compiled in the Release configuration, and a PDB file is generated. I hope my question is not too naive.
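
    A hedged sketch of the usual approach (my addition, not from the question): with the matching binary and its PDB, the "Fault Module" plus "Exception Offset" can be resolved to the nearest symbol in WinDbg; paths are illustrative.

        # load the same build of the product under WinDbg (or open a matching dump)
        0:000> .sympath C:\builds\1.0.0.1\symbols
        0:000> ln agcutils+0x38d7     ; lists the symbol(s) nearest the faulting offset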

  • C++: does including a .h include the .cpp with the same name as well?

    - by aaron
    So I have test.cpp, which includes header.h, and then I have a header.cpp which also includes header.h. How does header.cpp get compiled as well? I'm walking through a guide here and thoroughly confused. Also, what is the correct terminology for what I am asking? I know I sound like a moron, and I apologize, but I'm ignorant. Oh, main is in test.cpp. Also, if header.cpp includes <iostream>, why can't I use iostream function calls in test.cpp if header.h is included? If I include iostream in test.cpp, will it be included in the program twice (in other words, bloat it)?
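
    A minimal sketch of the separate-compilation model in play (my addition; "how does header.cpp get compiled?" is answered by the build system invoking the compiler once per .cpp file):

        g++ -c test.cpp   -o test.o     # header.h is textually pasted in; header.cpp is NOT
        g++ -c header.cpp -o header.o   # compiled on its own, producing a second object file
        g++ test.o header.o -o program  # the linker joins the two translation units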

  • Which .NET performance and/or memory profilers will allow me to profile a DLL?

    - by Eric
    I write a lot of .NET-based plug-ins for other programs, usually compiled as a DLL that the native application starts up. I've been using EQATEC's profiler, which works great, but now I would like something with more features, including the ability to profile memory usage. I tried out Red Gate's ANTS Profiler, but as far as I can see there is no way to profile a DLL; the only option is to profile an EXE. So my question is: what other profiling tools are available that will allow me to profile a single library DLL rather than an EXE? I'm assuming this would require injecting profiling code into the library, as EQATEC does?

  • Are there Adaptive Replacement Cache patent-free alternatives?

    - by aleccolocco
    An open-source, high-performance project I'm working on needs to keep a cache of parsed/compiled files. A plain LRU or a plain LFU wouldn't fit: plain LRU wouldn't work because remote batch/spider processes hit the service regularly, and plain LFU wouldn't work because content ages. ARC seems like the perfect solution, but since IBM holds patents on it, at least one open-source project has dropped it. Are there any (good enough) alternatives? EDIT: I'm not looking for exactly the same thing, just something that could handle those two situations. Perhaps some simple strategy with timestamps and sources. There have to be many programmers who have faced this situation before. That's why the "good enough" bit.

  • Eclipse call hierarchy skips calls in undefined #ifdef regions

    - by stupakov
    Hi all, the "call hierarchy" and "declaration" features in Eclipse CDT omit results that exist in undefined (greyed-out) #ifdef regions. Example:

        void blah(void)
        {
        #ifndef ABC
            foo();
        #else           // line is greyed out
            bar();      // line is greyed out
        #endif          // line is greyed out
        }

    The call hierarchy for foo() will list blah() as a caller; the call hierarchy for bar() will not list blah(). I'm not expecting it to fully resolve which #ifdef branches will get compiled; I simply would like it to return all calls/declarations of the function I'm searching for, regardless of the #ifdef blocks that surround it. Other IDEs such as SlickEdit are able to do this. Does anyone know of a way to get Eclipse to adopt this behavior? Thanks.

  • Deploying a .NET app from source control (SVN) across 32-bit and 64-bit dev stations

    - by Mika Jacobi
    Here is the situation: our dev team has heterogeneous OS systems, scattered between 32-bit and 64-bit. This is not ideal; we are actually planning to homogenize our infrastructure, but in the meantime we have to deal with it. The issue is that when a 32-bit developer checks out a 64-bit solution from SVN, he has to change the target platforms manually all over again to get it compiled (not to mention other side problems). My question is: what clean (though temporary) solution could address this situation, permitting each developer to keep his default project/platform settings while checking out and in from SVN? I guess that, at least the first time a project/solution is checked out, a dev still has to tweak the settings manually to compile it properly. After that, with the relevant SVN filters, it should be possible to ignore some settings files (which ones, by the way?). I am open to all clever and detailed suggestions. Thanks.
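
    A hedged partial answer to the "which ones?" aside (my addition): per-user Visual Studio settings live in *.suo and *.user files, the usual candidates for svn:ignore; note that the platform target itself is stored in the .csproj, which must stay versioned.

        svn propset svn:ignore "*.suo
        *.user
        bin
        obj" .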

  • Getting the errors for code in unopened .aspx pages

    - by Glennular
    Is there a way to check for errors in unopened *.aspx pages? For example, if you change the name of a function, Visual Studio will catch the error on the page and list it in the "Error List" only if the page is open and being validated. I guess the question could be: is there a validation option, as opposed to the compile option, to check for errors? (Yes, I know code should go into the pre-compiled code-behind pages.) How do I find out about the following without running the page through the web server or opening the page to be validated in VS?

        <script runat="server">
            Public Sub MyFunction()
                Undefined_FUNCTION()
            End Sub
        </script>
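
    A likely approach (my addition, not from the original post): precompiling the site with aspnet_compiler parses every .aspx, including inline script blocks, and reports such errors without opening the pages in VS. Paths here are illustrative.

        aspnet_compiler -v /MySite -p C:\inetpub\wwwroot\MySite C:\temp\MySite_precompiled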

  • How to program three editions Light, Pro, Ultimate in one solution

    - by Henry99
    I'd like to know the best way to program three different editions of my C# ASP.NET 3.5 application in VS2008 Professional (which includes a web deployment project). I have a Light, Pro and Ultimate edition (or version) of my application. At the moment I've put everything in one solution, with three build configurations in Configuration Manager, and I use preprocessor directives all over the code (there are around 20 such constructs in some ten thousand lines of code, so it's manageable):

        #if light
        // light code
        #endif
        #if pro
        // pro code
        #endif
        // etc...

    I've read on Stack Overflow for hours and expected to find out how, e.g., Microsoft handles this with its different Windows editions, but did not find what I expected. Somewhere there is a heated discussion about whether preprocessor directives are evil. What I like about the #if directives is the side-by-side code of the differences, so I will still understand the code for the different editions after six months, and the particular benefit of NOT giving the compiled code of the other versions out to the customer. OK, long explanation, repeated question: what's the best way to go?
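
    A minimal sketch of how the described setup is typically wired (my reconstruction; the configuration names match the question, the rest is illustrative): each build configuration defines its own conditional-compilation symbol in the .csproj, and those symbols are what the #if directives test.

        <PropertyGroup Condition=" '$(Configuration)' == 'Light' ">
          <DefineConstants>TRACE;light</DefineConstants>
        </PropertyGroup>
        <PropertyGroup Condition=" '$(Configuration)' == 'Pro' ">
          <DefineConstants>TRACE;pro</DefineConstants>
        </PropertyGroup>
        <PropertyGroup Condition=" '$(Configuration)' == 'Ultimate' ">
          <DefineConstants>TRACE;ultimate</DefineConstants>
        </PropertyGroup>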

  • Consolidating files in a single directory before you link them into the final executable

    - by David
    I am working on Solaris 10 with Sun Studio 11. I am refactoring some old code and trying to write unit tests for it. My makefile looks like:

        my_model.o: my_model.cc
                CC -c my_model.cc -I../../include -library=stlport4 -instances=extern

        unit_test: unit_test.o my_model.o symbol_dictionary.o
                CC -o unit_test unit_test.o my_model.o symbol_dictionary.o \
                   -I../../include -library=stlport4 -instances=extern

        unit_test.o: unit_test.cc
                CC -c unit_test.cc -I../../include -library=stlport4 -instances=extern

        symbol_dictionary.o:
                cd ../../test-fixtures && $(MAKE) symbol_dictionary.o
                mv ../../test-fixtures/symbol_dictionary.o .

    In the ../../test-fixtures makefile, I have the following target:

        symbol_dictionary.o:
                CC -c symbol_dictionary.cc -I../../include -library=stlport4 -instances=extern

    I use -instances=extern because I had linking problems before and this was the recommended solution. The consequence is that in each directory being compiled, a SunWS_Cache directory is created to store the template instances. This is the long way to get to the question: is it standard practice to consolidate object files in a single directory before you link them into the final executable?
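
    A hedged alternative sketch (my addition, not from the original post): instead of copying objects around, the link rule can reference them where they are built, which avoids stale copies; with -instances=extern the template-cache location still needs thought, but the layout stays simpler.

        unit_test: unit_test.o my_model.o ../../test-fixtures/symbol_dictionary.o
                CC -o unit_test unit_test.o my_model.o \
                   ../../test-fixtures/symbol_dictionary.o \
                   -library=stlport4 -instances=extern

        ../../test-fixtures/symbol_dictionary.o:
                cd ../../test-fixtures && $(MAKE) symbol_dictionary.o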
