Search Results

Search found 3162 results on 127 pages for 'compiled'.


  • SHA1CryptoServiceProvider changed in .NET 4

    - by WebDude
    I am currently trying to upgrade a project of mine from .NET 3.5 to .NET 4.0. Everything was going really well: all the code compiled, all the tests passed. Then I hit a problem deploying to my staging environment: suddenly my logins were no longer working. It seems my SHA1-hashed passwords are being hashed differently in .NET 4. I am using the SHA1CryptoServiceProvider: SHA1CryptoServiceProvider cryptoTransformSHA1 = new SHA1CryptoServiceProvider(); To test this, I created a new Visual Studio solution with two console applications, the first targeting .NET Framework 3.5 and the second 4.0. I ran exactly the same hashing code in both, and different results were produced. Why is this happening, and how can I fix it? I obviously cannot go and update all of my users' passwords, considering I do not know what they are. Any help would be greatly appreciated.

    Read the article

  • Tracing\profiling instructions

    - by LeChuck2k
    Hi y'all. I'd like to statistically profile my C code at the instruction level. I need to know how many additions, multiplications, divides, etc. I'm performing. This is not your usual run-of-the-mill code-profiling requirement: I'm an algorithm developer, and I want to estimate the cost of converting my code to a hardware implementation. For this I'm being asked for the instruction breakdown at run time (parsing the compiled assembly isn't sufficient, as it doesn't account for loops in the code). After looking around, it seems VMware may offer a possible solution, but I still couldn't find the specific feature that would allow me to trace the instruction stream of my process. Are you aware of any profiling tools which enable this?

    Read the article

  • Tool for checking source for dependencies on specific Java versions

    - by Gregor
    Is there a quick way (e.g. a tool) to detect, from the source (or maybe even from the compiled classes), which parts of an application call Java API methods that are only available from a specific Java version? (e.g. which parts of my app are Java6-specific) I don't want to hop through all the ClassMismatchErrors; I'd like to avoid the trial-and-error method. Let's say I only want to document which parts of an application won't work if it was written for, e.g., Java 6 and I run it on a version 5 JDK. Is there something like this? Google did not help this time, nor did I find any solution here (a rare case indeed:)

    Read the article

  • Are there Adaptive Replacement Cache patent-free alternatives?

    - by aleccolocco
    An open-source, high-performance project I'm working on needs to keep a cache of parsed/compiled files. A plain LRU or a plain LFU wouldn't fit: a plain LRU wouldn't work because remote batch/spider processes hit the service regularly, and a plain LFU wouldn't work because content ages. ARC seems like the perfect solution, but since IBM holds patents on it, at least one open source project has dropped it. Are there any (good enough) alternatives? EDIT: I'm not looking for exactly the same thing, just something that can handle those two situations, perhaps some simple strategy with timestamps and sources. There must be many programmers who have faced this situation before; that's why the "good enough" bit.
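    A minimal sketch of the timestamp idea mentioned above, assuming nothing about the project itself: an LFU cache whose hit counts decay exponentially with age, so spider bursts stop dominating (the LRU problem) and stale entries lose their score over time (the LFU problem). The class name, the half-life, and the O(n) eviction scan are all illustrative only.

        #include <chrono>
        #include <cmath>
        #include <cstddef>
        #include <string>
        #include <unordered_map>
        #include <utility>

        // Sketch: LFU with exponentially decayed hit counts.
        class DecayedLfuCache {
            using Clock = std::chrono::steady_clock;
            struct Entry {
                std::string value;
                double score;              // decayed hit count
                Clock::time_point touched; // last access time
            };
            std::unordered_map<std::string, Entry> entries_;
            std::size_t capacity_;
            static constexpr double kHalfLifeSeconds = 300.0; // tuning knob

            static double decayed(const Entry& e, Clock::time_point now) {
                double dt = std::chrono::duration<double>(now - e.touched).count();
                return e.score * std::exp2(-dt / kHalfLifeSeconds); // halves each half-life
            }

        public:
            explicit DecayedLfuCache(std::size_t capacity) : capacity_(capacity) {}

            const std::string* get(const std::string& key) {
                auto it = entries_.find(key);
                if (it == entries_.end()) return nullptr;
                auto now = Clock::now();
                it->second.score = decayed(it->second, now) + 1.0; // age, then count this hit
                it->second.touched = now;
                return &it->second.value;
            }

            void put(const std::string& key, std::string value) {
                auto now = Clock::now();
                if (entries_.size() >= capacity_ && entries_.find(key) == entries_.end()) {
                    // O(n) eviction of the lowest decayed score; a real
                    // implementation would maintain a priority structure.
                    auto victim = entries_.begin();
                    for (auto it = entries_.begin(); it != entries_.end(); ++it)
                        if (decayed(it->second, now) < decayed(victim->second, now))
                            victim = it;
                    entries_.erase(victim);
                }
                entries_[key] = Entry{std::move(value), 1.0, now};
            }
        };

    Because the decay is a pure function of elapsed time, scores never need a background sweep: they are re-derived lazily on access and at eviction time.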

    Read the article

  • Getting closure-compiler and Node.js to play nice

    - by bukzor
    Are there any projects that use node.js and closure-compiler (CC for short) together? The official CC recommendation is to compile all the code for an application together, but when I compile some simple node.js code containing require("./MyLib.js"), that line is put directly into the output, where it doesn't make any sense. I see a few options:

    1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
    2. Assume that all files will be concatenated before execution. Again, this avoids the problem, but makes it harder to implement an un-compiled debug mode.
    3. Get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it?

    Read the article

  • How can I create variable variables in ActionScript

    - by Daniel Angel
    I'm doing this:

        mcomp7d101.onRelease = function() { getURL("javascript:Compartir(" + id7d101 + ");"); }
        mcomp7d102.onRelease = function() { getURL("javascript:Compartir(" + id7d102 + ");"); }
        mcomp7d103.onRelease = function() { getURL("javascript:Compartir(" + id7d103 + ");"); }
        mcomp7d150.onRelease = function() { getURL("javascript:Compartir(" + id7d150 + ");"); }

    You get the idea :) How can I use a for loop to do something like this?

        for (ii = 101; ii < 150; ii++) {
            mcomp7d+ii.onRelease = function() { getURL("javascript:Compartir(" + id7d+ii + ");"); }
        }

    I'm getting a syntax error. It seems that I can't create variable variables in compiled languages.

    Read the article

  • DllImport Based on OS Platform

    - by Ngu Soon Hui
    I have a mixture of unmanaged code (the back end) and managed code (the front end); as such, I need to call the unmanaged code from my managed code using interop techniques and the DllImport attribute. Now, I've compiled two versions of the unmanaged code, for 32-bit and 64-bit OSes, named service32.dll and service64.dll respectively. So, in my .NET code, I have to do a DllImport for both DLLs:

        [DllImport(@"service32.dll")] // for 32-bit OS invocation
        public static extern void SimpleFunction();

        [DllImport(@"service64.dll")] // for 64-bit OS invocation
        public static extern void SimpleFunction();

    and call the right one depending on which platform my application is running on. The issue is that every unmanaged function has to be declared twice, once for the 32-bit OS and once for the 64-bit OS. This is duplicated work, and every time I change the signature of an unmanaged function, I have to modify it in two places. Is there any way to change the argument to DllImport so that the correct DLL is invoked automagically, depending on the platform?

    Read the article

  • CGAL replacement for iOS

    - by Aleks N.
    I have a set of nodes that define streets; each node has a latitude and longitude. I also have the user's location, with latitude and longitude. My intention is to build a Voronoi diagram for the segments defined by each pair of nodes, and then find which node the user's location is closest to. It looks like this task can be accomplished with the CGAL library. While I'm in the process of compiling it for the iOS environment, perhaps you can give links to libraries that are already compiled for iOS, or that were intended for an Objective-C environment from the very beginning, because I'm afraid that even if CGAL compiles for me, I might get into trouble when using it. Thanks!

    Read the article

  • How to make pdb recognize that the source has changed between runs?

    - by user88028
    From what I can tell, pdb does not recognize when the source code has changed between "runs". That is, if I'm debugging, notice a bug, fix that bug, and rerun the program in pdb (i.e. without exiting pdb), pdb will not recompile the code: I'll still be debugging the old version of the code, even though pdb lists the new source code. So, does pdb not update the compiled code as the source changes? If not, is there a way to make it do so? I'd like to be able to stay in a single pdb session in order to keep my breakpoints and such. FWIW, gdb will notice when the program it's debugging changes underneath it, though only on a restart of that program. This is the behavior I'm trying to replicate in pdb.

    Read the article

  • Sharing some info with all DLLs pulled into a process

    - by JBRWilkinson
    Hi all, we've got an enterprise system which has many processes (EXEs, services, DCOM servers, COM+ apps, ISAPI extensions, MMC snap-ins), all of which make use of many COM components. We've recently seen failures in some of the customer deployments, but are finding it hard to troubleshoot the cause. In order to track down the problem, we've augmented the entire source with logging statements where errors occur. In order to identify which logs came from which process, the C++ logging code (compiled into all components) uses the EXE name to name the log file. This is good for some cases, but not all: COM+ apps, ISAPI extensions and MMC snap-ins all run under system EXE names, and their logs end up interleaved. I saw this post about shared data sections, which might help, but what I don't understand is who decides what goes into the shared section. Is there any way I can guarantee that a particular piece of code writes into the shared section before anyone else reads it? Or is there a better solution to this problem?
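    For reference, a shared data section in MSVC is declared by whoever builds the DLL: variables placed between the data_seg pragmas live in a named section that the linker marks shared, so every process loading the DLL sees the same storage. A minimal sketch (the variable name is illustrative; cross-process ordering still needs a named mutex or similar, which is exactly the read-before-write worry raised above):

        // In a DLL that all processes load. The variable must be explicitly
        // initialized, or the compiler places it in the default uninitialized
        // data section instead of the shared one.
        #pragma data_seg(".shared")
        volatile long g_logNameIndex = 0;   // one copy shared by all processes
        #pragma data_seg()
        #pragma comment(linker, "/SECTION:.shared,RWS")  // read/write/shared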

    Read the article

  • Test for undefined references in Linux

    - by Charles
    Is there a built-in Linux utility that I can use to test a newly compiled shared library for external undefined references? gcc seems to be intelligent enough to check for undefined symbols in my own binary, but if the symbol is a reference to another library, gcc does not check it at link time; instead, I only get the message when I try to link my new library into another program. It seems a little silly to get undefined-reference messages about a library while I am compiling a different project, so I want to know if I can check all references, internal and external, when I build the library rather than when I link against it. Example error:

        make -C UnitTests debug
        make[1]: Entering directory `~/projects/Foo/UnitTests'
        g++ [ tons of objects ] -L../libbar/bin -lbar -o UnitTests
        libbar.so: undefined reference to `DoSomethingFromAnotherLibrary'
        collect2: ld returned 1 exit status
        make[1]: *** [~/projects/Foo/UnitTests] Error 1
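    One relevant fact: by default, GNU ld allows unresolved symbols when producing a shared library and only complains when an executable is linked. Passing --no-undefined (or -z defs) to the linker while building the .so moves the failure to the library's own link step. A sketch of the build line, with file names that simply mirror the example above:

        # Fail the libbar link itself if any symbol is unresolved, instead of
        # deferring the error to whoever links against libbar later:
        g++ -shared -fPIC -Wl,--no-undefined bar.o -L../otherlib -lother -o libbar.so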

    Read the article

  • Style question: Writing "this." before instance variable and methods: good or bad idea?

    - by Uri
    One of my nasty (?) programming habits in C++ and Java is to always precede calls or accesses to members with this, for example: this.process(this.event) (this->process(this->event) in C++). A few of my students commented on this, and I'm wondering if I am teaching bad habits. My rationale is: 1) It makes code more readable: it is easier to distinguish fields from local variables. 2) It makes it easier to distinguish instance calls from static calls (especially in Java). 3) It reminds me that a call (unless the target is final) could end up on a different target, for example in an overriding version in a subclass. Obviously, this has zero impact on the compiled program; it's purely about readability. So am I making my code more or less readable? Note: I turned this into a CW question since there really isn't a correct answer.

    Read the article

  • Forcing GWT to assume List is implemented as ArrayList

    - by joecks
    For some reason I'm stuck with model classes that use List as the collection type, and I would like to use the model on the client side. However, GWT of course fails to serialize java.util.List. All the implementations of List in this model are based on ArrayList, so is it possible to tell GWT to assume that List is ArrayList? Edit: GWT fails at compile time, since a possible candidate for List is also java.util.Collections.SingletonList, which cannot be compiled. I'm using GWT 2.1 and Java 1.6.

    Read the article

  • Shortcuts and MSI updates

    - by Filip Navara
    We have an installer for an application that is compiled using WiX, and each version is shipped as a new setup package. The installer creates an advertised shortcut in the Start menu, and users often copy this shortcut to the desktop or some other location. During an application update a major upgrade is performed and the old shortcuts are removed, which makes the copies the users made disappear. This is a major annoyance to the users. Is there a way to update advertised shortcuts when doing an MSI major upgrade (i.e. a different product code)? Or is there a way to allow minor updates by just running the setup.msi file (without passing a REINSTALLMODE option on the command line)? Or is the only way to solve this problem to use non-advertised shortcuts?

    Read the article

  • Getting the errors for code in unopened .aspx pages

    - by Glennular
    Is there a way to check for errors in unopened *.aspx pages? For example, if you change the name of a function, Visual Studio will catch the error on the page and list it in the "Error List", but only if the page is open and being validated. I guess the question is: is there a validation option, as opposed to the compile option, to check for errors? (Yes, I know code should go into the pre-compiled code-behind pages.) How do I find out about the following without running the page through the web server or opening the page to be validated in VS?

        <script runat="server">
        Public Sub MyFunction()
            Undefined_FUNCTION()
        End Sub
        </script>

    Read the article

  • Import module stored in a cStringIO data structure vs. physical disk file

    - by Malcolm
    Is there a way to import a Python module stored in a cStringIO data structure, as opposed to a physical disk file? It looks like imp.load_compiled(name, pathname[, file]) is what I need, but the description of this method (and similar methods) has the following disclaimer: "The file argument is the byte-compiled code file, open for reading in binary mode, from the beginning. It must currently be a real file object, not a user-defined class emulating a file." [1] I tried using a cStringIO object instead of a real file object, but the documentation is correct: only a real file object can be used. Any ideas on why these functions impose such a restriction, or is this just a historical artifact? Are there any techniques I can use to avoid the physical-file requirement? Thanks, Malcolm [1] http://docs.python.org/library/imp.html#imp.load_module

    Read the article

  • Examining C/C++ Heap memory statistics in gdb

    - by fd
    I'm trying to investigate the state of the C/C++ heap from within gdb on Linux amd64; is there a nice way to do this? One approach I've tried is call mallinfo(), but unfortunately I can't then extract the values I want, since gdb doesn't deal with the struct return value properly. I'm not easily able to write a function to be compiled into the binary of the process I am attached to, so I can't simply implement my own function that extracts the values by calling mallinfo() from my own code. Is there perhaps a clever trick that would allow me to do this on the fly? Another option could be to locate the heap and traverse the malloc headers / free list; I'd appreciate any pointers on where to start in finding the location and layout of these. I've been trying to Google and read around the problem for about two hours, and I've learnt some fascinating stuff but still not found what I need.
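    Two notes that may help. First, glibc's malloc_stats() returns void and prints its report to stderr, so it sidesteps the struct-return problem: from gdb, call (void) malloc_stats() dumps the statistics without any compiled-in helper. Second, for completeness, a sketch of the helper one would compile into a debug build if that ever becomes possible; mallinfo() and its int fields are the real glibc API, while dump_heap_stats is an illustrative name:

        #include <malloc.h>   // glibc allocator statistics
        #include <cstdio>

        // Print the two numbers people usually want. On glibc >= 2.33,
        // prefer mallinfo2(), whose fields are size_t instead of int.
        extern "C" void dump_heap_stats(void) {
            struct mallinfo mi = mallinfo();
            std::printf("heap in use: %d bytes\n", mi.uordblks);
            std::printf("heap free:   %d bytes\n", mi.fordblks);
        }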

    Read the article

  • C++ -malign-double compiler flag

    - by Martin
    I need some help with compiler flags in C++. I'm using a library that is a port to Linux from Windows and that has to be compiled with the -malign-double flag, "for Win32 compatibility". Is my understanding correct that this means I absolutely have to compile my own code with this flag as well? How about other .so shared libraries: do they have to be recompiled with this flag too? If so, is there any way around this? I'm a Linux newbie (and a C++ one), so even though I tried to recompile all the libraries I'm using for my project, it was just too complicated to recursively find the source for all the libraries, and the libraries they depend on, and recompile everything.
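    For context: on 32-bit x86, -malign-double changes how doubles inside structs are aligned, so it is an ABI choice. Any two pieces of code that pass such structs to each other must agree on the flag, while libraries whose interfaces never carry doubles (or structs containing them) are unaffected. A sketch that makes the layout difference visible, assuming a 32-bit build:

        #include <cstddef>
        #include <cstdio>

        struct Sample {
            char c;
            double d;   // offset 4 under the i386 default, offset 8 with -malign-double
        };

        int main() {
            // Compile on 32-bit x86 with and without -malign-double and the
            // printed values differ, which is why mixing the two settings
            // across module boundaries corrupts data.
            std::printf("offsetof(Sample, d) = %zu, sizeof(Sample) = %zu\n",
                        offsetof(Sample, d), sizeof(Sample));
            return 0;
        }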

    Read the article

  • why this assembly program is loaded from the address 0B3D:0000?

    - by viperchaos
    I have seen an assembly program in a book about assembly:

        assume cs:code
        code segment
            dw 0123h,0456h,0789h,0abch,0defh,0fedh,0cbah,0987h
            mov bx,0
            mov ax,0
            mov cx,8
        s:  add ax,cs:[bx]
            add bx,2
            loop s
            mov ax,4c00h
            int 21h
        code ends
        end

    This program's function is to add eight numbers. The author compiled the program under DOS and used DEBUG to see how it is loaded. Using the R command, the author got DS = 0B2DH, ES = 0B2DH, SS = 0B3DH, CS = 0B3DH and IP = 0000H, and then said that the program is loaded at address 0B3D:0000. I'm confused: why is the program loaded at address 0B3D:0000? Is this because of the Program Segment Prefix (PSP)? If the answer is the existence of the PSP, what is in the PSP?

    Read the article

  • C++ app fails to initialize (0xc0000005), when using C# dll

    - by Simon
    Hi, I have a C# DLL which I call from a native C++ program. As I use Qt and the /clr compiler option did not work for me, I followed this tutorial for a bridge. So I have a VS2008 project (compiled with /clr) which links against the C# DLL and contains the bridge class and the native class that exposes interfaces to my C++ program. Another VS2008 project (no .NET stuff) calls the native class (statically linked). I had some issues, but now the program at least compiles. However, when I try to run it, I get a 0xc0000005 error on initialization, as soon as I try to use the native class. As this happens during initialization, I don't even see which DLLs fail to initialize. All the DLLs should be in the right place. Any hints? Thank you.

    Read the article

  • Automatic initialization routine in C++ library?

    - by Robert Mason
    If I have a header file foo.h and a source file foo.cpp, and foo.cpp contains something along the lines of:

        #ifdef WIN32
        class asdf {
        public:
            asdf()  { startup_code(); }
            ~asdf() { cleanup_code(); }
        };
        asdf __STARTUP_HANDLE__;
        #else
        // unix does not require startup or cleanup code in this case
        #endif

    but foo.h does not define class asdf. Say I have an application bar.cpp:

        #include "foo.h"
        // link in foo.lib, foo.dll, foo.so, etc.

        int main() {
            // do stuff
            return 0;
        }

    If bar.cpp is compiled on a WIN32 platform, will asdf() and ~asdf() be called at the appropriate times (before main() and at program exit, respectively), even though class asdf is not defined in foo.h but is linked in through foo.cpp?

    Read the article

  • Deploying a .Net App Source Control (SVN) over 32-bit AND 64-bit dev stations

    - by Mika Jacobi
    Here is the situation: our dev team works on heterogeneous OS installations, scattered between 32-bit and 64-bit. This is not ideal; we are actually planning to homogenize our infrastructure, but in the meantime we have to deal with it. The issue is that when a 32-bit developer checks out a 64-bit solution from SVN, he has to change the target platforms manually, all over again, to get it to compile (not to mention other side problems). My question is: what clean (though temporary) solution could address this situation, permitting each developer to keep his default project/platform settings while checking out of and in to SVN? I guess that, at least the first time a project/solution is checked out, a dev still has to tweak the settings manually to get it to compile. After that, with the relevant SVN filters, it should be possible to ignore some settings files (which ones, by the way?). I am open to all clever and detailed suggestions. Thanks.

    Read the article

  • <function> referenced from; symbol(s) not found.

    - by jfm429
    I have a piece of C code that is used from a C++ function. At the top of my C++ file I have the line:

        #include "prediction.h"

    In prediction.h I have this:

        #ifndef prediction
        #define prediction
        #include "structs.h"

        typedef struct {
            double estimation;
            double variance;
        } response;

        response runPrediction(int obs, location* positions, double* observations,
                               int targets, location* targetPositions);
        #endif

    I also have prediction.c, which has:

        #include "prediction.h"

        response runPrediction(int obs, location* positions, double* observations,
                               int targets, location* targetPositions) {
            // code here
        }

    Now, when the C++ file (which, as I said, includes prediction.h) calls that function and I compile (through Xcode), I get this error:

        "runPrediction(int, location*, double*, int, location*)", referenced from:
        mainFrame::respondTo(char*, int) in mainFrame.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    prediction.c is marked for compilation for the current target, and I don't have any problems with other .cpp files not being compiled. Any thoughts here?
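    The error message itself is a strong clue: the C++ object file is looking for a C++-mangled runPrediction(int, location*, ...), while prediction.c, compiled as C, exports an unmangled symbol. The usual remedy is a linkage guard in the shared header, sketched here against the declarations above:

        /* prediction.h: give the declaration C linkage when the header is
           included from a C++ translation unit, so both sides agree on the
           (unmangled) symbol name. */
        #ifdef __cplusplus
        extern "C" {
        #endif

        response runPrediction(int obs, location* positions, double* observations,
                               int targets, location* targetPositions);

        #ifdef __cplusplus
        }
        #endif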

    Read the article

  • How to program three editions Light, Pro, Ultimate in one solution

    - by Henry99
    I'd like to know how best to program three different editions of my C# ASP.NET 3.5 application in VS2008 Professional (which includes a web deployment project). I have a Light, a Pro and an Ultimate edition (or version) of my application. At the moment I've put everything in one solution with three build configurations in Configuration Manager, and I use preprocessor directives all over the code (there are around 20 such constructs in some ten thousand lines of code, so it is manageable):

        #if light
        // light code
        #endif
        #if pro
        // pro code
        #endif
        // etc...

    I've read on Stack Overflow for hours, expecting to find out how, for example, Microsoft does this with its different Windows editions, but did not find what I expected. Somewhere there is a heated discussion about whether preprocessor directives are evil. What I like about the #if directives is the side-by-side code of the differences, so that I will still understand the code for the different editions after six months, and the particular benefit of NOT shipping the compiled code of the other versions to the customer. OK, long explanation, repeated question: what's the best way to go?

    Read the article

  • How to deal with constructor argument names?

    - by Bane
    Say I have a class that has some properties, like x, y, width and height. In its constructor, I couldn't do this:

        class A {
        public:
            A(int, int, int, int);
            int x;
            int y;
            int width;
            int height;
        };

        // Wrong, and makes little sense name-wise:
        A::A(int x, int y, int width, int height) {
            x = x;
            y = y;
            width = width;
            height = height;
        }

    First of all, this doesn't really make sense. Second, x, y, width and height end up with weird values (like -1405737648) when compiled with g++. It does work, however, if I append an "a" to the argument names. What is the optimal way of solving these naming conflicts?
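    The reason for the weird values: inside the constructor body the parameters shadow the members, so x = x; assigns each parameter to itself and the members are never initialized. The conventional C++ resolution is a member initializer list, where the same names can be reused without ambiguity:

        // Outside the parentheses a name refers to the member being
        // initialized; inside, it refers to the constructor parameter.
        A::A(int x, int y, int width, int height)
            : x(x), y(y), width(width), height(height) {}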

    Read the article
