Search Results

Search found 3162 results on 127 pages for 'compiled'.


  • Web SITE publishing, dynamic compilation, smoke & mirrors

    - by tbehunin
    When you publish a web SITE in Visual Studio, the dialog box that follows gives you an option to "Allow this precompiled site to be updatable". According to MSDN, checking this option "specifies that all program code is compiled into assemblies, but that .aspx files (including single-file ASP.NET Web pages) are copied as-is to the target folder". With this option checked, you can update existing .aspx files as well as add new ones without any issue. When a page that has been updated or newly created is requested, it gets dynamically compiled at run time, then processed and returned to the user.

    If, on the other hand, you didn't check that checkbox during the publish phase, the .aspx files get compiled, along with the code-behind and App_Code files, into separate assemblies. The .aspx files are then completely overwritten with a single line of text: "This is a marker file generated by the precompilation tool, and should not be deleted!" You obviously can't edit an existing page in this scenario, and if you were to ADD a new .aspx file to this site, you would get a .NET run-time error saying that the file hasn't been precompiled.

    With that background, my questions are these:

    1. Something must be able to determine whether this website was published to be updatable (allow dynamic compilation) or not, and, if it was, whether a file was changed or added, so it can do a dynamic compile. Who makes those determinations: IIS, or the ASP.NET worker process? HOW does it make them?
    2. If I had the same website published in both of those scenarios, could I make a visual determination that one is updatable and the other is not? Is there some bit I can look at in the assemblies using Reflector to make that determination myself?

    In addition to answering those questions, information on the process flow from when a resource is requested to when it starts being processed would also be helpful: not the ASP.NET Page Lifecycle, but what happens BEFORE the ASP.NET worker process starts processing the page and firing off events. The dynamic compilation appears to be smoke and mirrors. Can someone demystify this for me?
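
    One concrete marker to check, offered as a strong hint rather than a full answer: when aspnet_compiler publishes a site, it writes a PrecompiledApp.config file into the target folder, and its updatable attribute records exactly the choice made in that dialog; it is the ASP.NET runtime inside the worker process, not IIS, that consults it to decide whether dynamic compilation is allowed. A minimal C# sketch for inspecting a deployed site (the command-line handling is illustrative):

        using System;
        using System.IO;
        using System.Xml.Linq;

        class PrecompileCheck
        {
            static void Main(string[] args)
            {
                string siteRoot = args.Length > 0 ? args[0] : ".";
                string marker = Path.Combine(siteRoot, "PrecompiledApp.config");

                if (!File.Exists(marker))
                {
                    Console.WriteLine("No PrecompiledApp.config: not a precompiled site.");
                    return;
                }

                // Typical contents: <precompiledApp version="2" updatable="false"/>
                XElement root = XDocument.Load(marker).Root;
                Console.WriteLine("Precompiled site, updatable = " +
                                  ((string)root.Attribute("updatable") ?? "(unspecified)"));
            }
        }

    An updatable site carries updatable="true"; a fully precompiled one carries updatable="false". That also answers the "visual determination" question without opening Reflector: the marker sits in the site root next to web.config.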


  • Setting up Netbeans/Eclipse for Linux Kernel Development

    - by red.october
    Hi: I'm doing some Linux kernel development, and I'm trying to use Netbeans. Despite its declared support for Make-based C projects, I cannot create a fully functional Netbeans project, even after having Netbeans analyze a kernel binary that was compiled with full debugging information. Problems include:

    - Files are wrongly excluded: some files are incorrectly greyed out in the project, meaning Netbeans does not believe they belong to it, when in fact they are compiled into the kernel. The main problem is that Netbeans will miss any definitions that exist in these files, such as data structures and functions, and also miss macro definitions.
    - Cannot find definitions: pretty self-explanatory; oftentimes Netbeans cannot find the definition of something. This is partly a result of the above problem.
    - Can't find header files: self-explanatory.

    I'm wondering if anyone has had success with setting up Netbeans for Linux kernel development, and if so, what settings they used. Ultimately, I'm looking for Netbeans to either parse the Makefile (preferred) or extract the debug information from the binary (less desirable, since this can significantly slow down compilation), automatically determine which files are actually compiled and which macros are actually defined, and then, based on this, find the definitions of any data structure, variable, function, etc. and offer complete auto-completion. Let me preface this question with some points:

    - I'm not interested in solutions involving Vim/Emacs. I know some people like them, but I'm not one of them.
    - As the title suggests, I would also be happy to know how to set up Eclipse to do what I need.
    - While I would prefer perfect coverage, something that only misses one in a million definitions is obviously fine.

    SO's useful "Related Questions" feature has informed me that the following question is related: http://stackoverflow.com/questions/149321/what-ide-would-be-good-for-linux-kernel-driver-development. Upon reading it, that question is more of a comparison between IDEs, whereas I'm looking for how to set up a particular IDE. Even so, the user Wade Mealing seems to have some expertise in working with Eclipse on this kind of development, so I would certainly appreciate his (and of course all of your) answers. Cheers


  • Access violation when running native C++ application that uses a /clr built DLL

    - by doobop
    I'm reorganizing a legacy mixed (managed and unmanaged DLLs) application so that the main application segment is unmanaged MFC, which will call a C++ DLL compiled with the /clr flag that bridges the communication between the managed (C# DLLs) and unmanaged code. Unfortunately, my changes have resulted in an access violation that occurs before the application's InitInstance() is called, which makes it very difficult to debug. The only information I get is the following stack trace:

        > 64006108()
        ntdll.dll!_ZwCreateMutant@16() + 0xc bytes
        kernel32.dll!_CreateMutexW@12() + 0x7a bytes

    Here are some scenarios I've tried:

    - Turned on Exceptions > Win32 Exceptions > c0000005 Access Violation to break when thrown. Still, the most detail I get is the above stack trace. I've also tried stepping with F10, but the application fails before any breakpoints are hit, with the same stack trace.
    - I've stubbed out the bridge DLL so that it only has one method, which returns a bool and is coded to just return false (no C# code called):

          bool DllPassthrough::IsFailed() { return false; }

      If the stubbed-out DLL is compiled with the /clr flag, the application fails. If it is compiled without the /clr flag, the application runs.
    - I've created a stub MFC application using the Visual Studio wizard for multidocument applications and called DllPassthrough::IsFailed(). This succeeds even with the /clr flag used to compile the DLL.
    - I've tried doing a manual LoadLibrary on winmm.lib as outlined in the note "Access violation when using c++/cli". The application still fails.

    So, my questions are: how do I solve the problem? Any hints, strategies, or previous incidents? And failing that, how can I get more information on which code segment or library is causing the access violation? If I try more involved workarounds like LoadLibrary calls, I'd like to narrow it down to the failing libraries. Thanks. BTW, we are using Visual Studio 2008 and the project is being built against the .NET 2.0 framework for the managed sections.


  • Debugging on BSD using gdb or similar tools

    - by agent.smith
    Hi, I have started using FreeBSD lately and realized gdb does not fully support remote debugging on it. Whenever I try to do remote debugging using gdbserver, I run into SIGSEGV crashes and an error message saying it can't find the definition of "r_debug_state". Has anyone experienced this and solved it? Statically compiled, single-threaded programs can be debugged using gdbserver; beyond that, though, it mostly looks difficult to use. Let me know if anyone knows other tools for remote application debugging on BSD, or how to fix the issue. (I am on x64 FreeBSD 9.) Thanks


  • Why does an EXE file that does *nothing* contain so many dummy zero bytes?

    - by Lambert
    Hi, I've compiled a C file that does absolutely nothing (just a main that returns... not even a "Hello, world" gets printed), and I've compiled it with various compilers (MinGW GCC, Visual C++, Windows DDK, etc.). All of them link with the C runtime, which is standard. But what I don't get is: When I open up the file in a hex editor (or a disassembler), why do I see that almost half of the 16 KB is just huge sections of either 0x00 bytes or 0xCC bytes? It seems rather ridiculous to me... is there any way to prevent these from occurring? And why are they there in the first place? Thank you!
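
    A hedged way to see where the bytes go: on disk, each PE section is padded up to the file alignment (512 bytes by default, 4096 with some settings), and MSVC additionally fills gaps in the code section with 0xCC, the int 3 breakpoint opcode. The C# sketch below, written against the published PE header layout, prints each section's used size next to its padded on-disk size; linker options such as /FILEALIGN can shrink the padding, at some cost elsewhere.

        using System;
        using System.IO;
        using System.Text;

        class PeSections
        {
            static void Main(string[] args)
            {
                using (var r = new BinaryReader(File.OpenRead(args[0])))
                {
                    r.BaseStream.Seek(0x3C, SeekOrigin.Begin);          // e_lfanew
                    int peOffset = r.ReadInt32();
                    r.BaseStream.Seek(peOffset + 4, SeekOrigin.Begin);  // skip "PE\0\0"

                    r.ReadUInt16();                                     // Machine
                    int sections = r.ReadUInt16();                      // NumberOfSections
                    r.BaseStream.Seek(12, SeekOrigin.Current);          // timestamp, symbol info
                    int optSize = r.ReadUInt16();                       // SizeOfOptionalHeader
                    r.ReadUInt16();                                     // Characteristics
                    r.BaseStream.Seek(optSize, SeekOrigin.Current);     // to the section table

                    for (int i = 0; i < sections; i++)
                    {
                        string name = Encoding.ASCII.GetString(r.ReadBytes(8)).TrimEnd('\0');
                        uint usedSize = r.ReadUInt32();                 // VirtualSize: bytes actually used
                        r.ReadUInt32();                                 // VirtualAddress
                        uint diskSize = r.ReadUInt32();                 // SizeOfRawData: padded size on disk
                        r.BaseStream.Seek(20, SeekOrigin.Current);      // rest of the 40-byte entry
                        Console.WriteLine("{0,-8} used {1,6}  on disk {2,6}", name, usedSize, diskSize);
                    }
                }
            }
        }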


  • C# Script version of PyBinding for WPF

    - by Jim Strav
    I wrote a CSharpScriptBinding roughly equivalent to the PyBinding on CodePlex. It uses the C# script engine from http://www.csscript.net. After I wrote it, I came to suspect it might not really be a good thing to use. Although it caches the compiled script code as an already-compiled Assembly, my concern is that I will have one temporary Assembly created each time I use the binding. Will this add up to a problem in the future? If so, maybe there is a way in the C# script engine that I don't know about to optimize this further...? Any thoughts to confirm my suspicion that this was just a bad idea (but a useful exercise in learning more about bindings and converters)?
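
    A hedged sketch of one mitigation, written against plain System.CodeDom rather than the CS-Script API (whose caching knobs I won't vouch for): keep a process-wide cache keyed by the script text, so each distinct binding script is compiled into an in-memory assembly exactly once, however many bindings use it.

        using System;
        using System.CodeDom.Compiler;
        using System.Collections.Generic;
        using System.Reflection;
        using Microsoft.CSharp;

        // Sketch: compile each distinct script text once, process-wide, and hand
        // back the same Assembly for every binding that uses it.
        static class ScriptCache
        {
            static readonly Dictionary<string, Assembly> cache =
                new Dictionary<string, Assembly>();

            public static Assembly GetOrCompile(string source)
            {
                lock (cache)
                {
                    Assembly asm;
                    if (cache.TryGetValue(source, out asm))
                        return asm;                          // compiled earlier

                    var options = new CompilerParameters { GenerateInMemory = true };
                    options.ReferencedAssemblies.Add("System.dll");

                    CompilerResults results = new CSharpCodeProvider()
                        .CompileAssemblyFromSource(options, source);
                    if (results.Errors.HasErrors)
                        throw new InvalidOperationException(results.Errors[0].ToString());

                    cache[source] = results.CompiledAssembly;
                    return results.CompiledAssembly;
                }
            }
        }

    The hard limit remains, though: assemblies can never be unloaded from an AppDomain, so the cache only bounds growth if the set of distinct scripts is bounded, which is exactly the concern the question raises.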


  • C# compiling finalize methods at run time?

    - by Royi Namir
    As I'm reading through 3 books about the GC, I've noticed a strange fact. In CLR via C#, regarding CriticalFinalizerObject: "the CLR treats this class and classes derived from it in a very special manner", to the point that the CLR might "not find enough memory to COMPILE a method". What??? IMHO, the code should already be compiled... no? When I'm writing C# code, the whole thing is compiled to IL before it runs... no? But according to the text, at RUNTIME the CLR MAY find insufficient memory to compile. Help?
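
    The missing piece is the JIT step: the C# compiler produces IL, but the CLR compiles that IL to native code lazily, method by method, at run time, and that compilation itself allocates memory, so it can fail under memory pressure. A hedged sketch of the special treatment the book describes (class names invented): deriving from CriticalFinalizerObject makes the CLR prepare, i.e. pre-JIT, the finalizer when an instance is allocated, so that running it later cannot fail for lack of memory.

        using System;
        using System.Runtime.ConstrainedExecution;

        // Sketch of the distinction the book is drawing; class names are invented.
        class CriticalResource : CriticalFinalizerObject
        {
            ~CriticalResource()
            {
                // Prepared (JIT-compiled) eagerly when an instance is allocated,
                // so running it later cannot fail for lack of memory.
                Console.WriteLine("critical finalizer ran");
            }
        }

        class OrdinaryResource
        {
            ~OrdinaryResource()
            {
                // JIT-compiled only when the finalizer thread first invokes it,
                // which is the run-time compile step that can, in principle,
                // hit an out-of-memory condition.
                Console.WriteLine("ordinary finalizer ran");
            }
        }

        class Program
        {
            static void Main()
            {
                new CriticalResource();
                new OrdinaryResource();
                GC.Collect();
                GC.WaitForPendingFinalizers();
            }
        }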


  • Moving from Windows to Ubuntu

    - by djzmo
    Hello there, I used to program on Windows with Microsoft Visual C++, and I need to make some of my programs (written in portable C++) cross-platform, or at least release a working version for both Linux and Windows. I am a total newcomer to Linux application development (and rarely use the OS itself). So, today, I installed Ubuntu 10.04 LTS (through Wubi) and equipped Code::Blocks with the g++ compiler as my main weapon. Then I compiled my very first Hello World Linux program, and I was confused by the output program. I can run my program through the "Build and Run" menu option in Code::Blocks, but when I tried to launch the compiled application externally through a File Browser (in /media/MyNTFSPartition/MyProject/bin/Release; yes, I saved it in my NTFS partition), the program didn't show up. Why? I ran out of ideas. I need to change my Windows and Microsoft Visual Studio mindset to a Linux and Code::Blocks mindset. So I came up with these questions:

    1. How can I execute my compiled Linux programs externally (outside the IDE)? In Windows, I simply run the generated executable (.exe) file.
    2. How can I distribute my Linux application? In Windows, I simply distribute the executable files with the corresponding DLL files (if any).
    3. What is the equivalent of LIBs (static libraries) and DLLs (dynamic libraries) on Linux, and how do I use them? In Windows/Visual Studio, I simply add the required libraries to the Additional Dependencies in the Project Settings, and my program automatically links with the required static library(-ies)/DLLs.
    4. Is it possible to use the "binary form" of a C++ library (if provided) so that I wouldn't need to recompile the entire library source code? In Windows, yes; sometimes precompiled *.lib files are provided.
    5. If I want to create a wxWidgets application on Linux, which package should I pick for Ubuntu: wxGTK or wxX11? Can I run a wxGTK program under X11? In Windows, I use wxMSW, of course.
    6. If question no. 4 is answered possible, do precompiled wxX11/wxGTK libraries exist out there? I haven't tried a deep Google search. In Windows, there is a project called "wxPack" (http://wxpack.sourceforge.net/) that saves a lot of my time.

    Sorry for asking many questions, but I am really confused about these Linux development fundamentals. Any kind of help would be appreciated =) Thanks.


  • Dynamically generating high performance functions in clojure

    - by mikera
    I'm trying to use Clojure to dynamically generate functions that can be applied to large volumes of data - i.e. a requirement is that the functions be compiled to bytecode in order to execute fast, but their specification is not known until run time. e.g. suppose I specify functions with a simple DSL like:

        (def my-spec [:add [:multiply 2 :param0] 3])

    I would like to create a function compile-spec such that:

        (compile-spec my-spec)

    would return a compiled function of one parameter x that returns 2x+3. What is the best way to do this in Clojure?
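
    Not Clojure, but for comparison, a hedged sketch of the same run-time-compilation idea in C# expression-tree terms, with an invented spec encoding: the spec is walked into an expression tree and Compile() emits real IL for the JIT. The idiomatic Clojure route is similar in spirit: build a (fn [x] ...) form from the spec and eval it, since Clojure's eval compiles forms to JVM bytecode rather than interpreting them.

        using System;
        using System.Linq.Expressions;

        class SpecCompiler
        {
            // Invented spec encoding: object[] { "add", lhs, rhs }, a number,
            // or the string "param0" for the single parameter.
            static Expression Build(object spec, ParameterExpression x)
            {
                if (spec is object[] op)
                {
                    Expression l = Build(op[1], x), r = Build(op[2], x);
                    return "add".Equals(op[0]) ? Expression.Add(l, r)
                                               : Expression.Multiply(l, r);
                }
                if ("param0".Equals(spec)) return x;
                return Expression.Constant(Convert.ToDouble(spec));
            }

            static Func<double, double> CompileSpec(object spec)
            {
                var x = Expression.Parameter(typeof(double), "x");
                return Expression.Lambda<Func<double, double>>(Build(spec, x), x).Compile();
            }

            static void Main()
            {
                object spec = new object[] { "add",
                    new object[] { "multiply", 2.0, "param0" }, 3.0 };
                Func<double, double> f = CompileSpec(spec);  // emits IL, JIT-compiled
                Console.WriteLine(f(10));                    // 2*10 + 3 = 23
            }
        }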


  • How to validate Windows VC++ DLL on Unix systems

    - by Guildencrantz
    I have a solution, mostly C#, but with a few VC++ projects, that is pushed through our standard release process (Perl and bash scripts on Unix boxes). Currently the initiative is to validate DLL and EXE versions as they pass through the process. All the versioning is set so that the File Version is of the format $Id: $ (between the colon and the second dollar should be a git commit hash), and the Product Version is of the format $Hudson Build: $ (between the colon and the second dollar should be a string representing the Hudson build details). Currently this system works extremely well for the C# projects, because the version information is stored as plain strings within the compiled code (you can literally use the Unix strings command and see it); the problem is that the VC++ projects do not expose this information as strings (I have used a Windows system to verify that the version information is being set correctly), so I'm not sure how to extract the version on a Unix system. Any suggestions for either A) getting a string representation of the version embedded in the compiled code, or B) a utility/script which can extract this information?
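
    One hedged explanation for the gap: the VC++ version block (VS_VERSION_INFO) stores its strings as UTF-16, so a plain ASCII strings pass sees nothing; GNU strings will find them if told to scan for 16-bit little-endian characters with -e l. The C# sketch below (runnable under Mono on the Unix boxes) does the equivalent scan; walking only even offsets is a simplification.

        using System;
        using System.IO;
        using System.Text;

        class Utf16Strings
        {
            static void Main(string[] args)
            {
                byte[] data = File.ReadAllBytes(args[0]);
                var run = new StringBuilder();

                // Walk even offsets only: a simplification, since resource data
                // is normally aligned, but a byte-exact tool would try odd
                // offsets as well.
                for (int i = 0; i + 1 < data.Length; i += 2)
                {
                    char c = (char)(data[i] | (data[i + 1] << 8));
                    if (c >= ' ' && c <= '~')
                    {
                        run.Append(c);
                    }
                    else
                    {
                        if (run.Length >= 6)         // same spirit as strings -n 6
                            Console.WriteLine(run.ToString());
                        run.Length = 0;
                    }
                }
                if (run.Length >= 6)
                    Console.WriteLine(run.ToString());
            }
        }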


  • ofstream writes an empty file on Linux

    - by commanderz
    Hi, I have a program which writes its output using ofstream. Everything works perfectly fine on Windows when compiled with Visual Studio, but it writes only an empty file on Linux when compiled with GCC.

        ofstream out(path_out_cstr, ofstream::out);
        if(out.bad()){
            cout << "Could not write the file" << flush;
        }
        else{
            cout << "writing";
            out << "Content" << endl;
            if(out.fail())
                cout << "writing failed";
            out.flush();
            out.close();
        }

    The directory being written into has 0777 permissions. Thanks for help


  • On Mac OS X, do you use the shipped Python or your own?

    - by The MYYN
    On Tiger, I used a custom Python installation to evaluate newer versions and I did not have any problems with that*. Now Snow Leopard is a little more up-to-date and by default ships with

        $ ls /System/Library/Frameworks/Python.framework/Versions/
        2.3  2.5  2.6  @Current

    What could be considered best practice: using the Python shipped with Mac OS X, or a custom-compiled version in, say, $HOME? Are there any advantages/disadvantages to using one option over the other? My setup was fairly simple so far and looked like this: custom-compiled Python in $HOME, and a $PATH that looks into $HOME/bin first and subsequently uses my private Python version. Also, $PYTHONPATH pointed to this local installation. This way, I did not need to sudo-install packages - virtualenv took care of the rest.


  • "Cannot use fixed local inside lambda expression"

    - by JulianR
    I have an XNA 3.0 project that compiled just fine in VS2008, but that gives compile errors in VS2010 (with the XNA 4.0 CTP). The error:

        Cannot use fixed local 'depthPtr' inside an anonymous method, lambda expression, or query expression

    depthPtr is a fixed float* into an array that is used inside a Parallel.For lambda expression from System.Threading. As I said, this compiled and ran just fine in VS2008, but it does not in VS2010, even when targeting .NET 3.5. Has this changed in .NET 4.0, and even so, shouldn't it still compile when I choose .NET 3.5 as the target framework? Searching for the term "Cannot use fixed local" yields exactly one (useless) result, both in Google and Bing. If this has changed, what is the reason? I can imagine capturing a fixed pointer type in a closure could get a bit weird; is that why? So I'm guessing this is bad practice? And before anyone asks: no, the use of pointers is not absolutely critical here. I would still like to know though :)
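
    What the compiler is objecting to is the fixed local escaping into a closure: the lambda can run after the fixed block has ended, at which point the pinned pointer would be dangling, so the newer compiler rejects the capture outright. A hedged workaround sketch (array name and bounds invented; compile with /unsafe; Parallel.For here is the .NET 4 System.Threading.Tasks version): capture the managed array instead, and re-fix it inside the lambda body.

        using System;
        using System.Threading.Tasks;

        class Demo
        {
            static unsafe void Main()
            {
                float[] depth = new float[1024];

                Parallel.For(0, 4, band =>
                {
                    // The fixed block lives entirely inside the lambda, so no
                    // fixed local escapes into the closure; only the managed
                    // array 'depth' is captured, which is always safe.
                    fixed (float* depthPtr = depth)
                    {
                        int start = band * 256;
                        for (int i = start; i < start + 256; i++)
                            depthPtr[i] = i * 0.5f;
                    }
                });

                Console.WriteLine(depth[300]);   // 150
            }
        }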


  • How can I use the function CoGetClassObject in x64?

    - by fishbein
    I have a COM DLL compiled as 32-bit (the server side). I registered it and tried to use the function CoGetClassObject, with a client that runs as 32-bit, to get the IClassFactory:

        Hr = CoGetClassObject(CLSID_IOrbCom, CLSCTX_INPROC_SERVER, 0,
                              IID_IClassFactory, (LPVOID*)&ClassFactory);

    With the 32-bit client the result was fine and everything worked well. But when I tried to use CoGetClassObject from a 64-bit client, I received the failure "Class not registered".

    P.S. Restrictions:

    - I can only compile the COM server as 32-bit.
    - My OS is XP 64-bit.


  • Altering IFrame src content

    - by Nick
    I don't believe this can be done, but is it possible to alter the content of an iframe that is rendered via a src? The 3rd-party compiled ASP.NET control I use (the Telerik RadEditor, .NET 2 version) renders an iframe as part of its markup and does not contain a doctype, which causes problems in IE8 with certain elements. As the control is compiled, I cannot add the doctype in the source, so I was wondering if it is possible to add it another way. I have tried multiple things in jQuery such as:

        $(element).html().prepend("doc type here");
        $(element).html("doctype here" + $(element).html());

    and all other kinds of dodgy work.


  • Why are C, C++, and LISP so prevalent in embedded devices and robots?

    - by David
    It seems that the software language skills most sought for embedded devices and robots are C, C++, and LISP. Why haven't more recent languages made inroads into these applications? For example, Erlang would seem particularly well-suited to robotic applications, since it makes concurrent programming easier and allows hot swapping of code. Python would seem to be useful, if for no other reason than its support of multiple programming paradigms. I'm even surprised that Java hasn't made a foray into general robotic programming. I'm sure one argument would be, "Some newer languages are interpreted, not compiled" - implying that compiled languages are quicker and use fewer computational resources. Is this still the case, in a time when we can put a Java Virtual Machine on a cell phone or a SunSpot? (and isn't LISP interpreted anyway?)


  • DLL runtime error (C/C++/GCC/MSVC)

    - by coanor
    After two days of fighting, I made the DLL (compiled with GCC/G++) link correctly in MSVC, but while debugging I got a runtime error:

        Runtime Error!
        Program: my_exe.exe
        This application has requested the Runtime to terminate it in an unusual way.
        Please contact the application's support team for more information.

    I have tested it this way: I compiled a DLL in MinGW/GCC, then linked and debugged it in MSVC, and it worked correctly; but when I used it in my large project, I got the runtime error. I also tested the DLL in MinGW/GCC, where it works correctly, which suggests the runtime error does not come from a programming error but from DLL incompatibility between the different toolchains. Can anyone help me? Thanks, and forgive my poor English.


  • What is the Effect of Declaring 'extern "C"' in the Header to a C++ Shared Library?

    - by Adam
    Based on this question I understand the purpose of the construct in linking C libraries with C++ code. Now suppose the following: I have a '.so' shared library compiled with a C++ compiler. The header has a 'typedef struct' and a number of function declarations. Suppose the header wraps everything in the extern "C" declaration:

        #ifdef __cplusplus
        extern "C" {
        #endif

        // typedef struct ...;
        // function decls

        #ifdef __cplusplus
        }
        #endif

    What is the effect? Specifically, I'm wondering if there are any detrimental side effects of that declaration, since the shared library is compiled as C++, not C. Is there any reason to have the extern "C" declaration in this case?
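
    One concrete effect, sketched below with hypothetical names ("mylib", "frobnicate"): extern "C" gives the declared functions C linkage, so the .so exports plain, unmangled symbol names. Anything that resolves symbols by name at run time, whether dlsym, GetProcAddress, or the C# P/Invoke shown here, depends on that.

        using System;
        using System.Runtime.InteropServices;

        class Native
        {
            // "mylib" and "frobnicate" are hypothetical. With extern "C" the .so
            // exports the plain symbol "frobnicate"; without it, a C++ compiler
            // would export a mangled name (something like _Z10frobnicatei), and
            // this lookup-by-name would fail.
            [DllImport("mylib")]
            static extern int frobnicate(int x);

            static void Main()
            {
                Console.WriteLine(frobnicate(21));
            }
        }

    The flip side is mild: C linkage disables overloading for those names and keeps the interface to C-compatible signatures; since the header already guards the block with #ifdef __cplusplus, C++ consumers still compile the declarations as C++ and C consumers never see the difference.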


  • Linking error in OMNeT++ using Boost serialization library

    - by astriffe
    I'm very new to OMNeT++ and I'd like to use the serialization library contained in the Boost framework. However, when trying to use it, I get quite a few errors such as this one (from the IDE's problems view):

        Description: undefined reference to `boost::archive::archive_exception::~archive_exception()'
        Resource:    OmCCN, line 36
        Location:    external: /home/alexander/UniBE/BT/simulator/boost-compiledLibs/include/boost/serialization/throw_exception.hpp
        Type:        C/C++ Problem

    I guess the problem is that I didn't yet link the compiled library in OMNeT++. I've had a look at the makefile, but any changes there are worthless, since it is generated automatically by makemake. Furthermore, trying to access the 'makemake' menu item in the OMNeT++ IDE project properties throws an error ("The currently displayed page contains invalid values"). Can anyone give me a hint about what could be causing the error, or how to link the compiled library correctly? Any hints are very appreciated! cheers alex


  • How to create a web framework in C# without ASP?

    - by Mark
    I've managed to get a C# ASP page running under Ubuntu/Apache/Mono, but I don't want to write my framework in these ASP pages; I want to use straight C# and then a templating language for my views. But I don't know where to begin. C# is a compiled language, so... how would I do this? Would I compile everything and then have Apache hook into the (single) executable and pass in the request URL? Could I request specific .cs pages and then have Apache tell it to compile and then "display" it only if it's been updated? Can the "view" files be compiled individually to avoid having to recompile everything every time there's a change? Is there some "base" I can work from, or am I going to have to reinvent accessing GET and POST variables (by reading header info) and all sorts of other stuff we take for granted in languages like PHP?
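
    One hedged starting point, using only APIs that ship in the BCL: System.Net.HttpListener (available on Mono) lets a single compiled executable serve HTTP directly, with GET and POST parsing essentially free, and Apache can reverse-proxy to it. The port, routing, and "name" parameter below are illustrative.

        using System;
        using System.IO;
        using System.Net;
        using System.Text;
        using System.Web;   // HttpUtility lives in System.Web.dll (present in Mono)

        class MiniServer
        {
            static void Main()
            {
                // Apache would reverse-proxy to this port; the prefix is illustrative.
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/");
                listener.Start();

                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();

                    // GET variables arrive pre-parsed.
                    string name = ctx.Request.QueryString["name"] ?? "world";

                    // POST variables: read the body and parse the form payload.
                    if (ctx.Request.HttpMethod == "POST")
                    {
                        using (var reader = new StreamReader(ctx.Request.InputStream))
                        {
                            var form = HttpUtility.ParseQueryString(reader.ReadToEnd());
                            name = form["name"] ?? name;
                        }
                    }

                    byte[] body = Encoding.UTF8.GetBytes("Hello, " + name + "\n");
                    ctx.Response.ContentType = "text/plain";
                    ctx.Response.OutputStream.Write(body, 0, body.Length);
                    ctx.Response.Close();
                }
            }
        }

    For completeness, the beaten path on Mono is mod_mono hosting System.Web applications inside Apache, but the sketch shows that a plain compiled C# program behind a proxy already covers the GET/POST plumbing the question worries about.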

