Search Results

Search found 3162 results on 127 pages for 'compiled'.


  • VB.Net Application Settings / ClickOnce

    - by B Z
    VS 2008 / VB.Net / WinForms. I have an application setting (Settings.settings) in a project that I deploy with ClickOnce. I used the VS editor to create the setting, and I can see it in the app.config file:

        <applicationSettings>
          <MyApp.Win.My.MySettings>
            <setting name="MySetting" serializeAs="String">
              <value>False</value>
            </setting>
          </MyApp.Win.My.MySettings>
        </applicationSettings>

    I would like to update this setting after the application is compiled; it exists for testing purposes only. If I change xxx.config.deploy and reinstall the app with ClickOnce, the new value doesn't take effect (it seems to be cached somewhere). Even if I change the setting on my local PC, it still appears to be cached. If I open the project in VS, it asks me to re-sync the settings, but I need to do this after the application is compiled. Thanks for any help.

    Read the article

  • Compiling and linking libcurl to create a standalone DLL

    - by Haraldo
    Hi, I've managed to compile a DLL with the necessary linked libraries (*.lib) and with CURL_STATICLIB set in the preprocessor section, among other settings. I'm using the "libcurl-7.19.3-win32-ssl-msvc.zip" package and compiling with VS 2008 Express. This is the first version I've managed to compile properly with no link issues. The problem now is that my DLL needs libcurl.dll to function, and that is not OK: my DLL needs to be independent. I have no idea how to achieve this; it has taken all day just to get what I have compiled. My settings:

    - Runtime Library is set to Multi-threaded DLL (debug/release respectively) under C/C++ → Code Generation.
    - A number of preprocessor symbols are set, CURL_STATICLIB being one of them.
    - Configuration Type is set to Dynamic Library.
    - Use of MFC is set to "Use MFC in a Static Library".
    - Additional Library Directories points at the lib folders (debug/release respectively).

    I've noticed there is a curllib_static.lib file, which I've tried instead of curllib.lib as an additional dependency, but the project only links with the latter. This is driving me nuts! I need some guidance on how to make my DLL completely static so it has no dependencies. Currently it depends on:

        CURLLIB.DLL
        MSVCR90D.DLL

    As I'm pretty new to C++ it could be a setting I'm missing in VS 2008, but I'm not sure. One person said I should be using a static library with *.a files (libcurl.a) etc., but when I do that I get link errors which I haven't been able to resolve. Any guidance here would be much appreciated.
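
    A minimal sketch of a translation unit consuming a static libcurl build, assuming the curllib_static.lib name from the package in the question; the extra system libraries are the usual requirements once no DLL resolves them for you (wldap32 and winmm are assumptions based on common libcurl builds of this era):

        // static_curl_check.cpp - smoke test for a static libcurl link (sketch)
        #define CURL_STATICLIB              // must be seen before curl.h, or the
                                            // headers declare dllimport symbols
        #include <curl/curl.h>

        #pragma comment(lib, "curllib_static.lib")
        #pragma comment(lib, "ws2_32.lib")   // Winsock, always needed by libcurl
        #pragma comment(lib, "wldap32.lib")  // LDAP support in default builds
        #pragma comment(lib, "winmm.lib")    // timer functions

        int main()
        {
            curl_global_init(CURL_GLOBAL_ALL);
            CURL* handle = curl_easy_init();  // would fail if the link were broken
            if (handle)
                curl_easy_cleanup(handle);
            curl_global_cleanup();
            return 0;
        }

    The MSVCR90D.DLL dependency, for its part, comes from building with the /MDd runtime; a truly self-contained DLL means /MT (or /MTd) in every project in the chain, with all the linked .lib files built the same way.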

    Read the article

  • Linking error while using Qt static built libraries

    - by Kamran Amini
    I hope this is not a duplicate. Recently I've been developing a native C++ application using Qt 4.8.3 and VS2008. Since clients run the application on their naked machines, they would need to install the VC++ 2008 redistributable package, so I decided to link statically. I changed my project settings (C/C++ → Code Generation → Runtime Library) to /MTd. I also compiled Qt again, this time as a static build, using the following steps (originally found on the blog post "Static Qt with static CRT (VS 2008)"):

    1. Replace -MD with -MT in the QMAKE_CFLAGS_RELEASE and QMAKE_CFLAGS_DEBUG lines of %QDIR%\mkspecs\win32-msvc2008\qmake.conf
    2. nmake confclean
    3. configure -static -platform win32-msvc2008 -no-webkit
    4. nmake sub-src

    Qt compiled successfully. But when I tried to compile my application again, it gave me some strange errors:

        1>Linking...
        1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: bool __thiscall QBasicAtomicInt::deref(void)" (?deref@QBasicAtomicInt@@QAE_NXZ) already defined in QtCored4.lib(QtCored4.dll)
        1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: bool __thiscall QBasicAtomicInt::operator!=(int)const " (??9QBasicAtomicInt@@QBE_NH@Z) already defined in QtCored4.lib(QtCored4.dll)
        1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: __thiscall QString::~QString(void)" (??1QString@@QAE@XZ) already defined in QtCored4.lib(QtCored4.dll)

    I changed some lib files, but each change made the situation worse; for example, I tried to use QtCored.lib instead of QtCored4.lib because it is the library newly created by the static build. I think I've missed something in building the static Qt libs. Thanks.

    Read the article

  • Problems with reading into buffer using boost::asio::async_read

    - by Max
    Good day. I have a Types.hpp file in my project containing:

        namespace RC {
            struct ViewSettings {
                // ....
            };
        }

    In Server.cpp I include Types.hpp and have:

        class Session {
            // .....
            RC::ViewSettings tmp;

            boost::asio::async_read(socket_,
                boost::asio::buffer(&tmp, sizeof(RC::ViewSettings)),
                boost::bind(&Session::Finish_Reading_Data, shared_from_this(),
                            boost::asio::placeholders::error));
            // .....
        };

    During compilation I get these errors:

        error C2825: 'F': must be a class or namespace when followed by '::'
                : see reference to class template instantiation 'boost::_bi::result_traits<R,F>' being compiled
                with [ R=boost::_bi::unspecified, F=void (__thiscall Session::* )(void) ]
                : see reference to class template instantiation 'boost::_bi::bind_t<R,F,L>' being compiled
                with [ R=boost::_bi::unspecified, F=void (__thiscall Session::* )(void),
                       L=boost::_bi::list2<boost::_bi::value<boost::shared_ptr<Session>>,boost::arg<1>> ]
        error C2039: 'result_type' : is not a member of '`global namespace''

    Yet code like this works properly:

        int w;
        boost::asio::async_read(socket_,
            boost::asio::buffer(&w, sizeof(int)),
            boost::bind(&Session::Handle_Read_Width, shared_from_this(),
                        boost::asio::placeholders::error));

    Please help. What's the problem here? Thanks in advance.
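
    The template dump actually names the culprit: F=void (__thiscall Session::* )(void) says the bound member function takes no parameters, while boost::asio::placeholders::error promises to forward one argument to it (the working Handle_Read_Width presumably accepts the error code). A minimal sketch of the fix, assuming the handler name from the question:

        // The completion handler must accept the argument the placeholder stands for.
        void Session::Finish_Reading_Data(const boost::system::error_code& error)
        {
            if (!error) {
                // tmp now holds sizeof(RC::ViewSettings) bytes read from the socket
            }
        }

    Two further things worth noting for a read into a struct: tmp must outlive the asynchronous operation (a data member, as here, is fine; a local would not be), and reading raw bytes into a struct this way only makes sense if ViewSettings is a POD type with an agreed-upon layout on both ends of the connection.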

    Read the article

  • C# debugging issue: No symbols are loaded for any call stack frame.

    - by Ciaran Bruen
    Hi - I'm trying to step into a method referenced in an external DLL from a C# web service DLL. I'm developing the web service code and can step into it from my WinForms app. The DLL I'm trying to step into from the web service was developed by someone else, and I have both the dll and pdb files. When I try to step into it I get the message: "No symbols are loaded for any call stack frame. The source code cannot be displayed." Here is my project setup:

    - .NET 3.5, VS 2008 Professional, IIS 7 running on Vista Ultimate
    - WinForms app WF1.exe, referencing web service dll WS1.dll, in one solution on my machine
    - Database access dll DA1.dll, compiled by another developer, referenced by WS1.dll
    - DA1.dll and DA1.pdb located in the root directory of the WS1 web service project
    - WS1 compiled and published to my local IIS; DA1.dll and DA1.pdb get copied to the IIS WS1 bin directory

    So far so good, and everything works to a point: I break and step into WF1.exe, then break and step into a method on WS1.dll with no problems. However, when I try to step into a method on DA1.dll the error occurs. Any help appreciated. Cheers, Ciaran

    Read the article

  • Boost causes an invalid block while overloading new/delete operators

    - by user555746
    Hi, I have a problem that appears to be an invalid memory block, occurring during a Boost call to boost::runtime::cla::parser::~parser. When the global delete is called on that object, the CRT asserts on the memory block as invalid:

        dbgdel.cpp(52):
        /* verify block type */
        _ASSERTE(_BLOCK_TYPE_IS_VALID(pHead->nBlockUse));

    My investigation revealed that the problem happens because of a global overloading of the new/delete operators. Those overloads are placed in a separate DLL. I discovered that the problem happens only when that DLL is compiled in RELEASE while the main application is compiled in DEBUG. So I thought the Release/Debug build flavors might create a problem like this in Boost/CRT when overloading new/delete. I then tried explicitly calling _malloc_dbg and _free_dbg within the overloads even in release mode, but it didn't solve the invalid heap block problem. Any idea what the root cause is? Is this situation solvable? I should stress that the problem began only when I started to use Boost; before that the CRT never complained about any invalid memory block. So could it be an internal Boost bug? Thanks!
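
    The symptom fits a CRT mismatch rather than a Boost bug: with the DLL built in release and the EXE in debug, two different C runtimes are live in the process, and a block handed out by the release allocator carries none of the debug-heap bookkeeping that _BLOCK_TYPE_IS_VALID checks for. A minimal sketch of the rule that usually keeps this working, assuming the global overloads live in the DLL - route both sides of every allocation through the same module and the same CRT:

        // dll_alloc.cpp - global operator new/delete kept as a matched pair (sketch)
        #include <cstdlib>
        #include <new>

        void* operator new(std::size_t size)
        {
            if (void* p = std::malloc(size))   // this module's CRT malloc
                return p;
            throw std::bad_alloc();
        }

        void operator delete(void* p) throw()
        {
            std::free(p);                      // must be the same CRT's free
        }

    Calling _malloc_dbg/_free_dbg from the release DLL doesn't help, because in a release CRT those map back to the plain allocator rather than to the debug CRT the EXE is using. The dependable fixes are building both binaries against the same runtime flavor, or ensuring every object is freed by the module (and CRT) that allocated it.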

    Read the article

  • How to persist objects between requests in PHP

    - by SztupY
    I've been using rails, merb, django and asp.net mvc applications in the past. What they have in common (and what is relevant to the question) is that they have code that sets up the framework. This usually means creating objects and state that is persisted until the web server is recycled (like setting up routing, or checking which controllers are available, etc.). As far as I know, PHP is more like a CGI script that gets compiled to some bytecode each time it runs and discarded after the request. Of course you can have sessions to persist data between requests from the same user, and as I see there are extensions like APC with which you can persist objects between requests at the server level. My question is: how can one create a PHP application that works like rails and such? I mean an application that on the first request sets up the framework, and on subsequent requests uses the objects that are already set up. Is there some built-in caching facility in mod_php (for example, one that stores the compiled bytecode of executed PHP applications), or is using APC or some similar extension the only way to solve this problem? How would you do it? Thanks.

    Read the article

  • How to reliably replace a library-defined error handler with my own?

    - by sharptooth
    On certain error cases ATL invokes AtlThrow(), which is implemented as ATL::AtlThrowImpl(), which in turn throws CAtlException. The latter is not very good - CAtlException is not even derived from std::exception - and since we use our own exception hierarchy, we would have to catch CAtlException separately here and there, which is lots of extra, error-prone code. It looks like it is possible to replace ATL::AtlThrowImpl() with my own handler - define _ATL_CUSTOM_THROW and define AtlThrow() to be the custom handler before including atlbase.h - and ATL will call the custom handler. Not so easy. Some of the ATL code does not come as sources - it comes compiled as a library, either static or dynamic. We use the static one, atls.lib. And it is compiled in such a way that it contains ATL::AtlThrowImpl() and some code calling it. I used a static analysis tool, and it clearly shows there are paths on which the old default handler is called. To make sure, I even tried to "reimplement" ATL::AtlThrowImpl() in my own code; now the linker reports two definitions of ATL::AtlThrowImpl(), which I suppose confirms that there's another implementation that can be called by some code. How can I handle this? How do I replace the default handler completely and ensure that it is never called?
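
    A sketch of the documented hook, assuming a common header included before any ATL header in every translation unit; MyAtlThrow and my::com_error are hypothetical names standing in for your own handler and exception hierarchy:

        // stdafx.h (sketch)
        #include <windows.h>

        #define _ATL_CUSTOM_THROW
        #define AtlThrow(hr) MyAtlThrow(hr)   // must be defined before atlbase.h

        __declspec(noreturn) inline void MyAtlThrow(HRESULT hr)
        {
            throw my::com_error(hr);          // hypothetical project exception type
        }

        #include <atlbase.h>

    This redirects every AtlThrow call compiled in your own sources. As the question observes, it cannot touch the copies already compiled into atls.lib, so a belt-and-braces approach also catches CAtlException at the few boundaries where ATL library code runs and translates it into the project hierarchy there.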

    Read the article

  • Moving .NET assemblies away from the application base directory?

    - by RasmusKL
    I have a WinForms application with a bunch of third-party references, which makes the output folder quite messy. I'd like to place the compiled / referenced DLLs into a common subdirectory of the output folder - bin, lib, whatever - and have only the executables (plus needed configs etc.) reside in the output folder itself. After some searching I ran into assembly probing (http://msdn.microsoft.com/en-us/library/4191fzwb.aspx) and verified that if I set this up and manually move the assemblies, my application still works when they are stored in the designated subdirectory, like so:

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <probing privatePath="bin" />
            </assemblyBinding>
          </runtime>
        </configuration>

    However, this doesn't solve the build part - is there any way to specify where referenced assemblies and compiled library assemblies go? The only solutions I can think of off the top of my head are post-build actions, or dropping the idea and using ILMerge or something. There has got to be a better way of defining the structure :-)

    Read the article

  • Compiling and using the NTL C++ library on Windows

    - by Martin Lauridsen
    Hi there, I have compiled the NTL infinite-precision integer arithmetic library for C++ using Microsoft Visual Studio 2008. I did as explained on the NTL site, using the Visual Studio interface rather than the command prompt (actually I would rather do it from the command prompt, but I was not sure how). Anyhow, I got the library compiled, and now I want to compile a program using the library from the command prompt. The program I am trying to compile has been tested on a Linux system, where I compile it with:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include mpqs.cpp main.cpp -o main \
            -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl \
            -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm

    (Never mind the gmp parts; I don't have that installed on Windows. It is purely an optional thing that makes NTL run faster.) This works fine on Linux. Now on Windows I run:

        cl /EHsc /I D:\Downloads\WinNTL-5_5_2\include mpqs.cpp main.cpp /link /LIBPATH:"D:\Documents\Visual Studio 2008\Projects\ntl\Debug"

    But this results in the following errors:

        mpqs.cpp
        mpqs.cpp(38) : error C2039: 'find_smooth_vals' : is not a member of 'QS'
                d:\desktop\qs\mpqs.h(12) : see declaration of 'QS'
        mpqs.cpp(41) : error C2065: 'M' : undeclared identifier
        mpqs.cpp(41) : error C2065: 'n' : undeclared identifier
        mpqs.cpp(42) : error C2065: 'sieve_table' : undeclared identifier
        mpqs.cpp(42) : error C2228: left of '.size' must have class/struct/union type is ''unknown-type''
        mpqs.cpp(43) : error C2065: 'sieve_table' : undeclared identifier
        mpqs.cpp(44) : error C2065: 'qx_table' : undeclared identifier
        mpqs.cpp(44) : error C3861: 'test_smoothness': identifier not found
        mpqs.cpp(45) : error C2065: 'smooth_indices' : undeclared identifier
        mpqs.cpp(45) : error C2228: left of '.push_back' must have class/struct/union type is ''unknown-type''
        main.cpp
        Generating Code...

    It is as if my mpqs.h file is not included in the compilation process. I also don't understand why it complains about .push_back() for a vector type. Help is much appreciated!

    Read the article

  • C# vs C - Big performance difference

    - by John
    I'm finding massive performance differences between similar code in C and C#. The C code is:

        #include <stdio.h>
        #include <time.h>
        #include <math.h>

        main()
        {
            int i;
            double root;
            clock_t start = clock();
            for (i = 0; i <= 100000000; i++) {
                root = sqrt(i);
            }
            printf("Time elapsed: %f\n", ((double)clock() - start) / CLOCKS_PER_SEC);
        }

    And the C# (console app) is:

        using System;
        using System.Collections.Generic;
        using System.Text;

        namespace ConsoleApplication2
        {
            class Program
            {
                static void Main(string[] args)
                {
                    DateTime startTime = DateTime.Now;
                    double root;
                    for (int i = 0; i <= 100000000; i++)
                    {
                        root = Math.Sqrt(i);
                    }
                    TimeSpan runTime = DateTime.Now - startTime;
                    Console.WriteLine("Time elapsed: " + Convert.ToString(runTime.TotalMilliseconds/1000));
                }
            }
        }

    With the above code, the C# completes in 0.328125 seconds (release version) and the C takes 11.14 seconds to run. The C is being compiled to a Windows executable using MinGW. I've always been under the assumption that C/C++ were faster than, or at least comparable to, C#/.NET. What exactly is causing the C to run over 30 times slower? EDIT: It does appear that the C# optimizer was removing root as it wasn't being used. I changed the root assignment to root += and printed out the total at the end. I've also compiled the C using cl.exe with the /O2 flag set for max speed. The results are now: 3.75 seconds for the C, 2.61 seconds for the C#. The C is still taking longer, but this is acceptable.
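
    For reference, a sketch of the corrected C benchmark described in the EDIT, assuming the same loop bounds; accumulating into the result and printing it keeps the optimizer from discarding the loop as dead code:

        #include <stdio.h>
        #include <time.h>
        #include <math.h>

        int main(void)
        {
            double total = 0.0;
            clock_t start = clock();
            for (int i = 0; i <= 100000000; i++) {
                total += sqrt((double)i);      /* result is used, so it must be computed */
            }
            printf("Time elapsed: %f\n", ((double)clock() - start) / CLOCKS_PER_SEC);
            printf("Total: %f\n", total);      /* observable side effect */
            return 0;
        }

    Built with optimizations enabled (cl /O2 or gcc -O2), this measures the square-root loop itself rather than the compiler's willingness to delete it, which is what the original 30x gap was really comparing.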

    Read the article

  • How to provide js-ctypes in a SpiderMonkey embedding?

    - by Triston J. Taylor
    Summary: I have looked over the code the SpiderMonkey 'shell' application uses to create the ctypes JavaScript object, but I'm a less-than-novice C programmer. Due to the varying levels of insanity emitted by modern build systems, I can't seem to track down the code or command that actually links a program with the desired functionality.

    method.madness: The js-ctypes implementation by the Mozilla devs is an awesome addition. Since its conception, scripting has been primarily used to exert control over more rigorous and robust applications. The advent of js-ctypes in the SpiderMonkey project enables JavaScript to stand up and be counted as a full-fledged object-oriented rapid application development language, flying high above 'the bar' set by various venerable application development languages such as Microsoft's VB6. Shall we begin? I built SpiderMonkey with this config:

        ./configure --enable-ctypes --with-system-nspr

    followed by a successful execution of:

        make && make install

    The js shell works fine, and a global ctypes JavaScript object was verified operational in that shell. Working with code taken from the first source listing at "How to embed the JavaScript Engine" (MDN), I made an attempt to instantiate the JavaScript ctypes object by inserting the following code at line 66:

        /* Populate the global object with the ctypes object. */
        if (!JS_InitCTypesClass(cx, global))
            return NULL;

    I compiled with:

        g++ $(./js-config --cflags --libs) hello.cpp -o hello

    It compiles with a few warnings:

        hello.cpp: In function ‘int main(int, const char**)’:
        hello.cpp:69:16: warning: converting to non-pointer type ‘int’ from NULL [-Wconversion-null]
        hello.cpp:80:20: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
        hello.cpp:89:17: warning: NULL used in arithmetic [-Wpointer-arith]

    But when you run the application:

        ./hello: symbol lookup error: ./hello: undefined symbol: JS_InitCTypesClass

    Moreover, JS_InitCTypesClass is declared extern in 'dist/include/jsapi.h', but the function resides in 'ctypes/CTypes.cpp', which includes its own header 'CTypes.h' and is compiled at some point by some command during 'make' to yield './CTypes.o'. As I stated earlier, I am less than a novice with the C code, and I really have no idea what to do here. Please give, or give direction to, a generic example of making the js-ctypes object functional in an embedding.

    Read the article

  • Applet Not Loading In Java 1.6.0_16

    - by Wayne Hartman
    I am running a Java applet compiled in 1.5 and am experiencing odd behavior when running it on computers with 1.6.0_07 and 1.6.0_16. On the *_07 version, the applet initializes and loads in the browser perfectly fine. However, computers with *_16 do not load it at all. Stranger still, there is nothing in the Java Console to indicate any problem loading the applet in the browser. If I run the applet as a standalone app on said machines, it loads up just fine. The compatibility mode in the JNLP is set to 1.5+. Firefox reports no errors attempting to load the applet. Even with full tracing and logging set to all, nothing is reported in the console window. Quick facts:

    - The JAR is signed
    - Compiled in 1.5
    - Works flawlessly in browsers (FF & IE) running *_07
    - Does not work in browsers (FF & IE) running *_16
    - Works running as a standalone app in *_16
    - JNLP set to 1.5+
    - Clients are mixed *_07, *_16, and other versions of 1.6

    Things I have tried:

    - Forcing the JVM to use version 1.6.0_07. This requires the user to download *_07; in my situation this is not an option, unfortunately.
    - Running the app on *_16 as a standalone app. This works fine, but it needs to run as an applet.

    Ideas?

    Read the article

  • What languages should a microISV use to write commercial software?

    - by Wal
    I've been writing software in Java for many years now, but it was always for internal applications that would be deployed to a server. I'd like to get into writing desktop applications now, but I don't know where to start. I've written a few Java/Swing applications, but again they were for internal use. My understanding is that Java and other semi-compiled and interpreted languages are too easy to reverse engineer, making them unsuitable for commercial software. I am aware that there are compilers for Java and some other interpreted languages, but I've also heard that they are pricey and/or unreliable. Assuming I start a microISV and wish to develop and sell applications to a broad audience, what's my best bet? I would prefer something that can be written close to once and compiled for different operating systems, but I am not opposed to .NET and a Windows-only audience if other languages would compromise the experience (installation ease & user experience) on Windows. My only issue there is that I don't have a large starting budget, and paying out the wazoo for the required development tools is not really in the cards.

    Read the article

  • Compiling a GStreamer plugin on Windows

    - by utnapistim
    Hello all. My question: what is the correct way to compile a GStreamer plugin on Windows so that it will be accepted by GStreamer (actually Songbird on top of GStreamer)? My setup: I have downloaded the Songbird sources following the steps described on their site, and I have a trunk/dependencies/windows-i686-msvc8 directory within my svn sources with all the GStreamer binaries. I created an empty GStreamer plugin skeleton following the steps detailed in the GStreamer Plugin Writer's Guide, and compiled it against the GStreamer binaries in the Songbird dependencies folder. The compilation was done with VS 2010 RC1 (Visual Studio 2008 yielded the same results), using an empty DLL project with the .h and .c files generated using the guide. The DLL was linked with:

        libcpmt.lib, libcmt.lib, ws2_32.lib, gobject-2.0.lib, gthread-2.0.lib,
        gstreamer-0.10-0.lib, glib-2.0.lib, kernel32.lib, nspr4.lib

    ignoring all default libraries. I have compiled the files as both .c and .cpp with the same results. Testing: I installed the Songbird binaries corresponding to the correct svn version, then installed the Songbird Developer Tools addon and used it to create an addon for testing my GStreamer plugin. Songbird will not load the plugin. I also tried to load it with gst-launch.exe from the trunk/dependencies/windows-i686-msvc8/[...] directory, and that generated runtime error R6034: "An application has made an attempt to load the C runtime library incorrectly." Most resources I found for this problem recommended restarting or reinstalling Windows :(.

    Read the article

  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: what do you use for configurable (and preferably captured) logging inside the C++ bits of a Python project? Details follow.

    Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed; I'm thinking about how to make it more flexible. The choices I see (a sketch of the first option's C++ side follows this list):

    1. A module-wide configuration call:

        # foo/__init__.py
        import sys
        from _foo import bar, baz, configure_log
        configure_log(sys.stdout, WARNING)

        # tests/test_foo.py
        def test_foo():
            # Maybe a custom context to change the logfile for
            # the module and restore it at the end.
            with CaptureLog(foo) as log:
                assert foo.bar() == 5
                assert log.read() == "124.24 - foo - INFO - Bar returning 5"

    2. Have every compiled function that does logging accept optional log parameters:

        # foo.c
        int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
        {
            LoggerPtr logger = default_logger("foo");
            if (logfile != Py_None)
                logger = file_logger(logfile, loglevel);
            ...
        }

        # tests/test_foo.py
        def test_foo():
            with TemporaryFile() as logfile:
                assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
                assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"

    3. Some other way?

    The second seems somewhat cleaner, but it requires altering function signatures (or using kwargs and parsing them). The first is probably somewhat awkward, but it sets up the entire module in one go and removes the logic from each individual function. What are your thoughts on this? I'm all ears for alternative solutions as well. Thanks,
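
    For the module-wide option, a minimal sketch of what the C++ side of configure_log might look like, assuming log4cxx as in the question; configure_log is the hypothetical exported entry point, the thresholds mirror Python's logging constants, and only standard log4cxx calls are used:

        // foo_logging.cpp (sketch)
        #include <log4cxx/logger.h>
        #include <log4cxx/level.h>

        // Called from the Python wrapper with logging.WARNING, logging.DEBUG, etc.
        void configure_log(int python_level)
        {
            log4cxx::LoggerPtr logger = log4cxx::Logger::getLogger("foo");
            if (python_level >= 40)            // logging.ERROR
                logger->setLevel(log4cxx::Level::getError());
            else if (python_level >= 30)       // logging.WARNING
                logger->setLevel(log4cxx::Level::getWarn());
            else if (python_level >= 20)       // logging.INFO
                logger->setLevel(log4cxx::Level::getInfo());
            else                               // logging.DEBUG and below
                logger->setLevel(log4cxx::Level::getDebug());
        }

    This keeps each compiled function free of logging parameters; redirecting output per test (the CaptureLog context above) then becomes a matter of swapping the appender on the "foo" logger rather than threading file handles through every call.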

    Read the article

  • PDCurses TUI with C++ Win32 console application

    - by Bach
    I have downloaded the PDCurses source and was able to successfully include curses.h in my project, link the pre-compiled library, and all good. After a few hours of trying out the library, I saw tuidemo.c in the demos folder, compiled it into an executable and - brilliant! - exactly what I need for my project. Now the problem is that it is C code, and I am working on a C++ project in VS C++ 2008. The files I need are tui.c and tui.h. How can I include that C file in my C++ code? I saw a few suggestions here, but the compiler was not happy, with hundreds of warnings and errors. EDIT: I added an extern "C" declaration, so my test looks like this now, but I'm getting a different kind of error:

        #include <stdio.h>
        #include <stdlib.h>

        using namespace std;

        extern "C" {
        #include <tui.h>
        }

        void sub0(void)
        {
            // do nothing
        }

        void sub1(void)
        {
            // do nothing
        }

        int main(int argc, char* const argv[])
        {
            menu MainMenu[] = {
                { "Asub", sub0, "Go inside first submenu" },
                { "Bsub", sub1, "Go inside second submenu" },
                { "", (FUNC)0, "" }  /* always add this as the last item! */
            };

            startmenu(MainMenu, "TUI - 'textual user interface' demonstration program");
            return 0;
        }

    Although it compiles successfully, it throws an error at runtime - "0xC0000005: Access violation reading location 0x021c52f9" - at the startmenu(...) line. Not sure where to go from here. Thanks again.
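
    On the linkage half of the question, the usual idiom is to put the extern "C" guard inside the header itself, so tui.h can be consumed from both C and C++ translation units while tui.c keeps compiling as plain C. A sketch, assuming the header can be edited:

        /* tui.h (sketch of the guard) */
        #ifndef TUI_H
        #define TUI_H

        #ifdef __cplusplus
        extern "C" {
        #endif

        typedef void (*FUNC)(void);
        /* ... the menu struct and startmenu() declaration go here ... */

        #ifdef __cplusplus
        }
        #endif

        #endif /* TUI_H */

    The runtime access violation is a separate matter; with linkage settled, the next things worth checking are that curses is initialised before startmenu runs, and that the menu struct's layout (packing, string fields) matches between the C library build and the C++ test program.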

    Read the article

  • Qt QSslError being signaled with the error code set to NoError

    - by Nantucket
    My problem: I compiled OpenSSL into Qt to enable OpenSSL support. Everything appeared to go correctly in the compile. However, when I try to use the official HTTP example application (found here), every time I download an https page it signals two QSslErrors, each with the contents NoError. The types of QSslError, including NoError, are documented, poorly; there is no explanation of why a NoError error type even exists, or what it means. Bizarrely, the NoError error code seems to be harmless, as the remote https document downloads perfectly even while the error is being signaled. Does anyone have any idea what this means and what could possibly be causing it?

    Optional background reading: here is the relevant part of the code from the example app (connected to the network reply's sslErrors signal by the constructor):

        void HttpWindow::sslErrors(QNetworkReply*, const QList<QSslError> &errors)
        {
            QString errorString;
            foreach (const QSslError &error, errors) {
                if (!errorString.isEmpty())
                    errorString += ", ";
                errorString += error.errorString();
            }

            if (QMessageBox::warning(this, tr("HTTP"),
                                     tr("One or more SSL errors has occurred: %1").arg(errorString),
                                     QMessageBox::Ignore | QMessageBox::Abort) == QMessageBox::Ignore) {
                reply->ignoreSslErrors();
            }
        }

    I have tried the old version of this example, and it produced the same result. I have tried OpenSSL 1.0.0a and 0.9.8o. I have tried compiling OpenSSL myself and using pre-compiled versions from the net. All produce the same result. If this were my first time using Qt with SSL, I would almost think this is the intended result (even though the example application pops up error message windows), if not for the fact that the last time I played with Qt - using what would now be an old version of Qt with an old version of OpenSSL - I distinctly remember everything working fine with no error windows. My system is running Windows 7 x64.
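
    For what it's worth, a sketch of a handler that filters out the NoError entries before deciding anything went wrong; QSslError::error() and QSslError::NoError are part of the documented API, and the rest follows the example's own structure:

        void HttpWindow::sslErrors(QNetworkReply* reply, const QList<QSslError>& errors)
        {
            QStringList messages;
            foreach (const QSslError& error, errors) {
                if (error.error() == QSslError::NoError)
                    continue;                      // placeholder entry, not a failure
                messages << error.errorString();
            }

            if (messages.isEmpty())
                return;                            // nothing actually went wrong

            if (QMessageBox::warning(this, tr("HTTP"),
                    tr("One or more SSL errors has occurred: %1").arg(messages.join(", ")),
                    QMessageBox::Ignore | QMessageBox::Abort) == QMessageBox::Ignore) {
                reply->ignoreSslErrors();
            }
        }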

    Read the article

  • C++ Exception Handling

    - by user1413793
    So I was writing some code and I noticed that, apart from syntactical, type, and other compile-time errors, C++ does not seem to throw any exceptions. So I decided to test this with a very trivial program:

        #include <iostream>

        int main()
        {
            std::cout << 5 / 0 << std::endl;
            return 1;
        }

    When I compiled it using g++, g++ gave me a warning saying I was dividing by 0, but it still compiled the code. Then when I ran it, it printed some really large arbitrary number. What I want to know is: how does C++ deal with exceptions? Integer division by 0 should be a very trivial example of when an exception should be thrown and the program should terminate. Do I have to essentially enclose my entire program in a huge try block and then catch certain exceptions? I know that in Python, when an exception is thrown, the program immediately terminates and prints out the error. What does C++ do? Are there even runtime exceptions which stop execution and kill the program?
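
    Integer division by zero is undefined behavior in C++, and the language throws nothing for it; on platforms where the hardware traps, the fault surfaces as a signal or structured exception rather than a C++ exception, and an uncaught C++ exception terminates the program via std::terminate. A sketch of the usual manual check, assuming you want something catchable:

        #include <iostream>
        #include <stdexcept>

        int checked_div(int a, int b)
        {
            if (b == 0)
                throw std::domain_error("division by zero");
            return a / b;
        }

        int main()
        {
            try {
                std::cout << checked_div(5, 0) << std::endl;
            } catch (const std::exception& e) {
                std::cerr << "error: " << e.what() << std::endl;  // prints our message
                return 1;
            }
            return 0;
        }

    Unlike Python, C++ has no runtime layer converting such faults into exceptions, which is why the sample program is free to print a garbage value instead of stopping.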

    Read the article

  • Question about compilers and how they work

    - by Marin Doric
    This is C code that frees the memory of a singly linked list. It is compiled with Visual C++ 2008 and works as it should:

        /* Program done, so free allocated memory */
        current = head;
        struct film * temp;
        temp = current;
        while (current != NULL)
        {
            temp = current->next;
            free(current);
            current = temp;
        }

    But I have also encountered (even in books) the same task written like this:

        /* Program done, so free allocated memory */
        current = head;
        while (current != NULL)
        {
            free(current);
            current = current->next;
        }

    If I compile that code with my VC++ 2008, the program crashes, because I am first freeing current and then assigning current->next to current. But obviously, if I compile this code with some other compiler (for example, the compiler the book's author used), the program works. So the question is: why does this code work when compiled with a specific compiler? Is it because that compiler emits instructions that remember the address of current->next even though current was freed, while my VC++ doesn't? I just want to understand how compilers work.

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really what is going on? I'm trying to improve build times, and by using pre-compiled headers I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating code..." message. I do not have "Whole Program Optimization" or "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above). EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps. EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools → Options → Projects and Solutions → Build and Run, nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

    Read the article

  • Is the .NET 4.0 runtime slower than the .NET 2.0 runtime?

    - by DxCK
    After I upgraded my projects to .NET 4.0 (with VS2010), I realized that they run slower than they did on .NET 2.0 (VS2008). So I decided to benchmark a simple console application in both VS2008 and VS2010, with various target frameworks:

        using System;
        using System.Diagnostics;
        using System.Reflection;

        namespace RuntimePerfTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine(Assembly.GetCallingAssembly().ImageRuntimeVersion);

                    Stopwatch sw = new Stopwatch();
                    while (true)
                    {
                        sw.Reset();
                        sw.Start();
                        for (int i = 0; i < 1000000000; i++)
                        {
                        }
                        TimeSpan elapsed = sw.Elapsed;
                        Console.WriteLine(elapsed);
                    }
                }
            }
        }

    Here are the results:

        VS2008
            Target Framework 2.0:                ~0.25 seconds
            Target Framework 3.0:                ~0.25 seconds
            Target Framework 3.5:                ~0.25 seconds

        VS2010
            Target Framework 2.0:                ~3.8 seconds
            Target Framework 3.0:                ~3.8 seconds
            Target Framework 3.5:                ~1.51 seconds
            Target Framework 3.5 Client Profile: ~3.8 seconds
            Target Framework 4.0:                ~1.01 seconds
            Target Framework 4.0 Client Profile: ~1.01 seconds

    My initial conclusion is obviously that programs compiled with VS2008 work faster than programs compiled with VS2010. Can anyone explain these performance changes between VS2008 and VS2010, and between the different target frameworks inside VS2010 itself?

    Read the article

  • Why can't I simply copy installed Perl modules to other machines?

    - by pistacchio
    Being very new to Perl but not to dynamic languages, I'm a bit surprised at how un-straightforward module management is. Sure, cpan X theoretically works, but I'm working on the same project from three different machines and OSs (at work, at home, and testing in an external environment):

    - At work (Windows 7) I have problems using cpan because our firewall makes ftp unusable.
    - At home (Mac OS X) it does work.
    - In the external environment (Linux CentOS) it worked after hours of effort, because I don't have root access and had to configure cpan to operate as a non-root user.
    - I've also tried another server where I have access. Where the previous external environment is a VPS with shell access, this one is cheap shared hosting with no way to install new modules beyond the pre-installed ones.

    At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module doesn't exist there. Now, my perplexity is about Perl being a dynamic language. I've had this kind of problem while working with C, for example, where I had to compile all the libraries for every platform, but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of the cases copying the module into a directory and importing it simply works. So, my questions: if Perl's modules are written in Perl, why does the copy/paste approach not work? If some modules (or parts of them) have to be compiled, how do I see on CPAN whether a module is Perl-only or relies on compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? This would solve my problem under Windows.

    Read the article

  • VS C++, virtual method at bad address, curious bug

    - by antoon.groenewoud
    Hello. This method:

        virtual phTreeClass* GetTreeClass() const { return (phTreeClass*)m_entity_class; }

    when called, crashed the program with an access violation, even after a full recompile. All member functions and virtual member functions had correct memory addresses (I hovered the mouse over the methods in debug mode), but this function had a bad memory address: 0xfffffffc. Everything looked okay: the 'this' pointer, and everything works fine up until this function call. The function is also pretty old, and I hadn't changed it for a long time. The problem just suddenly popped up after some work, which I commented out entirely to see what was causing it, without any success. So I removed the virtual keyword, compiled, and it works fine. I added virtual back, compiled, and it still works fine! I basically changed nothing - and remember that I did do a full recompile earlier and still had the error back then. I wasn't able to reproduce the problem, but now it is back. I didn't change anything; removing virtual fixes the problem. Sincerely, Antoon

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find it, so the tests definitely depend on it. I then did some quick reading on whether a DLL or an executable is better. It seems that a DLL is much faster, as the code shares the caller's memory space, whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project also relies on code written in another library, which is currently in executable format as well. Should projects that expose some sort of SDK rather be compiled to a DLL, and only the projects that use the SDK be compiled to executables?

    Read the article
