Search Results

Search found 207 results on 9 pages for 'msvc'.

  • Problem with libcurl cookie engine

    - by Seb Rose
    [Cross-posted from the libcurl mailing list] I have a single-threaded app (MSVC C++ 2005) built against a static LIBCURL 7.19.4. A test application connects to an in-house server and performs a bespoke authentication process that includes posting a couple of forms; when this succeeds it creates a new resource (POST) and then updates the resource (PUT) using If-Match. I only use a single connection to libcurl (i.e. only one CURL*). The cookie engine is enabled from the start using curl_easy_setopt(CURLOPT_COOKIEFILE, ""). The cookie cache is cleared at the end of the authentication process using curl_easy_setopt(CURLOPT_COOKIELIST, "SESS"); this is required by the authentication process. The next call, which completes a successful authentication, results in a couple of security cookies being returned from the server - they have no expiry date set. The server (and I) expect the security cookies to then be sent with all subsequent requests to the server. The problem is that sometimes they are sent and sometimes they aren't. I'm not a CURL expert, so I'm probably doing something wrong, but I can't figure out what. Running the test app in a loop shows a random distribution of correct cookie handling. As a workaround I've disabled the cookie engine and am doing basic manual cookie handling. Like this it works as expected, but I'd prefer to use the library if possible. Does anyone have any ideas? Thanks, Seb
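
    For reference, a minimal sketch of the sequence described above (one CURL* handle, cookie engine enabled up front, session cookies dropped after authentication). The URLs are placeholders and error handling is omitted; none of it is taken from the original post:

        #include <curl/curl.h>

        int main()
        {
            curl_global_init(CURL_GLOBAL_ALL);
            CURL* curl = curl_easy_init();

            /* enable the cookie engine without reading a cookie file */
            curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");

            /* ... authentication steps: post the forms ... */
            curl_easy_setopt(curl, CURLOPT_URL, "http://example.local/login");
            curl_easy_perform(curl);

            /* clear session cookies, as the bespoke authentication requires */
            curl_easy_setopt(curl, CURLOPT_COOKIELIST, "SESS");

            /* subsequent requests should resend the security cookies */
            curl_easy_setopt(curl, CURLOPT_URL, "http://example.local/resource");
            curl_easy_perform(curl);

            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return 0;
        }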

  • Compiling and linking libcurl to create a stand alone dll

    - by Haraldo
    Hi, I've managed to compile a dll with the necessary linked libraries (*.lib) and with CURL_STATICLIB set in the preprocessor section among other settings. I'm using the "libcurl-7.19.3-win32-ssl-msvc.zip" package and compiling with VS 2008 Express. This has been the first version I managed to get compiled properly with no link issues etc. The problem I have now is that my dll needs libcurl.dll to function, and this is not OK - my dll needs to be independent. I have no idea how to implement this; I've taken all day just to get what I've got compiled. I've got the runtime library set to Multi-threaded DLL (debug/release respectively) under C/C++ - Code Generation. I have a number of preprocessor definitions set - CURL_STATICLIB being one of them. Configuration Type is set to Dynamic Library; Use of MFC is set to Use MFC in a static library; Additional Library Directories is set to the lib folders (debug/release respectively). I've noticed there is a curllib_static.lib file which I've tried instead of curllib.lib as an additional dependency, but it only compiles with the latter. This is driving me nuts! So I guess I need some guidance as to how to make my dll completely static so it doesn't have any dependencies. I notice my dll is currently dependent on: CURLLIB.DLL and MSVCR90D.DLL. As I'm pretty new to C++ it could be a setting I'm missing in VS 2008, but I'm not sure. One person said I should be using a static library with *.a files (libcurl.a) etc., but when I do this I get link errors which I haven't been able to resolve. Any guidance here would be much appreciated.
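
    As a hedged sketch (the .lib names are placeholders that depend on how the libcurl package was built), a DLL that links libcurl statically typically needs CURL_STATICLIB defined before including curl.h and the system libraries that libcurl.dll would otherwise have pulled in added to the link:

        #define CURL_STATICLIB               // must be set before including curl.h
        #include <curl/curl.h>

        #pragma comment(lib, "libcurl.lib")  // the *static* libcurl build (name varies)
        #pragma comment(lib, "ws2_32.lib")   // Winsock, used by libcurl
        #pragma comment(lib, "wldap32.lib")  // LDAP, unless disabled in the libcurl build

        // Example export from the wrapper DLL that uses libcurl internally.
        extern "C" __declspec(dllexport) unsigned int GetCurlVersionNumber()
        {
            curl_version_info_data* info = curl_version_info(CURLVERSION_NOW);
            return info->version_num;
        }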

  • How can I get the mapi system stub dll to pass extended mapi calls to my dll?

    - by Bogatyr
    For various reasons (questioning the reasons is not helpful to me), I'd like to implement my own extended mapi dll for windows xp. I have a skeleton dll now, just a few entrypoints exist for testing, but the system mapi stub (c:\windows\system32\mapi32.dll, I've checked that it's identical to mapistub.dll) will not pass through calls to my dll, while it happily passes the same calls through to MS Outlook's msmapi32.dll, (MAPIInitialize, MAPILoginEx are two such calls). There's some secret handshake between the stub and the extended mapi dll wherein the stub checks that "yup, it's an extended mapi dll": maybe it's the presence of some additional entrypoints I haven't implemented yet, maybe it's the return value from some function, I don't know. I've tried tracing a sample app I wrote that calls MAPIInitialize with STraceNT and ProcessMonitor but that didn't show anything obvious. Tracing has shown that indeed the stub loads my dll, but then finds the secret sauce is missing apparently, and returns an error code instead of calling my dll's function. What more could be needed for calling MAPIInitialize than the presence of MAPIInitialize in my dll's exports table? GetProcAddress says it's there. What I'd like to know is how to minimally extend my skeleton extended mapi dll so that the stub mapi dll will pass through extended mapi calls to my dll. What's the secret sauce? I'd rather not spend a painful week in msvc reverse engineering the stub behavior.

  • int considered harmful?

    - by Chris Becke
    Working on code meant to be portable between Win32, Win64 and Cocoa, I am really struggling to get to grips with what the @#$% the various standards committees involved over the past decades were thinking when they first came up with, and then perpetuated, the crime against humanity that is the C native type set - char, short, int and long. On the one hand, as an old-school C++ programmer, there are few statements that were as elegant and/or as simple as for(int i=0; i<some_max; i++), but now it seems that, in the general case, this code can never be correct. Oh sure, given a particular version of MSVC or GCC, with specific targets, the size of 'int' can be safely assumed. But, in the case of writing very generic C/C++ code that might one day be used on 16-bit hardware, or 128, or just be exposed to a particularly weirdly set up 32/64-bit compiler, how does one use int in C++ code in a way that the resulting program has predictable behavior on any and all possible C++ compilers that implement C++ according to spec? To resolve these unpredictabilities, the C and C++ standards introduced size_t, uintptr_t, ptrdiff_t, int8_t, int16_t, int32_t, int64_t and so on. Which leaves me thinking that a raw int, anywhere in pure C++ code, should really be considered harmful, as there is some (completely C++xx-conforming) compiler that's going to produce an unexpected or incorrect result with it (and probably open an attack vector as well).
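
    As a small illustration of the fixed-width alternative being discussed (not a fix for any particular code base), the C99 headers let the width, or the intent, be spelled out instead of inherited from whatever int happens to be on the target:

        #include <stdint.h>   // C99 fixed-width types (or <cstdint> in C++11)
        #include <stddef.h>

        // Sum 32-bit samples into an explicitly 64-bit accumulator; the index
        // type matches the count type instead of assuming what 'int' can hold.
        int64_t sum_samples(const int32_t* samples, size_t count)
        {
            int64_t total = 0;
            for (size_t i = 0; i < count; ++i)
                total += samples[i];
            return total;
        }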

  • Side by side madness - running binaries on different computer (with a twist)

    - by sbk
    Here's my configuration: Computer A - Windows 7, MS Visual Studio 2005 patched for Win7 compatibility (8.0.50727.867) Computer B - Windows XP SP2, MS Visual Studio 2005 installed (8.0.50727.42) My project has some external dependencies (prebuilt DLLs - either build on A or downloaded from the Internet), a couple of DLLs built from sources and one executable. I am mostly developing on A and all is fine there. At some point I try to build my project on computer B, copying the prebuilt DLLs to the output folder. Everything builds fine, but trying to start my application I get The application failed to initialize properly (0xc0150002).... The event log contains two of those: Dependent Assembly Microsoft.VC80.CRT could not be found and Last Error was The referenced assembly is not installed on your system. plus the slightly more amusing Generate Activation Context failed for some.dll. Reference error message: The operation completed successfully. At this point I'm trying my Google-Fu, but in vain - virtually all hits are about running binaries on machines without Visual Studio installed. In my case, however, the executables fail to run on the computer they are built. Next step was to try dependency walker and it baffled me even more - my DLLs built from sources on the same box cannot find MSVCR80.DLL and MSVCP80.DLL, however the executable seems to be alright in respect to those two DLLs i.e. when I open the executable with dependency walker it shows that the MSVC?80.DLLs can be found, but when I open one of my DLLs it says they cannot. That's where I am completely out of ideas what to do so I'm asking you, dear stackoverflow :) I admit I'm a bit blurry on the whole side-by-side thing, so general reading on the topic will also be appreciated.
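
    For reference, a hedged sketch of the mechanism involved: with VC8 the CRT headers record the Microsoft.VC80.CRT dependency by injecting a linker directive like the one below into each object file, and a module whose embedded (or side-by-side) manifest lacks or mismatches this entry fails activation-context generation with errors like those above. The version and publicKeyToken values here are examples only and would have to match the runtime actually installed on computer B:

        // Illustration only - normally emitted automatically by the VC8 CRT headers,
        // not something you would usually write by hand.
        #pragma comment(linker, "/manifestdependency:\"type='win32' name='Microsoft.VC80.CRT' version='8.0.50727.762' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b'\"")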

  • How do I merge multiple PDB files?

    - by blue.tuxedo
    We are currently using a single command-line tool to build our product on both Windows and Linux. So far it works nicely, allowing us to build out of source and with finer dependencies than any of our previous build systems allowed. This buys us great incremental and parallel build capabilities. To describe the build process briefly, we get the usual: .cpp -- cl.exe --> .obj and .pdb; multiple .obj and .pdb -- cl.exe --> a single .dll, .lib and .pdb; multiple .obj and .pdb -- cl.exe --> a single .exe and .pdb. The MSVC C/C++ compiler supports this adequately. Recently the need to build a few static libraries emerged. From what we gathered, the process to build a static library is: multiple .cpp -- cl.exe --> multiple .obj and a single .pdb; multiple .obj -- lib.exe --> a single .lib. The single .pdb means that cl.exe should only be executed once for all the .cpp sources, and this single execution means that we can't parallelize the build for this static library. This is really unfortunate. We investigated a bit further, and according to the documentation (and the available command-line options), cl.exe does not know how to build static libraries, and lib.exe does not know how to build .pdb files. Does anybody know a way to merge multiple PDB files? Are we doomed to have slow builds for static libraries? How do tools like IncrediBuild work around this issue?

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below: class A { int a; int b; int* v; }; where: The memory for v will be allocated in the constructor. v will be allocated once when an object of type A is created and will never need to be resized. The size of v will vary across all instances of A. The application will potentially have a huge number of such objects and will mostly need to stream a large number of these objects through the CPU, but only needs to perform very simple computations on the member variables. Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory? What tools and techniques can be used to test if this fragmentation is a performance bottleneck? If such fragmentation is a performance issue, are there any techniques that could allow A and v to be allocated in a contiguous region of memory? Or are there any techniques to aid memory access, such as a pre-fetching scheme? For example, get an object of type A and operate on the other member variables whilst pre-fetching v. If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance? The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, and compiled using either GCC or MSVC compilers.
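
    On the "allocate A and v contiguously" idea, one sketch (assuming the members stay POD-like, so this is illustrative rather than a drop-in design) is to make a single allocation that holds the header and the payload together:

        #include <cstdlib>

        struct A
        {
            int  a;
            int  b;
            int  n;     // element count of the trailing array
            int* v;     // points just past the struct itself

            static A* create(int count)
            {
                void* mem = std::malloc(sizeof(A) + count * sizeof(int));
                A* obj = static_cast<A*>(mem);
                obj->a = 0;
                obj->b = 0;
                obj->n = count;
                obj->v = reinterpret_cast<int*>(obj + 1);   // contiguous with the object
                return obj;
            }

            static void destroy(A* obj) { std::free(obj); }
        };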

  • Code smells galore. Can this be a good company?

    - by Paperflyer
    I am currently doing some contract work for a company. Now they want to hire me for real. I have been reading on SO about code smells lately. The thing is, I have worked with some of their code and it smells. Badly. They use incredibly old versions of MSVC (2003), they do not seem to use version control systems, most code is completely undocumented, variable names with more than three letters are a rarity, there is commented out code all over the place, some methods take huge amounts of arguments, UI design is seemingly done by blind people... Yet they seem to be quite successful with what they do and their actual algorithms seem to be pretty sound and rather sophisticated. Since they mostly do DSP stuff, I am willing to ignore the UI side of things, but really these code smells are worrying. What would you think of a company that doesn't seem to value readable code? The people are nice enough and payment would be good. How much would you value code smells in this context? You see, this is my first job and SO got me worried, so I turn to you for suggestions ;-)

  • Are function-local typedefs visible inside C++0x lambdas?

    - by GMan - Save the Unicorns
    I've run into a strange problem. The following simplified code reproduces the problem in MSVC 2010 Beta 2: template <typename T> struct dummy { static T foo(void) { return T(); } }; int main(void) { typedef dummy<bool> dummy_type; auto x = [](void){ bool b = dummy_type::foo(); }; // auto x = [](void){ bool b = dummy<bool>::foo(); }; // works } The typedef I created locally in the function doesn't seem to be visible in the lambda. If I replace the typedef with the actual type, it works as expected. Here are some other test cases: // crashes the compiler, credit to Tarydon int main(void) { struct dummy {}; auto x = [](void){ dummy d; }; } // works as expected int main(void) { typedef int integer; auto x = [](void){ integer i = 0; }; } I don't have g++ 4.5 available to test it, right now. Is this some strange rule in C++0x, or just a bug in the compiler? From the results above, I'm leaning towards bug. Though the crash is definitely a bug. For now, I have filed two bug reports. All code snippets above should compile. The error has to do with using the scope resolution on locally defined scopes. (Spotted by dvide.) And the crash bug has to do with... who knows. :) Update According to the bug reports, they have both been fixed for the next release of Visual Studio 2010.
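
    A possible workaround sketch while waiting for the fix (this is an assumption about what is practical in that code base, not part of the original question): hoist the typedef to namespace scope so the lambda body never has to resolve a function-local name through the scope operator:

        template <typename T>
        struct dummy { static T foo() { return T(); } };

        typedef dummy<bool> dummy_type;   // namespace scope instead of function-local

        int main()
        {
            auto x = []() { bool b = dummy_type::foo(); (void)b; };
            x();
        }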

  • Timing related crash when unloading a DLL?

    - by fbrereto
    I know I'm reaching for straws here, but this one is a mystery... any pointers or help would be most welcome, so I'm appealing to those more intelligent than I: We have a crash exhibited in our release binaries only. The crash takes place as the binary is bringing itself down and terminating sub-libraries upon which it depends. Its ability to be reproduced is dependent on the machine- some are 100% reliable in reproducing the crash, some don't exhibit the issue at all, and some are in between. The crash is deep within one of the sublibraries, and there is a good likelihood the stack is corrupt by the time the rubble can be brought into a debugger (MSVC 2008 SP1) to be examined. Running the binary under the debugger prevents the bug from happening, as does remote debugging, as does (of all things) connecting to the machine via VNC. We have tried to install the Microsoft Driver Development Kit, and doing so also squelches the bug. What would be the next best place to look? What tools would be best in this circumstance? Does it sound like a race condition, or something else?

  • Template compilation error in Sun Studio 12

    - by Jagannath
    We are migrating to Sun Studio 12.1 and with the new compiler [ CC: Sun C++ 5.10 SunOS_sparc 2009/06/03 ]. I am getting compilation error while compiling a code that compiled fine with earlier version of Sun Compiler [ CC: Sun WorkShop 6 update 2 C++ 5.3 2001/05/15 ]. This is the compilation error I get. "Sample.cc": Error: Could not find a match for LoopThrough(int[2]) needed in main(). 1 Error(s) detected. * Error code 1. CODE: #include <iostream> #define PRINT_TRACE(STR) \ std::cout << __FILE__ << ":" << __LINE__ << ":" << STR << "\n"; template<size_t SZ> void LoopThrough(const int(&Item)[SZ]) { PRINT_TRACE("Specialized version"); for (size_t index = 0; index < SZ; ++index) { std::cout << Item[index] << "\n"; } } /* template<typename Type, size_t SZ> void LoopThrough(const Type(&Item)[SZ]) { PRINT_TRACE("Generic version"); } */ int main() { { int arr[] = { 1, 2 }; LoopThrough(arr); } } If I uncomment the code with Generic version, the code compiles fine and the generic version is called. I don't see this problem with MSVC 2010 with extensions disabled and the same case with ideone here. The specialized version of the function is called. Now the question is, is this a bug in Sun Compiler ? If yes, how could we file a bug report ?

  • Which function pair in QString to use for converting to/from std::string?

    - by Noah Roberts
    I'm working on a project in which we want to use Unicode and which could end up in countries like Japan, etc... We want to use std::string for the underlying type that holds string data in the data layer (see "Qt, MSVC, and /Zc:wchar_t- == I want to blow up the world" as to why). The problem is that I'm not completely sure which function pair (to/from) to use for this and still be sure we're 100% compatible with anything the user might enter in the Qt layer. A look at to/fromStdString indicates that I'd have to use setCodecForCStrings. The documentation for that function, though, indicates that I wouldn't want to do this for things like Japanese. This is the pair that I'd LIKE to use, though. Does someone know enough to explain how I'd set this up, if it's possible? The other option that I can be pretty sure works is the to/fromUtf8 pair of functions. Those would require a two-step approach, though, so I'd prefer the other if possible. Is there anything I've missed?
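
    A sketch of the UTF-8 round trip mentioned above (assuming the Qt 4-era API used in the question): treat std::string in the data layer as a UTF-8 byte container and convert only at the Qt boundary, which copes with Japanese and anything else the user can type:

        #include <QString>
        #include <QByteArray>
        #include <string>

        std::string toDataLayer(const QString& s)
        {
            const QByteArray utf8 = s.toUtf8();
            return std::string(utf8.constData(), utf8.size());
        }

        QString fromDataLayer(const std::string& s)
        {
            return QString::fromUtf8(s.c_str(), static_cast<int>(s.size()));
        }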

  • How can you change the font color of a theme-enabled control?

    - by Edouard Westphal
    Yes, this is that question again: how can I change the font color of a TCheckBox (or any themed control) with Delphi 7 through Delphi 2007 in a theme-enabled application? After reading a lot on the internet and on this site, I found 4 kinds of answer: (1) you can't, it's designed like that by Microsoft (the most popular answer, even from QC); (2) create a component that lets you draw it the way you want; (3) buy an expensive component set that draws it the way you want; (4) do not use themes. OK, but I am still unhappy with that. Giving a user colored feedback for the status of a property/data he has on a form seems legitimate to me. Then I just installed MS Visual C# 2008 Express edition and, what a surprise, it can change the color of the font (the ForeColor property of the check box). Then what? It does not seem to be a case of "it's designed like that by Microsoft." So now the question again: how can I change the font color of a TCheckBox (or any themed control) with Delphi 7 through Delphi 2007 in a theme-enabled application?

  • When is a bool not a bool (compiler warning C4800)

    - by omatai
    Consider this being compiled in MS Visual Studio 2005 (and probably others): CPoint point1( 1, 2 ); CPoint point2( 3, 4 ); const bool point1And2Identical( point1 == point2 ); // C4800 warning const bool point1And2TheSame( ( point1 == point2 ) == TRUE ); // no warning What the...? Is the MSVC compiler brain-dead? As far as I can tell, TRUE is #defined as 1, without any type information. So by what magic is there any difference between these two lines? Surely the type of the expression inside the brackets is the same in both cases? [This part of the question now satisfactorily answered in the comments just below] Personally, I think that avoiding the warning by using the == TRUE option is ugly (though less ugly than the != 0 alternative, despite being more strictly correct), and it is better to use #pragma warning( disable:4800 ) to imply "my code is good, the compiler is an ass". Agree? Note - I have seen all manner of discussion on C4800 talking about assigning ints to bools, or casting a burger combo with large fries (hold the onions) to a bool, and wondering why there are strange results. I can't find a clear answer on what seems like a much simpler question... one that might just shine light on C4800 in general.
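
    For completeness, a sketch of the scoped-suppression option alongside the explicit test (Point and equal() are stand-ins for the MFC CPoint and its BOOL-returning operator==, which are not reproduced here):

        typedef int BOOL;   // stand-in for the Windows BOOL typedef

        struct Point { int x, y; };
        BOOL equal(const Point& a, const Point& b) { return a.x == b.x && a.y == b.y; }

        bool compare(const Point& p1, const Point& p2)
        {
        #pragma warning(push)
        #pragma warning(disable : 4800)              // 'int' : forcing value to bool
            const bool viaSuppression(equal(p1, p2));
        #pragma warning(pop)

            // Keeping the warning enabled and making the narrowing explicit:
            const bool viaExplicitTest(equal(p1, p2) != 0);

            return viaSuppression && viaExplicitTest;
        }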

  • Creating serializable unique compile-time identifiers for arbitrary UDTs

    - by Endiannes
    I would like a generic way to create unique compile-time identifiers for any C++ user defined types. for example: unique_id<my_type>::value == 0 // true unique_id<other_type>::value == 1 // true I've managed to implement something like this using preprocessor meta programming, the problem is, serialization is not consistent. For instance if the class template unique_id is instantiated with other_type first, then any serialization in previous revisions of my program will be invalidated. I've searched for solutions to this problem, and found several ways to implement this with non-consistent serialization if the unique values are compile-time constants. If RTTI or similar methods, like boost::sp_typeinfo are used, then the unique values are obviously not compile-time constants and extra overhead is present. An ad-hoc solution to this problem would be, instantiating all of the unique_id's in a separate header in the correct order, but this causes additional maintenance and boilerplate code, which is not different than using an enum unique_id{my_type, other_type};. A good solution to this problem would be using user-defined literals, unfortunately, as far as I know, no compiler supports them at this moment. The syntax would be 'my_type'_id; 'other_type'_id; with udl's. I'm hoping somebody knows a trick that allows implementing serialize-able unique identifiers in C++ with the current standard (C++03/C++0x), I would be happy if it works with the latest stable MSVC and GNU-G++ compilers, although I expect if there is a solution, it's not portable.
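
    The "separate header" option dismissed above as boilerplate can at least be made compact; here is a sketch (C++03, with names invented for illustration) that keeps the ids stable as long as existing entries are never renumbered:

        // ids.h - central registry; append new types, never renumber old ones.
        template <typename T>
        struct unique_id;            // undefined: unregistered types fail to compile

        #define REGISTER_UNIQUE_ID(Type, Id) \
            template <> struct unique_id<Type> { static const unsigned value = Id; };

        struct my_type {};
        struct other_type {};

        REGISTER_UNIQUE_ID(my_type,    0)
        REGISTER_UNIQUE_ID(other_type, 1)

        // compile-time sanity checks
        typedef char check1[unique_id<my_type>::value    == 0 ? 1 : -1];
        typedef char check2[unique_id<other_type>::value == 1 ? 1 : -1];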

  • C++: Constructor/destructor unresolved when not inline?

    - by Anamon
    In a plugin-based C++ project, I have a TmpClass that is used to exchange data between the main application and the plugins. Therefore the respective TmpClass.h is included in the abstract plugin interface class that is included by the main application project, and implemented by each plugin. As the plugins work on STL vectors of TmpClass instances, there needs to be a default constructor and destructor for TmpClass. I had declared these in TmpClass.h: class TmpClass { public: TmpClass(); ~TmpClass(); }; and implemented them in TmpClass.cpp: TmpClass::~TmpClass() {} TmpClass::TmpClass() {} However, when compiling the plugins this leads to the linker complaining about two unresolved externals - the default constructor and destructor of TmpClass as required by the std::vector<TmpClass> template instantiation - even though all other functions I declare in TmpClass.h and implement in TmpClass.cpp work. As soon as I remove the (empty) default constructor and destructor from the .cpp file and inline them into the class declaration in the .h file, the plugins compile and work. Why is it that the default constructor and destructor have to be inline for this code to compile? Why does it even matter? (I'm using MSVC++8).
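
    One hypothetical arrangement, if the intent is for TmpClass.cpp to be compiled only into the main application: export the class from the module that owns the definitions so the plugins import them rather than expecting to link them (the TMP_EXPORTS macro name is an assumption, not from the question). Otherwise each plugin project has to compile TmpClass.cpp itself, or the definitions have to stay inline as described above.

        // TmpClass.h
        #ifdef TMP_EXPORTS
        #  define TMP_API __declspec(dllexport)   // defined only when building the owning module
        #else
        #  define TMP_API __declspec(dllimport)
        #endif

        class TMP_API TmpClass
        {
        public:
            TmpClass();     // defined out of line, once, in the exporting module's TmpClass.cpp
            ~TmpClass();
        };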

  • Importing a C DLL's functions into a C++ program

    - by bobobobo
    I have a 3rd party library that's written in C. It exports all of its functions to a DLL. I have the .h file, and I'm trying to load the DLL from my C++ program. The first thing I tried was surrounding the parts where I #include the 3rd party lib in #ifdef __cplusplus extern "C" { #endif and, at the end #ifdef __cplusplus } // extern "C" #endif But the problem there was, all of the DLL file function linkage looked like this in their header files: a_function = (void *)GetProcAddress(dll, "a_function"); While really a_function had type int (*a_function) (int *). Apparently MSVC++ compiler doesn't like this, while MSVC compiler does not seem to mind. So I went through (brutal torture) and fixed them all to the pattern typedef int (*_a_function) (int *); _a_function a_function ; Then, to link it to the DLL code, in main(): a_function = (_a_function)GetProcAddress(dll, "a_function"); This SEEMS to make the compiler MUCH, MUCH happier, but it STILL complains with this final set of 143 errors, each saying for each of the DLL link attempts: error LNK2005: _a_function already defined in main.obj main.obj Multiple symbol definition errors.. sounds like a job for extern! SO I went and made ALL the function pointer declarations as follows: function_pointers.h typedef int (*_a_function) (int *); extern _a_function a_function ; And in a cpp file: function_pointers.cpp #include "function_pointers.h" _a_function a_function ; ALL fine and dandy.. except for linker errors now of the form: error LNK2001: unresolved external symbol _a_function main.obj Main.cpp includes "function_pointers.h", so it should know where to find each of the functions.. I am bamboozled. Does any one have any pointers to get me functional? (Pardon the pun..)
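
    To make the flattened description above concrete, here is the intended three-file layout as a sketch (identifiers are taken from the question; the DLL name is a placeholder). With this layout, an LNK2001 for a_function usually means function_pointers.cpp is not actually being compiled and linked into the project:

        // function_pointers.h
        typedef int (*_a_function)(int*);
        extern _a_function a_function;                    // declaration only

        // function_pointers.cpp  (must be part of the build)
        // #include "function_pointers.h"
        _a_function a_function = 0;                       // the single definition

        // main.cpp
        // #include <windows.h>
        // #include "function_pointers.h"
        // HMODULE dll = LoadLibraryA("thirdparty.dll");  // placeholder DLL name
        // a_function = (_a_function)GetProcAddress(dll, "a_function");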

  • boost::asio::io_service throws exception

    - by Ace
    Okay, I seriously cannot figure this out. I have a DLL project in MSVC that is attempting to use Asio (from Boost 1.45.0), but whenever I create my io_service, an exception is thrown. Here is what I am doing for testing purposes: void run() { boost::this_thread::sleep(boost::posix_time::seconds(5)); try { boost::asio::io_service io_service; } catch (std::exception & e) { MessageBox(NULL, e.what(), "Exception", MB_OK); } } BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) { if (fdwReason == DLL_PROCESS_ATTACH) { boost::thread thread(run); } return TRUE; } This is what the message box shows: winsock: WSAStartup cannot function at this time because the underlying system it uses to provide network services is currently unavailable Here is what MSDN says about it (error code 10091, WSASYSNOTREADY): Network subsystem is unavailable. This error is returned by WSAStartup if the Windows Sockets implementation cannot function at because the underlying system it uses to provide network services is currently unavailable. Users should check: That the appropriate Windows Sockets DLL file is in the current path. That they are not trying to use more than one Windows Sockets implementation simultaneously. If there is more than one Winsock DLL on your system, be sure the first one in the path is appropriate for the network subsystem currently loaded. The Windows Sockets implementation documentation to be sure all necessary components are currently installed and configured correctly. Yet none of this seems to apply to me (or so I think). Here is my command line: /O2 /GL /D "_WIN32_WINNT=0x0501" /D "_WINDLL" /FD /EHsc /MD /Gy /Fo"Release\" /Fd"Release\vc90.pdb" /W3 /WX /nologo /c /TP /errorReport:prompt If anyone knows what might be wrong, please help me out! Thanks.
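
    One commonly suggested arrangement, sketched below with an invented StartNetworking export, is to keep DllMain trivial and do the Asio/Winsock setup from an entry point the host calls after LoadLibrary returns. Whether that clears this particular WSASYSNOTREADY is an assumption, but starting threads and initialising Winsock from DLL_PROCESS_ATTACH runs under the loader lock and is generally discouraged:

        #include <boost/asio.hpp>
        #include <boost/thread.hpp>
        #include <windows.h>

        namespace
        {
            void run()
            {
                boost::asio::io_service io_service;   // WSAStartup happens here
                // ... sockets, timers, io_service.run() ...
            }
        }

        extern "C" __declspec(dllexport) void StartNetworking()
        {
            boost::thread worker(run);                // created outside the loader lock
            worker.detach();
        }

        BOOL WINAPI DllMain(HINSTANCE, DWORD, LPVOID)
        {
            return TRUE;                              // no real work in DllMain
        }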

  • boost::function & boost::lambda - call site invocation & accessing _1 and _2 as the type

    - by John Dibling
    Sorry for the confusing title. Let me explain via code: #include <string> #include <boost\function.hpp> #include <boost\lambda\lambda.hpp> #include <iostream> int main() { using namespace boost::lambda; boost::function<std::string(std::string, std::string)> f = _1.append(_2); std::string s = f("Hello", "There"); std::cout << s; return 0; } I'm trying to use function to create a function that uses the labda expressions to create a new return value, and invoke that function at the call site, s = f("Hello", "There"); When I compile this, I get: 1>------ Build started: Project: hacks, Configuration: Debug x64 ------ 1>Compiling... 1>main.cpp 1>.\main.cpp(11) : error C2039: 'append' : is not a member of 'boost::lambda::lambda_functor<T>' 1> with 1> [ 1> T=boost::lambda::placeholder<1> 1> ] Using MSVC 9. My fundamental understanding of function and lambdas may be lacking. The tutorials and docs did not help so far this morning. How do I do what I'm trying to do?
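
    For comparison, one way this is usually expressed with Boost.Lambda (a sketch, not necessarily the only route): the placeholders are lambda functors rather than strings, so member functions have to go through bind(), with a cast to pick the right append overload:

        #include <string>
        #include <iostream>
        #include <boost/function.hpp>
        #include <boost/lambda/bind.hpp>
        #include <boost/lambda/lambda.hpp>

        int main()
        {
            using namespace boost::lambda;

            // Select the append(const std::string&) overload explicitly.
            std::string& (std::string::*append)(const std::string&) = &std::string::append;

            boost::function<std::string (std::string, std::string)> f = bind(append, _1, _2);

            std::cout << f("Hello", "There") << std::endl;   // prints "HelloThere"
            return 0;
        }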

  • How to correctly pass a float from C# to C++ (dll)

    - by RavelT
    I'm getting huge differences when I pass a float from C# to C++. I'm passing a dynamic float which changes over time. With a debugger I get this: c++ lonVel -0.036019072 float c# lonVel -0.029392920 float I did set my MSVC++ 2010 floating-point model to /fp:fast, which should be the standard in .NET if I'm not mistaken, but this didn't help. Now I can't give out the code, but I can show a fraction of it. On the C# side it looks like this: namespace Example { public class Wheel { public bool loging = true; #region Members public IntPtr nativeWheelObject; #endregion Members public Wheel() { this.nativeWheelObject = Sim.Dll_Wheel_Add(); return; } #region Wrapper methods public void SetVelocity(float lonRoadVelocity,float latRoadVelocity){Sim.Dll_Wheel_SetVelocity(this.nativeWheelObject,lonRoadVelocity,latRoadVelocity);} #endregion Wrapper methods } internal class Sim { #region PInvokes [DllImport(pluginName, CallingConvention=CallingConvention.Cdecl)] public static extern void Dll_Wheel_SetVelocity(IntPtr wheel,float lonRoadVelocity,float latRoadVelocity); #endregion PInvokes } } And on the C++ side, in exportFunctions.cpp: EXPORT_API void Dll_Wheel_SetVelocity(CarWheel* wheel,float lonRoadVelocity,float latRoadVelocity){ wheel->SetVelocity(lonRoadVelocity,latRoadVelocity);} So any suggestions on what I should do in order to get 1:1 results, or at least 99% correct results?

  • Null pointer to a struct which has zero size (empty)... Is it a good practice?

    - by ProgramWriter
    Hi2All.. I have a null struct, for example: struct null_type { null_type& someNonVirtualMethod() { return *this; } }; And in some function I need to pass a reference to this type. Reason: template <typename T1 = null_type, typename T2 = null_type, ... > class LooksLikeATupleButItsNotATuple { public: LooksLikeATupleButItsNotATuple(T1& ref1 = defParamHere, T2& ref2 = andHere..) : _ref1(ref1), _ref2(ref2), ... { } void someCompositeFunctionHere() { _ref1.someNonVirtualMethod(); _ref2.someNonVirtualMethod(); ... } private: T1& _ref1; T2& _ref2; ...; }; Is it good practice to use a null reference as the default parameter, i.e.: *static_cast<null_type*>(0)? It works on MSVC, but I have some doubts...
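
    Dereferencing a null pointer to build that default is undefined behaviour even when MSVC appears to accept it; a sketch of one alternative (shown with a single parameter for brevity, names invented) binds the default to a real shared instance instead:

        struct null_type
        {
            null_type& someNonVirtualMethod() { return *this; }
        };

        inline null_type& default_null()
        {
            static null_type instance;     // one real object shared by all defaults
            return instance;
        }

        template <typename T1 = null_type>
        class Holder
        {
        public:
            explicit Holder(T1& ref1 = default_null()) : _ref1(ref1) {}
            void call() { _ref1.someNonVirtualMethod(); }
        private:
            T1& _ref1;
        };

        int main()
        {
            Holder<> h;    // uses the shared null_type instance
            h.call();
        }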

  • Win32 and Win64 programming in C sources?

    - by Nick Rosencrantz
    I'm learning OpenGL with C and that makes me include the windows.h file in my project. I'd like to look at some more specific windows functions and I wonder if you can cite some good sources for learning the basics of Win32 and Win64 programming in C (or C++). I use MS Visual C++ and I prefer to stick with C even though much of the Windows API seems to be C++. I'd like my program to be portable and using some platform-indepedent graphics library like OpenGL I could make my program portable with some slight changes for window management. Could you direct me with some pointers to books or www links where I can find more info? I've already studied the OpenGL red book and the C programming language, what I'm looking for is the platform-dependent stuff and how to handle that since I run both Linux and Windows where I find the development environment Visual Studio is pretty good but the debugger gdb is not available on windows so it's a trade off which environment i'll choose in the end - Linux with gcc or Windows with MSVC. Here is the program that draws a graphics primitive with some use of windows.h This program is also runnable on Linux without changing the code that actually draws the graphics primitive: #include <windows.h> #include <gl/gl.h> LRESULT CALLBACK WindowProc(HWND, UINT, WPARAM, LPARAM); void EnableOpenGL(HWND hwnd, HDC*, HGLRC*); void DisableOpenGL(HWND, HDC, HGLRC); int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) { WNDCLASSEX wcex; HWND hwnd; HDC hDC; HGLRC hRC; MSG msg; BOOL bQuit = FALSE; float theta = 0.0f; /* register window class */ wcex.cbSize = sizeof(WNDCLASSEX); wcex.style = CS_OWNDC; wcex.lpfnWndProc = WindowProc; wcex.cbClsExtra = 0; wcex.cbWndExtra = 0; wcex.hInstance = hInstance; wcex.hIcon = LoadIcon(NULL, IDI_APPLICATION); wcex.hCursor = LoadCursor(NULL, IDC_ARROW); wcex.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH); wcex.lpszMenuName = NULL; wcex.lpszClassName = "GLSample"; wcex.hIconSm = LoadIcon(NULL, IDI_APPLICATION);; if (!RegisterClassEx(&wcex)) return 0; /* create main window */ hwnd = CreateWindowEx(0, "GLSample", "OpenGL Sample", WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, 256, 256, NULL, NULL, hInstance, NULL); ShowWindow(hwnd, nCmdShow); /* enable OpenGL for the window */ EnableOpenGL(hwnd, &hDC, &hRC); /* program main loop */ while (!bQuit) { /* check for messages */ if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { /* handle or dispatch messages */ if (msg.message == WM_QUIT) { bQuit = TRUE; } else { TranslateMessage(&msg); DispatchMessage(&msg); } } else { /* OpenGL animation code goes here */ glClearColor(0.0f, 0.0f, 0.0f, 0.0f); glClear(GL_COLOR_BUFFER_BIT); glPushMatrix(); glRotatef(theta, 0.0f, 0.0f, 1.0f); glBegin(GL_TRIANGLES); glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(0.0f, 1.0f); glColor3f(0.0f, 1.0f, 0.0f); glVertex2f(0.87f, -0.5f); glColor3f(0.0f, 0.0f, 1.0f); glVertex2f(-0.87f, -0.5f); glEnd(); glPopMatrix(); SwapBuffers(hDC); theta += 1.0f; Sleep (1); } } /* shutdown OpenGL */ DisableOpenGL(hwnd, hDC, hRC); /* destroy the window explicitly */ DestroyWindow(hwnd); return msg.wParam; } LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) { switch (uMsg) { case WM_CLOSE: PostQuitMessage(0); break; case WM_DESTROY: return 0; case WM_KEYDOWN: { switch (wParam) { case VK_ESCAPE: PostQuitMessage(0); break; } } break; default: return DefWindowProc(hwnd, uMsg, wParam, lParam); } return 0; } void EnableOpenGL(HWND hwnd, HDC* hDC, HGLRC* hRC) { 
PIXELFORMATDESCRIPTOR pfd; int iFormat; /* get the device context (DC) */ *hDC = GetDC(hwnd); /* set the pixel format for the DC */ ZeroMemory(&pfd, sizeof(pfd)); pfd.nSize = sizeof(pfd); pfd.nVersion = 1; pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER; pfd.iPixelType = PFD_TYPE_RGBA; pfd.cColorBits = 24; pfd.cDepthBits = 16; pfd.iLayerType = PFD_MAIN_PLANE; iFormat = ChoosePixelFormat(*hDC, &pfd); SetPixelFormat(*hDC, iFormat, &pfd); /* create and enable the render context (RC) */ *hRC = wglCreateContext(*hDC); wglMakeCurrent(*hDC, *hRC); } void DisableOpenGL (HWND hwnd, HDC hDC, HGLRC hRC) { wglMakeCurrent(NULL, NULL); wglDeleteContext(hRC); ReleaseDC(hwnd, hDC); }

  • GCC problem with raw double type comparisons

    - by Monomer
    I have the following bit of code, however when compiling it with GCC 4.4 with various optimization flags I get some unexpected results when its run. #include <iostream> int main() { const unsigned int cnt = 10; double lst[cnt] = { 0.0 }; const double v[4] = { 131.313, 737.373, 979.797, 731.137 }; for(unsigned int i = 0; i < cnt; ++i) { lst[i] = v[i % 4] * i; } for(unsigned int i = 0; i < cnt; ++i) { double d = v[i % 4] * i; if(lst[i] != d) { std::cout << "error @ : " << i << std::endl; return 1; } } return 0; } when compiled with: "g++ -pedantic -Wall -Werror -O1 -o test test.cpp" I get the following output: "error @ : 3" when compiled with: "g++ -pedantic -Wall -Werror -O2 -o test test.cpp" I get the following output: "error @ : 3" when compiled with: "g++ -pedantic -Wall -Werror -O3 -o test test.cpp" I get no errors when compiled with: "g++ -pedantic -Wall -Werror -o test test.cpp" I get no errors I do not believe this to be an issue related to rounding, or epsilon difference in the comparison. I've tried this with Intel v10 and MSVC 9.0 and they all seem to work as expected. I believe this should be nothing more than a bitwise compare. If I replace the if-statement with the following: if (static_cast<long long int>(lst[i]) != static_cast<long long int>(d)), and add "-Wno-long-long" I get no errors in any of the optimization modes when run. If I add std::cout << d << std::endl; before the "return 1", I get no errors in any of the optimization modes when run. Is this a bug in my code, or is there something wrong with GCC and the way it handles the double type?
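
    A plausible explanation for these symptoms (hedged, since it depends on the exact target): with -O1/-O2 on 32-bit x86, GCC keeps d in an 80-bit x87 register while lst[i] has already been rounded to a 64-bit double in memory, so a bit-for-bit comparison of the "same" expression can fail; -ffloat-store, SSE math (-msse2 -mfpmath=sse), or simply forcing the store makes both sides round the same way. A sketch of the forced-store variant of the loop:

        #include <iostream>

        int main()
        {
            const unsigned int cnt = 10;
            double lst[cnt] = { 0.0 };
            const double v[4] = { 131.313, 737.373, 979.797, 731.137 };

            for (unsigned int i = 0; i < cnt; ++i)
                lst[i] = v[i % 4] * i;

            for (unsigned int i = 0; i < cnt; ++i)
            {
                volatile double d = v[i % 4] * i;   // force rounding to a 64-bit double
                if (lst[i] != d)
                {
                    std::cout << "error @ : " << i << std::endl;
                    return 1;
                }
            }
            return 0;
        }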

  • Connecting a LAN to an OpenVPN server via a windows 7 client gateway

    - by user705142
    I've got OpenVPN set up between my windows 7 client and linux server. The goal is that I'll get secure access to a webapp running on the server from any computer on the client LAN. I'm using ccd to assign static ip addresses to each client connection, with key authentication. It's working on my client machine (10.83.41.9), and when you go to the gateway IP address (10.83.41.1), it loads up the webapp. Now I really need the other computers on the client LAN to be able to connect to the webapp as well, via the windows machine. The client has a static IP address of 192.168.2.100 on the LAN, and I've enabled IP forwarding in windows (confirmed by ipconfig /all). In my router I've forwarded 10.83.41.1 / 255.255.255.255 to 192.168.2.100. In server.conf I have.. route 192.168.2.0 255.255.255.0 And in the office ccd.. ifconfig-push 10.83.41.9 10.83.41.10 iroute 192.168.2.0 255.255.255.0 The client log is as follows: Thu Mar 15 20:19:56 2012 OpenVPN 2.2.2 Win32-MSVC++ [SSL] [LZO2] [PKCS11] built on Dec 15 2011 Thu Mar 15 20:19:56 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables Thu Mar 15 20:19:56 2012 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file Thu Mar 15 20:19:56 2012 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication Thu Mar 15 20:19:56 2012 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication Thu Mar 15 20:19:56 2012 LZO compression initialized Thu Mar 15 20:19:56 2012 Control Channel MTU parms [ L:1558 D:166 EF:66 EB:0 ET:0 EL:0 ] Thu Mar 15 20:19:56 2012 Socket Buffers: R=[8192->8192] S=[64512->64512] Thu Mar 15 20:19:56 2012 Data Channel MTU parms [ L:1558 D:1450 EF:58 EB:135 ET:0 EL:0 AF:3/1 ] Thu Mar 15 20:19:56 2012 Local Options hash (VER=V4): '9e7066d2' Thu Mar 15 20:19:56 2012 Expected Remote Options hash (VER=V4): '162b04de' Thu Mar 15 20:19:56 2012 UDPv4 link local: [undef] Thu Mar 15 20:19:56 2012 UDPv4 link remote: 111.65.224.202:1194 Thu Mar 15 20:19:56 2012 TLS: Initial packet from 111.65.224.202:1194, sid=ceb04c22 8cc6d151 Thu Mar 15 20:19:56 2012 VERIFY OK: depth=1, /C=NZ/O=XXX./CN=XXX Thu Mar 15 20:19:56 2012 VERIFY OK: nsCertType=SERVER Thu Mar 15 20:19:56 2012 VERIFY OK: depth=0, /C=NZ/O=XXX./CN=XXX Thu Mar 15 20:19:56 2012 Replay-window backtrack occurred [1] Thu Mar 15 20:19:56 2012 Data Channel Encrypt: Cipher 'AES-256-CBC' initialized with 256 bit key Thu Mar 15 20:19:56 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Thu Mar 15 20:19:56 2012 Data Channel Decrypt: Cipher 'AES-256-CBC' initialized with 256 bit key Thu Mar 15 20:19:56 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Thu Mar 15 20:19:56 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Thu Mar 15 20:19:56 2012 [server] Peer Connection Initiated with 111.65.224.202:1194 Thu Mar 15 20:19:58 2012 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1) Thu Mar 15 20:19:59 2012 PUSH: Received control message: 'PUSH_REPLY,route 10.83.41.1,topology net30,ping 10,ping-restart 120,ifconfig 10.83.41.9 10.83.41.10' Thu Mar 15 20:19:59 2012 OPTIONS IMPORT: timers and/or timeouts modified Thu Mar 15 20:19:59 2012 OPTIONS IMPORT: --ifconfig/up options modified Thu Mar 15 20:19:59 2012 OPTIONS IMPORT: route options modified Thu Mar 15 20:19:59 2012 ROUTE default_gateway=192.168.2.1 Thu Mar 15 20:19:59 2012 TAP-WIN32 device [OpenVPN] opened: 
\\.\Global\{B32D85C9-1942-42E2-80BA-7E0B5BB5185F}.tap Thu Mar 15 20:19:59 2012 TAP-Win32 Driver Version 9.9 Thu Mar 15 20:19:59 2012 TAP-Win32 MTU=1500 Thu Mar 15 20:19:59 2012 Notified TAP-Win32 driver to set a DHCP IP/netmask of 10.83.41.9/255.255.255.252 on interface {B32D85C9-1942-42E2-80BA-7E0B5BB5185F} [DHCP-serv: 10.83.41.10, lease-time: 31536000] Thu Mar 15 20:19:59 2012 Successful ARP Flush on interface [45] {B32D85C9-1942-42E2-80BA-7E0B5BB5185F} Thu Mar 15 20:20:04 2012 TEST ROUTES: 1/1 succeeded len=1 ret=1 a=0 u/d=up Thu Mar 15 20:20:04 2012 C:\WINDOWS\system32\route.exe ADD 10.83.41.1 MASK 255.255.255.255 10.83.41.10 Thu Mar 15 20:20:04 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Thu Mar 15 20:20:04 2012 Route addition via IPAPI succeeded [adaptive] Thu Mar 15 20:20:04 2012 Initialization Sequence Completed From the other machines I can ping 192.169.2.100, but not 10.83.41.1. In the how-to, it mentions "Make sure your network interface is in promiscuous mode." as well. I can't find in the windows network config, so this may or may not be part of it. Ideally this would be achieved without any special configuration the other LAN computers. Not sure how far I'm going to get on my own at this point, any ideas? Is there something I'm missing, or anything I should need to know?

  • OpenVPN stopped working, what could have happened?

    - by jaja
    I have Openvpn, and it worked great when I used it on PC (Windows 8), then I copied all files (Certificates and config) to an Android 4 phone to use them. Now, Openvpn works on the phone, but not the PC. Specifically, when I open Google I get: The server at www.google.com can't be found, because the DNS lookup failed, but the VPN seems to be connected. I have a simple question, could the problem be because I copied the same files? Routing table before connecting:- IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.254 192.168.1.101 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.1.0 255.255.255.0 On-link 192.168.1.101 281 192.168.1.101 255.255.255.255 On-link 192.168.1.101 281 192.168.1.255 255.255.255.255 On-link 192.168.1.101 281 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.101 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.101 281 =========================================================================== Routing table after connecting:- IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.254 192.168.1.101 25 0.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 10.8.0.1 255.255.255.255 10.8.0.5 10.8.0.6 30 10.8.0.4 255.255.255.252 On-link 10.8.0.6 286 10.8.0.6 255.255.255.255 On-link 10.8.0.6 286 10.8.0.7 255.255.255.255 On-link 10.8.0.6 286 **.**.***.** 255.255.255.255 192.168.1.254 192.168.1.101 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 192.168.1.0 255.255.255.0 On-link 192.168.1.101 281 192.168.1.101 255.255.255.255 On-link 192.168.1.101 281 192.168.1.255 255.255.255.255 On-link 192.168.1.101 281 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.101 281 224.0.0.0 240.0.0.0 On-link 10.8.0.6 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.101 281 255.255.255.255 255.255.255.255 On-link 10.8.0.6 286 =========================================================================== Server conf:- port 1194 proto udp dev tun ca ca.crt cert myservername.crt key myservername.key dh dh1024.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt duplicate-cn keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status.log verb 3 push "redirect-gateway def1" Client conf:- client dev tun proto udp remote 89.32.148.35 1194 resolv-retry infinite nobind persist-key persist-tun mute-replay-warnings ca ca.crt cert client1.crt key client1.key verb 3 comp-lzo redirect-gateway def1 Here is the log file:- Tue Dec 18 16:34:27 2012 OpenVPN 2.2.2 Win32-MSVC++ [SSL] [LZO2] [PKCS11] built on Dec 15 2011 Tue Dec 18 16:34:27 2012 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info. 
Tue Dec 18 16:34:27 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables Tue Dec 18 16:34:27 2012 LZO compression initialized Tue Dec 18 16:34:27 2012 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ] Tue Dec 18 16:34:27 2012 Socket Buffers: R=[65536-65536] S=[65536-65536] Tue Dec 18 16:34:27 2012 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ] Tue Dec 18 16:34:27 2012 Local Options hash (VER=V4): '41690919' Tue Dec 18 16:34:27 2012 Expected Remote Options hash (VER=V4): '530fdded' Tue Dec 18 16:34:27 2012 UDPv4 link local: [undef] Tue Dec 18 16:34:27 2012 UDPv4 link remote: ..*.:1194 Tue Dec 18 16:34:27 2012 TLS: Initial packet from ..*.:1194, sid=4d1496ad 2079a5fa Tue Dec 18 16:34:28 2012 VERIFY OK: depth=1, /C=/ST=/L=/O=/OU=/CN=/name=/emailAddress= Tue Dec 18 16:34:28 2012 VERIFY OK: depth=0, /C=/ST=/L=/O=/OU=/CN=/name=/emailAddress= Tue Dec 18 16:34:29 2012 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Tue Dec 18 16:34:29 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Tue Dec 18 16:34:29 2012 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Tue Dec 18 16:34:29 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Tue Dec 18 16:34:29 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Tue Dec 18 16:34:29 2012 [myservername] Peer Connection Initiated with ..*.:1194 Tue Dec 18 16:34:32 2012 SENT CONTROL [myservername]: 'PUSH_REQUEST' (status=1) Tue Dec 18 16:34:32 2012 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,route 10.8.0.1,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5' Tue Dec 18 16:34:32 2012 OPTIONS IMPORT: timers and/or timeouts modified Tue Dec 18 16:34:32 2012 OPTIONS IMPORT: --ifconfig/up options modified Tue Dec 18 16:34:32 2012 OPTIONS IMPORT: route options modified Tue Dec 18 16:34:32 2012 ROUTE default_gateway=192.168.1.254 Tue Dec 18 16:34:32 2012 TAP-WIN32 device [Local Area Connection] opened: \.\Global{F0CFEBBF-9B1B-4CFB-8A82-027330974C30}.tap Tue Dec 18 16:34:32 2012 TAP-Win32 Driver Version 9.9 Tue Dec 18 16:34:32 2012 TAP-Win32 MTU=1500 Tue Dec 18 16:34:32 2012 Notified TAP-Win32 driver to set a DHCP IP/netmask of 10.8.0.6/255.255.255.252 on interface {F0CFEBBF-9B1B-4CFB-8A82-027330974C30} [DHCP-serv: 10.8.0.5, lease-time: 31536000] Tue Dec 18 16:34:32 2012 Successful ARP Flush on interface [26] {F0CFEBBF-9B1B-4CFB-8A82-027330974C30} Tue Dec 18 16:34:37 2012 TEST ROUTES: 2/2 succeeded len=1 ret=1 a=0 u/d=up Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD ..*. 
MASK 255.255.255.255 192.168.1.254 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=25 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD 0.0.0.0 MASK 128.0.0.0 10.8.0.5 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD 128.0.0.0 MASK 128.0.0.0 10.8.0.5 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 C:\WINDOWS\system32\route.exe ADD 10.8.0.1 MASK 255.255.255.255 10.8.0.5 Tue Dec 18 16:34:37 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Tue Dec 18 16:34:37 2012 Route addition via IPAPI succeeded [adaptive] Tue Dec 18 16:34:37 2012 Initialization Sequence Completed
