Search Results

Search found 19102 results on 765 pages for 'runtime library'.


  • How do you parse the XDG/GNOME/KDE menu/desktop item structure in C++?

    - by Joe Soul-bringer
    I would like to parse the menu structure for Gnome Panels (the standard Gnome Desktop application launcher) and its KDE equivalent using C/C++ function calls. That is, I'd like a list of the base menu categories and submenus installed on a given machine, obtained through fairly simple C/C++ function calls (with NO shelling out, please). I understand that these menus are in the standard XDG format, and that the menu structure is stored in XML files such as /home/user/.config/menus/applications.menu.

    I've looked here: http://www.freedesktop.org/wiki/Specifications/menu-spec?action=show&redirect=Standards%2Fmenu-spec but all they offer is the standard and some shell files for inserting item entries (I don't want shell scripts, I don't want installation, and I definitely don't want to create a C library from the XDG specification - I want to read the existing menu structure). I've also looked here: http://library.gnome.org/admin/system-admin-guide/stable/menustructure-13.html.en for more notes on these structures. None of this gives me a good idea of how to determine the menu structure from a C/C++ program.

    The actual GNOME menu files seem to be horrifically hairy - rather than describing the menu structure directly, they give an XML-coded description of all the changes the menus have gone through since installation. I assume Gnome Panels parses these files, so there's a function buried somewhere that does this, but I've yet to find it after scanning library.gnome.org for a couple of days. I've scanned the Nautilus source code as well, but Panels seems to live elsewhere, or is buried well. Thanks in advance
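
    A minimal sketch of one way to start, assuming libxml2 is available (the path and the XML walk are illustrative: this reads the raw .menu file and prints nested <Menu>/<Name> elements, without applying the merge/move rules of the full XDG menu spec):

        #include <libxml/parser.h>
        #include <libxml/tree.h>
        #include <cstdio>

        // Recursively print the <Name> of every <Menu> element.
        static void print_menus(xmlNode *node, int depth) {
            for (xmlNode *cur = node; cur; cur = cur->next) {
                if (cur->type != XML_ELEMENT_NODE ||
                    xmlStrcmp(cur->name, BAD_CAST "Menu") != 0)
                    continue;
                for (xmlNode *child = cur->children; child; child = child->next) {
                    if (child->type == XML_ELEMENT_NODE &&
                        xmlStrcmp(child->name, BAD_CAST "Name") == 0) {
                        xmlChar *name = xmlNodeGetContent(child);
                        std::printf("%*s%s\n", depth * 2, "", (const char *)name);
                        xmlFree(name);
                    }
                }
                print_menus(cur->children, depth + 1);  // descend into submenus
            }
        }

        int main() {
            // Illustrative path; substitute the actual user's menu file.
            xmlDoc *doc = xmlReadFile("/home/user/.config/menus/applications.menu",
                                      NULL, 0);
            if (!doc) return 1;
            print_menus(xmlDocGetRootElement(doc), 0);
            xmlFreeDoc(doc);
            return 0;
        }

    Build with pkg-config --cflags --libs libxml-2.0. For the fully merged view (the part this sketch skips), the gnome-menus library is the code Gnome Panels itself uses.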

    Read the article

  • Module not found

    - by TMP
    Hi There! I've been working on this one quite a bit and haven't gotten any closer to a solution. I just dug up my old copy of WindowsHookLib again - it's available with source at http://www.codeproject.com/KB/DLL/WindowsHookLib.aspx. This library allows global Windows mouse/keyboard/clipboard hooks, which is very useful. I'm trying to use the mouse hook in here to capture mouse motion (I could use a Timer that always polls Cursor.Position, but I plan on using more features of WindowsHookLib later). Code as follows:

        MouseHook mh = new MouseHook();
        mh.InstallHook();
        mh.MouseMove += new EventHandler<WindowsHookLib.MouseEventArgs>(mh_MouseMove);

    But on the call to InstallHook(), I get an exception: "The specified Module could not be found". Strange. Searching, I found that someone thought this occurs because a DLL is not in a place included in the Windows PATH variable, and because placing it in system32 didn't help, I went the whole hog and translated the thing to C# for inclusion directly in my project (I was curious how it works). However, the error was obstinately persistent, so I dug a bit on this and found the code in the library that is responsible. In InstallHook(), we have:

        IntPtr hinstDLL = Marshal.GetHINSTANCE(Assembly.GetExecutingAssembly().GetModules()[0]);
        this._hMouseHook = UnsafeNativeMethods.SetWindowsHookEx(14, this._mouseProc, hinstDLL, 0);
        if (this._hMouseHook == IntPtr.Zero)
        {
            throw new MouseHookException(new Win32Exception(Marshal.GetLastWin32Error()).Message);
        }

    And this (after modification and recompile) tells me that what I'm really getting is the Windows error ERROR_MOD_NOT_FOUND! Now, here I'm stumped. Didn't I just compile the hook library directly into my project? (UnsafeNativeMethods.SetWindowsHookEx is just a DllImported method from user32.) Any answers, prods in the right direction, or any hints, pointers or similar are very much appreciated!
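
    For reference, a minimal native sketch of the call the library wraps - hook id 14 is WH_MOUSE_LL, and for low-level hooks the module handle of the current executable is sufficient, which sidesteps the module lookup that produces ERROR_MOD_NOT_FOUND (this is an illustration in Win32 C++, not the library's own code):

        #include <windows.h>
        #include <cstdio>

        static HHOOK g_hook;

        // Called for every mouse event, system-wide.
        static LRESULT CALLBACK MouseProc(int nCode, WPARAM wParam, LPARAM lParam) {
            if (nCode == HC_ACTION && wParam == WM_MOUSEMOVE) {
                const MSLLHOOKSTRUCT *m = (const MSLLHOOKSTRUCT *)lParam;
                std::printf("mouse at %ld,%ld\n", m->pt.x, m->pt.y);
            }
            return CallNextHookEx(g_hook, nCode, wParam, lParam);
        }

        int main() {
            g_hook = SetWindowsHookEx(WH_MOUSE_LL, MouseProc,
                                      GetModuleHandle(NULL), 0);
            if (!g_hook) {
                std::printf("SetWindowsHookEx failed: %lu\n", GetLastError());
                return 1;
            }
            MSG msg;  // low-level hooks require a message loop on this thread
            while (GetMessage(&msg, NULL, 0, 0) > 0) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            UnhookWindowsHookEx(g_hook);
            return 0;
        }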

    Read the article

  • Multi-core processing in R on Windows XP via doMC and foreach

    - by Jan
    Hi guys, I'm posting this question to ask for advice on how to optimize the use of multiple processors from R on a Windows XP machine. At the moment I'm creating 4 scripts (each script with e.g. for (i in 1:100), for (i in 101:200), etc.) which I run in 4 different R sessions at the same time. This seems to use all the available CPU. I would, however, like to do this a bit more efficiently. One solution could be to use the "doMC" and "foreach" packages, but this is not possible in R on a Windows machine:

        library("foreach")
        library("strucchange")
        library("doMC")    # would this be possible on a Windows machine?
        registerDoMC(2)    # for a computer with two cores (processors)

        ## Nile data with one breakpoint: the annual flows drop in 1898
        ## because the first Ashwan dam was built
        data("Nile")
        plot(Nile)

        ## F statistics indicate one breakpoint
        fs.nile <- Fstats(Nile ~ 1)
        plot(fs.nile)
        breakpoints(fs.nile)    # , hpc = "foreach" --> It would be great to test this.
        lines(breakpoints(fs.nile))

    Any solutions or advice? Thanks, Jan

    Read the article

  • PHP on Ubuntu autoload_register and namespacing issue

    - by Tian Loon
    I have a problem here. I have created namespaces for all classes. Previously I was using Windows 7 to develop the current app and everything was fine; now that I've moved to Ubuntu, the problem appears. index.php:

        spl_autoload_extensions(".php");
        /*spl_autoload_register(function ($class) {
            require __DIR__ . '/../' . $class . '.php';
        });*/
        // provided I have tried the above method, which works on Windows 7 but not Ubuntu
        spl_autoload_register(function ($class) {
            require '/../' . $class . '.php';
        });
        // for your info, if I do this it works, no error:
        // require "../resources/library/Config.php";

        use resources\library as LIB;
        use resources\dal as DAL;

        // instantiation
        $config = new LIB\Config();
        print_r($config->fbKey());

    I got this error:

        PHP Warning: require(../resources\\library\\Config.php): failed to open stream: No such file or directory in /home/user/dir1/dir2/index.php

    I cannot find the error. Hope you guys can help me with this; any questions, don't hesitate to comment and I will edit. Thanks in advance.

    UPDATE - Extra Info: PHP version 5.4.6

    LATEST UPDATE: any idea how to solve this without using str_replace?

    Read the article

  • My iPhone app ran fine in simulator but crashed on device (iPod touch 3.1.2) test; I got the following output

    - by Mickey Shine
    I was running myapp on an iPod touch and I noticed it missed some libraries. Is that the reason?

        [Session started at 2010-03-19 15:57:04 +0800.]
        GNU gdb 6.3.50-20050815 (Apple version gdb-1128) (Fri Dec 18 10:08:53 UTC 2009)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".
        tty /dev/ttys007
        Loading program into debugger...
        Program loaded.
        target remote-mobile /tmp/.XcodeGDBRemote-237-78
        Switching to remote-macosx protocol
        mem 0x1000 0x3fffffff cache
        mem 0x40000000 0xffffffff none
        mem 0x00000000 0x0fff none
        run
        Running...
        [Switching to thread 11779]
        [Switching to thread 11779]
        sharedlibrary apply-load-rules all
        (gdb) continue
        warning: Unable to read symbols for "/Library/MobileSubstrate/MobileSubstrate.dylib" (file not found).
        2010-03-19 15:57:18.892 myapp[2338:207] MS:Notice: Installing: com.yourcompany.myapp [myapp] (478.52)
        2010-03-19 15:57:19.145 myapp[2338:207] MS:Notice: Loading: /Library/MobileSubstrate/DynamicLibraries/Backgrounder.dylib
        warning: Unable to read symbols for "/Library/MobileSubstrate/DynamicLibraries/Backgrounder.dylib" (file not found).
        warning: Unable to read symbols for "/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/usr/lib/libsubstrate.dylib" (file not found).
        MS:Warning: message not found [myappAppDelegate applicationWillResignActive:]
        MS:Warning: message not found [myappAppDelegate applicationDidBecomeActive:]
        2010-03-19 15:57:19.550 myapp[2338:207] in FirstViewController
        2010-03-19 15:57:20.344 myapp[2338:207] in load table view
        2010-03-19 15:57:20.478 myapp[2338:207] in loading splash view
        2010-03-19 15:57:22.793 myapp[2338:207] in set interface
        Program received signal: "0".
        warning: check_safe_call: could not restore current frame

    Read the article

  • LNK1104 error to lib file - continues despite removing includes and links

    - by user1556594
    A link error to a lib file popped up out of the blue in a C++ application of mine, after the code was working fine in my last session:

        Error 1 error LNK1104: cannot open file '..........\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\lib\fmodex_vc.lib'

    I triple-checked that my project directories were set up correctly to link to the lib file, that the file existed in said directory, and that it was a working version of the .lib. My next step was to remove the includes to the file and the links, to bypass the error and work on the rest of my code until the problem was solved. The error remains, however, despite:

    - Commenting out absolutely every include relating to the lib.
    - Commenting out absolutely every line of code dependent on the includes.
    - Removing the directory from VC++ Directories in the project properties.
    - Checking that the Additional Library Directories field was also clear of references.

    To my understanding this should have made the library and related code virtually non-existent to the compiler. What am I missing? The library itself is fmodex_vc.lib - part of the FMOD API for providing sound to interactive applications. Again, the application was working one session but failed to compile the next. I hadn't touched the code since, so this led me to believe some aspect of VS is at fault. I'd like to avoid the time involved in re-installing if possible, as I'm on the clock for a review tomorrow evening and there are a few more things I'd like to smooth out before then. If necessary, however, I won't hesitate. Very much appreciate the help.

    Read the article

  • Clone existing structs with different alignment in Visual C++

    - by Crend King
    Is there a way to clone an existing struct with different member alignment in Visual C++? Here is the background: I use a 3rd-party library which uses several structs. To fill up the structs, I pass the address of struct instances to some functions. Unfortunately, the functions only return an unaligned buffer, so the data of some members is always wrong. /Zp is out of the question, since it breaks other parts of the program. I know #pragma pack modifies the alignment of the following struct, but I would like to avoid copying the struct definitions into my code, as the definitions in the library might change in the future. Sample code:

    test.h:

        struct am_aligned
        {
            BYTE data1[10];
            ULONG data2;
        };

    test.cpp:

        #include "test.h"

        // typedef alignment(1) struct am_aligned am_unaligned;

        int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                              LPTSTR lpCmdLine, int nCmdShow)
        {
            char buffer[20] = {};
            for (int i = 0; i < sizeof(am_aligned); i++)
            {
                buffer[i] = i;
            }
            am_aligned instance = *(am_aligned*) buffer;
            return 0;
        }

    Consider am_aligned to be defined in the library header file. am_unaligned is my custom declaration, and only effective in test.cpp. The commented line does not work, of course. instance.data2 is 0x0f0e0d0c, while 0x0d0c0b0a is desired. Thanks for help!
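
    A minimal sketch of the #pragma pack route without duplicating the member list, assuming test.h has no include guard (or that the guard macro is #undef'd first) - the library header is re-included under 1-byte packing inside a separate namespace, so a later change to the library definition is picked up automatically:

        #pragma pack(push, 1)
        namespace packed {
        #include "test.h"   // re-include the library header under packing 1
        }
        #pragma pack(pop)

        // packed::am_aligned has the same members as ::am_aligned,
        // but with no padding between data1 and data2.
        typedef packed::am_aligned am_unaligned;

    With the sample buffer above, ((am_unaligned*) buffer)->data2 then reads bytes 10-13 and yields the desired 0x0d0c0b0a.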

    Read the article

  • Is there a fundamental difference between malloc and HeapAlloc (aside from portability)?

    - by Lambert
    Hi, I have code that, for various reasons, I'm trying to port from the C runtime to one that uses the Windows Heap API. I've encountered a problem: if I redirect the malloc/calloc/realloc/free calls to HeapAlloc/HeapReAlloc/HeapFree (with GetProcessHeap for the handle), the memory seems to be allocated correctly (no bad pointer returned, and no exceptions thrown), but the library I'm porting says "failed to allocate memory" for some reason.

    I've tried this both with the Microsoft CRT (which uses the Heap API underneath) and with another company's runtime library (which uses the Global Memory API underneath); the malloc of both of those works well with the library, but for some reason, using the Heap API directly doesn't work. I've checked that the allocations aren't too big (>= 0x7FFF8 bytes), and they're not.

    The only problem I can think of is memory alignment; is that the case? Or other than that, is there a fundamental difference between the Heap API and the CRT memory API that I'm not aware of? If so, what is it? And if not, then why does the static Microsoft CRT (included with Visual Studio) take some extra steps in malloc/calloc before calling HeapAlloc? I'm suspecting there's a difference but I can't think of what it might be. Thank you!
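
    A sketch of one redirection that preserves CRT semantics, as an illustration of where the two APIs genuinely differ: realloc(NULL, n) must act like malloc(n), and realloc(p, 0) conventionally frees, while HeapReAlloc does neither on its own - a straight one-to-one redirect silently breaks both (the wrapper names are mine):

        #include <windows.h>
        #include <stddef.h>

        void *my_malloc(size_t n) {
            return HeapAlloc(GetProcessHeap(), 0, n ? n : 1);
        }

        void *my_calloc(size_t count, size_t size) {
            // HEAP_ZERO_MEMORY supplies calloc's zero-fill guarantee.
            return HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, count * size);
        }

        void *my_realloc(void *p, size_t n) {
            if (!p) return my_malloc(n);                                // realloc(NULL, n)
            if (!n) { HeapFree(GetProcessHeap(), 0, p); return NULL; }  // realloc(p, 0)
            return HeapReAlloc(GetProcessHeap(), 0, p, n);
        }

        void my_free(void *p) {
            if (p) HeapFree(GetProcessHeap(), 0, p);  // free(NULL) is a no-op
        }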

    Read the article

  • How to easily map c++ enums to strings

    - by Roddy
    I have a bunch of enum types in some library header files that I'm using, and I want to have a way of converting enum values to user strings - and vice versa. RTTI won't do it for me, because the 'user strings' need to be a bit more readable than the enumerations. A brute-force solution would be a bunch of functions like the following, but I feel that's a bit too C-like:

        enum MyEnum { VAL1, VAL2, VAL3 };

        String getStringFromEnum(MyEnum e)
        {
            switch (e)
            {
                case VAL1: return "Value 1";
                case VAL2: return "Value 2";
                case VAL3: return "Value 3";
                default: throw Exception("Bad MyEnum");
            }
        }

    I have a gut feeling that there's an elegant solution using templates, but I can't quite get my head round it yet.

    UPDATE: Thanks for the suggestions - I should have made clear that the enums are defined in a third-party library header, so I don't want to have to change their definition. My gut feeling now is to avoid templates and do something like this:

        char * MyGetValue(int v, char *tmp);  // implementation is trivial

        #define ENUM_MAP(type, strings) char * getStringValue(const type &T) \
        { \
            return MyGetValue((int)T, strings); \
        }

        enum eee {AA,BB,CC};   // exists in library header file
        enum fff {DD,GG,HH};   // exists in library header file
        ENUM_MAP(eee, "AA|BB|CC")
        ENUM_MAP(fff, "DD|GG|HH")

        // To use...
        eee e; fff f;
        std::cout << getStringValue(e);
        std::cout << getStringValue(f);
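
    For comparison, a non-macro sketch under the same constraint (the enum definitions stay untouched in the library header); the table contents are illustrative and live entirely in our own code:

        #include <map>
        #include <stdexcept>
        #include <string>

        enum MyEnum { VAL1, VAL2, VAL3 };  // imagine this is the library's header

        // One lookup table per enum, maintained alongside our code.
        static const std::map<MyEnum, std::string> kMyEnumNames = {
            { VAL1, "Value 1" },
            { VAL2, "Value 2" },
            { VAL3, "Value 3" },
        };

        std::string getStringFromEnum(MyEnum e) {
            auto it = kMyEnumNames.find(e);
            if (it == kMyEnumNames.end()) throw std::runtime_error("Bad MyEnum");
            return it->second;
        }

        MyEnum getEnumFromString(const std::string &s) {
            for (const auto &kv : kMyEnumNames)  // reverse lookup: linear scan
                if (kv.second == s) return kv.first;
            throw std::runtime_error("Bad MyEnum name");
        }

    (Brace-initializing the map needs C++11; with older compilers the table can be filled in a one-time init function instead.)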

    Read the article

  • XSLT: insert parameter value inside of an html attribute

    - by usr
    How do I make the following code insert the videoId parameter?

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE xsl:stylesheet [ <!ENTITY nbsp "&#x00A0;"> ]>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxml="urn:schemas-microsoft-com:xslt"
            xmlns:YouTube="urn:YouTube"
            xmlns:umbraco.library="urn:umbraco.library"
            exclude-result-prefixes="msxml umbraco.library YouTube">

          <xsl:output method="xml" omit-xml-declaration="yes"/>
          <xsl:param name="videoId"/>

          <xsl:template match="/">
            <a href="{$videoId}">{$videoId}</a>
            <object width="425" height="355">
              <param name="movie" value="http://www.youtube.com/v/{$videoId}&amp;hl=en"></param>
              <param name="wmode" value="transparent"></param>
              <embed src="http://www.youtube.com/v/{$videoId}&amp;hl=en"
                     type="application/x-shockwave-flash" wmode="transparent"
                     width="425" height="355"></embed>
            </object>$videoId {$videoId} {$videoId}
            <xsl:value-of select="/macro/videoId" />
          </xsl:template>
        </xsl:stylesheet>

    As you can see, I have experimented quite a bit. <xsl:value-of select="/macro/videoId" /> actually outputs the videoId, but all the other occurrences do not. This must be an easy question to answer, but I just cannot get it to work.

    Read the article

  • Does the pointer to free() have to point to the beginning of the memory block, or can it point to the interior?

    - by Lambert
    The question is in the title... I searched but couldn't find anything.

    Edit: I don't really see any need to explain this, but because people think that what I'm saying makes no sense (and that I'm asking the wrong questions), here's the problem. Since people seem to be very interested in the "root" cause of all the problems rather than the actual question asked (since that apparently helps things get solved better - let's see if it does):

    I'm trying to make a D runtime library based on NTDLL.dll, so that I can use that library for subsystems other than the Win32 subsystem. That forces me to link only with NTDLL.dll. Yes, I'm aware that the functions are "undocumented" and could change at any time (even though I'd bet a hundred dollars that wcstombs will still do the same exact thing 20 years from now, if it still exists). Yes, I know people (especially Microsoft) don't like developers linking to that library, and that I'll probably get criticized for it right here. And yes, those two points above mean that programs like chkdsk and defragmenters that run before the Win32 subsystem aren't even supposed to be created in the first place, because it's literally impossible to link with anything like kernel32.dll or msvcrt.dll and still have NT-native executables, so we developers should just pretend that those stages are meant to be forever out of our reach.

    But no, I doubt that anyone here would like me to paste a few thousand lines of code and help me look through them to figure out why memory allocations that aren't failing are being rejected by the source code I'm modifying. So that's why I asked about a different problem than the "root" cause, even though that's supposedly known to be the best practice by the community. If things still don't make sense, feel free to post comments below! :)
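
    For illustration, a minimal sketch of the constraint the title asks about - free() must receive exactly the pointer the allocator returned; passing an interior pointer is undefined behavior:

        #include <stdlib.h>
        #include <string.h>

        int main() {
            char *block = (char *)malloc(64);
            if (!block) return 1;

            char *cursor = block + 16;   // interior pointer into the same block
            memset(cursor, 0, 16);       // fine to use

            // free(cursor);  // WRONG: not the pointer malloc returned
            free(block);      // correct: the exact pointer from malloc
            return 0;
        }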

    Read the article

  • Not all symbols of an DLL-exported class is exported (VS9)

    - by mandrake
    I'm building a DLL from a group of static libraries, and I'm having a problem where only parts of classes are exported. What I'm doing is declaring all symbols I want to export with a preprocessor definition like:

        #if defined(MYPROJ_BUILD_DLL)   // Build as a DLL
        #  define MY_API __declspec(dllexport)
        #elif defined(MYPROJ_USE_DLL)   // Use as a DLL
        #  define MY_API __declspec(dllimport)
        #else                           // Build or use as a static lib
        #  define MY_API
        #endif

    For example:

        class MY_API Foo { ... };

    I then build the static library with MYPROJ_BUILD_DLL and MYPROJ_USE_DLL undefined, causing a static library to be built. In another build I create a DLL from these static libraries, so I define MYPROJ_BUILD_DLL, causing all symbols I want to export to be attributed with __declspec(dllexport) (this is done by including all static library headers in the DLL-project source file).

    Ok, so now to the problem. When I use this new DLL I get unresolved externals, because not all symbols of a class are exported. For example, in a class like this:

        class MY_API Foo {
        public:
            Foo(char const*);
            int bar();
        private:
            Foo(char const*, char const*);
        };

    only Foo::Foo(char const*, char const*) and int Foo::bar() are exported. How can that be? I could understand it if the entire class were missing, due to e.g. forgetting to include the header in the DLL build. But it's only partially missing. Also, if Foo::Foo(char const*) were not implemented, the DLL build would have unresolved external errors - but the build is fine (I also double-checked for declarations without implementations).

    Note: the combined size of the static libraries I'm combining is in the region of 30MB, and the resulting DLL is 1.2MB. I'm using Visual Studio 9.0 (2008) to build everything, and Depends to check for exported symbols.

    Read the article

  • Boost::Thread linking error on OSX?

    - by gct
    So I'm going nuts trying to figure this one out. Here's my basic setup: I'm compiling a shared library with a bunch of core functionality that uses a lot of Boost stuff. We'll call this library libpf_core.so. It's linked with the Boost static libraries, specifically the python, system, filesystem, thread, and program_options libraries. This all goes swimmingly.

    Now, I have a little test program called test_socketio which is compiled into a shared library (it's loaded as a plugin at runtime). It uses some Boost stuff like boost::bind and boost::thread, and it's linked against libpf_core.so (which has the Boost libraries included, remember). When I go to compile test_socketio, though, out of all my plugins it alone gives me a linking error:

        [ Building test_socketio ]
        g++ -c -pg -g -O0 -I/usr/local/include -I../include test_socketio.cc -o test_socketio.o
        g++ -shared test_socketio.o -lpy_core -o test_socketio.so
        Undefined symbols:
          "boost::lock_error::lock_error()", referenced from:
              boost::unique_lock<boost::mutex>::lock() in test_socketio.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    And I'm going crazy trying to figure out why. I've tried explicitly linking boost::thread into the plugin to no avail, and tried ensuring that I'm using the Boost headers associated with the libraries linked into libpf_core.so in case there was a conflict there. Is there something OSX-specific regarding Boost that I'm missing? In my searching on Google I've seen a number of other people get this error, but no one seems to have come up with a satisfactory solution.

    Read the article

  • What pattern is layered architecture in ASP.NET?

    - by haansi
    Hi, I am an ASP.NET developer and don't know much about patterns and architecture. I will be very thankful if you can please guide me here. In my web applications I use 4 layers:

    - Web site project (having web forms + code-behind .cs files, user controls + code-behind .cs files, master pages + code-behind .cs files)
    - CustomTypesLayer, a class library (having custom types, enumerations, DTOs, constructors, getters, setters and validations)
    - BusinessLogicLayer, a class library (having all business logic, rules, and all calls to DAL functions)
    - DataAccessLayer, a class library (having just classes communicating with the database)

    My user interface just calls the BusinessLogicLayer. The BusinessLogicLayer does processing itself and calls DataAccessLayer functions for data. Web forms do not call the DAL directly. The CustomTypesLayer is shared by all layers.

    Please guide me: is this approach a pattern? I thought it might be MVC or MVP, but the pages have their code-behind files as well, which is confusing me. If it is not a pattern, is it near to some pattern? Please guide. Thanks

    Read the article

  • Efficient (basic) regular expression implementation for streaming data

    - by Brendan Dolan-Gavitt
    I'm looking for an implementation of regular expression matching that operates on a stream of data - i.e., it has an API that allows a user to pass in one character at a time and reports when a match is found on the stream of characters seen so far. Only very basic (classic) regular expressions are needed, so a DFA/NFA-based implementation seems like it would be well suited to the problem. Based on the fact that it's possible to do regular expression matching using a DFA/NFA in a single linear sweep, it seems like a streaming implementation should be possible. Requirements:

    - The library should not wait until the full string has been read before performing the match. The data I have really is streaming; there is no way to know how much data will arrive, and it's not possible to seek forward or backward.
    - Implementing specific stream matching for a couple of special cases is not an option, as I don't know in advance what patterns a user might want to look for.

    For the curious, my use case is the following: I have a system which intercepts memory writes inside a full system emulator, and I would like to have a way to identify memory writes that match a regular expression (e.g., one could use this to find the point in the system where a URL is written to memory). I have found (links de-linkified because I don't have enough reputation):

    - stackoverflow.com/questions/1962220/apply-a-regex-on-stream
    - stackoverflow.com/questions/716927/applying-a-regular-expression-to-a-java-i-o-stream
    - www.codeguru.com/csharp/csharp/cs_data/searching/article.php/c14689/Building-a-Regular-Expression-Stream-Search-with-the-NET-Framework.htm

    But all of these attempt to convert the stream to a string first and then use a stock regular expression library. Another thought I had was to modify the RE2 library, but according to the author it is architected around the assumption that the entire string is in memory at the same time. If nothing's available, then I can start down the unhappy path of reinventing this wheel to fit my own needs, but I'd really rather not if I can avoid it. Any help would be greatly appreciated!
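
    A minimal sketch of the DFA idea with a hard-coded machine (a toy, for the pattern "ab+c") to show the one-character-at-a-time API shape; a real implementation would compile the transition table from a user-supplied pattern:

        #include <cstdio>

        // Streaming matcher for the toy pattern "ab+c".
        // States: 0 = start, 1 = saw 'a', 2 = saw "ab...b", 3 = match.
        class StreamMatcher {
        public:
            StreamMatcher() : state_(0) {}

            // Feed one character; returns true when a match just completed.
            bool feed(char c) {
                switch (state_) {
                case 0: state_ = (c == 'a') ? 1 : 0; break;
                case 1: state_ = (c == 'b') ? 2 : (c == 'a') ? 1 : 0; break;
                case 2:
                    if (c == 'c')      state_ = 3;
                    else if (c == 'b') state_ = 2;
                    else               state_ = (c == 'a') ? 1 : 0;
                    break;
                }
                if (state_ == 3) { state_ = 0; return true; }  // report and reset
                return false;
            }

        private:
            int state_;
        };

        int main() {
            StreamMatcher m;
            const char *stream = "xxabbbcyy";  // stand-in for data arriving in pieces
            for (const char *p = stream; *p; ++p)
                if (m.feed(*p))
                    std::printf("match ending at offset %ld\n", (long)(p - stream));
            return 0;
        }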

    Read the article

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, DLLs) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines, but the solution failed to build on my machine. The build on my machine encountered two problems:

    - During a simple build, creation of the biggest static library (about 522MB in debug mode) would fail with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879".
    - A full solution rebuild creates this library; however, when it comes to linking the library to the main .exe file, devenv.exe spawns link.exe, which consumes about 80MB of physical memory and 250MB of virtual and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On the PCs of my colleagues where a successful build could be performed, there is only one link.exe process, which uses all the memory required for linking (about 500MB physical).

    There is plenty of hard drive space on my machine and the file system is NTFS. All three of our systems are similar - Core2Quad processors, 4GB of RAM, Windows XP SP3. We are using Visual Studio installed from the same source. I tried using different RAM and CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86, and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except for maybe "out of heap space", however I am not sure how I should fight this. The only idea that I have left is reinstalling the OS, but I am not sure that it would help, and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks

    Read the article

  • streaming xml pretty printer in C/C++ using expat or libxml2?

    - by Mark Zeren
    I have a library that outputs XML without whitespace, all on one line. In some cases I'd like to pretty-print that output. I'm looking for a BSD-ish licensed C/C++ library or sample code that will take a raw XML byte stream and pretty-print it. Here's some pseudo code showing one way that I might use this functionality:

        void my_write(const char* buf, int len);

        PrettyPrinter pp(bind(&my_write));
        while (...) {
            // ... get some more xml ...
            const char* buf = xmlSource.get_buf();
            int len = xmlSource.get_buf_len();
            int written = pp.write(buf, len);  // calls my_write with pretty printed xml
            // ... error handling, maybe call write again, etc. ...
        }

    I'd like to avoid instantiating a DOM representation. I already have dependencies on the expat and libxml2 shared libraries, and I'd rather not add any more shared library dependencies.
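
    A minimal sketch of the expat route, assuming SAX events are enough for the formatting wanted (elements only; attributes, character data, and error handling are elided):

        #include <expat.h>
        #include <cstdio>
        #include <cstring>

        static int depth = 0;

        static void XMLCALL on_start(void *user, const XML_Char *name,
                                     const XML_Char **atts) {
            std::printf("%*s<%s>\n", depth * 2, "", name);  // attributes elided
            ++depth;
        }

        static void XMLCALL on_end(void *user, const XML_Char *name) {
            --depth;
            std::printf("%*s</%s>\n", depth * 2, "", name);
        }

        int main() {
            XML_Parser p = XML_ParserCreate(NULL);
            XML_SetElementHandler(p, on_start, on_end);

            // Feed chunks exactly as they arrive from the stream.
            const char *chunks[] = { "<a><b><c/>", "</b></a>" };
            for (int i = 0; i < 2; ++i)
                if (XML_Parse(p, chunks[i], (int)std::strlen(chunks[i]),
                              i == 1 /* isFinal */) == XML_STATUS_ERROR)
                    return 1;
            XML_ParserFree(p);
            return 0;
        }

    Since expat is already a dependency here, this adds no new shared libraries; the same push-style structure fits the PrettyPrinter::write interface sketched above.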

    Read the article

  • C++ Optimize if/else condition

    - by Heye
    I have a single line of code that consumes 25-30% of the runtime of my application. It is a less-than comparator for an std::set (the set is implemented with a red-black tree). It is called about 180 million times within 52 seconds.

        struct Entry {
            const float _cost;
            const long _id;
            // some other vars

            Entry(float cost, long id) : _cost(cost), _id(id) {}
        };

        template<class T>
        struct lt_entry : public binary_function<T, T, bool>
        {
            bool operator()(const T &l, const T &r) const
            {
                // Most readable shape
                if (l._cost != r._cost) {
                    return r._cost < l._cost;
                } else {
                    return l._id < r._id;
                }
            }
        };

    The entries should be sorted by cost, and if the cost is the same, by their id. I have many insertions for each extraction of the minimum. I thought about using Fibonacci heaps, but I have been told that they are theoretically nice but suffer from high constants and are pretty complicated to implement. And since insert is in O(log(n)), the runtime increase is nearly constant with large n. So I think it's okay to stick with the set.

    To improve performance I tried to express it in different shapes:

        return l._cost < r._cost || r._cost > l._cost || l._id < r._id;

        return l._cost < r._cost || (l._cost == r._cost && l._id < r._id);

    Even this:

        typedef union { float _f; int _i; } flint;
        // ...
        flint diff;
        diff._f = (l._cost - r._cost);
        return (diff._i && diff._i >> 31) || l._id < r._id;

    But the compiler seems to be smart enough already, because I haven't been able to improve the runtime. I also thought about SSE, but this problem is really not very applicable for SSE... The assembly looks somewhat like this:

        movss  (%rbx),%xmm1
        mov    $0x1,%r8d
        movss  0x20(%rdx),%xmm0
        ucomiss %xmm1,%xmm0
        ja     0x410600 <_ZNSt8_Rb_tree[..]+96>
        ucomiss %xmm0,%xmm1
        jp     0x4105fd <_ZNSt8_Rb_[..]_+93>
        jne    0x4105fd <_ZNSt8_Rb_[..]_+93>
        mov    0x28(%rdx),%rax
        cmp    %rax,0x8(%rbx)
        jb     0x410600 <_ZNSt8_Rb_[..]_+96>
        xor    %r8d,%r8d

    I have a very tiny bit of experience with assembly language, but not really much. I thought it would be the best (only?) place to squeeze out some performance, but is it really worth the effort? Can you see any shortcuts that could save some cycles? The platform the code will run on is Ubuntu 12 with gcc 4.6 (-std=c++0x) on a many-core Intel machine. The only libraries available are boost, openmp and tbb. I am really stuck on this one; it seems so simple, but takes that much time. I have been crunching my head for days thinking about how I could improve this line... Can you give me a suggestion how to improve this part, or is it already at its best?
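
    For what it's worth, a compact equivalent of the comparator using std::tuple's lexicographic ordering (available with -std=c++0x; the semantics match the original - higher cost first, then lower id - though it is unlikely to beat what the compiler already generates):

        #include <tuple>

        struct lt_entry_tie
        {
            // Entry as defined above.
            bool operator()(const Entry &l, const Entry &r) const
            {
                // Comparing (r._cost, l._id) < (l._cost, r._id) orders by
                // cost descending, then id ascending, like the original.
                return std::tie(r._cost, l._id) < std::tie(l._cost, r._id);
            }
        };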

    Read the article

  • Freetype2 (error-)return value documentation

    - by Awaki
    In short, I'm looking for documentation that would limit the error situations to check for after a FreeType library function fails, much as the OpenGL and Win32 APIs document the error codes generated by their respective functions. I can't seem to find such documentation, though, so I was wondering how best to handle translation of FreeType errors to typed exceptions.

    Background: I am currently in the process of implementing font-rendering capability (using FreeType) for my GUI framework, which makes strong use of typed exceptions to indicate error situations. However, the FreeType docs seem to completely omit which errors can be expected from which functions. That, if such documentation does indeed not exist, would basically leave me with two options: either guessing which errors make sense for a certain FreeType function (obviously prone to mistakes on my part), or considering every error code for translation into appropriate exceptions (less verbose, since I would have to write the translation only once). Performance isn't really critical in the code that calls the FreeType library, so even the latter option would probably be acceptable - but surely there must be some kind of documentation on which library calls may return which FreeType errors?

    Is there any such documentation which I just somehow managed not to find? Should I go the route of generically expecting every error code for translation? Or are there other ways to approach this problem? By the way, I wanted to avoid introducing some kind of generic FreetypeException (containing a description of the FreeType error), since I intended to completely hide which libraries I'm using (not from a legal point of view, mind you), but I guess I can be convinced to do this anyway if the consensus is that it would be the best option. I don't think it matters for this question, but I'm writing in C++.
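
    Not the per-function documentation asked about, but related: FreeType's own fterrors.h describes a macro protocol for expanding the full error list into a code-to-message table, which at least makes the "translate every error code" option cheap. A sketch (the guard macro to #undef is version-dependent; older releases use __FTERRORS_H__):

        #include <ft2build.h>
        #include FT_FREETYPE_H

        // Expand FreeType's error list into a { code, message } table.
        #undef  __FTERRORS_H__
        #define FT_ERRORDEF( e, v, s )  { e, s },
        #define FT_ERROR_START_LIST     {
        #define FT_ERROR_END_LIST       { 0, 0 } };

        static const struct { int code; const char *msg; } ft_errors[] =
        #include FT_ERRORS_H

        const char *ft_strerror(int code)
        {
            for (const auto &e : ft_errors)  // tiny table, linear scan is fine
                if (e.code == code) return e.msg;
            return "unknown FreeType error";
        }

    A typed-exception layer could then key off this table, or off the FT_Err_* enumeration, rather than per-function documentation.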

    Read the article

  • When using Direct3D, how much math is being done on the CPU?

    - by zirgen
    Context: I'm just starting out. I'm not even touching the Direct3D 11 API, and instead looking at understanding the pipeline, etc. From looking at documentation and information floating around the web, it seems like some calculations are being handled by the application. That is, instead of simply presenting matrices to multiply to the GPU, the calculations are being done by a math library that operates on the CPU. I don't have any particular resources to point to, although I guess I can point to the XNA Math library or the samples shipped in the February DX SDK. When you see code like mViewProj = mView * mProj;, that projection is being calculated on the CPU. Or am I wrong?

    If you were writing a program where you can have 10 cubes on the screen, and you can move or rotate cubes as well as the viewpoint, what calculations would you do on the CPU? I think I would store the geometry for a single cube, and then transform matrices representing the actual instances. Then it seems I would use the XNA math library, or another of my choosing, to transform each cube in model space, then get the coordinates in world space, then push the information to the GPU. That's quite a bit of calculation on the CPU. Am I wrong? Am I reaching conclusions based on too little information and understanding? What terms should I Google for, if the answer is STFW? Or, if I am right, why aren't these calculations being pushed to the GPU as well?
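
    As a concrete illustration of the pattern described above (a sketch, not the only way to split the work), per-cube matrices built on the CPU with the XNA Math library; only the final matrix would be handed to the GPU, e.g. through a constant buffer (not shown), and the function name is mine:

        #include <xnamath.h>  // XNA Math, as shipped in the DX SDK

        // Build one cube instance's world-view-projection on the CPU.
        void BuildWVP(const XMMATRIX &view, const XMMATRIX &proj,
                      float angle, float x, float y, float z,
                      XMFLOAT4X4 *out)
        {
            XMMATRIX world = XMMatrixMultiply(XMMatrixRotationY(angle),
                                              XMMatrixTranslation(x, y, z));
            XMMATRIX wvp   = XMMatrixMultiply(XMMatrixMultiply(world, view), proj);
            XMStoreFloat4x4(out, wvp);  // one small CPU-side result per instance
        }

    The per-vertex work (multiplying every cube vertex by these matrices) is what normally runs on the GPU, in the vertex shader; the CPU typically only composes a handful of 4x4 matrices per object per frame.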

    Read the article

  • BufferedImage & ColorModel in Java

    - by spol
    I am using an image processing library in Java to manipulate images. The first step is to read an image and create a java.awt.image.BufferedImage object. I do it this way:

        BufferedImage sourceImage = ImageIO.read(new File(filePath));

    The above code creates a BufferedImage object with a DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0. This is what happens when I run the above code on my MacBook. But when I run this same code on a Linux machine (hosted server), it creates a BufferedImage object with ColorModel: #pixelBits = 24 numComponents = 3 color space = java.awt.color.ICC_ColorSpace@c39a20 transparency = 1 has alpha = false isAlphaPre = false. And I use the same JPG image in both cases.

    I don't know why the color model of the same image is different when run on Mac and Linux. The color model on the Mac has 4 components, while the color model on Linux has 3 components. There is a problem arising because of this: the image processing library that I use always assumes that there are 4 components in the color model of the image passed, and it throws an array-out-of-bounds exception when run on the Linux box. On the MacBook it runs fine. I am not sure if I am doing something wrong or if there is a problem with the library. Please let me know your thoughts, and ask me any questions if I am not making sense!

    Read the article

  • Windows Workflow Foundation 4.0 Designer Rehosting with Custom Activities

    - by Robert
    I have several WF 4.0 workflows that I have created for an application my company is developing. Some of these workflows are simple, and some are very complex (i.e. many steps, several different types of activities, custom activities). For many of these workflows I have created several custom code activities to support some internal process types. The workflows work great, and we have had very few problems when it comes to maintaining them within VS 2010.

    We now want to move that responsibility off to our business users, so I have created a WPF application to rehost the WF designer (according to the MS samples). My problem is that when I open one of the workflows that contains custom code activities, those activities are represented as red boxes with the error message "Activity could not be loaded because of errors in the XAML." I have done research and found several posts that mention that this is usually a problem with namespacing and referencing. The rehosted designer is in a namespace similar to this:

        Company.Application.Workflow.Designer

    And the custom code activities are contained within a separate custom workflow library, which I have included as a reference in the designer project. The library's namespace is similar to this:

        Company.Application.Workflow.Data.Activities

    As I have mentioned, the library is set as a reference in the designer's project, and I see it being copied to the output when I build the project. I have also included the reference in the XAML of the main designed application. What am I missing?

    Read the article

  • Compiling my Boost/NTL program with c++ on Linux.

    - by Martin Lauridsen
    Hi SO, I wrote a client program and a server program that use the NTL library and Boost::Asio to do client/server communication for an integer factorization application, in C++. Both sides consist of several headers and .cpp files. Both projects compile fine individually on Windows in Visual Studio; all I did was add the include paths of NTL and Boost to both projects:

        Additional include paths: "D:\Downloads\WinNTL-5_5_2\include";D:\boost_1_42_0

    Furthermore, for both projects I added the two library paths:

        Additional library directories: D:\boost_1_42_0\stage\lib;"D:\Documents\Visual Studio 2008\Projects\ntl\Debug"

    and added ntl.lib under Additional dependencies. As said, it compiles fine on Windows. But when I put the code on a Linux machine provided by the university, I try to compile with the following statement:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include \
            -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include \
            client_protocol.cpp mpqs_client.cpp mpqs_sieve.cpp mpqs_helper.cpp \
            -o mpqs_helper \
            -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl \
            -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm \
            -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -static

    Upon doing this, I get a huge error, which I posted here. Any idea how to fix this, please?

    Read the article

  • IN SQL operator in R-Shiny

    - by Piyush
    I am taking a multiple selection for the component as per the code below:

        selectInput("cmpnt", "Choose Component:",
                    choices = as.character(levels(Material_Data()$CMPNT_NM)),
                    multiple = TRUE)

    But when I try to write a SQL statement as given below, it's not working, and neither is it throwing any error message. When I was selecting one option at a time (without multiple = TRUE) it was working (since I was using the "=" operator). But after using multiple = TRUE I need to use the IN operator, which is not working:

        Input_Data2 <- fn$sqldf(paste0(
            "select * from Input_Data1 where MTRL_NBR = '$mtrl1' and CMPNT_NM in ('$cmpnt1')"))

    Thanks in advance for any help on this.

    Thanks jdharrison! Please find the detailed code:

        # server.R
        library(RODBC)
        library(shiny)
        library(sqldf)

        Input_Data <- readRDS("InputSource.rds")
        Mtrl <- factor(Input_Data$MTRL_NBR)
        Mtrl_List <- levels(Mtrl)

        shinyServer(function(input, output) {
          # First UI input (Service column) filter clientData
          output$Choose_Material <- renderUI({
            if (is.null(clientData()))
              return("No client selected")
            selectInput("mtrl", "Choose Material:",
                        choices = as.character(levels(clientData()$MTRL_NBR)),
                        selected = input$mtrl)
          })

          # Second UI input (Rounds column) filter service-filtered clientData
          output$Choose_Component <- renderUI({
            if (is.null(input$mtrl))
              return()
            if (is.null(Material_Data()))
              return("No service selected")
            selectInput("cmpnt", "Choose Component:",
                        choices = as.character(levels(Material_Data()$CMPNT_NM)),
                        multiple = TRUE)
          })

          # First data load (client data)
          clientData <- reactive({
            # get(input$Input_Data)
            return(Input_Data)
          })

          # Second data load (filter by service column)
          Material_Data <- reactive({
            dat <- clientData()
            if (is.null(dat))
              return(NULL)
            if (!is.null(input$mtrl))  # !
              dat <- dat[dat$MTRL_NBR %in% input$mtrl, ]
            dat <- droplevels(dat)
            return(dat)
          })

          output$Choose_Columns <- renderUI({
            if (is.null(input$mtrl))
              return()
            if (is.null(input$cmpnt))
              return()
            colnames <- names(Input_Data)
            checkboxGroupInput("columns", "Choose Columns To Display The Data:",
                               choices = colnames, selected = colnames)
          })

          output$text <- renderText({
            print(input$cmpnt)
          })

          output$data_table <- renderTable({
            if (is.null(input$mtrl))
              return()
            if (is.null(input$columns) || !(input$columns %in% names(Input_Data)))
              return()
            Input_Data1 <- Input_Data[, input$columns, drop = FALSE]
            cmpnt1 <- input$cmpnt
            mtrl1 <- input$mtrl
            Input_Data2 <- fn$sqldf(paste0(
              "select * from Input_Data1 where MTRL_NBR = '$mtrl1' and CMPNT_NM in ('$cmpnt1')"))
            head(Input_Data2, 10)
          })
        })

    Read the article
