Search Results

Search found 1764 results on 71 pages for 'boost interprocess'.

Page 26 of 71

  • Work with function references

    - by Ockonal
    Hello, I have another question about function references. For example, I have this definition: typedef boost::function<bool (Entity &handle)> behaviorRef; std::map< std::string, ptr_vector<behaviorRef> > eventAssociation; The first question is: how do I insert values into such a map? I tried: eventAssociation.insert(std::pair< std::string, ptr_vector<behaviorRef> >(eventType, ptr_vector<behaviorRef>(callback))); But I get the error: no matching function for call to ‘boost::ptr_vector<boost::function<bool(Entity&)> >::push_back(Entity::behaviorRef&)’ I understand the error, but can't make the code work. The second question is how to call such functions. For example, given one behaviorRef object, how do I call it through boost::bind, passing my own values?
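
    A minimal sketch of both operations (the onCollide callback and the "collide" key are made up for illustration): boost::ptr_vector owns heap pointers, so the boost::function must be allocated with new, and a stored behaviorRef is then invoked with plain function-call syntax.

      #include <map>
      #include <string>
      #include <boost/function.hpp>
      #include <boost/ptr_container/ptr_vector.hpp>

      class Entity { /* ... */ };
      typedef boost::function<bool (Entity&)> behaviorRef;

      bool onCollide(Entity&) { return true; }   // hypothetical callback

      int main() {
          std::map<std::string, boost::ptr_vector<behaviorRef> > eventAssociation;
          // operator[] default-constructs the ptr_vector; push_back takes
          // ownership of a heap-allocated boost::function:
          eventAssociation["collide"].push_back(new behaviorRef(onCollide));
          // Invoking a stored behaviorRef:
          Entity e;
          bool handled = eventAssociation["collide"][0](e);
          return handled ? 0 : 1;
      }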

    Read the article

  • Boost::Mutex & Malloc

    - by M. Tibbits
    Hi all, I'm trying to use a faster memory allocator in C++. I can't use Hoard due to licensing / cost. I was using NEDMalloc in a single-threaded setting and got excellent performance, but I'm wondering if I should switch to something else -- as I understand things, NEDMalloc is just a replacement for C-based malloc() & free(), not the C++-based new & delete operators (which I use extensively). The problem is that I now need to be thread-safe, so I'm trying to malloc an object which is reference counted (to prevent excess copying), but which also contains a mutex pointer. That way, if you're about to delete the last copy, you first need to lock the pointer, then free the object, and lastly unlock & free the mutex. However, using malloc to create a boost::mutex appears impossible because I can't initialize the private object, as calling the constructor directly is verboten. So I'm left with this odd situation, where I'm using new to allocate the lock and nedmalloc to allocate everything else. But when I allocate a large amount of memory, I run into allocation errors (which disappear when I switch to malloc instead of nedmalloc ~ but the performance is terrible). My guess is that this is due to fragmentation in the memory and an inability of nedmalloc and new to play nice side by side. There has to be a better solution. What would you suggest?
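
    For the mutex-in-malloc'd-memory part specifically, placement new sidesteps the "can't call the constructor" problem: allocate raw bytes with any malloc-compatible allocator, construct the mutex in place, and run the destructor by hand before freeing. A minimal sketch (std::malloc stands in for nedmalloc here):

      #include <new>        // placement new
      #include <cstdlib>
      #include <boost/thread/mutex.hpp>

      int main() {
          void* raw = std::malloc(sizeof(boost::mutex));  // or nedmalloc(...)
          boost::mutex* m = new (raw) boost::mutex;       // construct in place

          { boost::mutex::scoped_lock lock(*m); /* critical section */ }

          m->~mutex();      // run the destructor explicitly
          std::free(raw);   // or nedfree(...)
          return 0;
      }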

    Read the article

  • Why aren't these shared_ptrs pointing to the same container?

    - by BeeBand
    I have a class Model: class Model { ... boost::shared_ptr<Deck> _deck; boost::shared_ptr<CardStack> _stack[22]; }; Deck inherits from CardStack. I tried to make _stack[0] point to the same thing that _deck points to by going: { _deck = boost::shared_ptr<Deck>(new Deck()); _stack[0] = _deck; } It seems that the assignment of _deck to _stack[0] results in a copy of _deck being made. How can I get them to point to the same thing?
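
    For what it's worth, the converting assignment should already do this: copying a shared_ptr<Deck> into a shared_ptr<CardStack> copies the smart pointer, not the pointee, so both end up sharing ownership of the same object. A small sketch that checks this:

      #include <boost/shared_ptr.hpp>
      #include <cassert>

      struct CardStack { virtual ~CardStack() {} };
      struct Deck : CardStack {};

      int main() {
          boost::shared_ptr<Deck> deck(new Deck);
          boost::shared_ptr<CardStack> stack = deck;  // derived-to-base copy
          assert(stack.get() == deck.get());          // same underlying Deck
          assert(deck.use_count() == 2);              // shared ownership
          return 0;
      }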

    Read the article

  • shared_ptr requires complete type; cannot use it with lua_State*

    - by topright
    Hello! I'm writing a C++/OOP wrapper for Lua. My code is: class LuaState { boost::shared_ptr<lua_State> L; LuaState(): L( luaL_newstate(), LuaState::CustomDeleter ) { } } The problem is that lua_State is an incomplete type and the shared_ptr constructor requires a complete type. And I need safe pointer sharing. (Funny thing: the Boost docs say most functions do not require a complete type, but the constructor does, so there is no way of using it. http://www.boost.org/doc/libs/1_42_0/libs/smart_ptr/smart_ptr.htm) How can I solve this? Thank you.
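
    The usual workaround is to pass lua_close as the custom deleter: the constructor taking a deleter only needs d(p) to be a valid call, which lua_close(lua_State*) is, even while lua_State stays incomplete. A sketch:

      extern "C" {
      #include <lua.h>
      #include <lauxlib.h>
      }
      #include <boost/shared_ptr.hpp>

      class LuaState {
          boost::shared_ptr<lua_State> L;
      public:
          // lua_close runs when the last copy goes away; shared_ptr never
          // tries to 'delete' the incomplete lua_State itself.
          LuaState() : L( luaL_newstate(), lua_close ) {}
          lua_State* get() const { return L.get(); }
      };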

    Read the article

  • Is there an easy way to make `boost::ptr_vector` more debugger friendly in Visual Studio?

    - by Billy ONeal
    I'm considering using boost::ptr_container as a result of the responses from this question. My biggest problem with the library is that I cannot view the contents of the collection in the debugger, because the MSVC debugger doesn't recognize it, and therefore I cannot see the contents of the containers. (All the data gets stored as void * internally.) I've heard MSVC has a feature called "debugger visualizers" which would allow the user to make the debugger smarter about these kinds of things, but I've never written anything like this, and I'm not hugely familiar with such things. For example, compare the behavior of boost::shared_ptr with MSVC's own std::tr1::shared_ptr. In the debugger (i.e. in the Watch window), the boost version shows up as a big mess of internal variables used for implementing the shared pointer, but the MSVC version shows up as a plain pointer to the object (and the shared_ptr's innards are hidden). How can I get started either using or implementing such a thing?
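
    For MSVC 2008 these visualizers live in autoexp.dat (the [Visualizer] section). A rough sketch of the kind of entry involved -- note that the container internals named here ($c.c_ for ptr_vector's underlying std::vector<void*>, _Myfirst/_Mylast for MSVC's vector layout) are assumptions about undocumented internals and would need checking against your actual headers:

      boost::ptr_vector<*,*,*>{
          children(
              #array(
                  expr: (($T1*)($c.c_._Myfirst[$i])),
                  size: $c.c_._Mylast - $c.c_._Myfirst
              )
          )
      }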

    Read the article

  • Preparing for the next C++ standard

    - by Neil Butterworth
    The spate of questions regarding BOOST_FOREACH prompts me to ask users of the Boost library what (if anything) they are doing to prepare their code for portability to the proposed new C++ standard (aka C++0x). For example, do you write code like this if you use shared_ptr: #ifdef CPPOX #include <memory> #else #include "boost/shared_ptr.hpp" #endif There is also the namespace issue - in the future, shared_ptr will be part of the std namespace - how do you deal with that? I'm interested in these questions because I've decided to bite the bullet and start learning boost seriously, and I'd like to use best practices in my code. Not exactly a flood of answers - does this mean it's a non-issue? Anyway, thanks to those who replied; I'm accepting jalf's answer because I like being advised to do nothing!
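
    One low-effort pattern for the namespace issue (a sketch; the header, macro, and alias names are invented here) is to hide the implementation namespace behind an alias in a single header, so only one file changes when you flip to the standard library:

      // compat.hpp -- single switch point for shared_ptr's home namespace
      #ifdef HAVE_CPP0X
        #include <memory>
        namespace compat = std;
      #else
        #include "boost/shared_ptr.hpp"
        namespace compat = boost;
      #endif

      // usage everywhere else:
      // compat::shared_ptr<int> p(new int(42));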

    Read the article

  • Released object crashes app

    - by John Smith
    I am using Objective-C++ (+Boost) for iPhone development. I am in a rather tight loop and need to allocate and release a certain object. The code is something like this: for (int i=0;i<100;i++) { opt = [[FObj alloc] init]; //do stuff with opt [opt release]; } The FObj object is something like @interface FObj MyCPPObj * cppobj; @end In the implementation of FObj there is a dealloc method: -(void) dealloc { delete cppobj; //previously allocated with 'new' [super dealloc]; } I am afraid that if I don't release then the 'MyCPPObj's will just pile up. But releasing makes the app crash after the first loop. What am I doing wrong? Or perhaps should I make cppobj a boost::shared_ptr? (Do boost shared pointers automatically release their objects when an Objective-C++ object is deallocated?)

    Read the article

  • Get result type of function

    - by Robert
    I want to specialize a template function declared as: template<typename Type> Type read(std::istream& is); I then have a lot of static implementations, such as: static int read_integer(std::istream& is); and so on. Now I'd like to write a macro so that specialization of read is as simple as: SPECIALIZE_READ(read_integer) So I figured I'd go the boost::function_traits way and declare SPECIALIZE_READ as: #define SPECIALIZE_READ(read_function) \ template<> boost::function_traits<read_function>::result_type read(std::istream& is) { \ return read_function(is); \ } but the VC++ (2008) compiler complains with: 'boost::function_traits' : 'read_integer' is not a valid template type argument for parameter 'Function' Ideas?
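
    One way to make the macro compile (a sketch; the *_sig typedef and macro shape are invented here) is to hand boost::function_traits a function *type* rather than a function *name*, since its Function parameter must be a type like int(std::istream&). The template argument of the specialization also has to be spelled out, because it cannot be deduced from the return type alone:

      #include <istream>
      #include <boost/type_traits/function_traits.hpp>

      template<typename Type>
      Type read(std::istream& is);

      static int read_integer(std::istream& is) { int v; is >> v; return v; }

      #define SPECIALIZE_READ(sig, read_function)                              \
          template<>                                                           \
          boost::function_traits<sig>::result_type                             \
          read< boost::function_traits<sig>::result_type >(std::istream& is) { \
              return read_function(is);                                        \
          }

      typedef int read_integer_sig(std::istream&);  // the function *type*
      SPECIALIZE_READ(read_integer_sig, read_integer)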

    Read the article

  • Learning low latency C++ and Java?

    - by user997112
    I'm currently in a role where I don't get to write any C++ or Java. However, the role is good because it provides me with exposure to the business side (I'm interested in finance). Eventually I would like to get into high-frequency trading infrastructure. Therefore, outside of work hours I'd like to maximise the knowledge I can gain about high-performance Java and C++. I already have the Java Performance Tuning book, which is ok but not impressive. Can people recommend any more latency blogs/books/websites for learning about making C++/C/Java or even Unix very fast? Or perhaps making the network parts of the OS (if re-writing Unix components) faster? EDIT: Or perhaps we could make this THE thread for advice on writing fast code.

    Read the article

  • Unix as opposed to Windows (Java and C++)

    - by user997112
    Firstly I should explain the background. I am interested in high-frequency trading programming roles. After looking at many job specs it is very clear that there is a big demand for programmers who have programmed Java and C++ on Unix as opposed to Windows. My question is: what differences would a high-frequency programmer come across? It cannot be something in the language itself, because the syntax does not differ across operating systems. Therefore I thought it must be something the programming language has to interface with: resources, etc.? Could anyone please help me out, as I am trying to improve my C++/Java on Unix in order to aim for this type of career? P.S. I'm guessing part of this answer lies with the socket infrastructure on Unix?

    Read the article

  • Help with infrequent segmentation fault in accessing boost::unordered_multimap or struct

    - by Sarah
    I'm having trouble debugging a segmentation fault. I'd appreciate tips on how to go about narrowing in on the problem. The error appears when an iterator tries to access an element of a struct Infection, defined as: struct Infection { public: explicit Infection( double it, double rt ) : infT( it ), recT( rt ) {} double infT; // infection start time double recT; // scheduled recovery time }; These structs are kept in a special structure, InfectionMap: typedef boost::unordered_multimap< int, Infection > InfectionMap; Every member of class Host has an InfectionMap carriage. Recovery times and associated host identifiers are kept in a priority queue. When a scheduled recovery event arises in the simulation for a particular strain s in a particular host, the program searches through carriage of that host to find the Infection whose recT matches the recovery time (double recoverTime). (For reasons that aren't worth going into, it's not as expedient for me to use recT as the key to InfectionMap; the strain s is more useful, and coinfections with the same strain are possible.) assert( carriage.size() > 0 ); pair<InfectionMap::iterator,InfectionMap::iterator> ret = carriage.equal_range( s ); InfectionMap::iterator it; for ( it = ret.first; it != ret.second; it++ ) { if ( ((*it).second).recT == recoverTime ) { // produces seg fault carriage.erase( it ); } } I get a "Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address..." on the line specified above. The recoverTime is fine, and the assert(...) in the code is not tripped. As I said, this seg fault appears 'randomly' after thousands of successful recovery events. How would you go about figuring out what's going on? I'd love ideas about what could be wrong and how I can further investigate the problem.

    Update: I added a new assert and a check just inside the for loop: assert( carriage.size() > 0 ); assert( carriage.count( s ) > 0 ); pair<InfectionMap::iterator,InfectionMap::iterator> ret = carriage.equal_range( s ); InfectionMap::iterator it; cout << "carriage.count(" << s << ")=" << carriage.count(s) << endl; for ( it = ret.first; it != ret.second; it++ ) { cout << "(*it).first=" << (*it).first << endl; // error here if ( ((*it).second).recT == recoverTime ) { carriage.erase( it ); } } The EXC_BAD_ACCESS error now appears at the (*it).first call, again after many thousands of successful recoveries. Can anyone give me tips on how to figure out how this problem arises? I'm trying to use gdb. Frame 0 from the backtrace reads "#0 0x0000000100001d50 in Host::recover (this=0x100530d80, s=0, recoverTime=635.91148029170529) at Host.cpp:317" I'm not sure what useful information I can extract here.

    Update 2: I added a break; after the carriage.erase(it). This works, but I have no idea why (e.g., why it would remove the seg fault at (*it).first).
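
    The pattern that usually bites here (and would explain why break; hides it): erase(it) invalidates it, so the loop's it++ then advances an invalidated iterator, which only crashes when the freed slot happens to get reused. A sketch of the erase-safe loop, assuming Boost.Unordered's erase returns the next iterator:

      #include <utility>
      #include <boost/unordered_map.hpp>

      struct Infection {
          explicit Infection( double it, double rt ) : infT( it ), recT( rt ) {}
          double infT;
          double recT;
      };
      typedef boost::unordered_multimap< int, Infection > InfectionMap;

      void eraseRecovered( InfectionMap& carriage, int s, double recoverTime ) {
          std::pair<InfectionMap::iterator, InfectionMap::iterator> ret =
              carriage.equal_range( s );
          for ( InfectionMap::iterator it = ret.first; it != ret.second; ) {
              if ( it->second.recT == recoverTime )
                  it = carriage.erase( it );  // next valid iterator; never reuse 'it'
              else
                  ++it;
          }
      }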

    Read the article

  • How to solve "The ChannelDispatcher is unable to open its IChannelListener" error?

    - by kyrisu
    Hi, I'm trying to communicate between a WCF service hosted in a Windows Service and my service GUI. The problem is that when I try to execute an OperationContract method I get "The ChannelDispatcher at 'net.tcp://localhost:7771/MyService' with contract(s) '"IContract"' is unable to open its IChannelListener." My app.config looks like this: <configuration> <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTcpBinding"> <security> <transport protectionLevel="EncryptAndSign" /> </security> </binding> </netTcpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior name="MyServiceBehavior"> <serviceMetadata httpGetEnabled="true" httpGetUrl="http://localhost:7772/MyService" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <services> <service behaviorConfiguration="MyServiceBehavior" name="MyService.Service"> <endpoint address="net.tcp://localhost:7771/MyService" binding="netTcpBinding" bindingConfiguration="netTcpBinding" name="netTcp" contract="MyService.IContract" /> </service> </services> </system.serviceModel> </configuration> Port 7771 is listening (checked using netstat) and svcutil is able to generate configs for me. Any suggestions would be appreciated. Stack trace from the exception: Server stack trace: at System.ServiceModel.Channels.ServiceChannel.ThrowIfFaultUnderstood(Message reply, MessageFault fault, String action, MessageVersion version, FaultConverter faultConverter) at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs) at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation) at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message) There's one inner exception (not under Exception.InnerException but under Exception.Detail.InnerException - the ToString() method doesn't show it): A registration already exists for URI 'net.tcp://localhost:7771/MyService'. But my service specifies this URI only in the app.config file, nowhere else. In the entire solution this URI appears once in the server and once in the client.

    Read the article

  • Extracting pair member in lambda expressions and template typedef idiom

    - by Nazgob
    Hi, I have some complex types here so I decided to use a nifty trick to get typedefs for templated types. Then I have a class some_container that has a container as a member. The container is a vector of pairs composed of an element and a vector. I want to use the std::find_if algorithm with a lambda expression to find an element that has a certain value. To get the value I have to call first on the pair and then get value from the element. Below my std::find_if there is a normal loop that does the trick. My lambda fails to compile. How do I access the value inside the element, which is inside the pair? I use g++ 4.4+ and VS 2010 and I want to stick to boost lambda for now. #include <vector> #include <algorithm> #include <boost\lambda\lambda.hpp> #include <boost\lambda\bind.hpp> template<typename T> class element { public: T value; }; template<typename T> class element_vector_pair // idiom to have templated typedef { public: typedef std::pair<element<T>, std::vector<T> > type; }; template<typename T> class vector_containter // idiom to have templated typedef { public: typedef std::vector<typename element_vector_pair<T>::type > type; }; template<typename T> bool operator==(const typename element_vector_pair<T>::type & lhs, const typename element_vector_pair<T>::type & rhs) { return lhs.first.value == rhs.first.value; } template<typename T> class some_container { public: element<T> get_element(const T& value) const { std::find_if(container.begin(), container.end(), bind(&typename vector_containter<T>::type::value_type::first::value, boost::lambda::_1) == value); /*for(size_t i = 0; i < container.size(); ++i) { if(container.at(i).first.value == value) { return container.at(i); } }*/ return element<T>(); //whatever } protected: typename vector_containter<T>::type container; }; int main() { some_container<int> s; s.get_element(5); return 0; }
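
    The &typename ...::first::value expression can't work because first is a data member, not a type; a pointer-to-member has to name the class and then the member. A sketch of a drop-in replacement for the find_if call inside get_element, using two nested binds (pair's first, then element's value), under the assumption that Boost.Lambda's bind handles member-data pointers here:

      typedef typename element_vector_pair<T>::type pair_t;
      typename vector_containter<T>::type::const_iterator pos =
          std::find_if(container.begin(), container.end(),
              boost::lambda::bind(&element<T>::value,
                  boost::lambda::bind(&pair_t::first, boost::lambda::_1)) == value);
      if (pos != container.end())
          return pos->first;   // the matching element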

    Read the article

  • Dataflow Programming - Patterns and Frameworks

    - by Styrac
    I just came across the proposed Boost::Dataflow library. It seems like an interesting approach and I was wondering if there are other such alternative frameworks for C++, and if there are any related design patterns. I have not ruled out Boost::Dataflow; I am just looking into any available alternatives so I can understand the domain and my options better (or roll my own if necessary).

    Read the article

  • Communication between EJB3 Instances (JEE inter-bean communication) possible?

    - by Hank
    I'm designing a part of a JEE6 application, consisting of EJB3 beans. Part of the requirements are multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter). Parameters may get changed over time. Currently I'm thinking of the following: SearchController (Stateless Session Bean) formulates a set of search parameters, sends it off to a SearchListener via JMS SearchListener (Message Driven Bean) receives search parameters, instantiates a SearchWorker with the parameters SearchWorker (SLSB) hunts repeatedly through the database; when it finds something, the result is sent off via JMS, and the search continues; when the given 'end-time' has been reached, it ends What I'm wondering now: Is there a problem with EJB3 instances running for days? (Other than that I need to be able to deal with container restarts...) How do I know how many and which EJB instances of SearchWorker are currently running? Is it possible to communicate with them individually (similar to sending a System V signal to a unix process), e.g. to send new parameters, to end an instance, etc.

    Read the article

  • How does a portable Thread Specific Storage Mechanism's Naming Scheme Generate Thread Relative Uniqu

    - by Hassan Syed
    A portable thread specific storage reference/identity mechanism, of which boost/thread/tss.hpp is an instance, needs a way to generate a unique key for itself. This key is unique in the scope of a thread, and is subsequently used to retrieve the object it references. This mechanism is used in code written in a thread-neutral manner. Since boost is a portable example of this concept, how specifically does such a mechanism work?
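
    A sketch of the usual strategy on POSIX builds (an assumed illustration, not Boost's actual code): each TSS slot asks the OS for a process-wide unique key at construction, and every thread's get/set with that key then hits that thread's own private value.

      #include <pthread.h>

      class tss_slot {
      public:
          tss_slot()  { pthread_key_create( &key_, 0 ); } // unique key per slot
          ~tss_slot() { pthread_key_delete( key_ ); }
          void  set( void* p ) { pthread_setspecific( key_, p ); }
          void* get() const    { return pthread_getspecific( key_ ); }
      private:
          pthread_key_t key_;  // same key, different value in every thread
      };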

    Read the article

  • Apple's new section 3.3.1

    - by ML
    With Apple making changes to section 3.3.1 of the iPhone dev agreement, can one still use libraries like Boost in their apps? I want to use Boost in my iPad app...

    Read the article

  • Across-process marshalling problem with an array of points

    - by ElMagn
    Hi All, We have what we think is a marshalling problem with a renderer object when called across process boundaries. The renderer is an ATL COM server with a COM object that implements the IPoints interface defined below: typedef [uuid(B0E01719-005A-427c-B9DD-B42A18E969AE)] struct Point { double X; double Y; } Point; [ object, uuid(3BFECFE3-B4FB-4f14-8257-6E065D02E3B3), helpstring("IPoints Interface"), dual, ] interface IPoints : IDispatch { HRESULT DrawPolyLine([in] long hDC, [in] short count, [in, size_is(count)] Point * points ); // many more like DrawLine } The count parameter represents the number of points and the points parameter represents an array of the actual points. We have two processes running, a graphical display process (GDP) and a tabular (grid) display process (TDP). A factory in the GDP, written in C#, creates the renderer and the clients of the renderer in the GDP. When the clients call into the renderer, everything displays correctly. The renderer is created at start up, BTW. There is another factory in the TDP, written in VB6, that calls into the factory in the GDP to create the clients. When the clients call into the renderer, only the first point in the array is marshaled correctly; all the other points are garbage. It seems that the rendering works only when the client creation is started from the same process as the renderer. Now, I am not sure what the solution to this problem is. It seems that if we can guarantee that the clients are always created from a thread in the same GDP process as the renderer, then the points are marshaled correctly. We tried using a background thread from the Thread Pool in C# and it indeed worked. The problem is that Windows Forms created from the clients stopped working, because accessing the form's controls from a thread other than the thread that created the control is not allowed. We might change the calls to access the forms, but we have quite a few of them and are trying to look into a different solution that might involve making changes to the renderer. The other problem is that the renderer is legacy code and we can't just change the interface. I am wondering what we can do to the renderer's interface that would help with marshalling across-process calls. Any ideas would be greatly appreciated. Regards, ElMagn

    Read the article

  • Single windows service to provide access to cached data?

    - by Matthias
    I need a solution where I have a single Windows service providing access to cached data for various consumers: an MVC web application, a .NET assembly (COM interop) used within a classic ASP page, other Windows services, and a Windows Forms application. So the data must be accessible from various processes. The data being cached is read-only. For now, all processes are located on the same machine. The environment is .NET Framework 3.5 and C#. My question is: how can multiple appdomains/processes retrieve cached data from a single Windows service?

    Read the article

  • can a python script know that another instance of the same script is running... and then talk to it?

    - by Justin Grant
    I'd like to prevent multiple instances of the same long-running python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way? Specifically, I'd like to enable the following behavior: "foo.py" is launched from the command line, and it will stay running for a long time -- days or weeks until the machine is rebooted or the parent process kills it. Every few minutes the same script is launched again, but with different command-line parameters. When launched, the script should see if any other instances are running. If other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit. Instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform. So I'm looking for two things: how can a python program know another instance of itself is running, and then how can one python command-line program communicate with another? Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls. Although if I need to have a Windows codepath and an *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible. I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work) but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible: if a persistent-file-based approach is the only way to do it, I'm open to that option. More details: I'm trying to do this because our servers are using a monitoring tool which supports running python scripts to collect monitoring data (e.g. results of a database query or web service call) which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query). So we've chosen to keep them running in an infinite loop until the parent process kills them. This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread to one process with 100 threads, each executing the work that, previously, one script was doing. But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command line params) over to the "old" script.

    Read the article

  • python object to native c++ pointer

    - by Lodle
    I'm toying around with the idea of using Python as an embedded scripting language for a project I'm working on and have got most things working. However I can't seem to be able to convert a Python extended object back into a native C++ pointer. So this is my class: class CGEGameModeBase { public: virtual void FunctionCall()=0; virtual const char* StringReturn()=0; }; class CGEPYGameMode : public CGEGameModeBase, public boost::python::wrapper<CGEPYGameMode> { public: virtual void FunctionCall() { if (override f = this->get_override("FunctionCall")) f(); } virtual const char* StringReturn() { if (override f = this->get_override("StringReturn")) return f(); return "FAILED TO CALL"; } }; Boost wrapping: BOOST_PYTHON_MODULE(GEGameMode) { class_<CGEGameModeBase, boost::noncopyable>("CGEGameModeBase", no_init); class_<CGEPYGameMode, bases<CGEGameModeBase> >("CGEPYGameMode", no_init) .def("FunctionCall", &CGEPYGameMode::FunctionCall) .def("StringReturn", &CGEPYGameMode::StringReturn); } and the python code: import GEGameMode def Ident(): return "Alpha" def NewGamePlay(): return "NewAlpha" def NewAlpha(): import GEGameMode import GEUtil class Alpha(GEGameMode.CGEPYGameMode): def __init__(self): print "Made new Alpha!" def FunctionCall(self): GEUtil.Msg("This is function test Alpha!") def StringReturn(self): return "This is return test Alpha!" return Alpha() Now I can call the first two functions fine by doing this: const char* ident = extract< const char* >( GetLocalDict()["Ident"]() ); const char* newgameplay = extract< const char* >( GetLocalDict()["NewGamePlay"]() ); printf("Loading Script: %s\n", ident); CGEPYGameMode* m_pGameMode = extract< CGEPYGameMode* >( GetLocalDict()[newgameplay]() ); However when I try to convert the Alpha class back to its base class (last line above) I get a Boost error: TypeError: No registered converter was able to extract a C++ pointer to type class CGEPYGameMode from this Python object of type Alpha I have done a lot of searching on the net but can't work out how to convert the Alpha object into its base class pointer. I could leave it as an object, but I'd rather have it as a pointer so some non-Python-aware code can use it. Any ideas?
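
    One plausible culprit (an assumption, not verified against your build): exposing CGEPYGameMode with no_init means a Python subclass never constructs the underlying C++ object, so there is nothing for extract<CGEPYGameMode*> to find. A sketch of the module with a usable constructor; the Python-side __init__ would also need to call through to the base:

      BOOST_PYTHON_MODULE(GEGameMode)
      {
          using namespace boost::python;
          class_<CGEGameModeBase, boost::noncopyable>("CGEGameModeBase", no_init);
          class_<CGEPYGameMode, bases<CGEGameModeBase>, boost::noncopyable>("CGEPYGameMode")
              .def("FunctionCall", &CGEPYGameMode::FunctionCall)
              .def("StringReturn", &CGEPYGameMode::StringReturn);
          // Python side (sketch): Alpha.__init__ should call
          // GEGameMode.CGEPYGameMode.__init__(self) before anything else,
          // so the wrapper's C++ instance actually exists.
      }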

    Read the article

  • How to pass function reference into arguments

    - by Ockonal
    Hi, I'm using boost::function for making function references: typedef boost::function<void (SomeClass &handle)> Ref; someFunc(Ref &pointer) {/*...*/} void Foo(SomeClass &handle) {/*...*/} What is the best way to pass Foo into someFunc? I tried something like: someFunc(Ref(Foo));
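
    One likely reason that call fails (an assumption about the compiler error): someFunc takes Ref by non-const reference, and the temporary Ref(Foo) can't bind to that. Taking the parameter by const reference (or by value) lets a plain function name convert implicitly. A sketch, with an extra SomeClass& parameter added purely for demonstration:

      #include <boost/function.hpp>
      #include <boost/bind.hpp>

      class SomeClass {};
      typedef boost::function<void (SomeClass&)> Ref;

      // const Ref& accepts temporaries and named functions alike
      void someFunc(const Ref& ref, SomeClass& obj) { ref(obj); }

      void Foo(SomeClass& handle) { /* ... */ }

      int main() {
          SomeClass obj;
          someFunc(&Foo, obj);                   // implicit conversion to Ref
          someFunc(boost::bind(&Foo, _1), obj);  // or via boost::bind
          return 0;
      }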

    Read the article

  • Multiple python scripts sending messages to a single central script

    - by Ipsquiggle
    I have a number of scripts written in Python 2.6 that can be run arbitrarily. I would like to have a single central script that collects the output and displays it in a single log. Ideally it would satisfy these requirements: Every script sends its messages to the same "receiver" for display. If the receiver is not running when the first script tries to send a message, it is started. The receiver can also be launched and ended manually. (Though if ended, it will restart if another script tries to send a message.) The scripts can be run in any order, even simultaneously. Runs on Windows. Multiplatform is better, but at least it needs to work on Windows. I've come across some hints: os.pipe(), multiprocess, occupying a port, a mutex. From those pieces, I think I could cobble something together. Just wondering if there is an obviously 'right' way of doing this, or if I could learn from anyone's mistakes.

    Read the article
