Search Results

Search found 1803 results on 73 pages for 'boost dataflow'.

Page 31/73

  • How to pass a link to a class function and call it?

    - by Kabumbus
    So I have a class like: class mySafeData { public: void Set( int i ) { myMutex.lock(); myData = i; myMutex.unlock(); } void Get( int& i ) { myMutex.lock(); i = myData; myMutex.unlock(); } private: int myData; boost::mutex myMutex; }; An instance of it is running; let's call that instance A. I want to create a new class that would take, as a start-up argument, some kind of link to the getter of A, and would be able to save that link so it can call the getter from inside its private methods when needed. How do I do such a thing?
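
    A minimal sketch of one possible approach (the Reader class and its names are made up for illustration): store the "link" as a boost::function bound to the instance A with boost::bind, and call it later from the new class.

        #include <iostream>
        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <boost/thread/mutex.hpp>

        // mySafeData repeated from the question so the sketch is self-contained.
        class mySafeData {
        public:
            void Set( int i ) { myMutex.lock(); myData = i; myMutex.unlock(); }
            void Get( int& i ) { myMutex.lock(); i = myData; myMutex.unlock(); }
        private:
            int myData;
            boost::mutex myMutex;
        };

        // "Reader" is a hypothetical consumer: it stores the getter as a callable
        // and can invoke it from its own (private) methods whenever needed.
        class Reader {
        public:
            explicit Reader(boost::function<void(int&)> getter) : myGetter(getter) {}
            int read() { int value = 0; myGetter(value); return value; }  // calls A.Get(value)
        private:
            boost::function<void(int&)> myGetter;
        };

        int main() {
            mySafeData A;
            A.Set(42);
            Reader r(boost::bind(&mySafeData::Get, &A, _1));  // _1 forwards Reader's int&
            std::cout << r.read() << std::endl;               // prints 42
        }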

    Read the article

  • How to program a connection pool?

    - by the_drow
    Is there a known algorithm for implementing a connection pool? If not, what approaches are used in practice and what are their trade-offs? What design patterns are common when designing and programming a connection pool? Are there any code examples that implement a connection pool using boost.asio? Is it a good idea to use a connection pool for persisting connections (not HTTP)? How is threading related to connection pooling? When do you need a new thread?
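
    A minimal sketch of the usual idea (not a complete answer): keep idle connections in a container behind a mutex, hand one out on acquire() and put it back on release(). The ConnectionPool name and grow-on-demand policy are assumptions for illustration.

        #include <deque>
        #include <boost/shared_ptr.hpp>
        #include <boost/thread/mutex.hpp>

        // Connection is a stand-in for whatever connection type the pool manages.
        template <typename Connection>
        class ConnectionPool {
        public:
            boost::shared_ptr<Connection> acquire() {
                boost::mutex::scoped_lock lock(mutex_);
                if (idle_.empty())
                    return boost::shared_ptr<Connection>(new Connection()); // grow on demand
                boost::shared_ptr<Connection> c = idle_.front();
                idle_.pop_front();
                return c;
            }
            void release(boost::shared_ptr<Connection> c) {
                boost::mutex::scoped_lock lock(mutex_);
                idle_.push_back(c); // keep the connection open for reuse
            }
        private:
            std::deque<boost::shared_ptr<Connection> > idle_;
            boost::mutex mutex_;
        };

        // Usage idea: ConnectionPool<MyTcpConnection> pool; c = pool.acquire(); ... pool.release(c);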

    Read the article

  • Force an object to be allocated on the heap

    - by Warren Seine
    A C++ class I'm writing uses shared_from_this() to return a valid boost::shared_ptr<>. Besides, I don't want to manage memory for this kind of object. At the moment, I'm not restricting the way the user allocates the object, which causes an error if shared_from_this() is called on a stack-allocated object. I'd like to force the object to be allocated with new and managed by a smart pointer, no matter how the user declares it. I thought it could be done through a proxy or an overloaded new operator, but I can't find a proper way of doing that. Is there a common design pattern for such usage? If it's not possible, how can I test it at compile time?
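
    One commonly used pattern for this, sketched below (Widget is a made-up name): make the constructor private so stack allocation fails at compile time, and expose a static create() that always returns a shared_ptr.

        #include <boost/enable_shared_from_this.hpp>
        #include <boost/shared_ptr.hpp>

        class Widget : public boost::enable_shared_from_this<Widget> {
        public:
            static boost::shared_ptr<Widget> create() {
                return boost::shared_ptr<Widget>(new Widget()); // heap allocation enforced here
            }
        private:
            Widget() {}  // callers cannot write "Widget w;"
        };

        // boost::shared_ptr<Widget> w = Widget::create();  // OK, shared_from_this() is safe
        // Widget w;                                        // compile-time error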

    Read the article

  • How to get the next prefix in C++?

    - by Vicente Botet Escriba
    Given a sequence (for example the string "Xa"), I want to get the next prefix in lexicographic order (i.e. "Xb"). As I don't want to reinvent the wheel, I'm wondering if there is any function in the C++ STL or Boost that can help to define this generic function easily? If not, do you think such a function could be useful? Notes: The next of "aZ" should be "b". Even if the examples are strings, the function should work for any Sequence. The lexicographic order should be a template parameter of the function.
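
    A hedged sketch of the core step for plain std::string over a lowercase alphabet (the generic, templated version over any Sequence and ordering is left out): drop trailing maximum characters, then bump the last remaining one.

        #include <string>

        std::string next_prefix(std::string s) {
            while (!s.empty() && s[s.size() - 1] == 'z')
                s.erase(s.size() - 1);       // overflow case, like "aZ" -> drop the 'Z'
            if (!s.empty())
                ++s[s.size() - 1];           // bump the last character: "Xa" -> "Xb"
            return s;                        // empty result means no next prefix exists
        }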

    Read the article

  • TCP 30 small packets per second pollutes connection with server

    - by Denis Ermolin
    I'm testing the connection between a Flash client and a cloud server (boost::asio on the server side) over TCP. My connection to the server is already really poor - 120 ms ping on average. I found that when I start to send packets of 2 bytes (without the TCP header) at a rate of 30 packets/s, the ping grows to 170-200 on average. I think that's really bad, and that my poor connection and cloud provider are the reason for this high ping even without any load. What do you think? (I tested my software and it can handle about 50k packets/s, so the software is not the problem.)
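
    Not a diagnosis, but with many small frequent packets one thing commonly checked is Nagle's algorithm, which delays small writes; a sketch of disabling it per socket with boost::asio (assuming a connected tcp::socket) is below.

        #include <boost/asio.hpp>

        void disable_nagle(boost::asio::ip::tcp::socket& sock) {
            boost::asio::ip::tcp::no_delay option(true);  // turn off Nagle batching
            sock.set_option(option);
        }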

    Read the article

  • How to know if the argument that is passed to the function is a class, union or enum in C++?

    - by Narek
    I want to define an operator<< for all enums, to cout the value and print that it is an enum, like this: code: enum AnyEnum{A,B,C}; AnyEnum enm = A; cout << enm << endl; output: This is an enum which has a value equal to 0 I know a way of doing this with the Boost library by using the is_enum struct, but I don't understand how it works. So that's why, in general, I am interested in how to identify whether a variable is of class type, union type or enum type (at compile time).
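
    A hedged sketch of the usual SFINAE idiom built on is_enum (on modern compilers is_enum is, as far as I know, backed by a compiler intrinsic; older implementations derive it by eliminating the other type categories). The overload only exists when T really is an enum.

        #include <iostream>
        #include <boost/type_traits/is_enum.hpp>
        #include <boost/utility/enable_if.hpp>

        enum AnyEnum { A, B, C };

        // enable_if removes this overload from the overload set unless T is an enum.
        template <typename T>
        typename boost::enable_if<boost::is_enum<T>, std::ostream&>::type
        operator<<(std::ostream& os, T value) {
            return os << "This is an enum which has a value equal to "
                      << static_cast<long>(value);   // cast avoids recursing into this overload
        }

        int main() {
            AnyEnum enm = A;
            std::cout << enm << std::endl;  // prints: This is an enum which has a value equal to 0
        }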

    Read the article

  • Portable Socket programming in C/C++ possible?

    - by questions
    I am thinking of creating a multi-platform, portable C++ server-client application. Is it even possible using only the standard libraries? If not, what other libraries are there? Are there any improvements in this direction in C++11? For threads, for example, we now have std::thread. To make it clearer: I want something like boost::thread, which provides portable multi-platform multithreading, but for networking. And why doesn't C++ have standard libraries for such basic things as networking?
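
    For context, a minimal sketch of what the portable (Windows/Linux/macOS) option looks like today with boost::asio, assuming a reasonably recent Boost; the host and port are made up for illustration.

        #include <iostream>
        #include <string>
        #include <boost/asio.hpp>

        int main() {
            using boost::asio::ip::tcp;
            boost::asio::io_service io;
            tcp::resolver resolver(io);
            tcp::socket socket(io);
            boost::asio::connect(socket, resolver.resolve(tcp::resolver::query("example.com", "80")));
            std::string request = "GET / HTTP/1.0\r\n\r\n";
            boost::asio::write(socket, boost::asio::buffer(request));
            char reply[512];
            std::size_t n = socket.read_some(boost::asio::buffer(reply));
            std::cout.write(reply, n);  // print whatever came back first
        }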

    Read the article

  • How to copy a structure with pointers to data inside (so to copy pointers and data they point to)?

    - by Kabumbus
    So I have a structure like struct GetResultStructure { int length; char* ptr; }; I need a way to make a full copy of it, meaning the copy should be a structure whose new ptr points to a copy of the data the original structure held. Is it anyhow possible? I mean, any structure of mine that contains pointers will also have fields with their lengths; I need a function that would copy my structure, copying all pointers and the data they point to, given the array of lengths... Is there any cool Boost function for it? Or any way to create such a function?
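
    A minimal sketch of the standard C++ answer: give the struct a deep-copying copy constructor, assignment operator and destructor, so copying "just works". (Using std::vector<char> or std::string instead of a raw char* gives the same deep-copy behaviour for free.)

        #include <cstring>

        struct GetResultStructure {
            int   length;
            char* ptr;

            explicit GetResultStructure(int len) : length(len), ptr(new char[len]) {}
            GetResultStructure(const GetResultStructure& other)      // deep copy: fresh buffer, same bytes
                : length(other.length), ptr(new char[other.length]) {
                std::memcpy(ptr, other.ptr, other.length);
            }
            GetResultStructure& operator=(const GetResultStructure& other) {
                if (this != &other) {
                    char* fresh = new char[other.length];
                    std::memcpy(fresh, other.ptr, other.length);
                    delete[] ptr;
                    ptr = fresh;
                    length = other.length;
                }
                return *this;
            }
            ~GetResultStructure() { delete[] ptr; }
        };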

    Read the article

  • Quickest way to compute the number of shared elements between two vectors

    - by shn
    Suppose I have two vectors of the same size: vector< pair<float, NodeDataID> > v1, v2; I want to compute how many elements of v1 and v2 have the same NodeDataID. For example, if v1 = {<3.7, 22>, <2.22, 64>, <1.9, 29>, <0.8, 7>} and v2 = {<1.66, 7>, <0.03, 9>, <5.65, 64>, <4.9, 11>}, then I want to return 2, because two elements of v1 and v2 share the same NodeDataIDs: 7 and 64. What is the quickest way to do that in C++? Just for information, note that the type NodeDataID is defined (I use Boost) as: typedef adjacency_list<setS, setS, undirectedS, NodeData, EdgeData> myGraph; typedef myGraph::vertex_descriptor NodeDataID; But that is not important, since we can compare two NodeDataIDs using operator == (that is, it is possible to do v1[i].second == v2[j].second).
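
    A hedged sketch of one reasonably fast approach: copy the IDs of v2 into a sorted vector and binary-search each ID of v1, giving O((n+m) log m) instead of the O(n*m) double loop. It assumes the IDs are also orderable with operator< (true for BGL vertex descriptors); with only operator== a hash set would be the alternative.

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        template <typename Pair>
        std::size_t count_shared_ids(const std::vector<Pair>& v1, const std::vector<Pair>& v2) {
            typedef typename Pair::second_type Id;
            std::vector<Id> ids;
            ids.reserve(v2.size());
            for (std::size_t i = 0; i < v2.size(); ++i) ids.push_back(v2[i].second);
            std::sort(ids.begin(), ids.end());                          // sort v2's IDs once
            std::size_t shared = 0;
            for (std::size_t i = 0; i < v1.size(); ++i)
                if (std::binary_search(ids.begin(), ids.end(), v1[i].second))
                    ++shared;                                           // ID present in both vectors
            return shared;
        }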

    Read the article

  • Core i5 Turboboost/C states Freezing computer

    - by Aaron Smith
    I'm not sure which is to blame, but I just had a heck of a time getting my computer to boot up and not freeze. It would run until it finished booting Windows, then everything would freeze. This kept happening until I turned off Turbo Boost and all the C-states on the processor. What could be causing this? Is the processor going bad?

    Read the article

  • Can I force the server to always use Turbo Boost?

    - by javapowered
    I'm using an HP DL360p Gen8 with 2 * Xeon E5-2640. I do not load the CPU to 100%; I load it to only ~10%, so I guess Turbo Boost is not activated. However, I'm using my server for trading, so I absolutely don't care about CPU load, but I always want to process my data ASAP. So I want the server to operate at the maximum 3 GHz. I.e. 90% of the time the CPU has nothing to process; 10% of the time it has data to process, but I need to process it ASAP - I need every single microsecond. So I want the server to always operate in maximum "turboboosted" mode. Is that possible?

    Read the article

  • cannot receive UDP broadcast packets

    - by user292792
    Hello, I have 2 boxes: - an embedded device (ARM OMAP with Linux) which I'll call "Omap". - a PC (can be either Windows or Linux). Scenario 1 Both boxes are in the same network (example: my office). The Omap gets its address from a DHCP server (ex: 192.168.10.110). The PC always has the same address (ex. 192.168.10.104). I can successfully exchange UDP broadcast packets on any port. Success. Scenario 2 The 2 boxes are in a network withOUT a DHCP server. The PC has a static IP address (example: 10.10.10.20). The Omap boots, looks for a DHCP server, doesn't find it, and is in what I call "bad IP address" state. Now... Broadcasting UDP packets from the Omap works: the PC can see them. The opposite doesn't work: UDP packets broadcast by the PC are not seen by the Omap. I am using Wireshark on another PC to verify that the packets are being sent. Failure. I tried to change the Omap IP address (with ifconfig)... no luck. What am I missing? To complete the picture, when the Omap is in Scenario 2, if I run udhcpc ... it can communicate with the DHCP server and get an IP address. I also see the packets with Wireshark. So this means that the DHCP client is able to broadcast UDP packets. (Yes, I tried to use DHCP ports 67/68 but it doesn't work.) I am using Boost C++ Asio UDP sockets. Specifically, I took the multicast examples and changed them to do broadcasting. Any help is appreciated. Thanks, Benedetto
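
    Not a diagnosis, but for comparison, a bare-bones boost::asio receiver that binds to the wildcard address (0.0.0.0), so broadcast datagrams are accepted regardless of the interface's own address. The port number is made up; SO_BROADCAST is only needed on the sending side.

        #include <iostream>
        #include <boost/asio.hpp>

        int main() {
            using boost::asio::ip::udp;
            boost::asio::io_service io;
            udp::socket sock(io, udp::endpoint(boost::asio::ip::address_v4::any(), 30001));
            char data[1500];
            udp::endpoint sender;
            std::size_t n = sock.receive_from(boost::asio::buffer(data), sender);  // blocks
            std::cout << "got " << n << " bytes from " << sender << std::endl;
        }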

    Read the article

  • Reinterpret a CGImageRef using PyObjC in Python

    - by Michael Rondinelli
    Hi, I'm doing something that's a little complicated to sum up in the title, so please bear with me. I'm writing a Python module that provides an interface to my C++ library, which provides some specialized image manipulation functionality. It would be most convenient to be able to access image buffers as CGImageRefs from Python, so they could be manipulated further using Quartz (using PyObjC, which works well). So I have a C++ function that provides a CGImageRef representation from my own image buffers, like this: CGImageRef CreateCGImageRefForImageBuffer(shared_ptr<ImageBuffer> buffer); I'm using Boost::Python to create my Python bridge. What is the easiest way for me to export this function so that I can use the CGImageRef from Python? Problems: The CGImageRef type can't be exported directly because it is a pointer to an undefined struct. So I could make a wrapper function that wraps it in a PyCObject or something to get it to send the pointer to Python. But then how do I "cast" this object to a CGImageRef from Python? Is there a better way to go about this?

    Read the article

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out over the following problem: I am following the example given in the boost.interprocess documentation to instantiate, in shared memory, a fixed-size ring buffer class that I wrote. The skeleton constructor for my class is: template<typename ItemType, class Allocator > SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ){ m_capacity = capacity; // Create the buffer nodes. m_start_ptr = this->allocator->allocate(); // allocate first buffer node BufferNode* ptr = m_start_ptr; for( int i = 0 ; i < this->capacity()-1; i++ ) { BufferNode* p = this->allocator->allocate(); // allocate a buffer node } } My first question: does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node at address m_start_ptr + n*sizeof(BufferNode) in my Read() method, would it work? If not, what's a better way to keep the nodes - creating a linked list? My test harness is the following: // Define an STL compatible allocator of ints that allocates from the managed_shared_memory. // This allocator will allow placing containers in the segment typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator; //Alias a vector that uses the previous STL-like allocator so that allocates //its values from the segment typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf; int main(int argc, char *argv[]) { shared_memory_object::remove("MySharedMemory"); //Create a new segment with given name and size managed_shared_memory segment(create_only, "MySharedMemory", 65536); //Initialize shared memory STL-compatible allocator const ShmemAllocator alloc_inst (segment.get_segment_manager()); //Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst); } This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?
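
    A hedged sketch of the usual shape of such a class (ShmBuffer is a simplified stand-in for SharedMemoryBuffer, not the poster's code): the container takes the allocator as a constructor argument and stores a copy of it, so segment.construct<...>(name)(capacity, alloc) matches an actual constructor, and one allocate(capacity) call gives a single contiguous block instead of per-node allocations.

        #include <boost/interprocess/allocators/allocator.hpp>
        #include <boost/interprocess/managed_shared_memory.hpp>

        using namespace boost::interprocess;

        template <typename T, typename Alloc>
        class ShmBuffer {
        public:
            typedef typename Alloc::template rebind<T>::other allocator_type;
            ShmBuffer(unsigned long capacity, const allocator_type& alloc)
                : m_alloc(alloc), m_capacity(capacity) {
                m_data = m_alloc.allocate(capacity);   // one contiguous block, not node-by-node
            }
            ~ShmBuffer() { m_alloc.deallocate(m_data, m_capacity); }
        private:
            allocator_type m_alloc;
            unsigned long  m_capacity;
            typename allocator_type::pointer m_data;   // offset_ptr, valid across processes
        };

        int main() {
            shared_memory_object::remove("MySharedMemory");
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);
            allocator<int, managed_shared_memory::segment_manager> alloc(segment.get_segment_manager());
            typedef ShmBuffer<int, allocator<int, managed_shared_memory::segment_manager> > MyBuf;
            MyBuf* buf = segment.construct<MyBuf>("MyBuffer")(100, alloc);
            (void)buf;
            segment.destroy<MyBuf>("MyBuffer");
            shared_memory_object::remove("MySharedMemory");
        }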

    Read the article

  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory, and the high number of (de)allocations, is a performance bottleneck. I need a mostly STL-compatible map and set class which can use a contiguous block of memory for its internal objects (or multiple blocks). It also needs to have a reserve function so that I can preallocate for expected sizes. Before I write my own I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere? Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know, STL memory pools are per-type, not per-instance. These global pools are not efficient with respect to memory locality in multi-cpu/core processing. Object pools don't work, as the types will be shared between instances but their pools should not be. A hash map may be an option in some cases.
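
    A short sketch of what seems like the closest fit in Boost: the Boost.Container flat_map/flat_set (available since Boost 1.48) keep their elements in one sorted contiguous vector and expose reserve(), matching the contiguity and preallocation requirements with a mostly drop-in STL interface.

        #include <boost/container/flat_map.hpp>
        #include <boost/container/flat_set.hpp>

        int main() {
            boost::container::flat_map<int, double> m;
            m.reserve(1024);            // one contiguous allocation up front
            m[42] = 3.14;

            boost::container::flat_set<int> s;
            s.reserve(1024);
            s.insert(7);
        }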

    Read the article

  • Parallel version of loop not faster than serial version

    - by Il-Bhima
    I'm writing a program in C++ to perform a simulation of a particular system. For each timestep, the biggest part of the execution is taken up by a single loop. Fortunately this is embarrassingly parallel, so I decided to use Boost Threads to parallelize it (I'm running on a 2-core machine). I would expect a speedup close to 2 times the serial version, since there is no locking. However I am finding that there is no speedup at all. I implemented the parallel version of the loop as follows: Wake up the two threads (they are blocked on a barrier). Each thread then performs the following: atomically fetch and increment a global counter, retrieve the particle with that index, perform the computation on that particle, storing the result in a separate array, then wait on a job-finished barrier. The main thread waits on the job-finished barrier. I used this approach since it should provide good load balancing (each computation may take a differing amount of time). I am really curious as to what could possibly cause this slowdown. I always read that atomic variables are fast, but now I'm starting to wonder whether they have performance costs of their own. If anybody has ideas about what to look for, or any hints, I would really appreciate it. I've been bashing my head on it for a week, and profiling has not revealed much.
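
    One common cause worth ruling out (not a diagnosis) is contention on the shared counter plus false sharing from interleaved writes to the result array. A hedged sketch of the alternative: give each thread one contiguous chunk of indices, so there is no per-item atomic and each thread writes its own slice. The per-particle work here is a stand-in.

        #include <cstddef>
        #include <vector>
        #include <boost/ref.hpp>
        #include <boost/thread.hpp>

        void process_range(const std::vector<double>& in, std::vector<double>& out,
                           std::size_t begin, std::size_t end) {
            for (std::size_t i = begin; i < end; ++i)
                out[i] = in[i] * in[i];   // stand-in for the real per-particle computation
        }

        void parallel_step(const std::vector<double>& in, std::vector<double>& out) {
            const std::size_t mid = in.size() / 2;
            boost::thread t1(process_range, boost::cref(in), boost::ref(out), std::size_t(0), mid);
            boost::thread t2(process_range, boost::cref(in), boost::ref(out), mid, in.size());
            t1.join();
            t2.join();
        }

        int main() {
            std::vector<double> in(1000000, 2.0), out(in.size());
            parallel_step(in, out);
        }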

    Read the article

  • What makes deployment successful for some users and unsuccessful for others?

    - by Julien
    I am trying to deploy a Visual C++ application (developed with Microsoft Visual Studio 2008) using a Setup and Deployment Project. After installation, users on some target computers get the following error message when launching the application executable: "This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix the problem." Another user could run the application properly after installation. I cannot find the root cause of this problem, despite spending several hours on the Visual Studio help files and online forums (most postings date back to 2006). Does anyone at Stack Overflow have a suggestion? Thanks in advance. Additional details appear below. The application uses FLTK 1.1.9 for a GUI library, as well as some Boost 1.39 libraries (regex, lexical_cast, date_time, math). I made sure I am trying to deploy the release version (not the debug version) of the application. The Runtime library in the Code Generation settings is Multi-threaded DLL (/MD). Dependency Walker on myapp.exe lists the following DLLs: wsock32.dll, comctl32.dll, kernel32.dll, user32.dll, gdi32.dll, shell32.dll, ole32.dll, msvcp90.dll, msvcr90.dll. In the Setup and Deployment Project, I add the following DLLs to the File System on Target Machine: fltkdlld.dll, and a folder named Microsoft.VC90.CRT with msvcm90.dll, msvcp90.dll, msvcr90.dll and Microsoft.VC90.CRT.manifest. The installation process on the target computers getting the error message requires having the .NET Framework 3.5 installed first. Any suggestion? Thanks in advance!

    Read the article

  • What is the nicest way to parse this in C++ ?

    - by ereOn
    Hi, in my program I have a list of "server addresses" in the following format: host[:port]. The brackets here indicate that the port is optional. host can be a hostname, or an IPv4 or IPv6 address. port, if present, can be a numeric port number or a service string (like "http" or "ssh"). If port is present and host is an IPv6 address, host must be in "bracket-enclosed" notation (example: [::1]). Here are some valid examples: localhost localhost:11211 127.0.0.1:http [::1]:11211 ::1 [::1] And an invalid example: ::1:80 // Invalid: is this the IPv6 address ::1:80 and a default port, or the IPv6 address ::1 and the port 80? ::1:http // This is not ambiguous, but for simplicity's sake, let's consider this forbidden as well. My goal is to separate such entries into two parts (obviously host and port). I don't care if either the host or the port is invalid as long as they don't contain a ':' (290.234.34.34.5 is OK for host; it will be rejected in the next step); I just want to separate the two parts, or, if there is no port part, to know that somehow. I tried to do something with std::stringstream but everything I come up with seems hacky and not really elegant. How would you do this in C++? I don't mind answers in C, but C++ is preferred. Any Boost solution is welcome as well. Thank you.
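
    A hedged sketch of one way to split "host[:port]" following the rules in the question: a bracketed host may be followed by ":port"; an unbracketed string with two or more colons is returned whole as the host with no port (this covers "::1"; forms like "::1:80" are forbidden input anyway).

        #include <stdexcept>
        #include <string>
        #include <utility>

        std::pair<std::string, std::string> split_host_port(const std::string& s) {
            if (!s.empty() && s[0] == '[') {                       // "[v6]" or "[v6]:port"
                std::string::size_type close = s.find(']');
                if (close == std::string::npos) throw std::runtime_error("unterminated '['");
                std::string host = s.substr(1, close - 1);
                if (close + 1 == s.size()) return std::make_pair(host, std::string());
                if (s[close + 1] != ':')   throw std::runtime_error("junk after ']'");
                return std::make_pair(host, s.substr(close + 2));
            }
            std::string::size_type first = s.find(':');
            if (first == std::string::npos)                        // no port at all
                return std::make_pair(s, std::string());
            if (s.find(':', first + 1) != std::string::npos)       // bare IPv6, e.g. "::1"
                return std::make_pair(s, std::string());
            return std::make_pair(s.substr(0, first), s.substr(first + 1));
        }

        // split_host_port("[::1]:11211") -> ("::1", "11211"); split_host_port("localhost") -> ("localhost", "")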

    Read the article

  • Calling a method at a specific interval rate in C++

    - by Aetius
    This is really annoying me, as I have done it before, about a year ago, and I cannot for the life of me remember what library it was. Basically, the problem is that I want to be able to call a method a certain number of times, or for a certain period of time, at a specified interval. One example: I would like to call a method "x", starting from now, 10 times, once every 0.5 seconds. Alternatively, call method "x", starting from now, 10 times, until 5 seconds have passed. I thought I used a Boost library for this functionality, but I can't seem to find it now and am feeling a bit annoyed. Unfortunately I can't look at the code again as I'm not in possession of it any more. Alternatively, I could have dreamt this all up and it could have been proprietary code. Assuming there is nothing out there that does what I would like, what is currently the best way of producing this behaviour? It would need to be high-resolution, down to a millisecond. It doesn't matter whether or not it blocks the thread it is executed from. Thanks!
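
    A minimal sketch of one way to get this with boost::asio's deadline_timer: a handler that calls x(), re-arms the timer relative to the previous expiry (so the interval does not drift), and stops after 10 calls. Actual resolution still depends on the OS scheduler.

        #include <iostream>
        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <boost/date_time/posix_time/posix_time.hpp>

        void x() { std::cout << "tick" << std::endl; }   // the method we want to call

        void tick(boost::asio::deadline_timer& timer, int remaining) {
            x();
            if (remaining <= 1) return;                  // done after 10 calls
            timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(500));
            timer.async_wait(boost::bind(&tick, boost::ref(timer), remaining - 1));
        }

        int main() {
            boost::asio::io_service io;
            boost::asio::deadline_timer timer(io, boost::posix_time::milliseconds(500));
            timer.async_wait(boost::bind(&tick, boost::ref(timer), 10));
            io.run();                                    // blocks until the last tick
        }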

    Read the article

  • Loading specific files from arbitrary directories?

    - by Haydn V. Harach
    I want to load foo.txt. foo.txt might exist in the data/bar/ directory, or it might exist in the data/New Folder/ directory. There might be a different foo.txt in both of these directories, in which case I would want to either load one and ignore the other according to some order that I've sorted the directories by (perhaps manually, perhaps by date of creation), or else load them both and combine the results somehow. The latter (combining the results of both/all foo.txt files) is circumstantial and beyond the scope of this question, but something I want to be able to do in the future. I'm using SDL and boost::filesystem. I want to keep my list of dependencies as small as possible, and as cross-platform as possible. I'm guessing that my best bet would be to get a list of every directory (within the data/ folder), sort/filter this list, then when I go to load foo.txt, search for it in each potential directory? This sounds like it would be very inefficient if I have dozens of potential directories to search through every time. What's the best way to go about accomplishing this? Bonus: What if I want some of the directories to be archives? i.e. considering data/foo/ and data/bar.zip both to be valid, and pulling foobar.txt from either one without caring.
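
    A hedged sketch of the basic boost::filesystem version (the archive bonus is left out): scan data/ once, keep its subdirectories in a sorted list, and resolve file names against that list on demand. The sort key here is just the path; a manual priority or creation date could be swapped in.

        #include <algorithm>
        #include <string>
        #include <vector>
        #include <boost/filesystem.hpp>

        namespace fs = boost::filesystem;

        std::vector<fs::path> list_data_dirs(const fs::path& root) {
            std::vector<fs::path> dirs;
            for (fs::directory_iterator it(root), end; it != end; ++it)
                if (fs::is_directory(it->status()))
                    dirs.push_back(it->path());
            std::sort(dirs.begin(), dirs.end());   // priority order
            return dirs;
        }

        bool find_file(const std::vector<fs::path>& dirs, const std::string& name, fs::path& out) {
            for (std::size_t i = 0; i < dirs.size(); ++i) {
                fs::path candidate = dirs[i] / name;
                if (fs::exists(candidate)) { out = candidate; return true; }  // first hit wins
            }
            return false;
        }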

    Read the article

  • Safe way for getting/finding a vertex in a graph with custom properties -> good programming practice

    - by Shadow
    Hi, I am writing a Graph class using the Boost Graph Library. I use custom vertex and edge properties and a map to store/find the vertices/edges for a given property. I'm satisfied with how it works so far. However, I have a small problem that I'm not sure how to solve "nicely". The class provides a method Vertex getVertex(Vertexproperties v_prop) and a method bool hasVertex(Vertexproperties v_prop). The question now is: would you judge this as good programming practice in C++? My opinion is that I first have to check whether something is available before I can get it. So, before getting a vertex with a desired property, one has to check that hasVertex() would return true for those properties. However, I would like to make getVertex() a bit more robust. At the moment it will segfault when getVertex() is called directly without first checking whether the graph has a corresponding vertex. A first idea was to return a null pointer, or a pointer that points past the last stored vertex. For the latter, I haven't found out how to do this. But even with this "robust" version, one would have to check for validity after getting a vertex, or one would run into a segfault when dereferencing that vertex pointer, for example. Therefore I am wondering whether it is "OK" to let getVertex() segfault if one does not check for availability beforehand?
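
    A hedged sketch of one common alternative (Vertex and VertexProperties are simplified stand-ins for the real BGL types): returning boost::optional<Vertex> folds hasVertex/getVertex into a single lookup, so a missing vertex is an explicit empty result instead of undefined behaviour.

        #include <map>
        #include <string>
        #include <boost/optional.hpp>

        typedef int Vertex;                    // placeholder for the graph's vertex_descriptor
        typedef std::string VertexProperties;  // placeholder for the custom property key

        class Graph {
        public:
            boost::optional<Vertex> findVertex(const VertexProperties& props) const {
                std::map<VertexProperties, Vertex>::const_iterator it = m_lookup.find(props);
                return it == m_lookup.end() ? boost::optional<Vertex>()            // not found
                                            : boost::optional<Vertex>(it->second); // found
            }
        private:
            std::map<VertexProperties, Vertex> m_lookup;
        };

        // Usage: if (boost::optional<Vertex> v = g.findVertex("foo")) { /* use *v */ }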

    Read the article

  • I don't understand how call_once works

    - by SABROG
    Please help me understand how call_once works. Here is thread-safe code. I don't understand why it needs Thread Local Storage and the global_epoch variable. The variable _fast_pthread_once_per_thread_epoch could be changed to a constant/enum like {FAST_PTHREAD_ONCE_INIT, BEING_INITIALIZED, FINISH_INITIALIZED}. Why does global_epoch need to count calls? I think this code could be rewritten with the logic: if the flag is FINISH_INITIALIZED do nothing, else go to the block with mutexes, and that is all. #ifndef FAST_PTHREAD_ONCE_H #define FAST_PTHREAD_ONCE_H #include <signal.h> #include <pthread.h> typedef sig_atomic_t fast_pthread_once_t; #define FAST_PTHREAD_ONCE_INIT SIG_ATOMIC_MAX extern __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch; #ifdef __cplusplus extern "C" { #endif extern void fast_pthread_once( fast_pthread_once_t *once, void (*func)(void) ); inline static void fast_pthread_once_inline( fast_pthread_once_t *once, void (*func)(void) ) { fast_pthread_once_t x = *once; /* unprotected access */ if ( x > _fast_pthread_once_per_thread_epoch ) { fast_pthread_once( once, func ); } } #ifdef __cplusplus } #endif #endif /* FAST_PTHREAD_ONCE_H */ Source fast_pthread_once.c The source is written in C. The lines of the primary function are numbered for reference in the subsequent correctness argument. #include "fast_pthread_once.h" #include <stdlib.h> static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER; /* protects global_epoch and all fast_pthread_once_t writes */ static pthread_cond_t cv = PTHREAD_COND_INITIALIZER; /* signalled whenever a fast_pthread_once_t is finalized */ #define BEING_INITIALIZED (FAST_PTHREAD_ONCE_INIT - 1) static fast_pthread_once_t global_epoch = 0; /* under mu */ __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch; static void check( int x ) { if ( x == 0 ) abort(); } void fast_pthread_once( fast_pthread_once_t *once, void (*func)(void) ) { /*01*/ fast_pthread_once_t x = *once; /* unprotected access */ /*02*/ if ( x > _fast_pthread_once_per_thread_epoch ) { /*03*/ check( pthread_mutex_lock(&mu) == 0 ); /*04*/ if ( *once == FAST_PTHREAD_ONCE_INIT ) { /*05*/ *once = BEING_INITIALIZED; /*06*/ check( pthread_mutex_unlock(&mu) == 0 ); /*07*/ (*func)(); /*08*/ check( pthread_mutex_lock(&mu) == 0 ); /*09*/ global_epoch++; /*10*/ *once = global_epoch; /*11*/ check( pthread_cond_broadcast(&cv) == 0 ); /*12*/ } else { /*13*/ while ( *once == BEING_INITIALIZED ) { /*14*/ check( pthread_cond_wait(&cv, &mu) == 0 ); /*15*/ } /*16*/ } /*17*/ _fast_pthread_once_per_thread_epoch = global_epoch; /*18*/ check( pthread_mutex_unlock(&mu) == 0 ); } } This code is from Boost: #ifndef BOOST_THREAD_PTHREAD_ONCE_HPP #define BOOST_THREAD_PTHREAD_ONCE_HPP // once.hpp // // (C) Copyright 2007-8 Anthony Williams // // Distributed under the Boost Software License, Version 1.0.
    (See // accompanying file LICENSE_1_0.txt or copy at // http://www.boost.org/LICENSE_1_0.txt) #include #include #include #include "pthread_mutex_scoped_lock.hpp" #include #include #include namespace boost { struct once_flag { boost::uintmax_t epoch; }; namespace detail { BOOST_THREAD_DECL boost::uintmax_t& get_once_per_thread_epoch(); BOOST_THREAD_DECL extern boost::uintmax_t once_global_epoch; BOOST_THREAD_DECL extern pthread_mutex_t once_epoch_mutex; BOOST_THREAD_DECL extern pthread_cond_t once_epoch_cv; } #define BOOST_ONCE_INITIAL_FLAG_VALUE 0 #define BOOST_ONCE_INIT {BOOST_ONCE_INITIAL_FLAG_VALUE} // Based on Mike Burrows fast_pthread_once algorithm as described in // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2444.html template<typename Function> void call_once(once_flag& flag, Function f) { static boost::uintmax_t const uninitialized_flag=BOOST_ONCE_INITIAL_FLAG_VALUE; static boost::uintmax_t const being_initialized=uninitialized_flag+1; boost::uintmax_t const epoch=flag.epoch; boost::uintmax_t& this_thread_epoch=detail::get_once_per_thread_epoch(); if(epoch #endif Do I understand correctly that Boost doesn't use atomic operations here, so the Boost code is not thread-safe?

    Read the article

  • Uses of a C++ Arithmetic Promotion Header

    - by OlduvaiHand
    I've been playing around with a set of templates for determining the correct promotion type given two primitive types in C++. The idea is that if you define a custom numeric template, you could use these to determine the return type of, say, the operator+ function based on the classes passed to the templates. For example: // Custom numeric class template <class T> struct Complex { Complex(T real, T imag) : r(real), i(imag) {} T r, i; // Other implementation stuff }; // Generic arithmetic promotion template template <class T, class U> struct ArithmeticPromotion { typedef typename X type; // I realize this is incorrect, but the point is it would // figure out what X would be via trait testing, etc }; // Specialization of arithmetic promotion template template <> struct ArithmeticPromotion<long long, unsigned long> { typedef unsigned long long type; }; // Arithmetic promotion template actually being used template <class T, class U> Complex<typename ArithmeticPromotion<T, U>::type> operator+ (Complex<T>& lhs, Complex<U>& rhs) { return Complex<typename ArithmeticPromotion<T, U>::type>(lhs.r + rhs.r, lhs.i + rhs.i); } If you use these promotion templates, you can more or less treat your user-defined types as if they're primitives, with the same promotion rules being applied to them. So, I guess the question I have is: would this be something that could be useful? And if so, what sorts of common tasks would you want templated out for ease of use? I'm working on the assumption that just having the promotion templates alone would be insufficient for practical adoption. Incidentally, Boost has something similar in its math/tools/promotion header, but it's really more for getting values ready to be passed to the standard C math functions (that expect either 2 ints or 2 doubles) and bypasses all of the integral types. Is something that simple preferable to having complete control over how your objects are being converted? TL;DR: What sorts of helper templates would you expect to find in an arithmetic promotion header beyond the machinery that does the promotion itself?
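
    As a side note, a hedged sketch of how the whole specialization table can be generated in C++11: let the compiler compute the promoted type of an actual addition with decltype, so ArithmeticPromotion needs no hand-written cases.

        #include <type_traits>
        #include <utility>

        template <class T, class U>
        struct ArithmeticPromotion {
            typedef decltype(std::declval<T>() + std::declval<U>()) type;  // usual arithmetic conversions
        };

        static_assert(std::is_same<ArithmeticPromotion<int, unsigned int>::type, unsigned int>::value,
                      "int + unsigned int promotes to unsigned int");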

    Read the article

  • Middleware with generic communication media layer

    - by Tom
    Greetings all, I'm trying to implement middleware (a driver) for an embedded device with a generic communication media layer. I'm not sure what the best way to do it is, so I'm seeking advice from more experienced Stack Overflow users :). Basically we've got devices around the country communicating with our servers (or with a PDA/laptop used in the field). The usual form of communication is over TCP/IP, but it could also be over USB, an RF dongle, IR, etc. The plan is to have an object corresponding to each of these devices, handling the proprietary protocol on one side and requests/responses from other internal systems on the other. The thing is how to create something generic between the media and the handling objects. I had a play around with a TCP dispatcher using boost.asio, but trying to create something generic seems like a nightmare :). Has anybody tried to do something like that? What is the best way to do it? Example: A device connects to our Linux server. A new middleware instance is created (on the server) which announces itself to one of the running services (the details are not important). The service is responsible for making sure that the device's time is synchronized. So it asks the middleware what the device's time is, the driver translates the request into the device language (protocol) and sends the message, the device responds, and the driver again translates the response for the service. This might seem like overkill for such a simple request, but imagine there are more complex requests which the driver must translate; there are also several versions of the device which use different protocols, etc., but they would use the same time-sync service. The goal is to abstract the devices behind the drivers so that the same service can be used to communicate with all of them. Another example: we find out that remote communication with the device is down. So we send somebody out with a PDA; he connects to the device using a USB cable and starts up an application which has the same functionality as the time-sync service. Again a middleware instance is created (on the PDA) to translate communication between the application and the device, this time using USB/serial as the medium instead of TCP/IP as in the previous example. I hope it makes more sense now :) Cheers, Tom
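
    A hedged sketch of one way to keep the driver media-agnostic (the Medium/TcpMedium names are made up): the protocol translator only sees a small read/write interface, and TCP, serial/USB, etc. are separate implementations behind it; a serial variant could wrap boost::asio::serial_port the same way.

        #include <cstddef>
        #include <string>
        #include <boost/asio.hpp>

        class Medium {
        public:
            virtual ~Medium() {}
            virtual std::size_t write(const void* data, std::size_t len) = 0;
            virtual std::size_t read(void* data, std::size_t len) = 0;
        };

        class TcpMedium : public Medium {
        public:
            TcpMedium(boost::asio::io_service& io, const std::string& host, const std::string& port)
                : socket_(io) {
                boost::asio::ip::tcp::resolver resolver(io);
                boost::asio::connect(socket_,
                    resolver.resolve(boost::asio::ip::tcp::resolver::query(host, port)));
            }
            std::size_t write(const void* data, std::size_t len) {
                return boost::asio::write(socket_, boost::asio::buffer(data, len));
            }
            std::size_t read(void* data, std::size_t len) {
                return socket_.read_some(boost::asio::buffer(data, len));
            }
        private:
            boost::asio::ip::tcp::socket socket_;
        };

        // The driver holds a Medium* and translates protocol messages through it,
        // never knowing whether the bytes travel over TCP, USB/serial or anything else.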

    Read the article
