Search Results

Search found 232 results on 10 pages for 'ipc'.

Page 3/10 | < Previous Page | 1 2 3 4 5 6 7 8 9 10  | Next Page >

  • ColdFusion 9 inter-portlet communication?

    - by Jakub
    I am having a hard time finding documentation on achieving IPC with ColdFusion 9 portlets under JBoss. Does anyone have any good references I can take a look at, or maybe some example code I can go off of? So far I've been successful in getting my portlets working under Liferay (JBoss): form submission and the different views (edit/help) all work fine. I just need to know how to have one portlet on the page talk to another portlet. So far I have been unsuccessful :( Anyone?

    Read the article

  • Perl - How to save a handle as an attribute in order to use it outside a module

    - by Zwik
    Ultimately, what I want to do is start a process in a module and parse its output in real time from another script. What I want to do: open a process handle (IPC), then use this attribute outside of the module. How I'm trying to do it, and failing: open the process handle, save the handle in a module attribute, use the attribute outside the module. Code example:

        #module.pm
        self->{PROCESS_HANDLER};
        sub doSomething {
            ...
            open( self->{PROCESS_HANDLER}, "run a .jar 2>&1 |" );
            ...
        }

        #perlScript.pl
        my $module = new module(...);
        ...
        $module->doSomething();
        ...
        while( $module->{PROCESS_HANDLER} ){
            ...
        }

    Read the article

  • C - fork() and sharing memory

    - by Ben
    I need my parent and child processes to both be able to read and write the same variable (of type int) so it is "global" between the two processes. I'm assuming this would use some sort of cross-process communication, with one variable on one process being updated. I did a quick Google search and IPC and various techniques come up, but I don't know which is the most suitable for my situation. So which technique is best, and could you provide a link to a noob's tutorial for it? Thanks.
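
    A minimal sketch (not from the question) of one common approach: create an anonymous shared mapping with mmap() before fork(), so parent and child both see the same int. For anything beyond this toy example you would also want synchronisation (a semaphore or a process-shared mutex), which is omitted here.

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            /* MAP_SHARED | MAP_ANONYMOUS memory is inherited by the child and stays
               shared, unlike ordinary variables, which are copied on fork(). */
            int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            if (shared == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }

            *shared = 0;
            pid_t pid = fork();
            if (pid == 0) {              /* child */
                *shared = 42;            /* visible to the parent */
                _exit(0);
            }
            waitpid(pid, NULL, 0);
            printf("parent sees %d\n", *shared);   /* prints 42 */
            munmap(shared, sizeof(int));
            return 0;
        }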

    Read the article

  • What are alternatives to Win32 PulseEvent() function?

    - by Bill
    The documentation for the Win32 API PulseEvent() function (kernel32.dll) states that this function is “… unreliable and should not be used by new applications. Instead, use condition variables”. However, condition variables cannot be used across process boundaries like (named) events can. I have a scenario that is cross-process, cross-runtime (native and managed code) in which a single producer occasionally has something interesting to make known to zero or more consumers. Right now, a well-known named event is used (and set to signaled state) by the producer using this PulseEvent function when it needs to make something known. Zero or more consumers wait on that event (WaitForSingleObject()) and perform an action in response. There is no need for two-way communication in my scenario, and the producer does not need to know if the event has any listeners, nor does it need to know if the event was successfully acted upon. On the other hand, I do not want any consumers to ever miss any events. In other words, the system needs to be perfectly reliable – but the producer does not need to know if that is the case or not. The scenario can be thought of as a “clock ticker” – i.e., the producer provides a semi-regular signal for zero or more consumers to count. And all consumers must have the correct count over any given period of time. No polling by consumers is allowed (for performance reasons). The tick interval is just a few milliseconds (20 or so, but not perfectly regular). Raymond Chen (The Old New Thing) has a blog post pointing out the “fundamentally flawed” nature of the PulseEvent() function, but I do not see an alternative for my scenario from Chen or the posted comments. Can anyone please suggest one? Please keep in mind that the IPC signal must cross process boundaries on the machine, not simply threads. And the solution needs to have high performance in that consumers must be able to act within 10ms of each event.
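
    Not the poster's code, but one commonly suggested alternative, sketched in C under the assumption that each consumer registers its own named semaphore with the producer: a semaphore's count accumulates releases, so a tick that arrives while a consumer is momentarily busy is queued rather than lost (which is exactly what PulseEvent() cannot guarantee). The names and the registration scheme below are invented for illustration.

        #include <windows.h>
        #include <stdio.h>

        /* Producer side: once per tick, release every registered consumer's semaphore.
           How consumers make their semaphore names known to the producer is left out. */
        void tick(HANDLE *consumer_sems, int n) {
            for (int i = 0; i < n; ++i)
                ReleaseSemaphore(consumer_sems[i], 1, NULL);   /* counted, never lost */
        }

        /* Consumer side: each wait consumes exactly one pending tick. */
        int main(void) {
            HANDLE sem = CreateSemaphoreA(NULL, 0, 0x7FFFFFFF,
                                          "Global\\ticker-consumer-1");   /* invented name */
            for (;;) {
                WaitForSingleObject(sem, INFINITE);
                /* act on one tick; any ticks that arrived meanwhile stay counted */
                printf("tick\n");
            }
        }

    A per-consumer auto-reset event would also avoid lost wake-ups, but only a semaphore (or a shared counter next to the event) preserves the exact tick count the scenario requires.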

    Read the article

  • Perl: Process Communication

    - by Shiftbit
    Can anyone explain how I can successfully get my processes communicating? I find the perldoc on IPC confusing. What I have so far is:

        $| = 1;
        $SIG{CHLD} = {wait};

        my $parentPid = $$;

        if ($pid = fork();) ) {
            if ($pid == 0) {
                pipe($parentPid, $$);
                open PARENT, "<$parentPid";
                while (<PARENT>) {
                    print $_;
                }
                close PARENT;
                exit();
            } else {
                pipe($parentPid, $pid);
                open CHILD, ">$pid"; or error("\nError opening: childPid\nRef: $!\n");
                open (FH, "<list") or error("\nError opening: list\nRef: $!\n");
                while(<FH>) {
                    print CHILD, $_;
                }
                close FH or error("\nError closing: list\nRef: $!\n");
                close CHILD or error("\nError closing: childPid\nRef: $!\n);
            } else {
                error("\nError forking\nRef: $!\n");
            }

    First question: what does perldoc pipe mean by READHANDLE and WRITEHANDLE? Second question: can I implement a solution without relying on CPAN or other modules?

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and in a couple of months is expected to grow to 50M users (not concurrent users, though). I would like to have an architecture that can be scaled out easily without much effort. In order to explain, I would like to use a trivial scenario. Let's say User entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the page controllers. But once the traffic increases, for example, authenticating users (or similar services related to user entities) has to be moved out to a different internal server to spread the load. But at the same time, using RPC calls over the network when the user count is 40K would be overkill. My proposal was to use IPC initially, and when we need to scale out we can internally switch to TCP-based RPC calls so that it can easily scale out. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving to a TcpListener later on. If we have a proper design that can encapsulate the above approach, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: database scaling is definitely the second-phase optimization, so we have already put an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time would be the application servers.

    Read the article

  • How can I use `pipe` to facilitate interprocess communication in Perl?

    - by Shiftbit
    Can anyone explain how I can successfully get my processes communicating? I find the perldoc on IPC confusing. What I have so far is:

        $| = 1;
        $SIG{CHLD} = {wait};

        my $parentPid = $$;

        if ($pid = fork();) ) {
            if ($pid == 0) {
                pipe($parentPid, $$);
                open PARENT, "<$parentPid";
                while (<PARENT>) {
                    print $_;
                }
                close PARENT;
                exit();
            } else {
                pipe($parentPid, $pid);
                open CHILD, ">$pid"; or error("\nError opening: childPid\nRef: $!\n");
                open (FH, "<list") or error("\nError opening: list\nRef: $!\n");
                while(<FH>) {
                    print CHILD, $_;
                }
                close FH or error("\nError closing: list\nRef: $!\n");
                close CHILD or error("\nError closing: childPid\nRef: $!\n);
            } else {
                error("\nError forking\nRef: $!\n");
            }

    First: What does perldoc pipe mean by READHANDLE, WRITEHANDLE? Second: Can I implement a solution without relying on CPAN or other modules?

    Read the article

  • QLocalSocket and QLocalServer in browser plugins

    - by kambamsu
    Hi, I have a simple question. Does the IPC mechanism in Qt work when we use it for developing browser plugins? The reason I ask is that I can easily get QLocalSocket and QLocalServer communication to work in a Qt application, but when I write a similar piece of code in a browser plugin DLL, I see that the server does not accept a new connection at all. This is what I do in the server:

        server = new QLocalServer(this);
        if( !server->listen("myServer")) {
            writeFile("Listen failed");
        }
        connect(server, SIGNAL(newConnection()), this, SLOT(handleConn()), Qt::QueuedConnection);

    and this is what I do in the client:

        client = new QLocalSocket(this);
        client->abort();
        QObject::connect(client, SIGNAL(connected()), this, SLOT(connClient()), Qt::QueuedConnection);
        client->connectToServer("myServer");

    After I call connectToServer, my client emits the connected() signal and the connClient() slot is called. But on the server side, no signal is emitted; it doesn't seem to be receiving any connection at all. Any help would be appreciated. Thanks

    Read the article

  • Binary serialization/de-serialization in C++ and C#

    - by 6pack kid
    Hello. I am working on a distributed application that has two components. One is written in standard C++ (not managed C++) and the other is written in C#. Both communicate via a message bus. I have a situation in which I need to pass objects from the C++ application to the C# application, and for this I need to serialize those objects in C++ and de-serialize them in C# (something like marshaling/un-marshaling in .NET). I need to perform this serialization in binary and not in XML (for performance reasons). I have used Boost.Serialization to do this when both ends were implemented in C++, but now that I have a .NET application on one end, Boost.Serialization is not a viable solution. I am looking for a solution that allows me to perform (de)serialization across the C++/.NET boundary, i.e., cross-platform binary serialization. I know I can implement the (de)serialization code in a C++ DLL and use P/Invoke in the .NET application, but I want to keep that as a last resort. Also, I want to know: if I use some standard like gzip, will that be efficient? Are there any other alternatives to gzip? What are their pros/cons? Thanks

    Read the article

  • Perl: Event-driven Programming

    - by Shiftbit
    Are there any POSIX signals I could utilize in my Perl program to create event-driven programming? Currently I have a multi-process program that is able to cross-communicate, but my parent thread is only able to listen to one child at a time.

        foreach (@threads) {
            sysread(${$_}{'read'}, my $line, 100);
            chomp($line);
            print "Parent hears: $line\n";
        }

    The problem is that the parent sits in a continual wait state until it receives a signal from the first child before it can continue on. I am relying on 'pipe' for my intercommunication. My current solution is very similar to: http://stackoverflow.com/questions/2558098/how-can-i-use-pipe-to-facilitate-interprocess-communication-in-perl If possible I would like to rely on a $SIG{...} event or any non-CPAN solution.

    Read the article

  • Named pipe is not flushing in Python

    - by BrainCore
    I have a named pipe created via the os.mkfifo() command. I have two different Python processes accessing this named pipe, process A is reading, and process B is writing. Process A uses the select function to determine when there is data available in the fifo/pipe. Despite the fact that process B flushes after each write call, process A's select function does not always return (it keeps blocking as if there is no new data). After looking into this issue extensively, I finally just programmed process B to add 5KB of garbage writes before and after my real call, and likewise process A is programmed to ignore those 5KB. Now everything works fine, and select is always returning appropriately. I came to this hack-ish solution by noticing that process A's select would return if process B were to be killed (after it was writing and flushing, it would sleep on a read pipe). Is there a problem with flush in Python for named pipes?
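
    Not from the question, but for comparison, a minimal C sketch of the fd-level pattern this relies on: once the writer's buffers are flushed, the bytes are in the FIFO and select() on the raw descriptor reports it readable. One possible explanation for the symptom (an assumption, not a confirmed diagnosis) is a buffered reader pulling more bytes into user space than are handled per wake-up, so that a later select() on the descriptor blocks even though data is still sitting in the reader's buffer. The FIFO path below is made up.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/select.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("/tmp/example_fifo", O_RDONLY);   /* blocks until a writer opens it */
            char buf[4096];
            for (;;) {
                fd_set rfds;
                FD_ZERO(&rfds);
                FD_SET(fd, &rfds);
                if (select(fd + 1, &rfds, NULL, NULL, NULL) > 0) {
                    ssize_t n = read(fd, buf, sizeof buf);  /* consume exactly what is there */
                    if (n <= 0) break;                      /* writer closed the FIFO */
                    fwrite(buf, 1, (size_t)n, stdout);
                }
            }
            close(fd);
            return 0;
        }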

    Read the article

  • What are all the differences between pipes and message queues?

    - by aks
    What are all the differences between pipes and message queues? Please explain both from the VxWorks and Unix perspectives. I think pipes are unidirectional but message queues aren't. But don't pipes internally use message queues? If so, how come pipes are unidirectional but message queues are not? What other differences can you think of (from a design, usage, or other perspective)?
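
    Not part of the question: a short C sketch, assuming a POSIX system with the real-time message-queue extension (link with -lrt on Linux), showing two of the classic differences. A pipe is a one-way byte stream with no record boundaries; a POSIX message queue delivers discrete messages, each carrying a priority. (VxWorks documents its pipes as being built on its message queues, which is presumably what the question alludes to.)

        #include <fcntl.h>
        #include <mqueue.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            /* Pipe: a one-way byte stream; two writes may come back as one blob. */
            int fds[2];
            char buf[64];
            pipe(fds);
            write(fds[1], "ab", 2);
            write(fds[1], "cd", 2);
            ssize_t n = read(fds[0], buf, sizeof buf);      /* may well return all 4 bytes */
            printf("pipe read %zd bytes\n", n);

            /* Message queue: discrete messages, each with a priority. */
            struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 32 };
            mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
            mq_send(q, "ab", 2, 0);
            mq_send(q, "cd", 2, 1);                         /* higher priority, dequeued first */
            unsigned prio;
            n = mq_receive(q, buf, sizeof buf, &prio);      /* returns exactly one message */
            printf("mq read %zd bytes at priority %u\n", n, prio);
            mq_close(q);
            mq_unlink("/demo_q");
            return 0;
        }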

    Read the article

  • Linux Shared Memory

    - by Betamoo
    The function that creates shared memory in *nix programming takes a key as one of its parameters. What is the meaning of this key, and how can I use it? Edit: I mean the key, not the shared memory id.
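
    Not from the question, but assuming this refers to the System V shmget() call: the key is just an integer that unrelated processes agree on in advance, so that each process's shmget() call resolves to the same segment (shmget() then returns the process-local id). ftok() is the usual way to derive a key from a file path and a project id, so the processes only have to agree on those two things. A small sketch:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        int main(void) {
            /* Every process that calls ftok() with the same existing path and project
               id computes the same key, without having to exchange it explicitly. */
            key_t key = ftok("/tmp", 'A');                  /* path and id are illustrative */
            if (key == (key_t)-1) { perror("ftok"); exit(EXIT_FAILURE); }

            /* The key names the segment; shmget() returns this process's id for it. */
            int shmid = shmget(key, 4096, IPC_CREAT | 0600);
            if (shmid == -1) { perror("shmget"); exit(EXIT_FAILURE); }

            char *mem = shmat(shmid, NULL, 0);              /* attach to our address space */
            if (mem == (void *)-1) { perror("shmat"); exit(EXIT_FAILURE); }
            snprintf(mem, 4096, "hello via key %d", (int)key);
            printf("%s (shmid %d)\n", mem, shmid);
            shmdt(mem);
            return 0;
        }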

    Read the article

  • How do I pipe stdout/stderr in .NET?

    - by acidzombie24
    I want to do something like this:

        ffmpeg -i audio.mp3 -f flac - | oggenc2.exe - -o audio.ogg

    I know how to run ffmpeg -i audio.mp3 -f flac using the Process class in .NET, but how do I pipe its output to oggenc2? Any example of how to do this (it doesn't need to be ffmpeg or oggenc2) would be fine.

    Read the article

  • Most efficient way to send images across processes

    - by Heinrich Ulbricht
    Goal: Pass images generated by one process efficiently and at very high speed to another process. The two processes run on the same machine and on the same desktop. The operating system may be WinXP, Vista, or Win7. Detailed description: The first process is solely for controlling the communication with a device which produces the images. These images are about 500x300px in size and may be updated up to several hundred times per second. The second process needs these images in order to display them. The first process uses a third-party API to paint the images from the device to an HDC; this HDC has to be provided by me. Note: there is already a connection open between the two processes; they are communicating via anonymous pipes and share memory-mapped file views. Thoughts: How would I achieve this goal with as little work as possible? And I mean work both for me and for the computer. I am using Delphi, so maybe there is some component available for doing this? I think I could always paint to any image component's HDC, save the content to a memory stream, copy the contents via the memory-mapped file, unpack it on the other side and paint it there to the destination HDC. I also read about an IPicture interface which can be used to marshal images. What are your ideas? I appreciate every thought on this!
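
    Not the poster's code: since memory-mapped file views are already shared between the two processes, one low-overhead pattern is to treat a mapped region as the frame buffer and pair it with a named event that signals "new frame". Below is a rough sketch of the Win32 calls in C (the same API is callable from Delphi); the object names and the 32-bit pixel format are assumptions, and a real implementation would likely double-buffer so the consumer never reads a half-written frame.

        #include <windows.h>
        #include <string.h>

        #define FRAME_BYTES (500 * 300 * 4)   /* 32-bit pixels, dimensions from the question */

        static void  *g_view;                 /* shared frame buffer */
        static HANDLE g_frame_ready;          /* auto-reset "new frame" event */

        /* Both processes run this; the object names are invented for illustration. */
        void open_channel(void) {
            HANDLE mapping = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                                0, FRAME_BYTES, "Local\\frame-buffer");
            g_view = MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, FRAME_BYTES);
            g_frame_ready = CreateEventA(NULL, FALSE, FALSE, "Local\\frame-ready");
        }

        /* Producer: copy the freshly painted pixels into the shared view, then signal. */
        void publish_frame(const void *pixels) {
            memcpy(g_view, pixels, FRAME_BYTES);
            SetEvent(g_frame_ready);
        }

        /* Consumer: wake on each frame and blit from g_view to the destination HDC. */
        void consume_frames(void) {
            while (WaitForSingleObject(g_frame_ready, INFINITE) == WAIT_OBJECT_0) {
                /* e.g. SetDIBitsToDevice() from g_view onto the target HDC */
            }
        }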

    Read the article

  • How do I synchronize access to shared memory in LynxOS/POSIX?

    - by GrahamS
    I am implementing two processes on a LynxOS SE (POSIX conformant) system that will communicate via shared memory. One process will act as a "producer" and the other a "consumer". In a multi-threaded system my approach to this would be to use a mutex and condvar (condition variable) pair, with the consumer waiting on the condvar (with pthread_cond_wait) and the producer signalling it (with pthread_cond_signal) when the shared memory is updated. How do I achieve this in a multi-process, rather than multi-threaded, architecture? Is there a LynxOS/POSIX way to create a condvar/mutex pair that can be used between processes? Or is some other synchronization mechanism more appropriate in this scenario?
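
    Not part of the question: POSIX does provide for exactly this, as long as the mutex and condition variable themselves live in the shared memory and are initialised with the PROCESS_SHARED attribute (a later question on this page shows a fuller program). A minimal sketch of the initialisation, assuming the pair sits at the start of an shm_open()'d and mmap()'d segment:

        #include <pthread.h>

        struct sync_block {
            pthread_mutex_t mutex;
            pthread_cond_t  cond;
        };

        /* Run once, by the producer, after creating and mapping the shared segment. */
        void init_shared_sync(struct sync_block *s) {
            pthread_mutexattr_t ma;
            pthread_condattr_t  ca;
            pthread_mutexattr_init(&ma);
            pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);  /* usable across processes */
            pthread_mutex_init(&s->mutex, &ma);
            pthread_mutexattr_destroy(&ma);
            pthread_condattr_init(&ca);
            pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
            pthread_cond_init(&s->cond, &ca);
            pthread_condattr_destroy(&ca);
        }
        /* The consumer then uses the usual lock / pthread_cond_wait / unlock sequence
           on s->mutex and s->cond, exactly as it would between threads. */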

    Read the article

  • .net 4.0 creating a MemoryMappedFile with global context throws exception

    - by Christoph
    Hi all, I want to create a global MemoryMappedFile in C# 4.0 using the following call:

        string MemoryMappedFileName = "Global\\20E9C857-C944-4C35-B937-A5941034D073";
        ioBuffer = MemoryMappedFile.CreateNew(MemoryMappedFileName, totalIoBufferSize);

    This always throws the following exception: "System.UnauthorizedAccessException: Access to the path is denied." If I remove the "Global\" prefix from the memory-mapped file name it works, but I need a memory-mapped file that exists across terminal sessions. Thanks, Christoph

    Read the article

  • Reading a child process's /proc/pid/mem file from the parent

    - by Amittai Aviram
    In the program below, I am trying to cause the following to happen: Process A assigns a value to a stack variable a. Process A (parent) creates process B (child) with PID child_pid. Process B calls function func1, passing a pointer to a. Process B changes the value of variable a through the pointer. Process B opens its /proc/self/mem file, seeks to the page containing a, and prints the new value of a. Process A (at the same time) opens /proc/child_pid/mem, seeks to the right page, and prints the new value of a. The problem is that, in step 6, the parent only sees the old value of a in /proc/child_pid/mem, while the child can indeed see the new value in its /proc/self/mem. Why is this the case? Is there any way that I can get the parent to to see the child's changes to its address space through the /proc filesystem? #include <fcntl.h> #include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <sys/wait.h> #include <unistd.h> #define PAGE_SIZE 0x1000 #define LOG_PAGE_SIZE 0xc #define PAGE_ROUND_DOWN(v) ((v) & (~(PAGE_SIZE - 1))) #define PAGE_ROUND_UP(v) (((v) + PAGE_SIZE - 1) & (~(PAGE_SIZE - 1))) #define OFFSET_IN_PAGE(v) ((v) & (PAGE_SIZE - 1)) # if defined ARCH && ARCH == 32 #define BP "ebp" #define SP "esp" #else #define BP "rbp" #define SP "rsp" #endif typedef struct arg_t { int a; } arg_t; void func1(void * data) { arg_t * arg_ptr = (arg_t *)data; printf("func1: old value: %d\n", arg_ptr->a); arg_ptr->a = 53; printf("func1: address: %p\n", &arg_ptr->a); printf("func1: new value: %d\n", arg_ptr->a); } void expore_proc_mem(void (*fn)(void *), void * data) { off_t frame_pointer, stack_start; char buffer[PAGE_SIZE]; const char * path = "/proc/self/mem"; int child_pid, status; int parent_to_child[2]; int child_to_parent[2]; arg_t * arg_ptr; off_t child_offset; asm volatile ("mov %%"BP", %0" : "=m" (frame_pointer)); stack_start = PAGE_ROUND_DOWN(frame_pointer); printf("Stack_start: %lx\n", (unsigned long)stack_start); arg_ptr = (arg_t *)data; child_offset = OFFSET_IN_PAGE((off_t)&arg_ptr->a); printf("Address of arg_ptr->a: %p\n", &arg_ptr->a); pipe(parent_to_child); pipe(child_to_parent); bool msg; int child_mem_fd; char child_path[0x20]; child_pid = fork(); if (child_pid == -1) { perror("fork"); exit(EXIT_FAILURE); } if (!child_pid) { close(child_to_parent[0]); close(parent_to_child[1]); printf("CHILD (pid %d, parent pid %d).\n", getpid(), getppid()); fn(data); msg = true; write(child_to_parent[1], &msg, 1); child_mem_fd = open("/proc/self/mem", O_RDONLY); if (child_mem_fd == -1) { perror("open (child)"); exit(EXIT_FAILURE); } printf("CHILD: child_mem_fd: %d\n", child_mem_fd); if (lseek(child_mem_fd, stack_start, SEEK_SET) == (off_t)-1) { perror("lseek"); exit(EXIT_FAILURE); } if (read(child_mem_fd, buffer, sizeof(buffer)) != sizeof(buffer)) { perror("read"); exit(EXIT_FAILURE); } printf("CHILD: new value %d\n", *(int *)(buffer + child_offset)); read(parent_to_child[0], &msg, 1); exit(EXIT_SUCCESS); } else { printf("PARENT (pid %d, child pid %d)\n", getpid(), child_pid); printf("PARENT: child_offset: %lx\n", child_offset); read(child_to_parent[0], &msg, 1); printf("PARENT: message from child: %d\n", msg); snprintf(child_path, 0x20, "/proc/%d/mem", child_pid); printf("PARENT: child_path: %s\n", child_path); child_mem_fd = open(path, O_RDONLY); if (child_mem_fd == -1) { perror("open (child)"); exit(EXIT_FAILURE); } printf("PARENT: child_mem_fd: %d\n", child_mem_fd); if (lseek(child_mem_fd, stack_start, SEEK_SET) == 
(off_t)-1) { perror("lseek"); exit(EXIT_FAILURE); } if (read(child_mem_fd, buffer, sizeof(buffer)) != sizeof(buffer)) { perror("read"); exit(EXIT_FAILURE); } printf("PARENT: new value %d\n", *(int *)(buffer + child_offset)); close(child_mem_fd); printf("ENDING CHILD PROCESS.\n"); write(parent_to_child[1], &msg, 1); if (waitpid(child_pid, &status, 0) == -1) { perror("waitpid"); exit(EXIT_FAILURE); } } } int main(void) { arg_t arg; arg.a = 42; printf("In main: address of arg.a: %p\n", &arg.a); explore_proc_mem(&func1, &arg.a); return EXIT_SUCCESS; } This program produces the output below. Notice that the value of a (boldfaced) differs between parent's and child's reading of the /proc/child_pid/mem file. In main: address of arg.a: 0x7ffffe1964f0 Stack_start: 7ffffe196000 Address of arg_ptr-a: 0x7ffffe1964f0 PARENT (pid 20376, child pid 20377) PARENT: child_offset: 4f0 CHILD (pid 20377, parent pid 20376). func1: old value: 42 func1: address: 0x7ffffe1964f0 func1: new value: 53 PARENT: message from child: 1 CHILD: child_mem_fd: 4 PARENT: child_path: /proc/20377/mem CHILD: new value 53 PARENT: child_mem_fd: 7 PARENT: new value 42 ENDING CHILD PROCESS.

    Read the article

  • Invoke Python modules from Java

    - by user36813
    I have a Python interface to a graph library written in C - igraph (the name of the library). My need is to invoke the Python modules pertaining to this graph library from Java code. It goes like this: the core of the library is in C. This core has been imported into Python, and interfaces to the functions embedded in the core are available in Python. The rest of my project's code is in Java, and hence I would like to call the graph functions from Java as well. Jython - which lets you invoke Python modules within Java - was an option. I went on to try Jython, only to discover that it will not work in my case, as the core code is in C and Jython won't support anything that is imported as a C DLL in Python code. I also thought of calling the graph routines directly in C, that is, without passing through the Python code. I am assuming there must be something which lets you call C code from Java; however, I am not good at C, hence I did not go for it. My last resort seems to be to execute the Python interpreter from the command line using Java. But that is dirty and shameless. Also, to deal with the results produced by the Python code I will have to write the results to a file and read them back in Java. Again, a dirty way. Is there something that anyone can suggest? Thanks to everyone giving time. Thanks Igal for answering. I had a look at it. At first glance it appears as if it is simply calling the Python script:

        Jep jep = new Jep(false, SCRIPT_PATH, cl);
        jep.set("query", query);
        jep.runScript(SCRIPT_PATH + file);
        jep.close();

    Isn't that very similar to what we would do if we called the Python interpreter from the command line through Java code?

        Runtime runtime = Runtime.getRuntime();
        Process proc = runtime.exec("python test.py");

    My concern is how I use the results generated by the Python script. The naive way is to write them to a file and read it back in Java. I am searching for a smarter approach. Thanks for the suggestion anyway.

    Read the article

  • Can I use POSIX signals in my Perl program to create event-driven programming?

    - by Shiftbit
    Are there any POSIX signals I could utilize in my Perl program to create event-driven programming? Currently, I have a multi-process program that is able to cross-communicate, but my parent thread is only able to listen to one child at a time.

        foreach (@proc) {
            sysread(${$_}{'read'}, my $line, 100);   #problem here
            chomp($line);
            print "Parent hears: $line\n";
        }

    The problem is that the parent sits in a continual wait state until it receives a signal from the first child before it can continue on. I am relying on 'pipe' for my intercommunication. My current solution is very similar to: http://stackoverflow.com/questions/2558098/how-can-i-use-pipe-to-facilitate-interprocess-communication-in-perl If possible I would like to rely on a $SIG{...} event or any non-CPAN solution. Update: As Jonathan Leffler mentioned, kill can be used to send a signal:

        kill USR1 => $$;   # send myself a SIGUSR1

    My solution will be to send a USR1 signal to my child process. This event tells the parent to listen to the particular child. Child:

        kill USR1 => $parentPID if($customEvent);
        syswrite($parentPipe, $msg, $buffer);
        #select $parentPipe;
        print $parentPipe $msg;

    Parent:

        $SIG{USR1} = {
            #get child pid?
            sysread($array[$pid]{'childPipe'}, $msg, $buffer);
        };

    But how do I get the source/child pid that signaled the parent? Have the child identify itself in its message? What happens if two children signal USR1 at the same time?

    Read the article

  • Use WM_COPYDATA to send data between processes

    - by Charles Gargent
    I wish to send text between processes. I have found lots of examples of this, but none that I can get working. Here is what I have so far. For the sending part:

        COPYDATASTRUCT CDS;
        CDS.dwData = 1;
        CDS.cbData = 8;
        CDS.lpData = NULL;
        SendMessage(hwnd, WM_COPYDATA, (WPARAM)hwnd, (LPARAM) (LPVOID) &CDS);

    The receiving part:

        case WM_COPYDATA:
            COPYDATASTRUCT* cds = (COPYDATASTRUCT*) lParam;

    I don't know how to construct the COPYDATASTRUCT; I have just put something in that seems to work. When debugging, the WM_COPYDATA case is executed, but again I don't know what to do with the COPYDATASTRUCT. I would like to send text between the two processes. As you can probably tell I am just starting out; I am using the GNU GCC compiler in Code::Blocks, and I am trying to avoid MFC and dependencies.
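
    Not the poster's code: a small sketch of the usual pattern. cbData is the size of the payload in bytes (for a C string, including the terminating NUL) and lpData points at it; the receiver must copy the text out before the window procedure returns, because the buffer is only valid for the duration of the SendMessage() call. The helper names are made up.

        #include <windows.h>
        #include <string.h>

        /* Sender: point lpData at the text and give its size in bytes. */
        void send_text(HWND target, HWND sender, const char *text) {
            COPYDATASTRUCT cds;
            cds.dwData = 1;                              /* application-defined tag */
            cds.cbData = (DWORD)(strlen(text) + 1);      /* byte count, including the NUL */
            cds.lpData = (PVOID)text;
            SendMessage(target, WM_COPYDATA, (WPARAM)sender, (LPARAM)&cds);
        }

        /* Receiver: called from the window procedure's WM_COPYDATA case. The data is
           only valid during the SendMessage() call, so copy it before returning. */
        LRESULT handle_copydata(LPARAM lParam) {
            const COPYDATASTRUCT *cds = (const COPYDATASTRUCT *)lParam;
            if (cds->dwData == 1 && cds->lpData != NULL) {
                char buf[256];
                lstrcpynA(buf, (const char *)cds->lpData, (int)sizeof buf);
                /* use buf ... */
            }
            return TRUE;                                 /* acknowledge the message */
        }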

    Read the article

  • How is Java Process.getOutputStream() Implemented?

    - by Amit Kumar
    I know the answer depends on the particular JVM, but I would like to understand how it is usually implemented. Is it in terms of popen (POSIX)? In terms of efficiency, do I need to keep anything in mind (other than using a buffered stream, as suggested by the javadoc)? I would be interested to know if there is a general reference about JVM implementations that answers such questions.
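
    Not from the question: on POSIX platforms the machinery underneath Runtime.exec()-style APIs is generally not popen() (which only gives one direction) but the raw pipe()/fork()/exec() pattern, with the child's standard descriptors dup2()'d onto pipe ends; getOutputStream() then corresponds to the parent's write end of the child's stdin pipe, which is why unbuffered writes each cost a system call. A rough C sketch of that pattern, illustrative only and not the actual code of any particular JVM:

        #include <string.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            int to_child[2];
            pipe(to_child);                       /* [0] = child's stdin, [1] = parent's write end */

            pid_t pid = fork();
            if (pid == 0) {
                dup2(to_child[0], STDIN_FILENO);  /* child reads the pipe as its stdin */
                close(to_child[0]);
                close(to_child[1]);
                execlp("cat", "cat", (char *)NULL);
                _exit(127);
            }
            close(to_child[0]);
            /* The parent's write end is what Process.getOutputStream() corresponds to. */
            const char *msg = "hello from the parent\n";
            write(to_child[1], msg, strlen(msg));
            close(to_child[1]);                   /* EOF for the child */
            waitpid(pid, NULL, 0);
            return 0;
        }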

    Read the article

  • Condition Variable in Shared Memory - is this code POSIX-conformant?

    - by GrahamS
    We've been trying to use a mutex and condition variable to synchronise access to named shared memory on a LynuxWorks LynxOS-SE system (POSIX-conformant). One shared memory block is called "/sync" and contains the mutex and condition variable, the other is "/data" and contains the actual data we are syncing access to. We're seeing failures from pthread_cond_signal() if both processes don't perform the mmap() calls in exactly the same order, or if one process mmaps in some other piece of shared memory before it mmaps the sync memory. This example code is about as short as I can make it: #include <sys/types.h> #include <sys/stat.h> #include <sys/mman.h> #include <sys/file.h> #include <stdlib.h> #include <pthread.h> #include <errno.h> #include <iostream> #include <string> using namespace std; static const string shm_name_sync("/sync"); static const string shm_name_data("/data"); struct shared_memory_sync { pthread_mutex_t mutex; pthread_cond_t condition; }; struct shared_memory_data { int a; int b; }; //Create 2 shared memory objects // - sync contains 2 shared synchronisation objects (mutex and condition) // - data not important void create() { // Create and map 'sync' shared memory int fd_sync = shm_open(shm_name_sync.c_str(), O_CREAT|O_RDWR, S_IRUSR|S_IWUSR); ftruncate(fd_sync, sizeof(shared_memory_sync)); void* addr_sync = mmap(0, sizeof(shared_memory_sync), PROT_READ|PROT_WRITE, MAP_SHARED, fd_sync, 0); shared_memory_sync* p_sync = static_cast<shared_memory_sync*> (addr_sync); // init the cond and mutex pthread_condattr_t cond_attr; pthread_condattr_init(&cond_attr); pthread_condattr_setpshared(&cond_attr, PTHREAD_PROCESS_SHARED); pthread_cond_init(&(p_sync->condition), &cond_attr); pthread_condattr_destroy(&cond_attr); pthread_mutexattr_t m_attr; pthread_mutexattr_init(&m_attr); pthread_mutexattr_setpshared(&m_attr, PTHREAD_PROCESS_SHARED); pthread_mutex_init(&(p_sync->mutex), &m_attr); pthread_mutexattr_destroy(&m_attr); // Create the 'data' shared memory int fd_data = shm_open(shm_name_data.c_str(), O_CREAT|O_RDWR, S_IRUSR|S_IWUSR); ftruncate(fd_data, sizeof(shared_memory_data)); void* addr_data = mmap(0, sizeof(shared_memory_data), PROT_READ|PROT_WRITE, MAP_SHARED, fd_data, 0); shared_memory_data* p_data = static_cast<shared_memory_data*> (addr_data); // Run the second process while it sleeps here. sleep(10); int res = pthread_cond_signal(&(p_sync->condition)); assert(res==0); // <--- !!!THIS ASSERT WILL FAIL ON LYNXOS!!! 
munmap(addr_sync, sizeof(shared_memory_sync)); shm_unlink(shm_name_sync.c_str()); munmap(addr_data, sizeof(shared_memory_data)); shm_unlink(shm_name_data.c_str()); } //Open the same 2 shared memory objects but in reverse order // - data // - sync void open() { sleep(2); int fd_data = shm_open(shm_name_data.c_str(), O_RDWR, S_IRUSR|S_IWUSR); void* addr_data = mmap(0, sizeof(shared_memory_data), PROT_READ|PROT_WRITE, MAP_SHARED, fd_data, 0); shared_memory_data* p_data = static_cast<shared_memory_data*> (addr_data); int fd_sync = shm_open(shm_name_sync.c_str(), O_RDWR, S_IRUSR|S_IWUSR); void* addr_sync = mmap(0, sizeof(shared_memory_sync), PROT_READ|PROT_WRITE, MAP_SHARED, fd_sync, 0); shared_memory_sync* p_sync = static_cast<shared_memory_sync*> (addr_sync); // Wait on the condvar pthread_mutex_lock(&(p_sync->mutex)); pthread_cond_wait(&(p_sync->condition), &(p_sync->mutex)); pthread_mutex_unlock(&(p_sync->mutex)); munmap(addr_sync, sizeof(shared_memory_sync)); munmap(addr_data, sizeof(shared_memory_data)); } int main(int argc, char** argv) { if(argc>1) { open(); } else { create(); } return (0); } Run this program with no args, then another copy with args, and the first one will fail at the assert checking the pthread_cond_signal(). But change the open() function to mmap() the "/sync" memory first and it will all work fine. This seems like a major bug in LynxOS but LynuxWorks claim that using mutex and condition variable in this way is not covered by the POSIX standard, so they are not interested. Can anyone determine if this code does violate POSIX? Or does anyone have any convincing documentation that it is POSIX compliant?

    Read the article
