Search Results

Search found 333 results on 14 pages for 'posix'.

Page 4/14

  • Get directory path by fd

    - by tylerl
    I've run into the need to be able to refer to a directory by path given its file descriptor in Linux. The path doesn't have to be canonical, it just has to be functional so that I can pass it to other functions. So, taking the same parameters as passed to a function like fstatat(), I need to be able to call a function like getxattr() which doesn't have an f-XYZ-at() variant. So far I've come up with these solutions, though none are particularly elegant. The simplest solution is to avoid the problem by calling openat() and then using a function like fgetxattr(). This works, but not in every situation. So another method is needed to fill the gaps. The next solution involves looking up the information in proc: if (!access("/proc/self/fd",X_OK)) { sprintf(path,"/proc/self/fd/%i/",fd); } This, of course, totally breaks on systems without proc, including some chroot environments. The last option, a more portable but potentially-race-condition-prone solution, looks like this: DIR* save = opendir("."); fchdir(fd); getcwd(path,PATH_MAX); fchdir(dirfd(save)); closedir(save); The obvious problem here is that in a multithreaded app, changing the working directory around could have side effects. However, the fact that it works is compelling: if I can get the path of a directory by calling fchdir() followed by getcwd(), why shouldn't I be able to just get the information directly with an fgetcwd() or something? Clearly the kernel is tracking the necessary information. So how do I get to it?
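
    A minimal sketch of the /proc idea using readlink() on the fd's symlink rather than handing the symlink path itself to other calls (assuming a Linux system with /proc mounted; fd_to_path() is an illustrative name):

        #include <stdio.h>
        #include <unistd.h>

        /* Resolve a directory fd to a usable path via the /proc symlink.
         * Returns 0 on success, -1 on failure (e.g. no /proc available). */
        static int fd_to_path(int fd, char *buf, size_t bufsize)
        {
            char link[64];
            ssize_t n;

            snprintf(link, sizeof(link), "/proc/self/fd/%d", fd);
            n = readlink(link, buf, bufsize - 1);
            if (n < 0)
                return -1;
            buf[n] = '\0';          /* readlink() does not NUL-terminate */
            return 0;
        }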

    Read the article

  • implementing ioctl() commands in FreeBSD

    - by thecoffman
    I am adding some code to an existing FreeBSD device driver and I am trying to pass a char* from user space to the driver. I've implemented a custom ioctl() command using the _IOW macro like so: #define TIBLOOMFILTER _IOW(0,253,char*) My call looks something like this: int file_desc = open("/dev/ti0", O_RDWR); ioctl(file_desc, TIBLOOMFILTER, (*filter).getBitArray()); close(file_desc); When I call ioctl() I get "Inappropriate ioctl for device" as the error message. Any guess as to what I may be doing wrong? I've defined the same macro in my device driver and added it to the case statement.
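
    One thing worth checking: _IOW(0, 253, char*) tells the ioctl machinery to copy in a pointer-sized value, not the buffer it points to, and a group code of 0 is unusual. A sketch of the more common pattern with a fixed-size payload so the kernel copies the data itself (the group letter, command number, and struct are illustrative, not the driver's real interface):

        #include <sys/ioccom.h>

        /* Fixed-size payload; the kernel copies sizeof(struct ti_filter)
         * bytes between user space and the driver for us. */
        struct ti_filter {
            unsigned char bits[1024];
        };

        #define TIBLOOMFILTER _IOW('T', 253, struct ti_filter)

        /* Userland side (error checking omitted):
         *
         *     struct ti_filter f;
         *     memcpy(f.bits, filter->getBitArray(), sizeof(f.bits));
         *     ioctl(file_desc, TIBLOOMFILTER, &f);
         *
         * The driver's ioctl routine then receives the payload via its
         * 'data' argument and can switch on TIBLOOMFILTER.
         */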

    Read the article

  • Suggestions for duplicate file finder algorithm (using C)

    - by Andrei Ciobanu
    Hello, I wanted to write a program that test if two files are duplicates (have exactly the same content). First I test if the files have the same sizes, and if they have i start to compare their contents. My first idea, was to "split" the files into fixed size blocks, then start a thread for every block, fseek to startup character of every block and continue the comparisons in parallel. When a comparison from a thread fails, the other working threads are canceled, and the program exits out of the thread spawning loop. The code looks like this: dupf.h #ifndef __NM__DUPF__H__ #define __NM__DUPF__H__ #define NUM_THREADS 15 #define BLOCK_SIZE 8192 /* Thread argument structure */ struct thread_arg_s { const char *name_f1; /* First file name */ const char *name_f2; /* Second file name */ int cursor; /* Where to seek in the file */ }; typedef struct thread_arg_s thread_arg; /** * 'arg' is of type thread_arg. * Checks if the specified file blocks are * duplicates. */ void *check_block_dup(void *arg); /** * Checks if two files are duplicates */ int check_dup(const char *name_f1, const char *name_f2); /** * Returns a valid pointer to a file. * If the file (given by the path/name 'fname') cannot be opened * in 'mode', the program is interrupted an error message is shown. **/ FILE *safe_fopen(const char *name, const char *mode); #endif dupf.c #include <errno.h> #include <pthread.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> #include "dupf.h" FILE *safe_fopen(const char *fname, const char *mode) { FILE *f = NULL; f = fopen(fname, mode); if (f == NULL) { char emsg[255]; sprintf(emsg, "FOPEN() %s\t", fname); perror(emsg); exit(-1); } return (f); } void *check_block_dup(void *arg) { const char *name_f1 = NULL, *name_f2 = NULL; /* File names */ FILE *f1 = NULL, *f2 = NULL; /* Streams */ int cursor = 0; /* Reading cursor */ char buff_f1[BLOCK_SIZE], buff_f2[BLOCK_SIZE]; /* Character buffers */ int rchars_1, rchars_2; /* Readed characters */ /* Initializing variables from 'arg' */ name_f1 = ((thread_arg*)arg)->name_f1; name_f2 = ((thread_arg*)arg)->name_f2; cursor = ((thread_arg*)arg)->cursor; /* Opening files */ f1 = safe_fopen(name_f1, "r"); f2 = safe_fopen(name_f2, "r"); /* Setup cursor in files */ fseek(f1, cursor, SEEK_SET); fseek(f2, cursor, SEEK_SET); /* Initialize buffers */ rchars_1 = fread(buff_f1, 1, BLOCK_SIZE, f1); rchars_2 = fread(buff_f2, 1, BLOCK_SIZE, f2); if (rchars_1 != rchars_2) { /* fread failed to read the same portion. 
* program cannot continue */ perror("ERROR WHEN READING BLOCK"); exit(-1); } while (rchars_1-->0) { if (buff_f1[rchars_1] != buff_f2[rchars_1]) { /* Different characters */ fclose(f1); fclose(f2); pthread_exit("notdup"); } } /* Close streams */ fclose(f1); fclose(f2); pthread_exit("dup"); } int check_dup(const char *name_f1, const char *name_f2) { int num_blocks = 0; /* Number of 'blocks' to check */ int num_tsp = 0; /* Number of threads spawns */ int tsp_iter = 0; /* Iterator for threads spawns */ pthread_t *tsp_threads = NULL; thread_arg *tsp_threads_args = NULL; int tsp_threads_iter = 0; int thread_c_res = 0; /* Thread creation result */ int thread_j_res = 0; /* Thread join res */ int loop_res = 0; /* Function result */ int cursor; struct stat buf_f1; struct stat buf_f2; if (name_f1 == NULL || name_f2 == NULL) { /* Invalid input parameters */ perror("INVALID FNAMES\t"); return (-1); } if (stat(name_f1, &buf_f1) != 0 || stat(name_f2, &buf_f2) != 0) { /* Stat fails */ char emsg[255]; sprintf(emsg, "STAT() ERROR: %s %s\t", name_f1, name_f2); perror(emsg); return (-1); } if (buf_f1.st_size != buf_f2.st_size) { /* File have different sizes */ return (1); } /* Files have the same size, function exec. is continued */ num_blocks = (buf_f1.st_size / BLOCK_SIZE) + 1; num_tsp = (num_blocks / NUM_THREADS) + 1; cursor = 0; for (tsp_iter = 0; tsp_iter < num_tsp; tsp_iter++) { loop_res = 0; /* Create threads array for this spawn */ tsp_threads = malloc(NUM_THREADS * sizeof(*tsp_threads)); if (tsp_threads == NULL) { perror("TSP_THREADS ALLOC FAILURE\t"); return (-1); } /* Create arguments for every thread in the current spawn */ tsp_threads_args = malloc(NUM_THREADS * sizeof(*tsp_threads_args)); if (tsp_threads_args == NULL) { perror("TSP THREADS ARGS ALLOCA FAILURE\t"); return (-1); } /* Initialize arguments and create threads */ for (tsp_threads_iter = 0; tsp_threads_iter < NUM_THREADS; tsp_threads_iter++) { if (cursor >= buf_f1.st_size) { break; } tsp_threads_args[tsp_threads_iter].name_f1 = name_f1; tsp_threads_args[tsp_threads_iter].name_f2 = name_f2; tsp_threads_args[tsp_threads_iter].cursor = cursor; thread_c_res = pthread_create( &tsp_threads[tsp_threads_iter], NULL, check_block_dup, (void*)&tsp_threads_args[tsp_threads_iter]); if (thread_c_res != 0) { perror("THREAD CREATION FAILURE"); return (-1); } cursor+=BLOCK_SIZE; } /* Join last threads and get their status */ while (tsp_threads_iter-->0) { void *thread_res = NULL; thread_j_res = pthread_join(tsp_threads[tsp_threads_iter], &thread_res); if (thread_j_res != 0) { perror("THREAD JOIN FAILURE"); return (-1); } if (strcmp((char*)thread_res, "notdup")==0) { loop_res++; /* Closing other threads and exiting by condition * from loop. */ while (tsp_threads_iter-->0) { pthread_cancel(tsp_threads[tsp_threads_iter]); } } } free(tsp_threads); free(tsp_threads_args); if (loop_res > 0) { break; } } return (loop_res > 0) ? 1 : 0; } The function works fine (at least for what I've tested). Still, some guys from #C (freenode) suggested that the solution is overly complicated, and it may perform poorly because of parallel reading on hddisk. What I want to know: Is the threaded approach flawed by default ? Is fseek() so slow ? Is there a way to somehow map the files to memory and then compare them ?
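
    On the last question: yes, both files can be mapped and compared with memcmp(), which avoids the per-thread fopen()/fseek() work entirely. A minimal sketch (assumes a non-zero size and that both files fit comfortably in the address space; for huge files you would map and compare a window at a time, and cleanup on the error paths is trimmed):

        #include <fcntl.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* Returns 1 if the files differ, 0 if identical, -1 on error. */
        static int compare_mmap(const char *n1, const char *n2, off_t size)
        {
            int fd1 = open(n1, O_RDONLY);
            int fd2 = open(n2, O_RDONLY);
            if (fd1 < 0 || fd2 < 0)
                return -1;

            void *p1 = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd1, 0);
            void *p2 = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd2, 0);
            close(fd1);
            close(fd2);
            if (p1 == MAP_FAILED || p2 == MAP_FAILED)
                return -1;

            int diff = memcmp(p1, p2, size) != 0;
            munmap(p1, size);
            munmap(p2, size);
            return diff;
        }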

    Read the article

  • Can I ignore a SIGFPE resulting from division by zero?

    - by Mikeage
    I have a program which deliberately performs a divide by zero (and stores the result in a volatile variable) in order to halt in certain circumstances. However, I'd like to be able to disable this halting, without changing the macro that performs the division by zero. Is there any way to ignore it? I've tried using #include <signal.h> ... int main(void) { signal(SIGFPE, SIG_IGN); ... } but it still dies with the message "Floating point exception (core dumped)". I don't actually use the value, so I don't really care what's assigned to the variable; 0, random, undefined... EDIT: I know this is not the most portable, but it's intended for an embedded device which runs on many different OSes. The default halt action is to divide by zero; other platforms require different tricks to force a watchdog induced reboot (such as an infinite loop with interrupts disabled). For a PC (linux) test environment, I wanted to disable the halt on division by zero without relying on things like assert.
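
    For an integer divide by zero, SIG_IGN (or a handler that simply returns) is not enough: the behaviour is undefined, and in practice the faulting instruction is just retried. A sketch of one way to survive it on the Linux test build, by jumping over the division from a handler (assuming the dividing macro can be wrapped in the sigsetjmp() check):

        #include <setjmp.h>
        #include <signal.h>
        #include <stdio.h>
        #include <string.h>

        static sigjmp_buf fpe_jmp;

        static void fpe_handler(int sig)
        {
            (void)sig;
            siglongjmp(fpe_jmp, 1);          /* skip past the faulting division */
        }

        int main(void)
        {
            struct sigaction sa;
            memset(&sa, 0, sizeof(sa));
            sa.sa_handler = fpe_handler;
            sigaction(SIGFPE, &sa, NULL);

            volatile int zero = 0;
            volatile int result = -1;

            if (sigsetjmp(fpe_jmp, 1) == 0)
                result = 1 / zero;           /* raises SIGFPE */

            printf("still running, result = %d\n", result);
            return 0;
        }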

    Read the article

  • Different Linux message queues have the same id?

    - by Halo
    I open a mesage queue in a .c file, and upon success it says the message queue id is 3. While that program is still running, in another terminal I start another program (of another .c file), that creates a new message queue with a different mqd_t. But its id also appears as 3. Is this a problem? server file goes like this: void server(char* req_mq) { struct mq_attr attr; mqd_t mqdes; struct request* msgptr; int n; char *bufptr; int buflen; pid_t apid; //attr.mq_maxmsg = 300; //attr.mq_msgsize = 1024; mqdes = mq_open(req_mq, O_RDWR | O_CREAT, 0666, NULL); if (mqdes == -1) { perror("can not create msg queue\n"); exit(1); } printf("server mq created, mq id = %d\n", (int) mqdes); and the client goes like: void client(char* req_mq, int min, int max, char* dir_path_name, char* outfile) { pid_t pid; /* get the process id */ if ((pid = getpid()) < 0) { perror("unable to get client pid"); } mqd_t mqd, dq; char pfx[50] = DQ_PRFX; char suffix[50]; // sprintf(suffix, "%d", pid); strcat(pfx, suffix); dq = mq_open(pfx, O_RDWR | O_CREAT, 0666, NULL); if (dq == -1) { perror("can not open data queue\n"); exit(1); } printf("data queue created, mq id = %d\n", (int) dq); mqd = mq_open(req_mq, O_RDWR); if (mqd == -1) { perror("can not open msg queue\n"); exit(1); } mqdes and dq seem to share the same id 3.

    Read the article

  • getnameinfo specifies socklen_t

    - by bobby
    The 2nd arg of the getnameinfo prototype asks for a socklen_t, but sizeof yields a size_t. So how can I get a socklen_t? Prototype: int getnameinfo(const struct sockaddr *restrict sa, socklen_t salen, char *restrict node, socklen_t nodelen, char *restrict service, socklen_t servicelen, int flags); Example: struct sockaddr_in SIN; memset(&SIN, 0, sizeof(SIN)); // This should also be socklen_t ? SIN.sin_family = AF_INET; SIN.sin_addr.s_addr = inet_addr(IP); SIN.sin_port = 0; getnameinfo((struct sockaddr *)&SIN, sizeof(SIN) /* socklen_t */, BUFFER, NI_MAXHOST, NULL, 0, 0);
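
    socklen_t is just an integer type, so the usual approach is to keep the length in a socklen_t variable (or convert the sizeof result); the implicit size_t-to-socklen_t conversion is well defined for any sockaddr-sized value. A short sketch:

        #include <arpa/inet.h>
        #include <netdb.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>

        int main(void)
        {
            struct sockaddr_in sin;
            socklen_t sinlen = sizeof(sin);   /* sizeof yields size_t; the implicit
                                                 conversion to socklen_t is fine */
            char host[NI_MAXHOST];

            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_addr.s_addr = inet_addr("127.0.0.1");
            sin.sin_port = 0;

            if (getnameinfo((struct sockaddr *)&sin, sinlen,
                            host, sizeof(host), NULL, 0, 0) == 0)
                printf("%s\n", host);
            return 0;
        }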

    Read the article

  • pthread and recursively calling execvp in C

    - by eduke
    To begin, I'm sorry for my English :) I'm looking for a way to create a thread each time my program finds a directory, in order to call the program itself but with a new argv[2] argument (which is the current dir). I did it successfully with fork() but with pthreads I'm having some difficulties. I don't know if I can do something like this: #include <unistd.h> #include <stdlib.h> #include <stdio.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <sys/wait.h> #include <dirent.h> int main(int argc, char **argv) { pthread_t threadID[10] = {0}; DIR * dir; struct dirent * entry; struct stat status; pthread_attr_t attr; pthread_attr_init(&attr); int i = 0; char *res; char *tmp; char *file; if(argc != 3) { printf("Usage : %s <file> <dir>\n", argv[0]); exit(EXIT_FAILURE); } if(stat(argv[2],&status) == 0) { dir = opendir(argv[2]); file = argv[1]; } else exit(EXIT_FAILURE); while ((entry = readdir(dir))) { if (strcmp(entry->d_name, ".") && strcmp(entry->d_name, "..")) { tmp = malloc(strlen(argv[2]) + strlen(entry->d_name) + 2); strcpy(tmp, argv[2]); strcat(tmp, "/"); strcat(tmp, entry->d_name); stat(tmp, &status); if (S_ISDIR(status.st_mode)) { argv[2] = tmp; pthread_create( &threadID[i], &attr, execvp(argv[0], argv), NULL); printf("New thread created : %d", i); i++; } else if (!strcmp(entry->d_name, file)) { printf(" %s was found - Thread number = %d\n",tmp, i); break; } free(tmp); } } pthread_join( threadID[i] , &res ); exit(EXIT_SUCCESS); } Actually it doesn't work: pthread_create( &threadID[i], &attr, execvp(argv[0], argv), NULL); I have no runtime error, but when the file to find is in another directory, the thread is not created and so execvp(argv[0], argv) is not called... Thank you for your help, Simon
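
    Passing execvp(argv[0], argv) as the third argument calls execvp() immediately and hands its return value to pthread_create(); on top of that, a successful exec would replace the whole process, threads included. A sketch of the usual shape, where the thread runs an in-process function instead (search_dir() and its argument struct are illustrative):

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct walk_arg {
            char *dir;      /* directory this thread should search */
            char *file;     /* file name to look for */
        };

        static void *search_dir(void *p)
        {
            struct walk_arg *a = p;
            /* ... opendir(a->dir), readdir(), recurse or spawn more threads ... */
            printf("searching %s for %s\n", a->dir, a->file);
            return NULL;
        }

        /* In the directory loop, instead of exec'ing ourselves: */
        static int spawn_search(pthread_t *tid, const char *dir, const char *file)
        {
            struct walk_arg *a = malloc(sizeof(*a));
            if (a == NULL)
                return -1;
            a->dir = strdup(dir);
            a->file = strdup(file);
            return pthread_create(tid, NULL, search_dir, a);
        }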

    Read the article

  • Linux and I/O completion ports?

    - by someguy
    Using winsock, you can configure sockets or separate I/O operations to "overlap". This means that calls to perform I/O return immediately, while the actual operations are completed asynchronously by separate worker threads. Winsock also provides "completion ports". From what I understand, a completion port acts as a multiplexer of handles (sockets). A handle can be demultiplexed if it isn't in the middle of an I/O operation, i.e. if all its I/O operations are completed. So, on to my question... does Linux support completion ports or even asynchronous I/O for sockets?
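
    Linux has no direct equivalent of I/O completion ports for sockets; the common substitute is readiness notification with epoll (or, on recent kernels, true completion-style submission with io_uring). A minimal epoll sketch (error handling trimmed; sock is assumed to be an already-created non-blocking socket):

        #include <sys/epoll.h>
        #include <unistd.h>

        void wait_for_readable(int sock)
        {
            int epfd = epoll_create1(0);
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock };
            struct epoll_event ready[16];

            epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);

            int n = epoll_wait(epfd, ready, 16, -1);   /* block until readable */
            for (int i = 0; i < n; i++) {
                /* ready[i].data.fd is now readable; call recv() here */
                (void)ready[i];
            }
            close(epfd);
        }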

    Read the article

  • stat() doesn't find a file in C++

    - by user1432779
    On Linux 12.04 I have an executable file located at, say, /a/b/exe and a config file at /a/b/config. When doing: cd /a/b/ ./exe everything's OK and the stat function finds the config file in /a/b/. HOWEVER, when running it from the root directory as /a/b/exe, stat doesn't find the config file. Any idea why? It makes it impossible to run the binary from a script that isn't run from the folder of the exe.... Thanks
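
    stat("config") resolves the name against the current working directory, which is / (not /a/b) when the program is started from there as /a/b/exe. A sketch of one way to locate the config next to the executable regardless of the caller's cwd (Linux-specific, via /proc/self/exe):

        #include <libgen.h>
        #include <limits.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Build "<dir of executable>/config" into buf; returns 0 on success. */
        static int config_path(char *buf, size_t bufsize)
        {
            char exe[PATH_MAX];
            ssize_t n = readlink("/proc/self/exe", exe, sizeof(exe) - 1);
            if (n < 0)
                return -1;
            exe[n] = '\0';
            snprintf(buf, bufsize, "%s/config", dirname(exe));
            return 0;
        }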

    Read the article

  • Setting creation or change timestamps

    - by Thanatos
    Using utimes, futimes, futimens, etc., it is possible to set the access and modification timestamps on a file. Is there a function to set change timestamps? (I understand the cyclic nature of wanting to change the change timestamp, but think archiving software - it would be nice to restore a file exactly as it was.) Are there any functions at all for creation timestamps? (I realize that ext2 does not support this, but I was wondering if Linux did, for those filesystems that do support it.) If it's not possible, what is the reasoning behind it not being so?
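
    For access and modification times the modern call is utimensat(); the change time (ctime) is updated by the kernel as a side effect and cannot be set through any portable API, and a creation (birth) time, where the filesystem records one, is only exposed read-only (e.g. via statx() on newer Linux kernels). A sketch of restoring atime/mtime the way an archiver might:

        #include <fcntl.h>
        #include <sys/stat.h>

        /* Restore atime/mtime on 'path' from a previously saved struct stat. */
        static int restore_times(const char *path, const struct stat *saved)
        {
            struct timespec times[2];
            times[0] = saved->st_atim;   /* access time */
            times[1] = saved->st_mtim;   /* modification time */
            return utimensat(AT_FDCWD, path, times, 0);
        }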

    Read the article

  • Function overloading in C

    - by Andrei Ciobanu
    Today, looking at the man page for open(), I've noticed this function is 'overloaded': int open(const char *pathname, int flags); int open(const char *pathname, int flags, mode_t mode); I didn't think this was possible in C. What's the 'trick' for achieving it?
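
    There is no overloading in C; open() is actually declared as a variadic function (roughly int open(const char *path, int oflag, ...)), and it only reads the mode argument when O_CREAT is present in the flags. A small sketch of the same trick (my_open and MY_CREAT are illustrative):

        #include <stdarg.h>
        #include <stdio.h>

        #define MY_CREAT 0x1

        int my_open(const char *path, int flags, ...)
        {
            unsigned int mode = 0;

            if (flags & MY_CREAT) {          /* only then is a third argument read */
                va_list ap;
                va_start(ap, flags);
                mode = va_arg(ap, unsigned int);
                va_end(ap);
            }
            printf("open %s flags=%d mode=%o\n", path, flags, mode);
            return 0;
        }

        int main(void)
        {
            my_open("a.txt", 0);                 /* "two-argument" form */
            my_open("b.txt", MY_CREAT, 0644);    /* "three-argument" form */
            return 0;
        }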

    Read the article

  • pthread functions "_np" suffix

    - by osgx
    What does "_np" suffix mean here: pthread_mutex_timedlock_np or in macros PTHREAD_MUTEX_TIMED_NP Upd: From glibc2.2 enum { PTHREAD_MUTEX_TIMED_NP, PTHREAD_MUTEX_RECURSIVE_NP, PTHREAD_MUTEX_ERRORCHECK_NP, PTHREAD_MUTEX_ADAPTIVE_NP #ifdef __USE_UNIX98 , PTHREAD_MUTEX_NORMAL = PTHREAD_MUTEX_TIMED_NP, PTHREAD_MUTEX_RECURSIVE = PTHREAD_MUTEX_RECURSIVE_NP, PTHREAD_MUTEX_ERRORCHECK = PTHREAD_MUTEX_ERRORCHECK_NP, PTHREAD_MUTEX_DEFAULT = PTHREAD_MUTEX_NORMAL #endif #ifdef __USE_GNU /* For compatibility. */ , PTHREAD_MUTEX_FAST_NP = PTHREAD_MUTEX_ADAPTIVE_NP #endif }; Does defining __USE_UNIX98 change portability of _NP functions/macro?

    Read the article

  • Daemonize() issues on Debian

    - by djTeller
    Hi, I'm currently writing a multi-process client and a multi-threaded server for some project I have. The server is a daemon. In order to accomplish that, I'm using the following daemonize() code: static void daemonize(void) { pid_t pid, sid; /* already a daemon */ if ( getppid() == 1 ) return; /* Fork off the parent process */ pid = fork(); if (pid < 0) { exit(EXIT_FAILURE); } /* If we got a good PID, then we can exit the parent process. */ if (pid > 0) { exit(EXIT_SUCCESS); } /* At this point we are executing as the child process */ /* Change the file mode mask */ umask(0); /* Create a new SID for the child process */ sid = setsid(); if (sid < 0) { exit(EXIT_FAILURE); } /* Change the current working directory. This prevents the current directory from being locked; hence not being able to remove it. */ if ((chdir("/")) < 0) { exit(EXIT_FAILURE); } /* Redirect standard files to /dev/null */ freopen( "/dev/null", "r", stdin); freopen( "/dev/null", "w", stdout); freopen( "/dev/null", "w", stderr); } int main( int argc, char *argv[] ) { daemonize(); /* Now we are a daemon -- do the work for which we were paid */ return 0; } I have a strange side effect when testing the server on Debian (Ubuntu). The accept() function always fails to accept connections; the value returned is -1. I have no idea what's causing this, since on RedHat & CentOS it works well. When I remove the call to daemonize(), everything works well on Debian; when I add it back, the same accept() error reproduces. I've been monitoring /proc//fd, and everything looks good. Something about daemonize() and this Debian release just doesn't seem to work. (Debian GNU/Linux 5.0, Linux 2.6.26-2-286 #1 SMP) Any idea what's causing this? Thank you

    Read the article

  • Does waitpid yield valid status information for a child process that has already exited?

    - by dtrebbien
    If I fork a child process, and the child process exits before the parent even calls waitpid, then is the exit status information that is set by waitpid still valid? If so, when does it become not valid; i.e., how do I ensure that I can call waitpid on the child pid and continue to get valid exit status information after an arbitrary amount of time, and how do I "clean up" (tell the OS that I am no longer interested in the exit status information for the finished child process)? I was playing around with the following code, and it appears that the exit status information is valid for at least a few seconds after the child finishes, but I do not know for how long or how to inform the OS that I won't be calling waitpid again: #include <assert.h> #include <pthread.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/wait.h> int main() { pid_t pid = fork(); if (pid < 0) { fprintf(stderr, "Failed to fork\n"); return EXIT_FAILURE; } else if (pid == 0) { // code for child process _exit(17); } else { // code for parent sleep(3); int status; waitpid(pid, &status, 0); waitpid(pid, &status, 0); // call `waitpid` again just to see if the first call had an effect assert(WIFEXITED(status)); assert(WEXITSTATUS(status) == 17); } return EXIT_SUCCESS; }

    Read the article

  • What is the best way to manage unix processes from Java?

    - by erotsppa
    I'm looking for some simple tasks like listing all the running processes of a user, or killing a particular process by pid, etc. Basic unix process management from Java. Is there a library out there that is relatively mature and documented? I could run an external command from the JVM and then parse the standard output/error, but that seems like a lot of work and not robust at all. Any suggestions?

    Read the article

  • Are there any platforms where using structure copy on an fd_set (for select() or pselect()) causes p

    - by Jonathan Leffler
    The select() and pselect() system calls modify their arguments (the 'struct fd_set *' arguments), so the input value tells the system which file descriptors to check and the return values tell the programmer which file descriptors are currently usable. If you are going to call them repeatedly for the same set of file descriptors, you need to ensure that you have a fresh copy of the descriptors for each call. The obvious way to do that is to use a structure copy: struct fd_set ref_set_rd; struct fd_set ref_set_wr; struct fd_set ref_set_er; ... ...code to set the reference fd_set_xx values... ... while (!done) { struct fd_set act_set_rd = ref_set_rd; struct fd_set act_set_wr = ref_set_wr; struct fd_set act_set_er = ref_set_er; int bits_set = select(max_fd, &act_set_rd, &act_set_wr, &act_set_er, &timeout); if (bits_set > 0) { ...process the output values of act_set_xx... } } My question: Are there any platforms where it is not safe to do a structure copy of the struct fd_set values as shown? I'm concerned lest there be hidden memory allocation or anything unexpected like that. (There are macros/functions FD_SET(), FD_CLR(), FD_ZERO() and FD_ISSET() to mask the internals from the application.) I can see that MacOS X (Darwin) is safe; other BSD-based systems are likely to be safe, therefore. You can help by documenting other systems that you know are safe in your answers. (I do have minor concerns about how well the struct fd_set would work with more than 8192 open file descriptors - the default maximum number of open files is only 256, but the maximum number is 'unlimited'. Also, since the structures are 1 KB, the copying code is not dreadfully efficient, but then running through a list of file descriptors to recreate the input mask on each cycle is not necessarily efficient either. Maybe you can't do select() when you have that many file descriptors open, though that is when you are most likely to need the functionality.) There's a related SO question - asking about 'poll() vs select()' which addresses a different set of issues from this question.
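
    For what it's worth, a sketch of the rebuild-each-iteration alternative, which relies only on the documented FD_ZERO()/FD_SET() interface rather than on fd_set being a plain copyable structure (the fds array and nfds count are illustrative):

        #include <sys/select.h>

        /* Rebuild the read set from an array of descriptors before each select(). */
        static int make_read_set(fd_set *set, const int *fds, int nfds)
        {
            int maxfd = -1;

            FD_ZERO(set);
            for (int i = 0; i < nfds; i++) {
                FD_SET(fds[i], set);
                if (fds[i] > maxfd)
                    maxfd = fds[i];
            }
            return maxfd + 1;    /* value to pass as select()'s first argument */
        }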

    Read the article

  • What is the best approach to iterate over and "store" the files in a directory in C (Linux)?

    - by Andrei Ciobanu
    I have written a function that checks if two files are duplicates or not. The function signature is: int check_dup_memmap(char *f1_name, char *f2_name) It returns: (-1) - If something went wrong; (0) - If the two files are similar; (+1) - If the two files are different; The next step is to write a function that iterates through all the files in a certain directory, applies the previous function, and gives a report on every existing duplicate. Initially I thought to write a function that generates a file with all the filenames in a certain directory, then read that file again and again and compare every two files. Here is that version of the function, which gets all the filenames in a certain directory. void *build_dir_tree(char *dirname, FILE *f) { DIR *cdir = NULL; struct dirent *ent = NULL; struct stat buf; if(f == NULL){ fprintf(stderr, "NULL file submitted. [build_dir_tree].\n"); exit(-1); } if(dirname == NULL){ fprintf(stderr, "NULL dirname submitted. [build_dir_tree].\n"); exit(-1); } if((cdir = opendir(dirname)) == NULL){ char emsg[MFILE_LEN]; sprintf(emsg, "Cannot open dir: %s [build_dir_tree]\t",dirname); perror(emsg); } chdir(dirname); while ((ent = readdir(cdir)) != NULL) { lstat(ent->d_name, &buf); if (S_ISDIR(buf.st_mode)) { if (strcmp(".", ent->d_name) == 0 || strcmp("..", ent->d_name) == 0) { continue; } build_dir_tree(ent->d_name, f); } else{ fprintf(f, "/%s/%s\n",util_get_cwd(),ent->d_name); } } chdir(".."); closedir(cdir); } Still, I consider this approach a little inefficient, as I have to parse the file again and again. In your opinion, what other approaches should I follow: Write a data structure and hold the files instead of writing them to a file? I think for a directory with a lot of files, the memory will become very fragmented. Hold all the filenames in an auto-expanding array, so that I can easily access every file by its index, because they will be in a contiguous memory location. Map this file in memory using mmap()? But mmap may fail, as the file gets too big. Any opinions on this? I want to choose the most efficient path, and access as few resources as possible. This is the requirement of the program... EDIT: Is there a way to get the number of files in a certain directory, without iterating through it?
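
    One way to avoid the intermediate file entirely is to let nftw() walk the tree and collect regular-file names into an auto-expanding array, which can then be compared pairwise. A sketch (the array handling is minimal and error checking is trimmed):

        #define _XOPEN_SOURCE 500
        #include <ftw.h>
        #include <stdlib.h>
        #include <string.h>

        static char **names;         /* collected file paths */
        static size_t nnames, cap;

        static int collect(const char *fpath, const struct stat *sb,
                           int typeflag, struct FTW *ftwbuf)
        {
            (void)sb; (void)ftwbuf;
            if (typeflag == FTW_F) {                 /* non-directory files only */
                if (nnames == cap) {
                    cap = cap ? cap * 2 : 64;        /* auto-expanding array */
                    names = realloc(names, cap * sizeof(*names));
                }
                names[nnames++] = strdup(fpath);
            }
            return 0;                                /* keep walking */
        }

        /* usage: nftw(dirname, collect, 20, FTW_PHYS); then compare names[i] pairs */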

    Read the article

  • What can cause a spontaneous EPIPE error without either end calling close() or crashing?

    - by Hongli
    I have an application that consists of two processes (let's call them A and B), connected to each other through Unix domain sockets. Most of the time it works fine, but some users report the following behavior: A sends a request to B. This works. A now starts reading the reply from B. B sends a reply to A. The corresponding write() call returns an EPIPE error, and as a result B close() the socket. However, A did not close() the socket, nor did it crash. A's read() call returns 0, indicating end-of-file. A thinks that B prematurely closed the connection. Users have also reported variations of this behavior, e.g.: A sends a request to B. This works partially, but before the entire request is sent A's write() call returns EPIPE, and as a result A close() the socket. However B did not close() the socket, nor did it crash. B reads a partial request and then suddenly gets an EOF. The problem is I cannot reproduce this behavior locally at all. I've tried OS X and Linux. The users are on a variety of systems, mostly OS X and Linux. Things that I've already tried and considered: Double close() bugs (close() is called twice on the same file descriptor): probably not as that would result in EBADF errors, but I haven't seen them. Increasing the maximum file descriptor limit. One user reported that this worked for him, the rest reported that it did not. What else can possibly cause behavior like this? I know for certain that neither A nor B close() the socket prematurely, and I know for certain that neither of them have crashed because both A and B were able to report the error. It is as if the kernel suddenly decided to pull the plug from the socket for some reason.

    Read the article

  • Multithreading with STL container

    - by Steven
    I have an unordered map which stores pointers to objects. I am not sure whether I am doing the correct thing to maintain thread safety. typedef std::unordered_map<string, classA*> MAP1; MAP1 map1; pthread_mutex_lock(&mutexA); if(map1.find(id) != map1.end()) { pthread_mutex_unlock(&mutexA); //already exists, not adding items } else { classA* obj1 = new classA; map1[id] = obj1; obj1->obtainMutex(); //Should I create a mutex for each object so that I could obtain the mutex when I am going to update fields of obj1? pthread_mutex_unlock(&mutexA); //release mutex for the unordered_map so that other threads could access other objects obj1->field1 = 1; performOperation(obj1); //takes some time obj1->releaseMutex(); //release mutex after updating obj1 }

    Read the article

  • Checking if a file is a directory or just a file.

    - by Jookia
    I'm writing a program to check if something is a file or is a directory. Is there a better way to do it than this? #include <stdio.h> #include <sys/types.h> #include <dirent.h> #include <errno.h> int isFile(const char* name) { DIR* directory = opendir(name); if(directory != NULL) { closedir(directory); return 0; } if(errno == ENOTDIR) { return 1; } return -1; } int main(void) { const char* file = "./testFile"; const char* directory = "./"; printf("Is %s a file? %s.\n", file, ((isFile(file) == 1) ? "Yes" : "No")); printf("Is %s a directory? %s.\n", directory, ((isFile(directory) == 0) ? "Yes" : "No")); return 0; }
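
    A sketch of the more direct approach with stat(), which avoids opening the file at all and also distinguishes regular files from other non-directories (error handling kept minimal):

        #include <stdio.h>
        #include <sys/stat.h>

        /* Returns 1 for a regular file, 0 for a directory, -1 otherwise/on error. */
        static int is_file(const char *name)
        {
            struct stat st;

            if (stat(name, &st) != 0)
                return -1;
            if (S_ISDIR(st.st_mode))
                return 0;
            if (S_ISREG(st.st_mode))
                return 1;
            return -1;
        }

        int main(void)
        {
            printf("./testFile: %d\n", is_file("./testFile"));
            printf("./: %d\n", is_file("./"));
            return 0;
        }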

    Read the article
