Search Results

Search found 1680 results on 68 pages for 'berkeley sockets'.

Page 7 of 68

  • WCF or Sockets for Communication in a real-time environment?

    - by PieterG
    I have a scenario where I need to send a series of data between 2 clients. The data includes serialized XML containing commands that the other client will need to react to. I will also need to send images across the wire, as I need to provide a chat facility in the form of video/audio chat. I would like a single communication medium for both, as the number of messages/commands might be a few. WCF or Sockets?

    Read the article

  • Adobe Air turn-based multiplayer game, sockets vs HTTP bandwidth

    - by Arin Aivazian
    I am developing an Adobe Air multiplayer game for iPad. It is turn based and not realtime; it is like a checkers game. I want to use a client-server model. I have found 2 options to connect to the server so far: a socket connection and HTTP requests. My question is: is the bandwidth requirement for a socket connection vs HTTP requests different? I need the game to work with very low speed internet connections.

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second.

    So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the “owner” of the packet etc. It looks up the internal database of “rules” of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet.

    Ah – this sounds very much like a database problem. For each packet, you have to minimally:

    · Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability etc.
    · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
    · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router, in order to make this work.
    · If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop.
    · Update various statistics about the packet.

    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast: it can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board, with no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application, and no need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here.

    You have a choice: spend valuable resources to implement similar functionality, or simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
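
    A minimal sketch of what one such transactional step might look like with Berkeley DB Java Edition is shown below; the database names, the key, and the forwarding/statistics details are purely illustrative and not taken from any real router codebase.

        import com.sleepycat.je.*;
        import java.io.File;
        import java.nio.charset.StandardCharsets;

        public class PacketStoreSketch {
            public static void main(String[] args) throws Exception {
                // Open a transactional environment (the directory must already exist).
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                envConfig.setTransactional(true);
                Environment env = new Environment(new File("/tmp/router-db"), envConfig);

                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setAllowCreate(true);
                dbConfig.setTransactional(true);
                Database routes = env.openDatabase(null, "routes", dbConfig);
                Database stats = env.openDatabase(null, "stats", dbConfig);

                // Look up the next hop and update a statistic in one transaction, so the
                // forwarding entry cannot change out from under us mid-send.
                Transaction txn = env.beginTransaction(null, null);
                try {
                    DatabaseEntry key = new DatabaseEntry("10.1.2.0/24".getBytes(StandardCharsets.UTF_8));
                    DatabaseEntry nextHop = new DatabaseEntry();
                    if (routes.get(txn, key, nextHop, LockMode.RMW) == OperationStatus.SUCCESS) {
                        // a real router would hand the packet to the next hop here
                        stats.put(txn, key, new DatabaseEntry("forwarded".getBytes(StandardCharsets.UTF_8)));
                    }
                    txn.commit();
                } catch (DatabaseException e) {
                    txn.abort();
                }

                routes.close();
                stats.close();
                env.close();
            }
        }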

    Read the article

  • Connecting telephone sockets to an ATA

    - by ageis23
    We have many extension sockets in the house, so it would be fairly expensive to buy an ATA for each phone. Since extension sockets are daisy-chained together, could I just plug it into the FXS socket instead of the BT master socket? Or is the FXS port strictly for one phone only?

    Read the article

  • Delay command execution over sockets

    - by David
    I've been trying to fix the game loop in a real time (tick delay) MUD. I realized using Thread.Sleep would seem clunky when the user spammed commands through their choice of client (Zmud, etc) e.g. east;south;southwest would wait three move ticks and then output everything from the past couple rooms. The game loop basically calls a Flush and Fill method for each socket during each tick (50ms) private void DoLoop() { Stopwatch stopWatch = new Stopwatch(); stopWatch.Start(); while (running) { // for each socket, flush and fill ConnectionMonitor.Update(); stopWatch.Stop(); WaitIfNeeded(stopWatch.ElapsedMilliseconds); stopWatch.Reset(); } } The Fill method fires the command events, but as mentioned before, they currently block using Thread.Sleep. I tried adding a "ready" flag to the state object that attempts to execute the command along with a queue of spammed commands, but it ends up executing one command and queuing up the rest i.e. each subsequent command executes something that got queued up that should've been executed before. I must be missing something about the timer. private readonly Queue<SpammedCommand> queuedCommands = new Queue<SpammedCommand>(); private bool ready = true; private void TryExecuteCommand(string input) { var commandContext = CommandContext.Create(input); var player = Server.Current.Database.Get<Player>(Session.Player.Key); var commandInfo = Server.Current.CommandLookup .FindCommand(commandContext.CommandName, player.IsAdmin); if (commandInfo != null) { if (!ready) { // queue command queuedCommands.Enqueue(new SpammedCommand() { Context = commandContext, Info = commandInfo }); return; } if (queuedCommands.Count > 0) { // queue the incoming command queuedCommands.Enqueue(new SpammedCommand() { Context = commandContext, Info = commandInfo, }); // dequeue and execute var command = queuedCommands.Dequeue(); command.Info.Command.Execute(Session, command.Context); setTimeout(command.Info.TickLength); return; } commandInfo.Command.Execute(Session, commandContext); setTimeout(commandInfo.TickLength); } else { Session.WriteLine("Command not recognized"); } } Finally, setTimeout was supposed to set the execution delay (TickLength) for that command, and makeReady just sets the ready flag on the state object to true. private void setTimeout(TickDelay tickDelay) { ready = false; var t = new System.Timers.Timer() { Interval = (long) tickDelay, AutoReset = false, }; t.Elapsed += makeReady; t.Start(); // fire this in tickDelay ms } // MAKE READYYYYY!!!! private void makeReady(object sender, System.Timers.ElapsedEventArgs e) { ready = true; } Am I missing something about the System.Timers.Timer created in setTimeout? How can I execute (and output) spammed commands per TickLength without using Thread.Sleep?
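
    One common shape for this kind of tick-driven command handling is to queue every incoming command per session and let the game loop itself drain at most one command whenever that session's delay has elapsed, so nothing ever sleeps or arms a one-shot timer. A rough sketch follows (in Java rather than the C# above; Command and Session are stand-ins for the poster's types):

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Stand-in for a game command: runs its logic and reports its tick delay.
        interface Command {
            long tickLengthMs();
            void execute();
        }

        // Per-connection state: spammed commands pile up here instead of blocking.
        class Session {
            private final Queue<Command> pending = new ArrayDeque<>();
            private long readyAt = 0; // earliest time (ms) the next command may run

            synchronized void enqueue(Command c) {
                pending.add(c);
            }

            // Called once per game-loop tick; runs at most one queued command.
            synchronized void pump(long now) {
                if (now < readyAt || pending.isEmpty()) {
                    return;
                }
                Command c = pending.poll();
                c.execute();
                readyAt = now + c.tickLengthMs();
            }
        }

        // In the 50 ms game loop, instead of sleeping inside command handlers:
        //   for (Session s : sessions) { s.pump(System.currentTimeMillis()); }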

    Read the article

  • Making a connection between iPhone and Windows PC [on hold]

    - by Rajay
    I need to create a connection between iOS and Windows by using socket programming in iOS. I'm trying to use an application to communicate with Windows via sockets. At the minimum, I'm trying to at least figure out how I can create a connection from the iPhone (maybe using the iPhone to ping the Windows machine?). I'm not really clear on where I need to start. I'm pretty new to iOS development in general, and brand new to socket/network programming. I've tried several tutorials that haven't gotten me far. My goal is to connect to a server via sockets (the server will be a Windows machine with a service waiting for incoming connections from the iPhone). If possible, I would like to write/build the client piece first, but I have been lost thus far. Hopefully the nice folks in the SO community can lend a hand and point me in the right direction.

    Read the article

  • Why do socket.makefile objects fail after the first read for UDP sockets?

    - by Eli Courtwright
    I'm using the socket.makefile method to create a file-like object on a UDP socket for the purposes of reading. When I receive a UDP packet, I can read the entire contents of the packet all at once by using the read method, but if I try to split it up into multiple reads, my program hangs. Here's a program which demonstrates this problem: import socket from sys import argv SERVER_ADDR = ("localhost", 12345) sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(SERVER_ADDR) f = sock.makefile("rb") sock.sendto("HelloWorld", SERVER_ADDR) if "--all" in argv: print f.read(10) else: print f.read(5) print f.read(5) If I run the above program with the --all option, then it works perfectly and prints HelloWorld. If I run it without that option, it prints Hello and then hangs on the second read. I do not have this problem with socket.makefile objects when using TCP sockets. Why is this happening and what can I do to stop it?

    Read the article

  • Use XQuery to Access XML in Emacs

    - by Gregory Burd
    There you are, working on a multi-MB/GB/TB XML document or set of documents. You want to be able to quickly query the content, but you don't want to load the XML into a full-blown XML database; the time spent setting things up is simply too expensive. Why not combine a great open source editor, Emacs, with a great XML XQuery engine, Berkeley DB XML? That is exactly what Donnie Cameron did. Give it a try.

    Read the article

  • Apache Proxy Pass and Web Sockets

    - by James
    I'm using Apache with the mod_proxy module to reverse proxy my Node.js application through to port 80, so that we can access it as an internal application. I have a file in sites-enabled which contains this: <VirtualHost *:80> DocumentRoot /var/www/internal/ ServerName internal ServerAlias internal <Directory /var/www/internal/public/> Options All AllowOverride All Order allow,deny Allow from all </Directory> ProxyRequests off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass / http://localhost:8080/ retry=0 ProxyPassReverse / http://localhost:8080/ ProxyPreserveHost on ProxyTimeout 1200 LogLevel debug AllowEncodedSlashes on </VirtualHost> As I said, our application is written in Node.js and we're using socket.io to make use of web-sockets, as our application also contains realtime elements to it. The problem is, mod_proxy doesn't seem to handle web sockets and we get errors when trying to use them: WebSocket connection to 'ws://bloot/socket.io/1/websocket/nHtTh6ZwQjSXlmI7UMua' failed: Unexpected response code: 502 How can we fix this issue and keep sockets working, as the only way we can get it working currently is to access the site via ip:port which we don't want to do. Also, as a side question, how can I get ErrorDocument to work properly? Our error files are stored in /var/www/internal/public/error/ but they seem to get put through the proxy too?

    Read the article

  • Berkeley DB Java Edition, any LGPL or BSD alternatives in Java?

    - by Ali
    Hi All, I am dealing with a huge dataset consisting of key-value pairs. The queries are always in the form of range queries on the key space (keys are numbers), hence any persistent B-Tree-like structure will handle the situation. I would like to use BDB Java Edition, but the product is closed source and my company doesn't want to buy a BDB-JE license. I am wondering, would you please share your experience with any non-GPL, Java-based key-value storage system. Thanks, -A
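
    For what it's worth, the access pattern described (range queries over numeric keys) is the one any B-Tree-backed replacement has to support, and it can be prototyped in plain Java with a NavigableMap before committing to a storage engine. A tiny in-memory illustration, with persistence deliberately left out:

        import java.util.Map;
        import java.util.NavigableMap;
        import java.util.TreeMap;

        public class RangeQuerySketch {
            public static void main(String[] args) {
                // In-memory stand-in for a persistent B-Tree keyed by numbers.
                NavigableMap<Long, String> index = new TreeMap<>();
                index.put(100L, "alpha");
                index.put(250L, "beta");
                index.put(900L, "gamma");

                // Range query: every entry with 100 <= key <= 500.
                for (Map.Entry<Long, String> e : index.subMap(100L, true, 500L, true).entrySet()) {
                    System.out.println(e.getKey() + " -> " + e.getValue());
                }
            }
        }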

    Read the article

  • Berkeley DB: btree prefix comparison for directory-like keys?

    - by Mark Harrison
    I'm going to index a BDB with keys that look a lot like directory paths ('/foo/bar', '/foo/baz', etc, with levels of slashes generally < 10). Does anybody have any experience with using a Btree prefix comparison routine[1] for this? Are the savings worthwhile? Any references to experience papers on this subject? [1] http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/am_conf/bt_prefix.html
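
    For context, a Btree prefix routine only has to report how many leading bytes of the second key are needed to tell it apart from the first, which is exactly where long shared "/foo/..." prefixes pay off. Below is a small illustration of that computation; Berkeley DB expects this as a C callback, so the Java here is only meant to show the idea, and the exact rules should be checked against the docs linked above.

        public class BtreePrefixSketch {
            // Number of leading bytes of b needed to distinguish it from a,
            // assuming a sorts before b in plain byte order.
            static int prefixBytes(byte[] a, byte[] b) {
                int n = Math.min(a.length, b.length);
                for (int i = 0; i < n; i++) {
                    if (a[i] != b[i]) {
                        return i + 1; // first differing byte settles the order
                    }
                }
                return n + 1; // b is longer; one byte past the shared prefix
            }

            public static void main(String[] args) {
                byte[] k1 = "/foo/bar".getBytes();
                byte[] k2 = "/foo/baz".getBytes();
                System.out.println(prefixBytes(k1, k2)); // 8: only the last byte differs
            }
        }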

    Read the article

  • SO_LINGER and closing sockets (WINSOCK)

    - by Johnny Walked
    Hey, I'm writing a multithreaded Winsock application and I'm having some issues with closing the sockets. First of all, is there a limit on the number of simultaneously open sockets? Let's say 32 sockets all at once. I establish a connection on one of the sockets and pass information, and it all goes right. The problem is that when I disconnect the socket and then reconnect to the same destination, I get an RST from the server after my SYN. I don't have the code for the server app so I can't debug it. When I used SO_LINGER, an RST flag was sent at the end of each session and it worked, but I don't want to end my connections this way. When not using SO_LINGER, a FIN flag was sent but it seems the connection was not really closed. Any help? Thanks
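
    The behaviour described maps onto the difference between an abortive close (SO_LINGER with a zero timeout, which is what produces the RST) and a graceful half-close followed by draining the peer. A rough sketch of the graceful sequence, written as Java rather than Winsock since the socket-level idea is the same:

        import java.io.InputStream;
        import java.net.Socket;

        public class GracefulCloseSketch {
            static void closeGracefully(Socket s) throws Exception {
                // s.setSoLinger(true, 0) would force an immediate RST on close();
                // the graceful path is a half-close instead:
                s.shutdownOutput();                  // send FIN: "no more data from me"
                InputStream in = s.getInputStream();
                byte[] buf = new byte[4096];
                while (in.read(buf) != -1) {
                    // drain whatever the peer still sends until it closes its side too
                }
                s.close();                           // both directions are now finished
            }
        }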

    Read the article

  • How to use CFNetwork to get byte array from sockets?

    - by Vic
    Hi, I'm working in a project for the iPad, it is a small program and I need it to communicate with another software that runs on windows and act like a server; so the application that I'm creating for the iPad will be the client. I'm using CFNetwork to do sockets communication, this is the way I'm establishing the connection: char ip[] = "192.168.0.244"; NSString *ipAddress = [[NSString alloc] initWithCString:ip]; /* Build our socket context; this ties an instance of self to the socket */ CFSocketContext CTX = { 0, self, NULL, NULL, NULL }; /* Create the server socket as a TCP IPv4 socket and set a callback */ /* for calls to the socket's lower-level connect() function */ TCPClient = CFSocketCreate(NULL, PF_INET, SOCK_STREAM, IPPROTO_TCP, kCFSocketDataCallBack, (CFSocketCallBack)ConnectCallBack, &CTX); if (TCPClient == NULL) return; /* Set the port and address we want to listen on */ struct sockaddr_in addr; memset(&addr, 0, sizeof(addr)); addr.sin_len = sizeof(addr); addr.sin_family = AF_INET; addr.sin_port = htons(PORT); addr.sin_addr.s_addr = inet_addr([ipAddress UTF8String]); CFDataRef connectAddr = CFDataCreate(NULL, (unsigned char *)&addr, sizeof(addr)); CFSocketConnectToAddress(TCPClient, connectAddr, -1); CFRunLoopSourceRef sourceRef = CFSocketCreateRunLoopSource(kCFAllocatorDefault, TCPClient, 0); CFRunLoopAddSource(CFRunLoopGetCurrent(), sourceRef, kCFRunLoopCommonModes); CFRelease(sourceRef); CFRunLoopRun(); And this is the way I sent the data, which basically is a byte array /* The native socket, used for various operations */ // TCPClient is a CFSocketRef member variable CFSocketNativeHandle sock = CFSocketGetNative(TCPClient); Byte byteData[3]; byteData[0] = 0; byteData[1] = 4; byteData[2] = 0; send(sock, byteData, strlen(byteData)+1, 0); Finally, as you may have noticed, when I create the server socket, I registered a callback for the kCFSocketDataCallBack type, this is the code. void ConnectCallBack(CFSocketRef socket, CFSocketCallBackType type, CFDataRef address, const void *data, void *info) { // SocketsViewController is the class that contains all the methods SocketsViewController *obj = (SocketsViewController*)info; UInt8 *unsignedData = (UInt8 *) CFDataGetBytePtr(data); char *value = (char*)unsignedData; NSString *text = [[NSString alloc]initWithCString:value length:strlen(value)]; [obj writeToTextView:text]; [text release]; } Actually, this callback is being invoked in my code, the problem is that I don't know how can I get the data that the windows client sent me, I'm expecting to receive an array of bytes, but I don't know how can I get those bytes from the data param. If anyone can help me to find a way to do this, or maybe me point me to another way to get the data from the server in my client application I would really appreciate it. Thanks.

    Read the article

  • Sockets, Threads and Services in Android: how to make them work together?

    - by Spredzy
    Hi all, I am facing a problem with threads and sockets and I can't figure it out; if someone can help me I would really appreciate it. Here are the facts: I have a service class NetworkService, and inside this class I have a Socket attribute. I would like it to stay connected for the whole lifecycle of the service. I connect the socket in a thread, so if the server times out it will not block my UI thread. The problem is that inside the thread where I connect my socket everything is fine, it is connected and I can talk to my server, but once this thread is over and I try to reuse the socket in another thread, I get the error message Socket is not connected. Questions are: - Is the socket automatically disconnected at the end of the thread? - Is there any way we can pass back a value from a called thread to the caller? Thanks a lot. Here is my code public class NetworkService extends Service { private Socket mSocket = new Socket(); private void _connectSocket(String addr, int port) { Runnable connect = new connectSocket(this.mSocket, addr, port); new Thread(connect).start(); } private void _authentification() { Runnable auth = new authentification(); new Thread(auth).start(); } private INetwork.Stub mBinder = new INetwork.Stub() { @Override public int doConnect(String addr, int port) throws RemoteException { _connectSocket(addr, port); _authentification(); return 0; } }; class connectSocket implements Runnable { String addrSocket; int portSocket; int TIMEOUT=5000; public connectSocket(String addr, int port) { addrSocket = addr; portSocket = port; } @Override public void run() { SocketAddress socketAddress = new InetSocketAddress(addrSocket, portSocket); try { mSocket.connect(socketAddress, TIMEOUT); PrintWriter out = new PrintWriter(mSocket.getOutputStream(), true); out.println("test42"); Log.i("connectSocket()", "Connection Succesful"); } catch (IOException e) { Log.e("connectSocket()", e.getMessage()); e.printStackTrace(); } } } class authentification implements Runnable { private String constructFirstConnectQuery() { String query = "toto"; return query; } @Override public void run() { BufferedReader in; PrintWriter out; String line = ""; try { in = new BufferedReader(new InputStreamReader(mSocket.getInputStream())); out = new PrintWriter(mSocket.getOutputStream(), true); out.println(constructFirstConnectQuery()); while (mSocket.isConnected()) { line = in.readLine(); Log.e("LINE", "[Current]- " + line); } } catch (IOException e) {e.printStackTrace();} } }
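
    One way to keep a single connected socket for the whole service lifecycle is to give the service one long-lived thread that owns the socket from connect to close and feed it work through a BlockingQueue, rather than touching the socket from several short-lived threads. A stripped-down sketch of that shape (names are illustrative and error handling is minimal):

        import java.io.PrintWriter;
        import java.net.InetSocketAddress;
        import java.net.Socket;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // One thread owns the socket from connect to close; other threads just
        // hand it strings to send, so the socket is never touched concurrently.
        class ConnectionThread extends Thread {
            private final String host;
            private final int port;
            private final BlockingQueue<String> outbox = new LinkedBlockingQueue<>();

            ConnectionThread(String host, int port) {
                this.host = host;
                this.port = port;
            }

            void send(String line) {
                outbox.offer(line);
            }

            @Override
            public void run() {
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(host, port), 5000);
                    PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                    while (!isInterrupted()) {
                        out.println(outbox.take()); // blocks here, not on the UI thread
                    }
                } catch (Exception e) {
                    // connection lost or thread interrupted; the service can decide to reconnect
                }
            }
        }

    The service would start this thread in onCreate() (or from doConnect) and interrupt it when the service is destroyed; since the same thread performs the connect and every later write, there is no risk of a second thread using the socket before the connect has actually completed.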

    Read the article

  • Compatibility between Qt and Boost sockets libraries

    - by cake
    Hello. In my work, I'm developing a Viewer client for an offshore simulation server, using sockets to send the simulation data from the Simulator to the Viewer. The server uses Boost.asio as its sockets library. As the client uses Qt for its GUI, I was wondering if there is any problem in using the Qt networking library for handling the sockets. Are there any compatibility issues? Thanks in advance, and sorry for my bad English.

    Read the article

  • Why do sockets not die when the server dies? Why does a socket die when the server is alive?

    - by Roman
    I try to play with sockets a bit. For that I wrote very simple "client" and "server" applications. Client: import java.net.*; public class client { public static void main(String[] args) throws Exception { InetAddress localhost = InetAddress.getLocalHost(); System.out.println("before"); Socket clientSideSocket = null; try { clientSideSocket = new Socket(localhost,12345,localhost,54321); } catch (ConnectException e) { System.out.println("Connection Refused"); } System.out.println("after"); if (clientSideSocket != null) { clientSideSocket.close(); } } } Server: import java.net.*; public class server { public static void main(String[] args) throws Exception { ServerSocket listener = new ServerSocket(12345); while (true) { Socket serverSideSocket = listener.accept(); System.out.println("A client-request is accepted."); } } } And I found a behavior that I cannot explain: I start a server, than I start a client. Connection is successfully established (client stops running and server is running). Then I close the server and start it again in a second. After that I start a client and it writes "Connection Refused". It seems to me that the server "remember" the old connection and does not want to open the second connection twice. But I do not understand how it is possible. Because I killed the previous server and started a new one! I do not start the server immediately after the previous one was killed (I wait like 20 seconds). In this case the server "forget" the socket from the previous server and accepts the request from the client. I start the server and then I start the client. Connection is established (server writes: "A client-request is accepted"). Then I wait a minute and start the client again. And server (which was running the whole time) accept the request again! Why? The server should not accept the request from the same client-IP and client-port but it does!
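
    Whatever the explanation for the behaviour above turns out to be, the usual defensive setup for experiments like this is to enable SO_REUSEADDR on the listener before binding (so a quickly restarted server is not tripped up by connections still sitting in TIME_WAIT) and to let the client use an ephemeral local port instead of pinning 54321. Roughly:

        import java.net.InetSocketAddress;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class ReuseDemo {
            public static void main(String[] args) throws Exception {
                // Server side: set SO_REUSEADDR before bind, so a quick restart can
                // rebind even while old connections are still in TIME_WAIT.
                ServerSocket listener = new ServerSocket();
                listener.setReuseAddress(true);
                listener.bind(new InetSocketAddress(12345));

                // Client side: no fixed local port -- the OS hands out a fresh
                // ephemeral port for every connection attempt.
                Socket client = new Socket("localhost", 12345);
                client.close();

                listener.accept().close();
                listener.close();
            }
        }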

    Read the article

  • Apache2 + FCGI + PHP5 not creating sockets / pools

    - by CodyRo
    I have a pretty vanilla setup with Apache2, FastCGI set up as a DSO, and PHP served through an external CGI script that sets the max children and serves the request to PHP. The issue is that FastCGI doesn't appear to be creating the PHP sockets / pooling them, so each request calls the php-cgi binary and then dies off, effectively making the reason I want to use FastCGI moot. The only configuration directives I have are: AddHandler php5-fastcgi .php Action php5-fastcgi /cgi-bin/fcgi.cgi FastCgiIpcDir /usr/local/apache2/fastcgi The dynamic/ directory is getting created as anticipated, but there are no sockets in there. Permissions are indeed correct. Any help would be greatly appreciated, thanks!

    Read the article

  • C sockets, chat server and client, problem echoing back.

    - by wretrOvian
    Hi This is my chat server : #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <netdb.h> #include <string.h> #define LISTEN_Q 20 #define MSG_SIZE 1024 struct userlist { int sockfd; struct sockaddr addr; struct userlist *next; }; int main(int argc, char *argv[]) { // declare. int listFD, newFD, fdmax, i, j, bytesrecvd; char msg[MSG_SIZE], ipv4[INET_ADDRSTRLEN]; struct addrinfo hints, *srvrAI; struct sockaddr_storage newAddr; struct userlist *users, *uptr, *utemp; socklen_t newAddrLen; fd_set master_set, read_set; // clear sets FD_ZERO(&master_set); FD_ZERO(&read_set); // create a user list users = (struct userlist *)malloc(sizeof(struct userlist)); users->sockfd = -1; //users->addr = NULL; users->next = NULL; // clear hints memset(&hints, 0, sizeof hints); // prep hints hints.ai_family = AF_INET; hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // get srver info if(getaddrinfo("localhost", argv[1], &hints, &srvrAI) != 0) { perror("* ERROR | getaddrinfo()\n"); exit(1); } // get a socket if((listFD = socket(srvrAI->ai_family, srvrAI->ai_socktype, srvrAI->ai_protocol)) == -1) { perror("* ERROR | socket()\n"); exit(1); } // bind socket bind(listFD, srvrAI->ai_addr, srvrAI->ai_addrlen); // listen on socket if(listen(listFD, LISTEN_Q) == -1) { perror("* ERROR | listen()\n"); exit(1); } // add listfd to master_set FD_SET(listFD, &master_set); // initialize fdmax fdmax = listFD; while(1) { // equate read_set = master_set; // run select if(select(fdmax+1, &read_set, NULL, NULL, NULL) == -1) { perror("* ERROR | select()\n"); exit(1); } // query all sockets for(i = 0; i <= fdmax; i++) { if(FD_ISSET(i, &read_set)) { // found active sockfd if(i == listFD) { // new connection // accept newAddrLen = sizeof newAddr; if((newFD = accept(listFD, (struct sockaddr *)&newAddr, &newAddrLen)) == -1) { perror("* ERROR | select()\n"); exit(1); } // resolve ip if(inet_ntop(AF_INET, &(((struct sockaddr_in *)&newAddr)->sin_addr), ipv4, INET_ADDRSTRLEN) == -1) { perror("* ERROR | inet_ntop()"); exit(1); } fprintf(stdout, "* Client Connected | %s\n", ipv4); // add to master list FD_SET(newFD, &master_set); // create new userlist component utemp = (struct userlist*)malloc(sizeof(struct userlist)); utemp->next = NULL; utemp->sockfd = newFD; utemp->addr = *((struct sockaddr *)&newAddr); // iterate to last node for(uptr = users; uptr->next != NULL; uptr = uptr->next) { } // add uptr->next = utemp; // update fdmax if(newFD > fdmax) fdmax = newFD; } else { // existing sockfd transmitting data // read if((bytesrecvd = recv(i, msg, MSG_SIZE, 0)) == -1) { perror("* ERROR | recv()\n"); exit(1); } msg[bytesrecvd] = '\0'; // find out who sent? for(uptr = users; uptr->next != NULL; uptr = uptr->next) { if(i == uptr->sockfd) break; } // resolve ip if(inet_ntop(AF_INET, &(((struct sockaddr_in *)&(uptr->addr))->sin_addr), ipv4, INET_ADDRSTRLEN) == -1) { perror("* ERROR | inet_ntop()"); exit(1); } // print fprintf(stdout, "%s\n", msg); // send to all for(j = 0; j <= fdmax; j++) { if(FD_ISSET(j, &master_set)) { if(send(j, msg, strlen(msg), 0) == -1) perror("* ERROR | send()"); } } } // handle read from client } // end select result handle } // end looping fds } // end while return 0; } This is my client: #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <netdb.h> #include <string.h> #define MSG_SIZE 1024 int main(int argc, char *argv[]) { // declare. 
int newFD, bytesrecvd, fdmax; char msg[MSG_SIZE]; fd_set master_set, read_set; struct addrinfo hints, *srvrAI; // clear sets FD_ZERO(&master_set); FD_ZERO(&read_set); // clear hints memset(&hints, 0, sizeof hints); // prep hints hints.ai_family = AF_INET; hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // get srver info if(getaddrinfo(argv[1], argv[2], &hints, &srvrAI) != 0) { perror("* ERROR | getaddrinfo()\n"); exit(1); } // get a socket if((newFD = socket(srvrAI->ai_family, srvrAI->ai_socktype, srvrAI->ai_protocol)) == -1) { perror("* ERROR | socket()\n"); exit(1); } // connect to server if(connect(newFD, srvrAI->ai_addr, srvrAI->ai_addrlen) == -1) { perror("* ERROR | connect()\n"); exit(1); } // add to master, and add keyboard FD_SET(newFD, &master_set); FD_SET(STDIN_FILENO, &master_set); // initialize fdmax if(newFD > STDIN_FILENO) fdmax = newFD; else fdmax = STDIN_FILENO; while(1) { // equate read_set = master_set; if(select(fdmax+1, &read_set, NULL, NULL, NULL) == -1) { perror("* ERROR | select()"); exit(1); } // check server if(FD_ISSET(newFD, &read_set)) { // read data if((bytesrecvd = recv(newFD, msg, MSG_SIZE, 0)) < 0 ) { perror("* ERROR | recv()"); exit(1); } msg[bytesrecvd] = '\0'; // print fprintf(stdout, "%s\n", msg); } // check keyboard if(FD_ISSET(STDIN_FILENO, &read_set)) { // read data from stdin if((bytesrecvd = read(STDIN_FILENO, msg, MSG_SIZE)) < 0) { perror("* ERROR | read()"); exit(1); } msg[bytesrecvd] = '\0'; // send if((send(newFD, msg, bytesrecvd, 0)) == -1) { perror("* ERROR | send()"); exit(1); } } } return 0; } The problem is with the part where the server recv()s data from an FD, then tries echoing back to all [send() ]; it just dies, w/o errors, and my client is left looping :(

    Read the article

  • OpenBSD configuration: Client unable to mount via NFS using Berkeley Automounter (amd)

    - by Rilindo
    What I am trying to do is to have my openBSD client (OpenBSD 4.9) auto mount a Linux NFS file system (Scientific Linux 6.1). So far, I am not sure if it is configured correctly. To get things out of the way, I am able to mount nfs manually: # mount_nfs -T -3 192.168.15.100:/exports /mnt # ls -la /mnt total 52 drwxr-xr-x 7 root wheel 4096 Oct 4 22:42 . drwxr-xr-x 16 root wheel 512 Nov 26 16:33 .. drwxrwxr-x 5 _sndio _sndio 4096 Oct 31 21:58 centos drwxr-xr-x 15 root wheel 4096 Nov 6 09:17 home drwxr-xr-x 5 root wheel 4096 Oct 31 21:27 sl drwxr-xr-x 3 root wheel 4096 Nov 19 16:02 sles drwxr-xr-x 17 503 503 4096 Nov 10 17:37 users # So connectivity is not an issue, as far as I can tell. As per man page, the following is configured in /etc/amd/auto.home: /defaults type:=nfs;sublink:=${key};opts:=rw,soft,intr,vers=3,proto=tcp * rhost:=192.168.15.100;rfs:=/exports In turn, /etc/amd/master is configured as such: # cat /etc/amd/master /exports amd.home Upon reboot, I can it see mount, but curiously enough, instead of the hostname: amd:24490 0 0 0 100% /exports From what I understand, amd acts a little different from FreeBSD. Still, I tried to see if I it can automount. Nope: ksh: cd: /exports/users - Resource temporarily unavailable # cd /exports/192.168.15.100/host/users ksh: cd: /exports/192.168.15.100/host/users - Resource temporarily unavailable A search in google doesn't help too much - it seems that automounting NFS with OpenBSD is not something that is usually done. Other than this, information is fairly sparse. I can, of course, always mount is permanently, but I tend to be a bit anal on convention, so no for now. :) Some direction would be appreciation. (And oh, in case you are a wondering, I tried FreeBSD way of using amd and that hasn't worked out - although I wouldn't mind an explanation of the difference between how FreeBSD implements and how OpenBSD implements it) UPDATE: After re-writing the map file several times, I got as far as actually communicating with the NFS server with this configuration: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport However, for some reason, it seems that amd will only default to NFS version 2 over udp: # tcpdump dst kerberos tcpdump: listening on pcn0, link-type EN10MB tcpdump: WARNING: compensating for unaligned libpcap packets 20:38:28.558385 openbsd.monzell.com.856 > kerberos.monzell.com.sunrpc: udp 100 20:38:28.559154 openbsd.monzell.com.856 > kerberos.monzell.com.892: udp 96 20:38:30.592761 openbsd.monzell.com.856 > kerberos.monzell.com.nfsd: xid 0x22000000 (NFSv2) 40 null 20:38:33.558107 arp reply openbsd.monzell.com is-at 52:54:00:52:8f:66 I tried various options of forcing it to try to mount as nfsv3 such as: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport or: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=-3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport Nothing yet still. Curious enough, OpenBSD mounts defaults to version 3, so I am not sure why it would start with version in amd. What would be the correct options to pass?

    Read the article

  • Sending large serialized objects over sockets is failing only when trying to grow the byte Array, bu

    - by FinancialRadDeveloper
    I have code where I am trying to grow the byte array while receiving the data over my socket. This is erroring out. public bool ReceiveObject2(ref Object objRec, ref string sErrMsg) { try { byte[] buffer = new byte[1024]; byte[] byArrAll = new byte[0]; bool bAllBytesRead = false; int iRecLoop = 0; // grow the byte array to match the size of the object, so we can put whatever we // like through the socket as long as the object serialises and is binary formatted while (!bAllBytesRead) { if (m_socClient.Receive(buffer) > 0) { byArrAll = Combine(byArrAll, buffer); iRecLoop++; } else { m_socClient.Close(); bAllBytesRead = true; } } MemoryStream ms = new MemoryStream(buffer); BinaryFormatter bf1 = new BinaryFormatter(); ms.Position = 0; Object obj = bf1.Deserialize(ms); objRec = obj; return true; } catch (System.Runtime.Serialization.SerializationException se) { objRec = null; sErrMsg += "SocketClient.ReceiveObject " + "Source " + se.Source + "Error : " + se.Message; return false; } catch (Exception e) { objRec = null; sErrMsg += "SocketClient.ReceiveObject " + "Source " + e.Source + "Error : " + e.Message; return false; } } private byte[] Combine(byte[] first, byte[] second) { byte[] ret = new byte[first.Length + second.Length]; Buffer.BlockCopy(first, 0, ret, 0, first.Length); Buffer.BlockCopy(second, 0, ret, first.Length, second.Length); return ret; } Error: mscorlibError : The input stream is not a valid binary format. The starting contents (in bytes) are: 68-61-73-43-68-61-6E-67-65-73-3D-22-69-6E-73-65-72 ... Yet when I just cheat and use a MASSIVE buffer size its fine. public bool ReceiveObject(ref Object objRec, ref string sErrMsg) { try { byte[] buffer = new byte[5000000]; m_socClient.Receive(buffer); MemoryStream ms = new MemoryStream(buffer); BinaryFormatter bf1 = new BinaryFormatter(); ms.Position = 0; Object obj = bf1.Deserialize(ms); objRec = obj; return true; } catch (Exception e) { objRec = null; sErrMsg += "SocketClient.ReceiveObject " + "Source " + e.Source + "Error : " + e.Message; return false; } } This is really killing me. I don't know why its not working. I have lifted the Combine from a suggestion on here too, so I am pretty sure this is not doing the wrong thing? I hope someone can point out where I am going wrong
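
    The receive-loop pattern being attempted here is: append only the bytes each read actually returned, and deserialize from the assembled array once the peer has finished sending. In the posted code, the full 1024-byte buffer is appended regardless of how many bytes Receive returned, and the MemoryStream is built from buffer rather than byArrAll. A minimal Java sketch of the intended shape follows (the C# version is the same, with Socket.Receive's return value playing the role of read()'s):

        import java.io.ByteArrayOutputStream;
        import java.io.InputStream;
        import java.net.Socket;

        public class ReceiveAll {
            // Reads until the peer closes its side, keeping only the bytes actually
            // received on each pass -- never the full scratch buffer.
            static byte[] receiveAll(Socket socket) throws Exception {
                ByteArrayOutputStream assembled = new ByteArrayOutputStream();
                InputStream in = socket.getInputStream();
                byte[] scratch = new byte[1024];
                int n;
                while ((n = in.read(scratch)) != -1) {
                    assembled.write(scratch, 0, n); // append n bytes, not scratch.length
                }
                return assembled.toByteArray();     // deserialize from this, not from scratch
            }
        }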

    Read the article

  • What is stopping data flow with .NET 3.5 asynchronous System.Net.Sockets.Socket?

    - by TonyG
    I have a .NET 3.5 client/server socket interface using the asynchronous methods. The client connects to the server and the connection should remain open until the app terminates. The protocol consists of the following pattern: send stx receive ack send data1 receive ack send data2 (repeat 5-6 while more data) receive ack send etx So a single transaction with two datablocks as above would consist of 4 sends from the client. After sending etx the client simply waits for more data to send out, then begins the next transmission with stx. I do not want to break the connection between individual exchanges or after each stx/data/etx payload. Right now, after connection, the client can send the first stx, and get a single ack, but I can't put more data onto the wire after that. Neither side disconnects, the socket is still intact. The client code is seriously abbreviated as follows - I'm following the pattern commonly available in online code samples. private void SendReceive(string data) { // ... SocketAsyncEventArgs completeArgs; completeArgs.Completed += new EventHandler<SocketAsyncEventArgs>(OnSend); clientSocket.SendAsync(completeArgs); // two AutoResetEvents, one for send, one for receive if ( !AutoResetEvent.WaitAll(autoSendReceiveEvents , -1) ) Log("failed"); else Log("success"); // ... } private void OnSend( object sender , SocketAsyncEventArgs e ) { // ... Socket s = e.UserToken as Socket; byte[] receiveBuffer = new byte[ 4096 ]; e.SetBuffer(receiveBuffer , 0 , receiveBuffer.Length); e.Completed += new EventHandler<SocketAsyncEventArgs>(OnReceive); s.ReceiveAsync(e); // ... } private void OnReceive( object sender , SocketAsyncEventArgs e ) {} // ... if ( e.BytesTransferred > 0 ) { Int32 bytesTransferred = e.BytesTransferred; String received = Encoding.ASCII.GetString(e.Buffer , e.Offset , bytesTransferred); dataReceived += received; } autoSendReceiveEvents[ SendOperation ].Set(); // could be moved elsewhere autoSendReceiveEvents[ ReceiveOperation ].Set(); // releases mutexes } The code on the server is very similar except that it receives first and then sends a response - the server is not doing anything (that I can tell) to modify the connection after it sends a response. The problem is that the second time I hit SendReceive in the client, the connection is already in a weird state. Do I need to do something in the client to preserve the SocketAsyncEventArgs, and re-use the same object for the lifetime of the socket/connection? I'm not sure which eventargs object should hang around during the life of the connection or a given exchange. Do I need to do something, or Not do something in the server to ensure it continues to Receive data? The server setup and response processing looks like this: void Start() { // ... listenSocket.Bind(...); listenSocket.Listen(0); StartAccept(null); // note accept as soon as we start. OK? mutex.WaitOne(); } void StartAccept(SocketAsyncEventArgs acceptEventArg) { if ( acceptEventArg == null ) { acceptEventArg = new SocketAsyncEventArgs(); acceptEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(OnAcceptCompleted); } Boolean willRaiseEvent = this.listenSocket.AcceptAsync(acceptEventArg); if ( !willRaiseEvent ) ProcessAccept(acceptEventArg); // ... } private void OnAcceptCompleted( object sender , SocketAsyncEventArgs e ) { ProcessAccept(e); } private void ProcessAccept( SocketAsyncEventArgs e ) { // ... 
SocketAsyncEventArgs readEventArgs = new SocketAsyncEventArgs(); readEventArgs.SetBuffer(dataBuffer , 0 , Int16.MaxValue); readEventArgs.Completed += new EventHandler<SocketAsyncEventArgs>(OnIOCompleted); readEventArgs.UserToken = e.AcceptSocket; dataReceived = ""; // note server is degraded for single client/thread use // As soon as the client is connected, post a receive to the connection. Boolean willRaiseEvent = e.AcceptSocket.ReceiveAsync(readEventArgs); if ( !willRaiseEvent ) this.ProcessReceive(readEventArgs); // Accept the next connection request. this.StartAccept(e); } private void OnIOCompleted( object sender , SocketAsyncEventArgs e ) { // switch ( e.LastOperation ) case SocketAsyncOperation.Receive: ProcessReceive(e); // similar to client code // operate on dataReceived here case SocketAsyncOperation.Send: ProcessSend(e); // similar to client code } // execute this when a data has been processed into a response (ack, etc) private SendResponseToClient(string response) { // create buffer with response // currentEventArgs has class scope and is re-used currentEventArgs.SetBuffer(sendBuffer , 0 , sendBuffer.Length); Boolean willRaiseEvent = currentClient.SendAsync(currentEventArgs); if ( !willRaiseEvent ) ProcessSend(currentEventArgs); } A .NET trace shows the following when sending ABC\r\n: Socket#7588182::SendAsync() Socket#7588182::SendAsync(True#1) Data from Socket#7588182::FinishOperation(SendAsync) 00000000 : 41 42 43 0D 0A Socket#7588182::ReceiveAsync() Exiting Socket#7588182::ReceiveAsync() - True#1 And it stops there. It looks just like the first send from the client but the server shows no activity. I think that could be info overload for now but I'll be happy to provide more details as required. Thanks!

    Read the article

  • How to handle activity life cycle involving sockets in Android?

    - by Henrik
    Hello all, I have an Android activity which in turn starts a thread. In the thread I open a persistent TCP socket connection. When the socket connects to the server, dynamic data is downloaded. The thread sends messages to the activity using the Handler class when data has been received. Now, if the user happens to switch from portrait to landscape mode, the activity gets an onDestroy call. At this moment I close the socket and stop the thread. When Android has switched landscape mode it calls onCreate yet again and I have to reconnect the socket. Also, all of the data the activity received needs to be downloaded once more, because the server has no way of knowing what has been sent before, i.e. there is no "resume" feature. Thus the problem is that a lot of data is resent every time the landscape mode is changed. What are my options here? Should I create a service which handles the socket traffic towards the server, so that the service always holds all the data the server has sent? Or should I disable landscape mode altogether perhaps? Or would my best bet be to rewrite my server, which is a VERY BIG job :-) All input is welcome :-) / Henrik
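
    The service route mentioned above is the usual way out: move the socket thread into a Service with a local binder, so that rotation only destroys and recreates the Activity while the connection and the data downloaded so far survive in the service. A bare-bones outline, with all of the networking details omitted:

        import android.app.Service;
        import android.content.Intent;
        import android.os.Binder;
        import android.os.IBinder;

        // Owns the persistent TCP connection; outlives any single Activity instance,
        // so an orientation change no longer tears the socket down.
        public class ConnectionService extends Service {
            private final IBinder binder = new LocalBinder();
            private Thread socketThread; // the existing connect/download thread moves here

            public class LocalBinder extends Binder {
                public ConnectionService getService() {
                    return ConnectionService.this;
                }
            }

            @Override
            public void onCreate() {
                super.onCreate();
                // start socketThread once; cache the downloaded data in fields here
            }

            @Override
            public IBinder onBind(Intent intent) {
                // Activities bind after every recreate and re-read the cached data,
                // instead of re-downloading it from the server.
                return binder;
            }

            @Override
            public void onDestroy() {
                if (socketThread != null) {
                    socketThread.interrupt();
                }
                super.onDestroy();
            }
        }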

    Read the article
