Search Results

Search found 17651 results on 707 pages for 'unix domain sockets'.


  • Problems with making a simple UNIX shell

    - by Kodemax
Hi, I am trying to create a simple shell in UNIX. I have read a lot and found that everybody uses strtok, but I want to do the parsing without any special functions. So I wrote the code below, but I can't seem to get it to work. Can anybody point out what I am doing wrong here? void process(char**); int arg_count; char **splitcommand(char* input) { char temp[81][81] ,*cmdptr[40]; int k,done=0,no=0,arg_count=0; for(int i=0 ; input[i] != '\0' ; i++) { k=0; while(1) { if(input[i] == ' ') { arg_count++; break; } if(input[i] == '\0') { arg_count++; done = 1; break; } temp[arg_count][k++] = input[i++]; } temp[arg_count][k++] = '\0'; if(done == 1) { break; } } for(int i=0 ; i<arg_count ; i++) { cmdptr[i] = temp[i]; cout<<endl; } cout<<endl; } void process(char* cmd[]) { int pid = fork(); if(pid < 0) { cout << "Fork Failed" << endl; exit(-1); } else if( pid == 0) { cout<<endl<<"in pid"; execvp(cmd[0], cmd); } else { wait(NULL); cout << "Job's Done" << endl; } } int main() { cout<<"Welcome to shell !!!!!!!!!!!"<<endl; char input[81]; cin.getline(input,81); splitcommand(input); }
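
    The posted code is C++; as a point of comparison, here is the same split-the-line-by-hand-then-fork/execvp flow sketched in Python (the function names are just for illustration). Two things worth checking in the posted C++: cmdptr is never terminated with a NULL pointer before execvp(), and splitcommand() never returns cmdptr or passes it to process().

        import os

        def split_command(line):
            # Walk the string by hand instead of using a tokenizer such as strtok:
            # collect characters into the current word and close the word on a space.
            args, word = [], ""
            for ch in line:
                if ch == " ":
                    if word:                # skip runs of spaces
                        args.append(word)
                        word = ""
                else:
                    word += ch
            if word:
                args.append(word)
            return args                     # e.g. "ls -l /tmp" -> ["ls", "-l", "/tmp"]

        def process(args):
            pid = os.fork()
            if pid == 0:                    # child: replace this process with the command
                os.execvp(args[0], args)
            else:                           # parent: wait for the child to finish
                os.waitpid(pid, 0)
                print("Job's done")

        if __name__ == "__main__":
            args = split_command(input("myshell> "))
            if args:
                process(args)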

    Read the article

  • Xenserver 5.5 U2 a bit unstable with an unstable W2003 VM

    - by twistedbrain
In the last week I had to reboot the host system twice, the second time by means of the power button. The system is a Dell PE 6950 (4 dual-core Opterons at 2.8 GHz, 16 GB RAM, 900 GB of disk in a RAID 10 array composed of four 450 GB 15,000 rpm disks) running XenServer 5.5 U2. We are still setting it up, and at the moment a Windows 2003 32-bit server VM is running in production with 2 GB RAM, 3 vCPUs and about 200 GB of disk in 4 partitions (12 GB boot, 20 GB programs, 80 GB user data, 80 GB other data). The first time, I was compressing many Windows 2003 folders (some tens of GB, using the W2003 compressed-folder option) from a Windows remote console, and some hours earlier my colleague had installed the Backup Exec agent (which was already installed), which required a reboot that was still pending. The console stopped responding; it was no longer possible to connect via remote console or via the console in XenCenter. It was still possible over the network to use the VM's shared folders and the programs on it (2 databases and a GIS program), but the print server no longer worked and I could not trigger a remote reboot from the other domain controller hosts. I could not stop the virtual machine either from XenCenter or from the host's command line, even when forcing the reboot. I had to reboot the host server. Yesterday it was worse. I installed a template of another VM, CentOS 5.3, and put the DVD in the host's drive. Then, before the install and after the boot, I checked the DVD for defects and the W2003 VM began to respond slowly (I was connected through an administrative remote console). The task manager showed only mid or low load, that is, only the first of the 3 vCPUs was loaded (about 70%) while the other two were at about 20%, and the disk I/O was not that heavy either. The users were not happy because they could no longer use the MS Word documents on the server. I immediately stopped the DVD check (to do that I had to force the stop of the CentOS 5.3 VM); after that some users could use their documents again, but others still had problems, so I decided to reboot the VM. It would not stop, either from XenCenter, or from the command line, or by forcing the reboot. Then I tried to reboot the host, but that did not work either, neither from XenCenter nor from the host prompt (shutdown -r now as root: it said it was shutting down, but then it never did). So I had to power off with the server's power button (first I tried some magic SysRq keys, but I saw that Xen isn't compiled with that option enabled). What could you suggest about my problem: what can I look at, search for and check? In the W2003 VM logs there are no errors or warnings that explain what happened.
Some more exciting, amusing and inspiring words of poetry (the facts happened around 11 am): \# egrep -i 'err|warning' xensource.log [20100219 10:32:05.597|debug|culo|6301 unix-RPC|VBD.plug R:c81bcda701f6|xenops] watch: watching xenstore paths: [ /xapi/0/frontend/vbd/51712/hotplug; /local/domain/0/backend/vbd/0/51712/tapdisk-error ] with timeout 1200.000000 seconds [20100219 10:32:05.597|debug|culo|6301 unix-RPC|VBD.plug R:c81bcda701f6|xenops] watch: fired on /local/domain/0/backend/vbd/0/51712/tapdisk-error [20100219 10:32:14.314|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] watch: watching xenstore paths: [ /local/domain/0/backend/vbd/0/51712/shutdown-done; /local/domain/0/error/device/vbd/51712/error ] with timeout 1200.000000 seconds [20100219 10:32:14.337|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] xenstore-rm /local/domain/0/error/backend/vbd/0 [20100219 10:32:14.337|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] xenstore-rm /local/domain/0/error/device/vbd/51712 [20100219 10:32:14.338|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] watch: fired on /local/domain/0/error/device/vbd/51712/error [20100219 10:53:48.903|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|helpers] Ignoring exception: INTERNAL_ERROR: [ Xb.Noent ] while Vmops.destroy_domain: Destroying domid 14 guest session [20100219 10:53:52.048|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14 [20100219 10:53:52.048|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51744 [20100219 10:53:52.085|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14 [20100219 10:53:52.086|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51728 [20100219 10:53:52.122|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14 [20100219 10:53:52.122|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51712 [20100219 10:53:52.127|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vbd/14 [20100219 10:53:52.128|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51760 [20100219 10:53:52.496|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths [20100219 10:53:52.497|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vif/14 [20100219 10:53:52.497|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vif/0 [20100219 10:53:53.385|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths [20100219 10:53:53.386|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vif/14 [20100219 10:53:53.386|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vif/1 [20100219 10:53:53.389| warn|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|hotplug] Warning, deleting 'vif' entry from /xapi/14/hotplug/vif/0 [20100219 10:53:53.391| warn|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|hotplug] Warning, deleting 'vif' entry from 
/xapi/14/hotplug/vif/1 [20100219 11:20:49.766|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|helpers] Ignoring exception: INTERNAL_ERROR: [ Xb.Noent ] while Vmops.destroy_domain: Destroying domid 11 guest session [20100219 11:20:50.339|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/tap/11 [20100219 11:20:50.339|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/832 [20100219 11:20:50.360|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/tap/11 [20100219 11:20:50.360|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/768 [20100219 11:20:50.366|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/vbd/11 [20100219 11:20:50.366|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/5696 [20100219 11:20:50.753|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths [20100219 11:20:50.754|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/vif/11 [20100219 11:20:50.754|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vif/1 [20100219 11:20:50.757| warn|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|hotplug] Warning, deleting 'vif' entry from /xapi/11/hotplug/vif/1 [20100219 11:28:13.803|debug|culo|6610 inet-RPC|Connection to VM console R:e9f8b76e8975|console] error: INTERNAL_ERROR: [ Unix.Unix_error(63, "connect", "") ]

    Read the article

  • Server socket programming in Android 1.5, most power efficient way?

    - by Antek
Hello people, I am doing a project where I have to develop an application that listens for incoming events from a service. The device that has to listen to the events is an Android phone running Android SDK 1.5. Currently the services that raise events only implement communication through UDP or TCP sockets. I can solve my problem by setting up a ServerSocket, but I doubt that's the most power-efficient way. This application will be running most of the time, with Wi-Fi on, and I'd like to get long battery life. I've been looking on the internet for an answer for a while, but I couldn't find a real one. I've got the following questions: What is the most efficient way to listen for incoming events? Should I set up a ServerSocket, or what are my options? Are there any other implementations that are more power efficient? I've also been thinking of implementing the communication through XMPP, but I'm not sure that's the best way. I'm not tied to a specific implementation. All suggestions are welcome! Thanks for the help, Antek

    Read the article

  • Poco SocketReactors for a Proxy Server

    - by Genesis
Can anyone give me some idea of the best way to implement a non-blocking proxy server using a Poco SocketReactor? Currently I have a blocking implementation: if a readable notification arrives from the client, I write what is read directly to the server, and if a readable notification arrives from the server, I write what is read directly to the client. To achieve this I keep the thread that initiated the server connection alive, but I would prefer to switch to non-blocking and have any threads used to initiate a connection removed once the server and client sockets are registered with the reactor and the SOCKS5 handshake is over. With a SocketReactor one can register event handlers for a single socket, but the trouble is I would need to store whatever is read from that socket in a global buffer until the corresponding server socket is ready to be written to, as from my testing I don't seem to be able to just write directly to the server when client data arrives. I am thinking of using a struct that contains the client socket, server socket, client buffer and server buffer, and whenever a writable notification comes along for either the client or the server, finding the corresponding buffer and writing it. Any thoughts?
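
    For what it's worth, the pairing idea in the last sentence can be sketched language-neutrally. Below is a minimal Python select() version (the Pipe class, buffer size and timeout are made up for illustration): each direction of a proxied connection parks what it reads in its own buffer until the peer socket reports writable. A Poco version would register onReadable/onWritable handlers on the reactor instead of calling select() directly.

        import select

        class Pipe:
            """One direction of a proxied connection: bytes read from `src`
            are queued here until `dst` is writable."""
            def __init__(self, src, dst):
                self.src, self.dst, self.buf = src, dst, b""

        def proxy_loop(pipes):
            # pipes: list of Pipe objects (two per proxied connection, one per direction)
            while pipes:
                rlist = [p.src for p in pipes]
                wlist = [p.dst for p in pipes if p.buf]   # only poll writability when data is queued
                readable, writable, _ = select.select(rlist, wlist, [], 1.0)
                for p in list(pipes):
                    if p.src in readable:
                        data = p.src.recv(4096)
                        if not data:                      # peer closed; real code would tear down both directions
                            pipes.remove(p)
                            continue
                        p.buf += data                     # park the data instead of writing immediately
                    if p.dst in writable and p.buf:
                        sent = p.dst.send(p.buf)          # may be a partial write
                        p.buf = p.buf[sent:]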

    Read the article

  • Nonblocking Tcp server

    - by hoodoos
It's not really a question, I'm just looking for some guidelines :) I'm currently writing an abstract TCP server which should use as few threads as it can. Currently it works this way: I have one listening thread and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. The worker threads do all the read/write/processing work on the client sockets. My problem is in building an efficient worker process, and I've hit a problem I can't really solve yet. The worker code is something like this (the code is kept simple just to show the place where I have the problem): List<Socket> readSockets = new List<Socket>(); List<Socket> writeSockets = new List<Socket>(); List<Socket> errorSockets = new List<Socket>(); while( true ){ Socket.Select( readSockets, writeSockets, errorSockets, 10 ); foreach( readSocket in readSockets ){ // do reading here } foreach( writeSocket in writeSockets ){ // do writing here } // POINT2 and here's the problem i will describe below } It all works smoothly except for 100% CPU utilization, because the while loop keeps cycling over and over again. If my clients do a send-receive-disconnect routine it's not that painful, but if I try to keep the connections alive doing send-receive-send-receive over and over, it really eats up all the CPU. My first idea was to put a sleep there: I check whether all sockets have had their data sent and then call Thread.Sleep at POINT2 for just 10 ms, but those 10 ms later produce a delay of 10 ms whenever I want to receive the next command from a client socket. For example, if I don't try to keep connections alive, commands are executed within 10-15 ms, but with keep-alive it becomes worse by at least 10 ms :( Maybe it's just a poor architecture? What can be done so my processor doesn't hit 100% utilization and my server reacts to data appearing on a client socket as soon as possible? Maybe somebody can point to a good example of a non-blocking server and the architecture it should use?
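
    One common fix for a spinning loop like this is to block inside the select call itself rather than polling with a tiny timeout, and to ask for writability only on sockets that actually have queued data. A rough Python sketch of that shape follows (the echo processing is a placeholder); the same idea applies to Socket.Select, which, if memory serves, treats a negative timeout as "wait indefinitely".

        import select

        def worker_loop(clients):
            # clients: dict mapping socket -> bytes still queued for sending
            while clients:
                rlist = list(clients)
                wlist = [s for s, pending in clients.items() if pending]  # only when there is data to flush
                # timeout=None blocks until a socket is actually ready, so the loop
                # sleeps in the kernel instead of spinning and burning CPU
                readable, writable, _ = select.select(rlist, wlist, [], None)
                for s in readable:
                    data = s.recv(4096)
                    if not data:
                        s.close()
                        del clients[s]
                        continue
                    clients[s] += b"echo: " + data        # whatever processing/queueing the server does
                for s in writable:
                    if s in clients and clients[s]:
                        sent = s.send(clients[s])         # may be a partial send
                        clients[s] = clients[s][sent:]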

    Read the article

  • Make Python Socket Server More Efficient

    - by BenMills
    I have very little experience working with sockets and multithreaded programming so to learn more I decided to see if I could hack together a little python socket server to power a chat room. I ended up getting it working pretty well but then I noticed my server's CPU usage spiked up over 100% when I had it running in the background. Here is my code in full: http://gist.github.com/332132 I know this is a pretty open ended question so besides just helping with my code are there any good articles I could read that could help me learn more about this? My full code: import select import socket import sys import threading from daemon import Daemon class Server: def __init__(self): self.host = '' self.port = 9998 self.backlog = 5 self.size = 1024 self.server = None self.threads = [] self.send_count = 0 def open_socket(self): try: self.server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.server.bind((self.host,self.port)) self.server.listen(5) print "Server Started..." except socket.error, (value,message): if self.server: self.server.close() print "Could not open socket: " + message sys.exit(1) def remove_thread(self, t): t.join() def send_to_children(self, msg): self.send_count = 0 for t in self.threads: t.send_msg(msg) print 'Sent to '+str(self.send_count)+" of "+str(len(self.threads)) def run(self): self.open_socket() input = [self.server,sys.stdin] running = 1 while running: inputready,outputready,exceptready = select.select(input,[],[]) for s in inputready: if s == self.server: # handle the server socket c = Client(self.server.accept(), self) c.start() self.threads.append(c) print "Num of clients: "+str(len(self.threads)) self.server.close() for c in self.threads: c.join() class Client(threading.Thread): def __init__(self,(client,address), server): threading.Thread.__init__(self) self.client = client self.address = address self.size = 1024 self.server = server self.running = True def send_msg(self, msg): if self.running: self.client.send(msg) self.server.send_count += 1 def run(self): while self.running: data = self.client.recv(self.size) if data: print data self.server.send_to_children(data) else: self.running = False self.server.threads.remove(self) self.client.close() """ Run Server """ class DaemonServer(Daemon): def run(self): s = Server() s.run() if __name__ == "__main__": d = DaemonServer('/var/servers/fserver.pid') if len(sys.argv) == 2: if 'start' == sys.argv[1]: d.start() elif 'stop' == sys.argv[1]: d.stop() elif 'restart' == sys.argv[1]: d.restart() else: print "Unknown command" sys.exit(2) sys.exit(0) else: print "usage: %s start|stop|restart" % sys.argv[0] sys.exit(2)
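
    Without running the gist this is only a guess, but a classic cause of exactly this symptom is keeping sys.stdin in the select() set: once the process is daemonized, stdin is redirected or closed, select() reports it readable on every call, and the loop spins at 100% CPU. A stripped-down sketch of the accept loop with that guarded (port and commands are illustrative):

        import select, socket, sys

        def serve(port=9998):
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(("", port))
            server.listen(5)
            inputs = [server]
            # Only watch stdin when the process actually has a terminal; a daemonized
            # process sees stdin as always "readable" and the select loop never sleeps.
            if sys.stdin.isatty():
                inputs.append(sys.stdin)
            while True:
                readable, _, _ = select.select(inputs, [], [])
                for s in readable:
                    if s is server:
                        conn, _ = server.accept()
                        inputs.append(conn)
                    elif s is sys.stdin:
                        if sys.stdin.readline().strip() == "quit":
                            return
                    else:
                        if not s.recv(1024):      # empty read: client went away
                            inputs.remove(s)
                            s.close()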

    Read the article

  • why does text from socket server erase previously written text?

    - by mix
    This is strange enough I'm not sure how to search for an answer. I have a program in Python that communicates via TCP/IP sockets to a telnet-based server. If I telnet in manually and type commands like this: SET MDI G0 X0 Y0 the server will spit back a line like this: SET MDI ACK Pretty standard stuff. Here's the weird part. If, in my code, I precede my printing of each of these lines with some text, the returned line erases what I'm trying to print before it. So for example, if I write the code so it should look like this: SENT: SET MDI G0 X0 Y0 READ: SET MDI ACK What I get instead is: SENT: SET MDI G0 X0 Y0 SET MDI ACK Now, if I make the "READ: " text a bit longer, I can get a better idea of what's happening. Let's say I change READ: to 12345678901234567890, so that it should read as: 12345678901234567890: SET MDI ACK What I get instead is: SET MDI ACK234567890: So it seems like whatever text I'm getting back from the server is somehow deleting what I'm trying to precede it with. I tried saving all of my saved lines in a list, and then printing them out at the end, but it does exactly the same thing. Any ideas on what's going on, or even on how to debug this? Is there a way to get Python to show me any hidden chars in a string, for example? thx!
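
    The usual suspect for "the reply overwrites my prefix" is a carriage return: telnet-style servers end lines with CRLF, and printing a bare '\r' moves the cursor back to column 0 so the reply lands on top of "READ: ". repr() is the quickest way in Python to confirm it; a tiny sketch (the sample reply string is made up):

        def show(line):
            # repr() exposes control characters that a normal print hides
            print("raw reply: %r" % line)
            clean = line.replace("\r", "").rstrip("\n")
            print("READ: " + clean)

        show("SET MDI ACK\r\n")
        # raw reply: 'SET MDI ACK\r\n'
        # READ: SET MDI ACK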

    Read the article

  • java.net.SocketException: Software caused connection abort: recv failed; Causes and cures?

    - by IVR Avenger
Hi, all. I've got an application running on Apache Tomcat 5.5 on a Win2k3 VM. The application serves up XML to be consumed by some telephony appliances as part of our IVR infrastructure. The application, in turn, receives its information from a handful of SOAP services. This morning, the SOAP services were timing out intermittently, causing all sorts of Exceptions. Once these stopped, I noticed that our application was still performing very slowly, in that it took a long time to render and deliver pages. This sluggishness was noticed both on the appliances that consume the Tomcat output, and from a simple test of requesting some static documents from my web browser. Restarting Tomcat immediately resolved the issue. Cracking open the localhost log, I see a ton of these errors, right up until I restarted Tomcat: WARNING: Exception thrown whilst processing POSTed parameters java.net.SocketException: Software caused connection abort: recv failed After a bit of Googling, my working theory is that the SOAP issue caused my users to get errors, which caused them to make more requests, which put an increased load on the application. This caused it to run out of available sockets to handle incoming requests. So, here's my quandary: 1. Is this a valid hypothesis, or am I just in over my head with HTTP and Tomcat? 2. If this is a valid hypothesis, is there a way to increase the size of the "socket queue", so that this doesn't happen in the future? Thanks! IVR Avenger

    Read the article

  • Non-blocking TCP buffer issues.

    - by Poni
Hi! I think I'm in a problem. I have two TCP apps connected to each other which use winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst: the sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example: void some_function() { char cBuff[1024]; // filling cBuff with some data WSASend(...); // sending cBuff, non-blocking mode // filling cBuff with other data WSASend(...); // again, sending cBuff // ..... and so forth! } If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big pool of such buffers, how should I handle them, and how can I avoid a performance penalty, etc.? And if I am to use such buffers, that means I should copy the data to be sent from the source buffer into the temporary one; in that case I'd set SO_SNDBUF on each socket to zero, so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
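
    The IOCP details are C++/Winsock-specific, but the buffer-ownership idea is language-neutral: each in-flight send gets a private buffer from a pool, and the buffer returns to the pool only when the completion for that particular send arrives. A minimal Python sketch of that bookkeeping (BufferPool and send_id are made-up names, not part of any API):

        class BufferPool:
            """Hand out a private buffer per in-flight send and take it back
            only when the completion notification for that send arrives."""
            def __init__(self, size=1024, count=64):
                self._free = [bytearray(size) for _ in range(count)]
                self._in_flight = {}                 # send id -> buffer

            def acquire(self, send_id, payload):
                buf = self._free.pop() if self._free else bytearray(len(payload))
                buf[:len(payload)] = payload         # copy once, into a buffer nobody else touches
                self._in_flight[send_id] = buf       # real code would also remember len(payload)
                return buf

            def release(self, send_id):
                # called from the "write complete" handler; only now may the buffer be reused
                self._free.append(self._in_flight.pop(send_id))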

    Read the article

  • -[NSCFData writeStreamHandleEvent:]: unrecognized selector sent to instance in a stream callback

    - by user295491
    Hi everyone, I am working with streams and sockets in iPhone SDK 3.1.3 the issue is when the program accept a callback and I want to handle this writestream callback the following error is triggered " Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: ' -[NSCFData writeStreamHandleEvent:]: unrecognized selector sent to instance 0x17bc70'" But I don't know how to solve it because everything seems fine. Even when I run the debugger there is no error the program works. Any hint here will help! The code of the callback is: void myWriteStreamCallBack (CFWriteStreamRef stream, CFStreamEventType eventType, void *info){ NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; Connection *handlerEv = [(Connection *)info retain] autorelease]; [handlerEv writeStreamHandleEvent:eventType]; [pool release]; } The code of the writeStreamHandleEvent: - (void)writeStreamHandleEvent:(CFStreamEventType) eventType{ switch(eventType) { case kCFStreamEventOpenCompleted: writeStreamOpen = YES; break; case kCFStreamEventCanAcceptBytes: NSLog(@"Writing in the stream"); [self writeOutgoingBufferToStream]; break; case kCFStreamEventErrorOccurred: error = CFWriteStreamGetError(writeStream); fprintf(stderr, "CFReadStreamGetError returned (%ld, %ld)\n", error.domain, error.error); CFWriteStreamUnscheduleFromRunLoop(writeStream, CFRunLoopGetCurrent(),kCFRunLoopCommonModes); CFWriteStreamClose(writeStream); CFRelease(writeStream); break; case kCFStreamEventEndEncountered: CFWriteStreamUnscheduleFromRunLoop(writeStream, CFRunLoopGetCurrent(),kCFRunLoopCommonModes); CFWriteStreamClose(writeStream); CFRelease(writeStream); break; } } The code of the stream configuration: CFSocketContext ctx = {0, self, nil, nil, nil}; CFWriteStreamSetClient (writeStream,registeredEvents, (CFWriteStreamClientCallBack)&myWriteStreamCallBack,(CFStreamClientContext *)(&ctx) ); CFWriteStreamScheduleWithRunLoop (writeStream, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode); You can see that there is nothing strange!, well at least I don't see it. Thank you in advance.

    Read the article

  • .NET TCP socket with session

    - by Zé Carlos
Is there any way of dealing with sessions with sockets in C#? An example of my problem: I have a server with a socket listening on port 5672. TcpListener socket = new TcpListener(localAddr, 5672); socket.Start(); Console.Write("Waiting for a connection... "); // Perform a blocking call to accept requests. TcpClient client = socket.AcceptTcpClient(); Console.WriteLine("Connected to client!"); And I have two clients that will each send one byte. Client A sends 0x1 and client B sends 0x2. On the server side, I read this data like this: Byte[] bytes = new Byte[256]; String data = null; NetworkStream stream = client.GetStream(); while ((stream.Read(bytes, 0, bytes.Length)) != 0) { byte[] answer = new ... stream.Write(answer , 0, answer.Length); } Then client A sends 0x11. I need a way to know that this client is the same one that sent 0x1 before.
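
    The question is C#, but the idea is the same in any language: with TCP, the accepted socket already identifies exactly one client, so the "session" is just state keyed by that socket (in .NET, by the TcpClient you got back from AcceptTcpClient, or by its remote endpoint). A minimal Python sketch of the shape, with a made-up sessions dict:

        import socket, threading

        sessions = {}                      # one entry per accepted connection
        lock = threading.Lock()

        def handle(conn, addr):
            with lock:
                sessions[conn] = {"addr": addr, "history": []}
            try:
                while True:
                    data = conn.recv(256)
                    if not data:
                        break
                    # the accepted socket *is* the session: every byte read here
                    # came from the same client that connected on this socket
                    sessions[conn]["history"].append(data)
                    conn.sendall(b"ack")
            finally:
                with lock:
                    del sessions[conn]
                conn.close()

        def serve(port=5672):
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.bind(("", port))
            srv.listen(5)
            while True:
                conn, addr = srv.accept()
                threading.Thread(target=handle, args=(conn, addr), daemon=True).start()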

    Read the article

  • help with Firefox extension

    - by Johnny Grass
    I'm writing a Firefox extension that creates a socket server which will output the active tab's URL when a client makes a connection to it. I have the following code in my javascript file: var serverSocket; function startServer() { var listener = { onSocketAccepted : function(socket, transport) { try { var outputString = gBrowser.currentURI.spec + "\n"; var stream = transport.openOutputStream(0,0,0); stream.write(outputString,outputString.length); stream.close(); } catch(ex2){ dump("::"+ex2); } }, onStopListening : function(socket, status){} }; try { serverSocket = Components.classes["@mozilla.org/network/server-socket;1"] .createInstance(Components.interfaces.nsIServerSocket); serverSocket.init(7055,true,-1); serverSocket.asyncListen(listener); } catch(ex){ dump(ex); } document.getElementById("status").value = "Started"; } startServer(); As it is, it works for multiple tabs in a single window. If I open multiple windows, it ignores the additional windows. I think it is creating a server socket for each window, but since they are using the same port, the additional sockets fail to initialize. I need it to create a server socket when the browser launches and continue running when I close the windows (Mac OS X). As it is, when I close a window but Firefox remains running, the socket closes and I have to restart firefox to get it up an running. How do I go about that?

    Read the article

  • Help understanding linux/tcp.h

    - by Chris
I'm learning to use raw sockets, and I'm trying to parse out the TCP header data, but I can't seem to figure out what res1, ece, and cwr are. Through my networking book and Google I know what the rest stand for, but I can't seem to find anything on those three. Below is the tcphdr struct from my includes area. I've commented parts of it as I was figuring out what they stood for. struct tcphdr { __be16 source; __be16 dest; __be32 seq; __be32 ack_seq; #if defined(__LITTLE_ENDIAN_BITFIELD) _u16 res1:4, doff:4,//tcp header length fin:1,//final syn:1,//synchronization rst:1,//reset psh:1,//push ack:1,//ack urg:1,// urge ece:1, cwr:1; #elif defined(_BIG_ENDIAN_BITFIELD) __u16 doff:4,//tcp header length res1:4, cwr:1, ece:1, urg:1,//urge ack:1,//ack psh:1,//push rst:1,//reset syn:1,//synchronization fin:1;//final #else #error "Adjust your defines" #endif __be16 window; __sum16 check; __be16 urg_ptr; };
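
    For reference: cwr and ece are the two Explicit Congestion Notification flags from RFC 3168 (Congestion Window Reduced and ECN-Echo), and res1 is the remaining reserved bits sitting next to the data offset. For cross-checking the bitfield layout, here is a small Python sketch that unpacks the same flags from a raw 20-byte TCP header (the sample bytes are a hand-made SYN with ECE and CWR set):

        import struct

        def parse_tcp_flags(tcp_header):
            # First 14 bytes: src port, dst port, seq, ack_seq, then one byte of
            # data-offset/reserved and one byte of flags, in network byte order.
            src, dst, seq, ack_seq, offset_res, flags = struct.unpack("!HHLLBB", tcp_header[:14])
            return {
                "doff": offset_res >> 4,           # header length in 32-bit words
                "cwr": (flags >> 7) & 1,           # Congestion Window Reduced (ECN, RFC 3168)
                "ece": (flags >> 6) & 1,           # ECN-Echo (ECN, RFC 3168)
                "urg": (flags >> 5) & 1,
                "ack": (flags >> 4) & 1,
                "psh": (flags >> 3) & 1,
                "rst": (flags >> 2) & 1,
                "syn": (flags >> 1) & 1,
                "fin": flags & 1,
            }

        # a SYN segment with ECE+CWR set, as an ECN-capable host would send it
        print(parse_tcp_flags(bytes.fromhex("0050d431000000010000000050c2ffff00000000")))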

    Read the article

  • How can I work around WinXP using ports 1025-5000 as ephemeral?

    - by Chris Dolan
    If you create a TCP client socket with port 0 instead of a non-zero port, then the operating system chooses any free ephemeral port for you. Most OSes choose ephemeral ports from the IANA dynamic port range of 49152-65535. However in Windows Server 2003 and earlier (including XP) Microsoft used ports 1025-5000 as the ephemeral range, according to their bind() documentation. I run multiple Java services on the same hardware. On rare occasions, this range collides with well-known ports that I use for other services (e.g. port 4160 for Jini discovery). While rare, this has caused real problems. Is there any easy way to tell Windows or Java to use a different port range for client sockets? Microsoft's docs indicate that I can change the high end of that range via the MaxUserPort TcpIP registry setting, but I see no way to change the low end. Update: I've made some progress on this. It looks like Microsoft has a concept of reserved ports that are exceptions to the ephemeral port range. There's a registry setting that lets you change this permanently and apparently there must be an API to do the same thing because there's a data structure that holds high/low values for reserved port ranges, but I can't find the actual function call anywhere... The registry solution may work, but now I'm fixated on this API.
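
    Not the reserved-ports API mentioned above, but one workaround that sidesteps the ephemeral range entirely is to bind the client socket to an explicit local port yourself before connecting, so the OS never has to pick one from 1025-5000. A rough Python sketch of the idea (the 49152-65535 range is just the IANA dynamic range used as an example; in Java, Socket has a similar bind-before-connect form):

        import socket

        def connect_from_range(remote, low=49152, high=65535):
            for port in range(low, high + 1):
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                try:
                    s.bind(("", port))        # claim an explicit local port...
                except OSError:
                    s.close()                 # ...skipping any already in use locally
                    continue
                s.connect(remote)             # ...and only then connect out
                return s
            raise OSError("no free local port in %d-%d" % (low, high))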

    Read the article

  • server/ client server connection

    - by user312054
I have a server-side program that creates a listening server-side socket. The problem is that when the client side sends a connect request, it appears to be rejected if the server-side socket is listening, but appears to connect if the server-side program is not running. When debugging I can see the server-side program receiving the client request. It seems as if the client cannot connect to a listening socket. Any suggestions on a resolution? The server-side accept code snippet is this: void CSocketListen::OnAccept(int nErrorCode) { CSocket::OnAccept(nErrorCode); CSocketServer* SocketPtr = new CSocketServer(); if (Accept(*SocketPtr)) { // add to list of client sockets connected } else { delete SocketPtr; } The client-side connect code is like this: SOCKET cellModem; sockaddr_in handHeld; handHeld.sin_family = AF_INET; //Address family handHeld.sin_addr.s_addr = inet_addr("127.0.0.1"); handHeld.sin_port = htons((u_short)1113); //port to use cellModem=socket(AF_INET,SOCK_STREAM,0); if(cellModem == INVALID_SOCKET) { // log socket failure return false; } else { // log socket success } if (connect(cellModem,(const struct sockaddr*)&handHeld, sizeof(handHeld)) != 0 ) { // log socket connection success } else { // log socket connection failure closesocket(cellModem); }
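
    Two things may help narrow this down. First, in the posted client, connect() returns 0 on success and SOCKET_ERROR on failure, so the success/failure logging branches look swapped, which would neatly explain "connects when the server is not running". Second, a throwaway loopback test like the Python sketch below (port 1113 as in the question, names illustrative) confirms whether connect-to-a-listening-socket works at all outside the application:

        import socket, threading, time

        def tiny_listener(port=1113):
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("127.0.0.1", port))
            srv.listen(5)
            conn, addr = srv.accept()
            print("server: accepted connection from", addr)
            conn.close()
            srv.close()

        threading.Thread(target=tiny_listener, daemon=True).start()
        time.sleep(0.2)                              # crude wait for the listener to come up

        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.connect(("127.0.0.1", 1113))               # raises ConnectionRefusedError if nothing is listening
        print("client: connected")
        c.close()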

    Read the article

  • C# TCP socket with session

    - by Zé Carlos
Is there any way of dealing with sessions with sockets in C#? An example of my problem: I have a server with a socket listening on port 5672. TcpListener socket = new TcpListener(localAddr, 5672); socket.Start(); Console.Write("Waiting for a connection... "); // Perform a blocking call to accept requests. TcpClient client = socket.AcceptTcpClient(); Console.WriteLine("Connected to client!"); And I have two clients that will each send one byte. Client A sends 0x1 and client B sends 0x2. On the server side, I read this data like this: Byte[] bytes = new Byte[256]; String data = null; NetworkStream stream = client.GetStream(); while ((stream.Read(bytes, 0, bytes.Length)) != 0) { byte[] answer = new ... stream.Write(answer , 0, answer.Length); } Then client A sends 0x11. I need a way to know that this client is the same one that sent 0x1 before.

    Read the article

  • How do I make a web interface for a socket server

    - by mgroat
I've got a socket server running (it's basically something like a chat server). Users can telnet into it, but I'd like to make a web interface. This is the first time I've ever done something like this, so I'm not really sure where to start. A few thoughts I've had: Have some server-side Python (or PHP) on my webserver which accesses the socket server. I think I know enough about sockets to have Python interact with the server, but how do I go about getting the website that the user sees to update in real time? Should I just have the website refresh every few seconds? I would prefer to do things this way if I can figure out how. Write a Java applet that interacts with the socket server, and embed the applet in the website. I would have to re-learn a language that I haven't touched in years, but my main goal here is learning, so that wouldn't be such a bad thing. The main problem I have with this is that it requires end users to have Java installed on their computers, which I'd rather not require. Is one of these two solutions the right way to go? Anybody know where I can find a good tutorial to get started? Edit: There's no real security concern with exposing the server to the internet.

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except perhaps the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between those PHP children. So I thought about writing my own socket server, in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP) I'm not very familiar with sockets, since I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL, but in RAM as an array or object, for best speed. As far as I know there is no way to handle multiple connections at literally the same time in a single PHP process (socket server); however, you can accept a large number of socket connections and process them successively in a loop (read and write; an incoming message gets written to all socket connections). The problem is that there will most likely be lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users. My question is: can node.js handle such a socket server with better performance? Node.js is event-based, but I'm not sure whether it can process multiple events at the same time (wouldn't that need multithreading?) or whether there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but AFAIK there are single-threaded IRC servers (written in C++ or whatever) which are capable of handling thousands of users, so maybe it's also possible in PHP. I would prefer PHP over node.js because then the project would be PHP-only and not a mixture of programming languages, but if Node can process connections simultaneously I'd probably choose it.

    Read the article

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on... Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network, it seems like 8191 is the largest number I can use. When I try to use any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)? I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?
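
    I can't say for certain why 8191 in particular behaves differently, but since the default send buffer is the suspect, it's easy to test that hypothesis by reading SO_SNDBUF, enlarging it, and re-measuring the transfer. A small Python sketch of the inspection (the 64 KB figure is arbitrary; the same options exist through Winsock's setsockopt):

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Default send-buffer size as the OS reports it; if the transfer slows down
        # whenever the application's chunk size reaches this value, the buffer size
        # is a likely part of the story.
        print("SO_SNDBUF =", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))

        # Make the kernel buffer comfortably larger than the chunk size and retest
        # before tuning the chunk size itself.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
        print("SO_SNDBUF =", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))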

    Read the article

  • Non-blocking TCP connection issues.

    - by Poni
Hi! I think I'm in a problem. I have two TCP apps connected to each other which use winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst: the sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example: void some_function() { char cBuff[1024]; // filling cBuff with some data WSASend(...); // sending cBuff, non-blocking mode // filling cBuff with other data WSASend(...); // again, sending cBuff // ..... and so forth! } If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big pool of such buffers, and how should I handle them, etc.? And if I am to use such buffers, that means I should copy the data to be sent from the source buffer into the temporary one; in that case I'd set SO_SNDBUF on each socket to zero, so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.

    Read the article

  • Boost Asio UDP retrieve last packet in socket buffer

    - by Alberto Toglia
I have been messing around with Boost Asio for some days now, but I got stuck on this weird behavior. Please let me explain. Computer A sends continuous UDP packets every 500 ms to computer B; computer B wants to read A's packets at its own pace, but only wants A's last packet, obviously the most up-to-date one. It has come to my attention that when I do: mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint); I can get OLD packets that were not processed (almost every time). Does this make any sense? A friend of mine told me that sockets maintain a buffer of packets, and therefore if I read at a lower frequency than the sender this could happen. So, the first question is: how is it possible to receive the last packet and discard the ones I missed? Later I tried using the async example from the Boost documentation, but found it did not do what I wanted. http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime6.html From what I could tell, async_receive_from should call the method "handle_receive" when a packet arrives, and that works for the first packet after the service is "run". If I want to keep listening on the port, I should call async_receive_from again in the handler code, right? BUT what I found is that I get an infinite loop: it doesn't wait for the next packet, it just enters "handle_receive" again and again. I'm not writing a server application, and a lot of other things are going on (it's a game), so my second question is: do I have to use threads to use the async receive method properly? Is there some example with threads and async receive? Thanks for your attention.
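
    On the first question: the kernel queues datagrams per socket, so a slow reader always sees the oldest one first. The usual trick is to drain the socket without blocking and keep only the final datagram; here is that loop sketched in Python (the same drain-until-empty idea can be written with Asio's non-blocking reads), with the function name made up:

        import socket

        def latest_datagram(sock):
            """Drain everything queued on a non-blocking UDP socket and return only
            the most recently received datagram (or None if nothing was waiting)."""
            sock.setblocking(False)
            last = None
            while True:
                try:
                    data, addr = sock.recvfrom(65535)
                    last = (data, addr)        # keep overwriting: the final value is the newest packet
                except BlockingIOError:        # queue is empty
                    break
            return last

        # usage sketch: the reader polls at its own pace, stale packets are simply discarded
        # sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); sock.bind(("", 5005))
        # newest = latest_datagram(sock)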

    Read the article

  • Socket isn't listed by netstat unless using certain ports

    - by illuzive
    I'm a computer science student with a few years of programming experience. Yesterday, while working on a project (Mac OS X, BSD sockets) at school, I encountered a strange problem. I was adding several modules to a very basic "server" (mostly a bunch of functions to set up and manage an UDP socket on a certain port). While doing this, I started the server from time to time in order to see that everything worked like it should. I've been using port 32000 during the development of the server. When I start the server and run netstat, the socket is listed as expected. > netstat -p UDP | grep 32000 udp46 0 0 *.32000 *.* However, when I run the server on other ports (random (10000 - 50000)), it's not listed by netstat. My thought was that I had somehow hard coded the port somewhere in the code, but that's not the case. The thing is - I can connect to the socket on any of the tested ports, and it reads data sent to it without any problem at all. It just doesn't get listed by netstat. What I wonder, is if anyone of you have any idea of why this happens? Note: Although this is a project at school, it's not homework. This is just something I want to understand for my own benefit.

    Read the article

  • Grails - Removing an item from a hasMany association List on data bind?

    - by ecrane
    Grails offers the ability to automatically create and bind domain objects to a hasMany List, as described in the grails user guide. So, for example, if my domain object "Author" has a List of many "Book" objects, I could create and bind these using the following markup (from the user guide): <g:textField name="books[0].title" value="the Stand" /> <g:textField name="books[1].title" value="the Shining" /> <g:textField name="books[2].title" value="Red Madder" /> In this case, if any of the books specified don't already exist, Grails will create them and set their titles appropriately. If there are already books in the specified indices, their titles will be updated and they will be saved. My question is: is there some easy way to tell Grails to remove one of those books from the 'books' association on data bind? The most obvious way to do this would be to omit the form element that corresponds to the domain instance you want to delete; unfortunately, this does not work, as per the user guide: Then Grails will automatically create a new instance for you at the defined position. If you "skipped" a few elements in the middle ... Then Grails will automatically create instances in between. I realize that a specific solution could be engineered as part of a command object, or as part of a particular controller- however, the need for this functionality appears repeatedly throughout my application, across multiple domain objects and for associations of many different types of objects. A general solution, therefore, would be ideal. Does anyone know if there is something like this included in Grails?

    Read the article

  • InvalidOperationException: The Undo operation encountered a context that is different from what was

    - by McN
    I got the following exception: Exception Type: System.InvalidOperationException Exception Message: The Undo operation encountered a context that is different from what was applied in the corresponding Set operation. The possible cause is that a context was Set on the thread and not reverted(undone). Exception Stack: at System.Threading.SynchronizationContextSwitcher.Undo() at System.Threading.ExecutionContextSwitcher.Undo() at System.Threading.ExecutionContext.runFinallyCode(Object userData, Boolean exceptionThrown) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteBackoutCodeHelper(Object backoutCode, Object userData, Boolean exceptionThrown) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.ContextAwareResult.Complete(IntPtr userToken) at System.Net.LazyAsyncResult.ProtectedInvokeCallback(Object result, IntPtr userToken) at System.Net.Sockets.BaseOverlappedAsyncResult.CompletionPortCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) Exception Source: mscorlib Exception TargetSite.Name: Undo Exception HelpLink: The application is a Visual Studio 2005 (.Net 2.0) console application. It is a server for multiple TCP/IP connections, doing asynchronous socket reads and synchronous socket writes. In searching for an answer I came across this post which talks about a call to Application.Doevents() which I don't use in my code. I also found this post which has a resolution involved with Component which I also don't use in my code. The application does reference a library that I created that contains custom user controls and components, but they are not being used by the application. Question: What caused this to happen and how do I prevent this from happening again? Or a more realistic question: What does this exception actually mean? How is "context" defined in this situation? Anything that can help me understand what is going on would be very much appreciated.

    Read the article

  • Socket select() Handling Abrupt Disconnections

    - by Genesis
    I am currently trying to fix a bug in a proxy server I have written relating to the socket select() call. I am using the Poco C++ libraries (using SocketReactor) and the issue is actually in the Poco code which may be a bug but I have yet to receive any confirmation of this from them. What is happening is whenever a connection abruptly terminates the socket select() call is returning immediately which is what I believe it is meant to do? Anyway, it returns all of the disconnected sockets within the readable set of file descriptors but the problem is that an exception "Socket is not connected" is thrown when Poco tries to fire the onReadable event handler which is where I would be putting the code to deal with this. Given that the exception is silently caught and the onReadable event is never fired, the select() call keeps returning immediately resulting in an infinite loop in the SocketReactor. I was considering modifying the Poco code so that rather than catching the exception silently it fires a new event called onDisconnected or something like that so that a cleanup can be performed. My question is, are there any elegant ways of determining whether a socket has closed abnormally using select() calls? I was thinking of using the exception message to determine when this has occured but this seems dirty to me.
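
    Outside of Poco, the portable way to see a dead peer from a select() loop is the read itself: a readable socket whose recv() returns zero bytes was closed in an orderly way, and an ECONNRESET-style error means it went away abruptly. In both cases the socket has to leave the select set, or select() keeps returning it immediately and the loop spins, which matches the behavior described above. A small Python sketch of that handling (poll_clients is a made-up name):

        import select

        def poll_clients(clients):
            readable, _, _ = select.select(clients, [], [], 1.0)
            for s in readable:
                try:
                    data = s.recv(4096)
                except ConnectionResetError:
                    data = b""                      # treat an abrupt abort like a disconnect
                if not data:
                    clients.remove(s)               # stop select() from reporting it forever
                    s.close()                       # "onDisconnected" cleanup would go here
                else:
                    pass                            # normal onReadable handling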

    Read the article
