Search Results

Search found 1931 results on 78 pages for 'bsd sockets'.

  • fsockopen soap request

    - by gosom
    I am trying to send a SOAP message to a service using PHP. I want to do it with fsockopen; here is the code:

        <?php
        $fp = @fsockopen("ssl://xmlpropp.worldspan.com", 443, $errno, $errstr);
        if (!is_resource($fp)) {
            die('fsockopen call failed with error number ' . $errno . '.' . $errstr);
        }
        $soap_out  = "POST /xmlts HTTP/1.1\r\n";
        $soap_out .= "Host: 212.127.18.11:8800\r\n";
        //$soap_out .= "User-Agent: MySOAPisOKGuys \r\n";
        $soap_out .= "Content-Type: text/xml; charset='utf-8'\r\n";
        $soap_out .= "Content-Length: 999\r\n\r\n";
        $soap_put .= "Connection: close\r\n";
        $soap_out .= "SOAPAction:\r\n";
        $soap_out .= ' Worldspan This is a test ';
        if (!fputs($fp, $soap_out, strlen($soap_out)))
            echo "could not write";
        echo "<xmp>" . $soap_out . "</xmp>";
        echo "--------------------<br>";
        while (!feof($fp)) {
            $soap_in .= fgets($fp, 100);
        }
        echo "<xmp>$soap_in</xmp>";
        fclose($fp);
        echo "ok";

    The above code just hangs. If I remove the while loop it prints "ok", so I suppose the problem is at $soap_in .= fgets($fp, 100). Any ideas of what is happening?
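
    Two things in the request above can each produce exactly this hang, noted here as a guess rather than a verdict: Content-Length is hard-coded to 999 while the actual body is shorter, so the server may wait for bytes that never arrive; and Connection: close lands after the header-terminating blank line (and on a misspelled variable, $soap_put), so even a served response leaves the connection open and feof() blocks until the server times out. A minimal C sketch of the intended framing over a plain BSD socket (host, path and body are the question's placeholders, not verified endpoints):

        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/socket.h>

        /* Assumes 'fd' is already connected (via connect() or a TLS wrapper).
         * Every header, including Connection: close, must appear BEFORE the
         * single blank line that separates headers from the body. */
        static int send_soap(int fd, const char *body)
        {
            char req[2048];
            int n = snprintf(req, sizeof req,
                "POST /xmlts HTTP/1.1\r\n"
                "Host: 212.127.18.11:8800\r\n"
                "Content-Type: text/xml; charset=utf-8\r\n"
                "Content-Length: %zu\r\n"   /* computed, not hard-coded */
                "Connection: close\r\n"     /* before the blank line */
                "SOAPAction: \"\"\r\n"
                "\r\n"                      /* end of headers */
                "%s", strlen(body), body);
            if (n < 0 || (size_t)n >= sizeof req)
                return -1;
            if (send(fd, req, (size_t)n, 0) != n)
                return -1;
            /* With Connection: close the server closes when done, so
             * reading until recv() returns 0 terminates instead of hanging. */
            char buf[512];
            ssize_t r;
            while ((r = recv(fd, buf, sizeof buf, 0)) > 0)
                fwrite(buf, 1, (size_t)r, stdout);
            return r == 0 ? 0 : -1;
        }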

  • Java - Save video stream from Socket to File

    - by Alex
    I use my Android application for streaming video from the phone camera to my PC server and need to save it to a file on the HDD. The file is created and the stream is saved successfully, but the resulting file cannot be played with any video player (GOM, KMP, Windows Media Player, VLC, etc.): no picture, no sound, only playback errors. I tested my Android application on the phone, and in that case the captured video is stored successfully on the phone's SD card and plays without errors after transferring it to the PC, so my code is correct. In the end, I realized that the problem is the video container: data is streamed from the phone in MP4 format and stored in *.mp4 files on the PC, and in this case the file may be invalid for playback with video players. Can anyone suggest how to correctly save streaming video to a file? Here is my code that processes and stores the stream data (without error handling, to simplify):

        // getOutputMediaFile() returns a new File object
        DataInputStream in = new DataInputStream(server.getInputStream());
        FileOutputStream videoFile = new FileOutputStream(getOutputMediaFile());
        int len;
        byte buffer[] = new byte[8192];
        while ((len = in.read(buffer)) != -1) {
            videoFile.write(buffer, 0, len);
        }
        videoFile.close();
        server.close();

    Also, I would appreciate it if someone could talk about the possible pitfalls of saving media streams. Thank you, I hope for your help! Alex.

  • Mysterious socket / HttpWebRequest timeouts

    - by Daniel Mošmondor
    I have a great app for capturing Shoutcast streams :-). So far it has worked like a charm on dozens of machines and never exhibited the behaviour I'm seeing now, which is ultimately very strange. I use HttpWebRequest to connect to the Shoutcast server, and when I connect two streams everything is OK. When I go for a third one,

        response = (HttpWebResponse)request.GetResponse();

    throws a connection timeout exception. WTF? I must point out that I had to create a .config for the application in order to allow my headers to be sent out; otherwise it wouldn't work at all. Here it is:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.net>
            <settings>
              <httpWebRequest useUnsafeHeaderParsing="true" />
            </settings>
          </system.net>
        </configuration>

    Does any of this ring a bell?

  • Raw XML Push from input stream captures only the first line of XML

    - by pqsk
    I'm trying to read XML that is being pushed to my Java app. I originally had this working in my GlassFish server. The working GlassFish code is as follows:

        public class XMLPush implements Serializable {
            public void processXML() {
                StringBuilder sb = new StringBuilder();
                BufferedReader br = null;
                try {
                    br = ((HttpServletRequest) FacesContext.getCurrentInstance()
                            .getExternalContext().getRequest()).getReader();
                    String s = null;
                    while ((s = br.readLine()) != null) {
                        sb.append(s);
                    }
                    // other code to process xml ...
                } catch (Exception ex) {
                    XMLCreator.exceptionOutput("processXML", "Exception", ex);
                }
                // ...
            } // processXML
        } // class

    It works perfectly, but my client is unable to run GlassFish on their server. I tried grabbing the raw XML from PHP, but I couldn't get it to work, so I decided to open up a socket and listen for the XML push manually. Here is my code for receiving the push:

        public class ListenerService extends Thread {
            private BufferedReader reader = null;
            private String line;

            public ListenerService(Socket connection) throws Exception {
                this.reader = new BufferedReader(
                        new InputStreamReader(connection.getInputStream()));
                this.line = null;
            } // ListenerService

            @Override
            public void run() {
                try {
                    while ((this.line = this.reader.readLine()) != null) {
                        System.out.println(this.line);
                        // ...
                    } // while
                } catch (Exception ex) {
                    System.out.println(ex.toString());
                } // catch
            } // run
        }

    I haven't done much socket programming, but what I've read over the past week says that reading the XML into a string is bad. What am I doing wrong, and why does it work in GlassFish but not when I open a socket myself? This is all that I receive from the push:

        PUT /?XML_EXPORT_REASON=ResponseLoop&TIMESTAMP=1292559547 HTTP/1.1
        Host: ************************
        Accept: */*
        Content-Length: 470346
        Expect: 100-continue

        <?xml version="1.0" encoding="UTF-8" ?>

    Where did the XML go? Is it because I am placing it in a string? I just need to grab the XML, save it into a file, and then process it. Everything else works but this. Any help would be greatly appreciated.
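
    A hedged reading of the capture above: the client sends Expect: 100-continue, so after the prologue it waits for the server to answer with HTTP/1.1 100 Continue before transmitting the Content-Length bytes of body. A servlet container such as GlassFish does this automatically; a hand-rolled socket listener has to do it itself. A minimal C sketch of that handshake (socket setup omitted; the Java version is one write on the connection's OutputStream):

        #include <string.h>
        #include <sys/socket.h>

        /* After reading the request headers on 'fd' and seeing
         * "Expect: 100-continue", tell the client to go ahead;
         * only then will it send the body. */
        static int ack_continue(int fd)
        {
            static const char resp[] = "HTTP/1.1 100 Continue\r\n\r\n";
            return send(fd, resp, sizeof resp - 1, 0) == sizeof resp - 1 ? 0 : -1;
        }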

  • Does it make sense to have more than one UDP Datagram socket on standby? Are "simultaneous" packets dropped?

    - by Gubatron
    I'm coding a networking application on Android. I'm thinking of having a single UDP port and datagram socket that receives all the datagrams sent to it, and then different processing queues for these messages. I'm wondering whether I should have a second or third UDP socket on standby. Some messages will be very short (100 bytes or so), but others will involve file transfers. My concern is: will the Android kernel drop the small messages if it's too busy handling the bigger ones? Update, from a description of the Linux kernel's UDP receive path: "The latter function calls sock_queue_rcv_skb() (in sock.h), which queues the UDP packet on the socket's receive buffer. If no more space is left on the buffer, the packet is discarded. Filtering also is performed by this function, which calls sk_filter() just like TCP did. Finally, data_ready() is called, and UDP packet reception is completed."
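
    Since, per the passage quoted above, drops happen exactly when the per-socket receive buffer fills up, one mitigation (a sketch at the BSD-socket level, not Android-specific advice) is to enlarge SO_RCVBUF on the single socket so short datagrams survive bursts:

        #include <stdio.h>
        #include <sys/socket.h>

        /* Ask the kernel for a larger receive buffer; it may clamp the
         * value (e.g. to net.core.rmem_max), so read it back afterwards. */
        static void grow_rcvbuf(int fd, int bytes)
        {
            socklen_t len = sizeof bytes;
            if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, len) == 0) {
                int actual = 0;
                len = sizeof actual;
                getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len);
                printf("SO_RCVBUF now %d bytes\n", actual);
            }
        }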

  • How to limit traffic using multicast over localhost

    - by Shane Holloway
    I'm using multicast UDP over localhost to implement a loose collection of cooperative programs running on a single machine. The following code works well on Mac OS X, Windows, and Linux. The flaw is that it also receives UDP packets from outside the localhost network. For example, sendSock.sendto(pkt, ('192.168.0.25', 1600)) is received by my test machine when sent from another box on my network.

        import platform, time, socket, select

        addr = ("239.255.2.9", 1600)

        sendSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sendSock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 24)
        sendSock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("127.0.0.1"))

        recvSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        recvSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
        if hasattr(socket, 'SO_REUSEPORT'):
            recvSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, True)
        recvSock.bind(("0.0.0.0", addr[1]))
        status = recvSock.setsockopt(socket.IPPROTO_IP,
            socket.IP_ADD_MEMBERSHIP,
            socket.inet_aton(addr[0]) + socket.inet_aton("127.0.0.1"))

        while 1:
            pkt = "Hello host: {1} time: {0}".format(time.ctime(), platform.node())
            print "SEND to: {0} data: {1}".format(addr, pkt)
            r = sendSock.sendto(pkt, addr)
            while select.select([recvSock], [], [], 0)[0]:
                data, fromAddr = recvSock.recvfrom(1024)
                print "RECV from: {0} data: {1}".format(fromAddr, data)
            time.sleep(2)

    I've attempted recvSock.bind(("127.0.0.1", addr[1])), but that prevents the socket from receiving any multicast traffic. Is there a proper way to configure recvSock to accept multicast packets only from the 127.0.0.0/8 network, or do I need to test the address of each received packet?
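
    For the fallback the question ends on (testing the address of each received packet), a sketch in C of the underlying check: recvfrom() hands back the sender, and loopback sources always fall in 127.0.0.0/8. The Python equivalent is simply checking fromAddr[0].startswith('127.'):

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/types.h>
        #include <sys/socket.h>

        /* Receive one datagram and accept it only if it came from loopback. */
        static ssize_t recv_local_only(int fd, char *buf, size_t len)
        {
            struct sockaddr_in src;
            socklen_t slen = sizeof src;
            ssize_t n = recvfrom(fd, buf, len, 0, (struct sockaddr *)&src, &slen);
            if (n < 0)
                return -1;
            /* 127.0.0.0/8: top byte of the source address is 127 */
            if ((ntohl(src.sin_addr.s_addr) >> 24) != 127)
                return 0; /* silently drop non-local traffic */
            return n;
        }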

  • HttpWebRequest socket operation during WPF binding in a property getter

    - by wpfwannabe
    In a property getter of a C# class I do an HTTP GET to some https address using HttpWebRequest. WPF's property binding seems to choke on this. If I access the property from a simple method, e.g. a Button_Click handler, it works perfectly. If I use WPF binding to access the same property, the app appears to block indefinitely on a socket recv(). Is it a no-no to do this sort of thing during binding? Is the app in some special state during binding? Is there an easy way to overcome this limitation and still keep the same basic idea?

  • Simple Perl program failing to execute

    - by yves Baumes
    Here is a sample that fails:

        #!/usr/bin/perl -w
        # client.pl
        #----------------
        use strict;
        use Socket;

        # initialize host and port
        my $host = shift || 'localhost';
        my $port = shift || 55555;
        my $server = "10.126.142.22";

        # create the socket, connect to the port
        socket(SOCKET, PF_INET, SOCK_STREAM, (getprotobyname('tcp'))[2])
            or die "Can't create a socket $!\n";
        connect(SOCKET, pack('Sn4x8', AF_INET, $port, $server))
            or die "Can't connect to port $port! \n";

        my $line;
        while ($line = <SOCKET>) {
            print "$line\n";
        }
        close SOCKET or die "close: $!";

    with the error:

        Argument "10.126.142.22" isn't numeric in pack at D:\send.pl line 16.
        Can't connect to port 55555!

    I am using this version of Perl:

        This is perl, v5.10.1 built for MSWin32-x86-multi-thread
        (with 2 registered patches, see perl -V for more detail)
        Binary build 1006 [291086] provided by ActiveState http://www.ActiveState.com
        Built Aug 24 2009 13:48:26

    I am running netcat on the server side; telnet does work.
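
    The pack('Sn4x8', ...) template is hand-building the C struct sockaddr_in, and the error arises because the address must first be converted from dotted-quad text to its 4-byte binary form (Perl's inet_aton, or sockaddr_in($port, inet_aton($server)) to build the whole thing). The equivalent structure in C, shown as a sketch, makes the required conversion explicit:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <string.h>

        /* Build the sockaddr_in that pack('Sn4x8', AF_INET, $port, $server)
         * was trying to emulate: family, big-endian port, then the address
         * as 4 raw bytes -- not as the text "10.126.142.22". */
        static int make_addr(struct sockaddr_in *sa, const char *ip, int port)
        {
            memset(sa, 0, sizeof *sa);
            sa->sin_family = AF_INET;
            sa->sin_port = htons(port);
            return inet_pton(AF_INET, ip, &sa->sin_addr) == 1 ? 0 : -1;
        }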

  • bind() fails with windows socket error 10038

    - by herrturtur
    I'm trying to write a simple program that receives a string of at most 20 characters and prints it to the screen. The code compiles, but I get "bind() failed: 10038". After looking up the error number on MSDN (socket operation on non-socket), I changed some code from int sock to SOCKET sock, which shouldn't make a difference, but one never knows. Here's the code:

        #include <iostream>
        #include <winsock2.h>
        #include <cstdlib>
        using namespace std;

        const int MAXPENDING = 5;
        const int MAX_LENGTH = 20;

        void DieWithError(char *errorMessage);

        int main(int argc, char **argv)
        {
            if (argc != 2) {
                cerr << "Usage: " << argv[0] << " <Port>" << endl;
                exit(1);
            }

            // start winsock2 library
            WSAData wsaData;
            if (WSAStartup(MAKEWORD(2,0), &wsaData) != 0) {
                cerr << "WSAStartup() failed" << endl;
                exit(1);
            }

            // create socket for incoming connections
            SOCKET servSock;
            if (servSock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) == INVALID_SOCKET)
                DieWithError("socket() failed");

            // construct local address structure
            struct sockaddr_in servAddr;
            memset(&servAddr, 0, sizeof(servAddr));
            servAddr.sin_family = AF_INET;
            servAddr.sin_addr.s_addr = INADDR_ANY;
            servAddr.sin_port = htons(atoi(argv[1]));

            // bind to the local address
            int servAddrLen = sizeof(servAddr);
            if (bind(servSock, (SOCKADDR*)&servAddr, servAddrLen) == SOCKET_ERROR)
                DieWithError("bind() failed");

            // mark the socket to listen for incoming connections
            if (listen(servSock, MAXPENDING) < 0)
                DieWithError("listen() failed");

            // accept incoming connections
            int clientSock;
            struct sockaddr_in clientAddr;
            char buffer[MAX_LENGTH];
            int recvMsgSize;
            int clientAddrLen = sizeof(clientAddr);
            for (;;) {
                // wait for a client to connect
                if ((clientSock = accept(servSock, (sockaddr*)&clientAddr, &clientAddrLen)) < 0)
                    DieWithError("accept() failed");

                // clientSock is connected to a client
                cout << "Handling client " << inet_ntoa(clientAddr.sin_addr) << endl;
                if ((recvMsgSize = recv(clientSock, buffer, MAX_LENGTH, 0)) < 0)
                    DieWithError("recv() failed");
                cout << "Word in the tubes: " << buffer << endl;
                closesocket(clientSock);
            }
        }

        void DieWithError(char *errorMessage)
        {
            fprintf(stderr, "%s: %d\n", errorMessage, WSAGetLastError());
            exit(1);
        }
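
    For readers hitting the same 10038: the likely cause, sketched below, is operator precedence in the socket() check. == binds tighter than =, so servSock receives the boolean result of the comparison (0 or 1) rather than the socket handle, and bind() is then called on a non-socket. The corrected check for the code above:

        /* == binds tighter than =, so the original line assigned the
         * comparison result (0 or 1) to servSock. Parenthesize instead: */
        SOCKET servSock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (servSock == INVALID_SOCKET)
            DieWithError("socket() failed");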

  • Transparent proxying - how to pass socket to local server without modification?

    - by Luca Farber
    Hello, I have a program that listens on port 443 and then redirects to either a local SSH or HTTPS server depending on the detected protocol. The program does this by connecting to the local server and proxying all data back and forth through its own process. However, this causes the originating host to be logged on the local servers as localhost. Is there any way to pass the socket directly to the local server process (rather than making a new TCP connection) so that the parameters of sockaddr_in (or sockaddr_in6) are retained? The platform for this is Linux.
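
    If the local server can be taught to accept it, Linux does allow handing an open file descriptor to another process over a UNIX domain socket with an SCM_RIGHTS control message; the receiver then owns a descriptor on the very same TCP connection, peer address intact. A sketch of the sending side (the receiving server would need a matching recvmsg()):

        #include <string.h>
        #include <sys/socket.h>
        #include <sys/uio.h>

        /* Send the connected TCP socket 'tcp_fd' to another local process
         * over the UNIX domain socket 'unix_fd' using SCM_RIGHTS. */
        static int send_fd(int unix_fd, int tcp_fd)
        {
            char byte = 0;                       /* must send at least 1 byte */
            struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
            char ctrl[CMSG_SPACE(sizeof(int))];
            memset(ctrl, 0, sizeof ctrl);

            struct msghdr msg;
            memset(&msg, 0, sizeof msg);
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = ctrl;
            msg.msg_controllen = sizeof ctrl;

            struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
            cm->cmsg_level = SOL_SOCKET;
            cm->cmsg_type = SCM_RIGHTS;
            cm->cmsg_len = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cm), &tcp_fd, sizeof(int));

            return sendmsg(unix_fd, &msg, 0) == 1 ? 0 : -1;
        }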

  • TCP Flow control in AS3?

    - by Jeremy Stanley
    I am currently working on a Flash socket client for a pre-existing service/standard. The service relies on TCP flow control to throttle itself, but the Flash socket reads everything in as fast as it can, even when the client can't process it that fast. This causes bytesAvailable on the socket to keep increasing, and the server never learns that the client has fallen behind. In short, is there any way to limit the size of bytesAvailable for a Flash Socket object, or to throttle it in some other way? Note: rewriting the server isn't a viable option at the current time, since it's a standard, and the client's utility drops immensely if server-side changes are needed.

  • No feedback from Socket.SendAsync

    - by BowserKingKoopa
    I'm creating a socket and trying to send data through it using SendAsync. My socket isn't connected to anything, so I expected an error of some sort; however, I get nothing: no indication that the send didn't work. If I use the synchronous Send method instead of the asynchronous SendAsync method, I get an exception stating that the socket isn't connected to anything, which makes sense to me. With SendAsync, the Completed event never fires and I get no indication that the send failed. So basically my question is: how can I tell when SendAsync fails?

        Socket socket = new Socket(AddressFamily.InterNetwork,
            SocketType.Stream, ProtocolType.Tcp);
        SocketAsyncEventArgs args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[0], 0, 0);
        args.Completed += delegate(object sender, SocketAsyncEventArgs e)
        {
            Debug.WriteLine("async send complete");
            Debug.WriteLine("SOCKET ERROR: " + e.SocketError);
        };
        bool completedSynchronously = socket.SendAsync(args);
        if (completedSynchronously)
        {
            Debug.WriteLine("sync send complete");
            Debug.WriteLine("socket error: " + args.SocketError);
        }

  • Is there any way to use getaddrinfo() and freeaddrinfo() and still keep the program compatible with legacy Windows?

    - by Sam C.
    Hi, in the Winsock2 library, getaddrinfo() and freeaddrinfo() were only added in Windows XP and later. I know how to replace them on legacy systems, but conditional use depending on the Windows version won't help: the application won't even start on 9x, showing a message saying that it is linked to a missing export in WS2_32.dll. I'm using MinGW to compile and link the code and would like to keep using it. Maybe I should write those functions myself? Thank you very much for everything.
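
    The start-up failure comes from import-time linking: as long as getaddrinfo appears in the executable's import table, the 9x loader refuses to start the program. Resolving the symbol at run time with GetProcAddress removes the import, so the binary loads everywhere and can fall back to the gethostbyname path when the export is missing. A sketch (the typedef mirrors the documented signature; treat it as an assumption to check against your SDK headers):

        #include <winsock2.h>
        #include <ws2tcpip.h>
        #include <windows.h>

        /* Resolve getaddrinfo at run time so the import table never
         * mentions it; on Win9x the pointer will simply be NULL. */
        typedef int (WSAAPI *getaddrinfo_fn)(const char *, const char *,
                                             const struct addrinfo *,
                                             struct addrinfo **);

        static getaddrinfo_fn load_getaddrinfo(void)
        {
            HMODULE ws2 = GetModuleHandleA("ws2_32.dll");
            if (!ws2)
                ws2 = LoadLibraryA("ws2_32.dll");
            return ws2 ? (getaddrinfo_fn)GetProcAddress(ws2, "getaddrinfo")
                       : NULL;
        }

        /* Usage: if load_getaddrinfo() returns NULL, take the legacy
         * gethostbyname() replacement the question says already exists. */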

  • How to do ICMPs and traceroutes in Java

    - by Ricardo
    For some reason I cannot even fathom, Java does not have primitives for ICMP or traceroute. Any idea how to overcome this? Basically I'm building code that should run on both *nix and Windows, and I need a piece of code that will run on both platforms. Thanks!

  • Unable to connect a socket across different networks

    - by maleki
    I am having trouble connecting my online application to peers on another network. I can give them the host address to connect to when we are on the same network, but across the internet the generated host address doesn't allow a connection, nor does the IP address obtained from online sites such as whatismyip.com. My biggest issue isn't debugging this code, because it works over the intra-network, but the server never sees connection attempts when we try to move to different networks. Also, the test port I am using is 2222.

        InetAddress addr = InetAddress.getLocalHost();
        String hostname = addr.getHostName();
        System.out.println("Hostname: " + hostname);
        System.out.println("IP: " + addr.getHostAddress());

    I display the host when the server is starting:

        if (isClient) {
            System.out.println("Client Starting..");
            clientSocket = new Socket(host, port_number);
        } else {
            System.out.println("Server Starting..");
            echoServer = new ServerSocket(port_number);
            clientSocket = echoServer.accept();
            System.out.println("Warning, Incoming Game..");
        }

  • Marshal.PtrToStructure (and back again) and generic solution for endianness swapping

    - by cgyDeveloper
    I have a system where a remote agent sends serialized structures (from an embedded C system) for me to read and store via IP/UDP. In some cases I need to send back the same structure types. I thought I had a nice setup using Marshal.PtrToStructure (receive) and Marshal.StructureToPtr (send). However, a small gotcha is that the network's big-endian integers need to be converted to my x86's little-endian format to be used locally, and back to big-endian when sent off again. Here are the functions in question:

        private static T BytesToStruct<T>(ref byte[] rawData) where T : struct
        {
            T result = default(T);
            GCHandle handle = GCHandle.Alloc(rawData, GCHandleType.Pinned);
            try
            {
                IntPtr rawDataPtr = handle.AddrOfPinnedObject();
                result = (T)Marshal.PtrToStructure(rawDataPtr, typeof(T));
            }
            finally
            {
                handle.Free();
            }
            return result;
        }

        private static byte[] StructToBytes<T>(T data) where T : struct
        {
            byte[] rawData = new byte[Marshal.SizeOf(data)];
            GCHandle handle = GCHandle.Alloc(rawData, GCHandleType.Pinned);
            try
            {
                IntPtr rawDataPtr = handle.AddrOfPinnedObject();
                Marshal.StructureToPtr(data, rawDataPtr, false);
            }
            finally
            {
                handle.Free();
            }
            return rawData;
        }

    And a quick example of how they might be used:

        byte[] data = this.sock.Receive(ref this.ipep);
        Request request = BytesToStruct<Request>(ref data);

    where the structure in question looks like:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi, Pack = 1)]
        private struct Request
        {
            public byte type;
            public short sequence;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 5)]
            public byte[] address;
        }

    What (generic) way can I swap the endianness when marshalling the structures? My need is such that the locally stored public short sequence in this example will be little-endian for display to the user. I don't want to swap the endianness in a structure-specific way. My first thought was to use reflection, but I'm not very familiar with that feature; also, I hoped there would be a better solution out there that somebody could point me towards. Thanks in advance :)

  • Java BufferedReader read of a specific length returns NUL characters

    - by Bastien
    I have a TCP socket client receiving messages (data) from a server. Messages are of the form length (2 bytes) + data (length bytes), delimited by STX and ETX characters. I'm using a BufferedReader to retrieve the first two bytes, decode the length, then read the appropriate number of characters from the same BufferedReader into a char array. Most of the time I have no problem, but SOMETIMES (1 out of thousands of messages received), when attempting to read length chars from the reader, I get only part of the message, with the rest of my array filled with NUL characters. I imagine it's because the buffer has not yet been filled.

        char[] bufLen = new char[2];
        _bufferedReader.read(bufLen);
        int len = decodeLength(bufLen);
        char[] _rawMsg = new char[len];
        _bufferedReader.read(_rawMsg);
        return _rawMsg;

    I worked around the problem in several iterative ways. First, I tested the last char of my array: if it wasn't ETX, I would read chars from the BufferedReader one by one until I reached ETX, then start my regular routine over; the consequence is that I would basically DROP one message. Then, in order to still retrieve that message, I would find the first occurrence of NUL in my truncated message, read and store additional characters one at a time until I reached ETX, and append them to the truncated message, confirming the length is OK. That works too, but I really think there's something better I could do, like checking whether the total number of characters I need is available in the buffer before reading it, but I can't find the right way to do it... Any idea / pointer? Thanks!
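
    The underlying issue is the read() contract: a single call returns whatever has arrived so far, not necessarily the count requested, so a short read occasionally hands back a partially filled array (the untouched chars simply keep their initial \0 value). The standard fix is a read-fully loop; sketched here in C over a raw socket, though the Java version loops over _bufferedReader.read(_rawMsg, off, len - off) the same way:

        #include <sys/types.h>
        #include <sys/socket.h>

        /* Keep calling recv() until exactly 'len' bytes have arrived.
         * Returns 0 on success, -1 on error or premature EOF. */
        static int read_fully(int fd, char *buf, size_t len)
        {
            size_t got = 0;
            while (got < len) {
                ssize_t n = recv(fd, buf + got, len - got, 0);
                if (n <= 0)
                    return -1;   /* error, or peer closed mid-message */
                got += (size_t)n;
            }
            return 0;
        }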

  • EPIPE blocks server

    - by timn
    I have written a single-threaded asynchronous server in C running on Linux. The socket is non-blocking and, for polling, I am using epoll. Benchmarks show that the server performs fine, and according to Valgrind there are no memory leaks or other problems. The only problem is that when a write() call is interrupted (because the client closed the connection), the server encounters a SIGPIPE. I trigger the interruption artificially by running the benchmarking utility siege with the parameter -b: it makes lots of requests in a row, which all work perfectly. Then I press CTRL-C and restart siege. Sometimes I am "lucky" and the server does not manage to send the full response because the client's fd is invalid. As expected, errno is set to EPIPE. I handle this situation, call close() on the fd, and then free the memory related to the connection. Now the problem is that the server blocks and no longer answers properly. Here is the strace output:

        accept(3, {sa_family=AF_INET, sin_port=htons(50611), sin_addr=inet_addr("127.0.0.1")}, [16]) = 5
        fcntl64(5, F_GETFD)                     = 0
        fcntl64(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
        epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLET, {u32=158310248, u64=158310248}}) = 0
        epoll_wait(4, {{EPOLLIN, {u32=158310248, u64=158310248}}}, 128, -1) = 1
        read(5, "GET /user/register HTTP/1.1\r\nHos"..., 4096) = 161
        write(5, "HTTP/1.1 200 OK\r\nContent-Type: t"..., 106) = 106
        write(5, "00001000\r\n", 10)            = -1 EPIPE (Broken pipe)
        --- SIGPIPE (Broken pipe) @ 0 (0) ---

    Why did the previous write() work fine but not this one? As you can see, the client establishes a new connection, which is accepted and added to the epoll queue. epoll_wait() signals that the client sent data (EPOLLIN). The request is parsed and a response is composed. Sending the headers works fine, but when it comes to the body, write() results in an EPIPE. It is not a bug in siege, because after this the server blocks any incoming connection, no matter from which client.
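
    The hang after EPIPE is consistent with SIGPIPE's default disposition getting in the way; servers normally opt out of the signal entirely and handle the EPIPE return value instead. A sketch of the two standard options:

        #include <signal.h>
        #include <sys/types.h>
        #include <sys/socket.h>

        /* Option 1: ignore SIGPIPE process-wide (once, at startup);
         * write() to a dead peer then just returns -1 with errno EPIPE. */
        static void disarm_sigpipe(void)
        {
            signal(SIGPIPE, SIG_IGN);
        }

        /* Option 2 (Linux): suppress the signal per call by sending
         * with MSG_NOSIGNAL instead of using plain write(). */
        static ssize_t safe_send(int fd, const void *buf, size_t len)
        {
            return send(fd, buf, len, MSG_NOSIGNAL);
        }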

  • Multiple Socket Connections

    - by BSchlinker
    I need to write a server which accepts connections from multiple client machines, keeps track of connected clients, and sends individual clients data as necessary. Sometimes all clients may be contacted at once with the same message; other times it may be one individual client or a group of clients. Since I need confirmation that the clients received the information and don't want to build an ACK structure on top of UDP, I decided to use TCP streams. However, I've been struggling to understand how to maintain multiple connections and keep them idle. I seem to have three options: fork a separate child process for each incoming connection, use pthread_create to create a new thread for each connection, or use select() to wait on all open socket descriptors. Any recommendations on how to attack this? I've begun working with pthreads, but since performance will likely not be an issue, multicore processing is not necessary and perhaps there is a simpler way.
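
    Of the three options listed, the select() route is the only single-threaded one, which fits the "performance is not an issue" constraint; it also makes "contact all clients at once" a plain loop over the tracked descriptors. A minimal sketch of such an event loop, with error handling trimmed:

        #include <string.h>
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <sys/types.h>
        #include <unistd.h>

        #define MAX_CLIENTS 64

        /* One select() loop that accepts new clients and watches existing
         * ones; 'clients' doubles as the connected-client registry, so a
         * broadcast is just a loop calling send() on every non-zero entry. */
        static void serve(int listen_fd)
        {
            int clients[MAX_CLIENTS] = { 0 };

            for (;;) {
                fd_set rfds;
                FD_ZERO(&rfds);
                FD_SET(listen_fd, &rfds);
                int maxfd = listen_fd;
                for (int i = 0; i < MAX_CLIENTS; i++) {
                    if (clients[i]) {
                        FD_SET(clients[i], &rfds);
                        if (clients[i] > maxfd)
                            maxfd = clients[i];
                    }
                }

                if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
                    continue;

                if (FD_ISSET(listen_fd, &rfds)) {          /* new connection */
                    int fd = accept(listen_fd, NULL, NULL);
                    for (int i = 0; fd >= 0 && i < MAX_CLIENTS; i++)
                        if (!clients[i]) { clients[i] = fd; fd = -1; }
                }

                for (int i = 0; i < MAX_CLIENTS; i++) {    /* client data/EOF */
                    char buf[512];
                    if (clients[i] && FD_ISSET(clients[i], &rfds)) {
                        ssize_t n = recv(clients[i], buf, sizeof buf, 0);
                        if (n <= 0) { close(clients[i]); clients[i] = 0; }
                        /* else: handle n bytes in buf */
                    }
                }
            }
        }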

  • How to cast sockaddr_storage and avoid breaking strict-aliasing rules

    - by sinoth
    I'm using Beej's Guide to Network Programming and came across an aliasing issue. He proposes a function to return either the IPv4 or IPv6 address of a particular struct:

        1 void *get_in_addr( struct sockaddr *sa )
        2 {
        3     if (sa->sa_family == AF_INET)
        4         return &(((struct sockaddr_in*)sa)->sin_addr);
        5     else
        6         return &(((struct sockaddr_in6*)sa)->sin6_addr);
        7 }

    This causes GCC to emit a strict-aliasing warning for sa on line 3. As I understand it, that is because I call this function like so:

        struct sockaddr_storage their_addr;
        ...
        inet_ntop(their_addr.ss_family,
                  get_in_addr((struct sockaddr *)&their_addr),
                  connection_name, sizeof connection_name);

    I'm guessing the aliasing has to do with the fact that the their_addr variable is of type sockaddr_storage while a pointer of a different type points to the same memory. Is the best way around this to stick sockaddr_storage, sockaddr_in, and sockaddr_in6 into a union? It seems like this should be well-worn territory in networking; I just can't find any good examples with best practices. Also, if anyone can explain exactly where the aliasing issue takes place, I'd much appreciate it.
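
    A sketch of the union the question itself suggests: since all members share one storage object, the compiler is told these types may alias, and every sockaddr_* view of the same bytes is sanctioned:

        #include <netinet/in.h>
        #include <sys/socket.h>

        /* One storage object the compiler knows can be viewed as any of
         * the sockaddr flavours, sidestepping the strict-aliasing warning. */
        union sockaddr_any {
            struct sockaddr         sa;
            struct sockaddr_in      s4;
            struct sockaddr_in6     s6;
            struct sockaddr_storage ss;
        };

        static void *get_in_addr(union sockaddr_any *u)
        {
            return u->sa.sa_family == AF_INET ? (void *)&u->s4.sin_addr
                                              : (void *)&u->s6.sin6_addr;
        }

        /* Usage: declare 'union sockaddr_any their_addr;' and pass
         * &their_addr.sa to accept()/recvfrom() instead of casting. */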

  • Python Socket Getting Connection Reset

    - by Ian
    I created a threaded socket listener that stores newly accepted connections in a queue. The socket threads then read from the queue and respond. For some reason, when benchmarking with ab (ApacheBench) at a concurrency of 2 or more, I always get a connection reset before it is able to complete the benchmark (this is taking place locally, so there's no external connection issue).

        class server:
            _ip = ''
            _port = 8888

            def __init__(self, ip=None, port=None):
                if ip is not None:
                    self._ip = ip
                if port is not None:
                    self._port = port
                self.server_listener(self._ip, self._port)

            def now(self):
                return time.ctime(time.time())

            def http_responder(self, conn, addr):
                httpobj = http_builder()
                httpobj.header('HTTP/1.1 200 OK')
                httpobj.header('Content-Type: text/html; charset=UTF-8')
                httpobj.header('Connection: close')
                httpobj.body("Everything looks good")
                data = httpobj.generate()
                sent = conn.sendall(data)

            def http_thread(self, id):
                self.log("THREAD %d: Starting Up..." % id)
                while True:
                    conn, addr = self.q.get()
                    ip, port = addr
                    self.log("THREAD %d: responding to request: %s:%s - %s" % (id, ip, port, self.now()))
                    self.http_responder(conn, addr)
                    self.q.task_done()
                    conn.close()

            def server_listener(self, host, port):
                self.q = Queue.Queue(0)
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.bind( (host, port) )
                sock.listen(5)
                for i in xrange(4):  # thread count
                    thread.start_new(self.http_thread, (i+1, ))
                while True:
                    self.q.put(sock.accept())
                sock.close()

        server('', 9999)

    When running the benchmark, I get a totally random number of good requests before it errors out, usually between 4 and 500. Edit: it took me a while to figure out, but the problem was in sock.listen(5). Because I was running ApacheBench with a higher concurrency (5 and up), the backlog of connections piled up, at which point the socket started dropping connections.
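
    For anyone replicating the fix in the edit: the argument to listen() is the accept backlog, and raising it (or draining the accept queue faster) is the cure. The same knob at the BSD-socket level, as a sketch:

        #include <sys/socket.h>

        /* The second argument to listen() caps how many completed
         * connections may queue up before the kernel starts refusing
         * or resetting new ones; SOMAXCONN asks for the system maximum. */
        static int start_listening(int fd)
        {
            return listen(fd, SOMAXCONN);
        }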

  • Why do IOExceptions occur in ReadableByteChannel.read()

    - by Steffen Heil
    Hi, the specification of ReadableByteChannel.read() gives -1 as the result value for end-of-stream. Moreover, it specifies ClosedByInterruptException as a possible result if the thread is interrupted. Now, I thought that would be all, and it is most of the time. However, now and then I get the following ("Eine vorhandene Verbindung wurde vom Remotehost geschlossen" is German for "An existing connection was closed by the remote host"):

        java.io.IOException: Eine vorhandene Verbindung wurde vom Remotehost geschlossen
            at sun.nio.ch.SocketDispatcher.read0(Native Method)
            at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:25)
            at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
            at sun.nio.ch.IOUtil.read(IOUtil.java:206)
            at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
            at ...

    I do not understand why I don't get -1 in this case. Also, this is not a clean exception, as I cannot catch it without catching every possible IOException. So here are my questions: Why is this exception thrown in the first place? Is it safe to assume that ANY exception thrown by read() means the socket has been closed? Is all this the same for write()? And by the way: if I call SocketChannel.close(), do I have to call SocketChannel.socket().close() as well, or is that implied by the former? Thanks, Steffen
