Search Results

Search found 3513 results on 141 pages for 'raw sockets'.


  • Finding an available network port on the machine

    - by Tomer Vromen
    I'm trying to implement a simple FTP server (a variation of the EFTP protocol) in linux. When a client connects and sends the PASV command, the server should respond with a port number, so the client can connect to that port to transmit the file. How can the server choose a port number? Do I need to iterate through all the ports from 1024 to 65535 until I find a port that the process can bind to? I know that calling bind() with 0 as the port automatically chooses the port to bind to, but then how can I know which port was chosen? Many thanks.
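
    A minimal sketch of the usual answer, in Python (the same pattern applies to the C bind()/getsockname() calls): bind to port 0 so the kernel picks any free ephemeral port, then ask the socket which port it was given.

        import socket

        def open_data_port():
            # Port 0 tells the kernel to assign any free ephemeral port.
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind(("", 0))
            s.listen(1)
            port = s.getsockname()[1]   # the port the kernel actually chose
            return s, port

        listener, port = open_data_port()
        print("PASV data connection should use port", port)

    There is no need to iterate over 1024-65535 by hand; the kernel does the search atomically, which also avoids the race of a port being taken between the check and the bind.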

    Read the article

  • How do I generate a connection reset programmatically?

    - by Brock Adams
    Hi, I'm sure you've seen the "the connection was reset" message displayed when trying to browse web pages. (The text is from Firefox; other browsers differ.) I need to generate that message/error/condition on demand, to test workarounds. So, how do I generate that condition programmatically? (How do I generate a TCP RST from PHP -- or one of the other web-app languages?) Caveats and conditions: It cannot be a general IP block; the test client must still be able to see the test server when not triggering the condition. Ideally, it would be done at the web-application level (Python, PHP, ColdFusion, JavaScript, etc.); access to routers is problematic and access to the Apache config is a pain. Ideally, it would be triggered by fetching a specific web page. Bonus if it works on a standard, commercial web host. Update: Sending a RST is not enough to cause this condition; see my partial answer below. I have a solution that works on a local machine; now I need to get it working on a remote host.
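
    One way to produce the condition for testing, sketched in Python as a standalone listener rather than inside PHP or another web-app language (so treat it as a lab tool, not a drop-in answer, and note the poster's update that a bare RST was not sufficient in their remote setup): enable SO_LINGER with a zero timeout before closing, which makes the TCP stack abort the connection with a RST instead of a normal FIN.

        import socket, struct

        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", 8081))        # port chosen for illustration only
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            conn.recv(1024)                         # read the request, ignore it
            # l_onoff=1, l_linger=0 -> close() aborts the connection with a RST
            conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                            struct.pack("ii", 1, 0))
            conn.close()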

    Read the article

  • How do you handle live video streaming in Flash AS3?

    - by CodeJustin.com
    I've been dabbling with socket servers in Java and now I'm ready to get my feet wet with an idea I had. I would like to use python for my socket server and obviously AS3 for my client. I'm able to create a full chat using my own python socket server but I'm almost clueless what to do now that I want to add in LIVE video (want to make it a live video "chat"). I've found tutorials but they are for FMS and I can not afford that, also Red5 looked nice but couldn't find a live video tutorial off hand (plus I would have to switch to Red5 from my own socket server). So if someone could even nudge me into some resources on the subject (the subject of live video without using FMS) that would be very helpful, Google is failing me right now.

    Read the article

  • Close socket and select()

    - by kamziro
    So I need to close a particular connection, but the problem is another thread is, at the same time, doing a select() which has the socket as one of the file descriptors it's watching. Will the select() terminate gracefully, or will anything bad happen?
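
    The safer pattern, sketched below in Python with illustrative names, is not to close the socket out from under select() at all: include one end of a socketpair in the select set, use it to wake the selecting thread, and let that thread remove and close the connection itself.

        import select, socket, threading

        wake_r, wake_w = socket.socketpair()
        pending_close = []
        lock = threading.Lock()

        def request_close(conn):
            """Called from any thread that wants conn shut down."""
            with lock:
                pending_close.append(conn)
            wake_w.send(b"\0")              # makes select() return immediately

        def select_loop(watched):
            while True:
                readable, _, _ = select.select(watched + [wake_r], [], [])
                if wake_r in readable:
                    wake_r.recv(64)
                    with lock:
                        doomed = list(pending_close)
                        pending_close.clear()
                    for conn in doomed:
                        if conn in watched:
                            watched.remove(conn)
                        conn.close()        # closed by the owner of select(), no race
                    readable = [s for s in readable if s is not wake_r and s not in doomed]
                for conn in readable:
                    data = conn.recv(4096)  # normal per-connection handling goes here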

    Read the article

  • How to maintain a persistent network connection between two applications over a network?

    - by John
    I was recently approached by my management with an interesting problem. I am fairly sure of the answer I gave my bosses, but I want to confirm that I am telling them the right thing. I am being asked to develop some software that does the following: an application at one location constantly processes real-time data every second and only produces output when the underlying data has changed in any way; whenever the data changes, it sends the results to another box over the network; and it maintains a persistent connection between both machines, alerting the remote box if for some reason the network connection goes down. From what I understand, I need to do some reading on TCP/IP socket-level programming, so that if the connection is dropped the remote location will know that the data it has received may be stale. However, management seems convinced that this can be accomplished using SOAP. I was under the impression that SOAP is more or less a way for a client to invoke a procedure on a server and get results back over HTTP. Am I wrong in assuming this? I haven't been able to find much information on how SOAP might solve a problem like this. I feel like a lot of people around my office are using SOAP as a buzzword, which has generated some confusion over what SOAP actually is - and is capable of. Any thoughts on how to accomplish this task would be appreciated!

    Read the article

  • SocketAsyncEventArgs and buffering while messages are in parts

    - by Rob
    We have a C# socket server with roughly 200 - 500 active connections, each one constantly sending messages to our server. About 70% of the time the messages are handled fine (in the correct order etc.), but in the other 30% of cases the messages arrive jumbled and things get screwed up. We should note that some clients send data in Unicode and others in ASCII, so that's handled as well. Messages sent to the server are variable-length strings which end in a char 3 (ETX); it's that character we break on, other than that we keep receiving data. Could anyone shed any light on our ProcessReceive code and see what could possibly be causing us issues and how we can solve this (here's hoping it's a small issue!). Code below:
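
    The ProcessReceive code itself is not included in this excerpt, so here is only a general illustration of delimiter framing (a Python sketch, not the poster's C# handler): keep one receive buffer per connection, append every chunk to it, and emit complete messages only when the terminator (char 3, ETX) is seen. TCP is a byte stream, so one receive may contain half a message or several messages at once, and decoding (ASCII vs. Unicode) must happen per complete message, never on a raw chunk.

        DELIM = b"\x03"

        class Connection:
            def __init__(self):
                self.buffer = b""

            def feed(self, chunk):
                """Append received bytes; return the list of complete messages."""
                self.buffer += chunk
                messages = []
                while DELIM in self.buffer:
                    msg, self.buffer = self.buffer.split(DELIM, 1)
                    messages.append(msg)
                return messages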

    Read the article

  • If a nonblocking recv with MSG_PEEK succeeds, will a subsequent recv without MSG_PEEK also succeed?

    - by Michael Wolf
    Here's a simplified version of some code I'm working on:

        void stuff(int fd) {
            int ret1, ret2;
            char buffer[32];

            ret1 = recv(fd, buffer, 32, MSG_PEEK | MSG_DONTWAIT);

            /* Error handling -- and EAGAIN handling -- would go here.
               Bail if necessary. Otherwise, keep going. */

            /* Can this call to recv fail, setting errno to EAGAIN? */
            ret2 = recv(fd, buffer, ret1, 0);
        }

    If we assume that the first call to recv succeeds, returning a value between 1 and 32, is it safe to assume that the second call will also succeed? Can ret2 ever be less than ret1? In which cases? (For clarity's sake, assume that there are no other error conditions during the second call to recv: that no signal is delivered, that it won't set ENOMEM, etc. Also assume that no other threads will look at fd. I'm on Linux, but MSG_DONTWAIT is, I believe, the only Linux-specific thing here. Assume that the right fcntl was set previously on other platforms.)

    Read the article

  • Sending HTTP headers with Python

    - by Niklas R
    I've set up a little script that should feed a client with HTML.

        import socket
        sock = socket.socket()
        sock.bind(('', 8080))
        sock.listen(5)
        client, adress = sock.accept()
        print "Incoming:", adress
        print client.recv(1024)
        print client.send("Content-Type: text/html\n\n")
        client.send('<html><body></body></html>')
        print "Answering ..."
        print "Finished."
        import os
        os.system("pause")

    But it is shown as plain text in the browser. Can you please tell me what I need to do? I just can't find anything on Google that helps me. Thanks.
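
    The likely cause is that the reply never starts with an HTTP status line, so the browser has no headers to parse and falls back to plain text. A corrected sketch (Python 3 syntax here, hence the .encode() call):

        import socket

        sock = socket.socket()
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", 8080))
        sock.listen(5)
        client, address = sock.accept()
        print("Incoming:", address)
        print(client.recv(1024))

        body = "<html><body>hello</body></html>"
        response = (
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/html\r\n"
            "Content-Length: {}\r\n"
            "\r\n"
            "{}"
        ).format(len(body), body)
        client.sendall(response.encode("ascii"))   # status line + headers + blank line + body
        client.close()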

    Read the article

  • Java Socket Connection is flooding network OR resulting in high ping

    - by user1461100
    i have a little problem with my java socket code. I'm writing an android client application which is sending data to a java multithreaded socket server on my pc through direct(!) wireless connection. It works fine but i want to improve it for mobile applications as it is very power consuming by now. When i remove two special lines in my code, the cpu usage of my mobile device (htc one x) is totally okay but then my connection seems to have high ping rates or something like that... Here is a server code snippet where i receive the clients data: while(true) { try { .... Object obj = in.readObject(); if(obj != null) { Class clazz = obj.getClass(); String className = clazz.getName(); if(className.equals("java.lang.String")) { String cmd = (String)obj; if(cmd.equals("dc")) { System.out.println("Client "+id+" disconnected!"); Server.connectedClients[id-1] = false; break; } if(cmd.substring(0,1).equals("!")) { robot.keyRelease(PlayerEnum.getKey(cmd,id)); } else { robot.keyPress(PlayerEnum.getKey(cmd,id)); } } } } catch .... Heres the client part, where i send my data in a while loop: private void networking() { try { if(client != null) { .... out.writeObject(sendQueue.poll()); .... } } catch .... when i write it this why, i send data everytime the while loop gets executed.. when sendQueue is empty, a null "Object" will be send. this results in "high" network traffic and in "high" cpu usage. BUT: all send comments are received nearly immediately. when i change the code to following: while(true) ... if(sendQueue.peek() != null) { out.writeObject(sendQueue.poll()); } ... the cpu usage is totally okay but i'm getting some laggs.. the commands do not arrive fast enough.. as i said, it works fine (besides cpu usage) if i'm sending data(with that null objects) every while execution. but i'm sure that this is very rough coding style because i'm kind of flooding the network. any hints? what am i doing wrong?? Thanks for your Help! Sincerly yours, maaft
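
    An illustrative sketch in Python (not the poster's Java) of the usual fix: block on the queue instead of spinning or sending filler objects. A blocking dequeue sleeps until something is enqueued, so nothing is written when there is nothing to send and the CPU stays idle; in Java the equivalent call is BlockingQueue#take().

        import queue

        send_queue = queue.Queue()

        def sender_loop(write_obj):
            while True:
                item = send_queue.get()   # blocks; no busy-wait, no null filler objects
                if item is None:          # sentinel used here to stop the loop
                    break
                write_obj(item)           # e.g. out.writeObject(item) on the Java side

    The producer side simply calls send_queue.put(command) and the sender wakes up immediately, so commands still arrive without the lag of a polling loop.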

    Read the article

  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert to a BufferedImage. In abbreviated code for the client: public String writeAndReadSocket(String request) { // Write text to the socket BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream())); bufferedWriter.write(request); bufferedWriter.flush(); // Read text from the socket BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream())); // Read the prefixed size int size = Integer.parseInt(bufferedReader.readLine()); // Get that many bytes from the stream char[] buf = new char[size]; bufferedReader.read(buf, 0, size); return new String(buf); } public BufferedImage stringToBufferedImage(String imageBytes) { return ImageIO.read(new ByteArrayInputStream(s.getBytes())); } and the server: # Twisted server code here # The analog of the following method is called with the proper client # request and the result is written to the socket. def worker_thread(): img = draw_function() buf = StringIO.StringIO() img.save(buf, format="PNG") img_string = buf.getvalue() return "%i\r%s" % (sys.getsizeof(img_string), img_string) This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case. Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the bytestream directly produces the same errors. I have a version of this working where the client socket isn't persistent, ie. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.
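
    Two things commonly break this kind of setup, shown here only as a server-side sketch (it assumes img is the PIL image produced by the poster's draw_function): sys.getsizeof() returns the Python object's in-memory size, not the payload length, so the prefix should come from len(); and on the Java side the PNG bytes must be read from the raw InputStream, because routing binary data through a character Reader corrupts it.

        import io

        def build_image_reply(img):
            buf = io.BytesIO()
            img.save(buf, format="PNG")
            img_bytes = buf.getvalue()
            header = ("%d\r" % len(img_bytes)).encode("ascii")   # exact byte count, then CR
            return header + img_bytes

    The client would then read up to the CR, parse the integer, and loop on the raw stream until exactly that many bytes have arrived.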

    Read the article

  • Python 3.3 Webserver restarting problems

    - by IPDGino
    I have made a simple webserver in python, and had some problems with it before as described here: Python (3.3) Webserver script with an interesting error In that question, the answer was to use a While True: loop so that any crashes or errors would be resolved instantly, because it would just start itself again. I've used this for a while, and still want to make the server restart itself every few minutes, but on Linux for some reason it won't work for me. On windows the code below works fine, but on linux it keeps saying Handler class up here ... ... class Server: def __init__(self): self.server_class = HTTPServer self.server_adress = ('MY IP GOES HERE, or localhost', 8080) global httpd httpd = self.server_class(self.server_adress, Handler) self.main() def main(self): if count > 1: global SERVER_UP_SINCE HOUR_CHECK = int(((count - 1) * RESTART_INTERVAL) / 60) SERVER_UPTIME = str(HOUR_CHECK) + " MINUTES" if HOUR_CHECK > 60: minutes = int(HOUR_CHECK % 60) hours = int(HOUR_CHECK // 60) SERVER_UPTIME = ("%s HOURS, %s MINUTES" % (str(hours), str(minutes))) SERVING_ON_ADDR = self.server_adress SERVER_UP_SINCE = str(SERVER_UP_SINCE) SERVER_RESTART_NUMBER = count - 1 print(""" SERVER INFO ------------------------------------- SERVER_UPTIME: %s SERVER_UP_SINCE: %s TOTAL_FILES_SERVED: %d SERVING_ON_ADDR: %s SERVER_RESTART_NUMBER: %s \n\nSERVER HAS RESTARTED """ % (SERVER_UPTIME, SERVER_UP_SINCE, TOTAL_FILES, SERVING_ON_ADDR, SERVER_RESTART_NUMBER)) else: print("SERVER_BOOT=1\nSERVER_ONLINE=TRUE\nRESTART_LOOP=TRUE\nSERVING_ON_ADDR:%s" % str(self.server_adress)) while True: try: httpd.serve_forever() except KeyboardInterrupt: print("Shutting down...") break httpd.shutdown() httpd.socket.close() raise(SystemExit) return def server_restart(): """If you want the restart timer to be longer, replace the number after the RESTART_INTERVAL variable""" global RESTART_INTERVAL RESTART_INTERVAL = 10 threading.Timer(RESTART_INTERVAL, server_restart).start() global count count = count + 1 instance = Server() if __name__ == "__main__": global SERVER_UP_SINCE SERVER_UP_SINCE = strftime("%d-%m-%Y %H:%M:%S", gmtime()) server_restart() Basically, I make a thread to restart it every 10 seconds (For testing purposes) and start the server. After ten seconds it will say File "/home/username/Desktop/Webserver/server.py", line 199, in __init__ httpd = self.server_class(self.server_adress, Handler) File "/usr/lib/python3.3/socketserver.py", line 430, in __init__ self.server_bind() File "/usr/lib/python3.3/http/server.py", line 135, in server_bind socketserver.TCPServer.server_bind(self) File "/usr/lib/python3.3/socketserver.py", line 441, in server_bind self.socket.bind(self.server_address) OSError: [Errno 98] Address already in use As you can see in the except KeyboardInterruption line, I tried everything to make the server stop, and the program stop, but it will NOT stop. But the thing I really want to know is how to make this server able to restart, without giving some wonky errors.
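
    A sketch of the usual cause (Handler below is a minimal stand-in for the question's handler class; the other names are illustrative): every restart constructs a new HTTPServer while the previous one is still listening, and SO_REUSEADDR cannot rebind over an active listener. Shutting down and closing the old instance before creating the next avoids errno 98; allow_reuse_address (already enabled in HTTPServer) only covers sockets lingering in TIME_WAIT.

        import threading
        from http.server import HTTPServer, BaseHTTPRequestHandler

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"ok")

        current = None   # the HTTPServer instance currently serving

        def restart_server(address=("", 8080), interval=10):
            global current
            if current is not None:
                current.shutdown()        # stops serve_forever() running in its thread
                current.server_close()    # releases the listening socket before rebinding
            current = HTTPServer(address, Handler)
            threading.Thread(target=current.serve_forever, daemon=True).start()
            threading.Timer(interval, restart_server, args=(address, interval)).start()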

    Read the article

  • C# socket blocking behavior

    - by Gearoid Murphy
    My situation is this: I have a C# TCP socket through which I receive structured messages consisting of a 3-byte header and a variable-size payload. The TCP data is routed through a network of tunnels and is occasionally subject to fragmentation. The solution to this is to perform a blocking read of 3 bytes for the header and a blocking read of N bytes for the variable-size payload (the value of N is in the header). The problem I'm experiencing is that occasionally the blocking receive operation returns a partial packet. That is, it reads fewer bytes than the number I explicitly set in the receive call. After some debugging, it appears that the number of bytes it returns is equal to the number of bytes in the socket's Available property before the receive op. This behavior is contrary to my expectation. If the socket is blocking and I explicitly set the number of bytes to receive, shouldn't the socket block until it receives those bytes? Any help, pointers, etc. would be much appreciated.
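
    A sketch of the standard remedy, shown in Python since the loop is identical for Socket.Receive in C#: a blocking receive on a stream socket returns as soon as any data is available, not when the requested count has arrived, so fixed-size fields have to be read in a loop.

        def recv_exactly(sock, count):
            chunks = []
            remaining = count
            while remaining > 0:
                chunk = sock.recv(remaining)
                if not chunk:   # peer closed the connection
                    raise ConnectionError("%d bytes still outstanding" % remaining)
                chunks.append(chunk)
                remaining -= len(chunk)
            return b"".join(chunks)

    Read the 3-byte header with recv_exactly(sock, 3), parse N, then call recv_exactly(sock, N) for the payload.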

    Read the article

  • Flex: client / server messaging question (RPC or socket ?)

    - by Patrick
    Hi, I'm building a Flex application which is going to perform many server requests (let's say almost all interactions require an update from the server). At the moment I'm using remote procedure calls for it, but I was wondering if using a socket would be better. In other words, is it better to keep the connection alive rather than performing many calls in sequence? For my demo app I only have 1 client. Is the number of clients connecting to the server a factor in this choice? Thanks.

    Read the article

  • Asynchronous IO in Java?

    - by thr
    What options for async IO (socket-based) are there in Java other than java.nio? Also, does java.nio use threads in the background (as I think .NET's async socket library does, though maybe that's been changed), or is it "true" async IO using a proper select call?

    Read the article

  • inet_ntoa problem

    - by codingfreak
    Hi, I am declaring the following variables:

        unsigned long dstAddr;
        unsigned long gateWay;
        unsigned long mask;

    These variables contain IP addresses in network byte order. When I try to print them using the inet_ntoa function, the mask variable sometimes prints strange values:

        printf("%s\t%s\t%s\t", inet_ntoa(dstAddr), inet_ntoa(gateWay), inet_ntoa(mask));

    gives

        192.168.122.0   0.0.0.0   0.255.255.255

    but it should be

        192.168.122.0   0.0.0.0   255.255.255.0

    So is this because of inet_ntoa?

    Read the article

  • Why can't I bind an IPv6 socket to a link-local address?

    - by Haiyuan Zhang
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <netdb.h>
        #include <stdio.h>

        void error(char *msg)
        {
            perror(msg);
            exit(0);
        }

        int main(int argc, char *argv[])
        {
            int sock, length, fromlen, n;
            struct sockaddr_in6 server;
            struct sockaddr_in6 from;
            int portNr = 5555;
            char buf[1024];

            length = sizeof (struct sockaddr_in6);
            sock = socket(AF_INET6, SOCK_DGRAM, 0);
            if (sock < 0)
                error("Opening socket");
            bzero((char *)&server, length);
            server.sin6_family = AF_INET6;
            server.sin6_addr = in6addr_any;
            server.sin6_port = htons(portNr);
            inet_pton(AF_INET6, "fe80::21f:29ff:feed:2f7e", (void *)&server.sin6_addr.s6_addr);
            //inet_pton(AF_INET6, "::1", (void *)&server.sin6_addr.s6_addr);
            if (bind(sock, (struct sockaddr *)&server, length) < 0)
                error("binding");
            fromlen = sizeof(struct sockaddr_in6);
            while (1) {
                n = recvfrom(sock, buf, 1024, 0, (struct sockaddr *)&from, &fromlen);
                if (n < 0)
                    error("recvfrom");
                write(1, "Received a datagram: ", 21);
                write(1, buf, n);
                n = sendto(sock, "Got your message\n", 17, 0, (struct sockaddr *)&from, fromlen);
                if (n < 0)
                    error("sendto");
            }
        }

    When I compile and run the above code I get: binding: Invalid argument. If I change it to bind ::1 and leave everything else in the source unchanged, the code works! So could you tell me what's wrong with my code? Thanks in advance.
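
    A sketch of the underlying requirement (the interface name below is only an example): a link-local fe80::/10 address is meaningful only together with an interface, so the bind address needs a scope id - in C, set server.sin6_scope_id (for example via if_nametoindex()), or build the sockaddr with getaddrinfo(), which fills it in when the address carries a %interface suffix. That is also why binding ::1 works unchanged: loopback needs no scope. Shown here in Python, where the same getaddrinfo() route applies:

        import socket

        addr = "fe80::21f:29ff:feed:2f7e%eth0"   # %eth0 names the interface; adjust to yours
        infos = socket.getaddrinfo(addr, 5555, socket.AF_INET6, socket.SOCK_DGRAM)
        family, socktype, proto, _, sockaddr = infos[0]
        print(sockaddr)        # (host, port, flowinfo, scope_id) with a non-zero scope_id

        sock = socket.socket(family, socktype, proto)
        sock.bind(sockaddr)    # succeeds once the scope id identifies the interface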

    Read the article

  • socket timeout and remove O_NONBLOCK option

    - by juxstapose
    Hello, I implemented a socket timeout and retry but in order to do it I had to set the socket as a non-blocking socket. However, I need the socket to block. This was my attempt at a solution to these two problems. This is not working. Subsequent send calls block but never send any data. When I connect without the select and the timeout, subsequent send calls work normally. References: C: socket connection timeout How to reset a socket back to blocking mode (after I set it to nonblocking mode)? Code: fd_set fdset; struct timeval tv; fcntl(dsock, F_SETFL, O_NONBLOCK); tv.tv_sec = theDeviceTimeout; tv.tv_usec = 0; int retries=0; logi(theLogOutput, LOG_INFO, "connecting to device socket num retrys: %i", theDeviceRetry); for(retries=0;retries<theDeviceRetry;retries++) { connect(dsock, (struct sockaddr *)&daddr, sizeof daddr); FD_ZERO(&fdset); FD_SET(dsock, &fdset); if (select(dsock + 1, NULL, &fdset, NULL, &tv) == 1) { int so_error; socklen_t slen = sizeof so_error; getsockopt(dsock, SOL_SOCKET, SO_ERROR, &so_error, &slen); if (so_error == 0) { logi(theLogOutput, LOG_INFO, "connected to socket on port %i on %s", theDevicePort, theDeviceIP); break; } else { logi(theLogOutput, LOG_WARN, "connect to %i failed on ip %s because %s retries %i", theDevicePort, theDeviceIP, strerror(errno), retries); logi(theLogOutput, LOG_WARN, "failed to connect to device %s", strerror(errno)); logi(theLogOutput, LOG_WARN, "error: %i %s", so_error, strerror(so_error)); continue; } } } int opts; opts = fcntl(dsock,F_GETFL); logi(theLogOutput, LOG_DEBUG, "clearing nonblock option %i retries %i", opts, retries); opts ^= O_NONBLOCK; fcntl(dsock, F_SETFL, opts);
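
    Two notes, followed by a Python sketch of the same connect-with-timeout-then-block pattern (names and values are illustrative). First, opts ^= O_NONBLOCK toggles the flag rather than clearing it; opts &= ~O_NONBLOCK clears it unconditionally. Second, a per-socket timeout can hide the temporary non-blocking dance entirely, after which the socket is returned to normal blocking mode:

        import socket

        def connect_with_timeout(host, port, timeout, retries):
            for attempt in range(retries):
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(timeout)        # non-blocking connect + wait, handled internally
                try:
                    s.connect((host, port))
                except OSError as exc:       # includes the timeout case
                    print("attempt %d failed: %s" % (attempt, exc))
                    s.close()
                    continue
                s.settimeout(None)           # back to a fully blocking socket
                return s
            return None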

    Read the article

  • Packet fragmentation when sending data via SSLStream

    - by Ive
    When using an SSLStream to send a 'large' chunk of data (1 meg) to an (already authenticated) client, the packet fragmentation / disassembly I'm seeing is FAR greater than when using a normal NetworkStream. Using an async read on the client (i.e. BeginRead()), the ReadCallback is repeatedly called with exactly the same size chunk of data up until the final packet (the remainder of the data). With the data I'm sending (it's a zip file), the segments happen to be 16363 bytes long. Note: my receive buffer is much bigger than this and changing its size has no effect. I understand that SSL encrypts data in chunks no bigger than 18 KB, but since SSL sits on top of TCP, I wouldn't think that the number of SSL chunks would have any relevance to the TCP packet fragmentation? Essentially, the data is taking about 20 times longer to be fully read by the client than with a standard NetworkStream (both on localhost!). What am I missing? EDIT: I'm beginning to suspect that the receive (or send) buffer size of an SSLStream is limited. Even if I use synchronous reads (i.e. SSLStream.Read()), no more data ever becomes available, regardless of how long I wait before attempting to read. This would be the same behavior as if I were to limit the receive buffer to 16363 bytes. Setting the underlying NetworkStream's SendBufferSize (on the server) and ReceiveBufferSize (on the client) has no effect.

    Read the article

  • Ruby TCPSocket doesn't notice it when server is killed

    - by user303308
    I've got this Ruby code that connects to a TCP server (namely, netcat). It loops 20 times and sends "ABCD ". If I kill netcat, it takes TWO iterations of the loop for an exception to be triggered. On the first loop after netcat is killed, no exception is triggered, and "send" reports that 5 bytes have been correctly written... which in the end is not true, since of course the server never received them. Is there a way to work around this issue? Right now I'm losing data: since I think it's been correctly transferred, I'm not replaying it.

        #!/usr/bin/env ruby
        require 'rubygems'
        require 'socket'

        sock = TCPSocket.new('192.168.0.10', 5443)
        sock.sync = true

        20.times do
          sleep 2
          begin
            count = sock.write("ABCD ")
            puts "Wrote #{count} bytes"
          rescue Exception => myException
            puts "Exception rescued : #{myException}"
          end
        end

    Read the article

  • sendto: Network unreachable

    - by devin
    Hello. I have two machines I'm testing my code on, one works fine, the other I'm having some problems and I don't know why it is. I'm using an object (C++) for the networking part of my project. On the server side, I do this: (error checking removed for clarity) res = getaddrinfo(NULL, port, &hints, &server)) < 0 for(p=server; p!=NULL; p=p->ai_next){ fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol); if(fd<0){ continue; } if(bind(fd, p->ai_addr, p->ai_addrlen)<0){ close(fd); continue; } break; } This all works. I then make an object with this constructor net::net(int fd, struct sockaddr *other, socklen_t *other_len){ int counter; this->fd = fd; if(other != NULL){ this->other.sa_family = other->sa_family; for(counter=0;counter<13;counter++) this->other.sa_data[counter]=other->sa_data[counter]; } else cerr << "Networking error" << endl; this->other_len = *other_len; } void net::gsend(string s){ if(sendto(this->fd, s.c_str(), s.size()+1, 0, &(this->other), this->other_len)<0){ cerr << "Error Sending, " << s << endl; cerr << strerror(errno) << endl; } return; } string net::grecv(){ stringstream ss; string s; char buf[BUFSIZE]; buf[BUFSIZE-1] = '\0'; if(recvfrom(this->fd, buf, BUFSIZE-1, 0, &(this->other), &(this->other_len))<0){ cerr << "Error Recieving\n"; cerr << strerror(errno) << endl; } // convert to c++ string and if there are multiple trailing ';' remove them ss << buf; s=ss.str(); while(s.find(";;", s.size()-2) != string::npos) s.erase(s.size()-1,1); return s; } So my problem is, is that on one machine, everything works fine. On another, everything works fine until I call my server's gsend() function. In which I get a "Error: Network Unreachable." I call gercv() first before calling gsend() too. Can anyone help me? I would really appreciate it.

    Read the article

  • How to use data receive event in Socket class?

    - by affan
    I have written a simple client that uses TcpClient in .NET to communicate. In order to wait for data messages from the server I use a read thread that makes a blocking Read() call on the socket. When I receive something I have to generate various events. These events occur in the worker thread, and thus you cannot update a UI from them directly. Invoke() can be used, but for an end developer it's difficult, as my SDK would be used by people who may not use a UI at all or who use Presentation Framework, which has a different way of handling this. Invoke() in our test app as a MicroStation add-in takes a lot of time at the moment. MicroStation is a single-threaded application and calling Invoke on its thread is not good, as it is always busy doing drawing and other work, so messages take too long to process. I want my events to be generated on the same thread as the UI so the user does not have to go through the Dispatcher or Invoke. Now I want to know: how can I be notified by the socket when data arrives? Is there a built-in callback for that? I would like a Winsock-style receive event without the use of a separate read thread. I also do not want to use a window timer to poll for data. I found the IOControlCode.AsyncIO flag for the IOControl() function, which the help says will "Enable notification for when data is waiting to be received. This value is equal to the Winsock 2 FIOASYNC constant." I could not find any example of how to use it to get a notification. If I were writing in MFC/Winsock, we would have to create a window of size (0,0) which was just used for listening for the data-receive event or other socket events. But I don't know how to do that in a .NET application.

    Read the article

  • Listening for TCP and UDP requests on the same port

    - by user339328
    I am writing a Client/Server set of programs Depending on the operation requested by the client, I use make TCP or UDP request. Implementing the client side is straight-forward, since I can easily open connection with any protocol and send the request to the server-side. On the servers-side, on the other hand, I would like to listen both for UDP and TCP connections on the same port. Moreover, I like the the server to open new thread for each connection request. I have adopted the approach explained in: link text I have extended this code sample by creating new threads for each TCP/UDP request. This works correctly if I use TCP only, but it fails when I attempt to make UDP bindings. Please give me any suggestion how can I correct this. tnx Here is the Server Code: public class Server { public static void main(String args[]) { try { int port = 4444; if (args.length > 0) port = Integer.parseInt(args[0]); SocketAddress localport = new InetSocketAddress(port); // Create and bind a tcp channel to listen for connections on. ServerSocketChannel tcpserver = ServerSocketChannel.open(); tcpserver.socket().bind(localport); // Also create and bind a DatagramChannel to listen on. DatagramChannel udpserver = DatagramChannel.open(); udpserver.socket().bind(localport); // Specify non-blocking mode for both channels, since our // Selector object will be doing the blocking for us. tcpserver.configureBlocking(false); udpserver.configureBlocking(false); // The Selector object is what allows us to block while waiting // for activity on either of the two channels. Selector selector = Selector.open(); tcpserver.register(selector, SelectionKey.OP_ACCEPT); udpserver.register(selector, SelectionKey.OP_READ); System.out.println("Server Sterted on port: " + port + "!"); //Load Map Utils.LoadMap("mapa"); System.out.println("Server map ... LOADED!"); // Now loop forever, processing client connections while(true) { try { selector.select(); Set<SelectionKey> keys = selector.selectedKeys(); // Iterate through the Set of keys. for (Iterator<SelectionKey> i = keys.iterator(); i.hasNext();) { SelectionKey key = i.next(); i.remove(); Channel c = key.channel(); if (key.isAcceptable() && c == tcpserver) { new TCPThread(tcpserver.accept().socket()).start(); } else if (key.isReadable() && c == udpserver) { new UDPThread(udpserver.socket()).start(); } } } catch (Exception e) { e.printStackTrace(); } } } catch (Exception e) { e.printStackTrace(); System.err.println(e); System.exit(1); } } } The UDPThread code: public class UDPThread extends Thread { private DatagramSocket socket = null; public UDPThread(DatagramSocket socket) { super("UDPThread"); this.socket = socket; } @Override public void run() { byte[] buffer = new byte[2048]; try { DatagramPacket packet = new DatagramPacket(buffer, buffer.length); socket.receive(packet); String inputLine = new String(buffer); String outputLine = Utils.processCommand(inputLine.trim()); DatagramPacket reply = new DatagramPacket(outputLine.getBytes(), outputLine.getBytes().length, packet.getAddress(), packet.getPort()); socket.send(reply); } catch (IOException e) { e.printStackTrace(); } socket.close(); } } I receive: Exception in thread "UDPThread" java.nio.channels.IllegalBlockingModeException at sun.nio.ch.DatagramSocketAdaptor.receive(Unknown Source) at server.UDPThread.run(UDPThread.java:25) 10x
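
    The IllegalBlockingModeException comes from calling udpserver.socket().receive() while the underlying DatagramChannel is registered non-blocking; with NIO the datagram has to be read through the channel itself (DatagramChannel.receive(ByteBuffer)) on the selector thread, and only the decoded request handed to a worker. A sketch of the same single-port TCP+UDP pattern using Python's selectors module (illustrative, not a translation of the server above):

        import selectors, socket

        sel = selectors.DefaultSelector()
        port = 4444

        tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        tcp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        tcp.bind(("", port))
        tcp.listen()
        tcp.setblocking(False)

        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        udp.bind(("", port))
        udp.setblocking(False)

        sel.register(tcp, selectors.EVENT_READ)
        sel.register(udp, selectors.EVENT_READ)

        while True:
            for key, _ in sel.select():
                if key.fileobj is tcp:
                    conn, addr = tcp.accept()        # hand conn off to a worker thread
                    print("TCP connection from", addr)
                    conn.close()
                else:
                    data, addr = udp.recvfrom(2048)  # read here, then dispatch the payload
                    udp.sendto(b"got it", addr)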

    Read the article

  • Can I make a TCP/IP session last less than 60 seconds?

    - by Pavel
    Our server is overloaded with TCP/IP sessions; we have 1200 - 1500 of them. Most of them are hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT occupies a socket until the 60-second time-out has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I have made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see that the TCP/IP connection hangs in TIME_WAIT for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or make it shorter, to free the socket for new connections? I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection after the XML file download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). The Apache "KeepAlive" option is set to 0.

    Read the article

  • GUI agent accepts statuses from a daemon and shows them using a progress indicator

    - by Pavel
    Hi to all! My application is a GUI agent, which communicate with daemon through the unix domain socket, wrapped in CFSocket.... So there are main loop and added CFRunLoop source. Daemon sends statuses and agent shows it with a progress indicator. When there are any data on socket, callback function begin to work and at this time I have to immediately show the new window with progress indicator and increase counter. //this function initiate the runloop for listening socket - (int) AcceptDaemonConnection:(ConnectionRef)conn { int err = 0; conn->fSockCF = CFSocketCreateWithNative(NULL, (CFSocketNativeHandle) conn->fSockFD, kCFSocketAcceptCallBack, ConnectionGotData, NULL); if (conn->fSockCF == NULL) err = EINVAL; if (err == 0) { conn->fRunLoopSource = CFSocketCreateRunLoopSource(NULL, conn->fSockCF, 0); if (conn->fRunLoopSource == NULL) err = EINVAL; else CFRunLoopAddSource(CFRunLoopGetCurrent(), conn->fRunLoopSource, kCFRunLoopDefaultMode); CFRelease(conn->fRunLoopSource); } return err; } // callback function void ConnectionGotData(CFSocketRef s, CFSocketCallBackType type, CFDataRef address, const void * data, void * info) { #pragma unused(s) #pragma unused(address) #pragma unused(info) assert(type == kCFSocketAcceptCallBack); assert( (int *) data != NULL ); assert( (*(int *) data) != -1 ); TStatusUpdate status; int nativeSocket = *(int *) data; status = [agg AcceptPacket:nativeSocket]; // [stWindow InitNewWindow] inside [agg SendUpdateStatus:status.percent]; } AcceptPacket function receives packet from the socket and trying to show new window with progress indicator. Corresponding function is called, but nothing happens... I think, that I have to make work the main application loop with interrupting CFSocket loop... Or send a notification? No idea....

    Read the article
