Search Results

Search found 1931 results on 78 pages for 'bsd sockets'.

Page 29/78 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • Sending HTTP headers with Python

    - by Niklas R
    I've set up a little script that should feed a client with HTML, but the page is shown as plain text in the browser:

        import socket

        sock = socket.socket()
        sock.bind(('', 8080))
        sock.listen(5)
        client, address = sock.accept()
        print "Incoming:", address
        print client.recv(1024)
        client.send("Content-Type: text/html\n\n")
        client.send('<html><body></body></html>')
        print "Answering ..."
        print "Finished."

        import os
        os.system("pause")

    Can you please tell me what I need to do? I just can't find anything on Google that helps me. Thanks.
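
    A likely cause, offered as a hedged guess: the response never starts with an HTTP status line ("HTTP/1.1 200 OK"), so the browser has no valid HTTP response in which to honor the Content-Type header. A minimal sketch of a well-formed response at the plain BSD sockets level (in C, matching this page's theme, not the poster's script):

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>

        int main(void) {
            int srv = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(8080);
            bind(srv, (struct sockaddr *)&addr, sizeof addr);
            listen(srv, 5);
            int client = accept(srv, NULL, NULL);
            /* Status line first, then headers, then a blank line, then the body. */
            const char *resp =
                "HTTP/1.1 200 OK\r\n"
                "Content-Type: text/html\r\n"
                "Content-Length: 26\r\n"
                "\r\n"
                "<html><body></body></html>";
            send(client, resp, strlen(resp), 0);
            close(client);
            close(srv);
            return 0;
        }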

  • Java Socket Connection is flooding network OR resulting in high ping

    - by user1461100
    I have a little problem with my Java socket code. I'm writing an Android client application which sends data to a multithreaded Java socket server on my PC through a direct(!) wireless connection. It works fine, but I want to improve it for mobile use, as it is currently very power-consuming. When I remove two particular lines from my code, the CPU usage on my device (HTC One X) is totally okay, but then the connection behaves as if it had very high ping. Here is a server snippet, where I receive the client's data:

        while(true) {
            try {
                ....
                Object obj = in.readObject();
                if(obj != null) {
                    Class clazz = obj.getClass();
                    String className = clazz.getName();
                    if(className.equals("java.lang.String")) {
                        String cmd = (String)obj;
                        if(cmd.equals("dc")) {
                            System.out.println("Client "+id+" disconnected!");
                            Server.connectedClients[id-1] = false;
                            break;
                        }
                        if(cmd.substring(0,1).equals("!")) {
                            robot.keyRelease(PlayerEnum.getKey(cmd,id));
                        } else {
                            robot.keyPress(PlayerEnum.getKey(cmd,id));
                        }
                    }
                }
            } catch ....

    Here's the client part, where I send my data in a while loop:

        private void networking() {
            try {
                if(client != null) {
                    ....
                    out.writeObject(sendQueue.poll());
                    ....
                }
            } catch ....

    Written this way, data is sent on every pass through the loop; when sendQueue is empty, a null "object" is sent. This results in "high" network traffic and "high" CPU usage, BUT all sent commands arrive nearly immediately. When I change the code to the following:

        while(true)
            ...
            if(sendQueue.peek() != null) {
                out.writeObject(sendQueue.poll());
            }
            ...

    the CPU usage is totally okay, but I get some lag: the commands do not arrive fast enough. As I said, it works fine (besides CPU usage) when I send data (those null objects included) on every loop iteration, but I'm sure this is very rough coding style, because I'm essentially flooding the network. Any hints? What am I doing wrong? Thanks for your help! Sincerely yours, maaft
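
    A common fix, as a hedged sketch rather than a drop-in answer: replace the busy poll with a blocking dequeue, so the sending thread sleeps until there is something to write (in Java, java.util.concurrent.BlockingQueue.take() does exactly this). The same idea at the C level with a pthread condition variable:

        #include <pthread.h>
        #include <stdlib.h>

        typedef struct node { void *data; struct node *next; } node_t;

        typedef struct {
            node_t *head, *tail;
            pthread_mutex_t mu;
            pthread_cond_t nonempty;
        } queue_t;

        void queue_put(queue_t *q, void *data) {
            node_t *n = malloc(sizeof *n);
            n->data = data;
            n->next = NULL;
            pthread_mutex_lock(&q->mu);
            if (q->tail) q->tail->next = n; else q->head = n;
            q->tail = n;
            pthread_cond_signal(&q->nonempty);   /* wake the sender */
            pthread_mutex_unlock(&q->mu);
        }

        /* Blocks until an item is available: no spinning, no null filler. */
        void *queue_take(queue_t *q) {
            pthread_mutex_lock(&q->mu);
            while (q->head == NULL)
                pthread_cond_wait(&q->nonempty, &q->mu);
            node_t *n = q->head;
            q->head = n->next;
            if (q->head == NULL) q->tail = NULL;
            pthread_mutex_unlock(&q->mu);
            void *data = n->data;
            free(n);
            return data;
        }

    The sending loop then becomes "take, write, repeat": it costs no CPU while idle, yet each queued command is written the instant it arrives.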

  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert them to a BufferedImage. In abbreviated code, the client:

        public String writeAndReadSocket(String request) {
            // Write text to the socket
            BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
            bufferedWriter.write(request);
            bufferedWriter.flush();

            // Read text from the socket
            BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));

            // Read the prefixed size
            int size = Integer.parseInt(bufferedReader.readLine());

            // Get that many bytes from the stream
            char[] buf = new char[size];
            bufferedReader.read(buf, 0, size);
            return new String(buf);
        }

        public BufferedImage stringToBufferedImage(String imageBytes) {
            return ImageIO.read(new ByteArrayInputStream(imageBytes.getBytes()));
        }

    and the server:

        # Twisted server code here
        # The analog of the following method is called with the proper client
        # request and the result is written to the socket.
        def worker_thread():
            img = draw_function()
            buf = StringIO.StringIO()
            img.save(buf, format="PNG")
            img_string = buf.getvalue()
            return "%i\r%s" % (sys.getsizeof(img_string), img_string)

    This works for sending and receiving Strings, but the image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case. Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the byte stream directly produces the same errors. I have a version of this working where the client socket isn't persistent, i.e. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.
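
    Two hedged observations on the guess above: sys.getsizeof(img_string) is the Python object's memory footprint (including object overhead), not the payload length, so the prefix should be len(img_string); and a Reader both decodes bytes into chars (corrupting binary PNG data) and may return fewer chars than requested from a single read() call. Exact-length binary framing is usually done with a loop over the raw stream, sketched here in C:

        #include <sys/types.h>
        #include <sys/socket.h>

        /* Keep calling recv() until exactly `len` bytes have arrived.
           Returns len on success, 0 if the peer closed, -1 on error. */
        ssize_t recv_exact(int fd, void *buf, size_t len) {
            size_t got = 0;
            while (got < len) {
                ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
                if (n <= 0) return n;
                got += n;
            }
            return (ssize_t)got;
        }

    In the Java client the equivalent is looping over InputStream.read() (or using DataInputStream.readFully()) with the byte count parsed from the header, bypassing Readers entirely for the image payload.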

  • Python 3.3 Webserver restarting problems

    - by IPDGino
    I have made a simple webserver in Python, and had some problems with it before, as described here: Python (3.3) Webserver script with an interesting error. In that question, the answer was to use a while True: loop so that any crashes or errors would be resolved instantly, because the server would just start itself again. I've used this for a while, and I still want the server to restart itself every few minutes, but on Linux, for some reason, it won't work for me. On Windows the code below works fine.

        # Handler class up here
        ...

        class Server:
            def __init__(self):
                self.server_class = HTTPServer
                self.server_adress = ('MY IP GOES HERE, or localhost', 8080)
                global httpd
                httpd = self.server_class(self.server_adress, Handler)
                self.main()

            def main(self):
                if count > 1:
                    global SERVER_UP_SINCE
                    HOUR_CHECK = int(((count - 1) * RESTART_INTERVAL) / 60)
                    SERVER_UPTIME = str(HOUR_CHECK) + " MINUTES"
                    if HOUR_CHECK > 60:
                        minutes = int(HOUR_CHECK % 60)
                        hours = int(HOUR_CHECK // 60)
                        SERVER_UPTIME = ("%s HOURS, %s MINUTES" % (str(hours), str(minutes)))
                    SERVING_ON_ADDR = self.server_adress
                    SERVER_UP_SINCE = str(SERVER_UP_SINCE)
                    SERVER_RESTART_NUMBER = count - 1
                    print("""
        SERVER INFO
        -------------------------------------
        SERVER_UPTIME: %s
        SERVER_UP_SINCE: %s
        TOTAL_FILES_SERVED: %d
        SERVING_ON_ADDR: %s
        SERVER_RESTART_NUMBER: %s
        \n\nSERVER HAS RESTARTED
        """ % (SERVER_UPTIME, SERVER_UP_SINCE, TOTAL_FILES, SERVING_ON_ADDR, SERVER_RESTART_NUMBER))
                else:
                    print("SERVER_BOOT=1\nSERVER_ONLINE=TRUE\nRESTART_LOOP=TRUE\nSERVING_ON_ADDR:%s" % str(self.server_adress))
                while True:
                    try:
                        httpd.serve_forever()
                    except KeyboardInterrupt:
                        print("Shutting down...")
                        break
                httpd.shutdown()
                httpd.socket.close()
                raise(SystemExit)
                return

        def server_restart():
            """If you want the restart timer to be longer, replace the number
            after the RESTART_INTERVAL variable"""
            global RESTART_INTERVAL
            RESTART_INTERVAL = 10
            threading.Timer(RESTART_INTERVAL, server_restart).start()
            global count
            count = count + 1
            instance = Server()

        if __name__ == "__main__":
            global SERVER_UP_SINCE
            SERVER_UP_SINCE = strftime("%d-%m-%Y %H:%M:%S", gmtime())
            server_restart()

    Basically, I make a thread to restart the server every 10 seconds (for testing purposes) and start the server. After ten seconds it says:

        File "/home/username/Desktop/Webserver/server.py", line 199, in __init__
            httpd = self.server_class(self.server_adress, Handler)
        File "/usr/lib/python3.3/socketserver.py", line 430, in __init__
            self.server_bind()
        File "/usr/lib/python3.3/http/server.py", line 135, in server_bind
            socketserver.TCPServer.server_bind(self)
        File "/usr/lib/python3.3/socketserver.py", line 441, in server_bind
            self.socket.bind(self.server_address)
        OSError: [Errno 98] Address already in use

    As you can see from the except KeyboardInterrupt branch, I have tried everything to make the server stop, and the program stop, but it will NOT stop. But what I really want to know is how to make this server able to restart without throwing these wonky errors.
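
    A hedged reading of the traceback: each timer tick constructs a brand-new HTTPServer while the previous one is still running and still bound to port 8080, so the second bind() fails with EADDRINUSE; the old httpd needs to be shut down and closed before (or instead of) creating its replacement. The related TIME_WAIT variant of this error is what SO_REUSEADDR addresses at the BSD sockets level, sketched in C:

        #include <sys/socket.h>
        #include <netinet/in.h>

        int make_listener(unsigned short port) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int yes = 1;
            /* Let bind() succeed while old connections on this port
               are still draining in TIME_WAIT. */
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);
            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);
            if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
                return -1;
            listen(fd, 5);
            return fd;
        }

    In socketserver terms the same switch is the allow_reuse_address class attribute, but it will not help here if the previous server instance genuinely still owns the port.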

  • C# socket blocking behavior

    - by Gearoid Murphy
    My situation is this: I have a C# TCP socket through which I receive structured messages consisting of a 3-byte header and a variable-size payload. The TCP data is routed through a network of tunnels and is occasionally susceptible to fragmentation. The solution to this is to perform a blocking read of 3 bytes for the header and a blocking read of N bytes for the variable-size payload (the value of N is in the header). The problem I'm experiencing is that occasionally the blocking receive operation returns a partial packet; that is, it reads fewer bytes than the number I explicitly set in the receive call. After some debugging, it appears that the number of bytes it returns is equal to the number of bytes in the socket's Available property before the receive op. This behavior is contrary to my expectation. If the socket is blocking and I explicitly set the number of bytes to receive, shouldn't the socket block until it receives those bytes? Any help, pointers, etc. would be much appreciated.
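
    A hedged answer: a blocking receive on a stream socket only blocks until some data is available, then returns whatever has arrived; the count you pass is a maximum, and TCP preserves no message boundaries. To block for exactly N bytes you either loop (as in the recv_exact sketch earlier on this page) or, at the BSD sockets level, pass MSG_WAITALL:

        #include <sys/types.h>
        #include <sys/socket.h>

        /* MSG_WAITALL asks the kernel to wait for the full amount.
           It can still return short on a signal, peer close, or error,
           so the result must be checked either way. */
        ssize_t recv_all(int fd, void *buf, size_t len) {
            return recv(fd, buf, len, MSG_WAITALL);
        }

    In .NET the idiomatic equivalent is looping on Socket.Receive() until the expected count has accumulated.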

  • Sending file over socket

    - by johannix
    I have a problem sending data as a file from one end of a socket to the other. What's happening is that both the server and the client are trying to read the file, so the file never gets sent. I was wondering how to have the client block until the server has completed reading the file sent from the client. I have this working with raw packets using send and recv, but figured this was a cleaner solution...

    Client:
      - connects to the server, creating the socket connection
      - creates a file on the socket and sends data
      - waits for a file from the server

    Server:
      - waits for a file from the client

    Complete interaction:
      - client sends data to the server
      - server sends data to the client
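
    One hedged approach: have each side half-close after sending, by calling shutdown() with SHUT_WR. The receiver's read loop then sees a clean EOF when the file is complete, while the connection stays open in the other direction for the reply. A C sketch:

        #include <sys/types.h>
        #include <sys/socket.h>

        /* Sender side: write everything, then half-close. */
        void send_file(int fd, const char *data, size_t len) {
            size_t off = 0;
            while (off < len) {
                ssize_t n = send(fd, data + off, len - off, 0);
                if (n <= 0) return;      /* error handling elided */
                off += n;
            }
            shutdown(fd, SHUT_WR);       /* EOF for the peer; we can still read */
        }

        /* Receiver side: recv() returning 0 marks the end of the file. */
        size_t recv_file(int fd, char *buf, size_t cap) {
            size_t off = 0;
            ssize_t n;
            while (off < cap && (n = recv(fd, buf + off, cap - off, 0)) > 0)
                off += n;
            return off;
        }

    The alternative, if the connection must carry several files in each direction, is length-prefix framing as in the recv_exact sketch above.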

  • Flex: client / server messaging question (RPC or socket ?)

    - by Patrick
    Hi, I'm building a Flex application which performs many server requests (let's say that almost all interactions require an update from the server). At the moment I'm using remote procedure calls for this, but I was wondering if using a socket would be better. In other words, is it better to keep the connection alive rather than performing many calls in sequence? For my demo app I only have one client. Is the number of clients connecting to the server a factor in this choice? Thanks.

  • inet_ntoa problem

    - by codingfreak
    Hi, I am declaring the following variables:

        unsigned long dstAddr;
        unsigned long gateWay;
        unsigned long mask;

    These variables contain IP addresses in network byte order. When I try to print them using the inet_ntoa function, the mask variable sometimes prints strange values:

        printf("%s\t%s\t%s\t", inet_ntoa(dstAddr), inet_ntoa(gateWay), inet_ntoa(mask));

        192.168.122.0   0.0.0.0   0.255.255.255

    but it should be:

        192.168.122.0   0.0.0.0   255.255.255.0

    So is this because of inet_ntoa?
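
    A hedged diagnosis: inet_ntoa() formats into a single static buffer, so three calls inside one printf() overwrite each other (and the order in which the arguments are evaluated is unspecified); separately, 0.255.255.255 is 255.255.255.0 with the bytes reversed, which points at a host/network byte-order mix-up for the mask. A sketch that copies each result out before printing, plus the reentrant alternative:

        #include <stdio.h>
        #include <string.h>
        #include <arpa/inet.h>

        void print_route(struct in_addr dst, struct in_addr gw, struct in_addr mask) {
            char d[INET_ADDRSTRLEN], g[INET_ADDRSTRLEN], m[INET_ADDRSTRLEN];
            /* Copy out of inet_ntoa()'s static buffer between calls... */
            strcpy(d, inet_ntoa(dst));
            strcpy(g, inet_ntoa(gw));
            strcpy(m, inet_ntoa(mask));
            printf("%s\t%s\t%s\n", d, g, m);
            /* ...or skip the shared buffer entirely with inet_ntop(). */
            inet_ntop(AF_INET, &mask, m, sizeof m);
        }

    Note also that inet_ntoa() takes a struct in_addr, not an unsigned long, so the declarations above only work by accident of layout.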

  • Why can't I bind an IPv6 socket to a link-local address?

    - by Haiyuan Zhang
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <netdb.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <strings.h>
        #include <unistd.h>
        #include <arpa/inet.h>

        void error(char *msg)
        {
            perror(msg);
            exit(0);
        }

        int main(int argc, char *argv[])
        {
            int sock, length, fromlen, n;
            struct sockaddr_in6 server;
            struct sockaddr_in6 from;
            int portNr = 5555;
            char buf[1024];

            length = sizeof(struct sockaddr_in6);
            sock = socket(AF_INET6, SOCK_DGRAM, 0);
            if (sock < 0)
                error("Opening socket");
            bzero((char *)&server, length);
            server.sin6_family = AF_INET6;
            server.sin6_addr = in6addr_any;
            server.sin6_port = htons(portNr);
            inet_pton(AF_INET6, "fe80::21f:29ff:feed:2f7e", (void *)&server.sin6_addr.s6_addr);
            //inet_pton(AF_INET6, "::1", (void *)&server.sin6_addr.s6_addr);
            if (bind(sock, (struct sockaddr *)&server, length) < 0)
                error("binding");
            fromlen = sizeof(struct sockaddr_in6);
            while (1) {
                n = recvfrom(sock, buf, 1024, 0, (struct sockaddr *)&from, &fromlen);
                if (n < 0)
                    error("recvfrom");
                write(1, "Received a datagram: ", 21);
                write(1, buf, n);
                n = sendto(sock, "Got your message\n", 17, 0, (struct sockaddr *)&from, fromlen);
                if (n < 0)
                    error("sendto");
            }
        }

    When I compile and run the above code I get:

        binding: Invalid argument

    If I instead bind ::1 (the commented-out line) and leave everything else in the source unchanged, the code works! So could you tell me what's wrong with my code? Thanks in advance.
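
    A hedged answer: link-local (fe80::/10) addresses are only meaningful relative to an interface, so the kernel requires sin6_scope_id to be set to the interface index before it will bind one; leaving it at zero is why bind() reports EINVAL, while ::1 (which needs no scope) works. A sketch:

        #include <net/if.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int bind_linklocal(int sock, const char *addr,
                           const char *ifname, unsigned short port) {
            struct sockaddr_in6 sa = {0};
            sa.sin6_family = AF_INET6;
            sa.sin6_port = htons(port);
            inet_pton(AF_INET6, addr, &sa.sin6_addr);
            /* Link-local addresses are per-interface: say which one. */
            sa.sin6_scope_id = if_nametoindex(ifname);
            return bind(sock, (struct sockaddr *)&sa, sizeof sa);
        }

    Called as, e.g., bind_linklocal(sock, "fe80::21f:29ff:feed:2f7e", "eth0", 5555), where "eth0" stands in for whichever interface actually owns the address.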

  • Asynchronous IO in Java?

    - by thr
    What options for async IO (socket-based) are there in Java other than java.nio? Also, does java.nio use threads in the background (as I believe .NET's async socket library does, though that may have changed), or is it "true" async IO using a proper select call?

  • socket timeout and remove O_NONBLOCK option

    - by juxstapose
    Hello, I implemented a socket timeout and retry, but in order to do it I had to put the socket into non-blocking mode. However, I need the socket to block. The code below was my attempt at solving both problems, and it is not working: subsequent send calls block but never send any data. When I connect without the select and the timeout, subsequent send calls work normally.

    References:
      - C: socket connection timeout
      - How to reset a socket back to blocking mode (after I set it to nonblocking mode)?

    Code:

        fd_set fdset;
        struct timeval tv;

        fcntl(dsock, F_SETFL, O_NONBLOCK);
        tv.tv_sec = theDeviceTimeout;
        tv.tv_usec = 0;
        int retries = 0;
        logi(theLogOutput, LOG_INFO, "connecting to device socket num retrys: %i", theDeviceRetry);
        for (retries = 0; retries < theDeviceRetry; retries++) {
            connect(dsock, (struct sockaddr *)&daddr, sizeof daddr);
            FD_ZERO(&fdset);
            FD_SET(dsock, &fdset);
            if (select(dsock + 1, NULL, &fdset, NULL, &tv) == 1) {
                int so_error;
                socklen_t slen = sizeof so_error;
                getsockopt(dsock, SOL_SOCKET, SO_ERROR, &so_error, &slen);
                if (so_error == 0) {
                    logi(theLogOutput, LOG_INFO, "connected to socket on port %i on %s", theDevicePort, theDeviceIP);
                    break;
                } else {
                    logi(theLogOutput, LOG_WARN, "connect to %i failed on ip %s because %s retries %i", theDevicePort, theDeviceIP, strerror(errno), retries);
                    logi(theLogOutput, LOG_WARN, "failed to connect to device %s", strerror(errno));
                    logi(theLogOutput, LOG_WARN, "error: %i %s", so_error, strerror(so_error));
                    continue;
                }
            }
        }

        int opts;
        opts = fcntl(dsock, F_GETFL);
        logi(theLogOutput, LOG_DEBUG, "clearing nonblock option %i retries %i", opts, retries);
        opts ^= O_NONBLOCK;
        fcntl(dsock, F_SETFL, opts);
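
    One hedged observation: opts ^= O_NONBLOCK toggles the flag rather than clearing it, so if anything along the way left the flag unset, this line switches the socket back into non-blocking mode. Clearing the bit explicitly is safer:

        #include <fcntl.h>

        /* Put a socket back into blocking mode: clear O_NONBLOCK with
           AND-NOT rather than XOR, which would re-set a cleared flag. */
        int set_blocking(int fd) {
            int flags = fcntl(fd, F_GETFL, 0);
            if (flags < 0) return -1;
            return fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
        }

    It may also be worth re-initializing tv before each select() call, since Linux's select() modifies the timeout in place, leaving later iterations with a zeroed timer.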

  • Packet fragmentation when sending data via SSLStream

    - by Ive
    When using an SslStream to send a 'large' chunk of data (1 MB) to an (already authenticated) client, the packet fragmentation/disassembly I'm seeing is FAR greater than when using a normal NetworkStream. Using an async read on the client (i.e. BeginRead()), the ReadCallback is repeatedly invoked with exactly the same size chunk of data, up until the final packet (the remainder of the data). With the data I'm sending (it's a zip file), the segments happen to be 16363 bytes long. Note: my receive buffer is much bigger than this, and changing its size has no effect. I understand that SSL encrypts data in chunks no bigger than 18 KB, but since SSL sits on top of TCP, I wouldn't think the number of SSL chunks would have any relevance to TCP packet fragmentation. Essentially, the data takes about 20 times longer to be fully read by the client than with a standard NetworkStream (both on localhost!). What am I missing?

    EDIT: I'm beginning to suspect that the receive (or send) buffer size of an SslStream is limited. Even if I use synchronous reads (i.e. SslStream.Read()), no more data ever becomes available, regardless of how long I wait before attempting to read. This is the same behavior as if I were to limit the receive buffer to 16363 bytes. Setting the underlying NetworkStream's SendBufferSize (on the server) and ReceiveBufferSize (on the client) has no effect.
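
    A hedged explanation: TLS is a record protocol, and each record carries at most 2^14 (16384) bytes of plaintext, which matches the repeating 16363-byte chunks (payload minus record overhead) far better than TCP fragmentation would; a decrypting read typically hands back at most one record's plaintext per call, regardless of the receive buffer size. The cure is the same read loop as for a plain socket, sketched here with OpenSSL rather than .NET:

        #include <openssl/ssl.h>

        /* SSL_read() returns at most one TLS record's worth of plaintext,
           so loop exactly as you would around recv(). */
        int ssl_read_exact(SSL *ssl, char *buf, int len) {
            int got = 0;
            while (got < len) {
                int n = SSL_read(ssl, buf + got, len - got);
                if (n <= 0) return n;   /* consult SSL_get_error() here */
                got += n;
            }
            return got;
        }

    The 20x slowdown itself is more likely per-read overhead in the callback chain than anything on the wire; that part is a guess from the numbers given.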

  • sendto: Network unreachable

    - by devin
    Hello. I have two machines I'm testing my code on; one works fine, and on the other I'm having some problems and I don't know why. I'm using an object (C++) for the networking part of my project. On the server side I do this (error checking removed for clarity):

        if ((res = getaddrinfo(NULL, port, &hints, &server)) < 0)
            /* ... */

        for (p = server; p != NULL; p = p->ai_next) {
            fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (fd < 0) {
                continue;
            }
            if (bind(fd, p->ai_addr, p->ai_addrlen) < 0) {
                close(fd);
                continue;
            }
            break;
        }

    This all works. I then make an object with this constructor:

        net::net(int fd, struct sockaddr *other, socklen_t *other_len) {
            int counter;
            this->fd = fd;
            if (other != NULL) {
                this->other.sa_family = other->sa_family;
                for (counter = 0; counter < 13; counter++)
                    this->other.sa_data[counter] = other->sa_data[counter];
            }
            else
                cerr << "Networking error" << endl;
            this->other_len = *other_len;
        }

        void net::gsend(string s) {
            if (sendto(this->fd, s.c_str(), s.size()+1, 0, &(this->other), this->other_len) < 0) {
                cerr << "Error Sending, " << s << endl;
                cerr << strerror(errno) << endl;
            }
            return;
        }

        string net::grecv() {
            stringstream ss;
            string s;
            char buf[BUFSIZE];
            buf[BUFSIZE-1] = '\0';
            if (recvfrom(this->fd, buf, BUFSIZE-1, 0, &(this->other), &(this->other_len)) < 0) {
                cerr << "Error Recieving\n";
                cerr << strerror(errno) << endl;
            }
            // convert to c++ string and if there are multiple trailing ';' remove them
            ss << buf;
            s = ss.str();
            while (s.find(";;", s.size()-2) != string::npos)
                s.erase(s.size()-1, 1);
            return s;
        }

    So my problem is that on one machine everything works fine. On the other, everything works fine until I call my server's gsend() function, at which point I get "Error: Network Unreachable." I call grecv() first, before calling gsend(), too. Can anyone help me? I would really appreciate it.
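
    Two hedged leads: ENETUNREACH from sendto() means the kernel found no route to whatever destination ended up in this->other, so the first step is to dump that address on the failing machine (note also that the constructor copies only sa_data[0..12], one byte short of the full 14, though an IPv4 address and port fit in the first 6 bytes). And since grecv() overwrites this->other with the sender of the last datagram, a reply issued after a corrupted or foreign receive aims at the wrong place. One way to pin the peer down and surface routing errors early is to connect() the UDP socket once:

        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        /* connect() on a UDP socket only fixes the peer address; after this,
           plain send()/recv() work and asynchronous ICMP errors (such as
           "network unreachable") are reported on the next call. */
        int udp_connect(int fd, const char *ip, unsigned short port) {
            struct sockaddr_in peer = {0};
            peer.sin_family = AF_INET;
            peer.sin_port = htons(port);
            inet_pton(AF_INET, ip, &peer.sin_addr);
            return connect(fd, (struct sockaddr *)&peer, sizeof peer);
        }

    This suits a fixed client/server pair; a server answering many peers would instead keep validating the address it replies to.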

  • Ruby TCPSocket doesn't notice when the server is killed

    - by user303308
    I have this Ruby code that connects to a TCP server (namely, netcat). It loops 20 times and sends "ABCD ". If I kill netcat, it takes TWO iterations of the loop for an exception to be triggered. On the first loop after netcat is killed, no exception is raised, and the write call reports that 5 bytes have been correctly written, which in the end is not true, since of course the server never received them. Is there a way to work around this issue? Right now I'm losing data: since I think it's been correctly transferred, I'm not replaying it.

        #!/usr/bin/env ruby
        require 'rubygems'
        require 'socket'

        sock = TCPSocket.new('192.168.0.10', 5443)
        sock.sync = true

        20.times do
          sleep 2
          begin
            count = sock.write("ABCD ")
            puts "Wrote #{count} bytes"
          rescue Exception => myException
            puts "Exception rescued : #{myException}"
          end
        end
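
    A hedged explanation: a successful write() only means the bytes reached the local kernel's send buffer. The dead peer's TCP stack answers the first segment with an RST, and it is the next write that fails (typically EPIPE/ECONNRESET); no socket API can confirm that the application on the far side consumed the data, so the reliable fix is an application-level acknowledgement before discarding anything. Keepalive probes can at least shorten how long a silently dead connection lingers, sketched in C:

        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <netinet/tcp.h>

        /* Not a delivery guarantee -- just makes the kernel probe an idle
           peer so a dead connection errors out sooner. The TCP_KEEP*
           knobs are Linux-specific; availability varies by OS. */
        int enable_keepalive(int fd) {
            int on = 1, idle = 10, intvl = 5, cnt = 3;
            if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
                return -1;
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof idle);
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl);
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof cnt);
            return 0;
        }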

  • How to use data receive event in Socket class?

    - by affan
    I have written a simple client that uses TcpClient in .NET to communicate. In order to wait for data messages from the server, I use a read thread that makes a blocking Read() call on the socket. When I receive something, I have to raise various events. These events occur on the worker thread, so you cannot update a UI from them directly. Invoke() can be used, but it is difficult for end developers: my SDK will be used by people who may not use a UI at all, or who use WPF, which has a different way of handling this. Invoke() in our test app, a MicroStation add-in, takes a long time at the moment: MicroStation is a single-threaded application, and calling Invoke on its thread is not good, as that thread is always busy drawing and doing other work, so messages take too long to process. I want my events to be raised on the same thread as the UI, so the user doesn't have to go through the Dispatcher or Invoke. Now, I want to know: how can I be notified by the socket when data arrives? Is there a built-in callback for that? I would like a WinSock-style receive event without a separate read thread, and I also do not want to use a window timer to poll for data. I found the IOControlCode.AsyncIO flag for the IOControl() function, whose help says: "Enable notification for when data is waiting to be received. This value is equal to the Winsock 2 FIOASYNC constant." But I could not find any example of how to use it to get a notification. If I were writing MFC/WinSock, we would create a window of size (0,0) which was just used for listening for the data-receive event and other socket events, but I don't know how to do that in a .NET application.

  • Can I make a TCP/IP session last less than 60 seconds?

    - by Pavel
    Our server is overloaded with TCP/IP sessions; we have 1200-1500 of them, most hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT occupies a socket until a 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see the TCP/IP connection hanging in TIME_WAIT for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or to make it shorter, to free the socket for new connections? I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection once the XML file download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). Apache's "Keep Alive" option is set to 0.
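
    Hedged suggestions rather than a definitive fix: TIME_WAIT exists to absorb stray duplicate segments, and it lands on whichever side closes first, so a server that closes first (and a Keep Alive 0 setting that forces one connection per request) concentrates the cost on the server. Re-enabling keep-alive, or arranging for clients to initiate the close, usually helps more than fighting the state itself, and SO_REUSEADDR (sketched earlier on this page) lets a listener rebind despite TIME_WAIT. The blunt instrument, an abortive close that skips TIME_WAIT entirely, looks like this in C, but it discards unsent data and is generally a last resort:

        #include <sys/socket.h>

        /* Send RST on close and skip TIME_WAIT. Unsent data is lost;
           use only when you understand the consequences. */
        void abortive_close_option(int fd) {
            struct linger lg = { .l_onoff = 1, .l_linger = 0 };
            setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
        }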

  • Listening for TCP and UDP requests on the same port

    - by user339328
    I am writing a client/server set of programs. Depending on the operation requested by the client, I make a TCP or UDP request. Implementing the client side is straightforward, since I can easily open a connection with either protocol and send the request to the server side. On the server side, on the other hand, I would like to listen for both UDP and TCP connections on the same port. Moreover, I would like the server to open a new thread for each connection request. I have adopted the approach explained in: link text. I have extended this code sample by creating new threads for each TCP/UDP request. This works correctly if I use TCP only, but it fails when I attempt to make UDP bindings. Please give me any suggestion on how I can correct this. Thanks.

    Here is the server code:

        public class Server {
            public static void main(String args[]) {
                try {
                    int port = 4444;
                    if (args.length > 0)
                        port = Integer.parseInt(args[0]);

                    SocketAddress localport = new InetSocketAddress(port);

                    // Create and bind a tcp channel to listen for connections on.
                    ServerSocketChannel tcpserver = ServerSocketChannel.open();
                    tcpserver.socket().bind(localport);

                    // Also create and bind a DatagramChannel to listen on.
                    DatagramChannel udpserver = DatagramChannel.open();
                    udpserver.socket().bind(localport);

                    // Specify non-blocking mode for both channels, since our
                    // Selector object will be doing the blocking for us.
                    tcpserver.configureBlocking(false);
                    udpserver.configureBlocking(false);

                    // The Selector object is what allows us to block while waiting
                    // for activity on either of the two channels.
                    Selector selector = Selector.open();
                    tcpserver.register(selector, SelectionKey.OP_ACCEPT);
                    udpserver.register(selector, SelectionKey.OP_READ);

                    System.out.println("Server Started on port: " + port + "!");

                    // Load Map
                    Utils.LoadMap("mapa");
                    System.out.println("Server map ... LOADED!");

                    // Now loop forever, processing client connections
                    while (true) {
                        try {
                            selector.select();
                            Set<SelectionKey> keys = selector.selectedKeys();

                            // Iterate through the Set of keys.
                            for (Iterator<SelectionKey> i = keys.iterator(); i.hasNext();) {
                                SelectionKey key = i.next();
                                i.remove();

                                Channel c = key.channel();
                                if (key.isAcceptable() && c == tcpserver) {
                                    new TCPThread(tcpserver.accept().socket()).start();
                                } else if (key.isReadable() && c == udpserver) {
                                    new UDPThread(udpserver.socket()).start();
                                }
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                    System.err.println(e);
                    System.exit(1);
                }
            }
        }

    The UDPThread code:

        public class UDPThread extends Thread {
            private DatagramSocket socket = null;

            public UDPThread(DatagramSocket socket) {
                super("UDPThread");
                this.socket = socket;
            }

            @Override
            public void run() {
                byte[] buffer = new byte[2048];
                try {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);

                    String inputLine = new String(buffer);
                    String outputLine = Utils.processCommand(inputLine.trim());

                    DatagramPacket reply = new DatagramPacket(outputLine.getBytes(), outputLine.getBytes().length,
                            packet.getAddress(), packet.getPort());
                    socket.send(reply);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                socket.close();
            }
        }

    I receive:

        Exception in thread "UDPThread" java.nio.channels.IllegalBlockingModeException
            at sun.nio.ch.DatagramSocketAdaptor.receive(Unknown Source)
            at server.UDPThread.run(UDPThread.java:25)

    Thanks.
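
    A hedged root cause: the DatagramChannel was put into non-blocking mode for the Selector, and the DatagramSocket adaptor returned by udpserver.socket() refuses to receive() on a non-blocking channel, which is exactly the IllegalBlockingModeException seen; reading via udpserver.receive(ByteBuffer) inside the selector loop (and handing the decoded packet, not the shared socket, to the worker) avoids it. Note also that closing the shared socket in UDPThread kills the listener for everyone. For reference, the same one-port TCP+UDP pattern at the BSD sockets level with select():

        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <netinet/in.h>

        int main(void) {
            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(4444);

            int tcp = socket(AF_INET, SOCK_STREAM, 0);
            int udp = socket(AF_INET, SOCK_DGRAM, 0);
            bind(tcp, (struct sockaddr *)&addr, sizeof addr);  /* same port,    */
            bind(udp, (struct sockaddr *)&addr, sizeof addr);  /* two protocols */
            listen(tcp, 16);

            for (;;) {
                fd_set rd;
                FD_ZERO(&rd);
                FD_SET(tcp, &rd);
                FD_SET(udp, &rd);
                int maxfd = tcp > udp ? tcp : udp;
                if (select(maxfd + 1, &rd, NULL, NULL, NULL) < 0)
                    break;

                if (FD_ISSET(tcp, &rd)) {
                    int c = accept(tcp, NULL, NULL);
                    /* hand c off to a worker thread here */
                    close(c);
                }
                if (FD_ISSET(udp, &rd)) {
                    char buf[2048];
                    struct sockaddr_in peer;
                    socklen_t plen = sizeof peer;
                    ssize_t n = recvfrom(udp, buf, sizeof buf, 0,
                                         (struct sockaddr *)&peer, &plen);
                    if (n > 0)  /* echo back to whoever sent it */
                        sendto(udp, buf, n, 0, (struct sockaddr *)&peer, plen);
                }
            }
            return 0;
        }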

  • GUI agent accepts statuses from daemon and shows them using a progress indicator

    - by Pavel
    Hi to all! My application is a GUI agent which communicates with a daemon through a Unix domain socket, wrapped in CFSocket, so there is a main loop with an added CFRunLoop source. The daemon sends statuses and the agent shows them with a progress indicator. When there is data on the socket, the callback function runs, and at that point I have to immediately show a new window with a progress indicator and increase the counter.

        // this function initiates the run loop for the listening socket
        - (int) AcceptDaemonConnection:(ConnectionRef)conn
        {
            int err = 0;
            conn->fSockCF = CFSocketCreateWithNative(NULL, (CFSocketNativeHandle) conn->fSockFD,
                                                     kCFSocketAcceptCallBack, ConnectionGotData, NULL);
            if (conn->fSockCF == NULL)
                err = EINVAL;
            if (err == 0) {
                conn->fRunLoopSource = CFSocketCreateRunLoopSource(NULL, conn->fSockCF, 0);
                if (conn->fRunLoopSource == NULL)
                    err = EINVAL;
                else
                    CFRunLoopAddSource(CFRunLoopGetCurrent(), conn->fRunLoopSource, kCFRunLoopDefaultMode);
                CFRelease(conn->fRunLoopSource);
            }
            return err;
        }

        // callback function
        void ConnectionGotData(CFSocketRef s, CFSocketCallBackType type, CFDataRef address,
                               const void * data, void * info)
        {
        #pragma unused(s)
        #pragma unused(address)
        #pragma unused(info)
            assert(type == kCFSocketAcceptCallBack);
            assert( (int *) data != NULL );
            assert( (*(int *) data) != -1 );

            TStatusUpdate status;
            int nativeSocket = *(int *) data;
            status = [agg AcceptPacket:nativeSocket];   // [stWindow InitNewWindow] inside
            [agg SendUpdateStatus:status.percent];
        }

    AcceptPacket receives a packet from the socket and tries to show a new window with a progress indicator. The corresponding function is called, but nothing happens... I think I have to make the main application loop do the work, interrupting the CFSocket loop... or send a notification? No idea....

  • How do I send telnet option codes?

    - by Matt
    I've written a socket listener in Java that just sends some data to the client. If I connect to the server using telnet, I want the server to send some telnet option codes. Do I just send these like normal messages? For example, if I wanted the client to print "hello", I would do this:

        PrintWriter out = new PrintWriter(clientSocket.getOutputStream());
        out.print("hello");
        out.flush();

    But when I try to send option codes, the client just prints them. E.g., the IAC char (0xff) just gets printed as a strange y character when I do this:

        PrintWriter out = new PrintWriter(clientSocket.getOutputStream());
        out.print((char)0xff);
        out.flush();
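
    A hedged answer: option codes do travel in-band like normal messages, but they must arrive as the raw bytes IAC, verb, option; a character Writer runs them through charset encoding (0xFF alone is 'ÿ' in Latin-1, or two bytes in UTF-8), and a lone IAC followed by nothing the client recognizes as a command just gets displayed. Writing raw bytes on the OutputStream avoids the mangling. The same negotiation at the byte level, in C:

        #include <sys/socket.h>

        /* Telnet "IAC WILL ECHO": bytes 255, 251, 1 (RFC 854/857),
           sent on the same stream as the ordinary text. */
        void send_will_echo(int fd) {
            unsigned char cmd[3] = { 255, 251, 1 };   /* IAC, WILL, ECHO */
            send(fd, cmd, sizeof cmd, 0);
        }

    In the Java listener the equivalent is clientSocket.getOutputStream().write(new byte[]{ (byte)255, (byte)251, 1 }) followed by a flush().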

  • buffer size for socket connection in c++

    - by wyatt
    I'm trying to build a basic POP3 mail client in C/C++, but I've run into a bit of an issue. You have to define the buffer size when building the program, yet a message can be arbitrarily large. How do you, say, get the mail server to send it to you in parts? And if this isn't the correct means of solving the problem, what is? And while I'm here, can anyone confirm for me that RFC 2822 is still the current document defining email layout? Thanks.
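
    A hedged sketch: the compile-time buffer only bounds how much you read per call, not the message size. You loop, appending each chunk to a growable buffer, until the POP3 end-of-message marker (a line containing a single '.', i.e. the byte sequence \r\n.\r\n) arrives; the server needs no special prompting to send in parts, since TCP already delivers the stream piecewise. (On the side question: RFC 5322 has since obsoleted RFC 2822 as the message-format specification.)

        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/socket.h>

        /* Accumulate a POP3 RETR response until \r\n.\r\n.
           Caller frees the returned buffer; error handling elided. */
        char *read_message(int fd) {
            size_t cap = 4096, len = 0;
            char *msg = malloc(cap);
            char chunk[1024];
            ssize_t n;
            while ((n = recv(fd, chunk, sizeof chunk, 0)) > 0) {
                if (len + n + 1 > cap) {          /* grow as needed */
                    cap *= 2;
                    msg = realloc(msg, cap);
                }
                memcpy(msg + len, chunk, n);
                len += n;
                msg[len] = '\0';
                if (len >= 5 && strcmp(msg + len - 5, "\r\n.\r\n") == 0)
                    break;                        /* end of message */
            }
            return msg;
        }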

  • Polling servers at the same port - Threads and Java

    - by John
    Hi there. I'm currently busy working on an IP-ban tool for the early versions of Call of Duty 1 (apparently such a feature wasn't implemented in those versions). I've finished a single-threaded application, but it won't perform well enough for multiple servers, which is why I am trying to implement threading. Right now, each server has its own thread. I have a Networking class which has a method, GetStatus; this method is synchronized. It uses a DatagramSocket to communicate with the server. Since this method is static and synchronized, I shouldn't get in trouble and receive a whole bunch of "Address already in use" exceptions. However, I have a second method named SendMessage. This method is supposed to send a message to the server. How can I make sure SendMessage cannot be invoked while there's already a thread running in GetStatus, and the other way around? If I make both synchronized, will I still get in trouble if thread A is opening a socket on port 99999 and invoking SendMessage while thread B is opening a socket on the same port and invoking GetStatus? (Game servers are usually hosted on the same ports.) I guess what I am really after is a way to make an entire class synchronized, so that only one method can be invoked and run at a time by a single thread. I hope what I am trying to accomplish/avoid is made clear in this text. Any help is greatly appreciated.

  • "Can't open socket or connection refused" with .NET

    - by HoNgOuRu
    I'm getting a connection refused when I try to send some data to my server app using netcat. Server side:

        IPAddress ip;
        ip = Dns.GetHostEntry("localhost").AddressList[0];
        IPEndPoint ipFinal = new IPEndPoint(ip, 12345);
        Socket socket = new Socket(AddressFamily.InterNetworkV6, SocketType.Stream, ProtocolType.Tcp);
        socket.Bind(ipFinal);
        socket.Listen(100);
        Socket handler = socket.Accept();   // ------> it stops here...... nothing happens
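
    A hedged diagnosis: Dns.GetHostEntry("localhost").AddressList[0] commonly resolves to the IPv6 loopback ::1, and the socket is explicitly InterNetworkV6, so the server listens only on IPv6 while netcat is likely connecting over IPv4 to, hence the refusal (blocking in Accept() is normal until a client actually connects). Options: use netcat's IPv6 mode, bind the IPv4 loopback instead, or open a dual-stack socket. The last of these at the BSD sockets level:

        #include <sys/socket.h>
        #include <netinet/in.h>

        /* One listener for both families: IPv4 clients appear as
           mapped ::ffff:a.b.c.d addresses. */
        int dual_stack_listener(unsigned short port) {
            int fd = socket(AF_INET6, SOCK_STREAM, 0);
            int off = 0;
            setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof off);
            struct sockaddr_in6 addr = {0};
            addr.sin6_family = AF_INET6;
            addr.sin6_addr = in6addr_any;
            addr.sin6_port = htons(port);
            if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
                return -1;
            listen(fd, 100);
            return fd;
        }

    In .NET the matching switch is Socket.DualMode (or clearing SocketOptionName.IPv6Only on older frameworks).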

  • How to "unbind" a socket programmatically?

    - by ryan1894
    1) The socket doesn't seem to unbind from the LocalEndPoint until the process ends.

    2) I have tried the solutions from the other question, and also tried waiting a minute, to no avail.

    3) At the moment I have tried the below to get rid of the socket and its connections:

        public static void killUser(User victim)
        {
            LingerOption lo = new LingerOption(false, 0);
            victim.connectedSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Linger, lo);
            victim.connectedSocket.Shutdown(SocketShutdown.Both);
            victim.connectedSocket.Disconnect(true);
            victim.connectedSocket.Close();
            clients.RemoveAt(victim.ID);
        }

    4) After a bit of googling, I can't seem to be able to unbind a port; thus, if I have a sufficient number of connecting clients, I will eventually run out of ports to listen on.

  • Communicate between separate MPI-Programs

    - by Fyg
    I have the following problem: program 1 has a huge amount of data, say 10 GB. The data in question consists of large integer and double arrays. Program 2 has 1..n MPI processes that use tiles of this data to compute results. How can I send the data from program 1 to the MPI processes? Using file I/O is out of the question. The compute node has sufficient RAM.
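
    One hedged option, given that everything runs on a single node with enough RAM: program 1 publishes the arrays in POSIX shared memory and each MPI rank maps them read-only; MPI itself also offers MPI_Comm_connect/MPI_Comm_accept for messaging between separately started programs, at the cost of copying. A sketch of the shared-memory route, with "/bigdata" as a made-up segment name and error handling elided:

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Program 1: create the segment and fill it with the arrays. */
        double *publish(size_t nbytes) {
            int fd = shm_open("/bigdata", O_CREAT | O_RDWR, 0600);
            ftruncate(fd, (off_t)nbytes);
            return mmap(NULL, nbytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        }

        /* Each MPI rank: map the same segment read-only. */
        const double *attach(size_t nbytes) {
            int fd = shm_open("/bigdata", O_RDONLY, 0);
            return mmap(NULL, nbytes, PROT_READ, MAP_SHARED, fd, 0);
        }

    Ranks then read their tiles directly out of the mapping with no per-byte transfer at all; on Linux, link with -lrt on older glibc versions.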
