Search Results

Search found 3513 results on 141 pages for 'raw sockets'.

Page 23/141

  • Oh that XML - did you ever try to read a raw file?

    - by GGBlogger
    If you've ever looked at a raw XML file - even a very simple one - you'll understand: XML files are nearly impossible to read in raw format. That's where various tools come in, and there are a bunch of them, including some very simple ones. If, however, you need some horsepower, one of the best tools on the planet is LiquidXML! LiquidXML is a developer's tool. It's also an analyst's tool, a tester's tool and a designer's tool. Did I mention that it is compatible with Visual Studio? Once again, I will be following up on this as time permits. But if this sounds like something you can use, just visit http://www.liquid-technologies.com/. You will find a very complete description plus high-quality training videos that will help you decide if this is a tool you can use.

    Read the article

  • How can I limit the number of open AF_INET sockets?

    - by Stefano Palazzo
    Is there a way to limit the number of concurrently open AF_INET sockets (only)? If so, how do I do it, and how will the networking behave if I'm above the limit? For background: my cheap commodity router is a bit eager to detect 'syn flooding'. When it does, it crashes (and doesn't automatically restart itself). I'm thinking that limiting concurrent connections to around 1000 should keep it from bickering.
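
    One application-level approach, as a minimal Python sketch of the idea (this is a per-process cap, not an OS-wide limit; the figure of 1000 is taken from the question): funnel every outbound connection through a counting semaphore.

        import socket
        import threading

        MAX_INET_SOCKETS = 1000
        _slots = threading.BoundedSemaphore(MAX_INET_SOCKETS)

        def open_connection(host, port):
            # Blocks until one of the slots is free, so this process never
            # holds more than MAX_INET_SOCKETS AF_INET sockets at once.
            _slots.acquire()
            try:
                return socket.create_connection((host, port))
            except Exception:
                _slots.release()
                raise

        def close_connection(sock):
            sock.close()
            _slots.release()

    Anything above the limit simply waits rather than failing, which also smooths out the connection bursts that look like a syn flood to the router.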

    Read the article

  • Server socket programming in Android 1.5, most power efficient way?

    - by Antek
    Hello people, I am doing a project where I have to develop an application that listens for incoming events from a service. The device that has to listen for events is an Android phone with Android SDK 1.5 on it. Currently the services that raise events only implement communication through UDP or TCP sockets. I can solve my problem by setting up a ServerSocket, but I doubt that's the most power-efficient way. This application will be running most of the time, with Wi-Fi on, and I'd like to reach a long battery duration. I've been looking for answers to my question on the internet for a while, but I couldn't get a real one. I've got the following questions: What is the most efficient way to listen for incoming events? Should I make a ServerSocket, or what are my options? Are there any other implementations that are more power-efficient? I've also been thinking of implementing communication through XMPP; I'm not sure if this is the best way. I'm not forced to a specific implementation. All suggestions are welcome! Thanks for the help, Antek

    Read the article

  • Poco SocketReactors for a Proxy Server

    - by Genesis
    Can anyone give me some idea of the best way to implement a non-blocking proxy server using a Poco SocketReactor? Currently I have a blocking implementation: if a readable notification arrives from the client, I write what is read directly to the server, and if a readable notification arrives from the server, I write what is read directly to the client. To achieve this I keep the thread that initiated the server connection alive, but I would prefer to switch to non-blocking and have any threads used to initiate a connection removed once the server and client sockets are registered with the reactor and the SOCKS5 handshake is over. With a SocketReactor one can register event handlers for a single socket, but the trouble is that I would need to store whatever is read from that socket in a global buffer until the corresponding server socket is ready to be written to, as from my testing I don't seem to be able to just write directly to the server when client data arrives. I am thinking of using a struct that contains the client socket, server socket, client buffer and server buffer, and whenever a writable notification comes along for either the client or the server, finding the corresponding buffer and writing it. Any thoughts?
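
    For what it's worth, that paired-buffer struct maps naturally onto any readiness API. Here is a minimal sketch of it using Python's selectors module (not Poco, so all names are illustrative): each connection pair carries two byte buffers, reads append to the buffer of the opposite side, and writes drain whatever has accumulated.

        import selectors

        sel = selectors.DefaultSelector()

        class Pair:
            """State for one proxied connection: two sockets, two pending output buffers."""
            def __init__(self, client, server):
                self.buf = {client: bytearray(), server: bytearray()}  # bytes waiting per socket
                self.peer = {client: server, server: client}

        # Once the SOCKS5 handshake is done, each socket of a pair would be registered:
        #   sel.register(sock, selectors.EVENT_READ | selectors.EVENT_WRITE, data=pair)
        # and the loop dispatches:
        #   for key, mask in sel.select():
        #       on_ready(key.fileobj, mask, key.data)
        # (a production reactor would enable EVENT_WRITE only while a buffer is
        # non-empty, to avoid spinning on always-writable sockets)

        def on_ready(sock, mask, pair):
            if mask & selectors.EVENT_READ:
                data = sock.recv(4096)
                if not data:                        # peer closed: tear down both directions
                    for s in list(pair.peer):
                        sel.unregister(s)
                        s.close()
                    return
                pair.buf[pair.peer[sock]] += data   # queue the data for the opposite side
            if mask & selectors.EVENT_WRITE and pair.buf[sock]:
                sent = sock.send(pair.buf[sock])
                del pair.buf[sock][:sent]           # keep whatever didn't fit for next time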

    Read the article

  • Nonblocking TCP server

    - by hoodoos
    It's not a question really, I'm just looking for some guidelines :) I'm currently writing an abstract TCP server which should use as low a number of threads as it can. Currently it works this way: I have a thread doing listening and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. Worker threads do all the read/write/processing work on the client sockets. So my problem is in building an efficient worker process, and I came to a problem I can't really solve yet. The worker code is something like this (the code is really simple, just to show the place where I have my problem):

        List<Socket> readSockets = new List<Socket>();
        List<Socket> writeSockets = new List<Socket>();
        List<Socket> errorSockets = new List<Socket>();

        while (true)
        {
            Socket.Select(readSockets, writeSockets, errorSockets, 10);
            foreach (Socket readSocket in readSockets)
            {
                // do reading here
            }
            foreach (Socket writeSocket in writeSockets)
            {
                // do writing here
            }
            // POINT2 and here's the problem I will describe below
        }

    It all works smoothly except for 100% CPU utilization, because the while loop cycles over and over again. If I have my clients doing a send-receive-disconnect routine it's not that painful, but if I try to keep them alive doing send-receive-send-receive over and over, it really eats up all the CPU. So my first idea was to put a sleep there: I check if all sockets have their data sent and then put a Thread.Sleep at POINT2, just for 10 ms. But that 10 ms later produces a huge delay of those same 10 ms when I want to receive the next command from a client socket. For example, if I don't try to "keep alive", commands are executed within 10-15 ms, and with keep-alive it becomes worse by at least 10 ms :( Maybe it's just a poor architecture? What can be done so my processor won't get 100% utilization and my server reacts to something appearing on a client socket as soon as possible? Maybe somebody can point to a good example of a nonblocking server and the architecture it should maintain?
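
    One observation, sketched in Python since the pattern is the same everywhere (.NET's Socket.Select treats a microSeconds value of -1 as an infinite timeout): a blocking select with no timeout burns no CPU while idle and wakes the instant data arrives, and asking about writability only for sockets that actually have queued output keeps the call from returning immediately on always-writable sockets.

        import select

        def worker_loop(sockets, pending):
            # pending: dict mapping each socket to a bytearray of queued output.
            while sockets:
                # Only poll writability where there is something to write;
                # otherwise select() would return immediately every time.
                want_write = [s for s in sockets if pending[s]]
                # No timeout argument: block in the kernel until something is ready.
                readable, writable, _ = select.select(sockets, want_write, [])
                for s in readable:
                    data = s.recv(4096)        # do reading here
                for s in writable:
                    sent = s.send(pending[s])  # do writing here
                    del pending[s][:sent]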

    Read the article

  • Make Python Socket Server More Efficient

    - by BenMills
    I have very little experience working with sockets and multithreaded programming, so to learn more I decided to see if I could hack together a little Python socket server to power a chat room. I ended up getting it working pretty well, but then I noticed my server's CPU usage spiked to over 100% when I had it running in the background. Here is my code in full: http://gist.github.com/332132 I know this is a pretty open-ended question, so besides just helping with my code, are there any good articles I could read that could help me learn more about this? My full code:

        import select
        import socket
        import sys
        import threading
        from daemon import Daemon

        class Server:
            def __init__(self):
                self.host = ''
                self.port = 9998
                self.backlog = 5
                self.size = 1024
                self.server = None
                self.threads = []
                self.send_count = 0

            def open_socket(self):
                try:
                    self.server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
                    self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                    self.server.bind((self.host, self.port))
                    self.server.listen(5)
                    print "Server Started..."
                except socket.error, (value, message):
                    if self.server:
                        self.server.close()
                    print "Could not open socket: " + message
                    sys.exit(1)

            def remove_thread(self, t):
                t.join()

            def send_to_children(self, msg):
                self.send_count = 0
                for t in self.threads:
                    t.send_msg(msg)
                print 'Sent to '+str(self.send_count)+" of "+str(len(self.threads))

            def run(self):
                self.open_socket()
                input = [self.server, sys.stdin]
                running = 1
                while running:
                    inputready, outputready, exceptready = select.select(input, [], [])
                    for s in inputready:
                        if s == self.server:
                            # handle the server socket
                            c = Client(self.server.accept(), self)
                            c.start()
                            self.threads.append(c)
                            print "Num of clients: "+str(len(self.threads))
                self.server.close()
                for c in self.threads:
                    c.join()

        class Client(threading.Thread):
            def __init__(self, (client, address), server):
                threading.Thread.__init__(self)
                self.client = client
                self.address = address
                self.size = 1024
                self.server = server
                self.running = True

            def send_msg(self, msg):
                if self.running:
                    self.client.send(msg)
                    self.server.send_count += 1

            def run(self):
                while self.running:
                    data = self.client.recv(self.size)
                    if data:
                        print data
                        self.server.send_to_children(data)
                    else:
                        self.running = False
                        self.server.threads.remove(self)
                        self.client.close()

        """ Run Server """
        class DaemonServer(Daemon):
            def run(self):
                s = Server()
                s.run()

        if __name__ == "__main__":
            d = DaemonServer('/var/servers/fserver.pid')
            if len(sys.argv) == 2:
                if 'start' == sys.argv[1]:
                    d.start()
                elif 'stop' == sys.argv[1]:
                    d.stop()
                elif 'restart' == sys.argv[1]:
                    d.restart()
                else:
                    print "Unknown command"
                    sys.exit(2)
                sys.exit(0)
            else:
                print "usage: %s start|stop|restart" % sys.argv[0]
                sys.exit(2)

    Read the article

  • why does text from socket server erase previously written text?

    - by mix
    This is strange enough that I'm not sure how to search for an answer. I have a program in Python that communicates via TCP/IP sockets with a telnet-based server. If I telnet in manually and type commands like this:

        SET MDI G0 X0 Y0

    the server will spit back a line like this:

        SET MDI ACK

    Pretty standard stuff. Here's the weird part. If, in my code, I precede my printing of each of these lines with some text, the returned line erases what I'm trying to print before it. So for example, if I write the code so it should look like this:

        SENT: SET MDI G0 X0 Y0
        READ: SET MDI ACK

    What I get instead is:

        SENT: SET MDI G0 X0 Y0
        SET MDI ACK

    Now, if I make the "READ: " text a bit longer, I can get a better idea of what's happening. Let's say I change READ: to 12345678901234567890, so that it should read as:

        12345678901234567890: SET MDI ACK

    What I get instead is:

        SET MDI ACK234567890:

    So it seems like whatever text I'm getting back from the server is somehow deleting what I'm trying to precede it with. I tried saving all of my received lines in a list and then printing them out at the end, but it does exactly the same thing. Any ideas on what's going on, or even on how to debug this? Is there a way to get Python to show me any hidden chars in a string, for example? thx!
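
    On the last point: Python will show hidden characters if you print the repr() of the string rather than the string itself. A quick sketch (the reply text is taken from the question; the stray carriage return is a hypothesis, but a \r embedded in the reply would produce exactly this symptom, since it moves the cursor back to column 0 so the reply overwrites the prefix):

        line = "\rSET MDI ACK"   # hypothetical reply containing a stray carriage return
        print("12345678901234567890: " + line)   # terminal shows: SET MDI ACK234567890:
        print(repr(line))                        # repr makes it visible: '\rSET MDI ACK'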

    Read the article

  • java.net.SocketException: Software caused connection abort: recv failed; Causes and cures?

    - by IVR Avenger
    Hi, all. I've got an application running on Apache Tomcat 5.5 on a Win2k3 VM. The application serves up XML to be consumed by some telephony appliances as part of our IVR infrastructure. The application, in turn, receives its information from a handful of SOAP services. This morning, the SOAP services were timing out intermittently, causing all sorts of Exceptions. Once these stopped, I noticed that our application was still performing very slowly, in that it took a long time to render and deliver pages. This sluggishness was noticed both on the appliances that consume the Tomcat output, and in a simple test of requesting some static documents from my web browser. Restarting Tomcat immediately resolved the issue. Cracking open the localhost log, I see a ton of these errors, right up until I restarted Tomcat:

        WARNING: Exception thrown whilst processing POSTed parameters
        java.net.SocketException: Software caused connection abort: recv failed

    After a bit of Googling, my working theory is that the SOAP issue caused my users to get errors, which caused them to make more requests, which put an increased load on the application. This caused it to run out of available sockets to handle incoming requests. So, here's my quandary:

    1. Is this a valid hypothesis, or am I just in over my head with HTTP and Tomcat?
    2. If this is a valid hypothesis, is there a way to increase the size of the "socket queue", so that this doesn't happen in the future?

    Thanks! IVR Avenger

    Read the article

  • Non-blocking TCP buffer issues.

    - by Poni
    Hi! I think I'm in a problem. I have two TCP apps connected to each other which use winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst: the sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example:

        void some_function()
        {
            char cBuff[1024];

            // filling cBuff with some data
            WSASend(...); // sending cBuff, non-blocking mode

            // filling cBuff with other data
            WSASend(...); // again, sending cBuff

            // ..... and so forth!
        }

    If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big sack of such buffers? How should I handle them, how can I avoid a performance penalty, etc.? And, if I am to use buffers, that means I should copy the data to be sent from the source buffer to the temporary one; thus, I'd set SO_SNDBUF on each socket to zero, so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
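
    The usual strategy here is a free list: take a buffer out of a pool before each send, and put it back only when the completion notification arrives. A minimal sketch of the idea in Python (the real code would be C++ around WSASend, so everything here is illustrative):

        import queue

        BUF_SIZE, POOL_SIZE = 1024, 64

        free_buffers = queue.Queue()
        for _ in range(POOL_SIZE):
            free_buffers.put(bytearray(BUF_SIZE))

        def acquire():
            # Take a buffer before issuing a send; blocks if all POOL_SIZE
            # buffers are still in flight (natural backpressure on the sender).
            return free_buffers.get()

        def on_write_complete(buf):
            # Called from the completion handler; the buffer may now be reused.
            free_buffers.put(buf)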

    Read the article

  • -[NSCFData writeStreamHandleEvent:]: unrecognized selector sent to instance in a stream callback

    - by user295491
    Hi everyone, I am working with streams and sockets in iPhone SDK 3.1.3. The issue is that when the program accepts a callback and I want to handle this write-stream callback, the following error is triggered: "Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSCFData writeStreamHandleEvent:]: unrecognized selector sent to instance 0x17bc70'". But I don't know how to solve it, because everything seems fine. Even when I run the debugger there is no error; the program works. Any hint here will help! The code of the callback is:

        void myWriteStreamCallBack(CFWriteStreamRef stream, CFStreamEventType eventType, void *info)
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            Connection *handlerEv = [[(Connection *)info retain] autorelease];
            [handlerEv writeStreamHandleEvent:eventType];
            [pool release];
        }

    The code of the writeStreamHandleEvent: method:

        - (void)writeStreamHandleEvent:(CFStreamEventType)eventType
        {
            switch (eventType) {
                case kCFStreamEventOpenCompleted:
                    writeStreamOpen = YES;
                    break;
                case kCFStreamEventCanAcceptBytes:
                    NSLog(@"Writing in the stream");
                    [self writeOutgoingBufferToStream];
                    break;
                case kCFStreamEventErrorOccurred:
                    error = CFWriteStreamGetError(writeStream);
                    fprintf(stderr, "CFReadStreamGetError returned (%ld, %ld)\n", error.domain, error.error);
                    CFWriteStreamUnscheduleFromRunLoop(writeStream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
                    CFWriteStreamClose(writeStream);
                    CFRelease(writeStream);
                    break;
                case kCFStreamEventEndEncountered:
                    CFWriteStreamUnscheduleFromRunLoop(writeStream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
                    CFWriteStreamClose(writeStream);
                    CFRelease(writeStream);
                    break;
            }
        }

    The code of the stream configuration:

        CFSocketContext ctx = {0, self, nil, nil, nil};
        CFWriteStreamSetClient(writeStream, registeredEvents,
                               (CFWriteStreamClientCallBack)&myWriteStreamCallBack,
                               (CFStreamClientContext *)(&ctx));
        CFWriteStreamScheduleWithRunLoop(writeStream, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);

    You can see that there is nothing strange! Well, at least I don't see it. Thank you in advance.

    Read the article

  • .NET TCP socket with session

    - by Zé Carlos
    Is there any way of dealing with sessions with sockets in C#? Example of my problem: I have a server with a socket listening on port 5672.

        TcpListener socket = new TcpListener(localAddr, 5672);
        socket.Start();
        Console.Write("Waiting for a connection... ");

        // Perform a blocking call to accept requests.
        TcpClient client = socket.AcceptTcpClient();
        Console.WriteLine("Connected to client!");

    And I have two clients that will each send one byte. Client A sends 0x1 and client B sends 0x2. From the server side, I read this data like this:

        Byte[] bytes = new Byte[256];
        String data = null;
        NetworkStream stream = client.GetStream();
        while ((stream.Read(bytes, 0, bytes.Length)) != 0)
        {
            byte[] answer = new ...
            stream.Write(answer, 0, answer.Length);
        }

    Then client A sends 0x11. I need a way to know that this client is the same that sent 0x1 before.
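
    One point worth noting: with TCP, each accepted connection is already a distinct socket, so the connection object itself can serve as the session key for as long as the client stays connected. A sketch of that idea (in Python for brevity; AcceptTcpClient gives you the same per-client object in C#):

        import socket

        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("0.0.0.0", 5672))
        listener.listen(5)

        sessions = {}   # accepted socket -> accumulated state for that client

        while True:     # a real server would serve each client concurrently
            conn, addr = listener.accept()        # a separate socket per client
            sessions[conn] = {"addr": addr, "bytes_seen": b""}
            data = conn.recv(1)
            # Anything read from `conn` is guaranteed to come from the same
            # client accepted above, so 0x1 and a later 0x11 are trivially linked.
            sessions[conn]["bytes_seen"] += data

    If clients disconnect and reconnect between messages, the socket no longer identifies them, and an application-level token sent by the client in each message becomes necessary.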

    Read the article

  • help with Firefox extension

    - by Johnny Grass
    I'm writing a Firefox extension that creates a socket server which will output the active tab's URL when a client makes a connection to it. I have the following code in my javascript file:

        var serverSocket;

        function startServer() {
            var listener = {
                onSocketAccepted: function(socket, transport) {
                    try {
                        var outputString = gBrowser.currentURI.spec + "\n";
                        var stream = transport.openOutputStream(0, 0, 0);
                        stream.write(outputString, outputString.length);
                        stream.close();
                    } catch (ex2) {
                        dump("::" + ex2);
                    }
                },
                onStopListening: function(socket, status) {}
            };

            try {
                serverSocket = Components.classes["@mozilla.org/network/server-socket;1"]
                    .createInstance(Components.interfaces.nsIServerSocket);
                serverSocket.init(7055, true, -1);
                serverSocket.asyncListen(listener);
            } catch (ex) {
                dump(ex);
            }

            document.getElementById("status").value = "Started";
        }

        startServer();

    As it is, it works for multiple tabs in a single window. If I open multiple windows, it ignores the additional windows. I think it is creating a server socket for each window, but since they are using the same port, the additional sockets fail to initialize. I need it to create a server socket when the browser launches and continue running when I close the windows (Mac OS X). As it is, when I close a window but Firefox remains running, the socket closes and I have to restart Firefox to get it up and running. How do I go about that?

    Read the article

  • How can I work around WinXP using ports 1025-5000 as ephemeral?

    - by Chris Dolan
    If you create a TCP client socket with port 0 instead of a non-zero port, then the operating system chooses any free ephemeral port for you. Most OSes choose ephemeral ports from the IANA dynamic port range of 49152-65535. However in Windows Server 2003 and earlier (including XP) Microsoft used ports 1025-5000 as the ephemeral range, according to their bind() documentation. I run multiple Java services on the same hardware. On rare occasions, this range collides with well-known ports that I use for other services (e.g. port 4160 for Jini discovery). While rare, this has caused real problems. Is there any easy way to tell Windows or Java to use a different port range for client sockets? Microsoft's docs indicate that I can change the high end of that range via the MaxUserPort TcpIP registry setting, but I see no way to change the low end. Update: I've made some progress on this. It looks like Microsoft has a concept of reserved ports that are exceptions to the ephemeral port range. There's a registry setting that lets you change this permanently and apparently there must be an API to do the same thing because there's a data structure that holds high/low values for reserved port ranges, but I can't find the actual function call anywhere... The registry solution may work, but now I'm fixated on this API.
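
    A per-socket workaround, independent of the registry (sketched in Python; Java offers the same thing through the Socket(host, port, localAddr, localPort) constructor, and the port number here is illustrative): bind the client socket to an explicit local port before connecting, so the OS never consults its ephemeral range for it.

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Pick the local port ourselves, from a range the services never use,
        # instead of letting Windows choose one from 1025-5000.
        s.bind(("0.0.0.0", 49200))
        s.connect(("example.com", 80))
        print(s.getsockname())   # shows the explicit local port in use

    The obvious cost is that the application now owns the collision problem: it has to track which local ports it has handed out.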

    Read the article

  • server/ client server connection

    - by user312054
    I have a server-side program that creates a listening server-side socket. The problem is that the client's connect request seems to get rejected when the server-side socket is listening, but connects if the server-side program is not running. I can see the server-side program getting the client request when debugging. It seems as if the client cannot connect to a listening socket. Any suggestions on a resolution? The server-side accept code snippet is this:

        void CSocketListen::OnAccept(int nErrorCode)
        {
            CSocket::OnAccept(nErrorCode);

            CSocketServer* SocketPtr = new CSocketServer();
            if (Accept(*SocketPtr))
            {
                // add to list of client sockets connected
            }
            else
            {
                delete SocketPtr;
            }
        }

    The client-side connect code is like this:

        SOCKET cellModem;
        sockaddr_in handHeld;

        handHeld.sin_family = AF_INET;                      // Address family
        handHeld.sin_addr.s_addr = inet_addr("127.0.0.1");
        handHeld.sin_port = htons((u_short)1113);           // port to use

        cellModem = socket(AF_INET, SOCK_STREAM, 0);
        if (cellModem == INVALID_SOCKET)
        {
            // log socket failure
            return false;
        }
        else
        {
            // log socket success
        }

        if (connect(cellModem, (const struct sockaddr*)&handHeld, sizeof(handHeld)) != 0)
        {
            // log socket connection success
        }
        else
        {
            // log socket connection failure
            closesocket(cellModem);
        }

    Read the article

  • How do I make a web interface for a socket server

    - by mgroat
    I've got a socket server running (it's something that's basically like a chat server). Users can telnet into it, but I'd like to make a web interface. This is the first time I've ever done something like this, so I'm not really sure where to start. A few thoughts I've had:

    1. Have some server-side Python (or PHP) on my webserver which accesses the socket server. I think I know enough about sockets to have Python interact with the server, but how do I go about getting the website that the user sees to update in real time? Should I just have the website refresh every few seconds? I would prefer to do things this way if I can figure it out.
    2. Write a Java applet that interacts with the socket server, and embed the applet in the website. I would have to re-learn a language that I haven't touched in years, but my main goal here is learning, so that wouldn't be such a bad thing. The main problem I have with this is that it requires end users to have Java installed on their computers, which I'd rather not depend on.

    Is one of these two solutions the right way to go? Anybody know where I can find a good tutorial to get started? Edit: There are no real security concerns with exposing the server to the internet.
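
    The first approach can be surprisingly small. A sketch, assuming the chat server speaks a line-oriented protocol on a local port (the host, port and handler name are all illustrative): a tiny HTTP endpoint that relays from the socket server, which a page could poll every few seconds with ajax.

        import socket
        from http.server import BaseHTTPRequestHandler, HTTPServer

        CHAT_HOST, CHAT_PORT = "127.0.0.1", 4000  # illustrative chat-server address

        class ChatBridge(BaseHTTPRequestHandler):
            def do_GET(self):
                # Open a short-lived connection to the chat server and relay
                # whatever it sends; a real bridge would hold the socket open
                # and stream (Comet) instead of reconnecting on every poll.
                with socket.create_connection((CHAT_HOST, CHAT_PORT), timeout=5) as s:
                    data = s.recv(4096)
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(data)

        HTTPServer(("", 8080), ChatBridge).serve_forever()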

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), ajax and websockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in javascript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP children. So I thought about writing my own socket server in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP) I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk nor in MySQL but in RAM, as an array or object, for best speed. As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; incoming message - write to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users. My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure if it can process multiple events at the same time (wouldn't that need multi-threading?) or if there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (many fewer users), but afaik there are single-threaded IRC servers which are capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.

    Read the article

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on... Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network, it seems like 8191 is the largest number I can use. When I try to use any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)? I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?
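
    A quick way to probe this empirically (a Python sketch; the host, port and sizes are illustrative, and the peer is assumed to run a simple sink that reads and discards): time the same total transfer at several chunk sizes, including the suspicious 8191/8192 boundary.

        import socket
        import time

        HOST, PORT = "192.168.1.10", 5000   # illustrative peer running a sink
        TOTAL = 8 * 1024 * 1024             # 8 MiB per trial

        for chunk in (4096, 8000, 8191, 8192, 16384):
            s = socket.create_connection((HOST, PORT))
            buf = b"x" * chunk
            sent, t0 = 0, time.time()
            while sent < TOTAL:
                sent += s.send(buf)         # send() may accept less than a full chunk
            s.close()
            print("chunk=%5d  %.2f s" % (chunk, time.time() - t0))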

    Read the article

  • Boost Asio UDP retrieve last packet in socket buffer

    - by Alberto Toglia
    I have been messing around with Boost Asio for some days now, but I got stuck with this weird behavior. Please let me explain. Computer A is sending continuous UDP packets every 500 ms to computer B. Computer B wants to read A's packets at its own pace, but only wants A's last packet, obviously the most up-to-date one. It has come to my attention that when I do a:

        mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);

    I can get OLD packets that were not processed (almost every time). Does this make any sense? A friend of mine told me that sockets maintain a buffer of packets, and therefore if I read at a lower frequency than the sender this could happen!? So, the first question is: how is it possible to receive the last packet and discard the ones I missed? Later I tried using the async example from the Boost documentation but found it did not do what I wanted: http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime6.html From what I could tell, async_receive_from should call the method "handle_receive" when a packet arrives, and that works for the first packet after the service is "run". If I wanted to keep listening on the port, I should call async_receive_from again in the handler code, right? BUT what I found is that I start an infinite loop: it doesn't wait till the next packet, it just enters "handle_receive" again and again. I'm not writing a server application; a lot of other things are going on (it's a game). So my second question is: do I have to use threads to use the async receive method properly? Is there some example with threads and async receive? Thanks for your attention.
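
    On the first question, the friend is right: the kernel queues datagrams per socket, so a slow reader sees them in arrival order. The usual trick is to make the socket non-blocking and drain it in a loop, keeping only the newest datagram. A sketch in Python (the same drain-until-empty loop can be built with Asio's non-blocking reads; the port is illustrative):

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 9999))
        sock.setblocking(False)

        def latest_packet():
            """Drain the kernel buffer; return only the newest datagram (or None)."""
            newest = None
            while True:
                try:
                    newest, addr = sock.recvfrom(2048)
                except BlockingIOError:   # buffer empty: newest is the latest one
                    return newest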

    Read the article

  • Socket isn't listed by netstat unless using certain ports

    - by illuzive
    I'm a computer science student with a few years of programming experience. Yesterday, while working on a project (Mac OS X, BSD sockets) at school, I encountered a strange problem. I was adding several modules to a very basic "server" (mostly a bunch of functions to set up and manage a UDP socket on a certain port). While doing this, I started the server from time to time in order to see that everything worked like it should. I've been using port 32000 during the development of the server. When I start the server and run netstat, the socket is listed as expected:

        > netstat -p UDP | grep 32000
        udp46      0      0  *.32000       *.*

    However, when I run the server on other ports (random (10000 - 50000)), it's not listed by netstat. My thought was that I had somehow hard-coded the port somewhere in the code, but that's not the case. The thing is, I can connect to the socket on any of the tested ports, and it reads data sent to it without any problem at all. It just doesn't get listed by netstat. What I wonder is if any of you have an idea of why this happens? Note: Although this is a project at school, it's not homework. This is just something I want to understand for my own benefit.

    Read the article

  • InvalidOperationException: The Undo operation encountered a context that is different from what was applied in the corresponding Set operation

    - by McN
    I got the following exception:

        Exception Type: System.InvalidOperationException
        Exception Message: The Undo operation encountered a context that is different from what was applied in the corresponding Set operation. The possible cause is that a context was Set on the thread and not reverted (undone).
        Exception Stack:
           at System.Threading.SynchronizationContextSwitcher.Undo()
           at System.Threading.ExecutionContextSwitcher.Undo()
           at System.Threading.ExecutionContext.runFinallyCode(Object userData, Boolean exceptionThrown)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteBackoutCodeHelper(Object backoutCode, Object userData, Boolean exceptionThrown)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Net.ContextAwareResult.Complete(IntPtr userToken)
           at System.Net.LazyAsyncResult.ProtectedInvokeCallback(Object result, IntPtr userToken)
           at System.Net.Sockets.BaseOverlappedAsyncResult.CompletionPortCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
           at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)
        Exception Source: mscorlib
        Exception TargetSite.Name: Undo
        Exception HelpLink:

    The application is a Visual Studio 2005 (.NET 2.0) console application. It is a server for multiple TCP/IP connections, doing asynchronous socket reads and synchronous socket writes. In searching for an answer I came across this post which talks about a call to Application.DoEvents(), which I don't use in my code. I also found this post which has a resolution involving Component, which I also don't use in my code. The application does reference a library that I created that contains custom user controls and components, but they are not being used by the application. Question: What caused this to happen, and how do I prevent it from happening again? Or a more realistic question: what does this exception actually mean? How is "context" defined in this situation? Anything that can help me understand what is going on would be very much appreciated.

    Read the article

  • How do I send data between two computers over the internet

    - by Johan
    I have been struggling with this for the entire day now; I hope somebody can help me with this. My problem is fairly simple: I wish to transfer data (mostly simple commands) from one PC to another over the internet. I have been able to achieve this using sockets in Java when both computers are connected to my home router. I then connected both computers to the internet using two different mobile phones and attempted to transmit the data again. I used the mobile phones as this provides a direct route to the internet; if I use my router I have to set up port forwarding, at least, that is how I understand it. I think the problem lies in the method that I use to set up the client socket. I used:

        Socket kkSocket = new Socket(ipAddress, 3333);

    where ipAddress is the IP address of the computer running the server. I got the IP address by right-clicking on the connection, then Status, then Support. Is that the correct IP address to use, or where can I obtain the address of the server? Also, is it possible to get a fixed name for my computer that I can use instead of entering the IP address, as this changes every time I connect to the internet using my mobile phone? Alternatively, are there better methods of solving my problem, such as using http, and if so, where can I find more information about this? Thanks!

    Read the article

  • Socket select() Handling Abrupt Disconnections

    - by Genesis
    I am currently trying to fix a bug in a proxy server I have written, relating to the socket select() call. I am using the Poco C++ libraries (using SocketReactor), and the issue is actually in the Poco code, which may be a bug, but I have yet to receive any confirmation of this from them. What is happening is that whenever a connection terminates abruptly, the socket select() call returns immediately, which is what I believe it is meant to do? Anyway, it returns all of the disconnected sockets within the readable set of file descriptors, but the problem is that an exception, "Socket is not connected", is thrown when Poco tries to fire the onReadable event handler, which is where I would be putting the code to deal with this. Given that the exception is silently caught and the onReadable event is never fired, the select() call keeps returning immediately, resulting in an infinite loop in the SocketReactor. I was considering modifying the Poco code so that rather than catching the exception silently, it fires a new event called onDisconnected or something like that, so that a cleanup can be performed. My question is: are there any elegant ways of determining whether a socket has closed abnormally using select() calls? I was thinking of using the exception message to determine when this has occurred, but this seems dirty to me.
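
    For reference, this is how plain select() usually signals a dead peer, sketched in Python independently of Poco (the handler names are hypothetical hooks, not library calls): the socket selects readable, and the subsequent recv() either returns zero bytes (orderly close) or raises ECONNRESET (abnormal close). Handling both at the read site is, in effect, the onDisconnected event.

        import errno
        import select

        def service(sockets):
            readable, _, _ = select.select(sockets, [], [])
            for s in readable:
                try:
                    data = s.recv(4096)
                except OSError as e:
                    if e.errno in (errno.ECONNRESET, errno.ENOTCONN):
                        handle_disconnect(s)   # abnormal termination
                        continue
                    raise
                if not data:
                    handle_disconnect(s)       # orderly close (EOF)
                else:
                    handle_data(s, data)

        def handle_disconnect(s):
            # hypothetical cleanup hook: unregister the socket and close it
            s.close()

        def handle_data(s, data):
            pass  # application logic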

    Read the article

  • What can cause a spontaneous EPIPE error without either end calling close() or crashing?

    - by Hongli
    I have an application that consists of two processes (let's call them A and B), connected to each other through Unix domain sockets. Most of the time it works fine, but some users report the following behavior:

    1. A sends a request to B. This works.
    2. A now starts reading the reply from B.
    3. B sends a reply to A. The corresponding write() call returns an EPIPE error, and as a result B close()s the socket. However, A did not close() the socket, nor did it crash.
    4. A's read() call returns 0, indicating end-of-file. A thinks that B prematurely closed the connection.

    Users have also reported variations of this behavior, e.g.:

    1. A sends a request to B. This works partially, but before the entire request is sent A's write() call returns EPIPE, and as a result A close()s the socket. However B did not close() the socket, nor did it crash.
    2. B reads a partial request and then suddenly gets an EOF.

    The problem is I cannot reproduce this behavior locally at all. I've tried OS X and Linux. The users are on a variety of systems, mostly OS X and Linux. Things that I've already tried and considered:

    - Double close() bugs (close() is called twice on the same file descriptor): probably not, as that would result in EBADF errors, but I haven't seen them.
    - Increasing the maximum file descriptor limit. One user reported that this worked for him; the rest reported that it did not.

    What else can possibly cause behavior like this? I know for certain that neither A nor B close() the socket prematurely, and I know for certain that neither of them crashed, because both A and B were able to report the error. It is as if the kernel suddenly decided to pull the plug from the socket for some reason.

    Read the article
