Search Results

Search found 17651 results on 707 pages for 'unix domain sockets'.

  • [C++] Adding a string or char array to a byte vector

    - by xeross
    I'm currently working on a class to create and read out packets sent over the network; so far I have it working with 16-bit and 8-bit integers (well, unsigned, but still). Now the problem is that I've tried numerous ways of copying a string or char array over, but somehow the _buffer got mangled, it segfaulted, or the result was wrong. I'd appreciate it if someone could show me a working example. My current code can be seen below. Thanks, Xeross

    Main:

        #include <iostream>
        #include <stdio.h>
        #include "Packet.h"

        using namespace std;

        int main(int argc, char** argv)
        {
            cout << "#################################" << endl;
            cout << "#       Internal Use Only       #" << endl;
            cout << "#      Codename PACKETSTORM     #" << endl;
            cout << "#################################" << endl;
            cout << endl;

            Packet packet = Packet();
            packet.SetOpcode(0x1f4d);
            cout << "Current opcode is: " << packet.GetOpcode() << endl << endl;

            packet.add(uint8_t(5))
                  .add(uint16_t(4000))
                  .add(uint8_t(5));

            for(uint8_t i = 0; i < 10; i++)
                printf("Byte %u = %x\n", i, packet._buffer[i]);

            printf("\nReading them out: \n1 = %u\n2 = %u\n3 = %u\n4 = %s",
                   packet.readUint8(), packet.readUint16(), packet.readUint8());

            return 0;
        }

    Packet.h:

        #ifndef _PACKET_H_
        #define _PACKET_H_

        #include <iostream>
        #include <vector>
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        using namespace std;

        class Packet
        {
        public:
            Packet() : m_opcode(0), _buffer(0), _wpos(0), _rpos(0) {}
            Packet(uint16_t opcode) : m_opcode(opcode), _buffer(0), _wpos(0), _rpos(0) {}

            uint16_t GetOpcode() { return m_opcode; }
            void SetOpcode(uint16_t opcode) { m_opcode = opcode; }

            Packet& add(uint8_t value)
            {
                if(_buffer.size() < _wpos + 1)
                    _buffer.resize(_wpos + 1);
                memcpy(&_buffer[_wpos], &value, 1);
                _wpos += 1;
                return *this;
            }

            Packet& add(uint16_t value)
            {
                if(_buffer.size() < _wpos + 2)
                    _buffer.resize(_wpos + 2);
                memcpy(&_buffer[_wpos], &value, 2);
                _wpos += 2;
                return *this;
            }

            uint8_t readUint8()
            {
                uint8_t result = _buffer[_rpos];
                _rpos += sizeof(uint8_t);
                return result;
            }

            uint16_t readUint16()
            {
                uint16_t result;
                memcpy(&result, &_buffer[_rpos], sizeof(uint16_t));
                _rpos += sizeof(uint16_t);
                return result;
            }

            uint16_t m_opcode;
            std::vector<uint8_t> _buffer;

        protected:
            size_t _wpos; // Write position
            size_t _rpos; // Read position
        };

        #endif // _PACKET_H_
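
    A sketch of the missing piece, shown in Python for brevity rather than the asker's C++: variable-length data such as strings goes into the byte buffer with a length prefix, so the reader knows how many bytes to pull back out. The values and format codes below are illustrative only.

        import struct

        buf = bytearray()
        buf += struct.pack("<B", 5)            # uint8, little-endian
        buf += struct.pack("<H", 4000)         # uint16
        s = "hello".encode("utf-8")
        buf += struct.pack("<H", len(s)) + s   # length-prefixed string

        # Reading mirrors the writes, tracking a read position:
        pos = 0
        (v1,) = struct.unpack_from("<B", buf, pos); pos += 1
        (v2,) = struct.unpack_from("<H", buf, pos); pos += 2
        (slen,) = struct.unpack_from("<H", buf, pos); pos += 2
        text = bytes(buf[pos:pos + slen]).decode("utf-8"); pos += slen
        print(v1, v2, text)  # 5 4000 hello

    In C++ the append itself can be _buffer.insert(_buffer.end(), str.begin(), str.end()) after writing the length.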

  • EOF error using recv in python

    - by tipu
    I am doing this in my code:

        HOST = '192.168.1.3'
        PORT = 50007
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((HOST, PORT))
        query_details = {"page" : page, "query" : query, "type" : type}
        s.send(str(query_details))
        #data = eval(pickle.loads(s.recv(4096)))
        data = s.recv(16384)

    But I am continually getting EOF at the last line. The code I am sending with is:

        self.request.send(pickle.dumps(results))
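
    One likely culprit is framing: a TCP recv() is not message-oriented, so a single call can return a partial pickle no matter how large the buffer argument is. A minimal length-prefix sketch, assuming both ends can be changed:

        import pickle
        import struct

        def recv_exact(sock, n):
            # Loop until exactly n bytes arrive; recv may return less per call.
            data = b""
            while len(data) < n:
                chunk = sock.recv(n - len(data))
                if not chunk:
                    raise EOFError("socket closed mid-message")
                data += chunk
            return data

        def send_pickled(sock, obj):
            payload = pickle.dumps(obj)
            # 4-byte big-endian length prefix marks where the message ends.
            sock.sendall(struct.pack(">I", len(payload)) + payload)

        def recv_pickled(sock):
            (length,) = struct.unpack(">I", recv_exact(sock, 4))
            return pickle.loads(recv_exact(sock, length))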

  • python send/receive hex data via TCP socket

    - by Mike
    I have an Ethernet access control device that is said to be able to communicate via TCP. How can I send a packet by entering the hex data, since that is what I have from the manual (a standard format for the communication packets sent and received after each command)?
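
    A minimal sketch of sending manual-specified hex bytes over TCP in Python; the host, port, and byte values here are made up, not taken from any real device manual:

        import socket

        # Hex exactly as it might be written in the manual (hypothetical values)
        packet = bytes.fromhex("7e 00 10 01 ff 7f")

        s = socket.create_connection(("192.168.1.50", 4001))
        s.sendall(packet)            # raw bytes on the wire, no text encoding
        reply = s.recv(1024)
        print(reply.hex())           # inspect the response as hex
        s.close()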

  • Udp server sending only 0 bytes of data

    - by mawia
    Hi all, this is a simple UDP server. I am trying to transmit data to some clients, but unfortunately it is unable to transmit anything. Though send is running quite successfully, it is returning a value meaning it has sent nothing, and on the clients they are receiving, but again, obviously, zero bytes.

        void* UdpServerStreamToClients(void *fileToServe)
        {
            int sockfd, n = 0, k;
            struct sockaddr_in servaddr, cliaddr;
            socklen_t len;
            char dataToSend[1000];

            sockfd = socket(AF_INET, SOCK_DGRAM, 0);
            bzero(&servaddr, sizeof(servaddr));
            servaddr.sin_family = AF_INET;
            servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
            servaddr.sin_port = htons(32000);
            bind(sockfd, (struct sockaddr *)&servaddr, sizeof(servaddr));

            FILE *fp;
            if((fp = fopen((char*)fileToServe, "r")) == NULL)
            {
                printf("can not open file ");
                perror("fopen");
                exit(1);
            }

            int dataRead = 1;
            while(dataRead)
            {
                len = sizeof(cliaddr);
                if((dataRead = fread(dataToSend, 1, 500, fp)) < 0)
                {
                    perror("fread");
                    exit(1);
                }
                //sleep(2);
                for(list<clientInfo>::iterator it = clients.begin(); it != clients.end(); it++)
                {
                    cliaddr.sin_family = AF_INET;
                    inet_aton(inet_ntoa(it->addr.sin_addr), &cliaddr.sin_addr);
                    cliaddr.sin_port = htons(it->udp_port);

                    n = sendto(sockfd, dataToSend, sizeof(dataToSend), 0,
                               (struct sockaddr *)&cliaddr, len);

                    cout << "number of bytes send by udp: " << n << endl;
                    printf("SEND this message %d : %s to %s :%d \n", n, dataToSend,
                           inet_ntoa(cliaddr.sin_addr), ntohs(cliaddr.sin_port));
                }
            }
        }

    I am checking the value of sizeof(dataToSend) and it is pretty much as expected, i.e. a thousand, the size of the buffer. Do you see a possible flaw in it? All help in this regard will be appreciated. Thanks!
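
    For comparison, a bare-bones UDP send in Python. sendto() returns the number of bytes accepted by the kernel, and a return value of 0 normally means a zero-length buffer was passed in, so it is worth printing the payload length immediately before the call; the address below is an assumption:

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = b"x" * 1000

        print("about to send", len(payload), "bytes")
        sent = sock.sendto(payload, ("127.0.0.1", 32000))
        print("bytes sent:", sent)   # should equal len(payload) for UDP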

  • .net 3.5 message framing

    - by Rob
    We have message framing working by using a length prefix, but using .NET 2.0 BeginSend/BeginReceive. Is message framing any different in 3.5, and if so, how should we implement it using the new framework? Are there any usable examples out there which focus purely on message framing using 3.5? Many thanks.

  • udp server unable to transmit data

    - by mawia
    Hi all, I have written a simple UDP server which has to transmit certain data to a few of its clients, but though the server is successfully executing send, it is unable to transmit even a single byte. The return value of send is 0 although I have enough data to be transmitted. You can see the code for the said server here: http://pastebin.com/zeMcwd6X Can you help me find the possible culprit? Any reply in this regard will be appreciated. Lots of thanks in advance! Mawia

  • Why does my Perl TCP server script hang with many TCP connections?

    - by viraptor
    I've got a strange issue with a server accepting TCP connections. Even though there are normally some processes waiting, at some volume of connections it hangs.

    Long version: The server is written in Perl and binds a $srv socket with the reuse flag and listen == 5. Afterwards, it forks into 10 processes with a loop of:

        $clt = $srv->accept();
        do_processing($clt);
        $clt->shutdown(2);

    The client, written in C, is also very simple: it sends some lines, then receives all lines available and does a shutdown(sockfd, 2). There's nothing async going on, and at the end both send and receive queues are empty (as reported by netstat). Connections last only ~20 ms. All clients behave the same way, are the same implementation, etc.

    Now let's say I'm accepting X connections from client 1 and another X from client 2. Processes still report that they're idle all the time. If I add another X connections from client 3, suddenly the server processes start hanging just after accepting. The first blocking thing they do after accept() is while (<$clt>) ..., but they don't get any data (on the first try already). Suddenly all 10 processes are in this state and do not stop waiting.

    On strace, the server processes seem to hang on read(), which makes sense. There are loads of connections in TIME_WAIT state belonging to that server (~100 when the problem starts to manifest), but this might be a red herring. What could be happening here?

  • Socket receive buffer size

    - by Kanishka
    Is there a way to determine the receive buffer size of a TCP/IP socket in C#? I am sending a message to a server and expecting a response where I am not sure of the receive buffer size.

        IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("192.125.125.226"), 20060);
        Socket server = new Socket(AddressFamily.InterNetwork,
                                   SocketType.Stream, ProtocolType.Tcp);
        server.Connect(ipep);
        String OutStr = "49|50|48|48|224|48|129|1|0|0|128|0|0|0|0|0|4|0|0|32|49|50";
        byte[] temp = OutStr.Split('|').Select(s => byte.Parse(s)).ToArray();
        int byteCount = server.Send(temp);
        byte[] bytes = new byte[255];
        int res = 0;
        res = server.Receive(bytes);
        return Encoding.UTF8.GetString(bytes);
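
    Two different sizes are easy to conflate here: the kernel's socket receive buffer (SO_RCVBUF; surfaced in C# as Socket.ReceiveBufferSize) and the length of the application message, which TCP does not preserve. A sketch of querying the former, shown in Python:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
        print("kernel receive buffer:", rcvbuf, "bytes")

    For the message length, the usual approach is to loop on Receive until the peer closes or a length prefix in the protocol says the message is complete.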

  • C++ Boost ASIO: how to read/write with a timeout?

    - by Stéphane
    From reading other Stack Overflow entries and the boost::asio documentation, I've confirmed that there is no synchronous asio read/write call that also provides an easy-to-use timeout as a parameter to the call. I'm in the middle of converting an old-school Linux socket app with select(2) calls that employs timeouts, and I need to do more-or-less the same. So what is the best way to do this in boost::asio? Looking at the asio documentation, there are many confusing examples of various things to do with timers, but I'm quite confused. I'd love to see a simple-to-read example of this: read from a socket, but wait for a maximum of X seconds, after which the function either returns with nothing or returns with whatever it was able to read from the socket before the timeout expired.
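
    Not an Asio answer, but for reference, the requested read-with-timeout behaviour sketched with plain sockets in Python; the endpoint and the 5-second limit are placeholders:

        import socket

        s = socket.create_connection(("example.com", 80))
        s.settimeout(5.0)              # wait at most X seconds per read
        try:
            data = s.recv(4096)        # returns as soon as any data arrives
        except socket.timeout:
            data = b""                 # timed out: return with nothing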

  • socket connection issue + PHP

    - by Abhimanyu
    Hi, I am using PHP socket programming and am able to write data to an open socket, but then I have to wait a long time (or it gets stuck) for the response, or sometimes I get an error like "Maximum execution time of 30 seconds exceeded" at the line where fgets($fp, 128) is called. I have checked the server, and it seems it has sent the response as expected, but I don't see why I am unable to get the response. The following code is used for the socket connection and reading data:

        function socket_connection()
        {
            $fp = fsockopen(CLIENT_HOST, CLIENT_PORT, $errno, $errstr);
            fwrite($fp, $packet);
            $msg = fgets($fp, 128);
            fclose($fp);
            return $msg;
        }

    Any ideas?

  • Why can't I bind a Linux service to the loopback only?

    - by Jon Trauntvein
    I am writing a server application that will provide a service on an ephemeral port that I only want accessible on the loopback interface. In order to do this, I am writing code like the following:

        struct sockaddr_in bind_addr;
        memset(&bind_addr, 0, sizeof(bind_addr));
        bind_addr.sin_family = AF_INET;
        bind_addr.sin_port = 0;
        bind_addr.sin_addr.s_addr = htonl(inet_addr("127.0.0.1"));
        rcd = ::bind(
            socket_handle,
            reinterpret_cast<struct sockaddr *>(&bind_addr),
            sizeof(bind_addr));

    The return value for this call to bind() is -1 and the value of errno is 99 (Cannot assign requested address). Is this failing because inet_addr() already returns its result in network order, or is there some other reason?

  • Socket Programming for the Web

    - by Benny
    I have to interact with a legacy system that accepts socket communication and messages. My goal is to make the application cross-platform, but I need the ability to push messages to the client (e.g. .NET's WCF, Java's Comet) and to detect when the user closes their browser so I can destroy the socket. I have built a prototype of a .NET wrapper + WCF + Silverlight, but it is so disconnected that it is difficult to manage the state of the user, and it seems to be a nightmare to support. All of that considered, what would be my best option?

  • How to determine IP used by client connecting to INADDR_ANY listener socket in C

    - by codebox_rob
    I have a network server application written in C. The listener is bound using INADDR_ANY, so it can accept connections via any of the IP addresses of the host on which it is installed. I need to determine which of the server's IP addresses the client used when establishing its connection; actually, I just need to know whether they connected via the loopback address 127.0.0.1 or not. Partial code sample as follows (I can post the whole thing if it helps):

        static struct sockaddr_in serverAddress;
        serverAddress.sin_family = AF_INET;
        serverAddress.sin_addr.s_addr = INADDR_ANY;
        serverAddress.sin_port = htons(port);
        bind(listener, (struct sockaddr *) &serverAddress, sizeof(serverAddress));
        listen(listener, CONNECTION_BACKLOG);

        SOCKET socketfd;
        static struct sockaddr_in clientAddress;
        ...
        socketfd = accept(listener, (struct sockaddr *) &clientAddress, &length);
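
    The standard answer is getsockname() on the accepted socket: it returns the local address of that particular connection even when the listener was bound to INADDR_ANY (getpeername() gives the client's address). In C that is getsockname(socketfd, ...); the same idea in Python, with an assumed port:

        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("0.0.0.0", 8000))                 # INADDR_ANY
        srv.listen(5)

        conn, peer = srv.accept()
        local_ip, local_port = conn.getsockname()   # the address the client dialed
        print("client", peer, "reached us via", local_ip)
        if local_ip == "127.0.0.1":
            print("loopback connection")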

  • How does a proxy server work with TCP/HTTP connections?

    - by Vivek
    Since I am a beginner in the world of internet/networking, I always mess up with these kinds of doubts in my head while programming. My doubts are:

    1. While working behind a proxy, how do my requests and responses work? My request headers and data first reach the proxy server, then the proxy server sends them (same headers and data) to the corresponding server, and the server responds with a response header and body to the proxy server, which then sends it to my computer. Right?

    2. While using WebSockets we are upgrading our HTTP connection to TCP. At this time, what is happening at the proxy server? Does the proxy server also upgrade its connection to plain TCP? After opening such TCP connections, is the proxy server able to track/log those socket messages?

    3. And most importantly, is the proxy server transparent, or does it act like the original server in front of a client?

    Thanks for any answers or helpful links in advance.
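
    For the WebSocket/TCP case, the usual mechanism is the HTTP CONNECT method: the client asks the proxy to open a raw tunnel, and after the proxy answers 200 it only relays bytes and no longer parses them (though it can still log how much data flowed and to where). A sketch with placeholder addresses:

        import socket

        proxy = socket.create_connection(("10.0.0.1", 3128))   # hypothetical proxy
        proxy.sendall(b"CONNECT example.com:443 HTTP/1.1\r\n"
                      b"Host: example.com:443\r\n\r\n")
        status = proxy.recv(4096)
        if status.startswith(b"HTTP/1.1 200") or status.startswith(b"HTTP/1.0 200"):
            # From here on, bytes are relayed verbatim in both directions.
            proxy.sendall(b"raw bytes for example.com")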

  • Send serialised object via socket

    - by RubbleFord
    What's the best way to format a message to a server? At the moment I'm serialising an object using the BinaryFormatter and then sending it to the server. At the server end it's listening in an async fashion, and when the buffer size received is not 100% it assumes that the transfer has completed. This is working at the moment, and I can deserialise the object at the other end; I'm just concerned that if I start sending asynchronously this method will fail, as messages could be blurred together. I know that I need to mark the message somehow, so as to say that's the end of message one and this other bit belongs to message two, but I'm unsure of the correct way to do this. Could anyone point me in the right direction and maybe give me some examples? Thanks

  • problem with recv() on a TCP connection

    - by michael
    Hi, I am simulating TCP communication on Windows in C. I have a sender and a receiver communicating. The sender sends packets of a specific size to the receiver. The receiver gets them and sends an ACK for each packet it received back to the sender. If the sender didn't get an ACK for a specific packet (they are numbered in a header inside the packet), it sends the packet again to the receiver. Here is the getPakcet function on the receiver side:

        // Get the next packet from the socket. Set packetSize to -1
        // if it's the first packet.
        // Return: total bytes read;
        //         0 if the socket has shut down on the sender side,
        //         -1 on error, else the number of bytes received.
        int getPakcet(char *chunkBuff, int packetSize, SOCKET AcceptSocket)
        {
            int totalChunkLen = 0;
            int bytesRecv = -1;
            bool firstTime = false;

            if (packetSize == -1)
            {
                packetSize = MAX_PACKET_LENGTH;
                firstTime = true;
            }

            int needToGet = packetSize;
            do
            {
                char* recvBuff;
                recvBuff = (char*)calloc(needToGet, sizeof(char));
                if(recvBuff == NULL)
                {
                    fprintf(stderr, "Memory allocation problem\n");
                    return -1;
                }

                bytesRecv = recv(AcceptSocket, recvBuff, needToGet, 0);
                if (bytesRecv == SOCKET_ERROR)
                {
                    fprintf(stderr, "recv() error %ld.\n", WSAGetLastError());
                    totalChunkLen = -1;
                    return -1;
                }
                if (bytesRecv == 0)
                {
                    fprintf(stderr, "recv(): socket has shutdown on sender side");
                    return 0;
                }
                else if(bytesRecv > 0)
                {
                    memcpy(chunkBuff + totalChunkLen, recvBuff, bytesRecv);
                    totalChunkLen += bytesRecv;
                }
                needToGet -= bytesRecv;
            } while ((totalChunkLen < packetSize) && (!firstTime));

            return totalChunkLen;
        }

    I use firstTime because the first time around the receiver doesn't know the normal packet size the sender is going to use, so it uses MAX_PACKET_LENGTH to get a packet and then sets the normal packet size to the number of bytes received.

    My problem is the last packet: its size is less than the normal packet size. Let's say the last packet's size is 2 and the normal packet size is 4. So recv() gets two bytes and continues to the while condition; then totalChunkLen < packetSize because 2 < 4, so it iterates the loop again and gets stuck in recv(), which blocks because the sender has nothing more to send. On the sender side I can't close the connection because I didn't get an ACK back, so it's kind of a deadlock: the receiver is stuck waiting for more packets, but the sender has nothing left to send. I don't want to use a timeout for recv() or to insert a special character into the packet header to mark it as the last one. What can I do? Thanks
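
    One avenue that avoids both a recv() timeout and sentinel bytes, assuming the protocol can treat the half-close itself as the end-of-data signal (any retransmit of the final packet would have to happen before it): the sender calls shutdown(sock, SD_SEND) (SHUT_WR on POSIX) after the last packet. The receiver's recv() then returns 0, the case the code above already handles, while the sender's read side stays open for the remaining ACKs. The shape of it, sketched in Python:

        import socket

        def send_side(sock, packets):
            for p in packets:
                sock.sendall(p)
            sock.shutdown(socket.SHUT_WR)   # end-of-stream; reads still work

        def recv_side(sock):
            chunks = []
            while True:
                chunk = sock.recv(4096)
                if not chunk:               # 0 bytes: sender has no more data
                    break
                chunks.append(chunk)
            return b"".join(chunks)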

  • Socket connection to a telnet-based server hangs on read

    - by mixwhit
    I'm trying to write a simple socket-based client in Python that will connect to a telnet server. I can test the server by telnetting to its port (5007) and entering text. It responds with a NAK (error) or an AK (success), sometimes accompanied by other text. Seems very simple.

    I wrote a client to connect and communicate with the server, but it hangs on the first attempt to read the response. The connection is successful. Queries like getsockname and getpeername are successful. The send command returns a value that equals the number of characters I'm sending, so it seems to be sending correctly. But in the end, it always hangs when I try to read the response.

    I've tried using both file-based objects like readline and write (via socket.makefile), as well as using send and recv. With the file object I tried making it with "rw" and reading and writing via that object, and later tried one object for "r" and another for "w" to separate them. None of these worked.

    I used a packet sniffer to watch what's going on. I'm not versed in all that I'm seeing, but during a telnet session I can see my typed text and the server's text coming back. During my Python socket connection, I can see my text going to the server, but the packets coming back don't seem to have any text in them. Any ideas on what I'm doing wrong, or any strategies to try? Here's the code I'm using (in this case, with send and recv):

        #!/usr/bin/python

        host = "localhost"
        port = 5007
        msg = "HELLO EMC 1 1"
        msg2 = "HELLO"

        import socket
        import sys

        try:
            skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        except socket.error, e:
            print("Error creating socket: %s" % e)
            sys.exit(1)

        try:
            skt.connect((host, port))
        except socket.gaierror, e:
            print("Address-related error connecting to server: %s" % e)
            sys.exit(1)
        except socket.error, e:
            print("Error connecting to socket: %s" % e)
            sys.exit(1)

        try:
            print(skt.send(msg))
            print("SEND: %s" % msg)
        except socket.error, e:
            print("Error sending data: %s" % e)
            sys.exit(1)

        while 1:
            try:
                buf = skt.recv(1024)
                print("RECV: %s" % buf)
            except socket.error, e:
                print("Error receiving data: %s" % e)
                sys.exit(1)
            if not len(buf):
                break
            sys.stdout.write(buf)
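
    One thing worth ruling out before anything else: interactive telnet sends a CR/LF after each line you type, and line-oriented servers often wait for that terminator before replying. The script above sends "HELLO EMC 1 1" with no newline at all. A hedged sketch of the fix:

        import socket

        s = socket.create_connection(("localhost", 5007))
        s.sendall(b"HELLO EMC 1 1\r\n")    # terminator included, unlike skt.send(msg)
        reply = s.recv(1024)
        print(reply.decode(errors="replace"))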

  • Why use an event cache with epoll_wait?

    - by user1827356
    Question: the epoll man page has some pointers on using epoll with an "event cache". But why would you need to maintain an event cache at all? Isn't this the same as what epoll is supposed to be doing? Is it to avoid making multiple epoll_wait calls, which might be slower than managing the events in user space? Is it to implement a custom "priority" scheme over the cached events?

    Background: I'm trying to understand the strengths/shortcomings of epoll and its applicability to different situations.
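
    For context, a bare epoll loop, shown with Python's select.epoll (a thin wrapper over the same Linux syscalls). epoll only reports readiness on each call; anything worth remembering between calls, such as per-fd buffers or priorities, has to live in a user-space table, which is one plausible reading of what the man page's "event cache" is for:

        import select
        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 0))
        srv.listen(5)
        srv.setblocking(False)

        ep = select.epoll()
        ep.register(srv.fileno(), select.EPOLLIN)
        conns = {}   # user-space state, keyed by fd

        while True:
            for fd, events in ep.poll(1.0):          # one wait, many ready fds
                if fd == srv.fileno():
                    conn, _ = srv.accept()
                    conn.setblocking(False)
                    ep.register(conn.fileno(), select.EPOLLIN)
                    conns[conn.fileno()] = conn
                elif events & select.EPOLLIN:
                    data = conns[fd].recv(4096)
                    if not data:                     # peer closed
                        ep.unregister(fd)
                        conns.pop(fd).close()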

  • Concurrent connections in C# socket

    - by Chu Mai
    There are three apps running at the same time: two clients and one server. The whole system should function as follows: one client sends a serialized object to the server, the server receives that object as a stream, and finally the other client gets that stream from the server and deserializes it.

    This is the sender:

        TcpClient tcpClient = new TcpClient();
        tcpClient.Connect("127.0.0.1", 8888);
        Stream stream = tcpClient.GetStream();
        BinaryFormatter binaryFormatter = new BinaryFormatter();
        binaryFormatter.Serialize(stream, event); // Event is the sending object
        tcpClient.Close();

    Server code:

        TcpListener listener = new TcpListener(IPAddress.Parse("127.0.0.1"), 8888);
        listener.Start();
        Console.WriteLine("Server is running at localhost port 8888 ");
        while (true)
        {
            Socket socket = listener.AcceptSocket();
            try
            {
                Stream stream = new NetworkStream(socket);
                // Typically there should be something to write the stream
                // But I don't know exactly what the stream should write
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception: " + e.Message);
                Console.WriteLine("Disconnected: {0}", socket.RemoteEndPoint);
            }
        }

    The receiver:

        TcpClient client = new TcpClient();
        // Connect the client to localhost on port 8888
        client.Connect("127.0.0.1", 8888);
        Stream stream = client.GetStream();
        Console.WriteLine(stream);

    When I run only the sender and the server and check the server, the server receives the data correctly. The problem is that when I run the receiver, everything just gets disconnected. So where is my problem? Could anyone point it out? Thanks
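
    The missing piece is that the server has to actively relay: read from the sender's NetworkStream and write those bytes to the receiver's connection; accepting a socket and constructing a stream moves no data by itself. A minimal relay sketch in Python (the first connection is treated as the sender and the second as the receiver, which a real server would establish with a handshake instead):

        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 8888))
        srv.listen(2)

        sender, _ = srv.accept()
        receiver, _ = srv.accept()

        while True:
            chunk = sender.recv(4096)
            if not chunk:                        # sender finished
                receiver.shutdown(socket.SHUT_WR)
                break
            receiver.sendall(chunk)              # forward bytes verbatim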

  • how can an application use port 80/HTTP without conflicting with browsers?

    - by John
    If I understand right, applications sometimes use HTTP to send messages, since using other ports is liable to cause firewall problems. But how does that work without conflicting with other applications such as web browsers? In fact, how do multiple browsers running at once not conflict? Do they all monitor the port and get notified? Can you share a port in this way? I have a feeling this is a dumb question, but it's not something I ever thought of before, and in other cases I've seen problems when two apps are configured to use the same port.
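
    Browsers do not listen on port 80 at all: each outgoing connection gets a local ephemeral port, and the kernel tells connections apart by the full (local IP, local port, remote IP, remote port) tuple, so any number of clients can talk to remote port 80 at once. This is easy to see in Python:

        import socket

        a = socket.create_connection(("example.com", 80))
        b = socket.create_connection(("example.com", 80))
        # Same remote port 80, but different local ephemeral ports:
        print(a.getsockname(), "->", a.getpeername())
        print(b.getsockname(), "->", b.getpeername())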

  • change file descriptor for socket in python

    - by Dani
    Hello everybody, I'm trying to manually create the file descriptor associated with a socket in Python and then load it directly into memory with mmap. Creating a file in memory with mmap is simple, but I cannot find a way to associate the file with a socket. Does anyone know how? Thank you very much.
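
    For what it's worth: every Python socket already exposes its descriptor via fileno(), and socket.fromfd() duplicates and wraps an existing descriptor, but ordinary TCP/UDP socket descriptors do not support mmap, so mapping socket data into memory that way is unlikely to work. A short sketch of the descriptor plumbing:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        fd = s.fileno()    # the OS-level file descriptor behind the socket

        # Duplicate an already-open descriptor into a new socket object:
        s2 = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM)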
