Search Results

Search found 1671 results on 67 pages for 'packets'.

Page 61/67

  • Java: Clearing up what causes a connection reset

    - by Zombies
    There seems to be some confusion, as well as contradicting statements, in various SO answers: http://stackoverflow.com/questions/585599/whats-causing-my-java-net-socketexception-connection-reset . You can see there that the accepted answer states the connection was closed by the other side. But this is not true: closing a connection doesn't cause a connection reset. It is caused by "an underlying TCP/IP error." What I want to know is what a SocketException: Connection reset really means beyond "underlying TCP/IP error." What actually causes it? I doubt it has anything to do with the connection being closed, since closing a connection isn't an exception-worthy event; reading from a closed connection is, but that isn't an "underlying TCP/IP error." My hypothesis is that Connection reset is caused by a server's failure to acknowledge an ACK packet (either wholly or just improperly, as per TCP/IP), and that a SocketTimeoutException is generated only when no data arrives to be read (since it is thrown during a read after a certain duration, and read waits for data but is not concerned with ACK packets). In other words, read() throws SocketTimeoutException if it didn't read any bytes of actual data (DATA LAYER) in its allotted time.
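
    For illustration, a minimal Java sketch of one well-known way a reset is actually produced: the closing side has SO_LINGER set to zero, so close() sends an RST instead of the normal FIN, and the peer's next read() fails with "Connection reset". This is an added example, not code from the linked question.

        import java.io.IOException;
        import java.net.ServerSocket;
        import java.net.Socket;

        // Typically produces java.net.SocketException: Connection reset on the client's read.
        public class ResetDemo {
            public static void main(String[] args) throws IOException, InterruptedException {
                ServerSocket server = new ServerSocket(0);          // any free port
                Socket client = new Socket("localhost", server.getLocalPort());

                Socket accepted = server.accept();
                accepted.setSoLinger(true, 0);                      // linger of 0 seconds => RST on close
                accepted.close();

                Thread.sleep(200);                                  // give the RST time to arrive
                try {
                    client.getInputStream().read();                 // a normal close would just return -1
                } catch (IOException e) {
                    System.out.println(e);                          // java.net.SocketException: Connection reset
                }
                client.close();
                server.close();
            }
        }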

    Read the article

  • What are the attack vectors for passwords sent over http?

    - by KevinM
    I am trying to convince a customer to pay for SSL for a web site that requires login. I want to make sure I correctly understand the major scenarios in which someone can see the passwords being sent. My understanding is that anyone at any of the hops along the way can use a packet analyzer to view what is being sent. This seems to require that the hacker (or their malware/botnet) be on the same subnet as one of the hops the packet passes through on the way to its destination. Is that right? Assuming some flavor of this subnet requirement holds true, do I need to worry about all the hops or just the first one? The first one is an obvious worry if the user is on a public Wi-Fi network, since anyone could be listening in. Should I be worried about what's going on in the subnets the packets travel across beyond that? I don't know a ton about network traffic, but I would assume it's flowing through data centers of major carriers and there aren't a lot of juicy attack vectors there, but please correct me if I am wrong. Are there other vectors to worry about besides someone listening with a packet analyzer? I am a networking and security noob, so please feel free to set me straight if I am using the wrong terminology in any of this.

    Read the article

  • How do the XMPP modules work in perl?

    - by TheGNUGuy
    Hey everybody, I am trying to make my own Jabber bot but I have run into a little trouble. I have gotten my bot to respond to messages; however, if I try to change the bot's presence, it seems as though all of the messages sent to the bot get delayed. What I mean is: when I run the script, I change the presence so I can see that it is online. Then, when I send it a message, it takes three messages before the callback subroutine I have set up for messages gets called. After the third message is sent and the chat subroutine is called, it still processes the first message I sent. This really doesn't pose too much of a problem, except that I have it set up to log out when I send the message "logout", and that has to be followed by two more messages in order for the bot to actually log out. I am not sure what I have to do to fix this, but I think it has something to do with iq packets, because I have an iq callback set as well and it gets called twice after setting the presence. Here is my source code: http://pastebin.com/MgKMhTML Thanks for your help!

    Read the article

  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g., as "data" : "<unique identifier>" in the JSON and with the first bytes of the data blob being <unique identifier>? CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data -- definitely at least 100 kilobytes of data, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK -- the one I am using during development is here. Thanks
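
    For illustration, a rough Java sketch of the second option, assuming the server can write arbitrary bytes on one stream: a small length-prefixed JSON envelope followed by the raw blob, tied together by a shared identifier. The field names blobId and length are made up for the example, not part of any existing API.

        import java.io.ByteArrayOutputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;

        public class BlobFraming {
            // Frame layout: 4-byte big-endian length of the JSON header, the JSON header, then the blob.
            public static byte[] frame(String blobId, byte[] blob) throws IOException {
                String json = "{\"blobId\":\"" + blobId + "\",\"length\":" + blob.length + "}";
                byte[] header = json.getBytes(StandardCharsets.UTF_8);

                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                DataOutputStream out = new DataOutputStream(buf);
                out.writeInt(header.length);   // length prefix covers the JSON part only
                out.write(header);             // metadata stays human-readable JSON
                out.write(blob);               // blob goes over as-is, with no base64 overhead
                out.flush();
                return buf.toByteArray();
            }
        }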

    Read the article

  • Sudden issues reading uncompressed video using opencv

    - by JohnSavage
    I have been using a particular pipeline to process video with OpenCV: I encode uncompressed video (fourcc = 0) and then use the OpenCV Python bindings to open and work on those files. This had been working fine for me with OpenCV 2.3.1a on Ubuntu 11.10 until just a few days ago. For some reason it currently only allows me to read the first frame of a given file, and only the first time I open that file. Further frames are not read, and once I have touched the file once with my program, it then cannot even read the first frame. More detail: I created the uncompressed video files as follows:

        out_video.open(out_vid_name,
                       0,              // FOURCC = 0 means record raw
                       fps,
                       Size(640, 480))

    Again, these videos worked fine for me until about a week ago. Now, when I try to open one of them I get the following message (from what I think is ffmpeg):

        Processing video.avi
        Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
        [avi @ 0x29251e0] parser not found for codec rawvideo, packets or times may be invalid.

    It reads and displays the first frame fine, but then fails to read the next frame. Then, when I try to run my code on the same video again, the capture still opens with the same message as above, but it cannot even read the very first frame. Here is the code to open the capture:

        self.capture = cv2.VideoCapture(filename)
        if not self.capture.isOpened():
            print "Error: could not open capture"
            sys.exit()

    Again, this part passes without any issue, but then the break happens at:

        success, rgb = self.capture.read()
        if not success:
            print "error: could not read frame"
            return False

    This part breaks at the second frame on the first run of the video file, and then at the first frame on subsequent runs. I really don't know where to even begin debugging this. Please help!

    Read the article

  • How to continuously send data without blocking?

    - by Donal Rafferty
    I am trying to send RTP audio data from my Android application. I currently can send one RTP packet with the code below, and I also have another class that extends Thread that listens for and receives RTP packets. My question is: how do I continuously send my updated buffer through the packet payload without blocking the receiving thread?

        public void run() {
            isRecording = true;
            android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
            int buffersize = AudioRecord.getMinBufferSize(8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
            Log.d("BUFFERSIZE", "Buffer size = " + buffersize);
            arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize);
            short[] readBuffer = new short[80];
            byte[] buffer = new byte[160];
            arec.startRecording();
            while (arec.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
                int frames = arec.read(readBuffer, 0, 80);
                @SuppressWarnings("unused")
                int lenghtInBytes = codec.encode(readBuffer, 0, buffer, frames);
                RtpPacket rtpPacket = new RtpPacket();
                rtpPacket.setV(2);
                rtpPacket.setX(0);
                rtpPacket.setM(0);
                rtpPacket.setPT(0);
                rtpPacket.setSSRC(123342345);
                rtpPacket.setPayload(buffer, 160);
                try {
                    rtpSession2.sendRtpPacket(rtpPacket);
                } catch (UnknownHostException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                } catch (RtpException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }

    So when I send on one device and receive on another I get decent audio, but when I send and receive on both I get broken sound, as if they are taking turns sending and receiving audio. I have a feeling it could be down to the while loop: it could be looping around in there and not letting anything else run.
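
    For illustration, a general decoupling pattern in plain Java, deliberately not using the RtpPacket/rtpSession2 API from the question (whose details are not shown): the recording loop only enqueues encoded frames, while a separate sender thread drains the queue, so reading from AudioRecord never waits on network I/O and the receiving thread is not starved by a busy send loop.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class FrameSender implements Runnable {
            private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<byte[]>();
            private volatile boolean running = true;

            // Called from the recording loop instead of sending directly.
            public void enqueue(byte[] encodedFrame) {
                queue.offer(encodedFrame.clone());   // copy, because the recorder reuses its buffer
            }

            public void stop() {
                running = false;
            }

            @Override
            public void run() {
                while (running) {
                    try {
                        byte[] frame = queue.take();   // blocks only this sender thread
                        // build and send the RTP packet for 'frame' here, with whatever send call you use
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }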

    Read the article

  • Outgoing UDP sniffer in python?

    - by twneale
    I want to figure out whether my computer is somehow causing a UDP flood that is originating from my network. So that's my underlying problem, and what follows is simply my non-network-person attempt to hypothesize a solution using Python. I'm extrapolating from recipe 13.1 ("Passing Messages with Socket Datagrams") from the Python Cookbook (also here). Would it be possible/sensible/not insane to try writing an outgoing UDP proxy in Python, so that outgoing packets could be logged before being sent on their merry way? If so, how would one go about it? Based on my quick research, perhaps I could start a server process listening on suspect UDP ports and log anything that gets sent, then forward it on, such as:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", MYPORT))
        while True:
            packet = dict(zip(('data', 'addr'), s.recvfrom(1024)))
            log.info("Received {data} from {addr}.".format(**packet))

    But what about doing this for a large number of ports simultaneously? Impractical? Are there drawbacks or other reasons not to bother with this? Is there a better way to solve this problem (please be gentle)?

    Read the article

  • Use exec for dsadd

    - by Daryl Gill
    I'm programming on Windows Server 2008 and I wish to have a WebUI to interact with the domain's Active Directory. One of my main problems is that I'm calling dsadd from an HTML form, but it is not succeeding. I know my command is correct; I have tested it on the server's command line. My code is below:

        if (isset($_POST['Submit'])) {
            $DesiredUsername = $_POST['DesiredUsername'];
            $DesiredPassword = $_POST['DesiredPassword'];
            $DU = "{$DesiredUsername}";  // Desired Username
            $OU = "PHPCreatedUsers";     // Domain OU
            $DC1 = "slayerserv";         // Domain Part one
            $DC2 = "local";              // Domain Part Two
            $PWD = "{$DesiredPassword}"; // Password

            $ExecScript = 'dsadd user cn=$DesiredUsername,cn=PHPCreatedUsers,dc=slayerserv,dc=local -disabled no -pwd $DesiredPassword -mustchpwd yes';
            exec($ExecScript, $output);

            mysql_query("INSERT INTO addedusers (`ID`, `DU`, `OU`, `DC1`, `DC2, `PWD`) VALUES ('', '$DU', '$OU', '$DC1', '$DC2', '$PWD')");

            echo "<br><br>";
            print_r($output);
            # echo "User: $DesiredUsername Has been Created";
        }

    When I print_r($output); it returns a blank array: Array ( ). Could anyone provide me with a solution or point me in the right direction? Below is a working example of my usage of exec:

        $Script = 'ping 127.0.0.1 -n 1';
        exec($Script, $Output);
        print_r($Output);

    print_r($Output); gives:

        Array
        (
            [0] =>
            [1] => Pinging 127.0.0.1 with 32 bytes of data:
            [2] => Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
            [3] =>
            [4] => Ping statistics for 127.0.0.1:
            [5] => Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
            [6] => Approximate round trip times in milli-seconds:
            [7] => Minimum = 0ms, Maximum = 0ms, Average = 0ms
        )

    Read the article

  • Sending series of images to display like a movie on iPhone

    - by unknownthreat
    Allow me to elaborate. On the server, we will have a program that takes data from the iPhone, processes it, and produces a series of images. Each time an image is generated, it will be sent back to be displayed on the iPhone. I have done all of the above using UDP, OpenGL, and such. It works: the images are transferred to the iPhone and can be displayed, but it is slow. The image's resolution is around 320 x 420 and we send the image pixel by pixel. This naive implementation leads to a slow framerate; I see around 2-3 frames per second. There are also some UDP packets dropped, which is expected. Is there any sort of compression method available for something like this? Is there any other method that can make this better? NOTE: please don't just write "compression" as an answer, because we are aware that we will need to do it in some way.
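
    For illustration, a short sketch of the most common first step, assuming the server-side program is (or can call into) Java: JPEG-compress each frame before sending it instead of pushing raw pixels. A raw 320 x 420 RGBA frame is roughly 525 KB, while a JPEG of the same frame is typically a few tens of kilobytes.

        import java.awt.image.BufferedImage;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class FrameCompressor {
            // Encodes one frame as JPEG bytes, ready to be split into UDP-sized chunks.
            public static byte[] toJpeg(BufferedImage frame) throws IOException {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                ImageIO.write(frame, "jpg", out);   // default quality; tune via ImageWriteParam if needed
                return out.toByteArray();
            }
        }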

    Read the article

  • Simplest way to debug an android bluetooth application

    - by intiha
    I am trying to build and test a sample Android application that can simply connect to a BT server to send and receive a few packets. Since the emulator has no Bluetooth support, what is the next best thing for testing BT communication? Can I just run code that acts as a server on my laptop and dumps the BT connection onto a console? Do I have to write this code, or is there a simple tool that saves me that hassle? One more thing: I have Windows 7, and currently I attach an Anycom USB BT adapter to my PC. It is able to see my Nexus One as a BT device. I can pair with my laptop, but the connection always fails. When it tries to pair I have tried both 0000 and 1234 as suggested on my Nexus, but my connection list still shows my laptop as paired but not connected. Any idea why? I searched for this problem and apparently people talk about rebooting and power-cycling BT, but these solutions don't work for me. Thanks. Affan.
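
    For illustration, a minimal sketch of "code that acts as a server on the laptop and dumps the BT connection onto a console", written against the JSR-82 Bluetooth API (available on a Windows laptop through a library such as BlueCove; that library choice is an assumption). 0x1101 is the standard Serial Port Profile UUID; the service name and hex-dump format are arbitrary.

        import java.io.InputStream;
        import javax.bluetooth.UUID;
        import javax.microedition.io.Connector;
        import javax.microedition.io.StreamConnection;
        import javax.microedition.io.StreamConnectionNotifier;

        public class SppConsoleServer {
            public static void main(String[] args) throws Exception {
                String url = "btspp://localhost:" + new UUID(0x1101) + ";name=SPPDump";
                StreamConnectionNotifier notifier =
                        (StreamConnectionNotifier) Connector.open(url);

                System.out.println("Waiting for a Bluetooth client...");
                StreamConnection conn = notifier.acceptAndOpen();   // blocks until the phone connects

                InputStream in = conn.openInputStream();
                int b;
                while ((b = in.read()) != -1) {
                    System.out.printf("%02x ", b);                  // dump received bytes as hex
                }
                conn.close();
                notifier.close();
            }
        }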

    Read the article

  • Traceroute comparison and statistics

    - by ben-casey
    I have a number of traceroutes that I need to compare against each other, but I don't know the best way to do it. I've been told that hash maps are a good technique, but I don't know how to apply them in my code. So far I have:

        FileInputStream fstream = new FileInputStream("traceroute.log");
        // Get the object of DataInputStream
        DataInputStream in = new DataInputStream(fstream);
        BufferedReader br = new BufferedReader(new InputStreamReader(in));
        String strLine;
        // reads lines in
        while ((strLine = br.readLine()) != null) {
            System.out.println(strLine);
        }

    and the output looks like this:

        Wed Mar 31 01:00:03 BST 2010
        traceroute to www.bbc.co.uk (212.58.251.195), 30 hops max, 40 byte packets
         1  139.222.0.1 (139.222.0.1)  0.873 ms  1.074 ms  1.162 ms
         2  core-from-cmp.uea.ac.uk (10.0.0.1)  0.312 ms  0.350 ms  0.463 ms
         3  ueaha1btm-from-uea1 (172.16.0.34)  0.791 ms  0.772 ms  1.238 ms
         4  bound-from-ueahatop.uea.ac.uk (193.62.92.71)  5.094 ms  4.451 ms  4.441 ms
         5  gi0-3.norw-rbr1.eastnet.ja.net (193.60.0.21)  4.426 ms  5.014 ms  4.389 ms
         6  gi3-0-2.chel-rbr1.eastnet.ja.net (193.63.107.114)  6.055 ms  6.039 ms  *
         7  lond-sbr1.ja.net (146.97.40.45)  6.994 ms  7.493 ms  7.457 ms
         8  so-6-0-0.lond-sbr4.ja.net (146.97.33.154)  8.206 ms  8.187 ms  8.234 ms
         9  po1.lond-ban4.ja.net (146.97.35.110)  8.673 ms  6.294 ms  7.668 ms
        10  bbc.lond-sbr4.ja.net (193.62.157.178)  6.303 ms  8.118 ms  8.107 ms
        11  212.58.238.153 (212.58.238.153)  6.245 ms  8.066 ms  6.541 ms
        12  212.58.239.62 (212.58.239.62)  7.023 ms  8.419 ms  7.068 ms

    What I need to do is compare this trace against another one just like it, look for changes and time differences, etc., and then print a stats page.
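
    For illustration, a rough sketch of the hash-map idea: parse each hop line into hop number -> router IP, build one map per traceroute, and compare the maps key by key. The regex only picks out the leading hop number and the first "(a.b.c.d)" on each line; comparing the timing values would be a straightforward extension.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class TraceCompare {
            private static final Pattern HOP =
                    Pattern.compile("^\\s*(\\d+)\\s+.*?\\((\\d+\\.\\d+\\.\\d+\\.\\d+)\\)");

            // Builds hopNumber -> routerIp for one traceroute's lines.
            public static Map<Integer, String> parse(Iterable<String> lines) {
                Map<Integer, String> hops = new HashMap<Integer, String>();
                for (String line : lines) {
                    Matcher m = HOP.matcher(line);
                    if (m.find()) {
                        hops.put(Integer.parseInt(m.group(1)), m.group(2));
                    }
                }
                return hops;
            }

            // Prints every hop whose router differs between the two traces.
            public static void report(Map<Integer, String> a, Map<Integer, String> b) {
                for (Integer hop : a.keySet()) {
                    String other = b.get(hop);
                    if (other == null || !other.equals(a.get(hop))) {
                        System.out.println("hop " + hop + ": " + a.get(hop) + " -> " + other);
                    }
                }
            }
        }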

    Read the article

  • Configuring IHS server to direct traffic to the Netty component bound to a port

    - by rbot
    I have a server component (based on JBoss Netty) deployed in WAS which can maintain and handle persistent connections. When deployed and initiated within the WAS environment, this component binds to a port and listens for incoming HTTP connections. [Why I had to deploy a Netty HTTP server within WAS is another story (a management requirement!). Netty is deployed in WAS as a Spring bean which, when initiated, runs on a port on the machine, independent of WAS.] Clients (a mobile app) were able to establish persistent HTTP connections (to the above URL::Port) with this Netty component and send/receive requests. Now I have to replicate this feature in our production environment, where an IHS server (web server) sits in front of WAS. What I expected is to get an IHS URL which could redirect the incoming packets to the specific port on WAS, so that the client apps can establish a similar persistent HTTP connection. Our server admin tried a few combinations, and we have not been able to identify how to proceed further on this. Your expert ideas would be highly appreciated.

    Read the article

  • Passive and active sockets

    - by davsan
    Quoting from this socket tutorial: An active socket is connected to a remote active socket via an open data connection... A passive socket is not connected, but rather awaits an incoming connection, which will spawn a new active socket once a connection is established... Each port can have a single passive socket bound to it, awaiting incoming connections, and multiple active sockets, each corresponding to an open connection on the port. It's as if the factory worker is waiting for new messages to arrive (he represents the passive socket), and when one message arrives from a new sender, he initiates a correspondence (a connection) with them by delegating someone else (an active socket) to actually read the packet and respond back to the sender if necessary. This permits the factory worker to be free to receive new packets... Then the tutorial explains that, after a connection is established, the active socket continues receiving data until there are no remaining bytes, and then closes the connection. What I didn't understand is this: suppose there's an incoming connection to the port, and the sender wants to send a little data every 20 minutes. If the active socket closes the connection when there are no remaining bytes, does the sender have to reconnect to the port every time it wants to send data? How do we persist a once-established connection for a longer time? Can you tell me what I'm missing here? My second question is: what determines the limit on the number of concurrently working active sockets?
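
    For illustration, a small sketch of the two roles in Java terms: the ServerSocket is the passive socket, each accept() returns a new active socket, and that active socket stays open for as long as neither side (nor an idle middlebox in between) closes it, so a sender that transmits a little data every 20 minutes does not have to reconnect each time unless the server chooses to close idle connections.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class PassiveActiveDemo {
            public static void main(String[] args) throws Exception {
                ServerSocket passive = new ServerSocket(5000);      // bound and listening, never "connected"
                while (true) {
                    Socket active = passive.accept();               // one active socket per client
                    new Thread(() -> handle(active)).start();
                }
            }

            private static void handle(Socket active) {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(active.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {        // blocks between messages; the connection stays open
                        System.out.println(active.getRemoteSocketAddress() + ": " + line);
                    }
                } catch (Exception e) {
                    e.printStackTrace();                            // a reset or I/O error ends this client's session
                }
            }
        }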

    Read the article

  • Parsing every part of an HTTP header field-value

    - by brickner
    Hi all. I'm parsing HTTP data directly from packets (either TCP-reconstructed or not; you can assume they are). I'm looking for the best way to parse HTTP as accurately as possible, and the main issue here is the HTTP header. Looking at the basic RFC for HTTP/1.1, it seems that HTTP header parsing would be complex: the RFC describes very complex grammar rules for different parts of the header. Should I translate these into regular expressions to parse the different parts of the HTTP header? The basic parsing I've written so far covers the generic header form: message-header = field-name ":" [ field-value ]. I've included replacing inner LWS with SP and merging repeated headers with the same field-name into comma-separated values, as described in section 4.2. However, looking at section 14.9, for example, shows that in order to parse the different parts of the field-value I need a much more complex parsing scheme. How do you suggest I handle the complex parts of HTTP parsing (specifically the field-value), assuming I want to give the parser's users the full capabilities of HTTP and to parse every part of it? Design suggestions would also be appreciated. Thanks.
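
    For illustration, a sketch of one layered approach (an added example, not the asker's code): a single generic pass that only implements message-header = field-name ":" [ field-value ], plus LWS unfolding and comma-joining of repeated field-names per section 4.2, with separate per-field parsers layered on top for structured values such as Cache-Control from section 14.9.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class HeaderParser {
            // Generic pass only: per-field grammars (sections 14.x) are applied later to the values.
            public static Map<String, String> parseGeneric(List<String> rawLines) {
                // Unfold: a line starting with SP or HT continues the previous field-value.
                List<String> unfolded = new ArrayList<String>();
                for (String line : rawLines) {
                    if (!unfolded.isEmpty() && (line.startsWith(" ") || line.startsWith("\t"))) {
                        unfolded.set(unfolded.size() - 1,
                                     unfolded.get(unfolded.size() - 1) + " " + line.trim());
                    } else {
                        unfolded.add(line);
                    }
                }
                // Split on the first ':' and merge repeated field-names with commas.
                Map<String, String> headers = new HashMap<String, String>();
                for (String line : unfolded) {
                    int colon = line.indexOf(':');
                    if (colon < 0) continue;                        // not a header line; skip it
                    String name = line.substring(0, colon).trim().toLowerCase();
                    String value = line.substring(colon + 1).trim();
                    String prev = headers.get(name);
                    headers.put(name, prev == null ? value : prev + ", " + value);
                }
                return headers;
            }
        }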

    Read the article

  • What's a way for a client to automatically resolve the IP address of a server?

    - by zooropa
    The project I am working on is a client/server architecture. In a LAN environment, I want the clients to be able to automatically determine the server's address; I want to avoid having to manually configure each client with the IP address of the server. What is the best way to do this? Some alternatives I have thought about: The server could listen for broadcast packets from the clients. The message from the client would be a request for the IP address of the server, and the server would respond with its address. Alternatively, the machine running my project's server could also run a BIND server, and the LAN's router could be configured to use it as one of its DNS servers. I think I saw that there is a BIND library; does that mean I can build the BIND service into my server so that BIND doesn't have to be installed separately on the server? Any other ideas? What have you done in the past? What are the pros/cons of these approaches and others that might be suggested? Thanks for your help!
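
    For illustration, a sketch of the first alternative in Java: the client broadcasts a short probe and the server answers with a unicast reply, which gives the client the server's IP. The port number 8888 and the probe/answer strings are made-up values for the example.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;
        import java.nio.charset.StandardCharsets;

        public class Discovery {
            // Client side: broadcast a probe and wait briefly for the server's reply.
            public static InetAddress findServer() throws Exception {
                try (DatagramSocket socket = new DatagramSocket()) {
                    socket.setBroadcast(true);
                    socket.setSoTimeout(2000);                          // give up after 2 seconds
                    byte[] probe = "WHO_IS_SERVER?".getBytes(StandardCharsets.UTF_8);
                    socket.send(new DatagramPacket(probe, probe.length,
                            InetAddress.getByName("255.255.255.255"), 8888));

                    DatagramPacket reply = new DatagramPacket(new byte[64], 64);
                    socket.receive(reply);                              // the server's unicast answer
                    return reply.getAddress();                          // this is the server's IP
                }
            }

            // Server side: listen on the discovery port and answer every probe.
            public static void serve() throws Exception {
                try (DatagramSocket socket = new DatagramSocket(8888)) {
                    byte[] buf = new byte[64];
                    while (true) {
                        DatagramPacket probe = new DatagramPacket(buf, buf.length);
                        socket.receive(probe);
                        byte[] answer = "I_AM_SERVER".getBytes(StandardCharsets.UTF_8);
                        socket.send(new DatagramPacket(answer, answer.length,
                                probe.getAddress(), probe.getPort()));
                    }
                }
            }
        }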

    Read the article

  • Symbian: clear buffer of RSocket object

    - by Heinz
    Hi, I have to come back once again to sockets in Symbian. The code to set up a connection to a remote server looks as follows:

        TInetAddr serverAddr;
        TUint iPort = 111;
        TRequestStatus iStatus;
        TSockXfrLength len;

        TInt res = iSocketSrv.Connect();
        res = iSocket.Open(iSocketSrv, KAfInet, KSockStream, KProtocolInetTcp);
        res = iSocket.SetOpt(KSoTcpSendWinSize, KSolInetTcp, 0x10000);
        serverAddr.SetPort(iPort);
        serverAddr.SetAddress(INET_ADDR(11,11,179,154));
        iSocket.Connect(serverAddr, iStatus);
        User::WaitForRequest(iStatus);

    Over the iSocket I receive packets of variable size. On very few occurrences it happens that such a packet is corrupted. What I would like to do then is clear all the data that is currently in the iSocket buffer and ready to be read. I have not seen any method of RSocket that allows me to clear the content of the buffer. Does anyone know how to do that? If possible, I would like to avoid using RecvOneOrMore() or a similar recv function to clear the buffer. Thanks

    Read the article

  • How to change internal buffer size of DataInputStream

    - by Gaks
    I'm using this kind of code for my TCP/IP connection:

        sock = new Socket(host, port);
        sock.setKeepAlive(true);
        din = new DataInputStream(sock.getInputStream());
        dout = new DataOutputStream(sock.getOutputStream());

    Then, in a separate thread, I check din.available() to see whether there are incoming packets to read. The problem is that if a packet bigger than 2048 bytes arrives, din.available() returns 2048 anyway, just as if there were a 2048-byte internal buffer. I can't read those 2048 bytes when I know it's not the full packet my application is waiting for. If I don't read them, however, it all stays stuck at 2048 bytes and never receives more. Can I enlarge the buffer size of DataInputStream somehow? The socket receive buffer is 16384, as returned by sock.getReceiveBufferSize(), so it's not the socket limiting me to 2048 bytes. If there is no way to increase the DataInputStream buffer size, I guess the only way is to declare my own buffer and read everything from the DataInputStream into that buffer? Regards
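
    For illustration, a small sketch of the usual way around polling available(): if the protocol says how long each packet is (a 2-byte length prefix is assumed here, which may not match the real protocol), DataInputStream.readFully() simply blocks until that many bytes have arrived, regardless of how the stream chunks them internally.

        import java.io.DataInputStream;
        import java.io.IOException;

        public class PacketReader {
            // Reads one length-prefixed packet, blocking until it is complete.
            public static byte[] readPacket(DataInputStream din) throws IOException {
                int length = din.readUnsignedShort();   // blocks until the 2 length bytes arrive
                byte[] packet = new byte[length];
                din.readFully(packet);                  // blocks until the whole packet is in
                return packet;
            }
        }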

    Read the article

  • C socket programming: client send() but server select() doesn't see it

    - by Fantastic Fourier
    Hey all, I have a server and a client running on two different machines, where the client send()s but the server doesn't seem to receive the message. The server employs select() to monitor sockets for any incoming connections/messages. I can see that when the server accepts a new connection it updates the fd_set array, but select() always returns 0 despite the client send()ing messages. The connection is TCP and the machines are separated by about one router, so dropped packets are highly unlikely. I have a feeling that it's not select() but perhaps send()/sendto() from the client that may be the problem, but I'm not sure how to go about localizing the problem area.

        while(1) {
            readset = info->read_set;
            ready = select(info->max_fd+1, &readset, NULL, NULL, &timeout);
        }

    Above is the server-side code, where the server has a thread that runs select() indefinitely.

        rv = connect(sockfd, (struct sockaddr *) &server_address, sizeof(server_address));
        printf("rv = %i\n", rv);
        if (rv < 0) {
            printf("MAIN: ERROR connect() %i: %s\n", errno, strerror(errno));
            exit(1);
        } else
            printf("connected\n");

        sleep(3);

        char * somemsg = "is this working yet?\0";
        rv = send(sockfd, somemsg, sizeof(somemsg), NULL);
        if (rv < 0)
            printf("MAIN: ERROR send() %i: %s\n", errno, strerror(errno));
        printf("MAIN: rv is %i\n", rv);

        rv = sendto(sockfd, somemsg, sizeof(somemsg), NULL, &server_address, sizeof(server_address));
        if (rv < 0)
            printf("MAIN: ERROR sendto() %i: %s\n", errno, strerror(errno));
        printf("MAIN: rv is %i\n", rv);

    And this is the client side, where it connects, sends the messages, and prints:

        connected
        MAIN: rv is 4
        MAIN: rv is 4

    Any comments or insights are appreciated.

    Read the article

  • SSL over TDS, SQL Server 2005 Express

    - by reuvenab
    I captured the packets sent/received by a Windows XP machine when connecting to SQL Server 2005 Express using TLS encryption. The server and client exchange Hello messages, each sends a ChangeCipherSpec message, and then each sends a strange message that is not described in the TLS protocol. What is this message, and is SSL over TDS standards-compliant at all? Server-side capture:

        16                                               SSL Handshake
        03 01 00 4a
        02                                               ServerHello
        00 00 46 03 01
        4b dd 68 59                                      GMT
        33 13 37 98 10 5d 57 9d ff 71 70 dc d6 6f 9e 2c  Random[00..13]
        cb 96 c0 2e b3 2f 9b 74 67 05 cc 96              Random[14..27]
        20 72 26 00 00 0f db 7f d9 b0 51 c2 4f cd 81 4c  Session ID
        3f e3 d2 d1 da 55 c0 fe 9b 56 b7 6f 70 86 fe bb  Session ID
        54                                               Session ID
        00 04                                            Cipher Suite
        00                                               Compression
        14 03 01 00 01 01                                ChangeCipherSpec
        16 03 01 ????                                    Finished ???
        00 20 d0 da cc c4 36 11 43 ff 22 25 8a e1 38 2b  ???? ???
        71 ce f3 59 9e 35 b0 be b2 4b 1d c5 21 21 ce 41  ???? ???
        8e 24

    Read the article

  • Orbited exception: "Data must not be unicode"

    - by Sid
    I am working with Orbited, and once I switch Orbited on in production mode it throws the following error on my screen:

        --- <exception caught here> ---
        File "/usr/lib/python2.6/dist-packages/twisted/web/server.py", line 150, in process
            self.render(resrc)
        File "/usr/lib/python2.6/dist-packages/twisted/web/server.py", line 157, in render
            body = resrc.render(self)
        File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/base.py", line 21, in render
            self.conn.transportOpened(self)
        File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/cometsession.py", line 322, in transportOpened
            self.cometTransport.flush()
        File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/base.py", line 45, in flush
            self.write(self.packets)
        File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/htmlfile.py", line 42, in write
            self.request.write(payload);
        File "/usr/lib/python2.6/dist-packages/twisted/web/http.py", line 862, in write
            self.transport.write(data)
        File "/usr/lib/python2.6/dist-packages/twisted/internet/tcp.py", line 420, in write
            abstract.FileDescriptor.write(self, bytes)
        File "/usr/lib/python2.6/dist-packages/twisted/internet/abstract.py", line 170, in write
            raise TypeError("Data must not be unicode")
        exceptions.TypeError: Data must not be unicode

    I have absolutely no clue as to what could be the problem. Could anyone point me in the right direction?

    Read the article

  • How do I programmatically access the face cache in Windows Live Photo Gallery?

    - by acorderob
    I'm not talking about the "people tags" embedded in the XMP packets of JPEGs; I'm talking about the face database used to recognize new faces. I want to add to my program the option to recognize faces using the already-trained database of WLPG. I managed to use the API (a type library DLL) to detect faces, but to recognize them it needs an Exemplar Cache object that is not available in the same API. I could create my own object, but I want to use the already existing one to avoid duplicate training for the user. I know the database is in C:\Users\\AppData\Local\Microsoft\Windows Live Photo Gallery and that it is in SQL Server Compact format. I tried to open the database with Visual Studio 2010, but it says that it is in an older version (pre-3.5) and needs to be upgraded. I don't want to change the database, just read it. I don't know how WLPG reads it, since apparently I don't have the correct OLE DB provider version. I would also prefer to read it without accessing the database directly, but I don't see any DLL that exports that functionality. BTW, I'm using Delphi 2010. Any ideas?

    Read the article

  • How to bind/connect multiple UDP sockets

    - by nicboul
    My initial UDP socket is bound to 127.0.0.1:9898. The first time I am notified of incoming data by epoll/kqueue, I call recvfrom() and fill a struct sockaddr called peer_name that contains the peer's information (ip:port). Then I create a new UDP socket using socket(), bind() this newly created socket to the same ip:port (127.0.0.1:9898) as my original socket, and then connect() the new socket to the peer who just sent me something, using the information in peer_name. I then add my newly created socket to my epoll/kqueue vector and wait for notification. I would expect to ONLY receive UDP frames from the peer I'm ""connected to"". 1/ Is netstat -a -p udp supposed to show me the IP:PORT of the peer my newly created socket is ""connected to""? 2/ I'm probably doing something wrong, since after creating my new socket, this socket receives all incoming UDP packets destined for the IP:PORT I'm bound to, regardless of the source peer IP:PORT. I would like to see a working example of what I'm trying to do :) or any hint on what I'm doing wrong. Thanks!

    Read the article

  • node.js UDP data loss at high packet rates

    - by koleto
    I am observing significant data loss on a UDP connection with node.js 0.6.18 and 0.8.0. It appears at high packet rates, around 1200 packets per second with frames near the 1500-byte limit. Each data packet has an incrementing number, so it is easy to track the number of lost packets.

        var server = dgram.createSocket("udp4");
        server.on("message", function (message, rinfo) {
            //~processData(message);
            //~ writeData(message, null, 5000);
        }).bind(10001);

    In the receiving callback I tested two cases. First I saved 5000 packets to a file; the result was no dropped packets. Then I included a data processing routine and got about a 50% drop rate. What I expected was that the data processing routine would be completely asynchronous and would not introduce dead time into the system, since it is a simple parser that processes the binary data in the packet and emits events to a further processing routine. It seems that the parsing routine introduces dead time in which the event handler is unable to handle every packet. At low packet rates (< 1200 packets/sec) no data loss is observed! Is this a bug, or am I doing something wrong?

    Read the article

  • Is this the right way to write a ProtocolDecoder in MINA?

    - by phpscriptcoder
        public class CustomProtocolDecoder extends CumulativeProtocolDecoder {
            byte currentCmd = -1;
            int currentSize = -1;
            boolean isFirst = false;

            @Override
            protected boolean doDecode(IoSession is, ByteBuffer bb, ProtocolDecoderOutput pdo) throws Exception {
                if (currentCmd == -1) {
                    currentCmd = bb.get();
                    currentSize = Packet.getSize(currentCmd);
                    isFirst = true;
                }
                while (bb.remaining() > 0) {
                    if (!isFirst) {
                        currentCmd = bb.get();
                        currentSize = Packet.getSize(currentCmd);
                    } else
                        isFirst = false;
                    //System.err.println(currentCmd + " " + bb.remaining() + " " + currentSize);
                    if (bb.remaining() >= currentSize - 1) {
                        Packet p = PacketDecoder.decodePacket(bb, currentCmd);
                        pdo.write(p);
                    } else {
                        bb.flip();
                        return false;
                    }
                }
                if (bb.remaining() == 0)
                    return true;
                else
                    return false;
            }
        }

    Anyone see anything wrong with this code? When a lot of packets are received at once, even when only one client is connected, one of them might get cut off at the end (12 bytes instead of 15 bytes, for example) which is obviously bad.

    Read the article
