Search Results

Search found 6654 results on 267 pages for 'socket io'.

  • C++ socket concurrent server

    - by gregpi
    Hi all; I'm writing a concurrent server that's supposed to have a communication channel and a data channel. The client initially connects to the communication channel to authenticate; upon successful authentication, the client is then connected to the data channel to access data. My program is already doing that, and I'm using threads. My only issue is that if I try to connect another client, I get a "cannot bind: address already in use" error. I have it this way: PART A: The client connects to port 4567 (and enters his login info). A thread is spawned to handle the client (repeated for each client that connects). In that thread, I have a function (let's call it FUNC_A) that checks the client's login info (don't worry about how the check is done); if successful, the thread starts the data server (listening on 8976), then sends an OK to the client. Once that is received, the client attempts to connect to the data server. PART B: Once a client connects to the data server, from inside FUNC_A the client is accepted and another thread is spawned to handle the client's connection to the data server (hopefully everything is clear). Now, all of that is working fine. However, if I try to connect a second client, when it gets to PART B I get a "cannot bind error: address already in use". I've tried so many different ways; I've even tried spawning a thread to start the data server and accept the client, and then starting another thread to handle that connection. Still no luck. Please give me a suggestion as to what I'm doing wrong, how I should go about doing this, or what's the best way to implement it. Thank you
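    One likely cause, sketched below under the assumption that FUNC_A calls socket/bind/listen on port 8976 for every authenticated client: a listening port can only be bound once, so the second client's thread fails with EADDRINUSE. The usual structure is to create the data-channel listener a single time and have each per-client thread (or FUNC_A) only call accept() on it; SO_REUSEADDR additionally avoids the same error after a quick restart. This is a minimal POSIX-sockets sketch with an illustrative helper name, not your exact code:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>
        #include <cstdint>
        #include <cstdio>

        // Create the data-channel listener ONCE (e.g. at startup). FUNC_A and the
        // per-client threads then only call accept() on the returned descriptor;
        // they never bind() port 8976 again, which is what raises EADDRINUSE.
        static int make_data_listener(uint16_t port) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) { perror("socket"); return -1; }
            int yes = 1;
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)); // tolerate TIME_WAIT on restart
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);
            if (bind(fd, (sockaddr*)&addr, sizeof(addr)) < 0) { perror("bind"); close(fd); return -1; }
            if (listen(fd, SOMAXCONN) < 0) { perror("listen"); close(fd); return -1; }
            return fd;
        }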

    Read the article

  • socket problem in java

    - by stefan89
    I have two classes X and Y, like this: class X implements Serializable { int val1; Y val2; } class Y implements Serializable { int val; } I want to transmit an object of type X from a client to a server, but I can't because class X has a field of type Y. When I replace the field of type Y with a field of type X in class X, it works.

    Read the article

  • sending a packet to multiple clients at a time from a UDP socket

    - by mawia
    Hi all, I am trying to write a UDP server that sends a copy of a file to multiple clients. Now suppose I somehow know the addresses of those clients statically (for the sake of simplicity), and I want to send this packet to those addresses. How exactly do I need to fill the sockaddr structure to contain the address of each client? I am using an array of sockaddr structures (to hold the client addresses) and trying to send to each one of them in turn. The problem is filling the individual sockaddr structures with the client addresses. I was trying something like this: sa[1].sin_family = AF_INET; sa[1].sin_addr.s_addr = htonl(INADDR_ANY); //shouldn't I replace this INADDR_ANY with the client IP? sa[1].sin_port = htons(50002); Correct me if this is not the correct way. All your help in this regard will be highly appreciated. With thanks in advance, Mawia
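    A minimal sketch of filling the destination addresses, assuming IPv4 and sendto(): INADDR_ANY only makes sense when binding your own socket; for a destination you store each client's real IP, e.g. with inet_pton(). The addresses and ports below are made-up placeholders:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <cstring>

        // Build one destination per (statically known) client; values are placeholders.
        void fill_client_addresses(sockaddr_in sa[2]) {
            std::memset(sa, 0, 2 * sizeof(sockaddr_in));
            sa[0].sin_family = AF_INET;
            sa[0].sin_port   = htons(50001);                      // first client's port
            inet_pton(AF_INET, "192.168.1.10", &sa[0].sin_addr);  // first client's IP
            sa[1].sin_family = AF_INET;
            sa[1].sin_port   = htons(50002);                      // second client's port
            inet_pton(AF_INET, "192.168.1.11", &sa[1].sin_addr);  // second client's IP
            // then, for each client i:
            //   sendto(sockfd, buf, len, 0, (sockaddr*)&sa[i], sizeof(sa[i]));
        }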

    Read the article

  • Android Socket Connection

    - by DrCoPe
    I'm guessing this will be such a newbie question, but I hit a wall and... I am running the jWebSocket (http://jwebsocket.org) stand-alone server. For a client I am using Weberknecht (http://code.google.com/p/weberknecht/), and Eclipse is my IDE of choice. Now, when I start the server and run the Weberknecht client like a normal Java application, I get a connection. Granted, the connection is quickly dropped because the handshake needs to be configured, but at least the server shows me a connection was attempted. However, when I use the exact same code in my hello world Android app I get nothing :( I am also not seeing any console output even though I used both Log.i and Log.v: Log.i(TAG, "YEI! connectToWS 1!"); Log.v(TAG, "YEI! connectToWS 1!"); Maybe I am calling the connect method in the wrong place? @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); connectToWS(); ... Any help would be GREATLY appreciated!

    Read the article

  • Socket Programming: InputStream stuck in loop - read() always returns 0

    - by Atom Skaa ska Hic
    Server-side code: public static boolean sendFile() { int start = Integer.parseInt(startAndEnd[0]) - 1; int end = Integer.parseInt(startAndEnd[1]) - 1; int size = (end - start) + 1; try { bos = new BufferedOutputStream(initSocket.getOutputStream()); bos.write(byteArr,start,size); bos.flush(); bos.close(); initSocket.close(); System.out.println("Send file to : " + initSocket); } catch (IOException e) { System.out.println(e.getLocalizedMessage()); disconnected(); return false; } return true; } Client-side code: public boolean receiveFile() { int current = 0; try { int bytesRead = bis.read(byteArr,0,byteArr.length); System.out.println("Receive file from : " + client); current = bytesRead; do { bytesRead = bis.read(byteArr, current, (byteArr.length-current)); if(bytesRead >= 0) current += bytesRead; } while(bytesRead != -1); bis.close(); bos.write(byteArr, 0 , current); bos.flush(); bos.close(); } catch (IOException e) { System.out.println(e.getLocalizedMessage()); disconnected(); return false; } return true; } The client side is multithreaded; the server side does not use multithreading. I've only pasted the code that causes the problem; if you want to see all the code, please tell me. After debugging, I found that no matter what I set the maximum number of threads to, the first thread always gets stuck in this loop: bis.read(...) always returns 0, and even though the server has closed the stream, it never gets out of the loop. I don't know why, but the other threads work correctly. do { bytesRead = bis.read(byteArr, current, (byteArr.length-current)); if(bytesRead >= 0) current += bytesRead; } while(bytesRead != -1);
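    A sketch of the likely mechanism, in POSIX terms rather than your exact Java code: a read with a requested length of 0 returns 0, so once current has reached byteArr.length the loop keeps asking for 0 bytes and can never see the -1 end-of-stream marker. The loop needs a second exit condition for "buffer full"; the helper name below is illustrative:

        #include <sys/socket.h>
        #include <cstddef>

        // Read until either the buffer is full or the peer closes the connection.
        // (recv() == 0 with a non-zero request is the POSIX analogue of Java's
        // read() returning -1; asking for 0 bytes always "succeeds" with 0.)
        ssize_t receive_file(int fd, char* buf, size_t cap) {
            size_t current = 0;
            while (current < cap) {                            // never request 0 bytes
                ssize_t n = recv(fd, buf + current, cap - current, 0);
                if (n == 0) break;                             // peer closed: done
                if (n < 0) return -1;                          // real error
                current += static_cast<size_t>(n);
            }
            return static_cast<ssize_t>(current);
        }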

    Read the article

  • How to handle asynchronous socket receiving in C++?

    - by Overv
    I'm currently using a thread to handle Connect and Send calls asynchronously. This is all working fine, but now I want to make receiving asynchronous too. How should I receive data without pausing the whole queue while waiting for data? The only solution I can think of right now is a second thread.
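    One alternative to a dedicated receive thread, sketched below with POSIX/BSD sockets (the same pattern works on Windows via select()): let the existing worker thread multiplex with select() or poll(), so it only calls recv() when data is actually ready and otherwise keeps draining the send queue. The handle_received/flush_pending_sends calls are hypothetical placeholders:

        #include <sys/select.h>
        #include <sys/socket.h>

        void service_socket(int fd) {
            for (;;) {
                fd_set readfds;
                FD_ZERO(&readfds);
                FD_SET(fd, &readfds);
                timeval tv{0, 10 * 1000};                       // wait at most 10 ms for data
                int ready = select(fd + 1, &readfds, nullptr, nullptr, &tv);
                if (ready > 0 && FD_ISSET(fd, &readfds)) {
                    char buf[4096];
                    ssize_t n = recv(fd, buf, sizeof(buf), 0);
                    if (n <= 0) break;                          // peer closed or error
                    // handle_received(buf, n);                 // hypothetical callback
                }
                // flush_pending_sends(fd);                     // hypothetical: service the send queue
            }
        }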

    Read the article

  • Traditional IO vs memory-mapped

    - by Senne
    I'm trying to illustrate the difference in performance between traditional IO and memory-mapped files in Java to students. I found an example somewhere on the internet, but not everything is clear to me; I don't even think all the steps are necessary. I read a lot about it here and there, but I'm not convinced that either of them is implemented correctly. The code I'm trying to understand is: public class FileCopy{ public static void main(String args[]){ if (args.length < 1){ System.out.println(" Wrong usage!"); System.out.println(" Correct usage is : java FileCopy <large file with full path>"); System.exit(0); } String inFileName = args[0]; File inFile = new File(inFileName); if (inFile.exists() != true){ System.out.println(inFileName + " does not exist!"); System.exit(0); } try{ new FileCopy().memoryMappedCopy(inFileName, inFileName+".new" ); new FileCopy().customBufferedCopy(inFileName, inFileName+".new1"); }catch(FileNotFoundException fne){ fne.printStackTrace(); }catch(IOException ioe){ ioe.printStackTrace(); }catch (Exception e){ e.printStackTrace(); } } public void memoryMappedCopy(String fromFile, String toFile ) throws Exception{ long timeIn = new Date().getTime(); // read input file RandomAccessFile rafIn = new RandomAccessFile(fromFile, "rw"); FileChannel fcIn = rafIn.getChannel(); ByteBuffer byteBuffIn = fcIn.map(FileChannel.MapMode.READ_WRITE, 0,(int) fcIn.size()); fcIn.read(byteBuffIn); byteBuffIn.flip(); RandomAccessFile rafOut = new RandomAccessFile(toFile, "rw"); FileChannel fcOut = rafOut.getChannel(); ByteBuffer writeMap = fcOut.map(FileChannel.MapMode.READ_WRITE,0,(int) fcIn.size()); writeMap.put(byteBuffIn); long timeOut = new Date().getTime(); System.out.println("Memory mapped copy Time for a file of size :" + (int) fcIn.size() +" is "+(timeOut-timeIn)); fcOut.close(); fcIn.close(); } static final int CHUNK_SIZE = 100000; static final char[] inChars = new char[CHUNK_SIZE]; public static void customBufferedCopy(String fromFile, String toFile) throws IOException{ long timeIn = new Date().getTime(); Reader in = new FileReader(fromFile); Writer out = new FileWriter(toFile); while (true) { synchronized (inChars) { int amountRead = in.read(inChars); if (amountRead == -1) { break; } out.write(inChars, 0, amountRead); } } long timeOut = new Date().getTime(); System.out.println("Custom buffered copy Time for a file of size :" + (int) new File(fromFile).length() +" is "+(timeOut-timeIn)); in.close(); out.close(); } } When exactly is it necessary to use RandomAccessFile? Here it is used to read and write in memoryMappedCopy; is it actually necessary just to copy a file at all? Or is it part of memory mapping? In customBufferedCopy, why is synchronized used here?
    I also found a different example that -should- test the performance between the two: public class MappedIO { private static int numOfInts = 4000000; private static int numOfUbuffInts = 200000; private abstract static class Tester { private String name; public Tester(String name) { this.name = name; } public long runTest() { System.out.print(name + ": "); try { long startTime = System.currentTimeMillis(); test(); long endTime = System.currentTimeMillis(); return (endTime - startTime); } catch (IOException e) { throw new RuntimeException(e); } } public abstract void test() throws IOException; } private static Tester[] tests = { new Tester("Stream Write") { public void test() throws IOException { DataOutputStream dos = new DataOutputStream( new BufferedOutputStream( new FileOutputStream(new File("temp.tmp")))); for(int i = 0; i < numOfInts; i++) dos.writeInt(i); dos.close(); } }, new Tester("Mapped Write") { public void test() throws IOException { FileChannel fc = new RandomAccessFile("temp.tmp", "rw") .getChannel(); IntBuffer ib = fc.map( FileChannel.MapMode.READ_WRITE, 0, fc.size()) .asIntBuffer(); for(int i = 0; i < numOfInts; i++) ib.put(i); fc.close(); } }, new Tester("Stream Read") { public void test() throws IOException { DataInputStream dis = new DataInputStream( new BufferedInputStream( new FileInputStream("temp.tmp"))); for(int i = 0; i < numOfInts; i++) dis.readInt(); dis.close(); } }, new Tester("Mapped Read") { public void test() throws IOException { FileChannel fc = new FileInputStream( new File("temp.tmp")).getChannel(); IntBuffer ib = fc.map( FileChannel.MapMode.READ_ONLY, 0, fc.size()) .asIntBuffer(); while(ib.hasRemaining()) ib.get(); fc.close(); } }, new Tester("Stream Read/Write") { public void test() throws IOException { RandomAccessFile raf = new RandomAccessFile( new File("temp.tmp"), "rw"); raf.writeInt(1); for(int i = 0; i < numOfUbuffInts; i++) { raf.seek(raf.length() - 4); raf.writeInt(raf.readInt()); } raf.close(); } }, new Tester("Mapped Read/Write") { public void test() throws IOException { FileChannel fc = new RandomAccessFile( new File("temp.tmp"), "rw").getChannel(); IntBuffer ib = fc.map( FileChannel.MapMode.READ_WRITE, 0, fc.size()) .asIntBuffer(); ib.put(0); for(int i = 1; i < numOfUbuffInts; i++) ib.put(ib.get(i - 1)); fc.close(); } } }; public static void main(String[] args) { for(int i = 0; i < tests.length; i++) System.out.println(tests[i].runTest()); } } I more or less see what's going on; my output looks like this: Stream Write: 653 Mapped Write: 51 Stream Read: 651 Mapped Read: 40 Stream Read/Write: 14481 Mapped Read/Write: 6 What is making the Stream Read/Write so unbelievably long? And as a read/write test, it looks a bit pointless to me to read the same integer over and over (if I understand correctly what's going on in the Stream Read/Write). Wouldn't it be better to read ints from the previously written file and just read and write ints in the same place? Is there a better way to illustrate it? I've been breaking my head over a lot of these things for a while and I just can't get the whole picture.

    Read the article

  • How to tell when a Socket has been disconnected

    - by BowserKingKoopa
    On the client side I need to know when/if my socket connection has been broken. However the Socket.Connected property always returns true, even after the server side has been disconnected and I've tried sending data through it. Can anyone help me figure out what's going on here. I need to know when a socket has been disconnected. Socket serverSocket = null; TcpListener listener = new TcpListener(1530); listener.Start(); listener.BeginAcceptSocket(new AsyncCallback(delegate(IAsyncResult result) { Debug.WriteLine("ACCEPTING SOCKET CONNECTION"); TcpListener currentListener = (TcpListener)result.AsyncState; serverSocket = currentListener.EndAcceptSocket(result); }), listener); Socket clientSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); Debug.WriteLine("client socket connected: " + clientSocket.Connected);//should be FALSE, and it is clientSocket.Connect("localhost", 1530); Debug.WriteLine("client socket connected: " + clientSocket.Connected);//should be TRUE, and it is Thread.Sleep(1000); serverSocket.Close();//closing the server socket here Thread.Sleep(1000); clientSocket.Send(new byte[0]);//sending data should cause the socket to update its Connected property. Debug.WriteLine("client socket connected: " + clientSocket.Connected);//should be FALSE, but its always TRUE
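    For reference, here is the underlying rule in BSD-socket terms (which .NET's Socket wraps): the OS only notices the peer is gone when a read observes end-of-file or a write actually transmits data and fails, and sending zero bytes transmits nothing, which is why Connected never flips to false in the snippet above. In .NET a common idiom is Poll(0, SelectMode.SelectRead) combined with Available == 0; the sketch below shows the equivalent check with a peeking recv():

        #include <cerrno>
        #include <sys/socket.h>

        // Returns true if the peer has performed an orderly shutdown or the
        // connection is otherwise broken; does not consume any pending data.
        bool peer_disconnected(int fd) {
            char probe;
            ssize_t n = recv(fd, &probe, 1, MSG_PEEK | MSG_DONTWAIT);
            if (n == 0) return true;                            // orderly close by the peer
            if (n > 0)  return false;                           // data waiting: still connected
            return errno != EAGAIN && errno != EWOULDBLOCK;     // anything else is a real error
        }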

    Read the article

  • Functioning Socket read no longer works when called in AsyncTask

    - by bibismcbryde
    I'm making an app that sends a string to a server over a socket and then reads the output after the server has processed that data. It worked perfectly when it was my foreground task, but I have since used AsyncTask to show a process dialog while the socket communication runs in the background, and things start breaking after I read the output from the server and then try to close the socket. private class Progressor extends AsyncTask<String, Void, Void> { ProgressDialog dialog; protected void onPreExecute() { dialog = ProgressDialog.show(ClearTalkInputActivity.this, "Loading..", "Analyzing Text", true, false); } protected Void doInBackground(String... strings) { String language = strings[0].toLowerCase(); String the_text = strings[1]; Socket socket = null; DataOutputStream dos = null; DataInputStream dis = null; try { socket = new Socket(my_ip, port); dos = new DataOutputStream(socket.getOutputStream()); dis = new DataInputStream(socket.getInputStream()); dos.writeUTF(language+"****"+the_text); String in = ""; while (in.indexOf("</content>") < 0) { in += dis.readUTF(); } socket.close(); save_str(OUTPUT_KEY, in); } catch (UnknownHostException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } finally { if (socket != null) { try { socket.close(); } catch (IOException e) { e.printStackTrace(); } } } if (dos != null) { try { dos.close(); } catch (IOException e) { e.printStackTrace(); } } if (dis != null) { try { dis.close(); } catch (IOException e) { e.printStackTrace(); } } return null; } protected void onPostExecute() { if (dialog.isShowing()) dialog.dismiss(); startActivity(new Intent (output_intent)); } }

    Read the article

  • Does Mac OS X throttle the RATE of socket creation?

    - by pbhogan
    This may seem programming related, but this is an OS question. I'm writing a small high performance daemon that takes thousands of connections per second. It's working fine on Linux (specifically Ubuntu 9.10 on EC2). On Mac OS X, if I throw a few thousand connections at it (roughly 16350) in a benchmark that simply opens a connection, does its thing and closes the connection, then the benchmark program hangs for several seconds waiting for a socket to become available before continuing (or timing out in the process). I used both Apache Bench and Siege (to make sure it wasn't the benchmark application). So why/how is Mac OS X limiting the RATE at which sockets can be used, and can I stop it from doing this? Or is there something else going on? I know there is a file descriptor limit, but I'm not hitting that. There is no error on accepting a socket; it simply hangs for a while after the first (roughly) 16000, waiting -- I assume -- for the OS to release a socket. This shouldn't happen since all the prior sockets are closed at that point. They're supposed to become available at the rate they're closed, and they do on Ubuntu, but there seems to be some kind of multi-second (5-10?) delay on Mac OS X. I tried tweaking with ulimit every which way. Nada.
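    A plausible explanation (stated as an assumption, not a diagnosis): each closed connection lingers in TIME_WAIT, so a benchmark that opens and closes ~16k connections quickly exhausts the client's ephemeral port range, and Mac OS X appears to recycle those ports more slowly than the Linux box. One benchmark-only workaround is an abortive close via SO_LINGER with a zero timeout, which sends an RST and skips TIME_WAIT entirely; this is fine for load testing but not something to ship in production code:

        #include <sys/socket.h>
        #include <unistd.h>

        // Close a benchmark connection without leaving it in TIME_WAIT.
        void close_without_time_wait(int fd) {
            linger lg{};
            lg.l_onoff  = 1;    // enable SO_LINGER...
            lg.l_linger = 0;    // ...with a zero timeout => abortive close (RST)
            setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
            close(fd);
        }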

    Read the article

  • Errors when connecting to HTTPS using HTTP::Net routines (Ruby on Rails)

    - by jaycode
    Hi all, the code below explains the problem in detail #this returns error Net::HTTPBadResponse url = URI.parse('https://sitename.com') response = Net::HTTP.start(url.host, url.port) {|http| http.get('/remote/register_device') } #this works url = URI.parse('http://sitename.com') response = Net::HTTP.start(url.host, url.port) {|http| http.get('/remote/register_device') } #this returns error Net::HTTPBadResponse response = Net::HTTP.post_form(URI.parse('https://sitename.com/remote/register_device'), {:text => 'hello world'}) #this returns error Errno::ECONNRESET (Connection reset by peer) response = Net::HTTP.post_form(URI.parse('https://sandbox.itunes.apple.com/verifyReceipt'), {:text => 'hello world'}) #this works response = Net::HTTP.post_form(URI.parse('http://sitename.com/remote/register_device'), {:text => 'hello world'}) So... how do I send POST parameters to https://sitename.com or https://sandbox.itunes.apple.com/verifyReceipt in this example? Further information, I am trying to get this working in Rails: http://developer.apple.com/iphone/library/documentation/NetworkingInternet/Conceptual/StoreKitGuide/VerifyingStoreReceipts/VerifyingStoreReceipts.html#//apple_ref/doc/uid/TP40008267-CH104-SW1

    Read the article

  • Getting "Illegal Seek" error after calling accept()

    - by Bilthon
    Well, it's pretty much that: I seem to be getting an "Illegal Seek" error when checking my errno variable. The problem is that I have no idea what that can mean. I know sockets are treated like files in Unix, but I can't see how this can be related to sockets. What I'm doing exactly is: int sck = ::accept(m_socket, (struct sockaddr*)&client_address, (socklen_t*)&address_len); Then I get sck = -1 and errno = ESPIPE. And the weird thing is that it happens randomly; I mean, sometimes the code works fine, and sometimes it just throws an exception. I'm working with threads, so that's understandable. But I would just like to know what kind of behaviour makes the accept() call set errno to ESPIPE, so I could check the parameters, for instance. Thanks Nelson R. Pérez
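    A hedged observation rather than a definitive answer: ESPIPE is not among the errors accept() is documented to produce, so the value being read is most likely stale -- errno is only meaningful immediately after a call that has actually reported failure, and it is easily overwritten by an intervening library call (or read on a success path, which can happen in threaded code). A sketch of an accept loop that captures it at the right moment, also making sure the address length is initialised before the call; the helper name is illustrative:

        #include <cerrno>
        #include <cstdio>
        #include <cstring>
        #include <sys/socket.h>

        int accept_client(int listen_fd, sockaddr_storage* peer) {
            socklen_t len = sizeof(*peer);          // must hold the buffer size on entry
            for (;;) {
                int fd = ::accept(listen_fd, reinterpret_cast<sockaddr*>(peer), &len);
                if (fd >= 0) return fd;             // success: errno is meaningless here
                int err = errno;                    // capture immediately after the failure
                if (err == EINTR) continue;         // interrupted by a signal: retry
                std::fprintf(stderr, "accept: %s\n", std::strerror(err));
                return -1;
            }
        }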

    Read the article

  • recvfrom returns invalid argument when *from* is passed

    - by Aditya Sehgal
    I am currently writing a small UDP server program in Linux. The UDP server will receive packets from two different peers and will perform different operations based on which peer the packet came from. I am trying to determine the source from which I receive the packet. However, when select returns and recvfrom is called, it fails with an "Invalid Argument" error. If I pass NULL as the second-to-last argument, recvfrom succeeds. I have tried declaring fromAddr as struct sockaddr_storage, struct sockaddr_in, and struct sockaddr, without any success. Is there something wrong with this code? Is this the correct way to determine the source of the packet? The code snippet follows. /*TODO : update for TCP. use recv */ if((pkInfo->rcvLen=recvfrom(psInfo->sockFd, pkInfo->buffer, MAX_PKTSZ, 0, /* (struct sockaddr*)&fromAddr,*/ NULL, &(addrLen) )) < 0) { perror("RecvFrom failed\n"); } else { /*Apply Filter */ #if 0 struct sockaddr_in* tmpAddr; tmpAddr = (struct sockaddr_in* )&fromAddr; printf("Received Msg From %s\n",inet_ntoa(tmpAddr->sin_addr)); #endif printf("Packet Received of len = %d\n",pkInfo->rcvLen); }
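    The most common cause of EINVAL here, shown as a sketch rather than a certainty: recvfrom() requires *addrlen to be initialised to the size of the address buffer before the call; an uninitialised or zero value is rejected, while passing NULL for the address makes the length argument irrelevant, which matches the behaviour described above. The helper name is illustrative:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <cstddef>
        #include <cstdio>

        ssize_t recv_packet(int fd, char* buf, size_t cap) {
            sockaddr_storage fromAddr;
            socklen_t addrLen = sizeof(fromAddr);               // <-- must be set before the call
            ssize_t n = recvfrom(fd, buf, cap, 0,
                                 reinterpret_cast<sockaddr*>(&fromAddr), &addrLen);
            if (n >= 0 && fromAddr.ss_family == AF_INET) {
                char ip[INET_ADDRSTRLEN];
                const sockaddr_in* v4 = reinterpret_cast<const sockaddr_in*>(&fromAddr);
                inet_ntop(AF_INET, &v4->sin_addr, ip, sizeof(ip));
                std::printf("packet of %zd bytes from %s\n", n, ip);   // identify the peer
            }
            return n;
        }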

    Read the article

  • How to handle server-client requests

    - by Layne
    Currently I'm working on a server-client system which will be the backbone of my application. I have to find the best way to send requests and handle them on the server side. The server side should be able to handle requests like this one: getPortfolio -i 2 -d all In an old project I decided to send such a request as a string, and the server application had to look up the first part of the string ("getPortfolio"). Afterwards the server application had to find the correct method in a map which linked the methods with the first part of the string ("getPortfolio"). The second part ("-i 2 -d all") was passed as a parameter and the method itself had to handle this string/parameter. I doubt that this is the best solution for handling many different requests. Rgds Layne
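    A minimal sketch of one common refinement of that design (the command name, option layout, and handler signature below are illustrative only): tokenize the request once, then look the command up in a table of handlers so each handler receives already-split arguments instead of a raw tail string:

        #include <functional>
        #include <map>
        #include <sstream>
        #include <string>
        #include <vector>

        using Handler = std::function<std::string(const std::vector<std::string>&)>;

        std::string dispatch(const std::map<std::string, Handler>& handlers,
                             const std::string& request) {
            std::istringstream in(request);
            std::string command;
            in >> command;                                   // e.g. "getPortfolio"
            std::vector<std::string> args;                   // e.g. {"-i", "2", "-d", "all"}
            for (std::string tok; in >> tok; ) args.push_back(tok);

            auto it = handlers.find(command);
            if (it == handlers.end()) return "ERR unknown command";
            return it->second(args);                         // handler parses its own options
        }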

    Read the article

  • Does my basic PHP Socket Server need optimization?

    - by Tom
    Like many people, I can do a lot of things with PHP. One problem I do face constantly is that other people can do it much cleaner, much more organized and much more structured. This also results in much faster execution times and much less bugs. I just finished writing a basic PHP Socket Server (the real core), and am asking you if you can tell me what I should do different before I start expanding the core. I'm not asking about improvements such as encrypted data, authentication or multi-threading. I'm more wondering about questions like "should I maybe do it in a more object oriented way (using PHP5)?", or "is the general structure of the way the script works good, or should some things be done different?". Basically, "is this how the core of a socket server should work?" In fact, I think that if I just show you the code here many of you will immediately see room for improvements. Please be so kind to tell me. Thanks! #!/usr/bin/php -q <? // config $timelimit = 180; // amount of seconds the server should run for, 0 = run indefintely $address = $_SERVER['SERVER_ADDR']; // the server's external IP $port = 9000; // the port to listen on $backlog = SOMAXCONN; // the maximum of backlog incoming connections that will be queued for processing // configure custom PHP settings error_reporting(1); // report all errors ini_set('display_errors', 1); // display all errors set_time_limit($timelimit); // timeout after x seconds ob_implicit_flush(); // results in a flush operation after every output call //create master IPv4 based TCP socket if (!($master = socket_create(AF_INET, SOCK_STREAM, SOL_TCP))) die("Could not create master socket, error: ".socket_strerror(socket_last_error())); // set socket options (local addresses can be reused) if (!socket_set_option($master, SOL_SOCKET, SO_REUSEADDR, 1)) die("Could not set socket options, error: ".socket_strerror(socket_last_error())); // bind to socket server if (!socket_bind($master, $address, $port)) die("Could not bind to socket server, error: ".socket_strerror(socket_last_error())); // start listening if (!socket_listen($master, $backlog)) die("Could not start listening to socket, error: ".socket_strerror(socket_last_error())); //display startup information echo "[".date('Y-m-d H:i:s')."] SERVER CREATED (MAXCONN: ".SOMAXCONN.").\n"; //max connections is a kernel variable and can be adjusted with sysctl echo "[".date('Y-m-d H:i:s')."] Listening on ".$address.":".$port.".\n"; $time = time(); //set startup timestamp // init read sockets array $read_sockets = array($master); // continuously handle incoming socket messages, or close if time limit has been reached while ((!$timelimit) or (time() - $time < $timelimit)) { $changed_sockets = $read_sockets; socket_select($changed_sockets, $write = null, $except = null, null); foreach($changed_sockets as $socket) { if ($socket == $master) { if (($client = socket_accept($master)) < 0) { echo "[".date('Y-m-d H:i:s')."] Socket_accept() failed, error: ".socket_strerror(socket_last_error())."\n"; continue; } else { array_push($read_sockets, $client); echo "[".date('Y-m-d H:i:s')."] Client #".count($read_sockets)." connected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n"; } } else { $data = @socket_read($socket, 1024, PHP_NORMAL_READ); //read a maximum of 1024 bytes until a new line has been sent if ($data === false) { //the client disconnected $index = array_search($socket, $read_sockets); unset($read_sockets[$index]); socket_close($socket); echo "[".date('Y-m-d H:i:s')."] Client #".($index-1)." 
disconnected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n"; } else { if ($data = trim($data)) { //remove whitespace and continue only if the message is not empty switch ($data) { case "exit": //close connection when exit command is given $index = array_search($socket, $read_sockets); unset($read_sockets[$index]); socket_close($socket); echo "[".date('Y-m-d H:i:s')."] Client #".($index-1)." disconnected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n"; break; default: //for experimental purposes, write the given data back socket_write($socket, "\n you wrote: ".$data); } } } } } } socket_close($master); //close the socket echo "[".date('Y-m-d H:i:s')."] SERVER CLOSED.\n"; ?>

    Read the article

  • During npm install socket.io I get error 127, node-waf command not found. How to solve it?

    - by SandyWeb
    I'm trying to install socket.io on centos 5 with node.js package manager. During installation I got an error: "make: node-waf: Command not found" and "This is most likely a problem with the ws package" # npm install socket.io npm http GET https://registry.npmjs.org/socket.io npm http 304 https://registry.npmjs.org/socket.io npm http GET https://registry.npmjs.org/policyfile/0.0.4 npm http GET https://registry.npmjs.org/redis/0.6.7 npm http GET https://registry.npmjs.org/socket.io-client/0.9.2 npm http 304 https://registry.npmjs.org/policyfile/0.0.4 npm http 304 https://registry.npmjs.org/socket.io-client/0.9.2 npm http 304 https://registry.npmjs.org/redis/0.6.7 npm http GET https://registry.npmjs.org/uglify-js/1.2.5 npm http GET https://registry.npmjs.org/ws npm http GET https://registry.npmjs.org/xmlhttprequest/1.2.2 npm http GET https://registry.npmjs.org/active-x-obfuscator/0.0.1 npm http 304 https://registry.npmjs.org/xmlhttprequest/1.2.2 npm http 304 https://registry.npmjs.org/uglify-js/1.2.5 npm http 304 https://registry.npmjs.org/ws npm http 304 https://registry.npmjs.org/active-x-obfuscator/0.0.1 npm http GET https://registry.npmjs.org/zeparser/0.0.5 > [email protected] preinstall /root/node_modules/socket.io/node_modules/socket.io-client/node_modules/ws > make **node-waf configure build make: node-waf: Command not found make: *** [all] Error 127** npm ERR! [email protected] preinstall: `make` npm ERR! `sh "-c" "make"` failed with 2 npm ERR! npm ERR! Failed at the [email protected] preinstall script. npm ERR! This is most likely a problem with the ws package, npm ERR! not with npm itself. npm ERR! Tell the author that this fails on your system: npm ERR! make npm ERR! You can get their info via: npm ERR! npm owner ls ws npm ERR! There is likely additional logging output above. npm ERR! npm ERR! System Linux 2.6.18-194.17.4.el5 npm ERR! command "node" "/usr/bin/npm" "install" "socket.io" npm ERR! cwd /root npm ERR! node -v v0.6.13 npm ERR! npm -v 1.1.10 npm ERR! code ELIFECYCLE npm ERR! message [email protected] preinstall: `make` npm ERR! message `sh "-c" "make"` failed with 2 npm ERR! errno {} npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /root/npm-debug.log npm not ok What is "node-waf" and how can I solve this problem? Thanks!

    Read the article

  • Can't overload Python socket.send

    - by ralu
    Code from socket import socket class PolySocket(socket): def __init__(self,*p): print "PolySocket init" socket.__init__(self,*p) def sendall(self,*p): print "PolySocket sendall" return socket.sendall(self,*p) def send(self,*p): print "PolySocket send" return socket.send(self,*p) def connect(self,*p): print "connecting..." socket.connect(self,*p) print "connected" HOST="stackoverflow.com" PORT=80 readbuffer="" s=PolySocket() s.connect((HOST, PORT)) s.send("a") s.sendall("a") Output: PolySocket init connecting... connected PolySocket sendall As we can see, send method is not overloaded.

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms, root domains.  Now, let's have a short look at the next level of granularity: Virtualizing individual PCIe slots.  In the LDoms terminology, this feature is called "Direct IO" or DIO.  It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain.  Let's look again at hardware available to mars in the original configuration: root@sun:~# ldm ls-io NAME TYPE BUS DOMAIN STATUS ---- ---- --- ------ ------ pci_0 BUS pci_0 primary pci_1 BUS pci_1 primary pci_2 BUS pci_2 primary pci_3 BUS pci_3 primary /SYS/MB/PCIE1 PCIE pci_0 primary EMP /SYS/MB/SASHBA0 PCIE pci_0 primary OCC /SYS/MB/NET0 PCIE pci_0 primary OCC /SYS/MB/PCIE5 PCIE pci_1 primary EMP /SYS/MB/PCIE6 PCIE pci_1 primary EMP /SYS/MB/PCIE7 PCIE pci_1 primary EMP /SYS/MB/PCIE2 PCIE pci_2 primary EMP /SYS/MB/PCIE3 PCIE pci_2 primary OCC /SYS/MB/PCIE4 PCIE pci_2 primary EMP /SYS/MB/PCIE8 PCIE pci_3 primary EMP /SYS/MB/SASHBA1 PCIE pci_3 primary OCC /SYS/MB/NET2 PCIE pci_3 primary OCC /SYS/MB/NET0/IOVNET.PF0 PF pci_0 primary /SYS/MB/NET0/IOVNET.PF1 PF pci_0 primary /SYS/MB/NET2/IOVNET.PF0 PF pci_3 primary /SYS/MB/NET2/IOVNET.PF1 PF pci_3 primary All of the "PCIE" type devices are available for SDIO, with a few limitations.  If the device is a slot, the card in that slot must support the DIO feature.  The documentation lists all such cards.  Moving a slot to a different domain works just like moving a PCI root complex.  Again, this is not a dynamic process and includes reboots of the affected domains.  The resulting configuration is nicely shown in a diagram in the Admin Guide: There are several important things to note and consider here: The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware. Solaris will create nodes for this hardware under /devices.  This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device.  It is very important to understand that all of this PCIe infrastructure is virtual only!  Only the actual endpoint devices are true physical hardware. There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: Only if the root domain is up and running, will the guest domain have access to the endpoint device. The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure. This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain) it will reset and thus disrupt the operation of the endpoint device owned by the guest domain.  The result in the guest is not predictable.  I recommend to configure the resulting behaviour of the guest using domain dependencies as described in the Admin Guide in Chapter "Configuring Domain Dependencies". Please consult the Admin Guide in Section "Creating an I/O Domain by Assigning PCIe Endpoint Devices" for all the details! As you can see, there are several restrictions for this feature.  It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices.  
    Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need to use this feature is declining. I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain and because it can be replaced with SR-IOV (although then with similar limitations). This was a rather short entry, more for completeness. I believe that DIO can usually be replaced by SR-IOV, which is much more flexible. I will cover SR-IOV in the next section of this blog series.

    Read the article

  • File io error Python

    - by serpiente
    I have a program that monitors a folder with word documents for any modifications made on the files. The error -Windows Error[2] The system cannot find the file specified- comes when I run the program, open a .doc within the folder make some changes and save it. Any suggestions on how to fix this? Here's the code: def archivar(): txt = open('archivo.txt', 'r+' ) for rootdir, dirs, files in os.walk(r"C:\Users\keinsfield\Desktop\colegio"): for file in files: time = os.stat(os.path.join(rootdir, file)).st_ctime txt.write(file +','+str(time) + '\n') def check(): txt = [col.split(',') for col in (open('archivo.txt', 'r+').read().split('\n'))] files = os.listdir(r"C:\Users\keinsfield\Desktop\colegio") for file in files: for info in txt: if info[0]==os.stat(os.path.join(r"C:\Users\keinsfield\Desktop\colegio",file)).st_ctime: print "modified"

    Read the article

  • C# Multithreading File IO (Reading)

    - by washtik
    We have a situation where our application needs to process a series of files, and rather than performing this work synchronously, we would like to employ multi-threading to have the workload split amongst different threads. Each item of work is: 1. Open a file for read only 2. Process the data in the file 3. Write the processed data to a Dictionary We would like to perform each file's work on a new thread. Is this possible, and would we be better off using the ThreadPool or spawning new threads, keeping in mind that each item of "work" only takes 30ms but hundreds of files may need to be processed? Any ideas to make this more efficient are appreciated.
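    A sketch of the pooled approach in C++ terms (the question is C#, where the natural analogue is the ThreadPool or Task.Run rather than one Thread per file): a fixed number of workers, sized to the CPU count, pull file names from a shared index and merge results into a lock-protected map, so hundreds of 30 ms jobs never create hundreds of short-lived threads. The processing step is a placeholder:

        #include <algorithm>
        #include <atomic>
        #include <map>
        #include <mutex>
        #include <string>
        #include <thread>
        #include <vector>

        std::map<std::string, std::string> process_all(const std::vector<std::string>& files) {
            std::map<std::string, std::string> results;
            std::mutex results_mutex;
            std::atomic<size_t> next{0};

            auto worker = [&] {
                for (size_t i; (i = next.fetch_add(1)) < files.size(); ) {
                    std::string value = files[i];                // placeholder for "open + process"
                    std::lock_guard<std::mutex> lock(results_mutex);
                    results[files[i]] = value;                   // only the merge needs the lock
                }
            };

            std::vector<std::thread> pool;
            unsigned n = std::max(1u, std::thread::hardware_concurrency());
            for (unsigned t = 0; t < n; ++t) pool.emplace_back(worker);
            for (auto& t : pool) t.join();
            return results;
        }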

    Read the article

  • GDI+ removable device IO safety problem

    - by sxingfeng
    I am using GDI+ for image format checking. It is really surprising that gdi+ Image img(path); does not throw an exception when a device is removed. For example, I am checking a list of image files on a removable device; if I unplug the disk, my application crashes. How can I avoid this problem? I am using C++, GDI+ and Windows. Many thanks!
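    A defensive sketch, assuming Windows with GDI+ already initialised via GdiplusStartup: GDI+ reports load failures through GetLastStatus() rather than C++ exceptions, so that has to be checked explicitly, and loading the image from an in-memory IStream (after reading the file fully into RAM) removes the dependency on the removable device staying present while the decoder works. The function name is illustrative:

        #include <windows.h>
        #include <gdiplus.h>
        #include <shlwapi.h>     // SHCreateMemStream
        #include <fstream>
        #include <iterator>
        #include <string>
        #include <vector>
        #pragma comment(lib, "gdiplus.lib")
        #pragma comment(lib, "shlwapi.lib")

        bool check_image_format(const std::wstring& path) {
            // Read the whole file into memory first; if the device vanishes, this
            // read fails cleanly instead of faulting inside the GDI+ decoder later.
            std::ifstream file(path.c_str(), std::ios::binary);
            std::vector<char> bytes((std::istreambuf_iterator<char>(file)),
                                    std::istreambuf_iterator<char>());
            if (!file || bytes.empty()) return false;

            IStream* stream = SHCreateMemStream(
                reinterpret_cast<const BYTE*>(bytes.data()),
                static_cast<UINT>(bytes.size()));
            if (!stream) return false;

            Gdiplus::Image img(stream);
            bool ok = (img.GetLastStatus() == Gdiplus::Ok);     // no exceptions: check status
            stream->Release();
            return ok;
        }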

    Read the article
