Search Results

Search found 7065 results on 283 pages for 'cpu sockets'.

  • Visual Studio "Any CPU" target

    - by galets
    I have some confusion about the .NET platform build options in VS 2008. Does anyone have a clear understanding of what the "Any CPU" compilation target is and what sort of files it generates? I examined the output executables of an "Any CPU" build and found that they are (who would not see that coming!) x86 executables. So, is there any difference between targeting an executable at x86 vs. "Any CPU"? Another thing I noticed is that managed C++ projects do not have this platform as an option. I'm wondering why that is. Does that mean my suspicion that "Any CPU" executables are plain 32-bit ones is right?
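
    A quick way to see what an "Any CPU" build actually does is to check the pointer size of the running process on a 32-bit and on a 64-bit machine. A minimal C# sketch (the class name and console messages are just illustrative):

    using System;

    class BitnessCheck
    {
        static void Main()
        {
            // IntPtr.Size is 4 when the process runs as 32-bit (x86)
            // and 8 when the same "Any CPU" assembly is loaded as a 64-bit process.
            Console.WriteLine("Pointer size: {0} bytes", IntPtr.Size);
            Console.WriteLine(IntPtr.Size == 8
                ? "Running as a 64-bit process"
                : "Running as a 32-bit process");
        }
    }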

  • Video capture Performance

    - by volting
    I have noticed high CPU utilization in a number of applications (except mplayer) that read from the embedded webcam on my laptop. Bizarrely, CPU utilization varies in proportion to the level of illumination present. I know that the high CPU usage has nothing to do with rendering the video, as I have written a simple app using the OpenCV library to simply grab frames from the webcam, and CPU usage is still high. I think that mplayer might be using my GPU (and the other apps aren't), but since it's not an issue with rendering, I don't think this explains anything. Cheese: low light ~12% CPU, bright light ~63% CPU. Camorama: low light ~7% CPU, bright light ~30% CPU. OpenCV C++ library (display in a single highgui window): low light ~13% CPU, bright light ~40% CPU (same test on Windows 7: 4-9%). Mplayer: no problem, 1-2% regardless of light levels. Note: if all I wanted to do was capture a feed from my webcam I would use mplayer and forget about it, but I'm developing an application which uses OpenCV to capture a video feed among other things, so performance is important.

  • Measuring daemon CPU utilization over a portion of its wall-clock run time

    - by WhirlWind
    I am dealing with a network-related daemon: it takes data in, processes it, and spits it out. I would like to increase the performance of this daemon by profiling it and reducing its CPU utilization. I can do this easily on Linux with gprof. However, I would also like to use something like "time" to measure its total CPU utilization over a period of time. If possible, I would like to time it over a period that is less than its total run time: thus, I would like to start the daemon, wait a while, start generating CPU statistics, stop generating them, then stop the daemon at some later time. The "time" command would work well for me, but it seems to require that I start and stop the daemon as a child of time. Is there a way to measure CPU utilization for only a portion of the daemon's wall-clock time?
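
    For what it's worth, the sampling idea behind such a measurement can be sketched in a few lines: record the process's accumulated CPU time at the start and end of the window and divide the difference by the elapsed wall-clock time. The sketch below is C# (the language used for the other examples on this page), not the Linux tool the question asks about; the PID argument and the 30-second window are illustrative assumptions, and on Linux the same counters come from /proc/<pid>/stat.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CpuWindowSample
    {
        static void Main(string[] args)
        {
            // Attach to an already-running process by PID (passed on the command line).
            Process proc = Process.GetProcessById(int.Parse(args[0]));

            TimeSpan cpuStart = proc.TotalProcessorTime;
            DateTime wallStart = DateTime.UtcNow;

            Thread.Sleep(TimeSpan.FromSeconds(30));   // the measurement window

            proc.Refresh();
            double cpuSeconds = (proc.TotalProcessorTime - cpuStart).TotalSeconds;
            double wallSeconds = (DateTime.UtcNow - wallStart).TotalSeconds;

            Console.WriteLine("CPU used in window: {0:F2}s ({1:P1} of one core)",
                cpuSeconds, cpuSeconds / wallSeconds);
        }
    }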

  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native Winsock and I use a glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom. My problem is: this setup eats 25% CPU time on a 3GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops down to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%. Here are some code excerpts (sorry for the printf's ;) ):

    /* bind */
    void UDPInterface::bindToPort(unsigned short port)
    {
        struct sockaddr_in target;
        WSADATA wsaData;
        target.sin_family = AF_INET;
        target.sin_port = htons(port);
        target.sin_addr.s_addr = 0;
        if ( WSAStartup ( 0x0202, &wsaData ) )
        {
            printf("WSAStartup failed!\n");
            exit(0); // :)
            WSACleanup();
        }
        sock = socket( AF_INET, SOCK_DGRAM, 0 );
        if (sock == INVALID_SOCKET)
        {
            printf("invalid socket!\n");
            exit(0);
        }
        if (bind(sock, (struct sockaddr*) &target, sizeof(struct sockaddr_in)) == SOCKET_ERROR)
        {
            printf("failed to bind to port!\n");
            exit(0);
        }
        printf("[UDPInterface::bindToPort] listening on port %i\n", port);
    }

    /* read */
    bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
    {
        recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
        /* process packet... */
    }

    /* glibmm connect */
    Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
    Glib::signal_io().connect( sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN );

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU effect? It's not clear to me. Whether I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since the CPU is used up before any packet arrives and the read handler gets invoked. Also, the glibmm docs state that it's OK to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()). I've tried compiling the program with -pg and created a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.

  • Socket.ReceiveAsync problem

    - by bartol
    Hi, I have a problem using the SocketAsyncEventArgs model with .NET sockets. Everything works great until the moment the server wishes to close a client connection. I use the following code for this:

    try
    {
        socket.Shutdown(SocketShutdown.Both);
    }
    catch { } // throws if client process has already closed
    finally
    {
        socket.Close();
    }
    socket = null;

    Each connection uses two SocketAsyncEventArgs (one for send and one for receive), and after closing the connection they are returned to a pool from which they can later be reused. And here the problem starts, because when another connection is established and the receive args are reused from the pool we get an exception:

    System.InvalidOperationException: "An asynchronous socket operation is already in progress using this SocketAsyncEventArgs instance."
       at System.Net.Sockets.SocketAsyncEventArgs.StartOperationCommon(Socket socket)
       at System.Net.Sockets.Socket.ReceiveAsync(SocketAsyncEventArgs e)

    I've done some debugging and it appears that the connection-closing code from the beginning of the question does not cancel the Socket.ReceiveAsync operation that is in progress when the connection is closed. I've tried many combinations of Shutdown, Disconnect and Linger options for the socket but nothing worked. Any suggestions? Thanks
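
    Not a definitive answer, but one pattern that avoids this particular exception is to return a SocketAsyncEventArgs to the pool only after its pending operation has actually completed (for example, from its Completed handler, or immediately when ReceiveAsync returns false), and to strip per-connection state before it is handed out again. A minimal sketch of such a pool; the class and method names are hypothetical:

    using System.Collections.Generic;
    using System.Net.Sockets;

    // Hypothetical pool: items must only be returned once no operation is pending on them.
    class SocketArgsPool
    {
        readonly Stack<SocketAsyncEventArgs> _pool = new Stack<SocketAsyncEventArgs>();
        readonly object _sync = new object();

        public SocketAsyncEventArgs Rent()
        {
            lock (_sync)
            {
                if (_pool.Count > 0)
                    return _pool.Pop();
            }
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[8192], 0, 8192);
            return args;
        }

        // Call this only once you know no receive/send is still in flight on 'args',
        // e.g. from its Completed callback after the connection has been torn down.
        public void Return(SocketAsyncEventArgs args)
        {
            args.AcceptSocket = null;
            args.UserToken = null;
            lock (_sync)
            {
                _pool.Push(args);
            }
        }
    }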

  • Web Sockets: Browser won't receive the message, complains about it not starting with 0x00 (byte)

    - by giggsey
    Here is my code: import java.net.*; import java.io.*; import java.util.*; import org.jibble.pircbot.*; public class WebSocket { public static int port = 12345; public static ArrayList<WebSocketClient> clients = new ArrayList<WebSocketClient>(); public static ArrayList<Boolean> handshakes = new ArrayList<Boolean>(); public static ArrayList<String> nicknames = new ArrayList<String>(); public static ArrayList<String> channels = new ArrayList<String>(); public static int indexNum; public static void main(String args[]) { try { ServerSocket ss = new ServerSocket(WebSocket.port); WebSocket.console("Created socket on port " + WebSocket.port); while (true) { Socket s = ss.accept(); WebSocket.console("New Client connecting..."); WebSocket.handshakes.add(WebSocket.indexNum,false); WebSocket.nicknames.add(WebSocket.indexNum,""); WebSocket.channels.add(WebSocket.indexNum,""); WebSocketClient p = new WebSocketClient(s,WebSocket.indexNum); Thread t = new Thread( p); WebSocket.clients.add(WebSocket.indexNum,p); indexNum++; t.start(); } } catch (Exception e) { WebSocket.console("ERROR - " + e.toString()); } } public static void console(String msg) { Date date = new Date(); System.out.println("[" + date.toString() + "] " + msg); } } class WebSocketClient implements Runnable { private Socket s; private int iAm; private String socket_res = ""; private String socket_host = ""; private String socket_origin = ""; protected String nick = ""; protected String ircChan = ""; WebSocketClient(Socket socket, int mynum) { s = socket; iAm = mynum; } public void run() { String client = s.getInetAddress().toString(); WebSocket.console("Connection from " + client); IRCclient irc = new IRCclient(iAm); Thread t = new Thread( irc ); try { Scanner in = new Scanner(s.getInputStream()); PrintWriter out = new PrintWriter(s.getOutputStream(),true); while (true) { if (! in.hasNextLine()) continue; String input = in.nextLine().trim(); if (input.isEmpty()) continue; // Lets work out what's wrong with our input if (input.length() > 3 && input.charAt(0) == 65533) { input = input.substring(2); } WebSocket.console("< " + input); // Lets work out if they authenticate... if (WebSocket.handshakes.get(iAm) == false) { checkForHandShake(input); continue; } // Lets check for NICK: if (input.length() > 6 && input.substring(0,6).equals("NICK: ")) { nick = input.substring(6); Random generator = new Random(); int rand = generator.nextInt(); WebSocket.console("I am known as " + nick); WebSocket.nicknames.set(iAm, "bo-" + nick + rand); } if (input.length() > 9 && input.substring(0,9).equals("CHANNEL: ")) { ircChan = "bo-" + input.substring(9); WebSocket.console("We will be joining " + ircChan); WebSocket.channels.set(iAm, ircChan); } if (! ircChan.isEmpty() && ! nick.isEmpty() && irc.started == false) { irc.chan = ircChan; irc.nick = WebSocket.nicknames.get(iAm); t.start(); continue; } else { irc.msg(input); } } } catch (Exception e) { WebSocket.console(e.toString()); e.printStackTrace(); } t.stop(); WebSocket.channels.remove(iAm); WebSocket.clients.remove(iAm); WebSocket.handshakes.remove(iAm); WebSocket.nicknames.remove(iAm); WebSocket.console("Closing connection from " + client); } private void checkForHandShake(String input) { // Check for HTML5 Socket getHeaders(input); if (! socket_res.isEmpty() && ! socket_host.isEmpty() && ! 
socket_origin.isEmpty()) { send("HTTP/1.1 101 Web Socket Protocol Handshake\r\n" + "Upgrade: WebSocket\r\n" + "Connection: Upgrade\r\n" + "WebSocket-Origin: " + socket_origin + "\r\n" + "WebSocket-Location: ws://" + socket_host + "/\r\n\r\n",false); WebSocket.handshakes.set(iAm,true); } return; } private void getHeaders(String input) { if (input.length() >= 8 && input.substring(0,8).equals("Origin: ")) { socket_origin = input.substring(8); return; } if (input.length() >= 6 && input.substring(0,6).equals("Host: ")) { socket_host = input.substring(6); return; } if (input.length() >= 7 && input.substring(0,7).equals("Cookie:")) { socket_res = "."; } /*input = input.substring(4); socket_res = input.substring(0,input.indexOf(" HTTP")); input = input.substring(input.indexOf("Host:") + 6); socket_host = input.substring(0,input.indexOf("\r\n")); input = input.substring(input.indexOf("Origin:") + 8); socket_origin = input.substring(0,input.indexOf("\r\n"));*/ return; } protected void send(String msg, boolean newline) { byte c0 = 0x00; byte c255 = (byte) 0xff; try { PrintWriter out = new PrintWriter(s.getOutputStream(),true); WebSocket.console("> " + msg); if (newline == true) msg = msg + "\n"; out.print(msg + c255); out.flush(); } catch (Exception e) { WebSocket.console(e.toString()); } } protected void send(String msg) { try { WebSocket.console(">> " + msg); byte[] message = msg.getBytes(); byte[] newmsg = new byte[message.length + 2]; newmsg[0] = (byte)0x00; for (int i = 1; i <= message.length; i++) { newmsg[i] = message[i - 1]; } newmsg[message.length + 1] = (byte)0xff; // This prints correctly..., apparently... System.out.println(Arrays.toString(newmsg)); OutputStream socketOutputStream = s.getOutputStream(); socketOutputStream.write(newmsg); } catch (Exception e) { WebSocket.console(e.toString()); } } protected void send(String msg, boolean one, boolean two) { try { WebSocket.console(">> " + msg); byte[] message = msg.getBytes(); byte[] newmsg = new byte[message.length+1]; for (int i = 0; i < message.length; i++) { newmsg[i] = message[i]; } newmsg[message.length] = (byte)0xff; // This prints correctly..., apparently... System.out.println(Arrays.toString(newmsg)); OutputStream socketOutputStream = s.getOutputStream(); socketOutputStream.write(newmsg); } catch (Exception e) { e.printStackTrace(); } } } class IRCclient implements Runnable { protected String nick; protected String chan; protected int iAm; boolean started = false; IRCUser irc; IRCclient(int me) { iAm = me; irc = new IRCUser(iAm); } public void run() { WebSocket.console("Connecting to IRC..."); started = true; irc.setNick(nick); irc.setVerbose(false); irc.connectToIRC(chan); } void msg(String input) { irc.sendMessage("#" + chan, input); } } class IRCUser extends PircBot { int iAm; IRCUser(int me) { iAm = me; } public void setNick(String nick) { this.setName(nick); } public void connectToIRC(String chan) { try { this.connect("irc.appliedirc.com"); this.joinChannel("#" + chan); } catch (Exception e) { WebSocket.console(e.toString()); } } public void onMessage(String channel, String sender,String login, String hostname, String message) { // Lets send this message to me WebSocket.clients.get(iAm).send(message); } } Whenever I try to send the message to the browser (via Web Sockets), it complains that it doesn't start with 0x00 (which is a byte). Any ideas? Edit 19/02 - Added the entire code. I know it's real messy and not neat, but I want to get it functioning first. Spend last two days trying to fix.
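
    For reference, the pre-RFC WebSocket drafts this code targets frame each text message as a single 0x00 byte, the UTF-8 payload, and a trailing 0xFF byte, which is exactly what the browser is checking for. A minimal sketch of that framing in C# (the language used for the other examples on this page; the stream parameter is a hypothetical network stream), for comparison with the Java send() methods above:

    using System.IO;
    using System.Text;

    static class DraftWebSocketFrame
    {
        // Early (pre-RFC 6455) WebSocket text frame: 0x00, UTF-8 payload, 0xFF.
        public static void Send(Stream stream, string message)
        {
            byte[] payload = Encoding.UTF8.GetBytes(message);
            stream.WriteByte(0x00);
            stream.Write(payload, 0, payload.Length);
            stream.WriteByte(0xFF);
            stream.Flush();
        }
    }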

  • C# TCP Async EndReceive() throws InvalidOperationException ONLY on Windows XP 32-bit

    - by James Farmer
    I have a simple C# async client using a .NET socket that waits for timed messages from a local Java server used for automating commands. The messages come in asynchronously and are written to a ring buffer. This implementation seems to work fine on Windows Vista/7/8 and OS X, but will randomly throw this exception while it's receiving a message from the local Java server:

    Unhandled Exception: System.InvalidOperationException: EndReceive can only be called once for each asynchronous operation.
       at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult, SocketError& errorCode)
       at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult)
       at SocketTest.Controller.RecvAsyncCallback(IAsyncResult ar)
       at System.Net.LazyAsyncResult.Complete(IntPtr userToken)
       ...

    I've looked online for this error, but have found nothing really helpful. This is the code where it seems to break:

    /// <summary>
    /// Callback to receive socket data
    /// </summary>
    /// <param name="ar">AsyncResult to pass to End</param>
    private void RecvAsyncCallback(IAsyncResult ar)
    {
        // The exception will randomly happen on this call
        int bytes = _socket.EndReceive(_recvAsyncResult);

        // check for connection closed
        if (bytes == 0)
        {
            return;
        }

        _ringBuffer.Write(_buffer, 0, bytes);

        // Checks buffer
        CheckBuffer();

        _recvAsyncResult = _sock.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, RecvAsyncCallback, null);
    }

    The error doesn't happen at any particular moment except in the middle of receiving a message. The message itself can be any length, and the exception can happen right away, or sometimes after even a minute of perfect communication. I'm pretty new to sockets and network communication, and I feel I might be missing something here. I've tested on at least 8 different computers, and the only similarity between the computers that throw this exception is that their OS is Windows XP 32-bit.
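
    Not presented as the author's fix, but one common cause of this exact exception is ending the operation through a shared field (_recvAsyncResult above) instead of the IAsyncResult passed to the callback: if a BeginReceive completes synchronously, the next callback can run before the field is reassigned and end up calling EndReceive on a result that was already ended. A minimal sketch of the usual EndReceive(ar) pattern, reusing the question's field names:

    private void RecvAsyncCallback(IAsyncResult ar)
    {
        // End the operation with the result handed to this callback,
        // not with a field that a later BeginReceive may already have overwritten.
        int bytes = _socket.EndReceive(ar);
        if (bytes == 0)
        {
            return; // connection closed
        }

        _ringBuffer.Write(_buffer, 0, bytes);
        CheckBuffer();

        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, RecvAsyncCallback, null);
    }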

  • Replace low level web-service reference call transport with custom one

    - by hoodoos
    I'm not sure the title sounds right, actually, so I will give more explanation here, beginning from the very beginning :) I'm using C# and .NET for my development. I have an application that makes requests to a SOAP web service, and for each user request it produces 3 to 10 requests to the web service; they should all run asynchronously so they finish at about the same time, so I use the Async method of the generated web-service reference and then wait for the result in a callback. But it seems like it starts a thread (or takes one from the pool) for every async call I make, so if I have 10 clients I end up spawning 30 to 100 threads, and that sounds terrible even for my 16-core server :) So I wanted to replace the low-level transport implementation with my own, which uses non-blocking sockets and can handle at least 50 sockets running in parallel in one thread without much overhead. But I actually don't know where best to put my override. I analyzed the System.Web.Services.Protocols.SoapHttpClientProtocol class and see that it has a GetWebRequest method which I could actually use. If only I could somehow intercept the object it creates and get an HTTP request with all headers and body from there, and then send it with my own sockets... Any ideas what approach to use? Or maybe there's something built into the framework I can use?
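
    For reference, GetWebRequest(Uri) is a protected virtual method, so a class derived from the generated proxy can at least see and adjust the outgoing request before the SOAP stack writes the body. This alone does not swap in a custom socket transport, but it shows where the hook sits; a minimal sketch with a hypothetical proxy class name (MyServiceProxy):

    using System;
    using System.Net;

    // MyServiceProxy stands in for whatever class Visual Studio generated for the web reference.
    public class InterceptingProxy : MyServiceProxy
    {
        protected override WebRequest GetWebRequest(Uri uri)
        {
            var request = (HttpWebRequest)base.GetWebRequest(uri);

            // Inspect or tweak the request here (headers, timeouts, connection limits)
            // before the SOAP body is written and the call goes out.
            request.ServicePoint.ConnectionLimit = 100;

            return request;
        }
    }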

  • Forward local port or socket file to remote socket file

    - by Ninefingers
    Hi all, quick question: I run two Linux boxes, one my own desktop and the other my VPS. For security reasons on the VPS end I opted for socket connections to MySQL (/var/run/mysqld/mysql.sock). I know I can tunnel like this: ssh -L 3307:127.0.0.1:3306 [email protected] if I set up the remote SQL server to listen on some port, but what I want to know is: can I do something like ssh -L /path/to/myremotesqlserver.sock:/var/run/mysqld/mysql.sock thereby tunnelling two sockets, as opposed to two ports? A perfectly acceptable solution would also be to forward a local port to the remote socket file, but where possible I'm trying not to have TCP servers running on the remote box (and yes, I know TCP would be easier). Thanks all, Nf.

  • What does %st mean in top?

    - by Ben
    Here is an example from my top output:

    Cpu(s): 6.0%us, 3.0%sy, 0.0%ni, 78.7%id, 0.0%wa, 0.0%hi, 0.3%si, 12.0%st

    I am trying to figure out the significance of the %st field. I read that it means "steal" CPU time and represents time spent by the hypervisor, but I want to know what that actually means to me. Does it mean I may be on a busy physical server, and someone else is using too much CPU on that server and taking it away from my VM? If I am using EBS, could it be related to handling EBS I/O at the hypervisor level? Is it related to things running on my VM, or is it completely unaffected by me?

  • Raw socket sendto() failure in OS X

    - by user37278
    When I open a raw socket in OS X, construct my own UDP packet (headers and data), and call sendto(), I get the error "Invalid Argument". Here is a sample program, "rawudp.c", from the web site http://www.tenouk.com/Module43a.html that demonstrates this problem. The program (after adding string and stdlib #includes) runs under Fedora 10 but fails with "Invalid Argument" under OS X. Can anyone suggest why this fails in OS X? I have looked and looked and looked at the sendto() call, but all the parameters look good. I'm running the code as root, etc. Is there perhaps a kernel setting that prevents even uid 0 executables from sending packets through raw sockets in OS X Snow Leopard? Thanks.

  • Where does power consumption go in a computer?

    - by Johannes Rössel
    Today we had a weird discussion over lunch: What exactly causes power consumption in a computer, particularly in the CPU? Figures you usually see indicate that only a percentage (albeit a large one) of the power consumption ends up in heat. However, what exactly does happen with the rest? A CPU isn't (anymore) a device that mechanically moves parts, emits light or uses other ways of transforming energy. Conservation of energy dictates that all energy going in has to go out somewhere and for something like a CPU I seriously can't imagine that output being anything but heat. Us being computer science instead of electrical engineering students certainly didn't help in accurately answering the question.

  • Using mod_wsgi with mpm_itk: socket permission issue

    - by djechelon
    I'm using mpm_itk as the MPM for increased security in a shared environment. I also have a Firefox Sync Server within one of the vhosts I host. That vhost is restricted to a certain user via the AssignUserId user/group directive. The problem is that the socket /var/run/wsgi...whatever.sock is chmodded srwx------ and owned by Apache's wwwrun. While I configured the vhost with

    WSGIProcessGroup sync
    WSGIDaemonProcess sync user=djechelon group=djechelon processes=1 threads=5

    I still get an error because Apache wants to access a socket that is not accessible to it. Is it possible to configure mod_wsgi to create different sockets with different owners for different applications, or to chmod its socket differently (less securely)? Currently, I'm running Firefox Sync as the only WSGI application. Moving it to a vhost that doesn't use AssignUserId could solve this problem but would force me to change the URL (and buy an additional SSL certificate), so I wouldn't consider this.

  • Understanding top output in Linux

    - by Rayne
    Hi, I'm trying to determine the CPU usage of a program by looking at the output from top in Linux. I understand that %us means user space and %sy means system/kernel, etc. But say I see 100%us. Does this mean that the CPU is really only doing useful work? What if a CPU is tied up waiting for resources that are not available, or on cache misses? Would that also show up in the %us column, or in any other column? Thank you.

  • How to set up memcached to use unix socket?

    - by alfish
    While I could use memcached on Debian with the default port 11211, I've had great difficulty setting up a unix socket. From what I've read, I know that I need to create a memcache.socket and add -s /path/to/memcache.socket -a 0766 to /etc/memcached.conf and comment out the default connection port and IP, i.e. -p 11211 -l 127.0.0.1. However, when I restart memcached I get internal server errors on the Drupal site. I'm trying to use unix sockets to avoid TCP/IP overhead and boost overall memcached performance, though I'm not sure how much performance gain one can expect from this tweak. I appreciate your hints, or possibly configs, to resolve this.

  • PC won't boot with more than one stick of RAM

    - by Aidan
    Hi guys, I've got the following computer and I've just put in a new CPU (a QX9650), and I've run into this problem since making that hardware change. Whenever I put more than one of my 4 sticks of RAM into the machine, it won't load an OS. It gets through the BIOS but BSODs on Windows load. It also won't let me install an OS from disk or boot into Linux. I've run memtest with all 4 sticks in and I get 10k+ errors on test 5. Each stick of RAM on its own is fine and functions properly. I only have problems when all 4 sticks are in the machine at the same time. System specs: CPU: QX9650; Mobo: Asus P5B, 2104 BIOS; RAM: 2x PC2-5400 DDR2 and 2x PC2-6400, both OCZ. Is the problem on my end or is the CPU faulty?

  • Intel T5600 to upgrade T5250

    - by galets
    I want to upgrade my laptop, which has a T5250 CPU, to a T5600 CPU to get virtualization support. I ordered a T5600 on eBay, but it didn't fit. The T5250 is listed as using the PPGA478 socket, so I assume that is what I have. The T5600 is listed as supporting "PBGA479, PPGA478". Since the T5600 didn't fit as a replacement, I assume this means there are two models of the T5600, one for PBGA479 and one for PPGA478, rather than (as I had thought) one CPU supporting both. Is that a correct statement? Does anybody know if such an upgrade is even possible, or am I wasting my time?

  • performance monitoring

    - by Sunny
    I want to monitor CPU usage and disk read/write usage for a particular process, say ./myprocess. To monitor CPU, the top command seems to be a nice option, and for reads and writes iotop seems to be a handy one. For example, to monitor read/write every second I use the command iotop -tbod1 | grep "myprocess". My difficulty is that I only want to store three variables: read/sec, write/sec and CPU usage/sec. Could you help me with a script that combines the outputs of top and iotop for those three variables and stores them in a log file? Thanks!

  • Assembling a number-crunching computer [closed]

    - by tugrul büyükisik
    What is needed to make sure a GPU is fully fed by the CPU? Is comparing their FLOPS enough? For example, if I paired a very old (Pentium 3) CPU with an Nvidia Fermi GPU, the CPU would not be able to feed the GPU with enough data per second. What exactly are the criteria for matching a CPU to a GPU when OpenCL or similar work is needed? Of course the RAM and bus would be chosen in a similar way, but how exactly? Assume each GPU core will calculate a sqrt, a division and an addition 100 times for every iteration. Thanks.

  • IIS 7 Application Pools using a different amount of memory on multiple servers behind a load balancer

    - by Jim March
    We have 6 servers in a web farm behind an F5. There are approximately 25 app pools on each of these servers. On servers 1-5 the app pools consume approx 500MB Private Bytes and 5GB Virtual Bytes. On server 6 the app pools consume approx 800MB Private Bytes and 8GB Virtual Bytes. I cannot seem to figure out why we have this difference. The code is exactly the same on each box. We replicate the apphost.config between the boxes, so the application configs are identical. The only difference seems to be that this box consumes more RAM and, in turn, ends up using a lot more CPU. During Black Friday we observed the CPU on server 6 spiking to 100% and noticed that its % Memory Commit was also near 100%, while the rest of the farm was at closer to 50% utilization. Pulling the 6th server from the load balancer dropped CPU/memory on it back to normal and did not cause a noticeable strain on the other servers.

  • Xen virtual machine uses too high a percentage of CPU

    - by ki0
    Hi everyone, this is my question: I have one Xen server with 8 CPUs and 6 virtual machines running, and each virtual hard disk runs on a different physical hard disk. Everything worked fine, but sometimes one virtual machine takes almost the whole CPU: while Domain-0 at 90% is normal, that virtual machine reaches 500% CPU usage. I have confirmed that it does not depend on who is working with the VM; even when nobody is working with the server this still happens. I don't know what is happening. Does anyone have any idea, or has the same thing happened to anyone else?

  • How to configure LAMP server for iOS social/chat app?

    - by andufo
    I'm in the last development phase of a social networking app for iOS that has a chat module. Right now I'm trying to figure out the best way to achieve these features: send a message instantly to another user (if the other user is online, delivery should be instant); if a user reads the message, the sender should be notified of that action; and if a user visits my profile, I should be notified instantly. What would be, in your opinion, the best approach to achieve that experience? The server is CentOS 5.6. I've previously reviewed XMPP and sockets, but I'm still unsure what the best approach is. Any opinions and resources will be much appreciated.

  • How can 2 or more instances of the same program to communicate in local network?

    - by user1981437
    I want to create a program which will be used by a few computers connected to a local network. Basically the program's aim is to keep track of all the tables in a bar (let's say) that are reserved. When a user books a table, the program should broadcast the table number to all other PCs and mark the table as reserved. Since all computers use the same program, how is it possible to create communication between all of them? Should I use sockets to achieve this? If it matters, all of the computers run Linux, and the app will be developed in Ruby, Perl or PHP. Thank you.
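
    Just to illustrate the broadcast idea (not the Ruby/Perl/PHP code the question asks for): one instance can send a small UDP datagram to the subnet's broadcast address, and every other instance listens on the same port and updates its own view of the tables. A minimal sketch in C#, the language used for the other examples on this page; the port number and message format are made up:

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class TableBroadcast
    {
        const int Port = 9500; // hypothetical port shared by all instances

        // Announce a reservation to every instance on the local subnet.
        static void Announce(int tableNumber)
        {
            using (var udp = new UdpClient())
            {
                udp.EnableBroadcast = true;
                byte[] msg = Encoding.UTF8.GetBytes("RESERVED " + tableNumber);
                udp.Send(msg, msg.Length, new IPEndPoint(IPAddress.Broadcast, Port));
            }
        }

        // Each instance also listens and marks tables reserved as announcements arrive.
        static void Listen()
        {
            using (var udp = new UdpClient(Port))
            {
                var from = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    byte[] data = udp.Receive(ref from);
                    Console.WriteLine("{0} says: {1}", from, Encoding.UTF8.GetString(data));
                }
            }
        }

        static void Main(string[] args)
        {
            if (args.Length > 0) Announce(int.Parse(args[0]));
            else Listen();
        }
    }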
