Search Results

Search found 7065 results on 283 pages for 'cpu sockets'.


  • Which CPU for SQL Server machine (Xeon, i5, i7, AMD Phenom)?

    - by Tony_Henrich
    I am going to build a full-height server machine to be used for SQL Server 2008 64-bit. I have $400 to spend on a CPU. Which CPU should I get among the i5, i7, Xeon and Phenom lines in terms of performance? There are so many options and I am out of touch with the latest stuff. All I know is that I want something fast that works with fast DDR3 memory and some kind of fast system bus. I don't care about overclocking or 3D/graphics benchmarks; the machine is not used for games or graphics apps. Any recommendations?

    Read the article

  • ESXi guests not using available CPU resources

    - by Alan M
    I have a VMware ESXi 5.0.0 host (it's a bit old, I know) with three guest VMs on it. For reasons unknown, the guests will not use much of the available CPU resources. I have all three guests in a single pool, with the guests all configured to use the same number of resource shares, so they're basically 33% each. The three guests are configured basically identically as far as their VM resources go. So the problem is, even when the guests are performing what should be very busy activities, such as booting, the actual host CPU consumed is something tiny, like 33 MHz, as seen in the vSphere console's "Virtual Machines" tab when viewing properties for the pool. And of course, the performance of the guest VMs is terrible. The host has plenty of CPU to spare. I've tried tinkering with individual guest VM resource settings, cranking up the reservation, etc. No matter: the guests just refuse to make use of the abundant CPU available to them, and insist on using a sliver of the available resources. Any suggestions?

    Update after reading various comments below: Per the suggestions below, I removed the guests from the resource pool; this didn't make any difference. I do understand that the guests are not going to consume resources they do not need. I have tried to run a remote perfmon on the guest which is experiencing long boot times, but I cannot connect to the guest remotely with perfmon (the guest is a Windows Server 2008 R2 server). Host graphs for CPU, memory and disk are basically flatlining; very little demand. The same is true for the guest stats: while the guest itself seems to be crawling, the guest resource graphs show very little activity across CPU, memory and disk. The host is a Dell PowerEdge 2900 with 2 physical CPUs and 20 GB RAM (it's a test/dev environment using surplus gear). Guest 1: VM version 7, 2 vCPUs, 4 GB RAM, 140 GB storage on a RAID-5 array on the host. Guest 2: VM version 7, 2 vCPUs, 4 GB RAM, 140 GB storage on a RAID-5 array on the host. Guest 3: VM version 7, 1 vCPU, 2 GB RAM, 2 TB storage on a RAID-5 iSCSI NAS box. Perhaps I am making a false assumption that if a guest has a demand for CPU (e.g. Windows Task Manager shows 100% CPU), the host would supply the guest with more CPU (and memory and disk) on demand.

    Another update: After checking the stats, it would appear that the host is indeed not busy at all, and neither is the guest. I believe I have a good idea of the issue, though: a messed-up VMware Tools install. The guest has VMware Tools on it, but the host says it does not. VMware Tools refuses to uninstall, refuses to be upgraded, refuses to be recognized. While I cannot say with authority, this would appear to be something worth investigating. I do not know the origin of the guest itself, nor the specifics of the original VMware Tools install. Following various bits of googling, I did come up with a few suggestions that went nowhere. To that end, I was going to delete this question, but was prompted not to since so many folks answered. My suspicion right now is that the problem truly is the guest: the guest is not making a demand on the host, and as a natural result the host is treating the guest accordingly.

    My final update: I am 99% certain the guest VM had something fundamentally wrong with it regarding VMware Tools. I created a clone of a different VM with a near-identical OS config but a properly working install of VMware Tools. That guest runs just great, and takes up its allotment of resources when it needs to; e.g. it eats up about 850 MHz of CPU during startup, then ticks down to idle once the guest OS is stable.

    Read the article

  • Does Lapping a CPU / Heatsink actually drop the temp?

    - by Pure.Krome
    Hi folks, I've been watching some YouTube videos about lapping a CPU. I've never heard of this modding technique before and, though extreme, I was wondering if it actually works. Assuming you lap your CPU and/or heatsink correctly, will the temps drop? When I say drop, at least a 1-degree drop counts as success (for the sake of this debate). To keep this topic clean, please refrain from commenting on the overkill of labour just for a 1-degree (worst case) drop, etc. This is a discussion about the theory and concept, not personal opinion on whether to lap or not.

    Read the article

  • How to test KVM guest CPU maximum allocation limit?

    - by Ace
    Running an Ubuntu 13.04 host and VM guest, using virtio for the HDD and NIC. The maximum allocation of CPU cores is 6, the minimum is 2. I've made a VM with virt-manager just to play with and to test out KVM. I have a decent understanding of how the memory balloon driver works, but I still don't know how to test whether the guest OS can utilize the max setting for CPU cores. From what I gather, the host will start one qemu thread for each core allocated to the VM. When I run htop inside the guest, it only shows two cores (also, here is the output of cat /proc/cpuinfo: https://gist.github.com/anonymous/93a361545130923537da). How can I "force" the guest to allocate the other 4 cores so that it shows 6 cores in htop? Is there a way to do this?
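
    One way to exercise the upper limit, sketched below under the assumption that the domain really does define 6 maximum vCPUs: hot-plug the extra vCPUs from the host with virsh setvcpus <domain> 6 --live, then bring them online inside the guest. Linux exposes an "online" flag in sysfs for each hot-pluggable CPU; this small program (the core count of 6 is hard-coded purely for illustration) flips them on:

        #include <stdio.h>

        /* Bring hot-plugged vCPUs online from inside the guest.
           Assumes "virsh setvcpus <domain> 6 --live" was already run
           on the host, so the guest kernel can see cpus 1..5. */
        int main(void)
        {
            char path[64];

            /* cpu0 is not hot-pluggable and has no "online" file */
            for (int cpu = 1; cpu < 6; cpu++) {
                snprintf(path, sizeof path,
                         "/sys/devices/system/cpu/cpu%d/online", cpu);
                FILE *f = fopen(path, "w");
                if (!f) { perror(path); continue; }
                fputs("1", f);
                fclose(f);
            }
            return 0;
        }

    If the online files are missing, the guest kernel was likely built without CPU hot-plug support, in which case rebooting the guest with 6 current vCPUs is the fallback.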

    Read the article

  • CPU fan making noise: apply heatsink compound or replace?

    - by user32185
    My CPU fan is making noise. I've figured out the reasons: a cable is hitting the fan, causing a vibration, and the CPU fan may also be loose, causing vibration. Please suggest options. I cleaned the fan, but found that there is no heatsink compound/paste between the processor and the heatsink. Is that OK, or should I apply heatsink compound? How do I correct the cable vibration? My processor is a P4 2.4 GHz on an Intel 845GVSR motherboard. If I have to buy a new fan, which brand should I go for, and at what estimated cost?

    Read the article

  • Why does the CPU usage reach 100% when laptop is plugged in?

    - by Mayank
    I have an HP EliteBook 8440p with Windows 7 Ultimate, an i7 processor and 8 GB RAM. For the last couple of days, when I plug in my laptop for charging, the CPU usage shoots up to 100% and remains there for around 15 minutes. No offending process shows up in Task Manager or Process Explorer, but the laptop is practically unusable for that time. The CPU usage returns to normal when I remove the power cord, but shoots up again when it's plugged back in. This happens with different power adapters and wall sockets, so it's probably not a faulty adapter. Why is this happening, and are there any pointers to resolve this issue?

    Read the article

  • What kind of CPU/GPU integration is offered by APUs?

    - by clabacchio
    I'm truly fascinated by the idea of GPGPU and using the GPU for heavy processing. I'm seeing that APUs (Accelerated Processing Units: a CPU and GPU on the same chip) are also gaining considerable popularity. Do all APUs support GPGPU? Can it be used for general processing? And is it seamless, or does it require special code (like CUDA) to have the hard work done by the GPU? I'm not interested in raw graphics performance, but rather in how much the GPU can accelerate "normal" CPU work.
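
    For context on the "special code" point, a minimal OpenCL vector-add sketch is shown below; APUs are typically programmed through OpenCL rather than CUDA, and even a trivial offload involves creating a context, compiling a kernel and copying buffers explicitly. This is an illustrative sketch with error checking omitted, assuming an OpenCL SDK is installed (link with -lOpenCL):

        #include <stdio.h>
        #include <CL/cl.h>

        static const char *src =
            "__kernel void vadd(__global const float *a,\n"
            "                   __global const float *b,\n"
            "                   __global float *c) {\n"
            "    size_t i = get_global_id(0);\n"
            "    c[i] = a[i] + b[i];\n"
            "}\n";

        int main(void)
        {
            enum { N = 1024 };
            float a[N], b[N], c[N];
            for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }

            /* pick the first GPU on the first platform */
            cl_platform_id plat;  cl_device_id dev;
            clGetPlatformIDs(1, &plat, NULL);
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

            cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
            cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

            /* explicit buffer management: nothing here is automatic */
            cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
            cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
            cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

            /* the kernel source is compiled for the device at run time */
            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
            clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "vadd", NULL);

            clSetKernelArg(k, 0, sizeof da, &da);
            clSetKernelArg(k, 1, sizeof db, &db);
            clSetKernelArg(k, 2, sizeof dc, &dc);

            size_t global = N;
            clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
            clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

            printf("c[10] = %f (expect 30.0)\n", c[10]);
            return 0;
        }

    The speedup over the plain CPU only shows up when the arithmetic per byte transferred is high; for a copy-bound loop like this one the buffer traffic usually eats the gain, though less so on an APU, where CPU and GPU share physical memory.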

    Read the article

  • Why does CentOS Linux use cpu/core #1 so much more in a 4-core system?

    - by ck_
    I've been watching top and htop for a while on a very active server, and I am wondering why Linux does not automatically use CPU affinity better. CPU #1 (actually core #1 of 4) is used much more heavily than the others. Is there a setting, similar to what vm.swappiness does for swap, that forces a preferred affinity pattern? Should I be using forced affinity settings within MySQL/Apache/nginx/Exim to get better results? This is on CentOS, kernel 2.6.32-279, x86_64 SMP. Thanks for any suggestions.
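
    For reference, if forced affinity does turn out to help, Linux exposes it per process: the taskset utility wraps the same sched_setaffinity(2) call this sketch uses (the core numbers are arbitrary examples, not a recommendation):

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(2, &set);    /* allow this process on core 2 ... */
            CPU_SET(3, &set);    /* ... and core 3 only */
            if (sched_setaffinity(0, sizeof set, &set) == -1) {
                perror("sched_setaffinity");
                return 1;
            }

            /* read the mask back to verify */
            CPU_ZERO(&set);
            sched_getaffinity(0, sizeof set, &set);
            for (int i = 0; i < CPU_SETSIZE; i++)
                if (CPU_ISSET(i, &set))
                    printf("allowed cpu: %d\n", i);
            return 0;
        }

    The equivalent one-liner for a running daemon would be along the lines of taskset -cp 2,3 <pid>.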

    Read the article

  • What should Oracle VirtualBox look like in terms of memory, CPU and performance? [duplicate]

    - by Nicholas DiPiazza
    I've got a need for a ton of VMs to simulate some realistic load-testing scenarios, and I've got a bunch of different host machines that differ in RAM, CPUs, etc. What should my resource manager look like? Is there a standard way to know what the CPU, memory and disk utilization should be, given the CPUs, memory and disks available? For example, I have a box with MemTotal: 50 GB and 8 CPUs. The CPUs are pretty much at 100% all day long, memory is at about 60%, and swap is not getting hit. I'm a little bewildered as to why the VMs, while running the exact same test script, are showing different virtual memory consumption.

    Read the article

  • What is the best CPU I could get for my old setup?

    - by cmbrnt
    On my current not-so-good desktop computer I'm running an Intel Core 2 Duo E6550 CPU, and I'm thinking of buying a newer one now that these must be cheap on eBay and the like. Being a software guy, I'd be grateful for any help in identifying which CPUs would fit my motherboard, so I don't buy a processor that won't work with my computer. It might not be worth it, however, since the SATA2 drive is currently what's bottlenecking my system, but I'm still curious about what could be done short of buying an SSD. I'm providing screenshots of CPU-Z running, to help with the identification. My CPU and motherboard info: http://i.imgur.com/YdO8x.png

    Read the article

  • SO_LINGER and closing sockets (Winsock)

    - by Johnny Walked
    Hey. I'm writing a multithreaded Winsock application and I'm having some issues with closing the sockets. First of all, is there a limit on the number of simultaneously open sockets? Say, 32 sockets all at once. I establish a connection on one of the sockets, pass information, and it all goes right. The problem is when I disconnect the socket and then reconnect to the same destination: I get an RST from the server after my SYN. I don't have the code for the server app, so I can't debug it. When I used SO_LINGER it sent an RST flag at the end of each session, and it worked; but I don't want to end my connections this way. When not using SO_LINGER a FIN flag was sent, but it seems the connection was not really closed. Any help? Thanks.
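
    Without the server code this is a guess, but one common cause of these symptoms is closing a socket while unread data is still pending, which makes the stack emit an RST instead of a clean FIN exchange. Below is a sketch of a graceful Winsock close that avoids both the SO_LINGER RST and the half-dead connection (WSAStartup and the connected socket are assumed to exist already):

        #include <winsock2.h>

        /* Half-close, drain whatever the peer still has in flight,
           then close: both sides see an orderly FIN/ACK shutdown. */
        int graceful_close(SOCKET sock)
        {
            char buf[512];
            int n;

            if (shutdown(sock, SD_SEND) == SOCKET_ERROR)   /* send our FIN */
                return closesocket(sock);

            do {
                n = recv(sock, buf, sizeof buf, 0);        /* discard leftovers */
            } while (n > 0);                               /* 0 = peer's FIN */

            return closesocket(sock);
        }

    As for the limit question: the practical ceiling is per-process handle and select() limits (Winsock's FD_SETSIZE defaults to 64), so 32 simultaneous sockets is normally fine.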

    Read the article

  • How to use CFNetwork to get byte array from sockets?

    - by Vic
    Hi, I'm working on a project for the iPad. It is a small program, and I need it to communicate with another piece of software that runs on Windows and acts as a server, so the application that I'm creating for the iPad will be the client. I'm using CFNetwork for socket communication. This is the way I'm establishing the connection:

        char ip[] = "192.168.0.244";
        NSString *ipAddress = [[NSString alloc] initWithCString:ip];

        /* Build our socket context; this ties an instance of self to the socket */
        CFSocketContext CTX = { 0, self, NULL, NULL, NULL };

        /* Create the socket as a TCP IPv4 socket and set a callback */
        /* for calls to the socket's lower-level connect() function */
        TCPClient = CFSocketCreate(NULL, PF_INET, SOCK_STREAM, IPPROTO_TCP,
                                   kCFSocketDataCallBack,
                                   (CFSocketCallBack)ConnectCallBack, &CTX);
        if (TCPClient == NULL)
            return;

        /* Set the port and address we want to connect to */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_len = sizeof(addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);
        addr.sin_addr.s_addr = inet_addr([ipAddress UTF8String]);

        CFDataRef connectAddr = CFDataCreate(NULL, (unsigned char *)&addr, sizeof(addr));
        CFSocketConnectToAddress(TCPClient, connectAddr, -1);

        CFRunLoopSourceRef sourceRef =
            CFSocketCreateRunLoopSource(kCFAllocatorDefault, TCPClient, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), sourceRef, kCFRunLoopCommonModes);
        CFRelease(sourceRef);
        CFRunLoopRun();

    And this is the way I send the data, which is basically a byte array:

        /* The native socket, used for various operations */
        /* TCPClient is a CFSocketRef member variable */
        CFSocketNativeHandle sock = CFSocketGetNative(TCPClient);

        Byte byteData[3];
        byteData[0] = 0;
        byteData[1] = 4;
        byteData[2] = 0;
        send(sock, byteData, strlen(byteData)+1, 0);

    Finally, as you may have noticed, when I create the socket I register a callback for the kCFSocketDataCallBack type. This is the code:

        void ConnectCallBack(CFSocketRef socket, CFSocketCallBackType type,
                             CFDataRef address, const void *data, void *info)
        {
            /* SocketsViewController is the class that contains all the methods */
            SocketsViewController *obj = (SocketsViewController *)info;
            UInt8 *unsignedData = (UInt8 *)CFDataGetBytePtr(data);
            char *value = (char *)unsignedData;
            NSString *text = [[NSString alloc] initWithCString:value length:strlen(value)];
            [obj writeToTextView:text];
            [text release];
        }

    This callback is being invoked in my code. The problem is that I don't know how to get the data that the Windows server sent me. I'm expecting to receive an array of bytes, but I don't know how to get those bytes from the data param. If anyone can help me find a way to do this, or maybe point me to another way to get the data from the server in my client application, I would really appreciate it. Thanks.
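
    For what it's worth, with a kCFSocketDataCallBack the data argument is itself a CFDataRef wrapping the bytes just read (a zero-length one signals that the peer closed), so a sketch of pulling the raw bytes out, without assuming they are NUL-terminated text, might look like this:

        #include <CoreFoundation/CoreFoundation.h>
        #include <stdio.h>

        void DataCallBack(CFSocketRef s, CFSocketCallBackType type,
                          CFDataRef address, const void *data, void *info)
        {
            CFDataRef payload = (CFDataRef)data;       /* bytes for this event */
            CFIndex len = CFDataGetLength(payload);
            const UInt8 *bytes = CFDataGetBytePtr(payload);

            if (len == 0)
                return;                                /* peer disconnected */

            for (CFIndex i = 0; i < len; i++)
                printf("byte[%ld] = 0x%02x\n", (long)i, bytes[i]);
        }

    Treating the payload as a C string (strlen, initWithCString:) breaks as soon as the bytes contain a 0, which the 0/4/0 example above already does, so the length always has to come from CFDataGetLength.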

    Read the article

  • Sockets, threads and services in Android: how do I make them work together?

    - by Spredzy
    Hi all, I am facing a problem with threads and sockets that I can't figure out; if someone can help me, I would really appreciate it. Here are the facts: I have a service class, NetworkService; inside this class I have a Socket attribute. I would like it to stay connected for the whole lifecycle of the service. I connect the socket in a thread, so if the server times out it will not block my UI thread. The problem is that inside the thread where I connect my socket everything is fine: it is connected and I can talk to my server. But once this thread is over and I try to reuse the socket in another thread, I get the error message "Socket is not connected". My questions are: Is the socket automatically disconnected at the end of the thread? Is there any way to pass a value back from a called thread to the caller? Thanks a lot. Here is my code:

        public class NetworkService extends Service {

            private Socket mSocket = new Socket();

            private void _connectSocket(String addr, int port) {
                Runnable connect = new connectSocket(this.mSocket, addr, port);
                new Thread(connect).start();
            }

            private void _authentification() {
                Runnable auth = new authentification();
                new Thread(auth).start();
            }

            private INetwork.Stub mBinder = new INetwork.Stub() {
                @Override
                public int doConnect(String addr, int port) throws RemoteException {
                    _connectSocket(addr, port);
                    _authentification();
                    return 0;
                }
            };

            class connectSocket implements Runnable {
                String addrSocket;
                int portSocket;
                int TIMEOUT = 5000;

                public connectSocket(String addr, int port) {
                    addrSocket = addr;
                    portSocket = port;
                }

                @Override
                public void run() {
                    SocketAddress socketAddress = new InetSocketAddress(addrSocket, portSocket);
                    try {
                        mSocket.connect(socketAddress, TIMEOUT);
                        PrintWriter out = new PrintWriter(mSocket.getOutputStream(), true);
                        out.println("test42");
                        Log.i("connectSocket()", "Connection Succesful");
                    } catch (IOException e) {
                        Log.e("connectSocket()", e.getMessage());
                        e.printStackTrace();
                    }
                }
            }

            class authentification implements Runnable {

                private String constructFirstConnectQuery() {
                    String query = "toto";
                    return query;
                }

                @Override
                public void run() {
                    BufferedReader in;
                    PrintWriter out;
                    String line = "";
                    try {
                        in = new BufferedReader(new InputStreamReader(mSocket.getInputStream()));
                        out = new PrintWriter(mSocket.getOutputStream(), true);
                        out.println(constructFirstConnectQuery());
                        while (mSocket.isConnected()) {
                            line = in.readLine();
                            Log.e("LINE", "[Current]- " + line);
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }

    Read the article

  • Compatibility between Qt and Boost sockets libraries

    - by cake
    Hello. At work, I'm developing a viewer client for an offshore simulation server, using sockets to send the simulation data from the simulator to the viewer. The server uses Boost.Asio as its sockets library. Since the client uses Qt for its GUI, I was wondering if there is any problem in using the Qt networking library for handling the sockets. Are there any compatibility issues? Thanks in advance, and sorry for my bad English.

    Read the article

  • Why do sockets not die when the server dies? Why does a socket die while the server is alive?

    - by Roman
    I'm trying to play with sockets a bit. For that I wrote very simple "client" and "server" applications.

    The client:

        import java.net.*;

        public class client {
            public static void main(String[] args) throws Exception {
                InetAddress localhost = InetAddress.getLocalHost();
                System.out.println("before");
                Socket clientSideSocket = null;
                try {
                    clientSideSocket = new Socket(localhost, 12345, localhost, 54321);
                } catch (ConnectException e) {
                    System.out.println("Connection Refused");
                }
                System.out.println("after");
                if (clientSideSocket != null) {
                    clientSideSocket.close();
                }
            }
        }

    The server:

        import java.net.*;

        public class server {
            public static void main(String[] args) throws Exception {
                ServerSocket listener = new ServerSocket(12345);
                while (true) {
                    Socket serverSideSocket = listener.accept();
                    System.out.println("A client-request is accepted.");
                }
            }
        }

    And I found behavior that I cannot explain:

    1. I start the server, then I start the client. The connection is successfully established (the client stops running and the server keeps running). Then I close the server and start it again a second later. After that I start the client, and it writes "Connection Refused". It seems to me that the server "remembers" the old connection and does not want to open the same connection twice. But I do not understand how that is possible, because I killed the previous server and started a new one!
    2. If I do not start the server immediately after the previous one was killed (I wait about 20 seconds), the server "forgets" the socket from the previous server and accepts the request from the client.
    3. I start the server and then the client. The connection is established (the server writes "A client-request is accepted"). Then I wait a minute and start the client again. And the server (which was running the whole time) accepts the request again! Why? The server should not accept a request from the same client IP and client port, but it does!
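
    The "remembered" connection is the old socket pair sitting in TCP's TIME_WAIT state: because the client binds a fixed local port (54321), a quick restart reuses the exact same 4-tuple, and the leftover TIME_WAIT entry gets in the way until it expires a couple of minutes later. Letting the OS pick an ephemeral client port avoids that, and on the server side Java's ServerSocket.setReuseAddress(true) before binding corresponds to the SO_REUSEADDR option sketched here in C:

        #include <stdio.h>
        #include <string.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        /* Minimal listener sketch: SO_REUSEADDR lets a restarted server
           rebind port 12345 even while old sockets linger in TIME_WAIT. */
        int main(void)
        {
            int listenfd = socket(AF_INET, SOCK_STREAM, 0);
            int yes = 1;
            struct sockaddr_in addr;

            if (setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes) == -1)
                perror("setsockopt");

            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port = htons(12345);
            addr.sin_addr.s_addr = htonl(INADDR_ANY);

            if (bind(listenfd, (struct sockaddr *)&addr, sizeof addr) == -1)
                perror("bind");
            if (listen(listenfd, 16) == -1)
                perror("listen");

            /* accept() loop as usual ... */
            return 0;
        }

    The third observation is expected behavior: a server is supposed to accept any number of consecutive connections from the same client IP and port, as long as they don't collide with one still in TIME_WAIT.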

    Read the article

  • Necessity of push and pop instructions on CPUs

    - by Hawken
    Why do we have instructions like push and pop? From what I understand, pop and push are basically the same as doing a mov-then-add and a sub-then-mov on %esp, respectively. For example, wouldn't:

        pushl %eax

    be equivalent to:

        subl $4, %esp
        movl %eax, (%esp-4)

    (Please correct me if stack access is not (%esp-4); I'm still learning assembly.) The only true benefit I can see is if doing both operations simultaneously offers some advantage; however, I don't see how it could.

    Read the article

  • Cache bandwidth per tick for modern CPUs

    - by osgx
    Hello. How fast is cache access on modern CPUs? How many bytes can be read from or written to memory every processor clock tick by an Intel P4, Core 2, Core i7, or AMD part? Please answer with both theoretical numbers (the width of the load/store unit and its throughput in uops per tick) and practical numbers (even memcpy speed tests, or the STREAM benchmark), if any. PS: this question relates to the maximal rate of load/store instructions in assembler; there may be a theoretical rate of loading (if every instruction issued per tick were the widest load), but the processor can deliver only part of that, so there is a lower practical limit.
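
    On the practical side, a crude single-threaded probe is sketched below: sized at 32 KB, the buffer mostly measures L1/L2 copy bandwidth rather than DRAM, and the x2 factor counts each byte as one read plus one write (compile with -O2; older glibc needs -lrt for clock_gettime):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        enum { BUF = 32 * 1024 };            /* fits in L1/L2 on most parts */

        int main(void)
        {
            char *src = malloc(BUF), *dst = malloc(BUF);
            const long iters = 1000000;
            struct timespec t0, t1;

            memset(src, 1, BUF);
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < iters; i++) {
                memcpy(dst, src, BUF);
                __asm__ volatile("" ::: "memory");   /* keep the copy alive */
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%.2f GB/s (read+write), dst[0]=%d\n",
                   (double)BUF * iters * 2 / sec / 1e9, dst[0]);
            free(src); free(dst);
            return 0;
        }

    Dividing the GB/s figure by the clock frequency gives an approximate bytes-per-tick number that can be compared against the theoretical load/store width.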

    Read the article

  • High CPU usage on Linux machine

    - by user305210
    I have a piece of Java code running on two different machines, but on one of the Linux machines the code uses a lot of CPU (close to 100% CPU usage). On the other machine the same code uses far less CPU (under 3 to 4%). The machine where CPU usage is high is the more powerful machine: more CPU and more memory. This started happening recently, and performance on the machine with high CPU usage has degraded significantly. I am wondering if anyone has any ideas why something like this could happen, possible causes behind it, etc. Any guesses? No recent changes in hardware were made, and there were no recent code updates. Thank you.

    Read the article

  • Apache2 + FCGI + PHP5 not creating sockets / pools

    - by CodyRo
    I have a pretty vanilla setup: Apache2 with FastCGI built as a DSO, serving PHP through an external CGI script that sets the max children and hands the request to PHP. The issue is that FastCGI doesn't appear to be creating or pooling the PHP sockets, so each request spawns the php-cgi binary, which then dies off, effectively defeating the reason I want to use FastCGI in the first place. The only configuration directives I have are:

        AddHandler php5-fastcgi .php
        Action php5-fastcgi /cgi-bin/fcgi.cgi
        FastCgiIpcDir /usr/local/apache2/fastcgi

    The dynamic/ directory is getting created as anticipated, but there are no sockets in there. Permissions are indeed correct. Any help would be greatly appreciated, thanks!

    Read the article

  • Separation of memory-oriented and CPU-oriented processes

    - by Jeevan Dongre
    I am a DevOps guy working for an e-commerce company, running an e-commerce application built with Ruby on Rails and Spree Commerce. I presently run 2 medium instances in production: one is a high-memory instance with 3.8 GB RAM and a single-core CPU, and the other is a high-CPU instance with a dual-core CPU. AWS calls them m1.medium and c1.medium respectively. My question: is it possible to separate the processes into CPU-intensive and memory-intensive groups, so that all the CPU-intensive processes run on the high-CPU instance and all the memory-intensive processes run on the high-memory instance? Is any tool available to identify those processes? Kindly give me a heads-up. Thank you.

    Read the article

  • LINQ to SQL: too much CPU usage. What happens when there are multiple users?

    - by soldieraman
    I am using LINQ to SQL and seeing my CPU usage skyrocket; see the screenshot below. I have three questions.

    1. What can I do to reduce this CPU usage? I have done profiling and basically removed everything. Will making every LINQ to SQL statement into a compiled query help? I also find that even with compiled queries, simple statements like ByID() can take 3 milliseconds on a server with 3.25 GB RAM at 3.17 GHz, and this will only be slower on a less powerful computer. Or will a compiled query get faster the more it is used?
    2. The CPU usage (on the local server it goes to 12-15%) is for a single user. Will this multiply with the number of users accessing the server once the application is put on a live server, i.e. will 2 users at a time mean 15*2 = 30% CPU usage? If so, is my application limited to a maximum of 4-5 users at a time?
    3. Or does LINQ to SQL/.NET share some of that CPU usage across users?

    Read the article

  • C sockets chat server and client: problem echoing back

    - by wretrOvian
    Hi. This is my chat server:

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netdb.h>
        #include <string.h>

        #define LISTEN_Q 20
        #define MSG_SIZE 1024

        struct userlist {
            int sockfd;
            struct sockaddr addr;
            struct userlist *next;
        };

        int main(int argc, char *argv[])
        {
            // declare.
            int listFD, newFD, fdmax, i, j, bytesrecvd;
            char msg[MSG_SIZE], ipv4[INET_ADDRSTRLEN];
            struct addrinfo hints, *srvrAI;
            struct sockaddr_storage newAddr;
            struct userlist *users, *uptr, *utemp;
            socklen_t newAddrLen;
            fd_set master_set, read_set;

            // clear sets
            FD_ZERO(&master_set);
            FD_ZERO(&read_set);

            // create a user list
            users = (struct userlist *)malloc(sizeof(struct userlist));
            users->sockfd = -1;
            //users->addr = NULL;
            users->next = NULL;

            // clear hints
            memset(&hints, 0, sizeof hints);

            // prep hints
            hints.ai_family = AF_INET;
            hints.ai_socktype = SOCK_STREAM;
            hints.ai_flags = AI_PASSIVE;

            // get server info
            if (getaddrinfo("localhost", argv[1], &hints, &srvrAI) != 0) {
                perror("* ERROR | getaddrinfo()\n");
                exit(1);
            }

            // get a socket
            if ((listFD = socket(srvrAI->ai_family, srvrAI->ai_socktype, srvrAI->ai_protocol)) == -1) {
                perror("* ERROR | socket()\n");
                exit(1);
            }

            // bind socket
            bind(listFD, srvrAI->ai_addr, srvrAI->ai_addrlen);

            // listen on socket
            if (listen(listFD, LISTEN_Q) == -1) {
                perror("* ERROR | listen()\n");
                exit(1);
            }

            // add listFD to master_set
            FD_SET(listFD, &master_set);

            // initialize fdmax
            fdmax = listFD;

            while (1) {
                // equate
                read_set = master_set;

                // run select
                if (select(fdmax + 1, &read_set, NULL, NULL, NULL) == -1) {
                    perror("* ERROR | select()\n");
                    exit(1);
                }

                // query all sockets
                for (i = 0; i <= fdmax; i++) {
                    if (FD_ISSET(i, &read_set)) {
                        // found active sockfd
                        if (i == listFD) {
                            // new connection: accept
                            newAddrLen = sizeof newAddr;
                            if ((newFD = accept(listFD, (struct sockaddr *)&newAddr, &newAddrLen)) == -1) {
                                perror("* ERROR | select()\n");
                                exit(1);
                            }

                            // resolve ip
                            if (inet_ntop(AF_INET, &(((struct sockaddr_in *)&newAddr)->sin_addr), ipv4, INET_ADDRSTRLEN) == -1) {
                                perror("* ERROR | inet_ntop()");
                                exit(1);
                            }
                            fprintf(stdout, "* Client Connected | %s\n", ipv4);

                            // add to master list
                            FD_SET(newFD, &master_set);

                            // create new userlist component
                            utemp = (struct userlist *)malloc(sizeof(struct userlist));
                            utemp->next = NULL;
                            utemp->sockfd = newFD;
                            utemp->addr = *((struct sockaddr *)&newAddr);

                            // iterate to last node
                            for (uptr = users; uptr->next != NULL; uptr = uptr->next) {
                            }

                            // add
                            uptr->next = utemp;

                            // update fdmax
                            if (newFD > fdmax)
                                fdmax = newFD;
                        } else {
                            // existing sockfd transmitting data: read
                            if ((bytesrecvd = recv(i, msg, MSG_SIZE, 0)) == -1) {
                                perror("* ERROR | recv()\n");
                                exit(1);
                            }
                            msg[bytesrecvd] = '\0';

                            // find out who sent?
                            for (uptr = users; uptr->next != NULL; uptr = uptr->next) {
                                if (i == uptr->sockfd)
                                    break;
                            }

                            // resolve ip
                            if (inet_ntop(AF_INET, &(((struct sockaddr_in *)&(uptr->addr))->sin_addr), ipv4, INET_ADDRSTRLEN) == -1) {
                                perror("* ERROR | inet_ntop()");
                                exit(1);
                            }

                            // print
                            fprintf(stdout, "%s\n", msg);

                            // send to all
                            for (j = 0; j <= fdmax; j++) {
                                if (FD_ISSET(j, &master_set)) {
                                    if (send(j, msg, strlen(msg), 0) == -1)
                                        perror("* ERROR | send()");
                                }
                            }
                        } // handle read from client
                    } // end select result handle
                } // end looping fds
            } // end while

            return 0;
        }

    This is my client:

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netdb.h>
        #include <string.h>

        #define MSG_SIZE 1024

        int main(int argc, char *argv[])
        {
            // declare.
            int newFD, bytesrecvd, fdmax;
            char msg[MSG_SIZE];
            fd_set master_set, read_set;
            struct addrinfo hints, *srvrAI;

            // clear sets
            FD_ZERO(&master_set);
            FD_ZERO(&read_set);

            // clear hints
            memset(&hints, 0, sizeof hints);

            // prep hints
            hints.ai_family = AF_INET;
            hints.ai_socktype = SOCK_STREAM;
            hints.ai_flags = AI_PASSIVE;

            // get server info
            if (getaddrinfo(argv[1], argv[2], &hints, &srvrAI) != 0) {
                perror("* ERROR | getaddrinfo()\n");
                exit(1);
            }

            // get a socket
            if ((newFD = socket(srvrAI->ai_family, srvrAI->ai_socktype, srvrAI->ai_protocol)) == -1) {
                perror("* ERROR | socket()\n");
                exit(1);
            }

            // connect to server
            if (connect(newFD, srvrAI->ai_addr, srvrAI->ai_addrlen) == -1) {
                perror("* ERROR | connect()\n");
                exit(1);
            }

            // add to master, and add keyboard
            FD_SET(newFD, &master_set);
            FD_SET(STDIN_FILENO, &master_set);

            // initialize fdmax
            if (newFD > STDIN_FILENO)
                fdmax = newFD;
            else
                fdmax = STDIN_FILENO;

            while (1) {
                // equate
                read_set = master_set;

                if (select(fdmax + 1, &read_set, NULL, NULL, NULL) == -1) {
                    perror("* ERROR | select()");
                    exit(1);
                }

                // check server
                if (FD_ISSET(newFD, &read_set)) {
                    // read data
                    if ((bytesrecvd = recv(newFD, msg, MSG_SIZE, 0)) < 0) {
                        perror("* ERROR | recv()");
                        exit(1);
                    }
                    msg[bytesrecvd] = '\0';

                    // print
                    fprintf(stdout, "%s\n", msg);
                }

                // check keyboard
                if (FD_ISSET(STDIN_FILENO, &read_set)) {
                    // read data from stdin
                    if ((bytesrecvd = read(STDIN_FILENO, msg, MSG_SIZE)) < 0) {
                        perror("* ERROR | read()");
                        exit(1);
                    }
                    msg[bytesrecvd] = '\0';

                    // send
                    if ((send(newFD, msg, bytesrecvd, 0)) == -1) {
                        perror("* ERROR | send()");
                        exit(1);
                    }
                }
            }

            return 0;
        }

    The problem is with the part where the server recv()s data from an fd and then tries echoing it back to all clients with send(): the server just dies, without errors, and my client is left looping. :(
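
    A hedged guess at the silent death: the broadcast loop send()s to every descriptor in master_set, including the listening socket and any client that has already gone away. Writing to a socket whose peer is gone raises SIGPIPE, whose default action kills the process with no error output; and a recv() return of 0 (orderly client disconnect) is never handled, so dead descriptors stay in the set. A sketch of the usual fixes, spliced into the server above:

        #include <signal.h>

        /* once, early in main(): a dead client then makes send() fail
           with -1/EPIPE instead of killing the server via SIGPIPE */
        signal(SIGPIPE, SIG_IGN);

        /* in the read branch: 0 means "client hung up", not an error */
        if ((bytesrecvd = recv(i, msg, MSG_SIZE - 1, 0)) <= 0) {
            if (bytesrecvd < 0)
                perror("* ERROR | recv()");
            close(i);
            FD_CLR(i, &master_set);   /* also unlink it from the userlist */
            continue;
        }
        msg[bytesrecvd] = '\0';       /* MSG_SIZE - 1 above leaves room */

        /* broadcast: skip the listening socket (and optionally the sender) */
        for (j = 0; j <= fdmax; j++) {
            if (FD_ISSET(j, &master_set) && j != listFD) {
                if (send(j, msg, bytesrecvd, 0) == -1)
                    perror("* ERROR | send()");
            }
        }

    Note that the original recv(i, msg, MSG_SIZE, 0) followed by msg[bytesrecvd] = '\0' also writes one byte past the buffer whenever a full-sized message arrives.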

    Read the article
