Search Results

Search found 19752 results on 791 pages for 'cpu window'.


  • CPU cores and workers / maxservers

    - by user80666
    I'm trying to optimize my Apache and Nginx installations and have been looking for information on how to set the correct number of min/max servers and connections in Apache, and worker processes in Nginx. I was wondering whether or not Apache and Nginx take advantage of multi-core processors, and how to set the configuration in each. For example, let's say I have a 4-core processor: should I set workers to 4 in Nginx? What should I set the spare servers in Apache to?
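
    For reference, a minimal sketch of the relevant directives (the directive names are standard nginx and Apache prefork settings; the values are purely illustrative for a 4-core box):

      # nginx.conf: one worker per core is the usual starting point
      worker_processes  4;           # "auto" is available on newer nginx versions
      events {
          worker_connections  1024;  # per worker, so 4 x 1024 in total
      }

      # Apache prefork MPM: workers are processes, sized by RAM rather than cores
      <IfModule mpm_prefork_module>
          StartServers          5
          MinSpareServers       5
          MaxSpareServers      10
          MaxClients          150    # cap this so RAM is never exhausted
      </IfModule>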

    Read the article

  • average temperature of CPU

    - by lego 69
    I downloaded the program named SpeedFan (for measuring the temperature of the hardware), and I have parameters like these: GPU: 60C, HDO: 38C, Temp1: 50C, Core 0: 44C, Core 1: 44C, Core: 59C, Ambient: 0C. I know that it shows me the temperature of different parts of my hardware, but I have no idea which ones. Can you please explain them? Also, is something wrong with my laptop (because the temperature of the GPU is 60C)? Thanks in advance

    Read the article

  • Maintenance window and recovery for a large database

    - by NYSystemsAnalyst
    One of our teams is developing a database that will be somewhat large (~500GB) and grow from there (I know 500 gigs may seem small to many of you, but it will be one of the larger databases in our shop). One of the issues they are grappling with is backing up and restoring the database. Basically, the database will have several "data" tables and one table used for storing images / documents. We need to accomplish the following:
    * Be able to quickly back up and restore only the data tables (sans images) to our test server for debugging and testing purposes.
    * In the event of a catastrophic database failure, restore the data tables only to get most of the application up and running ASAP. Then, restore the images table when possible.
    * Back up the database within the allotted nightly time window (a few hours).
    My questions are: Is it possible to accomplish the first two goals while still having the images stored in the same database? If so, would we use filegroups, filestream, or something else? How do other shops back up their databases in a reasonable time window while maintaining high availability? Do you replicate to a second server and back up from there?
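
    A rough sketch of the filegroup approach the first two goals point toward (database, filegroup, and path names here are hypothetical, and the exact piecemeal-restore sequence depends on the recovery model):

      -- Move the images/documents table onto its own filegroup (hypothetical names)
      ALTER DATABASE AppDb ADD FILEGROUP Images;
      ALTER DATABASE AppDb ADD FILE (NAME = 'AppDb_Images',
          FILENAME = 'E:\Data\AppDb_Images.ndf') TO FILEGROUP Images;

      -- Nightly: a partial backup skips read-only filegroups, so marking Images
      -- READ_ONLY between loads keeps the nightly window short
      BACKUP DATABASE AppDb READ_WRITE_FILEGROUPS
          TO DISK = 'E:\Backup\AppDb_data.bak';

      -- Disaster: a piecemeal restore brings the data tables online first...
      RESTORE DATABASE AppDb FILEGROUP = 'PRIMARY'
          FROM DISK = 'E:\Backup\AppDb_data.bak' WITH PARTIAL, RECOVERY;
      -- ...and the images filegroup follows later from its own backup
      RESTORE DATABASE AppDb FILEGROUP = 'Images'
          FROM DISK = 'E:\Backup\AppDb_images.bak' WITH RECOVERY;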

    Read the article

  • How to resume XMPP groupchat window in Irssi (using bitlbee)?

    - by mcnesium
    I use Bitlbee to chat on XMPP networks within my IRC client Irssi. This works great so far, and recently I started using XMPP multi-user chats as an alternative to IRC channels. I set up a channel using chat add <account> <[email protected]> in the &bitlbee control window, set chan <room> set autojoin true and entered /join #room in the &bitlbee window to join that groupchat. It then appears as a unique Irssi window in the status bar. This seems to work ok too, but with one exception: since I idle in the channels 24/7, my Irssi has to cope with the ISP's nightly 24h DSL disconnection. After it automatically reconnects, it does kind of rejoin that XMPP groupchat, but the traffic of the groupchat does not go back to the unique Irssi window; instead it keeps flooding &bitlbee with messages from root telling me about a Groupchat Message from unknown JID <jid>: <message>, which is the traffic of the groupchat. The unique groupchat window is gone after the reconnect, and I will again have to /join #room in &bitlbee to get it back. Even worse, the window number is unused before I rejoin the groupchat, and if I get a query from any network, the window nests in that unused window spot, so I will first have to move that query away from the spot, and then move the rejoined groupchat to that window number. I want my groupchat window to resume after the reconnect just like every other IRC channel. How can I get this done? Any ideas?

    Read the article

  • CPU Configuration Issue for 2 Servers (Server 2008 R2)

    - by Bill Moreland
    I have 2 servers running the exact same Classic ASP code with Access DBs (yes, not ideal, but it is what it is, for now). 1) Xeon 5520 @ 2.27 GHz (6 GB memory) 2) Xeon E5-2620 @ 2.00 GHz (2 processors, 32 GB memory). For most pages the newer E5-2620 processes the pages 10-15% faster. On pages requiring heavy and/or multiple complicated Access stored procedures (queries), the older 5520 does a much better job. I believe the servers are configured nearly identically. My question: is it possible that the newer, multi-processor server is not as good at handling Classic ASP as the older single-processor one? Is there a configuration difference that needs to be in place that I'm missing, since I'm shooting for identical implementations?

    Read the article

  • How to make one CPU be used simultaneously by three different users

    - by beginning_steps
    As a bootstrapping start-up we are thinking of saving on IT hardware costs by making more use of the hardware that we have. As a solopreneur I have a laptop config: Intel Core 2 Duo processor, 3 GB RAM and a 250 GB hard drive. Now we are planning to increase our team to 3 members. I would like your suggestions on the next cost-effective step I can take so that I can use the computing power of the existing laptop as a kind of server, and then buy two more monitors for the new recruits to do their daily work on. They need to have different login IDs and access rights, and they don't need access to all the files/applications available on my laptop. We use the internet intensively in our day-to-day activity. Please share your experience: do you think this is a good plan, or is there another, more effective way of achieving the same result?

    Read the article

  • How to get accurate window information (dimensions, etc.) in Linux (X)?

    - by mellort
    How can I get accurate window information in Linux? I know that I can use wmctrl to get a window's size, but the actual size of the window can vary due to window decorations. I need the following information and methods:
    * precise window dimensions
    * precise available screen space (excluding panels like gnome-panel)
    * the ability to set a window to be a certain size, including decorations
    What would be the best way to do this? Thanks in advance!
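
    For what it's worth, a sketch of how the EWMH window-manager properties expose exactly this (xprop and wmctrl are standard X utilities; the window id and title below are made up):

      # Decoration sizes: left/right/top/bottom frame extents added by the WM,
      # to be combined with the client geometry that wmctrl/xwininfo report
      xprop -id 0x3000007 _NET_FRAME_EXTENTS

      # Usable screen space minus panels: one x,y,width,height tuple per desktop
      xprop -root _NET_WORKAREA

      # Move/resize to an exact size (gravity,x,y,w,h), doing the decoration
      # arithmetic by hand from the values above
      wmctrl -r "Some Window" -e '0,100,100,800,600'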

    Read the article

  • Webserver - Memory-bound or CPU-bound? [closed]

    - by JJP
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Web Sites. I'm installing a social network using Zend Framework & MySQL, with lots of plugins & queries. I want the web server & SQL server on one box. I'm trying to choose between two machines (on hetzner.de): A) Intel i7-2600, 3.4 GHz, 16 GB DDR3 RAM; B) Intel i7-920, 2.6 GHz, 24 GB DDR3 RAM. B has 50% more RAM but a 30% slower clock speed. The question is: is it obvious where the bottleneck will be? Would I ever need 24 GB of RAM, even with lots of concurrent users?

    Read the article

  • Removing Little Snitch completely (Mac OS X Snow Leopard)

    - by Mathias Bynens
    I uninstalled Little Snitch months ago. Or so I thought. When opening Console.app, I see something like this (textual log):
      21/11/09 22:05:31 com.apple.launchd[1] (at.obdev.littlesnitchd[10045]) Exited with exit code: 1
      21/11/09 22:05:31 com.apple.launchd[1] (at.obdev.littlesnitchd) Throttling respawn: Will start in 10 seconds
      21/11/09 22:05:33 Little Snitch UIAgent[10046] 2.0.4.385: m65968c1c
      21/11/09 22:05:33 Little Snitch UIAgent[10046] 2.0.4.385: m579328b9
      21/11/09 22:05:33 Little Snitch UIAgent[10046] 2.0.4.385: m41531ded
      21/11/09 22:05:33 com.apple.launchd.peruser.501[170] (at.obdev.LittleSnitchUIAgent) Throttling respawn: Will start in 10 seconds
      21/11/09 22:05:41 com.apple.launchd[1] (at.obdev.littlesnitchd[10049]) Exited with exit code: 1
      21/11/09 22:05:41 com.apple.launchd[1] (at.obdev.littlesnitchd) Throttling respawn: Will start in 10 seconds
      21/11/09 22:05:43 Little Snitch UIAgent[10050] 2.0.4.385: m65968c1c
      21/11/09 22:05:43 Little Snitch UIAgent[10050] 2.0.4.385: m579328b9
      21/11/09 22:05:43 Little Snitch UIAgent[10050] 2.0.4.385: m41531ded
      21/11/09 22:05:43 com.apple.launchd.peruser.501[170] (at.obdev.LittleSnitchUIAgent) Throttling respawn: Will start in 10 seconds
    Spotlight searches for 'little snitch' or 'littlesnitch' yield no results. Yet it seems I didn't get rid of Little Snitch entirely, since it's still using up my CPU. Any ideas?
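
    Judging by the launchd labels in that log, the leftovers are launchd jobs. A sketch of the usual cleanup (the paths are only the typical Little Snitch 2.x locations; verify with the find before deleting anything):

      # Locate leftover launchd jobs and support files
      sudo find /Library/LaunchDaemons /Library/LaunchAgents \
          ~/Library/LaunchAgents -iname '*obdev*' -o -iname '*littlesnitch*'

      # Unload, then remove, whatever turns up, e.g.:
      sudo launchctl unload -w /Library/LaunchDaemons/at.obdev.littlesnitchd.plist
      sudo rm /Library/LaunchDaemons/at.obdev.littlesnitchd.plist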

    Read the article

  • Why does my computer crash randomly?

    - by Donavon Decker
    The other day I went out to my van to get my tower, and when I opened the trunk it fell out. I brought it into the house and opened it, and everything looked ok. When I started it up, it would crash about 1-3 minutes afterwards. It did this over and over until I reseated the cooler. Everything seemed normal again, until after about 10 minutes of gameplay (any game) it would crash. I reseated my GPU and reinstalled the drivers, but I still get the same error. A while back I'd check my 'Windows Rating' periodically, and all of them were in the 6.0-6.9 range except for my hard disk (it's always been like that, so probably not relevant). Today I went in and looked, and my processor and memory were rated 5.4. I reseated my CPU and my memory, refreshed the Windows rating, and then my processor and memory went from 5.4 to 5.1. A few minutes ago I reseated them once again, and now it's back to 5.4. Note: not sure if this is relevant to the issue, but I updated my BIOS earlier today. I honestly have no idea what the issue is, but I'm getting aggravated at the problem. Here are some images of my specifications: i1271.photobucket.com/albums/jj623/donxdeck/1_zps09f0607c.jpg i1271.photobucket.com/albums/jj623/donxdeck/4_zps381cd00a.jpg i1271.photobucket.com/albums/jj623/donxdeck/3_zps54bba720.jpg i1271.photobucket.com/albums/jj623/donxdeck/2_zps945d3d72.jpg Thanks for the help

    Read the article

  • fail2ban log parsing too slow on Raspberry Pi - options? [migrated]

    - by Gordon Morehouse
    I'm running fail2ban on a Raspberry Pi at 950MHz which I cannot overclock further. The Pi is occasionally subject to SYN floods on particular ports. I've set up iptables to throttle the rate of SYNs on the port of interest; when the throttle limits are exceeded, hosts which send SYNs are dropped into the REJECT chain and the particular SYN packet which exceeded the limit is logged. fail2ban then watches for these logged SYNs and, after seeing a few, temporarily bans the host for a short time (this is a transient issue in the app I'm working with). The problem is that the SYN floods can occasionally reach rates which are too fast for fail2ban to keep up with; I'll see 20-40 log messages per second, and eventually fail2ban falls behind and becomes ineffective. To add insult to injury, it continues consuming a LOT of CPU as it tries to catch up. I have verified that DROP chained packets from hosts already banned by fail2ban are not logged, and thus do not add to its load. What are my options here? I have a few ideas, but no clear path forward. Could I make the log-parse regex "easier" so it takes fewer cycles? Would using iptables --log-prefix to put a token near the start of the log message, and/or otherwise simplifying/altering the fail2ban regex help? Here is the current fail2ban config line containing a regex: failregex = kernel:.*?SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST.*?SYN Is there a faster way for fail2ban to watch for the packets exceeding the limits than parsing kern.log? Could fail2ban be run under PyPy instead of CPython with minimal nonstandard wizardry (the OS is Raspbian 7, so, mostly Debian 7)? Is there something better than fail2ban that I could use to watch for the packets which exceed the SYN limits, and after N exceeds in X seconds, temporarily put the offending IP into the iptables DROP bucket, and take it out when the ban timer expires? Again, I'd vastly prefer a solution that uses as much software available in Debian as possible, though I can build Debian packages in a pinch.
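
    To make the --log-prefix idea concrete, a sketch (the SYN_FLOOD chain name stands in for whatever chain currently does the throttling, the prefix string is made up, and <HOST> is fail2ban's standard capture token):

      # iptables: when the throttle trips, log with a fixed, greppable tag
      iptables -A SYN_FLOOD -j LOG --log-prefix "F2B-SYN: " --log-level warning
      iptables -A SYN_FLOOD -j REJECT

      # fail2ban filter: anchoring on the tag lets non-matching lines fail fast,
      # instead of backtracking through the whole kernel message
      failregex = kernel:.*F2B-SYN: .*SRC=<HOST>\b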

    Read the article

  • does my machine configuration make sense?

    - by user1227914
    I couldn't think of a better place to ask this question, so here it goes. We're putting together a dedicated server for a website that will initially host the web server and the MySQL database. As the website grows, we'll move the database to a different server, and this machine will eventually only serve the actual website. So the question is: does my configuration look okay? It's the first time I'm building a server from scratch, so I want to make sure I don't combine components that don't fit or something. Things like: do the drives I picked work for the hot swap, etc. What do you guys think? Am I good to go with this configuration? :)
    Chassis: Supermicro SuperServer 6016T-MTHF (6x DDR3 SDRAM - ECC DIMM 240-pin, 2x LGA1366 socket, power provided: 600 Watt, 4 (free) x hot-swap - 3.5")
    CPU: Intel BX80614E5620 Xeon E5620 processor - 4 core, 2.40GHz, LGA 1366, 5.86GT/s QPI, 12MB cache, 64-bit, 80W, HyperThreading
    Memory: Crucial CT51272BB1339 4GB PC10600 DDR3 memory - 1333MHz, ECC, registered, 1x4096MB (possibly 3 or 4 of them)
    Hard drives: Western Digital WD2002FAEX Caviar Black hard drive - 2TB, 3.5", SATA 6Gbps, 7200 RPM, 64MB (possibly 2 or 3)
    Thank you very much for any professional advice :)

    Read the article

  • Computer restarts without warning; code bcc116

    - by Robert C.
    Processor: Intel i5 4430, 4 cores @ 3GHz
    Motherboard: MSI H87-G41
    Graphics card: Nvidia GTX 760
    Power supply: EPS-750 CM
    RAM: 8GB
    I bought a new assembled gaming PC which worked fine for a few days. Then it started rebooting without warning. After it restarts, Windows 7 gives me a BCCode 116 error. Apparently it's something to do with my video card, either overheating or wrong drivers. I've installed the latest driver from Nvidia for my graphics card. Since the machine is brand new it can't be dust; I'm running it with its lid open to see if the problem persists. I'm also running Prime95 now to see if it tells me anything else. Using Core Temp, it tells me that my CPU reaches up to 95°C with the blend stress test from Prime95. Aaaand it just peaked at 100°C. Of course it doesn't reach these temperatures at all while idle/gaming. I'm going to let Prime95 run for a night to see what happens. Until then, does anyone know what I should do next?

    Read the article

  • Linux Scheduler (not using all cores on multi-core machine) RHEL6

    - by User512
    I'm seeing strange behavior on one of my servers (running RHEL 6). There seems to be something wrong with the scheduler. Here's the test program I'm using:

      #include <stdio.h>
      #include <unistd.h>
      #include <stdlib.h>

      void RunClient(int i) {
        printf("Starting client %d\n", i);
        while (true) { }
      }

      int main(int argc, char** argv) {
        for (int i = 0; i < 4; ++i) {
          pid_t p_id = fork();
          if (p_id == -1) {
            perror("fork");
          } else if (p_id == 0) {
            RunClient(i);
            exit(0);
          }
        }
        return 0;
      }

    This machine has a lot more than 4 cores, so we'd expect all processes to be running at 100%. When I check top, the CPU usage varies. Sometimes it's split (100%, 33%, 33%, 33%), other times it's split (100%, 100%, 50%, 50%). When I try this test on another server of ours (running RHEL 5), there are no issues (it's 100%, 100%, 100%, 100%) as expected. What's causing this and how can I fix it? Thanks
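
    If it helps isolate the problem, pinning each child to its own core takes the scheduler's placement decisions out of the picture: if the pinned variant runs at 4x100% while the original doesn't, the scheduler (or its power/topology settings) is confirmed as the variable. A minimal sketch, assuming Linux and glibc's CPU_SET macros:

      // Hypothetical variant of the same test that pins child i to core i
      // (Linux-specific; compile with g++, which defines _GNU_SOURCE, as
      // sched_setaffinity requires it).
      #include <sched.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      int main() {
        for (int i = 0; i < 4; ++i) {
          pid_t p = fork();
          if (p == 0) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(i, &set);                  // restrict this child to core i
            if (sched_setaffinity(0, sizeof(set), &set) != 0)
              perror("sched_setaffinity");
            printf("client %d pinned to core %d\n", i, i);
            while (true) { }                   // busy-loop, as in the original
          }
        }
        return 0;
      }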

    Read the article

  • 5 year old server upgrade

    - by rizzo0917
    I am looking to upgrade a server for a web app. Currently the application is running very sluggishly. We've made some adjustments to MySQL (that's another issue in itself), and arranged for the heaviest queries to run against a copy of the database on another server we have as a backup; however, this will not last much longer and we are looking to upgrade. Currently the server's CPUs are (4) Intel(R) XEON(TM) CPU 2.00GHz, with 1 GB of RAM. The database is 442.5 MiB, with about 1,743,808 records. There are two parts of the program: one, side A, inserts and updates most of the data; side B reads the data and does some minor updates. Currently our biggest day for side A is 800 users (of 40,000 users all year) entering data into the system, and our side B load is currently unknown, though we have a total of 1000 clients. The system is most likely going to cap out at 5000 side B clients, with about 300,000 side A users a year. The current database is 5 years old, so we can expect it to grow pretty rapidly, possibly doubling each year (we can most likely archive older records if it comes to that). So with that being said, should we get a server for each side of the app, side A being the master and side B being the slave, with any updates made on side B routed to side A? So the question is, should I get 2 of these or 1: 2 x Intel Nehalem Xeon E5520 2.26Ghz (8 cores), 12GB DDRIII memory, 500GB SATAII HDD, 100Mbps port speed. And naturally I would need a redundant backup, so it could potentially be 4 of them.

    Read the article

  • New i7 is slower than old Core 2 Duo? Why? (BIOS programming)

    - by DrChase
    I've always wondered why the companies who make BIOSes either have terrible engineering psychologists or none at all. But without wasting your time further with random speculative questions, my real question is as follows: why does my new computer run slower than my old computer?
    Old computer: Intel Core 2 Duo CPU @ 3.0 GHz (stock), 4GB OCZ DDR2 800 RAM, Wolfdale E8400, nVidia GeForce 8600 GT.
    New computer: Intel Core i7 920 @ ~3.2 GHz, 6 GB OCZ DDR3 1066 RAM, EVGA X58 SLI LE motherboard, nVidia GeForce GTX 275.
    Vista x64 Home Premium on both. "Runs slower" is defined as:
    * poorer FPS performance in the same games and applications
    * takes longer to start up
    * general desktop usage (checking email, opening files, running exes) is noticeably slower
    At first I thought I must not have set something up in the BIOS. But I have no idea how to set anything in the BIOS except for "Dummy O.C.", which brought me to ~3.2 GHz. Beyond that I have no idea. I've been reading about "RAM timing" and voltages and the like, but I really have no idea about that stuff. I'm a psychologist with a basic understanding of building his own computers, not a computer scientist. Can someone give me some wisdom that might guide me to the reason my new computer is worse than my older one? I'm sorry if this is a bad question or not appropriate here. I'm just pretty frustrated now and you all have helped me in the past, so I figured I'd give it a shot. Thanks for your time.

    Read the article

  • SFML fail to load image as texture

    - by zyeek
    I have come to a problem with the code below ... Using SFML 2.0

      #include <SFML/Graphics.hpp>
      #include <iostream>
      #include <list>

      int main()
      {
          float speed = 5.0f;

          // create the window
          sf::RenderWindow window(sf::VideoMode(sf::VideoMode::getDesktopMode().height - 300, 800), "Bricks");

          // Set game window position on the screen
          window.setPosition( sf::Vector2i(sf::VideoMode::getDesktopMode().width/4 + sf::VideoMode::getDesktopMode().width/16 , 0) );

          // Allow library to accept repeatitive key presses (i.e. holding key)
          window.setKeyRepeatEnabled(true);

          // Hide mouse cursor
          //window.setMouseCursorVisible(false);

          // Limit 30 frames per sec; the minimum for all games
          window.setFramerateLimit(30);

          sf::Texture texture;
          if (!texture.loadFromFile("tile.png", sf::IntRect(0, 0, 125, 32)))
          {
              std::cout<<"Could not load image\n";
              return -1;
          }

          // Empty list of sprites
          std::list<sf::Sprite> spriteContainer;

          bool gameFocus = true;

          // run the program as long as the window is open
          while (window.isOpen())
          {
              sf::Vector2i mousePos = sf::Mouse::getPosition(window);

              // check all the window's events that were triggered since the last iteration of the loop
              sf::Event event;
              while (window.pollEvent(event))
              {
                  float offsetX = 0.0f, offsetY = 0.0f;

                  if(event.type == sf::Event::GainedFocus)
                      gameFocus = !gameFocus;
                  else if(event.type == sf::Event::LostFocus)
                      gameFocus = !gameFocus;

                  if(event.type == sf::Event::KeyPressed)
                  {
                      if (event.key.code == sf::Keyboard::Space)
                      {
                          if(gameFocus)
                          {
                              // Create sprite and add features before putting it into container
                              sf::Sprite sprite(texture);
                              sprite.scale(.9f,.7f);
                              sf::Vector2u textSize = texture.getSize();
                              sprite.setPosition(sf::Vector2f(mousePos.x-textSize.x/2.0f, mousePos.y - textSize.y/2.0f));
                              spriteContainer.push_front(sprite);
                          }
                      }
                      if(event.key.code == sf::Keyboard::P)
                          std::cout << spriteContainer.size() << std::endl;
                      if( event.key.code == sf::Keyboard::W )
                          offsetY -= speed;
                      if( event.key.code == sf::Keyboard::A )
                          offsetX -= speed;
                      if( event.key.code == sf::Keyboard::S )
                          offsetY += speed;
                      if( event.key.code == sf::Keyboard::D )
                          offsetX += speed;
                  }

                  // "close requested" event: we close the window
                  if (event.type == sf::Event::Closed || event.key.code == sf::Keyboard::Escape)
                      window.close();

                  // Move all sprites synchronously
                  for (std::list<sf::Sprite>::iterator sprite = spriteContainer.begin(); sprite != spriteContainer.end(); ++sprite)
                      sprite->move(offsetX, offsetY);
                  //sprite.move(offsetX,offsetY);
              }

              // clear the window with black color
              window.clear(sf::Color::Black);

              // draw everything here...
              // window.draw(...);

              // Draw all sprites in the container
              for (std::list<sf::Sprite>::iterator sprite = spriteContainer.begin(); sprite != spriteContainer.end(); ++sprite)
                  window.draw(*sprite);

              // end the current frame
              window.display();
          }

          return 0;
      }

    A couple of weeks ago it worked flawlessly, as I expected, but now that I've come back to it I am having problems importing the image as a texture, "tile.png". I don't understand why this is even happening; the only message I get via the terminal is "Could not load image ..." followed by a bunch of random characters. My libraries are for sure working, but now I am not sure why the image is not loading. My image is in the same directory as my .h and .cpp files. This is an irritating problem that keeps coming up for some reason and is always a problem to fix. I import my libraries via my own directory "locals", which contains many APIs, but I specifically get SFML, and it is set up appropriately, as I am able to open a window and do many other things.

    Read the article

  • Setting up a minimalist linux environment

    - by Nate
    All right, I've been messing around with various Linux distros and a variety of window managers (I seem to change operating systems like most people change their pants), and I've gotten to the point where I know what I want but I'm not sure of the best way to set it up. Here's what I want out of my programming machine:
    * I don't want a status bar. I don't want a menu bar. When there are no windows open, the screen should show my desktop background and nothing else. I'll use alt+f2 to run things, and my shell prompt will tell me my battery life and the time. I'll open network controls and volume controls when I need them; no need for them to pollute the screen all the time.
    * I want a good, simple terminal emulator. I'll be using it with tmux. It should have no title bar and, if possible, no app frame. It's ok if I have to run it in full screen mode to remove the app frame, but only if it still plays nicely with alt-tab and workspaces.
    * I want a dirt-simple window manager. It needs to support transparency: I don't have a lot of screen real estate and I often overlay the terminal on the browser and type out commands. I don't want a tiling-only system, for the above reason. Bonus points for tiling and overlaying.
    * I'd like multiple workspaces. I prefer to have one GUI per workspace. If I could 'pin' the terminal emulator to always show up in each workspace, that's bonus points. If not, I can have a terminal emulator in each workspace attached to the same tmux instance.
    * I'd like a way to set up a keypress that always takes me to the current open terminal emulator. Currently, 90% of the time I only have two windows open: the terminal emulator and something else. In this scenario, alt-tab works like a toggle between the two. If I have another GUI open (like a developer window with a web browser), this throws a wrench in my workflow. I'd like a way to assign, for example, 'super-T' to switch to the first open terminal emulator. Bonus points if I can also assign 'super-B' (or whatever) to switch to the first open browser.
    So far I've been messing around with GNOME and tweaking it heavily to match my preferences, but that seems like overkill and I can never get it quite right. I've toyed with xmonad, but it's more for handling many windows, and I usually only have the two. I'm also considering fluxbox, but I was wondering if any of you minimalists out there had suggestions that might better match my workflow. I'm sick of fighting the window manager; I just want it to get out of my way. Edit: To make things clear, I am not considering switching to a Mac/Windows environment. I find programming in Windows to be a bore, and I have no interest in buying new (read: Mac) hardware. Thanks! -Nate

    Read the article

  • Exploring TCP throughput with DTrace

    - by user12820842
    One key measure to use when assessing TCP throughput is the amount of unacknowledged data in the pipe. This is sometimes termed the Bandwidth Delay Product (BDP) (note that BDP is often used more generally as the product of the link capacity and the end-to-end delay). In DTrace terms, the amount of unacknowledged data in bytes for the connection is the difference between the next sequence number to send and the lowest unacknowledged sequence number (tcps_snxt - tcps_suna). According to the theory, when the number of unacknowledged bytes for the connection is less than the receive window of the peer, the path bandwidth is the limiting factor for throughput. In other words, if we can fill the pipe without the peer TCP complaining (by virtue of its window size reaching 0), we are purely bandwidth-limited. If the peer's receive window is too small, however, the sending TCP has to wait for acknowledgements before it can send more data. In this case the round-trip time (RTT) limits throughput. In such cases the effective throughput limit is the window size divided by the RTT; e.g. if the window size is 64K and the RTT is 0.5sec, the throughput is 128K/s.

    So a neat way to visually determine if the receive window of clients may be too small is to compare the distribution of BDP values for the server versus the client's advertised receive window. If the BDP distribution overlaps the send window distribution such that it is to the right (or lower down in DTrace, since quantizations are displayed vertically), it indicates that the amount of unacknowledged data regularly exceeds the client's receive window, so it is possible that the sender may have more data to send but is blocked by a zero-window on the client side. In the following example, we compare the distribution of BDP values to the receive window advertised by the receiver (10.175.96.92) for a large file download via http.

      # dtrace -s tcp_tput.d
      ^C
        BDP(bytes)    10.175.96.92    80

               value  ------------- Distribution ------------- count
                  -1 |                                         0
                   0 |                                         6
                   1 |                                         0
                   2 |                                         0
                   4 |                                         0
                   8 |                                         0
                  16 |                                         0
                  32 |                                         0
                  64 |                                         0
                 128 |                                         0
                 256 |                                         3
                 512 |                                         0
                1024 |                                         0
                2048 |                                         9
                4096 |                                         14
                8192 |                                         27
               16384 |                                         67
               32768 |@@                                       1464
               65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  32396
              131072 |                                         0

        SWND(bytes)   10.175.96.92    80

               value  ------------- Distribution ------------- count
               16384 |                                         0
               32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 17067
               65536 |                                         0

    Here we have a puzzle. We can see that the receiver's advertised window is in the 32768-65535 range, while the amount of unacknowledged data in the pipe is largely in the 65536-131071 range. What's going on here? Surely in a case like this we should see zero-window events, since the amount of data in the pipe regularly exceeds the window size of the receiver. Yet we don't see any zero-window events; the SWND distribution displays no 0 values, staying within the 32768-65535 range. The explanation is straightforward enough: TCP window scaling is in operation for this connection. The Window Scale TCP option is used on connection setup to allow a connection to advertise (and have advertised to it) a window greater than 65536 bytes. In this case the scaling shift is 1, so this explains why the SWND values are clustered in the 32768-65535 range rather than the 65536-131071 range: the SWND value needs to be multiplied by two, since the receiver is also scaling its window by a shift factor of 1. Here's the simple script that compares BDP and SWND distributions, fixed to take account of window scaling.
      #!/usr/sbin/dtrace -s

      #pragma D option quiet

      tcp:::send
      / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
      {
              @bdp["BDP(bytes)", args[2]->ip_daddr, args[4]->tcp_sport] =
                  quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
      }

      tcp:::receive
      / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
      {
              @swnd["SWND(bytes)", args[2]->ip_saddr, args[4]->tcp_dport] =
                  quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
      }

    And here's the fixed output.

      # dtrace -s tcp_tput_scaled.d
      ^C
        BDP(bytes)    10.175.96.92    80

               value  ------------- Distribution ------------- count
                  -1 |                                         0
                   0 |                                         39
                   1 |                                         0
                   2 |                                         0
                   4 |                                         0
                   8 |                                         0
                  16 |                                         0
                  32 |                                         0
                  64 |                                         0
                 128 |                                         0
                 256 |                                         3
                 512 |                                         0
                1024 |                                         0
                2048 |                                         4
                4096 |                                         9
                8192 |                                         22
               16384 |                                         37
               32768 |@                                        99
               65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  3858
              131072 |                                         0

        SWND(bytes)   10.175.96.92    80

               value  ------------- Distribution ------------- count
                 512 |                                         0
                1024 |                                         1
                2048 |                                         0
                4096 |                                         2
                8192 |                                         4
               16384 |                                         7
               32768 |                                         14
               65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1956
              131072 |                                         0

    Read the article

  • How to keep a window from overlapping into another workspace?

    - by user1477
    It's annoying to me how, if a window is even a couple of pixels off the right edge of my screen, when I switch to the workspace on the right the system thinks the window is there. The Unity launcher bar gets hidden because of it, and switching to that window keeps you on the current workspace, where you can't even see the window because it's only a couple of pixels. KDE seems to handle this much better: when you switch to another workspace, the window just isn't there. But I don't want KDE. Is there any way of getting that same behavior without the switch?

    Read the article

  • Distributed Computing Framework (.NET) - Specifically for CPU Intensive operations

    - by StevenH
    I am currently researching the options that are available (both open source and commercial) for developing a distributed application. "A distributed system consists of multiple autonomous computers that communicate through a computer network." - Wikipedia. The application is focused on distributing highly CPU-intensive operations (as opposed to data-intensive ones), so I'm sure MapReduce solutions don't fit the bill. Any framework that you can recommend (plus a brief summary of any experience or comparison to other frameworks) would be greatly appreciated. Thanks.

    Read the article

  • C# multi CPU for ThreadPool.QueueUserWorkItem

    - by ikurtz
    I have a program that uses: ThreadPool.QueueUserWorkItem(new WaitCallback(FireAttackProc), fireResult); On Windows 7 and Vista it works fine. When I try to run it on XP, the result is a bit different from the others. I was just wondering: in order to execute QueueUserWorkItem properly, do I need a dual-CPU system? The XP machine I tested on had .NET 3.5 installed. Inputs most welcome.

    Read the article

  • SQL Server high CPU and I/O activity database tuning

    - by zapping
    Our application has tended to run very slowly recently. Debugging and tracing found that the process shows high CPU cycles and SQL Server shows high I/O activity. Can you please give some guidance as to how it can be optimised? The application is now about a year old and the database file sizes are not very big. The database is set to auto-shrink. It's running on Windows 2003 and SQL Server 2005, and the application is a web application coded in C# (VS2005).
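
    As a starting point, two things worth checking from that description: auto-shrink (a common cause of I/O churn and fragmentation) and which queries actually burn the CPU. A sketch against the SQL Server 2005 DMVs (the database name is hypothetical):

      -- Auto-shrink repeatedly shrinks and regrows files; turning it off
      -- removes one common source of background I/O
      ALTER DATABASE MyAppDb SET AUTO_SHRINK OFF;

      -- Top CPU consumers since the plan cache was last cleared
      SELECT TOP 10
          qs.total_worker_time / qs.execution_count AS avg_cpu,
          qs.total_logical_reads / qs.execution_count AS avg_reads,
          qs.execution_count,
          st.text
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_worker_time DESC;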

    Read the article

  • how to use quad core CPU in application

    - by Mayank
    To use all the cores of a quad-core processor, what do I need to change in my code? Is it about adding multi-threading support, or is that taken care of by the OS itself? I am running FreeBSD and the language I am using is C++. I want to give complete CPU cycles to my application, at least 90%.
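
    The OS will schedule across cores, but a single-threaded process can only ever occupy one of them; to keep all four busy, the work itself has to be split across threads (or processes). A minimal sketch in C++ (std::thread is C++11; on older toolchains the same shape works with pthreads, and the loop here is just a stand-in for real computation):

      #include <thread>
      #include <vector>
      #include <cstdio>

      // Hypothetical workload: carve one big loop into a chunk per core.
      void work(int id, long begin, long end) {
          volatile long acc = 0;                 // stand-in for real computation
          for (long i = begin; i < end; ++i) acc += i;
          std::printf("worker %d done\n", id);
      }

      int main() {
          unsigned n = std::thread::hardware_concurrency(); // 4 on a quad core
          if (n == 0) n = 4;
          const long total = 400000000L;
          std::vector<std::thread> pool;
          for (unsigned i = 0; i < n; ++i)
              pool.emplace_back(work, i, total / n * i, total / n * (i + 1));
          for (auto& t : pool) t.join();         // all cores busy until here
          return 0;
      }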

    Read the article
