Search Results

Search found 3775 results on 151 pages for 'higher education'.

Page 131/151 | < Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >

  • load-causing processes disappearing from "top"; ps -o pcpu shows bogus numbers

    - by Alec Matusis
    I administer a large number of servers, and I have this problem only with Ubuntu 10.04 LTS: I run a server under normal load (say load average 3.0 on an 8-core server). The "top" command shows processes taking a certain % of CPU that cause this load average, say PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 11008 mysql 20 0 25.9g 22g 5496 S 67 76.0 643539:38 mysqld ps -o pcpu,pid -p11008 %CPU PID 53.1 11008, and everything is consistent. Then all of a sudden, the process causing the load average disappears from "top", but the process continues to run normally (albeit with a slight performance decrease), and the system load average becomes somewhat higher. The output of ps -o pcpu becomes bogus: # ps -o pcpu,pid -p11008 %CPU PID 317910278 1587 This has happened to at least 5 different servers (different brand new IBM System X hardware), each running different software: one httpd 2.2, one mysqld 5.1, and one Twisted Python TCP server. Each time the kernel was between 2.6.32-32-server and 2.6.32-40-server. I updated some machines to 2.6.32-41-server, and it has not happened on those yet, but the bug is rare (once every 60 days or so). This is from an affected machine: top - 10:39:06 up 73 days, 17:57, 3 users, load average: 6.62, 5.60, 5.34 Tasks: 207 total, 2 running, 205 sleeping, 0 stopped, 0 zombie Cpu(s): 11.4%us, 18.0%sy, 0.0%ni, 66.3%id, 4.3%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 74341464k total, 71985004k used, 2356460k free, 236456k buffers Swap: 3906552k total, 328k used, 3906224k free, 24838212k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 805 root 20 0 0 0 0 S 3 0.0 1493:09 fct0-worker 982 root 20 0 0 0 0 S 1 0.0 111:35.05 fioa-data-groom 914 root 20 0 0 0 0 S 0 0.0 884:42.71 fct1-worker 1068 root 20 0 19364 1496 1060 R 0 0.0 0:00.02 top Nothing causing high load is showing in top, but I have two highly loaded mysqld instances on it that suddenly show crazy %CPU: #ps -o pcpu,pid,cmd -p1587 %CPU PID CMD 317713124 1587 /nail/encap/mysql-5.1.60/libexec/mysqld and #ps -o pcpu,pid,cmd -p1624 %CPU PID CMD 2802 1624 /nail/encap/mysql-5.1.60/libexec/mysqld Here are the numbers from # cat /proc/1587/stat 1587 (mysqld) S 1212 1088 1088 0 -1 4202752 14307313 0 162 0 85773299069 4611685932654088833 0 0 20 0 52 0 3549 27255418880 5483524 18446744073709551615 4194304 11111617 140733749236976 140733749235984 8858659 0 552967 4102 26345 18446744073709551615 0 0 17 5 0 0 0 0 0 The 14th and 15th numbers, according to man proc, are supposed to be utime %lu Amount of time that this process has been scheduled in user mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). This includes guest time, guest_time (time spent running a virtual CPU, see below), so that applications that are not aware of the guest time field do not lose that time from their calculations. stime %lu Amount of time that this process has been scheduled in kernel mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). On a normal server, these numbers advance every time I check /proc/PID/stat. On a buggy server, these numbers are stuck at a ridiculously high value like 4611685932654088833 and never change. Has anyone encountered this bug?
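
    A quick way to sanity-check those two fields without going through ps is to read /proc/PID/stat directly and convert the tick counts yourself. The sketch below is only a rough illustration (Python, with PID 11008 reused from the example above): it splits the stat line after the closing parenthesis of the command name so that utime and stime land at fixed offsets, then divides by sysconf(SC_CLK_TCK). On a healthy server the printed seconds should advance between the two reads; on an affected server they stay frozen at the huge value.

        import os, time

        def cpu_seconds(pid):
            # Fields 14 (utime) and 15 (stime) of /proc/<pid>/stat, in clock ticks.
            with open("/proc/%d/stat" % pid) as f:
                data = f.read()
            # The command name (field 2) may contain spaces, so split after its ')'.
            rest = data.rsplit(")", 1)[1].split()
            utime, stime = int(rest[11]), int(rest[12])
            tick = float(os.sysconf("SC_CLK_TCK"))
            return utime / tick, stime / tick

        pid = 11008               # example PID from the question
        print(cpu_seconds(pid))
        time.sleep(5)
        print(cpu_seconds(pid))   # should have advanced on a healthy server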

    Read the article

  • Design Technique: How to design a complex system for processing orders, products and units.

    - by Shyam
    Hi, Programming is fun: I learned that by trying out simple challenges, reading some books and following some tutorials. I am able to grasp the concepts of writing with OO (I do so in Ruby), and write a bit of code myself. What bugs me though is that I feel I am re-inventing the wheel: I haven't had a formal education or found a book (a free one, that is) that explains the whys instead of the hows, and I've learned from the A-team that it is the plan that makes it come together. So, armed with my nuby Ruby skills, I decided I wanted to program a virtual store. I figured out the following: My virtual Store will have: Products and Services, Inventories, Orders and Shipping, and Customers. Now this isn't complex at all. With the help of some cool tools (CMapTools), I drew out some concepts, but quickly enough (thanks to my lack of experience in designing), my design started to bite me. My very first product line was virtual "laptops". So, I created a class (Ruby): class Product attr_accessor :name, :price def initialize(name, price) @name = name @price = price end end which can be instantiated by doing (IRb) x = Product.new("Banana Pro", 250) Since I want my virtual customers to be able to purchase more than one product, or various types, I figured I needed some kind of "Order" mechanism. class Order def initialize(order_no) @order_no = order_no @line_items = [] end def add_product(myproduct) @line_items << myproduct end def show_order() puts @order_no @line_items.each do |x| puts x.name.to_s + "\t" + x.price.to_s end end end that can be instantiated by doing (IRb) z = Order.new(1234) z.add_product(x) z.show_order Splendid: I now have a very simple ordering system that allows me to add products to an order. But here comes my real question. What if I have three models of my product (economy, business, showoff)? Or want my products to be composed of separate units (bigger screen, nicer keyboard, different OS)? Surely I could make them three separate products, or add complexity to my product class, but what I am looking for are best practices for designing a flexible product object that can be used in the real world, to facilitate a complex system. My apologies if my grammar and my spelling contain errors, as English is not my first language, though I took the time to check and translate as well as I could! Thank you for your answers, comments and feedback!
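
    One common way to handle the "models and options" question is to stop treating a product as one flat object and compose it instead: a catalogue-level product, a set of orderable variants (economy, business, showoff), and a list of component units that each carry their own price. The sketch below is only an illustration of that idea, written in Python rather than Ruby and using invented names (Variant, Component); the same shapes translate directly into Ruby classes, and in the Ruby translation the Order class above can keep accepting anything that responds to name and price.

        class Component:
            def __init__(self, name, price):
                self.name, self.price = name, price

        class Variant:
            """One concrete, orderable configuration of a product."""
            def __init__(self, name, base_price, components=()):
                self.name = name
                self.base_price = base_price
                self.components = list(components)

            def price(self):
                # Variant base price plus every fitted component.
                return self.base_price + sum(c.price for c in self.components)

        class Product:
            """Catalogue entry grouping the variants of one product line."""
            def __init__(self, name):
                self.name = name
                self.variants = []

            def add_variant(self, variant):
                self.variants.append(variant)
                return variant

        laptop = Product("Banana Pro")
        laptop.add_variant(Variant("economy", 250))
        laptop.add_variant(Variant("showoff", 250, [Component("bigger screen", 80),
                                                    Component("nicer keyboard", 20)]))
        print([(v.name, v.price()) for v in laptop.variants])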

    Read the article

  • serving mp3s to mobile devices is flooding nginx with partial requests

    - by drumfire
    I am serving mp3s with a minimalistic nginx server. What I see in my log files is that there are a lot of requests, in particular from AppleCoreMedia and sometimes Android user agents, that flood the server with short requests. Sometimes they keep requesting the same partial content for a very long time; sometimes more than an hour. For example: "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)" "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)" "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)" [...] I also get a lot, but not as many, of these: "-" 400 0 "-" "-" 400 0 "-" The IP addresses are always from clients that start downloading shortly after that request; usually they have roughly the same UserAgent as in the first example. I have enabled server throttling and connection limits in nginx to at least somewhat limit the huge amount of log entries from the same IPs. There was a performance issue when I saw the same behaviour on the previous server, which used Apache. I installed nginx on a better server and then moved the site. When Apache could no longer effectively handle the connections from the increasing number of clients, that server was effectively DDoSed. There was no bandwidth issue with already connected clients, and I don't know whether the already connected clients were using more than one connection at a time. Please tell me: Are clients that appear to get stuck on a download a Bad Thing™? I heard people say their mobile bandwidth use was much higher than they could account for; I'm thinking this type of client behaviour can account for that. And it costs us more bandwidth too. Which up-to-date alternatives exist out there that can handle serving this type of data better than plain HTTP? Useful general insights for someone who just came into this field straight out of the late 90s are also welcome. :-)
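
    To put numbers on how much of this comes from a handful of misbehaving clients, it can help to count the 206 responses per client and file straight from the access log. The sketch below is a rough example in Python; it assumes the default "combined" log format and an example log path, so the regex will need adjusting if the format differs.

        import re
        from collections import Counter

        # Matches the default "combined" format: ip ident user [date] "request" status size ...
        line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3}) (\d+)')

        partials = Counter()
        with open("/var/log/nginx/access.log") as log:   # example path
            for line in log:
                m = line_re.match(line)
                if not m:
                    continue
                ip, method, path, status, size = m.groups()
                if status == "206":
                    partials[(ip, path)] += 1

        # Clients re-requesting the same file hundreds of times stand out immediately.
        for (ip, path), count in partials.most_common(10):
            print(count, ip, path)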

    Read the article

  • UCARP: prevent the original master from taking over the VIP when it comes back after failure?

    - by quanta
    Keepalived can do this by combining the nopreempt option and the BACKUP state on the both nodes: Prevent VRRP Master from becoming Master once it has failed Prevent master to fall back to master after failure How about the UCARP? Name : ucarp Arch : x86_64 Version : 1.5.2 Release : 1.el5.rf Size : 81 k Repo : installed Summary : Common Address Redundancy Protocol (CARP) for Unix URL : http://www.ucarp.org/ License : BSD Description: UCARP allows a couple of hosts to share common virtual IP addresses in order : to provide automatic failover. It is a portable userland implementation of the : secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's : alternative to the patents-bloated VRRP). : Strong points of the CARP protocol are: very low overhead, cryptographically : signed messages, interoperability between different operating systems and no : need for any dedicated extra network link between redundant hosts. If I don't use the --preempt option and set the --advskew to the same value, both nodes become master. /etc/sysconfig/carp/vip-010.conf # Virtual IP configuration file for UCARP # The number (from 001 to 255) in the name of the file is the identifier # $Id: vip-001.conf.example 1527 2004-07-09 15:23:54Z dude $ # Set the same password on all mamchines sharing the same virtual IP PASSWORD="pa$$w0rd" # You are required to have an IPADDR= line in the configuration file for # this interface (so no DHCP allowed) BIND_INTERFACE="eth0" # Do *NOT* use a main interface for the virtual IP, use an ethX:Y alias # with the corresponding /etc/sysconfig/network-scripts/ifcfg-ethX:Y file # already configured and ith ONBOOT=no VIP_INTERFACE="eth0:0" # If you have extra options to add, see "ucarp --help" output # (the lower the "-k <val>" the higher priority and "-P" to become master ASAP) OPTIONS="-z -k 255" /etc/sysconfig/network-scripts/ifcfg-eth0:0 DEVICE=eth0:0 ONBOOT=no BOOTPROTO= IPADDR=192.168.6.8 NETMASK=255.255.255.0 USERCTL=yes IPV6INIT=no node 1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether c6:9b:8e:af:a7:69 brd ff:ff:ff:ff:ff:ff inet 192.168.6.192/24 brd 192.168.6.255 scope global eth0 inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth0:0 inet6 fe80::c49b:8eff:feaf:a769/64 scope link valid_lft forever preferred_lft forever node 2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether 00:30:48:f7:0f:81 brd ff:ff:ff:ff:ff:ff inet 192.168.6.38/24 brd 192.168.6.255 scope global eth1 inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth1:0 inet6 fe80::230:48ff:fef7:f81/64 scope link valid_lft forever preferred_lft forever

    Read the article

  • How much did it cost our competitor to DDoS us at 50 Gbps for two weeks?

    - by MiniQuark
    I know that this question may sound like an invalid serverfault question, but I believe that it's quite valid: the amount of time and effort that a sysadmin should spend on DDoS protection is a direct function of typical DDoS prices. Let me rephrase this: protecting a web site against small attacks is one thing, but resisting 50 Gbps of UDP flood is another and requires time & money. Deciding whether or not to spend that time & money depends on whether such an attack is likely or not, and this in turn depends on how cheap and simple such an attack is for the attacker. So here's the full story: our company has been the victim of a massive DDoS attack (over 50 Gbps of UDP traffic, around the clock for 2 weeks). We are pretty sure that it's one of our competitors, and we actually know which one, because we were the only two remaining competitors on a very big request for proposal, and the DDoS attack magically stopped the day we won (double hurray, by the way)! These people have proved in the past that they are very dishonest, but we know that they are not technical at all, so we believe that they simply paid for some botnet DDoS service. I would like to know how much these services typically cost, for such a large-scale attack. Please do not post any links to such services; I would really hate to give these people any publicity. I understand that a hacker could very well do this for free, but what's a typical price for such an attack if our competitors paid for it through some kind of botnet service? It is really starting to scare me (if we're talking thousands of dollars here, then I am really going to freak out: who knows, they might just hire a hit-man one day?). Of course we filed a complaint, but the police say that they cannot do much about it (DDoS attacks are virtually untraceable, so they say), and our suspicions are not enough to justify them raiding our competitor's offices to search for proof. For your information, we have now changed our infrastructure to be able to sustain such attacks: we now use a major CDN service so that our servers are not directly affected by DDoS attacks. Requests for dynamic pages do get proxied to our servers, but for low-level attacks (UDP or SYN floods, for example) we only receive legitimate traffic, so we're fine. If they decide to launch higher-level attacks (HTTP flood or slowloris attacks, for example), most of the load should be handled by the CDN... at least I hope so! Thank you very much for your help.
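
    Just to give the volume a sense of scale (this says nothing about the price, only about how much traffic was involved), a sustained 50 Gbps flood over two weeks works out to several petabytes:

        # Back-of-envelope traffic volume of a sustained 50 Gbps flood over two weeks.
        gbps = 50
        seconds = 14 * 24 * 3600                  # two weeks
        total_bits = gbps * 1e9 * seconds
        print(total_bits / 8 / 1e15, "PB")        # roughly 7.6 PB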

    Read the article

  • Trying to style the first tbody differently than the others without introducing another class.

    - by mwiik
    I have a table with multiple tbody's, each of which has a classed row, and I want it so that the classed row in the first tbody has style differences, but am unable to get tbody:first-child to work in any browser. Perhaps I am missing something, or maybe there is a workaround. Ideally, I would like to provide the programmers with a single tbody section they can use as a template, but will otherwise have to add a class to the first tbody, making for an extra test in the programming. The html is straightforward: <tbody class="subGroup"> <tr class="subGroupHeader"> <th colspan="8">All Grades: Special Education</th> <td class="grid" colspan="2"><!-- contains AMO line --></td> <td><!-- right 100 --></td> </tr> <tr>...</tr> <!-- several more rows of data --> </tbody> There are several tbody's per table. I want to style the th and td's within tr.subGroupHeader in the very first tbody differently than the rest. Just to illustrate, I want to add a border-top to the tr.subGroupHeader cells. The tr.subGroupHeader will be styled with a border-top, such as: table.databargraph.continued tr.subGroupHeader th, table.databargraph.continued tr.subGroupHeader td { border-top: 6px solid red; } For the first tbody, I am trying: table.databargraph.continued tbody:first-child tr.subGroupHeader th { border-top: 6px solid blue ; } However, this doesn't seem to work in any browser (I've tested in Safari, Opera, Firefox, and PrinceXML, all on my Mac) Curiously, the usually excellent Xyle Scope tool indicates that the blue border should be taking precedence, though it obviously is not. See the screenshot at http://s3.amazonaws.com/ember/kUD8DHrz06xowTBK3qpB2biPJrLWTZCP_o.png This screenshot shows (top left) the American Indian th is selected, and (bottom right), shows (via black instead of gray text for the css declaration), that, indeed, the blue border should be given precedence. Yet the border is red. I may be missing something fundamental, like pseudo-classes not working for tbodys at all... This really only needs to work in PrinceXML, and maybe Safari so I can see what I'm doing with webkit-based css tools. Note I did try a selector like tr.subGroupHeader:first-child, but such tr's apparently consider the tbody the parent (as I would suspect), thus made every border blue. Thanks...

    Read the article

  • Google Chrome issues

    - by Ben Hooper
    I'm an avid user of Chrome and would defend it to the death but there is no denying that it has issues, and I've been experiencing a lot of them recently. Issues Stuck tabs Issue: Around 60% of the time (higher when the system has just started), when Chrome is launched, some or all of the pinned tabs and startup pages will get stuck in the counter-clockwise-rotating "contacting server" mode and never snap out of it. Fix: Quit and launch Chrome until they load properly (this really isn't good, as quitting and launching Chrome can and has cleared all pinned tabs). Extra Information: If you stop the loading and re-enter the URL then the page will load perfectly. The amount of pinned tabs or startup pages seems to be irrelevant, but I could be wrong.   "Downloaded/out of" Issue: Each item in the download bar has a downloaded status section, formatted as "downloaded/out of". Sometimes Chrome doesn't display the "out of" part.   Collapsed settings windows Issue: In Chrome version 19(ish)+, settings are configured via overlayed / popup windows. Sometimes, the window will open fully collapsed. Fix: Resize Chrome's window or open the developer tools. "Network Error" with large downloads Issue: Sometimes, when downloading large files (500MB+) Chrome will download the entire file, the download status will freeze (for example, "1 GB/1 GB") for a few minutes, report "Network error", and delete the .crdownload file. Extra Information: The same file from the same website on the same computer downloads perfectly in other browsers. The website and file type seem to be irrelevant.   Information: I've experienced some of these issues on my home PC and some on my work PC, both of which are Windows 7 Ultimate x64. The only things that link them are my Google account (all settings are synced). Updating Chrome hasn't worked. Most of these issues presented themselves around about version 17 and have continued right through to 21 (current). Uninstalling Chrome, deleting all data in %programFiles(x86)%\Google and %localAppData%\Google, and reinstalling hasn't worked. I have yet to see whether disabling all extensions would make a difference, but it's hard to diagnose as these issues don't occur 100% of the time.   In my case, I don't know if there's an actual solution. I'm just curious as to whether anyone else is having similar issues to those that I'm experiencing. At least then I'll know it's an issue with Chrome itself.

    Read the article

  • Radeon HD4850 serious issues when using DirectX 10

    - by ricsmania
    Hello, I have a problem with my video card. Whenever I run a DirectX 10 game, it works for a few seconds (10 or so) and then starts displaying nothing but big polygons. I have tested this with Crysis and Resident Evil 5, both have the same problems. The same games running under DirectX 9 work fine, except for some small black squares once in a while. I have the following specs: Asus P7P55D LE Intel Core i5 750 Sapphire Radeon HD4850 1GB 2x2GB Patriot Viper II Sector 5, DDR3 1600 MHz OCZ Stealth X Stream 500SXS 500W At first I thought it could be the video card overheating (it has stock cooling), but the game crashes even when it's running at 50 degrees C, and it's never been higher than 70. I also thought it could be the PSU, but as far as I know 500W is enough for this computer, especially because I haven't overclocked anything. My OS is Windows 7 X64 and I am using Catalyst 10.10, but I have also tried many older versions with no success. I don't think there is a problem with the card itself, or else it wouldn't run DirectX 9 games I believe. I have spent many hours searching for a solution but I couldn't, so any help is appreciated. Thank you. EDIT: I did some further investigation about the problem, and it seems taspeotis was right, it might be related to memory. I slightly underclocked the memory from 993 to 965 MHz and the problem went away completely. Both the black squares using DirectX 9 and the weird polygons using DirectX 10. I was using RE DirectX 10 Benchmark, as it consistently crashed around the same point, and now I can play the full benchmark with no artifacts at all. Unfortunately, the underclock has an obvious hit in performance. Although it's not critical, it's definitely noticeable. So, if the video memory test software showed no erros, but the card needs an underclock to work, what might be the problem? Temperature? Voltage? By the way, I couldn't find what the default voltage for this card is. And what is a good software to try and increase it? I tried Ati Tray Tools but it has a bug that increases the clock speed dramatically whenever I change something in the Overclock tab, so I'm afraid it might fry my card. Worst case scenario, if I don't find I solution I will try to slightly increase the GPU clock to compensate for the memory clock. Thank you again.

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time every day, Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However at around 4PM Monday to Friday we see the same issue arise -- 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS, these come into our Layer 7 load balancer which then decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data onto an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80 which takes in the request from the load balancer and passes the request through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too, MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, however at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem and I always see the same thing: stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644,st_size=3661, ...}) = 0 stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644,st_size=3661, ...}) = 0 This is repeated infinitely and never stops until you SEGKILL the process or oomkiller kills it. There are no cron jobs scheduled to run at that time and I don't have any way of seeing exactly what Nginx request is associated with the PHP process which is running. We are running PHP 5.3.14 which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem - especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), our servers are running CentOS release 5.7 (Final) they have Intel i3 540s at 3.07Ghz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar, you can find that here. Has anyone seen anything like this in the past, does anyone have any ideas or suggestions? Are there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP) Thanks!
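
    On the "better way of seeing what the PHP process is doing" point, short of GDB it is often enough to snapshot the runaway process through /proc: its command line, working directory and open file descriptors (which will include the FastCGI socket it was serving when it started spinning). A rough sketch, to be run as root with the PID of the spinning php-cgi worker substituted in:

        import os

        def snapshot(pid):
            """Cheap /proc snapshot of a runaway process (no debugger needed)."""
            base = "/proc/%d" % pid
            print("cmdline:", open(base + "/cmdline").read().replace("\0", " "))
            print("cwd:    ", os.readlink(base + "/cwd"))
            for fd in sorted(os.listdir(base + "/fd"), key=int):
                try:
                    print("fd %-3s ->" % fd, os.readlink(base + "/fd/" + fd))
                except OSError:
                    pass   # fd closed between listdir and readlink

        snapshot(12345)    # hypothetical PID of the stuck PHP worker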

    Read the article

  • Can't Move Windows to 2nd Monitor without Left Mouse and Cntl Key

    - by John C
    I have 2 very frustrating problems that maybe someone can help me with: I have 2 monitors (different sizes and resolutions) setup with the "Extended" monitor Win7 setup. My problem is this = I can not "move" a window from my Primary Monitor (larger and higher resolution on right side in front of me) to my Secondary 2nd monitor (smaller and lower resolution) with just selecting the title bar with the left mouse button and dragging it to the left. Windows 7 "snaps" it back to the left Primary Monitor when the window is physically in the 2nd window area as I'm holding the left mouse button. I can prevent this problem - by holding down the Cntl Key with the Left Mouse button, but this is extremely annoying to me. Also I typically "lose" focus if I try typing input on the 2nd monitor. Typing is erratic with regard to keystroke accuracy from my keyboard translated into input on the 2nd screen. No problem with typing input on the primary left monitor. I find this extremely annoying in Windows 7 and turning off the "snap" feature via the Control panel does NOT work for me. Win7 stubbornly refuses to move my selected window to my 2nd monitor without me "forcing" Win7 to do this with the Cntrl Key. Please tell me this is not a Win7 feature. Also on my system - Windows Key + Shift, Left arrow Key (pressed together) or the same combo with The Right arrow Key - don't do anything whatsoever. Widows Key with "+" however does maximize current window across both monitors, and I can "restore" it with Windows Key and "-" back to original monitor and size. I have tried various solutions including changing the resolutions of one or both of my monitors and sometimes "temporarily helps" but reverts back to the problem. Also if I swap the logical (not physical) layout so that I tell Win7 the monitors are setup in a reserved situation (Large monitor on the left, and small on the right) - this also sometimes helps for awhile - and is very strange and awkward to work with "backwards". But all of these solutions stop working. The only solution that consistently works for "moving" the screens is to hold the Cntrl Key down as I'm moving window with the left mouse selected on the title bar. Even that however, doesn't prevent the loss of typing focus for me on the 2nd monitor - while at the same time the typing on the 1st monitor is fine. Any help on moving my window screens from one monitor on my 2nd monitor without having to press the Cntrl key while holding down my left mouse button with be appreciated. Also any help on gaining typing "focus" into my 2nd screen with be helpful too. Thanks - John

    Read the article

  • Working with friends. Poor career choice?

    - by a_person
    Hi all, Hope you can help me solve somewhat of a moral dilemma. Some time ago, after just a few years of living in the U.S. and having to take any job I could get my hands on, a friend of mine recommended me for an open position at the company he was working for. I could not have been happier. I do not have a degree of any sort; however, by being passionate about CS and with a constant drive for self-education I've become somewhat of a strong generalist. Every place I worked for recognized me for that quality and used me on various projects where the set of technologies in hand had no overlap with the team members' knowledge. I rapidly advanced to a Sr. Programmer position, and the trend of me following my friend from one place to another started and continued for a few years. My friend's goal has always been to become an IT Director; mine is to become the best programmer I can be. To my knowledge I've accommodated his goals as much as I could by taking a back seat and letting him take the lead. Fast forward to today. He's a manager, and I am on his team. I am unhappy and in a considerable amount of suffering. I am not being utilized to my potential; it's almost the exact opposite: I am being micromanaged to an unhealthy extent, and my decisions and suggestions are constantly met with negativity. Last week I had to hear about how my friend is a better programmer than I am. My ego was ecstatic about this one /s. In addition to that, working in the field of BI has mostly exhausted itself for me. The only pleasure I derive from my work is making everything as dynamic and parameter-driven as possible. This is the only area where my friend does not feel competent enough to actually micromanage. Because of my situation I feel a fair amount of guilt and ever-growing resentment. I need your advice; maybe you've dealt with this clash before, the needs of self versus the needs of your friend. Is working with a friend a poor choice? Thank you for reading.

    Read the article

  • Is this distributed database server idea feasible?

    - by David
    I often use SQLite for creating simple programs in companies. The database is placed on a file server. This works fine as long as there are not more than about 50 users working towards the database concurrently (though depending on whether it is reads or writes). Once there are more than this, they will notice a slowdown if there are a lot of concurrent writing on the server as lots of time is spent on locks, and there is nothing like a cache as there is no database server. The advantage of not needing a database server is that the time to set up something like a company Wiki or similar can be reduced from several months to just days. It often takes several months because some IT-department needs to order the server and it needs to conform with the company policies and security rules and it needs to be placed on the outsourced server hosting facility, which screws up and places it in the wrong localtion etc. etc. Therefore, I thought of an idea to create a distributed database server. The process would be as follows: A user on a company computer edits something on a Wiki page (which uses this database as its backend), to do this he reads a file on the local harddisk stating the ip-address of the last desktop computer to be a database server. He then tries to contact this computer directly via TCP/IP. If it does not answer, then he will read a file on the file server stating the ip-address of the last desktop computer to be a database server. If this server does not answer either, his own desktop computer will become the database server and register its ip-address in the same file. The SQL update statement can then be executed, and other desktop computers can connect to his directly. The point with this architecture is that, the higher load, the better it will function, as each desktop computer will always know the ip-address of the database server. Also, using this setup, I believe that a database placed on a fileserver could serve hundreds of desktop computers instead of the current 50 or so. I also do not believe that the load on the single desktop computer, which has become database server will ever be noticable, as there will be no hard disk operations on this desktop, only on the file server. Is this idea feasible? Does it already exist? What kind of database could support such an architecture?
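
    The discovery logic described above (try the last-known server, fall back to the address recorded on the file server, otherwise promote yourself) is small enough to sketch. The example below is only an illustration with invented paths and an invented port, and it deliberately ignores the hard part: two desktops that both find nobody listening can still elect themselves at the same time, so some locking on the shared file would be needed in practice.

        import socket

        PORT = 5432                              # invented port for the ad-hoc DB server
        LOCAL_HINT = r"C:\dbserver.ip"           # last-known server, cached per desktop
        SHARED_HINT = r"\\fileserver\share\dbserver.ip"

        def reachable(ip):
            try:
                socket.create_connection((ip, PORT), timeout=2).close()
                return True
            except OSError:
                return False

        def find_or_become_server(my_ip):
            for hint in (LOCAL_HINT, SHARED_HINT):
                try:
                    ip = open(hint).read().strip()
                except IOError:
                    continue
                if ip and reachable(ip):
                    open(LOCAL_HINT, "w").write(ip)   # refresh the local cache
                    return ip
            # Nobody answered: this desktop becomes the database server.
            for hint in (SHARED_HINT, LOCAL_HINT):
                open(hint, "w").write(my_ip)
            return my_ip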

    Read the article

  • How to circumvent ISP Limiting "Unknown" traffic - (SSH)Proxy, VPN

    - by connery
    I am having issues with using a proxy/VPN, with my current ISP (Comenersol, Spain). From my point of view they limit traffic by protocol or by traffic they "know" and "dont know". I'll explain my findings so far below. Internet connection in Spain: ~400-420KByte/sec (speedtest.net) OpenVPN Server in Sweden(pfsense): 100/100Mbit. LZO Compression. TCP. Tun. Aes128 Squid Proxy server in Sweden (pfsense): 100/100 (same box as the vpn server). Plain, no encryption. Runs in stealth mode to hide the use of proxy. NOT running OpenVPN or Squid Proxy, this is my findings: When I download a file from my pfsense box in Sweden, I get maximum speed When I run speedtest.net and choose any european server (including Swedish), I get max speed When I download a torrent (with non default port above 10K), I get limited to ~100KByte/sec. Encryption is turned off If I download something through https, I get max speed Running either Squid Proxy or VPN, this is my findings When I download a file from my pfsense box in Sweden, I get ~100KByte/sec When I run speedtest.net and choose any european server (including Swedish and Spanish), I get ~100Kbyte/sec When I download a torrent, I get same limitation ~100KByte/sec When I download something through https, I get ~100KByte/sec I verify the speeds above with speedtest.net measure, firefox measure in addition to having bmon running in terminal in the background. This way I am certain that the speeds I get presented, are in fact correct. If I connect through a different ISP with VPN or Squid Proxy, I get better speeds (400KByte/sec ++) In short: Whenever I tunnel my traffic through Sweden, my SPanish ISP throttles the traffic. I thought tunneling it through Squid would solve the issue, since I then would no longer hide my traffic through encryption. This does not seem to be the case. Wget and fetch gives same result. I did not try 'nc', but I assume this would give the same result. Does anyone know how to circumvent this issue? I would very much like to be able to get full speed with Swedish ip, as this would make me able to stream TV at higher quality than today. 100KByte/sec just does not cut it quality wise. Thanks for reading. Looking forward for your help.
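
    For what it's worth, the direct-versus-tunnelled comparison is easy to script so the numbers don't depend on speedtest.net alone. The sketch below (Python 3, with a made-up test URL and proxy address) times the same large download once directly and once through the Squid box:

        import time
        import urllib.request

        TEST_URL = "http://example.com/100MB.bin"          # any large file you control
        PROXY = {"http": "http://proxy.example.se:3128"}    # the Squid box in Sweden

        def measure(url, proxies=None):
            opener = urllib.request.build_opener(
                urllib.request.ProxyHandler(proxies or {}))
            start, total = time.time(), 0
            resp = opener.open(url)
            while True:
                chunk = resp.read(64 * 1024)
                if not chunk:
                    break
                total += len(chunk)
            return total / 1024.0 / (time.time() - start)    # KByte/sec

        print("direct: ", round(measure(TEST_URL)), "KB/s")
        print("proxied:", round(measure(TEST_URL, PROXY)), "KB/s")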

    Read the article

  • Educate me - should I buy one of these prebuilt NAS units (and which is better), or build my own?

    - by user29336
    I'm trying to learn as much as possible, and I think I've learned quite a bit so bear with me here under my confusion. I found a coupe NAS setups. I'm not sure if one is better than the other, other than the price being higher on some, and some coming with drives VS not. Let me list my setup so you can get an idea of what I want to provide: Macbook Pro Macbook Mini for Media streaming (so far) Windows 7 Gaming Computer Xbox 360 I'd like to provide a storage system for all these devices so they can access files very easily, I'd also like any of these devices to be able to stream media from this storage system. I'd like this storage system to be hassle free in terms of my confidence in the data integrity. If a drive fails, I want to know that I can replace the drive and all my files will still exist. I'd like to access this storage system OUTSIDE of my LAN. If I'm out on a job for work I'd like to go in, or be able to have people DL some files. This brings me to a question, is this what iSCSI is? I'd like this data system to be able to download torrents. I want to mount any drive on this storage system onto my OSX laptop as if it were a local drive attached. (Is this with iSCSI is?) I'd like this system to have a GOOD web based GUI. I don't want to install software to use it. I believe those are the most of my requirements. If I'm missing something that I have no knowledge about, can someone educate me? Here are the systems I found: $729ish on Newegg Lacie 5Big Network 2 (comes with 5TB of space. iSCSI / mac compatible, torrents, nice ui, + others?) Is this overpriced for what it provides? It almost seems like a great deal to me because of the 5TB of space it comes with vs the other NAS systems that don't come with storage but cost $600-700. Should I get a different NAS system? Netgear? Others? Do they have same features? Better? Is it better to buy your own disks? What about making my own? I'm tech savy all around. It seems cheaper to buy a premade one especially with the support/warranty it provides...

    Read the article

  • D-Link DIR-300 slows down / loses network

    - by basic6
    there are 2 buildings (A and B). In bldg A is an open WLAN (which I'm allowed to use btw). In bldg B is a computer that I want to connect to that network. So I flashed an old D-Link DIR-300 AP with DD-WRT, mounted it to the wall (bldg B) near a window, attached a 13 dBi directional antenna (pointing to bldg A) and configured it as AP client in that wireless network. Then there's another AP, connected to the D-Link AP, acting as standard access point, which the computer is connected to. That's basically working so far, but: Every now and then the connection is lost. Not the connection between the computer and the D-Link (I can access the DD-WRT admin page normally) or the connection between the D-Link and the WLAN (in Status - Wireless it says it's still connected to the network), but when I want to access a web page (which only works if I'm connected to the wireless network from bldg A), my Firefox keeps "Looking for" (name resolution) without finding anything. When I reset the D-Link (power off, power on) in this situation, after some moments, everything's working fine again (Internet access). I've no idea why this is happening, but usually it's at most every few weeks (most times when nobody was using the computer, so no traffic). Compared to the connection speed when I connect directly to the WLAN in bldg A (Laptop), the speed in bldg B is rather slow, but I have the impression that this difference is worse in the last few days. A few minutes ago, I got 582 KB/s down and 911 KB/s up in bldg A (directly/laptop) and 84 KB/s down and 9 KB/s up in bldg B. The speed in bldg B used to be way higher (I remember 200 KB/s up) while the actual network speed in bldg A was lower than it is now (close to those 200). I'm aware that the wireless connection between those buildings should slow things down, but I'm wondering why this difference has become that extreme. Thanks for any tips... Update: I currently want to upload a large file (1.5 GB) via FTP (FileZilla). Since that caused the D-Link to disconnect (as described in first post), I took my laptop to bldg A, connected directly to the original WLAN (bypassing my D-Link) and tried the same upload. Guess what - same issue: At some point the connection is dead (at this point I would have reset my D-Link if I was connected to it). Just as the D-Link, my laptop is still connected, but not even name resolution is working ("Looking for..." in Firefox). After reconnecting, it's working again. Maybe my D-Link isn't the problem at all...

    Read the article

  • Generating Unordered List with PHP + CodeIgniter from a MySQL Database

    - by Tim
    Hello Everyone, I am trying to build a dynamically generated unordered list in the following format using PHP. I am using CodeIgniter but it can just be normal php. This is the end output I need to achieve. <ul id="categories" class="menu"> <li rel="1"> Arts &amp; Humanities <ul> <li rel="2"> Photography <ul> <li rel="3"> 3D </li> <li rel="4"> Digital </li> </ul> </li> <li rel="5"> History </li> <li rel="6"> Literature </li> </ul> </li> <li rel="7"> Business &amp; Economy </li> <li rel="8"> Computers &amp; Internet </li> <li rel="9"> Education </li> <li rel="11"> Entertainment <ul> <li rel="12"> Movies </li> <li rel="13"> TV Shows </li> <li rel="14"> Music </li> <li rel="15"> Humor </li> </ul> </li> <li rel="10"> Health </li> And here is my SQL that I have to work with. -- -- Table structure for table `categories` -- CREATE TABLE IF NOT EXISTS `categories` ( `id` mediumint(8) NOT NULL auto_increment, `dd_id` mediumint(8) NOT NULL, `parent_id` mediumint(8) NOT NULL, `cat_name` varchar(256) NOT NULL, `cat_order` smallint(4) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ; So I know that I am going to need at least 1 foreach loop to generate the first level of categories. What I don't know is how to iterate inside each loop and check for parents and do that in a dynamic way so that there could be an endless tree of children. Thanks for any help you can offer. Tim
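
    The usual pattern for an adjacency-list table like this is two steps: group the rows by parent_id once, then recursively emit a <ul> for each parent's children. The sketch below shows the idea in Python for brevity (an invented build_menu helper, with root rows assumed to have parent_id 0 and the id/class attributes of the top-level <ul> left out); the same recursion maps directly onto a CodeIgniter helper or library method working on the query result.

        from collections import defaultdict

        def build_menu(rows, parent_id=0):
            """rows: dicts with id, parent_id, cat_name, cat_order."""
            children = defaultdict(list)
            for r in rows:
                children[r["parent_id"]].append(r)

            def render(pid):
                items = sorted(children.get(pid, []), key=lambda r: r["cat_order"])
                if not items:
                    return ""
                out = "<ul>"
                for r in items:
                    out += '<li rel="%d">%s%s</li>' % (r["id"], r["cat_name"], render(r["id"]))
                return out + "</ul>"

            return render(parent_id)

        rows = [
            {"id": 1, "parent_id": 0, "cat_name": "Arts &amp; Humanities", "cat_order": 1},
            {"id": 2, "parent_id": 1, "cat_name": "Photography", "cat_order": 1},
            {"id": 3, "parent_id": 2, "cat_name": "3D", "cat_order": 1},
        ]
        print(build_menu(rows))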

    Read the article

  • Avoiding explicit recursion in Haskell

    - by Travis Brown
    The following simple function applies a given monadic function iteratively until it hits a Nothing, at which point it returns the last non-Nothing value. It does what I need, and I understand how it works. lastJustM :: (Monad m) => (a -> m (Maybe a)) -> a -> m a lastJustM g x = g x >>= maybe (return x) (lastJustM g) As part of my self-education in Haskell I'm trying to avoid explicit recursion (or at least understand how to) whenever I can. It seems like there should be a simple non-explicitly recursive solution in this case, but I'm having trouble figuring it out. I don't want something like a monadic version of takeWhile, since it could be expensive to collect all the pre-Nothing values, and I don't care about them anyway. I checked Hoogle for the signature and nothing shows up. The m (Maybe a) bit makes me think a monad transformer might be useful here, but I don't really have the intuitions I'd need to come up with the details (yet). It's probably either embarrassingly easy to do this or embarrassingly easy to see why it can't or shouldn't be done, but this wouldn't be the first time I've used self-embarrassment as a pedagogical strategy. Background: Here's a simplified working example for context: suppose we're interested in random walks in the unit square, but we only care about points of exit. We have the following step function: randomStep :: (Floating a, Ord a, Random a) => a -> (a, a) -> State StdGen (Maybe (a, a)) randomStep s (x, y) = do (a, gen') <- randomR (0, 2 * pi) <$> get put gen' let (x', y') = (x + s * cos a, y + s * sin a) if x' < 0 || x' > 1 || y' < 0 || y' > 1 then return Nothing else return $ Just (x', y') Something like evalState (lastJustM (randomStep 0.01) (0.5, 0.5)) <$> newStdGen will give us a new data point.

    Read the article

  • What router hardware or software should be used when multiple public IPs are routed into the same LAN?

    - by lcbrevard
    I am looking for recommendations to replace a set of consumer grade (Linksys, Netgear, Belkin) routers with something that can handle more traffic while routing more than one static public IP into the same LAN address space. We have a block of static public IPs, 5 usable, with Comcast Business. Currently four of them are in use for: General office access Web server Mail and DNS servers Download and backup web server for separate business All systems (a mixture of physical and virtual) are in the same LAN address space (10.x.y.0/24) to enable easy access between them inside the office. There are 30 or more systems in use depending on which virtual machines are currently active. We have a mixture of Windows, Linux, FreeBSD, and Solaris. Currently a separate consumer grade router is used for each of the four static addresses, with its WAN address set to the specific static address and a different gateway address for each: uses 10.x.y.1 - various ports are forwarded to various LAN IPs on systems with gateway 10.x.y.1 uses 10.x.y.254 - port 80 is forwarded to a server with gateway 10.x.y.254 uses 10.x.y.253 - ports for mail and dns are forwarded to a server with gateway 10.x.y.253 uses 10.x.y.252 - ports as needed are forwarded to server with gateway 10.x.y.252 Only router 1. is allowed to serve DHCP and address reservation based on the MAC is used for most of the internal "server" IP addresses so they are at fixed values. [Some are set static due to limitations in the address reservation capabilities of router 1.] And, yes, this really does work! But... I am looking for: better DHCP with more capable address reservation higher capacity so I don't have to periodically power cycle the routers One obvious improvement would be to have a real DHCP server and not use a consumer grade router for that purpose. I am torn between buying a "professional" router such as Cisco or Juniper or Sonic Wall verus learning to configure some spare hardware to perform this function. The price goes up extremely rapidly with capabilities for commercial routers! Worse, some routers require licensing based on the number of clients - a disaster in our environment with so many virtual machines. Sorry for such a long posting but I am getting tired of having to power cycle routers and deal with shifting IP addresses afterwards!

    Read the article

  • If Nvidia Shield can stream a game via wifi, why can I not do the same via ethernet to any other PC?

    - by Enigma
    I think it absurd that a wireless game streaming solution is the *first to hit the market when a 1000mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. I can only assume that there must be some reason behind this or a limitation preventing this, but what? 150mbps wifi is in no way superior to a 1000mbps LAN connection aside from well wireless mobility. Not only that but I have a secondary laptop and desktop which should by hardware comparison completely outperform anything the Tegra in the Nvidia Shield can do. Is this all just a marketing scheme to force people to buy the shield for the streaming benefit? Chief among these is that NVIDIA’s Shield handheld game console will be getting a microconsole-like mode, dubbed “Shield Console Mode”, that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn’t have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality. - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode ^ This is not acceptable for me for a number of reasons not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong of a solution that is. They need a second product for this solution without the screen or controller for it to make sense... at which point your just buying a little computer that does what most other larger computers do better. All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback so by any logical comparison, one (Nvidia especially) should have no difficulty in creating an application for PC's (win32/64 environment) that does the exact same thing their android app does. I have 2 video cards capable of streaming (encoding) H.264 so by right they must be capable of decoding it I would think. I haven't found anything stating plans to allow non-shield owners to do this. Can a third party create this software or does it hinge on some limitation that only Nvidia can overcome? (*) - perhaps this isn't the first but afaik it is the first complete package.

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500.000 entries, 50 GB disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100GB, "-t news") on an iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean-t -p/htcache -l15G"), IOwait is going through the roof for several hours. Without any visible action. Only after hours, htcacheclean starts to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up in the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache. Only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read the a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In this 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any idea? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect).
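
    For comparison with the prune script mentioned above, here is a rough sketch of an mtime-ordered variant: it walks the cache, sorts entries oldest-first and deletes until a target amount of space has been freed (the 16 GB figure is just the daily growth from the question). The obvious caveat is the same one the question is about: walking half a million entries is itself a lot of IO, so on this kind of storage it may be no faster than htcacheclean.

        import os

        CACHE_ROOT = "/htcache"
        BYTES_TO_FREE = 16 * 1024**3          # roughly one day of cache growth

        entries = []
        for dirpath, dirnames, filenames in os.walk(CACHE_ROOT):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                    entries.append((st.st_mtime, st.st_size, path))
                except OSError:
                    pass                       # entry vanished while walking

        entries.sort()                         # oldest first
        freed = 0
        for mtime, size, path in entries:
            if freed >= BYTES_TO_FREE:
                break
            try:
                os.remove(path)
                freed += size
            except OSError:
                pass

        print("freed %.1f GB" % (freed / 1024.0**3))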

    Read the article

  • Low FPS in some games, but hardware not fully used

    - by Mario De Schaepmeester
    I just did a little funny experiment in the game/sim "Train Simulator 2013". I normally have good FPS in it (around 30) at full settings. What I did was make a really, really long train so that the calculations the sim needed to make were enormous (the sim is quite realistic, it takes all things into account like speed/acceleration, G-forces, comfort levels, possible wheel slip and many more, and most of those things on each carriage seperately). This resulted in only 14FPS as reported by the game, but it felt more like 8FPS or so. I have a Logitech G15 keyboard which has an LCD, and it allows me to monitor CPU/RAM and video card load on it. The strange thing is, all CPU cores were busy, but the total load was only about 60% maximum at all times. The video card was only on 30% load (possibly an important note, the memory was full, which is however not unusual for the game in question). The RAM had plenty of room and there weren't many operations as it didn't grow or shrink much. I just have the feeling that the game would run smoother if it used more of my hardware power. Why is it not doing so? I had the same in another game, The Elder Scrolls: Morrowind when using more than 100 mods (that all use scripting) and a few high res texture mods, + a full-on graphics improvement program. The engine is very old (2003), and so I thought this might be the cause (not being optimised for multithreading). I had thought of possible causes, like: The operating system doesn't let the games use all the resources. It doesn't make use of multi-threading appropriately. To eliminate the former, I tried a CPU stress tool and that got 100% CPU juice as I let it run, so the OS is not the problem. I gave its thread the "higher" priority though. My actual question In both games, I did things the engine was not really built to do or support. Can those games' framerate be limited cause of their own engine not being able to cope? What is the real reason and more importantly, can I help it? And in any case, could something actually be wrong with my hardware? It's all reasonably new, a couple of months, and I (almost) never experience any other trouble. Modern and much more demanding games work absolutely fine. Specs CPU: AMD Phenom II 965 X4 @ 3.4gHz RAM: 8GB of DDR3 RAM Video: MSI GTX560 (nVidia chip) with 1GB of GDDR5 memory OS: Windows 7 Ultimate 64 bit Nothing overclocked.

    Read the article

  • Troubleshooting Network Speeds -- The Age Old Inquiry

    - by John K
    I'm looking for help with what I'm sure is an age old question. I've found myself in a situation of yearning to understand network throughput more clearly, but I can't seem to find information that makes it "click" We have a few servers distributed geographically, running various versions of Windows. Assuming we always use one host (a desktop) as the source, when copying data from that host to other servers across the country, we see a high variance in speed. In some cases, we can copy data at 12MB/s consistently, in others, we're seeing 0.8 MB/s. It should be noted, after testing 8 destinations, we always seem to be at either 0.6-0.8MB/s or 11-12 MB/s. In the building we're primarily concerned with, we have an OC-3 connection to our ISP. I know there are a lot of variables at play, but I guess I was hoping the experts here could help answer a few basic questions to help bolster my understanding. 1.) For older machines, running Windows XP, server 2003, etc, with a 100Mbps Ethernet card and 72 ms typical latency, does 0.8 MB/s sound at all reasonable? Or do you think that slow enough to indicate a problem? 2.) The classic "mathematical fastest speed" of "throughput = TCP window / latency," is, in our case, calculated to 0.8 MB/s (64Kb / 72 ms). My understanding is that is an upper bounds; that you would never expect to reach (due to overhead) let alone surpass that speed. In some cases though, we're seeing speeds of 12.3 MB/s. There are Steelhead accelerators scattered around the network, could those account for such a higher transfer rate? 3.) It's been suggested that the use SMB vs. SMB2 could explain the differences in speed. Indeed, as expected, packet captures show both being used depending on the OS versions in play, as we would expect. I understand what determines SMB2 being used or not, but I'm curious to know what kind of performance gain you can expect with SMB2. My problem simply seems to be a lack of experience, and more importantly, perspective, in terms of what are and are not reasonable network speeds. Could anyone help impart come context/perspective?
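
    The rule of thumb in point 2 can be worked in both directions: sustaining the observed 12 MB/s at 72 ms would need a much larger effective window, which is where TCP window scaling (or the Steelheads, as suspected) would come in. A quick calculation:

        # Single-stream TCP bound: throughput <= window / round-trip time.
        window_bytes = 64 * 1024            # classic 64 KB window, no scaling
        rtt = 0.072                         # 72 ms

        print("ceiling: %.2f MB/s" % (window_bytes / rtt / 1e6))    # about 0.9 MB/s

        # Window needed to sustain the 12 MB/s seen on the fast transfers:
        target = 12e6                       # bytes per second
        print("window needed: %.0f KB" % (target * rtt / 1024))     # about 840 KB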

    Read the article

  • Display is slightly blurry on (native) 1920x1080 resolution

    - by Martin Tuskevicius
    I have a computer monitor that is approximately 23" in size. Its native resolution is 1920x1080, and Windows 7 will not allow it to be any higher. However, I cannot make the resolution a little lower as well. When I right-click on my desktop and select 'Screen resolution,' the vertical slider has only two options: 1920x1080 and 1280x720. There are no real problems that I am having besides the fact that the image is slightly blurry. I can easily make things out and see them, but I definitely feel that the image is not as clear as it could be. My graphics card is ATI Radeon HD 5450 and it has the latest graphics drivers installed. I've tried playing around with the AMD VISION Engine Control Center to see if I can change an option to make the image clearer, but I had no luck. I did find one odd thing, though. When I lowered the refresh rate from 60Hz to 50Hz, the image kind of "zoomed in" but it also became perfectly clear like I would expect it to look. The problem is that when I use 50Hz, the image zooms in a little on the center and I lose maybe an inch and a half of the screen (I do not see the bar at the top of applications, I do not see the Windows taskbar thing, etc). I figured if I could somehow zoom in so that the entire image fills the screen (not the slightly cropped version) then I would have the perfectly crisp image of 50Hz, and also the uncropped image of 60Hz. However, upon zooming in, the image began to look blurry again just like it did with 60Hz. So I am at a loss here. I do not know how to make the image look as clear as it should. I have the latest drivers (I updated them today) and I know that my monitor supports the resolution that I am trying to use. Has anybody experienced something like this before? I'd really appreciate any input - thanks! Update: I have figured out how to make the display look crisp! I set it to the 50Hz option, and then I changed the scaling through the monitor itself, rather than software. Now, however, I am finding that games look pretty bad because since it is clear, the lower quality really becomes apparent. I cannot run new games at 1080p, so I run them at the lowest resolution possible (1280x720, since it is the only other option offered, as I have mentioned). So I am wondering, is there a way to have Windows display more resolution options?

    Read the article

  • Update (ajax) only part of table without affecting whole table

    - by ile
    <table width="100%" border="0" cellspacing="0" cellpadding="0" class="list"> <tr> <th><a href="#" class="sortby">Full Name</a></th> <th><a href="#" class="sortby">City</a></th> <th><a href="#" class="sortby">Country</a></th> <th><a href="#" class="sortby">Status</a></th> <th><a href="#" class="sortby">Education</a></th> <th><a href="#" class="sortby">Tasks</a></th> </tr> <div class="dynamicData"> <tr> <td>Firstname Lastname</a></td> <td>Los Angeles</td> <td>USA</td> <td>Married</td> <td>High School</td> <td>4</td> </tr> </tr> <tr> <td>Firstname Lastname</a></td> <td>Los Angeles</td> <td>USA</td> <td>Married</td> <td>High School</td> <td>4</td> </tr> </div> </table> The idea is to update table rows when clicking on link with clasl "sortby" which is part of th table tag. I tried inserting div but this doesn't work. Separating this in two tables is not good solution because witdh in both tables cell are not following each other. Any other solution? Thanks

    Read the article

  • How can I minimize the amount my router slows down my Internet connection speed?

    - by Lord Torgamus
    Background I'm working with what I assume is a pretty common Internet setup: a cable modem, a wireless router and a few Internet-connected devices. Lately, I've started being more demanding on my Internet connection, and noticed that using my router slows down my download speeds considerably. I just kind of dealt with it until Zune Marketplace on the Xbox 360 told me that a movie was going to take well over ten hours to download, and I just didn't want to wait that long. Good little scientist that I am, I tried to reduce the problem down to one variable. The test As a control, I turned off all the devices in the house that use wireless Internet, and unplugged all the wired devices except for the Xbox. I also power-cycled both the modem and the router. I then tried to download the movie again, and was told that it would still take over ten hours. Next, I unplugged the router, and connected the Xbox directly to the modem. The movie downloaded in just over one hour. As far as I can tell, this means that my ISP, other cable users near me, the remote servers, anything wireless-related and my machines' disk speeds can't be at fault. A similar experiment that replaced the Xbox with a wired laptop produced similar results. To me, this says "the router is responsible for things taking around ten times longer to download." My question I'd still prefer to use the router for a few reasons: it's a pain to connect and disconnect everything every time there's a big file to download direct connection to the modem isn't good for security only one machine can be connected directly to the modem at a time What can I do to have fast connection speeds while still using the router? I don't mind turning other machines off, as long as I don't have to mess with power and ethernet cables. EDIT : After asking this followup question and then this one, I installed dd-wrt on my router, and I seem to be getting higher and more consistent speeds. Perhaps more importantly, my memory use is fairly constant. I know this isn't an answer — which is why I'm not posting it as an answer — but it is how I resolved the situation, and hopefully it'll be helpful for someone.

    Read the article
