Search Results

Search found 6311 results on 253 pages for 'limit clause'.

Page 57/253 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • Server and Network Lag

    - by DanSpd
    At my home I have a server and my computer. The server has Windows Server 2008 Standard 64-bit installed and my computer has Windows 7 64-bit. My home router is a DIR-615. The server runs a game server, and the total number of open connections on it is around 700. Once it reaches about 700 connections, the server and then the whole home network lag really badly. My internet connection is fine; only around 600kb/s is being used out of 2mb/s (real download/upload speeds). So far I assume that my router has a connection limit, and with the server's plus my computer's connections it peaks at around 1k and lags everything. My other assumption is that the server lag comes from a connection limit. I read on the internet that the Standard edition of the server has a connection limit of 700 + 10 reserved; I do not remember where I found that information, but it was on the Microsoft website. So I have two options: the first is to upgrade my router to a Netgear FVS318G-100, and the second is to change the connection limit on the server. Any advice? Thank you

    Read the article

  • Regarding traffic shaping on juniper SRX550

    - by peilin
    We have implemented the Juniper SRX550 in our company. Now we have one issue: how to restrict internal users' download speed from the internet. For example, I want to restrict the end user with IP 192.168.1.20/32 to a download speed of up to 1M via my external port ge-0/0/6.0. Below is my setting: [edit firewall policer p1M] root@SRX550# show if-exceeding { bandwidth-limit 1m; burst-size-limit 15k; } then discard; [edit firewall family inet] root@SRX550# show filter limit-user term 10 { from { destination-address { 192.168.1.20/32; } } then policer p1M; } term else { then accept; } [edit interfaces ge-0/0/6] root@SRX550# show per-unit-scheduler; unit 0 { family inet { filter { input limit-user; } address Hidden Here; } } As per this setting, the end user's download speed should not exceed 1m (125KB/s in Windows), but in fact the download speed for this end user can still reach 400KB/s via HTTP/HTTPS. Please advise. Thanks.
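
    One thing worth checking, as a rough sketch only (the LAN-facing interface name ge-0/0/1 below is an assumption, not from the post): an input filter on the external port only polices traffic that arrives on that one port, so downloads entering via any other path are not policed. Attaching the same policer as an output filter on the interface facing the user catches the traffic regardless of where it entered:

      # hypothetical: apply the policer to traffic leaving toward the user on the internal interface
      set firewall family inet filter limit-user-out term 10 from destination-address 192.168.1.20/32
      set firewall family inet filter limit-user-out term 10 then policer p1M
      set firewall family inet filter limit-user-out term else then accept
      set interfaces ge-0/0/1 unit 0 family inet filter output limit-user-out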

    Read the article

  • HTB.init / tc behind NAT

    - by Ben K.
    I have an Ubuntu 10 box that I'm trying to set up as a bandwidth-shaping router. The machine has one WAN interface, eth0 and two LAN interfaces, eth1 and eth2. NAT is configured using MASQUERADE as described at InternetConnectionSharing. I'm mostly concerned with shaping outbound traffic from the LAN interfaces -- in the end, I'd like to end up with a hard 768Kbps limit per-LAN-interface (rather than a limit on eth0 pooled across all interfaces). I installed HTB.init, and riffing on the examples, tried to set this up on eth1 by putting three files into /etc/sysconfig/htb: /etc/sysconfig/htb/eth1 DEFAULT=30 R2Q=100 /etc/sysconfig/htb/eth1-2.root RATE=768Kbps BURST=15k /etc/sysconfig/htb/eth1-2:30.dfl RATE=768Kbps CEIL=788Kbps BURST=15k LEAF=sfq I can /etc/init.d/htb start and /etc/init.d/htb stats and see information that /seems/ to suggest it's working...but when I try pulling a large file via the WAN interface the shaping clearly isn't in effect. Any suggestions? My guess is it has something to do with where the shaping falls in the NAT chain, but I really have no idea where to begin troubleshooting this. ---- Update: Here's my /etc/init.d/htb list output, it seems to make sense -- the default rate for eth1 is 768Kbps? ### eth0: queueing disciplines qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0 qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec ### eth0: traffic classes class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b ### eth0: filtering rules filter parent 1: protocol ip pref 100 u32 filter parent 1: protocol ip pref 100 u32 fh 800: ht divisor 1 filter parent 1: protocol ip pref 100 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:30 match 00000000/00000000 at 12 match 00000000/00000000 at 16 ### eth1: queueing disciplines qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0 qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec ### eth1: traffic classes class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b
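
    A hint from the tc listing above: the leaf class shows rate 6144Kbit (about 6 Mbit/s) even though its parent shows 768000bit. tc reads "Kbps" as kilobytes per second and "kbit" as kilobits per second, so the HTB.init files have most likely configured roughly eight times the intended leaf rate. A rough manual equivalent of the intended per-interface shaping, as an untested sketch using the same class numbers:

      # sketch: 768 kbit/s hard limit on eth1, leaf slightly higher ceiling, sfq on the leaf
      tc qdisc add dev eth1 root handle 1: htb default 30 r2q 100
      tc class add dev eth1 parent 1: classid 1:2 htb rate 768kbit burst 15k
      tc class add dev eth1 parent 1:2 classid 1:30 htb rate 768kbit ceil 788kbit burst 15k
      tc qdisc add dev eth1 parent 1:30 handle 30: sfq perturb 10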

    Read the article

  • How to diagnose website performance/app pool recycling with Windows 2008/IIS7

    - by ilasno
    Ok, so there are various symptoms here (clients and our own employees complaining of intermittent slowdowns, getting 'kicked out' to the login page, or just having a save request not properly save the submitted data). The environment: Windows Server 2008 (Datacenter), Service Pack 2, 64-bit, 2x2.8 GHz processors, 7.5 GB RAM MS SQL Server 2008 (running on the same machine) IIS 7 There are ~10 websites running on the server, each in their own application pool - most of these pools are running in Integrated mode, 2 are in Classic, all are on .NET 2.0 and all run as ApplicationPoolIdentity. I'm trying to analyze, diagnose, and troubleshoot and am struggling with where to get more info about what could be happening. Here are some steps I have already taken: Set each application pool to recycle once per day, and removed any other automatic recycling Set a Virtual Memory Limit for each to 1024000KB (1GB) Enabled ALL 'Generate Recycle Event Log Entry' entries (Config Changes, Isapi Reported Unhealthy, Manual Recycle, Private Memory Limit Exceeded, Regular Time Interval, Request Limit Exceeded, Specific Time, Virtual Memory Limit Exceeded) I have seen the app pool processes recycle (in Task Manager) - a new one will start up, and then the first one dies off - and this has happened without the memory or time going over the settings. This is a fairly new server, and most of these sites came from Windows Server 2003/IIS6. Any 'next steps' for setting up information gathering, logging, diagnosing, etc. would be much appreciated! j
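
    With 'Generate Recycle Event Log Entry' enabled, the recycle reasons land in the System event log under the WAS source. A quick way to pull them out, as a sketch (the provider name is an assumption; it may show up simply as 'WAS' on some systems):

      # list the most recent WAS events with their recycle/crash reasons
      Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Microsoft-Windows-WAS' } -MaxEvents 50 |
          Select-Object TimeCreated, Id, Message |
          Format-List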

    Read the article

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, in my first project (we'll say it's project.com) I've put this in my Apache configuration: NameVirtualHost 127.0.0.1 <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/ ServerName localhost ServerAdmin admin@localhost </VirtualHost> <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/sub/ ServerName sub.project.com ServerAdmin [email protected] </VirtualHost> <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/project/ ServerName project.com ServerAdmin [email protected] </VirtualHost> And this in my hosts file: # development 127.0.0.1 localhost 127.0.0.1 project.org 127.0.0.1 sub.project.org When I go to project.com in my browser, the project loads up successfully. Same if I go to sub.project.com. But, if I navigate to: http://project.com/register (one of my site pages) I get this error: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. The error log shows this: [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/ Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/ Any idea what config items I got wrong or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.
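
    The error points at a rewrite loop inside the project rather than at the vhost blocks: /register only exists as a route, so a front-controller rewrite that keeps matching its own target recurses until LimitInternalRecursion trips. A typical loop-safe .htaccess for such a project looks roughly like this (a sketch only; the framework's entry file may differ from index.php):

      RewriteEngine On
      RewriteBase /
      # stop rewriting once the request maps to a real file or directory
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^ index.php [L]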

    Read the article

  • Linux: prevent outgoing TCP flood

    - by Willem
    I run several hundred webservers behind loadbalancers, hosting many different sites with a plethora of applications (over which I have no control). About once every month, one of the sites gets hacked and a flood script is uploaded to attack some bank or political institution. In the past, these were always UDP floods, which were effectively resolved by blocking outgoing UDP traffic on the individual webserver. Yesterday they started flooding a large US bank from our servers using many TCP connections to port 80. As this type of connection is perfectly valid for our applications, just blocking them is not an acceptable solution. I am considering the following alternatives. Which one would you recommend? Have you implemented these, and how? Limit on the webserver (iptables) outgoing TCP packets with source port != 80. Same but with queueing (tc). Rate limit outgoing traffic per user per server. Quite an administrative burden, as there are potentially thousands of different users per application server. Maybe this: how can I limit per user bandwidth? Anything else? Naturally, I'm also looking into ways to minimize the chance of hackers getting into one of our hosted sites, but as that mechanism will never be 100% waterproof, I want to severely limit the impact of an intrusion. Cheers!
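
    A sketch of the first option combined with rate limiting, using the iptables hashlimit match; the numbers are illustrative and legitimate outbound connections from the webservers (databases, APIs and so on) must stay under whatever rate is chosen:

      # replies from the web server itself are established traffic, leave them alone
      iptables -A OUTPUT -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
      # allow a modest rate of new outbound connections per destination IP, log and drop the rest
      iptables -A OUTPUT -p tcp --syn -m hashlimit --hashlimit-name out-syn \
               --hashlimit-mode dstip --hashlimit-upto 10/second --hashlimit-burst 20 -j ACCEPT
      iptables -A OUTPUT -p tcp --syn -j LOG --log-prefix "outbound-flood: "
      iptables -A OUTPUT -p tcp --syn -j DROP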

    Read the article

  • Can applications use all of the memory in Windows 8?

    - by Barleyman
    Windows 7 (and Windows Vista) have a built-in limit of not being able to use the last 25% of RAM. You will get a low memory warning when you get close to the limit. Even if you disable that warning, applications will run out of memory and crash since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed, when machines had 1 GB of total memory, but is pretty daft for today's 8 GB machines. Yes, the system will run cache, etc. on that extra 2 GB, but running out of memory when you have "merely" 2 GB left.... NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior. The system will cram the page file to the last byte while refusing to touch that 1/4th of the RAM. Does Windows 8 change this behavior? Is there now some fixed minimum free RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low memory limit?

    Read the article

  • ActiveMQ Configuration with KahaDB

    - by xeraa
    We are using ActiveMQ 5.6.0 with KahaDB. It has produced quite a few log files, which is to be expected with our infrastructure, looking like this: $ ll -h /opt/activemq/data/kahadb/ total 969M drwxr-xr-x 2 root root 4.0K Nov 3 12:47 ./ drwxr-xr-x 3 activemq activemq 4.0K Sep 24 12:12 ../ -rw-r--r-- 1 root root 39M Oct 16 07:57 db-202.log -rw-r--r-- 1 root root 38M Oct 16 07:57 db-203.log -rw-r--r-- 1 root root 33M Oct 17 08:12 db-238.log ... No more messages were processed when we ran into the 1 GB temp usage limit - or at least that's what we assume; is that correct? The configuration looks like this: <systemUsage> <systemUsage> <memoryUsage> <memoryUsage limit="512mb"/> </memoryUsage> <storeUsage> <storeUsage limit="3 gb"/> </storeUsage> <tempUsage> <tempUsage limit="1 gb"/> </tempUsage> </systemUsage> </systemUsage> After cleaning up the log files and being way below the limits, AMQ still did not consume any messages. Only when we manually purged a route did messages start to be delivered again. So we need to ensure that the KahaDB log size always stays below the temp usage limit, right? And is it a bug that delivery did not resume after we fixed that, or are there other steps to be taken?
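
    Worth noting: KahaDB only deletes a db-NNN.log once no destination still references any message stored in it, so a single stuck or unconsumed message can pin gigabytes of old journals - which would fit delivery only resuming after a purge. A hedged persistence-adapter sketch that makes space easier to reclaim (attribute values are illustrative, not a recommendation for this broker):

      <persistenceAdapter>
          <!-- smaller journal files and a shorter cleanup interval so unreferenced logs are reclaimed sooner -->
          <kahaDB directory="${activemq.data}/kahadb"
                  journalMaxFileLength="32mb"
                  cleanupInterval="30000"
                  checkpointInterval="5000"/>
      </persistenceAdapter>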

    Read the article

  • rsync --remove-source-files but only those that match a pattern

    - by Daniel
    Is this possible with rsync? Transfer everything from src:path/to/dir to dest:/path/to/other/dir and delete some of the source files in src:path/to/dir that match a pattern (or size limit) but keep all other files. I couldn't find a way to limit --remove-source-files with a regexp or size limit. Update1 (clarification): I'd like all files in src:path/to/dir to be copied to dest:/path/to/other/dir. Once this is done, I'd like to have some files (those that match a regexp or size limit) in src:path/to/dir deleted but don't want to have anything deleted in dest:/path/to/other/dir. Update2 (more clarification): Unfortunately, I can't simply rsync everything and then manually delete the files matching my regexp from src:. The files to be deleted are continuously created. So let's say there are N files of the type I'd like to delete after the transfer in src: when rsync starts. By the time rsync finishes there will be N+M such files there. If I now delete them manually, I'll lose the M files that were created while rsync was running. Hence I'd like to have a solution that guarantees that the only files deleted from src: are those known to be successfully copied over to dest:. I could fetch a file list from dest: after the rsync is complete, and compare that list of files with what I have in src:, and then do the removal manually. But I was wondering if rsync can do this by itself.
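
    One approach that seems to fit the constraint, sketched below with a hypothetical '*.done' pattern and the poster's src:/dest: notation: copy everything first, then run rsync a second time restricted to the files to be purged and let --remove-source-files delete only what that second pass has just verified on the destination. Files created while either pass runs are simply left for the next run. For a size cutoff, --max-size/--min-size could play the same role as the include pattern.

      # pass 1: copy everything, delete nothing
      rsync -av src:path/to/dir/ dest:/path/to/other/dir/
      # pass 2: only the files matching the pattern; rsync removes exactly what it has synchronized
      rsync -av --remove-source-files --include='*/' --include='*.done' --exclude='*' \
            src:path/to/dir/ dest:/path/to/other/dir/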

    Read the article

  • Nginx Redirect when URL includes variable p=1

    - by ChrisD
    Need to write a small nginx rewrite rule (or two) to alter/301 redirect some URLs within our existing website. For example: www.example.com.au/pageone.html?p=1 to www.example.com.au/pageone.html www.example.com.au/pagetwo.html?dir=asc&limit=200&order=price&p=1 to www.example.com.au/pagetwo.html?dir=asc&limit=200&order=price www.example.com.au/pagethree.html?dir=dsc&limit=100&order=price&p=1 to www.example.com.au/pagethree.html?dir=dsc&limit=100&order=price As you can see, p=1 has been stripped from the URLs (it is superfluous but has been live on the site and needs to be redirected now) - all http and https links. Basically, if and only if p=1 is used anywhere within the URL, it should redirect to the same URL without the p=1. This should also let p=11, p=12 and so on through as normal (and not redirect them), as they are not specifically p=1. If that is not possible, then I'd like to know how to redirect this kind of URL as a standalone one-off: www.example.com.au/pageone.html?p=1 to www.example.com.au/pageone.html I tried several redirects but none of them worked. To be honest I do not really know where to start with this - I am new to nginx.
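
    All of the examples have p=1 at the very end of the query string, so an untested sketch along these lines might be enough (nginx matches $args separately from the path, and the trailing '?' on the rewrite stops the old query string from being re-appended):

      # p=1 is the only parameter: /pageone.html?p=1 -> /pageone.html
      if ($args ~ "^p=1$") {
          rewrite ^(.*)$ $1? permanent;
      }
      # p=1 follows other parameters: ...&order=price&p=1 -> ...&order=price
      if ($args ~ "^(?<keep>.+)&p=1$") {
          rewrite ^(.*)$ $1?$keep? permanent;
      }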

    Read the article

  • ORA-4030 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 399497.1 FAQ ORA-4030 Note 1088087.1 : ORA-4030 Diagnostic Tools [Video] Have you observed an ORA-4030 error reported in your alert log? ORA-4030 errors are raised when memory or resources are requested from the Operating System and the Operating System is unable to provide the memory or resources. The arguments included with the ORA-4030 are often important to narrowing down the problem. For more specifics on the ORA-4030 error and scenarios that lead to this problem, see Note 399497.1 FAQ ORA-4030. Looking for the best way to diagnose? There are several available diagnostic tools (error tracing, 11g Diagnosability, OCM, Process Memory Guides, RDA, OSW, diagnostic scripts) that collectively can prove powerful for identifying the cause of the ORA-4030. Error Tracing: The ORA-4030 error usually occurs on the client workstation and for this reason, a trace file and alert log entry may not have been generated on the server side. It may be necessary to add additional tracing events to get initial diagnostics on the problem. To set up tracing to trap the ORA-4030, on the server use the following in SQLPlus: alter system set events '4030 trace name heapdump level 536870917;name errorstack level 3'; Once the error reoccurs with the event set, you can turn off tracing using the following command in SQLPlus: alter system set events '4030 trace name context off; name context off'; NOTE: See more diagnostics information to collect in Note 399497.1. 11g Diagnosability: Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure was introduced, which places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4030, occurs. For earlier versions, the trace file will be written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file may contain vital information about what led to the error condition. Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]. Oracle Configuration Manager (OCM): Oracle Configuration Manager (OCM) works with My Oracle Support to enable a proactive support capability that helps you organize, collect and manage your Oracle configurations. Oracle Configuration Manager Quick Start Guide; Note 548815.1: My Oracle Support Configuration Management FAQ; Note 250434.1: BULLETIN: Learn More About My Oracle Support Configuration Manager. General Process Memory Guides: An ORA-4030 indicates a limit has been reached with respect to the Oracle process private memory allocation. Each Operating System will handle memory allocations with Oracle slightly differently. Solaris: Note 163763.1; Linux: Note 341782.1; IBM AIX: Notes 166491.1 and 123754.1; HP: Note 166490.1; Windows: Note 225349.1, Note 373602.1, Note 231159.1, Note 269495.1, Note 762031.1; Generic: Note 169706.1. RDA: The RDA report will show more detailed information about the database and server configuration. Note 414966.1 RDA Documentation Index; Download RDA -- refer to Note 314422.1 Remote Diagnostic Agent (RDA) 4 - Getting Started. OS Watcher (OSW): This tool is designed to gather Operating System side statistics to compare with the findings from the database. This is a key tool in cases where memory usage is higher than expected on the server while not experiencing ORA-4030 errors currently.
    Reference more details on setup and usage in Note 301137.1 OS Watcher User Guide. Diagnostic Scripts: Refer to Note 1088087.1 : ORA-4030 Diagnostic Tools [Video]. Common Causes/Solutions: The ORA-4030 can occur for a variety of reasons. Some common causes are: * OS memory limit reached, such as physical memory and/or swap/virtual paging. For instance, IBM AIX can experience ORA-4030 issues related to swap scenarios. See Note 740603.1 10.2.0.4 not using large pages on AIX for more on that problem. Also reference Note 188149.1 for pointers on 10g and stack size issues. * OS limits reached (kernel or user shell limits) that limit overall, user level or process level memory. * OS limit on PGA memory size due to SGA attach address. Reference: Note 1028623.6 SOLARIS How to Relocate the SGA. * Oracle internal limit on functionality like PL/SQL varrays or bulk collections. ORA-4030 errors will include arguments like "pl/sql vc2" "pmucalm coll" "pmuccst: adt/re". See Coding Pointers for pointers on application design to get around these issues. * Application design causing limits to be reached. * Bug - space leaks, heap leaks. ***For reference to the content in this blog, refer to Note 1088267.1 Master Note for Diagnosing ORA-4030
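
    While the error is being reproduced, it can also help to see which sessions are holding the most private (PGA) memory. A minimal sketch using the v$process and v$session views:

      -- largest PGA consumers first; run as a DBA while the problem is occurring
      SELECT p.spid, s.sid, s.username, s.program,
             ROUND(p.pga_used_mem  / 1024 / 1024) AS pga_used_mb,
             ROUND(p.pga_alloc_mem / 1024 / 1024) AS pga_alloc_mb
      FROM   v$process p
      JOIN   v$session s ON s.paddr = p.addr
      ORDER  BY p.pga_alloc_mem DESC;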

    Read the article

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (Current status: API v1.1). Naturally, LINQ to Twitter  needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know. Overall Impact Generally speaking, Twitter API v1.1 is semantically very much the same as it’s predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn’t change much at all.  The following sections describe what did  change. Authentication In Twitter API v1.0 authentication was not required for some resources, such as user timelines and search. However, that’s all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter.  You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support. The New Search One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results.  Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn’t change. Different Rate Limits The issue of rate limits itself is contentious, but this discussion is focused on the coding experience and I’ll leave the politics to those who prefer to engage in that activity. What’s important here is that both headers and resources have changed. You should review Twitter’s Rate Limit documentation to understand what the changes mean.  A quick explanation is that rate limits are applied individually to each resource in 15 minute time intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for.  The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries – and to whom the rate limits apply. In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header name is of the form X-Rate-Limit… which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names.  I haven’t done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up. Error Handling Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There’s also a new ErrorCode that’s populated with the message error code. 
Parameters Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets, that provide start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used to not be included unless you specified you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status.  If you were always setting IncludeEntities to true, then you won’t see a change. However, be aware that you’ll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not  matter to you  depending on the requirements of your application, but you should be aware of it. Everything Else There might be small changes here and there that I haven’t mentioned, but these were the ones you should be most aware of.  Streams didn’t change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you’ll be seeing me make that change some time in the future.  Also, Twitter will continue to evolve the API and you can expect that LINQ to Twitter will change accordingly. Summary The big changes to Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You’ll need to change your code to read Search results differently, but the query is much the same as you use now. There’s a new RateLimits API, one of the Help queries.  Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect  most others to be small or affect a smaller percentage of developers.  You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.   @JoeMayo
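
    A rough sketch of the new RateLimits query described above; the property and filter names are taken from this description and may not match the shipped API exactly:

      // ask only for the categories of interest; results come back keyed by category
      var helpResult =
          (from help in twitterCtx.Help
           where help.Type == HelpType.RateLimits &&
                 help.Resources == "search,statuses"
           select help)
          .SingleOrDefault();

      foreach (var category in helpResult.RateLimits)
          foreach (var rate in category.Value)
              Console.WriteLine("{0}: {1} of {2} remaining",
                  rate.Resource, rate.Remaining, rate.Limit);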

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC, it is very technical, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve, only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never have been two independent service curves for either one as you currently find in BSD and Linux. Even worse, some version of ALTQ seems to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes the HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other one. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one as seen in the image below: Here are some questions I cannot answer because the tutorials contradict each other: What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve, that's what HFSC is all about after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s] both have twice the priority of A2 and B2 automatically (still only getting 128 kbit/s on average). Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64kbit/s real-time and 64kbit/s link-share bandwidth, does that mean once they are served 64kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?
    Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other way. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth? What is the truth? Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference? If I use the separate real-time curve to increase priorities of classes, why would I need "curves" at all? Why isn't real-time a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), why do I still need curves on each one? Why would I want the curves' shapes (not average bandwidth, but their slopes) to be different for real-time and link-share traffic? According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.) Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong? One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see question above, the percentage varies). Question: Is this more or less accurate or a total misunderstanding of HFSC? And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how would I find out the right slope? I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
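
    For reference, this is roughly how the three curve types are spelled in the Linux tc implementation for the sample hierarchy; the values are purely illustrative and it does not answer the questions above, it only shows where rt, ls and ul are attached:

      tc qdisc add dev eth0 root handle 1: hfsc default 20
      tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit
      # "A1": real-time curve with a steeper first segment for 20ms, plus a link-share curve
      tc class add dev eth0 parent 1:1 classid 1:10 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
      # "A2": link-share only
      tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 128kbit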

    Read the article

  • help fixing pagination class

    - by michael
    here is my pagination class. the data in the construct is just an another class that does db queries and other stuff. the problem that i am having is that i cant seem to get the data output to match the pagination printing and i cant seem to get the mysql_fetch_assoc() to query the data and print it out. i get this error: Warning: mysql_fetch_assoc(): supplied argument is not a valid MySQL result resource in /phpdev/pagination/index.php on line 215 i know that can mean that the query isnt correct for it to fetch the data but i have entered it correctly. sorry for the script being a little long. i thank all in advance for any help. <?php include_once("class.db.php"); mysql_connect("host","login","password"); mysql_select_db("iadcart"); class Pagination { var $adjacents; var $limit; var $param; var $mPage; var $query; var $qData; var $printData; protected $db; function __construct() { $this->db = new MysqlDB; } function show() { $result = $this->db->query($this->query); $totalPages = $this->db->num_rows($result); $limit = $this->limit; /* ------------------------------------- Set Page ------------------------------------- */ if(isset($_GET['page'])) { $page = intval($_GET['page']); $start = ($page - 1) * $limit; } else { $start = 0; } /* ------------------------------------- Set Page Vars ------------------------------------- */ if($page == 0) { $page = 1; } $prev = $page - 1; $next = $page + 1; $lastPage = ceil($totalPages/$limit); $lpm1 = $lastPage - 1; $adjacents = $this->adjacents; $mPage = $this->mPage; //the magic $this->qData = $this->query . " LIMIT $start, $limit"; $this->printData = $this->db->query($this->qData); /* ------------------------------------- Draw Pagination Object ------------------------------------- */ $pagination = ""; if($lastPage > 1) { $pagination .= "<div class='pagination'>"; } /* ------------------------------------- Previous Link ------------------------------------- */ if($page > 1) { $pagination .= "<li><a href='$mPage?page=$prev'> previous </a></li>"; } else { $pagination .= "<li class='previous-off'> previous </li>"; } /* ------------------------------------- Page Numbers (not enough pages - just display active page) ------------------------------------- */ if($lastPage < 7 + ($adjacents * 2)) { for($counter = 1; $counter <= $lastPage; $counter++) { if($counter == $page) { $pagination .= "<li class='active'>$counter</li>"; } else { $pagination .= "<li><a href='$mPage?page=$counter'>$counter</a></li>"; } } } /* ------------------------------------- Page Numbers (enough pages to paginate) ------------------------------------- */ elseif($lastPage > 5 + ($adjacents * 2)) { //close to beginning; only hide later pages if($page < 1 + ($adjacents * 2)) { for ($counter = 1; $counter < 4 + ($adjacents * 2); $counter++) { if ($counter == $page) { $pagination.= "<li class='active'>$counter</li>"; } else { $pagination.= "<li><a href='$mPage?page=$counter'>$counter</a></li>"; } } $pagination.= "..."; $pagination.= "<li><a href='$mPage?page=$lpm1'>$lpm1</a></li>"; $pagination.= "<li><a href='$mPage?page=$lastPage'>$lastPage</a></li>"; } elseif($lastPage - ($adjacents * 2) > $page && $page > ($adjacents * 2)) { $pagination.= "<li><a href='$mPage?page=1'>1</a></li>"; $pagination.= "<li><a href='$mPage?page=2'>2</a></li>"; $pagination.= "..."; for ($counter = $page - $adjacents; $counter <= $page + $adjacents; $counter++) { if ($counter == $page) { $pagination.= "<li class='active'>$counter</li>"; } else { $pagination.= "<li><a 
href='$mPage?page=$counter'>$counter</a></li>"; } } $pagination.= "..."; $pagination.= "<li><a href='$mPage?page=$lpm1'>$lpm1</a></li>"; $pagination.= "<li><a href='$mPage?page=$lastPage'>$lastPage</a></li>"; } else { $pagination.= "<li><a href='$mPage?page=1'>1</a></li>"; $pagination.= "</li><a href='$mPage?page=2'>2</a></li>"; $pagination.= "..."; for ($counter = $lastPage - (2 + ($adjacents * 2)); $counter <= $lastPage; $counter++) { if ($counter == $page) { $pagination.= "<li class='active'>$counter</li>"; } else { $pagination.= "<li><a href='$mPage?page=$counter'>$counter</a></li>"; } } } } /* ------------------------------------- Next Link ------------------------------------- */ if ($page < $counter - 1) { $pagination.= "<li><a href='$mPage?page=$next'> Next </a></li>"; } else { $pagination.= "<li class='next-off'> Next </li>"; $pagination.= "</div>"; } print $pagination; } } ?> <html> <head> </head> <body> <table style="width:800px;"> <?php mysql_connect("localhost","root","games818"); mysql_select_db("iadcart"); $pagination = new pagination; $pagination->adjacents = 3; $pagination->limit = 10; $pagination->param = $_GET['page']; $pagination->mPage = "index.php"; $pagination->query = "select * from tbl_products where pVisible = 'yes'"; while($row = mysql_fetch_assoc($pagination->printData)) { print $row['pName']; } ?> </table> <br/><br/> <?php $pagination->show(); ?> </body> </html>
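
    One thing stands out from the class as posted: $printData is only assigned inside show(), so calling mysql_fetch_assoc($pagination->printData) before show() runs hands mysql_fetch_assoc() a null - which is exactly the "not a valid MySQL result resource" warning. A sketch of the calling page with the order fixed (same class, nothing else changed); longer term the query execution would be cleaner split out of show() so the links can still be printed below the rows:

      $pagination = new Pagination;
      $pagination->adjacents = 3;
      $pagination->limit     = 10;
      $pagination->param     = $_GET['page'];
      $pagination->mPage     = "index.php";
      $pagination->query     = "select * from tbl_products where pVisible = 'yes'";
      $pagination->show();                                   // runs the query and sets $printData
      while ($row = mysql_fetch_assoc($pagination->printData)) {
          print $row['pName'];
      }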

    Read the article

  • Where is the bottleneck?

    - by jsymon
    There is a limit on connections somewhere along the line here... On a Windows Server 2008 machine, each request to a URL running on localhost takes ~3 seconds to complete. This is fine and normal for the URL. However, if I open the same localhost URL in about 10 tabs, and set them to reload all at the same time, they finish sequentially, 3 seconds after each other. Meaning the last tab has taken 30 seconds to load (3s x 10). What is especially odd is that Firebug reports each page as taking 3 seconds to load. Another point to add is that the status bar just sits at 'done' for the last tab until 3 seconds before completing, where it then changes to 'waiting for localhost'. I am praying there is some connection limit somewhere, otherwise this would be a disaster if more than one user ever visited the site at a time! Maybe a limit somewhere where one PC can't make more than 2 simultaneous requests to a URL at a given time?

    Read the article

  • How to legitimately work around ISP rate limits

    - by Derek Ting
    A lot of ISPs rate-limit the number of e-mails that can be sent from a particular IP address. What is the proper way to get around that rate limit? Our company has an iPhone application that sends many e-mails because of our large user base, and many of those e-mails go to different ISPs that rate-limit the number of messages coming from a specific IP. We do not send spam and we are a legitimate business. However, is there a better way to resolve this limitation than just getting a ton of IP addresses? Ideally, I wouldn't want to rely on a third-party service to send mail. However, if it's the only possible solution, we would consider it.

    Read the article

  • 421 Concurrent Connections - Ratelimit from helpdesk to rackspace server

    - by g18c
    We have the Kayako helpdesk running on our WHM Linux server. When e-mails come in from customers, notifications are sent out by Kayako to a number of staff whose mailboxes are hosted on Rackspace mail servers. I noticed a large queue in the Exim queued message viewer of WHM - when looking in the Exim logs I can see many lines like 2012-10-13 20:06:56 1TN72s-0007Cw-1l SMTP error from remote mail server after initial connection: host mx2.emailsrvr.com [173.203.2.32]: 421 Too many concurrent connections from this client. One client email results in about 5 emails to the Rackspace servers, perhaps 60 emails per hour on average - not a huge amount, but enough to cause messages to be rejected when sent in short bursts. Ideally, if we can limit the connections made to the Rackspace servers we can comply with their limit. For our requirements, sending 1 email every 10 seconds or so would be OK. Messages to all other servers should go through at normal rates; only mx1.emailsrvr.com and mx2.emailsrvr.com should have this connection limit policy applied. Is this possible?
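
    Exim's smtp transport has options aimed at exactly this: serialize_hosts forces one connection at a time to the listed hosts, and connection_max_messages caps how many messages go down each connection. A sketch for the remote_smtp transport in the WHM-generated exim.conf (option values illustrative):

      remote_smtp:
        driver = smtp
        # never open parallel connections to the Rackspace MX hosts
        serialize_hosts = *.emailsrvr.com
        # recycle a connection after a handful of messages
        connection_max_messages = 10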

    Read the article

  • Linq Left Outer Join

    - by Neil
    I am new to LINQ and am trying to convert a SQL query to LINQ: SQL: left outer join PRODUCT_BEST_USE pbu on pbu.PRODUCT_GUID = @uProductId and pbu.BEST_USE_GUID = bu.BEST_USE_GUID LINQ: from PBU in PRODUCT_BEST_USE.Where(PBU=>PBU.PRODUCT_GUID == p.PRODUCT_GUID).DefaultIfEmpty() When I add and PBU.BEST_USE_GUID equals BU.BEST_USE_GUID, I get an error: "A query body must end with a select clause or a group clause" Here is the full Linq query: from p in PRODUCT join BU in BEST_USE on p.CATEGORY_GUID equals BU.CATEGORY_GUID from PBU in PRODUCT_BEST_USE.Where(PBU=>PBU.PRODUCT_GUID == p.PRODUCT_GUID).DefaultIfEmpty() and PBU.BEST_USE_GUID equals BU.BEST_USE_GUID where p.PRODUCT_GUID == new Guid("d317752b-581d-4f43-92fa-4a4af91009f5") select new { BU.NAME, PBU.PRODUCT_BEST_USE_GUID }
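
    The "and ... equals ..." syntax only exists inside a join clause; with the Where/DefaultIfEmpty form of a left outer join, the second condition simply becomes another predicate in the Where. A sketch of the full query rewritten that way (PBU can be null, hence the cast on the projected column):

      var results =
          from p in PRODUCT
          join BU in BEST_USE on p.CATEGORY_GUID equals BU.CATEGORY_GUID
          from PBU in PRODUCT_BEST_USE
              .Where(x => x.PRODUCT_GUID == p.PRODUCT_GUID
                       && x.BEST_USE_GUID == BU.BEST_USE_GUID)
              .DefaultIfEmpty()
          where p.PRODUCT_GUID == new Guid("d317752b-581d-4f43-92fa-4a4af91009f5")
          select new { BU.NAME, PRODUCT_BEST_USE_GUID = (Guid?)PBU.PRODUCT_BEST_USE_GUID };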

    Read the article

  • Mail server hammering

    - by Rodrigo
    I've noticed a sharp increase in SMTP connections coming to my server. Investigating further, I figured out that there's a botnet hammering my SMTP server. I've tried to stop it by adding a rule to iptables: -N SMTP-BLOCK -A SMTP-BLOCK -m limit --limit 1/m --limit-burst 3 -j LOG --log-level notice --log-prefix "iptables SMTP-BLOCK " -A SMTP-BLOCK -m recent --name SMTPBLOCK --set -j DROP -A INPUT -p tcp --dport 25 -m state --state NEW -m recent --name SMTPBLOCK --rcheck --seconds 360 -j SMTP-BLOCK -A INPUT -p tcp --dport 25 -m state --state NEW -m recent --name SMTP --set -A INPUT -p tcp --dport 25 -m state --state NEW -m recent --name SMTP --rcheck --seconds 60 --hitcount 3 -j SMTP-BLOCK -A INPUT -p tcp --dport 25 -m state --state NEW -j ACCEPT That keeps them from hammering "too fast", however the problem remains: there are about 5 attempts per second, it's going insane, and I had to increase the maximum number of children for sendmail/dovecot. There are too many IPs to filter out manually, and simply moving SMTP to another port is not practical since I have many other clients on that server. I'm using sendmail with dovecot - any ideas for filtering this out more efficiently?
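
    On top of the rate rules, the connlimit match can cap how many simultaneous SMTP connections any single source address may hold open, which tends to hurt a botnet member more than a pure rate limit. A sketch (threshold illustrative):

      # allow at most 4 concurrent SMTP connections per source address; drop the rest
      iptables -I INPUT -p tcp --syn --dport 25 -m connlimit --connlimit-above 4 --connlimit-mask 32 -j DROP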

    Read the article

  • Why isn't "(ulimit -d 1000; firefox) &" working?

    - by MaxB
    I'm trying to limit the memory usage of firefox to prevent it from stalling the whole system with problematic web sites. I tried, in bash: (ulimit -d 1000; firefox) & This should limit the memory usage to 1000kB. Then I opened YouTube, and noticed, in top, that firefox is using 2.6% of the memory, or about 200MB, and not crashing. Clearly the limit is being ignored. Why is that, and how can I enforce it correctly?
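
    The likely reason, with a sketch of the usual workaround: ulimit -d only caps the data segment grown via brk(), and glibc satisfies large allocations with mmap(), which that limit never sees. Capping the whole address space with -v does take effect (the value is in kB, 1 GiB shown; note that a modern multi-process Firefox may still spawn child processes outside the limit):

      (ulimit -v 1048576; firefox) &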

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009: On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns. Has that amount of transparency happened? Is there a technet article somewhere? Say my score is limited by my Memory subscore of 5.9. A naive person would suggest: buy faster RAM. Which is wrong, of course. From the Windows help: If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9. You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What I am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information I've been able to compile so far, chiefly from 3 Windows blog entries, and an article: Memory subscore Score Conditions ======= ================================ 1.0 < 256 MB 2.0 < 500 MB 2.9 <= 512 MB 3.5 < 704 MB 3.9 < 944 MB 4.5 <= 1.5 GB 5.9 < 4.0GB-64MB on a 64-bit OS Windows Vista highest score 7.9 Windows 7 highest score Graphics Subscore Score Conditions ======= ====================== 1.0 doesn't support DX9 1.9 doesn't support WDDM 4.9 does not support Pixel Shader 3.0 5.9 doesn't support DX10 or WDDM1.1 Windows Vista highest score 7.9 Windows 7 highest score Gaming graphics subscore Score Result ======= ============================= 1.0 doesn't support D3D 2.0 supports D3D9, DX9 and WDDM 5.9 doesn't support DX10 or WDDM1.1 Windows Vista highest score 6.0-6.9 good framerates (e.g. 40-50fps) at normal resolutions (e.g. 1280x1024) 7.0-7.9 even higher framerates at even higher resolutions 7.9 Windows 7 highest score Processor subscore Score Conditions ======= ========================================================================== 5.9 Windows Vista highest score 6.0-6.9 many quad core processors will be able to score in the high 6 low 7 ranges 7.0+ many quad core processors will be able to score in the high 6 low 7 ranges 7.9 8-core systems will be able to approach 8.9 Windows 7 highest score Primary hard disk subscore (note) Score Conditions ======= ======================================== 1.9 Limit for pathological drives that stop responding when pending writes 2.0 Limit for pathological drives that stop responding when pending writes 2.9 Limit for pathological drives that stop responding when pending writes 3.0 Limit for pathological drives that stop responding when pending writes 5.9 highest you're likely to see without SSD Windows Vista highest score 7.9 Windows 7 highest score Bonus Chatter You can find your WEI detailed test results in: C:\Windows\Performance\WinSAT\DataStore e.g.
2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml <WinSAT> <WinSPR> <DiskScore>5.9</DiskScore> </WinSPR> <Metrics> <DiskMetrics> <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput> <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput> <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness> </DiskMetrics> </Metrics> </WinSAT> Pre-emptive snarky comment: "WEI is useless, it has no relation to reality" Fine, how do i increase my hard-drive's random I/O throughput? Update - Amount of memory limits rating Some people don't believe Microsoft's statement that having less than 4GB of RAM on a 64-bit edition of Windows doesn't limit the rating to 5.9: And from xxx.Formal.Assessment (Recent).WinSAT.xml: <WinSPR> <LimitsApplied> <MemoryScore> <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied> </MemoryScore> </LimitsApplied> </WinSPR> References Windows Vista Team Blog: Windows Experience Index: An In-Depth Look Understand and improve your computer's performance in Windows Vista Engineering Windows 7 Blog: Engineering the Windows 7 “Windows Experience Index”
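
    Besides the XML files, the formal WinSAT results are also exposed through WMI, which makes it easy to watch the subscores while experimenting. A sketch (the Win32_WinSAT class and these property names are as I recall them; verify locally, and use Get-WmiObject on older PowerShell versions):

      Get-CimInstance -ClassName Win32_WinSAT |
          Select-Object CPUScore, MemoryScore, GraphicsScore, D3DScore, DiskScore, WinSPRLevel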

    Read the article
