Search Results

Search found 1006 results on 41 pages for 'limits'.

Page 3 of 41

  • Determine nginx reverse-proxy load limits

    - by Aaron
    Hi all: I have an nginx server (CentOS 5.3, Linux) that I'm using as a reverse-proxy load balancer in front of 8 Ruby on Rails application servers. As the load on these servers increases, I'm beginning to wonder at what point the nginx server will become a bottleneck. The CPUs are hardly used, but that's to be expected. The memory seems fine. No IO to speak of. So is my only limitation bandwidth on the NICs? Currently, according to some cacti graphs, the server is hitting around 700 Kbps (5-minute average) on each NIC during high load, which I would think is still pretty low. Or will the limit be in sockets or some other operating-system resource? Thanks for any thoughts and insights. Aaron
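
    For reference, two quick sanity checks along these lines (a rough sketch; the 1 Gbps NIC capacity is an assumption, and the 700 Kbps figure is the one from cacti above):

        import resource

        # Check 1: NIC headroom, using the figures above (assumes 1 Gbps NICs).
        print(f"NIC utilisation: {700 / 1_000_000:.4%}")   # ~0.07% of 1 Gbps

        # Check 2: the open-file ceiling, since every proxied request holds
        # sockets on both sides. (This shows the limit the current shell
        # inherits; the nginx workers may inherit a different one from init.)
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        print(f"file descriptors: soft={soft} hard={hard}")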

  • CPU Limits for Application Pools in IIS 7.5

    - by Kyle Brandt
    I see that in IIS 7.5 I can set a CPU % utilization limit, over a specified interval, for an application pool. I can also have it kill the worker process if this limit is violated. If I tell it to do this, will the worker process automatically restart after it is killed, or is manual intervention required? Over at Stack Overflow there is a mention that it can be restarted at the completion of the interval...

  • Ubuntu whois package and request limits

    - by Sam Hammamy
    I'm writing a Django app with a form that accepts an IP and does a whois lookup on the discovered domain names. I've found the Ubuntu whois package, which I plan to call from a Python subprocess, reading stdout into a StringIO and then parsing for things like Registrar, Name Servers, etc. My question is: there seem to be many paid whois services, which suggests there must be a reason people don't just use this Ubuntu package. I'm wondering whether the whois servers the package queries impose a limit on the number of requests from a single IP? I will probably be making 250 domain lookups per IP, or maybe more. Also, I've found that some domains aren't searchable:

    qmul.ac.uk is searchable
    kat.ph is not searchable
    ahram.org.eg is not searchable

    Any particular reason for that?
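
    For concreteness, a minimal sketch of the subprocess approach (assuming the stock whois client is on the PATH; the field labels are assumptions, since registries format their output very differently, which may itself be one reason the paid services exist):

        import subprocess

        def whois_lookup(domain, timeout=10):
            """Run the system whois client and pull out a few common fields."""
            out = subprocess.run(
                ["whois", domain], capture_output=True, text=True, timeout=timeout
            ).stdout
            fields = {}
            for line in out.splitlines():
                key, sep, value = line.partition(":")
                key = key.strip().lower()
                if sep and key in ("registrar", "name server", "nserver"):
                    fields.setdefault(key, []).append(value.strip())
            return fields

        print(whois_lookup("qmul.ac.uk"))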

  • MaxClients, Server Limits etc

    - by Moe
    Hello, I'm having some problems with my server. It's getting quite a bit of traffic, is very slow, and is sometimes inaccessible to my users. Here are the server specs:

    CPU: Intel(R) Xeon(R) E5620 @ 2.40GHz - 16 processors
    RAM: 2GB

    The values in the Apache config are:

    StartServers: 5
    MaxSpareServers: 10
    MinSpareServers: 5
    MaxClients: 150
    ServerLimit: 256
    MaxRequestsPerChild: 1000
    KeepAlive: On
    KeepAliveTimeout: 5
    MaxKeepAliveRequests: 100
    TimeOut: 300

    What would be optimal values for a server of this configuration, to support the maximum number of users at a reasonable speed without killing the server? Thank you.
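
    A rough sizing sketch for prefork-style setups (all numbers below are assumptions except the 2 GB of RAM from the specs; measure real per-process resident size with top/ps before trusting the result):

        ram_mb = 2048              # total RAM from the specs above
        reserved_mb = 512          # OS, MySQL, cron, etc. -- adjust to taste
        per_process_mb = 25        # typical Apache prefork + PHP child size

        max_clients = (ram_mb - reserved_mb) // per_process_mb
        print(max_clients)         # ~61, well under the configured 150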

  • How to legitimately work around ISP rate limits

    - by Derek Ting
    A lot of ISPs rate-limit the number of e-mails sent from a particular IP address. What is the proper way to work around that rate limit? Our company has an iPhone application that sends many e-mails because of our large user base, and many of those e-mails go to ISPs that rate-limit the number of messages coming from a single IP. We do not send spam and we are a legitimate business. Is there a better way to resolve this limitation than just acquiring a ton of IP addresses? Ideally, I wouldn't want to rely on a third-party service to send mail, but if it's the only possible solution, we would consider it.
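
    One legitimate tactic is simply to pace deliveries per receiving domain instead of bursting them from one IP; a minimal sketch (the domains and intervals are made-up numbers, and send_fn stands in for whatever actually does the SMTP delivery):

        import time
        from collections import defaultdict

        MIN_INTERVAL = {"exampleisp.com": 1.0, "otherisp.net": 0.5}  # seconds
        last_sent = defaultdict(float)

        def throttled_send(rcpt, send_fn):
            domain = rcpt.rsplit("@", 1)[-1].lower()
            wait = last_sent[domain] + MIN_INTERVAL.get(domain, 0.2) - time.monotonic()
            if wait > 0:
                time.sleep(wait)   # space deliveries out instead of bursting
            send_fn(rcpt)
            last_sent[domain] = time.monotonic()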

  • Apache2/mod_fcgid/PHP Process Limits Not Respected

    - by Daniel
    I've recently moved to Apache2 / mod_fcgid / PHP from nginx / php_fpm. This is the second server on which I've made this migration, but it's used much less frequently than the first, which is working like a charm. The problem is in the PHP processes that it's spawning. Looking at the mod_fcgid documentation, the default for killing idle processes appears to be 300 seconds; I've changed that to 20. At this point, I'd be fine if 300 worked - but it's not happening. The server has been running for nearly a day now, and server-status shows 12 active processes:

    Process name: php5
    Pid    Active  Idle    Accesses  State
    19243  84879   14420   11        Ready

    Process name: php5
    Pid    Active  Idle    Accesses  State
    20954  82143   149     22        Ready
    20947  82149   149     22        Ready
    20953  82143   149     13        Ready

    Process name: php5
    Pid    Active  Idle    Accesses  State
    20589  82765   23644   72        Ready

    Process name: php5
    Pid    Active  Idle    Accesses  State
    17663  86103   2034    117       Ready

    Process name: php5
    Pid    Active  Idle    Accesses  State
    19862  83961   1976    91        Ready

    Process name: php5
    Pid    Active  Idle    Accesses  State
    18495  85825   5164    18        Ready

    Process name: php5
    Pid    Active  Idle    Accesses  State
    25463  75109   23948   24        Ready

    Process name: php5
    Pid    Active  Idle    Accesses  State
    2466   60019   60016   2         Ready

    Process name: php5
    Pid    Active  Idle    Accesses  State
    20729  82541   12592   23        Ready

    Process name: php5
    Pid    Active  Idle    Accesses  State
    22135  80616   46361   6         Ready

    PHP applications are not being served at this point - Apache is returning a 503. However, it is still serving the server-status module, and mod_mono/Mono 2.10 applications are still being served. The problem is only with PHP. /etc/apache2/mods-available/fcgid.conf:

    <IfModule mod_fcgid.c>
      AddHandler fcgid-script .fcgi
      FcgidConnectTimeout 10
      FcgidMaxRequestsPerProcess 500
      FcgidIdleTimeout 20
      FcgidFixPathinfo 1
      FcgidMaxProcesses 10
    </IfModule>

    (heh - FcgidMaxProcesses isn't being respected either, since 12 are running...) Of course, fcgid.conf is symlinked in mods-enabled.

  • Default uTorrent upload limits?

    - by jasondavis
    I am using uTorrent for my torrents, and when I open a new torrent I must manually right-click the item, click Bandwidth Allocation, and set it to High; then I must repeat the first two steps, click Set Upload Limit, and pick the desired amount, which for me is 1 kB/s. This is very annoying to do on every single torrent. Is there a way to set these as the default values in uTorrent?

  • ESXi 4.0 Licensing Limits

    - by Ramsey
    From what I understand, the software is free and you just need to register to remove the 60-day limitation. Does this mean I have to register every time I install ESXi on a new machine? Or can I use the same key for different ESXi 4.0 installations?

  • Determining Performance Limits

    - by JeffV
    I have a number of Windows processes that pass messages between them at a high rate using TCP to localhost. Aside from testing on actual hardware, how can I assess what my hardware limit will be? These applications are not doing CPU-intensive work - mostly decomposing and combining messages, scanning them for special flags in the data, etc. The message size is typically 3 kB and the rate is typically ~10k messages per second - roughly 30 MB per second between processing stages, of which there may be 10 or more. For this type of application, what should I look at to assess performance? What do I look for, performance-wise, in a server? I am currently running a Xeon L5408 with 32 GB of RAM, but I'm assuming cache matters more than actual RAM size, since I am barely touching the RAM.
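
    As a starting point, a crude localhost throughput probe along these lines (a sketch, not a proper benchmark - single connection, no message framing, and the 3 kB / 3 second numbers are just the figures above and an arbitrary run length):

        import socket
        import threading
        import time

        MSG = b"x" * 3072      # ~3 kB, matching the message size above
        DURATION = 3.0         # seconds to run the probe

        def sink(server_sock):
            # Accept one connection and drain it as fast as possible.
            conn, _ = server_sock.accept()
            while conn.recv(65536):
                pass

        srv = socket.socket()
        srv.bind(("127.0.0.1", 0))   # any free port
        srv.listen(1)
        threading.Thread(target=sink, args=(srv,), daemon=True).start()

        cli = socket.create_connection(srv.getsockname())
        sent, deadline = 0, time.monotonic() + DURATION
        while time.monotonic() < deadline:
            cli.sendall(MSG)
            sent += 1
        cli.close()

        print(f"{sent / DURATION:,.0f} msgs/s, "
              f"{sent * len(MSG) / DURATION / 1e6:.0f} MB/s")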

  • Removing resource limits on Solaris 10

    - by mikeydonkey
    How should one remove all potentially artificial resource limitations for a process? I just saw a case where a server application consumed resources until some limitation was hit. Other shells into the same server were all extremely slow, waiting for something to free up (prstat took 5 minutes to start). It wasn't a CPU/memory-related problem, so I think it has something to do with ulimits / projects. I've already managed to set the maximum open files to 500,000, and it helped a little. However, there is something else, and I cannot figure out which resource is maxed out. I can probably get an in-house administrator to check this, but I would like to understand how I can make sure there aren't any limitations! If you think I am going the wrong way (that it would be better to figure out which limitation specifically should be tuned, etc.), please feel free to point me to the correct approach. I know the technical stuff - it's just Solaris 10 that is giving me a headache :/
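
    As a first pass at seeing which soft limits a process actually inherits, something like this (a sketch; the set of RLIMIT_* names exposed varies between the Solaris and Linux builds of Python, and project-level Solaris controls won't show up here):

        import resource

        # Print every rlimit this Python build knows about, as inherited by
        # the current process. -1 (RLIM_INFINITY) means unlimited.
        for name in sorted(dir(resource)):
            if name.startswith("RLIMIT_"):
                soft, hard = resource.getrlimit(getattr(resource, name))
                print(f"{name:<22} soft={soft:<22} hard={hard}")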

  • Linux per-process resource limits - a deep Red Hat Mystery

    - by BobBanana
    I have my own multithreaded C program which scales in speed smoothly with the number of CPU cores. I can run it with 1, 2, 3, etc. threads and get linear speedup, up to about 5.5x on a 6-core CPU on an Ubuntu Linux box. I had an opportunity to run the program on a very high-end Sunfire X4450 with 4 quad-core Xeon processors, running Red Hat Enterprise Linux. I was eagerly anticipating seeing how fast the 16 cores could run my program with 16 threads. But it runs at the same speed as just TWO threads! Much hair-pulling and debugging later, I see that my program really is creating all the threads, they really are running simultaneously, but the threads themselves are slower than they should be. 2 threads run about 1.7x faster than 1, but 3, 4, 8, 10, and 16 threads all run at just 1.9x! I can see all the threads are running (not stalled or sleeping); they're just slow. To check that the HARDWARE wasn't at fault, I ran SIXTEEN copies of my program independently, simultaneously. They all ran at full speed. There really are 16 cores, they really do run at full speed, and there really is enough RAM (in fact this machine has 64 GB, and I only use 1 GB per process). So, my question is whether there's some OPERATING SYSTEM explanation - perhaps some per-process resource limit which automatically scales back thread scheduling to keep one process from hogging the machine. Clues are:

    - My program does not access the disk or network. It's CPU-limited. Its speed scales linearly on a single-CPU box in Ubuntu Linux with a hexacore i7 for 1-6 threads; 6 threads is effectively a 6x speedup.
    - My program never runs faster than a 2x speedup on this 16-core Sunfire Xeon box, for any number of threads from 2-16.
    - Running 16 copies of my program single-threaded runs perfectly, all 16 at once at full speed.
    - top shows 1600% of CPUs allocated. /proc/cpuinfo shows all 16 cores running at the full 2.9 GHz speed (not the low-frequency idle speed of 1.6 GHz).
    - There's 48 GB of RAM free; it is not swapping.

    What's happening? Is there some process CPU-limit policy? How could I measure it if so? What else could explain this behavior? Thanks for your ideas to solve this, the Great Xeon Slowdown Mystery of 2010!
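
    Two cheap checks for "is the OS confining this process?" (a Linux-oriented sketch; os.sched_getaffinity needs Python 3.3+, and the cgroup path is an assumption that varies by distro and version):

        import os

        # Is the scheduler confining this process to a subset of cores?
        print("cores visible:", len(os.sched_getaffinity(0)))   # Linux-only call
        print("cores online: ", os.cpu_count())

        # Is a cgroup CPU quota in force? (Path is an assumption; the cgroup
        # layout differs across distros and versions.)
        try:
            with open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us") as f:
                print("cgroup cfs quota:", f.read().strip())    # -1 = unlimited
        except OSError:
            print("no cgroup cpu quota file found")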

  • Office 2007 install limits?

    - by AppsByAaron
    I've been able to install Office 2003 on a couple of computers that I own without any trouble. However, I just got Office 2007, and I'm not able to install it on both my computer and my laptop. I know this may violate agreements and whatnot, but I'm curious to know whether this is something that Microsoft has finally started to crack down on. Any information is helpful. And I know that I'm going to get a few answers telling me that I'm "stealing" or whatever, so I don't need to hear about that. Thanks. ;)

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: 3 Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, with a myriad of other machines connected through good ol' 1000Base-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc.). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s, and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from 3 of the machines to the 4th, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port actually can reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately this is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded at a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.

  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Win7 (and maybe Vista, but I don't know for sure). I have tried all possible options, such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called 'details' pane stops providing details if more than 15 files are selected! WTH? I cannot understand the reason behind such an arbitrary limit, which doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. If changing the limit isn't possible, would it be possible for some programming genius to create a very small, optimized, lightweight, non-resource-hungry (you get the idea!) program whose sole purpose is to automatically click the "Show more details..." link in the details pane after every file selection? For bonus points, the program should wait a second or two after file selection is complete, so that selecting multiple files in quick succession doesn't hit the HDD repeatedly. Any takers for the challenge? :)

  • How to calculate bandwidth limits per user on WiFi network

    - by Lars
    A typical 802.11g access point can provide around 25 Mbps of bandwidth. How is that bandwidth shared among the users? Furthermore, how many users can be served by a single 802.11g access point in an environment with low interference and average web activity from the users? The goal is to use bandwidth limiting to avoid starvation in case some users start downloading files, streaming HD video, or doing some other bandwidth-intensive activity. Can someone break down the math on this?
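
    The back-of-envelope version looks something like this (the 25 Mbps figure is the one above; the per-user numbers are assumptions, picked only to illustrate the arithmetic):

        capacity_kbps = 25_000        # usable 802.11g throughput, from above
        avg_user_kbps = 150           # assumed "average web activity" draw
        per_user_cap_kbps = 1_000     # example per-user rate limit

        print("users at average load:", capacity_kbps // avg_user_kbps)        # 166
        print("users if all hit the cap:", capacity_kbps // per_user_cap_kbps) # 25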

  • Network load balancing, efficiency and limits?

    - by Vimvq1987
    I'm about to study NLB on Windows Server 2003. It addresses both of my current interests: scalability and high availability. But I don't know about its power in a production environment. Is NLB an efficient solution? How is it implemented in the real world? Is it popular? What are its limits? Thank you so much for answering my questions. :)

  • Workaround for API limits [closed]

    - by blunders
    Problem: I'm planning on building out a client-services company that requires access to APIs. Most APIs are limited by user, IP, etc. - and even though the API calls would be made on a per-client basis, there's no way to get usage that isn't tied to IPs. (Theoretical) Solution: Have each client install on their network a proxy/VPN that would allow my systems to connect and use their assigned usage. It's possible there's a better solution than the one I've thought of, but it's the only one I've been able to come up with.
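
    Mechanically, the client side could be an ordinary HTTP proxy, with calls routed per client; a sketch of the calling side (requests is a third-party package, and the proxy addresses and client names are placeholders):

        import requests   # third-party; pip install requests

        # Placeholder per-client proxy endpoints on the clients' networks.
        CLIENT_PROXIES = {
            "client-a": "http://10.1.0.5:3128",
            "client-b": "http://10.2.0.5:3128",
        }

        def call_api(client, url):
            # Usage is attributed to the client's IP, not ours.
            proxy = CLIENT_PROXIES[client]
            return requests.get(url, proxies={"http": proxy, "https": proxy},
                                timeout=30)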

  • Amazon and bandwidth limits

    - by Dave
    This question may sound weird to some of you, but I have never really used the cloud, and above all I'm still a beginner in web development, so I would be really thankful if someone could answer a few of my questions, even if they sound weird. I would like to deploy a simple website to Amazon; however, I'm concerned about bandwidth, as they charge $0.12/GB and I'm not able to set a budget limit. My problem is that I wouldn't like to pay for 1000 GB of bandwidth if someone for some reason decides to download one file constantly. Could some of you who have experience with Amazon tell me what happens if my app can handle, say, 50 req/sec at 30 kB/page? Does that mean that in the worst case I would have to pay for req/sec x seconds x days x page size = 50 x 60 x 60 x 24 x 30 x 30 kB = 3888 GB?
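
    That worst-case arithmetic does check out; spelled out (the $0.12/GB rate is the one quoted above):

        req_per_sec = 50
        page_kb = 30
        seconds_per_month = 60 * 60 * 24 * 30

        gb = req_per_sec * seconds_per_month * page_kb / 1_000_000
        print(f"{gb:,.0f} GB/month")              # 3,888 GB
        print(f"${gb * 0.12:,.2f} at $0.12/GB")   # $466.56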

  • Asus P6X58-E WS motherboard memory limits

    - by Arsen Zahray
    I've just ordered the Asus P6X58-E WS motherboard, and now I'm looking for memory for it. It has 6 slots, but the strange thing is that the specifications say it is limited to 24 GB of memory. I'm planning on using 8 GB Kingston KVR1333D3D4R9S/8G sticks with it; using those, I'd theoretically have 48 GB. Does the 24 GB limitation mean that even if I install all 6 sticks, I will still be limited to 24 GB of usable memory? Sorry if the question seems dumb, but I've never faced such a limitation before.

  • Setting CPU cores off-limits to all threads not specified (preferably in Windows 7)

    - by Shinrai
    I have a really specific machine configuration in the works that would really be helped if there were a way to do this... basically, what I'm looking for is the opposite of setting CPU affinity for a process. I want to be able to tell Windows: "No applications except [x] are allowed on [these cores]." Is there any mechanism whatsoever for doing this? (Yes, I am aware of some of the potential issues this could cause, and I normally would never fool with processor affinities, since the OS usually does a damned good job itself, but this is a pretty odd situation involving some software that is very CPU-bound and constantly having to wait on interrupts, DPCs, and things from other threads.)
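
    Short of true core reservation (which Windows doesn't expose), one approximation is inverting affinity by hand: pin everything else away from the reserved cores. A best-effort sketch using the third-party psutil package (the core numbers and protected PID are placeholders):

        import psutil   # third-party; pip install psutil

        RESERVED = {6, 7}                        # cores to keep clear (placeholder)
        ALL_CORES = set(range(psutil.cpu_count()))
        OTHER_CORES = sorted(ALL_CORES - RESERVED)

        def fence_off(protected_pids):
            """Push every process except the protected ones off the reserved
            cores. Best-effort: new processes aren't caught, so this needs
            to be re-run periodically."""
            for p in psutil.process_iter():
                try:
                    if p.pid in protected_pids:
                        p.cpu_affinity(sorted(ALL_CORES))  # may use any core
                    else:
                        p.cpu_affinity(OTHER_CORES)
                except (psutil.AccessDenied, psutil.NoSuchProcess):
                    pass   # system processes may refuse; skip them

        fence_off({1234})   # 1234 = PID of application [x] (placeholder)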

  • Pushing new mailbox limits for Exchange 2003 accounts

    - by Dobler
    There is a problem I have every time I set up an account for a user on Exchange 2003: after I change the mailbox limit to unlimited on the Exchange tab of the user's properties, the change does not seem to take effect right away. I remember reading somewhere that it takes several hours for the change to propagate to the database. I also remember typing some command that pushed the new settings immediately. What is that command, and why is it so difficult to find the answer to this problem on Google?

  • HTML5 localStorage restrictions and limits

    - by Chuck
    HTML5's localStorage databases are usually size-limited — standard sizes are 5 or 10 MB per domain. Can these limits be circumvented by subdomains (e.g. example.com, hack1.example.com and hack2.example.com all have their own 5 MB databases)? And is there anything in the standard that specifies whether parent domains can access their children's databases? I can't find anything, and I can see arguments for doing it either way, but it seems like there has to be some standard model.

  • Limits of GUIs in C#

    - by xarzu
    What are the limits of writing a C# app when you want a truly impressive GUI? At what point does one have to leave Visual C# behind and go into WPF? Also, if I choose to go with WPF, do I have to ditch the Visual Studio IDE and go with Expression Studio?
