Search Results

Search found 48586 results on 1944 pages for 'page performance'.


  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can feed it tables of IP addresses and it will handle blocking based on them efficiently. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many individual rules to an iptables chain. ipt_recent would be awesome for doing this, and it also provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than ships with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather avoid from a patching, security and consistency perspective. Other than those two, nfblock looks like a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
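
    For reference, the ipset approach needs only a single iptables rule once the set exists, so the rule count stays constant no matter how many addresses are blocked. A minimal sketch, assuming a kernel and userland with ipset support (the set name and addresses below are illustrative):

        # create a hash-based set sized for several thousand addresses
        ipset create scrapers hash:ip maxelem 65536
        # add offending addresses, e.g. from a log-parsing script
        ipset add scrapers 192.0.2.15
        ipset add scrapers 198.51.100.23
        # one rule matches the whole set
        iptables -I INPUT -m set --match-set scrapers src -j DROP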

    Read the article

  • CentOS running Apache Tomcat keeps getting "java.net.SocketException: Too many open files"

    - by Gerard Moroney
    We're running Apache Tomcat 7.0.41 on CentOS 6 with Java version "1.7.0_21". We were getting a lot of "too many open files" errors, so I did some research. The consensus was that it was to do with the number of open files. So I did the following:
     - Increased max files in /etc/security/limits.conf:
          soft nofile 100000
          hard nofile 100000
     - Rebooted the server
     - Checked the limits were valid for the user that runs the process:
          [app_admin@xxx ~]$ ulimit -Hn
          100000
          [app_admin@xxx ~]$ ulimit -Sn
          100000
     - Monitored open files on the server using the lsof command
    What I observed was that when the total open files reached circa 13000 and Tomcat had around 4500 open files, the error reappeared. I am confused. I thought this would have resolved the problem, but clearly I don't fully understand the root cause or how to set the parameter correctly. To (maybe) help: I have not modified the server.xml file for Tomcat (although I'm tempted). I don't want to start fiddling with that and make things worse. I'm more than happy to share any more information if someone can give me some hints on where to start looking.
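
    For what it's worth, /etc/security/limits.conf is applied by PAM at login, so a Tomcat started from an init script at boot may never pick the new limit up. A quick sketch for checking what the running JVM actually got (the pgrep pattern is an assumption about how Tomcat was started):

        # the limit of the running process, which can differ from ulimit in a shell
        TOMCAT_PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
        grep 'Max open files' /proc/$TOMCAT_PID/limits
        # how many descriptors it currently holds
        ls /proc/$TOMCAT_PID/fd | wc -l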

    Read the article

  • HP Probook 4530s great specs, but lagging. Hard Drive?

    - by Mark
    I have this laptop, which has an i3 processor, 4 GB of memory and a 7200 RPM hard drive, so there is nothing wrong with the specs. Even with no applications open, simply closing and opening windows lags, as does opening the start menu or dragging icons across the desktop; sometimes even the cursor lags. So I checked out the Resource Monitor, and the processes generating disk activity are: svchost; Avast (my antivirus), but not much; and System (PID 4), which is using a huge chunk. The total disk activity fluctuates between 50% and 100%.

    Read the article

  • Enabling DMA in a PC with Windows 7

    - by Mugen
    I was checking my PC settings using AIDA64 (supposed to be the successor to Everest - it basically shows you detailed information about the hardware you currently have). For my ATA hard disk it shows the setting for DMA as "supported, disabled". But when I checked the Windows setting I see that it is actually enabled. How can I find out which is correct? And if it's disabled, what do I do to enable it? Thanks for your help. Here are some screenshots for this:

    Read the article

  • Why does dstat fail with the --tcp option?

    - by hugemeow
    Why does the --tcp option fail? Should I install some package? If yes, which package should I install? (This question is not the same as http://stackoverflow.com/questions/10475991/tcp-sockets-double-messages.)
        dstat --tcp
        Module tcp has problems. (Cannot open file /proc/net/tcp6.)
        None of the stats you selected are available.
    What is the problem with the tcp module, and how do I find it? What should I do in order to fix this issue?
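
    The error message points at /proc/net/tcp6, which only exists when the kernel's IPv6 support is loaded, so one plausible thing to check (a guess, not a confirmed fix) is:

        ls /proc/net/tcp6              # present only when IPv6 support is loaded
        lsmod | grep ipv6              # is the ipv6 module loaded at all?
        modprobe ipv6 && dstat --tcp   # if it was simply not loaded, this may be enough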

    Read the article

  • ft_stopword_file not picked up

    - by Alex Holsgrove
    I have a VPS server with a company called Webfusion. I want to remove some or all of the FULLTEXT stopwords, because some specific words need to be searchable within my DB content. I opened /etc/mysql/my.cnf and added the line ft_stopword_file="". I restarted the mysql service, ran a repair table and then tried my MATCH query, with no success. I ran SHOW VARIABLES LIKE 'ft_%' and it simply shows (built-in) next to the stopword file. I am running WAMP on my workstation, and whilst I realise this isn't configured the same as a commercial VPS, the above method worked just fine there. Could someone please offer some guidance?
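
    For reference, the usual checklist for this setting is: put the directive under the [mysqld] section of the file the server actually reads, restart, confirm the variable changed, and then rebuild the FULLTEXT indexes, since MyISAM keeps them built against the old stopword list. A sketch (database and table names below are placeholders):

        # confirm the directive sits under [mysqld] in the file mysqld reads
        grep -A 5 '\[mysqld\]' /etc/mysql/my.cnf
        # after the restart, the variable should no longer show (built-in)
        mysql -e "SHOW VARIABLES LIKE 'ft_stopword_file';"
        # rebuild the FULLTEXT index against the new stopword list (MyISAM)
        mysql your_db -e "REPAIR TABLE your_table QUICK;"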

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size <5MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault I have decided on a structure of directories like this:
        000/000/000000001.jpg
        ...
        236/519/236519107.jpg
    This structure will allow me to save up to 1'000'000'000 images, as I'll store a maximum of 1'000 images in each leaf. I've created it; from a theoretical point of view it seems OK to me (though I have no experience with this), but I want to find out what will happen when there are directories full of files in there. A question about creating this structure: is it better to create it all in one go (takes approx 50 minutes on my PC) or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this OK? I thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring things as follows:
     - How much time does it take for an image to be saved when there is little or no space used?
     - How does this change when the space starts to be used up?
     - How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files?
    Does launching the command sync; echo 3 | sudo tee /proc/sys/vm/drop_caches make any sense at all? Is it the only thing I have to do to have a clean start if I want to start over again with my tests? Do you have any suggestions or corrections?
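
    As a rough sketch of the kind of timing run described above (the /data/images base path is an assumption; any leaf path works):

        # start cold: flush dirty data and drop the page cache, as in the question
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
        # time one ~5 MB write into a random leaf, forcing it to disk before dd exits
        dd if=/dev/urandom of=/data/images/236/519/236519107.jpg bs=1M count=5 conv=fsync
        # drop caches again, then time a read back from a different leaf
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
        dd if=/data/images/000/000/000000001.jpg of=/dev/null bs=1M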

    Read the article

  • Oracle Hangs on Responses Intermittently

    - by Ryan Cook
    I want to preface this with the fact that I am a developer, not even close to a DBA, and I am new to Oracle. OK, here it goes: I have a Java application which uses Spring and Hibernate. It's a simple CRUD app and I will leave the details out as I don't think they are the issue. I have noticed that my app runs fine when I use MySQL, but when I use an Oracle 10.2 server, every 7th-10th request hangs for 5-10 seconds. My Oracle installation was done by me using all defaults, same as the MySQL install. I don't even know where to start looking. Any ideas? Thanks in advance, and sorry that I lack the details that are most likely required for help.

    Read the article

  • How to test server throughput

    - by embwbam
    I've always used apache benchmark to try to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run apache benchmark on a simple hello world server it can handle 2500 requests per second or so. However, if I put a timeout in the hello world function, so that it responds after 2 seconds, apache benchmark reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, it goes up. This makes sense, because apache benchmark is basically sending out requests in batches of 100, which come back every 2 seconds. 100 requests / 2 seconds = 50 requests / second If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit, I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
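
    For reference, with a fixed 2-second response delay the measured rate is roughly concurrency divided by delay, so the server only becomes the bottleneck once -c is pushed well past that. A sketch of the kind of run meant here (URL and counts are illustrative), after raising the descriptor limit on the load-generating box:

        # raise the open-file limit in the shell that will run ab
        ulimit -n 65536
        # 1000 concurrent connections against the delayed hello-world endpoint
        # expected ceiling from the delay alone: 1000 / 2 s = 500 requests/second
        ab -n 20000 -c 1000 http://localhost:8000/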

    Read the article

  • NFS robustness or weaknesses

    - by Thomas
    I have two web servers with a load balancer in front of them. They both have mounted an NFS share so that they can share some common files, like images uploaded from the CMS and some runtime-generated files. Is NFS robust? Are there any specific weaknesses I should know about? I know it does not support file locking, but that does not matter to me: I use memcache to emulate file locking for the runtime-generated files. Thanks

    Read the article

  • How to redirect (or Alias) jump page with Apache

    - by Meltemi
    I'm not an Apache expert but need to make a small change to a web server. We are introducing a "jump page" URL that is different from a primary URL (for tracking reasons):
        /productA/index.html
        /productA/jump_index.html
    Basically I want to log that jump_index.html was requested and then return index.html. I don't want the client to wait 8 seconds or so for a redirect. How should we be handling this? Simply symlink (or alias) the file in the filesystem? Use mod_alias AliasMatch (if so, how exactly)? Something better still?
    Edit: mod_rewrite in httpd.conf:
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{REQUEST_METHOD} ^TRACE
            RewriteRule .* - [F]
        </IfModule>
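
    One way to do this without a client-visible redirect is an internal rewrite: the access log still records the request for jump_index.html, but Apache serves the content of index.html. A sketch that could sit alongside the existing rules above (written for server-config context, not .htaccess):

        RewriteEngine On
        RewriteRule ^/productA/jump_index\.html$ /productA/index.html [PT,L]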

    Read the article

  • High Load - Low IO - Low CPU usage

    - by devup
    I have a system whose load is rather high. As you can see from the top output below, CPU usage and I/O is negligible:
        top - 17:31:59 up 4 days, 2:34, 2 users, load average: 1.00, 0.99, 1.00
        Tasks: 71 total, 1 running, 70 sleeping, 0 stopped, 0 zombie
        Cpu(s): 2.0%us, 2.0%sy, 0.0%ni, 95.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   960720k total, 707288k used, 253432k free,  67328k buffers
        Swap: 2811896k total,   2644k used, 2809252k free, 528928k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        15310 root      20   0  2512  1128  888 R  2.1  0.1   0:00.05 top
    I would appreciate any assistance with isolating the cause(s) of high load for when I/O and CPU are not factors.
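
    One thing worth checking here: the Linux load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, so a single process stuck waiting on I/O that never completes (NFS, a failing disk, a hung driver) produces a constant load of about 1 with an idle CPU. A quick sketch for spotting such a task:

        # list tasks currently runnable (R) or in uninterruptible sleep (D)
        ps -eo state,pid,cmd | awk '$1 ~ /^[DR]/'
        # sample repeatedly; a task that stays in D is the usual culprit
        for i in $(seq 10); do ps -eo state,pid,cmd | grep '^D'; sleep 2; done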

    Read the article

  • SQL Server not releasing Memory

    - by noob2487
    I am using SQL Server 2005. I am running a job which processes around 100K records. The job runs fine; it takes around 45 mins to execute, which is good. But after the job has finished, I can see the SQL Server 2005 instance still holding around 900 MB of memory. I waited for around 2 hrs but that memory was not released. Is there any process which takes care of memory here, something like GC (unpredictable)? Or am I doing something wrong?
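
    For context, SQL Server holds on to buffer-pool memory by design and normally only trims it under operating-system memory pressure, so this is usually expected behaviour rather than a leak. If the instance must be kept smaller, the usual knob is 'max server memory'; a sketch from a command prompt with sqlcmd (the 1024 MB value is just an example):

        sqlcmd -S localhost -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -S localhost -E -Q "EXEC sp_configure 'max server memory (MB)', 1024; RECONFIGURE;"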

    Read the article

  • How come my Intel 520 180GB SSD performs extremely poorly?

    - by Willem
    I recently installed a new Intel 520 series 180GB SSD in my brand new MacBook Pro. The system is as follows:
        Model: MacBook Pro 15-inch, Late 2011 (MacBookPro8,2)
        Processor: 2.4 GHz Intel Core i7
        Memory: 16 GB 1333 MHz DDR3
        Graphics: AMD Radeon HD 6770M 1024 MB
        Software: Mac OS X Lion 10.7.3
        Main Drive Bay: Intel 520-series 180GB SATA-3 (6GB/s negotiated link) SSD (Firmware: 400i) [80GB free]
        Optical Bay: Toshiba 5400 RPM 750GB SATA-2 HDD
        Trim: Enabled (according to Trim Enabler App)
    And here are the speeds I'm getting:
        Read: 412 MB/s
        Write: 186 MB/s
    What have I done wrong? I expected read and write both around 500 MB/s. I have seen benchmarks with lesser SSDs (even SATA-2) outperform my write speeds by far, and Intel 520 SSDs are supposed to be the top class of SSDs. The Trim Enabler report looks a bit odd compared to screenshots from their site. I also compared the S.M.A.R.T. attributes defined by Intel with my own S.M.A.R.T. attributes read using the smartctl tool from smartmontools, and they don't seem very compatible. I'm going to try and look for a S.M.A.R.T. attributes reader tool for OS X which might support the Intel 520 series.

    Read the article

  • New Oracle Database Performance and Tuning Specialization version available to all partners!

    - by Catalin Teodor
    Any partner working with the Oracle database should take advantage of the new Oracle Database Performance and Tuning specialization version. Have partners review the criteria and see what’s new in this specialization solution update. Partners should plan to take the new version as the Oracle Database 11g Performance Tuning specialization becomes non-qualifying as of August 1, 2015.

    Read the article

  • Customize specific websites on chrome's new tab page

    - by Ben
    Chrome's new tab page shows the 8 websites you visit most. Unfortunately, for one of the sites, 4chan.org, the thumbnail goes to the main page, which is a directory, not somewhere I want to go. I would like the 4chan.org thumb to instead open 4chan.org/u/, taking me directly there instead of to the main directory where I would have to manually find /u/. Anyone know how to make this happen? Thanks. Edit: I would like this to happen without totally destroying the new tab's default functionality.

    Read the article

  • Computer experiencing slowdowns and lockups despite low CPU usage

    - by user157145
    My setup: i5-2300, Nvidia GTX 550 Ti, 6 GB RAM, 600 W OCZ modular PSU. The machine was recently reformatted and is already experiencing a drastic slowdown as soon as Windows comes up, including repeated lockups in which various programs report they are not responsive, then recover after 10-30 seconds. I've checked the memory and the hard drive, both of which come out fine. Despite my plethora of worthless antivirus software, I'm forced to assume that my illicit downloading practices have led me into some computer trouble that I can't seem to pin down. I have used CCleaner, Spybot Search & Destroy and Malwarebytes, all of which have found nothing to indicate what is causing this massive slowdown. In addition, according to my resource monitor the computer is operating at only 30-50 percent CPU usage and 60 percent RAM usage, yet it takes 5-10 seconds to load files and open folders, with repeated lockups of multiple programs, especially Firefox, which seems to go unresponsive every 2-3 minutes. I used a program called OTL by OldTimer, but can't make any sense of the results it gave me. I have Avast, but it didn't find anything when I had it do a full system scan, so I'm thinking it's clueless (likewise Norton, AVG and Ad-Aware). I also have MSE, but it has yet to complete a full scan because it takes so long (I left it on last night, but when I woke up my computer had had a problem and had to restart). My hard drive has 300 GB out of 1 TB free, and I have already used HD Tune Pro, which said the drive was fine; it's not an SSD. I'm a noob with computers, I only have the drive that is currently inside the computer, and I'm not sure whether stuttering is the issue I'm suffering from. While typing this, Firefox has gone "not responsive" at least 5 times, each time for about 5-10 seconds, and when I tried Ctrl+Alt+Delete to bring up Windows Task Manager it took 20 seconds. Essentially, my computer goes super slow at bringing up anything or taking any action that opens a program or file, and has repeated incidents where I can't even click on whatever I'm trying to do because it locks up. The confusing thing about these incidents is that they happen right after restarting, when there are minimal programs running and the CPU and memory load is light. Any help or suggestions would be appreciated; thank you for taking the time to read this.

    Read the article

  • How to passively monitor for TCP packet loss? (Linux)

    - by nonot1
    How can I passively monitor the packet loss on TCP connections to/from my machine? Basically, I'd like a tool that sits in the background and watches TCP ack/nak/re-transmits to generate a report on which peer IP addresses "seem" to be experiencing heavy loss. Most questions like this that I find on SF suggest using tools like iperf, but I need to monitor connections to/from a real application on my machine. Is this data just sitting there in the Linux TCP stack?
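
    To answer the last part at least partially: yes, the kernel already keeps retransmission statistics, both system-wide and per socket, so a background tool could poll these rather than sniff traffic. A minimal look at the raw data (not a per-peer report by itself):

        # system-wide TCP retransmission counters; diff these over time
        netstat -s | grep -i retrans
        # per-connection detail, including retransmit counts, for established sockets
        ss -ti state established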

    Read the article

  • HD latency measurement using bonnie++ on different machines with different RAM size

    - by j0nes
    Hello, I have run bonnie++ v1.96 on two different servers without any additional load. One server is a "physical" Dell server with 32GB RAM; the other one is a virtual instance with 14GB RAM. I have read in the bonnie manuals that I should use twice the size of RAM in my bonnie runs, so I used 64GB on the physical machine and 28GB on the virtual machine. Now I want to compare the results, and I am wondering whether the results are comparable at all. The most interesting part is the latency part - on the physical machine, the values are about 10 times higher than on the virtual machine! Can I take these results seriously (e.g. the virtual machine's disk is much, much faster) or does the different RAM size tamper with the results? Thanks! Jonas
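
    For what it's worth, passing the RAM size explicitly keeps the 2x-RAM file sizing visible in the output and makes the two runs easier to compare side by side. A sketch following the sizes in the question (sizes in MiB; the target directory and machine labels are illustrative):

        bonnie++ -d /mnt/test -r 32768 -s 65536 -n 0 -m physical-dell
        bonnie++ -d /mnt/test -r 14336 -s 28672 -n 0 -m virtual-instance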

    Read the article

  • Coldfusion 8 Application Crashes Under Heavy Load

    - by KM01
    Hello, we have a CF8 app that runs for 20-25 minutes before crashing under heavy load (~1200 users). This load is generated by our load testing tool: 1200 users ramped up in 5 mins (the approximate behavior of our users), running for an hour. We have this app on Solaris 10, Apache 2, JRun 4 and Oracle 10g; the Java version is 1.6. During the initial load tests, the thread dumps pointed to monitor deadlocks that pointed to sessions:
        "jrpp-173": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1"
        "scheduler-1": waiting to lock monitor 0x026c3ce0 (object 0x6abe2f20, a coldfusion.monitor.memory.SessionMemoryMonitor$TopMemoryUsedSessions), which is held by "jrpp-167"
        "jrpp-167": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1"
    We increased the number of sessions relative to the number of CPUs (48 simultaneous threads against 32 CPUs), and the deadlock went away. While varying the simultaneous threads helped a little bit in terms of response time, the CF server still tanked in 20-25 minutes during all of these tests. We ran more thread dumps, and saw a thread locking a monitor, e.g.:
        "jrpp-475" prio=3 tid=0x02230800 nid=0x2c5 runnable [0x4397d000]
           java.lang.Thread.State: RUNNABLE
            at java.util.HashMap.getEntry(HashMap.java:347)
            at java.util.HashMap.containsKey(HashMap.java:335)
            at java.util.HashSet.contains(HashSet.java:184)
            at coldfusion.monitor.memory.MemoryTracker.onAddObject(MemoryTracker.java:124)
            at coldfusion.monitor.memory.MemoryTrackerProxy.onReplaceValue(MemoryTrackerProxy.java:598)
            at coldfusion.monitor.memory.MemoryTrackerProxy.onPut(MemoryTrackerProxy.java:510)
            at coldfusion.util.CaseInsensitiveMap.put(CaseInsensitiveMap.java:250)
            at coldfusion.util.FastHashtable.put(FastHashtable.java:43)
            - locked <0x6f7e1a78> (a coldfusion.runtime.Struct)
            at coldfusion.runtime.CfJspPage._arrayset(CfJspPage.java:1027)
            at coldfusion.runtime.CfJspPage._arraySetAt(CfJspPage.java:2117)
            at cfvalidation2ecfc1052964961$funcSETUSERAUDITDATA.runFunction(/app/docs/apply/cfcs/validation.cfc:377)
    As you can see in the last line above, there were several references to CFMs and CFCs, and those lines have "cflock" tags, which were scoped to the "application". We (the dev team) then changed them to be scoped to a "name". After more load tests there is no locking going on and there are no deadlocks, but now the application tanks in 7-10 minutes. We've gotten system, network and DB reports from the respective admins, and those resources are not being taxed; we've even watched the server stats with server monitor, top and prstat, run sar reports, etc. So we believe it is an issue with the CF server or maybe the JVM. I am running out of ideas as to what else we can try. Disclaimer: I am not a CF developer or admin. I am just running the load test, analyzing the reports, threads etc., sharing the results with the dev and admin teams, and trying the next change, and so on. So far no dice. Has anyone run into something similar? How did you go about diagnosing and troubleshooting? All thoughts and pointers welcome. Thank you for your time! KM

    Read the article

  • Windows: why does 8 GB of RAM feel like a few MB?

    - by Desmond Hume
    I'm on Windows 7 x64 with a 4-core Intel i7 and 8 GB of RAM, but lately it feels like my computer's "RAM" is located solely on the hard drive. In Task Manager, the total amount of memory used by the processes in the list is only about 1 GB. What has been happening on my computer for a few days now is that one program (Cataloger.exe) is continually processing large quantities of (rather big) files, repeatedly opening and reading them for the purposes of cataloging. It doesn't grow much in memory and stays at about 90 MB, yet the amount of data it processes in, say, 30 minutes can be measured in gigabytes. So my guess was that Windows file caching has something to do with it. After some research on the topic, I came across a program called RamMap that displays detailed info on a computer's RAM. To me it looks like Windows keeps huge amounts of no-longer-needed data in RAM, redirecting new RAM allocation requests to the pagefile on the hard drive. Even when I close Cataloger.exe, RamMap reports the size of the mapped file as about the same for a long time afterwards. And it's not just this particular program: earlier I noticed that a similar slowdown occurred after massive file operations with other programs, so it's really not an exception. Whatever it is, it slows down the computer by like 50 times. Opening a new tab in Chrome takes 20-30 seconds, and opening a new program can take up to a minute. Due to the slowdown, some programs even crash. So what do you think: is the problem hiding in file caching or somewhere else? How do I solve it?

    Read the article
