Search Results

Search found 12310 results on 493 pages for 'andrew day'.

Page 456/493

  • How should I use my new SSD drive?

    - by jasondavis
    I just built a new PC the other day. Specs:

        Processor: Intel i7-930 quad core CPU
        CPU Cooler: COOLER MASTER Hyper 212
        Motherboard: ASRock X58 Extreme 3
        RAM/Memory: 6GB G.Skill triple channel DDR3 (3 sticks of 2GB, planning to get another kit to make it 12GB total soon)
        Operating System Hard Drive: Intel X25-M 80GB Mainstream SATA2 Solid State Drive
        Video Cards: 2 XFX ATI Radeon HD 4650 cards to run 3-4 monitors
        Case: Lian Li PC-B10 Midtower case
        Power Supply: Antec TruePower New TP-750 Blue 750W
        Operating System: Windows 7 Pro 64-bit

    Not sure if the specs are helpful at all, but I posted them just in case. I got everything put together and it's running great so far, but I need some advice/ideas/help/tips. I got the SSD in hopes of using it strictly for my Windows 7 install along with all the other programs I install. I am then going to get another drive or two just for data (video, music, photos, etc.). My plan is to install the new data drives and then, in Windows 7, change my "My Documents", "My Music", "My Video" and "My Photos" libraries to be located on the data drives instead of the OS SSD. I would ultimately like to install all my programs along with my Windows install on the SSD, create an IMAGE of the drive, and then 6 months down the road, if things are sluggish, just wipe the drive and restore my IMAGE with all my programs and settings still intact. So here are my questions: 1) How can I verify that TRIM is working on my new SSD? 2) Is there anything above that I missed that I should be doing? I think I once read that there is a page file or some sort of file that Windows changes a lot and that it should be moved off an SSD and onto my data drives. Does anyone know what I might have heard? If so, can you explain the pros and cons of doing such a thing, as well as how to do it? 3) Any tips or advice to get the best performance from all this? I built a pretty nice system and I just want to keep it that way as long as I can.
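
    A quick way to check question 1 on Windows 7 is to query the delete-notification (TRIM) setting from an elevated Command Prompt; this only confirms that Windows will issue TRIM commands, not that the drive honours them:

        fsutil behavior query DisableDeleteNotify
        :: DisableDeleteNotify = 0  -> TRIM commands are enabled
        :: DisableDeleteNotify = 1  -> TRIM commands are disabled

    The file mentioned in question 2 is most likely the page file (pagefile.sys); it can be moved to another drive under System Properties > Advanced > Performance > Settings > Advanced > Virtual Memory, at the cost of slower paging once RAM runs low.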

    Read the article

  • My network drive disappears from Mac OS Finder

    - by Mariusz
    I have recently bought a Netgear WNDR3800 router to use in my home network. But on the very day I installed it, I noticed strange behaviour in Finder and iTunes. Let me explain further. There is a Synology DS111 NAS attached to that router and two Macs with Mac OS X Lion. One of them is connected by a cable and the second one wirelessly. Before I changed my router to the new one mentioned above, Finder always used to display my NAS in its sidebar, so I could just click its network name to access the shared folders on it. But after I installed the WNDR3800, I can no longer access the NAS that way. It is no longer displayed. I always have to mount it manually by typing its IP address using Finder's 'Connect to Server' option. The same NAS supports Time Machine backups and has an inbuilt DLNA server, and the same situation applies there: I can't perform a backup because my NAS is no longer accessible in Time Machine preferences, and iTunes does not display it either (as a multimedia server), even though it used to before I installed that router. Importantly, everything works fine for a couple of minutes after I restart the router or the NAS. Even when I change the NAS's IP address it becomes accessible again in Finder, Time Machine and iTunes, but only for some time. Both of the Macs I mentioned behave the same way. All of these issues have been taking place since I installed the new router; before that, everything had worked fine. My old router was a Netgear WGR614v10. Would you be so kind as to tell me what you think could possibly be the reason for that behaviour? Which router settings should I look at more closely? I'm not a network specialist, but is it possible that some network packets are being blocked for some reason? I will be grateful for any clues you give me. Thank you.
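
    Items vanishing from the Finder sidebar, Time Machine and iTunes after a few minutes (and coming back after a restart or an IP change) is the classic signature of Bonjour/mDNS announcements being dropped somewhere between the NAS and the Macs. One way to check from either Mac is to browse for the services directly in Terminal and watch whether the NAS drops out of the listing after a few minutes (standard dns-sd usage; the service types are the usual ones a Synology advertises):

        dns-sd -B _afpovertcp._tcp local.   # AFP shares (Finder sidebar, Time Machine)
        dns-sd -B _smb._tcp local.          # SMB shares

    If the NAS does drop out, the router's multicast/IGMP and wireless-isolation settings (names vary by firmware) are the first place to look.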

    Read the article

  • How to Configure Source NAT (Private IP => Public IP Outbound)

    - by DavidScherer
    I'm running VMware ESXi (free) and have Zentyal SBS 3.2 running as a gateway. I have 5 public IPs (a /29; let's call them 69.1.1.1 - 69.1.1.5) and currently Zentyal is bound to 69.1.1.1 as the gateway, with the other 4 public IPs set as virtual interfaces in Zentyal (wan2-wan5). I have machines sitting on the private network (10.34.251.x) that, when going outbound (to Google, for instance), should be seen by the Internet as an IP other than the gateway (69.1.1.1); this is because our machines need to communicate with 3rd-party APIs that expect these requests to come from a specific IP. From what I could find, SNAT (Source NAT) in Zentyal is used to achieve this, but I'm not sure how to configure it and cannot find any specific documentation for it at Zentyal. I've tried setting this up a couple of different ways with no results, and at this point I have no idea if I'm going about this completely wrong, or if my lack of experience with networking and the associated terminology is preventing me from placing the correct values in the correct fields. I get the following form to set up "SNAT" rules in Zentyal - perhaps someone can offer some guidance and definitions for these fields?

        SNAT Address - Is this the public IP I want to masquerade as?
        Outgoing Interface - Should this be my external NIC (the one connected to the public 'net), or is it the "private" interface? It sounds as though this should be the external interface, as I want the traffic from the internal network sent out over this interface (using a different IP than normal, anyway).
        Source - Is this the source on the internal network (one of the private IPs?), a public IP I want to masquerade as, or something else entirely?
        Destination - Is this a place on the Internet (e.g. "only do this for the site Google.com"/an IP), or am I allowing myself to become confused again?
        Service - I'm assuming this lets me restrict which services the rule applies to, but is it a service on the internal network or a service being accessed on the external network?

    If I can offer any further details or information to make what I'm trying to do clearer, I will happily do so. Honestly, any kind of help here would be very appreciated. I'm not a NetOps person or anything even close; I spend most of my day writing code and my entire "team" at this company consists of "me, myself, and I", so while I try to broaden my KB at every possible opportunity, I can only learn so much so fast, and with networking especially there's just so much, coupled with a learning curve for each solution that (from my limited perspective) likes to use slightly different terminology than what I'm used to (and I don't exactly have the necessary experience to cross-reference this stuff with the stuff I already know in context).
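
    For reference, the rule Zentyal ends up generating is essentially a plain iptables source-NAT rule like the one below (a sketch only; the external interface name and the chosen public IP are assumptions based on the values above). Mapping it back to the form: SNAT Address is the public IP to rewrite to, Outgoing Interface is the external one, Source is the private host or subnet, and Destination/Service just narrow which traffic the rule matches.

        # traffic from the private LAN leaving the WAN interface is rewritten to 69.1.1.2
        iptables -t nat -A POSTROUTING -s 10.34.251.0/24 -o eth0 -j SNAT --to-source 69.1.1.2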

    Read the article

  • Linux software Raid 10 no superblock

    - by Shoshomiga
    I have a software RAID 10 with 6 x 2TB hard drives (RAID 1 for /boot); Ubuntu 10.04 is the OS. I had a RAID controller failure that put 2 drives out of sync and crashed the system. Initially the OS didn't boot up and dropped into initramfs instead, saying that the drives were busy, but I eventually managed to bring the RAID up by stopping and assembling the drives. The OS booted up and said that there were filesystem errors; I chose to ignore that, because it would remount the fs read-only if there was a problem. Everything seemed to be working fine and the 2 drives started to rebuild. I was sure that it was a SATA controller failure because I had DMA errors in my log files. The OS crashed soon after that with ext errors. Now it's not bringing up the RAID; it says that there is no superblock on /dev/sda2. I tried to reassemble manually with all the device names, but it still would not bring up the RAID 10, complaining about the missing superblock on sda2, and sda1 was also dropped from the RAID 1. When I ran examine on the RAID 10 it said that one of the initially failed drives is a spare, the other is spare rebuilding, and sda2 is removed. It seems that sda decided to fail right when the system was vulnerable to it, because when I boot up a live CD it spews out sda unrecoverable read failures. I have been trying to fix this all week but I'm not sure where to go with it now. I ordered more hard drives because I didn't have a complete backup, but it's too late for that now, and the only thing I could do is mirror all the hard drives onto the new ones (I'm not sure whether sda was mirrored without errors). On the internet I read that you can recover from this by recreating the array with the same options as when it was made, but because sda is failing I can't use it, and I don't want to risk using its mirror instead, so I'm waiting to get another hard drive. I'm also not sure whether to include the out-of-sync drives, or if I can actually use those instead to recover the array. Sorry if this is a mess to read, but I've been trying to fix this all day and it's late at night now; any thoughts on this would be greatly appreciated. I also did a memtest and changed the motherboard, in addition to everything else. EDIT: This is my partition layout:

        Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0009c34a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        2048      511999      254976   83  Linux
        /dev/sdb2          512000  3904980991  1952234496   83  Linux
        /dev/sdb3      3904980992  3907028991     1024000   82  Linux swap / Solaris
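
    Before recreating anything, a sketch of the usual first steps (the md device name is an assumption, and the failing sda is deliberately left out; --examine is read-only, so it is safe to run first):

        mdadm --examine /dev/sd[bcdef]2            # dump the superblocks that are still readable, note the event counts
        mdadm --stop /dev/md1                      # stop the half-assembled array (assuming the RAID 10 is md1)
        mdadm --assemble --force /dev/md1 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2

    --assemble --force picks the members with the freshest event counts and can often start a RAID 10 with one member missing; cloning the dying sda onto one of the new drives with ddrescue first is the safer route if its data is still needed.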

    Read the article

  • Gentoo box can't cURL or ping after restarting net.eth1

    - by Curlybraces
    Hi all, the following is completely baffling me. We currently have a Gentoo box which acts as our LAMP, DNS and DHCP server. It is assigned a static IP on the network. This server is connected directly to the internet via a BT Business Hub router. The server is also connected to a patch panel/switch port which connects the remaining office (around 10 PCs) to the server. Everything had been plain sailing until the other day, when the server was restarted. For some reason, only part of the network is now reachable, depending on which ethernet device was last restarted. Restarting net.eth0 allows the office server to cURL, ping, etc., but stops all networked PCs from accessing the internet. Restarting net.eth1 then restores internet for the network, but stops the server from curling, pinging, etc. again. However, even when the server can't ping or curl, I can still SSH and make remote MySQL connections from the server command line to other external servers that we own. Here's my route map (the router is 192.168.1.254):

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
        127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
        0.0.0.0         192.168.1.254   0.0.0.0         UG    0      0        0 eth1

    Here's my /etc/conf.d/net:

        iface_eth0="192.168.1.99 broadcast 192.168.1.255 netmask 255.255.255.0"
        iface_eth1="dhcp"

    None of the above has ever been changed, however. Things have just ceased to operate correctly, which makes me think it's a freshly added iptables rule. Here's the iptables filter table:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        DROP       tcp  --  ##.##.##.##          anywhere             tcp dpt:ssh
        ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
        ACCEPT     all  --  anywhere             anywhere
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:2199
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:3199
        ACCEPT     tcp  --  ##.###.###.##        anywhere             tcp dpt:http
        ACCEPT     tcp  --  ###.###.##.##        anywhere             tcp dpt:2199
        ACCEPT     tcp  --  ##.###.###.###       anywhere             tcp dpt:http
        ACCEPT     tcp  --  ##.###.##.##         anywhere             tcp dpt:http
        ACCEPT     tcp  --  ##.###.###.###       anywhere             tcp dpt:3128
        ACCEPT     udp  --  ##.###.###.###       anywhere             udp dpt:3128
        ACCEPT     tcp  --  ##.###.###.###       anywhere             tcp dpt:http
        ACCEPT     tcp  --  ##.###.###.###       anywhere             tcp dpt:https

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  anywhere             ##.###.###.##
        DROP       all  --  anywhere             ##.###.###.##
        ACCEPT     all  --  anywhere             anywhere             state NEW,ESTABLISHED

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     udp  --  anywhere             anywhere             udp spt:2199
        ACCEPT     udp  --  anywhere             anywhere             udp spt:4817
        ACCEPT     udp  --  anywhere             anywhere             udp spt:4819
        ACCEPT     udp  --  anywhere             anywhere             udp spt:3199

    Help gratefully appreciated.
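
    One thing that stands out in the routing table above is that eth0 and eth1 both hold a 192.168.1.0/24 route, which usually means both NICs have addresses in the same subnet (eth0 statically, eth1 from the BT hub's DHCP). In that situation replies can leave the "wrong" interface, and which interface "wins" flips whenever one of them is restarted, which matches the symptom. A quick way to confirm (a sketch; the route delete is a temporary test only):

        ip addr                                  # check whether both eth0 and eth1 carry 192.168.1.x addresses
        ip route                                 # confirm the duplicate 192.168.1.0/24 routes
        ip route del 192.168.1.0/24 dev eth0     # hypothetical test: leave only one interface owning the LAN route

    The longer-term fix is usually to put the LAN side on its own subnet (or bridge the two NICs) rather than having both interfaces in 192.168.1.0/24.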

    Read the article

  • Melting Laptop Power Supply Tip

    - by AlReece45
    Several (6-7) months ago, my laptop power supply cord got a cut in it and stopped working. Having gotten cheap (and short) power supplies in the past, I decided to buy 2 brand new ones from the manufacturer (ASUS). Now, I used my laptop a little less than usual between February and March. During that time I noticed a few times that the power supply, even though plugged in, did not provide power. Often the computer would just shut off on me. I figured it was just that one power supply being bad. I had left the alternate at my parents' house in another state and asked them to ship it to me. Then, at work the other day, I wanted to get a file off of the hard disk. So I booted it up, knowing that it had a low battery, and plugged it in. During the first 2 minutes of use, I was told that the battery was low and I should plug it in. I unplugged it, inspected the end (being told to plug it in while it was already plugged in was suspicious), and decided I shouldn't plug it back in: the plastic on the tip was melting from the heat of the metal on the tip. The computer had simply booted up and I had the file manager open. It had not been on for more than 10 hours. Now, I know that computers tend to get pretty hot. However, the melting point of plastic is usually above 200C, so that's much hotter than the computer should be generating. I went and bought a THIRD power supply, this time a universal one from Best Buy (it was very fast to buy and test). I tried it out on the computer and its tip is melting as well. My older laptop that uses the universal power supply runs on it perfectly (it has had a little over a week of use now). I have tried using the computer without the battery, with the same effect. Obviously, this is not a problem with the power supply. My roommate and I, being trained computer techs, were contemplating taking the computer apart and desoldering and resoldering the power tip. (The computer is about 6 months out of its 2-year warranty.) We're hoping that will correct the issue, as I would prefer to put my money toward a good desktop rather than yet ANOTHER $1200+ laptop. Is there anything I'm missing here that might cause the tip on the power unit to melt?

    Read the article

  • Alienware m15x (older model) BSOD investigation

    - by Crishu
    A friend of mine asked me to help him with an Alienware m15x laptop that has a little service history. It was bought in June 2008 and serviced in January 2009 for a random FPS-drop problem; Alienware returned it saying nothing was wrong. The laptop still had hiccups, but after juggling a few drivers and settings, the FPS drops weren't as noticeable. Eventually it died in Sept. 2009. It would not boot up, locking itself on a white/gray screen (I think it was overheating, clocking in at 100 degrees Celsius). So back to Alienware it went. They replaced the GPU and all was fine - up until these blue screens started showing up. One other thing that was updated was the HDD, along with a Windows 7 reinstall, in August. From then on the BSODs seem to have started. Could this be the culprit? Why? 0_o The original Windows was Vista, but it was upgraded with a digital download/purchase of Windows 7 Home Premium and activated after installing Windows. No errors on the old HDD, just on the latest installation. LE: Do note that now the old HDD is being used to see if the issues re-occur. So please, I am in need of someone who can interpret these Windows dump files: Minidump. I may have come to some conflicting conclusions, so if someone could clarify each dump/date and the probable cause/error it had, plus a final conclusion or solution, we would be very grateful. Also please consult the report for other system info I omitted: same link, code: XRWIVLWG. If I missed something or if you have any other questions I'll be happy to answer them. Thank you. Good day.

        Processor: Intel(R) Core(TM)2 Duo CPU T9300 @ 2.50GHz
        Network Adapter Properties:
            Broadcom NetLink (TM) Gigabit Ethernet
            Intel(R) Wireless WiFi Link 4965AGN
        Video Adapter Properties:
            Driver Description    NVIDIA GeForce 8800M GTX
            Driver Date           19.08.2009
            Driver Version        8.16.11.8681
            Driver Provider       NVIDIA
            INF File              oem19.inf
            Hardware ID           PCI\VEN_10DE&DEV_060C&SUBSYS_0770152D&REV_A2
            Location Information  @system32\DRIVERS\pci.sys,#65536;PCI bus %1, device %2, function %3;(1,0,0)
            PCI Device            NVIDIA GeForce 8800M GTX [NoDB]
            BIOS String Version   62.92.34.0.8
            Installed Drivers     nvd3dum (8.16.11.8681), nvwgf2um, nvwgf2um
        Hard Disk Drive:
            Model ID  ST9120823ASG (older one, 120GB)
            Model ID  WD32000BEKT (new 320GB with fresh OS)
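
    For anyone wanting to take a first pass over the linked minidumps themselves, the usual starting point is WinDbg/cdb's automatic analysis, which names the bugcheck code and faulting module for each dump (a sketch; the dump file name is a placeholder and the Microsoft public symbol server is assumed):

        cdb -y srv*c:\symbols*http://msdl.microsoft.com/download/symbols -z Mini090112-01.dmp -c "!analyze -v; q"

    If every dump blames a different module, that tends to point at hardware (RAM, an overheating GPU) rather than one bad driver.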

    Read the article

  • High CPU usage - symptoms moving from server to server after bouncing

    - by grt3kl
    First off, I apologize if I didn't include enough information to properly troubleshoot this issue. This sort of thing isn't my specialty, so it is a learning process. If there's something I need to provide, please let me know and I'll be happy to do what I can. The images associated with my question are at the bottom of this post. We are dealing with a clustered environment of four WebLogic 9.2 Java application servers. The cluster utilizes a round-robin load algorithm. Other details include:

        Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_12-b04)
        BEA JRockit(R) (build R27.4.0-90_CR352234-91983-1.5.0_12-20071115-1605-linux-x86_64, compiled mode)

    Basically, I started looking at the servers' performance because our customers are seeing lots of lag at various times of the day. Our servers should easily handle the loads they are given, so it's not clear what's going on. Using HP Performance Manager, I generated some graphs that indicate that the CPU usage is completely out of whack. It seems that, at any given point, one or more of the servers has a CPU utilization of over 50%. I know this isn't particularly high, but I would say it is a red flag based on the CPU utilization of the other servers in the WebLogic cluster. Interesting things to note:

        - The high CPU utilization was occurring only on server02 for several weeks.
        - The server crashed (extremely rare; we are not sure if it's related to this) and upon starting it back up, the CPU utilization was normal on all 4 servers.
        - We restarted all 4 managed servers and the application server (on server01) yesterday, on 2/28. As you can see, server03 and server04 picked up the behavior that was seen on server02 before.
        - The CPU utilization is a Java process owned by the application user (appown).
        - The number of transactions is consistent across all servers. It doesn't seem like any one server is actually handling more than another.

    If anyone has any ideas or can at least point me in the right direction, that would be great. Again, please let me know if there is any additional information I should post. Thanks!
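
    Since the busy process is a JRockit JVM, the usual next step is to take a few thread dumps on the affected server while the CPU is pegged and compare which threads stay busy (a sketch; the PID is a placeholder):

        jrcmd <pid> print_threads      # JRockit's diagnostic command, prints a full thread dump
        kill -3 <pid>                  # alternative: the dump lands in the managed server's stdout log

    Two or three dumps taken a minute apart are usually enough to see whether one application thread, a stuck socket, or garbage collection is eating the CPU.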

    Read the article

  • ASA5505 Novice. Setting up Outside/Inside/and DMZ as Guest Network

    - by GriffJ
    I need a little help developing a config for our ASA 5505. I'm an MCSA/MCITPAS, but I don't have a lot of practical Cisco experience. Here is what I need help with: we currently have a PIX as our border gateway, and it's antiquated and only has a 50-user license, which means I'm constantly clearing local-host entries throughout the day as people complain. I discovered that the last IT person bought a couple of ASA 5505s and they've been sitting in the back of a cupboard. So far I've duplicated the configuration from the PIX to the ASA, but since I was going this far anyway, I thought I'd go further and remove another old Cisco router that was used only for the guest network; I know the ASA can do both jobs. So here is a scenario I wrote up, with the actual IPs changed to protect the innocent.

        Outside Network: 1.2.3.10 255.255.255.248 (we have a /29)
        Inside Network:  10.10.36.0 255.255.252.0
        DMZ Network:     192.168.15.0 255.255.255.0

        Outside Network on e0/0
        DMZ Network on e0/1
        Inside Network on e0/2-7

        DMZ Network has DHCPD enabled; the DMZ DHCPD pool is 192.168.15.50-192.168.15.250
        DMZ Network needs to be able to see DNS on the Inside Network at 10.10.37.11 and 10.10.37.12
        DMZ Network needs to be able to access webmail on the inside network at 10.10.37.15
        DMZ Network needs to be able to access the business website on the inside network at 10.10.37.17
        DMZ Network needs to be able to access the outside network (access to the internet).
        Inside Network has NO DHCPD (DHCP is handled by a domain controller).
        Inside Network needs to be able to see anything on the DMZ network.
        Inside Network needs to be able to access the outside network (access to the internet).

    There is already some access-list and static mapping configuration. This maps external IPs from our ISP to our inside server IPs:

        static (inside,outside) 1.2.3.11 10.10.37.15 netmask 255.255.255.255
        static (inside,outside) 1.2.3.12 10.10.37.17 netmask 255.255.255.255
        static (inside,outside) 1.2.3.13 10.10.37.20 netmask 255.255.255.255

    This allows access to our webserver/mailserver/VPN from the outside:

        access-list 108 permit tcp any host 1.2.3.11 eq https
        access-list 108 permit tcp any host 1.2.3.11 eq smtp
        access-list 108 permit tcp any host 1.2.3.11 eq 993
        access-list 108 permit tcp any host 1.2.3.11 eq 465
        access-list 108 permit tcp any host 1.2.3.12 eq www
        access-list 108 permit tcp any host 1.2.3.12 eq https
        access-list 108 permit tcp any host 1.2.3.13 eq pptp

    Here is all the NAT and route configuration I have so far:

        global (outside) 1 interface
        global (outside) 2 1.2.3.11-1.2.3.14 netmask 255.255.255.248
        nat (inside) 1 0.0.0.0 0.0.0.0
        nat (dmz) 1 0.0.0.0 0.0.0.0
        route outside 0.0.0.0 0.0.0.0 1.2.3.9 1
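
    A minimal sketch of the DMZ-specific pieces on an ASA 5505 (the VLAN numbers and the 192.168.15.1 gateway address are assumptions; also note that the 5505 Base license only allows the third VLAN in restricted form, so full DMZ-to-inside traffic may need the Security Plus license):

        interface Vlan3
         nameif dmz
         security-level 50
         ip address 192.168.15.1 255.255.255.0
        !
        interface Ethernet0/1
         switchport access vlan 3
        !
        dhcpd address 192.168.15.50-192.168.15.250 dmz
        dhcpd dns 10.10.37.11 10.10.37.12 interface dmz
        dhcpd enable dmz

    With security-level 50 on the DMZ, traffic from inside (level 100) to the DMZ is allowed by default, while DMZ-to-inside needs explicit access-list entries for the DNS, webmail and website hosts.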

    Read the article

  • Well....a ghost lives in my server...

    - by tsgiannis
    Hello to everybody, and greetings from Greece. I have a rather unusual situation and I am running out of ideas. I have this old server (IBM x205 - P4 2.4GHz, 3x SCSI 36GB) and about a year ago I decided to use it as an additional domain controller, fax & file server. For this task I added a Delock 70154 SATA card along with 2x 320GB SATA II HDDs. Everything was going super smoothly until about 3 weeks ago. I was on a trip, and when I got back I was informed that the server had been found frozen... well, I considered it a glitch, since a simple power-down/power-up fixed everything. Again, 2 weeks ago, another freeze... it got suspicious, but again power down, power up, and everything was running. The last time it froze, when I powered it up it came up with a message that the domain services could not start due to NTDS corruption. Booting into safe mode revealed that there was an issue with the SATA RAID (degraded). After a lot of searching I demoted the server, cleaned Active Directory and pulled both HDDs out (one of these was really BAD) and recovered my files (I had some problems with how Delock handles the redundant HDD). Right now my server is vanilla simple, with only what the factory installed, and here is where the fun begins. Every day when I arrive at the office I find this particular machine dead, and I mean totally dead... just a black screen and nothing else. The CPU fan is working, the PSU is working, keyboard and mouse are dead (they also lock my KVM), network is dead... the machine is DEAD. I power it down forcibly, I power it up, and for the 8 hours I am in the office it works, either idling or running some kind of diagnostic. When I leave the office, after some time - maybe half an hour, maybe 4 hours - the machine dies; at least that is what the event log shows ("the previous shutdown at xx:xx:xx was unexpected"). Well, I must admit I am running out of ideas. I have tried:

        Memtest.... nothing
        Passmark burn-in test..... nothing
        Careful study of the event log..... nothing
        Setting "BSOD instead of restart".... nothing
        Power scheme to sleep... all set to never.

    I know there are a lot of other tools that heavily stress a machine, like OCCT, but the machine is old... today I will give them a try nevertheless. One idea is to reformat it, but I would really like to find what is causing this, because I could end up in a situation where everything works for a while and then, kaboom, one day it is dying again. I really need a helping hand and every opinion/idea is most welcome. I know the obvious solution is to never leave the office, but I have a life... sorry, server. :) P.S. This situation, with the machine dying some time after I leave, has been going on for about one week; every day I would either set the RAID to rebuild or copy/recover files, and it died while everything was working.

    Read the article

  • Recover strategy single bad sector in moricon

    - by Damon
    This week, my hard disk made me an early Christmas present in the form of a single defective sector. To make up for the puny size of the present, it chose a sector inside moricons.dll. This means that the system now takes about 5 minutes to boot before Windows gives up and moves on, and there are two dozen scary "critical failure" entries in the system log after every boot, which is annoying. OK, admittedly, I shouldn't complain; it could be worse, the bad sector could be in ntldr... SMART info more or less indicates (for what SMART can indicate anyway) that the drive is mostly OK. Soft Read Error Rate has a score of 96, and Current Pending Sector Count has a raw value of 8, which translates to a score of 100. Acronis DriveMonitor makes this an issue (lowering the overall rating to 75%), while HDD Health calls it "excellent", giving an overall rating of 95% (which is what this hard disk has had from day one). No single score is below 95 (power-on hours and spin-up count), and most are 100 anyway. Well, whatever, I've seen drives with perfect SMART values fail from one second to the next, and drives with moderate values work for years, so I'm inclined not to put too much weight on that overall. TL;DR Now... to the problem: I don't feel like trashing the disk just yet (that's planned along with a new OS install upgrading to Win7 early next year, independently of this issue), but in the meantime I would still like to have a smoothly running system again. Therefore, I feel tempted to tamper with it, but before I render my system entirely unusable (since I've never done this before), I'd like to verify that my planned procedure is likely to succeed in giving me a working system again:

        1. Copy moricons.dl_ from the Windows install disk, rename it to moricons.zip, and unzip it. This gives an intact 5.1.2600.2180 version (the broken one is 5.1.2600.5512 - but I guess this makes little difference, since it's an icon-only DLL, and an outdated copy should work better than one that can't be read).
        2. Run chkdsk /r /f, which will "repair" the file (i.e. delete the file without asking, tell the drive to remap the sector, and toss some unreadable junk into a file with a hexadecimal number).
        3. Hopefully Windows still boots after this (is that a reasonable expectation, or do I need to have something like BartPE ready? -- but then again, what's that good for in case chkdsk has nuked the entire file system...).
        4. Delete the junk file generated by chkdsk, and copy the new DLL to %windir%\system32.
        5. Reboot. Pray.

    Maybe I just shouldn't touch anything, since it still kind of works... annoying, but it works. Unsure... But is there anything fundamentally wrong with the planned approach? Is this a sensible approach at all?
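
    For what it's worth, step 1 doesn't need the rename-to-zip trick: the expand tool that ships with Windows (and on the install CD) unpacks compressed .dl_ files directly. A sketch of the whole sequence from a command prompt (the CD drive letter is an assumption):

        expand D:\I386\MORICONS.DL_ C:\moricons.dll
        chkdsk C: /r
        copy C:\moricons.dll %windir%\system32\moricons.dll

    Running chkdsk /r on the system drive will schedule the check for the next boot, which is also when the bad sector gets remapped.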

    Read the article

  • Complete machine freezes...at a loss

    - by user28818
    Guys, we built around 12 machines a few months ago to run Ubuntu. They each have the following specs:

        ASUS Z8NA-D6 motherboard
        Dual quad core Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
        OCZ Mod Extreme Pro 500W power supply
        12 GB Kingston RAM
        Nvidia GeForce 9800 GT graphics card

    My machine ran well for a while. However, it started experiencing random lockups. These lockups are not X lockups; they are complete system freezes. The NIC stops responding, the magic SysRq keys won't work. The machine is dead. I first suspected RAM. Memtest86 didn't find anything, but I replaced the RAM anyway. Still, lockups. So I replaced the graphics card. Still more lockups. They became more and more frequent and started to happen 2-3 times a day. So I replaced the motherboard and power supply in one fell swoop. Suddenly, no more lockups! Woohoo! Except, a week later, in the morning, the machine wouldn't wake up. I reset it, started it up, and the log files showed the last entry at around 11 pm the evening before. This has started occurring with more frequency; now just about every morning I come in, the machine is locked up, and has been since the night before. Yesterday, for the first time in the 3 weeks since I replaced the motherboard and power supply, the machine actually locked up on me in mid-work. This is the first time since replacing the two (MB and PS) that it happened while I was using it; all the others occurred while I was away. I'm at a loss. Nothing in syslog or messages indicates a problem around the time of the lockup. Temps are good; I use lm-sensors to monitor them and have a script that writes the output to a file every minute, and they never get that high. The only things I haven't replaced at this point are the case and the hard drives, and I doubt either could be the cause. What would you do if you were in my shoes? Is there a troubleshooting approach I'm missing? For the record, all of the other machines, all eleven of them, don't have any problems. They're all running the same version of Ubuntu (Lucid) that I am. Thanks!
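
    One approach for hard freezes that leave nothing in the logs is to ship kernel messages to one of the other eleven boxes as they happen, so the last lines survive the lockup. A sketch with netconsole (the receiver's IP and MAC are placeholders):

        # on a receiving machine
        nc -u -l 6666                # or "nc -u -l -p 6666", depending on the netcat variant
        # on the machine that freezes
        modprobe netconsole netconsole=@/eth0,6666@192.168.1.50/00:11:22:33:44:55
        dmesg -n 8                   # make sure all kernel messages are emitted

    If the last messages before a freeze keep pointing at the same subsystem (GPU, NIC, a CPU machine-check), that narrows down which remaining part to swap.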

    Read the article

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't that much of a problem security-wise, because no application can be reached behind the web server), but this generates thousands of 404s per day, sometimes per hour. I am looking into ways of blocking those requests, but it's tricky (you want to make sure you don't block legitimate traffic, and these bots are becoming more and more clever at looking like they're legit) and it's going to take me a while to find an acceptable solution. In the meantime I would like to reduce the performance impact of serving those 404 pages. We're using nginx, which by default is configured to serve its 404 page from disk (this can be changed using the error_page directive, but in the end the 404 will either have to be served from disk or from another external source, e.g. an upstream application, which would be worse), and that isn't ideal. I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx so the disk isn't touched at all, in the other case I hit a missing page and nginx serves its 404 from disk.

        server {
            # [...] the default nginx stuff
            location / {
            }
            location /this_page_exists {
                echo "this page was found";
            }
        }

    Here are the test results (my laptop has an Intel(R) Core(TM) i7-2670QM + SSD, in case you're wondering why they are so high):

        $ ab -n 500000 -c 1000 http://localhost/this_page_exists
        Requests per second:    25609.16 [#/sec] (mean)
        $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists
        Requests per second:    22905.72 [#/sec] (mean)

    As you can see, returning a value with echo is 11% ((25609-22905)÷22905×100) faster than serving the 404 page from disk. Accordingly, I would like to echo a simple "404 Page not Found" string from nginx. I tried many things so far but they all failed; essentially the idea was this:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            echo "404 - Page not found";
        }

    The problem is that as soon as the echo directive is used, the HTTP response code is set to 200. I tried changing that by doing error_page 200 = 400, but that breaks the configuration. How can I serve a 404 page directly from nginx? (Without hacking the source, which may be my next step.)
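
    One approach that avoids both the disk and the echo module's status-code problem is nginx's built-in return directive, which sets the status and sends a short in-memory body in one step (a sketch; return with a body text requires a reasonably recent nginx):

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            return 404 "404 - Page not found\n";
        }

    Since the body comes straight from the configuration, no file is opened and no upstream is involved.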

    Read the article

  • All internet requests in Windows time out

    - by Brandon
    So, I've run into a very strange problem with my home wireless network. Previously, at seemingly random times, the router seemed to disconnect all wireless hosts and cause all of the wired hosts to have a "limited connection" according to Windows. In order to fix this, I had to unplug all of the wired hosts from the router, unplug the modem from the router, and power cycle the router. This seemed to solve the problem for a while, until the exact same thing happened a day later and I had to go through the same process again. That's when I noticed something weird happening. There was one wireless host (a Windows Vista laptop) that seemed to be causing the router to disconnect the other hosts whenever it connected. When this happened, only that laptop was able to use the wireless from the router. I disconnected it from the wireless (by disabling the wireless adapter), then reconnected it (by re-enabling it), and then it, like the other hosts, couldn't connect. I've never really seen anything this strange happen on our network before. So I restored the router to factory settings, and the problem seems to have vanished, except for one crucial thing. There's another host (a Windows 7 laptop) that was perfectly able to connect before all of the router issues, and even in between the crashing and power-cycling events, but now it says it's connected and able to reach the Internet, yet all requests time out. In any browser I've tried, the tab says "connecting to [site]..." for a solid minute and then tells me the request timed out. When I try to ping google.com in cmd it also says the request timed out. In frustration, I booted into a dual-boot Ubuntu installation on the Windows 7 host and the connection works fine, to my surprise; Ubuntu is where I am now typing this rather long question. I haven't looked through the event log in Windows, but I will post anything I find in an edit. I haven't tried connecting (in Windows 7) to any other wireless network. The fact that it works in Ubuntu suggests it's Windows and not the router, but I didn't change any wireless settings in Windows between it being able to reach the Internet and not. Does anyone have any clue what could have happened? I'm open to buying another router, as this one is almost a year old :) but I would like to know what's going on here. Thanks in advance! P.S. Sorry for how long my question is, I'm a little anxious (:
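
    Since Ubuntu works fine on the same hardware and network, the Windows TCP/IP and Winsock stacks are the usual suspects. Resetting them is a low-risk first step (standard Windows commands, run from an elevated Command Prompt, then reboot):

        ipconfig /flushdns
        netsh winsock reset
        netsh int ip reset

    If pings still time out after that, comparing "ipconfig /all" in Windows against the working Ubuntu settings (gateway, DNS, DHCP lease) usually shows where the two diverge.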

    Read the article

  • Some Memory Slots Not Working on MSI FM2-A85XA-G65 Motherboard

    - by Mike Ciaraldi
    Short version of question: Does anyone have an MSI FM2-A85XA-G65 motherboard, who can confirm that all four memory slots work? Long version: Several months ago I bought an MSI FM2-A85XA-G65 motherboard at Newegg. At that point I installed an AMD A8-5500 processor and two sticks of Corsair Vengeance 8 GB DDR3-1866 memory, and put it into my file server. I installed the RAM in slots 1 and 3, as directed in the manual, to enable dual-channel memory access. It seemed to work fine, so I bought a second identical mobo (which arrived dead, but was quickly replaced by Newegg) and set of RAM, installed an A10-5800K, and put that into my production Linux machine. Again, it seemed to work well. Eventually I happened to notice that on the server only 8 GB of RAM appeared in the BIOS. I tried each of the slots and memory modules individually and in various combinations. I even swapped processors with the production machine. The result was that putting memory in slots 1 and 2 worked (showing a total of 16 GB), but any memory in slots 3 or 4 was not recognized. However, all four memory slots in the production machine worked, and I confirmed this with both processors. I contacted MSI and arranged to ship the defective mobo back to them for replacement under warranty. I did not want my file server to be down in the interim, and I had another machine I wanted to upgrade, so I bought a third identical mobo to use. That one had the same problem -- only memory slots 1 and 2 worked. I tested it thoroughly with multiple processors and memory sticks. I sent the defective mobo back to MSI and they sent me a new one. This has the same memory slot problem. So I sent it back. The replacement arrived the other day and shows the same problem. I contacted MSI yet again and they said that nobody else has reported memory slot problems on this board and it must be my processor. So my score so far is, out of six boards of this model, I have: One where all four slots work. One which was dead on arrival. Four where only memory slots 1 and 2 work. Before I tear my other machines apart and start swapping processors again I thought I would ask if anyone else has this exact model motherboard and could confirm that all four memory slots either do or do not work. According to MSI you should be able to just plug a single memory module into any of the slots and it will work (and it does on the one mobo I have which works correctly). If you have not yet used all four slots, this is a good time to test them so you know if you can expand your memory in the future. Thanks in advance to anyone who can help.

    Read the article

  • arp "who-has tell" on cloned machine

    - by mcmorry
    I have an urgent problem to solve today, but I'm lost. Please help. I've cloned a virtual machine hosted on VMware ESXi 4.1. The OS is now Ubuntu Server 12.04 LTS, but at the time of cloning it was 10.04 LTS. I fixed the MAC address manually inside /etc/udev/rules.d/70-persistent-net.rules (it is a known problem on Ubuntu); I had to remove the old MAC address and set the new one as eth0. Everything seems to work fine, except ARP. My provider OVH sent me a warning to resolve it today (this is the second day) or they will block my IP! The log contains many lines like this:

        Tue Jun 5 01:04:29 2012 : arp who-has 178.32.136.212 tell 178.32.136.224

    where .224 is the cloned server that is causing problems, and .212 is the other one. arp -na returns:

        ? (178.33.230.254) at 00:07:b4:00:00:02 [ether] on eth0
        ? (178.32.136.212) at 00:50:56:09:8e:f1 [ether] on eth0

    The first IP is the ESXi machine. The second one should not be there. I'm not an expert and I don't know what else to do to fix this problem. Any help will be very much appreciated. Thanks. EDIT: ifconfig on .224:

        eth0      Link encap:Ethernet  HWaddr 00:50:56:01:32:c6
                  inet addr:178.32.136.224  Bcast:178.32.136.255  Mask:255.255.255.0
                  inet6 addr: fe80::250:56ff:fe01:32c6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:399924 errors:0 dropped:465 overruns:0 frame:0
                  TX packets:241884 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:58006071 (58.0 MB)  TX bytes:663603166 (663.6 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:516216 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:516216 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:236284275 (236.2 MB)  TX bytes:236284275 (236.2 MB)

    ifconfig on .212:

        eth0      Link encap:Ethernet  HWaddr 00:50:56:09:8e:f1
                  inet addr:178.32.136.212  Bcast:178.32.136.255  Mask:255.255.255.0
                  inet6 addr: fe80::250:56ff:fe09:8ef1/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:16014 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:14511 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:15134444 (15.1 MB)  TX bytes:2683025 (2.6 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:9944 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:9944 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1139347 (1.1 MB)  TX bytes:1139347 (1.1 MB)
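
    A quick way to check whether some host is still answering ARP for the other machine's address (e.g. a leftover interface carrying the old MAC after the clone) is duplicate-address detection with arping, followed by flushing the stale neighbour entries (a sketch using the iputils arping, run from the .224 machine):

        arping -D -c 3 -I eth0 178.32.136.212   # prints a reply if another NIC also claims .212
        ip neigh flush dev eth0                 # clear the cached ARP entries afterwards

    If nothing answers, the "arp who-has" lines themselves are just ordinary ARP requests, and the thing to verify with OVH is which MAC address they expect to see behind each IP after the clone.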

    Read the article

  • New harddrives failing within weeks.

    - by Jason Kealey
    I've experienced 8 hard disk failures in 3 months and have tried many things to solve the issue permanently, but I have failed. I would like to know if you have any advice for me. The system was running Win XP on an Asus P5W-DH Deluxe, and I had set up a RAID-1 array. I started out with 2 x 500 GB 7200RPM Western Digital drives. One died. I took it out to RMA it. On the same day, the router was fried. I assumed a power surge had occurred and connected an older UPS to protect the system. Once I got my hands on an identical disk, I installed it. The RAID array was rebuilt. A few days later, the other one died. I assumed the rebuild caused it to fail and took it out for RMA. Before the replacement arrived, the remaining one died. I then discovered I could re-enable them using the Intel Matrix Storage Manager. I re-enabled both and the system seemed fine for a week, until both died again. I got two new 1.5 TB 7200RPM Seagate drives and re-installed Windows 7. I also replaced the UPS and the power supply. They both died again. The voltage on the plug is stable between 120 and 122V as per the UPS. None of the other devices have had any problems (monitors, etc.). At this point, I see two options: a) an electrical issue in the house that was, for some reason, not blocked by the UPS; b) something else inside the system causing surges - the motherboard? the onboard RAID controller? Failures happen fairly quickly, between 2 and 14 days after I fix the previous issue. I've just gotten a new computer (Core i7) to replace it. If it is stable, I can determine that b) was the problem. If it fries its hard drive again, I can determine that it is an electrical issue in the house. Do you have any other thoughts? Any tools I can run on the drives that failed to get more information about the original SMART event history?
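
    For the last question: smartmontools can read back each drive's SMART attribute table, self-test history and the drive's own ATA error log, which usually shows whether the disk itself recorded failures or was simply dropped by the controller (standard smartctl usage; the old drives can be read from a USB dock or another machine too):

        smartctl -a /dev/sda        # attributes, self-test log and ATA error log
        smartctl -t long /dev/sda   # start an offline long self-test, check back later with -a

    If every "failed" drive comes back clean, that points back at the controller, cabling or power rather than the disks.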

    Read the article

  • Have a server, need to figure out a method of backup

    - by PolishHurricane
    My company has an older Dell 2650 server running ArchLinux x64: http://www.dell.com/downloads/global/products/pedge/en/2650_specs.pdf (2 x 2.4GHz Intel Xeon with around 3287 MB of RAM according to "free -m"). We use it to host our internal company site and to post some information from our orders to it, and we'd like the ability to keep it up as much as possible. What we require:

        - It needs to always be functional from 8am to 4pm, for our data entry person to use it and for others to do the other things required on it.
        - If it goes down, we need a quick way to get the machine running again.
        - If it goes down, we would like to have the data backed up.

    Some of the major problems include:

        - The server's old and it may have memory issues
        - We don't know when one of the hard drives could fail
        - Our power goes out here once in a while

    We have a battery backup, but that's pretty much it, and it's not for the long term. If the server does go down, we have another system in place to store order information that comes in while it's down and repost it when it's back, but we need it up during the day. So we're wondering, what should we get for options? These are the things we thought of, sort of:

        - Set up RAID 1, but that would involve wiping everything, right? If we do that, how would we transfer the data over without messing up the server?
        - We could buy an extra server or two off eBay for $100, the same model - is that practical, or should we get something else?
        - Should we buy a PC or another, better server and host off that, because it is if anything easier to exchange parts?
        - Should we keep extra parts handy in case it implodes?
        - Should we buy/use backup software? We hear Drobos are cool, but suck. Perhaps there is a software solution to this problem that backs up to another machine or gets us up and running again quickly.

    Also, if we are to purchase hardware, what is decent? Does anybody know of something for ArchLinux/Linux? We both know a ton about computers, but we're kind of unsure what step to take with this, especially with this type of server. Thanks
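
    For the "data backed up" requirement specifically, a nightly pull to any second box (even one of the cheap eBay servers mentioned above) needs nothing more than rsync over SSH; a minimal sketch, assuming a reachable host called backuphost and that the site and order data live under /srv (both names are placeholders):

        rsync -aHv --delete /srv/ backuphost:/backups/dell2650/srv/
        rsync -aHv /etc/ backuphost:/backups/dell2650/etc/

    Run from cron after hours, that covers "get the data back" even if the old hardware dies outright; RAID 1 then only has to cover the "keep running through a single disk failure" part.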

    Read the article

  • Syncronizing XML file with MySQL database

    - by Fred K
    My company uses internal management software for storing products. They want to copy all the products into a MySQL database so they can make their products available on the company website. Note: they will continue to use their own internal software. This software can export all the products in various file formats (including XML). The synchronization does not have to be in real time; syncing the MySQL database once a day (late at night) is enough. Also, each product in their software has one or more images, so I have to make the images available on the website as well. Here is an example of an XML export:

        <?xml version="1.0" encoding="UTF-8"?>
        <export_management userid="78643">
          <product id="1234">
            <version>100</version>
            <insert_date>2013-12-12 00:00:00</insert_date>
            <warrenty>true</warrenty>
            <price>139,00</price>
            <model>
              <code>324234345</code>
              <model>Notredame</model>
              <color>red</color>
              <size>XL</size>
            </model>
            <internal>
              <color>green</color>
              <size>S</size>
            </internal>
            <options>
              <standard_option>some option</standard_option>
              <standard_option>some option</standard_option>
              <extra_option>some option</extra_option>
              <extra_option>some option</extra_option>
            </options>
            <images>
              <image>
                <small>1234_0.jpg</small>
              </image>
              <image>
                <small>1234_1.jpg</small>
              </image>
            </images>
          </product>
        </export_management>

    Any ideas on how I can do this? Or better approaches, if you have them?
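
    One low-effort option for the nightly import is MySQL's LOAD XML statement, which maps child elements and attributes to columns by name. It only handles the flat, top-level fields of each <product>, so nested parts like <model>, <options> and <images> would need to be flattened in the export or imported by a small script instead (the table, user and column names below are assumptions):

        mysql -u importer -p catalog -e "
          LOAD XML LOCAL INFILE '/path/to/export.xml'
          INTO TABLE products
          ROWS IDENTIFIED BY '<product>';"

    Scheduled from cron after the nightly export lands, this covers the simple fields; the image files themselves can be copied to the web server with a plain rsync in the same job.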

    Read the article

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing table index corruption almost every day on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but 6 months after we installed Advantage Database Server (which gives our website access to some of the tables) we started to get index corruption problems, and we don't know whom to blame. We've tried to exclude all possible causes of this corruption. Now all users work in terminal mode - so no network problems can cause it, and OpLocks also can't be a reason. We changed hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS - because it should be working. Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the other change. Is that possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data and ADS is using cached data? Or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are some special settings in Windows Server for working with DBF tables that are being written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check this? Sometimes we get an index crash twice a day; sometimes everything is fine for 5 days in a row. Today only one user was working with the database in the evening (usually 30-50 users work simultaneously during working hours), so there was almost zero load on the server. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does do UPDATE/INSERT queries, but less frequently - not during regular synchronizations, only when an order is placed by a website visitor. Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or any clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQL Server 2008 instance with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end. They're experiencing timeouts once SQL Server's RAM usage is high. The server is currently running x64 SQL Server 2008 on a VM with nearly 9 GB of RAM; SQL Server's 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them using this query:

        SELECT TextData, ApplicationName, Reads
        FROM [TraceWednesday]
        WHERE TextData IS NOT NULL AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC

    As I expected, some values are very high. Top Reads, in pages:

        2504188
        1965910
        1445636
        1252433
        1239108
        1210153
        1088580
        1072725

    Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which is roughly ~20,000 MB, i.e. about 20 GB? These queries are executed often and can take quite some time to run. Eventually RAM is used up because of the cache fattening, and timeouts occur once SQL cannot 'splash' pages in the buffer pool as much. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM. OR, maybe it would just slow down the process of chewing up the whole RAM. If far fewer pages are read, maybe it'll all run much better even when usage is high? (Less time swapping, etc.) Currently, our only option is to restart SQL once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indices here and there, is there something else I can do? Any advice beyond what I already know (not much yet..) would be much appreciated. Leo.
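
    As a cross-check that doesn't need another Profiler run, the plan-cache DMVs report the same read counts cumulatively per statement, which makes it easy to pick the handful of queries worth tuning first (a sketch for SQL Server 2008):

        SELECT TOP 10
               qs.total_logical_reads,
               qs.execution_count,
               SUBSTRING(st.text, 1, 200) AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;

    And yes, the page math is right: pages are 8 KB, so 2,504,188 logical reads is roughly 19-20 GB of page reads for that statement.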

    Read the article

  • Mouse Error Code 24. Windows 7

    - by Cj.
    I've had the same mouse for a while, and it was working fine until one day it started giving me a message about a device not working properly. I tried updating the drivers and re-installing; I even deleted old drivers in case my computer was a little confused. It never made a difference, and my mouse seemed to be working just fine despite the permanent error in my Device Manager. I looked it up several times online, but I never found anything I could actually use; when I go to official websites, I always get the same responses: "plug so-and-so into a different place", "drivers", "install Silverlight before you can watch this tutorial", "try it on a different machine". So I gave up on that. But now I have a real problem: lately, my little strange error evolved into a full-blown Error 24, and my mouse has started to turn on and off randomly, especially when it is being used, though I do hear it go "badum..dadum" when I'm off doing something else. When I looked up error code 24, I really didn't find much other than what it means:

        Code 24: This device is not present, is not working properly, or does not have all its drivers installed. (Code 24)
        Cause: The device is installed incorrectly. The problem could be a hardware failure, or a new driver might be needed. Devices stay in this state if they have been prepared for removal. After you remove the device, this error disappears.

    But I have tried uninstalling the device entirely several times, and it goes right back to its previous state with Error 24, turning on and off randomly. What do I do? I can't afford to take it to a repair place, and I can't really afford a new mouse either; I refuse to buy cheap ones, as I am a gamer in need of more than 3 buttons, and a good grip is important. Could there possibly be some confusion in the registry? I do remember getting some early problems after I converted my Vista to Windows 7. But I hardly dare go in there unless I'm 100% certain of what I'm going for, and I can honestly say I am at a loss here. Edit: it is a USB mouse we're talking about here - an MX™518 Optical Gaming Mouse (Logitech). Edit2: I can see no break in the cable, so it must be on the inside of the mouse, or inside the rubber protecting the cable, which would be really inconvenient to track down.

    Read the article

  • Mysterious visitor to hidden PHP page

    - by B. VB.
    On my website, I have a "hidden" page that displays a list of the most recent visitors. There exist no links at all to this single PHP page and, theoretically, only I know of its existence. I check it many times per day to see what new hits I have. However, about once a week, I get a hit on this supposedly hidden page from a 208.80.194.* address (the page records hits to itself). The strange thing is this: this mysterious person/bot does not visit any other page on my site - not the public PHP pages, only this hidden page that prints the visitors. It's always a single hit, and the HTTP_REFERER is blank. The user-agent data is always some variation of Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; YPC 3.2.0; FunWebProducts; .NET CLR 1.1.4322; SpamBlockerUtility 4.8.4; yplus 5.1.04b) ... but sometimes MSIE 6.0 instead of 7, and with various other plug-ins. The browser string is different every time, as are the lowest-order bits of the address. And that's it. One hit per week or so, to that one page. Absolutely no other pages are touched by this mysterious visitor. Doing a whois on that IP address shows it's from the New York area, from the "Websense" ISP. The lowest-order 8 bits of the address are always different, but it is always within 208.80.194.0/24 (only the last octet changes). From most of the computers I access my website from, a traceroute to my server does not pass through any router with an IP in 208.80.*, so that rules out any kind of HTTP sniffing, I think. I have NO idea how or why this is happening. Does anyone have any clue, or has anyone seen something as strange as this before? It seems completely benign, but unexplainable and a little creepy. Thanks in advance!

    Read the article

  • How to store data on a machine whose power gets cut at random

    - by Sevas
    I have a virtual machine (Debian) running on a physical machine host. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again. The internet connection that connects the VM and the external server is unreliable, meaning it could be down for days at a time. The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen and it is not possible to add a UPS, a battery, or a similar solution to the system. Originally, the data was stored on a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: How should data be stored in an environment where power cuts can and do happen frequently? One option I can think of is using flat files, saving each packet of data as a file on the file system. This way if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues however, mainly related to the amount of data likely being stored on the virtual machine. At 0.5s between each piece of data, 1,728,000 files will be generated in 10 days. This at least means using a file system with an increased number of inodes to store this data (the current file system setup ran out of inodes at ~250,000 messages and 30% disk space used). Also, it is hard (not impossible) to manage. Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written using Java 6, so hopefully the solution would not be incompatible.
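
    One option between flat files and a full database server is SQLite with write-ahead logging and full synchronous writes, which is specifically designed to keep the main database file consistent across power loss, and is usable from Java 6 through the SQLite JDBC driver. A minimal sketch of the relevant settings (the file path is a placeholder; journal_mode=WAL persists in the file, but synchronous has to be set on every connection):

        sqlite3 /data/buffer.db "PRAGMA journal_mode=WAL;"
        sqlite3 /data/buffer.db "PRAGMA synchronous=FULL;"

    Each acknowledged packet then becomes a small transaction (insert on receipt, delete on UDP ack), and a power cut during a write costs at most the uncommitted transaction rather than the whole store - which is exactly the property the file-based HSQLDB setup was missing here.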

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100 GB, "-t news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours without any visible action; only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space; in those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
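
    For what it's worth, htcacheclean can also run as a niced daemon that prunes continuously in small steps instead of one huge pass, which sometimes spreads the IO load better than periodic full runs (a sketch; flag spelling as in Apache 2.2's htcacheclean):

        htcacheclean -d30 -n -t -p/htcache -l15G
        # -d30  run as a daemon, waking every 30 minutes
        # -n    run "nicely", yielding to other disk IO
        # -t    delete empty directories as it goes

    Whether that helps here depends on whether the iSCSI backend can absorb the extra random IO at all; if it can't, moving the cache to local disk tends to matter more than the choice of filesystem.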

    Read the article
