Search Results

Search found 25984 results on 1040 pages for 'disk load'.

Page 133 of 1040

  • Big IP F5 outbound HTTP issues

    - by mbuk2k
    We've tried upgrading from 9.x to 10.2 on our F5 BIG-IP 3400 and everything went fine apart from one thing: we're unable to establish any outbound HTTP (port 80) connections from any servers that are assigned to a virtual server. This is something that worked before and is required for certain calls our servers need to make. Interestingly, HTTPS (443) connections work fine; it's literally just outbound port 80 that fails. Does anyone know if anything changed between 9.4 and 10.2 that would require additional config to allow external HTTP connections? Any advice would be appreciated, thank you.
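
    A first diagnostic sketch (hedged: the 0.0 catch-all capture interface is an assumption about this TMOS version, and the server address is a placeholder): capture outbound port-80 traffic on the BIG-IP itself to see whether the SYN ever leaves the box.

        # From the BIG-IP shell; 0.0 is assumed to capture across all VLANs.
        # Unanswered outbound SYNs would point at SNAT/routing, not the server.
        tcpdump -ni 0.0 'port 80 and host 10.0.0.50'

    Since HTTPS works but HTTP doesn't, running the same capture for port 443 should show where the two paths diverge.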

  • Ubuntu server slowly filling up

    - by Crash893
    We had our Samba server (Ubuntu 8.04 LTS) share fill up the other day, but when I went to look at it I can't see that any of the shares have too much on them. We have 5 group shares, and each user has an individual share: one user has 22 GB of stuff, a few others have 10-20 MB, and everyone else is empty, so maybe 26 GB total. I deleted a few files yesterday and freed up about 250 MB of space. Today when I checked, it was completely full again; I deleted some older files and freed up about 170 MB, but I can watch the free space slowly creep down. I keep running df -h:

        Filesystem   1K-blocks      Used  Available  Use%  Mounted on
        /dev/sda1    241690180  229340500     169200  100%  /
        varrun          257632        260     257372    1%  /var/run
        varlock         257632          0     257632    0%  /var/lock
        udev            257632         72     257560    1%  /dev
        devshm          257632         52     257580    1%  /dev/shm
        lrm             257632      40000     217632   16%  /lib/modules/2.6.24-28-generic/volatile

    What can I do to try to hunt down what's taking up so much of my HDD? (I'm fairly new to Unix in general, so I apologize if this is not well explained.)
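
    A hedged hunting sketch (paths illustrative): walk the filesystem with du to find the heavy directories, and check for deleted-but-still-open files, which df counts but du cannot see.

        # Largest top-level directories on the root filesystem;
        # -x stays on this filesystem, skipping /proc, /dev, etc.
        sudo du -kx --max-depth=1 / | sort -n | tail -15

        # Files deleted while a process still holds them open; their
        # space is only released when the holding process exits
        sudo lsof +L1

    A log file deleted while still open by Samba or some daemon would produce exactly this pattern: space vanishing with nothing visible in the shares.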

  • Detecting a TPM chip?

    - by Danielb
    I have an HP Mini 311-1000CA netbook running Windows 7 Ultimate. I'd really like to encrypt the hard drive using BitLocker, but I am unsure how to work out whether the Mini has a TPM chip or not. Any ideas?
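
    Two quick checks, sketched (the WMI class and property names are given from memory, so treat them as assumptions): the TPM management console says outright whether a chip was found, and WMI can be queried from an elevated command prompt.

        rem Opens the TPM console; it reports "Compatible TPM cannot
        rem be found" if the machine has no (enabled) chip
        tpm.msc

        rem WMI query against the TPM provider (run elevated)
        wmic /namespace:\\root\CIMV2\Security\MicrosoftTpm path Win32_Tpm get IsEnabled_InitialValue

    Even without a TPM, Windows 7 Ultimate can be configured via group policy to let BitLocker use a USB startup key instead.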

  • 16GB winsxs folder on Windows Vista

    - by Jigs
    I know there are lots of posts about the winsxs folder, but I have yet to find any useful information about shrinking it. I have already tried running the two service pack removal tools. The folder totals a massive 16GB (about 10% of the entire drive!) If anyone has any pearls of wisdom then please share them.

  • write-through RAM disk, or massive caching of file system?

    - by Will
    I have a program that is very heavily hitting the file system, reading and writing randomly to a set of working files. The files total several gigabytes in size, but I can spare the RAM to keep them all mostly in memory. The machines this program runs on are typically Ubuntu Linux boxes. Is there a way to configure the file system to have a very very large cache, and even to cache writes so they hit the disk later? I understand the issues with power loss or such, and am prepared to accept that. Crashing aside, in normal operation the writes should eventually reach the disk! Or is there a way to create a RAM disk that writes-through to real disk?
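
    Linux already uses all otherwise-free RAM as page cache, so the read side mostly takes care of itself; what can be tuned is how long dirty pages may sit in memory before writeback. A minimal sketch, with illustrative (deliberately aggressive) values rather than recommendations:

        # Let dirty (unwritten) pages occupy a large share of RAM
        sudo sysctl -w vm.dirty_background_ratio=50
        sudo sysctl -w vm.dirty_ratio=80

        # Let dirty pages age ~10 minutes before being flushed
        sudo sysctl -w vm.dirty_expire_centisecs=60000
        sudo sysctl -w vm.dirty_writeback_centisecs=60000

    Writes still reach the disk eventually, which is closer to the write-through RAM disk asked about than a plain tmpfs mount, which never syncs to disk at all.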

  • Ghost image - Windows asks for activation when deployed to VM

    - by Chris Sobolewski
    I have several images created with Ghost Solution Suite (v11, I believe). The images have been in use for a few years now, but I am finally to the point where I have enough time to attempt to virtualize them for easier updates. I am running VMware and attempting to image the virtual machines with my Ghost image files. For my images I run sysprep with minisetup and use reseal. The image deploys successfully; however, when I start the VM for the first time, it demands Windows activation. This doesn't happen when I image a physical computer, even a different model with different hardware. The idea of virtualizing my images becomes rather worthless if I am unable to deploy them without having to activate every time (especially as Microsoft keeps declaring our volume licence key invalid for activations). Does anyone know why it is asking for activation on a virtual machine, but not a physical PC? How can I prevent this?

  • SQL Server: One 12-drive RAID-10 array or 2 arrays of 8-drives and 4-drives

    - by ben
    Setting up a box for SQL Server 2008, which would give the best performance (heavy OLTP)? The more drives in a RAID-10 array, the better the performance, but would losing 4 drives to dedicate them to the transaction logs give us more performance?

        Option 1: 12 drives in RAID-10 plus one hot spare.
        Option 2: 8 drives in RAID-10 for the database and 4 drives in
                  RAID-10 for the transaction logs, plus 2 hot spares
                  (one for each array).

    We have 14 drive slots to work with, and it's an older PowerVault that doesn't support global hot spares.
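
    One way to settle it empirically before committing the layout: benchmark each candidate configuration with Microsoft's SQLIO tool. A sketch (the flag set is quoted from memory, so verify with sqlio -? before relying on it; paths are illustrative):

        rem Random 8KB writes, 2 threads, 8 outstanding I/Os, 120s:
        rem approximates the data-file access pattern
        sqlio -kW -frandom -b8 -o8 -t2 -s120 -LS D:\testfile.dat

        rem Sequential 64KB writes: approximates the transaction-log pattern
        sqlio -kW -fsequential -b64 -o8 -t2 -s120 -LS E:\testfile.dat

    The comparison that matters is the log pattern running while the data pattern hammers the same spindles; log writes benefit most from an undisturbed sequential queue.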

  • Compiling the Linux kernel, how much size is needed?

    - by ant2009
    I have downloaded the newest stable Linux kernel, 2.6.33.2. I thought I would test this using VirtualBox, so I created a dynamically sized hard disk of 4 GB and installed CentOS 5.3 with just the minimum packages. I set up make menuconfig with just the default settings. After that I ran make and got the following error:

        net/bluetooth/hci_sysfs.o: final close failed: No space left on device
        make[2]: *** [net/bluetooth/hci_sysfs.o] Error 1
        make[1]: *** [net/bluetooth] Error 2
        make: *** [net] Error 2

    The amount of space I have left is:

        # df -h
        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00  3.3G  3.3G     0 100% /
        /dev/hda1                         99M   12M   82M  13% /boot
        tmpfs                            125M     0  125M   0% /dev/shm

    My virtual size is 4 GB, but the actual size is 3.5 GB:

        $ ls -hl
        total 7.5G
        -rw-------. 1 root root 3.5G 2010-04-13 14:08 LFS.vdi

    How much space should I allow for when compiling and installing a Linux kernel? Are there any guidelines to follow when doing this? This is my first time, so I'm just experimenting with this.
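
    If the guest runs out of room again, the VDI can be grown instead of recreated. A sketch (the --resize switch assumes VirtualBox 4.0 or later; the guest's partition and LVM volume still have to be expanded separately afterwards):

        # On the host: grow the virtual disk to 10 GB (argument is in MB)
        VBoxManage modifyhd LFS.vdi --resize 10240

        # Inside the guest, an alternative: direct build output to a
        # separate, roomier filesystem instead of the source tree
        make O=/mnt/build menuconfig && make O=/mnt/build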

  • Identifying Hard Drive as performance bottleneck for desktop machines

    - by Programming Hero
    I'm working in a development team where we all use laptops so we can work in multiple locations. These laptops are proving notoriously slow for development work, but at a glance they all seem to have the specification for a much faster experience:

        CPU    - Intel Core 2 Duo T7500
        Memory - 2GB of RAM

    We all experience the biggest delays when the hard drives are being accessed, particularly when swap files are being thrashed. After doing a little profiling, a colleague discovered that our HDDs are seeing read/write speeds of about 10MB/sec. This seems abnormally low and we believe it is the cause of the problem. Sensibly (though somewhat annoyingly) our business won't blow money on faster drives just to see if it fixes the problem; we need to illustrate that this is definitely the problem and that buying some solid-state drives will make it go away. I need some way of showing how 90% of the system resources aren't being used over the course of a day, and that whenever there is utilization, it's all HDD reads or writes. Are there any tools I could use to provide this information? Does it seem likely the problem will be fixed by a faster drive? Should I be looking for alternatives?
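
    On Windows, the perfmon counters can produce exactly that day-long evidence as a CSV. A sketch (counter names assume the English counter set; sample counts are illustrative):

        rem Disk queue, disk time and CPU sampled every 15s for 8 hours
        typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
                 "\PhysicalDisk(_Total)\% Disk Time" ^
                 "\Processor(_Total)\% Processor Time" ^
                 -si 15 -sc 1920 -f CSV -o disk_evidence.csv

    A chart of sustained disk queue length alongside an idle CPU is hard to argue with when making the SSD case.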

  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: we're running RAID 5 + hot spare (8x 500 GB spindles) on a PERC6i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x 2 TB + 1x 800 GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: we have an SBS server as well as a minor (2x 50 GB, but growing at 10 GB/month) database server. The application that lives on the database VM is CPU- and I/O-intensive; it's a database-churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on).

    Performance issue: when I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you have PCI-X bandwidth, but the system-wide slowdown is killing productivity.

    Questions: What should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD? Are there sweet spots/ugly spots in the storage setup? Can you use an SSD PCI-X card in ESX for the tempdb? Good/bad idea? Is there any way to "share" some image in a copy-on-write fashion? Most of the backup-copy-restore is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2x 50 GB) would only need to be done once per week instead of once per dev per week. [Runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box.]
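
    On the copy-on-write question: standalone ESX 4.0 doesn't expose linked clones in the UI, but cloning the master disk as thin-provisioned at least avoids copying allocated-but-empty space. A sketch (run from the ESX console; datastore paths are illustrative):

        # Clone the master VMDK as thin; blocks are allocated only as
        # the dev VM actually writes to them
        vmkfstools -i /vmfs/volumes/datastore1/master/master.vmdk \
                   -d thin /vmfs/volumes/datastore1/dev1/dev1.vmdk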

  • Sticky sessions in Lighttpd

    - by Matias
    Hello, I've read in http://redmine.lighttpd.net/projects/1/wiki/Docs%3AModProxyCore that sticky sessions are not currently implemented in lighttpd. I'd like to know if it is possible to have sticky sessions using lighttpd as a load balancer (perhaps implementing the sticky sessions using FastCGI or applying some patch?). Thanks and bye!!

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server on SingleHop. I'm running it through some tests to know what to expect performance-wise. On the I/O side (with 4x 1 TB disks in RAID 10) I get:

        write cache disabled: 200 MB/s read throughput
                               30 MB/s write throughput

    I thought that was really low compared to my desktop HDD, which gets 150/150 or so. So I had a chat with them and they suggested enabling the write cache. New results:

        write cache enabled:  280 MB/s read
                              260 MB/s write

    which is great and all, but it means I'd have to add a BBU for an additional monthly cost. Is it normal for the write throughput to be 1/4 of a regular drive on RAID 10 if you don't have write cache? It almost feels like it's intentionally bad to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150.
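
    For numbers that are comparable across machines, a plain dd run is a common yardstick (a sketch; the path is illustrative and the cache drop needs root):

        # Sequential write: 4 GB, fsync'd before dd reports, so the number
        # reflects the disk (or its write cache), not the page cache
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 conv=fdatasync

        # Drop caches, then a sequential read of the same file
        sync; echo 3 > /proc/sys/vm/drop_caches
        dd if=/tmp/ddtest of=/dev/null bs=1M
        rm /tmp/ddtest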

  • Simultaneous read/write to RAID array slows server to a crawl

    - by Jeff Leyser
    Fairly beefy NFS/SMB server (32GB RAM, 2 quad-core Xeons) with an LSI MegaRAID 8888ELP controlling 12 drives configured into 3 different arrays. Five 2TB drives are grouped into a RAID 6 array. As expected, write performance to the array is slow. However, sustained, simultaneous read/write to the array (whether through NFS or done locally) seems to practically block any other access to anything else on the controller. For example, if I do:

        cp /home/joe/BigFile /home/joe/BigFileCopy

    where BigFile is 20G, then even a simple ls /home/jane will take many tens of seconds to complete. In addition, an ls /backup will also take many tens of seconds, even though /backup is a different array on the same controller. As soon as the cp is done, everything is back to normal.

        cp /home/joe/BigFile /backup/BigFile

    does not exhibit this behavior. It's only when doing read/write to the same array.
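
    To see what the controller is doing while the stall happens, and to test whether I/O scheduling is the culprit, a sketch (device names illustrative; ionice's idle class assumes the CFQ scheduler):

        # Per-device utilization, queue size and await, every 5 seconds
        iostat -x 5

        # Re-run the copy at idle I/O priority; if the stalls disappear,
        # the scheduler queue is the choke point rather than the array
        ionice -c3 cp /home/joe/BigFile /home/joe/BigFileCopy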

  • Linux Tuning for High Traffic JBoss Server with LDAP Binds

    - by Levi Stanley
    I'm configuring a high-traffic Linux server (RedHat) and running into a limit I haven't been able to track down. I need to be able to handle a sustained 300 requests per second throughput using Nginx and JBoss. The point of this server is to run checks on a user's account when that user signs in. Each request goes through Nginx to JBoss (specifically TorqueBox on JBoss AS7, running a Sinatra app) and then makes an LDAP request to bind that user and retrieve several attributes. It is during the bind that these errors occur. I'm able to reproduce this going directly to JBoss, so that rules out Nginx at least.

    I get a variety of error messages, though oddly JBoss stopped writing to the log file recently. It used to report errors about creating native threads. Now I just see "java.net.SocketException: Connection reset" and "org.apache.http.conn.HttpHostConnectException: Connection to http://my.awesome.server:8080 refused" as responses in JMeter.

    To the best of my knowledge, I have plenty of available file handles, processes, sockets, and ports, yet the issue persists. Unfortunately, I have very little experience tuning servers. I've found a couple of useful documents - the Ipsysctl tutorial 1.0.4 and Linux Tuning - but those documents are a bit over my head, and just entering the configuration described in Linux Tuning doesn't fix my issue.

    Here are the configuration changes I've tried (webproxy is the user that runs Nginx and JBoss).

    /etc/security/limits.conf:

        webproxy soft nofile 65536
        webproxy hard nofile 65536
        webproxy soft nproc 65536
        webproxy hard nproc 65536
        root soft nofile 65536
        root hard nofile 65536
        root soft nproc 65536
        root hard nofile 65536

    First attempt, /etc/sysctl.conf:

        sysctl net.core.somaxconn = 8192
        sysctl net.ipv4.ip_local_port_range = 32768 65535
        sysctl net.ipv4.tcp_fin_timeout = 15
        sysctl net.ipv4.tcp_keepalive_time = 1800
        sysctl net.ipv4.tcp_keepalive_intvl = 35
        sysctl net.ipv4.tcp_tw_recycle = 1
        sysctl net.ipv4.tcp_tw_reuse = 1

    Second attempt, /etc/sysctl.conf:

        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216
        net.core.netdev_max_backlog = 30000
        net.ipv4.tcp_congestion_control=htcp
        net.ipv4.tcp_mtu_probing=1

    Any ideas what might be happening here? Or better yet, are there some good documentation resources designed for beginners?
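
    One hedged sanity check before more tuning: limits.conf only applies to new PAM login sessions, so a daemon started from an init script may never see those values. A sketch to verify what the running processes actually got (the /proc limits file assumes a reasonably recent kernel):

        # Limits as seen by the user that runs Nginx and JBoss
        su - webproxy -c 'ulimit -a'

        # Limits of the live JBoss process itself (first matching PID)
        cat /proc/$(pgrep -f jboss | head -1)/limits

        # Socket-state summary: a pile of TIME_WAIT sockets points at
        # ephemeral-port exhaustion rather than file handles
        ss -s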

  • Can't access USB drive anymore

    - by marie
    I have a 32 GB LaCie CooKey USB flash disk that doesn't show in the Computer window, but it's visible as a device.

        cmd > diskpart

        DISKPART> list disk

          Disk ###  Status     Size
          --------  ---------  -----
          Disk 0    Online     149 G
          Disk 1    No Media   0

        DISKPART> select disk 1

        Disk 1 is now the selected disk.

        DISKPART> clean

        Virtual Disk Service error:
        There is no media in the device.

    It also appears in the Disk Management tool, but the box is empty. Is there anything I can do, or is it dead?

    Output from ChipGenius:

        Description: [F:]USB Mass Storage Device(LaCie CooKey)
        Device Type: Mass Storage Device
        Protocal Version: USB 2.00
        Current Speed: High Speed
        Max Current: 200mA
        USB Device ID: VID = 059F PID = 103B
        Serial Number: 070535924B170C18
        Device Vendor: LaCie
        Device Name: CooKey
        Device Revision: 0100
        Manufacturer: LaCie
        Product Model: CooKey
        Product Revision: PMAP
        Controller Vendor: Phison
        Controller Part-Number: PS2251-67(PS2267) - F/W 06.08.53 [2012-09-26]
        Flash ID code: 983AA892 - Toshiba [TLC]
        Tools on web: http://dl.mydigit.net/special/up/phison.html

  • Simple Workstation Imaging Solution?

    - by Will
    Hey guys, I need a fairly cheap imaging solution for Windows XP corporate desktops. Ideally, I'd be able to set up a desktop exactly as we want it, create an image, deploy this image to a server, then boot a new desktop from a CD/USB drive/network and quickly set up the workstation. Ideally, each computer would also have a unique workstation name. Any ideas? Right now I'm using a custom-built Linux dd solution, but it's slow, it isn't network-based, it can't image multiple computers at the same time (there's only one copy, on a USB drive), and it can't uniquely name the computers. Thanks, Will
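
    For reference, the dd-style approach can at least be made network-based with netcat (a sketch of the idea only; hostnames and devices are illustrative, and netcat listen-flag syntax varies between implementations):

        # On the image server: listen and store the compressed image
        nc -l -p 9000 > winxp-base.img.gz

        # On the reference machine, booted from a live CD:
        dd if=/dev/sda bs=4M | gzip -c | nc imageserver 9000

        # Deploying to a new workstation (same live CD):
        #   server:      cat winxp-base.img.gz | nc -l -p 9000
        #   workstation: nc imageserver 9000 | gunzip -c | dd of=/dev/sda bs=4M

    Unique workstation names would still need a sysprep pass in the image, which the dedicated imaging products automate.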

  • How to tell which process is hogging my CPU when they don't add up to 100%?

    - by endolith
    Ubuntu's System Monitor applet shows 100% CPU usage continuously. If I click it, the Resources tab shows it at 100% continuously too. If I go to Processes, though, to find out which process is the culprit, there is nothing above 10%. If I run top there is nothing above 10%. The individual processes do not add up to 100%. I've tried killing lots of processes, but the overall usage continues to be 100%. How can I find out what's hogging the CPU? This is an unusual situation on a computer I use daily, which is never anywhere near 100% CPU unless I'm doing something that requires it (like loading 32 Firefox tabs), after which it goes back to a normal idle level. It's not a new install or anything. There is no reason the processor should be maxed out. I'm not sure when it started or whether I changed something that caused it to happen. Normally I would use top or System Monitor and find the process that had gone out of control, but I can't find anything with those tools this time. It persists after reboots and everything. And the processor is obviously hot, so it's not an erroneous reading.

    Update: I tried killing every process, one at a time, until the problem went away, and killing vino-server finally fixed it, even though that process never went above 5%. I had enabled Remote Desktop a few days ago (and have obviously now disabled it). But the question remains: how did a single process manage to use 100% CPU while top only showed it at 5%? How do I identify culprits like this in the future? It looks like I'm not the only one who's had this problem; it's still a problem in both Jaunty and Karmic. Interestingly, neither System Monitor nor htop shows the sum of the individual processes anywhere near 100% CPU.
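
    When the per-process numbers don't add up, the missing time is often in kernel threads, interrupt handling, or short-lived children that die between display refreshes. A couple of hedged ways to look:

        # Per-thread view; the header's us/sy/wa/hi/si/st fields show
        # where non-process time (kernel, IRQ, I/O wait) is going
        top -H

        # One-shot snapshot of top CPU consumers, kernel threads included
        ps -eo pcpu,pid,user,comm --sort=-pcpu | head -15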

  • Missing HDD space - says 65GB used, selecting all folders shows 30GB used

    - by Igor K
    Hi. Running Windows Server 2008 on a 74 GB Raptor drive, and we noticed we only had about 500 MB left - yikes! So I deleted some old backups we don't need, but I can't track down where about 30 GB seems to be taken up. If I go to C:, select all folders, and go to Properties, this comes to around 30 GB, but in My Computer I can see 65 GB is used. How can I find out what's eating the space? There's just IIS + MSSQL Express + SmarterMail on the server.
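
    Two usual suspects that Explorer's folder Properties can't see: Volume Shadow Copies and files hidden behind NTFS permissions. A sketch of checks from an elevated prompt (the du here is the Sysinternals du.exe, an assumed add-on, not a Windows built-in):

        rem Space reserved by shadow copies (previous versions / backups)
        vssadmin list shadowstorage

        rem Re-total folder sizes with access to system/hidden directories
        du -l 1 C:\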

  • USB 3.0 hard disk not detected on a particular host controller?

    - by Alvin Wong
    I have a USB 3.0 hard disk which has always worked on my desktop with an XHCI controller. I just bought a notebook with an XHCI controller (an Intel Ivy Bridge setup). The first time I plugged the hard disk into one of its USB 3.0 ports, it was detected and working. A few hours later I tried to connect it again, but the notebook just ignored it! The light on the hard disk didn't blink as usual (instead it stayed on solidly). I then tested it with my desktop again and it works perfectly. It gets trickier: when I plug it into the USB 2.0 port of that notebook, it is detected and works perfectly (despite the slower speed). Then I tried plugging a USB 2.0 flash drive into that USB 3.0 port, and it is detected (of course as USB 2.0). So, there are two USB 3.0 ports on my notebook's XHCI controller; both of them fail with my hard disk but work perfectly fine with my USB 2.0 flash drive. What's wrong? When I plug in the hard disk, Device Manager doesn't change. I've tried re-installing the driver for the XHCI controller, but it changes nothing. Have I broken the USB 3.0-specific pins on both ports?

  • How to re-do the hard disks in a WD World Book Edition II?

    - by jfmessier
    I recently purchased a WD World Book II, a 2 TB one. I call it the "White Box". It has those two 1 TB drives, which came in a RAID 1 config, only giving me about 1 TB. I could not delete the RAID array, so I put the drives in a Linux box. But I also deleted the entire partitions of the disks, and now I cannot even get the existing RAID array back on this WD White Box. The drives are fine, but I cannot get them to work in the WD White Box. My goal was to get back to a real 2 TB of storage space. If I cannot get those drives back into the White Box, I can re-use them elsewhere, but this would mean wasting the firmware and network connection. After the fact, I read that the network performance is rather poor anyway. Thanks :-)
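
    Before giving up on the enclosure, it may be worth checking from the Linux box whether any RAID metadata survived the repartitioning. A sketch (device names are illustrative, and it is an assumption that the WD firmware uses Linux md RAID, as these NAS boxes commonly do):

        # Look for md superblocks left on the drives
        sudo mdadm --examine /dev/sdb /dev/sdc

        # If the box expects virgin disks to build a fresh array from,
        # zeroing any leftover metadata first sometimes helps
        sudo mdadm --zero-superblock /dev/sdb1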

  • How can I keep a file in Windows 7's cache?

    - by netvope
    Sometimes you know better than Windows what files will be re-used later. Suppose you have 8GB of memory, and you use the same 1GB file every hour in an I/O-bound application (which takes 1 second to finish if the file is cached, and 1 minute if not.) Now you process some other 16GB of data that are not going to be re-used. Naturally the frequently used 1GB file will be pushed out of the cache. It would be beneficial if one can tell Windows to keep that 1GB file in memory. (Better yet, it would be great if I can tell Windows not to cache those 16GB of data, but I'm not optimistic that this can be done.) The poor-man's way to keep a file in the cache would be to keep reading the file. Are there any better ways? Are you aware of any programs that do this? (If this can be easily done under Linux, please let me know too.)
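
    For the Linux side of the question, vmtouch is a small utility built for exactly this (a sketch; vmtouch is a third-party tool that must be installed separately, and the path is illustrative):

        # Pull the file into the page cache once
        vmtouch -t /data/hot-1gb.file

        # Or mlock it into memory for as long as the daemon runs
        vmtouch -dl /data/hot-1gb.file

    On Windows there appears to be no supported equivalent short of a program that maps the file and keeps re-reading it, i.e. the poor-man's approach already described.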

  • Conflicting answers from du with different units

    - by dpitch40
    My question is quite simple. I get this output when checking the total amount of space I'm using on my Walkman:

        david@Milton:/media$ du -b --max-depth=0 WALKMAN/
        14823290693    WALKMAN/
        david@Milton:/media$ du -k --max-depth=0 WALKMAN/
        14523776       WALKMAN/

    Last I checked, 14,523,776 KB * 1024 = 14,872,346,624 B, not 14,823,290,693. Dividing the two, their "K" unit seems to be equal to about 1020.62 rather than 1024 as advertised. This is causing some errors in the program I wrote to sync my Walkman, so it fills up faster than it claims to. Can anyone explain this discrepancy?
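
    The likely explanation: -b reports apparent size (the sum of file lengths), while -k reports allocated blocks, which round each file up to the filesystem's cluster size. A quick sketch to confirm, comparing both in the same unit:

        # Apparent size in bytes (equivalent to what -b shows)
        du --apparent-size --block-size=1 --max-depth=0 WALKMAN/

        # Allocated size in bytes (what -k shows, converted to bytes)
        du --block-size=1 --max-depth=0 WALKMAN/

    With many small files on a FAT volume with large clusters, allocated size can exceed apparent size by a few percent, which matches the ~1020.62 ratio; a sync program should budget against the allocated figure.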
