Search Results

Search found 8937 results on 358 pages for 'disk defragmenting'.

Page 6/358 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Why isn't Startup Disk Creator working in 12.04?

    - by Steve Kelem
    I'm trying to create a bootable USB stick (7.5G) for Ubuntu 12.04 (x86_64) from another Ubuntu 12.04 x86_64 PC. I downloaded Ubuntu 12.04 LTS "Precise Pangolin" - Release amd64 (20120425). When I ran Make Startup Disk, I selected the downloaded release. The drive shows up with a capacity of 7.5GB and a blank space under "Free Space". I have tried the "Erase Disk" button, which does seem to erase the disk. The problem is that the options below the "Disk to use" section are grayed out. The "Make Startup Disk" button is colored dull orange, while the source disc image and device to use are bright orange, and clicking "Make Startup Disk" does nothing. The only working buttons are "Other...", "Erase Disk", and "Close". Using the "Other..." button I can select the ISO, but it doesn't load and the "Source Disk Image" field remains empty.
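
    When Startup Disk Creator stays stuck like this, writing the image directly with dd is a common workaround. A minimal sketch, assuming the ISO is named as downloaded and the stick appears as /dev/sdb (verify with lsblk first; dd overwrites the target device):

        # Identify the USB stick -- the device name below is an example.
        lsblk
        # Write the ISO raw to the whole device (not a partition), then flush.
        sudo dd if=ubuntu-12.04-desktop-amd64.iso of=/dev/sdb bs=4M
        sync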

    Read the article

  • Virtual machine files on a ramdisk don't run faster than on a physical disk

    - by Landy
    I installed 36GB of memory (4x8GB + 2x2GB) in the host (Windows 7) and used ImDisk to create a 32GB ramdisk, formatted as NTFS. Then I copied the virtual machine folder (in VMware Workstation format, including the vmx, vmdk, etc.) to the newly created ram disk and tried to power it on in VMware Workstation. What surprised me is that the performance is no better than before: it took almost the same time to power on the Windows 7 VM. I checked Resource Monitor on the Windows 7 host, and the CPU, disk and network statistics are all normal. Memory reported 3000+ hard faults/sec while the guest OS booted, then dropped to 0 after the guest powered on. Any idea about this issue? I had thought a ramdisk would outperform a physical disk in this case. Am I wrong? Thanks.
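
    The question doesn't show the creation command; for context, a ramdisk like this is typically attached with ImDisk's command-line tool. The exact flags below are my assumption from ImDisk's documented options, so verify against imdisk's own help output before relying on them:

        REM Attach a 32GB virtual-memory-backed ramdisk as R: and format it NTFS.
        imdisk -a -t vm -s 32G -m R: -p "/fs:ntfs /q /y"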

    Read the article

  • Dynamic Disk: Revert to basic or...?

    - by someguy
    When I was trying to create a new partition (via Disk Management), it warned me that the disk would be made dynamic, but I thought it meant only the partition and went ahead. Now my hard disk, which holds the main C: partition, is dynamic. I haven't shut down the computer, and I'm not sure what the consequences are. Should I revert to basic, or...? Whatever happens, I don't want to lose my data. Edit: I think I should mention that I don't know how to revert to basic...

    Read the article

  • Hard Drive Fundamentals And Verifying Disk Performance

    - by Agnel Kurian
    Over the past few months, my Windows XP machine has slowed down to a crawl. It takes about 10-15 minutes to go from power-up to a responsive state. I have reason to believe this is a result of the hard disk slowing down. Questions: Do hard disks slow down as a result of mechanical wear and tear, or age? How do I check whether my disk has slowed down? Conversely, how can I verify that my disk is indeed running at the speed it was designed for? Could drivers be at fault here? Do hard disks come with drivers, or does Windows use a generic driver?
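
    One concrete check is the drive's SMART data. A minimal sketch assuming smartmontools is installed (it is available for Windows XP as well as Linux; the device name is an example):

        REM Health status plus attributes such as Reallocated_Sector_Ct --
        REM rising raw values there often explain a disk that "got slow",
        REM since each bad sector forces retries and remapping.
        smartctl -a /dev/sda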

    Read the article

  • Missing disk space in Windows XP

    - by Jørn Schou-Rode
    On my mother's Lenovo laptop, Windows XP claims that the hard drive is almost full. According to the properties window, 52.7 of 55.2 GB is in use. By deleting temp files from Internet Explorer, System Restore, the Recycle Bin, Windows Update and system cleanup, I managed to free up about one GB. That still leaves 50 GB in use, which is a lot more than I expected. Hence, I gave good old WinDirStat a spin; the first line of its output says that the total disk space in use on drive C: is 24.3 GB. So Windows claims 52.7 GB in use and WinDirStat can only account for 24.3 GB. Where is the other half of that disk space going? I hope someone has an answer, or some tricks or tips for further research. UPDATE: The laptop in question has an SSD. I am aware that these disks (at least the earlier ones) have a limited lifetime. Could the symptoms described be caused by wear and tear on the SSD?

    Read the article

  • Solaris kstat sdX disk nread counter value decreasing

    - by mykhal
    I get strange disk I/O nread (bytes read) counter values from kstat on Solaris. Here is an example of the nread values for the sd6 disk, collected at a 30-second interval (command: kstat -n sd6):

        768579416 768579416 768579416 768579416 768579416
        768579416 768579416 768496080 768496080 768496080
        768496080 768496080 768496080 768496080 768496080
        768530896 768530896 768447560 768447560 768447560

    One would suppose that the per-interval change in a cumulative read-bytes counter can't be negative. I wonder what can cause this situation and whether there is more reliable disk I/O data available. Some info about the system:

        machine:~ # uname -a
        SunOS machine 5.10 Generic_127112-11 i86pc i386 i86pc
        machine:~ # cat /etc/release
        Solaris 10 11/06 s10x_u3wos_10 X86
        Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
        Use is subject to license terms.
        Assembled 14 November 2006
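
    For what it's worth, a minimal polling sketch that computes per-interval deltas and flags backwards jumps explicitly, so a collector never reports negative throughput (the instance number matches the question's sd6):

        #!/bin/sh
        # Poll sd6's cumulative nread every 30s and print per-interval deltas.
        prev=
        while :; do
            cur=`kstat -p -m sd -i 6 -s nread | awk '{print $2}'`
            if [ -n "$prev" ]; then
                delta=`expr $cur - $prev`
                if [ "$delta" -lt 0 ]; then
                    # Counter went backwards (reset or read anomaly):
                    # treat as missing data rather than negative I/O.
                    echo "skipped sample: counter decreased by ${delta#-} bytes"
                else
                    echo "bytes read in last interval: $delta"
                fi
            fi
            prev=$cur
            sleep 30
        done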

    Read the article

  • Disk performance below expectations

    - by paulH
    This is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed). I have a PowerEdge R510 server with a PERC H700 integrated RAID controller (call this Server B) that was built using eight disks with 3Gbps bandwidth, which I was comparing with an almost identical server (call this Server A) that was built using four disks with 6Gbps bandwidth. Server A had much better I/O rates than Server B. Once I discovered the difference with the disks, I had Server A rebuilt with faster 6Gbps disks. Unfortunately this resulted in no increase in disk performance. Expecting that there must be some other configuration difference between the servers, we took the 6Gbps disks out of Server A and put them in Server B. This also resulted in no increase in disk performance. We now have two identically built servers, except that one has six 6Gbps disks and the other eight 3Gbps disks, and the I/O rates of the disks are pretty much identical. This suggests a bottleneck somewhere other than the disks, but I cannot understand how Server B originally had better I/O that has subsequently been 'lost'. Comparative I/O figures below, as measured by SQLIO, with the same parameters used for each test. It's not the actual numbers that are significant but rather the variations between systems. In each case D: is a 2-disk RAID 1 volume and E: is a 4-disk RAID 10 volume (apart from the original Server A, where E: was a 2-disk RAID 0 volume).

        Server A (original setup, 6Gbps disks):  D: read 63 MB/s   D: write 170 MB/s   E: read  68 MB/s   E: write 320 MB/s
        Server B (original setup, 3Gbps disks):  D: read 52 MB/s   D: write  88 MB/s   E: read 112 MB/s   E: write 130 MB/s
        Server A (new setup, 3Gbps disks):       D: read 55 MB/s   D: write  85 MB/s   E: read  67 MB/s   E: write 180 MB/s
        Server B (new setup, 6Gbps disks):       D: read 61 MB/s   D: write  95 MB/s   E: read  69 MB/s   E: write 180 MB/s

    Can anybody suggest what is going on here? The drives in use are as follows:

        Dell Seagate F617N ST3300657SS 300GB 15K RPM SAS
        Dell Hitachi HUS156030VLS600 300GB 3.5 inch 15000rpm 6GB SAS
        Hitachi HUS153030VLS300 300GB Server SAS
        Dell ST3146855SS Seagate 3.5 inch 146GB 15K SAS
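
    For anyone reproducing this, a representative SQLIO invocation is sketched below; these particular parameters are my illustration, not the ones from the question, which doesn't state them:

        REM 60s of 8KB random writes, 8 outstanding I/Os, latency statistics,
        REM against the test file(s) listed in param.txt.
        sqlio -kW -s60 -frandom -o8 -b8 -LS -Fparam.txt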

    Read the article

  • Store Varnish cache on hard disk

    - by Great Kuma
    Hello, the situation is: I'm building a PHP application and need HTTP caching. Varnish is great, and lots of people tell me that Varnish stores its cached data in RAM, but I want it cached on the hard disk. Is there any way to store Varnish's cached data on disk? Thanks.
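
    Varnish does have a disk-backed option: the file storage backend, selected with -s when starting varnishd. A minimal sketch (the path, size and ports are examples, not prescriptions):

        # Cache objects in a 10GB disk-backed file instead of RAM-only storage.
        # The file is still memory-mapped, so hot objects live in the page
        # cache, and note the cache does not survive a varnishd restart.
        varnishd -a :6081 -b localhost:8080 -s file,/var/lib/varnish/storage.bin,10G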

    Read the article

  • Why is the disk making my motherboard beep?

    - by Mark Ransom
    Whenever I let my PC do heavy disk access for a long time, the speaker on the motherboard starts making a continuous chirping sound. Thankfully it doesn't happen often, but it drives me nuts when it does. Does anybody know where this sound might be coming from, or have any hints on how to track it down? Edit: The problem appears to be with the processor; the correlation with disk access was coincidental. Thanks for all the answers.

    Read the article

  • Clean install vs disk image

    - by Thanos
    Once a year I do a clean install of Windows in order to keep my system fast. After posting a question on making a bootable Windows USB with exe programs, where I was advised to make a disk image instead, a new question arose: what is the difference between restoring a disk image and performing a clean install of Windows? Which is better in terms of speed, general performance, value for time, and transferring between different computers?

    Read the article

  • How do I change a damaged disk in a RAID 5 array

    - by Egakagoc2xI
    Hi, I have a server with a four-drive RAID 5 array; one of the disks is damaged. All the disks are hot-pluggable. My question: I want to replace the damaged disk with a new one. Do I have to shut down the server, or can I just swap the hard disk with the server on and let it rebuild the array? Is there a procedure to follow? My server is an HP. Regards.
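
    With hot-plug drives and a healthy HP Smart Array controller, swapping the failed drive while the server is running normally triggers an automatic rebuild. It's worth confirming exactly which drive has failed first; a sketch assuming HP's hpacucli tool is installed and the controller sits in slot 0 (both assumptions):

        # Show the whole array configuration, including any failed drives.
        hpacucli ctrl all show config
        # Per-drive status detail for the controller in slot 0.
        hpacucli ctrl slot=0 pd all show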

    Read the article

  • Bad disk performance on HP DL360 with Smart Array P400i RAID controller

    - by sarge
    I have an HP DL360 server with 4x 146GB SAS disks and a Smart Array P400i RAID controller with 256MB cache. The disks are in RAID 5 (3 disks + 1 hot spare). The server is running VMware ESX 3i. The disk write performance is really bad. Here are some numbers:

        ns1:~# hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   3364 MB in 2.00 seconds = 1685.69 MB/sec
         Timing buffered disk reads:   18 MB in 3.79 seconds = 4.75 MB/sec
        ns1:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
        125000+0 records in
        125000+0 records out
        1024000000 bytes (1.0 GB) copied, 282.307 s, 3.6 MB/s
        real    4m52.003s
        user    0m2.160s
        sys     3m10.796s

    Compared to another server, those numbers are terrible. Dell R200, 2x 500GB SATA disks, PERC RAID controller (disks are mirrored):

        web4:~# hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   6584 MB in 2.00 seconds = 3297.79 MB/sec
         Timing buffered disk reads:   316 MB in 3.02 seconds = 104.79 MB/sec
        web4:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
        125000+0 records in
        125000+0 records out
        1024000000 bytes (1.0 GB) copied, 35.2919 s, 29.0 MB/s
        real    0m36.570s
        user    0m0.476s
        sys     0m32.298s

    The server isn't very loaded, and the VMware Infrastructure Client performance monitor shows 550KBps average read and 1208KBps average write for the last 30 minutes (highest write rate: 6.6MBps). This has been a problem from the start. Any ideas?
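
    One frequent cause of exactly this pattern on a P400i is the write cache being disabled because the battery-backed write cache (BBWC) module is absent or its battery has failed, leaving the controller in write-through mode. A way to check, assuming hpacucli is available (on ESX 3i that typically means an offline/maintenance environment rather than the host itself, and the slot number is an assumption):

        # Cache and battery status for the controller in slot 0
        # (confirm the slot with "hpacucli ctrl all show").
        hpacucli ctrl slot=0 show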

    Read the article

  • Windows 7 keeps insisting that it needs to check disk for consistency, but never does

    - by Mike
    Lately Windows 7 has been telling me that I need to check disk D: for consistency. This happens more than 50% of the time when booting up. The first time, I didn't touch anything so that it would go ahead and do its scan. It didn't seem to do anything - just booted straight into Windows. The second time I tried to skip it by pressing any key. It ignored all of my keystrokes and still counted down to 0 (then skipped the disk check). Sometimes it gets down to 0 but then just hangs, with no indication that anything is going on. This is happening on a less than 3-month-old laptop. C: and D: are on the same physical disk - just two partitions. I never get any notification that C: needs to be checked for consistency. It's a ~300GB HD. C: has 60GB (32GB free) and D: has ~240GB (122GB free). What could be causing this to keep coming up? What can I do to fix it? Thanks!
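
    A sketch of the usual way to break this loop, on the assumption that the repeated prompts come from a stuck NTFS dirty bit on D: (run from an elevated command prompt):

        REM Is the volume flagged dirty? If yes, boot-time checks keep queuing.
        fsutil dirty query D:
        REM Let chkdsk actually run to completion, which clears the flag.
        chkdsk D: /f
        REM Confirm no boot-time check remains scheduled for the drive.
        chkntfs D: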

    Read the article

  • After reinstallation, Disk Cleanup disappears when I click OK.

    - by James
    After I reinstalled Windows 7, Disk Cleanup stopped working. I can start Disk Cleanup and select the drive to clean, but when I click the OK button, the window disappears. Any solutions? Here's the data from Windows Logs > Application. This event was logged with an Information icon:

        EventData: 1744235005 1 APPCRASH Not available 0
        cleanmgr.exe 6.1.7600.16385 4a5bc5e1
        Csi.dll 14.0.4733.1000 4b5662be
        c0000005 00135213
        F:\Users\Jacob\AppData\Local\Temp\WER419.tmp.WERInternalMetadata.xml
        F:\Users\Jacob\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_cleanmgr.exe_6514b6ecb633f97cbf78e3a5bcae2c4bd74351_0d3b109c
        0 75fa9599-41b1-11e0-b864-001966b2bcb6 0

    The one below was logged with an Error icon:

        EventData: cleanmgr.exe 6.1.7600.16385 4a5bc5e1
        Csi.dll 14.0.4733.1000 4b5662be
        c0000005 00135213
        bbc 01cbd5be36b572bf
        F:\Windows\system32\cleanmgr.exe
        F:\Program Files\Common Files\Microsoft Shared\OFFICE14\Csi.dll
        75fa9599-41b1-11e0-b864-001966b2bcb6

    I also used Process Explorer: when I started Disk Cleanup, a cleanmgr.exe process appeared under explorer.exe. When I clicked the OK button after selecting the drive, cleanmgr.exe stayed around for a few seconds before disappearing, and a new process, WerFault.exe, appeared under svchost.exe a few seconds after the click. It disappeared from the process list too after some time (I think along with cleanmgr.exe).

    Read the article

  • Win 2008 R2 - copying TO disk is very slow, copying FROM is more or less okay

    - by avs099
    I have Windows 2008 R2 SP1 with 4 identical SATA disks (Seagate Barracuda 7200) in a RAID 5 array. It has 4GB of memory; all recent updates are installed. Problem: when I copy a large file from one folder to another, I get about 10MB/s average speed. When I read the same file from a network share via a 1Gbps connection, I get about 25-30 MB/s. Both numbers seem low to me, but I'm specifically frustrated by the low write speed. There is no antivirus and no Hyper-V; it's just a file server, and when I do my tests nobody else reads or writes from it (we have only 4 people in the team, so I'm sure). Not sure if it matters, but there is only one logical disk, C:, with all available space (1400 GB). I'm not an admin at all, so I have no idea where to look or what other information to provide. I did run Performance Monitor with "% idle time", "avg bytes read" and "avg bytes write"; the counters show obvious spikes that I can't explain. Any idea? Please let me know if you need me to provide more information - what counters I should check, etc. I'm very eager to get this solved. Thank you. UPDATE: We have another Windows 2008 R2 SP1 server with two RAID 1 arrays - one is disk C: (where Windows is installed), the other is disk E:. It runs Hyper-V and does not have antivirus. I noticed the following behavior when copying a large file (a few GBs):

        C: to C:  about 50 MB/s
        C: to E:  about 55 MB/s
        E: to E:  8 MB/s!
        E: to C:  8 MB/s!

    What could cause this? The E: drive is a RAID 1 array built from the same Seagate Barracuda 1TB drives.

    Read the article

  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to set up a Mac Mini server with an external drive that is encrypted. In Finder, I can use the full-disk encryption option, but for multiple users this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked and all users can access it. Of course permissions need to be maintained, but that goes without saying. What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk, with the encryption keys probably stored in root's keychain. So here's my list of concerns:

    1. If I store the encryption keys in the system keychain, then the file in /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. This is bad.
    2. If I store the encryption keys in my user keychain, I have to run the command manually with my password. This is undesirable.
    3. If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose.
    4. If root has a keychain (does it?), then how would it be decrypted? Would it remain locked until a password was entered (like a user keychain), or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access?

    Assuming all of the above works, I've found diskutil coreStorage unlockVolume, which seems to be the appropriate command, but where to store the encryption key is the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
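
    For the mechanics (deliberately leaving the key-storage question open), a minimal sketch of the unlock step a root LaunchDaemon could run at boot; the UUID, script path and keychain item name are placeholders:

        #!/bin/sh
        # /usr/local/sbin/unlock-external.sh -- run once at boot by a LaunchDaemon.
        # Placeholder UUID of the CoreStorage logical volume:
        UUID="11111111-2222-3333-4444-555555555555"
        # One possible source (but, per concern 1 above, vulnerable to
        # physical access): a passphrase item in the System keychain.
        PASS=$(security find-generic-password -w -s external-disk-key \
               /Library/Keychains/System.keychain)
        # -stdinpassphrase keeps the passphrase out of the process arguments.
        printf '%s' "$PASS" | diskutil coreStorage unlockVolume "$UUID" -stdinpassphrase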

    Read the article

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git designed for handling large files. It makes heavy use of symlinks: the actual large files are moved into the .git/annex directory, and the original paths become symlinks pointing there. I am running out of disk space, need to clean up, and want to see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However, they treat a symlink as essentially empty, counting only the file it points to as using any space. Which means that with git-annex they show no space used in the main directories and lots of space used in the .git/annex directory. This is not helpful. Is there any (graphical or ncurses-based) disk usage programme for Linux (apt-get installable would be easiest) that is capable, through options or otherwise, of counting a symlink as using up the space that the original file uses? Many have options for different hard-link behaviour, so it makes sense that some should handle symlinks similarly. (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc. But that's OK for my purposes.)
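
    Not graphical, but as a baseline GNU du can already do this with -L, which dereferences symlinks and charges each link the size of its target. A quick sketch from the repository root:

        # Per-directory totals with annexed files counted where they are
        # linked; beware that a target reachable via several links is
        # counted once per link.
        du -Lh --max-depth=1 . | sort -h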

    Read the article

  • Wiping Deleted Directory Entries and Defragmenting Directories

    - by Synetech inc.
    Hi, I have seen plenty of apps that wipe free space on a disk (usually by creating a file as big as the remaining space) or defragment a file (usually by using the MoveFile API to copy it to a new contiguous area). What I have not seen, however, is a program that wipes deleted directory entries. That is, when a file is deleted, its information (name, dates, etc.) remains in the directory but is simply marked as empty. That leaves all kinds of information behind, and also wastes space, since (at least on FAT drives) the directory may be using several clusters. For example, if a directory once held a lot of files, it will have been expanded into another cluster, which could be anywhere on the disk. This means the directory is fragmented and may be using more clusters than needed, possibly with hundreds of unused (i.e., "deleted file") entries between active files. Does anyone know of a program that can defragment/consolidate directories (i.e., wipe unused entries and pack active entries together)? (I would really rather not have to resort to writing my own yet again.) Thanks a lot. EDIT: Sorry, I should have said: Windows and/or DOS, for FAT*/NTFS.
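
    To illustrate the on-disk detail: in a FAT directory, each entry is 32 bytes, a deleted entry is marked by setting its first byte to 0xE5, and 0x00 marks the end of the used entries. A read-only sketch that counts such ghost entries in a raw dump of a directory's clusters (the dump file is hypothetical; obtaining it is left aside):

        # 32 bytes per xxd line = one directory entry per line; field $2 is
        # the first two bytes of the entry, so /^e5/ matches deleted entries.
        xxd -c 32 dirdump.bin | awk '$2 ~ /^e5/ { n++ } END { print n " deleted entries" }'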

    Read the article

  • Unable to format disk: 'The system cannot find the file specified'

    - by ACarter
    I have a USB flash drive, which I may have mucked up, so I used DISKPART's CLEAN command to wipe it. I created a simple volume and tried to format it (all using Windows' Disk Management), and was told: The system cannot find the file specified. So I tried DISKPART directly (as an admin):

        DISKPART> select volume 9
        Volume 9 is the selected volume.
        DISKPART> format recommended
        DiskPart has encountered an error: The system cannot find the file specified.
        See the System Event Log for more information.

    As you can see, no luck. When I plug the drive in, the computer makes a beep as though it has recognised something, but nothing appears in My Computer. How can I format the disk so I can use it again?
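
    For reference, the usual DISKPART recreate-from-scratch sequence is sketched below. It is destructive, and the disk number is only an example, so verify against the output of list disk before selecting:

        DISKPART> list disk
        DISKPART> select disk 2
        DISKPART> clean
        DISKPART> create partition primary
        DISKPART> format fs=fat32 quick
        DISKPART> assign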

    Read the article

  • How is an HP D2700 disk enclosure monitored for alarms via SNMP?

    - by VSAC
    We have an HP D2700 disk enclosure (connected to an HP ProLiant DL360 G8) and we would like to monitor it for alarms. I have the following questions:

    1. What are the options for reporting D2700 hardware alarms (disk failure, power failure) via SNMP? We understand the D2700 has an Ethernet interface on controllers A and B and that alarms are available via SNMP over this interface. Can anyone provide the actual alarms available this way (MIB and alarm list)?
    2. As we have a number of D2700s and would like to minimize the number of physical switch connections and associated IP addresses: is there a mechanism to monitor the D2700 from the SAS-connected DL360 and raise SNMP alarms from the DL360 for hardware failures on the D2700? If so, can anyone provide details, including the actual alarms and MIB, for this mechanism?

    Thanks!
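
    While waiting for the definitive MIB, a quick way to see what an enclosure's management interface actually exposes is simply to walk it; a sketch assuming the net-snmp tools, with a placeholder community string and address:

        # Walk the whole tree and save it for inspection; swap in the real
        # community string and the controller's management IP.
        snmpwalk -v 2c -c public 192.0.2.10 .1.3.6.1 > d2700-walk.txt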

    Read the article

  • disk-to-disk backup without costly backup redundancy?

    - by AaronLS
    A good backup strategy involves a combination of: 1) disconnected backups/snapshots that will not be affected by bugs, viruses, and/or security breaches; 2) geographically distributed backups to protect against local disasters; 3) testing backups to ensure that they can be restored as needed.

    Generally I take an onsite backup daily and an offsite backup weekly, and do test restores periodically. In the rare circumstance that I need to restore files, I do so from the local backup. Should a catastrophic event destroy the servers and local backups, the offsite weekly tape backup would be used to restore the files. I don't need multiple offsite backups with redundancy: I ALREADY HAVE REDUNDANCY THROUGH THE USE OF BOTH LOCAL AND REMOTE BACKUPS. I keep recovery blocks and par files with the backups, so I already have protection against a small percentage of corrupt bits, and I perform test restores to ensure the backups function properly. Should the remote backups suffer a data loss, I can replace them with one of the local backups. There are historical offsite backups as well, so if a data loss went unnoticed for a few weeks (a bug, security breach or virus), the data could be restored from an older backup. The only scenario that poses a risk of complete data loss is one where the local backups, the remote backups and the servers all lose data in the same time period. I'm willing to risk that, since the odds of that trifecta are negligibly small and the data isn't THAT valuable to me.

    So I hope I have made clear that I don't need redundancy in my offsite backups, because I have covered all the bases. I know this exact technique is employed by numerous businesses. Of course some take multiple offsite backups because their data is so valuable that they won't risk even the trifecta disaster, but in the majority of cases the trifecta is an accepted risk. I HAD TO COVER ALL THIS BECAUSE SOME PEOPLE DON'T READ! I think I have justified my backup strategy, and the majority of businesses that use offsite tape backups have no additional redundancy beyond what is mentioned above (recovery blocks, par files, historical snapshots).

    Now I would like to eliminate tapes for the offsite backups and instead use a backup service. Most, however, charge an extremely high $/GB/month for storage. I don't mind paying for transfer bandwidth, but the cost of storage is way too high. All of them advertise that they maintain backups of the data, and I imagine they use RAID as well. Obviously if you were using them to host servers this would all be necessary, but in my scenario I am simply replacing my offsite tape backups with such a service, so there is no need for RAID and absolutely no value in another layer of backups of my backups.

    My one and only question: "Are there online data-storage/backup services that do not use redundancy or offer backups (backups of my backups) as part of their packages, and thus are more reasonably priced?" NOT my question: "Is this a flawed strategy?" I don't care whether you think this is a good strategy or not; I know it's pretty standard. Very few people make an extra copy of their offsite backups - they already have local backups they can use to replace the remote ones if something catastrophic happens at the remote site. Please limit your responses to the question posed.

    Sorry if I seem a little abrasive, but I had some trolls in my last post who read neither my requirements nor my question and tried to answer a totally different one. I made things pretty clear there, but didn't try to justify my strategy, because I didn't ask whether my strategy was justifiable. So I apologize for the length - it really shouldn't have been necessary, but too many people here sidetrack questions by responding without addressing the question at hand.

    Read the article
