Search Results

Search found 14435 results on 578 pages for 'disk usage'.


  • Programmatically determining file "size on disk" in advance

    - by porkchop
    I need to know how big a given in-memory buffer will be as an on-disk (USB stick) file before I write it. I know that unless the size falls on a block-size boundary, it's likely to get rounded up; e.g. a 1-byte file takes up 4096 bytes on disk. I'm currently doing this using GetDiskFreeSpace() to work out the disk block size, then using that to calculate the on-disk size like this:

        GetDiskFreeSpace(szDrive, &dwSectorsPerCluster, &dwBytesPerSector, NULL, NULL);
        dwBlockSize = dwSectorsPerCluster * dwBytesPerSector;
        if (dwInMemorySize % dwBlockSize != 0) {
            dwSizeOnDisk = ((dwInMemorySize / dwBlockSize) * dwBlockSize) + dwBlockSize;
        } else {
            dwSizeOnDisk = dwInMemorySize;
        }

    This seems to work fine, BUT GetDiskFreeSpace() only works on disks up to 2GB according to MSDN. GetDiskFreeSpaceEx() doesn't return the same information, so my question is: how else can I calculate this for drives over 2GB? Is there an API call I've missed? Can I assume some hard values depending on the overall disk size?
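
    For reference, a minimal sketch of the same rounding calculation in Python via ctypes; the drive letter and buffer size are illustrative assumptions, and as far as I can tell from the documentation the 2GB note concerns pre-OSR2 Windows 95, with the cluster geometry still reported correctly on larger volumes:

        import ctypes

        def size_on_disk(root, in_memory_size):
            """Round a buffer size up to the volume's cluster size."""
            sectors_per_cluster = ctypes.c_ulong()
            bytes_per_sector = ctypes.c_ulong()
            free_clusters = ctypes.c_ulong()
            total_clusters = ctypes.c_ulong()
            if not ctypes.windll.kernel32.GetDiskFreeSpaceW(
                    root,
                    ctypes.byref(sectors_per_cluster),
                    ctypes.byref(bytes_per_sector),
                    ctypes.byref(free_clusters),
                    ctypes.byref(total_clusters)):
                raise ctypes.WinError()
            cluster = sectors_per_cluster.value * bytes_per_sector.value
            # Round up to the next whole cluster; a 1-byte file occupies one cluster.
            return ((in_memory_size + cluster - 1) // cluster) * cluster

        print(size_on_disk(u"E:\\", 1))  # e.g. 4096 on a typical 4KB-cluster volume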

    Read the article

  • Ubuntu virtual memory caches suck up memory

    - by Tom
    Hey all, I've got an Ubuntu 9.10 64-bit server that seems to use up all available memory. According to my munin graphs, almost all of the memory used up is in the swap cache, cache, and slab cache. (I take this to mean virtual memory caches, am I right in assuming this?) Once memory usage approaches 100%, some (although not all) system services such as SSH become sluggish and unresponsive. After rebooting the system, performance and memory usage become normal for a time. Some interesting tidbits: The system runs Apache 2, MySQL, Munin, and sshd. The memory usage spikes happen at the same time every night (at 10 PM sharp.) There appears to be nothing in the crontab for any of the users, and nothing in /etc/cron.d/* out of the ordinary, let alone something that would occur at 10 PM. My question is, how do I figure out what is causing the memory suckage? I've tried the usual utilities (e.g. ps, top, etc) but I can't seem to find anything unusual. Any ideas? Thanks in advance!
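
    One way to check that assumption: the kernel counts reclaimable page cache and slab as "used", so subtracting them shows what processes actually hold. A minimal sketch reading /proc/meminfo (field availability varies slightly between kernels):

        def meminfo():
            """Parse /proc/meminfo into a dict of kB values."""
            info = {}
            with open('/proc/meminfo') as f:
                for line in f:
                    key, _, value = line.partition(':')
                    info[key] = int(value.split()[0])
            return info

        m = meminfo()
        reclaimable = m['Buffers'] + m['Cached'] + m.get('SReclaimable', 0)
        used = m['MemTotal'] - m['MemFree']
        print('used:          %8d kB' % used)
        print('net of caches: %8d kB' % (used - reclaimable))

    If the "net of caches" figure stays low while "used" climbs to 100%, the 10 PM spike is cache churn (something reading or writing many files, e.g. a backup job) rather than a leaking process.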

    Read the article

  • missing network usage through iptables

    - by Purres
    I inserted a rule into iptables to track the inbound traffic to a certain IP address. The VPS server's IP is 192.168.1.5 and the guest OS's IP is 192.168.1.115. I ran 'yum update' inside the guest OS to generate some network traffic, then ran iptables -vnL from the hypervisor. However, it only showed network usage to the host, not to the guest:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target    source       destination
           0     0           0.0.0.0/0    0.0.0.0/0    destination IP range 192.168.1.115-192.168.1.115
        1853  114K           0.0.0.0/0    0.0.0.0/0    destination IP range 192.168.1.5-192.168.1.5

    I ran tcpdump and the log showed that there are data packets going to the guest OS:

        16:17:43.932514 IP mirrordenver.fdcservers.net.http > 192.168.1.115.34471: Flags [.], seq 17694667:17696115, ack 1345, win 113, options [nop,nop,TS val 1060308643 ecr 1958781], length 1448
        16:17:43.932559 IP 192.168.1.115.34471 > mirrordenver.fdcservers.net.http: Flags [.], ack 17696115, win 15287, options [nop,nop,TS val 1958869 ecr 1060308643], length 0

    Why can't the guest OS's network usage be tracked? iptables -L returns the INPUT chain as follows:

        Chain INPUT (policy ACCEPT)
        target prot opt source     destination
               all  --  anywhere   anywhere     destination IP range 192.168.1.115-192.168.1.115
               all  --  anywhere   anywhere     destination IP range 192.168.1.5-192.168.1.5
               all  --  anywhere   anywhere

    Read the article

  • Windows 7 - "A disk read error occurred. Press Ctrl + Alt + Del to restart"

    - by Senthil
    Problem: When I switch on my PC, after the BIOS POST a cursor blinks for about 5 seconds and then I get this error message: "A disk read error occurred. Press Ctrl + Alt + Del to restart." I am able to go into the BIOS, but the Windows loader doesn't even start. This message is shown after my motherboard logo comes and goes.

    Symptoms: I DID notice my system freezing for minutes at a time over the past two days. Also, in the past two days, it stopped halfway through the Windows boot process, and I had to do a hard reset a couple of times to get it working. But since this morning, I only get this error message.

    Configuration: Windows 7 Ultimate 32-bit only; one physical disk, 80GB SATA; two partitions (C: and D:); NTFS; no drive encryption or compression turned on.

    After searching the net, I have found people mentioning these possible causes: the hard disk is physically failing, a corrupt MBR, or a bad sector. I am planning to buy a new hard disk, install Windows on it and continue, but I need data from the old hard disk. The data I want is on the D: drive, outside any Windows user folder, and is not encrypted, compressed or protected in any way. I think that if someone/something can get the disk working again and knows NTFS, the data can hopefully be read. What steps should I follow to recover files from the defective disk?

    Update: I bought a new disk, installed Windows on it and added the defective one as a slave. I was then able to read the data from the defective hard disk. Though chkdsk found lots of errors, the files I wanted were not affected and I got them back :) I am not using that hard disk anymore, though it seems to be working at the moment.

    Read the article

  • Out of space despite lots of free space remaining

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade, but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem, and the "Disk Usage Analyser" is opened. At the top it says: Total filesystem capacity: 47.0 GB (used: 13.5 GB, available: 33.4 GB)

        Folder -- Usage -- Size
        /      -- 100%  -- 12.5 GB
        usr    -- 44.8% -- 5.6 GB
        home   -- 30.3% -- 3.8 GB
        lib    -- 13.0% -- 1.6 GB
        var    -- 9.1%  -- 1.1 GB
        boot   -- 2.5%  -- 309.5 MB

    plus a lot of small contributors: etc, opt, sbin, bin, etc. I do not really understand this problem, since the analyser says that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives:

        Filesystem   Inodes   IUsed    IFree  IUse%  Mounted on
        /dev/sda7    610800  576874    33926    95%  /
        udev         213451     563   212888     1%  /dev
        tmpfs        218524     486   218038     1%  /run
        none         218524       3   218521     1%  /run/lock
        none         218524       7   218517     1%  /run/shm
        /dev/sda8   2264752   16371  2248381     1%  /home

    What does this mean?
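
    The df -i output is the key detail: /dev/sda7 is at 95% inode usage, and a filesystem reports "out of space" once it runs out of inodes even when free blocks remain. A quick, hedged sketch to find where the inodes are going (run as root; it simply counts directory entries per top-level directory):

        import os

        def inode_count(root):
            """Count directory entries beneath root; each one consumes an inode."""
            total = 0
            for _dirpath, dirnames, filenames in os.walk(root):
                total += len(dirnames) + len(filenames)
            return total

        for entry in sorted(os.listdir('/')):
            path = os.path.join('/', entry)
            if not os.path.isdir(path) or os.path.ismount(path):
                continue  # skip plain files and other filesystems (/home, /proc, ...)
            print('%8d %s' % (inode_count(path), path))

    Directories holding hundreds of thousands of small files (old kernel header trees under /usr/src, mail queues, session files under /var) are the usual suspects.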

    Read the article

  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003, and is going to be migrated onto new hardware running SBS 2008. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of activity is a 6-disk 7200RPM RAID 6, and it struggles to keep up during high-traffic times (idle time often only 10-20%; response times peaking at 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at a 70/30 read-to-write ratio). I'm considering a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters for my calculations above, I'm getting an average of 583 random IOPS. Granted, SBS 2008 is not the same beast as 2003, but I'd like to make the assumption that it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk and there's no ISA server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read so much about recommended disk configurations for various products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O overhead, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, well-performing array. So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (given similar technology, not going from DAS to SAN or anything)
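
    For reference, a minimal sketch of the conventional back-of-the-envelope formula behind such calculations, with RAID write penalties applied; the per-disk IOPS figures, the 10-disk count and the penalty values below are generic rules of thumb, not measurements from this system:

        def effective_iops(disks, iops_per_disk, read_fraction, write_penalty):
            """Host-visible random IOPS once the RAID write penalty is paid."""
            write_fraction = 1.0 - read_fraction
            return disks * iops_per_disk / (read_fraction + write_fraction * write_penalty)

        # Existing array: 6 x 7200RPM (assume ~75 IOPS each), RAID 6 penalty 6, 70/30 workload
        print(effective_iops(6, 75, 0.70, 6))    # ~180
        # Proposed: RAID 10 penalty 2, assuming 10 x 10K disks at ~125 IOPS each
        print(effective_iops(10, 125, 0.70, 2))  # ~960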

    Read the article

  • HyperV - low CPU usage

    - by Klark
    I am very new to Hyper-V and virtual machine philosophy in general, so please expect more or less nooby questions :) I have a server that is only used as a host for virtual machines. The OS is Windows Server 2008 R2 and it is running on 16 CPUs and 48 GB of RAM. On the aforementioned server there are 8 VMs, each having 4 CPUs and 4 GB of RAM. On those VMs we are running some CPU-intensive tasks, and each machine shows nearly 100% CPU usage. After I noticed slow performance I went to the host machine and started playing with Process Explorer. It turned out that CPU usage there is very low, I/O is very low, and of course memory consumption is high, which is expected. Of course, I don't expect those 4 virtual cores dedicated to a VM to work as fast as 4 real hardware cores, but I still expected higher consumption of the real hardware. Is this sort of behaviour normal? I see that most of the CPU usage on the host machine is marked as interrupts (which I guess is normal), and all those interrupts are handled by only one core (which is strange). Are there any out-of-the-box optimizations I could perform to finally use all that processing power that is under the hood? My knowledge of virtualization technology is near to embarrassing, so I would be grateful for any links that could enlighten me :) Thanks.

    Read the article

  • ASP.NET Session State SQL Server 2008 R2 Freezes with High CPU Usage

    - by jtseng
    Our ASP.NET website uses SQL Server as the session state provider. We currently host the database on SQL Server 2005, since it does not play well on 2008 R2. We would like to know why, and how to fix it.

    Hardware setup: Our current session state server runs SQL Server 2005 with the files hosted on a single local disk. It is one of our oldest servers; it has served us well and we never felt the need to upgrade it. The database is about 2 GB, holding 6000 sessions. (The sessions are a little big, but we need it.) We have another server with SQL Server 2008 R2, a much faster CPU, much more RAM, and a much faster hard disk.

    Situation: One day, we have a huge surge in traffic. The transaction log growth on SQL Server freezes the server for tens of seconds, allowing only a few requests through in minutes. So we load up the new server with ASPState with very large data and log files and point all of our applications to the new server. It chugs along fine for about 5 minutes, and then CPU usage jumps up to 50% of the 16 cores that Standard Edition can use and freezes for tens of seconds at a time. The files do not record any autogrowth events, the disk queue is nice and low, and RAM usage is low. CPU usage on our old server has never been higher than 5%. What happened on the new server? Alternatively, I would like to hear success stories with ASP.NET session state running on SQL Server 2008 R2 with an average write load of 30MB/sec and bursts up to 200MB/sec.

    Read the article

  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
    We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, so we cannot block using the actual client IP address (it would be the IP address of the CDN, not the actual client). Instead, we have $http_x_forwarded_for, which contains the IP we need to block for a given request. Similarly, we cannot use iptables, as blocking the IP address of the proxied client would have no effect. We need to use nginx to block the request based on the value of $http_x_forwarded_for. Initially, we tried multiple simple if statements: http://pastie.org/5110910 However, this caused our nginx memory usage to jump considerably: we went from somewhere around a 40MB resident size to over a 200MB resident size. If we changed things up and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923 Keep in mind that we're trying to block many more than 3 or 4 IP addresses... more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and also in whether there are any better ways to achieve our goal.
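
    One way to keep a list that size manageable without per-request if chains is to generate a single geo block from a plain-text blocklist and include it once at the http level; a hedged sketch, where the file names and the $blocked variable are illustrative assumptions rather than your actual config:

        # Emit an nginx 'geo' include keyed on X-Forwarded-For from a blocklist file.
        def write_geo_include(blocklist_path, output_path):
            with open(blocklist_path) as f:
                ips = [l.strip() for l in f if l.strip() and not l.startswith('#')]
            with open(output_path, 'w') as out:
                out.write('geo $http_x_forwarded_for $blocked {\n    default 0;\n')
                for ip in ips:
                    out.write('    %s 1;\n' % ip)
                out.write('}\n')

        write_geo_include('blocklist.txt', '/etc/nginx/conf.d/blocklist.conf')

    Each server block then needs only a single check (if ($blocked) { return 403; }), and the geo module matches addresses against a prefix tree rather than evaluating a chain of regexes per request.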

    Read the article

  • linux: per-process monitor, every 10 minutes, with history access

    - by Inbar Rose
    I really didn't know a better way to ask my question, hence you get a horribly named question. I will explain what I want to do; maybe that will help you help me. I would like to have my Linux machine continuously monitor (every 10 minutes) all the processes on my machine. The information from each process that I require is the name, CPU usage, allocated (virtual) memory, and resident (RAM) memory. If these periodic reports were to be looked at, they would look something like this:

        PROCESS   CPU   RAM   VIRTUAL
        name1     %     MB    MB
        name2     %     MB    MB
        ...etc...

    These reports should be stored in such a way that I can access them at a later date by giving a date/time scope (range). For instance, if I want to see the history of my processes from 12:00:00 1.12.12 till 12:00:00 2.12.12, I can - and it should give me the history of the processes for every 10 minutes between those date/time borders. The format of the return is not important; that will be handled by a script anyway and can be modified into anything I need. I have looked into a few things so far but have not found something that clearly meets my needs. Among the things I searched: sar, free(1), top(1), and a few other things. It should be a simple issue; I can already see all this information by simply looking at my htop, but I need a tool that will gather the desired fields for me for each process every 10 minutes, and then let me extract slices of that data based on date/time scopes (ranges).

    note: I have limited experience with Linux, so please give detailed information.

    note2: The desired output will be something like this (after receiving the desired range):

        CPU USAGE BY PROCESS:
        proc_nameA 1,2,2,2,2,2...   (numbers represent % usage every 10 minutes)
        proc_nameB 4,3,3,6,1,2...

    The same idea applies for the other information.
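
    A minimal sketch of such a sampler, parsing ps output into SQLite so date/time ranges can be queried later; the database path, table and column names are made up for illustration:

        import sqlite3, subprocess, time

        DB_PATH = '/var/local/procmon.db'  # illustrative location

        def sample(conn):
            """Record one snapshot of every process: name, %CPU, RSS and VSZ in MB."""
            conn.execute('CREATE TABLE IF NOT EXISTS samples '
                         '(ts INTEGER, name TEXT, cpu REAL, rss_mb REAL, vsz_mb REAL)')
            out = subprocess.check_output(['ps', '-eo', 'comm,pcpu,rss,vsz', '--no-headers'])
            now = int(time.time())
            for line in out.decode().splitlines():
                name, cpu, rss, vsz = line.rsplit(None, 3)
                conn.execute('INSERT INTO samples VALUES (?,?,?,?,?)',
                             (now, name, float(cpu), int(rss) / 1024.0, int(vsz) / 1024.0))
            conn.commit()

        def history(conn, name, start_ts, end_ts):
            """Rows of (timestamp, cpu, rss_mb, vsz_mb) for one process in a range."""
            return conn.execute('SELECT ts, cpu, rss_mb, vsz_mb FROM samples '
                                'WHERE name = ? AND ts BETWEEN ? AND ? ORDER BY ts',
                                (name, start_ts, end_ts)).fetchall()

        if __name__ == '__main__':
            sample(sqlite3.connect(DB_PATH))

    A crontab entry like */10 * * * * python /usr/local/bin/procmon.py (path illustrative) gives the 10-minute cadence. One caveat: ps reports %CPU averaged over each process's lifetime, so sampling /proc/<pid>/stat deltas would be more accurate if that matters.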

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that hard disk drive storage is significantly cheaper per GiByte than solid state devices – this is wholly inaccurate within the database space. People need to look at the cost of the complete solution, and not just a single component part in isolation, to see what is really required to meet the business requirement.

    Buying a single Hitachi Ultrastar 600GB 3.5" SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012). I've not included the FusionIO ioDrive because there is no public pricing available for it – something I never understand; personally, when companies do this I immediately wonder what they are hiding. Luckily, in FusionIO's case the product is proven, though it is expensive compared to OCZ's enterprise offerings.

    On the face of it, the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what salespeople will use in comparing the two technologies – do not be fooled by this bullshit, people!

    What is the requirement? The database will have a static size of 400GB, kept static through archiving so growth and trim balance out. The client requires resilience. Several hundred call-centre staff will query the database; queries read a small amount of data, but there is no hot spot, so the randomness spreads across the entire 400GB. Estimates predict approximately 4,000 IOps at peak times, and because it is a call-centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined, and we have three of the most important pieces of the puzzle: space required, estimated IOps, and maximum latency per IO.

    Something to consider with regard to SQL Server: write activity requires synchronous IO to the storage media, specifically for the transaction log; that means the write thread will wait until the IO is completed and hardened off before it can continue execution. The requirement states that 30% of the system activity will be writes, so we can expect a high amount of synchronous activity.

    The hardware solution now needs to be defined. There are two possible solutions, hard disk based or solid state based, and the real question is how many hard disks are required to achieve the IO throughput, the latency and the resilience; ditto for the solid state.

    Hard Drive solution

    In a test on an HP DL380 with a P410i controller, using IOMeter against a single 15Krpm 146GB SAS drive with a transfer size of 8KiB against a 40GiB file on a freshly formatted disk, where the partition is the only partition on the disk (so the 40GiB file sits on the outer edge of the drive and more sectors can be read before head movement is required): for 100% sequential IO at a queue depth of 16 with 8 worker threads, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO at the same queue depth and worker threads, 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk with a 130GiB test file: for 100% sequential IO, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO, 528 IOps at an average latency of 217.49ms (4 MiB/s).

    From the result it is clear that random performance gets worse as the disk fills up – I'm currently writing an article on short stroking which will cover this in detail. Given the workload is random in nature, looking at the random performance of the single drive when only 40GiB of the 146GB is used gives near the IOps required, but the latency is way out. Luckily, I have tested 6 x 146GB SAS 15Krpm drives in a RAID 0 using the same test methodology. For the same 130GiB test, the performance boost is near linear for each drive added: throughput goes up by 5 MiB/sec, IOps by 700, and latency reduces by nearly 50% per drive (172ms, 94ms, 65ms, 47ms, 37ms, 30ms). This is because the same 130GiB is spread out more as you add drives (130/1, 130/2, 130/3 etc.), so implicit short stroking occurs: there is less file on each drive, so less head movement is required. The best latency is still 30ms, but we now have the IOps required – on a 130GiB file, though, not the 400GiB we need. A reality check here: the drive randomness is more likely to be 50/50 and not a full 100%, but the above has highlighted the effect randomness has on the drive, and the more a drive fills with data the worse the effect.

    For argument's sake, let us assume that for the given workload we need 8 disks to do the job; for resilience reasons we will need 16, because we need to RAID 1+0 them in order to get the throughput and the resilience (RAID 5 would degrade performance). Cost for hard drives: 16 x £239.60 = £3,833.60. For the hard drives we will need disk controllers and a separate external disk array, because the likelihood is that the server itself won't take the drives. A quick spec from DELL for a PowerVault MD1220, which gives dual pathing with 16 x 146GB 15Krpm 2.5" disks, is priced at £7,438.00 – and it is probably more once we add the two controller cards to sit in the server, racking etc. The minimum cost taking the DELL quote as an example is therefore {cost of hardware} / {storage required}: £7,438.00 / 400 = £18.59 per GB. £18.59 per GB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146GB disks in RAID 10 (therefore 8 usable), giving an effective usable storage capacity of 1168GB, but the actual storage requirement is only 400GB, and the extra disks have had to be purchased to get the IOps up.

    Solid State Drive solution

    A single card significantly exceeds the IOps and latency required; for resilience, two will be required. (£2,316.54 * 2) / 400 = £11.58 per GB. With the SSD solution only two PCIe sockets are required: no external disk units, no additional controllers, no redundant controllers etc.

    Conclusion

    I hope that by showing you an example, the myth that hard disk drives are cheaper per GiB than solid state has now been dispelled – £11.58 per GB for SSD compared to £18.59 for hard disk. I've not even touched on the running costs; compare the cost of running 18 hard disks – that's a lot of heat and power compared to two PCIe cards! Just a quick note: I've left a fair amount of information out due to this being a blog post – if in doubt, email me :) I'll also deal with the myth that SSDs wear out at a later date; that one is still way overdone – yes, five years ago, but not now.
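
    The arithmetic above generalises to any quote; a throwaway sketch of the complete-solution comparison, using the prices and the 400GB requirement from this post:

        def cost_per_gb(total_hardware_cost, required_gb):
            """Complete-solution cost divided by the storage the business actually needs."""
            return total_hardware_cost / float(required_gb)

        print('HDD: %.2f GBP per GB' % cost_per_gb(7438.00, 400))      # about 18.6
        print('SSD: %.2f GBP per GB' % cost_per_gb(2316.54 * 2, 400))  # about 11.58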

    Read the article

  • Backup VM: "Copy virtual disk xxx.vmdk - The virtual disk is either corrupted or not a supported format"

    - by boiteavinc
    Hi, I'm French. I'm a student and I use ESXi 4 for my studies. When I try to back up my VMs with ghettoVCBg2.pl and the vSphere Management Assistant, I get the following error message in the vSphere Client: "Copy virtual disk xxx.vmdk. The virtual disk is either corrupted or not a supported format". There is no error in the VCB log:

        03-23-2010 08:31:56 -- debug: Main: Login by vi-fastpass to: esxi
        03-23-2010 08:31:56 -- debug: copyTask: Task START
        03-23-2010 08:31:56 -- debug: copyTask: waiting for next job and sleep ...
        03-23-2010 08:32:00 -- info: Initiate backup for AD_DNS_DHCP found on esxi
        03-23-2010 08:32:09 -- debug: AD_DNS_DHCP original powerState: poweredOn
        03-23-2010 08:32:09 -- debug: Creating Snapshot "ghettoVCBg2-snapshot-2010-03-23" for AD_DNS_DHCP
        03-23-2010 08:33:19 -- info: AD_DNS_DHCP has 1 VMDK(s)
        03-23-2010 08:33:19 -- debug: backupVMDK: Backing up "Raptor1 AD_DNS_DHCP/AD_DNS_DHCP.vmdk" to "Backup_VM VM/AD_DNS_DHCP/AD_DNS_DHCP-2010-03-23/$
        03-23-2010 08:33:19 -- debug: backupVMDK: Signal copyThread to start
        03-23-2010 08:33:19 -- debug: backupVMDK: Backup progress: Elapsed time 0 min
        03-23-2010 08:33:19 -- debug: copyTask: Wake up and follow the white rabbit, with status: doCopy
        03-23-2010 08:33:19 -- debug: CopyThread: Start backing up VMDK(s) ...
        03-23-2010 08:33:25 -- debug: copyTask: send copySuccess message ...
        03-23-2010 08:33:25 -- debug: copyTask: waiting for next job and sleep ...
        03-23-2010 08:34:20 -- debug: backupVMDK: Successfully completed backup for Raptor1 AD_DNS_DHCP/AD_DNS_DHCP.vmdk Elapsed time: 1 min
        03-23-2010 08:34:22 -- debug: Removing Snapshot "ghettoVCBg2-snapshot-2010-03-23" for AD_DNS_DHCP
        03-23-2010 08:34:24 -- debug: checkVMBackupRotation: Starting ...
        03-23-2010 08:34:26 -- debug: Purging Backup_VM VM/AD_DNS_DHCP/AD_DNS_DHCP-2010-03-23--1 due to rotation max
        03-23-2010 08:34:28 -- info: Backup completed for AD_DNS_DHCP!
        03-23-2010 08:34:28 -- debug: Main: Disconnect from: esxi
        03-23-2010 08:34:28 -- debug: Main: Calling final clean up
        03-23-2010 08:34:28 -- debug: cleanUP: Thread clean up starting ...
        03-23-2010 08:34:28 -- debug: cleanUp: Send exit to copyThread
        03-23-2010 08:34:28 -- debug: copyTask: Wake up and follow the white rabbit, with status: exit
        03-23-2010 08:34:28 -- debug: copyTask: die ...
        03-23-2010 08:34:28 -- debug: cleanUp: Join passed
        03-23-2010 08:34:28 -- info: ============================== ghettoVCBg2 LOG END ==============================

    My ghettoVCB conf file is:

        VM_BACKUP_DATASTORE = "Backup_VM"
        VM_BACKUP_DIRECTORY = "VM"
        VM_BACKUP_ROTATION_COUNT = "3"
        DISK_BACKUP_FORMAT = "thin"
        ADAPTER_FORMAT = "lsilogic"
        POWER_VM_DOWN_BEFORE_BACKUP = "0"
        VM_SNAPSHOT_MEMORY = "1"
        VM_SNAPSHOT_QUIESCE = "1"
        LOG_LEVEL = "info"
        VM_VMDK_FILES = "all"

    I tried several DISK_BACKUP_FORMAT settings and got the same error. Despite the error, I still get files on the NFS share. But when I try to open the vmx file with VMware Workstation, I get this error: "Cannot open the disk 'G:\Backup ESX\VM\AD_DNS_DHCP\AD_DNS_DHCP-2010-03-23--1\AD_DNS_DHCP.vmdk' or one of the snapshot disks it depends on. Reason: The called function cannot be performed on partial chains. Please open the parent virtual disk." I have no snapshots on my VM on ESXi. Can you help me?

    Read the article

  • Free Mac whole disk encryption

    - by stevekuo
    Are there any free solutions for encrypting the entire boot disk on a Mac? I'm aware of PGP's Whole Disk Encryption, but it's a bit steep at $149. TrueCrypt's System Encryption seems to only work with Windows.

    Read the article

  • Disk drives disappear in Windows Explorer

    - by Stan
    OS: WinXP SP3. When I press Win+E or open Windows Explorer, there are usually several disk drives under My Computer, but somehow they have just disappeared. If I type c:\ in the address bar, the disk re-appears in the tree, though. How do I get them back? Thanks.

    Read the article

  • Disk usage treemap software for headless Linux

    - by CyberShadow
    There are some programs which can display used disk space using a treemap, such as WinDirStat for Windows and KDirStat for KDE/Linux. I'm looking for something similar, but for a headless Linux box. Alternatively, what are other good ways to visualize used disk space with just SSH access?
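
    Short of installing something, a rough stand-in is a script that prints per-directory totals sorted by size; a minimal sketch (a flat summary rather than a real treemap, and it sums apparent file sizes rather than on-disk usage):

        import os, sys

        def tree_size(path):
            """Sum file sizes beneath path, ignoring unreadable entries."""
            total = 0
            for dirpath, _dirnames, filenames in os.walk(path, onerror=lambda e: None):
                for name in filenames:
                    try:
                        total += os.lstat(os.path.join(dirpath, name)).st_size
                    except OSError:
                        pass  # file vanished or permission denied
            return total

        root = sys.argv[1] if len(sys.argv) > 1 else '.'
        dirs = [os.path.join(root, d) for d in os.listdir(root)
                if os.path.isdir(os.path.join(root, d))]
        for size, path in sorted(((tree_size(d), d) for d in dirs), reverse=True):
            print('%10.1f MiB  %s' % (size / 1048576.0, path))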

    Read the article

  • Using SSD as disk cache

    - by casualcoder
    Is there software for Linux to use an SSD as a disk cache? I believe Sun does something like this with ZFS, though I'm not sure. A quick search turns up nothing suitable. The goal would be to put frequently requested files on the SSD on the fly. Since an SSD has more capacity than RAM for less money, and better performance than a hard disk, this should provide an efficient performance boost.

    Read the article

  • Installing Windows 7 upgrade version on a clean disk

    - by BobMarley
    Is it possible to install the much cheaper Windows 7 upgrade version on a clean disk? What information will I need? 1) Will the Windows 7 installer ask me for my XP license key? or 2) Will the Windows 7 installer only run if it can detect an existing XP installation? Furthermore, what will happen if my disk crashes and I need to reinstall in the future? Will I need my XP license key again?

    Read the article

  • Hard disk number in boot.ini?

    - by MA1
    Hi all, I need help clearing up a confusion. Suppose we have two hard disks. Referring to boot.ini, in the case of the MULTI and SCSI syntax, which parameter exactly tells us the hard disk number? Regards,

    Read the article

  • Hard disk with trustworthy SMART support

    - by Paggas
    Which hard disk drive do you suggest with trustworthy SMART diagnostics? That is, a hard disk that can truthfully report sector reallocations and other pre-failure indicators. I'm asking this because I have seen quite a few hard disks with SMART support fail with no warning in the SMART diagnostics, so a hard drive that can report such problems with some degree of reliability would be much appreciated :)

    Read the article

  • How to measure disk performance under Windows?

    - by Alphager
    I'm trying to find out why my application is very slow on a certain machine (it runs fine everywhere else). I think I have traced the performance problems to hard disk reads and writes, and I suspect it is simply a very slow disk. What tool could I use to measure hard disk read and write performance under Windows 2003 in a non-destructive way (the partitions on the drives have to remain intact)?

    Read the article

  • When to use a RAM-Disk?

    - by Ice
    Hi, I know that RAM disks are fast, faster than any disk, but they lose their contents on a shutdown of the operating system, and their capacity is limited to the available RAM. Is there a useful application for one on a new 64-bit Windows 2008 server? Peace, Ice

    Read the article

  • RunDLL error Hard Disk Drive

    - by stanleyenriquez
    I can't open my hard disk drive. The error box says: "RunDLL: There was a problem starting ~$WCFLPM.FAT32. The specified module could not be found." My HDD had a virus before; the hard disk now shows a shortcut, and the shortcut contains all the files. I don't want to format it just yet because it contains quite a few files. I tried troubleshooting it, but none of the solutions on the internet helped.

    Read the article

  • mac external-hard-disk "software update"

    - by Pietro
    When I run a software update, the files are downloaded to my MacBook's internal hard disk. How can I set a different hard disk as the default? I suppose the files related to the software update are compressed packages that have to be saved, opened and decompressed. I would like to use the internal HD just to update Mac OS, without storing any temporary files on it. Thank you! Pietro (MacBook Pro 2009, 256 GB SSD, Mac OS X 10.6.4)

    Read the article

  • external disk suddenly unmounting

    - by hasen j
    Platform: Ubuntu 9.10 Disk Brand/model: WD My Book The external hard disk suddenly unmounts after a while. I suspect it's due to it "sleeping" to save power. I don't recall the problem having occurred before the upgrade to Karmic. How can this be fixed?

    Read the article
