Search Results

Search found 7637 results on 306 pages for 'cd drives'.


  • Compiling Lua, getting a makefile CreateProcess error

    - by Iggyhopper
    I'm trying to compile Lua 1.1. Why? Because I can. Here's the makefile:

        all:
            (cd src; make)
            (cd clients/lib; make)
            (cd clients/lua; make)
        clean:
            (cd src; make clean)
            (cd clients/lib; make clean)
            (cd clients/lua; make clean)

    Here's the error I get just from running make all:

        (cd src; make)
        process_begin: CreateProcess((null), (cd src; make), ...) failed.
        make (e=2): The system cannot find the file specified.
        make: *** [all] Error 2

    Why do I get this error? I'm on WinXP-32.
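
    A plausible cause, judging only from the error text: this Windows port of make launches recipe lines with CreateProcess instead of a POSIX shell, so subshell syntax like (cd src; make) cannot be parsed. A hedged sketch of a rewrite that avoids the shell entirely by letting make change directory itself (recipe lines must be tab-indented in a real makefile):

        # Sketch, untested against Lua 1.1's tree: use make's own -C flag
        # so no shell parentheses or semicolons are needed.
        all:
            $(MAKE) -C src
            $(MAKE) -C clients/lib
            $(MAKE) -C clients/lua

        clean:
            $(MAKE) -C src clean
            $(MAKE) -C clients/lib clean
            $(MAKE) -C clients/lua clean

    Alternatively, running make from an MSYS or Cygwin shell gives it a real sh to hand those commands to.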

    Read the article

  • How to call a function in C# whose return type is an array

    - by Manoj Wadhwani
        public CD[] GetCDCatalog()
        {
            XDocument docXML = XDocument.Load(Server.MapPath("mydata.xml"));
            var CDs = from cd in docXML.Descendants("Table")
                      select new CD
                      {
                          title = cd.Element("title").Value,
                          star = cd.Element("star").Value,
                          endTime = cd.Element("endTime").Value,
                      };
            return CDs.ToArray<CD>();
        }

    I am calling this function on page load, i.e. string[] arr = GetCDCatalog(); but this gives the error "Cannot implicitly convert type 'Calender.CD[]' to 'string[]'". Please suggest how I can call a function whose return type is an array on page load.
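
    The error message contains the answer: GetCDCatalog() returns CD[], not string[], so the receiving variable has to match. A minimal sketch (CD and its properties are the poster's own types):

        // Assign the result to the actual return type, then use the
        // CD objects' properties directly.
        protected void Page_Load(object sender, EventArgs e)
        {
            CD[] catalog = GetCDCatalog();
            foreach (CD cd in catalog)
            {
                Response.Write(cd.title + "<br/>");
            }
        }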

    Read the article

  • How can I boot a vm on Hyper-V 2012 when it has a virtual hard-drive missing?

    - by Zone12
    We have a Hyper-V 2012 server with 8 VMs on it. We have attached extra virtual hard drives to each of the machines to store backups on. These drives are stored on a NAS. After a power failure, we tried to boot the VMs and found that they couldn't be booted without the attached backup drives. We couldn't boot the NAS at that point, so we had to remove all the extra drives manually, boot the VMs, and re-attach the drives at a later date when we got the NAS back up and running. These backup drives are non-essential to the running of the system. I would like to know if there is a way to boot a VM on Hyper-V 2012 with some of its (SCSI) hard drives missing, so that we can recover automatically from a power failure.
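
    One approach worth testing, sketched with the Hyper-V PowerShell module that ships with Server 2012 (the VM name, controller number and location here are hypothetical): script the detach before boot and the re-attach once the NAS is reachable.

        # Detach the backup VHD so the VM can start without it
        Remove-VMHardDiskDrive -VMName "AppServer01" -ControllerType SCSI `
            -ControllerNumber 0 -ControllerLocation 1
        Start-VM -Name "AppServer01"

        # Later, when the NAS is back online, re-attach the disk
        Add-VMHardDiskDrive -VMName "AppServer01" -ControllerType SCSI `
            -ControllerNumber 0 -ControllerLocation 1 `
            -Path "\\nas\backups\AppServer01-backup.vhdx"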

    Read the article

  • Display attribute from XML using PHP

    - by user560411
    Hello. I need to display the id attribute of each CD from the following XML file. I display everything correctly except the id. Any help would be appreciated.

    Display code:

        <?php
        $doc = new DOMDocument();
        $doc->load( 'insert.xml' );
        $CATEGORIES = $doc->getElementsByTagName( "CD" );
        foreach( $CATEGORIES as $CD ) {
            $TITLES = $CD->getElementsByTagName( "TITLE" );
            $TITLE = $TITLES->item(0)->nodeValue;
            $BANDS = $CD->getElementsByTagName( "BAND" );
            $BAND = $BANDS->item(0)->nodeValue;
            $YEARS = $CD->getElementsByTagName( "YEAR" );
            $YEAR = $YEARS->item(0)->nodeValue;
            echo "<b>$TITLE - $BAND - $YEAR\n</b><br>";
        }
        ?>

    XML:

        <?xml version="1.0" encoding="utf-8"?>
        <MY_CD>
          <CATEGORIES>
            <CD id="3231">
              <TITLE>NEVER MIND THE BOLLOCKS</TITLE>
              <BAND>SEX PISTOLS</BAND>
              <YEAR>1977</YEAR>
            </CD>
            <CD id="2453">
              <TITLE>NEVERMIND</TITLE>
              <BAND>NIRVANA</BAND>
              <YEAR>1991</YEAR>
            </CD>
          </CATEGORIES>
        </MY_CD>
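
    The id is an attribute of the CD element rather than a child node, so nodeValue lookups will never find it; DOMElement's getAttribute() should. A minimal sketch of the change:

        // Inside the existing foreach: read the attribute off the element.
        $ID = $CD->getAttribute('id');   // "3231", then "2453"
        echo "<b>$ID: $TITLE - $BAND - $YEAR</b><br>";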

    Read the article

  • How to Combine Rescue Disks to Create the Ultimate Windows Repair Disk

    - by The Geek
    We've covered loads of different anti-virus, Linux, and other boot disks that help you repair or recover your system, but why limit yourself to just one? Here's how to combine your favorite repair disks to create the ultimate repair toolkit for broken Windows systems, all on a single flash drive. The ones we've covered already? Here's a quick list of all the ways you can recover your system with a rescue disk:

        How to Use the Avira Rescue CD to Clean Your Infected PC
        How to Use the BitDefender Rescue CD to Clean Your Infected PC
        How to Use the Kaspersky Rescue Disk to Clean Your Infected PC
        Change or Reset Windows Password from a Ubuntu Live CD
        The 10 Cleverest Ways to Use Linux to Fix Your Windows PC
        Change Your Forgotten Windows Password with the Linux System Rescue CD
        Use Ubuntu Live CD to Backup Files from Your Dead Windows Computer

    If you need to clean up an infected system, we'd absolutely recommend the BitDefender CD, since it's auto-updating. Best bet? Create your ultimate boot disk with as many of the different utilities as your flash drive can hold.

    Read the article

  • What free space thresholds/limits are advisable for 640 GB and 2 TB hard disk drives with ZEVO ZFS on OS X?

    - by Graham Perrin
    Assuming that free space advice for ZEVO will not differ from advice for other modern implementations of ZFS …

    Question: What percentages or amounts of free space are advisable for hard disk drives of the following sizes? 640 GB; 2 TB.

    Thoughts: A standard answer for modern implementations of ZFS might be "no more than 96 percent full". However, if I apply that to (say) a single-disk 640 GB dataset where some of the most commonly used files (by VirtualBox) are larger than 15 GB each, then I guess that blocks for those files will become suboptimally spread across the platters with around 26 GB free. I read that in most cases, fragmentation and defragmentation should not be a concern with ZFS. Still, I like the mental picture of most fragments of a large .vdi being in reasonably close proximity to each other. (Do features of ZFS make that wish for proximity too old-fashioned?) Side note: there might arise the question of how to optimise performance after a threshold is 'broken'. If it arises, I'll keep it separate.

    Background: On a 640 GB StoreJet Transcend (product ID 0x2329), I probably went beyond an advisable threshold in the past. Currently the largest file is around 17 GB (per a disk-usage screenshot in the original post, not reproduced here), and I doubt that any .vdi or other file on this disk will grow beyond 40 GB. Without HFS Plus, the thresholds of twenty, ten and five percent that I associate with the Mobile Time Machine file system need not apply. I currently use ZEVO Community Edition 1.1.1 with Mountain Lion, OS X 10.8.2, but I'd like answers to be not too version-specific.

    References, in chronological order:

        ZFS Block Allocation (Jeff Bonwick's Blog) (2006-11-04)
        Space Maps (Jeff Bonwick's Blog) (2007-09-13)
        Doubling Exchange Performance (Bizarre ! Vous avez dit Bizarre ?) (2010-03-11), which explains: "… So to solve this problem, what went into the 2010/Q1 software release is multifold. The most important thing is: we increased the threshold at which we switch from 'first fit' (go fast) to 'best fit' (pack tight) from 70% full to 96% full. With TB drives, each slab is at least 5 GB and 4% is still 200 MB, plenty of space and no need to do anything radical before that. This gave us the biggest bang. Second, instead of trying to reuse the same primary slabs until they failed an allocation, we decided to stop giving the primary slab this preferential treatment as soon as the biggest allocation that could be satisfied by a slab was down to 128K (metaslab_df_alloc_threshold). At that point we were ready to switch to another slab that had more free space. We also decided to reduce the SMO bonus. Before, a slab that was 50% empty was preferred over slabs that had never been used. In order to foster more write aggregation, we reduced the threshold to 33% empty. This means that a random write workload now spreads to more slabs, where each one will have a larger amount of free space, leading to more write aggregation. Finally, we also saw that slab loading was contributing to lower performance, and implemented a slab prefetch mechanism to reduce the downtime associated with that operation. The conjunction of all these changes led to 50% improved OLTP and 70% reduced variability from run to run …"
        OLTP Improvements in Sun Storage 7000 2010.Q1 (Performance Profiles) (2010-03-11)
        Alasdair on Everything » ZFS runs really slowly when free disk usage goes above 80% (2010-07-18), where commentary includes: "… OpenSolaris has changed this in onnv revision 11146 …"
        [CFT] Improved ZFS metaslab code (faster write speed) (2010-08-22)
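
    For watching a pool against whichever threshold is chosen, one small sketch (the pool name "tank" and the 80% trigger are assumptions; the cap column is standard zpool(8) output):

        # Show capacity at a glance
        zpool list -o name,size,alloc,free,cap

        # Or script a warning at a chosen threshold
        CAP=$(zpool list -H -o cap tank | tr -d '%')
        [ "$CAP" -gt 80 ] && echo "tank is above 80% full; expect the allocator to work harder"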

    Read the article

  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on an upgrade to Exchange 2007 and I wanted to get some advice on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD sites, housed on a single server. We need to move to a redundant, fault-tolerant system to support our users. I'm planning on installing 2 Dell 1950 servers with W2k8-std to act as CAS and Hub servers, with NLB to allow abstraction of the actual server name to the users. There won't be an Edge system, since we already have a Barracuda box that will handle in/out spam/virus filtering. On the back end, I'm planning on 2 mailbox servers, which will be Dell 2950s with 16GB RAM, 2 dual-core or quad-core CPUs, and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Ent clustering and running CCR in Exchange. My questions are as follows:

    Is 16GB enough RAM for serving that many mailboxes along with the Windows clustering and CCR?

    I'm trying to figure out disk layouts, and I'm unsure whether to use all local disk, or some local and some SAN via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM.

        Option 1: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 1 - mail stores.
        Option 2: 2 drives, RAID 1 - OS and logs; 4 drives, RAID 5 - mail stores and scratch space for eseutil.
        Option 3: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 0 - scratch space; ~300GB iSCSI volume for mail stores.
        Option 4: 2 drives, RAID 1 - OS; 4 drives, RAID 5 - scratch space; ~300GB iSCSI volume for mail stores; ~300GB iSCSI volume for logs.

    I have 2 sockets for CPUs and need to choose between dual and quad cores. The dual cores have faster clocks but less cache, and I'm thinking an older architecture. Am I better off with more cores and cache while sacrificing clock speed?

    I am planning on adding the new E2K7 cluster alongside the E2K3 server and then moving each mailbox over (a Move-Mailbox sketch follows below), then removing the old server. This seems more complicated than simply getting rid of the 2003 server, then adding the 2007 cluster and restoring the mailboxes using PowerControls or exmerge. The migration option lets me do this on my own schedule, whereas a cutover means it all needs to work at once. If I go with the cutover method, how can I prebuild the servers and add them to the domain right after removing the 2003 server, or can't I? I think the answer is no, and migration is my only real option if I want to prebuild.

    I also need to migrate about 30GB of public folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PFs set up? I guess I could even keep the E2K3 server to host just the PFs?

    Lastly, if I have a mix of Outlook 2000, 2003 and 2007, what do I need to do to make sure they all have access to the GAL and OAB? At time of cutover we'll be at about 90% 2007, but we will have some older stuff around. My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using it for all Outlook clients; does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so not a fully meshed, stable WAN. Thank you all very much for the assistance in advance, and I look forward to discussion of these points! Regards...Michael
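
    On the staged-migration point, Exchange 2007 moves mailboxes off a 2003 server with the Move-Mailbox cmdlet, so the per-mailbox pace described above is scriptable. A sketch with hypothetical server and database names:

        # Move one mailbox at a time to the new CCR cluster
        Move-Mailbox -Identity "jsmith" -TargetDatabase "E2K7CLUS\First Storage Group\Mailbox Database"

        # Or queue up everyone still on the old server
        Get-Mailbox -Server "E2K3OLD" | Move-Mailbox -TargetDatabase "E2K7CLUS\First Storage Group\Mailbox Database"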

    Read the article

  • cron.daily not running at the time it should?

    - by Mariano Martinez Peck
    My /etc/cron.daily scripts seem to be executing far later than I understand they should. I am on Ubuntu and anacron is installed. If I do sudo cat /var/log/syslog | grep cron I get something like:

        Aug 23 01:17:01 mymachine CRON[25171]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 02:17:01 mymachine CRON[25588]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 03:17:01 mymachine CRON[26026]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 03:25:01 mymachine CRON[30320]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
        Aug 23 04:17:01 mymachine CRON[26363]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 05:17:01 mymachine CRON[26770]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 06:17:01 mymachine CRON[27168]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 07:17:01 mymachine CRON[27547]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 07:30:01 mymachine CRON[2249]: (root) CMD (start -q anacron || :)
        Aug 23 07:30:02 mymachine anacron[2252]: Anacron 2.3 started on 2014-08-23
        Aug 23 07:30:02 mymachine anacron[2252]: Will run job `cron.daily' in 5 min.
        Aug 23 07:30:02 mymachine anacron[2252]: Jobs will be executed sequentially
        Aug 23 07:35:02 mymachine anacron[2252]: Job `cron.daily' started

    As you can see, at 3:25 it tried to do something, but the cron.daily execution really started at 7:35. My /etc/crontab is:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
        25 3    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #

    From what I understand, daily scripts should indeed run at 3:25. My /etc/anacrontab is:

        # /etc/anacrontab: configuration file for anacron
        # See anacron(8) and anacrontab(5) for details.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HOME=/root
        LOGNAME=root

        # These replace cron's entries
        1         5    cron.daily      run-parts --report /etc/cron.daily
        7         10   cron.weekly     run-parts --report /etc/cron.weekly
        @monthly  15   cron.monthly    run-parts --report /etc/cron.monthly

    So... does someone know why my cron started to do something at 3:25 but the jobs really started at 7:35? Also, as you can see in the log, hourly jobs are being executed at the correct time: 17 minutes past the hour, which is exactly what I have in /etc/crontab. Finally, from the logs, it seems my daily jobs are actually being run by anacron rather than cron? So cron finds nothing to run (at 3:25) and then anacron runs the jobs at 7:35? If true, how can I fix this? Thanks in advance,
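
    A reading of the log itself suggests the answer: the 03:25 crontab entry is a no-op whenever anacron is installed (test -x /usr/sbin/anacron succeeds, so the fallback after || never runs), while anacron itself is started at 07:30 by a separate cron job and then honors the 5-minute delay field in anacrontab, which matches the 07:30 to 07:35 gap above. A quick check, assuming Ubuntu's default file layout:

        # The job that actually launches anacron (visible at 07:30 in the syslog above)
        cat /etc/cron.d/anacron

        # In /etc/anacrontab, "1  5  cron.daily  ..." means: period 1 day,
        # delay 5 minutes after anacron starts, then run the job.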

    Read the article

  • Did a recent WinXP update break CD/DVD read speeds? SP2/SP3

    - by quack quixote
    I have two systems with fresh installations of Windows XP Pro SP3 (SP3 slipstreamed into the installer; fully updated after install). One's a refurbished 2.4GHz Pentium4 system; the other is a new 1.6GHz Atom330 build. Both have brand-new dual-layer CD/DVD burners (one's a LiteOn IDE, the other an LG SATA). Both take a really looooong time to read a single-layer DVD in Windows with Cygwin tools. Specifically, 40 minutes or more. I burn backup data to single-layer DVD+/-R and use MD5 hashes for data verification (made with the standard md5sum tool in Unix or Cygwin). The hashes are burned to disc with the data files, and I use this command to verify:

        $ cd /path/to/disc/mountpoint ; time md5sum -c < md5.txt

    Here's how long that takes to run on a full single-layer DVD+/-R disc:

        Old system (WinXP SP2, 1.8GHz Athlon 2500+, last summer): ~10 minutes
        Old system (Ubuntu 9.04, 1.8GHz Athlon 2500+): ~10 minutes
        Old system (Debian 5, dual 550MHz P3): ~10 minutes
        New Pentium4 system (running Ubuntu 9.04): ~5 minutes
        New Pentium4 system (running WinXP SP3, file copy from Win Explorer): ~6 minutes
        New Atom330 system (running WinXP SP3, file copy from Win Explorer): ~6 minutes

    Now the weird stuff:

        Old system (WinXP SP2, 1.8GHz Athlon 2500+, today): ~25 minutes
        New Pentium4 system (running WinXP SP3, read from Cygwin): ~40-50 minutes (?!!)
        New Atom330 system (running WinXP SP3, read from Cygwin): ~40 minutes (can do it in ~30 minutes if I have another program spin up the drive first)

    Since both systems will copy files in 6 minutes using Windows Explorer, I know it's not a hardware problem. Windows just never spins up the drive during the Cygwin read, so it stays super-slow the whole time. Other programs like EAC and DVD Decrypter seem to spin up the disc just fine during their processing. DMA is enabled on both systems. (Can confirm in Windows' Device Manager on the Atom330, not on the P4.) Nero's DriveSpeed tool doesn't seem to have any effect. Copy times are comparable from the command line with Windows' xcopy. Copying with Cygwin's cp looks more like the problem state: it will spin up the drive for a short time, never reaches full speed, and lets it spin back down again for most of the copy. What I need is to get full read speeds from Cygwin. Is this a known issue with SP3 or some other recent Windows update? Any other ideas?

    Update: More testing; Windows will spin up the drive when data is copied with Windows tools, but not when read in place or copied with Cygwin tools. It doesn't make sense to me that Windows spins up the drive for copying but not for other reads. Might be more of a Cygwin problem?

    Update 2: GUI activity is sluggish during the problem state. During the Cygwin verifies there's a slight but noticeable delay when dragging windows or icons around on the desktop, switching windows, Alt-Tabbing through open applications, opening new windows, etc. It reminds me of the delay when opening a Windows Explorer window on My Computer just after inserting a DVD. I've tried updating Cygwin (from 1.5.x to 1.7.x), but no change in the problem behavior. I've also noticed this issue occurs on WinXP SP2, but it's not exactly the same: some spin-up occurs, so the read happens in ~25-30 minutes instead of 40+. The SP2 system used to run the verifies in ~10 minutes, and when it first changed (not sure exactly when, maybe in late November or early December 2009) I thought it was dying hardware. This is why I suspect an official update of breaking this functionality; this has worked for years on that SP2 box.
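
    One workaround worth trying (a guess, not a confirmed fix): force the drive up to speed with a bulk raw read just before the verify, since the Atom330 box already behaves better when another program spins the drive up first. /dev/sr0 is Cygwin 1.7's name for the first optical drive; the mount point is hypothetical.

        # Prime the drive, then verify as usual
        dd if=/dev/sr0 of=/dev/null bs=1M count=64
        cd /cygdrive/e && time md5sum -c < md5.txt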

    Read the article

  • CSMA/CD: once the channel is captured, can one or multiple frames be sent before other stations try transmitting?

    - by Bryce Thomas
    Hi there, I have a question regarding CSMA/CD in the IEEE 802.3 LAN standard. I'm trying to understand the behavior of CSMA/CD after a station has captured the channel. Say station A has captured the channel and has an infinite supply of frames it is sending. Also assume that station B has something it wants to send. Now, if I understand correctly, station B senses the line and sees that station A has captured the channel/is transmitting. My question is, does station B see the channel as being captured for the duration of ALL of the frames that station A sends (an infinite period of time), or does station B only consider the channel captured for the first frame it sees A send, after which B goes ahead and transmits its own frame (which will collide with one of the remaining frames still coming from A)?

    Read the article

  • Software to cd (change directories) into .jar/.ear files?

    - by Segphault
    Is there any software/script that will allow me to cd (change directories) into .jar/.ear/.zip files and edit the contents of the files they contain? I'm working on a large EJB project (yuck), and I frequently find myself in situations like the following:

        something.ear/
        |-- something.jar/
        |   `-- fileINeedToEdit.xml

    I work primarily via the command line (Mac/Linux), so I find myself decompressing the files with jar -xvf, editing the file I need to edit, and then recompressing with jar -cvf. Obviously, this becomes a major headache after the first few times. I'd like to be able to treat the compressed files as directories, and simply cd (or some alternate command) to the file I want to edit. Does anyone know how I can accomplish this?
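
    Short of a true virtual filesystem, jar uf updates a single entry in place, which removes the full extract/recompress cycle. A sketch of a helper (the function name is made up; it assumes a relative path to the archive):

        # Extract one entry, edit it, write it back into the archive.
        jaredit() {
            archive="$1"; entry="$2"
            tmp=$(mktemp -d)
            ( cd "$tmp" \
              && jar xf "$OLDPWD/$archive" "$entry" \
              && ${EDITOR:-vi} "$entry" \
              && jar uf "$OLDPWD/$archive" "$entry" )
            rm -rf "$tmp"
        }
        # Usage: jaredit something.jar path/to/fileINeedToEdit.xml
        # A jar nested inside an ear still needs two passes.

    vim's bundled zip plugin can also open .jar/.ear/.war archives directly and save edits back, which may be closer to the "cd into it" feel.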

    Read the article

  • How do I make a built project into an installable CD?

    - by Dcurvez
    Hi all, I just finished making my first project in Visual Studio 2008. What I want to do is put it on a CD and let another person run it. I went through clean, build and publish, and all went well. When I took it to my friend's house to put it on her computer, I never got a "download" and "install" screen. We were able to see it and use it on her computer (which is good because she doesn't have Visual Studio on her machine), but I wanted her to be able to use it without keeping the CD in the drive. Is there some setup or something I have to do to make my program download and install? The whole shebang was done by starting a new Windows application and then just adding Windows Forms to the original. Thanks :)

    Read the article

  • PostgreSQL server: 10k RPM SAS or Intel 520 Series SSD drives?

    - by Vlad
    We will be expanding the storage for a PostgreSQL server, and one of the things we are considering is using SSDs (Intel 520 Series) instead of rotating discs (10k RPM). Price per GB is comparable and we expect improved performance; however, we are concerned about longevity, since our database usage pattern is quite write-heavy. We are also concerned about data corruption in case of power failure (due to the SSDs' write cache not flushing properly). We currently use RAID10 with 4 active HDDs (10k, 146GB) and 1 spare configured in the controller. It's an HP DL380 G6 server with a P410 Smart Array Controller and BBWC. What makes more sense: upgrading the drives to 300GB 10k RPM, or using Intel 520 Series SSDs (240GB)?
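
    On the write-cache worry, one hedged check before committing either way: PostgreSQL ships pg_test_fsync (in contrib), which reports how fast each candidate volume completes honest fsyncs; a drive that lies about flushing shows implausibly high numbers. The paths here are hypothetical mount points for the two options.

        pg_test_fsync -f /mnt/ssd/test.fsync
        pg_test_fsync -f /mnt/sas/test.fsync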

    Read the article

  • How to (hardware) RAID 10 on Ubuntu 10.04 LTS with 4 drives and a motherboard with RAID controller

    - by lollercoaster
    I have 4 500GB hard drives. I set up a RAID 10 in the BIOS, much like shown here: http://www.supermicro.com/manuals/other/RAID_SATA_ESB2.pdf Then I followed these instructions: http://www.unrest.ca/Knowledge-Base/configuring-mdadm-raid10-for-ubuntu-910 Basically, I cannot get it to work. I go through the instructions when I get to the "partition" section of the install, creating 4 RAID 1s (2 partitions on each drive, one for primary and one for swap space), then combining them to make a RAID 10. Unfortunately it still shows 2 partitions, one 500 GB and another of 36GB for some reason. Any ideas? Best of all would be if anyone has found good step-by-step instructions for how to do this... I've been googling for hours and haven't found anything...
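
    Worth noting: most desktop-board "RAID" is firmware fakeRAID, and the linked guide builds software RAID with mdadm instead, so mixing the two (BIOS RAID enabled plus mdadm arrays) can produce exactly this kind of phantom-partition confusion. A sketch of the pure-mdadm route, with the BIOS controller set back to plain SATA/AHCI and device names assumed:

        # One RAID10 array from one partition per disk
        sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
        cat /proc/mdstat            # watch the initial sync
        sudo mkfs.ext4 /dev/md0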

    Read the article

  • Enabled storing BitLocker keys in Active Directory; is there a way to upload keys for drives encrypted before this?

    - by Rossaluss
    We have enabled storing of BitLocker keys within the device object in Active Directory; however, before this was implemented we had encrypted 100+ devices using BitLocker, and we've only found ways to upload the key to AD when enabling BitLocker for the first time on an install. Does anybody know of a way to upload the keys for all the devices whose drives were already encrypted with BitLocker into their respective device objects in AD? Or are we going to have to decrypt and re-encrypt all the devices on the floor? (Google seems to say this is what we're going to have to do, but we're no experts in BitLocker, so we may have missed something.) When we go into Manage BitLocker on an already-encrypted device, we only get the usual options of saving the key to a file or a memory stick, or printing it out; no option is available to save to AD etc. Any help would be appreciated.
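
    One approach that avoids decrypting anything, sketched here as something to test rather than a guaranteed fix: manage-bde can push an existing volume's recovery password into AD after the fact, provided the AD backup policy is in place. The GUID below is a placeholder; use the one printed by the -get step.

        manage-bde -protectors -get C:
        REM Note the Numerical Password protector's ID, then:
        manage-bde -protectors -adbackup C: -id {DFB478E6-8B3F-4DCA-9576-C1905B49C71E}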

    Read the article

  • How to rescan and remount drives on Ubuntu Hardy or Jaunty?

    - by pts
    When I connect a USB drive to an Ubuntu Hardy or Jaunty system, the system mounts the partitions found on the drive and opens a Nautilus window for each mounted partition. Within Nautilus, I am able to unmount partitions. What I need is a command or action which forces the system to rescan the available drives and partitions, and automount each partition that is not mounted, including those which I've manually unmounted from Nautilus. sudo /etc/init.d/udev restart or ... reload doesn't do this. As of now, I just unplug the USB drive and connect it again, which forces a scan and a mount on that drive. But I want to force the rescan and remount without unplugging anything, preferably without the user having to know device or drive names.
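
    A rough sketch of the manual route (not distribution-specific, and it bypasses GNOME's automounter rather than re-triggering it): mount every partition that carries a filesystem signature but is not currently mounted.

        for dev in $(blkid -o device); do
            mount | grep -q "^$dev " && continue      # already mounted
            mp="/media/$(basename "$dev")"
            sudo mkdir -p "$mp" && sudo mount "$dev" "$mp"
        done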

    Read the article

  • Want to install an OS from USB instead of CD: how to deal with *.img image files?

    - by claws
    I'm on Windows, trying the DragonFly BSD operating system. As you can see here: http://www.dragonflybsd.org/download/ there are two kinds of images available for download: CD (.iso) and USB (.img). I downloaded the *.iso and am using UNetbootin to make a bootable USB stick, but it's taking a hell of a long time. It's been 2 hours and it's just 50% done (9k of 18k files). I'm really pissed off now! I used the *.iso because I didn't know how to deal with *.img files. Would the *.img file be quicker? How do I use it to make a bootable USB stick?
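
    The .img is a raw disk image, so no conversion is involved at all: it gets written byte-for-byte to the whole stick, which is typically much faster than UNetbootin's file-by-file copy. On Windows, tools such as Win32 Disk Imager or physdiskwrite do this; from any Unix-like environment the classic form is below. The image filename is a placeholder, and /dev/sdX must be the whole USB device; double-check it first, because dd to the wrong disk is destructive.

        dd if=dragonflybsd-usb.img of=/dev/sdX bs=1M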

    Read the article

  • Why do MSI installations use slower drives over faster ones in Windows 7?

    - by Joshua C
    I have noticed that the slowest drive in my system is used most during an MSI installation. I mainly notice this when running Windows updates, but it seems to be MSI installs in general. The setup I last saw this occur on was running Windows 7 with the following drives:

        SATA:
            240GB SSD, NTFS, ~515MB/s (operating system drive)
            1TB, NTFS, ~110MB/s
        FireWire:
            4TB, ExFAT, ~80MB/s

    I would think that Windows would choose the fastest drive with available space for temporary files, but it will instead choose the external drive with the slowest transfer speed. I could also understand choosing the 1TB drive over the SSD, in an attempt to preserve the SSD's write endurance. Why does this happen? Is there a way to force these installations to use the OS drive or a specific drive?
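
    Where Windows Installer puts its scratch and rollback files is not obviously documented, so this is only an experiment to try: point the per-user temp directories at the fast drive and re-test (setx changes apply to newly started processes, so log off and on afterwards). The directory name is hypothetical.

        setx TEMP "C:\FastTemp"
        setx TMP "C:\FastTemp"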

    Read the article

  • What software will rip a CD to FLAC files in one step?

    - by jcollum
    I'm using EAC to rip files to FLAC. Seems fine. But for a large number of CDs (200 or so) the process is a little labor-intensive. There are 4 or 5 clicks in there that seem extraneous: EAC seems to want to rip the CD to a set of WAV files, and then (after all those files are ripped) I have to select "Process WAV files" (I'd get it exact, but EAC is occupied right now). I'd like to send all of it to FLAC files in just a few steps: get CDDB info, select art, rip. Is there a way to do this with EAC that I'm missing? I haven't been able to find anything on the web that explains this. Is there a better program for this?
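
    EAC can be configured to hand each track straight to an external compressor (flac.exe) as it rips, which removes the separate "Process WAV files" pass. For batch work on this scale, a different tool altogether may mean less clicking; as a sketch, the command-line ripper abcde does the CDDB lookup, rip and FLAC encode in one pass:

        # One command per disc; abcde runs on Linux and under Cygwin
        abcde -o flac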

    Read the article

  • How to use BT or eMule across 2 or more hard drives?

    - by the searcher
    One difficulty with BT or eMule is that when the hard drive is full, we constantly need to move older files to a new hard drive so that we can download newer files. We can change the BT or eMule settings so that the download folder points to the new hard drive, but then what if eMule hasn't finished downloading some files that are hard to find, and one is 92% done... In that case, we would like to keep the old setting so that when the last 8% arrives, it can go into the correct file (and the same for BT, if we haven't finished some file or if we want to seed something later). So is there a good way to let BT or eMule point to 2 hard drives, or somehow let the new hard drive "merge" into the existing hard drive/folder?
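
    On Windows/NTFS, one way to "merge" a new drive into the existing folder without touching the client's settings is a directory junction: the application keeps writing to the old path while the bytes land on the new disk. A sketch with hypothetical paths, moving a completed-downloads subfolder and junctioning it back:

        robocopy "C:\Downloads\Complete" "D:\Downloads\Complete" /E /MOVE
        rmdir "C:\Downloads\Complete" 2>nul
        mklink /J "C:\Downloads\Complete" "D:\Downloads\Complete"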

    Read the article

  • How can I get my License Key from a boot CD?

    - by dubRun
    We recently acquired a server that's been in use for a while, but without any associated software, logins, etc. We attempted to blank the administrator account password, but that didn't work. We also tried some deeper edits on the password, but to no avail there either. Now what I'm looking to do is re-install Windows using the existing product key on the server. I've read that you can access the product key in the registry, and using the password tool (a Linux boot CD) we are able to view the registry. When I tried this, I got the ProductId (which identifies the version of Windows), not the product key. The OS I'm attempting to read from is Windows Server 2003 R2.
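
    The boot CD shows only ProductId because the actual 25-character key is not stored as text: on XP/2003 it is base-24-encoded inside the binary DigitalProductId value under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion. The decoding algorithm is widely documented; a sketch in Python, given the raw value exported from the offline hive:

        # Decode an XP/2003-era product key from the DigitalProductId bytes.
        DIGITS = "BCDFGHJKMPQRTVWXY2346789"

        def decode_key(digital_product_id: bytes) -> str:
            data = bytearray(digital_product_id[52:67])   # the 15 key bytes
            chars = []
            for _ in range(25):
                r = 0
                for j in range(14, -1, -1):               # base-24 long division
                    r = r * 256 + data[j]
                    data[j] = r // 24
                    r %= 24
                chars.append(DIGITS[r])
            key = "".join(reversed(chars))
            return "-".join(key[i:i + 5] for i in range(0, 25, 5))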

    Read the article

  • How do I back up my Windows partition from an Ubuntu live CD?

    - by lalli
    My Windows partition (C:) is corrupt. I'm booting up from an Ubuntu live CD and trying to copy all the files from C: to my external drive, but the system expands all of the links, producing a projected copy size of 1.8TB (my external drive is just 1TB, and the data in C: is around 700MB). Then I looked at dd and other backup utilities. For anything I looked into, I couldn't figure out whether or not the image would be readable in Windows through any other app. Has anyone else tried to back up data from a corrupted Windows installation using Ubuntu?
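
    One hedged alternative to dd for this case: ntfsclone (part of the ntfsprogs/ntfs-3g tools on the live CD) images only the used NTFS blocks and never follows links, so the result should fit comfortably on the external drive. The image is not directly mountable in Windows, but it restores the partition exactly; the device and target names below are placeholders.

        sudo ntfsclone --save-image -o - /dev/sda1 | gzip -c > /media/external/windows-c.img.gz
        # Restore later with:
        #   gunzip -c windows-c.img.gz | sudo ntfsclone --restore-image -O /dev/sda1 -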

    Read the article

  • Run Wave Trusted Drive Manager from a bootable CD, recover a crashed encrypted SSD?

    - by TigerInCanada
    Is there a way to run Wave Trusted Drive Manager from a live CD to access a non-bootable SSD with full-disk encryption? http://www.wave.com/products/tdm.asp The crashed disk is a Samsung SSD PB22-JS3, 128GB. It has bad blocks at 128-block intervals. If the SSD password could be unset, is sending the unit for disaster recovery possible? What might cause a nearly new SSD to crash in this way, and what is the probability of it happening again? We have other units in service, and I can do without every laptop disk in the company crashing...

    Read the article

  • Why are hard drives moving to 4096 byte sectors, vs. 512 byte sectors?

    - by Chris W. Rea
    I've noticed that some Western Digital hard drives are now sporting 4K sectors, that is, the sectors are larger: 4096 bytes vs. the long-standing standard of 512 bytes. So:

        What's the big deal with 4K sectors? Is it marketing hype, or a real advantage?
        Why should somebody building a new PC care, or not, about 4K sectors?
        Why is this transition taking place now? Why didn't it happen sooner?
        Are there things to look out for when buying a 4K-sector hard drive, e.g. incompatibility?
        Anything else we should know about 4K sectors?
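
    On the "things to look out for" question, the practical gotcha is partition alignment: a partition that starts at a sector not divisible by 8 (8 x 512 B = 4096 B) forces the drive into read-modify-write cycles on misaligned writes. A sketch for checking on Linux (sda is an assumed device name; Windows-era tools like WD Align served the same purpose):

        # A partition is 4 KiB-aligned when its start sector is divisible by 8.
        for part in /sys/block/sda/sda[0-9]*; do
            start=$(cat "$part/start")
            if [ $((start % 8)) -eq 0 ]; then
                echo "$(basename "$part"): start=$start (4K-aligned)"
            else
                echo "$(basename "$part"): start=$start (misaligned)"
            fi
        done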

    Read the article
