Search Results

Search found 7583 results on 304 pages for 'roger guess'.


  • Is it better to always copy and delete, rather than move?

    - by nbolton
    Generally speaking, I find myself panicking when I realise that if I cancel a file move, it could leave the target or source incomplete. This question applies to both Windows and Unix-based platforms. I can never remember exactly how the move command works in either case. For example, if you're moving a directory, does it copy the entire directory and then delete it afterwards, or does it copy then delete each file individually? I always realise after typing something like mv verybigdir dest that I really should have typed cp -R verybigdir dest && rm -r verybigdir (where the && operator only moves on to the next command if the first was successful) -- or is this pointless? What happens exactly when I press Ctrl+C halfway through a move? Likewise, what exactly happens on Windows when I press the cancel button? I can't count the number of times I've moved something (the last time was when using svn) and ended up with two directories with split contents. I guess the answer is difficult, because not all applications move groups of files in the same way.
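
    One safer pattern than a bare mv across filesystems is to copy, verify, then delete, so an interrupted run leaves the source intact; rsync can also resume a partial copy. A minimal sketch, assuming rsync is available and with verybigdir/dest standing in for real paths:

        # copy first; re-running the same command resumes an interrupted copy
        rsync -a --progress verybigdir/ dest/verybigdir/
        # delete the source only once the two trees compare identical
        diff -r verybigdir dest/verybigdir && rm -rf verybigdir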

    Read the article

  • Relayout LVM Disk

    - by Tom
    I have an Ubuntu 11.10 system with two 500GB disks. The partition tables look like this: /dev/sda1 primary 465.52GB /dev/sda2 extended 243.17MB -> /dev/sda5 logical 243.14MB /dev/sdb1 primary 465.76GB sda1 and sdb1 are in a single LVM physical volume group containing a single logical volume containing a single logical filesystem which is mounted as /. sda5 is mounted as /boot. The problem comes when I want to upgrade to Ubuntu 12.04, which requires at least 247MB free on /boot. So I need to reduce the size of sda1 so that I can increase the size of sda2 and sda5. How the heck do I do that? I can find how to shrink the logical volume group, but I'm not at all clear on how to clear out the end part of sda1 so that I can reduce the physical volume group. Does pvresize just deal with this automagically? Or is that wild wishful thinking? I guess the alternatives are to back everything up onto something or other and recreate the thing from scratch or find out whether GRUB2 supports using LVM for /boot.
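
    For what it's worth, pvresize does not relocate data automagically; it will only shrink the PV if no allocated extents sit past the new end. A rough sketch of the usual order (run from a live CD, since / lives on this volume group; the LV name and every size below are placeholders, not calculated values):

        e2fsck -f /dev/VG/root                    # check the filesystem first
        resize2fs /dev/VG/root 800G               # 1. shrink the filesystem
        lvreduce -L 800G /dev/VG/root             # 2. shrink the LV to match
        pvresize --setphysicalvolumesize 464G /dev/sda1   # 3. shrink the PV (refuses if extents sit beyond the new size)
        # 4. shrink the sda1 partition with parted/gparted, grow sda2/sda5,
        #    then run pvresize /dev/sda1 once more so the PV matches the partition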

    Read the article

  • Help configuring Mercury mail or similar with XAMPP to send e-mail outside of localhost

    - by user291040
    I'm building a PHP/MySQL driven website for my department at work (installed via XAMPP). I need to be able to send mail to outside e-mail addresses (e.g., Yahoo, Hotmail, etc.) using the PHP mail() function. As I see it I have two solutions: configure the SMTP directive in php.ini to point at the server running at my work, or configure/run a mail server that can send e-mails outside of localhost (I'm trying Mercury because it comes installed with XAMPP). Here are the problems I've come up against: I took a guess at our SMTP server name, and when calling PHP mail() I get the error "SMTP server response: 530 5.7.1 Client was not authenticated". I can't be sure, however, that the SMTP name is correct (I can't get help from our IT guys because of politics). I have also tried to use Mercury mail. Mercury seems to be picking up the request, but it doesn't want to forward the e-mail to the outside; I keep getting a Temporary error 240 (temporary MX resolution error). I've searched high and low but still can't find a definitive answer on how to send e-mails outside of localhost. Any help is greatly appreciated.
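
    Before changing php.ini or Mercury, it may be worth confirming by hand that the guessed SMTP host really does demand authentication; a quick check from any shell with OpenSSL installed (smtp.example.com is a placeholder for the work server):

        # connect and start TLS, then type EHLO and MAIL FROM by hand;
        # a 530 reply at that point confirms the server requires authentication
        openssl s_client -starttls smtp -crlf -connect smtp.example.com:587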

    Read the article

  • Can't detect hard drive on Macbook Pro

    - by MartinMoizard
    A few months ago I changed the configuration of my MacBook Pro as follows: I bought an SSD, removed the hard drive of my MacBook Pro and installed the brand new SSD in its place, then removed my DVD drive and installed my old hard drive there instead with a caddy. Everything was working great until today, when I could no longer access my old hard drive because it is no longer detected. Sometimes Mac OS X mounts it, but then it takes something like 15 minutes to browse a simple folder. I opened my laptop to have a look at the problem. It seemed like the optical drive connector was not plugged correctly into the motherboard (that connector: http://cl.ly/2T0X2e1j0J1g47061d1t). So I plugged it in correctly and rebooted. It didn't fix my problem. Then I tried to put my SSD in the caddy and boot: no hard drive was detected. So I guess there is something wrong with either the caddy, the optical drive connector, or the plug that is on the motherboard. So my question is, how can I know where the problem comes from?
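
    One way to narrow down whether the caddy, the connector or the drive itself is at fault is to compare what OS X reports on each SATA bus as the drives are swapped around; a quick sketch using built-in tools:

        diskutil list                          # which disks the OS can see at all
        system_profiler SPSerialATADataType    # per-port link and negotiation details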

    Read the article

  • Curious enigma of a network cable / connection / quality

    - by Foo Bar
    So, the situation is like this: I'm renting an apartment in a large house and I'm sharing internet with the landlord who lives downstairs. The internet is (my best guess) optical 20/20Mbit. I don't know how it's all wired in his flat (haven't been there / seen it). Anyway, a cable comes into my flat which seems to be connected directly to the optical-to-Ethernet router (and the password is the default one, so I have access, he he). There was a switch connected to that and to wires that go around the flat, and the wiring is terrible. It's even mixing phone and Ethernet, and from what I can see some cables are even interconnected!? Anyway, this cable that comes into my flat is very short. I can barely connect my computer to it, but if I do, I seem to get decent speed / performance. Not great, but decent. If, however, I connect a switch to it (I tried 2 different switches and a wifi switch) it's all blinking but I can't even connect to 192.168.1.1 (the router). DHCP fails, ping loses 80-100% of replies. So I connected this cable directly to the other cable which goes to my work room, with a connector that has two female jacks and no electronics. Now when I connect my computer in my room, again, the performance is decent. When I connect a WRT54GL (with Tomato, DHCP disabled) to it and plug a cable from this WRT into my computer... the performance is gone. Download seems okay on Speedtest, but upload is 0.2Mbps and it takes forever to connect. So what kind of cable troll am I dealing with here? Any ideas?
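
    If the machine sitting behind the suspect cable runs Linux, a quick way to tell a bad cable from a bad switch is to look at what speed/duplex each link negotiated and whether error counters climb during a transfer (a sketch; eth0 is a placeholder for the actual interface):

        sudo ethtool eth0 | grep -E 'Speed|Duplex|Link detected'
        ip -s link show eth0     # watch for rising RX/TX error counters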

    Read the article

  • Why is my new PC so slow at startup?

    - by rumtscho
    Bought a new PC this weekend, and it works really well. Only I have one big problem: startup time. Its BIOS needs 62 sec to load, then from the Grub start to the password entry screen it's another 26 sec. I think this is a lot, because my old PC needs 34 sec for the BIOS and another 8 sec to the password screen. After I enter the password, the desktop is usable with practically no delay on both. The new PC is a Core i7-930, running Lucid Lynx 64-bit from an Intel Postville SSD (no internal HDs). The old PC is a Pentium 4 Celeron (forgot the clock speed) running Lucid Lynx 32-bit from an ATA 100 hard drive. Neither PC is overclocked. The new one has boot sequence 1. DVD ROM, 2. SSD (connected over SATA in AHCI mode), 3. removable drive. The old one boots from 1. DVD ROM, 2. HDD, 3. Floppy. Neither has a second OS installed. The new one has less software installed than the old one (I think), but the boot time difference was noticeable even before I made any installs. As far as I know, just the SSD should be enough to make a noticeable difference in boot time. I thought that having a good mainboard in the new PC, as opposed to the basic office model in the old one, would also mean a faster-loading BIOS. If these assumptions are right, I guess I must have misconfigured something in the BIOS of the new PC. How should I configure it for a fast boot? It has an ASUS P6X58D board with an AMI BIOS; if you need the BIOS revision number I could post that too.
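
    Since the OS can't see the BIOS phase, it may help to first confirm how the 26 seconds after Grub are actually spent, so the tuning effort goes to the right place; a sketch, assuming the bootchart package is available in the Lucid repositories:

        sudo apt-get install bootchart
        # reboot once, then inspect the generated chart
        ls /var/log/bootchart/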

    Read the article

  • Will a SQL Server client alias survive a sysprep?

    - by shufler
    I want to sysprep a Windows Server 2008 R2 SP1 machine that has SQL Server 2008 R2 SP1 installed (for reference, SQL Server 2008 R2 has a new sysprep feature that allows the instance to be sysprepped). On the server is a SQL Server client alias that points to the default SQL Server database engine instance. For reference, the alias is called Alias-SQLServer and has been configured in both the 32-bit and 64-bit cliconfg versions (that is, both registry keys exist). The alias points to the local instance, as the image will be used to create development VMs and the installation script for the application being developed will use the SQL Server client alias in order to generalize the installation scripts. I can't seem to find information about whether the sysprep tool will update the SQL Server client alias's registry keys with the server's new name once it's unsealed. My guess is that it will not; how is sysprep to know that the server name the alias points to will be different for each image? Right? Perhaps if the alias points to localhost instead of the server name this will work?

    Read the article

  • How to handle files that don't need version control in mercurial

    - by richardh
    I am new to Mercurial, and for the most part do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Re LaTeX, all I really care about is the .tex file. Re R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' for a repo with a .csv and/or .sqlite file, I get a warning like: rev2.sqlite: up to 3070 MB of RAM may be required to manage this file (use 'hg revert rev2.sqlite' to cancel pending addition) So I revert and subsequently use adds like hg add -X *.sqlite. I guess I really have two questions: (1) Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diff files will always be empty, and not worry about wasted resources? (2) If I should keep excluding these files from the repo, is there a way I can make this option permanent? I.e., add to my .hgrc file something that always appends an option like -I *.tex -I *.R to my 'hg add' commands? Thanks!
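
    For question (2), a repository-level .hgignore usually works better than per-command -X/-I options, since hg add and hg status will then skip those patterns everywhere; a minimal sketch, run at the repo root:

        # create a glob-style ignore file and track it alongside the code
        printf 'syntax: glob\n*.sqlite\n*.csv\n' > .hgignore
        hg add .hgignore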

    Read the article

  • How is the extra mSATA SSD disk used/configured in a Dell XPS laptop?

    - by Mark
    Some machines in the new XPS laptop range from Dell come with a regular, large (500GB+) HDD and an additional 32GB m-SATA SSD. The only detail I can find about this extra drive on the Dell site is this: Store your important files, multimedia and photos with XPS 15’s large hard drive options. To get instant access to your media, choose an optional mSATA solid-state drive (SSD) that can boot up to twice as fast as a regular hard drive and resumes in less than 1 second. I'd like to know more about how this extra drive is set up and used, specifically: Is anything installed on it (e.g. OS files or a boot loader) or is it just used as swap space? Is the m-SATA drive visible as a lettered drive in Windows? (I'd guess not if it's used for swap file only.) Is this unusual configuration likely to cause any problems later down the line - e.g. when upgrading to Windows 8? As usual, Dell's sales team haven't been able to help. If anyone's actually got a Dell machine with this or a similar hard drive set-up and can give a definitive answer rather than speculation I'll accept the answer.

    Read the article

  • What does it mean to install two OS's alongside each other?

    - by Josh
    I currently have Windows 7 installed on my PC. However, I just tried out Ubuntu via booting from a disc and I love it. I want to install it onto my HDD, but I don't want to get rid of Windows 7. I know HOW to do this, but I am a little unsure what the consequences might be. What does it mean to install Ubuntu alongside Windows? Do they share the same resources? Also, I have my HDD already partitioned into two sections, a 70 GB section where Windows is installed and then another 400 GB section where all my data is stored. There is currently 26 GB free on the 70GB partition. I know Ubuntu doesn't take up much space. However, if I install Ubuntu in that space, will I still be able to install programs on Windows in the future? My main concern is that I am going to short-change my hard drive space for future installations. EDIT: I guess another big question I have is if I install a program on one OS, will the other be able to use it?

    Read the article

  • Why would resetting the Netgear N300 router fix my Win 7 laptop's slow wifi?

    - by rjnagle
    In the past day the wifi download speeds of my Win 7 HP 64-bit laptop have slowed considerably. I am trying to troubleshoot the problem and to figure out whether it's hardware related (i.e., is the Intel(R) Centrino(R) Wireless-N 2230 the problem?) or router related. I have a Netgear N300 router connected to my modem. I'm using Speedtest to measure my speed. First, during my problem state, my iPad can download and upload at normal speeds. It's only my Win 7 laptop which is having problems. Because my iPad downloads at normal speeds, that would tell me that the problem is specific to the laptop (either HW or SW). But when I restarted my Netgear router, the laptop wifi problems disappeared. That just doesn't make sense. If we know that one device can connect properly to the router, why would the laptop have problems? What are some possible reasons why this might happen? Also, during my problem state, I noticed that on my laptop upload speeds were faster than my download speeds. Anybody have a guess about what might cause upload speeds on a device to be faster than its download speeds? Are there any actions I could take (or options to enable) so this problem won't recur? (I initially thought my problem might be software or memory related -- Norton AV or browser plugins. But even after I disabled everything and made sure the memory footprint was minimal, the slowdown was still occurring -- and it solved itself altogether when the router was reset.)

    Read the article

  • Display errors using VMWare Player and Remote Desktop on Windows XP

    - by Tim
    I've come up against a weird display issue that I can't seem to find any "fix" for. When I first boot up my computer, everything behaves normally. If I start to use VMWare Player and/or Remote Desktop, my desktop starts having some odd video issues. The frames for some windows aren't drawn at all, if I move windows around rapidly, the area under where the window used to be isn't cleared (still shows artifacts of the content of the window), etc. In some cases, the minimize / restore / maximize buttons aren't drawn (but are click-able if you can guess where they are). I've tried the usual stuff - current drivers all around, using a single monitor, etc.. none of it seems to have any bearing. If I try to disable hardware acceleration, it tends to crash the computer. As I said earlier, it's running Windows XP, dual monitors, an NVidia en8400GS video card, asus p7p55d motherboard. Not sure what other pertinent details are needed. I would appreciate any help or suggestions!

    Read the article

  • What is the max connections via remote desktop for a small server?

    - by Jay Wen
    I have a small server running MS Server 2012. The CPU is a Xeon E3-1230 V2 @ 3.30GHz, 4 cores, 8 logical processors, 8 GB RAM. The main HD is a Samsung 840, and the big storage is a 4-disk WD Black RAID 10 array in a Synology NAS enclosure. My question is: given this hardware, approximately how many users can the system support via "Remote Desktop Connection"? Assume there are no licensing limits. These are not admin users (I know there is a two-admin limit). This boils down to: what resources does one remote connection require? RAM? % of the CPU? Network bandwidth? I guess the base case would be a connection where the user is inactive or simply browsing CNN. Once you know this, you know how many you could fit on the machine before something is maxed out. In reality, users would mostly be in Excel (multi-MB spreadsheets). I know the approximate resources currently required by each copy of Excel.
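
    As a rough way to frame the RAM side of the question, a back-of-envelope division gives an upper bound long before CPU or bandwidth enter the picture (every figure below is an assumption, to be replaced with measurements of a real Excel session):

        total_mb=8192; os_mb=2048; per_session_mb=400
        echo $(( (total_mb - os_mb) / per_session_mb ))   # ~15 RAM-bound sessions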

    Read the article

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some strongly suggesting one or the other. From my understanding, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200 rpm drives with 32MB cache), would the minor differences between, say, a Seagate and a Western Digital one (say one has a 128MB/s read rate and the other a 150MB/s read rate, as well as I guess various other minor differences) actually cause any notable performance loss, i.e. potentially worse than two matched 128MB/s drives, or does RAID not really care and give you essentially an optimal solution (e.g. up to 278MB/s total read speed for RAID 0 and 1), and similarly for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)? Also I couldn't find much info on how this differs between different RAID setups, e.g. RAID 0 or RAID 1, software or hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?

    Read the article

  • LVS / IPVS difference in ActiveConn since upgrading

    - by Hans
    I've recently migrated from an old version of LVS / ldirectord (Ultra Monkey) to a new Debian install with ldirectord. Now the amount of Active Connections is usually higher than the amount of Inactive Connections; it used to be the other way around. Basically, on the old load balancer the connections looked something like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       12          252
        -> 10.84.32.22:0         Masq     1       18          368

    However, since migrating to the new load balancer it looks more like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       313         141
        -> 10.84.32.22:0         Masq     1       276         183

    Old load balancer: Debian 3.1, ipvsadm 1.24, ldirectord 1.2.3. New load balancer: Debian 6.0.5, ipvsadm 1.25, ldirectord 1.0.3 (I guess the versioning system changed). Is it because the old load balancer was running a kernel from 2005, and ldirectord from 2004, and things have simply changed in the past 7 - 8 years? Did I miss some sysctl settings that I should be enforcing for it to behave in the same way? Everything appears to be working fine, but can anyone see an issue with this behaviour? Thanks in advance! Additional info: I'm using LVS in masquerading mode, and the real servers have the load balancer as their gateway. The real servers are running Apache, which hasn't changed during the upgrade. The boxes themselves show roughly the same amount of Inactive Connections shown in ipvsadm.
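
    One concrete thing worth comparing between the two installs is the IPVS connection timeouts, since a longer TCP timeout keeps established entries in the ActiveConn column for longer; a quick check (the values on the last line only illustrate the syntax, they are not a recommendation):

        ipvsadm -L --timeout          # prints: Timeout (tcp tcpfin udp): ...
        ipvsadm --set 900 120 300     # set tcp / tcpfin / udp timeouts explicitly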

    Read the article

  • pfSense routing between two routers with shared network

    - by JohnCC
    I have a network set up using two pfSense routers arranged like this:

        DMZ1   WAN1        WAN2   DMZ2
         |      |            |      |
         |      |            |      |
         \____ PF1          PF2 ____/
                |            |
                \___TRUSTED__/

    Each pfSense router has its own separate WAN connection and a separate DMZ network attached to it. They share a common TRUSTED LAN between them. The machines on the trusted network have PF1 as their default gateway. PF1 has a static route defined to DMZ2 via PF2, and PF2 has a static route to DMZ1 via PF1. There is NAT to the WAN, but the internal networks (DMZ1/2 and TRUSTED) use different RFC1918 subnets. I inherited this arrangement, and it all used to work fine. I made a config change to PF1 (relating to multicast), and machines on DMZ2 suddenly could not talk to TRUSTED. I rolled the change back, but the problem persisted. What I guess you'd hope would happen is that TCP packets would go DMZ2 - PF2 - TRUSTED and on return TRUSTED - PF1 - PF2 - DMZ2. That's the only way I can see it would have worked. However, PF1 drops the returning packets. I've verified this using tcpdump. I've worked around this by adding static routes to DMZ2 via PF2 on the servers on TRUSTED, but some devices on there do not support static routes, so this is not ideal. Is there a way to make this arrangement work decently, or is the design inherently flawed? Thanks!
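
    For the TRUSTED hosts that do support it, the workaround described above amounts to a per-host route toward DMZ2 via PF2, so replies never touch PF1; a sketch for a Linux host, with placeholder addresses standing in for the real DMZ2 subnet and PF2's TRUSTED-side IP:

        ip route add 10.0.2.0/24 via 192.168.10.2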

    Read the article

  • Data recovery; nearly 1 TB of movies on a WD 3.5 TB personal cloud drive disappears with scanty traces

    - by Effector Dhanushanth
    I have a great collection of movies that I had stored in a logical mesh of folders on my 3.5 TB WD personal cloud drive. I woke up one morning and found that everything was fine with my data on this drive, except for my movie collection. There were two big folders, one "2sort" and the other "segregated". Out of all the segregated sub-folders, only letters C, D and 2 or 3 others remain. And the 2sort folder, which had umpteen subfolders amounting to more than 0.5 TB, is... it's just gone!! This is a great downfall. Now, this is a personal cloud drive and unfortunately has no USB port etc. to hardwire it and recover files. I'm sure there is software out there that can help me recover my beloved movies from such an interestingly "hard-to-reach" (should I say?) device? What might that software be, compadre? My happiness lies within your answer. Thank you. Remember: recovery software for a (WD) personal cloud. :) These movies were all "hand-picked" over the course of ten years. I just never catalogued my collection. If I could just get the "list" of my lost collection, that'd be enough. Recovering them would be a bonus, but they might turn out to be damaged if I were to somehow recover them, you know? Still, I'm certain they're all intact. I guess the file index just got corrupted. There surely is a veil of some sort that needs to be thrown or pushed aside to reveal my movies. What software can do that? Thanks immensely!

    Read the article

  • Can I use HP Recovery Discs for a different hard drive capacity and make?

    - by Fasih Khatib
    About two years ago I created HP Recovery Discs (3 of them). Now my hard drive has crashed and the new one is still a week from delivery. I was reading up on how to reinstall the genuine OS using the Recovery Discs, as I was not given any Windows 7 installation discs. I did my bit of research after getting answers from the community on what these discs do, and found out on other sites that people experience issues when recovering their OS from the discs, especially when they change the make or capacity of the hard drive. Unfortunately I had to change the make, as the hard drive that came built in has gone out of production. This question is just a part of my checklist to avoid problems when recovering the OS.

        I have: HP DV4-2126TX (available only in India I guess)
        I had: Seagate Momentus 320 GB
        I ordered: Western Digital Scorpio Black 500 GB
        Windows 7 Home Premium 32-bit

    Is there a possibility of encountering problems due to the changed capacity and make? I only want my genuine OS and drivers, not my data. I was told that Disc 1 contained the OS and drivers, and the rest of the discs contained data. I couldn't verify that.

    Read the article

  • Setting a subdomain to access home machine with windows remote desktop

    - by ianhales
    I'm trying to remotely connect to my home machine through Windows Remote Desktop (amongst other things, but this is currently my primary focus). I can do this fine using my home WAN's static IP (thank god for cable!) with port-forwarding, but I would like to access it from a subdomain of my web site (e.g. home.mydomain.co.uk). In the cPanel for my hosting account, I've gone into DNS zones and altered the A record to point to my WAN's IP, which I thought should do the job, but I still cannot connect. When I ping the subdomain, I get my web host's IP, which I guess is to be expected, as I believe the DNS of the host domain is used first and my server then handles the redirection of traffic to the IP in the A record. Is this the correct idea? Do A record changes suffer from the same propagation delays as other DNS record changes, as I suppose that could explain it? (By the way, this thread confirms my thoughts that setting the A record should be enough: Hostmonster Subdomain redirected to home server IP: How to ssh into home server using subdomain)
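
    A quick way to see whether the A record change has actually taken effect, and whether cached answers are what's in the way, is to query both a normal resolver and the domain's authoritative nameserver directly (the nameserver below is a placeholder for whatever the hosting account actually uses):

        dig +short home.mydomain.co.uk A
        dig +short home.mydomain.co.uk A @ns1.mydomain.co.uk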

    Read the article

  • Windows Server 2008 is stuck at "configuring updates - stage 3 of 3 - 0% complete"

    - by Chris
    This has happened the last two times I've done updates to this system, and I really have no idea what is going on. It is installing only a month's worth of updates. It only responds to ping and no services are up, so I can't view the system remotely (I have to hook up a monitor to see this message). In the past I've just restarted the system at this point and it eventually finishes updating. I want to know what I can do to avoid this situation, how to diagnose what is going on, and how to get any kind of remote access during the updates. Edit: I can start the machine in safe mode (where I did nothing but back up some files). I restarted and it no longer tries to do a Windows update; it just goes to the desktop, where everything seems extremely broken. I can click on some things, but not launch most programs. I guess all I can do at this point is a system restore or something. Edit: Re-installed Windows on this system yesterday. That's my usual solution to issues I don't feel like diagnosing, like this one.

    Read the article

  • Interaction between two Clouds

    - by Snehal Masne
    I have set up Cloud-A with 1 - [CLC+CC] and 2 - [NC] computers. I have another Cloud-B with the same configuration, using the Ubuntu Enterprise Cloud. Both of them are working fine individually, on the same LAN. Now if I want to add the NC of Cloud-A to the CC of Cloud-B [in case the resources of Cloud-B are exhausted], how can I make that possible? I guess this calls for the interoperability stuff... Could you please explain what happens exactly when we ask for an instance -- does the direct interaction happen between the client and the NC, or does it go through the CLC and CC? What I want to say is: say there are multiple cloud providers. A user is subscribed to any one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, making the client fully unaware of what's going on in the background?

    Read the article

  • How can I disable 'natural breaks' in Workrave?

    - by Pixelastic
    I've just discovered Workrave, and was trying to use it along with the Pomodoro technique (a 5 min break every 25 min). But the concept of 'natural breaks' in Workrave seems to interfere with what I'm trying to achieve. Workrave tries to guess that I'm taking a natural break if I stop using my mouse and keyboard for longer than 5 s. It then stops the work timer and starts counting time as if I were on my break. Here is a typical example: I've configured a 5 min rest break every 25 min. I start working. 10 min later, I receive a phone call, or start talking with a colleague, or do any work-related task that needs neither keyboard nor mouse. Workrave then stops counting my time as work time, and starts its rest timer. If my phone call is shorter than 5 min, Workrave resumes its timer where it stopped, meaning that my time on the phone is not counted as work time, and so my break time is pushed a few minutes later than it should be. Even worse, if my phone call is longer than 5 min, Workrave counts it as a complete rest break, and when I resume working it restarts its timer completely. I'm looking for either a way to disable natural breaks, or to increase the 'inactivity time' from 5 s to maybe ~1 min. Or maybe another angle on natural breaks that might work with the Pomodoro technique (forced 5 min breaks every 25 min). I'm using Ubuntu 11.10.

    Read the article

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking them. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

        - basic OS installation
        - basic network configuration, enough to reach the internet and resolve DNS
        - install puppet and set up certname
        - start puppet and let it manage the whole configuration
        - test, fix problems in config (via puppet), re-test, and so on...
        - manually stop puppet
        - set up the new network configuration for the datacenter network
        - move the machine to the DC
        - turn it on
        - puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying the configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
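
    The "install puppet and set up certname" step in the list above might look roughly like this on the agent (a sketch only; the path matches open-source Puppet 2.x/3.x defaults and the FQDN is an example):

        printf '[agent]\ncertname = web01.dc1.example.com\n' >> /etc/puppet/puppet.conf
        puppet agent --test    # the first run requests a certificate for exactly that name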

    Read the article

  • Saving data from a failing drive

    - by intuited
    An external 3½" HDD seems to be in danger of failing — it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible. There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach. I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many that they endanger other parts of the drive from being saved. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly — e.g. pausing every x MB/GB — would be better than just running the operation full tilt, for example to avoid any overheating issues? For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that aren't backups, i.e. aren't backed up. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently — orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.
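
    GNU ddrescue does essentially the "one pass for the easy data, then targeted retries" strategy described above, and its map file lets it resume without hammering the bad spots again; a sketch, with /dev/sdX as a placeholder for the failing disk and the image written to the new drive:

        ddrescue -n /dev/sdX rescued.img rescue.map       # first pass: grab the easy data, skip scraping bad areas
        ddrescue -d -r3 /dev/sdX rescued.img rescue.map   # second pass: retry the bad areas up to 3 times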

    Read the article

  • Grayed-Out Sleep and Hibernate Options on Windows 7 After Updating Graphics Driver

    - by Maxim Z.
    I have a Gateway M275 Tablet PC, on which I've installed Windows 7 Ultimate. The laptop is quite old, so there aren't any Win7 drivers for it, not to mention any Vista drivers. Win7 has been working for some time, but I noticed that my video output wasn't working. I went into Device Manager and found that I didn't have a driver for my video card: it just recognized it as the standard one. I searched online and found an XP driver for it, released by Gateway. Device Manager accepted this driver and prompted me to reboot. After that, I noticed that my Sleep and Hibernate options in the Shut Down menu have been grayed-out. I looked online and found that many people are attributing this to display drivers, as such an old driver would surely not be compatible with the standby procedures Windows 7 uses. To make it clear: I was able to Sleep and Hibernate before updating the drivers; now, I can't. Running powercfg /a gives me, "An internal system component has disabled this standby state," for each available standby mode. Is there some way that the driver can be modified to support hibernation? The new driver fixed my video output problem, but I guess hibernation is more important for me. If not, what steps should I take to remove the driver and just leave the standard Windows one, which previously supported hibernation and sleep on this computer? Thanks in advance.

    Read the article
