Search Results

Search found 11262 results on 451 pages for 'important directories'.

Page 349/451 | < Previous Page | 345 346 347 348 349 350 351 352 353 354 355 356  | Next Page >

  • How can I recover files when a folder shows empty but the files are not deleted?

    - by Borror0
    Yesterday, my laptop caught a virus which caused massive damage. Since then, I have been trying to recover important files before reformatting my computer, a task the virus has not made easy. Restore points predating the attack have been deleted. Most of my folders show empty. My Start menu is essentially empty, with the exception of Trillian and Mirror's Edge. The same goes for my Desktop, which only has programs that were installed after the attack. Searching for files through my computer is pretty much useless, as it only rarely brings up anything. I suspect most of my files have not been deleted. While my folders show empty, uTorrent still displays them and I can open them from there. Unfortunately, when I select Open Containing Folder, the folder still shows as completely empty even if I'm currently watching a video from that very folder. Further adding evidence to the not-deleted, just-missing theory, the data recovery software I'm using (Restoration) can only find a handful of the missing files. If they were deleted, I could do a forensic recovery to get them back, but since they're probably still somewhere on my computer, just out of my reach, I can't find them. Under those circumstances, is there a way I can recover those files?
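
    A common cause of exactly these symptoms is that the malware merely marked everything Hidden/System rather than deleting it. A minimal check-and-fix sketch from a Command Prompt; the drive letter and path are only examples:

        :: list hidden files to confirm they are still there
        dir /a:h "C:\Users\YourName\Documents"
        :: strip the System, Hidden and Read-only attributes from everything, recursively
        attrib -s -h -r "C:\Users\YourName\*" /S /D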

    Read the article

  • All FireFTP passwords gone after auto-update

    - by GitaarLAB
    For the last six months (since the Firefox madness started and they keep taking control of my PC) I've been terrified to touch Firefox. The problem is, I've been using it in my business (since once upon a time it was a trustworthy application with useful extensions like FireFTP) and that installation (and its plugins) holds four years of information. Firefox keeps destroying my important data by messing up (or blocking, or worse, auto-updating) my plug-ins, even crashing my computer as a result. Today Firefox killed FireFTP by (again) auto-updating it without my permission, even though I did my best to disable that nonsense in about:config. Result: none of the (over 100) FireFTP accounts can be logged on to; they suddenly all ask for a password. I do not have the time to find all of the passwords and reconfigure FireFTP again. How can I undo the mess Firefox created once again? That is, where are the passwords, and how do I downgrade? As a side question, how can I make Firefox behave again? I'm the boss of my computer, not them! How can I once and for all take back control and completely kill every kind of auto-update feature?
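
    For the "make Firefox behave" part, a sketch of the stock update preferences, set via about:config or a user.js file in the profile; whether these stop every add-on auto-update in a given Firefox version is not guaranteed:

        // user.js in the Firefox profile (or toggle the same names in about:config)
        user_pref("app.update.enabled", false);                   // no browser updates
        user_pref("app.update.auto", false);
        user_pref("extensions.update.enabled", false);            // no add-on update checks
        user_pref("extensions.update.autoUpdateDefault", false);  // never auto-apply add-on updates

    The FireFTP account list itself lives in the profile folder (reportedly a fireFTPsites.dat file, with passwords optionally in Firefox's own Password Manager), so restoring the profile from a pre-update backup, together with installing the previous FireFTP XPI from the add-on's version history, is the usual downgrade path.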

    Read the article

  • How do you back up your own files? [on hold]

    - by Antonis Christofides
    I'm a system administrator and I use rsnapshot to back up some servers and duplicity for some others. Both work fine, each one with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server; the problem is that once in a while I must do a full backup. My emails and important files are 9 GB, and I expect this to increase. Uploading through ADSL at 1 Mbit/s would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must run on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up those files. In that case, the question is whether I can sync the files on many computers; is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient, but that's not a big deal. Maybe also, when I need to restore a file, finding the correct file to restore could be a pain because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
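
    On the multi-computer question: encfs keeps its key material in .encfs6.xml inside the encrypted directory, so the same key can be used on several machines by sharing that file (and the password). A minimal sketch of the encfs + unison side, with host and path names as placeholders:

        # create/mount the clear view; the encrypted tree lives in ~/Private.encrypted
        encfs ~/Private.encrypted ~/Private
        # copy ~/Private.encrypted/.encfs6.xml to the other computers to reuse the same key
        # sync only the encrypted files to the remote server
        unison ~/Private.encrypted ssh://backuphost//srv/backup/private.encrypted
        # on the server, rsnapshot or duplicity then snapshots the already-encrypted tree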

    Read the article

  • Automated Syslog Error Solution Finder

    - by Dru
    Are there any frameworks that automatically find solutions for syslog errors? I want my central syslog server to email a list of problems, their severity and suggested solutions. There have been several questions about centralising system logs and alternative log analysis systems, but I don't get the impression that any of them help with issue resolution. A little background: at work I am now literally doing the work of two people, and both jobs have expanded beyond their initial frameworks. It is not so bad as I have helpers, but they are little more than smart monkeys. While one of my predecessors [I have two, that is how I know I have the jobs of two people] set up logwatch to email its results out, my monkeys don't have the skills necessary to identify unimportant data. This has caused all of them, and sadly myself, to set up email filters and ignore the whole thing until something goes "bang". It would be handy to have someone else tell them what is important, what is connected, and to suggest a few ways to resolve the issue (I could train them to research the solution first, ha!). My reading of the Splunk and Octopussy sites indicates that I still need to bring my own highly trained monkey to the party, which I am several years away from having.

    Read the article

  • Very Slow DSL (Ethernet) speed [New Interesting Update]

    - by Abhijit
    Very IMPORTANT and INTERESTING UPDATE: For some reason I decided to do a completely new setup, and this time I again went with openSUSE plus Ubuntu. So I first reinstalled Lubuntu and then installed openSUSE 12.2 (64-bit). Now my DSL speed is perfectly normal on openSUSE. So this is very scary. Is it possible for an operating system to manipulate my NIC so that it works fine only on that operating system and not on another OS? Setting paranoia aside, what is it that makes ONLY SUSE get my NIC to work at normal speed while Ubuntu cannot? Not even Fedora? Not even Linux Mint? What are all these OSes lacking that lets SUSE work great? == ORIGINAL QUESTION == I 'was' on openSUSE 12.2 when my DSL speed was normal. Yesterday I switched from openSUSE to Ubuntu 12.04 and the speed decreased. It dropped to somewhere in the range of 7-25 kbps. Then I switched to Linux Mint, and then to Fedora; still slow. When I was on Ubuntu I disabled IPv6, but still no luck. Now I am on Fedora, this time with a DIFFERENT ISP, and I am still getting very slow speed. So my guess is this has nothing to do with the OS. What can be wrong? Is this a problem with the NIC? Does NIC speed decrease over time? Does a NIC wear out over time, as a keyboard or mouse does? Help please. All the OSes I used are 64-bit and my laptop is a Compaq Presario A965TU, Intel Centrino Dual Core. An interesting thing to note is that I get normal speed while downloading torrents inside torrent clients. The slow speed applies to downloads from any web browser and to installing software using the terminal.
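
    One concrete thing to compare between the openSUSE install and the others is the NIC driver version and its offload settings; a sketch using ethtool, where eth0 is a placeholder for the actual interface name:

        # driver and firmware in use; compare this output on openSUSE vs. Ubuntu/Fedora
        ethtool -i eth0
        # negotiated link speed and duplex
        ethtool eth0
        # as an experiment, turn off common offloads that buggy drivers sometimes mishandle
        ethtool -K eth0 tso off gso off gro off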

    Read the article

  • Short, intermittent USB port timeouts

    - by jacobsee
    I have a data acquisition application controlled by a Windows PC. I am using an Intel Desktop Board DH67CL motherboard. It has six USB 2.0 ports along with two of the new blue USB 3.0 ports. The DAQ instrument connects to the computer via USB. Roughly once every day or two there is a short communication glitch on the USB that causes the instrument to disconnect briefly and then reconnect. This is logged, so I know whenever it happens. Occasionally it will cause the data acquisition to stop. I've verified that the glitches go away if I use the USB 3.0 ports, or if I use a PCI add-on card with USB ports. So it seems that there is something going on with these built-in USB 2.0 ports on this motherboard. I haven't yet had a chance to test with other motherboards. My question: what could be causing these glitches, and how can I get rid of them on this board? Or is there a better motherboard to use? We have standardized on the DH67CL because it's inexpensive, has the 3 PCI slots that we need, and is readily available. We don't need the power of higher-end server boards, but reliability is important. Thanks. Update: This problem has been reproduced many times on different hardware, though we always use the same model of motherboard (DH67CL) and power supply (Antec EA-430D). I don't think the power requirements are very high but will check on that.
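
    One software-side thing worth ruling out before blaming the board is USB selective suspend. A sketch of disabling it for the current power plan on Windows 7; the two GUIDs are the commonly documented USB-settings subgroup and selective-suspend setting, so verify them on your system with powercfg -q first:

        :: 0 = disabled; first GUID = USB settings subgroup, second = USB selective suspend
        powercfg -setacvalueindex SCHEME_CURRENT 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0
        powercfg -setdcvalueindex SCHEME_CURRENT 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0
        powercfg -setactive SCHEME_CURRENT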

    Read the article

  • Missing Boot Manager in Vista

    - by Selase
    I am in really deep trouble here and need advice. A message just popped up on my screen and I had to restart my laptop. Upon restarting, the Boot Manager turned out to be corrupted. I am running Windows Vista 32-bit, by the way. I got onto Google with a friend's PC and found two basic ways of fixing it. The first one, which has Windows fix it automatically using Startup Repair, ends with the error message: "Startup Repair cannot repair this computer automatically". The second option, which requires me to rebuild the BCD, scans my system and finds the operating system on drive D:\Windows, which I believe should be C:. If I hit Y (yes) for the rebuild process to take place, I get the message "The required system device cannot be found". I then try the next option, which requires me to recreate the BCD store. It ends with an error message that says: "The store export operation has failed. The requested system device cannot be found". Proceeding from there is meaningless since the system device cannot be found. I somehow believe the device cannot be found because it's identifying the Windows installation on D: instead of C:, but I have no idea how to change that. I don't know how it can identify an operating system on D: when there's none there. How do I go about fixing the Boot Manager? I have very important files on my system and can't afford to reinstall Windows. I really need to fix this.
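
    For what it's worth, the D:\Windows sighting is usually just the recovery environment assigning different drive letters than the running OS does, not a misplaced installation. A sketch of the usual repair sequence from the recovery command prompt, assuming the installation itself is intact:

        :: in DISKPART, "list volume" shows which letter really holds \Windows under WinRE
        diskpart
        :: then, back at the command prompt:
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /scanos
        bootrec /rebuildbcd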

    Read the article

  • Use a generic driver on clients for a printer shared by CUPS

    - by Daniel
    I know this worked really easily with a recent CUPS version some years ago. Unfortunately, I fail to do the following with the latest CUPS nowadays. How do I share a printer without using printer-specific drivers on the clients with CUPS? The printer is a Samsung ML-2010. The important fact is that it needs a quite specific printing driver called SpliX. That is installed on the server and prints well. What makes trouble is using that printer over the network. I found out how to use Avahi under Linux to make use of DNS-SD to advertise and discover printers. But as far as I understand, the new CUPS exposes the internal driver interface on the network. This has two major issues: anybody can fiddle with the driver, and I don't trust any uncommon printer library to be "network secure"; and anyone who wants to use my printer, including guests, needs to install the specific drivers first. I remember the old days when I could enable "Share this printer" on the CUPS server and all clients would magically detect the printer and just send their job data to the server and have it do the driver stuff. After everything I've read, I guess this is related to the changes Apple introduced in CUPS, including dropping the integrated network discovery protocol. If it helps: Server: Ubuntu LTS 12.04 server with CUPS 1.5.3. Client: Arch Linux with current CUPS 1.6.1. On another box with Ubuntu the printer was at least set up automatically, but the mechanism used the SpliX library for that.
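
    A minimal sketch of the classic "driver only on the server" setup, with queue and host names as placeholders; the client queue is raw, so jobs are filtered by SpliX on the server only:

        # on the Ubuntu 12.04 server
        cupsctl --share-printers
        lpadmin -p ML2010 -o printer-is-shared=true
        # on each client: a raw queue that just forwards jobs to the server
        lpadmin -p ML2010 -E -v ipp://printserver.local:631/printers/ML2010 -m raw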

    Read the article

  • Consulting: Organizing site/environment documentation for customers?

    - by ewwhite
    Over time, I've taken on consulting and contract engineering work for various clients. More recently, customers have been asking for certain types of documentation. These are small businesses and they typically do not have dedicated technical staff. Within a single company, a wiki, Confluence, SharePoint, etc. all make sense as a central repository for documentation and environment information. I struggle with finding a consistent method to deliver the following information to discrete customers. I'm shooting for a process that's more portable, secure and elegant than a simple spreadsheet or the dreaded binder full of outdated information:
    - Important IP addresses, DHCP scope, etc.
    - Network diagram (if needed)
    - Administrative usernames, passwords and management URLs
    - Software license keys
    - Support contracts and warranty information
    - Vendor support contacts and instructions
    I know there are other consultants here. Any suggestions or tips on maintaining documentation across multiple environments in a customer-friendly format? How do you do it?

    Read the article

  • Step-by-step instructions to make DOS 'see' and 'access' USB hard drives

    - by Gireesh Venkateswaran
    I am stuck with a USB external hard drive (Maxtor OneTouch, 750 GB) that crashed. I have photos and home movies (my son's birth, first birthday, etc.) that are very important to me. I am given to understand that SpinRite is a very good tool to use, but it does not come with out-of-the-box capabilities to access USB drives. If I opened the case to get the HDD out of the external enclosure, I would void the warranty and would not be able to exchange the drive. I have done a bit of research and have the drivers that could help, but the bit I am missing is how to put it all together. I would really appreciate it if someone could give me step-by-step instructions for creating a DOS boot CD that loads the drivers and assigns a drive letter, so that DOS can 'see' the external hard drive. I have a Toshiba Satellite laptop that runs Windows XP (Home). It does not have a floppy drive. I will be grateful for your help in this regard.
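
    A sketch of the CONFIG.SYS entries commonly used for this, assuming the drivers you found are the widely circulated Panasonic USBASPI.SYS plus DI1000DD.SYS (exact switch support varies by driver version, so treat the switches below as typical rather than definitive):

        DEVICE=HIMEM.SYS
        REM ASPI driver for USB mass storage; /e=EHCI (USB 2.0), /w=wait for device, /v=verbose
        DEVICE=USBASPI.SYS /e /w /v
        REM assigns a DOS drive letter to the ASPI mass-storage device
        DEVICE=DI1000DD.SYS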

    Read the article

  • Explorer.exe hangs when moving large files to an external drive

    - by PiotrK
    While moving large files (700 MB+) to an external NTFS-formatted drive via USB 3.0, I've noticed strange things about explorer.exe (I am using Windows 7, up to date). Sometimes after moving a file, Explorer gets stuck (it can happen after a few files when moving several large ones): the moving window freezes and I am unable to kill Explorer (via Task Manager or TASKKILL on the command line). On the command line I get something like this (Task Manager shows that explorer.exe is still running; I get the same PID every time I try to kill it, and no diagnostic message):

        C:\Windows\system32> TASKKILL /F /IM explorer.exe
        SUKCES: proces "explorer.exe" o identyfikatorze PID 6296 zostal zakonczony.
        C:\Windows\system32> TASKKILL /F /IM explorer.exe
        SUKCES: proces "explorer.exe" o identyfikatorze PID 6296 zostal zakonczony.
        (SUCCESS: the process "explorer.exe" with PID 6296 has been terminated.)

    If I try to run another explorer.exe process at this point, I get the desktop icons and taskbar back but cannot open any Explorer window. After a few minutes explorer.exe finally dies and I am able to rerun it without rebooting. The file that I moved has two copies, one local and one on the external drive (the original wasn't deleted after the move); both copies seem to contain the same data (same length and CRC). If this happens while moving multiple files, only some of the files are moved and one of them ends up with two copies (both locally and on the external drive). What can I do to fix these Explorer hangs? Added: The same problem exists when copying files; it hangs between large files. A similar problem exists when I tried Total Commander (x64): copying paused at 80% of one of the files, TC didn't hang (but clicking Cancel in the copying dialog box has no effect), and during this pause I can't kill TotalCmd.exe, just like explorer.exe. Added (2): The problem seems to disappear when I use 32-bit applications (like Total Commander (x86)), but I need to do more testing to be sure of this. Added (3): There are several errors in the event log; source: disk, id: 11, qualifiers: 49156, task: 0, level: 2, keywords: 0x80000000000000. (This may be important, and I forgot to mention it): the main disk is encrypted with TrueCrypt (pre-boot password).
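
    As a stop-gap while debugging (and given the event-log disk errors, which point at something below Explorer), a sketch of moving the files with robocopy instead of the shell's copy engine; the paths are placeholders:

        :: move (copy, then delete the source) a single large file with limited retries
        robocopy "C:\Videos" "E:\Videos" bigfile.mkv /MOV /R:2 /W:5
        :: or move the whole folder's contents
        robocopy "C:\Videos" "E:\Videos" /MOV /R:2 /W:5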

    Read the article

  • Western Digital HDD disappears and reappears in BIOS

    - by tbkn23
    I know many people have asked about similar problems, but I have a very specific case where I can't understand what's going on. I have a 3 TB Western Digital Caviar Green disk connected in my desktop, which also has a Seagate 1.5 TB disk and two SSDs (OCZ and SanDisk). After working fine for quite some time (probably more than a year), my Caviar Green drive suddenly disappeared from Windows. I checked the BIOS, and it wasn't there either. I opened my PC, played with the connectors, power, etc., but nothing helped. I even tried swapping connectors with those of the 1.5 TB disk, and nothing changed: the 1.5 TB Seagate was there, but the 3 TB WD was not. Now for the strange part. I have another desktop at home, so I took my 3 TB drive out, connected it there, and it worked fine! I copied the most important files off it, and then made another attempt in the original desktop. Surprise! It now appeared in the BIOS and worked fine! I even ran the SMART test with the WD tools and it said everything was intact. It doesn't end here. After leaving it overnight in the original desktop, it had disappeared again by morning. I repeated the entire process, connecting it to the second desktop, and there it was again, working fine. Now for my question: what's going on? The disk seems to appear and disappear in my original desktop while the other drives there work fine, and the SMART test says the disk is fine. Any ideas? Is the disk defective and should it be replaced? Or maybe there's a problem with the controller in the desktop? I'm using a Gigabyte GA-880GA-UD3H motherboard and have tried connecting the drive to both controllers (SATA2 and SATA3). Thanks. EDIT: Power options are set to never turn off hard drives.

    Read the article

  • Why not block ICMP?

    - by Agvorth
    I think I almost have my iptables setup complete on my CentOS 5.3 system. Here is my script:

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F # Flush all rules
        iptables -X # Delete all chains
        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP
        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP
        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT
        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT
        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        # Block all other traffic
        iptables -A INPUT -j DROP

    For context, this machine is a Virtual Private Server Web app host. In a previous question, Lee B said that I should "lock down ICMP a bit more." Why not just block it altogether? What would happen if I did that (what bad thing would happen)? If I need to not block ICMP, how could I go about locking it down more?
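
    A minimal sketch of "locking ICMP down" instead of accepting it wholesale, intended to replace the single blanket ICMP rule above: keep the error types that TCP and path-MTU discovery depend on, rate-limit ping, and let everything else fall through to the final DROP.

        # ICMP errors needed for path-MTU discovery and sane TCP behaviour
        iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type parameter-problem -j ACCEPT
        # allow ping, but rate-limited
        iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/second -j ACCEPT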

    Read the article

  • Choice of filesystem for GNU/Linux on an SD card

    - by gspr
    Hi. I have an embedded ARM-based system running on an SD card. It's currently Debian GNU/Linux using ext3 as the filesystem. As I'm about to reinstall the system, I started wondering about changing to a more flash-friendly filesystem. I've heard about JFFS2, YAFFS2 and LogFS, and they all seem suited to the job. Which one would you recommend? Also, I've heard there have been a lot of ext4 improvements to better suit SSDs; should I interpret that as meaning that running ext4 would be just fine? What do I especially need to think about in that case? I guess the usage of the system is important, but for the sake of generality, imagine it'll do standard desktop stuff (even though it is in fact a small ARM-based system). Thanks for any replies. Edit: Wikipedia tells me (in a "citation needed" statement) that "Removable flash memory cards and USB flash drives have built-in controllers to perform wear leveling and error correction so use of a specific flash file system does not add any benefit." Thus, I'm leaning towards sticking with an ext filesystem.
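
    If the ext4 route is taken, a minimal sketch of creating and mounting it with reduced write traffic; the device name is only an example:

        # create the filesystem on the SD card's root partition
        mkfs.ext4 -L rootfs /dev/mmcblk0p2
        # /etc/fstab: noatime avoids an extra write per file access,
        # commit=60 batches journal commits; add "discard" only if the card handles TRIM
        /dev/mmcblk0p2  /  ext4  noatime,commit=60  0  1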

    Read the article

  • Use autocomplete in dropdown cells with Excel 2007?

    - by Martin
    I want to make a survey with Excel, so I have defined the cells for the answers as dropdown cells which only accept answers from a certain list, e.g.: the two lists List1 and List2 (yellow cells) hold the possible answers for the questions in Blocks 1.x and 2.x respectively (blue). There might be a Block 4 with more questions, which again use List1 for their possible answers. My problem is: I'd like to be able to use the autocomplete feature to fill in the blue cells with the dropdown menu, so that the user only types 5 and it automatically expands to "5: extremely important" or "5: extremely difficult". According to my research on the web, this should be possible if I add the list of possible answers directly above the cells where autocomplete should work (I did this with the green helper cells, which could be hidden). But I have to enter at least four characters, "5: e", to get the autocompleted suggestion. Is there a way to make autocomplete replace a plain "5" with the corresponding valid term? As the survey file will be distributed to a lot of people "outside", I cannot use VBA magic because it may be blocked on their computers and might not work. EDIT: It seems to have to do with the numbers I use: if I started my list items with A, B, C instead of 1, 2, 3, it would work perfectly. Excel seems to ignore pure numbers when they are entered and does not try to autocomplete them. Is there a workaround? (I hope it is clear what I want; it seems a little difficult to explain.)

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterwards

    - by Adam Ryczkowski
    I have five hard drives on which I want to keep my data. Some of my files are more important, and some of them are less so. So some of them I wish to put on RAID-6, and for some RAID-5 is sufficient. It is difficult to predict, at the moment the arrays are created, how much space of each type to declare. What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100 GB partitions and, as my needs grow, assemble those partitions into md devices using Linux RAID. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more space of, say, RAID-6, I'd take a 100 GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the volume group dedicated to RAID-6 data. Then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would reside completely independent file systems, which I'd later merge with multiple mount --bind calls into a single directory structure that would reflect the logical structure of the data rather than that of the storage. But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop wondering whether it can help me. If so, what do you think would be the best setup?
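
    A minimal sketch of one possible ZFS layout, assuming all five disks go into a single double-parity pool and the extra protection for important data comes from the per-dataset copies property rather than a different RAID level; device names are placeholders:

        # one raidz2 pool across the five drives (any two may fail)
        zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
        zfs set compression=on tank
        # important data: two copies of every block, on top of the raidz2 parity
        zfs create -o copies=2 tank/critical
        # less important data: plain dataset
        zfs create tank/bulk

    Note that the parity level itself is fixed per vdev when the pool is created; datasets only let you vary properties such as copies, compression and quotas, so this approximates rather than exactly reproduces the per-directory RAID-5/RAID-6 split described above.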

    Read the article

  • Proper approach to debugging PC startup problems (POST)

    - by saurabhj
    My CPU was heating up to around 65 deg C, and the last time this happened (about a year ago) I had thermal paste put between the CPU and heat sink, which got it down to about 45-50 degrees. This time I got some thermal paste and applied it myself. However, my PC is now not showing the POST display and not starting up. This is what happens:
    - LEDs light up
    - HDDs spin
    - Mouse is getting power
    - All fans, including the processor fan, start
    - No display on the monitor
    - No diagnostic beeps (no sounds at all)
    What I have tried: removing everything including RAM, HDD, PCI cards and the AGP card, then booting up the machine; no change from the first state. What steps can I take to figure out where the problem lies? Note (might be important): when I removed the heat sink, the processor came out with it (it was stuck to it in spite of the processor latch being on); I had to pry them apart with a screwdriver. Configuration: Pentium 4, 2.8 GHz with HT (very old, I know); original Intel motherboard with onboard sound and graphics (GB series); 2x512 MB DDR RAM; 2 SATA disks (320 GB / 250 GB); DVD writer; Creative sound card; network card. Any help would be appreciated. Thanks!

    Read the article

  • How can one keep secure, regular backups of one's desktop on a remote server through ADSL? [on hold]

    - by Antonis Christofides
    I'm a system administrator and I use rsnapshot to back up some servers and duplicity for some others. Both work fine, each one with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server; the problem is that once in a while I must do a full backup. My emails and important files are 9 GB, and I expect this to increase. Uploading through ADSL at 1 Mbit/s would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must run on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up those files. In that case, the question is whether I can sync the files on many computers; is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient, but that's not a big deal. Maybe also, when I need to restore a file, finding the correct file to restore could be a pain because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?

    Read the article

  • Grayed-Out Sleep and Hibernate Options on Windows 7 After Updating Graphics Driver

    - by Maxim Zaslavsky
    I have a Gateway M275 Tablet PC on which I've installed Windows 7 Ultimate. The laptop is quite old, so there aren't any Win7 drivers for it, not to mention any Vista drivers. Win7 has been working for some time, but I noticed that my video output wasn't working. I went into Device Manager and found that I didn't have a driver for my video card: it was just recognized as the standard one. I searched online and found an XP driver for it, released by Gateway. Device Manager accepted this driver and prompted me to reboot. After that, I noticed that the Sleep and Hibernate options in the Shut Down menu had been grayed out. I looked online and found that many people attribute this to display drivers, as such an old driver would surely not be compatible with the standby procedures Windows 7 uses. To make it clear: I was able to Sleep and Hibernate before updating the driver; now I can't. Running powercfg /a gives me "An internal system component has disabled this standby state" for each available standby mode. Is there some way the driver can be modified to support hibernation? The new driver fixed my video output problem, but I guess hibernation is more important for me. If not, what steps should I take to remove the driver and just leave the standard Windows one, which previously supported hibernation and sleep on this computer? Thanks in advance.
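
    A sketch of removing the XP display driver package again from an elevated command prompt, in case Device Manager's "Roll Back Driver" button is not offered; the oemNN.inf name is a placeholder you must look up in the first step:

        :: list third-party driver packages; find the Gateway/XP display driver's oemNN.inf
        pnputil -e
        :: remove that package, forcing removal even though a device currently uses it
        pnputil -f -d oem12.inf
        :: then uninstall the display adapter in Device Manager and reboot; Windows falls back
        :: to the Standard VGA driver, and powercfg /a should list the standby states again
        :: if the driver really was the culprit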

    Read the article

  • VMware Workstation Bridged Network Host UnReachable

    - by user2097818
    VMware Workstation 7 on Win7-64 (Home Premium). I have confirmed this with every guest running on this machine (from WinXP to Debian). I am using a bridged network connection for my guests (Automatic on VMnet0). All of the network configuration is done with DHCP (including on the host). Problem - what I cannot do: ping my host machine from inside any VM (it either shows "Destination Host Unreachable" or just times out). What I CAN do right after power up, with no problems at all:
    - I can connect to the internet from inside the VM
    - I can ping my router from inside the VM
    - I can ping other machines on my network from inside the VM
    - Other machines can ping the VM
    - Other machines can ping the host
    - My host machine can ping the VM (this one is important; read further)
    Details: My router is assigned 192.168.2.1/255.255.255.0 and provides the DHCP service (and seems to be doing so successfully). There are no IP conflicts on the network that I am aware of. All gateways and subnet masks are appropriate and matching. My entire workshop is on one single subnet, with one single DHCP server and gateway. There is one way in which I can ping successfully, but it requires an active connection initiated from the host (I start pinging from host to VM). For the duration of that active connection, I can successfully ping from VM to host using the explicit IP address. As soon as the host connection is closed, the VM's pings start hanging with the same old messages. My thoughts: this really feels like a firewall problem, but I have turned off all firewalls on the host and the VM, powered down the network, powered back up, and the problem still persists. And if it were a firewall, why would only the IP address associated with bridged VM networks be blocked? I feel as though my host operating system (Win7) is somehow configured incorrectly, or VMware Workstation is configured incorrectly on the host side. Although I have done my best to leave everything at defaults, I feel like I am missing something silly.
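
    If it does turn out to be host-side filtering after all (third-party security suites sometimes keep filtering even when nominally disabled), a sketch of explicitly allowing inbound ping on the Windows 7 host:

        :: allow inbound ICMPv4 echo requests on the host
        netsh advfirewall firewall add rule name="Allow inbound ping" protocol=icmpv4:8,any dir=in action=allow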

    Read the article

  • Computer making strange sound when turned on, ever since power outage

    - by Dot NET
    Recently we experienced a power outage, and the PC was off. However, once the power came back, I switched on the PC and heard a strange noise, almost as if the hard disk or fans were struggling to work. I can't really describe the sound, but it's a laboured, loud sound, almost like a jackhammer. This has persisted ever since the power outage; however, the noise stops after around 10 minutes or so and doesn't start again until the computer is turned off and on again. At first I thought it had something to do with the HDD, but all my files are intact, chkdsk did not report any issues, and performance is 100% unchanged, even in games (so the graphics card is fine, and most likely the HDD too). My PC has around 3 cooling fans, but I'm not sure it's one of these either, as the noise actually stops after 10 minutes or so, and if I leave the PC on for 4 hours (for example) the noise never starts again. It's there solely when turning on the PC. I haven't got a UPS, and it's important to note that the computer was not on when the power went out; it was merely plugged in. I promptly unplugged the PC once the power was out, and only plugged it in again when the power came back. Could it be the power supply? Unfortunately I can't open my tower, as I would void the warranty. Are there any tests I could carry out without voiding the warranty?

    Read the article

  • Files on external hard disk drive can't be played except in safe mode

    - by zfm
    This is the strangest thing that has ever happened to me. I have an external hard disk drive (ext-HDD), bought around two years ago (I don't know whether that's relevant or not). I have a video file (.avi) on my internal hard disk drive (HDD) that played very well. Then I copied it to my ext-HDD, but I couldn't play the file directly from the ext-HDD! I tried copying it back to my HDD (from the ext-HDD), and now that copy couldn't be played on my HDD either. Remember that I copied the file, so the original was still there. I tried going into safe mode (I forgot to mention, I use Windows 7 Pro), and this is where the strange thing happened: the copied files (both on the ext-HDD and the HDD) can be played in safe mode. So, my question is, what could actually have happened here? PS: My ext-HDD is an Axioo, 250 GB, exFAT. Edited: I'm currently using Mac OS X, and the file on the hard disk still can't be played. I haven't tried safe mode on the Mac (is there one?), but will try later (if there is).

    Read the article

  • Drowning in documents - recommend doc management solutions?

    - by Martin Day
    I've been researching document management lately. I want to organise my docs at home and also at the office. Finding affordable solutions one can actually test drive is quite hard. Some that I've downloaded just don't seem to work (testing on a brand-new Vista PC). I've seen some software on Amazon, like PaperPort, but I'm not really sure what it's like. For home I'd like something to organise files, with full-text search, good scanner integration, a nice interface, etc. But for the office it seems harder. I need something that does proper workflow and keeps versions. It would have an audit trail. Documents could be approved, checked in/out, etc. I know a few clients who would like something similar. It would be great just to import thousands of documents from a shared drive and get them indexed with duplicates killed. I'd like to be super clear about how and where the documents are being stored, so that maintenance and backups are straightforward. My Google/Twitter searches lead back to the same tired and vague web pages pushing what look like expensive and custom-made solutions. Some might be very good, I suppose, but it's darn hard to tell. I don't mind a hosted package, but all in all I don't think something like Google Docs, as good as it is now, will work. There are too many quirks and missing features (compared to Office). Being able to work directly with the common Office file formats is important. I noted a similar-sounding question asked here back in August, but it didn't seem to turn up many solutions that I could easily and quickly apply. Also, there could have been some changes since then, so I feel it's worth asking.

    Read the article

  • What hardware would I need (approx) to run ESXi server?

    - by mr.b
    Hi, I am considering purchasing off-the-shelf commodity hardware in order to build a server that will host virtual machines using ESXi. The intended purpose for this server is NOT mission-critical tasks. It will have to run perhaps 20-50 Windows XP/Vista/7 virtual machines (in total, but closer to the 20 figure). Each guest would need 1-2 GB of RAM, and probably two to three times more disk space than the guest OS needs with a clean install and all updates applied (that would be around 6-8 GB for XP, and I believe closer to 10-15 GB for Win7). Those guests will act as a test ground for a new product, network management software, so the guests will idle most of the time once initially loaded, but if I give them some task to complete, they should be able to perform reasonably well. Now, from what I have learned: CPU is usually not much of an issue (6 cores would do it), and memory should not be lacking, but it doesn't have to be the sum of all guests because of overcommitment. That leads me to I/O, which, as it seems, is the bottleneck. Since I have very little experience with ESXi (and ESX, too), I'd like to ask: How much memory could I save by overcommitment, and how does it affect performance? Is a 6-core CPU enough to run the system described above? Would it be possible to run the entire server off two (or even one) SSDs to host the system virtual disks, with a few additional HDDs (2-3) in RAID 0 used as secondary storage? I read somewhere that ESXi allows something like a "master image", essentially a virtual machine that is "deployed" many times, so that disk space can be saved by storing only the differences for specific guests instead of copying around whole virtual disks. Is this true, and how can it help me? Are there any other things I need to take into consideration when building this off-the-shelf solution? I should probably mention that I'm fully aware of issues like SPOF regarding the power supply, RAID 0, etc., but since it's only a testing ground and not a production system, that's not so important for me. Thanks, B.
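
    A back-of-the-envelope memory sketch for the ~20-guest case, assuming 1.5 GB per guest on average and that page sharing on mostly idle, identical Windows guests reclaims somewhere around a quarter to a third of that (the reclaim figure is an assumption, not a measurement):

        nominal guest RAM        : 20 guests x 1.5 GB     = 30 GB
        after page sharing       : 30 GB x 0.67 .. 0.75   = 20 .. 22.5 GB
        hypervisor + per-VM overhead                      ~  2 ..  3  GB
        physical RAM to aim for                           ~ 24 GB (anything more is headroom)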

    Read the article

  • Deploying multiple identical copies of a virtual machine for compute tasks

    - by Reid
    I have a compute task which has a large number of library dependencies. I would like to deploy it on some of my company's large Linux clusters, where I do not have root. I could probably track down, compile, and install the right versions of all the libraries, but this looks to be quite tedious and would have to be repeated if I deployed it again somewhere else. On the other hand, it's pretty easy to install on current Ubuntu. This led me to wonder about a virtual machine approach. Could I put together a virtual machine which booted up, ran the computation (with parameters from and results to the host), and then shut down? In other words, I'd like a command like this that I could run on the host: $ ./run-vm --ram N --task /path/on/host/foo.sh --results /another/host/dir/ This would boot the VM, run foo.sh, and put the (relatively small) results of the computation in /another/host/dir/. It's important to start up many instances of the VM simultaneously, both on a single node and multiple nodes of the cluster. So it would be nice if I didn't have to make many copies of the VM virtual disk and metadata. As the task instances are completely independent, the VMs would not need any network support once deployed, or any outside communications beyond reading and writing the host filesystem. Is this possible, and if so, how might I go about doing it? Are there assumptions I've made above which are bogus?
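
    One way this sort of run-vm wrapper is commonly put together is QEMU/KVM with copy-on-write overlays, sketched here under the assumption that qemu is available on the cluster nodes and that base.qcow2 is the prepared Ubuntu image; every path, name and flag value below is illustrative:

        # per-instance overlay: a small file holding only the differences from the base image
        qemu-img create -f qcow2 -o backing_file=base.qcow2,backing_fmt=qcow2 run-42.qcow2
        # boot it headless, exporting a host directory to the guest over virtio-9p;
        # inside the guest, foo.sh mounts the tag, writes its results there, then powers off
        qemu-system-x86_64 -m 2048 -nographic \
            -drive file=run-42.qcow2,if=virtio \
            -virtfs local,path=/another/host/dir,mount_tag=results,security_model=mapped-xattr,id=results

    Inside the guest the share is mounted with something like "mount -t 9p -o trans=virtio results /results". Without access to /dev/kvm the same command still runs in pure emulation, only much more slowly, which may matter on clusters where you cannot get added to the kvm group.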

    Read the article

< Previous Page | 345 346 347 348 349 350 351 352 353 354 355 356  | Next Page >