Search Results

Search found 22449 results on 898 pages for 'complete pc backup'.

  • rsnapshot schedule overlapping, help with backup schedule

    - by Znarkus
    Hello, I have the following configuration.

    rsnapshot.conf:

        interval  halfhourly  4
        interval  hourly      6
        interval  twohourly   12
        interval  daily       7
        interval  weekly      4

    crontab:

        0,30 * * * *   /usr/bin/rsnapshot halfhourly >> /var/log/rsnapshot.halfhourly.log 2>&1
        5    * * * *   /usr/bin/rsnapshot hourly     >> /var/log/rsnapshot.hourly.log 2>&1
        10 */2 * * *   /usr/bin/rsnapshot twohourly  >> /var/log/rsnapshot.twohourly.log 2>&1
        15   3 * * *   /usr/bin/rsnapshot daily      >> /var/log/rsnapshot.daily.log 2>&1
        20   6 * * MON /usr/bin/rsnapshot weekly     >> /var/log/rsnapshot.weekly.log 2>&1

    Only halfhourly is running correctly now. hourly spits out this error:

        rsnapshot encountered an error! The program was invoked with these options:
        /usr/bin/rsnapshot hourly
        ----------------------------------------------------------------------------
        ERROR: Lockfile /var/run/rsnapshot.pid exists and so does its process, can not continue

    To me it seems like my 5-minute gap between halfhourly and hourly is too small. Is this configuration crazy? I like having backups every thirty minutes; that will probably save my ass some day. Please help me make a decent backup schedule that doesn't clog up the system but creates frequent enough backups. Thank you.
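
    One possible way to stop the jobs from tripping over each other, sketched here rather than tested: serialise every rsnapshot invocation behind a single advisory lock using flock(1) from util-linux (assuming it is installed; the lock file path below is arbitrary). Each job then waits for the previous one to finish instead of dying on rsnapshot's own lockfile. Only the halfhourly run actually transfers data; the higher intervals mostly rotate existing snapshots, so they finish quickly once they get their turn.

        0,30 * * * *   flock -w 1500 /var/lock/rsnapshot.cron /usr/bin/rsnapshot halfhourly >> /var/log/rsnapshot.halfhourly.log 2>&1
        5    * * * *   flock -w 1500 /var/lock/rsnapshot.cron /usr/bin/rsnapshot hourly     >> /var/log/rsnapshot.hourly.log 2>&1
        10 */2 * * *   flock -w 1500 /var/lock/rsnapshot.cron /usr/bin/rsnapshot twohourly  >> /var/log/rsnapshot.twohourly.log 2>&1
        15   3 * * *   flock -w 1500 /var/lock/rsnapshot.cron /usr/bin/rsnapshot daily      >> /var/log/rsnapshot.daily.log 2>&1
        20   6 * * MON flock -w 1500 /var/lock/rsnapshot.cron /usr/bin/rsnapshot weekly     >> /var/log/rsnapshot.weekly.log 2>&1

    The -w 1500 makes a waiting job give up after 25 minutes rather than queue forever; widening the offsets (e.g. hourly at :15 instead of :05) reduces how often the wait is needed at all.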

    Read the article

  • System State Backup Retention Policies

    - by isoscelestriangle
    I was wondering if there was a general consensus on how long to keep system state backups. I am trying to reevaluate our current backup process, and trying to get a good handle on our current storage requirements. Our current setup involves tapes and sending backups offsite with Barracuda Networks. We have been doing our system state backups with Barracuda now, which does full backups daily, leaving our storage requirements growing quite quickly. My boss is a little too gung-ho with backups and wants our system states saved for quite a while. We currently have 5 days of nightlies, 5 weeklies, 3 monthlies, and so on. I think this is quite overkill for system state backups. My boss wants the ability to go back in time to find when an issue appeared, but I don't think that is practical. Many things change in the course of several months. I also think it would be hard not to notice problems with our DCs and other servers for several months. I would think that a previous week's snapshot and the current week's dailies would suffice. Any advice or reading you can point me to? Thanks!

    Read the article

  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
    Hi, I was wondering what in your opinion is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total. Three of those computers are Macs; the rest are Windows (1 Vista, 4 Win7 and 2 XPs). I'm very open to what the method should be, but you should also consider the following: very limited resources, and quite "small" bandwidth (4 MBs for all of us download, 0.4 MBs upload - yep, that's it - though this might get a little bit better). One of the main things to back up would be the mail. Considerations: all Windows computers use Outlook, mainly 2003, and there is one Mac that uses Outlook too (for Mac of course - not 2011 yet). We also have to back up the files: not a huge amount, very few very big files, very organized (by machine). What I would like is to hear your opinions as to which would be the best method (or combination of methods - preferably one, of course) considering the above. We are not sure what we need and I'm open to suggestions, though an online (cloud-based) application would be great; remember that the bandwidth is unbearable. Last thing to consider is that we would like to do weekly backups (unless the method is very easy, of course). Thanks in advance!! I tried to be as specific as possible, but if anything is needed I'll gladly update, please ask for any clarification needed! Please avoid any answers like "upgrade all to Windows 7 and throw away your Macs" :) Ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it due to a lot of circumstances.

    Read the article

  • How should I configure backup of my server?

    - by ed209
    I have just rented a dedicated server. If it helps, this is the config I have:

        CPU1: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (Cores 8)
        RAM: 15975 MB
        Disk /dev/sda: 120.0 GB (114 GiB) - doesn't contain a valid partition table
        Disk /dev/sdc: 3000.6 GB (2861 GiB) - doesn't contain a valid partition table
        Disk /dev/sdb: 3000.6 GB (2861 GiB) - doesn't contain a valid partition table

    /dev/sda is a 120 GB SSD. This is where I have Ubuntu/LAMP installed; it's the drive that will run my site. With the account I got two other drives of 3000 GB each, which I really don't need but they came with the account. I figured I could use these to back up my main 120 GB drive. So a couple of things I wondered were: Should I use these for backups? How should I back up? The data I want to back up is a user uploads directory full of images, and the database. Everything else is either in a code repo or backed up some other way. For example, it would be nice to know there is a disk image of the 120 GB drive somewhere that I can copy over should there be any problems, but equally I don't mind doing a fresh install of all the software and copying over just the images and database dump. Thanks for your advice! (Also, happy to not use the two other drives and back up elsewhere if it's more sensible.)
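
    Not an authoritative recipe, but a minimal sketch of the kind of nightly job that would cover the two things mentioned (the uploads directory and the database), assuming one of the 3 TB drives is partitioned and mounted at /backup; the database name, credentials and site path below are placeholders:

        #!/bin/bash
        # nightly-backup.sh -- dump the database and mirror the uploads directory
        set -e
        BACKUP=/backup                         # one of the 3 TB drives, mounted here
        DATE=$(date +%Y-%m-%d)
        mkdir -p "$BACKUP/db" "$BACKUP/uploads"

        # 1. compressed, dated database dump
        mysqldump -u backupuser -p'secret' mysitedb | gzip > "$BACKUP/db/mysitedb-$DATE.sql.gz"

        # 2. incremental copy of the user uploads directory
        rsync -a --delete /var/www/mysite/uploads/ "$BACKUP/uploads/"

        # 3. keep a month of dumps
        find "$BACKUP/db" -name '*.sql.gz' -mtime +30 -delete

    Run it from cron, e.g. "0 4 * * * /root/nightly-backup.sh". The second 3 TB drive could hold an occasional image of the SSD (or simply a second copy of the same data), but since all three drives sit in the same machine, an off-site copy of at least the dumps is still worth having.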

    Read the article

  • Need Help Accessing the Vista Wampserver localhost from Virtual PC 2007 running an XP VM.

    - by Reg
    (I had posted this on Stack Overflow but it was suggested there that I post it here instead.) I have a Vista laptop on which I'm running WampServer. I have Virtual PC 2007 set up with Windows XP running in the VM. My goal is to be able to use the XP VM to run IE6 to view the localhost served by the Vista WampServer. I'm not interested in having the XP VM have any access to the internet - only to my Vista WampServer's localhost. The Vista WampServer works fine. As suggested in a blog I read, I installed the loopback adapter on Vista, set the loopback address to 192.168.21.1 and set the XP VM's IP to 192.168.21.2. I am able to successfully ping the Vista loopback adapter from the XP VM. I've turned WampServer to "server online", and I've disabled the firewalls on both the Vista host and the XP VM. But for some reason, I still can't seem to get the virtual XP to see the localhost on the Vista WampServer. I've tried using the Vista //name, and I've tried the IP 192.168.21.1 directly and with the port. For whatever it's worth, I'm not able to see anything under the XP VM's network places (though I don't know if I'm supposed to be able to see anything). So at this point I'm stuck and I'm still not sure how to get this XP VM to "talk" to my Vista WampServer localhost. Any advice on how to fix this problem is much appreciated. Thanks in advance for your help. -R
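
    One thing worth double-checking if the VM can ping 192.168.21.1 but the page never loads: WampServer's Apache only allows requests from 127.0.0.1 until it is put online, and "Put Online" is supposed to flip that, but it is easy for the directive to be left behind. A sketch of the relevant section, using Apache 2.2 syntax (the directory path depends on the WampServer version installed):

        # httpd.conf -- inside the <Directory "c:/wamp/www/"> block
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1
        Allow from 192.168.21.0/24   # let the XP VM on the loopback network in

    After editing, restart Apache and browse to http://192.168.21.1/ from the VM; the Vista machine name won't resolve across the loopback adapter unless it is added to the XP VM's hosts file.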

    Read the article

  • Developer's PC - worth getting more than 8GB RAM?

    - by Borek
    I'm building a developer PC and am wondering whether to get 8GB or 12GB of RAM. It's a Core i7-860 system, i.e. a socket 1156 motherboard with 4 slots for RAM sticks, dual channel, usually up to 16GB (as opposed to socket 1366 boards where 6 banks / triple channel are used). 8GB would be cheaper to get, especially because the price per GB is lower with 4x2GB compared to 2x4GB. Also the availability of 4GB DIMMs is worse here where I live; those are the main practical advantages of 8GB. (Edit: I should have stressed the price difference more - in the e-shop I'm buying from, the difference between 12GB and 8GB is so big that I could almost buy a whole new netbook for it.) However, I understand that more RAM can never do harm, which is the point of this question - how much of a difference will 12GB make as opposed to 8GB? Honestly, I've always been on 3.2GB systems (4GB but a 32-bit system) and never felt much pain from having too little memory - of course there could be more, but for instance compiler performance was usually held back by slow I/O or by not utilizing multiple cores on my CPU. Still, I'm not questioning that 8GB will be useful; however, I'm not sure about the additional 4GB difference between 8 and 12 gig. Does anyone have experience with 8GB / 12GB systems? The software I usually run all the time: Visual Studio or Eclipse (both should be fine with ~2GB RAM, after that I feel their performance is I/O bound); Firefox (it can never have enough RAM, can it? :); Office (~500MB RAM should be enough); ... and then some smaller apps like Skype, other browsers, some background services etc.

    Read the article

  • EEE PC Keyboard malfunctioning - Ctrl key "sticks" after 10 seconds

    - by DWilliams
    I was given an Eee PC belonging to a friend of a friend to fix. The keyboard did not appear to work at all. I spent a while testing various things, blowing the keyboard out, checking for damage, and so on. Nothing appeared to be physically wrong. At first I noticed that the keyboard appeared to work just fine for 10 seconds (on average, sometimes more, sometimes less) after being powered on. It had been restored to the factory-default Xandros installation with no user set up, so I couldn't get in to mess with things since I couldn't type to create a user. I made an Ubuntu live USB to boot it from, and managed to get the boot order changed to boot from USB in the ~10 seconds of working keyboard I had (I don't think I've ever had to rush around BIOS menus that quickly). After I got Ubuntu up on it, I played around a bit more and determined that apparently the Ctrl key is stuck down (not literally, but it's on all the time). If I open gedit, pressing the "o" key brings up the open dialog, "s" opens the save dialog, and all other behaviour is what you would expect to see if you were holding down the Ctrl key. The only exception that I noticed is the "9" and "0" keys - they function normally. Having figured that out, I made a Xandros user with a name/password consisting of 9s and 0s. I couldn't find any options in Xandros that could potentially be helpful. I'm not familiar with Eee PCs. Is it safe to assume that the keyboard is simply dead, or could there be another problem? I don't want to purchase another keyboard for him if that isn't going to fix the problem. The netbook doesn't show any obvious signs of damage, but the owner is a biker and very often has it with him on the road, so it's been subjected to a good bit of vibration.

    Read the article

  • Eee PC Seashell series netbook screen is cut off at bottom no matter the resolution

    - by Yzmir Ramirez
    I have an Eee PC 1015PE Seashell netbook running Windows 7 Home Premium with an Intel Graphics Media Accelerator 3150 (8.14.10.2230) and a "Generic Non-PnP Monitor" detected. I tried: changing the resolution (Control Panel -> Appearance and Personalization -> Display -> Screen Resolution) to 1024x768; updating the video driver (to 8.14.10.2230); uninstalling the driver and rebooting; pressing the Windows key + "-" (magnifier); pressing Ctrl + mouse scroll, which only resizes the desktop items; pressing Fn + F4, which shows 1024x600 (which I think is what I should be using, but nothing happens); EDIT: changing from Landscape to Portrait, and then it works; attaching an external monitor - when I extend or set it as the desktop it works, but only on the external monitor (it shows up as "Generic PnP Monitor" in Device Manager). Basically the bottom inch of my desktop is off-screen, hiding my taskbar, but my widgets are in their proper position (the taskbar is not set to auto-hide). Pressing Ctrl + Esc shows the Start menu but it's cut off. I'm pretty sure I should be using the 1024x600 resolution; any advice? What's odd is that this only started happening recently. EDIT 2: I have screenshots showing the problem (the window resized to fit; the Start menu open and cut off; a window maximized and scrolled down, with no Start menu visible). I downgraded my graphics driver to one I downloaded from the Intel Download Center for the Graphics Media Accelerator 3150 (now 8.14.10.1972) and now my "Generic Non-PnP Monitor" is detected as "Digital Flat Panel (1024x768 60Hz)".

    Read the article

  • PC freezes after repeated clicking noises

    - by Péter Török
    I have an oldish PC (Athlon XP 2200, WinXP). Every time I switch it on, I hear a loud "click" first of all, and when it is shut down, a click is again the last sound I hear. Lately it has started to behave erratically: it started clicking loudly in the middle of a session - first only once in a while, then repeatedly for several seconds in a row, and finally it froze completely. We were not doing anything in particular at the time, usually just browsing the web. This has already happened twice within a few weeks. We typically use it only in the evenings, so when the freeze happened I just decided it was time for bed. When it was started up the next day, everything looked fine. Any hints on what could be the culprit? Could it be caused by an ageing cooling fan, or dust that has accumulated inside the case? We are backing up all the data stored on it, and will then open the case to look inside, but I thought it would be good to get some background info first.

    Read the article

  • Desktop PC does not power up on power button

    - by hIpPy
    When I press the power button on my desktop, it does not power up completely. Before I press the power button, I see lights on the motherboard and everything looks normal. On pressing the power button, the fans on the CPU, graphics card and motherboard start to spin a little for a second or two and then they stop. There are no beeps during this process. It has been doing this for a while now, but it used to start up after a few tries. Once it starts up I have no issues at all, such as random shutdowns, so it is not an issue with the OS. I'm just guessing here, but it seems as if the PSU (Antec TP2-550ATX) is dying and no longer has enough power. It's an old desktop assembled in 2005, but I have maintained it well. Any ideas? Please help. Thanks. Below is the complete configuration.

        DFI LAN-Party UT NF4 Ultra-D 6/23 {6.70}, Evercool EC-VC-RE 41/47C
        AMD Opteron 170 2.0GHz {1.3.2.16} 1.312V 36/41C, ThermalRight SI-120, Panaflo 120×38mm
        OCZ Platinum 2×1GB 200MHz 2.66V 3-3-2-7 1T
        XFX 7800GTX 256MB 475/1250MHz {91.31}, Zalman VF900 Cu led 41/56C
        WD Caviar 320GB 7200RPM 16MB SATA 3Gb/s
        Antec TP2-550ATX
        Antec P180
        WinXP SP2 KB896256
        Logitech MX310, Razer Mantis Speed
        BenQ FP91G+ 19" LCD 8ms DVI
        Creative Audigy2 ZS {4.42}
        BenQ DW1640
        Logitech z-5300e 5.1 280W

        Legend: driver versions: {}; user settings: []; voltage: V; wattage: W; temperature: C (Celsius) min/max

    Read the article

  • Need help toubleshooting PC

    - by brux
    I have had problems since my dog peed on my computer. Problem: it loads Windows fine, then at random intervals from 5 minutes to 30 minutes it restarts itself. There is nothing in the event log such as errors, no BSOD, just a cold restart. After restarting - sometimes - it POSTs and then restarts itself at the end of POST. It will do this many times and then finally load Windows. The cycle then begins again; it will restart eventually. What I have done: I thought it was the HDD at first, since this is the only part of the computer which actually got wet with any fluid (the case is off the PC and the dog peed down the front where the HDD is located). SeaTools, the Seagate HDD tool, found errors when I ran it inside Windows, so I ran it in DOS mode from a bootable USB stick. It found the same number of errors and fixed them all. I ran the scan again and it said "Good". I loaded Windows and ran the scan and it also said "Good" there. So the HDD appears to be fine, but the problem persists: random restarts. What else could this be? I have taken the computer apart and cleaned everything, and also taken the PSU apart and cleaned it thoroughly. The problem still persists; what should my next steps be?

    Read the article

  • How Windows 8's Backup System Differs From Windows 7's

    - by Chris Hoffman
    Windows 8 contains a completely revamped backup system. Windows 8's File History replaces Windows 7's Windows Backup - if you use Windows Backup and upgrade to Windows 8, you'll find quite a few differences. Microsoft redesigned Windows' backup features because less than 5% of PCs used Windows Backup. The new File History system is designed to be simple to set up and to work automatically in the background. This post will focus on the differences between File History and the Windows Backup feature you may be familiar with from Windows 7 - check out our full walkthrough of File History for more information.

    Read the article

  • Software developer needs Validation for VA Chap 31 to purchase Macbook Pro vs. PC [closed]

    - by David
    I am currently attending college on a software development path, working towards my BS thanks to VA Chapter 31. My old original MacBook Pro is near dead and no longer upgradable on the software or hardware side. The VA has offered to purchase a PC laptop for me (because my syllabus says a computer is required), but I do not want to go backwards. I have a lot invested in OS X software and Mac peripherals, not to mention I prefer to program in an Apple environment. PC vs. Mac costs are so drastically different that I must validate my request for a new MacBook Pro. In my request to the VA I stated the above and some other points, but they requested more validation. Can anyone recommend issues, reasons, etc. to help me validate this purchase by the VA for school? Thanks in advance for your help, David

    Read the article

  • Auto backup mysql database to dropbox [closed]

    - by Rob
    Is it possible to automatically back up my database to Dropbox? If so, how can I do it? The key criteria it needs to meet: be automatic; be Mac-compatible; run weekly; sync with Dropbox (http://www.dropbox.com) automatically; be able to back up several databases from several websites; be free... or relatively cheap; and come with a guide on how to set up the solution. UPDATE: I've managed to set up an automatic weekly backup using a cron job:

        mysqldump -u username -pMyPassword Mydatabase > backup-file.sql

    That is saving the backup to my hosting space. It's a start but isn't ideal - how can I save that backup to a folder on my computer? Automatically, of course.
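
    A minimal sketch of the cron-plus-Dropbox approach, assuming the hosting accounts allow remote MySQL connections and that the Dropbox client is already running on the Mac (host names, users and database names below are placeholders):

        #!/bin/bash
        # db-to-dropbox.sh -- dump each site's database straight into the local Dropbox folder
        BACKUP_DIR="$HOME/Dropbox/db-backups"
        DATE=$(date +%Y-%m-%d)
        mkdir -p "$BACKUP_DIR"

        # one line per site/database
        mysqldump -h site1.example.com -u dbuser1 -p'password1' site1db | gzip > "$BACKUP_DIR/site1db-$DATE.sql.gz"
        mysqldump -h site2.example.com -u dbuser2 -p'password2' site2db | gzip > "$BACKUP_DIR/site2db-$DATE.sql.gz"

        # stop old dumps from eating the Dropbox quota
        find "$BACKUP_DIR" -name '*.sql.gz' -mtime +60 -delete

    A weekly crontab entry such as "0 3 * * 0 $HOME/bin/db-to-dropbox.sh" covers the scheduling. If the hosts don't accept remote MySQL connections, the existing cron job on the hosting side can keep producing backup-file.sql and a local script can simply scp or rsync that file down into the Dropbox folder instead.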

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ THE UPDATES AT THE BOTTOM. THANKS! ;)

    Environment info (all Windows): 2 sites; 30 servers at site #1 (3 TB of backup data); 5 servers at site #2 (1 TB of backup data); an MPLS backbone tunnel connecting site #1 and site #2.

    Current backup process: Online backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1 TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks. Off-site backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, which gets picked up by our off-site storage company. Obviously we rotate two tape libraries: one is always here and one is always there.

    Requirements: Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa. A software-based solution, as hardware options have been too pricey (e.g. SonicWall, Arkeia). Agents for Exchange, SharePoint, and SQL.

    Some ideas so far: Storage: a DroboPro at each site with an initial 8 TB of storage (these are expandable up to 16 TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too. Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication. Server: because there is no more need for a SCSI adapter (for the tape drive) we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

    Problems: When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this, each of those huge files will go across the wire every week because they change every day.

    And finally, the question: Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long.

    UPDATE 1: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. I prefer a GUI-based product and I don't mind shelling out a few bones if it works. Please, only answers that meet the above criteria. If you don't think one exists or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone.

    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!

    Read the article

  • Flash AS3: (VideoEvent.COMPLETE, completePlay) - listener is triggered before video is completed

    - by Tevi
    Hello, I have a Flash video using the standard FLVPlayback component that comes with Flash. I'm using ActionScript 3 to modify the appearance and set up an event listener. I've set it up to go to a new URL using ExternalInterface when the video completes playing. The URL is set in a variable using SWFObject. In only a few instances (3 people out of 50 - tested using Amazon Turk), people reported being taken directly to the new URL before the video even started playing. It's difficult to reproduce the issue, but it did happen to me once. It doesn't have anything to do with the cache, since it has been reported by people going to the URL for the first time. Here's the URL of the video: http://www.partstown.com/is-bin/INTERSHOP.enfinity/WFS/Reedy-PartsTown-Site/en_US/-/USD/ViewStaticPage-UnFramed?page=tourthetown Here's the code:

        import flash.external.*;
        import fl.video.*;

        var myVideo:FLVPlayback = new FLVPlayback();
        var theUrl:String = this.loaderInfo.parameters.urlName;
        var theScript:String = this.loaderInfo.parameters.scriptName;

        myVideo.source = this.loaderInfo.parameters.videoPath; // "partstown.flv"
        myVideo.skin = this.loaderInfo.parameters.skinPath;    // "SkinUnderPlayStopSeekMuteVol.swf"
        myVideo.skinBackgroundColor = 0xAEBEFB;
        myVideo.skinBackgroundAlpha = 0.5;
        myVideo.width = 939;
        myVideo.height = 660;

        myVideo.addEventListener(VideoEvent.COMPLETE, completePlay);

        function completePlay(e:VideoEvent):void
        {
            myVideo.alpha = 0.2;
            ExternalInterface.call(theScript);
        }

        addChild(myVideo);

    Why would the listener be triggered before the video completes? How can I fix it? Thanks!

    Read the article

  • The subset-sum problem and the solvability of NP-complete problems

    - by G.E.M.
    I was reading about the subset-sum problem when I came up with what appears to be a general-purpose algorithm for solving it:

        (defun subset-contains-sum (set sum)
          (let ((subsets) (new-subset) (new-sum))
            (dolist (element set)
              (dolist (subset-sum subsets)
                (setf new-subset (cons element (car subset-sum)))
                (setf new-sum (+ element (cdr subset-sum)))
                (if (= new-sum sum)
                    (return-from subset-contains-sum new-subset))
                (setf subsets (cons (cons new-subset new-sum) subsets)))
              (setf subsets (cons (cons element element) subsets)))))

    "set" is a list not containing duplicates and "sum" is the sum to search subsets for. "subsets" is a list of cons cells where the "car" is a subset list and the "cdr" is the sum of that subset. New subsets are created from old ones in O(1) time by just cons'ing the element to the front. I am not sure what the runtime complexity of it is, but it appears that with each element "sum" grows by, the size of "subsets" doubles, plus one, so it appears to me to at least be quadratic. I am posting this because my impression before was that NP-complete problems tend to be intractable and that the best one can usually hope for is a heuristic, but this appears to be a general-purpose solution that will, assuming you have the CPU cycles, always give you the correct answer. How many other NP-complete problems can be solved like this one?

    Read the article

  • PC runs very slowly for no apparent reason

    - by GalacticCowboy
    I have a Dell Latitude D820 that I've owned for about 2.5 years. It is a Core 2 Duo T7200 2.0GHz, with 2 GB of RAM, an 80 GB hard drive and an NVidia Quadro 120M video card. The computer was purchased in late November of 2006 with XP Pro, and included a free upgrade to Vista Business. (Vista was available on MSDN but not yet via retail, so the Vista Business upgrades weren't shipped until March of '07.) Since we had an MSDN subscription at the time, I installed Vista Ultimate on it pretty much as soon as I got it. It ran happily until sometime in the spring of 2007 when Media Center (which I had never used except to watch DVDs) started throwing some kind of bizarre SQL (CE?) error. This error would pop up at random times just while using the computer. Furthermore, Media Center would no longer start. I never identified the cause of this error. I had the Vista Business upgrade by this time, so I nuked the machine, installed XP and all the drivers, and then the Vista Business upgrade. Again, it ran happily for a few months and then started behaving badly once again. Vista Business doesn't have Media Center, so this exhibited completely unrelated symptoms. For no apparent reason and at fairly random times, the machine would suddenly appear to freeze up or run very slowly. For example, launching a new application window (any app) might take 30-45 seconds to paint fully. However, Task Manager showed very low CPU load, memory, etc. I tried all the normal stuff (chkdsk, defrag, etc.) and ran several diagnostic programs to try to identify any problems, but none found anything. It eventually reached the point that the computer was all but unusable, so I nuked it again and installed XP. This time I decided to stick with XP instead of going to Vista. However, within the past couple of months it has started to exhibit the same symptoms in XP that I used to see in Vista. The computer is still under Dell warranty until December, but so far they aren't any help unless I can identify a specific problem. A friend (partner in a now-dead business) has an identical machine that was purchased at the same time. His machine exhibits none of these symptoms, which leads me to believe it is a hardware issue, but I can't figure out how to identify it. Any ideas? Utilities? Seen something similar? At this point I can't even identify any pattern to the behavior, but would be willing to run a "stress test"-style app for as long as a couple days if I had any hope that it would find something. EDIT July 17 I'm testing jerryjvl's answer regarding the video card, though I'm not sure it fully explains the symptoms yet. This morning I ran a video stress test. The test itself ran fine, but immediately afterward the PC started acting up again. I left ProcExp open and various system processes were consuming 50-60% of the CPU but with no apparent reason. For example, "services.exe" was eating about 40%, but the sum of its child processes wasn't higher than about 5%. I left it alone for several minutes to settle down, and then it was fine again. I used the "video card stability test" from firestone-group.com. Its output isn't very detailed, but it at least exercises the hardware pretty hard. EDIT July 22 Thanks for your excellent suggestions. Here is an update on what I have tried so far. Ran memtest86, SeaTools (Seagate), Hitachi drive fitness test, video card stability test (mentioned above). The video card test was the only one that seemed to produce any results, though it didn't occur during the actual test. 
I defragged the drive (again...) with JkDefrag. I dropped the video card

    Read the article

  • Backup, Migrate or Clone Failing CentOS 4 (LVM)

    - by Hegelworm
    I've been running a BlueQuartz CentOS 4 system (Nuonce.net distro) for a few years now and although the hard drive (Deskstar) has always been a bit noisy, on a few recent occasions I've heard it having trouble spinning up. Basically, I want to clone this drive to a similar sized one (80 Gig). I've spent many hours reading up on dd, dd_rescue, rsync, Clonezilla and LVM mirroring, yet the sheer number of options and nightmarish accounts has left me frozen - unable to make an informed decision as to how to start. I've made a few attempts. dd failed after about 2 hours, as, although the drives appeared to be identical on the surface (ATA Seagate Barracudas, Thai not Chinese), the destination drive is slightly smaller. My most recent attempt involved using a Debian CD to format the new drive and then rsync-ing everything over and editing the new drive's grub and fstab to reflect the changes. No joy here either, as I hadn't chosen LVM when partitioning the destination drive and it wouldn't boot. As you can probably tell, I'm out of my depth here and a panic-invoking mixture of caution and frustration has prompted me to sign up. The server itself, although not strictly a production environment, has a very specific installation of Festival, LAME and FFmpeg and provides the back-end for a text-to-speech jQuery plugin that I've built over the last 2 years. I'm also planning to rebuild the whole TTS system on Debian, as the existing CentOS system still has PHP4 etc. For now though, I'd really like to just shift everything over to a new drive. As this is my first post, please feel free to lay any house rules on me that I might've overlooked; I've been hovering around StackOverflow for a while now but have only just signed up. Many thanks. Update: Thanks for your responses so far - it's much appreciated and makes me feel a little more confident when I can double-check things here. I had the idea of doing a fresh install of CentOS (from the original disk) on the new drive so the partitions and LVM were all set up correctly (after disconnecting my source drive to prevent painful mistakes). I then booted into rescue mode from the same CD and, to avoid a conflicting label, changed the /boot partition's label to /bootnew using e2label. I then changed the volume group name from VolGroup00 to VolGroup001 using lvm vgrename. I could then boot with both drives in. After mounting the new drive (via its VolGroup001 alias) at /newhd, I rsync-ed over everything I could to the new drive, using -avr switches and backslashes, like mentioned here. I then disconnected my original source drive again, booted from the live CD again, changed the boot partition label back from /bootnew to /boot using e2label and then renamed the volume group back to VolGroup00. I then rebooted and it went through the familiar start-up routine, only to not find a host of files in proc, usr, lib, var etc. The boot did complete but there were lots of red 'FAILS'. I could log in with my existing credentials, but the network was kaput, I couldn't start X (the desktop GUI) and there were also a few (a lot) of error messages pertaining to iptables. Back to square one. I naively thought I'd nailed it. Shall I just buy a bigger hard drive and attempt the dd route? I've read that this can mess with LVM setups and there's the added risk of working on two unmounted drives at once with a low-level tool. Thanks again.
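
    For what it's worth, here is a rough outline (a sketch under assumptions, not a tested recipe) of the rsync-based clone that usually works when the destination disk is slightly smaller than the source. It assumes both drives are mounted from a live/rescue CD, with the old root at /mnt/old and the freshly partitioned, LVM'd and formatted new root at /mnt/new:

        rsync -avxH --numeric-ids /mnt/old/ /mnt/new/
        rsync -avxH --numeric-ids /mnt/old/boot/ /mnt/new/boot/    # if /boot is a separate partition

        # -x stays on one filesystem, so recreate the mount points it skipped
        mkdir -p /mnt/new/proc /mnt/new/sys /mnt/new/dev /mnt/new/tmp
        mknod /mnt/new/dev/console c 5 1
        mknod /mnt/new/dev/null    c 1 3

        # then chroot into /mnt/new, check /etc/fstab and /boot/grub/grub.conf,
        # rebuild the initrd for the new volume group name and reinstall grub
        # on the new drive before rebooting

    The missing files under proc, usr, lib and var in the attempt described above are the sort of thing -x, -H and --numeric-ids are meant to prevent (copying pseudo-filesystems, breaking hard links, remapping owners), though whether that was the actual cause here is only a guess.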

    Read the article

  • How to access previous VHD versions of system backup?

    - by feklee
    Quote from the 31 Oct 2009 TechNet article "Learn more about system image backup":

        During the first backup, the backup engine scans the source drive and copies only blocks that contain data into a .vhd file stored on the target, creating a compact view of the source drive. The next time a system image is created, only new and changed data is written to the .vhd file, and old data on the same block is moved out of the VHD and into the shadow copy storage area. Volume Shadow Copy Service is used to compute the changed data between backups, as well as to handle the process of moving the old data out to the shadow copy area on the target. This approach makes the backup fast (since only changed blocks are backed up) and efficient (since data is stored in a compact manner). When restoring the image, blocks will be restored to their original locations on the source disk. If you want to restore from an older backup, the engine reads from the shadow copy area and restores the appropriate blocks.

    For the last few days, a daily system backup of drive C: to drive E: has been scheduled and run by Windows 7 Backup and Restore. Drive C: currently holds 233 GB of data, which fits comfortably on drive E:, a 1 TB drive with 727 GB of free space remaining. How do I access the previous version of a VHD? I right-clicked on files and folders in E:\WindowsImageBackup and looked for Previous Versions, but I always get: "There are no previous versions available".
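
    The earlier versions aren't stored as separate .vhd files; as the quoted article says, the old blocks live in the shadow copy storage on the target drive. A rough way to get at them from an elevated command prompt (the shadow copy number below is only an example taken from whatever "vssadmin list shadows" reports):

        rem list the shadow copies that system image backup has created on E:
        vssadmin list shadows /for=E:

        rem expose one of them as a folder (the trailing backslash matters)
        mklink /d C:\oldbackup \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy12\

        rem the .vhd as it existed at that point in time is then under
        rem C:\oldbackup\WindowsImageBackup\...

    That older .vhd can then be attached read-only through Disk Management ("Attach VHD") or copied elsewhere; "rmdir C:\oldbackup" removes the link again when done. This is a sketch of the usual shadow-copy trick rather than an officially documented workflow for Windows 7 Backup and Restore.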

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that while the tarballs are sufficient to rebuild from, it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes), and long-term the process isn't sustainable. Each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day. Partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it). Again due to the size issue, I'm leaving out stuff which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway. There must be a better way. So, my question is, how should I be doing this properly? The requirements are: it needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever); it should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job); it should continue to scale with a couple more boxes, slightly more data, etc.; preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing); and an option to produce some kind of DVD/Blu-ray/whatever backup from time to time wouldn't be bad. My first thought was that this kind of incremental backup was what tar was created for - create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.
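
    A minimal sketch of the usual answer to this (hard-link snapshots over rsync), assuming SSH access to the off-site box; the paths and host name below are placeholders. Tools such as rsnapshot or rdiff-backup wrap essentially this idea with rotation and config handling, so they may be the lower-maintenance option:

        #!/bin/bash
        # nightly snapshot: only changed files are transferred; unchanged ones
        # become hard links to the previous day's copy, so each day costs ~1% extra space
        HOST=backup@offsite.example.com
        BASE=/backups/$(hostname)
        TODAY=$(date +%Y-%m-%d)

        rsync -az --delete \
              --link-dest="$BASE/latest" \
              /etc /home /var/backups/mysql \
              "$HOST:$BASE/$TODAY/"

        ssh "$HOST" "ln -sfn $BASE/$TODAY $BASE/latest"

    The MySQL dumps still need to be produced first (e.g. a mysqldump step before the rsync), and the target base directory has to be created once up front on the remote side; expiring old snapshots is then a one-line find or a short loop on the remote box.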

    Read the article

  • Code Complete 2ed, composition and delegation.

    - by Arlukin
    Hi there. After a couple of weeks reading on this forum I thought it was time for my first post. I'm currently rereading Code Complete. I think it's 15 years since the last time, and I find that I still can't write code ;-) Anyway, on page 138 of Code Complete you find this coding-horror example (I have removed some of the code):

        class Employee {
        public:
            FullName GetName() const;
            Address GetAddress() const;
            PhoneNumber GetWorkPhone() const;
            ...
            bool IsZipCodeValid( Address address );
            ...
        private:
            ...
        };

    What Steve thinks is bad is that the functions are loosely related. Or, as he writes, "There's no logical connection between employees and routines that check ZIP codes, phone numbers or job classifications". OK, I totally agree with him. Maybe something like the example below is better.

        class ZipCode {
        public:
            bool IsValid() const;
            ...
        };

        class Address {
        public:
            ZipCode GetZipCode() const;
            ...
        };

        class Employee {
        public:
            Address GetAddress() const;
            ...
        };

    When checking if the ZIP code is valid, you would need to do something like this:

        employee.GetAddress().GetZipCode().IsValid();

    And that is not good with regard to the Law of Demeter (http://en.wikipedia.org/wiki/Law_of_Demeter). So if you'd like to remove two of the three dots, you need to use delegation and a couple of wrapper functions, like this:

        class ZipCode {
        public:
            bool IsValid() const;
            ...
        };

        class Address {
        public:
            ZipCode GetZipCode() const;
            bool IsZipCodeValid() const { return GetZipCode().IsValid(); }
            ...
        };

        class Employee {
        public:
            FullName GetName() const;
            Address GetAddress() const;
            bool IsZipCodeValid() const { return GetAddress().IsZipCodeValid(); }
            PhoneNumber GetWorkPhone() const;
            ...
        };

        employee.IsZipCodeValid();

    But then again you have routines that have no logical connection. I personally think that all three examples in this post are bad. Is there some other way that I haven't thought about? //Daniel

    Read the article

  • Turing-Complete language possibilities?

    - by I can't tell you my name.
    In every Turing-complete language, is it possible to create a working compiler for the language itself which first runs on an interpreter written in some other language and then compiles its own source code (bootstrapping)? A standards-compliant C++ compiler which outputs binaries for, e.g., Windows? A regex parser and evaluator? A World of Warcraft clone? (Assuming the language gets the necessary API bindings, for example OpenGL, and the WoW source code is available.) (Everything here is theoretical.) Let's take Brainf*ck as an example language.

    Read the article
