Search Results

Search found 9847 results on 394 pages for 'cloud backup'.


  • How can I check whether a volume is mounted where it is supposed to be using Python?

    - by Ben Hymers
    I've got a backup script written in Python which creates the destination directory before copying the source directory to it. I've configured it to use /external-backup as the destination, which is where I mount an external hard drive. I just ran the script without the hard drive being turned on (or being mounted) and found that it was working as normal, albeit making a backup on the internal hard drive, which has nowhere near enough space to back itself up. My question is: how can I check whether the volume is mounted in the right place before writing to it? If I can detect that /external-backup isn't mounted, I can prevent writing to it. The bonus question is why this was allowed, when the OS knows that directory is supposed to live on another device, and what would happen to the data (on the internal hard drive) should I later mount that device (the external hard drive)? Clearly there can't be two copies on different devices at the same path! Thanks in advance!
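
    A minimal sketch of such a check, assuming the destination is the /external-backup mount point from the question (os.path.ismount returns False when the path is just an ordinary directory with nothing mounted on it). As for the bonus question: an unmounted mount point is simply a directory on the root filesystem, so writes land on the internal disk and are hidden once the external drive is mounted over it.

        import os
        import sys

        BACKUP_ROOT = "/external-backup"  # the mount point used in the question

        def ensure_backup_volume(path):
            """Abort unless path is an actual mount point, not just an empty directory."""
            if not os.path.ismount(path):
                sys.exit(path + " is not mounted; refusing to write to the internal disk.")

        ensure_backup_volume(BACKUP_ROOT)
        # ...safe to create the destination directory and copy the source tree here...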

    Read the article

  • Why is the Volume Shadow Copy service stopping?

    - by David Mackintosh
    I am running Windows 7 Professional, 64-bit, and a backup-over-the-internet software client which depends on the Volume Shadow Copy service running. Since I installed Service Pack 1 (or rather, didn't object when Windows Update forced Service Pack 1 on me), the backup service is failing to back everything up because VSS isn't running. Most of the time it fails to back up such noise as the Security Essentials database or the Messenger Live contact list -- stuff I really don't care about -- but I don't want to fall into the trap of accepting an Error-state backup as "normal". At the recommendation of the backup software, I have set the VSS service startup mode to Automatic. When I look in the Event Log, System channel, I can see at boot time: "The Volume Shadow Copy service entered the running state." ...and then two or three minutes later: "The Volume Shadow Copy service entered the stopped state." How do I figure out why VSS is stopping? At the suggestion of the backup vendor, I have already followed the steps from http://support.microsoft.com/default.aspx/kb/940184:

        net stop SENS
        net stop EventSystem
        net start EventSystem
        net start SENS
        net stop COMSysApp
        net stop SwPrv
        net stop VSS
        cd /d C:\Windows\system32
        regsvr32 ole32.dll /s
        regsvr32 oleaut32.dll /s
        regsvr32 vss_ps.dll /s
        vssvc /register /s
        regsvr32 /i swprv.dll /s
        regsvr32 /i eventcls.dll /s
        regsvr32 es.dll /s
        regsvr32 stdprov.dll /s
        regsvr32 vssui.dll /s
        regsvr32 msxml.dll /s
        regsvr32 msxml3.dll /s
        regsvr32 msxml4.dll /s
        net start SwPrv
        net start VSS
        net start ProtectedStorage

    ...and per http://support.microsoft.com/kb/940184 I have deleted the key tree HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions. I have also run chkdsk /F and chkdsk /R on both permanent hard disks. (I had a similar problem with another computer -- same OS, same failure, same start point after the SP1 install -- but the problem went away when I forced the Volume Shadow Copy service to Automatic startup rather than Manual. I did not have to resort to following the Microsoft KB instructions.)

    Read the article

  • Making an application draw a surface other than the desktop

    - by Cloud
    Hello, I'm looking for a way to get an application -- any application -- that has been started using ShellExecuteEx or CreateProcess to draw on an offscreen surface such as a bitmap instead of drawing on the desktop. This should include any dialogs (Open, Save, message boxes) that the application invokes. I am familiar with the Windows API, GDI and device contexts; any suggestions would be much appreciated.

    Read the article

  • PHP Websites: Very high IOPS

    - by Khuram
    We are hosting a set of websites on a VM cloud. These sites were previously on a couple of dedicated servers, but to enhance performance we transferred them onto a cloud environment. The cloud has SSD storage, but the provider is now saying that we have very high IOPS and is going to degrade us if we do not do something soon. The PHP sites themselves are good, but they run without any caching. How do I start to debug this? Sincerely, Khuram

    Read the article

  • Bash loop to move directories on a remote host via ssh

    - by I Forgot
    I'm trying to figure out a way to perform the following loop on a remote host via ssh. Basically it renames a series of directories to create a rotating backup. But it's local. I want it to work against directories on a remote host.

        while [ $n -gt 0 ]; do
            src=$(($n-1))
            dst=$n
            if [ -d /backup/$src ]; then
                mv /backup/$src /backup/$dst
            fi
            ((n--))
        done
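
    One possible approach, sketched here under the assumption that key-based ssh login to a hypothetical host named backup-host works: hand the whole loop to the remote shell in a single ssh call and let the exit status report success or failure.

        import subprocess

        HOST = "backup-host"   # hypothetical remote host
        ROTATIONS = 7          # hypothetical value of n

        # The rotation loop runs entirely on the remote side; ssh's exit
        # status (checked by check=True) tells us whether it succeeded.
        remote_script = """
        n=%d
        while [ $n -gt 0 ]; do
            src=$((n-1)); dst=$n
            [ -d /backup/$src ] && mv /backup/$src /backup/$dst
            n=$((n-1))
        done
        """ % ROTATIONS

        subprocess.run(["ssh", HOST, remote_script], check=True)

    The same idea works with a plain `ssh backup-host 'bash -s'` invocation; the wrapper above just makes the remote script and its parameters easy to keep in one place.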

    Read the article

  • Time Machine link error

    - by robinjam
    When attempting to perform a Time Machine backup, the backup appears to proceed as normal, until it finishes copying files at which point it complains that an "error occurred while linking files for" one of my external hard disks. During the previous backup that particular disk was in fact empty, and therefore I can't understand why Time Machine is attempting to link back to it. But alas. I've verified all my disks using Disk Utility and they all appear to be fine. Does anybody know what causes this error, and how I might go about fixing it? Failing that is there a way to force Time Machine to create a brand new backup rather than an incremental one? Thanks in advance!

    Read the article

  • Seriousness of a "Smart" disk error. How long will it last?

    - by Workshop Alex
    I have a 1 TB data disk, and both the BIOS and Windows are reporting a "SMART" error. At least, I get a SMART event, but it doesn't indicate how serious the failure could be. My system is about 6 months old, including the disk, so the warranty will cover the damage. Unfortunately, I lack a second 1 TB disk which I can use to make a full backup. The most important data on this disk is safe, but there's a lot of work data which can be regenerated, although that would cost a lot of time. So I ordered a 1 TB USB disk which will arrive in three days. By then I can make a full backup of the data, and afterwards the disk can crash. But will it live that long? (Well, I won't use the PC as long as I can't make a backup.) How serious is such a SMART event? I know it's serious enough to have the disk replaced, but will it live for another week or could it die any moment?

    Update: I purchased a 1 TB external disk and spent most of the day making a backup of the 1 TB disk. It survived that. I then received a new disk, since it was still under warranty, and replaced the hard disk. Then I had to spend most of a day again putting the backup back. I need to send back the faulty disk and now have an additional external disk, which is always practical. :-) The SMART error did not cause any failures on the original disk. I won't advise ignoring these warnings, but the disk still had enough life in it to last a few more days. (Just make sure you have a good backup.) And oh, the horror of having to make a complete backup of such a huge disk. :-) If your data is important, make sure you have something that supports incremental backups and lots of space. (In my case, the data wasn't very important, just practical to have together on one disk.)

    Read the article

  • Can domain "masking" be set up in BIND/cPanel?

    - by ServerAdminGuy45
    I am supporting a client; let's say he has the domain "acme.com". He registered with GoDaddy and set the name servers to point to his crappy HostGator shared account. He uses cPanel on the HostGator account to set up his subdomains. Is it possible to set up some kind of domain masking so that when someone connects to "application.acme.com", it really forwards to "cloud-solution-provider.com"? I mean the actual domain "cloud-solution-provider.com", because it resolves to different IPs based upon geolocation. For this reason I can't just set application.acme.com to point to the IP that cloud-solution-provider.com resolves to. I want the ability for a user to RDP to "application.acme.com" and be sent to the desktop served by "cloud-solution-provider.com", whatever that IP may be. Perhaps I can have GoDaddy be the nameserver? I have a feeling this would break HostGator, since there is a website at acme.com and shop.acme.com.

    Read the article

  • Restore files from certain increments using Duplicity

    - by luckytaxi
    Given the following backup sets...

        Found primary backup chain with matching signature chain:
        -------------------------
        Chain start time: Tue Jun 21 11:27:26 2011
        Chain end time: Tue Jun 21 11:27:59 2011
        Number of contained backup sets: 2
        Total number of contained volumes: 2
        Type of backup set:          Time:                      Num volumes:
                Full                 Tue Jun 21 11:27:26 2011   1
                Incremental          Tue Jun 21 11:27:59 2011   1

    If I run the following command, it works (1308655646 was converted from Tue Jun 21 11:27:26 2011):

        duplicity --no-encryption --restore-time 1308655646 --file-to-restore ORIG_FILE \
            file:///storage/test/ restored-file.txt

    However, if I run the following command, it restores from the latest set instead:

        duplicity --no-encryption --restore-time 2011-06-21T11:27:26 --file-to-restore \
            ORIG_FILE file:///storage/test/ restored-file.txt

    What am I doing wrong with the time? I prefer the second option only because I don't want to have to do the conversion manually.
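
    A small workaround sketch, assuming the goal is simply to automate the conversion that already works: turn the chain's timestamp into epoch seconds and feed that to --restore-time (whether duplicity's parser also wants a timezone suffix on the ISO form is left open here).

        import time

        def to_epoch(stamp, fmt="%Y-%m-%dT%H:%M:%S"):
            # Interprets the timestamp as local time, matching the chain listing
            return int(time.mktime(time.strptime(stamp, fmt)))

        # Prints 1308655646 when run in the same timezone as the backup host
        print(to_epoch("2011-06-21T11:27:26"))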

    Read the article

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    Hi, I have about 200 sites, each of which has 2 servers running MSSQL (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for ... a lot ... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copy them to the secondary server, restore the backup and log with NORECOVERY, create the endpoints and set the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above. I've seen "Tools to manage SQL 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has any other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love a script that ensures there are matching endpoints (same ports) on both servers, backs up the database, backs up the log, copies the backups to the second server, restores the database and log with NORECOVERY, sets the partners on both servers, and somehow confirms that the databases are linked and synchronized. Well, thanks for reading :)

    Read the article

  • xcopy files and directory

    - by user1044937
    I have folders named "C:\Jobs\job#1", "C:\Jobs\job#2", "C:\Jobs\job#3", etc., with a lot of directories and sub-directories under them. I want to get all the directories under Jobs and xcopy them to C:\Backup. Then I want to xcopy all the files under each job#1, 2, 3, etc. to C:\Backup\job#1\month\*.* and so on. To make it clearer:

        Source dir      = C:\Jobs\job#1\"myfiles&dir"
        Destination dir = C:\Backup\job#1\month\"myfiles&dir"

    then do the next folder:

        Source dir      = C:\Jobs\job#2\"myfiles&dir"
        Destination dir = C:\Backup\job#2\month\"myfiles&dir"

    ...until all folders are backed up. Since the number of job folders keeps increasing, doing it this way means I don't have to add extra code to this script; I only modify the month. Thank you.
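
    If a small script is acceptable in place of raw xcopy, a sketch in Python (3.8+ for dirs_exist_ok); the month label and both root paths are placeholders taken from the question, not a tested layout:

        import shutil
        from pathlib import Path

        SRC_ROOT = Path(r"C:\Jobs")
        DST_ROOT = Path(r"C:\Backup")
        MONTH = "May"  # hypothetical month label, changed for each backup run

        for job in SRC_ROOT.iterdir():
            if job.is_dir():
                # Mirrors files and sub-directories, roughly like xcopy /E
                shutil.copytree(job, DST_ROOT / job.name / MONTH, dirs_exist_ok=True)

    Because the loop just enumerates whatever sits under C:\Jobs, new job folders are picked up automatically without editing the script.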

    Read the article

  • How to get the exit code of a script started in a screen session

    - by Bettina
    Hi folks, I am currently creating a backup script which uses screen to start a backup job with rsync inside a screen session. The backup jobs are started as follows:

        screen -dmS backup /usr/bin/rsync ...

    As soon as the rsync job is finished, the screen session is terminated automatically. To make sure that the backup was successful, I would like to check the exit code of the rsync job, but unfortunately I really don't know how to get the exit code after the screen has terminated. Does someone have a good idea how to automatically check whether the rsync job was successful or not? I already thought about using a temp file, like this:

        screen -dmS myScreen "rsync -av ... ; echo $? > /tmp/myExitCode"

    but this unfortunately does not work. Then I thought about using stderr, like in the example below:

        screen -dmS myScreen "rsync -av ... 2> /tmp/rsync-sterr"

    Neither of my ideas has worked out so far, since stderr is not written when I use the command above. :-( Would be great if someone has a good idea or even a solution. Cheers, Bettina

    Read the article

  • Windows Azure: broken logging after migration to the new SDK 1.3

    - by cloud.dev
    Hi, please help. I've migrated to the new SDK 1.3 (Full-IIS mode). I use the following logging:

        case TraceLevel.Error:
            Trace.TraceError(message);
            break;
        case TraceLevel.Warning:
            Trace.TraceWarning(message);
            break;
        case TraceLevel.Info:
            Trace.TraceInformation(message);
            break;
        case TraceLevel.Verbose:
            Trace.WriteLine(message);
            break;

    It worked fine until I migrated to the new SDK. Now logging works only for worker roles; the web role can log only inside the OnStart method of WebRole.cs, and in all other cases nothing is logged. I understand that Full-IIS mode means different domains, so must I somehow call WaIIS.exe from w3wp.exe, or ...?

    Read the article

  • Hybrid Exchange Online setup with on premise public folders, certificate issues?

    - by exxoid
    We have a hybrid Exchange setup with Exchange Online (v15 tenant) and Exchange 2010 on premise. The hybrid configuration for the most part is working; what I am having an issue with is getting public folders to work for cloud users. I followed the official documentation here (http://technet.microsoft.com/en-us/library/dn249373(v=exchg.150).aspx) and it kind of works. When I am accessing Outlook on a public wifi, I am able to bring up the cloud mailboxes, and on-premise public folders show up in Outlook. When I am accessing email via Outlook as a cloud user on the same LAN as the on-premise Exchange, the cloud user makes the outlook.com connection for the live/AD/archive mailbox but fails to create a proxy connection for the on-premise public folders. The error I get is a certificate mismatch; it seems that when a user on the LAN accesses Outlook/Exchange, it uses a different certificate than when Outlook is launched on a WiFi network. When I look at the Outlook connection information, I see the connection to outlook.com for the AD/live/archive mailbox but no entry for a public folder connection. Our on-premise Exchange is 2010 SP3 with the latest CUs. The client is a domain-joined laptop with Windows 7 and Office 2010 SP2, latest Windows updates applied. Our infrastructure has a working ADFS 3 and DirSync setup for Office 365. My question then is: what do I need to do to make sure that the cloud user launching Outlook on the LAN uses the proper certificate (the wildcard third-party cert, vs. the self-signed certificate which it looks like it may be using during the connection attempt)?

    Read the article

  • Using robocopy and excluding multiple directories

    - by GorrillaMcD
    I'm trying to copy some directories from a server before I restore from backup (my latest backup was corrupt, so I have to use an older one :( ). I'm in the Windows Recovery Environment and have access to the server's file system G:\ and my backup media C:\. But, since I'm more familiar with Linux, I'm having a bit of trouble with the command line in Windows, specifically robocopy. I want to copy multiple directories (maintaining the same directory structure) from G:\ to C:\ while excluding others (namely, the Windows and Program Files folders). I can't figure out the syntax for the /XD option. I was hoping to do something like: robocopy G: C:\backup /CREATE /XD "dir1","dir2", ...

    Read the article

  • How to use rsync when filenames contain double quotes?

    - by wfoolhill
    I am trying to synchronize the content of the directory my_dir/ from /home to /backup. This directory contains a file whose name has a double quote in it, such as to"to. Here is my rsync command:

        rsync -Cazh /home/my_dir/ /backup/my_dir/

    And I get the following message:

        rsync: mkstemp "/backup/my_dir/.to"to.d93PZr" failed: Invalid argument (22)

    For info, rsync works fine when the synchronized filenames contain single quotes, parentheses and spaces. So why does it choke on a double quote? Thanks for any help.

    Read the article

  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example:

        rsync -ax \
            --partial --delete --delete-excluded --inplace \
            --exclude-from=/tmp/temp_excludes \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/2012-06-25

    This works very well as long as I start the process from my normal user account. But as soon as I start the process using sudo, it behaves erratically, meaning that rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E in conjunction with making sure that my sudoers file has the corresponding option set. Well, that didn't work either. So the question is: how can I run rsync using sudo? While the above example only shows a backup of the Users directory, I also need to back up some system files that I can only access as root.

    Read the article

  • Is it possible that a Greasemonkey script can work on one computer but not on another?

    - by plastic cloud
    I'm writing a Greasemonkey script for somebody else. He is a moderator and I am not, and the script will help him do some moderating things. The script works for me, as far as it can work for me (since I am not a mod), but even the parts that work for me are not working for him. I checked his versions of the Greasemonkey plugin and Firefox and he is up to date. The only real difference is that I'm on a Mac and he is on a PC, but I wouldn't think that would be a problem. This is one of the functions that is not working for him. He gets the first and third GM_log messages, but not the second one ("got some (1) ..."):

        kmmh.trackNames = function(){
            GM_log("starting to get names from the first "+kmmh.topAmount+" page(s) from leaderboard.");
            kmmh.leaderboardlist = [];
            for (var p=1; p<=(kmmh.topAmount); p++){
                var page = "http://www.somegamesite.com/leaderboard?page="+ p;
                var boardHTML = "";
                dojo.xhrGet({
                    url: page,
                    sync: true,
                    load: function(response){
                        boardHTML = response;
                        GM_log("got some (1) => "+boardHTML.length);
                    },
                    handleAs: "text"
                });
                GM_log("got some (2) => "+boardHTML.length);
                //create dummy div and place leaderboard html in there
                var dummy = dojo.create('div', { innerHTML: boardHTML });
                //search through it
                var searchN = dojo.query('.notcurrent', dummy).forEach(function(node,index){
                    if(index >= 10){
                        kmmh.leaderboardlist.push(node.textContent); // add names to array
                    }
                });
            }
            GM_log("all names from "+ kmmh.topAmount +" page(s) of leaderboard ==> "+ kmmh.leaderboardlist);
        }

    Does anyone have any idea what could be causing this? EDIT: I know I had to write according to what he would see on his mod screen, so I asked him to copy and paste the source of the pages and so on. Besides that, this part of the script does not depend on being a mod or not. I got everything else working for him; just this function still doesn't, on either of his PCs.

    Read the article

  • How do I find the screen size in a fragment class

    - by thomas.cloud
    I was looking at the posting "Android: How to get screen dimensions" when I was trying to determine the size of the device's screen from within a fragment class. One answer was close to what I needed, but the only code that ended up working for me was:

        WindowManager wm = (WindowManager) getView().getContext().getSystemService(Context.WINDOW_SERVICE);
        Display screen = wm.getDefaultDisplay();

    whereupon I could then use getHeight() or another non-deprecated method. I realize this is exactly the same as the other post, except this way you don't have to define your context on a separate line.

    Read the article

  • What will Time Machine do when

    - by Joel Budgor
    When Time Machine says "I will delete the oldest files first", does it mean this literally? Here is a theoretical example. Source drive: 300 GB, consisting of one 280 GB file and one 1 GB file. Backup drive: 300 GB. The initial backup will back up both files, using 281 GB. If I modify the 1 GB file 21 times, what will Time Machine do when I run out of room on the backup drive: delete the original 280 GB file because it is the oldest file, or delete the oldest version of the file I have modified 21 times? I hope it would delete the oldest version of the file I have modified 21 times, but I want to be sure. Thanks, Joel Budgor

    Read the article

  • Only allow root to change filesystem

    - by Uejji
    The VPS I manage uses a simple hard-link rsync archive for daily backups, saved to a loop file. This is great, because each backup only takes up as much space as what has changed each day, and all user/group permissions are kept. I would like to give users direct access to their home directories in each backup, but I'm worried about intentional or accidental destruction of backup data: as it stands now, users can actually change, destroy or add to backed-up data they originally owned. I've been looking for a way to mount this filesystem similar to an ro mount option, but one that would still allow rw access to root; so far I've had absolutely no luck. In other words, I want users to be able to view and copy their backed-up data without actually being able to change it, and have that data maintain the original permissions. I've got no real preference as far as the filesystem goes, as long as it's a standard Unix filesystem that can preserve permissions, support hard links and deny write access to users without actually stripping the w permission from everything.

    Read the article

  • SQL database dumps failing every night

    - by chaseman36
    Hey guys, I have SQL 2005, and my maintenance plan, which backs up a database to an external storage SAN, has been failing every night. Here is my error:

        Executing the query "BACKUP DATABASE [master] TO DISK = N'\\192.168.x.x\vmbackup\server\dbbackup\master_backup_201004222300.bak' WITH NOFORMAT, NOINIT, NAME = N'master_backup_20100422230002', SKIP, REWIND, NOUNLOAD, STATS = 10" failed with the following error: "Cannot open backup device '\\192.168.x.x\vmbackup\server\dbbackup\master_backup_201004222300.bak'. Operating system error 5 (Access is denied.). BACKUP DATABASE is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

    I googled this error and tried adding permissions on the backup device for NETWORK SERVICE, as recommended at Experts Exchange -- no dice. Does anyone have any ideas?

    Read the article

  • Optimizing JS Array Search

    - by The.Anti.9
    I am working on a browser-based media player which is written almost entirely in HTML5 and JavaScript. The backend is written in PHP, but it has only one job: to fill the playlist on the initial load. The rest is all JS. There is a search bar that refines the playlist, and I want it to refine as the person is typing, like most media players do. The only problem is that this is very slow and laggy, as there are about 1000 songs in the whole program and there are likely to be more as time goes on. The original playlist load is an Ajax call to a PHP page that returns the results as JSON. Each item has 4 attributes: artist, album, file, url. I then loop through each object and add it to an array called playlist. At the end of the loop a copy of playlist is created, called backup. This is so that I can refine the playlist variable when people refine their search, but still repopulate it from backup without making another server request. The method refine() is called when the user types a key into the search box. It flushes playlist and searches each property (not including url) of each object in the backup array for a match against the search string. If there is a match in any of the properties, it appends the information to the table that displays the playlist and adds the object to playlist for access by the actual player. Code for the refine() method:

        function refine() {
            $('#loadinggif').show();
            $('#library').html("<table id='libtable'><tr><th>Artist</th><th>Album</th><th>File</th><th>&nbsp;</th></tr></table>");
            playlist = [];
            for (var j = 0; j < backup.length; j++) {
                var sfile = new String(backup[j].file);
                var salbum = new String(backup[j].album);
                var sartist = new String(backup[j].artist);
                if (sfile.toLowerCase().search($('#search').val().toLowerCase()) !== -1 ||
                    salbum.toLowerCase().search($('#search').val().toLowerCase()) !== -1 ||
                    sartist.toLowerCase().search($('#search').val().toLowerCase()) !== -1) {
                    playlist.push(backup[j]);
                    num = playlist.length-1;
                    $("<tr></tr>").html("<td>" + num + "</td><td>" + sartist + "</td><td>" + salbum + "</td><td>" + sfile + "</td><td><a href='#' onclick='setplay(" + num +");'>Play</a></td>").appendTo('#libtable');
                }
            }
            $('#loadinggif').hide();
        }

    As I said before, for the first couple of letters typed this is very slow and laggy. I am looking for ways to refine this to make it much faster and smoother.

    Read the article

  • Crontab -- scheduling my backups

    - by Garfonzo
    I want to do a backup every Friday night (no, this is not the whole backup routine, just part of it). Each Friday night's backup will not be overwritten until 4 weeks later. So, essentially, I have four revolving backups: week1, week2, week3, and week4. Now, I need the week1 backup script to run every 4 weeks, but I also want week2's script to run every four weeks. I know that I can tell the crontab to execute something every X weeks/days/hours/whatever. However, how do I set it up so that each of these four scripts actually runs on a different week? How do I avoid all 4 scripts running on the same night and then dutifully waiting for weeks only to all run again? Thanks.
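
    One way to sidestep the every-fourth-week arithmetic entirely, sketched here as an assumption rather than the asker's actual routine: keep a single weekly crontab entry (e.g. `0 23 * * 5` for Friday 23:00) that calls a small script which picks the rotation slot itself. The paths, the rsync command and the week1..week4 naming below are all placeholders for the real backup step.

        #!/usr/bin/env python3
        import datetime
        import subprocess

        # Consecutive Fridays get consecutive week numbers, so the slot cycles 1..4
        slot = datetime.date.today().toordinal() // 7 % 4 + 1
        target = "/backups/week%d" % slot   # hypothetical destination layout

        # Placeholder backup command -- substitute the real routine here
        subprocess.run(["rsync", "-a", "--delete", "/data/", target + "/"], check=True)

    Since only one cron entry exists, the four slots can never fire on the same night, and adding or removing a slot is a one-character change in the modulus.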

    Read the article

  • Can I use a Windows 2008 R2 cluster for file redundancy?

    - by JERiv
    I'm researching a server clustering architecture as a redundancy and backup solution for a client, and something that isn't made clear is whether or not I can use server clustering to replace a file server plus backup solution. Forgive my elementary understanding of server clustering, but suppose the following: 2 sites (NJ, CA); identical servers at each site set up as remote-site cluster nodes with Windows Server 2008 R2 Enterprise; services: File, Terminal, AD, and maybe DNS. Will the following be true: files (including data drives) will be synced between the two servers, eliminating the need for third-party backup/mirroring software to sync/back up files? Also, supposing I use roaming profiles with folder redirection, how will client computers in the WAN access their data through the cluster (i.e., will they automatically choose the best route)?

    Read the article
