Search Results

Search found 7618 results on 305 pages for 'backup exec'.

Page 12 of 305

  • How can exec change the behavior of an exec'ed program?

    - by R Samuel Klatchko
    I am trying to track down a very odd crash. What is so odd about it is a workaround that someone discovered and which I cannot explain. The workaround is this small program, which I'll refer to as 'runner':

        #include <stdio.h>
        #include <unistd.h>
        #include <string.h>
        #include <errno.h>

        int main(int argc, char *argv[])
        {
            if (argc == 1) {
                fprintf(stderr, "Usage: %s prog [args ...]\n", argv[0]);
                return 1;
            }
            execvp(argv[1], argv + 1);
            // If exec returns, the program was not found or we
            // don't have the appropriate permission
            fprintf(stderr, "execv failed: %s\n", strerror(errno));
            return 255;
        }

    As you can see, all this program does is use execvp to replace itself with a different program. The program crashes when it is directly invoked from the command line:

        /path/to/prog args                   # this crashes

    but works fine when it is indirectly invoked via my runner shim:

        /path/to/runner /path/to/prog args   # works successfully

    For the life of me, I cannot figure out how having an extra exec can change the behavior of the program being run (as you can see, the program does not change the environment). Some background on the crash: it is happening in the C++ runtime. Specifically, when the program does a throw, the crashing version incorrectly thinks there is no matching catch (although there is) and calls terminate. When I invoke the program via runner, the exception is properly caught. My question: any idea why the extra exec changes the behavior of the exec'ed program?
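
    One way to narrow down a discrepancy like this (a diagnostic sketch, not part of the original post) is to run a tiny probe program both ways, directly and via runner, and diff the two outputs: the environment and resource limits survive exec and are the usual suspects for behavior that changes with the invocation path.

        /* probe.c - print inherited state that can differ between invocations */
        #include <stdio.h>
        #include <sys/resource.h>

        extern char **environ;

        int main(void)
        {
            // Dump the environment exactly as this process inherited it
            for (char **e = environ; *e; e++)
                puts(*e);

            // The stack limit also survives exec and can affect the runtime
            struct rlimit rl;
            if (getrlimit(RLIMIT_STACK, &rl) == 0)
                printf("RLIMIT_STACK: cur=%lld max=%lld\n",
                       (long long)rl.rlim_cur, (long long)rl.rlim_max);
            return 0;
        }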

    Read the article

  • How do I create a read only MySQL user for backup purposes with mysqldump?

    - by stickmangumby
    I'm using the automysqlbackup script to dump my mysql databases, but I want to have a read-only user to do this with so that I'm not storing my root database password in a plaintext file. I've created a user like so:

        grant select, lock tables on *.* to 'username'@'localhost' identified by 'password';

    When I run mysqldump (either through automysqlbackup or directly) I get the following warning:

        mysqldump: Got error: 1044: Access denied for user 'username'@'localhost' to database 'information_schema' when using LOCK TABLES

    Am I doing it wrong? Do I need additional grants for my read-only user? Or can only root lock the information_schema tables? What's going on?
    Edit: GAH, and now it works. I may not have run FLUSH PRIVILEGES previously. As an aside, how often does that happen automatically?
    Edit: No, it doesn't work. Running mysqldump -u username -p --all-databases > dump.sql manually doesn't generate an error, but doesn't dump information_schema. automysqlbackup does raise an error.
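
    A hedged workaround sketch (assuming the data is InnoDB; this is not taken from the thread): skip LOCK TABLES entirely and let mysqldump take a consistent snapshot instead, so the dump user needs little more than SELECT:

        -- minimal read-only dump user ('backup' and its password are placeholders)
        GRANT SELECT, SHOW VIEW ON *.* TO 'backup'@'localhost' IDENTIFIED BY 'password';
        FLUSH PRIVILEGES;

    and then dump with:

        mysqldump -u backup -p --single-transaction --all-databases > dump.sql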

    Read the article

  • Backup / Disaster Recovery, should I store RAR-compressed files?

    - by moraleida
    I'm in the process of recovering files from an accidentally formatted Ext4 partition using Photorec. It had about 300Gb of data, of which I've already got hold of about 30Gb. So far, it seems to me that the recovery of RAR-compressed files has been much more successful than the recovery of individual uncompressed files and ZIP-compressed files - in the sense that a lot of recovered files/zips were unreadable, and pretty much all of the RAR files were intact. Is there such a relation? Are RAR-compressed files really less prone to corruption and thus easier to recover?
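
    If that observation holds, one plausible contributor (an assumption, not something established in the question) is RAR's optional recovery record, which stores parity data inside the archive and lets the rar tool repair damaged blocks after the fact. A sketch of creating and later repairing such an archive:

        # add a 5% recovery record when archiving
        rar a -rr5% backup.rar /path/to/files

        # later, attempt repair of a damaged copy using the recovery record
        rar r backup.rar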

    Read the article

  • Time Machine for Windows

    - by Kevin L.
    A simple Google search for "Time Machine for Windows" results in a flurry of different little apps. But instead of relying on forum anecdotes and advertisements, I call on the much wiser Super User beta community for some depth on this one. Having Time Machine running on Leopard is like a warm, fuzzy blanket of comfort that I never got with RAID, rsync, or SyncToy on Windows. I'm not asking the community what the "best" backup software for Windows is, but instead: is there any true Time Machine clone for Windows, one that includes as many of the following as possible:
    - Completely transparent, "set-it-and-forget-it" backup
    - Incremental backups (changes only) for every hour for a day, every day for a month, and every week until the backup disk is full
    - Ability to rebuild from this backup disk in case of main drive meltdown (the backup doesn't have to be bootable; neither are Time Machine disks)
    - Extremely easy-to-use UI (target user == wife). Bonus points for a beautiful UI

    Read the article

  • Why does this rsnapshot exclude not work?

    - by bstpierre
    Rsnapshot passes excludes directly to rsync, but rsync's behavior appears inconsistent. I've simplified my rsnapshot backup test to the following directory tree (this tree will be backed up):

        gorilla:~# find /tmp/snaptest -exec file {} \;
        /tmp/snaptest: directory
        /tmp/snaptest/SKIPTHIS: directory
        /tmp/snaptest/SKIPTHIS/xyz: directory
        /tmp/snaptest/SKIPTHIS/xyz/testing: ASCII text
        /tmp/snaptest/SKIPTHIS/bar: ASCII text
        /tmp/snaptest/SKIPTHIS/foo: ASCII text
        /tmp/snaptest/SKIPTHIS.txt: ASCII text

    My config file:

        config_version   1.2
        snapshot_root    /tmp/backup-media
        no_create_root   1
        cmd_cp           /bin/cp
        cmd_rm           /bin/rm
        cmd_rsync        /usr/bin/rsync
        cmd_ssh          /usr/bin/ssh
        cmd_logger       /usr/bin/logger
        cmd_du           /usr/bin/du
        interval         hourly 6
        interval         daily 7
        interval         weekly 4
        interval         monthly 3
        verbose          3
        loglevel         3
        logfile          /media/maxtor-one-touch/rsnapshot.log
        lockfile         /media/maxtor-one-touch/backups/.rsnapshot.pid
        rsync_short_args -a
        rsync_long_args  --delete --numeric-ids --relative --delete-excluded
        exclude          "SKIPTHIS/**"
        link_dest        1
        backup           /tmp/snaptest  snaptest

    The result:

        gorilla:~# rsnapshot -c /tmp/snaptest.conf hourly
        echo 12638 > /media/maxtor-one-touch/backups/.rsnapshot.pid
        mkdir -m 0755 -p /tmp/backup-media/hourly.0/
        /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
            --exclude="SKIPTHIS/**" /tmp/snaptest \
            /tmp/backup-media/hourly.0/snaptest
        touch /tmp/backup-media/hourly.0/
        rm -f /media/maxtor-one-touch/backups/.rsnapshot.pid

        gorilla:~# find /tmp/backup-media/ -exec file {} \;
        /tmp/backup-media/: directory
        /tmp/backup-media/hourly.0: directory
        /tmp/backup-media/hourly.0/snaptest: directory
        /tmp/backup-media/hourly.0/snaptest/tmp: sticky directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/xyz: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/xyz/testing: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/bar: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/foo: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS.txt: ASCII text

    My confusion stems from the fact that if I copy-paste the rsync command echoed by rsnapshot, the SKIPTHIS directory is excluded! (I've tested with various other SKIPTHIS patterns with the same results.) Any idea what's going on?
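
    A hedged guess at the cause, with a sketch of the fix: rsnapshot runs rsync directly, without a shell, so the double quotes in the config line become literal characters of the pattern, and "SKIPTHIS/** with quotes matches nothing. When you paste the echoed command into a shell, the shell strips those quotes, which is why it then works. Dropping the quotes in the config should make both paths agree:

        exclude SKIPTHIS/**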

    Read the article

  • PHP Exec command - How to pass input to a series of questions

    - by user556597
    I have a program on my Linux server that asks the same series of questions each time it executes and then provides several lines of output. My goal is to automate the input and output with a PHP script. I know how to capture the output in an array by writing:

        $out = array();
        exec("my/path/program", $out);

    But how do I handle the input? Assume the program asks 3 questions and valid answers are: left, 120, n. What is the easiest way using PHP to pass that input to the program? Can I do it somehow on the exec line? I'm not a PHP noob but simply have never needed to do this before. Alas, my googling is going in circles.
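
    A sketch of two common approaches (assuming the program reads its answers from stdin; the path is the question's placeholder). The quick way is to pipe the answers in on the exec line itself; the more controlled way is proc_open, which gives you a real stdin handle:

        // Option 1: feed the answers through a shell pipe on the exec line
        exec('printf "left\n120\nn\n" | my/path/program', $out);

        // Option 2: proc_open for explicit control over stdin/stdout
        $spec = array(
            0 => array('pipe', 'r'),   // child's stdin
            1 => array('pipe', 'w'),   // child's stdout
        );
        $proc = proc_open('my/path/program', $spec, $pipes);
        if (is_resource($proc)) {
            fwrite($pipes[0], "left\n120\nn\n");  // answer the three prompts
            fclose($pipes[0]);
            $out = explode("\n", stream_get_contents($pipes[1]));
            fclose($pipes[1]);
            proc_close($proc);
        }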

    Read the article

  • ImageMagick PHP exec

    - by Erik Smith
    I found a very helpful post on here about cropping images in a circle. However, when I try to execute the ImageMagick script using exec in PHP, I'm getting no results. I've checked to make sure the directories have the correct permissions and such. Is there a step I'm missing? Any insight would be much appreciated. Here's what my script looks like:

        $run = exec('convert -size 200x200 xc:none -fill daisy.jpg -draw "circle 100,100 100,1" uploads/new.png');

    Edit: ImageMagick is installed.
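
    Two debugging steps worth sketching (assumptions, not from the thread): exec() silently discards stderr and the exit status unless you ask for them, and the web server's PATH often differs from your shell's, so convert may not even be found. Something like:

        // surface errors and the exit code; use an absolute path to convert
        // (find yours with `which convert` - /usr/local/bin/convert is a guess)
        $run = exec('/usr/local/bin/convert -size 200x200 xc:none '
                  . '-fill daisy.jpg -draw "circle 100,100 100,1" '
                  . 'uploads/new.png 2>&1', $out, $status);
        var_dump($status, $out);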

    Read the article

  • ls output changing when used through exec()

    - by user359650
    I'm using the ls command via PHP and exec() and I get a different output than when I run the same command via the shell. When running ls through PHP, the year and month of the date get changed into the month name.

    Running the command through the shell:

        $ ls -lh /path/to/file
        -rw-r--r-- 1 sysadmin sysadmin 36M 2011-05-18 13:25 file

    Running the command via PHP:

        <?php
        exec("ls -lh /path/to/file", $output);
        print_r($output);
        /*
        Array
        (
            [0] => -rw-r--r-- 1 sysadmin sysadmin 36M May 18 13:25 file
        )
        */

    Please note that:
    - the issue doesn't occur when I run the PHP script via the CLI (it only occurs when run through Apache)
    - I checked the source code of the page to make sure that what I was seeing was what I was getting (and I do get the month name instead of the proper date)
    - I also ran the ls command through the shell as the www-data user to see if ls was giving different output depending on the user (the output is always the same from the shell, that is, I get the date in yyyy-mm-dd instead of the month name)
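
    A hedged explanation with a sketch of the fix (assuming GNU coreutils ls): ls picks its date format from the locale environment (LANG/LC_TIME), and Apache typically starts PHP with a minimal environment that doesn't match your interactive shell's. Pinning the format makes the output independent of who launches the command:

        <?php
        // --time-style=long-iso forces the "2011-05-18 13:25" format
        exec("ls -lh --time-style=long-iso /path/to/file", $output);
        print_r($output);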

    Read the article

  • Javascript / jQuery Exec turns up Null

    - by Matrym
    How do I skip over this next line if it turns out to be null? Currently, it (sometimes) "breaks" and prevents the script from continuing:

        var title = (/<title>(.*?)<\/title>/m).exec(response)[1];

    Here it is in context:

        $.get(url, function(response){
            var title = (/<title>(.*?)<\/title>/m).exec(response)[1];
            if (title == null || title == undefined){
                return false;
            }
            var words = title.split(' ');
            $.each(words, function(index, value){
                $link.highlight(value + " ");
                $link.highlight(" " + value);
            });
        });
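
    A minimal reworking (keeping the post's $link and highlight, which are assumed to be defined elsewhere): exec() returns null when nothing matches, so test the match object before indexing into it. The original indexes [1] first, which throws before the null check can ever run:

        $.get(url, function(response){
            var match = /<title>(.*?)<\/title>/m.exec(response);
            if (!match) {
                return false;   // no <title> in this response: skip it
            }
            var words = match[1].split(' ');
            $.each(words, function(index, value){
                $link.highlight(value + " ");
                $link.highlight(" " + value);
            });
        });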

    Read the article

  • PHP exec error, possibly MAMP using ghostscript

    - by user1762526
    I have been trying to use Ghostscript in PHP to convert PDF files to images (png, jpg) - I don't really care which, as long as they are images. This is the code that I used:

        exec("gs -sDEVICE=jpeg -sOutputFile=/Applications/Mamp/htdocs/cover.jpg -r144 /Applications/Mamp/htdocs/test.pdf");

    When I enter the exact same thing (without the exec and quotes, obviously) into the command line, it does exactly what I want. However, when I run the PHP file, nothing happens. I am using a MAMP server and the server seems to work fine; whenever I run another file with it I have no issues. Anyone have any ideas why it might not execute right?
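
    A hedged sketch of the usual first checks: MAMP's Apache runs with a PATH that often doesn't include the directory where gs lives, so the shell that exec() spawns can't find it even though your terminal can. Using an absolute path and capturing stderr and the exit status should show what is actually failing:

        // assumption: adjust the gs path to whatever `which gs` prints
        $gs = '/usr/local/bin/gs';
        exec("$gs -dBATCH -dNOPAUSE -sDEVICE=jpeg "
           . "-sOutputFile=/Applications/Mamp/htdocs/cover.jpg "
           . "-r144 /Applications/Mamp/htdocs/test.pdf 2>&1", $out, $status);
        var_dump($status, $out);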

    Read the article

  • SBS 2008 SP2 Backup - Volume Shadow Copy Operation Failed

    - by Robert Ortisi
    Server setup:
    - Exchange 2007, version 08.03.0192.001 (Rollup 4)
    - Windows Small Business Server 2008 SP2 (Rollup 5)
    - Exchange set up on D: drive (449 GB / 698 GB free)
    - 80 GB / 148 GB free on OS drive

    Issue: backup failure (VSS related)
    Backup software: Windows Server Backup (ver 1.0)

    Simplified errors:
    - Creation of the shared protection point timed out. Unknown error (0x81000101)
    - The flush and hold writes operation on volume C: timed out while waiting for a release writes command.
    - Volume Shadow Copy Warning: VSS spent 43 seconds trying to flush and hold the volume \\?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}. This might cause problems when other volumes in the shadow-copy set timeout waiting for the release-writes phase, and it can cause the shadow-copy creation to fail. Trying again when disk activity is lower may solve this problem.

    What I've tried:
    - Server reboot
    - Updated Server and Exchange
    - Reconfigured SharePoint (helped resolve the last VSS error I encountered)
    - Re-registered the VSS DLLs (backups will sometimes work afterwards, but the VSS writers fail soon after)
    - Tried implementing hotfix: http://support.microsoft.com/kb/956136
    - Tried implementing hotfix: http://support.microsoft.com/kb/972135
    - Left it for a few days and a few backups came through, but then they began to fail again.

    Detailed information:

        Log Name: Application
        Source: VSS
        Date: 16/11/2011 8:02:11 PM
        Event ID: 12341
        Task Category: None
        Level: Warning
        Keywords: Classic
        User: N/A
        Computer: SERVER.DOMAIN.local
        Description: Volume Shadow Copy Warning: VSS spent 43 seconds trying to
        flush and hold the volume \\?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}.
        This might cause problems when other volumes in the shadow-copy set
        timeout waiting for the release-writes phase, and it can cause the
        shadow-copy creation to fail. Trying again when disk activity is lower
        may solve this problem.
        Operation: Executing Asynchronous Operation
        Context: Current State: flush-and-hold writes
        Volume Name: \\?\Volume{b562a5dd-8246-11de-a75b-806e6f6e6963}\

        Log Name: System
        Source: volsnap
        Date: 16/11/2011 8:02:11 PM
        Event ID: 8
        Task Category: None
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: SERVER.DOMAIN.local
        Description: The flush and hold writes operation on volume C: timed out
        while waiting for a release writes command.

        Log Name: Application
        Source: Microsoft-Windows-Backup
        Date: 16/11/2011 8:11:18 PM
        Event ID: 521
        Task Category: None
        Level: Error
        User: SYSTEM
        Computer: SERVER.DOMAIN.local
        Description: Backup started at '16/11/2011 9:00:35 AM' failed as Volume
        Shadow copy operation failed for backup volumes with following error
        code '2155348001'. Please rerun backup once issue is resolved.

    VSS writer status:

        Writer name: 'FRS Writer'
        Writer Id: {d76f5a28-3092-4589-ba48-2958fb88ce29}
        Writer Instance Id: {ba047fc6-9ce8-44ba-b59f-f2f8c07708aa}
        State: [5] Waiting for completion
        Last error: No error

        Writer name: 'ASR Writer'
        Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4}
        Writer Instance Id: {0aace3e2-c840-4572-bf49-7fcc3fbcf56d}
        State: [1] Stable
        Last error: No error

        Writer name: 'Shadow Copy Optimization Writer'
        Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f}
        Writer Instance Id: {054593e2-2086-4480-92e5-30386509ed1b}
        State: [1] Stable
        Last error: No error

        Writer name: 'Registry Writer'
        Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
        Writer Instance Id: {840e6f5f-f35a-4b65-bb20-060cf2ee892a}
        State: [1] Stable
        Last error: No error

        Writer name: 'COM+ REGDB Writer'
        Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
        Writer Instance Id: {9486bedc-f6e8-424b-b563-8b849d51b1e1}
        State: [1] Stable
        Last error: No error

        Writer name: 'BITS Writer'
        Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0}
        Writer Instance Id: {29368bb3-e04b-4404-8fc9-e62dae18da91}
        State: [1] Stable
        Last error: No error

        Writer name: 'Dhcp Jet Writer'
        Writer Id: {be9ac81e-3619-421f-920f-4c6fea9e93ad}
        Writer Instance Id: {cfb58c78-9609-4133-8fc8-f66b0d25e12d}
        State: [5] Waiting for completion
        Last error: No error
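
    For what it's worth, a hedged diagnostic sketch (standard VSS tooling, not something from the post): flush-and-hold timeouts are often a symptom of unhealthy writers or an undersized shadow-copy diff area, both of which vssadmin can show and, where appropriate, fix:

        rem check writer health right after a failed backup
        vssadmin list writers

        rem check how much diff-area storage each volume has
        vssadmin list shadowstorage

        rem assumption: growing the diff area on C: may relieve the timeout
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=15%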

    Read the article

  • Amazon EC2 EBS volume scheduled backup/snapshots using puppet

    - by Ehrann Mehdan
    I am not a Linux admin, although I wish I was, and I have seen these questions:
    - Amazon EC2 Backup Strategy
    - Amazon EC2 + EBS:: Regular backup plan?
    - Simple Backup Strategy for Amazon EC2 instances / volumes?
    And this suggestion: http://alestic.com/2009/09/ec2-consistent-snapshot
    I tried using the command line + crontab (the command line works, but crontab, for some reason, doesn't). But I'm still pretty lost; all I want is an automated, rolling backup of my Amazon EC2 (EBS) data (by rolling I mean keep 3-4 weeks back, but delete old snapshots as new ones come, for cost control). And as things usually go, if there is something that is hard and painful, someone creates a solution for it. My question is simple: is there a way, using a tool like Puppet, to do it without a painful learning curve? (Or via other tools like http://ylastic.com.) If yes, how?
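
    A minimal cron-able sketch of the rolling part (all names are illustrative; it assumes the AWS CLI is installed and has credentials, though the era-appropriate ec2-api-tools would work the same way):

        #!/bin/bash
        # snapshot one EBS volume, then prune snapshots older than 28 days
        VOL=vol-0123456789abcdef0          # assumption: your volume id
        CUTOFF=$(date -d '28 days ago' +%Y-%m-%d)

        aws ec2 create-snapshot --volume-id "$VOL" --description "rolling backup"

        aws ec2 describe-snapshots --filters Name=volume-id,Values="$VOL" \
            --query 'Snapshots[].[SnapshotId,StartTime]' --output text |
        while read -r ID START; do
            # StartTime is ISO 8601, so plain string comparison orders by date
            if [[ "${START%%T*}" < "$CUTOFF" ]]; then
                aws ec2 delete-snapshot --snapshot-id "$ID"
            fi
        done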

    Read the article

  • Free solution to backup folders to external SFTP server with shadow copy

    - by Sergiy Byelozyorov
    I have an account at university on a Linux machine with 10TB of free space, accessible via SFTP. I would like to back up my Windows 7 x64 laptop to the university. Currently I am using rsync+cygwin, but the backup is pretty slow (without shadow copy) and I hate the console window appearing every day on my screen when I log in. So I am looking for something like Windows Backup but with support for SFTP. A combination of tools would work too.

    Read the article

  • Backup IMAP mail, then access via IMAP again

    - by pauldoo
    I am looking for a tool to back up an entire IMAP account and then expose that backup (read-only) via IMAP again. This would be perfect for backing up email from any provider, and allowing the backup to be accessed from any mail client even years after closing the account. I suspect this could be achieved using a full-blown IMAP server by configuring it to mirror some other server, but I am hoping for a simpler solution.
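
    A sketch of one half of this (an assumption about tooling, not a known turnkey solution): OfflineIMAP can mirror the account into a local Maildir, which an IMAP server such as Dovecot could later serve read-only. A minimal ~/.offlineimaprc:

        [general]
        accounts = Backup

        [Account Backup]
        localrepository = Local
        remoterepository = Remote

        [Repository Local]
        type = Maildir
        localfolders = ~/mail-backup

        [Repository Remote]
        type = IMAP
        # assumption: your provider's host and your login
        remotehost = imap.example.com
        remoteuser = user@example.com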

    Read the article

  • Backup software for Mac OS X

    - by Simone Carletti
    Which backup software do you recommend for Mac OS X? As you probably know, Leopard comes with an integrated backup tool called Time Machine. It works pretty well, although it lacks some advanced restore/search features. Here's a list of backup software for Mac OS X:
    - Time Machine (integrated)
    - Carbon Copy Cloner (free)
    - SuperDuper (commercial)
    - iBackup (free)
    Do you know more? What software do you use, and which feature can't you live without?

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this.
    The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance bottleneck is the disk (top shows the cp using a lot of RAM but not a lot of CPU, and mostly in uninterruptible-sleep state).
    We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each.
    So obviously I need to improve the performance, because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
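
    One hedged optimization sketch (not from the question): rsync can build the hardlink forest itself via --link-dest, which folds the separate cp -al pass into the transfer and walks the tree only once. Paths here are illustrative:

        #!/bin/bash
        # snapshot-style backup: hardlink unchanged files against yesterday's tree
        SRC=server:/data/volume1/
        DEST=/backup/volume1
        TODAY="$DEST/$(date +%F)"

        rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$TODAY"
        ln -snf "$TODAY" "$DEST/latest"   # repoint "latest" at the new snapshot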

    Read the article

  • Backup to Synology NAS using rsync or NFS and hardlinks

    - by danilo
    I want to back up data from a Windows (Vista) computer to a Synology NAS (210j). The NAS supports FTP, SMB, NFS and also allows an rsync daemon to be set up. I want to back up different folders to the NAS, but I'd prefer to use the hardlink method to save disk space (like this script does). With this method, a new folder is created for every backup, but if the file already exists on the target, only a hardlink is created. The filesystem on the Synology device is ext3, so I probably can't use rsyncbackup, as it is made for NTFS. Is there another way to do this backup with hardlink support?
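
    Since the NAS exposes an rsync daemon and its ext3 filesystem supports hardlinks, one hedged sketch (the module and folder names are made up) is to use rsync's own --link-dest from the Windows side via cwRsync/Cygwin:

        rsync -a --delete --link-dest=../2011-05-17 \
            /cygdrive/c/Users/me/Documents/ \
            rsync://nas/backups/2011-05-18/

    With a relative --link-dest like this, the path is resolved against the destination directory on the NAS, so unchanged files become hardlinks into the previous day's snapshot.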

    Read the article

  • How to backup a networked drive?

    - by nute
    I have a networked drive (Iomega Media Drive). To be safe in case the drive crashes, I've decided to buy an additional networked drive (WD MyBook World). Now, how do I back up one onto the other continuously? The WD drive came with backup software (a trial version - they didn't say that when I bought it), however it doesn't allow me to select a networked drive, only local drives. How do I back up a NETWORKED DRIVE ONTO A NETWORKED DRIVE? Thanks
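
    One hedged sketch (the share names are guesses): both drives expose SMB shares, so any Windows machine on the network can mirror one to the other with robocopy, scheduled via Task Scheduler:

        :: /MIR mirrors the tree (note: it also deletes files that vanished
        :: from the source); /R and /W keep retries from hanging the run
        robocopy \\iomega-drive\media \\mybook\backup /MIR /R:2 /W:5 /LOG:C:\backup.log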

    Read the article

  • Windows Backup (2008 R2) recovery and timezone

    - by GrZeCh
    Hello, does a difference between time zones - between the Windows Server 2008 system where the backup was made and the recovery console - make a difference? The recovery console (and wbadmin from the command line too) is not finding any backup on the local hard drive connected to the server. Thanks
    EDIT: I'm working on Windows Server 2008 R2
    EDIT2: This is not related to the timezone. When I connected a backup hard drive written by Windows 2008 R2 Release Candidate, the recovery console run from the RTM system version DVD found the backups stored on it without problems.

    Read the article

  • Virtualizor + VPS Backup (Bare Metal Restore capable) Using rSync 3

    - by Gaia
    I am using Virtualizor to manage 3 Xen VPSes. The hardware node and each VPS run CentOS 5.x. My backup needs are as follows:
    1) I need to be able to bare-metal restore the entire hardware node, excluding the VPSes (which would be restored via #2 below)
    2) I need to have a complete backup of each VPS - ideally a backup that can be deployed on any other host that uses Xen, if the need arises. Naturally, I would also need to use this backup to restore an entire VPS to an earlier state within the same host.
    Which folders does rsync need to keep backed up in order to accomplish the above? The rsync specialists aren't sure of it either. Thanks
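
    For requirement #1, a hedged sketch of a full-node rsync (the VPS storage path is a guess - check where Virtualizor keeps your disk images, and note that LVM-backed guests can't be captured by rsync at all, only by snapshot/dd):

        #!/bin/bash
        # bare-metal-style copy of the node, skipping pseudo-filesystems
        # and the guests (they get their own backups per requirement #2)
        rsync -aAXH --numeric-ids \
            --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*"} \
            --exclude="/home/xen/" \
            / backuphost:/backups/node/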

    Read the article

  • SBS 2011 backup

    - by Chris
    I have a freshly installed SBS 2011 server that I need to configure for backup. I tried using the SBS backup configuration tool, but it didn't want to use anything but an external drive. Previously, with our W2K3 servers, I used NTBackup to back up the server to disk and then copied the backup files to a remote server on a regular basis. It doesn't appear this is possible with the built-in backup tools in SBS 2011. Am I missing something? What other options are there that won't cost an arm and a leg?
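
    One hedged option (standard Windows Server Backup syntax, not SBS-specific advice): the command-line wbadmin does accept a network share as a target, so a scheduled task could back up to the remote server directly - with the caveat that a share target keeps only the most recent backup:

        wbadmin start backup -backupTarget:\\remoteserver\backups ^
            -include:C: -allCritical -vssFull -quiet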

    Read the article
