Search Results

Search found 698 results on 28 pages for 'rsync'.

Page 12/28 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • How to write files in specific order?

    - by Bernie
    Okay, here's a weird problem -- my wife just bought a 2014 Nissan Altima, so I took her iTunes library and converted the .m4a files to .mp3, since the car audio system only supports .mp3 and .wma. So far so good. Then I copied the files to a FAT32-formatted USB thumb drive and connected the drive to the car's USB port, only to find all of the tracks were out of sequence. All tracks begin with a two-digit numeric prefix, i.e., 01, 02, 03, etc., so you would think they would be in order. So I called Nissan Connect support, and the rep told me there is a known problem with reading files in the correct order: the files are read in the same order they were written. So I manually copied a few albums with the tracks in a predetermined order, and sure enough he was correct. I then copied about 6 albums for testing, changed to the top-level directory and did a "find . > music.txt". Then I passed this file to rsync like this: rsync -av --files-from=music.txt . ../Marys\ Music\ Sequenced/ The files looked like they were copied in order, but when I listed the files by modification time, they were in the same sequence as the originals. In ../Marys Music Sequenced/Air Supply/Air Supply Greatest Hits:
    ls -1rt
    01 Lost In Love.mp3
    04 Every Woman In The World.mp3
    03 Chances.mp3
    02 All Out Of Love.mp3
    06 Here I Am (Just When I Thought I Was Over You).mp3
    05 The One That You Love.mp3
    08 I Want To Give It All.mp3
    07 Sweet Dreams.mp3
    11 Young Love.mp3
    So the question is: how can I copy the files listed in music.txt to a destination and ensure the modification times end up in the same sequence as the files are listed?
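
    One way to get the mtimes to follow the list (a sketch only, assuming GNU touch/date and that the paths in music.txt are relative to the current directory; the destination path is the one from the question) is to copy the files one at a time in list order and stamp each copy with a strictly increasing modification time:

      #!/bin/bash
      # Sketch: copy files in the order listed in music.txt and give each copy
      # a slightly later mtime than the previous one.
      dest="../Marys Music Sequenced"
      stamp=$(date +%s)
      while IFS= read -r file; do
          mkdir -p "$dest/$(dirname "$file")"
          cp "$file" "$dest/$file"
          stamp=$((stamp + 1))
          touch -d "@$stamp" "$dest/$file"   # GNU touch accepts @<epoch-seconds>
      done < music.txt

    Because the loop also writes the files sequentially, the directory-entry order on the FAT32 stick should follow the list as well.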

    Read the article

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to get pictures from a folder - with subfolders that are updated automatically - along with their extensions. These files have to be copied to a folder where a PHP-based website will edit them (renaming them and creating an XML file) so they can be downloaded and included in an XML feed. Because of the rename step in the script, when I perform the copy again all the files get duplicated, since the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because rsync on its own can't skip files based on an external "history" of what has already been copied.
    #!/bin/bash
    find '/home/name/picture' -name '*.jpg' | while read FILE ; do
        rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ;
    done
    wget --spider 'http://myscript.php' ;
    #exit 0
    PS: As a little addition, I'd like to replace '.' with a space just after the *.jpg copy. My PHP script has trouble with file names that contain extra dots besides the one before the extension. I'm thinking about a find command - like I did before - combined with a sed step. Is that a good idea?
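
    One way to keep that external "history" is a plain log of source paths already copied, so files the PHP script has since renamed are never sent a second time. A sketch, where the log path /home/name/copied.log is an assumption and the other paths are taken from the question:

      #!/bin/bash
      # Sketch: only copy source files that are not yet recorded in the log.
      log=/home/name/copied.log
      touch "$log"
      find /home/name/picture -name '*.jpg' | while IFS= read -r file; do
          if ! grep -Fxq "$file" "$log"; then
              rsync "$file" /var/www/media/ && echo "$file" >> "$log"
          fi
      done
      wget --spider 'http://myscript.php'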

    Read the article

  • Using RSYNC to Replicate Synology NAS DS710+ to Windows 7 Hard Drive

    Learn how to use a local backup drive on your Windows 7 system to replicate the data in any or all of the directories on your Synology NAS (network hard drive device) DS710+. This post will...

    Read the article

  • Are there any scripts to synchronize sites?

    - by Matrym
    I've just set up a fail-over DNS to switch the site to a second host if the first is down. This is great for showing an old / archived version of the site, but I suspect maintenance is going to be a real pain. I moved the files over with rsync in the first place. Is this the kinda thing that could be run as a cron job, automatically moving over newer files?
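
    Yes, this is exactly the kind of thing cron handles. A hypothetical crontab entry on the primary host (user, hostname and paths are placeholders, and key-based ssh login is assumed) could push changes to the failover host every 15 minutes; note that --delete also removes files you have deleted on the primary:

      */15 * * * * rsync -az --delete /var/www/site/ backup@failover.example.com:/var/www/site/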

    Read the article

  • Recommendations on managing dot files for users using Puppet

    - by Beaming Mel-Bin
    Goal is to have a collection of dot files (.bashrc, .vimrc, etc.) in a central location. Once it's there, Puppet should push out the files to all managed servers. I initially was thinking of giving users FTP access where they could upload their dot files and then having an rsync cron job. However, it might not be the most elegant or robust solution. Wanted to see if anyone else had some recommendations.

    Read the article

  • Sync local directory with remote FTPS?

    - by A T
    How do I keep my local directory in sync with my remote FTPS directory? Note that I've tried WinSCP, but found that it only works a few times then I need to restart it to get it going again. Also I've tried all the utilities mentioned here but only a few supported the connection requirements (explicit SSL over FTP), and those that did didn't have "realtime" directory sync. Also note that a curl, rsync or wput command which I can put into "scheduled tasks" will suffice, if it can do directory sync.
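
    Since rsync and wput don't speak FTPS, one option worth trying (a sketch; host, credentials, paths and the exact lftp settings should all be treated as assumptions to verify) is lftp, which supports explicit FTP over TLS and can be run from a scheduled task:

      # Sketch: mirror a local directory up to an explicit-TLS FTP server.
      lftp -u myuser,mypass -e "set ftp:ssl-force true; set ftp:ssl-protect-data true; mirror --reverse --only-newer --verbose /local/dir /remote/dir; quit" ftp.example.com

    It isn't real-time either, but scheduling it every few minutes gets close without the restart problem.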

    Read the article

  • How to copy directory from one Linux server to another with a minimum in-between period?

    - by yegor256
    I have a rather big directory on one server (over 4000 files), which I'd like to copy to another server (which contains a previous version of this directory). rsync is the first option, but it will put the destination folder into waiting status for a rather long period of time (more than a minute). I'd like to do it a bit differently: 1) gzip the source folder, 2) scp the archive to the destination server, 3) gunzip the file there, 4) delete the archive at the source and the destination. What is the best way to accomplish all this?
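
    All four steps can be collapsed into one streamed pipeline, so no intermediate archive is ever written on either side. A sketch, with the host and paths as placeholders:

      # Stream a gzipped tar over ssh and unpack it on the far end in one go.
      tar czf - -C /path/to/parent bigdir | ssh user@destination 'tar xzf - -C /path/to/parent'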

    Read the article

  • I wanna save some terminal commands in a file

    - by Jakob Abfalter
    I am using openSUSE 12.3. What I want to do is create a link on my desktop for some specific terminal commands. The background is that I do some backups via rsync and don't want to type the commands anew every time. I also don't want to use a cron job, since my computer isn't running all the time. Perfect would be some desktop icons which, when clicked, execute the command(s). Could somebody tell me how to do this?
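
    One simple route (a sketch; the rsync source and target below are placeholders, not your actual backup command) is to save the commands in a small shell script, make it executable, and point a desktop launcher at it:

      #!/bin/bash
      # ~/bin/backup.sh -- substitute your own rsync command(s) here.
      rsync -av --delete /home/jakob/Documents/ /media/backupdisk/Documents/

    Make it executable with "chmod +x ~/bin/backup.sh", then right-click the desktop and create a launcher ("Link to Application" in KDE) whose command is ~/bin/backup.sh; ticking the launcher's "run in terminal" option lets you watch the rsync output when you click the icon.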

    Read the article

  • Virtualizor + VPS Backup (Bare Metal Restore capable) Using rSync 3

    - by Gaia
    I am using Virtualizor to manage 3 Xen VPSes. The hardware node and each VPS run CentOS 5.x. My backup needs are as follows: 1) I need to be able to bare-metal restore the entire hardware node, excluding the VPSes (which would be restored via #2 below). 2) I need a complete backup of each VPS, ideally a backup that can be deployed on any other host that uses Xen, if the need arises. Naturally, I would also need to use this backup to restore an entire VPS to an earlier state within the same host. Which folders does rsync need to keep backed up in order to accomplish the above? The rsync specialists aren't sure of it either. Thanks

    Read the article

  • Rsync like windows backup tool

    - by Halfgaar
    Hi, I need to back up some Windows machines and have been unable to find the proper tool. What I need is a tool that does efficient copying of changed files to a Windows network location, like rsync does. In turn, the server will then back that up using rdiff-backup, a tool which does very clever incremental backups. Right now I'm using Windows 7's included backup feature, but I really don't get along with it; the details are off-topic here, but it doesn't suffice (and seems buggy as well). I looked into Amanda, but as soon as it wanted to install MySQL, I aborted. I also tried DeltaCopy, but unfortunately I don't remember what the problem with that was... Any advice for an rsync-like tool that just does daily syncs to a network location?

    Read the article

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that will scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances:
    master.ourdomain.com - the file syncing "hub" of the webservers
    www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load
    I'm using lsyncd to keep all of the webservers in sync, and for the most part it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus, the webservers are kept in sync, even though they aren't syncing against each other directly. I'm having one problem that I'm having a hard time solving, though. It occurs under these circumstances: changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping, and then a sleeping webserver wakes up to absorb load. Under that circumstance, I would like the following to happen: first, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up-to-date; then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started and lsyncd starts up, it pushes changes back to master before updating its own codebase, thus overwriting new code with old. So, before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync between the two machines. We're running lsyncd v.2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:
    settings = {
        logfile = "/home/user/log/lsyncd/log.txt",
        statusFile = "/home/user/log/lsyncd/status.txt",
        maxProcesses = 2,
        nodaemon = false,
    }
    bash = {
        onStartup = "rsync [email protected]:/home/user/www /home/user/www"
    }
    sync {
        default.rsyncssh,
        source = "/home/user/www/",
        host = "[email protected]",
        targetdir = "/home/user/www/",
        rsyncOpts = "-ltus",
        excludeFrom = "/home/user/conf/lsyncd/exclude"
    }
    (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think it does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise. Chris
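
    One pragmatic workaround (a sketch only; the lsyncd config path is an assumption, and the host and paths are taken from the redacted example above) is to skip lsyncd's own startup hooks and instead wrap its startup in a script that does the one-way pull from master first:

      #!/bin/bash
      # Run this at boot on a webserver instead of starting lsyncd directly.
      # 1) one-way pull so local code is current before anything syncs back
      rsync -az --delete user@master.ourdomain.com:/home/user/www/ /home/user/www/
      # 2) only then start the two-way lsyncd sync
      exec lsyncd /etc/lsyncd/lsyncd.conf.lua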

    Read the article

  • Remotely sync Time Machine drives

    - by Off Rhoden
    I have an Xserve that runs Time Machine to a local terabyte drive. I also connected my external terabyte drive for a time period and had Time Machine use it to establish the seed data. I plan to take my drive back home with me (out of state) and have the Xserve return to using its local drive for Time Machine. But when I get back home, is there a way to keep my external drive's copy of the Time Machine Backups folder in sync with the Backups folder back on the Xserve? I'm wanting a full copy of the history (it makes an awesome remote backup). I've thought of using the unix command rsync. In fact, that's how I had been doing it, but I was missing the compactness that Time Machine was able to achieve. Thanks.
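
    Time Machine's compactness comes largely from hard-linking unchanged items between snapshots, so any copy that doesn't preserve hard links balloons in size. As a sketch only, assuming rsync 3.x is installed on both ends (Apple's bundled 2.6.9 lacks some of these options) and with the volume paths and hostname as placeholders:

      # Pull the Xserve's Backups.backupdb onto the local drive, preserving
      # hard links (-H), ACLs (-A) and extended attributes (-X).
      rsync -aHAXv --delete \
          user@xserve.example.com:/Volumes/ServerTM/Backups.backupdb/ \
          /Volumes/LocalTM/Backups.backupdb/

    Note that rsync preserves file hard links but not Apple's directory hard links, so the copy will likely still be larger than the original Time Machine volume.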

    Read the article

  • Sync files between Mac, PC and Linux automatically

    - by Siriss
    Hello everyone - I have a MacBook Pro, a Windows 7 desktop, and a Fedora 13 netbook. I have been searching far and wide for an automatic solution to sync files (pictures, music, docs, etc...) between the three when they are all on the same LAN. To better explain: when I get home and turn my MacBook on, I want it to automatically sync any file changes to the Windows 7 machine and the netbook. Likewise, if I make changes on my Windows 7 box, I want them to be reflected on my Mac. I can use rsync, but it is not automatic as far as I can tell, and I would use Dropbox but I have a lot more than 2 gigs and do not want to pay. I also do not need internet syncing, just the local LAN. Does anyone have any ideas? Thank you very much in advance.

    Read the article

  • Disk Redundancy across different server

    - by Mascarpone
    I have 3 servers, all with the same specs: Intel CPU, 8 GB RAM, Linux or BSD, a single 2 TB desktop SATA drive with more than 10K hours of operation and less than 300 GB used. My provider cannot install a second hard drive, but can guarantee that the drive will be replaced immediately in case of failure, with another equally crappy drive. The likelihood of drive failure is high, and since I can't use RAID, I was thinking about keeping a backup of each machine on all the other machines, so that there are always 2 copies on 2 different drives, plus the original. I would synchronize the drives every hour with rsync to guarantee some sort of redundancy; bandwidth inside the DC is free, so it would be much cheaper than offsite backup. (A daily offsite backup is kept anyhow.) What do you think? Any suggestions?
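
    The cron side of this is straightforward. Hypothetical crontab entries on server1, with hostnames, paths and the backup user as placeholders and key-based ssh login assumed:

      # Hourly copies of server1's data to the other two boxes.
      0  * * * * rsync -az --delete /data/ backup@server2.internal:/backups/server1/
      30 * * * * rsync -az --delete /data/ backup@server3.internal:/backups/server1/

    Each server would carry an equivalent pair of jobs pointing at the other two, giving the "always 2 copies plus the original" layout described above.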

    Read the article

  • linux to linux, 10TB transfer?

    - by lostincode
    I've looked at all the previous similar questions, but the answers seemed to be all over the place and no one was moving a lot of data (100GB != 10TB). I've got about 10TB that I need to move from one raid to another, gigabit net, XFS file systems. My biggest concern is having the transfer die midway and not being able to resume easily. Speed would be nice, but ensuring transfer is much more important. Normally I'd just tar & netcat, but the raid I'm moving from has been super flaky as of late and I need to be able to recover and resume if it drops mid process. Should I be looking at rsync?
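
    rsync is a reasonable fit here precisely because re-running the same command after a crash continues from what already arrived instead of starting over. A sketch, with the paths and host as placeholders; -W skips the delta algorithm (which only costs time on a first full copy) and --partial keeps partially transferred files so huge files don't restart from zero:

      # Re-runnable bulk copy over ssh; if the flaky source dies mid-transfer,
      # running the same command again resumes roughly where it stopped.
      rsync -aHW --partial --progress /mnt/oldraid/ user@destbox:/mnt/newraid/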

    Read the article

  • $RSYNC_PASSWORD not being read/responded to correctly (Snow Leopard)

    - by warren
    Ignoring the security issues, I have the following script that synchronizes my music library from my MacBook Pro (running Snow Leopard) to the file store (CentOS 4) on my network: rsync -rav --progress --partial -e "ssh" ~/Music/iTunes/* user@scramasax:~/music/iTunes-scissor:~ When I try to use either a password provided on the command-line (), in a password file (--password-file), or in the environment variable RSYNC_PASSWORD, the login still goes interactive, requiring me to type my password again. I will be moving to pre-shared keys on my network, but in situations where that is not possible, such as rsync'ing files to a webserver, being able to successfully embed the password in the script would be very helpful.
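
    A likely explanation: RSYNC_PASSWORD and --password-file only apply when rsync talks to an rsync daemon (rsync:// or host::module syntax); with -e ssh it is ssh itself that prompts, so the password has to be fed to ssh instead. A sketch using sshpass (treat its availability as an assumption, e.g. installed via MacPorts), reusing the host from the question:

      # Feed the ssh password via sshpass, since RSYNC_PASSWORD is ignored for
      # ssh transports.  Pre-shared keys remain the better long-term answer.
      sshpass -p "$MY_PASSWORD" rsync -rav --progress --partial -e ssh \
          ~/Music/iTunes/ user@scramasax:~/music/iTunes/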

    Read the article

  • Are periodic full backups really necessary on an incremental backup setup?

    - by user2229980
    I intend to use an old computer I have as a remote backup server for myself and a few other people. We are all geographically separated, and the plan is to do incremental daily backups using rsync and ssh. My original idea was to make one initial full backup then never again have to deal with the overhead of doing it, and from that moment on only copy the files changed since the last backup. I've been told that this could be bad, but I fail to understand why. Since each snapshot is comprised of hard links to the unchanged files plus the original changed ones, isn't it going to be identical to a new full backup? Why would I want to make another full backup?
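
    For reference, the hard-link snapshot scheme described here is usually built with rsync's --link-dest: every run produces what looks like a full backup, but unchanged files cost only a hard link to the previous snapshot, which is why a periodic "real" full adds little beyond guarding against silent corruption of the shared copies. A minimal sketch, with the host and paths as placeholders:

      # One snapshot per run; unchanged files are hard-linked against "latest".
      today=$(date +%F)
      rsync -az --delete \
          --link-dest=/backups/alice/latest \
          alice@client.example.com:/home/alice/ \
          "/backups/alice/$today/"
      ln -snf "$today" /backups/alice/latest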

    Read the article

  • Best way to compare (diff) a full directory structure?

    - by Adam Matan
    Hi, What's the best way to compare directory structures? I have a backup utility which uses rsync. I want to tell the exact differences (in terms of file sizes and last-changed dates) between the source and the backup. Something like:
    Local file                   Remote file                  Compare
    /home/udi/1.txt (date)(size) /home/udi/1.txt (date)(size) EQUAL
    /home/udi/2.txt (date)(size) /home/udi/2.txt (date)(size) DIFFERENT
    Of course, the tool can be ready-made or an idea for a python script. Many thanks! Udi
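
    Since rsync is already in the picture, a dry run with itemized output gives this kind of per-file report without copying anything. A sketch, with the remote host and path as placeholders:

      # Dry-run comparison of the local tree against the backup; nothing is copied.
      # Files differing in size or mtime are listed with a change summary, and
      # --delete also flags files that exist only on the backup side.
      rsync -rltn --itemize-changes --delete /home/udi/ user@backuphost:/backups/udi/

    Adding -c makes rsync compare checksums instead of size and modification time, at the cost of reading every file on both sides.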

    Read the article

  • FreeNAS - how to "Exclude from file" in Rsyncd (GUI)

    - by user179181
    I am trying to set up rsync tasks to pull user profiles from 11 Windows machines running DeltaCopy Server, and then configure ZFS periodic snapshot tasks as a backup solution. So far this has been working fine, although I would like to exclude certain file types like .DAT or NTUSER.DAT. My exclusion file resides on the local ZFS dataset (receiving side) and is as follows:
    Temp
    Temporary Internet Files
    NTUSER.DAT
    NTUSER.DAT.LOG
    *.dat
    *.tmp
    *.DAT.log
    *.ost
    *.pst
    The parameter I typed under Auxiliary Parameters (Rsyncd Global Conf under Services) is as follows: exclude from = /mnt/Storage/User_Profiles/exclude.txt I've tried deleting the .DAT files from the receiving end, and just as I start to get excited I click refresh and there they are again.
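
    One thing worth checking (an assumption about this setup, not a certainty): "exclude from" in the Rsyncd section only filters what the local FreeNAS daemon serves to others, whereas these jobs pull from the DeltaCopy machines, so the exclusions normally belong on the client side of the pull, i.e. an --exclude-from option on the rsync task itself. The equivalent command line would look roughly like this, with the Windows host and module name as placeholders; the same --exclude-from can go into the rsync task's extra-options field if the GUI exposes one:

      # Pull one machine's profiles while honouring a client-side exclude list.
      rsync -av --exclude-from=/mnt/Storage/User_Profiles/exclude.txt \
          pc01.example.local::Profiles /mnt/Storage/User_Profiles/pc01/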

    Read the article

  • Clone a Red Hat RAID as part of a disaster recovery plan

    - by Campo
    I am looking for recommendations for cloning a Red Hat mirrored RAID to a single hard drive located in the same machine. The idea is that if the server's hardware ever has an issue, we have a similar hardware machine ready to go; all we would have to do is pop in the cloned drive. If the server's RAID ever failed, we could just switch to the single drive to maintain uptime, and restore the original configuration on the spare server from a backup. This is a restaurant and they are open 7 days a week. We do have time from 12:00 am to 9:00 am to perform the necessary steps for a clone, and we're talking about under 10 GB of information. There is a database on the server. I have looked into rsync and Clonezilla, but I am just not confident either is capable of completing the task I want. Looking for some suggestions, and possibly a step-by-step if you could be so kind.
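
    One possible shape for the rsync approach, as a sketch rather than a recipe (the mount point and device are assumptions, and it sidesteps the database question, which should be handled by dumping or stopping the database during the copy): copy the live filesystem onto the mounted spare drive, excluding pseudo-filesystems, then make the spare bootable:

      # Nightly clone of the running system onto a spare drive mounted at /mnt/clone.
      # Dump or stop the database first so its files are copied in a consistent state.
      rsync -aH --delete \
          --exclude=/proc --exclude=/sys --exclude=/dev \
          --exclude=/tmp --exclude=/mnt \
          / /mnt/clone/
      # The spare still needs its own bootloader (e.g. grub-install on its device),
      # and /mnt/clone/etc/fstab may need adjusting for the spare drive.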

    Read the article

  • WHM Backup recommended?

    - by user77284
    I have a VPS (CentOS) with WHM, about 25 GB. It has about 20 accounts on it. I am looking to effectively back it up. My thoughts: Back it up with WHM Backup locally. Use Rsync to mirror it to another server. My questions: Is WHM Backup a good solution? How can I keep several backups while keeping a minimal amount of space? Is there a different solution, I should consider? I am not an expert, so I want something simple that works with minimal maintenance. Thanks.

    Read the article

  • limit the speed of writing files to NFS

    - by xgwang
    CentOS 5.6. An NFS share is mounted on the server as backup disk space. When the backup job starts, it can reach 80 MB/s, and we really did not expect it to take so much bandwidth, so I need to find a way to limit the speed of writing to NFS. I tried rsync with --bwlimit=5000; however, it only limited the reading speed: the accumulated data was still written out at 80 MB/s, with no write activity at all for seconds in between. Is there any way to limit the writing speed to NFS?
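
    That burst pattern is consistent with the client-side page cache: --bwlimit throttles rsync itself, but the kernel still flushes the buffered NFS writes at full wire speed. Two things worth trying, both sketches with placeholder host and paths: push the data over ssh (or an rsync daemon) instead of writing into the NFS mount, so --bwlimit governs the actual network traffic, or mount the share synchronously so writes are not buffered and flushed in bursts:

      # Option 1: back up over ssh so --bwlimit (in KB/s) applies on the wire.
      rsync -a --bwlimit=5000 /data/ backup@nfsserver.example.com:/backup/data/

      # Option 2: remount the NFS export with sync so writes bypass the
      # burst-prone write-back cache (overall throughput will drop).
      mount -o remount,sync /mnt/nfsbackup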

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >