Search Results

Search found 698 results on 28 pages for 'rsync'.

Page 19 of 28

  • What backup utilities are available for Trixbox CE?

    - by Tim Post
    I'm running Trixbox CE (v2.6.2.3) and I need to make a full system backup of all configurations, recordings, databases, etc., in a manner that can be easily restored on a brand-new installation of the same version of Trixbox CE. I started to write something myself using rsync; however, I feel like I'm probably missing some things and needlessly copying others. What are my options?
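
    A minimal sketch of the rsync + mysqldump approach the poster describes is below. The paths and the use of mysqldump are assumptions about a stock Trixbox/Asterisk layout, not details taken from the question, so adjust to the actual install.

      # Sketch only: assumed default Trixbox/Asterisk paths; adjust as needed.
      BACKUP=/mnt/backup/trixbox-$(date +%F)
      mkdir -p "$BACKUP"
      rsync -a /etc/asterisk/        "$BACKUP/etc-asterisk/"        # dialplan and config
      rsync -a /var/lib/asterisk/    "$BACKUP/var-lib-asterisk/"    # sounds, AGI scripts
      rsync -a /var/spool/asterisk/  "$BACKUP/var-spool-asterisk/"  # voicemail, recordings
      rsync -a /var/www/html/        "$BACKUP/var-www-html/"        # web GUI
      mysqldump --all-databases -u root -p > "$BACKUP/mysql-all.sql"  # FreePBX/CDR databases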

    Read the article

  • How to create a snapshot volume to a remote server using kvm?

    - by Purres
    I want to back up a few virtual machines to a backup server. Here are the backup steps:

      1. suspend a virtual machine
      2. create a snapshot of the virtual machine using lvcreate -s
      3. resume the virtual machine
      4. dd if=/virtual_machine_path | lzop > /temp/backup.lzo
      5. rsync /temp/backup.lzo -e "ssh " 1.2.3.4:/backup_path/

    However, the hypervisor server doesn't have enough hard disk space to create the snapshot in step 2. Is there a way to create a logical volume snapshot directly on a remote server?
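
    One possible workaround, sketched below under the assumption that the guests are managed with libvirt's virsh and that the LV path, host and target path shown are placeholders: keep the VM suspended and stream the image straight to the backup host, so neither the snapshot LV nor the local /temp/backup.lzo needs any space on the hypervisor. The trade-off is that the guest stays suspended for the whole transfer.

      # Sketch only: placeholders throughout; no local disk space is used.
      virsh suspend myvm
      dd if=/dev/vg0/myvm bs=4M | lzop -c | \
          ssh backup@1.2.3.4 "cat > /backup_path/myvm.raw.lzo"
      virsh resume myvm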

    Read the article

  • How to backup millions of small files?

    - by grassbl8d
    What is the best way to back up millions of small files in a very small time period? We have less than 5 hours to back up a file system which contains around 60 million files, mostly small ones. We have tried several solutions such as richcopy, 7z and rsync, and all of them seem to have a hard time. We are looking for the most optimal way. We are open to putting the files in an archive first, or to transferring them to another location over the network or by moving a hard disk. Thanks.
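
    Since the poster is open to archiving first, here is a minimal sketch of that idea, assuming the source is reachable from a Linux (or Cygwin) shell; the host and paths are placeholders. Streaming a single tar over ssh avoids the per-file open/close and metadata overhead that makes per-file copies of 60 million files so slow.

      # Sketch only: one sequential archive stream instead of 60M per-file copies.
      tar -cf - -C /data . | ssh backupserver "cat > /backup/data-$(date +%F).tar"

    Piping through a parallel compressor (e.g. pigz) between tar and ssh is a further option if CPU is cheaper than bandwidth.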

    Read the article

  • Sync folders over FTP

    - by Oxwivi
    First, an explanation about syncing folders instead of files: the folders in question contain images that are never edited or updated on the FTP server; new images are simply added to them. Is there a way to synchronize the presence of the images on both the client and the server? The client is a Windows machine, and while some may suggest rsync and the like, the server is Linux-based but it's a shared web host, so there is no access other than FTP.
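
    One hedged possibility is lftp's mirror command, which speaks plain FTP and runs on Linux or on Windows under Cygwin; the host, credentials and paths below are placeholders. --only-newer restricts the transfer to files that are missing or newer on the target, and -R reverses the direction (upload).

      # Sketch only: pull new images down, then push new local images up.
      lftp -u user,password -e "mirror --only-newer /public_html/images ./images; mirror -R --only-newer ./images /public_html/images; quit" ftp.example.com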

    Read the article

  • (Free) SSH tunneling to S60?

    - by Jawa
    What solutions are there for SSH tunneling certain ports on a Symbian S60 mobile phone? My goal is to rsync files to/from the phone, but the only S60 SSH forwarding software that I know of is pretty old and, most importantly, not free. (sshforwarding.com)

    Read the article

  • How does NFS read cache work on Debian?

    - by Ztyx
    I am planning to use NFS to serve out many small files. They will be read very often, so client-side caching is crucial. Does NFS handle this? Is there a way to increase the client-side caching? ...or should I look at another solution? Syncing periodically with rsync or unison is not an option, since the files are modified on the client side from time to time.
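
    A hedged sketch of the usual client-side knobs on Debian is below; the server name, export and mount point are placeholders. actimeo lengthens the attribute cache, and fsc enables FS-Cache, which keeps file data in a local on-disk cache via the cachefilesd daemon.

      # Sketch only: placeholders for server/export/mount point.
      apt-get install cachefilesd                 # on-disk cache backend
      # set RUN=yes in /etc/default/cachefilesd, then:
      /etc/init.d/cachefilesd start
      mount -t nfs -o actimeo=600,fsc nfsserver:/export/files /mnt/files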

    Read the article

  • Centralized logging for JBoss / log4j? [closed]

    - by mfarver
    Does anyone have advice, or a pointer to articles, on how to centralize logs in JBoss? JBoss will log to syslog, which makes it easy, but doing so breaks multi-line debug messages (and JBoss loves dropping exception stack traces in the logs). I can rsync the logs, but that isn't real-time. Log4j has appenders for TCP and multicast sockets, so it seems like something probably exists for streaming logs, but I haven't found a receiver for the data. Thanks
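
    For the "haven't found a receiver" part: log4j 1.x does ship a simple receiver for SocketAppender events, so a hedged sketch is to run it on the central log host (the jar path, port and config file below are placeholders) and point a SocketAppender in each JBoss instance at that host and port.

      # Sketch only: placeholders for jar path, port and layout config.
      java -classpath /opt/log4j/log4j.jar \
          org.apache.log4j.net.SimpleSocketServer 4712 /etc/log4j-server.properties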

    Read the article

  • How to install fink when behind proxy?

    - by Mad Fish
    I have a Mac at work, and the internet connection goes through an HTTP proxy. I want to install fink (or MacPorts, it doesn't really matter) to have all the useful utilities (like mc). I set the proxy during the fink installation, and it installed successfully, but "fink selfupdate" cannot download using rsync (it fails using cvs too). What can I do?
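
    A hedged sketch of what sometimes helps (the proxy host and port are placeholders): export the proxy for HTTP downloads, and set rsync's own proxy variable, which only works if the proxy permits CONNECT to the rsync port (873); otherwise switching the selfupdate method away from rsync/cvs may be the only option.

      # Sketch only: proxy address is a placeholder.
      export http_proxy=http://proxy.example.com:8080
      export RSYNC_PROXY=proxy.example.com:8080   # needs CONNECT to port 873 allowed
      fink configure                              # can also store the proxy in /sw/etc/fink.conf
      fink selfupdate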

    Read the article

  • Windows Home Server Online Backup Solutions

    - by Sam Cogan
    Does anyone have any good, well-priced online backup solutions for Windows Home Server? I've looked at using S3, but the pricing ends up too expensive with the amount of data I have, and Mozy and the like don't support WHS. I was considering just getting a cheap Linux VPS and using rsync to back up, if that's possible with WHS. Any thoughts or solutions are appreciated.

    Read the article

  • Should I use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.) Here's the setup: a brand new dedicated server (8 GB RAM, some 140+ GB disk, RAID 1 via hardware controller, 15,000 RPM). It's a production web server (with MySQL on it too, not just serving web requests), not a personal desktop computer or similar, running Ubuntu Server 64-bit 10.04 LTS. We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via AWS' console. We are now migrating to the dedicated server, and I want to be able to back up our data to Amazon's S3. The main reason is the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of:

      1. do a "simple" file-based backup with rsync, dumping the databases and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something.
      2. do a filesystem "freeze" (using XFS), use the usual EBS/EC2 snapshot tooling to take a snapshot of part of the filesystem, and upload it to Amazon.

    Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4, or should I use something else? Would it then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do at all? Any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/bare-metal dedicated server.

    Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose between two ways of doing snapshots, via LVM or via XFS (without LVM). After reading this, I realize the two options are more like:

      With XFS: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS.
      With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen filesystem via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot.

    Thanks a lot in advance. Let me know if I need to clarify something.
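
    A minimal sketch of the first flow in the update above (freeze, copy, thaw), assuming the data lives on a separate XFS mount point such as /data; the mount point, destination host and dump path are placeholders. Note that writes to the frozen filesystem block for the whole copy, so this only makes sense for a quick copy or for cutting an LVM snapshot, and the MySQL dump should land on a different, unfrozen filesystem.

      # Sketch only: placeholders for paths and host.
      mysqldump --all-databases -u root -p > /root/mysql-all.sql   # a frozen fs alone is not a consistent DB backup
      xfs_freeze -f /data                                          # block writes to the XFS filesystem
      rsync -a /data/ backupuser@ec2-host:/backups/data/
      xfs_freeze -u /data                                          # thaw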

    Read the article

  • How to print Linux command output to a file?

    - by vijay.shad
    Hi, I am creating a script to sync my important documents between two systems, and I want the script to generate a log file of the last action. Can you suggest a way to achieve this? Question: if I execute the rsync command with the -v flag, it prints a lot of messages to the console. Is there any way I can redirect these logs to a file?
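
    Two common ways, sketched below with placeholder paths: plain shell redirection of rsync's output, or rsync's own --log-file option, which logs to a file while still printing to the console.

      # Sketch only: placeholder source, destination and log paths.
      rsync -av /home/me/docs/ otherhost:docs/ > /var/log/docsync.log 2>&1
      # or:
      rsync -av --log-file=/var/log/docsync.log /home/me/docs/ otherhost:docs/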

    Read the article

  • Removing files on a limit access backup server

    - by Bart van Heukelom
    I have an account on a backup server but it's full, so I need to clear it. The problem is that it's only accessible via FTP, SFTP and rsync (no shell). Deleting lots of small files (as in, multiple full Linux installations), which I have to do, is not workable over FTP/SFTP because those protocols cannot recursively delete directories in one command (yes, most clients will fake this by issuing all the separate commands for you, but the overhead is huge and the process takes several days... well, it crashes before that). What do I do?
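
    Since rsync access is available, one commonly used trick is to sync an empty local directory over the directory to be cleared with --delete, which removes everything on the server side in a single rsync run; the server, module and path below are placeholders, and the same idea works whether the access is rsync:// daemon style or rsync over SSH.

      # Sketch only: everything under the target path on the server is deleted.
      mkdir -p /tmp/empty
      rsync -a --delete /tmp/empty/ rsync://user@backupserver/backup/old-installs/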

    Read the article

  • Secure, efficient, version-preserving, filename-hiding backup implemented in this way?

    - by barrycarter
    I tried writing a "perfect" backup program (below), but ran into problems (also below). Is there an efficient/working version of this?

    Assumptions: you're backing up from 'local', which you own and which has limited disk space, to 'remote', which has infinite disk space and belongs to someone else, so you need encryption. Network bandwidth is finite. 'local' keeps a db of backed-up files with this data for each file:

      - filename, including full path
      - file's last modified time (mtime)
      - sha1sum of file's unencrypted contents
      - sha1sum of file's encrypted contents

    Given a list of files to back up (some perhaps already backed up), the program runs 'find' and gets the full path/mtime for each file (this is fairly efficient; conversely, computing the sha1sum of each file would NOT be efficient). The program discards files whose filename and mtime are in the 'local' db. The program then computes the sha1sum of the unencrypted contents of each remaining file. If the sha1sum matches one in the 'local' db, we create a special entry in the 'local' db that points this file/mtime to the file/mtime of the existing entry. Effectively, we're saying "we have a backup of this file's contents, but under another filename, so no need to back it up again". For each remaining file, we encrypt the file, take the sha1sum of the encrypted file's contents, and rsync the file to a path derived from that sha1sum. Example: if the file's encrypted sha1sum was da39a3ee5e6b4b0d3255bfef95601890afd80709, we'd rsync it to /some/path/da/39/a3/da39a3ee5e6b4b0d3255bfef95601890afd80709 on 'remote'. Once the step above succeeds, we add the file to the 'local' db. Note that we efficiently avoid computing sha1sums and encrypting unless absolutely necessary. Note: I don't specify the encryption method; this would be the user's choice.

    The problems: we must encrypt and back up the 'local' db regularly, but the 'local' db grows quickly and rsync'ing encrypted files is inefficient, since a small change in the 'local' db means a big change in the encrypted version of the 'local' db. We create a file on 'remote' for each file on 'local', which is ugly and excessive. We query the 'local' db frequently, and even with indexes these queries are slow, since we're often making one query per file; it would be nice to speed this up by batching queries or something. Probably other problems that I've now forgotten.
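
    A minimal sketch of just the "encrypt, hash, rsync to a path derived from the sha1sum" step described above; the GPG recipient, remote host and base path are placeholders, and the database bookkeeping is left out.

      # Sketch only: upload one file under its encrypted sha1sum.
      f=/home/me/docs/report.txt
      enc=$(mktemp -u)                                   # name only; gpg creates the file
      gpg --encrypt --recipient backup@example.com --output "$enc" "$f"
      sum=$(sha1sum "$enc" | awk '{print $1}')
      dir="${sum:0:2}/${sum:2:2}/${sum:4:2}"
      ssh remote "mkdir -p /some/path/$dir"
      rsync "$enc" "remote:/some/path/$dir/$sum"
      rm -f "$enc"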

    Read the article

  • rsnapshot backup TO remote server

    - by Zulakis
    I just bought 100 GB of "cloud" space at Strato's HiDrive for remote server backups. They offer the following services: sftp, webdav, smb/cifs, rsync, scp. Now I want to do a remote backup to my backup space using rsnapshot. All the examples I found were only for backing up FROM remote servers to the local machine, not for backing up TO a remote server. How can I do incremental backups with rsnapshot using one of the protocols above?
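
    rsnapshot itself is pull-based, so one hedged workaround is to mount the remote space locally (e.g. over SFTP with sshfs) and point snapshot_root at the mount; the hostname and paths below are placeholders, and this only works if rsnapshot's hard-link rotation (cp -al) is supported across that mount.

      # Sketch only: placeholder host and paths; rsnapshot.conf fields are tab-separated.
      sshfs user@hidrive.example.com:/users/user/backup /mnt/hidrive
      #   in /etc/rsnapshot.conf:  snapshot_root   /mnt/hidrive/rsnapshot/
      rsnapshot daily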

    Read the article

  • How to remove directories from source after copying them?

    - by user55542
    I just want to move dirs. I looked successively at mv, cp and rsync, since each tool in turn didn't seem to have an option to remove directories from the source after copying them. For instance, mv needs files, not dirs, when src and dst are on different devices: "inter-device move failed: src to dst; unable to remove target: Is a directory". Perhaps the simplest way to do this is with an additional deletion command, although I'd prefer not to use one, since that increases the risk of data loss.
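
    For the record, a hedged sketch of the rsync route (paths are placeholders): --remove-source-files deletes each source file only after it has been transferred, and a follow-up find removes the directories once they are empty, which keeps the extra deletion step limited to empty directories.

      # Sketch only: move the tree, then prune the emptied source directories.
      rsync -a --remove-source-files /mnt/src/photos/ /mnt/dst/photos/
      find /mnt/src/photos/ -depth -type d -empty -delete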

    Read the article

  • Alternative to Dropbox (on my server)?

    - by jweede
    I love using Dropbox to sync files between all my machines, and I've heard it uses rsync internally to keep files synced. Sometimes I need to sync very large things, and I don't necessarily want to pay for storage space on someone else's server when I have my own. So does anyone know of any nice cross-platform (pref. open source) automatic file-sync applications out there for this?

    Read the article

  • Backup linux to ftp server

    - by Alakdae
    What do you use for backups to an FTP server? I've tried a setup with Amanda and virtual tapes on the FTP server mounted with curlftpfs, and I'm not satisfied with it; I just don't feel confident about Amanda. Also, I cannot use anything based on rsync on the FTP-mounted filesystem, because rsync only creates the directories and doesn't create files, as it cannot execute "mkstemp". I've been thinking about Bacula, but I can't find any good HOWTO for it.
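
    The mkstemp failure comes from rsync's default habit of writing to a temporary file and then renaming it; a hedged sketch of a possible workaround on a curlftpfs mount is --inplace, which writes directly to the destination file (paths are placeholders, and permissions/ownership generally cannot be set over FTP anyway).

      # Sketch only: no temporary files; size-based comparison since FTP timestamps are unreliable.
      rsync -rv --inplace --size-only /srv/data/ /mnt/ftp-backup/data/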

    Read the article

  • SSH error: "Corrupted MAC on input" or "Bad packet length"

    - by William Ting
    I have 3 boxes: AUS, DFW and SFO. The DFW box can communicate with SFO / the internet just fine, and I can send files AUS → DFW. However, every time I try transferring DFW → AUS it fails over SSH (ssh client, rsync, scp, sftp, etc.) with the following error: "Corrupted MAC on input. Disconnecting: Packet corrupt". Occasionally I'll get a different error: "Bad packet length 2097180. Disconnecting: Packet corrupt". I've restarted the DFW box, as well as replaced the network cable, and I'm not sure what else might be causing problems. Right now, to get files from DFW I have to go DFW → SFO → AUS, which is not optimal.

    Read the article

  • External drive hanging, load average through the roof

    - by Paul Tomblin
    I have an external USB drive, and I run an hourly rsync to it as a backup. This has been working fine for years. This weekend, I got two new 2 TB internal drives and decided it was time to re-install Ubuntu from scratch to clear out all the old cruft. About once a day since the re-install, the backup script hangs hard, usually in the "rm -rf" I do before the rsync. By the time I notice the problem, my load average is in the stratosphere and climbing fast (one time, it was over 150), but anything that doesn't touch the drive seems to be running fine.

    One thing that I find suspicious is that something, I don't know what, is running a "smartctl" and a "hdparm" command on the USB drive. I'm pretty sure smartctl isn't supposed to run on external drives, and I can't figure out what's doing it, either. Here's part of "ps auwwfx" when it's hung:

      root  7310 0.0 0.0  4248  352 ? D 20:15 0:00 /sbin/hdparm -C /dev/sdd
      root  7808 0.0 0.0 17372 1632 ? D 20:15 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root  8427 0.0 0.0  4248  356 ? D 20:20 0:00 /sbin/hdparm -C /dev/sdd
      root  8925 0.0 0.0 17372 1628 ? D 20:20 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root  9529 0.0 0.0  4248  356 ? D 20:25 0:00 /sbin/hdparm -C /dev/sdd
      root 10026 0.0 0.0 17372 1628 ? D 20:25 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 10655 0.0 0.0  4248  356 ? D 20:30 0:00 /sbin/hdparm -C /dev/sdd
      root 11151 0.0 0.0 17372 1632 ? D 20:30 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 11774 0.0 0.0  4248  356 ? D 20:35 0:00 /sbin/hdparm -C /dev/sdd
      root 12271 0.0 0.0 17372 1628 ? D 20:35 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 12878 0.0 0.0  4248  352 ? D 20:40 0:00 /sbin/hdparm -C /dev/sdd
      root 13374 0.0 0.0 17372 1632 ? D 20:40 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 14011 0.0 0.0  4248  352 ? D 20:45 0:00 /sbin/hdparm -C /dev/sdd
      root 14507 0.0 0.0 17372 1628 ? D 20:45 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 15116 0.0 0.0  4248  352 ? D 20:50 0:00 /sbin/hdparm -C /dev/sdd
      root 15612 0.0 0.0 17372 1632 ? D 20:50 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 16223 0.0 0.0  4248  352 ? D 20:55 0:00 /sbin/hdparm -C /dev/sdd
      root 16734 0.0 0.0 17372 1632 ? D 20:55 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 17345 0.0 0.0  4248  352 ? D 21:00 0:00 /sbin/hdparm -C /dev/sdd
      root 17842 0.0 0.0 17372 1628 ? D 21:00 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 18463 0.0 0.0  4248  352 ? D 21:05 0:00 /sbin/hdparm -C /dev/sdd
      root 18960 0.0 0.0 17372 1628 ? D 21:05 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 19598 0.0 0.0  4248  356 ? D 21:10 0:00 /sbin/hdparm -C /dev/sdd
      root 20096 0.0 0.0 17372 1628 ? D 21:10 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 21280 0.0 0.0  4244  356 ? D 21:15 0:00 /sbin/hdparm -C /dev/sdd
      root 21784 0.0 0.0 17372 1632 ? D 21:15 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 22414 0.0 0.0  4244  356 ? D 21:20 0:00 /sbin/hdparm -C /dev/sdd
      root 22912 0.0 0.0 17372 1628 ? D 21:20 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 23541 0.0 0.0  4244  356 ? D 21:25 0:00 /sbin/hdparm -C /dev/sdd
      root 24038 0.0 0.0 17372 1632 ? D 21:25 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
      root 24658 0.0 0.0  4244  356 ? D 21:30 0:00 /sbin/hdparm -C /dev/sdd
      root 25157 0.0 0.0 17372 1628 ? D 21:30 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd

    Why is this happening, and how can I stop it?

    Read the article
