Search Results

Search found 698 results on 28 pages for 'rsync'.

Page 24 of 28

  • Mirroring of Apps across servers

    - by user1038814
    We wish to host multiple apps across multiple servers. What we are looking for (ideally) is an existing solution which will work. For example, normally we'd follow a route like this (for failover):
    - The app is installed on one server along with its MySQL database.
    - The app is also installed on a second server.
    - Rsync is used to mirror the files over to the second server and ensure consistency.
    - MySQL is installed in a master-slave setup.
    - We use a service such as DNS Made Easy which has DNS failover; if one server goes down, it automatically routes traffic to the backup server.
    We have done the above a few times and generally it's fine. The issue I have here is that the above is for one app. What I would like to look at is how we can manage this for multiple apps, and whether there is a layer (such as VMware) that has complete mirroring built in at the OS level. For example, how do web hosts currently ensure that more than one machine is running a bunch of hosted websites? If you were running hosting and you had 200 clients on a server, you would want the same clients across 2 or more servers and want everything mirrored. Any advice would be much appreciated.
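
    For the file-mirroring leg of such a setup, a minimal sketch (assuming SSH key authentication between the servers; paths and hostnames are hypothetical) is a cron-driven rsync:

      #!/bin/bash
      # Mirror the app's files to the standby server.
      rsync -az --delete /var/www/myapp/ deploy@standby.example.com:/var/www/myapp/

    Run from cron every few minutes, this keeps the standby close to current; the MySQL side still needs its own master-slave replication, as described above.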


  • need a different backup solution

    - by DigitalJedi
    I just built a new media/backup server using Ubuntu 12.04 64-bit. I installed a hard drive to be used only for music, pictures, and videos, and formatted it FAT32 so my one and only Windows PC could map those folders as net shares. My laptop, also running Ubuntu 12.04, is what I use the most, so new media lands on my laptop first. I've already got the music, videos, and pictures folders from my server mounting as shares on my laptop at boot, thanks to some fstab edits and sshfs. Now I want either an app or a script that can back up any new files I add to my local media folders to the mounted folders on my server. I've been Googling all day and found a few apps like rsync, but they seem to have issues with ext4-to-vfat backups. I thought maybe a script would be best, but I'm new to scripting in Linux and don't want to mess anything up. Basically I am looking for something that will back up only newly added files to the server; I figure I could schedule it once a week. There are some stipulations. For example, my local music folder has over 700 folders, one per artist/band, with subfolders inside those for albums. I want something smart enough to copy only newly added content, so I'm guessing the modified date would be a good condition if I were scripting. I'm rambling. Any suggestions would be GREATLY appreciated; I'm not finding anything to suit my needs. I'm almost to the point of just learning bash scripting so I can write something, but then it would be a couple of weeks before I have a possible solution, and I'd like something in place sooner.
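
    For what it's worth, rsync's trouble with ext4-to-vfat copies usually comes from FAT's coarse timestamps and its lack of Unix ownership/permissions; a hedged sketch that sidesteps both (paths are hypothetical) could look like:

      #!/bin/bash
      # Copy only new/changed media to the FAT32-backed mount.
      # -a is deliberately avoided: FAT can't store perms/owner/group.
      # --modify-window=2 tolerates FAT's 2-second timestamp resolution.
      rsync -rtv --modify-window=2 --no-perms --no-owner --no-group \
          ~/Music/ ~/mounts/server/music/

    Repeated for each media folder and dropped into a weekly cron entry, this copies only content that is new or changed since the last run.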


  • Pushing image changes to multiple servers

    - by gms8994
    I need the ability to push images out to multiple servers whenever they're updated. I've looked at network filesystems, but they're all but worthless due to their speed. Images can be uploaded to any one of 3 servers, and would then need to be copied to the other 2. Any suggestions? I'm open to trying just about anything.
    EDIT: Graphics data (jpg, gif, png, etc.), Linux only, all on the local network. We're currently using rsync, but having it work back and forth is getting cumbersome.
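
    A hedged sketch of the fan-out step, assuming the upload handler knows the file's path and the other two hosts (all names here are hypothetical):

      #!/bin/bash
      # Push a newly uploaded image to the two sibling servers in parallel.
      FILE="$1"    # e.g. /var/www/images/photo123.jpg
      for peer in img2.local img3.local; do
          rsync -az "$FILE" "$peer:$(dirname "$FILE")/" &
      done
      wait

    Tools like lsyncd build the same idea on top of inotify, watching a directory and rsyncing changes out automatically, which removes the back-and-forth bookkeeping.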


  • How do you limit the bandwidth for a file copy?

    - by wizard
    I've got an old Windows 2000 box in a remote location, with a T1 connection and a VPN to my location. I normally use SMB mounts to transfer files, but now it's time to decommission the server and copy its backups to my location. I have about 40 gigabytes (compressed) to copy. I'm prepared for it to take a long time, but I have a few caveats:
    - I need to limit the bandwidth so Terminal Services connections to the site are not affected.
    - I want to be able to resume a partial transfer.
    There are a few small files and several large files (10-20 gigabytes). I'm familiar with rsync on *nix platforms, but have had bad luck with it on Windows, and I don't know that it will really keep partially transferred files. What do you use?
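
    rsync does cover both caveats, assuming an rsync build is available on the Windows 2000 end (e.g. via Cygwin or cwRsync; that availability is the assumption here, not a given). A sketch:

      # Cap the transfer at ~100 KB/s so the T1 keeps headroom for
      # Terminal Services; --partial keeps half-copied files so a
      # re-run resumes instead of starting over.
      rsync -av --partial --progress --bwlimit=100 \
          user@remote:/backups/ /local/backups/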


  • Why is RAM usage so high on an idle server? [duplicate]

    - by DeeDee
    This question already has an answer here: "Why is Linux reporting 'free' memory strangely?" (2 answers)
    I'm investigating a server used for scientific data analysis. It's running RHEL 6.4 and has almost 200GB of RAM. It's been running very slowly for users via SSH, and after some poking around I quickly noticed that the RAM usage was sky-high. What's odd is that even in an idle state it's still using a ton of RAM. I also looked via htop and I can't see that any running process is using more than 0.1% of the RAM, so I wonder what's going on. Right now the only user-initiated process running is an rsync between two NFS-mounted shares. I tried rebooting the server and it was much more responsive for a few minutes, but then memory usage shot up again. Is there any way I can pinpoint why memory usage is so high?
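
    The likely explanation (as the linked duplicate discusses) is the page cache: the kernel parks recently read file data in otherwise-idle RAM and releases it on demand, and the free command counts that as "used". A quick hedged check:

      # The "-/+ buffers/cache" line shows usage with cache factored out;
      # "used" there should be modest on an idle box.
      free -m

      # Cached and Buffers are reclaimable; very large values are normal
      # while an rsync between NFS mounts is streaming lots of file data.
      grep -E 'MemFree|Buffers|^Cached' /proc/meminfo

    If the second number says most of the 200GB is cache rather than process memory, the RAM figure is healthy and the SSH slowness has another cause.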


  • Solution for file store needing large number of simultaneous connections

    - by Tennyson H
    So I'm fairly new to large-scale architectures. We're currently using Linode instances for our project, but we're brainstorming about scaling. We need a file store system that can deliver ~50mb folders (user data) to our computing instances in a reasonable amount of time (<20 sec), scale to 10000+ total users, and handle perhaps 100+ simultaneous transfers. We are also unsure whether to network-mount (sshfs/nfs) or just do a full transfer from store to instance at the beginning and an rsync from instance back to store at the end. I've experimented with SSHFS between our little Linode instances, but it seems to be bottlenecked at 15mb/s total bandwidth, which wouldn't do under 10+ concurrent transfers, let alone scale very large. I also tried to investigate NFS but couldn't get it working, and I have little hope that it'll do within our Linode network. Are there tools on other cloud providers that match our needs? Should we be mounting, or should we be transferring? Thanks very much!
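
    For reference, a minimal sketch of the transfer approach described above (all names hypothetical): pull the user's folder down before the job and push results back after, letting rsync skip unchanged files on the return trip:

      #!/bin/bash
      USER_ID="$1"
      STORE="store.internal:/data/users"   # hypothetical file-store host

      rsync -az "$STORE/$USER_ID/" "/scratch/$USER_ID/"   # before the job
      # ... run the computation against /scratch/$USER_ID ...
      rsync -az "/scratch/$USER_ID/" "$STORE/$USER_ID/"   # after the job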


  • Program to Queue Files for Copy/Move/Delete in Linux?

    - by laliga
    I've searched the net for Linux's answer to something like TeraCopy (Windows), but could not find anything suitable. The closest things I found are:
    - Krusader: mentioned in their features, but indicated as "not implemented yet".
    - MiniCopier: a Java-based app, http://a.courreges.free.fr/projets/minicopier/minicopier-en.php
    rsync is not an option. Can someone recommend a simple file copy tool that can queue files for copy/move/delete? Preferably one where I can drag and drop from Nautilus. If something like this does not exist, can someone please tell me why? ...am I the only person that needs something like this?
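
    In the absence of a ready-made tool, a minimal queueing wrapper is possible in plain shell; this is a sketch only, with hypothetical paths, and the copies run strictly one at a time:

      #!/bin/bash
      # Worker: serially process "src dst" pairs appended to the queue file.
      # Enqueue a copy with:  echo "/path/src /path/dst" >> /tmp/copy.queue
      # (paths with spaces would need quoting or a different delimiter)
      QUEUE=/tmp/copy.queue
      touch "$QUEUE"
      tail -f "$QUEUE" | while read -r src dst; do
          cp -a -- "$src" "$dst" && echo "done: $src -> $dst"
      done

    No GUI or drag-and-drop, obviously, but it demonstrates the queue-then-copy behaviour the question is after.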


  • sync two huge filesystems

    - by guettli
    I need to sync two huge file systems. Both sides run Linux with full root access. My preferred solution: I can read the list of changed files and directories and sync only the changed files. Here are some solutions and why they don't fit:
    - rsync: needs to check all files recursively. There are several million files and only small changes; the check takes too long.
    - unison: the same, it needs to check all files.
    - inotify: I would need a watch for every directory, and there are too many. inotify was not built for "watch all files" scenarios.
    - DRBD: both sides should run independently.
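
    With root on both sides, one hedged way to build that changed-file list without rsync's remote comparison (paths hypothetical) is a timestamp file plus find -newer, fed into rsync --files-from:

      #!/bin/bash
      STAMP=/var/lib/sync/last-run
      SRC=/data
      cd "$SRC" || exit 1
      # Collect files changed since the previous run...
      find . -type f -newer "$STAMP" > /tmp/changed.list
      touch "$STAMP"
      # ...and transfer only those (note: deletions are not propagated).
      rsync -a --files-from=/tmp/changed.list "$SRC"/ root@other:/data/

    This still walks the tree, but as a single local stat pass with no per-file comparison against the remote side, which is usually the slow part.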


  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user298814
    Hi, we've spent the last two days trying to get squid 2.7 to work with Ubuntu 9.10. The computer running Ubuntu has two network interfaces, eth0 and eth1, with DHCP running on eth1. Both interfaces have static IPs; eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of result: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN we get the following message on the client machine:

      ERROR
      The requested URL could not be retrieved

      Invalid Request error was encountered while trying to process the request:

        GET / HTTP/1.1
        Host: www.seriouswheels.com
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9
        Cache-Control: max-age=0
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5
        Accept-Encoding: gzip,deflate,sdch
        Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

      Some possible problems are:
        - Missing or unknown request method.
        - Missing URL.
        - Missing HTTP Identifier (HTTP/1.0).
        - Request is too large.
        - Content-Length missing for POST or PUT requests.
        - Illegal character in hostname; underscores are not allowed.

      Your cache administrator is webmaster.

    Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, and /var/log/squid/access.log. Something somewhere is wrong, but we cannot figure out where. Our end goal for all of this is to superimpose content onto every page that a client requests on the LAN. We've been told that squid is the way to do this, but at this point we are just trying to get squid set up correctly as our proxy. Thanks in advance.

    squid.conf:

      acl all src all
      acl manager proto cache_object
      acl localhost src 127.0.0.1/32
      acl to_localhost dst 127.0.0.0/8
      acl localnet src 192.168.0.0/24
      acl SSL_ports port 443         # https
      acl SSL_ports port 563         # snews
      acl SSL_ports port 873         # rsync
      acl Safe_ports port 80         # http
      acl Safe_ports port 21         # ftp
      acl Safe_ports port 443        # https
      acl Safe_ports port 70         # gopher
      acl Safe_ports port 210        # wais
      acl Safe_ports port 1025-65535 # unregistered ports
      acl Safe_ports port 280        # http-mgmt
      acl Safe_ports port 488        # gss-http
      acl Safe_ports port 591        # filemaker
      acl Safe_ports port 777        # multiling http
      acl Safe_ports port 631        # cups
      acl Safe_ports port 873        # rsync
      acl Safe_ports port 901        # SWAT
      acl purge method PURGE
      acl CONNECT method CONNECT
      http_access allow manager localhost
      http_access deny manager
      http_access allow purge localhost
      http_access deny purge
      http_access deny !Safe_ports
      http_access deny CONNECT !SSL_ports
      http_access allow localhost
      http_access allow localnet
      http_access deny all
      icp_access allow localnet
      icp_access deny all
      http_port 3128
      hierarchy_stoplist cgi-bin ?
      cache_dir ufs /var/spool/squid/cache1 1000 16 256
      access_log /var/log/squid/access.log squid
      refresh_pattern ^ftp: 1440 20% 10080
      refresh_pattern ^gopher: 1440 0% 1440
      refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
      refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
      refresh_pattern . 0 20% 4320
      acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
      upgrade_http0.9 deny shoutcast
      acl apache rep_header Server ^Apache
      broken_vary_encoding allow apache
      extension_methods REPORT MERGE MKACTIVITY CHECKOUT
      cache_mgr webmaster
      cache_effective_user proxy
      cache_effective_group proxy
      hosts_file /etc/hosts
      coredump_dir /var/spool/squid

    access.log:

      1269243042.740 0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html

    00-firewall:

      iptables -F
      iptables -t nat -F
      iptables -t mangle -F
      iptables -X
      echo 1 | tee /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A POSTROUTING -j MASQUERADE
      iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128

    /etc/network/interfaces:

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet static
        address 142.104.109.179
        netmask 255.255.224.0
        gateway 142.104.127.254

      auto eth1
      iface eth1 inet static
        address 192.168.1.100
        netmask 255.255.255.0
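
    For what it's worth, the access.log line TCP_DENIED/400 ... GET NONE:// is the classic signature of intercepted traffic arriving on a port that squid is treating as a normal proxy port. In squid 2.6/2.7 the port has to be flagged for transparent operation, so a likely (though hedged) fix is one word in squid.conf:

      # Tell squid this port receives iptables-REDIRECTed traffic:
      http_port 3128 transparent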


  • How to do a "git export" (like "svn export")

    - by Greg Hewgill
    I've been wondering whether there is a good "git export" solution that creates a copy of a tree without the .git repository directory. There are at least three methods I know of:
    1. git clone followed by removing the .git repository directory.
    2. git checkout-index alludes to this functionality, but starts with "Just read the desired tree into the index...", which I'm not entirely sure how to do.
    3. git-export is a third-party script that essentially does a git clone into a temporary location followed by rsync --exclude='.git' into the final destination.
    None of these solutions really strikes me as being satisfactory. The closest one to svn export might be option 1, because both require the target directory to be empty first. But option 2 seems even better, assuming I can figure out what it means to read a tree into the index.
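
    "Reading the desired tree into the index" is what git read-tree does, so option 2 expands to something like this sketch (the destination path is hypothetical; the trailing slash on --prefix matters):

      # Populate the index from HEAD (note: this overwrites the current
      # index, so commit or stash first), then write those files out to
      # an empty directory, with no .git metadata.
      git read-tree HEAD
      git checkout-index -a -f --prefix=/tmp/export/

    Newer git also ships git archive, which reaches the same end in a single step by writing a tree to a tar or zip stream.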


  • WinSCP equivalent for Linux/Ubuntu

    - by Shashank
    I'm shifting most of my projects to a Linux machine, and one of the things that I miss is WinSCP. I've found other answers saying that nautilus, FileZilla etc. can be used for SFTP, but something that I loved about WinSCP was that it has two panes (FileZilla's got that) and I could start synchronization from any directory. Unison or Rsync could work, but I'd have to create a folder pair every time I want to sync two folders. Is there an SFTP client for Linux that has a two-paned view and allows ad-hoc synchronization? Thanks!


  • Running Awk command on a cluster

    - by alex
    How do you execute a Unix shell command (an awk script, a pipe, etc.) on a cluster in parallel (step 1) and collect the results back to a central node (step 2)?
    - Hadoop seems to be huge overkill with its 600k LOC, and its performance is terrible (it takes minutes just to initialize a job).
    - I don't need shared memory, or something like MPI/OpenMP, as I don't need to synchronize or share anything.
    - I don't need a distributed VM or anything as complex.
    - Google's Sawzall seems to work only with Google's proprietary MapReduce API.
    - Some distributed shell packages I found failed to compile.
    But there must be a simple way to run a data-centric batch job on a cluster, something as close as possible to the native OS, maybe using Unix RPC calls. I liked rsync's simplicity, but it seems to update remote nodes sequentially, and you can't use it for executing scripts as far as I know. Switching to Plan 9 or some other network-oriented OS looks like another overkill. I'm looking for a simple, distributed way to run awk scripts or similar, as close as possible to the data, with minimal initialization overhead, in a nothing-shared, nothing-synchronized fashion. Thanks, Alex
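
    In the spirit of "as close to the native OS as possible", a hedged sketch using nothing but ssh (hostnames, script, and data paths all hypothetical):

      #!/bin/bash
      # Step 1: run the awk script against each node's local data shard,
      # all nodes in parallel (ssh key auth assumed).
      for host in node1 node2 node3; do
          ssh "$host" "awk -f /opt/job/script.awk /data/shard.txt" \
              > "out.$host" &
      done
      wait
      # Step 2: gather the per-node results on the central node.
      cat out.node* > combined.out

    Initialization overhead is one ssh handshake per node, and nothing is shared or synchronized beyond the final concatenation.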


  • Tools to backup an external hard disk

    - by Kaushik Gopal
    Hey people, what's the best method to take an exact copy of my external hard disk? A guru suggested rsync, but I was wondering if there's an easier alternative. I do remember reading somewhere that Acronis also does this. I was looking for your advice on the best option. I'm running Windows. Essentially I have an external HDD which has a lot of stuff synchronized across various PCs. I wish to take a backup of this external hard disk (external HDDs aren't entirely reliable, so I want to keep a backup). Cheers. K
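
    If you do end up with rsync, on Windows that means a Cygwin or cwRsync build (an assumption worth checking), and an exact mirror is a one-liner; the drive letters here are hypothetical:

      # Mirror the external disk (e:) onto a backup disk (f:);
      # --delete keeps the copy exact by removing files that have
      # disappeared from the source.
      rsync -av --delete /cygdrive/e/ /cygdrive/f/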


  • File system to choose for NAS box based on FreeNAS or OpenFiler [closed]

    - by Chris
    I'm a photographer, and I have a requirement to find a better method of storing my photos than multiple USB2 drives via USB hubs. Currently I use a MacBook Pro and 6 external drives connected via USB2 or FW800; 3 are a copy of the first three, kept up to date manually by running an rsync backup. I'd like to run a FreeNAS or OpenFiler NAS box using 2TB drives mirrored via software RAID. But I would also like the flexibility of plugging into the drive physically for faster throughput when necessary. So, my question is: is there a file system that both *nix and Mac OS X will play nice with? Many thanks, Chris.


  • Mercurial: pull changes from unversioned copy

    - by Austin Hyde
    I am currently maintaining a Mercurial repository of the project I am working on. The rest of the team, however, doesn't. There is a "good" (unversioned) copy of the code base that I can access by SSH. What I would like to do is something like an hg pull from that good copy into my master repository whenever it gets updated. As far as I can tell, there's no obvious way to do this, as hg pull requires the source to be an hg repository. I suppose I could use a utility like rsync to update my repository and then commit, but I was wondering: is there an easier, less contrived way to do this?
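
    The rsync-then-commit route mentioned at the end can be made nearly painless; a sketch (host and paths hypothetical), with .hg excluded so the repository metadata survives the overwrite:

      #!/bin/bash
      # Overlay the "good" copy onto the working directory, then record it.
      rsync -az --delete --exclude='.hg' user@host:/srv/good-copy/ ~/project/
      cd ~/project
      hg addremove    # track any files rsync added or removed
      hg commit -m "Sync from unversioned master copy"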


  • nested if: too many arguments?

    - by FLX
    For some reason this code creates problems:

      source="/foo/bar/"
      destination="/home/oni/"

      if [ -d $source ]; then
          echo "Source directory exists"
          if [ -d $destination ]; then
              echo "Destination directory exists"
              rsync -raz --delete --ignore-existing --ignore-times --size-only --stats --progress $source $destination
              chmod -R 0755 $destination
          else
              echo "Destination directory does not exists"
          fi
      else
          echo "Source directory does not exists"
      fi

    It errors out with:

      Source directory exists
      /usr/bin/copyfoo: line 7: [: too many arguments
      Destination directory does not exists

    I have used nested if statements in bash before without a problem. What simple mistake am I overlooking? Thanks!
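
    For the record, the usual culprit behind "[: too many arguments" is an unquoted variable expanding to several words (or to nothing at all). If the real $destination contains spaces, quoting the expansions is the likely fix; a minimal sketch:

      # Quoting keeps a path with spaces (or an empty variable) as one
      # word, so the test sees exactly one argument:
      if [ -d "$source" ] && [ -d "$destination" ]; then
          rsync -raz --delete "$source" "$destination"
      fi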


  • How does load balancing work with multiple servers with multiple DBs

    - by Matt
    I guess what I'm looking for is a description of how this all works together. I'm used to setting up one server, with maybe another server to handle the DB. My question is: how does the load balancer work, and where do all the script (PHP, Python) files go? If I make a change to one, do I have to rsync it to all the servers that the balancer refers to? Also, does each server need the DB client libraries installed so it can reference the DBs that are on other servers? If there is a site that explains all this, I would be happy to read it.


  • Is there a distributed VCS that can manage large files?

    - by joelhardi
    Is there a distributed version control system (git, bazaar, mercurial, darcs, etc.) that can handle files larger than available RAM? I need to be able to commit large binary files (e.g. datasets, source video/images, archives), but I don't need to be able to diff them, just to commit and then update when the file changes. I last looked at this about a year ago, and none of the obvious candidates allowed this, since they're all designed to diff in memory for speed. That left me with a VCS for managing code and something else ("asset management" software, or just rsync and scripts) for large files, which is pretty ugly when the directory structures of the two overlap.


  • How to make an ISO copy of the Linux filesystem and user files of a Debian-based VPS?

    - by moogeek
    Hello! I have a Debian-based VPS at a hosting provider. I want to migrate away from it, and I need to make a full copy of the whole Linux filesystem (and installed packages) plus the home directory with the website files, and then pack/convert it to an ISO image so that I can use it on cloud hosts like Amazon. The problem is that I have only root SSH access, and hosting support can't do this for me. The second part of the question: is it possible to enlarge the Linux filesystem, without re-installing it, by using the free space of the home directory? I guess it is possible with rsync or something like that. Will my MySQL databases be copied together with all the other data? Thanks in advance!
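
    With only root SSH access, the commonly suggested substitute for a literal ISO is a full-system tarball streamed over SSH, which can later be unpacked onto a new disk image or fed to a cloud import tool; a hedged sketch (hostname hypothetical):

      # Stream a full-system archive to the local machine, skipping
      # pseudo-filesystems that must not be captured.
      ssh root@vps 'tar czf - --exclude=/proc --exclude=/sys \
          --exclude=/dev --exclude=/tmp /' > vps-image.tar.gz

    Note that MySQL's data files copied this way from a running server may be inconsistent; taking a mysqldump first is the safer route for the databases.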


  • How to run an existing symfony 1.4.18 project on localhost

    - by nickleefly
    I have all the symfony 1.4.18 source files in a folder called symfony, and I have the database. Under this symfony folder there are apps, cache, config, data, lib, log, plugins, test, web, .gitignore, symfony. What do I need to do to make it work on another shared host server? How can I make it run on localhost? The config files are the following.

    Under the config folder, vhost.conf:

      DocumentRoot /var/www/vhosts/example.com/httpdocs/web
      <Directory /var/www/vhosts/example.com/httpdocs/web>
          Options +FollowSymlinks
          php_admin_value open_basedir "/var/www/vhosts/example.com/httpdocs:/tmp:/usr/local/php:/usr/share/pear"
      </Directory>

    properties.ini:

      [prod]
      host=youripaddress
      port=22
      user=yourusername
      dir=/var/www/vhosts/example.com/httpdocs/
      type=rsync
      password=yourpassword

    databases.yml:

      all:
        doctrine:
          class: sfDoctrineDatabase
          param:
            dsn: 'mysql:host=localhost;dbname=yourdbname'
            username: yourdbusername
            password: yourdbpassword
            attributes:
              default_table_type: InnoDB
              default_table_collate: utf8_unicode_ci
              default_table_charset: utf8
              use_dql_callbacks: true
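
    Once the code is in place and databases.yml points at the local MySQL instance, a hedged sketch of the usual symfony 1.4 first steps, run from the project root (this assumes the web server's document root is pointed at the project's web/ directory, mirroring the vhost.conf above):

      php symfony project:permissions   # fix cache/ and log/ permissions
      php symfony cache:clear           # drop stale cache from the old host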


  • Questions about using git as a backend storage system

    - by XO
    New to git here... I want to commit my personal file share to a git repo (text, docs, images, etc.). As I make modifications to various files over time, telling git about them along the way, how do I go about things so I can:
    - get out of the business of traditional fulls/incrementals, and
    - be able to do a point-in-time file or full-clone restore?
    Basically, I want something granular, such that if I make an edit to a file 5 times on a particular day, I will have 5 versions of that file that I can refer back to, forever. Or even just derive a full copy of everything the way it looked on that particular day. I am currently using rsync for remote incremental syncs (no file versioning).
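
    What's described is close to git's native model; a hedged snapshot loop (path hypothetical) might be as small as:

      #!/bin/bash
      # One-time setup:  cd /srv/share && git init && git add -A && git commit -m initial
      # Then snapshot on demand (or from cron); every run that finds
      # changes becomes a point-in-time version restorable later with
      # git checkout.
      cd /srv/share
      git add -A
      git commit -m "snapshot $(date -Iseconds)" || true   # no-op when clean

    Each commit records exactly the files that changed, so five edits committed on one day yield five recoverable versions of that file.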


  • Does anyone have experience of using the new p4 replicate command in their Perforce back-up/restore process?

    - by Thomas Corriol
    Hi all, we recently performed an upgrade of our whole Perforce system to 2009.02. During this exercise, we noticed that the back-up/restore process installed here by a Perforce consultant a year ago was not completely working: the verify command has never worked (scary!). As we are obliged to revisit our back-up/restore scripts, I was toying with the idea of using the new p4 replicate command. The idea is to use it alongside an rsync of the data files, so that in case of a crash we would lose at worst an hour of work (if we execute them every hour). Does anyone have experience with, or an example of, back-up/restore scripts using the p4 replicate command of the 2009.02 version? Thanks, Thomas


  • How to auto-deploy web-app

    - by Frankie
    Hello, I'm trying to make sense of the best way to automate a series of steps in order to deploy a web app, and haven't yet come up with a suitable solution. I would like to:
    - use Google's compiler.jar to minify my JS
    - use Yahoo's yui-compressor.jar to minify my CSS
    - access a file and change a string so that header files like "global.css?v=21" get served with the correct version
    - deploy the app (sftp, mercurial or rsync?) omitting certain directories like "/userfiles"
    Can you guys put me on the right track to solve this? Thank you!
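
    All four steps fit in one small script; a hedged sketch where the jar locations, file names, and the version token are hypothetical:

      #!/bin/bash
      set -e
      # 1. Minify JS with Google's Closure Compiler.
      java -jar compiler.jar --js js/app.js --js_output_file js/app.min.js
      # 2. Minify CSS with the YUI Compressor.
      java -jar yuicompressor.jar --type css -o css/global.min.css css/global.css
      # 3. Bump the cache-busting version string in the header template.
      sed -i 's/global\.css?v=[0-9]*/global.css?v='"$(date +%s)"'/' templates/header.php
      # 4. Deploy, leaving user uploads alone.
      rsync -az --delete --exclude='/userfiles' ./ user@server:/var/www/app/

    Using a timestamp as the version token sidesteps tracking a counter; any scheme that changes the string on each deploy achieves the same cache bust.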


  • $HOME git repo (selectively) to github?

    - by user428502
    I keep many files in my home directory under git: important dotfiles, my thesis, etc. I want to push certain files to github, e.g. my emacs configuration, to share. Obviously, I don't want to push the entire repo. Are submodules the way to go? My first thought is to make a directory ~/github/emacs, rsync selected files there, then add a submodule under that directory, pointing to github, to push. Is this a good idea, or is there a better way? (I don't want my local git repo, the one storing all my files, to get muddled up with this stuff, though.)


  • Is it possible to use os.walk over SSH?

    - by LeoB
    I'm new to Python, so forgive me if this is basic; I've searched but can't find an answer. I'm trying to convert a Perl script into Python (3.x) which connects to a remote server and copies the files in a given directory to the local machine. Integrity of the transfer is paramount, and there are several steps built in to ensure a complete and accurate transfer. The first step is to get a complete listing of the files to be passed to rsync. The Perl script has the following lines to accomplish this:

      @dir_list = `ssh user@host 'find $remote_dir -type f -exec /bin/dirname {} \\;'`;
      @file_list = `ssh user@host 'find $remote_dir -type f -exec /bin/basename {} \\;'`;

    The two lists are then joined to create $full_list. Rather than open two separate ssh instances, I'd like to open one and use os.walk to get the information using:

      for remdirname, remdirnames, remfilesnames in os.walk(remotedir):
          for remfilename in remfilesnames:
              remfulllist.append(os.path.join(remdirname, remfilename))

    Thank you for any help you can provide.
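
    For what it's worth, os.walk only ever sees the local filesystem, so it cannot traverse the remote side by itself. Staying in the Python of the entry above, a hedged sketch that keeps the single-connection idea by running one remote find (host and directory hypothetical):

      import subprocess

      def remote_file_list(host, remote_dir):
          # One ssh round trip: let the remote find(1) print full paths,
          # replacing the separate dirname/basename passes.
          out = subprocess.check_output(
              ["ssh", host, "find", remote_dir, "-type", "f"])
          return out.decode().splitlines()

      full_list = remote_file_list("user@host", "/remote/dir")

    Libraries such as paramiko can do the same over SFTP without shelling out, at the cost of an extra dependency.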

