Search Results

Search found 9847 results on 394 pages for 'cloud backup'.

Page 39/394 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • Laptop Backup Software (Corporate)?

    - by Hutch
    I wondered if any of you who manage a fleet of laptops are using anything to back them up, and if so, what? In particular I'm looking for a solution that is totally hands-off once installed, i.e. the user doesn't have to do anything, press anything, or remember to change anything when their domain password changes. Right now we use Druva inSync, which I have to say is pretty damned good; however, our license is up for renewal in a couple of months, so I want to be sure it's the best solution before renewing. The only other vaguely comparable product I know of is from Atempo, but the cost of a SQL Server license is a big problem there. Thanks.

    Read the article

  • Boot to VHD backup plan

    - by Josh Barker
    I have a machine onto which I just reinstalled Windows and all of my applications... what a chore that is. I want to totally and completely avoid doing this again by creating an image. My first thought was to see if it is possible to copy a VHD file while you are booted into it, since I am using Windows 7 Ultimate as boot-to-VHD (without a parent machine). Is this possible, and if so, how could I accomplish it? Keep in mind, this is my personal machine and I'm trying to keep things inexpensive (a good script would work). Thanks, Josh

    Read the article

  • Collect and Backup Photos from Multiple Photographers

    - by Graviton
    I have a few photographers working under me (well, not exactly under me, but I say it that way just for illustration purposes), and they shoot a lot of photos. The problem is that they all store their photos on their own hard disks, and I have no way to retrieve them unless I pass them a USB drive and ask them to fill it with their photos. Very labor-intensive and inefficient! Is there a better (more automated) way of doing this? For fear of losing resolution, I don't really favor an online-sync approach, because I think photos uploaded to any website will suffer a resolution loss, which is the last thing I want. Is there a better idea? Edit: Being as artistic as they are, I can't guarantee that they all use PCs and Windows, so the software must at least be able to run on a Mac.
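
    One lossless and mostly hands-off approach, sketched with hypothetical host names and paths: have each photographer's machine expose its photo folder over SSH, and pull the originals centrally with rsync, which runs on both Mac OS X and Linux and only transfers files that are new or changed, so nothing is recompressed and no resolution is lost.

        #!/bin/sh
        # Pull original photos from each photographer's machine over SSH.
        # Host names, the user name and the paths are placeholders.
        DEST=/srv/photo-archive

        for host in alice-macbook bob-laptop carol-imac; do
            rsync -a --partial \
                "photographer@${host}:Pictures/Shoots/" \
                "${DEST}/${host}/"
        done

    Run from cron on the collecting machine, this needs nothing from the photographers beyond leaving SSH enabled; a Windows laptop would additionally need an rsync/SSH layer such as cwRsync.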

    Read the article

  • backup and file server for 50+ TB of data

    - by a-bomb
    Our office wants to build a new server to handle our data. Over the last 10 years our data was stored on CDs, DVDs and HDDs, but now they want all of it in one place, attached to the network, so that everybody in the office can access it. 20 TB of the data is new and the rest is old; the priority now is to store these 20 TB and gradually move the other 30 TB over time. So what is the best solution? We thought of getting an HP server and connecting it to an external enclosure that holds either tape drives or HDDs (we haven't decided yet), or of getting a NAS and connecting it to the HP server. What should we do? This is all new for us...

    Read the article

  • Backup linux to ftp server

    - by Alakdae
    What do you use for backups to an FTP server? I've tried a setup with Amanda and virtual tapes on the FTP server mounted with curlftpfs, and I'm not satisfied with it; I just don't feel confident about Amanda. I also cannot use anything rsync-based on the FTP-mounted filesystem, because it only creates the directories and doesn't create the files, as it cannot execute "mkstemp". I've been thinking about Bacula, but I can't find a good HOWTO for it.
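
    One more option worth a look, sketched with a placeholder server, user and path: duplicity writes incremental (and optionally encrypted) backups straight to an ftp:// backend, so no curlftpfs mount is involved and no rsync-style mkstemp calls ever hit the FTP server.

        #!/bin/sh
        # Incremental backup of /home straight to an FTP server with duplicity.
        export FTP_PASSWORD='secret'          # read by duplicity for the FTP login
        export PASSPHRASE='gpg-passphrase'    # used to encrypt the backup volumes

        duplicity --full-if-older-than 1M /home \
            ftp://backupuser@ftp.example.com/backups/home

        # Prune old backup chains so the FTP quota is not exhausted.
        duplicity remove-older-than 3M --force \
            ftp://backupuser@ftp.example.com/backups/home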

    Read the article

  • PST backup with Volume Shadow Copy Service

    - by NoMadMan
    I was asked to implement the task of backing up 35 PST files ranging from 800 MB to 2000 MB. Windows XP and Windows 2000 workstations are assigned to the users, and we have a Windows 2000 domain controller that we use to back up files onto 3x 500 GB external hard drives. I found several methods, from applications to scripts; local or remote applications would be my last resort. I came across a script based on the Volume Shadow Copy Service, CopyWithVss. I wanted to know whether there would be a problem if the path had spaces. Would mounting the destination path of each PST folder with a drive letter be more practical? My concern with the mounting option is that I would eventually run out of letters, since I have 35 and possibly more workstations to back up. Lastly, can someone give me an example of CopyWithVss as it would be run on a production network? The script is a bit cryptic even after reading it several times: where in the script do I enter the source and the destination? I'm a Mac user, so please excuse my ignorance of the Windows platform.

    Read the article

  • Restoring file properties but not the complete files, from backup

    - by Jon
    While copying data from my old storage on a Linux computer to the new (Linux-based) NAS, I accidentally failed to carry the file properties (most importantly the modify dates) over to the new location. I have also continued to use and modify the files at the new location and hence cannot just copy everything over again. What I would like to do is a diff between the files in the old and the new storage and, for those that are identical, restore the properties from the Linux storage to the NAS files. Is there a clever way, such as a script or a tool, to do this? I could either run it on the Linux box or, in the worst case, from a remote Windows computer. Grateful for any suggestions. /Jon
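
    A minimal sketch of the script route, assuming the old storage and the NAS share are both mounted on the Linux box at the hypothetical paths below: compare each file byte for byte and, only where the contents match, copy the old timestamp across with touch -r.

        #!/bin/sh
        OLD=/mnt/old-storage      # placeholder: the original Linux storage
        NEW=/mnt/nas              # placeholder: the NAS mount

        cd "$OLD" || exit 1
        find . -type f | while read -r f; do
            # Only touch files that still exist on the NAS and are byte-identical.
            if [ -f "$NEW/$f" ] && cmp -s "$f" "$NEW/$f"; then
                touch -r "$f" "$NEW/$f"    # copy the old mtime onto the NAS copy
            fi
        done

    Files that were modified at the new location fail the cmp test and keep their current dates, which is exactly the behaviour asked for.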

    Read the article

  • How to restore qmail backup files

    - by Maysam
    We are using qmail as our mail application on a Linux server. A few weeks ago our server crashed and we had to install everything from scratch, and our users started to send and receive email again. The problem is that they have lost their old emails. We have a backup of the whole qmail directory, but I don't know how to restore the old emails without losing the new ones. It's worth mentioning that I don't have any problem restoring old sent mail: when I copy email files into the .sent-mail/cur directory they show up again in the users' sent boxes, but restoring files into the /cur directory doesn't work for inbox emails, and I can't get them restored.
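
    A heavily hedged sketch of one approach that is often suggested for Maildir-format mailboxes (the user name and paths are placeholders, and it assumes the backup really is in Maildir format): drop the old inbox messages into new/ rather than cur/, so the POP/IMAP server registers them itself, and make sure ownership matches the live mailbox. Trying it on a single test mailbox first would be prudent.

        #!/bin/sh
        BACKUP=/backup/home/alice/Maildir    # placeholder: restored from the old qmail tree
        LIVE=/home/alice/Maildir             # placeholder: the live mailbox

        # Copy old inbox messages into new/ so they are picked up as freshly
        # delivered mail; -p keeps the original file timestamps.
        cp -p "$BACKUP"/cur/* "$LIVE"/new/

        # Ownership and permissions must match what qmail and the POP/IMAP
        # server expect for this user.
        chown -R alice:alice "$LIVE"

    Messages restored this way may show up as unread, since the read flag lives in the filename suffix used under cur/.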

    Read the article

  • Lazy linux backup system?

    - by Alex
    OK, so I want to say Time Machine, but that's not exactly what I'm looking for. I want to set up a system that will regularly (hourly?) back up the /home directory of our machine. Time Machine-style tools are naturally preferable, since they save space by only saving the changes, but honestly, this is important enough that I can suffer some waste. Any ideas?
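
    A minimal sketch of the usual rsync-plus-hard-links approach (the destination path is a placeholder): every hourly run produces what looks like a full snapshot, but unchanged files are hard links into the previous snapshot, so the space cost is close to incremental. rsnapshot packages the same idea if maintaining a script is too much effort.

        #!/bin/sh
        # Hourly hard-link snapshots of /home.
        DEST=/backup/home-snapshots          # placeholder: path on the backup disk
        NOW=$(date +%Y-%m-%d_%H%M)

        mkdir -p "$DEST"
        rsync -a --delete --link-dest="$DEST/latest" /home/ "$DEST/$NOW/"

        # Point "latest" at the snapshot that was just taken.
        rm -f "$DEST/latest"
        ln -s "$DEST/$NOW" "$DEST/latest"

    The first run is a plain full copy (rsync just warns that the link-dest directory is missing); after that, a crontab line such as 0 * * * * /usr/local/bin/home-snapshot.sh keeps it going.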

    Read the article

  • Using Windows Azure storage for backup

    - by Bruno
    I am currently looking at Windows Azure blobs as an option for backing up archive data. I want to be able to upload files from an external Windows machine over the internet, but I don't know enough about Windows Azure storage to make a decision. Some of the questions I have are: How do I upload the files? Is there a client application, and can I use robocopy? Would it be fast enough, i.e. could I download or upload 1 TB of data in a week? Is it secure? Hopefully someone smarter than me can help me :-)
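
    robocopy itself cannot talk to blob storage, but Microsoft's AzCopy command-line tool can; a sketch, assuming current AzCopy v10 syntax and with the storage account, container and SAS token as placeholders:

        # Upload a local archive folder recursively into a blob container over HTTPS.
        azcopy copy "C:\archive" "https://mystorageaccount.blob.core.windows.net/backups?<SAS-token>" --recursive

    On the speed question, 1 TB in a week works out to a sustained upstream of roughly 13-14 Mbit/s, so the local uplink rather than Azure is usually the limiting factor.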

    Read the article

  • How To Place a Drupal Site onto a server from a backup

    - by CCG121
    I backed up my site, then totally redid my server with a different control panel, which created a different directory structure: /var/www/vhosts/user/site.com/httpdocs. I put the files into the httpdocs folder and restored the database correctly (I think). I can see the main page, but clicking on any link gets me a Not Found message. I have tried running update.php, and I cannot access /user/login either.

    Read the article

  • Scheduled Folder Backup

    - by Junaid Saeed
    I have some folders on the C drive that I work in on a daily basis, and the data in them is very critical. So every night when I shut down my PC, I copy, paste and overwrite these folders to a separate location, so that if the system crashes or something bad happens I will be able to easily format C. I cannot move these folders off the C drive because they include C:\wamp\www\ of the WAMP server and similar folders. Is there a tool with which I can schedule that every day at X time these folders will be backed up to path 'Y'?

    Read the article

  • Backup with bash and rsync...

    - by Roger
    Is there a way to auto-rename an existing file on the receiver? For example: if filename already exists, auto-rename it to something like filename_001, filename_002 and so on... So far all I have is this:

        $ rsync -rh --progress --stats --exclude '.thumb' \
            --update --perms /origin /destination

    By the way, I know rsync has --ignore-existing to "skip updating files that exist on receiver", but I guess what I need would be something like --rename-existing.
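
    There is no --rename-existing, but rsync's --backup option comes close: instead of silently overwriting, the copy already on the receiver is renamed with a suffix before the new file is written. A sketch based on the command above (the timestamp suffix is just one choice; --backup-dir could move the old versions into a separate tree instead):

        rsync -rh --progress --stats --exclude '.thumb' --update --perms \
            --backup --suffix="_$(date +%Y%m%d%H%M%S)" /origin /destination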

    Read the article

  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I am planning to run daily via a cron job to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I have not gotten that far yet. My issue today, however, is that I am uncertain how to log the output of each command for errors, warnings, or other pertinent information the command may produce. I would also like to install some type of fail-safe so that if something goes horribly wrong the script will stop dead in its tracks and notify me via email or something. OK, the email thing is not as critical, but it would be nice. Does anybody have any ideas for that? Here is what I have so far. By the way, both servers are CentOS 6.2 running a standard LAMP stack.

        #!/bin/sh
        #################################
        ### Set Vars
        #################################
        THEDATE=`date +%m%d%y%H%M`

        #################################
        ### Create Archives
        #################################
        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar

        #################################
        ### Send Data to Remote Server
        #################################
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/

        #################################
        ### Remove Data from this Server
        #################################
        rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
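
    For the logging and fail-safe part, a minimal sketch of the usual shell idioms (the log directory and notification address are placeholders, and mailing the log assumes a local mail command such as mailx is configured): exit on the first failed command, capture all output to a dated log file, and mail the log if anything goes wrong.

        #!/bin/bash
        set -e                            # stop dead on the first command that fails
        set -o pipefail

        THEDATE=$(date +%m%d%y%H%M)
        LOG=/root/backups/logs/server_BAK_${THEDATE}.log
        mkdir -p /root/backups/logs

        exec >"$LOG" 2>&1                 # stdout and stderr of everything below go to the log

        # If any command fails, mail the log before the script exits.
        trap 'mail -s "Backup FAILED on $(hostname)" admin@example.com < "$LOG"' ERR

        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar
        scp -q /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/
        rm -f /root/backups/files/server_BAK_${THEDATE}.tar.gz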

    Read the article

  • Backup, Migrate or Clone Failing CentOS 4 (LVM)

    - by Hegelworm
    I've been running a BlueQuartz CentOS 4 system (Nuonce.net distro) for a few years now, and although the hard drive (a Deskstar) has always been a bit noisy, on a few recent occasions I've heard it having trouble spinning up. Basically, I want to clone this drive to a similarly sized one (80 GB). I've spent many hours reading up on dd, dd_rescue, rsync, Clonezilla and LVM mirroring, yet the sheer number of options and the nightmarish accounts out there have left me frozen, unable to make an informed decision about how to start.

    I've made a few attempts. dd failed after about 2 hours because, although the drives appeared identical on the surface (ATA Seagate Barracudas, Thai not Chinese), the destination drive is slightly smaller. My most recent attempt involved using a Debian CD to format the new drive, rsync-ing everything over, and editing the new drive's grub and fstab to reflect the changes. No joy here either, as I hadn't chosen LVM when partitioning the destination drive and it wouldn't boot.

    As you can probably tell, I'm out of my depth here, and a panic-invoking mixture of caution and frustration has prompted me to sign up here. The server itself, although not strictly a production environment, has a very specific installation of Festival, LAME and FFmpeg and provides the back end for a text-to-speech jQuery plugin that I've built over the last 2 years. I'm also planning to rebuild the whole TTS system on Debian, as the existing CentOS system still has PHP4 etc. For now, though, I'd really like to just shift everything over to a new drive. As this is my first post, please feel free to lay any house rules on me that I might have overlooked; I've been hovering around StackOverflow for a while now but have only just signed up. Many thanks.

    Update: Thanks for your responses so far - it's much appreciated and makes me feel a little more confident when I can double-check things here. I had the idea of doing a fresh install of CentOS (from the original disk) onto the new drive so the partitions and LVM were all set up correctly (after disconnecting my source drive to prevent painful mistakes). I then booted into rescue mode from the same CD and, to avoid a conflicting label, changed the /boot partition's label to /bootnew using e2label. I then changed the volume group name from VolGroup00 to VolGroup001 using lvm vgrename. I could then boot with both drives in. After mounting the new drive (via its VolGroup001 alias) at /newhd, I rsync-ed everything I could over to the new drive, using -avr switches and backslashes, as mentioned here. I then disconnected my original source drive again, booted from the live CD again, changed the boot partition label back from /bootnew to /boot using e2label, and renamed the volume group back to VolGroup00.

    I then rebooted, and it went through the familiar start-up routine only to fail to find a host of files in proc, usr, lib, var etc. The boot did complete, but there were lots of red 'FAILS'. I could log in with my existing credentials, but the network was kaput, I couldn't start X (the desktop GUI), and there were also a few (a lot of) error messages pertaining to iptables. Back to square one; I naively thought I'd nailed it. Shall I just buy a bigger hard drive and attempt the dd route? I've read that this can mess with LVM setups, and there's the added risk of working on two unmounted drives at once with a low-level tool. Thanks again.
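
    For what it's worth, missing files under proc, usr, lib and var after the reboot are the classic signature of an incomplete rsync copy. A hedged sketch of the copy step, assuming the new drive's root volume is mounted at /newhd and the copy is taken from the running system (the switches and paths here are illustrative, not the exact ones used above):

        #!/bin/sh
        # Copy the whole root filesystem, preserving ownership, permissions and
        # hard links, while skipping pseudo-filesystems and the target itself.
        rsync -avH --numeric-ids \
            --exclude='/proc/*' --exclude='/sys/*' --exclude='/newhd' \
            / /newhd/

        # Make sure the (now empty) mount points exist on the target.
        mkdir -p /newhd/proc /newhd/sys

    After the copy, /newhd/etc/fstab and /newhd/boot/grub/grub.conf still need to match the new drive's labels and volume group names, and GRUB has to be reinstalled on the new drive's MBR before it will boot on its own.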

    Read the article

  • stsadm farm backup exits with ffffffff

    - by overbyte
    I have a SharePoint 2007 farm that uses stsadm, via Scheduled Tasks, to run farm backups. It always worked fine; however, one day it ran for only a couple of seconds and exited with code ffffffff. I looked at Event Viewer and at the SharePoint logs themselves, and nothing unusual happened at the time the job ran. No files were created, so an spbackup.log doesn't exist. I've searched the net for batch file and stsadm return codes, but this error code doesn't seem to be documented anywhere. Any other recommended places to look for issues like this?

    Read the article

  • Best tool to backup your firefox shortcuts

    - by vaccano
    I have lost my shortcuts a few times (from hard drive crashes). Is there a good tool to back them up easily? (I would prefer not to have to remember to do it.) Backing them up to the internet would be a nice bonus, but it is not required for my needs.

    Read the article

  • How do I backup a git repo?

    - by acidzombie24
    I am planning to switch from SVN to git. With SVN I just copy my repo folder when I want to back it up; however, git doesn't have one, so what do I do? Should I create a clone on a separate drive and update it by pulling from my project? Then I can burn or archive this folder and it will have all the history? This is probably obvious, but I want to make sure when it comes to backups. I still pretend there is a root repository.
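
    For what it's worth, a short sketch of the two usual options (paths are placeholders); both carry the complete history, so either copy can be burned or archived:

        # One-time setup: a bare mirror of the working repository on another drive.
        git clone --mirror /home/me/myproject /mnt/backupdrive/myproject.git

        # Refresh the mirror later (e.g. from cron).
        git --git-dir=/mnt/backupdrive/myproject.git remote update

        # Alternative: from inside the working repository, pack the whole
        # history into a single file that is easy to burn or archive.
        git bundle create /mnt/backupdrive/myproject.bundle --all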

    Read the article

  • Bootable backup for Windows (7) - Like Super Duper for Mac

    - by Dan F.
    Just got an SSD installed in my notebook, and as people suggested, I want to have my bases covered in case it fails (and I expect it to fail). Here is what I have in mind: keep a partition on the main drive (HDD) the same size as the SSD and keep a "clone" there, and if the SSD fails, I take the SSD out and boot from the clone partition. From my understanding SuperDuper! does just that for Mac OS, but I can't seem to find something similar for Windows. I've found a lot of great tools out there that let you make bootable images (Clonezilla, DriveImage XML, Acronis True Image, to name a few), but that is not what I'm looking for.

    Read the article

  • Are there free FTP server backup repositories

    - by Saif Bechan
    I was wondering if there is a free service that provides a repository for your backups. These are backups of my server, usually about 200 MB each, and I want to upload the last 2 or 3 over FTP. I am looking for a service that provides a few gigabytes of space with FTP access. Looking at email providers such as Gmail and Hotmail that give you a couple of gigabytes of free space, this should also be possible - or am I horribly wrong?

    Read the article

  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS with a popular host and are experiencing a regular forward time drift of several minutes a day (approx 7). Linux kernel: 2.6.18-164.11.1.el5. Distro: CentOS release 5.4 (Final).

    We reached out to our hosting provider and their support advised us: "This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at /boot/grub/menu.lst. The line you need to add is: noapic nolapic divider=10 nolapic_timer. This should correct this issue. You will need to restart after this is added in."

    Because I am wary of manipulating grub - mostly I'm terrified that our server may fail to restart - I ask you guys, the pro *nix admins: where exactly in this file does the recommended insertion below go?

        # line from 1&1 for time syncing issue (Case 5163)
        noapic nolapic divider=10 nolapic_timer

    Please specify where exactly, and whether the order of the options matters. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works, or point me to a resource that's easy to follow, that's what I'm looking for immediately: a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep, dark treasure trove of kernel hacking or something, that's great; well-recommended in-depth resources are also very welcome.

    This is my current /boot/grub/menu.lst:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        #boot=/dev/sda
        #
        serial --unit=0 --speed=57600
        terminal --timeout=5 serial console
        timeout=5
        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    MOST IMPORTANT: I need to know where in the file above it is appropriate to paste the suggested line, so I can confidently restart my VPS after manipulating the GRUB config.
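
    For orientation: options like these are kernel boot parameters, and they normally go at the end of the kernel line of the stanza that gets booted, not on a line of their own; the relative order of the four options does not matter. A sketch of what the edited stanza might look like, keeping the existing options unchanged:

        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty noapic nolapic divider=10 nolapic_timer
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    The indentation under the title line is purely cosmetic; GRUB legacy ignores leading whitespace.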

    Read the article

  • Unable to send mail to hotmail from rackspace cloud

    - by Jo Erlang
    I'm having an issue sending mail from Postfix on a Rackspace Cloud instance for my domain. Hotmail says: "550 SC-001 (SNT0-MC4-F35) Unfortunately, messages from 198.101.x.x weren't sent. Please contact your Internet service provider since part of their network is on our block list." Here is the mail log:

        Sep 20 08:02:59 mydomain postfix/smtpd[1810]: disconnect from localhost[127.0.0.1]
        Sep 20 08:02:59 mydomain postfix/smtp[1814]: 59CFF4B191: to=<[email protected]>, relay=mx3.hotmail.com[65.55.92.184]:25, delay=0.19, delays=0.1/0.01/0.06/0.01, dsn=5.0.0, status=bounced (host mx3.hotmail.com[65.55.92.184] said: 550 SC-001 (SNT0-MC4-F35) Unfortunately, messages from 198.101.x.x weren't sent. Please contact your Internet service provider since part of their network is on our block list. You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors. (in reply to MAIL FROM command))
        Sep 20 08:02:59 mydomain postfix/smtp[1814]: 59CFF4B191: lost connection with mx3.hotmail.com[65.55.92.184] while sending RCPT TO

    I have implemented rDNS, SPF and DKIM, and they all look fine. I have checked my IP and domain against most of the spam blacklists and they are listed as OK (not listed as a spamming IP). What should I try next?

    Read the article

  • What is the best cloud technology to use for MongoDB/GridFS database servers

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files; GridFS is a module for MongoDB that allows large files to be stored in the database. I am pondering the different options for hosting the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience.

    Criteria:

    - I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and I do not like to mess with server configuration, hence I would like a fully managed hosting solution. But I would like to know about any other option, if you think it is worth it.
    - It should be able to scale, cloud style, pay as you go.
    - The lower the price, the better.

    So far I know of these services:

    - https://mongohq.com/pricing
    - https://mongomachine.com/pricing
    - https://mongolab.com/about/pricing/
    - http://cloudcontrol.com/add-ons/mongodb/

    They seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so size matters, and these services seem to scale quite poorly in price:

    - MongoHQ: the largest plan's max storage is 20 GB. That seems like very little storage for GridFS.
    - MongoMachine: flat price, $2.50 per GB. I didn't find the limit. Seems like a good price compared to the others.
    - MongoLab: 3.984 GB max, which I don't think I will hit, so perfect. $8 per GB, quite costly.
    - CloudControl: the largest plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB.

    What is your experience with these services? Any downtimes? Other possibilities? Edit: added the meaning of GridFS.

    Read the article

  • What is the best private cloud storage setup

    - by vdrmrt
    I need to create a private cloud and I'm searching for the best setup. These are my 2 most important requirements: 1. disk and system redundancy, and 2. price per GB as low as possible. The system is going to be used as a backup system which will receive data 24/7 over SFTP and rsync; high throughput is not that important. I'm planning to use GlusterFS and consumer-grade 4 TB hard drives. I have worked out 3 possible setups:

    1. 3 servers with 11 4 TB HDDs each: set up a replica-3 GlusterFS volume and set up each hard drive as a separate ext4 brick. Total capacity: 44 TB. HDD/TB ratio of 0.75 (33 HDDs / 44 TB).
    2. 2 servers with 11 4 TB HDDs each: the 11 hard drives are combined in a RAIDZ3 ZFS storage pool, with a replica-2 Gluster setup. Total capacity: 32 TB (+ ZFS compression). HDD/TB ratio of 0.68 (22 HDDs / 32 TB).
    3. 3 servers with 11 4 TB consumer hard drives each: set up a replica-3 GlusterFS volume, set up each hard drive as a separate ZFS storage pool and export each pool as a brick. Total capacity: 32 TB (+ ZFS compression). HDD/TB ratio of 0.68 (22 HDDs / 32 TB). (Cheapest.)

    My remarks and concerns:

    - If a hard drive fails, which setup will recover the quickest? In my opinion setups 1 and 3, because there only the contents of one hard drive need to be copied over the network, whereas in setup 2 the hard drive needs to be reconstructed by reading the parity off all the other hard drives in the system.
    - Will a ZFS pool on one hard drive give me extra protection against, for example, bit rot?
    - With setups 1 and 3 I can lose 2 systems and still be up and running; with setup 2 I can only lose 1 system.
    - When I use ZFS I can enable compression, which will give me some extra storage.
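
    For reference, a minimal sketch of what the replica-3 Gluster layout in setups 1 and 3 looks like on the command line, using placeholder host and brick names and far fewer bricks than the real 11 per server:

        # On one of the three servers, after installing glusterfs-server on all of them:
        gluster peer probe server2
        gluster peer probe server3

        # Each brick is one formatted and mounted hard drive. With replica 3,
        # consecutive bricks are grouped in threes, one per server, so every
        # file ends up on all three machines.
        gluster volume create backupvol replica 3 \
            server1:/bricks/disk01 server2:/bricks/disk01 server3:/bricks/disk01 \
            server1:/bricks/disk02 server2:/bricks/disk02 server3:/bricks/disk02
        gluster volume start backupvol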

    Read the article

< Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >