Search Results

Search found 9620 results on 385 pages for 'backup profile'.

Page 83/385

  • HP Storageworks 448 tape drive input/output error with Ubuntu

    - by Dan D
    I'm trying to set up a backup to tape of a machine using flexbackup. However, any attempt to write to the tape drive (via either flexbackup or just tar) results in "/dev/st0: Input/output error". The machine seems to recognise the drive (HP Storageworks Ultrium 448) and that there's a tape in it, and "mt status" seems to work; "mt -f /dev/st0 rewind" and "erase" throw no errors.
      root@stor001:/# mt status
      SCSI 2 tape drive:
      File number=0, block number=0, partition=0.
      Tape block size 0 bytes. Density code 0x42 (LTO-2).
      Soft error count since last status=0
      General status bits on (41010000): BOT ONLINE IM_REP_EN
      root@stor001:/# cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 00 Lun: 00
        Vendor: HL-DT-ST  Model: DVDRAM GSA-4084N  Rev: KS02
        Type:   CD-ROM                             ANSI SCSI revision: 05
      Host: scsi2 Channel: 00 Id: 03 Lun: 00
        Vendor: HP        Model: Ultrium 2-SCSI    Rev: S65D
        Type:   Sequential-Access                  ANSI SCSI revision: 03
    "tell", however, does throw an error:
      root@stor001:/# mt -f /dev/st0 tell
      /dev/st0: Input/output error
    Based on a forum post I found, I tried:
      root@stor001:/# dd if=/dev/zero of=/dev/nst0 bs=1024 count=10
      10+0 records in
      10+0 records out
      10240 bytes (10 kB) copied, 5.0815 s, 2.0 kB/s
    which gave the person on the forum an error but seems to work for me. If anyone has any suggestions, I'm all ears...
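
    One thing worth ruling out, offered as a hedged sketch rather than a definitive fix: "Tape block size 0 bytes" means the driver is in variable-block mode, and a mismatch between the block size an application writes with and what the drive accepts commonly produces exactly this kind of I/O error. The commands below only assume the device names from the question; the 64 KiB record size is an example, and flexbackup's configured block size would need to match whatever works here.
      # confirm variable-block mode (0 = variable), then write and read back a tiny test archive
      mt -f /dev/st0 setblk 0
      tar -cvf /dev/st0 -b 128 /etc/hosts    # -b 128 = 64 KiB records
      tar -tvf /dev/st0 -b 128               # /dev/st0 rewinds on close, so this reads the same archive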

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take Snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following:
    1. Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter
    2. Perform a TSM-based incremental backup
    3. Dismount the LUN
    This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this:
      scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D
    I can store and retrieve files like a charm using tar; both "tar cf /dev/st0 somedirectory" and "tar xf /dev/st0" work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like "dd if=/dev/VG/LV bs=64k of=/dev/st0" to achieve this, but there seem to be various problems associated with this approach.
    Firstly, I would like to be able to store more than one LV on a single tape. I guess I could seek to concatenate the data on the tape, but I don't think this would work very well in an automated scenario with many different LVs of various sizes.
    Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, as I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this?
    Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device:
      mbuffer -t -m 10M -p 80 -f -o $TAPE
    So I've tried this (note the added "-d 64k" to account for the 64k block size of the tape):
      dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0
    However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now and has managed to transfer only 361M of data from the tape. What's wrong here?
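
    For the first two points, one possible approach (a sketch only; the LV and file names are examples) is to write to the non-rewinding device /dev/nst0, so that each tar or dd invocation becomes its own file on the tape, addressable later with "mt fsf". The read-back problem may simply be a block-size mismatch: dd reads tapes in 512-byte blocks by default, while this tape was written with 64k blocks, so reading needs an explicit bs at least that large.
      # sketch: metadata and raw image for one VM as two consecutive tape files
      mt -f /dev/nst0 rewind
      tar -cf /dev/nst0 -b 128 vm1.xml          # tape file 0: metadata
      dd if=/dev/VG/vm1 bs=64k of=/dev/nst0     # tape file 1: raw LV image
      # (the st driver writes a filemark when the device is closed after writing,
      #  so the next write starts a new tape file; mbuffer can be spliced in as in the question)

      # restoring the image later: skip the metadata file, read with a matching block size
      mt -f /dev/nst0 rewind
      mt -f /dev/nst0 fsf 1
      dd if=/dev/nst0 bs=64k of=/dev/VG/vm1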

  • How do I keep folders synced and backed up between two macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers, a Mac Pro and a MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as a NAS. Currently I back up the entire computers to an external drive with Time Machine, which is rather useless and doesn't sync anything.
    What I really want to do is keep my important files synced between both computers and my NAS (which is running RAID 5). That way I'm not backing up easily replaceable system files, and I've got all my important files in 3 places, two of which are running RAID, so at least 5 drives would have to crash at the same time before actual data loss occurs. The folders I want to keep synced are basically my photo, documents, development, MAMP and work folders, and I also want to keep the user Library folder backed up but not synced. I'm thinking that I'd have to use rsync but don't know how.
    Before suggesting Dropbox and similar services: I don't want to use them for several reasons, some of them being security (Dropbox obviously proved this), speed (sometimes I'll sync gigabytes of data, and that will be significantly faster locally and probably even through VPN, as I have a gigabit pipe), space (space on my NAS is cheap and only practically limited by my needs), reliability (even if my internet were to go down I still need to be able to keep my files synced in case I need to go somewhere on the fly), and price (I already have all the hardware, and for the amount of gigabytes and bandwidth I'd need I doubt that there's any free or cheap service). Those are my main reasons for wanting to keep it local.
    I'm sorry for any spelling or grammatical mistakes I might have made; I'm writing this on my smartphone from a shaky train and English isn't my mother tongue. I gratefully appreciate any answers, even ones only partly solving my problem.
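
    rsync by itself is one-way, so a hedged starting point (the NAS host name and paths below are made up) is a small script on each Mac that pushes the shared folders to the NAS on a schedule, with a tool such as unison as the alternative if true two-way sync is needed. The Library folder goes into a per-machine path so it is backed up but never synced back.
      #!/bin/sh
      # rough sketch, not a finished solution: one-way pushes over ssh
      NAS="user@nas.local"                          # hypothetical NAS account and host
      for d in Photos Documents Development MAMP Work; do
          rsync -az --delete "$HOME/$d/" "$NAS:/volume1/sync/$d/"
      done
      # Library: backed up per machine, never synced back
      rsync -az "$HOME/Library/" "$NAS:/volume1/backup/$(hostname -s)/Library/"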

  • Hard drive failed, suspected filesystem corruption, still cannot salvage any data from harddrive

    - by Hippy-Head
    Firstly, I am terribly sorry if this is a duplicate, but I couldn't find a similar issue to mine, so here goes. I have a 1 TB HDD, bought around 8 months ago, used as a backup hard drive. I had not used the drive for some time, and when I tried to get back to some files on it, it was completely wiped, just like that. At first it would not come up at all; I tried everything from command-line chkdsk to filesystem recovery software to rebuild it. After a few attempts I managed to initialize it, which at that point felt like an achievement. The problems started when I tried to recover the data on it. I have used A LOT of software, free and commercial, on both Mac and Windows, along with cmd and Terminal commands, but no data of any kind was recovered, even after letting it scan thoroughly for around 9-10 hours all night, sometimes longer, with no results at all. I am somewhat desperate; I am usually good at retrieving data from corrupt hard drives, but not in this case. Call me paranoid, but I do not want to give it to someone to fix for me, as I have a lot of photos and personal stuff that I do not want anyone to see.
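
    If the drive can still be attached to a Linux (or Mac) machine, a commonly suggested sequence, sketched below with placeholder device and image paths, is to check the drive's health, take a read-only image of it, and then run recovery tools against the image rather than against the failing disk itself:
      smartctl -a /dev/sdb                                            # SMART health: look at reallocated/pending sectors
      ddrescue /dev/sdb /mnt/scratch/disk.img /mnt/scratch/disk.map   # read-only rescue image of the whole disk
      testdisk /mnt/scratch/disk.img                                  # partition table / filesystem recovery
      photorec /mnt/scratch/disk.img                                  # raw file carving as a last resort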

  • Recover mysql database - mysqldump gives "table <tablename> doesn't exist (1146)"

    - by Matthew
    Backstory: Ubuntu died (wouldn't boot) and I couldn't fix it. I booted a live CD to recover the important stuff and saved it to my NAS. One of the things I backed up was /var/lib/mysql. I reinstalled with Linux Mint - since I was on Ubuntu 10.04, this was a good opportunity to try a new distro (and I don't like Unity). Now I want to recover my old MediaWiki, so I shut down the mysql daemon, ran "cp -R /media/NAS/Backup/mysql/mediawiki@002d1_19_1 /var/lib/mysql/", set file ownership and permissions correctly, and started mysql back up.
    Problem: Now I'm trying to export the database so I can restore it, but when I execute mysqldump I get an error:
      $ mysqldump -u mediawikiuser -p mediawiki-1_19_1 -c | gzip -9 > wiki.2012-11-15.sql.gz
      Enter password:
      mysqldump: Got error: 1146: Table 'mediawiki-1_19_1.archive' doesn't exist when using LOCK TABLES
    Things I've tried: I tried using --skip-lock-tables but I get this:
      Error: Couldn't read status information for table archive ()
      mysqldump: Couldn't execute 'show create table `archive`': Table 'mediawiki-1_19_1.archive' doesn't exist (1146)
    I tried logging in to mysql, and I can list the tables that should be there, but trying to describe or select from them errors out the same way as the dump:
      mysql> show tables;
      +----------------------------+
      | Tables_in_mediawiki-1_19_1 |
      +----------------------------+
      | archive                    |
      | category                   |
      | categorylinks              |
      ...
      | user_properties            |
      | valid_tag                  |
      | watchlist                  |
      +----------------------------+
      49 rows in set (0.00 sec)

      mysql> describe archive;
      ERROR 1146 (42S02): Table 'mediawiki-1_19_1.archive' doesn't exist
    I believe MediaWiki was installed using InnoDB and binary data. Am I screwed, or is there a way to recover this?
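
    One likely explanation, offered as a hedged sketch rather than a confirmed diagnosis: with InnoDB (and without innodb_file_per_table), SHOW TABLES only reads the .frm files in the database directory, while the table data and InnoDB's data dictionary live in the shared ibdata1 tablespace and the ib_logfile redo logs at the top of /var/lib/mysql. Restoring only the mediawiki@002d1_19_1 directory produces exactly this "table doesn't exist" symptom. Assuming the NAS copy still contains the full /var/lib/mysql tree, something like the following might get the dump working (the file names are the MySQL defaults):
      sudo service mysql stop
      sudo cp -a /media/NAS/Backup/mysql/ibdata1 /var/lib/mysql/
      sudo cp -a /media/NAS/Backup/mysql/ib_logfile0 /media/NAS/Backup/mysql/ib_logfile1 /var/lib/mysql/
      sudo chown -R mysql:mysql /var/lib/mysql
      sudo service mysql start
      mysqldump -u mediawikiuser -p mediawiki-1_19_1 -c | gzip -9 > wiki.sql.gz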

  • Securing bash scripts

    - by minnur
    Hi there, does anybody know the best way to secure bash scripts? I have a script which creates database and source code backups and FTPs them to another server, and the login/password for the destination FTP are in plain text. I need to somehow encrypt or hide them in case the website is hacked. Or should I write a C program that creates the bash file, runs it and then deletes it? Thanks.
    Thanks for the answers, and I am sorry, I wasn't clear enough. I would like to clarify my question with the following points. We are storing the data in Rackspace Cloud Files. We can't pull, as Cloud Files doesn't allow you to run a script. We can write the script to run on server A and pull FTP and MySQL data from servers B, C, D, etc., and we want to protect the passwords on A from the situation where A is hacked. Can we compile our script file to hide them? Thanks.
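
    Compiling the script (or a C wrapper) only obscures the credentials; anyone who can read the binary or inspect the running process can still recover them. A more common pattern, sketched below with hypothetical host names and paths, is to keep the credentials in a root-only file outside the script and give the backup account on the destination the minimum rights it needs, so a compromise of A exposes as little as possible:
      # one-time setup (as root): credentials file readable only by root
      install -m 600 -o root -g root /dev/null /root/.netrc
      # /root/.netrc then contains, for example:
      #   machine backuphost.example.com login backupuser password secret

      # inside the root-run backup script, curl picks the credentials up with -n,
      # so the password never appears on the command line or in ps output
      curl -n -T /tmp/backup.tar.gz ftp://backuphost.example.com/incoming/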

  • Optimal file system type and mount options for an rsnapshot dedicated drive

    - by Nimmy Lebby
    We have an external USB 2 drive that we are using as a backup drive for our configuration. We use rsnapshot for the backups. It uses a few standard commands for managing snapshots:
      rm -rf: deletes expired snapshots
      mv: moves older snapshots down a slot
      cp -al: duplicates the last snapshot to a new slot
      rsync -a --delete --numeric-ids --relative: synchronizes the new snapshot
    As you can see from the log below, the majority of the time is spent on the rm -rf and cp -al steps:
      [25/Dec/2010:14:00:02] rsnapshot hourly: started
      [25/Dec/2010:14:00:02] echo 21012 > /var/run/rsnapshot.pid
      [25/Dec/2010:14:00:02] rm -rf /mnt/extdrive/snapshots/hourly.5/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.4/ /mnt/extdrive/snapshots/hourly.5/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.3/ /mnt/extdrive/snapshots/hourly.4/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.2/ /mnt/extdrive/snapshots/hourly.3/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.1/ /mnt/extdrive/snapshots/hourly.2/
      [25/Dec/2010:14:15:48] cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/snapshots/hourly.1
      [25/Dec/2010:14:23:32] rsync -a --delete --numeric-ids --relative /etc /mnt/extdrive/snapshots/hourly.0/sm4/
      [25/Dec/2010:14:23:52] touch /mnt/extdrive/snapshots/hourly.0/
      [25/Dec/2010:14:23:52] rm -f /var/run/rsnapshot.pid
      [25/Dec/2010:14:23:52] rsnapshot hourly: completed successfully
    My questions: I'm currently using ext4 for the filesystem; maybe this is not the best choice of those available in Red Hat. Does anyone have any recommendations that would speed up the process? The partition's mount options (from fstab) are sync,dirsync 1 2. Is there a way to optimize this, since the drive is used solely for rsnapshot? Of course, reasoning would be greatly appreciated.
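
    A hedged observation rather than a definitive answer: rm -rf and cp -al on an rsnapshot tree are almost pure metadata work, and the sync/dirsync mount options force every one of those metadata updates to be written synchronously, so they are a plausible cause of the slow steps. A sketch of what could be tried (the device name is an example; the ext4 journal is assumed to provide enough consistency for a backup target):
      # /etc/fstab: drop sync/dirsync, add noatime, then remount the drive
      /dev/sdb1   /mnt/extdrive   ext4   defaults,noatime   1 2

      # optionally, in /etc/rsnapshot.conf (fields must be tab-separated),
      # replace the cp -al step with rsync --link-dest:
      link_dest	1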

  • Replacing all disks in a non-OS RAID 5 volume

    - by molecule
    Hi all, we currently have a server with 8 HDD slots. It is an HP DL380 G5 with a P400 controller. Two HDDs are in a RAID 1+0 configuration and host the OS; six HDDs are in a RAID 5 configuration and hold an Oracle DB. Basically, the RAID 5 volume is running out of space and we would like to swap all six disks for higher-capacity ones. Excuse my ignorance, as I am pretty new to this... I believe we will need to back up the data, delete the RAID volume, insert the new disks, recreate the volume, and restore the data. Two questions:
    1. Do we need to worry about the OS partition, or is it completely independent, so we can simply take out the six disks, insert six new ones, and get the controller to recognize them and form a new RAID 5 volume? We should not need to reinstall the OS or Oracle, correct?
    2. We are going to restore the data on the volume from another source (our vendor will take care of this), but we would like to keep the existing data on the six old disks just in case we run into issues and want to fall back. Is this possible?
    Thanks in advance.

  • Jobs with anacron won't run

    - by mareser
    I would like to run two bash scripts daily using anacron in order to back up some data. Unfortunately I can't figure out why said scripts are not executed. For test purposes I let cron execute the scripts and it worked fine.
    cat /etc/anacrontab gives:
      # /etc/anacrontab: configuration file for anacron
      # See anacron(8) and anacrontab(5) for details.
      SHELL=/bin/sh
      PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
      # These replace cron's entries
      1         5   cron.daily    nice run-parts --report /etc/cron.daily
      7         10  cron.weekly   nice run-parts --report /etc/cron.weekly
      @monthly  15  cron.monthly  nice run-parts --report /etc/cron.monthly
      1         5   TB_bak        /bin/sh /home/vasco2/Dropbox/Scripts/backup_TB.sh
      1         5   key_db_bak    /bin/sh /home/vasco2/Dropbox/Scripts/bak_key_db.sh
    The output of "ls ~/Dropbox/Scripts/" is:
      backup_TB.sh  bak_key_db.sh
    I use Linux Mint Katya. uname -a gives:
      Linux vasco2 2.6.38-8-generic-pae #42-Ubuntu SMP Mon Apr 11 05:17:09 UTC 2011 i686 i686 i386 GNU/Linux
    I would be very happy if somebody could point me in the right direction on why those scripts won't get executed.
    P.S.: There is no anacron tag on superuser.com. Maybe somebody wants to change that.
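
    anacron runs each job at most once per period and records the last run in a per-job timestamp file, so two quick checks, sketched here purely as a debugging aid, are to force it in the foreground with debug output and to look at the timestamp files and script permissions:
      sudo anacron -d -f -n        # -d: stay in foreground with messages, -f: ignore timestamps, -n: skip the delay
      ls -l /var/spool/anacron/    # TB_bak and key_db_bak should each get a timestamp file here after a run
      ls -l /home/vasco2/Dropbox/Scripts/backup_TB.sh /home/vasco2/Dropbox/Scripts/bak_key_db.sh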

  • BackupExec 2012 File System Archiving - Access is denied to Remote Agent

    - by AllisZero
    Gentlemen, I've been struggling with a Trial version of Symantec Backup Exec 2012 for about a week now. It was installed as an upgrade to our 12.5 license, and the setup completed with no issues. The reason I upgraded is solely for the File System Archiving option as I'm working to reduce the amount of live data in my servers. Backups work A-Ok and I have followed the instructions in the Admin Manual to make sure I had filled all requirements. The account BE is running under is a member of the Local administrators group as required and has been added to the test share that I'm using to evaluate the archiving function. Testing the credentials in the job setup window always works fine, and I am able to add both regular and Admin$ shares to my Archive selection. However, every time I run the Archive job, I get the following message: https://dl.dropbox.com/u/59540229/BEXec.png I've already tried to troubleshoot DNS resolution issues as suggested in the Symantec KB to no avail. The only thing I can think of, at this point, is that a trial license doesn't allow me to use the Archiving function, although that would seem silly on their part. Appreciate any assistance or information. Thanks.

  • Windows 7 shows a drive as full in summary but files shown on drive are very small

    - by Rob
    I have a drive partitioned so it is seen by Windows as two drives: C:\ and D:\
    Windows 7 shows D:\ as full in the graphical summary of all the drives in 'My Computer'; the bar graph indicates that nearly all of the drive's 108 GB capacity is used. So I go into the D:\ drive to look at the files and see several folders. I select them all and use right-click > Properties to count their size, expecting the value to be about the same as what Windows reports in the summary, i.e. nearly 108 GB. But the Properties window shows the files are very small - KBs and MBs, nowhere near 108 GB. One of the folders is a backup, but its size is very small. I've checked the folder options to show all system files and hidden files too, and counted these in the Properties as well. Something invisible is holding the space. What is happening here? I'm afraid to delete anything in case it removes valuable backups. Have I got huge backups here? Why can't I see them? How do I see them?

  • Sharepoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    SharePoint 2007 MOSS with AD-imported users. All servers are 2008. I have around 50 users; over the past 2 months, a handful of them have suddenly become unable to log in to SharePoint. When they log in, they either get a blank screen or they are reprompted. These users are using accounts that have been in use for many months; sometimes the problem originates with a password change. In all cases, the user's account works on every other Active Directory authenticated resource (domain, Exchange, LDAP). In the most recent case, last night I was forced to delete a user ("John Smith") because of corruption. The original account name was jsmith. I deleted him from Active Directory, then deleted him from the profile list in SharePoint Shared Services. I could not find a way to delete him from the SharePoint user list, but I reran the import after recreating his account (renaming it to "smithj" just to be sure). At first this did not work - the user could still access every other resource except SharePoint. Then, some 30 minutes later, it inexplicably started working. This morning the user changed passwords, which immediately broke the login on SharePoint again. I am at a loss on how to troubleshoot this.

  • effective back-up using Raid / Win7 back-up

    - by Job
    I have a stand-alone PC system with two 2 TB hard disks, configured as RAID 1, i.e. mirroring. The operational drive is partitioned. I use an external 1 TB hard disk for backup, using the Windows 7 backup facility; it is swapped weekly and stored on other premises. I back up all partitions AND allow a system backup. All application software is on the C: partition. Questions:
    1. How can I see whether RAID 1 is working, i.e. doing its job? All I see now is a status message in the start-up procedure that says its status is normal.
    2. How can I see the used or available space on the RAID 1 volume?
    3. The Windows 7 backup allows for only one schedule as far as I can see. I want daily backups of data, but due to the single schedule I am forced to do the time-consuming system backup and C: backup as well. Is there a way to set up two schedules, allowing a frequent (daily) data backup plus a system backup with C: drive backup on, say, a weekly basis? Of course it can be run by hand, but I am likely to forget that.
    I am not the programming type of person, so I'm looking for simple and controllable solutions. Thank you - any help is appreciated.

  • XCOPY access denied error on My Documents folder

    - by Ryan M.
    Here's the situation. We have a file server set up at \\fileserver\ that has a folder for every user at \\fileserver\users\first.last. I'm running an xcopy command to back up the My Documents folder from each user's computer to their personal folder. The command I'm running is:
      xcopy "C:\Users\%username%\My Documents\*" "\\fileserver\users\%username%\My Documents" /D /E /O /Y /I
    I've been silently running this script at login without the users knowing, just so I can get it working before telling them what it does. After I discovered it wasn't working, I manually ran the batch script that executes the xcopy command on one of their computers and got an access denied error. I then logged into a test account on my own computer and got the same error. I checked all the permissions for the share and security, and they're set to how I want them. I can manually browse to that folder and create new files. I can drag and drop items into the \\fileserver\users\first.last location and it works great. So I tried something else to find the source of the access denied problem: I ran an xcopy command to copy the My Documents folder to a different location on the same machine and I still got the access denied error! So xcopy seems to be denied access whenever it tries to copy the My Documents folder. Any suggestions on how I can get this working? Anyone know the reason behind the access denied error?

  • External Storage for 2TB of backups and 4TB of data RAID level? HW vs Software?

    - by Jerry Mayers
    I have a Mac Mini set up as a media center/file server. Currently I just have a hodgepodge mess of external drives for storage. I'm maxed out, and I have some new laptops on the way with much larger drives, so I need to work out a good storage solution for backing them up as well as storing media on the server. I need around 2 TB of storage for the Time Machine backups from my various systems and around 2 TB more for media. I would like to build this to handle around 6 TB total so I have some growing room. Since I'm using a Mac Mini as the server, I need to use external enclosure(s) that support USB 2, FireWire 800 (preferred) or gigabit Ethernet. Performance of the system isn't a huge concern, since the majority of the access from other computers is done over 802.11n. I plan on using 2 TB drives for the final version, but initially I'll try to use my existing two 1 TB drives plus some new 2 TB drives, and swap the 1 TB ones out as I fill up. As to the actual questions:
    1. Should I use hardware RAID in some enclosure? Because if the enclosure dies I have to find an identical one to get to my data, right? Wouldn't software RAID be better, as I can use any method of connecting the drives to the system? Remember, OS X Server is my OS. What if I had to reinstall OS X - can I restore the software RAID easily?
    2. What RAID level should I use? For the 2 TB used for the Time Machine disk I don't see why I need RAID - just a single 2 TB drive, since it's already the backup - but for the remaining 4 TB it would be the only copy of the data, so I should build in some redundancy. I had a RAID 5 setup using a cheap RAID PCI card years ago, running a 2 TB array, and when a drive died it wanted 48 hours to rebuild. Is this crazy slow for a setup of this size, or is this to be expected?
    Any suggestions as to drive enclosures?
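
    One constraint worth noting on the software side, with a sketch below (the disk identifiers and set name are examples; check "diskutil list" and the diskutil appleRAID help for the exact syntax on your OS X version): OS X's built-in software RAID (AppleRAID, driven by diskutil) only supports striped, mirrored and concatenated sets, not RAID 5, so a software mirror of two large disks is roughly the ceiling without third-party tools or a hardware enclosure.
      diskutil list                                                    # find the identifiers of the member disks
      diskutil appleRAID create mirror MediaMirror JHFS+ disk2 disk3   # hypothetical set name and disks
      diskutil appleRAID list                                          # verify the state of the new set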

  • How to back up a database with thousands of files

    - by Neal
    I am working with a Fedora server that runs a customized software package. The server software is quite old, and its database consists of 1,723 files. The database files are constantly changing - they continually grow, and changes are not necessarily appended to the end. So right now we back up every 24 hours at midnight, when all users are off the system and the database is in an internally consistent state. The problem is that we have the potential to lose an entire day's worth of work, which would be unrecoverable. So I'd like to know if there is a way to take some sort of instantaneous snapshot of these database files that we could back up every 30 minutes or so. I've read about Linux LVM snapshots, and am thinking that I might be able to accomplish the goal by taking a snapshot, rsync'ing the files to a backup server, then dropping the snapshot. But I've never done this before, so I don't know if this is the "right" fix. Any ideas on this? Any better solutions? Thanks!
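
    The snapshot/rsync/drop cycle described above looks roughly like the sketch below (the volume group, LV and host names are invented; it assumes the database lives on an LVM volume with some unallocated extents left in the VG for the snapshot). The caveat is that an LVM snapshot is only crash-consistent: the application must either tolerate that or be paused briefly while the snapshot is created.
      lvcreate --size 5G --snapshot --name db_snap /dev/vg0/data
      mount -o ro /dev/vg0/db_snap /mnt/db_snap
      rsync -a /mnt/db_snap/ backupserver:/backups/db/$(date +%Y%m%d%H%M)/
      umount /mnt/db_snap
      lvremove -f /dev/vg0/db_snap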

  • Copying files between linux machines with strong authentication but without encryption

    - by Zizzencs
    I'm looking for a suitable program to copy files from one Linux machine to another. The program should be able to do authentication, but it should not do encryption. The reason behind the latter is the lack of CPU power to do the encryption. I copy backups from ~70 machines to a single backup server simultaneously. The single server is an HP ProLiant DL360 G7 with a 10 Gbps Ethernet connection and an FC storage backend that can do 4 Gbps. Through FTP I can write ~400 MB/sec to the storage (that's about what I want), but through ssh with arcfour I can only do ~100 MB/sec while having 100% CPU usage. That's why I want file transfers not to be encrypted. The alternatives I found are not really suitable:
      rcp: no authentication, forget it
      FTP: making the authentication "secure" (at least preventing plain-text password exchange) is possible but not really easy, and I haven't found a method to force any FTP daemon to encrypt the control channel (for the authentication) but not the data channel (for data transfers)
      SCP/SFTP: in fairly recent ssh(d) implementations you can't turn off encryption; the best you can do is use the arcfour cipher, but it still uses too much CPU power for my needs
      rsync over ssh: same problems as with SCP/SFTP
      plain rsync: from the documentation of rsyncd: "The authentication protocol used in rsync is a 128 bit MD4 based challenge response system. This is fairly weak protection, though (with at least one brute-force hash-finding algorithm publicly available), so if you want really top-quality security, then I recommend that you run rsync over ssh." It's a no-go.
    Is there a protocol/program that can do exactly what I want? (A big plus would be if it could work on Windows as well and/or if it would support rsync-style copying/synchronization, e.g. copy only the differences.)
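
    For completeness, here is what the "plain rsync" option from the list above looks like in practice - a hedged sketch with invented module names, paths and addresses. The challenge-response authentication keeps the password off the wire even though the data itself is unencrypted, and restricting the module by source address limits how much the weak protocol matters:
      # /etc/rsyncd.conf on the backup server
      [backups]
          path = /srv/backups
          read only = false
          auth users = backupuser
          secrets file = /etc/rsyncd.secrets     # contains "backupuser:password", chmod 600
          hosts allow = 10.0.0.0/24

      # on each client: unencrypted transfer with challenge-response auth and delta copying
      rsync -a --password-file=/etc/rsync.pass /data/ backupuser@backupserver::backups/host1/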

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to, and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (i.e. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want is to get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back up anything to tape, it has to be funneled through the file server. With the increase we have planned, I don't think that funneling everything through the file server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas:
    1. Using HDDs instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so the media don't need years and years of shelf life, and I'm wondering if something HDD-based would be better.
    2. Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our file server is a 12-disk RAID 6 setup; I was thinking of something like that, but with no RAID involved - all disks stand-alone, so they can be used/installed/removed on an individual basis. Does any such thing exist?
    Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7 MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit Ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive (which I'm not), but it doesn't say - or I don't understand - exactly what resource is missing:
      http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005
    And this piece says it's not related to I/O performance at all ("Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance."):
      http://www.informit.com/articles/article.aspx?p=686168&seqNum=5
    Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?

  • How to do a Windows 7 Image restore to an external drive?

    - by Vaccano
    I have a system that I have made a Windows 7 system image backup of, and I would like to migrate that image to a different hard drive. Is there a way to restore the image to an externally connected hard drive? For example, I have three hard drives:
    1. The first is in the source machine (the one I want to copy).
    2. The second is in the machine that I want to do the work on.
    3. The third is not in a machine; it is the target that I want to overwrite with the contents of the first.
    I boot up the second machine and connect the third hard drive externally (using some cool cables I have). I then use some cool feature of Windows 7 to replace what is on the third hard drive with the Windows 7 image of my first machine (which is on my networked backup server). I need to know what the above-mentioned "cool feature of Windows 7" is, if there is one, and how to use it. Any ideas?
    Note that I very much do not want it to overwrite what is on the second machine/hard drive.

  • Could I have destroyed the partitioning scheme/filesystem of my HDDs with an external hard drive case with a built-in RAID controller?

    - by th3m3s
    I had just recently bought a Fantec QB-35US3R to have a nice box on my desk to make some backups to. Along with the HDD bay I had ordered some 4 TB HDDs to run in RAID 5, handled by the hardware RAID controller of the Fantec bay. The QB-35US3R arrived a few days before the hard drives, so I got impatient and had the idea of putting three old 1 TB disks in the Fantec device, just to test it... Long story short: I had made a backup of the most important data on these three disks before they broke. I had set the configuration scheme to RAID 3 on the Fantec device. It seems that the Fantec RAID controller has "somehow" destroyed the partitioning scheme or the filesystem, because when put into an HDD docking station, the disks get recognized by the OS (Ubuntu/Linux) but are not mountable anymore. I tried to recover the data from one HDD via gParted (parted), which ran for some hours without success. There I stopped, before trying other tools, because I read that the longer a hard drive runs after the partitioning has been destroyed, the worse it gets. What could the HDD bay have done to my lovely hard drives? Is there some routine a RAID controller executes when it wants to create a RAID system - like erasing the partition table (which seems implausible to me) or writing some information to every hard drive in the RAID (which seems more likely to me)? Is there a chance to recover the data from these HDDs, or is the change a RAID controller makes so significant that no software can help?

  • How can I backup files on /etc using Deja Dup? [closed]

    - by Chris Good
    Possible Duplicate: How do I open Deja Dup as root? I am using Ubuntu 12.04 LTS. I'm trying to setup Deja Dup to backup the files for several users (/home) and /etc but as it runs as myself, there are many files it cannot read and therefore cannot backup. I want to try running Deja Dup as root as then it should be able to read and backup all files. I've seen some other info about how to set up a particular App to run as root, so I guess I should try that.
