Search Results

Search found 9620 results on 385 pages for 'backup profile'.

  • Limited access to Amazon S3 buckets

    - by Tomas Markauskas
    Is it possible to somehow limit access to an Amazon S3 account? I don't like the idea of distributing my secret access key to all of my applications that only need access to a single bucket on my account. If someone gained access to one of the applications, I could lose all my data stored on S3. One way I was thinking of doing it would be to create a second S3 account and give it access to just one bucket of the main account, but that's not really a great solution. Another nice thing for me would be to give the secondary account only write (but not modify/delete) and read access. That way I could upload backups or other files and be sure that they won't get lost.
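
    One possible approach, sketched on the assumption that AWS's IAM service is available for the account: create a dedicated IAM user per application and attach a policy that grants read and write on a single bucket while omitting s3:DeleteObject entirely. The bucket name below is a placeholder:

        {
          "Statement": [
            {
              "Effect": "Allow",
              "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
              "Resource": [
                "arn:aws:s3:::my-backup-bucket",
                "arn:aws:s3:::my-backup-bucket/*"
              ]
            }
          ]
        }

    Note that s3:PutObject still allows overwriting an existing key, so this is "no delete" rather than strictly "no modify"; enabling bucket versioning is one way to keep overwritten backups recoverable.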

    Read the article

  • Download databasename.bak file

    - by Jordon
    I have downloaded a databasename.bak file from my hosting company. When I tried to restore that DB file in SQL Server 2008, it kept giving me the following error: The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241) According to this error and the following link http://www.sqlcoffee.com/Troubleshooting047.htm it seems that either the file I am downloading is corrupt or it is getting corrupted on the way. Any idea why I keep receiving this error? I have tried almost everything but have been unable to fix this problem; please help me.
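
    A quick way to test whether the file itself is readable, as a sketch (the instance name is a placeholder; the path comes from the error message):

        sqlcmd -S .\SQLEXPRESS -Q "RESTORE HEADERONLY FROM DISK='C:\go4sharepoint_1384_8481.bak'"
        sqlcmd -S .\SQLEXPRESS -Q "RESTORE FILELISTONLY FROM DISK='C:\go4sharepoint_1384_8481.bak'"

    If both fail, compare the file's size or checksum against the hosting company's copy; a classic cause of this exact error is downloading the .bak over FTP in ASCII mode rather than binary mode, which silently corrupts it.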

    Read the article

  • Mirroring a Linux server to an external USB hard drive

    - by DuPie
    My google-fu must be failing; I haven't been able to find a good solution for the following:
    - numerous Linux servers on commodity hardware
    - trying to make a recovery mirror copy to external hard drives
    - the external hard drives are smaller than the source drives, but larger than the data
    - the external drives are connected via USB 2 (slow)
    - the servers range from 20 GB of data to 400 GB of data
    - the servers are remote, so hands-on access is a pain
    - I need to copy boot files
    - the external drives are currently empty
    Basically, I'm looking for a way to use a ghosting solution from INSIDE a running Linux server to an external hard drive, without booting a CD etc. The rsync/cpio solutions I've looked at don't work well with grub, /dev, /proc etc. I understand that since the system isn't offline it won't be a true "mirror" image as files change, but that's OK. Are there any free/commercial products that would work?
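
    For reference, one hedged sketch of making the rsync approach behave on a live system: exclude the pseudo-filesystems and record the partition table and bootloader setup separately, so the disk can be rebuilt later (mount point and device names are examples):

        rsync -aAXH --numeric-ids \
              --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*"} \
              / /mnt/usbdisk/rootfs/
        fdisk -l /dev/sda > /mnt/usbdisk/partition-table.txt
        # On restore, grub would be reinstalled onto the new disk, e.g. grub-install /dev/sda

    This copies file data rather than making a block-level ghost image, which is usually what's wanted when the target drive is smaller than the source.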

    Read the article

  • i[Pod|Phone|Pad|*] backups in iTunes

    - by Maroloccio
    iTunes <- iPhone. At sync time, a back-up is performed. Which data is included, and which data is not? I.e. are songs (potentially redundantly) backed up, so that a computer ends up having both the source file on the filesystem and the copy within the device back-up? Is anything on the iPhone filesystem not backed up? (I.e. on a Mac using Time Machine, some files are excluded from the back-up even though not all of them can be recreated upon restore - I lost my postfix config this way...)

    Read the article

  • Where can I find ready-to-use Windows scripts that use RoboCopy?

    - by Geo
    We are installing the Windows Resource Kit, which installs RoboCopy. We want to have access to a few Windows scripts that use RoboCopy, so we can start from those to build something else. Any ideas on where I can find a few samples? NOTE 1: A bit of information, just to help figure it out. Every time we try to copy the D drive to the E drive (a new drive), we get an error that says: ERROR 32 (0x000000020) Copying File d:\pagefile.sys The process cannot access the file because it is being used by another process. Waiting 30 seconds.
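
    As a starting point, a minimal sketch of a mirror command (the flag choices here are assumptions to tune, not a recommendation from the Resource Kit docs). pagefile.sys is the Windows swap file; it is always locked and contains nothing worth backing up, so it can simply be excluded:

        robocopy D:\ E:\ /MIR /R:1 /W:1 /XF pagefile.sys hiberfil.sys /XD "System Volume Information" /LOG:C:\robocopy.log

    /MIR mirrors the whole tree (including deletions), /XF and /XD skip the locked system files and folders that trigger ERROR 32, and /R:1 /W:1 keep retries short instead of the default 30-second waits.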

    Read the article

  • Where Are Databases Located - MySQL File Location

    - by nicorellius
    I just installed a CRM application with a MySQL database. I thought I knew the name of the database, but I can't find it. Now I am trying to perform a mysqldump, but I don't know the name of my database or where it's located. Most docs I read assume the admin knows where this database is located and the name of it - I should know this, I know.
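
    MySQL itself can answer both questions; a short sketch (credentials are examples):

        # List the databases the server knows about:
        mysql -u root -p -e "SHOW DATABASES;"
        # Show the directory where MySQL stores its data files:
        mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir';"
        # Once the name is known, dump it:
        mysqldump -u root -p name_of_crm_database > crm_backup.sql

    Each database normally appears as a subdirectory of that datadir.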

    Read the article

  • Mapping SkyDrive as a network drive in Mac OS

    - by vittore
    As you probably know, if you have a Windows Live account you can use the free SkyDrive 25 GB storage. Even more, a lot of people know that if you go to your SkyDrive in a browser and copy the cid query parameter value (https://...live.com/...&cid=xxxxxxxx), you can map SkyDrive as a network drive in Windows using the network path \\[cid].docs.live.net\[cid]\. I do know that if I have a network share like \\server\folder, I can map it in Mac OS too, as smb://server/folder. However, that doesn't seem to be the case with SkyDrive: when I try to map it as smb://[cid].docs.live.net/[cid], Finder says it can't connect. Does anyone know how to map it?
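
    One possibility, offered as a guess: the Windows drive mapping for SkyDrive runs over WebDAV rather than SMB, so a WebDAV mount is more likely to work on a Mac. A sketch from Terminal, keeping the [cid] placeholder as-is:

        mkdir -p ~/skydrive
        mount_webdav https://[cid].docs.live.net/[cid] ~/skydrive

    The same URL should also work in Finder via Go > Connect to Server.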

    Read the article

  • Should I use "Raid 5 + spare" or "Raid 6"?

    - by Trevor Boyd Smith
    What is "Raid 5 + Spare" (excerpt from User Manual, Sect 4.17.2, P.54): RAID5+Spare: RAID 5+Spare is a RAID 5 array in which one disk is used as a spare to rebuild the system as soon as a disk fails (Fig. 79). At least four disks are required. If one physical disk fails, the data remains available because it is read from the parity blocks. Data from a failed disk is rebuilt onto the hot spare disk. When a failed disk is replaced, the replacement becomes the new hot spare. No data is lost in the case of a single disk failure, but if a second disk fails before the system can rebuild data to the hot spare, all data in the array will be lost. What is "Raid 6" (excerpt from User Manual, Sect 4.17.2, P.54): RAID6: In RAID 6, data is striped across all disks (minimum of four) and two parity blocks for each data block (p and q in Fig. 80) are written on the same stripe. If one physical disk fails, the data from the failed disk can be rebuilt onto a replacement disk. This RAID mode can support up to two disk failures with no data loss. RAID 6 provides for faster rebuilding of data from a failed disk. Both "Raid 5 + Spare" and "Raid 6" are SO similar... I can't tell the difference. When would "Raid 5 + Spare" be optimal? And when would "Raid 6" be optimal? The manual dumbs down the different RAID modes with five-star ratings. "Raid 5 + Spare" only gets 4 stars but "Raid 6" gets 5 stars. If I were to blindly trust the manual, I would conclude that "Raid 6" is always better. Is "Raid 6" always better?

    Read the article

  • Time machine folders won't restore recursively

    - by Brian Postow
    I have a Snow Leopard system with Time Machine. Every so often I need to look at an old folder, so I go in and try to restore it, usually to a different location. What I end up with is an empty folder of the appropriate name - none of the files, nothing. It doesn't give any error messages. The files ARE there, because I can see them; also, if I go in via the mount point and trace down through the file system, I can do a file copy of the folder, and everything turns out fine. However, that seems like a bad idea, so I'd like to know why the "right" way doesn't work... I believe that it used to work. It's also possible that it only fails for certain folders; I haven't tested extensively.

    Read the article

  • Linux: Alternative to rsync? (ie, scp with resume)

    - by Joernsn
    I've been using rsync to automatically send files from one box to another, which is great compared to scp, since it supports resuming. However, when resuming a very large file (10 GB), rsync has to read both files and compare them, which is very slow. I don't need fancy error handling, just "scp with resume", so here's my question: is there an alternative to rsync/scp that supports resuming without having to read both the source and destination files? I've read the manuals without finding anything I can use; please let me know if I've missed something. This is the rsync line I've been using: rsync -av --partial --progress --inplace SRC DST
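
    One hedged variant worth knowing about: rsync's own --append option resumes a partial transfer by trusting the data already at the destination and sending only the missing tail, so the matched prefix is not re-read and compared (the trade-off being that the existing data is never verified):

        rsync -av --progress --append SRC DST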

    Read the article

  • Asus WL-520GC + external HDD via Ethernet

    - by Azat
    Does anyone have experience with such a combination, ideally with the same model (or one from the same model line)? How does it work? How hard is it all to set up? What difficulties are there? I have been using this router for a couple of years, but I don't currently have such an external drive at my disposal. I want to buy something like a Verbatim 47591 or a Western Digital WDH1NC10000. (By the way, my router has no USB ports, so only Ethernet-attached external HDDs are an option.) Thank you a lot in advance!

    Read the article

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use if they want to run their applications/bug fixes against fresh data. That script looks like this:

        dbs=( analytics auth logs users )
        server=localhost
        conn="-h ${server} -u ${username} --password=${password}"

        # Stop the replication client so we don't encounter weird data.
        echo "STOP SLAVE" | mysql ${conn}

        # Bunch of bulk insert optimizations
        echo "SET autocommit=0" | mysql ${conn}
        echo "SET unique_checks=0" | mysql ${conn}
        echo "SET foreign_key_checks=0" | mysql ${conn}

        # Restore all databases and tables.
        for sourcedb in ${dbs[*]}
        do
            destdb=${prefix}${sourcedb}
            echo "Dropping database ${destdb}..."
            echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
            echo "CREATE DATABASE ${destdb}" | mysql ${conn}
            # First, all the tables.
            for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
                if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                    createTable=`echo "SHOW CREATE TABLE ${table}" | mysql -B -r $conn $sourcedb | tail -n +2 | cut -f 2-`
                    echo "Restoring ${destdb}/${table}..."
                    echo "$createTable ;" | mysql $conn $destdb
                    insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                    echo "$insertData" | mysql $conn $destdb
                fi
            done
        done

        echo "SET foreign_key_checks=1" | mysql ${conn}
        echo "SET unique_checks=1" | mysql ${conn}
        echo "COMMIT" | mysql ${conn}

        # Restart the replication client
        echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all INNODB tables. Thanks!

    Read the article

  • Best way to synchronise photos to three machines. Laptop, Desktop, NAS

    - by wookiebreath
    I'm using Picasa as my photo management software, and I have a collection of photos that gets downloaded from my cameras onto either my desktop or my laptop. I'd like to automatically have copies of all my photos on my laptop, my desktop and my NAS. Does anyone else do this? Do you have any recommendations for software or processes? Is there anything I need to be careful of? I had a look at Dropbox, but it appears to have a 2 GB limit. What about something like SyncBack?

    Read the article

  • Synchronize large objects to S3 efficiently

    - by emk
    I need to synchronize about 30 GB of git repositories to S3. These repos may contain some very large pack files, on the rough order of 2 GB. I know that S3 has recently added support for large objects, and has new APIs that allow objects to be uploaded as several parallel chunks. Is there a good command-line tool for Linux that allows me to efficiently synchronize large objects with S3, in a fashion similar to s3sync?
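
    For what it's worth, one candidate to evaluate (an assumption, not a confirmed fit for this workload): newer s3cmd releases added multipart upload support to s3cmd sync, with a tunable chunk size. The bucket name is a placeholder:

        s3cmd sync --multipart-chunk-size-mb=100 /srv/git/ s3://my-git-mirror/repos/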

    Read the article

  • Can't use command line – "command not found" after editing PATH

    - by MEM
    I'm running OS X Mavericks and was trying to install MAMP PRO 2.2. I was trying to configure the PATH variable to include the PHP binaries of MAMP PRO, so I added the following line to my ~/.bash_profile file: export PATH=/Applications/MAMP PRO/bin/php/php5.5.3/bin:$PATH As you may notice, since I have MAMP PRO and not just MAMP, the path contains a space. As a consequence, I now get the following error each time I open the terminal: -bash: export: `PRO/bin/php/php5.5.3/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin': not a valid identifier Worse: I can't get any command to run, like ls, clear etc. I always get "command not found". I don't even know the absolute path for ls. How can I make the commands work again, so that I can properly fix the path I was trying to set up in the .bash_profile file?
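
    A likely two-step fix, as a sketch: first restore a sane PATH in the current shell session so basic commands resolve again, then quote the MAMP PRO path in ~/.bash_profile so the space survives word splitting:

        # In the broken terminal session (the standard OS X system paths):
        export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
        # Then edit ~/.bash_profile and replace the unquoted line with:
        export PATH="/Applications/MAMP PRO/bin/php/php5.5.3/bin:$PATH"

    For the record, ls lives at /bin/ls, so /bin/ls works even with a broken PATH.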

    Read the article

  • What does dd conv=sync,noerror do?

    - by dding
    So in what cases does adding conv=sync,noerror make a difference when backing up an entire hard disk onto an image file? Is conv=sync,noerror a requirement when doing forensic work? If so, why is that the case (with reference to Linux/Fedora)? Edit: OK, so if I do dd without conv=sync,noerror and dd encounters a read error when reading a block (let's say its size is 100M), does dd just skip the 100M block and read the next block without writing anything (whereas dd conv=sync,noerror writes zeros for that 100M of output)? And is the hash of the original hard disk different from the hash of the output file if done without conv=sync,noerror? Or is that only when a read error occurred?
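
    For context, a typical invocation, sketched with example device and output names:

        # noerror: keep going after a read error instead of aborting;
        # sync: pad every short or failed read out to the full block size with zeros,
        # so every byte of the image stays at the same offset as on the source disk.
        dd if=/dev/sda of=/mnt/evidence/sda.img bs=4096 conv=sync,noerror

    Without sync, a failed block would simply be missing from the image and everything after it would shift, which is why the two options are paired for forensic images.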

    Read the article

  • rsync to cifs mount but preserve permissions

    - by weberwithoneb
    I'm backing up a Linux server to a Windows share. I'm currently mounting the Windows share with CIFS and using rsync for incremental backups. File permissions and ownership are not being preserved, as should be expected after reading this Samba documentation: The core CIFS protocol does not provide unix ownership information or mode for files and directories. Because of this, files and directories will generally appear to be owned by whatever values the uid= or gid= options are set, and will have permissions set to the default file_mode and dir_mode for the mount. How can I achieve my goal of preserving unix file permissions while writing to a Windows share? Is there another network file system that would allow me to do this? Thanks.
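
    One workaround, offered as a sketch rather than a tested recipe: keep the unix metadata inside a container that the Windows filesystem never has to represent, such as a tar archive written onto the CIFS mount (paths are examples):

        tar -czf /mnt/windows_share/backup-$(date +%F).tar.gz /data

    tar records ownership and modes in its archive headers regardless of what the mount supports (the -p flag then restores them on extraction); the trade-off is giving up rsync's file-level incremental transfers.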

    Read the article

  • Roaming Profiles: Best Practices

    - by Noah Clark
    I want to set up roaming profiles for about 50 users. What is the best way to go about doing this? What are the best practices? I've read about Desktop/My Documents folders being TOO big. How big is too big? We have a few users who keep a lot of media on their machines to listen to throughout the day; I would imagine they have a few gigs of MP3s in their My Documents folder. How do you deal with this? Thanks!

    Read the article

  • Configure bash_profile for one single terminal emulator

    - by Hugo
    I'm using a new terminal emulator. Terminology is the E17 default terminal, and it has a great command, $ tyls, which is a "graphical" $ ls. I want to create an alias just for this terminal, because the tyls command doesn't make sense to konsole, rxvt or other terminals. I'm thinking of some kind of "if" in ~/.bash_profile to test whether I'm in Terminology and, if so, run the following command: alias ls="tyls" But how can I test that I'm in Terminology and not xterm? Can someone help me? Thanks!
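
    One way this could look, assuming Terminology exports a TERMINOLOGY environment variable to its children (worth verifying with env inside each terminal; otherwise fall back to inspecting $TERM):

        # ~/.bash_profile
        if [ -n "$TERMINOLOGY" ]; then
            alias ls="tyls"
        fi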

    Read the article

  • How do I copy files between harddrives on Ubuntu CLI?

    - by ed209
    I have a dedicated server with a 120 GB main SSD. The server happens to come with a couple of 3000 GB hard drives. I'd like to use them to back up my main drive. Preferably, I'd like one as an exact copy of the main SSD and the other holding incremental backups of the MySQL database and a user-uploads folder. These are the drives I have:

        Disk /dev/sda: 120.0 GB, 120034123776 bytes
        255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f2e18

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2048     4196352     2097152+  83  Linux
        /dev/sda2         4198400     5246976      524288+  83  Linux
        /dev/sda3         5249024   234441647   114596312   83  Linux

        Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdb doesn't contain a valid partition table

        Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdc doesn't contain a valid partition table

    The first problem I have is that I have no idea how to copy from one drive to another. Kind of embarrassing, I know, but I don't know where to start. I'm thinking of this in terms of the Mac OS CLI, where I'm able to copy between /Volumes - is there an equivalent? (There is nothing under /mnt or /media.)
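
    A minimal sketch of the missing step, using the device names from the fdisk output above (partition layout and mount points are example choices): the blank drives first need a partition table and a filesystem, and then they mount much like Mac OS volumes under /Volumes:

        parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
        mkfs.ext4 /dev/sdb1
        mkdir -p /mnt/backup1
        mount /dev/sdb1 /mnt/backup1
        # Copy the live root filesystem, skipping pseudo-filesystems and the backup target itself:
        rsync -aAXv --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*"} / /mnt/backup1/

    /mnt/backup1 then plays the same role that /Volumes/SomeDisk does on a Mac.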

    Read the article
