Search Results

Search found 7545 results on 302 pages for 'backup and restore'.


  • rsync'd a folder, folder doesn't show up, but free disk space decreased

    - by Patrick
    I am currently trying to switch from Mac to a Windows/Ubuntu dual boot (on 2 separate internal HDDs), but ran into some trouble restoring my documents. I am not sure all the information below is necessary, but if I knew how to solve it, I wouldn't ask it here. I backed up my Mac on an external HDD with Carbon Copy Cloner before buying this laptop. I wanted to put these files in my user folder on my Windows HDD, but I could not do that from inside Windows (the Mac drive is HFS+ formatted), so I used rsync from inside Ubuntu to copy the documents from the external HDD to the Windows partition. It seemed to go okay, but from inside Windows (and later also Ubuntu) the folder doesn't show up. My free HDD space, however, has shrunk by about 200 GB (the size of the backup) when looking at the disk properties (from inside Windows and Ubuntu).

    rsync command I used:

        rsync -av /media/patrick/Toshiba\ 1.5T/Users/patrickvandenberg/ /media/patrick/Windows8_OS/Users/Patrick/MacBackup/

    Folder does not exist:

        patrick@patrick-Lenovo-IdeaPad-Y410P:~$ cd /media/patrick/Windows8_OS/Users/Patrick/MacBackup
        bash: cd: /media/patrick/Windows8_OS/Users/Patrick/MacBackup: No such file or directory

    Size of disk:

        patrick@patrick-Lenovo-IdeaPad-Y410P:~$ du -hs /media/patrick/Windows8_OS/
        195G /media/patrick/Windows8_OS/

    Size of disk according to Disk properties: http://i.stack.imgur.com/OteMX.png (not enough rep to insert the image)

    Read the article
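
    For the rsync question above, a minimal diagnostic sketch: since rsync exited without complaint and the space is gone, the data most likely landed somewhere under that volume, just not at the expected path, or underneath the mount point while the NTFS volume was not actually mounted. The commands below only read (apart from the umount) and reuse the paths from the question:

        # Where did ~200 GB go? Show the biggest directories one and two levels down.
        sudo du -xh --max-depth=2 /media/patrick/Windows8_OS | sort -h | tail -n 20

        # Look for the backup directory under any spelling or nesting rsync may have created.
        sudo find /media/patrick/Windows8_OS -maxdepth 4 -type d -iname 'macbackup*'

        # If the copy ran while the NTFS volume was not mounted, the files would sit on the
        # Ubuntu root filesystem hidden underneath the mount point; unmount it and look:
        sudo umount /media/patrick/Windows8_OS && sudo du -sh /media/patrick/Windows8_OS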

  • Keeping multiple root directories in a single partition

    - by intuited
    I'm working out a partition scheme for a new install. I'd like to keep the root filesystem fairly small and static, so that I can use LVM snapshots to do backups without having to allocate a ton of space for the snapshot. However, I'd also like to keep the total number of partitions small. Even with LVM, there's inevitably some wasted space and it's still annoying and vaguely dangerous to allocate more. So there seem to be a couple of different options:

    1) Have the partition that will contain bulky, variable files, like /srv, /var, and /home, be the root partition, and arrange for the core system state — /etc, /usr, /lib, etc. — to live in a second partition. These files can (I think) be backed up using a different backup scheme, and I don't think LVM snapshots will be necessary for them.

    2) The opposite: put the big variable directories on the second partition, and have the essential system directories live on the root FS.

    Either of these options requires that certain directories be pointers of some variety to subdirectories of a second partition. I'm aware of two different ways to do this: symlinks and bind-mounts. Is one better than the other for this purpose? Is there another option? Do any of the various Ubuntu installation media/strategies support this style of partition layout?

    Read the article
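
    A minimal sketch of the bind-mount variant discussed in the question above, assuming the bulky trees live on a second filesystem mounted at /mnt/bulk (a hypothetical path). A bind-mount looks like the real directory to every application, which is why it is usually preferred over a symlink for things like /var and /home:

        # one-off, for testing the layout
        mount --bind /mnt/bulk/home /home
        mount --bind /mnt/bulk/var  /var

        # persistent equivalents in /etc/fstab:
        #   /mnt/bulk/home  /home  none  bind  0  0
        #   /mnt/bulk/var   /var   none  bind  0  0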

  • Mac OS X kernel panic after TM restore

    - by Sev
    I changed the HD in my MacBook Pro and restored from a Time Machine backup. Now I keep getting a kernel panic every time I restart. I booted from the DVD and ran a few tests: the HD and RAM are both detected, and I also repaired the disk through Disk Utility, but I'm still getting the same error. Any suggestions?

    Read the article

  • Rsync over ssh: "ERROR: module is read only" suddenly appeared

    - by user978548
    For some time I've used rsync over ssh to back up my shared host's contents to my personal Synology NAS (a 212j, for that matter), and it worked quite well. For information, I use a password-less ssh connection. Three days ago I updated my NAS software, and since then (or at least I believe that's when it started) the backup won't work anymore. I get the following error on the host:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        ERROR: module is read only

    ...which I do not understand. Besides that, nothing I know of changed on either the source or the destination that could be related to rsync or ssh. I did check a few things and all seems to be alright:

    - I can still connect through ssh from the host to my NAS with the right user, so ssh things like keys haven't changed.
    - I also have the correct file permissions on the NAS (I checked, and also tried to create files and directories with the user that rsync uses over ssh).
    - I read here and there that the error means I have to make sure my rsyncd.conf has read only = no in it, but as far as I know I never used rsyncd, never configured anything for it, and until now it worked like a charm.

    I use the following command to do the backup:

        rsync -ab --recursive \
          --files-from="$FILES_FROM" \
          --backup-dir=backup_$SUFFIX \
          --delete \
          --filter='protect backup_*' \
          $WDIRECTORY/ \
          remote_backup:$REMOTE_BACKUP/

    So I'm stuck and really can't figure out what happened.

    Edit: As suggested in the comments, I also tried passing commands to ssh (but not from inside an ssh session), which worked as expected, and also tried a single rsync command, which didn't work, failing just like the complete backup command.

        (sharedHost):hostuser:~ > touch test.txt
        (sharedHost):hostuser:~ > rsync test.txt remote_backup:backups/test.txt
        ERROR: module is read only
        rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.8]
        rsync: connection unexpectedly closed (9 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7]

    and

        (sharedHost):hostuser:~ > ssh remote_backup 'touch /abs_path_to_backups/backups/test2.txt && echo "ProoF" > /abs_path_to_backups/backups/test2.txt'
        (sharedHost):hostuser:~ > ssh remote_backup 'cat /abs_path_to_backups/backups/test2.txt'
        ProoF

    Read the article
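
    For the "module is read only" question above, a hedged sketch of two things worth checking. That error text comes from an rsync daemon module rather than from a plain rsync-over-ssh copy, which suggests the DSM update changed how incoming rsync connections are handled on the NAS; the config path below is an assumption, not Synology documentation:

        # Make the transport explicit, ruling out any rsync:// or host::module alias:
        rsync -ab -e ssh --recursive --files-from="$FILES_FROM" \
          --backup-dir=backup_$SUFFIX --delete --filter='protect backup_*' \
          "$WDIRECTORY/" remote_backup:"$REMOTE_BACKUP/"

        # Look for a daemon config the update may have added or reset; a module covering
        # the backup path would need "read only = no":
        ssh remote_backup 'cat /etc/rsyncd.conf 2>/dev/null'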

  • Suggestions for splitting server roles amongst Hyper-V virtual servers / RAID6 or RAID10? / AppAssure

    - by Anon
    We have 2 Hyper-V hosts at present running 1 virtual server that was converted from a physical box running all roles. My plan is to split the roles over various virtual machines, upgrading to the latest software versions as I go, and use the backup server as a standby in case the main server fails. AppAssure backup software has a feature called Virtual Standby, so the VHDs can be ready to be fired up on the backup server if necessary. Off-site backups will be done via external USB drive for now. I'm just seeking some input/suggestions into how I'm planning to split the roles out amongst various virtual servers.

    Also, I'm curious how to set up the storage on the servers. We do not have any NASes, SANs or any budget for this. What would the best RAID level be to use? I'm thinking either RAID6 (which is currently used), though I'm concerned about the write speeds, or RAID10, but again I'm worried that I can only lose 1 drive (from the same mirror) as opposed to any 2 with RAID6. I realise I have a hot spare for this, but what if a further drive fails during a rebuild? Is the write penalty of RAID6 worth the extra reliability over RAID10? Or will it be too slow with all the roles I am planning, so that RAID10 is my only real option? The reason for the needed redundancy is that I am the only technician and I'm not always on-site.

    Options I've considered:

    1) 5 drives in a RAID6 set, 200 GB for the host OS, rest for VM storage. 1 drive as a hot spare - this is how it is currently set up.
    2) 4 drives in a RAID10 set, 200 GB for the host OS, rest for VM storage. 2 drives as hot spares.
    3) 4 drives in a RAID10 set for VM storage, 2 drives in a RAID1 set for the host OS. No hot spare - while this is probably the best option with the number of drives I have, I don't like the idea of having no hot spare.
    4) 3 drives in a RAID6 set for VM storage, 2 drives in a RAID1 set for the host OS. 1 drive as a hot spare.

    All options give us enough storage capacity for our files, etc. We don't have any budget for extra drives or extra hot-swap HD chassis for the servers. We have about 70 clients and about 150 users.

    MAIN SERVER
        Intel Xeon 5520 @ 2.27 GHz (2 processors)
        16GB RAM
        6 x 1TB Seagate Barracuda ES.2 Enterprise SATA drives
        Intel SRCSATAWB RAID controller
    Virtual machine workload using Hyper-V on Windows Server 2008 R2:
        DC01 - Active Directory Domain Controller / DNS server / Global catalog - 1GB RAM
        DC02 - Active Directory Domain Controller / DNS server / Global catalog - 1GB RAM
        Member Server - DHCP server, File server, Print server - 1GB RAM
        SCCM Member Server - 4GB RAM
        Third Party Software Member Server - A/V server, Ticketing software, etc - 4GB RAM
        Exchange 2007 - 4GB RAM - however we are probably migrating to a hosted solution, therefore freeing up resources

    BACKUP SERVER
        Intel Xeon E5410 @ 2.33GHz (2 processors)
        16GB RAM
        6 x 2TB WD RE4 SATA drives
        Intel SRCSASRB RAID controller
    Virtual machine workload using Hyper-V on Windows Server 2008 R2:
        AppAssure backup software - 8GB RAM

    Read the article
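
    On the RAID6-versus-RAID10 write-penalty question above, a rough back-of-envelope sketch. It assumes roughly 80 random IOPS per 7,200 rpm SATA drive (an assumption, not a measurement) and the usual penalties of 2 writes per logical write for RAID10 and 6 for RAID6; real numbers depend heavily on controller cache and workload mix:

        # effective random-write IOPS ~ (drives * per-drive IOPS) / write penalty
        per_drive=80
        echo "Option 1, RAID6,  5 drives: $(( 5 * per_drive / 6 )) IOPS"   # ~66
        echo "Option 2, RAID10, 4 drives: $(( 4 * per_drive / 2 )) IOPS"   # ~160
        echo "Option 4, RAID6,  3 drives: $(( 3 * per_drive / 6 )) IOPS"   # ~40

    Sequential throughput is far less affected; the penalty mostly matters for the small random writes that a set of lightly loaded VMs tends to generate.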

  • How to restore an OS using Acronis

    - by user23950
    I have made a backup using Acronis 2010 and tested it on VMware. I'm having problems booting up the dual-boot VM after restoring from the .tib file. What do I do? What other software that can image the OS can you recommend that is easier to use than Acronis?

    Read the article

  • Is there a simple automatic backup system for Visual Studio projects?

    - by Jelly Amma
    Hello, I'm using Visual Studio 2008 Express and I would like Visual Studio (or perhaps an add-in) to save my whole project to some sort of auto-incrementing archive or whatever would help me recover from disasters. I don't have much need for SVN or complex versioning systems. I'm just looking for something simple and lean. Any help would be much appreciated. Jenny

    PS: I looked into the built-in AutoRecover feature, but it doesn't seem to save more than a few files.

    Read the article

  • Home directory messed up

    - by nuthan
    Recently I installed Ubuntu 12.04. For some reason, I backed up my home directory contents to another directory (say, a bkp dir). Precisely, I moved Documents, Pictures, Downloads, etc. to the bkp directory, then deleted all my original home directory contents. I restored all the bkp directory content back to home, but I find it all on my Desktop. I believe some kind of link is broken: the restored folders don't have their usual home-directory icons, and I don't find them in my Places options either. How do I restore this? Messed up image link. Thanks...

        nuthan@nuthan-desktop:~$ ls -lh ~ ~/Desktop
        /home/nuthan:
        total 2.8M
        drwxr-x--- 11 nuthan nuthan 4.0K May 28 20:05 android-sdk-linux
        drwxrwxr-x 4 nuthan nuthan 4.0K May 25 13:36 android-sdks
        drwxrwxr-x 2 nuthan nuthan 4.0K May 18 17:30 convert
        -rw-rw-r-- 1 nuthan nuthan 0 May 30 09:07 dependancies~
        drwxr-xr-x 2 nuthan nuthan 4.0K Jun 6 14:16 Desktop
        drwxrwxrwx 6 nuthan nuthan 4.0K Jun 6 12:06 Documents
        drwxr-xr-x 28 nuthan nuthan 12K Jun 6 13:52 Downloads
        drwxrwxr-x 2 nuthan nuthan 4.0K Mar 6 13:23 examples
        -rw-r--r-- 1 nuthan nuthan 8.3K May 11 16:19 examples.desktop
        drwxrwxr-x 4 nuthan nuthan 4.0K May 18 19:04 github
        -rw-rw-r-- 1 nuthan nuthan 0 May 28 18:40 linux~
        drwxr-xr-x 2 nuthan nuthan 4.0K May 11 16:46 Music
        drwxrwxr-x 2 nuthan nuthan 4.0K May 30 08:48 node-code
        drwxrwxr-x 4 nuthan nuthan 4.0K May 30 08:40 node_modules
        drwxr-xr-x 7 nuthan nuthan 4.0K May 25 13:55 noduino
        drwxrwxr-x 2 nuthan nuthan 4.0K May 30 11:58 nuthan
        drwxrwxrwx 3 nuthan nuthan 4.0K May 24 11:13 Pictures
        drwxrwxr-x 6 nuthan nuthan 4.0K Mar 6 13:23 public
        drwxr-xr-x 2 nuthan nuthan 4.0K May 25 11:44 Public
        drwxrwxr-x 2 nuthan nuthan 4.0K May 29 18:50 python
        -rw-rw-r-- 1 nuthan nuthan 983K Jun 6 14:16 Screenshot from 2012-06-06 14:16:37.png
        -rw-rw-r-- 1 nuthan nuthan 980K Jun 6 14:20 Screenshot from 2012-06-06 14:20:24.png
        -rw-rw-r-- 1 nuthan nuthan 731K Jun 6 14:22 Screenshot from 2012-06-06 14:22:06.png
        drwxrwxr-x 3 nuthan nuthan 4.0K May 31 18:17 sketchbook
        drwxrwxr-x 2 nuthan nuthan 4.0K Jun 6 13:05 sql
        -rw-rw-r-- 1 nuthan nuthan 201 May 28 22:08 sql~
        drwxr-xr-x 2 nuthan nuthan 4.0K May 11 16:46 Templates
        -rw-rw-r-- 1 nuthan nuthan 5.1K Jun 4 12:29 test~
        drwxrwxr-x 3 nuthan nuthan 4.0K May 14 11:09 Titanium_Studio
        drwxrwxr-x 4 nuthan nuthan 4.0K May 14 21:00 Titanium Studio Workspace
        drwxrwxr-x 4 nuthan nuthan 4.0K Jun 1 18:29 TPM_Trak
        drwxrwxr-x 2 nuthan nuthan 4.0K May 24 17:30 Ubuntu One
        drwxr-xr-x 2 nuthan nuthan 4.0K May 11 16:46 Videos
        drwxrwxr-x 6 nuthan nuthan 4.0K May 31 15:10 workspace
        drwxrwxr-x 3 nuthan nuthan 4.0K May 14 11:57 Zend

        /home/nuthan/Desktop:
        total 20K
        -rw-rw-r-- 1 nuthan nuthan 441 May 30 09:07 dependancies
        -rw-rw-r-- 1 nuthan nuthan 1.6K May 28 18:40 linux
        -rw-rw-r-- 1 nuthan nuthan 470 May 28 22:16 sql
        -rw-rw-r-- 1 nuthan nuthan 5.1K Jun 4 19:34 test
        nuthan@nuthan-desktop:~$

    Read the article
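
    For the home-directory question above, a hedged pointer: on Ubuntu 12.04 the "special" folders (Documents, Music, the Places entries and their icons) are driven by the XDG user-dirs configuration, and deleting and re-creating the folders outside that mechanism can leave it pointing at the wrong places. A minimal sketch; nothing here touches any data:

        # See where the desktop currently thinks the standard folders live:
        cat ~/.config/user-dirs.dirs

        # Recreate/repair the standard XDG folders from the defaults:
        xdg-user-dirs-update

        # Restart the file manager so Places and the folder icons are refreshed:
        nautilus -q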

  • Updating a backup image (.wim and/or Acronis .tib)

    - by Backdraft
    Anyways, I've got a Windows 7 installation that I want to make a generalized backup image of, so I can use it for future installs not only on my desktop (from which the image is derived) but also on other systems with dissimilar hardware. I've therefore arrived at 2 options: using either sysprep/imagex from the WAIK (guide here), or the simpler Acronis True Image with their Universal Restore add-on. Of course, they create distinct image file types, .wim and .tib respectively.

    What I'd like to do is periodically update this image, say with Windows Updates, by booting it either on a physical partition or using virtualization (VirtualBox/VMware), performing the updates, and saving the updated .wim or .tib image file again. What's the simplest way I could do this?

    Another question: I created this generalized backup image on a 500GB Seagate 7200RPM HDD. Say I get an SSD as an OS drive in the future, can I just deploy this backup image to the SSD normally, or are there any potential problems to be aware of or avoid (i.e. is it best to completely reinstall the OS on the SSD from scratch, or can I use the image created on the normal HDD with no issue)? Thanks and Happy Holidays.

    Read the article
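
    On the "update the image periodically" part of the question above, one option for the .wim side that avoids booting the image at all is offline servicing with DISM, which can inject update packages into a mounted image. A rough sketch with made-up paths; the Acronis .tib format has no comparable offline route that I know of, so that image would still need to be booted, updated and re-captured:

        rem Mount index 1 of the image, add updates from a folder of .cab/.msu files, then commit.
        dism /Mount-Wim /WimFile:D:\images\win7.wim /Index:1 /MountDir:C:\mount
        dism /Image:C:\mount /Add-Package /PackagePath:D:\updates\
        dism /Unmount-Wim /MountDir:C:\mount /Commit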

  • Tidy up old Windows Server Backup snapshots

    - by dty
    Hi, I'm running wbadmin from a scheduled job, backing up my C: and D: drives to my E: and (I believe!) including the system state:

        wbadmin start backup -backuptarget:e: -include:c:,d: -allCritical -noVerify -quiet

    I'd like to delete old backups, but I'm concerned that all the information I can find says to use wbadmin to delete old system state backups, and vssadmin to delete other backups. As far as I know, my backups ARE system state backups, but are using VSS on E: for storage, so I'm worried about trying either of these techniques for fear of losing all my backups. This is a home network, so I don't have a spare server to test this on. I'm also happy to simply restrict the space used on E:, but I can't make sense of the difference between the /for and /on parameters of the relevant vssadmin command.

    For reference, here's the output of vssadmin show shadows:

        Contents of shadow copy set ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
        Contained 1 shadow copies at creation time: 07/01/2011 08:12:05
           Shadow Copy ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
           Original Volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy83
           Originating Machine: x.y.com
           Service Machine: x.y.com
           Provider: 'Microsoft Software Shadow Copy provider 1.0'
           Type: DataVolumeRollback
           Attributes: Persistent, No auto release, No writers, Differential
        [... repeated a lot...]

    vssadmin show shadowstorage:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 0 B
           Allocated Shadow Copy Storage space: 0 B
           Maximum Shadow Copy Storage space: 5.859 GB
        Shadow Copy Storage association
           For volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 0 B
           Allocated Shadow Copy Storage space: 0 B
           Maximum Shadow Copy Storage space: 40.317 GB
        Shadow Copy Storage association
           For volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 168.284 GB
           Allocated Shadow Copy Storage space: 171.15 GB
           Maximum Shadow Copy Storage space: UNBOUNDED

    wbadmin get versions:

        Backup time: 07/01/2011 03:00
        Backup target: 1394/USB Disk labeled xxxxxxxxx(E:)
        Version identifier: 01/07/2011-03:00
        Can Recover: Volume(s), File(s), Application(s), Bare Metal Recovery, System State
        [... repeated a lot...]

    Read the article
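
    For the snapshot-tidying question above, a hedged sketch of the two knobs mentioned, using the E: volume from the question. In the resize command, /For names the volume whose shadow copies are being limited and /On names the volume that physically holds the shadow-copy storage; since wbadmin writes to E: and keeps its history as shadow copies of E:, both are E: here. Note that "delete systemstatebackup" only applies to backups made with "wbadmin start systemstatebackup", which may not match a "start backup" job, so treat this as a starting point rather than a recipe:

        rem Cap the space VSS may use for E:'s shadow copies (oldest copies are purged as needed):
        vssadmin Resize ShadowStorage /For=E: /On=E: /MaxSize=100GB

        rem Alternatively, keep only the newest N system state backups:
        wbadmin delete systemstatebackup -keepVersions:5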

  • How to backup encrypted home in encrypted form only?

    - by Eric
    I want to back up the encrypted home of a user who might be logged in at backup time. Which directories should I back up if I want to ensure that absolutely no plaintext data can be leaked? Are the following folders always encrypted?

        /home/user/.Private
        /home/user/.ecryptfs

    I just want to make sure that no data leaks, as the backup destination is untrustworthy.

    Edit: Yes, as Lord of Time has suggested, I'd like to know which folders and/or files I need to back up if I need to store only encrypted content, in a way that allows me to recover it later with the right passphrase.

    Read the article
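
    For the encrypted-home question above, a minimal sketch under the usual Ubuntu eCryptfs "encrypted home" layout, where the actual ciphertext lives under /home/.ecryptfs/<user>/ and the paths in the question are links/mounts of it. Only ciphertext and the passphrase-protected wrapped key are copied, never the mounted plaintext view; the user name and destination are placeholders:

        # Ciphertext of the home directory (safe to copy while the user is logged in,
        # though files being written at that exact moment may be inconsistent):
        rsync -a /home/.ecryptfs/user/.Private/  untrusted-host:/backup/user/.Private/

        # eCryptfs metadata, including the wrapped passphrase needed for later recovery:
        rsync -a /home/.ecryptfs/user/.ecryptfs/ untrusted-host:/backup/user/.ecryptfs/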

  • Restore Oracle XE data from *.DBF

    - by asero
    Is it possible to restore an Oracle database from *.DBF files? If yes, then how? I find it really hard to deal with backup and restore in Oracle compared to SQL Server. I have a backup of the whole oraclexe folder, including these files.

    Read the article
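
    For the Oracle XE question above, a rough sketch of the simplest case: if the copy of the whole oraclexe folder was taken while the database was shut down (a cold, consistent backup), it contains the datafiles (.DBF), control files and redo logs together, and restoring is essentially putting them back in their original locations and starting the instance. The paths are illustrative only, and a backup taken while the database was running generally needs proper media recovery (RMAN) instead:

        # With the instance stopped, put the cold copy back in its original location, then start it:
        cp -a /backup/oraclexe/app/oracle/oradata/XE/. /usr/lib/oracle/xe/app/oracle/oradata/XE/
        echo "startup" | sqlplus / as sysdba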

  • Restore file's modification time in git

    - by rampion
    I understand the default git behaviour of updating the modification time every time it changes a file, but there are times when I want to restore a file's original modification time. Is there a way I can tell git to do this? (As an example, when working on a large project, I made some changes to configure.ac, found out that autotools doesn't work on my system, and wanted to restore configure.ac to its original contents and modification time so that make doesn't try to update configure with my broken autotools.)

    Read the article
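
    For the git mtime question above, a small sketch: git itself does not record modification times, but the file's content can be restored from the index and its mtime then set to the date of the last commit that touched it. Assumes GNU touch for the -d date parsing:

        # Restore the committed content, then stamp the file with its last commit date:
        git checkout -- configure.ac
        touch -d "$(git log -1 --format=%ci -- configure.ac)" configure.ac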

  • Restore WindowState from Minimized

    - by tbetts42
    Is there an easy method to restore a minimized form to its previous state, either Normal or Maximized? I'm expecting the same functionality as clicking the taskbar (or right-clicking and choosing Restore). So far I have this, but if the form was previously maximized, it still comes back as a normal window:

        if (docView.WindowState == FormWindowState.Minimized)
            docView.WindowState = FormWindowState.Normal;

    Do I have to handle the state change in the form to remember the previous state?

    Read the article

  • Problem with user logins after db Restore

    - by JJgates
    I have two SQL 2005 instances that reside on different networks. I need to back up a database from instance A and restore it to a database in instance B on a weekly basis, so that both databases hold the same data. After the restore, the user SIDs in database B no longer match the logins on that instance, so users can't log into database B and the connection strings for the web application it supports are broken. Is there a workaround for this? Thanks.

    Read the article
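
    On the login question above, this looks like the classic "orphaned users" situation: the database users travel with the backup, but their SIDs no longer match the logins defined on instance B. A hedged sketch of the usual check and re-map; the server, database and user names are placeholders, and the second command would be run once per affected user (it could also be scripted into the weekly job right after the restore):

        rem List database users whose SIDs have no matching login, then re-map one of them:
        sqlcmd -S serverB -d RestoredDb -Q "EXEC sp_change_users_login 'Report'"
        sqlcmd -S serverB -d RestoredDb -Q "EXEC sp_change_users_login 'Auto_Fix', 'webapp_user'"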

  • Postgres: clear entire database before re-creating / re-populating from bash script

    - by Hoff
    Hi folks, I'm writing a shell script (it will become a cron job) that will:

    1: dump my production database
    2: import the dump into my development database

    Between steps 1 and 2, I need to clear the development database (drop all tables?). How is this best accomplished from a shell script? So far, it looks like this:

        #!/bin/bash
        time=`date '+%Y'-'%m'-'%d'`

        # 1. export (dump) the current production database
        pg_dump -U production_db_name > /backup/dir/backup-${time}.sql

        # missing step: drop all tables from development database so it can be re-populated

        # 2. load the backup into the development database
        psql -U development_db_name < backup/dir/backup-${time}.sql

    Many thanks in advance! Martin

    Read the article
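
    For the Postgres question above, a sketch of the missing "clear the development database" step. It assumes the development role is allowed to drop and re-create its own database or schema; the names are the placeholders from the script:

        # Option A: drop and re-create the development database entirely
        dropdb   -U development_db_name development_db_name
        createdb -U development_db_name development_db_name

        # Option B: keep the database, but wipe everything in the public schema
        psql -U development_db_name -c 'DROP SCHEMA public CASCADE; CREATE SCHEMA public;'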

  • Restoring Windows 2008 Server X86 and X64

    - by rihatum
    Restoring Windows 2008 Server (Domain Controller): we are using Backup Exec System Recovery 2010 to image our DC. This software has a feature to convert the backup into a VMware or Hyper-V VM. I have also used disk2vhd to convert one of our DCs to a VHD, and when I connected it to Hyper-V it booted fine and I can log in - BUT :-) as soon as I log in, I get the activation error: change product key, this product key isn't good for this machine, etc.

    The question is: in a real recovery situation, what would be the procedure to restore it, either virtual or onto a physical box, and still be able to log in, change the product key, etc.? In this scenario it's just locked down and I can't do anything. If this is the case, how would I replicate my production environment via these tools? Any ideas? I will be grateful for some real-world examples here.

    The same thing happens with our Exchange backup / test restore, whether physical or virtual: I can log in but nothing else. We don't have the keys, as they are OEM keys, and I'm just wondering what will happen in a real scenario - would we be purchasing another key or using the OEM key on our new server? This is a test environment I am trying to create by restoring our backups either into Hyper-V or onto physical test machines.

    Also, if I build up a machine (Server 2008) in a VM (Hyper-V), how can I restore just the system state backup of my DC into it? Will that give me the activation error too, even though I would use the trial ISOs provided by Microsoft? Kind regards

    Read the article

  • Update a bootable OS X drive clone with rsync?

    - by Joe
    The question: is it possible to keep a bootable backup drive clone of OS X updated with rsync? If rsync is not a viable option, are there alternatives?

    The setup: my situation is as shown above. One internal Samsung 840 SSD [120 GB] in use as my OS X 10.8 boot disk on a recent-model Mac Mini. I have successfully cloned that drive with Disk Utility to a 125 GB partition of another HDD in an external USB 3 enclosure, and at that point I am able to boot from it.

    The goal: as my last system went out in a fiery blaze, taking much valuable data with it, I have a new respect for a proper backup solution and really want to do this right. My goal is to achieve an automated differential backup/update from disk A to disk B while, most importantly, maintaining bootability on the external drive. I would prefer to do this differentially to minimize stress on the drives, hence rsync was the first thing to come to mind.

    What I have tried: following along with Jamie Zawinski's differential Mac bootable backup solution. Running this manually initially worked - I tested it with only a very minuscule file change and everything was fine, the external booted and all. Now, after subsequent passes, rsync fails, throwing errors particularly relating to updating 'boot.efi' (I'm not at the machine currently; I will update with the precise log message once I return home). Is this a drive/partition size issue? Does rsync require more space? If it can't be done, are there any alternatives? I've heard whispers of dd.

    Read the article
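
    For the bootable-clone question above, a hedged sketch of the kind of rsync pass such scripts run. Two assumptions worth stating: it has to run as root (boot.efi and much of /System are not writable otherwise), and it wants an rsync 3.x build with extended-attribute and ACL support rather than the ancient 2.6.9 Apple ships. The flags and exclude file are illustrative, not the exact script the question links to:

        # One-filesystem, metadata-preserving clone of / onto the external volume:
        sudo rsync -aHAXx --delete \
            --exclude-from=/path/to/clone-excludes.txt \
            / /Volumes/CloneBackup/
        # clone-excludes.txt would list things like /dev, /Volumes, /private/var/vm, ...

    A partition only 5 GB larger than the source leaves little headroom, but it is worth ruling out the permissions side (running the pass with sudo and a capable rsync) before blaming space.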

  • Can SQL Server (2008) transaction logs handle the database being dropped and re-created?

    - by Ben
    We're trying to restore a database (created programmatically by running a hand-crafted SQL script we have). Our backup routine is to create a full backup of every database on the SQL Server 2008 instance on a Saturday, then automatic transaction logs (I assume these are created automatically anyway - we appear to have lots of log files, possibly one per transaction after the full backup was taken?).

    On Tuesday this week the database in question was dropped, and another one with the exact same name and schema was created. SQL Server has continued to create transaction log files, but it hasn't had a chance to create a new full backup (that won't happen until next Saturday). Now, as it turns out, we need to restore the database to how it was on Thursday. This is after the "drop and re-create".

    My question is, is this possible? If it isn't, what exactly does SQL Server think that it's writing to those transaction logs created since the drop and re-create? (I understood they were kind of files containing a binary delta, which makes me think maybe we can restore from them?) I'm no DBA, but then neither is our IT department, so I'm doing the best I can to resolve this. Any advice much appreciated!

    Read the article

  • Block-level deduplicating filesystem

    - by James Haigh
    I'm looking for a deduplicating copy-on-write filesystem solution for general user data such as /home and backups of it. It should use online/inline/synchronous deduplication at the block level using secure hashing (for negligible chance of collisions) such as SHA256 or TTH. Duplicate blocks need not even touch the disk.

    The idea is that I should be able to just copy /home/<user> to an external HDD with the same such filesystem to do a backup. Simple. No messing around with incremental backups where corruption to any of the snapshots will nearly always break all later snapshots, and no need to use a specific tool to delete or 'checkout' a snapshot. Everything should simply be done from the file browser without worry. Can you imagine how easy this would be? I'd never have to think twice about backing up again!

    I don't mind a performance hit; reliability is the main concern. Although, with specific implementations of cp, mv and scp, and a file browser plugin, these operations would be very fast, especially when there is a lot of duplication, as they would only need to transfer the absent blocks. Accidentally using conventional copy tools that do not integrate with the FS would merely take longer, waste some bandwidth when copying remotely and waste some CPU, as the duplicate data would be re-read, re-transferred and re-hashed (although nothing would be re-written), but would absolutely not corrupt anything. (Some filesharing software may also be able to benefit by integrating with the FS.)

    So what's the best way of doing this? I've looked at some options:

    - lessfs - Looks unmaintained. Any good?
    - Opendedup/SDFS (http://www.opendedup.org/) - Java? Could I use this on Android?! What does SDFS (https://en.wikipedia.org/w/index.php?title=SDFS&action=edit&redlink=1) stand for?
    - Btrfs (https://en.wikipedia.org/wiki/Btrfs#Features) - Some patches floating around on mailing list archives, but no real support.
    - ZFS (https://en.wikipedia.org/wiki/ZFS#Linux) - Hopefully they'll one day relicense under a true Free/Opensource GPL-compatible licence.

    Also, 2 years ago I had a go at an attempt in Python using FUSE at the file level, to be used on top of a typical solid FS such as EXT4, but I found FUSE for Python underdocumented and didn't manage to implement all of the system calls.

    Read the article

  • Restore deleted default folders

    - by Helena T.
    I was tinkering with my new laptop and, on purpose, deleted some of the default folders in my "home" directory: "My Music", "Links", "Favorites". I did this because I decided I wanted all my data on another partition, leaving C: only for applications and config files. But now some of the Explorer functionality is gone: I cannot use the Favorites tree in the left-side pane, and I also discovered that "My Documents" stores some PowerShell config files. I feel like I misunderstood these folders' purpose and, by deleting them, provoked some Explorer instability. Is there any way to restore them? I can't seem to find one. Thank you for taking the time to read this.

    Read the article
