Search Results

Search found 14013 results on 561 pages for 'remote backup'.

Page 90/561 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • PostgreSQL continuous archiving not running archive_command

    - by Whatsit
    I've been trying to set up continuous archiving for a simple, test PostgreSQL 9.0 database, as per the documentation. In postgresql.conf I've set:

        wal_level = archive
        archive_mode = on
        archive_command = 'touch /home/myusername/backup/testtouch'
        archive_timeout = 30s

    ...and restarted PostgreSQL. The file listed by touch never appears. I can manually run the touch command and it works as expected. If I try to create a backup, it waits forever for the archive_command. In psql:

        postgres=# SELECT pg_start_backup('touchtest');
         pg_start_backup
        -----------------
         0/14000020
        (1 row)

        postgres=# SELECT pg_stop_backup();
        NOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to be archived
        WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)
        HINT: Check that your archive_command is executing properly. pg_stop_backup can be cancelled safely, but the database backup will not be usable without all the WAL segments.

    What would cause this? How can I troubleshoot it? Additional info: Running on CentOS 5.4. PostgreSQL 9.0.2 installed as root.
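    A first check worth making (a troubleshooting sketch, not a confirmed diagnosis): the archive_command runs as the postgres OS user, which may not be able to write into another user's home directory, and SELinux on CentOS can also block it silently. Testing the exact command as that user, and tailing the server log, usually exposes the failure:

        # run the archive command as the server's OS user (path as in the question)
        sudo -u postgres touch /home/myusername/backup/testtouch

        # watch the server log for archiver errors (log location varies by install)
        tail -f /var/lib/pgsql/data/pg_log/postgresql-*.log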

    Read the article

  • Why should one have a secondary DNS server?

    - by Sam Levin
    I'm very confused. I basically understand how DNS works, so here's an example that illustrates what I'm having trouble understanding. Right now, I run a small web server. I use my provider's DNS manager, so I don't have a DNS server hosted on the machine. Let's say for a second that I don't use my host's DNS, and I decide to set up a DNS server on my server instead. Hypothetical scenario: my (entire) server goes down - DNS included. Why do I need backup DNS? If the server is down, who cares if the DNS server is down too? Even if I had DNS up (on a machine other than the crashed server), it couldn't usefully forward requests anyway, since the server would be down. Is the point of having secondary DNS to be able to change the IP addresses that your DNS server points to, so if your web server was down, you could redirect traffic to a backup? How would you switch to the secondary provider in the event that your main DNS provider becomes unavailable? Is a backup DNS system basically up all the time? How is it configured? Is it just an exact clone of the DNS server you would have on your server? Do they run simultaneously? Hopefully someone can see what I'm hung up on and provide some guidance. Thanks
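    For context on how resolvers see redundancy (a hedged illustration; the domain and server names below are hypothetical): a zone normally publishes at least two NS records, and clients automatically retry the next listed server when one fails, which is the core argument for a secondary:

        dig +short NS example.com
        ns1.example-dns.com.
        ns2.example-dns.com.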

    Read the article

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however it's acting pretty strange with a few files (note company name has been changed in paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL
        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Company name data\ but I altered the script a bit to test the problem. The following files in the source are not copied to the dest:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named:

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't open when I ran Robocopy, so it's not a locking issue. Robocopy is running as administrator, so it's not a permissions issue. There's no trace that these files were even attempted, as no errors appear in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.
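    One way to get more visibility (a diagnostic sketch, not a confirmed fix): the script's /NFL and /NDL options suppress exactly the per-file detail that would show why those files are skipped. A one-off run with verbose logging should list every file and the decision Robocopy made for it:

        ROBOCOPY %source_dir% %dest_dir% /COPY:DAT /MIR /V /FP /TS /LOG:diag.log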

    Read the article

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and already have a 2008 model. I wonder how I can move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems to be a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade this afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs? My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the HDD, this should be the fastest way. This also has the benefit that the 2008 MacBook then contains a brand-new installation and I don't have to remove all my stuff or reinstall Mac OS before I give it away. My question on that second idea would be: does Snow Leopard handle this stuff correctly? I reboot with the new hardware and all just works fine? So in a nutshell: what would you do, restore from backup or swap drives? And what about the new software?

    Read the article

  • Solaris 10: How to image a machine?

    - by nonot1
    I've got a Solaris 10 workstation that I'd like to create a full image backup of. The machine has 2 drives: one UFS for the system root, and 1 ZFS for data storage. I intend to add a third HD to keep the backup images of both primary drives (including any ZFS snapshots). The purpose is not disaster recovery, but rather to allow me to easily blow away a series of application installation/configuration changes I intend to try. What's the best way to do this? I'm not too familiar with Solaris, but have some basic Linux knowledge. I looked at CloneZilla, but it does not support Solaris. I'm OK with just a dd | gzip > image style solution, but I'd need some way to first zero out the non-used blocks on the primary drives to aid gzip. They are much larger than my 3rd drive, but hardly hold any real data. Update to clarify: I specifically want to avoid using any file-system snapshot functionality, because part of the app configuration changes involve/depend slightly on existing and new snapshots. Ideally the full collection of snapshots should be part of the backup. Virtualization is not an option, because the goal is to do performance evaluation on a very specific HW configuration. For the same reason, spurious "backup" snapshots could skew performance data. Thank you.
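    A rough sketch of the dd-based approach (device names hypothetical; s2 is conventionally the whole-disk slice on Solaris, and zero-filling helps most on the UFS disk - ZFS may not benefit the same way): zero the free space first so gzip can collapse it, then image the raw device to the third drive:

        # fill unused blocks with zeros, then remove the filler file
        dd if=/dev/zero of=/data/zerofill bs=1024k; rm /data/zerofill

        # image the whole disk through gzip onto the backup drive
        dd if=/dev/rdsk/c0t0d0s2 bs=1024k | gzip -1 > /backup/datadisk.img.gz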

    Read the article

  • Backing up large network (~200 clients) -- Enough Bandwidth?

    - by mtkoan
    My company wants to institute a backup plan for all of the clients on our network, which is about 200 machines. We back up our servers and SQL databases regularly, but it's been our policy not to back up individuals. What is most critical for people is their Documents and PST files in Outlook. PST files can be very large, and most people's are around 1-1.5 GB here. So with PST files alone that is 200-300 GB of data needing to be transferred daily to a server for backup. We could compress first, then transfer, but many of the machines are VERY old and such a task would grind them to a halt. Isn't this the reason networks use things like VMware - to reduce network traffic and streamline backups? Or is this only to reduce hardware costs? Would this much network traffic every day drastically slow down our network? Enough that we'd have to mandate it be done at night only? Or could we stagger them throughout the day? I'd really appreciate any input, thank you.
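    A rough back-of-envelope for the transfer window (assuming roughly 60% of nominal link speed is achievable in practice):

        300 GB/day ≈ 300,000 MB
        100 Mbit/s link ≈ 7.5 MB/s usable  →  300,000 / 7.5 ≈ 40,000 s ≈ 11 h
          1 Gbit/s link ≈  75 MB/s usable  →  300,000 / 75  ≈  4,000 s ≈ 1.1 h

    On that estimate, a shared 100 Mbit network would indeed be saturated for most of a working day, while staggering the transfers or moving to a gigabit backbone brings the daily window down to something manageable.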

    Read the article

  • Should I keep my ex-employer's data?

    - by Jurily
    Following my brief reign as System Monkey, I am now faced with a dilemma: I did successfully create a backup and a test VM, both on my laptop, as no computer at work had enough free disk space. I haven't deleted the backup yet, as it's still the only one of its kind in the company's history. The original is running on a hard drive in continuous use since 2006. There is now only one person left at the company who knows what a backup is, and they're unlikely to hire someone else, for reasons very closely related to my departure. Last time I tried to talk to them about the importance of backups, they thought I was threatening them. Should I keep it?

    Pros:
      - I get to save people from their own stupidity (the unofficial sysadmin motto, as far as I know)
      - I get to say "I told you so" when they come begging for help, and feel good about it
      - I get to say nice things about myself on my next job interview
      - Nice clean conscience
      - Bonus rep with the appropriate deities

    Cons:
      - Legal problems: even if I do help them out with it, they might just sue me for keeping it anyway, although given the circumstances I think I have a good case
      - Legal problems: given the nature of the job and their security, if something leaks, I'm a likely target for retaliation
      - Legal problems: whatever else I didn't think about
      - I need more space for porn. Legal problems.

    What would you do?

    Read the article

  • Problems with USB-Devices using VDR

    - by emmsinator
    Hey guys, I'm using VDR on vSphere 4, and it works successfully. I've already backed up several VMs with VDR and I like it very much. But now we have a problem. We have 2 VMs using a USB device server with a stick plugged in, which is definitely needed by these 2 VMs for licensing and so on. Every time I start the backup process, the VMs lose the connection to the USB server and its stick after the snapshot is built, while still online. Because of that, the software on these VMs can't work correctly, and I have to restart both machines to solve the problem. That is bad for an automatic backup. Does VDR have a special function for such cases, or is something like this already known? It would be no problem to shut down the servers for building snapshots on Saturday or Sunday. Can VDR initiate a shutdown before starting the backup process? Otherwise I must try to use scripts, but that wouldn't be so nice. Thanks a lot for your help.

    Read the article

  • VMWare Server modifying files related to paused VMs, is this expected?

    - by David Spillett
    While refreshing the backup of a VM used for testing, I got the following warning from tar:

        tar: /VMsR0/cli_noddyco_test/VM2K8_32_web.vmem: file changed as we read it

    The VMs in question were paused at the time. My first thought was that I'd mixed up the machines and was trying to back up something that was still actively running. To be sure, I unpaused and properly shut down the VM, and the vmem files that tar reported as changing vanished, as I would expect. Is it normal for VMware Server to touch or alter files for paused VMs like this, or is there likely something amiss with our setup? If this is expected behaviour, is it just touching the vmem file (altering the last modification date without actually changing content)? If it is normal for files relating to paused VMs to be updated, I shall have to revise our backup procedures to make sure the VMs are fully shut down rather than just paused (this isn't a problem, but it seems strange and I'd prefer to understand what VMware is doing and why, instead of just dismissing it as "one of those things" and working around it). For further detail: the host in question is VMware Server version 2.0.2 running on 64-bit Debian/Lenny, and that VM did not have any snapshots at the time. We have backed up paused VMs this way in the past with no such warnings from tar.
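    One way to tell a metadata touch from a content change (a diagnostic sketch using standard tools; path taken from the warning above): checksum the file twice while the VM stays paused. If the hash is stable but the mtime moves, only the metadata is being updated:

        md5sum /VMsR0/cli_noddyco_test/VM2K8_32_web.vmem
        stat /VMsR0/cli_noddyco_test/VM2K8_32_web.vmem   # shows mtime alongside size
        sleep 60
        md5sum /VMsR0/cli_noddyco_test/VM2K8_32_web.vmem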

    Read the article

  • How to (properly) back up a live QEMU/KVM VM?

    - by Roman
    I'm currently engineering a backup solution for KVM VMs as an additional measure to traditional backups. Unfortunately, all currently (August 2013) existing solutions I have come across so far either do not ensure a consistent backup of the VM (losing RAM state, creating a dirty image, or other things), or require lengthy downtime (complete VM shutdown while backing up). I'm aware of QEMU/libvirt's functionality for taking snapshots; however, it's not yet usable, since image-internal snapshots present you with an ever-changing image file, resulting in a likely dirty backup (assuming one uses qcow2 images at all), and one cannot yet merge a currently active external snapshot into the original backing image ("blockcommit"). For those reasons, I'm now implementing a script that:

    1. Saves the VM's state and halts it
    2. Sets up devicemapper snapshot(s) where the VM's disk images and state reside
    3. Resumes the VM
    4. Mounts the snapshot(s) of step 2
    5. Backs up the VM's disk and state (configuration for convenience)
    6. Merges back the snapshot(s)

    If I got everything right, this will take consistent backups of VMs with only seconds (if at all, since steps 1-3 are fast, possibly sub-second) of downtime. Of course, when restoring, the VM will be way in the past, but at least it gives me the option of an orderly shutdown/reboot. Am I missing something with this solution? Or has someone indeed already implemented this?
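    A minimal sketch of steps 1-6, assuming libvirt manages the guest and its images sit on an LVM volume (guest name, volume group, and paths are all hypothetical):

        #!/bin/sh
        # sketch of the flow described above, not a production script
        virsh save guest1 /vmstate/guest1.state            # 1: save RAM state, halt guest
        lvcreate -s -n vmdata_snap -L 10G /dev/vg0/vmdata  # 2: device-mapper snapshot
        virsh restore /vmstate/guest1.state                # 3: resume the guest
        mount -o ro /dev/vg0/vmdata_snap /mnt/snap         # 4: mount the snapshot
        rsync -a /mnt/snap/ /backup/guest1/                # 5: back up disk images + state
        umount /mnt/snap
        lvremove -f /dev/vg0/vmdata_snap                   # 6: drop the snapshot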

    Read the article

  • Unnamed, hidden partitions on my 500 GB HD, HP Pavilion dm4 Laptop

    - by emotionull
    I have multiple doubts here. It's a Seagate 500 GB 7200 RPM HD that I installed a few months back after my original laptop HD stopped working. The current drives on my laptop are as shown by Windows Disk Management (see image). After installing the new HD, I did a complete clean install of Windows 7 and didn't create any partition myself, manually. So there are 4 drives. Even previously, before I installed this new HD, my laptop had 4 partitions, but there were no un-named partitions like the two in this case; the other two were HP Tools and Recovery or something - it was pre-configured, factory-installed Windows. Also, now when I right-click on the unnamed drives in Disk Management, all the options are greyed out (see image) except "Delete Partition". So how do I know what's inside those partitions? Will it be OK if I delete them? I want to install Ubuntu and dual-boot it with my current Windows installation. I cannot do that in the current setup, as there are already 4 partitions on my HD, and if I try to make a new partition, it will be a logical one (correct me if I am wrong here). So can I delete the un-named, hidden partitions and use them for Ubuntu? A somewhat unrelated question: as a backup option, can I use Windows 7's Backup and Restore facility to keep a complete backup of all the drivers and system software?

    Read the article

  • Why is piping dd through gzip so much faster than a direct copy?

    - by Foo Bar
    I wanted to back up a path from a computer in my network to another computer in the same network over a 100 MBit/s line. For this I did

        dd if=/local/path of=/remote/path/in/local/network/backup.img

    which gave me a very low network transfer speed of about 50 to 100 kB/s, which would have taken forever. So I stopped it and decided to try gzipping it on the fly to make it much smaller, so that the amount to transfer is less. So I did

        dd if=/local/folder | gzip > /remote/path/in/local/network/backup.img.gz

    But now I get something like 1 MB/s network transfer speed, so a factor of 10 to 20 faster. After noticing this, I tested it on several paths and files and it was always the same. Why does piping dd through gzip also increase the transfer rate by a large factor, instead of only reducing the byte length of the stream by a large factor? I'd have expected even a small decrease in transfer rate instead, due to the higher CPU consumption while compressing, but now I get a double plus. Not that I'm not happy, but I'm just wondering. ;)
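    A plausible factor worth testing (an assumption, not a verified diagnosis for this setup): without a bs= option, dd copies in 512-byte blocks, and over a network filesystem each tiny write can become its own round trip; the pipe through gzip effectively batches the stream into larger writes. A larger block size often recovers the speed even without compression:

        dd if=/local/path bs=1M of=/remote/path/in/local/network/backup.img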

    Read the article

  • SSAS Multithreaded sync with Windows 2008 R2

    - by ACALVETT
    We have been happily running some of our systems on Windows 2003 and have had an upgrade to W2K8 R2 on the list for quite some time. The upgrade has now completed and we can start taking advantage of some of the new features, which is the reason for this post. For a long time we have used the sample Robocopy script from the SQLCat team to synchronize some of our larger SSAS databases. If you're wondering what I mean by large: around 5 TB with a good few thousand partitions. The script works like a dream...(read more)

    Read the article

  • How can I keep a folder synchronized to an external USB hard drive in Ubuntu?

    - by Cesar
    I have a growing music collection which I manually keep in sync with an external USB drive. Sometimes I edit the ID3 tags, or add or delete a file on either the hard drive or the USB drive, and I would like to keep those changes synchronized between both. Does Ubuntu have something available that would help me with this scenario? Preferably something easy to use, with a UI. Update: To clarify my question, changes may happen on both the local hard drive and the USB drive, so the sync process must work in both directions.
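    One commonly suggested option for true two-way synchronization (the mount point below is hypothetical) is Unison, which also ships with a graphical front end in the unison-gtk package:

        unison ~/Music /media/usb/Music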

    Read the article

  • Review the New Migration Guide to SQL Server 2012 Always On

    - by KKline
    I had the pleasure of meeting Mr. Cephas Lin, of Microsoft, last year at the SQL Saturday in Indianapolis and then later at the PASS Summit in the fall. Cephas has been writing content for SQL Server 2012 Always On, and has recently published his first whitepaper, a migration guide to SQL Server AlwaysOn. Read it and then pass along any feedback: HERE. Enjoy, -Kev - Follow me on Twitter!...(read more)

    Read the article

  • Proper way to remove an active / inactive LVM snapshot

    - by user2622247
    I have created a sample Ruby script for removing extra LVM snapshots from the system. For removing an LVM snapshot, we use the lvremove command. This command works fine and we can remove snapshots from the system:

        # sudo lvremove /dev/ops/dbbackup
        lvremove -- do you really want to remove "/dev/ops/dbbackup"? [y/n]: y

    Sometimes, while removing snapshots, we get the following errors:

        Unable to deactivate open rootfs_12.10_20140812_00-cow (252:8)
        Failed to resume rootfs_12.10_20140812_00.
        libdevmapper exiting with 7 device(s) still suspended.

    The system then freezes: we cannot run any command or perform any action on it. After restarting the system it functions fine, and we can perform all operations - we can even delete that snapshot. I searched and found these threads: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=659762 and https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=674682. The solution in those threads applies after the error has already occurred, but I want to avoid this type of error in the first place. My question: is there a better way of removing LVM snapshots, so that we can avoid this error? If anyone needs more info, feel free to ask me.
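    One sequence sometimes suggested for stubborn snapshots (a sketch; whether it sidesteps the bug in those Debian reports is an assumption): deactivate the snapshot volume first, then remove it non-interactively:

        lvchange -an /dev/ops/dbbackup   # deactivate the snapshot volume
        lvremove -f /dev/ops/dbbackup    # then remove it without the prompt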

    Read the article

  • Error 1130 connecting to MySQL on Ubuntu Server 12.04

    - by maGz
    I hope this is the right place for this... I am currently running Ubuntu Server 12.04 through VirtualBox on a Windows 7 host. I am trying to connect to the VM's MySQL engine using MyDB Studio for MySQL, and when I enter my MySQL login credentials, it gives me the following error back:

        Error 1130: Host '192.168.56.1' is not allowed to connect to this MySQL server

    I am running the VM with Adapter 1 enabled for NAT and Adapter 2 enabled for Host-only Adapter: eth0 is 10.0.2.15 and eth1 is 192.168.56.21. I can connect to Apache at 192.168.56.21, and through phpMyAdmin everything works as it should. I did edit the /etc/mysql/my.cnf file and commented out the line bind-address = 127.0.0.1 by adding a # in front of it - I thought that this should have allowed remote connections. Any ideas on how I can solve this? What could be wrong? EDIT: I am trying to connect as 'root'. EDIT: SOLVED!!
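    For reference, error 1130 usually comes from MySQL's grant tables rather than bind-address: the connecting host must be allowed for that account. A typical fix, run locally on the server (password hypothetical; granting remote root access is a security trade-off worth weighing):

        mysql -u root -p -e "GRANT ALL ON *.* TO 'root'@'192.168.56.%' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"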

    Read the article

  • How to make a disk image and restore from it later?

    - by Torben Gundtofte-Bruun
    I'm a new Linux user. I've reinstalled my Wubi from scratch at least ten times in the last few weeks, because while getting the system up and running (drivers, resolution, etc.) I've broken something (X, grub, unknowns) and I can't get it back to work. Especially for a newbie like me, it's easier (and much faster) to just reinstall the whole shebang than to troubleshoot several layers of failed "fixing" attempts. Coming from Windows, I expect that there is some "disk image" utility that I can run to make a snapshot of my Linux install (and of the boot partition!!) before I meddle with stuff. Then, after I've foobar'ed my machine, I would somehow restore my machine back to that working snapshot. What's the Linux equivalent of Windows disk imagers like Acronis True Image or Norton Ghost? Note: I found a similar question here.
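    A bare-bones approach along these lines (a sketch; the device name is hypothetical, and the disk must not be mounted while imaging, e.g. boot from live media first):

        # create a compressed image of the whole disk
        dd if=/dev/sda bs=1M | gzip > /media/usb/disk.img.gz

        # restore it later
        gunzip -c /media/usb/disk.img.gz | dd of=/dev/sda bs=1M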

    Read the article

  • Disconnect have no effect using rdesktop

    - by Hongxu Chen
    I'm using rdesktop on my laptop to remote into my PC in the lab, which runs Windows 7. Everything went well until I recently upgraded Lubuntu on the laptop (or maybe it has nothing to do with the upgrade at all; I don't know). rdesktop fails to disconnect when I disconnect from the Start menu of Windows. That doesn't mean I cannot return to Linux - I do get back to Lubuntu successfully, and the terminal reports that I have disconnected. However, when I try to log back into Windows on the lab PC (via rdesktop) after rebooting my laptop, it fails. When I then go to the PC in the lab, the screen message tells me that it is still connected to my Lubuntu machine. So what's the problem? Has anyone had a similar experience?

    Read the article

  • input / output error, drives randomly refusing to read / write

    - by ILMV
    I have an issue with one of our servers running Ubuntu 10.04; it runs BackupPC and collects backups from various machines/servers around the building. On the 8th minute (12:08, 12:18, 12:28, etc.) the backups are transferred to an external hard drive; we have three and rotate one drive for another every day. The problem is that we randomly experience input/output errors. When this happens you cannot read from or write to the drive; it hasn't unmounted, so I can cd to the mount point /media/backup1. The drives are not faulty, as it's happening on all of them, so I'm at a loss as to what the problem could be. Here is an example of the many errors we get:

        gzip: stdout: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1083_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1088_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1089_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 39: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_592_tech3.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error

    EDIT » Resolved: It turns out Quamis was right; even though I didn't think it was possible, it was actually a problem with the drives. We have three drives, all formatted as ext2, and on two of them we were getting I/O errors frequently. I came back to Quamis' answer and discovered the fsck command, so I ran it against the problem drives:

        fsck /dev/sdb1

    This found and fixed a load of problems on each drive, most probably caused by power outages or unsafe removal of the drives. As the drives are in ext2 format they aren't journalled, and thus aren't protected against such issues. The drives are now working beautifully. Thanks all! :D
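    A possible follow-up to reduce repeats (an assumption based on the ext2 diagnosis above, not something from the original thread): adding a journal converts ext2 to ext3 in place, making the drives far more tolerant of power loss and unclean unplugs:

        tune2fs -j /dev/sdb1   # add a journal; the filesystem becomes ext3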

    Read the article

  • How to keep "dot files" under version control?

    - by andrewsomething
    Etckeeper is a great tool for keeping track of changes to your configuration files in /etc. A few key things about it really stand out: it can be used with a wide variety of VCSs (git, mercurial, darcs, or bzr), it auto-commits daily and whenever you install, remove or upgrade a package, and it keeps track of file permissions and user/group ownership metadata. I would also like to keep the "dot files" in my home directory under version control as well, preferably with bazaar. Does anyone know if a tool like etckeeper exists for this purpose? Worst case, I imagine that a simple cron job running bzr add && bzr ci once or twice a day would do, along with adding ~/Documents, ~/Music, etc. to the .bzrignore (see the sketch below). Is anyone already doing something similar with a script? While I'd prefer bazaar, other options might be interesting.
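    A minimal sketch of that worst-case idea (the crontab entry and commit message are hypothetical; bzr must already be initialized in $HOME, and the || true absorbs the non-zero exit when nothing changed):

        # crontab entry: quiet snapshot commit every day at 13:00
        0 13 * * *  cd "$HOME" && bzr add -q && bzr commit -q -m "daily snapshot" || true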

    Read the article

  • Switch encoding of terminal with a command

    - by Tomas Lycken
    One of the servers I quite often ssh to uses Western encoding instead of UTF-8 (and there's no way I can change that). I've started writing a bash script to connect to this server, so I won't have to type out the entire address every time, but I would like to improve this script so it also changes the encoding of the terminal window correctly. The change I need can be performed with the mouse by navigating to "Terminal" - "Set Character Encoding..." - "Western (ISO-8859-1)". Is there a terminal command that does the same thing, for the current terminal window/screen? To clarify: I'm not interested in ways of switching the locale of the system on the remote side - that system is administered by someone else, and I have no idea what might depend on the Latin-1 encoding there. What I want is to have this terminal window on my side switch character encoding to the one mentioned above, in the same way I can with the mouse and the menus.
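    One command-line route worth trying (assuming luit, which ships with X11, is installed; host name hypothetical): rather than re-configuring the terminal, let luit translate between your UTF-8 terminal and the remote Latin-1 session:

        luit -encoding ISO-8859-1 ssh user@host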

    Read the article

  • Unable to mount external hard drive

    - by arranjamesroche
    Basically, my 12.10 update crashed halfway through, so I've had to start again after putting all my data onto an external HDD. It was all going fine until this came up when I tried to restore my info off the HDD:

        Error mounting /dev/sdb1 at /media/amy/CA47-8339: Command-line
        `mount -t "vfat" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,shortname=mixed,dmask=0077,utf8=1,showexec,flush" "/dev/sdb1" "/media/amy/CA47-8339"'
        exited with non-zero exit status 32: mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    Now I'm stuck, completely clueless as to how I'll get anything off the hard drive.
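    Two first steps that often narrow this down (device name taken from the error message above; if the data really matters, image the raw device before repairing): check what the kernel logged at mount time, then let dosfstools attempt a repair of the FAT filesystem:

        dmesg | tail                  # kernel's reason for the mount failure
        sudo fsck.vfat -a /dev/sdb1   # check/repair the FAT volume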

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >