Search Results

Search found 9620 results on 385 pages for 'backup profile'.

  • IT lead does not have a backup, DR plan in writing

    - by Alex
    This is a general management question for the IT managers out there. We are a small firm with about 4 servers in our colo cabinet and no full-time IT manager, but we do have one person on a monthly contract, and I am having a terrible time getting him to share what these plans actually are. I am sure he HAS a plan (and it's probably in his head...), but that does us no good if he gets hit by a bus. How would you guys handle this? He is a long-time friend, but I fear this is dangerous for us long term. I have confronted him on several occasions about this, and he tells me not to worry, he has got it covered. Thanks.

    Read the article

  • How to backup or export PowerStrip display profiles?

    - by Sk8erPeter
    I would like to save two of my saved PowerStrip display profiles. Earlier I set a 720x540 resolution and some other settings (frequency, etc.) for another display device usually used in extended mode, which is now NOT connected. But when I go to "Advanced timing options", I see some different settings. I thought I could copy the settings with the copy icon, but that way I would copy the wrong ones, not the predefined ones (with the 720x540 resolution). What is the best method to "export" these settings before formatting the hard drive?

    Read the article

  • OSX 10.9 Time Machine backup to NAS

    - by user214577
    I recently upgraded from 10.6.8 to 10.9. On Snow Leopard I was able to make Time Machine backups over the network to my NAS; I think I had to tweak some settings, but I don't recall what I did. Now that I have upgraded to Mavericks, I cannot back up to my NAS using Time Machine. My question is: what do I have to do to allow Time Machine backups over the network in 10.9? I tried looking for solutions online but did not find anything relating to Mavericks.
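
    One tweak commonly used on earlier OS X releases is re-enabling "unsupported" network volumes in Time Machine and then pointing it at the mounted NAS share with tmutil. A rough sketch, assuming the share is already mounted at /Volumes/TMBackup (the share name is a placeholder):

        # Let Time Machine list network volumes it would otherwise hide.
        defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

        # With the NAS share mounted, set it as the backup destination.
        # /Volumes/TMBackup is a hypothetical mount point.
        sudo tmutil setdestination /Volumes/TMBackup

        # Start a manual backup to verify the destination is accepted.
        tmutil startbackup

    Whether this is enough on 10.9 depends on the NAS: Mavericks is pickier than Snow Leopard and generally wants the share exposed over AFP (for example via a current netatalk) with Time Machine support enabled on the NAS side.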

    Read the article

  • Secretary cannot add appointments to boss's calendar after exchange restore from backup

    - by therulebookman
    The calendar is the boss's calendar on Exchange. I have set permissions for it through his Outlook to give the secretary and a few other people "Editor" access to his calendar. All the editors can view the calendar, but only he can add new appointments. Anyone else who tries to add an appointment gets "The item cannot be saved in this folder. The folder was deleted or moved or you do not have permission." The permissions are correct: Editor. The item hasn't been deleted or moved; it's in his mailbox on Exchange. The message says something about the mailbox size, but he is well under the size limit anyway. He is using Outlook 2003, and I have tried accessing it from 2003 and 2007, but I don't think that is related. I tried clearing the forms cache and enabling disabled items: there were no disabled items, and clearing the cache didn't help. I also tried "Allow all forms", but this apparently doesn't apply in this scenario, as we are not using any custom forms. Is there any way to delete just his calendar so that I can ExMerge it back in (after exporting to PST, of course)? I really can't ExMerge out his whole mailbox, delete it, and ExMerge it back in, because he works all sorts of hours, but if this is the only way, then I'll have to do it. Is there any other possible solution?

    Read the article

  • Recover a backup copy of an Ubuntu Linux installation on a USB stick using dd

    - by user10826
    Hi, I installed Ubuntu 10.04 on a USB stick in persistent install mode, so I could boot my laptop or my desktop computer from the stick. At some point I needed the 8 GB stick for another purpose, so I thought about copying it to my desktop, which I did from Mac OS X with: dd if=/dev/disks3s of=/Users/jack/Desktop/usb_copy Now I am trying to do the opposite, after having used the stick (which was reformatted to NTFS), with just: dd if=/Users/jack/Desktop/usb_copy of=/dev/disks3s but although I can see that almost all of the files are there, I cannot boot from it again. It is also strange that the file permissions look odd, something like _user. What can I do? Thanks
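
    One likely cause is that only a single partition slice was imaged, so the MBR and partition table that make the stick bootable were never saved. A sketch of a whole-device copy on Mac OS X, assuming the stick shows up as /dev/disk3 (check with diskutil list; the device number here is hypothetical):

        # Identify the stick's whole-disk node (e.g. /dev/disk3, not /dev/disk3s1).
        diskutil list

        # Unmount every volume on the stick before the raw copy.
        diskutil unmountDisk /dev/disk3

        # Image the ENTIRE device so the MBR and partition table come along.
        # /dev/rdisk3 is the raw device node and is usually much faster.
        sudo dd if=/dev/rdisk3 of="$HOME/Desktop/usb_copy.img" bs=1m

        # Restoring later is the same copy in reverse, after unmounting again.
        diskutil unmountDisk /dev/disk3
        sudo dd if="$HOME/Desktop/usb_copy.img" of=/dev/rdisk3 bs=1m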

    Read the article

  • SSH + MysqlDump Remote Backup Script

    - by bundini
    I'm trying to issue a remote mysqldump command, redirect stdout to a .dmp file, then tar that up. I'm a bit confused as to how to do the redirection bits over ssh, i.e.: ssh [email protected] mysqldump $dbname -u admin -p > dbdump.dmp && tar cvzf dbdump.tar.gz dbdump.dmp Issues: 1) I'm not providing the password because I want it to prompt me. Will an ssh remote command deal with this? 2) What's the deal with the syntax? Do I want to use quotation marks or don't I? What happens with the redirects and pipes? Do those have to be escaped or formatted in some special fashion?
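
    Quoting is the crux: without quotes the local shell claims the > and && before ssh ever sees them, so the redirect and the tar run locally. A sketch that keeps everything on the remote side and then pulls the result back; the host, user and database names are placeholders:

        #!/usr/bin/env bash
        dbname="mydb"
        remote="admin@db.example.com"

        # The quotes make the REMOTE shell handle the redirect and the &&,
        # so both the dump and the tarball are created on the remote box.
        # -t allocates a terminal so "mysqldump -p" can prompt for the password.
        ssh -t "$remote" \
            "mysqldump -u admin -p $dbname > /tmp/dbdump.dmp && tar czvf /tmp/dbdump.tar.gz -C /tmp dbdump.dmp"

        # Fetch the compressed dump back to the local machine.
        scp "$remote:/tmp/dbdump.tar.gz" .

    Unquoted, the same line would still run mysqldump remotely, but the local shell would grab the "> dbdump.dmp" and "&& tar ..." parts, writing the dump locally and tarring a local file.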

    Read the article

  • Find daily backup sizes in DPM 2007

    - by Paul D'Ambra
    I'm considering setting up a DPM server offsite and synchronising it with my onsite DPM server. How can I determine the size of the changes that would be replicated (based on my historical data) so that I can determine if we have the bandwidth to support this?

    Read the article

  • Make BIND use DHCP DNS as backup

    - by cainmi
    I run BIND locally on my OS X machine, to enable wildcard Apache vhosts, which requires setting the DNS server for all network interfaces to 127.0.0.1. This works great, but means when I am on a network which uses an internal DNS server to route special (i.e. .companyname) URLs to a server on the network, the lookup fails. I tried adding both 127.0.0.1 and the DHCP provided DNS server, but this doesn't work either. Is there a way to make BIND use the DHCP DNS server for requests it cannot resolve locally?
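
    One approach that keeps 127.0.0.1 as the resolver is to let the local BIND forward what it is not authoritative for: a forward zone for the internal domain pointing at the office DNS server, alongside whatever global forwarders are already defined. A rough sketch of the addition to named.conf; the domain name and server address are placeholders, and the address would need updating whenever the DHCP-supplied DNS changes:

        // Send lookups for the company-internal TLD to the office DNS server;
        // everything else keeps resolving as before (local zones first, then
        // any forwarders already listed in the options{} block).
        zone "companyname" {
            type forward;
            forward only;
            forwarders { 10.0.0.53; };
        };

    After editing, reload with "sudo rndc reload" (or restart named) so the new zone takes effect.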

    Read the article

  • Backup data from RAID 1 disk out of its server

    - by Doomsday
    I'm facing what is, in my opinion, a pretty easy problem. I've extracted a working disk from a RAID 1 array and I'm looking to copy only the data (the FS and RAID configuration don't matter) to another location (another FS). My problem is that I'm not able to mount this disk properly on another Linux box. I first looked at the partition table:

        # fdisk -l /dev/sdc
        Disk /dev/sdc: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Device Boot  Start       End         Blocks      Id  System
        /dev/sdc1    63          1249535699  624767818+  fd  Linux raid autodetect
        /dev/sdc2    1249535700  1250017649  240975      fd  Linux raid autodetect
        /dev/sdc3    1250017650  1250258624  120487+     82  Linux swap / Solaris

    I've understood I should use the dmraid tools. Once installed:

        # cat /proc/mdstat
        Personalities :
        md0 : inactive sdc1[1](S)
              624767744 blocks
        unused devices: <none>

    And some other information:

        # mdadm --examine /dev/sdc1
        /dev/sdc1:
          Magic : a92b4efc
          Version : 0.90.00
          UUID : 8f292f54:7e5aef72:7e5ab5fd:b348fd05
          Creation Time : Mon Jun 2 03:39:41 2008
          Raid Level : raid1
          Used Dev Size : 624767744 (595.82 GiB 639.76 GB)
          Array Size : 624767744 (595.82 GiB 639.76 GB)
          Raid Devices : 2
          Total Devices : 2
          Preferred Minor : 0
          Update Time : Tue Feb 7 22:34:59 2012
          State : clean
          Active Devices : 2
          Working Devices : 2
          Failed Devices : 0
          Spare Devices : 0
          Checksum : a505b324 - correct
          Events : 15148
          Number Major Minor RaidDevice State
          this 1 8 1 1 active sync /dev/sda1
          0 0 8 17 0 active sync /dev/sdb1
          1 1 8 1 1 active sync /dev/sda1

    From here, I've tried to mount it, but I'm not comfortable with these tools and how they work:

        # mount /dev/sdc1 /mnt/sdc1
        mount: unknown filesystem type 'linux_raid_member'
        # mount /dev/md0 /mnt/sdc1
        mount: /dev/md0: can't read superblock

    I've seen some options to alter the RAID array with mdadm, but I only want to copy the data off its filesystem before wiping it... Anyone have a clue?
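
    Since sdc1 is a Linux software RAID (md) member, the usual route is to assemble it as a degraded one-disk array and mount the resulting md device read-only, rather than mounting the partition directly. A sketch, assuming the mount point /mnt/sdc1 already exists:

        # Stop the half-assembled, inactive array that auto-detection created.
        sudo mdadm --stop /dev/md0

        # Assemble the single surviving member; --run starts the array even
        # though its mirror partner is missing.
        sudo mdadm --assemble --run /dev/md0 /dev/sdc1

        # Mount the md device read-only and copy the data off it.
        sudo mount -o ro /dev/md0 /mnt/sdc1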

    Read the article

  • Super slow Time Machine backup on my Mac

    - by lowellk
    I just got a new 2TB drive which I'm trying to use as a Time Machine drive for my Mac, which has a 1TB drive. On my first attempt to back it up, I'm getting terrible throughput, not even 1GB per day (it's been running for 36 hours now). I erased the disk and tried to copy a large file to it and got the same slow speed. What can I do to diagnose this? Are there any tools which can inspect the disk and tell me if it's messed up? Thanks!
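
    A couple of quick checks from Terminal can separate a failing disk from a Time Machine problem: verify the volume, then measure raw sequential write speed with dd. A sketch, assuming the new drive is mounted at /Volumes/Backup (a placeholder name):

        # Check the filesystem on the backup volume for errors.
        diskutil verifyVolume /Volumes/Backup

        # Rough sequential write test: ~1 GB of zeros straight to the drive.
        # Tens of MB/s is normal for an external drive; KB/s points at a bad
        # cable/port, a USB 1.1 fallback, or a failing disk.
        dd if=/dev/zero of=/Volumes/Backup/speedtest.bin bs=1m count=1024
        rm /Volumes/Backup/speedtest.bin

        # backupd and disk-driver messages in the system log often show
        # I/O errors or per-file stalls during the backup.
        grep -i backupd /var/log/system.log | tail -n 50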

    Read the article

  • What is the best way to auto failover to backup WAN link for web server

    - by user66735
    Hi, I am looking for the best way to ensure my server (application) remains available to all my users (on web/LAN/WAN) when my primary ISP link fails. My server is behind a firewall on which both my primary and secondary links land. I have already assigned multiple IPs (both ISPs' static IPs) to the 'A' record (host.example.com) in the DNS. However, in a round-robin scenario, is there a way I can ensure that my web users will never see a "cannot display web page" error? What are the better methods to achieve this?

    Read the article

  • What is the easiest way to make a backup of an entire hard disk

    - by Solignis
    Hi there, I got myself a Dell laptop from the local computer store. It's a used machine with Windows Vista Home Basic on it. I want to load Ubuntu Desktop 10.10, though, so I can do Perl development. BUT I want to keep a copy of the entire hard drive, with the Dell utility partition and Windows Vista, in case I want to go back. I was thinking I could image the drive, but I'm not sure what to use; I don't have Ghost or anything. Someone told me about Clonezilla. Would that work for me? Is it hard to use? Also, I want to burn the data to a DVD or something more storable than a hard disk.
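
    Clonezilla can do this and is menu-driven rather than scripted, but the same result can be had with plain dd from any Linux live session (these are not Clonezilla's own commands). A rough sketch, assuming the internal drive is /dev/sda and an external disk is mounted at /mnt/external, both placeholders:

        # Image the whole drive (partition table, Dell utility partition, Vista
        # and all), compressing on the fly so the image is far smaller than the disk.
        sudo dd if=/dev/sda bs=4M | gzip > /mnt/external/dell-vista.img.gz

        # Split the compressed image into 4 GB pieces so each fits on a DVD.
        split -b 4G /mnt/external/dell-vista.img.gz /mnt/external/dell-vista.img.gz.part-

        # Restoring later: join the pieces and write them back to the same drive.
        cat /mnt/external/dell-vista.img.gz.part-* | gunzip | sudo dd of=/dev/sda bs=4M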

    Read the article

  • Backup & Restore Group Policy of Workgroup Window XP

    - by Param
    I have around 20 systems in a workgroup. I have configured a Group Policy along with an Administrative Template on one system. Do you know how to transfer this Group Policy, along with the Administrative Template, to the other systems without re-configuring it manually on each one? I have exported the security settings to an .inf file (as a Security Template), but how do I export the settings related to the Administrative Template?

    Read the article

  • Backup/Multihomed network connection

    - by J_P
    We have a couple of locations that require 24/7 Internet access, and our current provider (AT&T), while mostly good, is not always up. My concern is that if I go with another provider (for example, Comcast), I'm going to be subject to the same downtime if the failure is in the "last mile". I mostly don't know where the failure points are on the ISP side, but I would imagine the large majority are within the last mile. I'd looked at MiFi or a similar solution, but have concerns about bandwidth caps and overall speed. Any suggestions would be appreciated.

    Read the article

  • How to make a backup VPN server?

    - by akalenuk
    I have a small VPN network with a bunch of clients working mostly with each other and a VPN server. Everything works fine, except that, obviously, I can't shut the VPN server down without breaking the network. I have a spare machine which previously worked as the VPN server for the same network, so it is signed with the same CA as the first one and is configured basically the same. Technically I can make my clients work with it with a little adjustment (by setting remote in /etc/openvpn/clientx.conf), but it would be great to make this switch automatic. So basically I want two VPN servers running in the same network to be completely interchangeable, without the clients even knowing. Can I do this with VPN, or should I dig deeper into the physical network layer?
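
    Assuming this is OpenVPN (the clientx.conf path suggests it), the client config already supports this kind of failover: list both servers as remote entries and the client moves down the list when the current one stops responding. A sketch of the relevant lines in clientx.conf, with placeholder hostnames and the default port; both servers must present certificates signed by the same CA for the switch to be transparent:

        # Try the primary server first, then fall back to the spare.
        remote vpn1.example.com 1194
        remote vpn2.example.com 1194
        # Keep retrying DNS and connections instead of giving up, so a client
        # moves to the other server when one goes down.
        resolv-retry infinite
        # Optional: pick a server at random instead of always preferring the
        # first entry, which spreads clients across both machines.
        ;remote-random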

    Read the article

  • Alternative to robocopy /MIR

    - by Robin Day
    We run a number of web apps that store a lot of local data in small xml files. One part of our backup / recovery strategy is to produce a local mirror of the file system via a VPN to the hosting centre. The VPN connection is only via a 12Mbps ADSL and whilst there are a lot of files and directories, the actual number of files that changes is quite small. Although the bandwidth is probably an issue, I'm seeing results such as the output below. The robocopy /MIR took 5 hours to run yet only 30 mins to actually perform the copy. Does anyone have any suggestions as to ways to improve this? The 5 hours is now bordering on too slow and if we can't find a way to speed this up then we're going to have to come up with a completely different solution.

                   Total      Copied     Skipped    Mismatch   FAILED   Extras
        Dirs :     17625      6618       11007      0          0        0
        Files :    1112430    1223       1111207    0          0        0
        Bytes :    57.451 g   192.25 m   57.263 g   0          0        0
        Times :    5:01:23    0:35:55    0:00:00    4:25:27
        Speed :    93509 Bytes/sec.
        Speed :    5.350 MegaBytes/min.
        Ended :    Fri Apr 16 05:54:23 2010
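
    Since the slow part is walking ~1.1 million files rather than moving the 192 MB that actually changed, one commonly suggested alternative (a different tool, not a robocopy flag) is rsync, which exchanges directory metadata in bulk and only ships deltas; on Windows it is available via Cygwin, cwRsync or DeltaCopy. A rough sketch with placeholder host and paths:

        # Mirror the web apps' data to the local backup copy, deleting anything
        # that no longer exists on the source (the rsync analogue of /MIR).
        # -z compresses on the wire, which helps on a 12Mbps ADSL link.
        rsync -avz --delete \
            webhost.example.com:/var/www/appdata/ \
            /backups/appdata-mirror/

    If staying with robocopy, newer versions (Windows 7 / Server 2008 R2 onwards) also add a /MT multithreading switch, which can help a lot with large numbers of small files over a high-latency link.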

    Read the article

  • s3cmd fails too many times

    - by alfish
    It used to be my favorite backup transport agent, but now I frequently get this result from s3cmd on the very same Ubuntu server/network:

        root@server:/home/backups# s3cmd put bkup.tgz s3://mybucket/
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        36864 of 2711541519 0% in 1s 20.95 kB/s failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.00)
        WARNING: Waiting 3 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        36864 of 2711541519 0% in 1s 23.96 kB/s failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.01)
        WARNING: Waiting 6 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        28672 of 2711541519 0% in 1s 18.71 kB/s failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.05)
        WARNING: Waiting 9 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        28672 of 2711541519 0% in 1s 18.86 kB/s failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.25)
        WARNING: Waiting 12 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        28672 of 2711541519 0% in 1s 15.79 kB/s failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=1.25)
        WARNING: Waiting 15 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        12288 of 2711541519 0% in 2s 4.78 kB/s failed
        ERROR: Upload of 'bkup.tgz' failed too many times. Skipping that file.

    This happens even for files as small as 100MB, so I suppose it's not a size issue. It also happens when I use put with the --acl-private flag (s3cmd version 1.0.1). I'd appreciate it if you could suggest some solution or a lightweight alternative to s3cmd. Thanks
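
    s3cmd 1.0.x has no multipart upload support, so a lightweight workaround is to split the archive into smaller pieces and upload those: each piece is less likely to hit a broken pipe, and a failure only costs one piece rather than the whole file. A sketch, with the bucket name a placeholder:

        # Split the backup into 200 MB pieces (bkup.tgz.part-aa, -ab, ...).
        split -b 200M bkup.tgz bkup.tgz.part-

        # Upload each piece, retrying a few times if a transfer dies mid-way.
        for part in bkup.tgz.part-*; do
            for attempt in 1 2 3; do
                s3cmd put "$part" s3://mybucket/ && break
            done
        done

        # To restore: download the pieces and reassemble.
        # cat bkup.tgz.part-* > bkup.tgz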

    Read the article

  • Chunking large rsync transfers?

    - by Gabe Martin-Dempesy
    We use rsync to update a mirror of our primary file server to an off-site colocated backup server. One of the issues we currently have is that our file server has 1TB of mostly smaller files (in the 10-100kb range), and when we're transferring this much data, we often end up with the connection being dropped several hours into the transfer. Rsync doesn't have a resume/retry feature that simply reconnects to the server to pick up where it left off -- you need to go through the file comparison process, which ends up being very lengthy with the number of files we have. The solution that's recommended to get around this is to split up your large rsync transfer into a series of smaller transfers. I've figured the best way to do this is by the first letter of the top-level directory names, which doesn't give us a perfectly even distribution, but is good enough. I'd like to confirm whether my methodology for doing this is sane, or if there's a simpler way to accomplish the goal. To do this, I iterate through A-Z, a-z, 0-9 to pick a one-character $prefix. Initially I was thinking of just running:

        rsync -av --delete --delete-excluded --exclude "*.mp3" "src/$prefix*" dest/

    (--exclude "*.mp3" is just an example, as we have a more lengthy exclude list for removing things like temporary files.) The problem with this is that any top-level directories in dest/ that are no longer present on src will not get picked up by --delete. To get around this, I'm instead trying the following:

        rsync \
          --filter 'S /$prefix*' \
          --filter 'R /$prefix*' \
          --filter 'H /*' \
          --filter 'P /*' \
          -av --delete --delete-excluded --exclude "*.mp3" src/ dest/

    I'm using show and hide over include and exclude, because otherwise --delete-excluded would delete anything that doesn't match $prefix. Is this the most effective way of splitting the rsync into smaller chunks? Is there a more effective tool, or a flag that I've missed, that might make this simpler?
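
    The per-prefix approach can be wrapped in a simple retry loop; a sketch under the same assumptions as above (src/, dest/ and the exclude list are placeholders), using double quotes so $prefix actually expands inside the filter rules:

        #!/usr/bin/env bash
        # Chunked mirror: one rsync pass per leading character of the
        # top-level directory names.
        for prefix in {A..Z} {a..z} {0..9}; do
            # Retry each chunk a few times so a dropped connection only costs
            # the comparison work for this prefix, not for the whole tree.
            for attempt in 1 2 3; do
                rsync \
                    --filter "S /${prefix}*" \
                    --filter "R /${prefix}*" \
                    --filter "H /*" \
                    --filter "P /*" \
                    -av --delete --delete-excluded --exclude "*.mp3" \
                    src/ dest/ && break
            done
        done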

    Read the article

  • How do I add additional parameters to query string of a Firefox Search Plugin?

    - by Goto10
    I have just installed the DuckDuckGo add-on in Firefox 11.0, running on XP SP 3. I would like to add additional parameters to the query string. However, any changes I make are not reflected in the query string when doing a search. I found the duckduckgo.xml file at C:\Documents and Settings\User Name\Application Data\Mozilla\Firefox\Profiles\Profile Name.default\searchplugins. I opened it up with Notepad++ and added the line for kl=uk-en:

        <SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/" xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
          <os:ShortName>DuckDuckGo</os:ShortName>
          <os:Description>Search DuckDuckGo (SSL)</os:Description>
          <os:InputEncoding>UTF-8</os:InputEncoding>
          <os:Image width="16" height="16">data:image/x-icon;base64, -Removed to shorten-</os:Image>
          <os:Url type="text/html" method="GET" template="https://duckduckgo.com/">
            <os:Param name="q" value="{searchTerms}"/>
            <os:Param name="kl" value="uk-en"/>
          </os:Url>
        </SearchPlugin>

    However, the kl=uk-en parameter does not appear in the query string when searching (despite several Firefox restarts).

    Read the article
