Search Results

Search found 2822 results on 113 pages for 'scheduled backups'.


  • VPN/AFP server for centralized TimeMachine backups

    - by Keith Johnson
    I am a sysadmin for a small group of about 7 people who prefer Apple machines for their work. These machines are currently either a) not backed up at all, or b) backed up using Retrospect (which I'm not very fond of). I don't really have the budget for anything fancy, and I'd like to keep it as user-friendly as possible. Ideally I am thinking of a VPN server they can connect to (to keep the traffic secure, and because they work from home frequently) along with an AFP server for use with Time Machine. The goal would be to get better backup coverage, along with user-initiated restores and overall ease of use. Does this seem like a reasonable idea? Has anyone done this before? Are there any obvious problems I've overlooked?

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job, and these working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and, over the course of the backup, slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually only back up changed files for one day, the job is normally small and finishes overnight, so we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, I sent them that log and a VXgather from the BE host, and they had no fix or workaround. To give an idea, the working-set job I mentioned has been running for the last 3 1/2 hours and has backed up just under 10 megabytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

    Read the article

  • How to reconnect to Remote Session in a script?

    - by Lukasz
    Hello, I need to persist a Remote Desktop connection across a reboot of a Terminal Server. I'm thinking it would be something like a scheduled task that runs periodically, checks the running state of the session, and restarts it if it's down. BTW, I did check the "Reconnect..." checkbox on the advanced tab of the connection options, but it still goes down every time we restart the terminal server. Does anyone have a script that would accomplish the above as a scheduled task, or perhaps another solution?
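
    For illustration, a minimal sketch of one possible approach: a short PowerShell script, run periodically by a scheduled task, that checks whether a session for the target user is still active on the terminal server and relaunches the connection if it is not. The server name, user name, and .rdp path below are placeholders, and the qwinsta parsing is crude, so treat this as a starting point rather than a finished solution.

        # check-session.ps1 -- illustrative sketch, not a tested solution
        $server  = "TERMSRV01"                    # placeholder terminal server name
        $user    = "svc_rdp"                      # placeholder account that owns the session
        $rdpFile = "C:\Scripts\persistent.rdp"    # placeholder saved connection file

        # qwinsta lists sessions on the remote host; look for our user in an Active state
        $active = qwinsta /server:$server 2>$null |
                  Where-Object { $_ -match $user -and $_ -match "Active" }

        if (-not $active) {
            # no live session found, so start the Remote Desktop client again
            Start-Process mstsc -ArgumentList "`"$rdpFile`" /v:$server"
        }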

    Read the article

  • Permissions Required for Sharepoint Backups

    - by Wyatt Barnett
    We are in the process of rolling out an extranet for some of our partners using WSS 3.0 as the platform. We already use it internally for a variety of things, and we are using the following PowerShell script to back up the server:

        param( $url="http://localhost", $backupFolder="c:\" )
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site = new-Object Microsoft.SharePoint.SPSite($url)
        $names = $site.WebApplication.Sites.Names
        foreach ($name in $names) {
            $n2 = ""
            if ($name.Length -eq 0) { $n2 = "ROOT" } else { $n2 = $name }
            $tmp = $n2.Replace("/", "_") + ".sbk"
            $saveas = ""
            if ($backupFolder.Length -eq 0) { $saveas = $tmp }
            else { $saveas = join-path -path $backupFolder -childPath $tmp }
            $site.WebApplication.Sites.Backup($name, $saveas, "true")
            write-host "$n2 backed up to $saveas."
        }

    This script works perfectly on the current installation, running as our domain backup user. On the new box, it fails when run as the backup user, claiming "The web application located at http://extranet/ could not be found". That URL does, in fact, work, so I'm fairly certain it isn't anything that dumb and rather is some permissions issue, especially because the script works perfectly when executed from my security context. I have tried making the backup user a farm owner, as well as adding him to the various site collection admin groups on the extranet. The one major difference between the extranet and the intranet server is that the extranet has an alternate access mapping (for https://xnet.example.com) and also uses forms authentication for that mapping. Anyhow, what permissions (or other voodoo) do I need to set up to get this script to work properly?
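
    For illustration only: one difference that sometimes matters here is whether the backup account has an explicit web application policy on the new farm. Below is a hedged sketch, using the same WSS object model as the script above, of granting the backup account Full Control on the web application; the account name is a placeholder and this is not asserted to be the missing permission.

        # Sketch: grant the backup account an explicit Full Control policy on the web application.
        # "mydomain\backupuser" is a placeholder; run this on the SharePoint box itself.
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site   = New-Object Microsoft.SharePoint.SPSite("http://extranet")
        $webApp = $site.WebApplication
        $policy = $webApp.Policies.Add("mydomain\backupuser", "Backup service account")
        $role   = $webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl)
        $policy.PolicyRoleBindings.Add($role)
        $webApp.Update()
        $site.Dispose()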

    Read the article

  • apcupsd on Linux does not report on APC BackUPS Pro 900

    - by lserni
    From what documentation I could find, the UPS should be (is!) supported by Linux and ought to work with apcupsd. I looked for specific problems such as the infamous Microlink protocol, and found none. I have found feedback from a guy in the UK who reports using this very model on a not-too-different OS version (his OpenSuSE 12.1, mine 12.3 x86_64). The USB port is detected; lsusb reports:

        Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply

    and lsusb -v -s002:003 confirms and expands:

        Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               2.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0        64
          idVendor           0x051d American Power Conversion
          idProduct          0x0002 Uninterruptible Power Supply
          bcdDevice            0.90
          iManufacturer           1 American Power Conversion
          iProduct                2 Back-UPS RS 900G FW:879.L4 .I USB FW:L4
          bNumConfigurations      1
          Configuration Descriptor: [...]
            Interface Descriptor: [...]
              bInterfaceClass         3 Human Interface Device
              bInterfaceSubClass      0 No Subclass
              bInterfaceProtocol      0 None
              iInterface              0
              HID Device Descriptor:
                bLength                 9
                bDescriptorType        33
                bcdHID               1.00
                bCountryCode           33 US
                bNumDescriptors         1
                bDescriptorType        34 Report
                wDescriptorLength    1134
                Report Descriptors:
                  ** UNAVAILABLE **
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81 EP 1 IN
                bmAttributes            3
                  Transfer Type          Interrupt
                  Synch Type             None
                  Usage Type             Data
                wMaxPacketSize     0x0008 1x 8 bytes
                bInterval             100
        Device Status:     0x0000 (Bus Powered)

    The kernel recognizes this and duly sets up:

        crw------- 1 root root 180, 96 Nov 4 16:11 /dev/usb/hiddev0

    As far as I know, everything is as it should be. I have put the standard configuration in /etc/apcupsd/apcupsd.conf (which is Unix-terminated, ASCII-only, no BOM, just in case):

        UPSCABLE usb
        UPSTYPE usb
        DEVICE

    (I have also tried commenting out DEVICE, and setting a device of /dev/puppa results in an access attempt to /dev/puppa, not some /var/lib/dev/puppa or /dev/puppa\r\n.) Yet, what apcaccess tells me is:

        VERSION  : 3.14.10 (13 September 2011) suse
        CABLE    : USB Cable
        DRIVER   : USB UPS Driver
        UPSMODE  : Stand Alone
        STARTTIME: 2013-11-04 16:24:22 +0100
        MODEL    :
        STATUS   : NOBATT
        LINEV    : 000.0 Volts
        LOADPCT  : 0.0 Percent Load Capacity
        BCHARGE  : 000.0 Percent
        TIMELEFT : 0.0 Minutes
        MBATTCHG : 5 Percent
        MINTIMEL : 3 Minutes
        MAXTIME  : 0 Seconds
        SENSE    : Low
        LOTRANS  : 000.0 Volts
        HITRANS  : 000.0 Volts

    It doesn't recognize the model, and it reports no battery (and no voltage). This confirms that it's not the Microlink problem, or it would report the battery status, if precious little else. If I disconnect the USB cable, I get an apcupsd message to the effect that communications have been lost, and I get the "communication restored" broadcast too if I reconnect the cable, so apcupsd is monitoring. Everything tells me that it should work -- only it doesn't. Does anyone spot what I'm missing?

    Read the article

  • runas without asking for a password

    - by Gregory MOUSSAT
    On a Windows server which is in a domain, I have a script I run from Scheduled Tasks. I want this script to run under the mydomain\peter user account. That is simple to do with Scheduled Tasks if you know Peter's password - and once it's done, the script stops working whenever Peter decides to (or has to) change his password. On Linux, a cron job can be run as any user account without having to know the corresponding password, and root can run anything on behalf of another user (with su and sudo). Is there any way to do this with Windows? My need is for an old Windows 2003 server, but I can manage to run it from another computer.
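
    For illustration, one workaround sometimes used (sketched below) is to register the task without storing a password at all via schtasks' /NP switch; the task then runs non-interactively as the given account but with no stored credentials, so it can only touch local resources. The task name, script path, and schedule below are placeholders, and whether this fits depends on what the script needs to access.

        # Sketch only: create the task without storing Peter's password (local resources only).
        schtasks /Create /TN "NightlyScript" /TR "C:\Scripts\nightly.cmd" /SC DAILY /ST 02:00 /RU mydomain\peter /NP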

    Read the article

  • Why is Windows Task Scheduler trying to launch multiple instances?

    - by Paul H
    We have a number of Windows scheduled tasks that run on one Server 2008 web server (not R2) which is in a cluster. We recently moved from the original web server cluster to a new web server cluster (also Server 2008, not R2). The new server running the tasks is, we believe, set up the same as the original, BUT we now find that on the new server the Task Scheduler seems to want to instantly start each task three times.

    If we set the option to queue up a new task we get:

        Event ID 324: Task Scheduler queued instance "{9a1a8411-b042-45ff-8e6b-89874df230d7}" of task "\Client Reporting" and will launch it as soon as instance "{2bcc3df6-ea3b-4453-90c2-75b8b1946388}" completes.

    If we set the option to stop the existing task we get:

        Event ID 323: Task Scheduler stopped instance "{e685a910-b32b-414e-85fd-96bbe54314a2}" of task "\Client Reporting" in order to launch new instance "{4db66265-1f51-4ede-8535-ac7c3cb5c4c1}".

    Ticked settings:
    - Allow task to be run on demand.
    - Run task as soon as possible after a scheduled start is missed.
    - Stop the task if running for longer than 1 hour.
    - If the running task does not end when requested, force it to stop.
    - Start the task only if the computer is on AC power.
    - Stop the task if the computer switches to battery power.

    Selected option: if the task is already running, stop the existing instance.

    Note: we moved the tasks from one server to another in the cluster to see if it was the Task Scheduler on the particular server we'd picked causing the problem; same behaviour. Could it be something to do with the build of the new servers? We have very similar tasks set up on another server cluster that work OK without all this multiple starting, and comparing those tasks to the ones here, there does not seem to be anything obviously different in the settings available to us within Task Scheduler.

    Trigger: the task is scheduled to be triggered daily, once an hour, and to be stopped if it exceeds this time. Action: runs a .bat file. What could be causing this, and where can we look to see what logic is causing the tasks to start multiple times in this way?
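
    For illustration, one way to rule out subtle definition differences (assuming the task name is the same on both clusters) is to export the task definitions as XML on the old and new servers and compare them; a sketch:

        # Sketch only: dump the task definition to XML on each server, then compare.
        schtasks /Query /TN "\Client Reporting" /XML > C:\temp\client-reporting-new.xml
        # ...repeat on the old cluster, copy the file across, then:
        Compare-Object (Get-Content C:\temp\client-reporting-old.xml) (Get-Content C:\temp\client-reporting-new.xml)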

    Read the article

  • PostgreSQL encrypted backups

    - by Nikhil Gupte
    Is it possible to ensure that dumps taken from a PostgreSQL database are always encrypted? The data in the database is highly sensitive, and we cannot allow unauthorized personnel, including the sysadmins who need to back up the database, to access the actual data.

    Read the article

  • i[Pod|Phone|Pad|*] backups in iTunes

    - by Maroloccio
    iTunes <- iPhone. At sync time, a backup is performed. Which data is included and which is not? I.e. are songs backed up (potentially redundantly), so that a computer ends up holding both the source file on the filesystem and a copy inside the device backup? Is anything on the iPhone filesystem not backed up? (E.g. on a Mac using Time Machine, some files are excluded from the backup even though not all of them can be recreated upon restore - I lost my postfix config this way.)

    Read the article

  • Backups of Exchange 2007 SP3 using VSS are abnormally large

    - by Stew
    I have recently implemented Veeam Backup and Recovery 6.0 and have noticed that when backing up my Exchange server via incremental updates, it is transferring way more data than expected. The backup is incremental and set up to use VSS, and VSS is stable and healthy according to vssadmin. This is Exchange 2007 SP3 running on Windows Server 2008 R2; just last weekend I installed the latest rollup for Exchange. I thought the nightly incrementals were large, but perhaps my users really are sending that much mail, so I tested by taking one incremental backup, waiting 10 minutes, and taking a second. The second incremental backup transferred 5.8 GB of data. We as an organization are absolutely NOT putting 5.8 GB of data on the mail server every 10 minutes. Have any other Veeam users seen something similar? Is my test flawed? Are there other considerations for VSS?

    Read the article

  • Updating a script currently being ran by Task Scheduler on Windows

    - by orangechicken
    I have a scheduled task that runs a script on a ahem schedule ahem that updates a local git repo. The script is a file inside this same local git repo. Currently what I'm seeing is that the script runs, git complains that permission to write to the file is denied, and that actually results in the script being deleted! The next time the scheduled task runs, the script file is missing! How can I ensure that when I pull changes to this script from the repo, the file is actually updated?
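
    For illustration, one common pattern (a sketch; the paths, branch, and file names are assumptions) is to point the scheduled task at a small launcher that lives outside the repository, so git is free to overwrite the script it is about to run:

        # update-and-run.ps1 -- sketch of a launcher kept OUTSIDE the repo, so nothing git
        # needs to overwrite is in use while the update happens. Paths and branch are placeholders.
        $repo = "C:\jobs\reporting-repo"
        Set-Location $repo
        git fetch origin                      # fetch the latest commits
        git reset --hard origin/master        # force the working tree to match the remote
        & "$repo\scheduled-job.cmd"           # now run the freshly updated script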

    Read the article

  • Make backups of Dropbox folder every week

    - by ilansch
    I have a Dropbox folder which is shared by a couple of users. I would like to make a backup of this folder every week and store the backup on another hard drive. I could simply copy the entire folder each time, but I would like to copy only the files that have been changed or created during that week. I thought of creating a batch script that recursively checks each file in the Dropbox folder and looks at its modified date; if that date is later than a given one (the last backup date), it copies the file to a folder named BackUP[Date]. Do you think this solution is OK?
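
    For illustration, robocopy can already do that date filtering, so a sketch like the one below (the destination layout and 7-day window are assumptions) may be simpler than walking the tree by hand:

        # weekly-dropbox-backup.ps1 -- sketch only; source and destination paths are placeholders.
        $source = "C:\Users\ilan\Dropbox"
        $dest   = "D:\Backups\Dropbox_$(Get-Date -Format yyyy-MM-dd)"

        # /E copies subfolders (including empty ones); /MAXAGE:7 skips files whose last
        # modified date is more than 7 days old, i.e. anything unchanged since the last weekly run.
        robocopy $source $dest /E /MAXAGE:7 /R:2 /W:5 /LOG:"$dest.log"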

    Read the article

  • Tape vs SSDs backups regarding long-term storage reliability

    - by user66131
    My question is very specifically about solid state drives, not regular hard drives. I would like to put in place a grandfather-father-son backup scheme, with the SSDs being used for the grandfather and father portions, and the yearly grandfather would be locked in a safe offsite for maybe 5-10 years. Can I expect that after this period of time the data would be preserved as well as it would be on a tape?

    Read the article

  • daily rsync backups with hard links, checksums, and a new computer

    - by user75058
    I back up my laptop to a Fedora desktop daily using rsync with hard links, and this has worked great for almost a year. I recently purchased a new computer, transferred over my data, and would like to continue backing up this computer daily. However, because of the data transfer from the old laptop to the new one, the timestamps have obviously changed, which would cause my daily rsync backup to re-transfer all of the data. I thought that by adding the -c (checksum) switch to my rsync backup it would match files based on checksum instead of timestamp and size, and only transfer those files that are different or not present. This appeared to work, but upon examining the new backup, hard links are not being created; it appears the files that should be hard-linked are simply being copied into the new backup directory from the previous backup directory on the backup server. This is very peculiar behavior to me, and I am having trouble figuring out why it is occurring. Checksums match for the files that I think should be hard-linked. I have looked through the rsync man page and Googled around a bit, and have been unable to find anything that helps me better understand this behavior.

    Read the article

  • Backup devices for Windows Server Backup and Symantec [closed]

    - by user137841
    What is the best way to back up Windows SQL, Exchange, or AD server data: NAS, external USB, iSCSI, or perhaps some other backup solution? I will not be considering cloud backup solutions, due to bandwidth restrictions and cost. Currently I find NAS devices give the best results, but clients that do not have the budget for backup software use Windows Server Backup, and then they can make only one backup to a NAS at a time.

    Read the article

  • Storage server 2003 shadow copy backups deleted

    - by Aceth
    Hi there. We have a 1 TB storage server, and I've just transferred a 100 GB file across to it - and it has deleted the shadow copies. From Googling I understand that this is probably what occurred: http://support.microsoft.com/kb/826936. Is there any way of recovering those shadow copies? Thank you very much for having a read; any help would be greatly appreciated.
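
    For illustration (this cannot bring the deleted copies back, but it can help stop it happening again): the usual mitigation is to raise the shadow copy storage limit on the volume, so a large incoming file does not force old snapshots out. The drive letter and size below are placeholders; run from an elevated prompt on the server.

        # Sketch only: check the current shadow storage allocation, then raise the cap.
        vssadmin list shadowstorage
        vssadmin resize shadowstorage /For=D: /On=D: /MaxSize=200GB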

    Read the article

  • Looking for a Linux stream ripper that can be scheduled

    - by Anthony D
    I have an MP3 stream I want to schedule a recording of. I can do it by using wget to write to a file; it's just a straight MP3 stream. However, I'd like to use a command-line stream ripper that will do a better job. Does anyone know of one? Update 1: wget grabs whatever part of the stream it comes in on, which may not really be the start of a frame in the MP3 file. Also, wget is not really schedule-friendly; I experimented with starting it from a cron job and then killing it later, and this produced a file that didn't really start and stop where I wanted.

    Read the article

  • Scheduled Task unable to create/update any files

    - by East of Nowhere
    I have several tasks in Task Scheduler on Windows Server 2008 SP2 (32-bit), and they all successfully "do their work" except for creating or updating any files on Windows. All the tasks point to simple .cmd files that do the real work, but beyond that there's no pattern: some call robocopy with the /LOG option, some call .exe files I wrote that manipulate XML files, some just do stuff with > redirection. With all of them, if I double-click the .cmd file myself, it works fine and the files are created or updated or whatever. If I run it from Task Scheduler (on the schedule or just by clicking Run), the task always completes "successfully" but without any of the desired changes to files. I don't see any "unable to create file" errors in Event Viewer either. The tasks all run as a specific account, and I have logged in as that account and verified that it has permission to do everything it needs to. Further details: each task is set to run whether the user is logged in or not, and is configured for "Windows Vista or Windows Server 2008" (there is no other "Configure for" option available).
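
    For illustration, a throwaway probe scheduled exactly like the failing jobs can show what context they really run in; a frequent culprit is the task's "Start in" directory, since relative paths in the .cmd files then resolve under C:\Windows\System32. A sketch, with a placeholder output path:

        # task-probe.ps1 -- sketch only; point a test task at this and inspect the output file.
        $out = "C:\temp\task-probe.txt"          # placeholder; the task account must be able to write here
        "Running as : $(whoami)"        | Out-File $out
        "Working dir: $(Get-Location)"  | Out-File $out -Append
        "Temp dir   : $env:TEMP"        | Out-File $out -Append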

    Read the article

  • Windows Vista Backups?

    - by skaz
    I am trying to configure Windows Backup on Vista but don't see some capabilities I would expect to be there. For one, it looks like I can only select a local drive or a network share. I want to use a local drive, but I want to use a subfolder of one of the drives - must I really pick the root? As a workaround, I made a network share pointing at the local drive, thinking I could then pick the network share. However, when I do this I am prompted for credentials to reach the share, and none work - yet the share works in Explorer and from other computers, so access is configured correctly. Is there any way to do what I am trying to do? Thanks.

    Read the article

  • Windows backups to VM Esxi

    - by Martyn
    I'm very new to ESXi, so apologies if this is a silly question. I have ESXi 5.1 running on an HP ProLiant MicroServer, with internal LUNs RAIDed with an HP P410 Smart Array card. I have two Windows Server 2012 VMs running as domain controllers. These are being backed up with ghettoVCB to a dedicated datastore hosted on LUNs on an eSATA external device. I have several Windows 8 and 7 PCs connected to this domain. I want to back up these Windows PCs, via the built-in backup software, ideally to the external eSATA device as well. What do I need to do to have the Windows PCs see the datastore? I assume I could create a VM for FreeNAS or something, or share out the datastore via one of the Windows Server 2012 VMs, but both those options seem a little bit of an overkill?

    Read the article

  • Schedule power-on with ASUS P8H77-I?

    - by user826955
    Is it possible to have my box with an ASUS P8H77-I automatically power on every Sunday at 0:00 (for backup tasks)? I could not find such an option in the BIOS (or rather, the EFI). [edit] Never mind, I found it; it's called "RTC alarm date" or something like that. Still, I wonder if there is a way to wake the machine every week on Sunday. As it is, I can only set either every day or a single day of the month, so while I would like to do weekly backups, this only allows me to do monthly ones.

    Read the article

  • Use backups if unavailable (not just down)

    - by PriceChild
    Using haproxy, I want:
    - A pool of 'main' servers and 'backup' servers, though they don't necessarily have to be in separate pools.
    - Each backend to have a low 'maxconn' (in this case 1).
    - Clients not to wait in a queue: if there are no immediately available servers in the 'main' pool, they should be shunted to the 'backup' pool without delay.
    Right now I have one backend, the 'main' servers have an absurdly high weighting, and it 'works'. An acl using use_backend and connslots is along the right lines, but without the patch in my own answer it isn't perfect. Bonus points for not requiring a modified haproxy binary.

    Read the article

  • SVN (Subversion) Problem "File is scheduled for addition, but is missing" - Using Versions

    - by Mike
    I'm using Versions for SVN. When I attempt to commit, I get this message:

        Commit failed (details follow):
        '/Users/mike/Sites/mysite.com/astss-cvsdude/Trunk/cart/flashfile.swf' is scheduled for addition, but is missing

    I suppose this is because I added the files to the repo and then deleted them via the filesystem. I'd like to have it simply take note of my change and apply the change to the repo. How can I get around this?
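
    For illustration, the usual way out of this state (a sketch, assuming the command-line svn client is available alongside Versions; the path is the one from the error) is to revert the scheduled-but-missing file, which cancels the pending add without needing the file back:

        # Sketch only: run from the working copy root; this cancels the scheduled addition.
        svn revert "cart/flashfile.swf"
        svn status    # confirm nothing is still scheduled for addition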

    Read the article
