Search Results

Search found 2822 results on 113 pages for 'scheduled backups'.

Page 90 of 113

  • I need an admin toolset for Windows 2003 and 2008

    - by eugeneK
    I know this is a very general question, but here goes. I need a few tools; I'll describe my tasks as a sysadmin, and if you know of anything that can automate them I'd be glad to hear about it. I don't mind paying for software unless it is far too expensive.

    First, I back up all files on the server to local/office storage. I 7-Zip all the SQL backup files, move them over the network to a centralized location, and then FTP them from an office PC which has no FTP server installed and cannot have one. The backups run at 4 AM, so I need to schedule the compression and the FTP transfer that follows. I also FTP all the IIS web applications as a differential backup, and the same goes for the VOD movies.

    Second, I need a monitoring tool that watches all servers, both from the servers themselves and from an external location, for CPU, memory, hard disk and other basic failures. It should be able to call a website URL with parameters, which will send me an email with a full report on failure.

    Third, I need a way to collect the event logs from 10 Windows-based servers without accessing each of them manually. If you know of any solutions, thanks in advance.
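
    A minimal sketch of the compress-and-FTP step, assuming Python is available and using Python's built-in zipfile instead of 7-Zip; the paths, FTP host and credentials are placeholders. Task Scheduler would run it some time after the 4 AM SQL backup has finished:

      import zipfile
      from ftplib import FTP
      from pathlib import Path

      # Placeholders -- point these at the real backup share and FTP account.
      BACKUP_DIR = Path(r"\\server\sqlbackups")   # where the 4 AM SQL backups land
      ARCHIVE = Path(r"C:\temp\sqlbackups.zip")
      FTP_HOST, FTP_USER, FTP_PASS = "ftp.example.com", "user", "secret"

      # Compress every .bak file produced by the nightly SQL backup.
      with zipfile.ZipFile(ARCHIVE, "w", zipfile.ZIP_DEFLATED) as zf:
          for bak in BACKUP_DIR.glob("*.bak"):
              zf.write(bak, bak.name)

      # Push the archive to the remote FTP server.
      with FTP(FTP_HOST) as ftp:
          ftp.login(FTP_USER, FTP_PASS)
          with ARCHIVE.open("rb") as fh:
              ftp.storbinary(f"STOR {ARCHIVE.name}", fh)

    That covers the scheduled compress-then-FTP part; the monitoring and event-log collection are separate tools.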

    Read the article

  • How to troubleshoot Hyper-V VSS writer causing backup failure on Server 2008 R2

    - by Tim Anderson
    I have a Windows Server 2008 R2 machine running Hyper-V. Backups using Windows Server Backup fail with the error:

      The backup operation that started at '2011-01-02T10:37:01.230000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.

    I have traced this to a problem with the Hyper-V VSS writer. vssadmin list writers reports:

      Writer name: 'Microsoft Hyper-V VSS Writer'
      Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
      Writer Instance Id: {fcf0dd79-d282-4465-88ae-7b6857e055c2}
      State: [8] Failed
      Last error: Inconsistent shadow copy

    However, I can't get any further. A few relevant facts:

    - I get the error even if all the VMs are shut down.
    - If I disable the Hyper-V VSS writer by stopping the Hyper-V Management Service, the backup completes OK.
    - There are no errors in the Hyper-V-VMMS application log.
    - I tried to set tracing for VSS but can't get any output for some reason. I set the correct registry entries but no trace log is generated.

    Tim
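
    A small diagnostic sketch, assuming Python on the server, that runs the same vssadmin list writers command and flags any writer that is not in a Stable state; handy for spotting when (and whether) the Hyper-V writer fails again after each backup attempt:

      import subprocess

      # Run the same command quoted above and capture its output.
      out = subprocess.run(
          ["vssadmin", "list", "writers"],
          capture_output=True, text=True, check=True,
      ).stdout

      # Walk the output block by block and report writers that are not "Stable".
      name, state = None, None
      for line in out.splitlines():
          line = line.strip()
          if line.startswith("Writer name:"):
              name = line.split(":", 1)[1].strip().strip("'")
          elif line.startswith("State:"):
              state = line.split(":", 1)[1].strip()
              if "Stable" not in state:
                  print(f"{name}: {state}")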

    Read the article

  • Windows 7 home backup solution, with offsite provision

    - by Richard E
    I am looking for a home backup solution for my single Windows 7 (Home Premium) PC. I have about 500 GB of data to back up and would like to spend less than GBP 300 on the solution. I don't see the need to back up the whole PC, just specific folder branches (iTunes, photos, documents, Outlook files, user folders such as desktop, favorites, etc.).

    I would like a solution that lets me maintain backups in two separate physical locations (e.g. home and work). To facilitate this I am imagining a storage unit with slots for two removable drives, along with three separate drives. At any one time, two of the drives sit in the storage unit and receive backups; the third is at my work. Periodically I will take one of the drives into work and leave it there, then bring the drive that was there back home and plug it into the storage unit, where it will be backed up along with the drive that stayed behind. This approach should cover scenarios such as virus attack, fire or theft at one location.

    Thoughts and comments on the sanity of this approach, please...
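
    A rough sketch of the "specific folder branches" idea, assuming Windows' built-in robocopy; the drive letter and folder list are placeholders, and the same script would be pointed at whichever rotation drive is currently in the unit:

      import subprocess

      TARGET = r"F:\Backup"           # whichever rotation drive is currently plugged in
      FOLDERS = [                     # the specific branches to protect, not the whole PC
          r"C:\Users\Richard\Documents",
          r"C:\Users\Richard\Pictures",
          r"C:\Users\Richard\Music\iTunes",
      ]

      for src in FOLDERS:
          # Keep the folder structure under F:\Backup.
          dest = TARGET + "\\" + src.split("\\", 1)[1]
          # /MIR mirrors the tree (including deletions), /R:1 /W:1 keeps retries short.
          subprocess.run(["robocopy", src, dest, "/MIR", "/R:1", "/W:1"])

    Note that /MIR propagates deletions, so a file removed from the PC disappears from that mirror on the next run; versioned backup software avoids that.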

    Read the article

  • Hard drive had reallocated sectors...but now it magically doesn't! Can I trust it?

    - by rob
    Last week my SMART diagnostics utility, CrystalDiskInfo, reported that the external hard drive I was saving my backups to had suddenly reported 900+ reallocated sectors. I double-checked to confirm, then ordered a replacement drive. I spent all of this week copying data from that drive to the new drive.

    But toward the end of the copy, something peculiar happened: CrystalDiskInfo popped up an alert that the reallocated sector count had gone back down to 0. I know that when SMART detects a read error on a block, it adds that block to the current pending sector list. If the block is later read or written successfully, it is removed from the list and assumed to be fine; but if a subsequent write fails, it is marked bad and added to the reallocated sector count. What concerns me most is that I've never read anywhere that a sector can be recovered as "good" after it has been marked bad and remapped.

    I've just finished running an extended SMART diagnostic, and it found no surface errors. Now I'm doubtful that the manufacturer will honor a warranty claim if the SMART info does not report any problems. Has anyone had this happen? If so, is the drive indeed okay, or should I be concerned about an imminent failure?
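
    For a second opinion outside CrystalDiskInfo, a short sketch (assuming smartmontools is installed; the device name is a placeholder) that pulls the raw values of the relevant attributes straight from smartctl:

      import subprocess

      # Attribute IDs of interest: 5 = Reallocated_Sector_Ct,
      # 196 = Reallocated_Event_Count, 197 = Current_Pending_Sector,
      # 198 = Offline_Uncorrectable.
      WATCH = {"5", "196", "197", "198"}

      out = subprocess.run(
          ["smartctl", "-A", "/dev/sda"],   # adjust the device name for your system
          capture_output=True, text=True,
      ).stdout

      for line in out.splitlines():
          fields = line.split()
          if fields and fields[0] in WATCH:
              # ID, attribute name, raw value
              print(fields[0], fields[1], fields[-1])

    Attribute 197 (Current_Pending_Sector) is the one worth watching alongside 5; a nonzero value there means sectors are still awaiting the write that decides whether they get remapped.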

    Read the article

  • Backup hardware and strategy on distributed Windows Server 2008 network

    - by CesarGon
    This question is a follow-up to this. We have a Windows Server 2008 R2 domain over a network that spans two buildings, linked by a 100-Mbps point-to-point line. Over 60 users work in the organisation. We are planning to use DFS folders and DFS replication for file serving across the organisation. The estimated data volume is over 2 TB and will grow at approximately 20% annually. The idea is to set up a DFS file server in each building and use DFS so that all the contents stay replicated over the 100-Mbps link.

    We are now considering backup hardware and strategies. We are Dell customers and, after browsing the online Dell catalogue, I can see a number of backup hardware options. My main doubts are the following:

    - Would you go for a tape library, disk backup, or are there other options worth considering?
    - Would you perform batch backups (i.e. nightly) or would you use continuous backup (i.e. while users are working)?
    - Would you use a dedicated backup server to which the tape library (or any other backup device) is attached, or is there another way of doing things?

    My experience with backup hardware and overall setup is limited, so I appreciate any good advice you may have. Thanks.

    Read the article

  • CIFS-mounted drive sets the "sticky bit" on all files; cannot change permissions or modify files

    - by mattmcmanus
    I have a folder mounted on an Ubuntu 8.10 server through CIFS whose permissions I simply cannot change once it is mounted. Here is a breakdown of what's going on:

    All files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file is created on the Windows server or on the Linux machine. I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They all use the same fstab options and the same credentials:

      //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0

    I have run chmod a million different ways, all of which report success, but the permissions never actually change. The issue began after I updated from 8.04 to 8.10.

    Any idea why this may be happening on one machine? Since it started after an upgrade I'm not sure what the best thing to do is. Any help you could give would be great! None of my automated backup scripts are working because of this!

    Read the article

  • MySQL encoding problem after site move

    - by Quan Zhou
    Guys, I need your help. Last month my friend lost his database on Dreamhost, so he decided to move his WordPress-based blog (written in Chinese) to my server. He uses a WordPress plugin called wp-db-backup to perform regular database backups. The two servers look like this:

    Dreamhost:
      Linux 2.6.31.5-modsign-aufs2-grsec-2-opt
      mysql Ver 14.12 Distrib 5.0.16, for pc-linux-gnu (i386) using readline 5.0
      apache2, unknown version

    My server:
      Linux li159-46 2.6.32.12-x86_64-linode12
      mysql Ver 14.14 Distrib 5.1.45, for debian-linux-gnu (x86_64) using readline 6.1
      nginx 0.8.36

    His site's encoding was UTF-8 in both wp-config and the database. I imported his db backup file as UTF-8 by default, then synced the files from Dreamhost using rsync, then changed the db address and nothing more. But when I first looked at the "new" site, it was full of unreadable characters. I have met this problem before, so I changed the charset options in the browser, but none of them made the site display properly. Then I converted his db to GB18030; it works, but only if the browser charset is set to GB18030 or GBK, while by default browsers recognize the charset as UTF-8. I tried to edit the headers but it didn't work. What can I do now? Thx~~
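
    A small diagnostic sketch, assuming the PyMySQL driver and placeholder credentials, that prints which character sets the new server actually uses for the server, connection and database; a mismatch here (for example a latin1 connection in front of UTF-8 data) is the usual cause of this kind of garbling:

      import pymysql

      # Placeholder credentials -- point this at the new server's copy of the blog.
      conn = pymysql.connect(host="localhost", user="wpuser",
                             password="secret", database="wordpress")

      with conn.cursor() as cur:
          # How the server, the connection and the database itself are configured.
          cur.execute("SHOW VARIABLES LIKE 'character_set%'")
          for name, value in cur.fetchall():
              print(f"{name} = {value}")

      conn.close()

    If the settings disagree with what wp-config.php declares, re-importing the original dump with an explicit charset (for example mysql --default-character-set=utf8) is usually cleaner than converting the tables afterwards.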

    Read the article

  • How long will a USB key with an OS installed on it last?

    - by Xananax
    I've heard numerous times that installing an OS on a USB key is a bad thing to do, as USB keys typically survive only a certain number of writes before dying, and installing an OS on one will wear it out (unless it's used sporadically for rescue purposes). Nonetheless, I am very tempted to install some flavour of Linux (Ubuntu or Arch, I haven't decided yet) on a small, transportable USB key.

    My problem is that although you read a lot that it's "bad", you are never told how bad. How long would it last (on, say, a PC that is on 24/7)? A month? A year? Five years? Are there recipes to make it last longer? Is there any reason besides wear that should prevent me from attempting this? I mean, if it can be calculated, then I could theoretically protect myself by doing regular backups onto another key when the deadline gets close (for example).

    Notes:

    - I am not talking about using a USB key as a live CD, but actually installing the OS on it.
    - When I say "USB key", I mean a little stick of flash memory, not an external USB hard drive.
    - For the curious, my reason is that I work in a lot of different places, on different PCs, and I have a very customized session, with my own WM, my own key bindings, my own scripts, a selection of plugins for Firefox and Chrome, etc. Currently I synchronize all this through a mix of Dropbox, git and carrying files on USB keys, and it's becoming a chore. It would be much simpler to just plug in the USB key, mount the hard disk of the PC I am using, and use its processing power without actually needing to install any OS on it.
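
    The lifetime can at least be bounded with a back-of-envelope estimate; every number in the sketch below is an assumption (cycle counts for cheap flash keys are rarely published and their wear levelling is often poor):

      # Rough endurance estimate: total writable bytes / bytes written per day.
      capacity_gb = 16          # size of the key
      pe_cycles = 3000          # assumed program/erase cycles per cell (guess for MLC flash)
      write_amplification = 3   # assumed overhead from small random writes
      gb_written_per_day = 2    # assumed OS + browser cache + logs per day

      total_writes_gb = capacity_gb * pe_cycles / write_amplification
      days = total_writes_gb / gb_written_per_day
      print(f"~{days:.0f} days (~{days / 365:.1f} years)")

    With ideal wear levelling the estimate comes out comfortably long; the practical risk is that cheap keys level wear badly, so hot spots such as the filesystem journal can die much sooner, which is why the usual recipes are noatime mounts and keeping swap and logs off the key.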

    Read the article

  • How to automatically mount ~/Private using ecryptfs when logging in via SSH pubkey

    - by andreash
    Rationale: I want to be able to make automatic backups to a remote machine, which will be encrypted with ecryptfs.

    The title says it all: I set up ecryptfs-utils on my Debian Squeeze box and configured one user to use it via ecryptfs-setup-private. When I log in via SSH using password authentication, the ~/Private directory gets mounted automatically. How can I make ~/Private mount automatically when logging in via SSH with public key authentication as well?

    Obviously, the best solution would be if ecryptfs could somehow 'use' the SSH public key to en/decrypt the data (I know that the user's password would then no longer be able to en/decrypt the data; this would be acceptable). Probably this will not work. So perhaps I could somehow call ecryptfs-mount-private via ssh before logging in with the public key? Then I would presumably need to pipe the passphrase through the SSH connection, which means storing it on the source machine's file system. Not nice either. Any other ideas?

    Read the article

  • Limit the amount of data that can be stored in a folder on Ubuntu Server 12.04?

    - by dougoftheabaci
    I'm in the process of building my first server. It's up, it's running, and I'm transferring copious amounts of data away from my horrid little Drobo (DO NOT BUY ONE OF THESE, EVER). However, there's one thing I have yet to do: I'd like to set it up for Time Machine backups as well. I've seen all the guides and I have some idea of how to set the whole thing up, but the issue is that Time Machine will just fill up as much space as you let it. So if I let it loose in my 8 TB zpool it'll slowly consume every last available sector, which of course is not acceptable.

    I have a folder at the root of my zpool called "ZFS Time Machine" and I would like to limit it to 1 TB (all I need for backup purposes), but I have no idea how to do that. Is this possible? I can keep using a small external hard drive attached via FW800 if I have to, but I'd much prefer to put everything on my server.
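
    Assuming the "folder" can be its own ZFS dataset (the pool name below is hypothetical), a ZFS quota gives exactly this cap; a minimal sketch driving the command-line tools from Python:

      import subprocess

      POOL = "tank"                      # hypothetical pool name -- use the real one
      DATASET = f"{POOL}/timemachine"    # give Time Machine its own dataset

      # Create the dataset (skip if it already exists) and cap it at 1 TB.
      subprocess.run(["zfs", "create", DATASET], check=False)
      subprocess.run(["zfs", "set", "quota=1T", DATASET], check=True)
      subprocess.run(["zfs", "get", "quota", DATASET], check=True)

    ZFS enforces the cap, so Time Machine sees roughly a 1 TB volume and prunes old backups when it fills instead of eating the pool.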

    Read the article

  • What is the fastest way to resize a large partition?

    - by Jook
    Due to a new HDD configuration I am currently handling larger backup/resize tasks on partitions of around 900 GB which are 70-90% full.

    Some background: the first thing I noticed was that the Acronis/Western Digital TrueImage was extremely slow while running under Windows 7, even on high priority. Creating a normal backup of 650 GB of data (a 900 GB partition) would have taken 3 days! The same task done with the boot-CD version of the same Acronis release took about 2 hours (a SATA3 copy from one disk to another, both around 110 MB/s).

    Now that I have finished all my backups, I want to remove some obsolete partitions and resize the remaining ones to the full HDD size. Of course this usually takes quite some time, but in this case, extending the 900 GB partition to 931 GB (30+ GB added at the front, 1+ GB at the end) will take around 6 hours (using GParted)! Had I known that earlier, I would have just restored the image. But no: first it showed a reasonable time of 1:45h and 0 of 1 operations, but after those 1:45h it started again, this time with 4h to go, still 0 of 1 operations, only now it was copying instead of moving.

    Question: why does resizing a partition have to be this slow? I am asking for a good explanation. This has bugged me since I started partitioning: why does it have to copy all the data around; can't the data just stay in place?!

    Read the article

  • Need Suggestions on Backup Strategies and Alternatives?

    - by Leejo
    I'm not sure where else to post this question, since it is not exactly code or development related... but I know Stack Overflow is very responsive to questions.

    Currently I use Mozy Home to perform an online backup of my laptop. So far this works well, since I only have one laptop that needs to be backed up. But that may change soon, and I want to explore alternatives to running an online backup on every machine. Ideally, I want to set up a network computer (laptop/desktop) with enough storage to hold the backups for all the other machines I would have. Each machine should be responsible for performing its own backup (to the network computer). This would require something like Mozy's incremental backup capability, but backing up locally to the network computer instead of online.

    Can you recommend local backup software (backup to a network PC, incremental backup, good restore options)? I'm also looking for any ideas on a local backup strategy, even if it's different from what I've described. What works and what doesn't? Thanks in advance for your help!

    Read the article

  • Dedicated server automatic backup solution

    - by Luigi
    I have a dedicated Ubuntu web server in a cloud environment, and I am looking for a nice way to do automated backups. I would like to back up some directories with web apps, and all my MySQL databases. As for destinations: snapshots every two hours locally, and every six hours to a remote FTP server. I also want to delete backup archives older than seven days (locally and on the FTP server), and to be notified of any problems by email.

    To achieve some of this functionality I currently use cron + a shell script, and http://www.mysqldumper.net/, but that doesn't really answer my needs. Mysqldumper doesn't automatically know about new databases, and the shell script does not notify me of problems. It's something I have to check from time to time, and I don't have much trust in it. I googled for a while, and it seems most people solve this with shell scripts. Is that a method you can trust? Are there any web-GUI tools I'm missing? Maybe there is a smarter strategy for doing this? I'm a little bit confused.
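
    Cron plus a script is indeed the usual answer, and it can be made trustworthy if every step is checked and failures are mailed out rather than silently ignored. A compressed sketch of the idea, assuming Python on the server and treating all hosts, credentials and paths as placeholders (mysqldump's --all-databases picks up new databases automatically; MySQL credentials are assumed to come from ~/.my.cnf):

      import datetime, smtplib, subprocess, tarfile
      from email.message import EmailMessage
      from ftplib import FTP
      from pathlib import Path

      WEB_DIRS = ["/var/www/app1", "/var/www/app2"]   # placeholder paths
      LOCAL = Path("/var/backups/snapshots")
      STAMP = datetime.datetime.now().strftime("%Y%m%d-%H%M")

      def notify(error):
          # Mail any failure instead of failing silently.
          msg = EmailMessage()
          msg["Subject"], msg["From"], msg["To"] = "Backup failed", "backup@example.com", "me@example.com"
          msg.set_content(error)
          with smtplib.SMTP("localhost") as s:
              s.send_message(msg)

      try:
          LOCAL.mkdir(parents=True, exist_ok=True)
          archive = LOCAL / f"snapshot-{STAMP}.tar.gz"

          # Dump *all* databases so new ones are included automatically.
          dump = LOCAL / f"mysql-{STAMP}.sql"
          with dump.open("wb") as fh:
              subprocess.run(["mysqldump", "--all-databases"], stdout=fh, check=True)

          # Bundle the dump and the web app directories into one archive.
          with tarfile.open(archive, "w:gz") as tar:
              tar.add(str(dump), arcname=dump.name)
              for d in WEB_DIRS:
                  tar.add(d)
          dump.unlink()

          # Push the archive offsite (the six-hourly run).
          with FTP("ftp.example.com") as ftp:
              ftp.login("user", "secret")
              with archive.open("rb") as fh:
                  ftp.storbinary(f"STOR {archive.name}", fh)

          # Prune local archives older than seven days.
          cutoff = datetime.datetime.now() - datetime.timedelta(days=7)
          for old in LOCAL.glob("snapshot-*.tar.gz"):
              if datetime.datetime.fromtimestamp(old.stat().st_mtime) < cutoff:
                  old.unlink()
      except Exception as exc:
          notify(str(exc))

    Pruning the FTP side and pushing offsite only on every third run are left out for brevity; a cron entry every two hours would drive it.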

    Read the article

  • Optimal file system type and mount options for an rsnapshot dedicated drive

    - by Nimmy Lebby
    We have an external USB 2 drive that we are using as a backup drive for our configuration. We use rsnapshot for the backups. It uses a few standard commands for managing snapshots:

    - rm -rf: deletes expired snapshots
    - mv: moves older snapshots down a slot
    - cp -al: duplicates the last snapshot to a new slot
    - rsync -a --delete --numeric-ids --relative: synchronizes the new snapshot

    As you can see from the log below, the majority of the time is spent on the rm -rf and cp -al steps:

      [25/Dec/2010:14:00:02] rsnapshot hourly: started
      [25/Dec/2010:14:00:02] echo 21012 > /var/run/rsnapshot.pid
      [25/Dec/2010:14:00:02] rm -rf /mnt/extdrive/snapshots/hourly.5/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.4/ /mnt/extdrive/snapshots/hourly.5/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.3/ /mnt/extdrive/snapshots/hourly.4/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.2/ /mnt/extdrive/snapshots/hourly.3/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.1/ /mnt/extdrive/snapshots/hourly.2/
      [25/Dec/2010:14:15:48] cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/snapshots/hourly.1
      [25/Dec/2010:14:23:32] rsync -a --delete --numeric-ids --relative /etc /mnt/extdrive/snapshots/hourly.0/sm4/
      [25/Dec/2010:14:23:52] touch /mnt/extdrive/snapshots/hourly.0/
      [25/Dec/2010:14:23:52] rm -f /var/run/rsnapshot.pid
      [25/Dec/2010:14:23:52] rsnapshot hourly: completed successfully

    My questions:

    - I'm currently using ext4 for the filesystem. Maybe this is not the best choice of those available in Red Hat. Does anyone have recommendations that would speed up the process?
    - The partition's mount options are sync,dirsync 1 2. Is there a way to optimize this, since the drive is used solely for rsnapshot?

    Of course, reasoning would be greatly appreciated.

    Read the article

  • Debugging a Drobo that chokes Windows 7x64 When Plugged In

    - by Pridkett
    I've had a love-hate relationship with my Drobo for a long time. After two years of using it on a Linux box, I moved it over to a Windows 7 machine, where it seemed to work just fine for a long time, but under very light usage: mainly backups that never actually happened. Recently I began using it for additional backup services (through CrashPlan, which is great). This means the Drobo gets a lot more usage. It also means something interesting happens: the Drobo can choke my system on startup. Here's what I mean:

    - Start computer without Drobo plugged in, CrashPlan and Drobo Dashboard services disabled: 105s
    - Start computer with Drobo plugged in, CrashPlan disabled, Drobo Dashboard enabled: 250s (and 1 CPU at 100% for a very long time, Drobo churning)
    - Start computer with Drobo plugged in, CrashPlan and Drobo Dashboard disabled: 250s (1 CPU at 100% for a very long time, Drobo churning)
    - Start computer with Drobo plugged in, CrashPlan and Drobo Dashboard enabled: 300s (1 CPU at 100% for a very long time, Drobo churning)

    If I yank the USB plug on the Drobo, the CPU usage drops to nothing very quickly. The slow startup in the fourth scenario is because CrashPlan is trying desperately to load stuff from the H: drive before it gives up, so I've disabled it for the time being.

    So here's my question: what the heck is going on when I plug the Drobo in? I've fired up Process Explorer and see that the System process is hogging the CPU, specifically an ntoskrnl.exe/KdPollBreakIn thread that's going ape. Is this something that's wrong with the Drobo? With Windows? Any idea how to find out?

    If it matters, here's the tech info: Athlon 64 X2 4400, 2GB RAM, Win7 Ultimate, Drobo USB (2x1TB, 2x320GB).

    Read the article

  • DPM 2007 clashing with existing SQL backup job

    - by Paul D'Ambra
    I've recently installed a DPM 2007 server on Server 2003 and have set up a protection group against a Server 2003 server running SQL 2005 SP3. The SQL Server in question has a full backup (as a SQL Agent job) once a day and transaction log backups hourly; these are zipped up and FTP'd to a server offsite by a scheduled task. Since adding the DPM job I'm receiving many error messages:

      DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job. The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name since the last backup. All incremental backup jobs will fail until an express full backup runs.

    My google-fu suggests that I need to change the full backup my SQL Agent job is running to a copy_only backup. But I think this means that I can't use that backup together with the transaction logs to restore the database if the building (including the DPM server) burns down.

    I'm sure I'm missing something obvious and thought I'd see what the hive mind suggests. It is an option to set up a co-located DPM server elsewhere and have DPM stream the backup, but that's obviously more expensive than the current setup. Many thanks in advance.

    Read the article

  • Exchange DiskShadow/Robocopy backup does not purge log files

    - by Robert Allan Hennigan Leahy
    I have a series of scripts set up to back up my Exchange. The following command is executed to start the process:

      diskshadow /s C:\Backup_Scripts\exchangeserverbackupscript1.dsh

    This is exchangeserverbackupscript1.dsh:

      #DiskShadow script file
      set verbose on
      #delete shadows all
      set context persistent
      writer verify {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
      set metadata C:\Backup_Scripts\shadowmetadata.cab
      begin backup
      add volume C: alias SH1
      create
      expose %SH1% P:
      exec C:\Backup_Scripts\exchangeserverbackupscript1.cmd
      end backup
      delete shadows exposed P:
      exit
      #End of script

    And this is exchangeserverbackupscript1.cmd:

      robocopy "P:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" "\\leahyfs\J$\E-Mail Backups\Day 1" /MIR /R:0 /W:0 /COPY:DT /B

    This is not causing Exchange to purge its log files. The edb file is 4.7 gigabytes, but the First Storage Group folder itself is 50+ gigabytes due to many, many log files for each day going back to 2009. Is there any way -- I've Googled and haven't found anything -- to notify Exchange when I've completed a full backup, and have it purge its log files? According to this and this, end backup should cause Exchange to "flush the transaction logs for that storage group", but only "if a successful backup of a storage group occurred", which leaves my question as: what constitutes a "successful backup", and why is what I'm doing not it?

    Read the article

  • Synchronizing files between Linux servers, through FTP

    - by Daniel Magliola
    I have the following configuration of servers:

    - 1 central Linux server, a VPS
    - 8 satellite Linux servers, "crappy shared hostings"

    I have a bunch of files that I need to have on all servers. Right now I'm copying them everywhere manually, but I want to be able to copy them to the central server and then have a scheduled process that runs every now and then and synchronizes them (only outwardly; there is no need to look for "new" files on the satellite servers). There are a couple of catches, though:

    - I can't install any custom software on the satellite servers, or do strange command-line things that auto-connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hosting accounts where I have absolutely no control over anything.
    - I need to send the files over FTP.
    - I also need to have, on my central server, a list of the files that are available on each of the satellite servers, to make sure they are ready before I send traffic to them.

    If I were to do this manually, the steps would be:

    1. Get the list of files on a satellite server.
    2. Compare it to my own, and send the files that are missing.
    3. Get the list of files again, and store it in my central database.

    I'd like to know what tools are out there that can alleviate as much of this as possible, first the syncing, and then getting the list of files available on the other server. I'm going to be doing everything from PHP; I'm not sure if there are good tools to "use FTP from PHP", which I'm pretty sure I'll have to do for step 3 at least. Thanks in advance for any ideas! Daniel
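
    A sketch of the one-way push in Python (the same logic maps directly onto PHP's ftp_connect/ftp_nlist/ftp_put functions); hostnames, paths and credentials are placeholders, and subdirectories and deletions are ignored for brevity:

      from ftplib import FTP
      from pathlib import Path

      LOCAL_ROOT = Path("/srv/shared-files")   # placeholder: the central copy
      REMOTE_DIR = "public_html/shared"        # placeholder: path on the satellite host

      def push_missing(host, user, password):
          """Upload any local file the satellite doesn't have yet; return its final file list."""
          with FTP(host) as ftp:
              ftp.login(user, password)
              ftp.cwd(REMOTE_DIR)
              remote = set(ftp.nlst())         # names already on the satellite
              for f in LOCAL_ROOT.iterdir():
                  if f.is_file() and f.name not in remote:
                      with f.open("rb") as fh:
                          ftp.storbinary(f"STOR {f.name}", fh)
              return ftp.nlst()                # what the satellite now has

      # One entry per satellite; the returned lists would go into the central database.
      for host in ["satellite1.example.com", "satellite2.example.com"]:
          print(host, push_missing(host, "user", "secret"))

    The list returned at the end is step 3: it is what the satellite actually holds after the push, ready to be stored in the central database.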

    Read the article

  • Suggestions for accessing SQL Server from the internet

    - by Ian Boyd
    I need to be able to access a customer's SQL Server, and ideally their entire LAN, remotely. They have a firewall/router, but the guy responsible for it is unwilling to open ports for SQL Server and is unable to support PPTP forwarding. The admin did open VNC on a non-standard port, but since they have a dynamic IP it is difficult to find them all the time.

    In the past I have created a VPN connection that connects back to our network. But that didn't work so well, since whenever I need access I have to ask the computer-phobic users to double-click the icon and press Connect. I did try creating a scheduled task that attempts to keep the VPN connection back to our office up at all times by running:

      >rasdial "vpn to me"

    But after a few months the VPN connection went insane, and thought it was both, and neither, connected and disconnected; the VPN connection wouldn't work again until the server was rebooted.

    Can anyone think of a way I can access the customer's LAN that doesn't involve:

    - opening ports on the router
    - needing to know their external IP
    - customer interaction of any kind

    Anticipated objections: "use a VPN", "the VNC protocol has known weaknesses", "you are unwise to lower your defenses" (you stole that line from Empire), "it's not wise to expose SQL Server directly to the internet". The customer doesn't care about any of that. The customer wants things to work.
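
    If the scheduled rasdial approach stays in place, a watchdog that checks before redialling is a bit more robust than dialling blindly. A sketch, assuming Python on the customer's machine (the same two calls also work from a batch file) and that the phonebook entry has saved credentials; rasdial with no arguments lists the currently connected entries:

      import subprocess, time

      VPN_NAME = "vpn to me"   # the existing RAS phonebook entry

      while True:
          # "rasdial" with no arguments lists the entries that are currently connected.
          status = subprocess.run(["rasdial"], capture_output=True, text=True).stdout
          if VPN_NAME.lower() not in status.lower():
              # Clear any half-up state, then redial.
              subprocess.run(["rasdial", VPN_NAME, "/disconnect"])
              subprocess.run(["rasdial", VPN_NAME])
          time.sleep(300)   # check every five minutes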

    Read the article

  • Ubuntu: cannot change permissions on files I own and have RW access to

    - by madmaze
    Hello there, I have a hard drive full of backups which is mounted for me at /media/chronus_. I have been trying to give another user RW permission to this drive. The problem is that I cannot change any permissions on this drive; even if I make a new file, everything gets set to -rw-------. Here is an excerpt of what I have tried:

      madmaze@the-gibson:~$ touch testfile
      madmaze@the-gibson:~$ ls -l testfile
      -rw-r--r-- 1 madmaze madmaze 0 2011-01-16 20:11 testfile
      madmaze@the-gibson:~$ chmod 777 testfile
      madmaze@the-gibson:~$ ls -l testfile
      -rwxrwxrwx 1 madmaze madmaze 0 2011-01-16 20:11 testfile
      madmaze@the-gibson:~$ cd /media/chronos_/Pix/
      madmaze@the-gibson:/media/chronos_/Pix$ ls -l
      total 4100
      -rw------- 1 madmaze madmaze   28226 2011-01-16 20:18 avp.jpg
      -rw------- 1 madmaze madmaze    5764 2011-01-16 20:18 avpsmall.jpg
      -rw------- 1 madmaze madmaze   98414 2011-01-16 20:18 john.jpg
      -rw------- 1 madmaze madmaze   98785 2011-01-16 20:18 lisa.jpg
      -rw------- 1 madmaze madmaze 3954281 2011-01-16 20:18 peter.jpg
      madmaze@the-gibson:/media/chronos_/Pix$ chmod 777 *.jpg
      madmaze@the-gibson:/media/chronos_/Pix$ ls -l
      total 4100
      -rw------- 1 madmaze madmaze   28226 2011-01-16 20:18 avp.jpg
      -rw------- 1 madmaze madmaze    5764 2011-01-16 20:18 avpsmall.jpg
      -rw------- 1 madmaze madmaze   98414 2011-01-16 20:18 john.jpg
      -rw------- 1 madmaze madmaze   98785 2011-01-16 20:18 lisa.jpg
      -rw------- 1 madmaze madmaze 3954281 2011-01-16 20:18 peter.jpg
      madmaze@the-gibson:/media/chronos_/Pix$ sudo chmod 777 *.jpg
      madmaze@the-gibson:/media/chronos_/Pix$ ls -l
      total 4100
      -rw------- 1 madmaze madmaze   28226 2011-01-16 20:18 avp.jpg
      -rw------- 1 madmaze madmaze    5764 2011-01-16 20:18 avpsmall.jpg
      -rw------- 1 madmaze madmaze   98414 2011-01-16 20:18 john.jpg
      -rw------- 1 madmaze madmaze   98785 2011-01-16 20:18 lisa.jpg
      -rw------- 1 madmaze madmaze 3954281 2011-01-16 20:18 peter.jpg
      madmaze@the-gibson:/media/chronos_/Pix$ touch testfile
      madmaze@the-gibson:/media/chronos_/Pix$ ls -l testfile
      -rw------- 1 madmaze madmaze 0 2011-01-16 20:25 testfile
      madmaze@the-gibson:/media/chronos_/Pix$ chmod 777 testfile
      madmaze@the-gibson:/media/chronos_/Pix$ ls -l testfile
      -rw------- 1 madmaze madmaze 0 2011-01-16 20:25 testfile
      madmaze@the-gibson:/media/chronos_/Pix$

    Any ideas what I could be doing wrong?
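
    One quick check, as a minimal sketch assuming the mount point from the session above: if the drive turns out to be mounted as vfat, ntfs or cifs, the kernel ignores chmod there and the fixed -rw------- comes from the mount options (umask/fmask/dmask or file_mode/dir_mode), which would explain why chmod has no lasting effect:

      # Show the filesystem type and mount options for the backup drive.
      with open("/proc/mounts") as fh:
          for line in fh:
              device, mountpoint, fstype, options = line.split()[:4]
              if mountpoint.startswith("/media/chronos_"):
                  print(fstype, options)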

    Read the article

  • Can't back up to a NAS drive as an offline scheduled task

    - by imageng
    I have seen this problem discussed in several forums, including this one, but could not find a solution. On MS Server 2003 I configured a backup task whose target is a NAS disk (Seagate BlackArmor NAS 110). The backup task works fine as a scheduled task or run directly, as long as I am logged on. It does not work when the user is logged off (in this case, Administrator). I have already tried the following:

    1. Addressing the target as a network drive (Y:location..)
    2. Using a UNC path instead
    3. Making the drive a domain member (the NAS admin software allows it to be defined as a domain member)

    The resulting log message for 1 and 2 is: "The operation was not performed because the specified media cannot be found." The result for 3 is an empty log file.

    The scheduled task's "Run" command is:

      C:\WINDOWS\system32\ntbackup.exe backup "@C:\Documents and Settings\Administrator\Local Settings\Application Data\Microsoft\Windows NT\NTBackup\data\de-board.bks" /a /d "Set created 2/14/2010 at 5:10 PM" /v:yes /r:no /rs:no /hc:off /m incremental /j "de-board" /l:s /f "\\10.0.0.8\public\Backups\IBMServer\de-board.bkf"

    10.0.0.8 is the static IP of the NAS. "Run only if logged on" is NOT checked, and the password of the Administrator user is set. It is obvious that there is no access to the NAS when the user is logged off. Do you have any idea how I can solve this? Thanks.
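
    One sketch of a workaround, with placeholder NAS credentials: authenticate to the share from inside the task itself, since with the user logged off there is no interactive session holding a connection to the NAS. The same two net use calls can simply go at the top of a batch file that then runs the ntbackup command line above:

      import subprocess

      SHARE = r"\\10.0.0.8\public"               # the NAS share used by the backup job
      NAS_USER, NAS_PASS = "nasuser", "secret"   # placeholder NAS credentials

      # Establish the connection before the backup runs.
      subprocess.run(["net", "use", SHARE, NAS_PASS, f"/user:{NAS_USER}"], check=True)

      # ... run the existing ntbackup command line here, writing to the UNC path ...

      # Drop the connection again when the backup is done.
      subprocess.run(["net", "use", SHARE, "/delete"], check=True)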

    Read the article

  • Is it possible/practical to install and run Linux on a USB flash drive?

    - by Graeme Donaldson
    I'm going to replace my old 2004-vintage desktop PC soon and I have an idea of what I want to do; I'm just not sure if it's possible or realistic. In the time since I built the old PC it has slowly become less used as a PC and more as a file server, so I figured I'd build a small file server which could also function as a router/DHCP/DNS/whatever box.

    The idea is to base it on an Atom system. I have my eye on the Intel D510MO for the moment. This supports 2 SATA disks, and I'd prefer to dedicate those to data storage. I'd like to install Ubuntu Server or maybe Debian on an 8/16 GB USB flash drive. I have seen plenty of tutorials on how to perform an installation from a USB drive, but I can't seem to find any info on actually booting and running the OS from USB flash. Is this even possible? Is it practical?

    This box will mostly be used for:

    - Making backups of mine and my wife's notebooks via the LAN. I will use SMB or NFS for this.
    - Digital media storage, which will be accessed by a Mede8er box with no storage of its own. I will most likely use NFS for this.

    Read the article

  • Effective backup and archive strategy for database and linked files

    - by busyspin
    I am using Postgres to store a variety of application data for a webapp. Part of the application involves storing and retrieving user-uploaded files. I am storing the files in the filesystem with some associated metadata in the database. I am trying to come up with a backup and archive strategy so that I can effectively back up and archive/restore the database and the linked files. Here are the things I want to accomplish:

    - Perform routine backups that can be used for recovery from failures and which include all DB data and the linked files. Ideally, this backup would be done while the app is running. Live backup is certainly possible with a DB, but I am not sure how to keep the linked files consistent with the database during the backup process.
    - Archive chunks of data as they become "old". These chunks must include the database data plus any linked files. It should be possible to put the archived data back into production again. It would be ideal if it were easy to determine which ranges of objects were stored in each chunk.

    Do you have any advice for how to accomplish these goals? If the files were in the database as BLOBs these tasks would be much easier, since normal database backup and restore functionality would handle this. I am not sure how to accomplish the same thing when file data is linked to database rows.
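
    One common pattern for the routine backup, sketched with placeholder paths and database name: dump the database first, then copy the file store. Every file referenced by rows in the dump was already on disk when the dump started, so it will be present in the later copy; files uploaded in between are harmless extras rather than dangling references:

      import datetime, subprocess
      from pathlib import Path

      FILES_ROOT = "/srv/app/uploads"            # placeholder: where the linked files live
      BACKUP_ROOT = Path("/backups")
      stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
      target = BACKUP_ROOT / stamp
      target.mkdir(parents=True)

      # 1. Dump the database first; pg_dump takes a consistent snapshot while the app runs.
      with (target / "db.dump").open("wb") as fh:
          subprocess.run(["pg_dump", "-Fc", "appdb"], stdout=fh, check=True)

      # 2. Then copy the uploaded files. Anything uploaded after the dump started has no
      #    metadata row in the dump yet, so the pair stays restorable.
      subprocess.run(["rsync", "-a", FILES_ROOT, str(target / "files")], check=True)

    Archiving "old" chunks can reuse the same ordering per chunk, with the object range recorded alongside the dump so each archive documents what it contains.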

    Read the article

  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has Win2k3 SP2 and Win2k SP4 servers, site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now. Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the win2k3 servers. I understand what the message means, no need to send me links to the technet article explaining it. I'll improve my backups. What I'm more curious about is why did I just start getting this message now? Why haven't I been getting it for the past 3 years?!?

    Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with their own controller). Don't worry, I did proper DCpromo exercises and made sure we didn't lose anything. But would shutting those down possibly be related to why I get this error now? This won't keep me awake at night but I am curious as to what changed...

      Event Type: Warning
      Event Source: NTDS Replication
      Event Category: Backup
      Event ID: 2089
      Date: 3/28/2010
      Time: 9:25:27 AM
      User: NT AUTHORITY\ANONYMOUS LOGON
      Computer: RedactedName
      Description:
      This directory partition has not been backed up since at least the following number of days.
      Directory partition: DC=MyDomain,DC=com
      'Backup latency interval' (days): 30
      It is recommended that you take a backup as often as possible to recover from accidental loss of data. However if you haven't taken a backup since at least the 'backup latency interval' number of days, this message will be logged every day until a backup is taken. You can take a backup of any replica that holds this partition. By default the 'Backup latency interval' is set to half the 'Tombstone Lifetime Interval'. If you want to change the default 'Backup latency interval', you could do so by adding the following registry key.
      'Backup latency interval' (days) registry key: System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days)
      For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article
