Search Results

Search found 30724 results on 1229 pages for 'backup solution'.


  • Does SQL Server Maintenance Cleanup consider exact times when deleting old backups?

    - by Heinzi
    Let's say I have a daily maintenance task that backs up all databases and then removes backups that are older than 3 days. Now let's say the first backup on day 1, starting at 10:00, results in the following files:

        db1.bak  2012-01-01 10:04
        db2.bak  2012-01-01 10:06

    Now let's say that on day 4 the first step of the maintenance task (backing up the DBs) happens to finish at 10:05. Will SQL Server delete db1.bak and keep db2.bak (which would be logical, but might surprise the user), keep both, or remove both?
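
    For reference, here is what an exact-timestamp comparison would do with the two files above. This is only a sketch of one possible interpretation (cutoff = time the cleanup runs minus 3 days), not a statement of how the Maintenance Cleanup task actually behaves:

        from datetime import datetime, timedelta

        # Hypothetical illustration: assumes the cleanup compares exact file
        # timestamps against (time of cleanup run - retention period).
        backups = {
            "db1.bak": datetime(2012, 1, 1, 10, 4),
            "db2.bak": datetime(2012, 1, 1, 10, 6),
        }
        now = datetime(2012, 1, 4, 10, 5)        # day 4, cleanup step reached at 10:05
        cutoff = now - timedelta(days=3)         # 2012-01-01 10:05

        for name, mtime in sorted(backups.items()):
            print(name, "delete" if mtime < cutoff else "keep")
        # Under this assumption: db1.bak -> delete, db2.bak -> keep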

    Read the article

  • Few questions about imaging the OS

    - by user23950
    How much time will it take to back up 49 GB? Here are the details: OS: Windows 7, processor: dual core 2.50 GHz, 2 GB RAM. I'll use the free version of Macrium Reflect and back up to a portable hard drive from Seagate. I have installed MS Visual Studio 2008, NetBeans, and some applications from Master Collection CS4. I will only back up 1 partition.
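
    As a rough, back-of-the-envelope estimate (a sketch that assumes the imaging speed is limited mainly by sustained disk throughput and ignores compression), divide the data size by an assumed transfer rate:

        # The throughput figures are illustrative assumptions, not measurements.
        data_gb = 49
        for rate_mb_s in (30, 60, 90):           # assumed sustained MB/s to the external drive
            minutes = data_gb * 1024 / rate_mb_s / 60
            print(f"At {rate_mb_s} MB/s: about {minutes:.0f} minutes")
        # i.e. roughly 28, 14 and 9 minutes respectively; Macrium Reflect's
        # compression can shorten this, a USB 2.0 connection can lengthen it.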

    Read the article

  • Remove folder structure from archive and fix error

    - by Michael
    I am trying to make a script to back up each of my Plesk hosts to individual files, and I am having two problems:

    1. I would like to remove the folder structure from the archive (the tar is 3 folders deep).
    2. I am getting this error: tar: Removing leading `/' from member names

    The code:

        FILES=/var/www/vhosts/*
        FNAME=""
        for f in $FILES
        do
            FNAME=`basename $f`
            tar cfv "/root/backup/ftp/$FNAME.tar" $f
        done

    Sample output:

        tar: Removing leading `/' from member names
        /var/www/vhosts/mydomain.com/
        /var/www/vhosts/mydomain.com/conf
        /var/www/vhosts/mydomain.com/etc/
        /var/www/vhosts/mydomain.com/etc/group
        /var/www/vhosts/mydomain.com/etc/termcap
        /var/www/vhosts/mydomain.com/etc/passwd
        /var/www/vhosts/mydomain.com/usr/
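
    With GNU tar, the leading prefix can be stripped by changing directory first with its -C option. The sketch below does the same thing with Python's tarfile module, in case that is easier to extend later; it assumes Python is available on the server and reuses the paths from the script above, so each archive stores entries as mydomain.com/... rather than /var/www/vhosts/mydomain.com/...:

        import os
        import tarfile

        # Sketch only: one archive per vhost, without the /var/www/vhosts prefix.
        VHOSTS = "/var/www/vhosts"
        DEST = "/root/backup/ftp"

        for name in os.listdir(VHOSTS):
            src = os.path.join(VHOSTS, name)
            if not os.path.isdir(src):
                continue
            with tarfile.open(os.path.join(DEST, name + ".tar"), "w") as tar:
                # arcname controls the path recorded in the archive, so the
                # entries start at the vhost directory itself.
                tar.add(src, arcname=name)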

    Read the article

  • Windows 7 Ultimate Restore From Another Machine

    - by Guy Thomas
    The situation: I have an old machine running Windows Server 2008, and I make a backup of selected directories. On the new machine I install Windows 7 Ultimate (and Windows Server 2008, dual boot), but I cannot restore the files that I backed up on the old machine. On the new machine, neither the Windows 7 nor the Windows Server 2008 Restore (Recovery) can 'see' the backup files, even though they are on the local machine. Is this behaviour (non-behaviour) 'by design', or am I missing something?

    Read the article

  • ARCserve creates additional jobs

    - by wullxz
    We're running CA ARCserve Backup r12.5 (Build 5854) - Small Business Server Edition on our Small Business Server 2008. There is a daily job which saves the system drive, Exchange and 3 databases to our backup storage. It seems like this daily job creates these additional jobs every time it runs, and I don't know why it's doing this. Can anybody tell me why this is happening? (I'm sorry, the screenshot is in German; "Ergänzungsjob" means something like "supplementary job" or "additional job".)

    Read the article

  • Backing up data from server to laptop?

    - by Patrick
    I need a tool to automatically back up my Drupal installations from my server to my laptop. In other words, I need to copy 1 folder (all my Drupal sites are inside this folder) and all databases. So I was wondering if I just need to write a script on my laptop that:

    - connects to the server every week
    - copies the folder along with all MySQL databases
    - informs me by email if the backup has been successful

    Do you know if I can find some tutorial for it, or download such a script? Thanks
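
    A minimal sketch of such a script in Python (every host name, path and address below is a placeholder; it assumes rsync and key-based ssh access to the server, mysqldump credentials available there, and a local mail relay for the notification):

        import smtplib
        import subprocess
        from email.message import EmailMessage

        # Placeholder values for illustration only.
        SERVER = "user@example.com"
        REMOTE_DRUPAL_DIR = "/var/www/drupal-sites/"
        LOCAL_BACKUP_DIR = "/home/patrick/backups/"
        MAIL_FROM = "backup@example.com"
        MAIL_TO = "patrick@example.com"

        def run_backup():
            # Copy the folder that holds every Drupal installation.
            subprocess.run(
                ["rsync", "-az", "--delete",
                 f"{SERVER}:{REMOTE_DRUPAL_DIR}", LOCAL_BACKUP_DIR + "drupal/"],
                check=True,
            )
            # Dump all MySQL databases on the server (credentials assumed to be
            # in ~/.my.cnf there) and store the dump locally.
            dump = subprocess.run(
                ["ssh", SERVER, "mysqldump --all-databases"],
                check=True, capture_output=True,
            )
            with open(LOCAL_BACKUP_DIR + "all-databases.sql", "wb") as f:
                f.write(dump.stdout)

        def notify(subject, body):
            msg = EmailMessage()
            msg["From"], msg["To"], msg["Subject"] = MAIL_FROM, MAIL_TO, subject
            msg.set_content(body)
            with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
                smtp.send_message(msg)

        if __name__ == "__main__":
            try:
                run_backup()
                notify("Backup OK", "Weekly Drupal backup completed successfully.")
            except Exception as exc:
                notify("Backup FAILED", f"Weekly Drupal backup failed: {exc}")

    Running it weekly is then just a cron (or task scheduler) entry on the laptop.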

    Read the article

  • Few questions on restoring an OS from an image

    - by user23950
    Is it possible to back up and restore more than 1 partition? For example, I have a multi-boot computer and I want to back them all up (Windows XP, Windows 7, and Ubuntu). If it is possible, what are the applications that offer this kind of feature? Please give me an idea of how this works, if it's possible at all.

    Read the article

  • Restore DPM 2010 protection groups from partitions

    - by Dragouf
    Hello, I have Data Protection Manager (DPM) 2010. I did a backup of my system, which has been saved into different partitions. The computer running DPM crashed and is not allowing me to restore the backup; however, I still have all the backups as partitions. How can I restore the multiple protection groups from the existing physical partitions? I have been researching the MSDN documentation for a solution, but no luck so far. Thanks for your help.

    Read the article

  • How to back up a Windows Home Server over the network?

    - by Jay Bazuzi
    One of the very few reasons I have to physically interact with my Windows Home Server is to back it up to an external hard drive, with the "Backup Server" feature. It would be more convenient to plug the external drive in to a desktop PC, and then do the backup over the network. Is there a way to do this? I've heard a little about iSCSI, but as far as I can tell it costs money, and I'm hoping for something free.

    Read the article

  • How long would this file transfer take?

    - by CT
    I have 12 hours to back up 2 TB of data. I would like to back it up to a network share on a computer using consumer WD 2 TB Black 7200 rpm hard drives, over Gigabit Ethernet. What other variables would I need to consider to see if this is feasible? How would I set up this calculation?
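
    As a rough sketch of the calculation (the rates are assumptions, not measurements): the copy is limited by the slowest link, and gigabit Ethernet tops out around 110-118 MB/s in practice, which is in the same range as a consumer 7200 rpm drive's sustained write speed, so:

        # Back-of-the-envelope feasibility check; all rates are assumed.
        data_mb = 2 * 1000 * 1000                # 2 TB, decimal units
        for rate_mb_s in (60, 90, 110):          # assumed end-to-end sustained MB/s
            hours = data_mb / rate_mb_s / 3600
            print(f"At {rate_mb_s} MB/s: {hours:.1f} hours")
        # ~9.3 h, ~6.2 h and ~5.1 h respectively -- within 12 hours as long as
        # the source disks, protocol overhead (e.g. SMB) and file sizes (many
        # small files are much slower) do not drag the real rate down.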

    Read the article

  • `find` command not available in web host, how to implement a delete based on modification time using other commands?

    - by CalumJEadie
    I'm creating a simple database backup solution for a client using web hosting at DataFlame. The web hosting account provides access to cron but not a shell. I have a database backup script creating regular backups, and I want to automatically remove those more than N days old. I attempted to use

        find $backup_dir -mtime +$keep_days -name "*db.tar.gz" -delete

    however the user executing the script does not have permission to run find. Can you suggest how to implement this without using the find command?
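
    One find-free sketch, assuming the host lets cron run a Python script (the same logic ports easily to PHP or Perl if not); it compares each file's modification time to a cutoff and removes the old ones:

        import fnmatch
        import os
        import time

        # Sketch only; backup_dir and keep_days mirror the variables in the question.
        backup_dir = "/home/example/backups"     # placeholder path
        keep_days = 7                            # keep backups newer than N days
        cutoff = time.time() - keep_days * 86400

        for name in os.listdir(backup_dir):
            path = os.path.join(backup_dir, name)
            if (fnmatch.fnmatch(name, "*db.tar.gz")
                    and os.path.isfile(path)
                    and os.path.getmtime(path) < cutoff):
                os.remove(path)                  # same effect as find's -delete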

    Read the article

  • Macro to collapse all in Solution Explorer in Visual Studio 2010

    - by rod
    Hi All, I found the CollapseAll macro online; it has worked for me in VS2005 and VS2008. However, it only half works in VS2010: it looks like it only collapses the top nodes and not any subnodes that may be expanded. Any ideas? Thanks, rod.

        Sub CollapseAll()
            ' Get the Solution Explorer tree
            Dim UIHSolutionExplorer As UIHierarchy
            UIHSolutionExplorer = DTE.Windows.Item(Constants.vsext_wk_SProjectWindow).Object()
            ' Check if there is any open solution
            If (UIHSolutionExplorer.UIHierarchyItems.Count = 0) Then
                ' MsgBox("Nothing to collapse. You must have an open solution.")
                Return
            End If
            ' Get the top node (the name of the solution)
            Dim UIHSolutionRootNode As UIHierarchyItem
            UIHSolutionRootNode = UIHSolutionExplorer.UIHierarchyItems.Item(1)
            UIHSolutionRootNode.DTE.SuppressUI = True
            ' Collapse each project node
            Dim UIHItem As UIHierarchyItem
            For Each UIHItem In UIHSolutionRootNode.UIHierarchyItems
                'UIHItem.UIHierarchyItems.Expanded = False
                If UIHItem.UIHierarchyItems.Expanded Then
                    Collapse(UIHItem)
                End If
            Next
            ' Select the solution node, or else when you click on the solution window
            ' scrollbar, it will synchronize the open document with the tree and pop
            ' out the corresponding node which is probably not what you want.
            UIHSolutionRootNode.Select(vsUISelectionType.vsUISelectionTypeSelect)
            UIHSolutionRootNode.DTE.SuppressUI = False
        End Sub

        Private Sub Collapse(ByVal item As UIHierarchyItem)
            For Each eitem As UIHierarchyItem In item.UIHierarchyItems
                If eitem.UIHierarchyItems.Expanded AndAlso eitem.UIHierarchyItems.Count > 0 Then
                    Collapse(eitem)
                End If
            Next
            item.UIHierarchyItems.Expanded = False
        End Sub

    Read the article

  • VBA Solution to VLOOKUP with Hyperlinks

    - by Emily2
    I am looking for some help with a VBA solution for preserving hyperlinks when using VLOOKUP in Excel (2010). I have a load of data on Sheet 1 for internal use only, and a cut-down version of this on Sheet 2. Instead of recreating Sheet 2 every time, I am looking to have a working version which updates every time Sheet 1 is updated. Thus, I have used VLOOKUP on Sheet 2 so that only the desired info is returned there. However, the problem was that Sheet 1 contained hyperlinks to external websites in many cells, and these would not pull through to Sheet 2 using VLOOKUP. With some help, however, the hyperlinks now pull through using the following VBA solution:

        Function GetHyperLink(r As Range) As String
            If r.Hyperlinks.Count Then
                GetHyperLink = r.Hyperlinks(1).Address
            End If
        End Function

    And I am using the following formula in the relevant cell(s) in Sheet 2:

        =HYPERLINK(GetHyperLink(INDEX('Sheet 1'!$B$1:$B$10001,MATCH(A4,'Sheet 1'!$A$1:$A$10001,0))),(VLOOKUP(A4,'Sheet 1'!$A$1:$B$10001,2,FALSE)))

    However, the problem is with formatting: every cell on Sheet 2 is formatted blue and underlined, even though some of them do not contain a hyperlink! Is someone able to help with a VBA solution/formula to fix this last piece of the puzzle? Many thanks, in anticipation.

    Read the article

  • SQL Server Replication Backup

    - by user18039
    Hi, we have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise that there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database - one-way replication. The solution has worked well in test. I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting DB or any of the system replication (e.g. distribution) databases fail, then it's no big deal - we can clear it all down and then re-create the replication. I realise that taking a complete snapshot of the OLTP would be time consuming (approx 5 hours), but I'd be more relaxed about that than trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than I have, e.g. if there were multiple subscribers with 2-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.

    Read the article

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour. I was looking into solutions like memcached, however this particular one doesn't fulfil all our requirements, which are listed below:

    1. The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    2. The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    3. The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1 GB max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as happens in memcached clients when a node goes down.
    4. Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster).
    5. There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    6. Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
    7. The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
    8. Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.

    Read the article

  • Outlook 2007 Backup to D:\Outlook Fails - Access Denied, Write-Protected or File In Use

    - by nicorellius
    I can successfully save the Outlook PST file to the default location on the C drive (C:\Documents and Settings\user\ ... \Outlook) but when I change the backup save to directory to Outlook on the D drive I get the error: Cannot copy Outlook: Access is denied. Make sure the disk is not full or write protected and that the file is not currently in use. I suppose it is not that crucial that I save this file here, but I have never seen this problem before and I have made this same change in the past. I did some searching in this knowledge exchange as well as elsewhere on changing permissions, etc, but this didn't help. I discovered that the folder on my D drive (called Outlook) is not write-protected and nor is it read-only, as I can save to and modify files in that directory, as well as rename and delete the directory itself. At the time when I installed this version of Outlook, I used a previously saved Personal Folder (a backup PST file) and I thought having this still open in Outlook was causing the trouble. But I closed it and still have the same problem. I know this is probably a silly error on my part but I would like to figure it out. I'm new to superuser, but the answers I see are usually very good, so I thought I would post my first question. Thanks in advance.

    Read the article

  • SQL 2000 Backup/Export Process - Can't find SQL 2000 Enterprise Manager, Can't use Mgmt Studio Express

    - by 1nsane
    I need to make a backup of a client's SQL 2000 database, however there are a few issues preventing me from doing so using the traditional methods. I've tried using SQL Management Studio Express, but the host doesn't give sufficient privileges to create a backup and I'm getting some strange error messages. I've also tried doing the "Generate Scripts" to recreate the schema, then using the DTS Wizard to migrate the data, but the IDs set up with the identity specification property are not consistent with the live database once copied over. This results in some foreign key breakage... If I remember correctly, I was able to use Microsoft SQL 2000 Enterprise Manager to perform the task before, but I can't find this anywhere... it seems Microsoft has pulled most SQL Server 2000 stuff from their site. Does anyone know where I can find a copy of Enterprise Manager (or a trial of SQL Server 2000, which I believe comes with the component)? Or conversely, does anyone know of any other tools (preferably non-commercial) that are capable of mirroring remote SQL Server 2000 DBs? Thanks!

    Read the article

  • Bacula virtual backup job doesn't run, no output?

    - by Zoredache
    I am trying to get Virtual Backups working, but when I try to run a virtual backup job, it appears to get created, but then never seems to actually run. I have a full and a couple of incremental backups:

        status director
        JobId  Level   Files    Bytes    Status  Finished         Name
        ====================================================================
         1283  Full    10,565   1.963 G  OK      21-Dec-12 09:47  nms-Job
         1284  Incr       314   129.6 M  OK      21-Dec-12 09:49  nms-Job
         1285  Incr       230   147.2 M  OK      21-Dec-12 09:51  nms-Job
         1288  Incr       525   138.8 M  OK      21-Dec-12 11:25  nms-Job

    I attempt to start a job from bconsole like this:

        *run job=nms-Job level=VirtualFull
        Using Catalog "MySQL"
        Run Backup job
        JobName:  nms-Job
        Level:    VirtualFull
        Client:   nms-FileDaemon
        FileSet:  nms-FileSet
        Pool:     nms-pool (From Job resource)
        Storage:  File_d1 (From Pool resource)
        When:     2012-12-21 13:07:54
        Priority: 10
        OK to run? (yes/mod/no):
        Job queued. JobId=1291

    Then my new job just sits there, doing nothing. The JobStatus shows that the job was created, but it appears to never run. All the full and incremental backups are terminating normally.

        *llist jobid=1291
                  JobId: 1,291
                    Job: nms-Job.2012-12-21_13.07.56_07
                   Name: nms-Job
            PurgedFiles: 0
                   Type: B
                  Level: F
               ClientId: 4
                   Name: nms-FileDaemon
              JobStatus: C
              SchedTime: 2012-12-21 13:07:54
              StartTime: 2012-12-21 13:07:56
                EndTime: 0000-00-00 00:00:00
            RealEndTime: 0000-00-00 00:00:00
               JobTDate: 1,356,124,076
           VolSessionId: 0
         VolSessionTime: 0
               JobFiles: 0
              JobErrors: 0
        JobMissingFiles: 0
                 PoolId: 19
               PooLname: nms-pool
             PriorJobId: 0
              FileSetId: 11
                FileSet: nms-FileSet

    I am getting very frustrated that this isn't working, mostly because it isn't giving me any error logs or output at all. I submit the job, and as far as I can tell nothing happens. Is there some status or debugging level that I can set to get useful information about why this isn't working? What can I do to make this work? I was originally running Bacula 5.0.2 on Debian Squeeze; out of frustration, I upgraded to 5.2.6 from the backports repository, hoping that a new version might give me better results.

    Read the article

  • Backup Picasa 'people' tags data

    - by pelms
    OK, so I've spent a fair amount of time putting names to faces in Picasa 3.5, but in a few days (hopefully) my copy of Windows 7 should arrive and I'll need to reinstall Windows. So, does anyone know what I need to back up so that I don't have to re-enter all those name tags? N.B. I'm on Windows 7 RC and know that I don't have to do a clean reinstall, but I would prefer to. Outcome: I clean installed Windows 7 and downloaded and installed Picasa. Unfortunately, the download link on the UK Picasa homepage still pointed to Picasa 3.0 (rather than 3.5), which doesn't have face recognition. This scanned my photo folders and overwrote the picasa.ini files along with the people information :¬( Fortunately I'd backed up the photos before installing Win 7, so after uninstalling Picasa 3.0 (along with its database), restoring the photos from backup and installing Picasa 3.5, I finally got my face names back. Extra: Google has now posted advice on how to migrate to Windows 7 and keep your Picasa database, meaning that it will not need to rescan your photos and will retain all information about them, including name tags. They have a method for upgrading and for a clean install of Win 7. Basically you need to back up: "C:\Users\%username%\AppData\Local\Google\Picasa2" and "C:\Users\%username%\AppData\Local\Google\Picasa2Albums"
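
    A minimal sketch of that backup step, copying the two folders named above to a placeholder destination before reinstalling Windows:

        import os
        import shutil

        # The destination drive/path is a placeholder; adjust as needed.
        src_base = os.path.expandvars(r"C:\Users\%username%\AppData\Local\Google")
        dest_base = r"D:\PicasaBackup"

        for folder in ("Picasa2", "Picasa2Albums"):
            src = os.path.join(src_base, folder)
            dst = os.path.join(dest_base, folder)
            shutil.copytree(src, dst)            # fails if dst already exists
            print("Copied", src, "->", dst)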

    Read the article

  • PostgreSQL continuous archiving not running archive_command

    - by Whatsit
    I've been trying to set up continuous archiving for a simple, test PostgreSQL 9.0 database, as per the documentation. In postgres.conf I've set:

        wal_level = archive
        archive_mode = on
        archive_command = 'touch /home/myusername/backup/testtouch'
        archive_timeout = 30s

    ...and restarted PostgreSQL. The file listed by touch never appears. I can manually run the touch command and it works as expected. If I try to create a backup, it waits forever for the archive_command. In psql:

        postgres=# SELECT pg_start_backup('touchtest');
         pg_start_backup
        -----------------
         0/14000020
        (1 row)

        postgres=# SELECT pg_stop_backup();
        NOTICE:  pg_stop_backup cleanup done, waiting for required WAL segments to be archived
        WARNING:  pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)
        HINT:  Check that your archive_command is executing properly. pg_stop_backup can be cancelled safely, but the database backup will not be usable without all the WAL segments.

    What would cause this? How can I troubleshoot it? Additional info: Running on CentOS 5.4. PostgreSQL 9.0.2 installed as root.

    Read the article

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long-term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives rip the tape out). So... write to 2 tapes, I hear you say. Is that what others do? Have 2 (or more) tapes of the same data for redundancy? But then the other issue is that tapes cannot usually be read by different backup software vendors. E.g. if you go from ARCserve to Backup Exec to Commvault over 10 years, you would need to keep all 3 systems so that you could restore old data. Likewise for hardware: old tapes might not be barcoded, and might not be compatible with the new library, etc. So do you keep old tape hardware AND old software just in case you might need to restore a 10-year-old file? Or... when you move to a new backup system, do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

    Read the article
