Search Results

Search found 5747 results on 230 pages for 'backup'.

  • spring roo backup command lost my files

    - by Moddy
    I generated a Spring Roo project and modified the .jspx files to my own styles. Unfortunately, when I used the backup command, Spring Roo regenerated the files back to their original state, so my .jspx files no longer have my styles. How can I recover my files after running this command?

    Read the article

  • Simple oracle backup without using exp or expdp

    - by Jacob
    I have Oracle 10g installed on Windows in C:\oracle. If I stop all Oracle services, is it safe to back up by just copying the entire directory (e.g., to C:\oracle_bak), or am I significantly better off using expdp? Pointers to docs/websites are very welcome; I wasn't able to Google up anything relevant.
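
    For what it's worth, a cold (offline) copy like this is a valid physical backup as long as every Oracle service, including the listener, is stopped first; Data Pump gives a logical export instead, which is portable but slower to restore wholesale. A sketch of both, assuming default 10g service names on Windows (OracleServiceORCL for a SID of ORCL; check yours with net start):

        rem cold backup: stop services, copy the whole Oracle home, restart
        net stop OracleOraDb10g_home1TNSListener
        net stop OracleServiceORCL
        xcopy C:\oracle C:\oracle_bak /E /H /K /Y
        net start OracleServiceORCL
        net start OracleOraDb10g_home1TNSListener

        rem logical alternative with Data Pump (DATA_PUMP_DIR is the default directory object)
        expdp system/password FULL=y DIRECTORY=DATA_PUMP_DIR DUMPFILE=full.dmp LOGFILE=full_exp.log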

    Read the article

  • Copy files from subdirectories into one directory.

    - by Derek Organ
    Ok, I have a bunch of files in this file structure format:
    /backup/daily/database1/database1-2011-01-01.sql
    /backup/daily/database1/database1-2011-01-02.sql
    /backup/daily/database1/database1-2011-01-03.sql
    /backup/daily/database1/database1-2011-01-04.sql
    /backup/daily/database1/database1-2011-01-05.sql
    /backup/daily/database1/database1-2011-01-06.sql
    /backup/daily/database1/database1-2011-01-07.sql
    /backup/daily/anotherdb/anotherdb-2011-01-01.sql
    /backup/daily/anotherdb/anotherdb-2011-01-02.sql
    /backup/daily/anotherdb/anotherdb-2011-01-03.sql
    /backup/daily/anotherdb/anotherdb-2011-01-04.sql
    /backup/daily/anotherdb/anotherdb-2011-01-05.sql
    /backup/daily/anotherdb/anotherdb-2011-01-06.sql
    /backup/daily/anotherdb/anotherdb-2011-01-07.sql
    /backup/daily/stuff/stuff-2011-01-01.sql
    /backup/daily/stuff/stuff-2011-01-02.sql
    /backup/daily/stuff/stuff-2011-01-03.sql
    /backup/daily/stuff/stuff-2011-01-04.sql
    /backup/daily/stuff/stuff-2011-01-05.sql
    /backup/daily/stuff/stuff-2011-01-06.sql
    /backup/daily/stuff/stuff-2011-01-07.sql
    And there are lots, lots more. Ultimately I want to import all the 2011-01-07.sql files into my MySQL database. This works for one:
    mysql -u root -ppassword < /backup/daily/database1/database1-2011-01-07.sql
    That will nicely restore that database from this backup file. I want to run a process that does this for all databases. So my plan is to first cp all 2011-01-07 sql files into a tmp dir, e.g.:
    cp /backup/daily/*/*2011-01-07*.sql /tmp/all
    The command above unfortunately isn't working; I get an error: cp: cannot stat ..... No such file or directory So can you guys help me out with this? For bonus points, if you can tell me how to do the next step, which is importing all those files into MySQL one at a time, that would be great too. I really want to do these in two separate steps because I need to delete a few sql files manually from the tmp dir before I run the restore command. So I need:
    1) a command to copy all 2011-01-07 sql files to a tmp dir
    2) a command to import all the files in that dir into mysql
    I know it's possible to do in one but for lots of reasons I really would prefer to do it in two steps.
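
    A minimal sketch of the two steps, assuming bash and that each dump restores into a database named after its filename prefix, as in the one-database example above. Note that an unmatched glob is passed to cp literally, which produces exactly the "cannot stat ... No such file or directory" error, so it is worth checking the pattern with ls first:

        mkdir -p /tmp/all
        # verify the glob actually matches something before copying
        ls /backup/daily/*/*2011-01-07*.sql
        cp /backup/daily/*/*2011-01-07*.sql /tmp/all/
        # ... manually prune /tmp/all here, then:
        for f in /tmp/all/*.sql; do
            db=$(basename "$f")
            db=${db%-2011-01-07.sql}        # database1-2011-01-07.sql -> database1
            mysql -u root -ppassword "$db" < "$f"
        done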

    Read the article

  • DPM 2007 clashing with existing SQL backup job

    - by Paul D'Ambra
    I've recently installed a DPM 2007 server on Server 2003 and have set up a protection group against a Server 2003 server running SQL 2005 SP3. The SQL server in question has a full backup (as a SQL Agent job) once a day and transaction log backups hourly. These are zipped up and FTP'd to a server offsite by a scheduled task. Since adding the DPM job I'm receiving many error messages:
    "DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job. The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name since the last backup. All incremental backup jobs will fail until an express full backup runs."
    My google-fu suggests that I need to change the full backup my SQL Agent job is running to a copy-only backup. But I think this means that I can't use that backup with the transaction logs to restore the database if the building (including the DPM server) burns down. I'm sure I'm missing something obvious and thought I'd see what the hivemind suggests. It is an option to set up a co-located DPM server elsewhere and have DPM stream the backup, but that's obviously more expensive than the current setup. Many thanks in advance.
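
    For reference, the copy-only variant the poster's research points at looks something like this (a sketch via sqlcmd; the server, database name, and path are placeholders):

        sqlcmd -S SERVER_NAME -E -Q "BACKUP DATABASE [DB_Name] TO DISK = N'D:\Backups\DB_Name_copyonly.bak' WITH COPY_ONLY, INIT"

    A copy-only full backup can still be restored and have transaction log backups applied on top of it, so the offsite FTP set remains usable for disaster recovery; what it avoids is disturbing the backup chain that DPM is managing.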

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I am currently taking full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday and then on Monday I also put the 1st transaction log backup in the same folder. On Tuesday, before I take the 2nd transaction log backup, I move the 1st transaction log backup from folder1 into folder2, and then I take the 2nd transaction log backup and put it in folder1. Same thing on Wednesday, Thursday and so on. Basically, in folder1 I always have the latest full backup and the latest transaction log backup, while the other transaction log backups are in folder2. My question is: when SQL Server is about to take, let's say, the 4th (Thursday) transaction log backup, does it look for the previous transaction log backups (1st, 2nd, and 3rd) so that the new backup will only include the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? Basically, I am asking this because all my transaction log backups seem to be about the same size, and I thought that their size would depend on the amount of transactions since the last transaction log backup. Can anyone please explain if my assumptions are right? Thanks...
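
    For what it's worth: SQL Server never inspects the earlier backup files on disk, so moving them between folders changes nothing. The log chain is tracked inside the database itself via log sequence numbers (LSNs), and each log backup contains exactly the log generated since the previous log backup; similar sizes usually just mean a similar volume of activity (or a recurring job) between backups. The chain is also recorded in msdb, where it can be inspected (a sketch; server and database names are placeholders, and in the type column 'D' is a full backup, 'L' a log backup):

        sqlcmd -S SERVER_NAME -E -Q "SELECT database_name, type, first_lsn, last_lsn, backup_finish_date FROM msdb.dbo.backupset WHERE database_name = 'MyDb' ORDER BY backup_finish_date DESC"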

    Read the article

  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has Win2k3 SP2 and Win2k SP4 servers, site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now. Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the Win2k3 servers. I understand what the message means, no need to send me links to the TechNet article explaining it. I'll improve my backups. What I'm more curious about is why did I just start getting this message now? Why haven't I been getting it for the past 3 years?!? Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with their own controller). Don't worry, I did proper DCpromo exercises and made sure we didn't lose anything. But would shutting those down possibly be related to why I get this error now? This won't keep me awake at night but I am curious as to what changed...
        Event Type: Warning
        Event Source: NTDS Replication
        Event Category: Backup
        Event ID: 2089
        Date: 3/28/2010
        Time: 9:25:27 AM
        User: NT AUTHORITY\ANONYMOUS LOGON
        Computer: RedactedName
        Description: This directory partition has not been backed up since at least the following number of days.
        Directory partition: DC=MyDomain,DC=com
        'Backup latency interval' (days): 30
    It is recommended that you take a backup as often as possible to recover from accidental loss of data. However if you haven't taken a backup since at least the 'backup latency interval' number of days, this message will be logged every day until a backup is taken. You can take a backup of any replica that holds this partition. By default the 'Backup latency interval' is set to half the 'Tombstone Lifetime Interval'. If you want to change the default 'Backup latency interval', you could do so by adding the following registry key. 'Backup latency interval' (days) registry key: System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days) For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
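
    In case it's useful, the event text above already names the knob that controls the warning itself; setting it from an elevated prompt looks something like this (a sketch; 30 days is the default threshold, and raising it only quiets the message, it does not change the underlying backup exposure):

        reg add "HKLM\System\CurrentControlSet\Services\NTDS\Parameters" /v "Backup Latency Threshold (days)" /t REG_DWORD /d 30 /f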

    Read the article

  • How do I restore a backup of my keyring (containing ssh key passphrases, nautilus remote filesystem passwords and wifi passwords)?

    - by con-f-use
    I changed the disk on my laptop and installed Ubuntu on the new disk. The old disk had 12.04, upgraded to 12.10, on it. Now I want to copy my old keyring with WiFi passwords, FTP passwords for Nautilus, and ssh key passphrases. I have all the data from the old disk available (it is now a USB disk and I did not delete the old data yet or do anything with it; I could still put it in the laptop and boot from it like nothing happened). The old method of just copying ~/.gconf/... and ~/.gnome2/keyrings won't work. Did I miss something?
    Edit 1: I figure one needs to copy files not located in the user's home directory as well. I copied the whole old /home/confus (which is my home directory) to the fresh install, to no effect. That whole copy is now reverted to the fresh install's home directory, so my /home/confus is as it was after the fresh install.
    Edit 2: The folder /etc/NetworkManager/system-connections seems to be the place for WiFi passwords. /usr/share/keyrings could be important as well for ssh keys; that's the only sensible thing that a search came up with: find /usr/ -name "*keyring*"
    Edit 3: Still no ssh and FTP passwords from the keyring. What I did:
    1) Converted the old hard drive to a USB drive
    2) Put the new drive in the laptop and installed a fresh 12.10 there
    3) Booted from the old HDD via USB and copied its /etc/NetworkManager/system-connections, ~/.gconf/, ~/.gnome2/keyrings, and ~/.ssh over to the new disk
    4) Confirmed that all keys on the old install work
    5) Booted from the new disk
    Result: no passphrases for ssh keys, no FTP passwords in the keyring. At least the WiFi passwords were migrated.
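
    One thing worth checking (an assumption based on the GNOME release shipped with 12.10, not something the poster confirmed): newer GNOME versions read the login keyring from ~/.local/share/keyrings rather than ~/.gnome2/keyrings, so copying the old keyring files there may be the missing step. A sketch, assuming the old disk is mounted at /media/olddisk and the home directory is confus as above:

        # run while logged out of the GNOME session (e.g. from a console),
        # so the keyring daemon does not overwrite the copied files on logout
        mkdir -p ~/.local/share/keyrings
        cp -a /media/olddisk/home/confus/.gnome2/keyrings/*.keyring ~/.local/share/keyrings/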

    Read the article

  • Amazon EC2 EBS automatic backup one-liner works manually but not from cron

    - by dan
    I am trying to implement an automatic backup system for my EBS volume on Amazon AWS. When I run this command as ec2-user:
    /opt/aws/bin/ec2-create-snapshot --region us-east-1 -K /home/ec2-user/pk.pem -C /home/ec2-user/cert.pem -d "vol-******** snapshot" vol-********
    everything works fine. But if I add this line into /etc/crontab and restart the crond service:
    15 12 * * * ec2-user /opt/aws/bin/ec2-create-snapshot --region us-east-1 -K /home/ec2-user/pk.pem -C /home/ec2-user/cert.pem -d "vol-******** snapshot" vol-********
    it doesn't work. I checked /var/log/cron and there is this line, so the command does get executed:
    Dec 13 12:15:01 ip-10-204-111-94 CROND[4201]: (ec2-user) CMD (/opt/aws/bin/ec2-create-snapshot --region us-east-1 -K /home/ec2-user/pk.pem -C /home/ec2-user/cert.pem -d "vol-******** snapshot" vol-******** )
    Can you please help me troubleshoot the problem? I guess it is some environment problem, maybe the lack of some variable. If that's the case, I don't know what to do about it. Thanks.
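
    The usual culprit for "works in my shell, fails from cron" with the EC2 API tools is exactly that: cron runs with a near-empty environment, so JAVA_HOME and EC2_HOME, which the ec2-* wrapper scripts need, are unset. A sketch of an /etc/crontab that sets them first; the paths shown are common Amazon Linux defaults and may differ on your AMI (echo $JAVA_HOME and $EC2_HOME in your working shell to find yours):

        JAVA_HOME=/usr/lib/jvm/jre
        EC2_HOME=/opt/aws/apitools/ec2
        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin
        15 12 * * * ec2-user /opt/aws/bin/ec2-create-snapshot --region us-east-1 -K /home/ec2-user/pk.pem -C /home/ec2-user/cert.pem -d "vol-******** snapshot" vol-********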

    Read the article

  • Using rsync to take a backup of a folder

    - by Ali
    Hi, I have a Linux server with a NAS which is mounted as the folder "mount". I have a website in the "public_html" folder. I want to take a backup of the website into the mount folder automatically at certain intervals, e.g. every hour. I read that there is something called "rsync" which is used to keep two folders in sync, and that it doesn't copy all files every time: instead it checks whether a file has been changed and only updates changed files. How do I use it to make automatic backups? I have root access to the server. Thanks.
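
    A minimal sketch (the paths are assumptions based on the question; the trailing slashes matter to rsync, and --delete makes the destination a true mirror, so deletions propagate too):

        # one-off mirror of the site into the NAS mount
        rsync -a --delete /home/user/public_html/ /mount/public_html-backup/
        # crontab -e entry to repeat it at the top of every hour
        0 * * * * rsync -a --delete /home/user/public_html/ /mount/public_html-backup/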

    Read the article

  • How to schedule daily backup in SQL Server 2008 Web Edition

    - by Xenon
    In SQL Server Management Studio I created a maintenance plan, but it won't work. The error is:
    "Executed as user: LITESPELL-19C34\Administrator. Microsoft (R) SQL Server Execute Package Utility Version 10.0.1600.22 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. The SQL Server Execute Package Utility requires Integration Services to be installed by one of these editions of SQL Server 2008: Standard, Enterprise, Developer, or Evaluation. To install Integration Services, run SQL Server Setup and select Integration Services. The package execution failed. The step failed."
    But on the Microsoft page http://www.microsoft.com/sqlserver/2008/en/us/web.aspx, in the "Automate tasks and policies" section, it is written that backup can be scheduled in this edition. But how?
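
    The error is specific to maintenance plans: they run as SSIS packages, and Web Edition does not include Integration Services. Scheduled backups are still possible without SSIS by running a plain BACKUP DATABASE statement, for example from Windows Task Scheduler via sqlcmd. A sketch (database name, paths, and schedule are placeholders):

        rem C:\Scripts\backup.cmd - plain T-SQL backup, no SSIS required
        sqlcmd -S . -E -Q "BACKUP DATABASE [MyDb] TO DISK = N'D:\Backups\MyDb.bak' WITH INIT"

        rem schedule it daily at 02:00 (run once from an elevated prompt)
        schtasks /create /tn "SQL nightly backup" /tr "C:\Scripts\backup.cmd" /sc daily /st 02:00

    The same BACKUP DATABASE statement can also be scheduled as a SQL Server Agent T-SQL job step if Agent is available on your installation.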

    Read the article

  • Backup AWS Dynamodb to S3

    - by Ali
    It has been suggested in the Amazon docs (http://aws.amazon.com/dynamodb/), among other places, that you can back up your DynamoDB tables using Elastic MapReduce. I have a general understanding of how this could work, but I couldn't find any guides or tutorials on it. So my question is: how can I automate DynamoDB backups (using EMR)? So far, I think I need to create a "streaming" job with a map function that reads the data from DynamoDB and a reduce that writes it to S3, and I believe these could be written in Python (or Java or a few other languages). Any comments, clarifications, code samples, or corrections are appreciated.
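
    For what it's worth, the EMR integration Amazon documents for this goes through Hive rather than a hand-written streaming job: EMR's Hive build ships a DynamoDB storage handler, so an export is a couple of HiveQL statements run on the cluster. A sketch from the master node's shell (table name, column mapping, and bucket are all placeholders):

        hive -e "
        CREATE EXTERNAL TABLE ddb_mytable (id string, payload string)
        STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
        TBLPROPERTIES ('dynamodb.table.name' = 'MyTable',
                       'dynamodb.column.mapping' = 'id:id,payload:payload');
        INSERT OVERWRITE DIRECTORY 's3://my-bucket/dynamodb-backups/MyTable/'
        SELECT * FROM ddb_mytable;
        "

    Automating it is then a matter of launching the job flow on a schedule, e.g. from cron with the EMR command-line tools.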

    Read the article

  • Exchange 2010 and ESE Backup API

    - by Hannes de Jager
    Exchange 2010 does not support the ESE API for doing backups like it did in 2003 and 2007, according to MSDN. I quote: "Exchange 2010 no longer supports the ESE streaming APIs for backup and restore of program files or data. Instead, Exchange 2010 supports only VSS-based backups." So my question is, if this is the case, why is the DLL (ESEBCLI2.DLL) still shipped with Exchange 2010? I found it under C:\Program Files\Microsoft\Exchange Server\V14\Bin. Am I missing something here?

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question.
    Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work, most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP.
    Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because of how many of our business operations are done without an internet connection, and how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they had accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best practices, and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.
    Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following:
    1) Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    2) Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    3) Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    4) Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    5) Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    6) Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).
    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds on clients, if necessary.
I want to emphasize that this isn't a shopping list request. While I wish there was a program out there that did what I want, I've looked pretty hard, and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!

    Read the article

  • An XEvent a Day (17 of 31) – A Look at Backup Internals and How to Track Backup and Restore Throughput (Part 1)

    - by Jonathan Kehayias
    Today's post is a continuation of yesterday's post How Many Checkpoints are Issued During a Full Backup? and the investigation of Database Engine internals with Extended Events. In today's post we'll look at how backups work inside of SQL Server and how to track the throughput of backup and restore operations. This post is not going to cover backups in SQL Server as a topic; if that is what you are looking for, see Paul Randal's TechNet article Understanding SQL Server Backups. Yesterday...(read more)

    Read the article

  • SYS2 Scripts Updated – Scripts to monitor database backup, database space usage and memory grants now available

    - by Davide Mauri
    I've just released three new scripts of my "sys2" script collection that can be found on CodePlex:
    Project Page: http://sys2dmvs.codeplex.com/
    Source Code Download: http://sys2dmvs.codeplex.com/SourceControl/changeset/view/57732
    The three new scripts are the following:
    sys2.database_backup_info.sql
    sys2.query_memory_grants.sql
    sys2.stp_get_databases_space_used_info.sql
    Here are some more details:
    database_backup_info - This script has been made to quickly check if and when a backup was done. It will report the last full, differential and log backup date and time for each database. Along with this information you'll also get some additional metadata that shows whether a database is a read-only database, and its recovery model. By default it will check only the last seven days, but you can change this value just by specifying how many days back you want to check. To analyze the last seven days, and list only the databases with FULL recovery model without a log backup:
    select * from sys2.databases_backup_info(default) where recovery_model = 3 and log_backup = 0
    To analyze the last fifteen days, and list only the databases with FULL recovery model with a differential backup:
    select * from sys2.databases_backup_info(15) where recovery_model = 3 and diff_backup = 1
    I just love this script; I use it every time I need to check that backups are not too old and that t-log backups are correctly scheduled.
    query_memory_grants - This is just a wrapper around sys.dm_exec_query_memory_grants that enriches the default result set with the text of the query for which memory has been granted or is waiting for a memory grant and, optionally, its execution plan.
    stp_get_databases_space_used_info - This is a stored procedure that lists all the available databases and, for each one, the overall size, the used space within that size, the maximum size it may reach, and the auto grow options. This is another script I use every day in order to be able to monitor, track and forecast database space usage.
    As usual, feedback and suggestions are more than welcome!

    Read the article

  • How To Back Up A MySQL Database Using phpMyAdmin

    - by Jyoti
    It is very important to back up your MySQL database; you will probably realize it when it is too late. A lot of web applications use MySQL for storing their content: blogs and many other things. When you have all your content as HTML files on your web server, it is very easy to keep it safe from crashes: you just keep a copy on your own PC and upload it again after the web server is restored. All the content in the MySQL database must also be backed up. If you have spent a lot of time creating the content and it is only stored on the MySQL server, you will feel very bad if it is lost forever. Backing it up once every month or so makes sure you never lose too much of your work in case of a server crash, and it will make you sleep better at night. It is easy and fast, so there is no reason not to do it.
    Step 1: Log into phpMyAdmin on your server.
    Step 2: Select the database that you would like to back up from the drop-down menu called Database.
    Step 3: A new page will be loaded in phpMyAdmin showing the selected database. To proceed with the backup, click on the Export tab.
    Step 4: The options that you should select, apart from the default ones, are "Save as file", which will save the file locally to your computer in .sql format, and "Add DROP TABLE", which adds DROP TABLE statements so that tables which already exist are replaced when the backup is restored.
    Step 5: Click on the Go button to start the export/backup procedure for your database. A download window will pop up prompting for the exact place where you would like to save the file on your local computer. It is possible that the download starts automatically; this depends on your browser's settings.
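
    If you have shell access to the server, the same backup can be taken without phpMyAdmin. A sketch using mysqldump (user and database names are placeholders; --add-drop-table mirrors the "Add DROP TABLE" export option described above):

        mysqldump -u root -p --add-drop-table mydatabase > mydatabase-backup.sql
        # restore later with:
        mysql -u root -p mydatabase < mydatabase-backup.sql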

    Read the article

  • Backup and Recovery of Windows 8 - Tip-of-the-Day

    - by KeithMayer
    Have you recently downloaded Windows 8 RTM? In this article, we'll introduce you to the new options available for making backup and recovery in Windows 8 easier than ever, including Windows 8 File History, Windows System Backup, and Windows 8 Refresh & Reset PC. Use these options to define your backup and recovery plan so that you'll be prepared to restore the system and your data when needed.

    Read the article

  • Backup Compression - time for an overhaul

    - by jchang
    Database backup compression is incredibly useful and valuable. It became popular with the then-Imceda (later Quest, now Dell) LiteSpeed. SQL Server 2008 added backup compression for Enterprise Edition only. The SQL Server EE native backup feature allows only a single compression algorithm, one that opts for CPU efficiency over the degree of compression achieved. In the long-ago past, this strategy was essential. But today the benefits are irrelevant while the lower compression is becoming...(read more)

    Read the article

  • The backup set holds a backup of a database other than the existing database (Microsoft SQL Server,

    - by Shravan
    A few days back I took a backup of my development server, and I am trying to restore it on my own system. I am getting the following error: "The backup set holds a backup of a database other than the existing 'sample' database. RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3154)". I found the following solution on Pinal Dave's blog. It's very simple. For example:
    RESTORE DATABASE Sample FROM DISK = 'C:\Sample.bak' WITH REPLACE

    Read the article

  • World Backup Day

    This Saturday is World Backup Day, and with this in mind, Red Gate's Brian Harris talks about SQL Backup 7 and why they want to make backup verification a focus for more DBAs.

    Read the article
