Search Results

Search found 8979 results on 360 pages for 'backup sessions'.


  • How can I schedule a daily backup with SQL Server Express?

    - by edosoft
    I'm running a small web application with SQL Server Express (2005) as the backend. I can create a backup with a SQL script; however, I'd like to schedule this on a daily basis. As an extra (should-have) option, I'd like to keep only the last X backups, for obvious space-saving reasons. Any pointers? [edit] SQL Server Agent is unavailable in SQL Server Express...
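    Since SQL Server Agent is not available in Express, the usual workaround is a small script run by Windows Task Scheduler. Below is a minimal Python sketch of that idea; the database name, instance name, and backup folder are placeholders, and it assumes sqlcmd is on the PATH (it ships with SQL Server Express).

        import glob
        import os
        import subprocess
        from datetime import date

        DB = 'MyDatabase'             # placeholder database name
        BACKUP_DIR = r'C:\Backups'    # placeholder backup folder
        KEEP = 7                      # keep only the last X backups

        # Run the backup through sqlcmd against the local Express instance.
        bak_file = os.path.join(BACKUP_DIR, '%s_%s.bak' % (DB, date.today().isoformat()))
        backup_sql = "BACKUP DATABASE [%s] TO DISK = N'%s' WITH INIT" % (DB, bak_file)
        subprocess.run(['sqlcmd', '-S', r'.\SQLEXPRESS', '-E', '-Q', backup_sql], check=True)

        # Prune everything except the newest KEEP backup files.
        backups = sorted(glob.glob(os.path.join(BACKUP_DIR, DB + '_*.bak')),
                         key=os.path.getmtime, reverse=True)
        for old in backups[KEEP:]:
            os.remove(old)

    Scheduling the script to run once a day with Windows Task Scheduler (schtasks or the GUI) covers the daily part; the pruning step covers the "last X backups" part.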

    Read the article

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is: dump the database, tar.gz all the files into one backup named with the date of the backup, then upload to S3 using the Python library provided by Amazon. Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup), it is using about 320MB just for the backup. This causes WebFaction to kill all our processes, meaning the backup doesn't happen and our site goes down. So this is the question: is there any way to not load the whole file into memory, or are there any other Python S3 libraries that are much better with RAM usage? Ideally it needs to be about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help. This is the section of code (in my backup script) that caused the processes to be killed:

        # Reads the entire archive into memory in one go -- this is the ~320MB allocation.
        filedata = open(filename, 'rb').read()
        content_type = mimetypes.guess_type(filename)[0]
        if not content_type:
            content_type = 'text/plain'
        print 'Uploading to S3...'
        response = connection.put(BUCKET_NAME, 'daily/%s' % filename,
                                  S3.S3Object(filedata),
                                  {'x-amz-acl': 'public-read', 'Content-Type': content_type})
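    The constant-memory fix is to hand the upload a file path (or file object) and let the client stream it in parts, rather than reading all the bytes up front. The sketch below assumes boto3 rather than the old sample S3 library used above; upload_file performs a managed multipart upload, so memory stays around the chunk size instead of the full archive size.

        import mimetypes

        import boto3
        from boto3.s3.transfer import TransferConfig

        def upload_backup(filename, bucket_name):
            content_type = mimetypes.guess_type(filename)[0] or 'text/plain'
            # Cap each multipart chunk at 8MB so the process never holds the
            # whole ~320MB archive in memory at once.
            config = TransferConfig(multipart_threshold=8 * 1024 * 1024,
                                    multipart_chunksize=8 * 1024 * 1024)
            s3 = boto3.client('s3')
            s3.upload_file(filename, bucket_name, 'daily/%s' % filename,
                           ExtraArgs={'ACL': 'public-read', 'ContentType': content_type},
                           Config=config)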

    Read the article

  • Checking if your SIMPLE databases need a log backup

    - by Fatherjack
    Hopefully you have read the blog by William Durkin explaining why your SIMPLE databases need a log backup in some cases. There is a SQL Server bug that means in some cases databases are marked as being in SIMPLE recovery but have a log wait type that shows they are not properly configured. Please read his blog for the full explanation and a great description of how to reproduce the issue. As part of our (William happens to be my boss) work to recover our affected databases, I wrote this small PowerShell script to quickly check our servers for databases that needed the attention William details.

        # Requires the SMO assemblies (e.g. run from sqlps or with Microsoft.SqlServer.Smo loaded).
        cls
        $Servers = "Server01", "Server02", "etc", "etc"
        foreach($Server in $Servers){
            write-host "************" $Server "****************"
            $Server = New-Object Microsoft.SqlServer.Management.Smo.Server $Server
            foreach($db in $Server.Databases){
                $db | where {$_.RecoveryModel -eq "Simple" -and $_.LogReuseWaitStatus -ne "Nothing"} | select Name, LogReuseWaitStatus
            }
        }

    If you get any results from this query then you should consult William's blog for the details on what action you should take. This script can give false positives in some circumstances, depending on how busy your databases are. Hopefully this will let you check your servers quickly, and if you find any problems you can reference William's blog to understand what you need to do.

    Read the article

  • You Have Questions

    - by Tom Caldecott-Oracle
    Oracle Consulting Experts Have Answers at Oracle OpenWorld

    Your thoughts are in the cloud. “How can I set up a private cloud that will work for my business?” “What will it take to move to an ERP, HCM, or CX cloud environment?” You can attend Oracle Consulting sessions at Oracle OpenWorld and get answers. You can also walk up to one of the Oracle Consulting experts in the DEMOgrounds of the conference and learn about cloud implementation, engineered systems best practices, Oracle Applications upgrades, and more—just what you need to help maximize the value of your Oracle investments. You might even get an answer to the “Ultimate Question of Life, the Universe, and Everything.” But you already know the answer, don’t you? 42. Learn more about Oracle Consulting at Oracle OpenWorld.

    Read the article

  • Offsite Backup

    - by Grant Fritchey
    There was a recent weather event in the United States that seriously impacted our power grid and our physical well-being. Lots of businesses found that they couldn’t get to their building, or that their building was gone. Many of them got to do a full test of their disaster recovery processes. A big part of DR is having the ability to get yourself back online in a different location. Now, most of us are not going to be paying for multiple sites, but we need the ability to move to one if needed. The best thing you can do to start setting this up is to have an off-site backup. Want an easy way to automate that? I mean, yeah, you can go to tape or to a portable drive (much more likely these days) and then carry that home, but we’ve all got access to offsite storage these days: SkyDrive, DropBox, S3, etc. How about just backing up to there? I agree. Great idea. That’s why Red Gate is setting up some methods around it. Want to take part in the early access program? Go here and try it out.

    Read the article

  • Shared Files stuck locked even after closing all sessions

    - by Chris S
    We run a business app from a shared network drive (has to be this way). When I go to do updates it complains that files are locked. Generally there are open sessions from people who left their computer on, but with no locks on files; there aren't necessarily always sessions open when it complains about locked files. If I close these sessions they disappear. I say "disappear" because I suspect they're actually hanging open. If I try to restart the Server service, it hangs on stopping. Restarting the whole server (it's a VM) unlocks the files. The Server is a Windows 2008 R2 Ent VM running on Hyper-V; the share is accessed through DFS. Offline Files and caching are disabled (Share and GPO). All clients are Win7. Nothing has SP1 yet. Any ideas on what causes the file locks to hang? Any ideas for a solution other than rebooting the server every time?

    Read the article

  • How to make and restore incremental snapshots of hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro and application testing. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any problems and without consuming all of your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've found will create a new image of the complete file system. Are there any programs or file systems capable of taking a snapshot of the current file system and saving it to another location, but creating incremental backups instead of a complete new image each time? To describe what I want simply: it should be like dd images of a file system, but instead of only full backups it would also create incrementals. I am not looking for Clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems. I am also not looking for a "duplicity backup of your whole system excluding some folders" script plus dd to save your MBR. I can do that myself; I'm looking for extra finesse. I'm looking for something I can do before making massive changes to a system, so that if something went wrong, or I burned my hard disk after spilling coffee on it, I can just boot from a live CD and restore a working snapshot to the hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, preferably raw-based rather than file-copy based.

    Read the article

  • SaaS Multi-tenancy Applications: How is data import/export/backup being implemented?

    - by Mark Redman
    How are applications providing import/export (or backups) of data in SaaS-based multi-tenancy applications, particularly single-database designs? Imports: keeping things simple, I think basic imports are useful, i.e. CSV to a spec (or a way of providing a mapping between CSV columns and fields in the database). Exports: in single-database designs I have seen XML exports and HTML (basic site generated) exports of data. I would assume that XML is a better option? How does one cater for relational data? Would you reference various things within the XML and provide documentation of the relationships, or let users figure this out? Are vendors providing an export/backup that can be imported back in/restored? Your comments appreciated.
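    On the relational-data question, one common shape for a per-tenant XML export is to nest child records inside their parent (or reference them by id), so the relationships are explicit in the document itself. A minimal Python sketch, with a hypothetical customer/order schema standing in for real tables:

        import xml.etree.ElementTree as ET

        def export_tenant(tenant_id, customers, orders):
            # customers: list of dicts with 'id' and 'name' (hypothetical schema)
            # orders:    list of dicts with 'id', 'customer_id' and 'total'
            root = ET.Element('export', tenant=str(tenant_id))
            for c in customers:
                cust_el = ET.SubElement(root, 'customer', id=str(c['id']))
                ET.SubElement(cust_el, 'name').text = c['name']
                # Nesting related orders under their customer keeps the
                # relationship in the data rather than in separate documentation.
                for o in orders:
                    if o['customer_id'] == c['id']:
                        order_el = ET.SubElement(cust_el, 'order', id=str(o['id']))
                        ET.SubElement(order_el, 'total').text = str(o['total'])
            return ET.tostring(root, encoding='unicode')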

    Read the article

  • Planning management slots/sessions

    - by Glide
    I have a planning structure in two tables: one stores the available slots for each day, and one stores the sessions. A slot is defined by a range of time in the day.

        CREATE TABLE slot (
            `id` int(11) NOT NULL AUTO_INCREMENT
          , `date` date
          , `start` time
          , `end` time
          , PRIMARY KEY (`id`)
        );

    Sessions can't overlap each other and must be wrapped inside a slot.

        CREATE TABLE session (
            `id` int(11) NOT NULL AUTO_INCREMENT
          , `date` date
          , `start` time
          , `end` time
          , PRIMARY KEY (`id`)
        );

    I need to generate a list of available blocks of time of a certain duration, in order to create sessions. Example:

        INSERT INTO slot (date, start, end) VALUES
          ("2010-01-01", "10:00", "19:00")
        , ("2010-01-02", "10:00", "15:00")
        , ("2010-01-02", "16:00", "20:30")
        ;

        2010-01-01  <##><####>                                <- Sessions
                    ------------------------------------      <- Slots
                    10 11 12 13 14 15 16 17 18 19 20

        2010-01-02        <##########>     <########>         <- Sessions
                    --------------------  ------------------  <- Slots
                    10 11 12 13 14 15 16 17 18 19 20

    I need to know which 1-hour spaces I can use:

        +------------+-------+-------+
        | date       | start | end   |
        +------------+-------+-------+
        | 2010-01-01 | 13:00 | 14:00 |
        | 2010-01-01 | 14:00 | 15:00 |
        | 2010-01-01 | 15:00 | 16:00 |
        | 2010-01-01 | 16:00 | 17:00 |
        | 2010-01-01 | 17:00 | 18:00 |
        | 2010-01-01 | 18:00 | 19:00 |
        | 2010-01-02 | 10:00 | 11:00 |
        | 2010-01-02 | 11:00 | 12:00 |
        | 2010-01-02 | 16:00 | 17:00 |
        +------------+-------+-------+
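    One way to see the shape of the computation, independent of the SQL that would eventually do it: walk each slot in one-hour steps from its start and keep the hours that overlap no session. A minimal Python sketch; the session values are hypothetical ones chosen to match the diagram above, since the question only shows the slot inserts.

        from datetime import datetime, timedelta

        def free_hour_blocks(slots, sessions, block=timedelta(hours=1)):
            """slots and sessions are lists of (start, end) datetime pairs."""
            free = []
            for slot_start, slot_end in slots:
                t = slot_start
                while t + block <= slot_end:
                    # A candidate hour is usable if it overlaps no session.
                    busy = any(s_start < t + block and t < s_end
                               for s_start, s_end in sessions)
                    if not busy:
                        free.append((t, t + block))
                    t += block
            return free

        def d(s):
            return datetime.strptime(s, '%Y-%m-%d %H:%M')

        slots = [(d('2010-01-01 10:00'), d('2010-01-01 19:00')),
                 (d('2010-01-02 10:00'), d('2010-01-02 15:00')),
                 (d('2010-01-02 16:00'), d('2010-01-02 20:30'))]
        sessions = [(d('2010-01-01 10:00'), d('2010-01-01 11:00')),  # hypothetical
                    (d('2010-01-01 11:00'), d('2010-01-01 13:00')),  # hypothetical
                    (d('2010-01-02 12:00'), d('2010-01-02 15:00')),  # hypothetical
                    (d('2010-01-02 17:00'), d('2010-01-02 20:30'))]  # hypothetical

        for start, end in free_hour_blocks(slots, sessions):
            print(start.strftime('%Y-%m-%d %H:%M'), '->', end.strftime('%H:%M'))

    In SQL the same idea is usually done by joining the slots against a list of candidate hours and excluding the ones that overlap a session.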

    Read the article

  • How can I tell if a SQL Server database is being backed up

    - by Guy
    Is there a way to programmatically determine if a SQL Server backup is currently being performed on a particular database? We have automated database backup scripts for both data and log files, where the databases are backed up nightly and log files are backed up every 15 minutes, 24 hours a day. However, we think that the log file backup job is failing if it runs at the same time as the full backup. What I'd like to do is change my transaction log script so it does not run a transaction log backup while the full backup is running. Is there a DMV or a system table that I can query to work this out?
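    Active backups do show up in sys.dm_exec_requests with a command of BACKUP DATABASE or BACKUP LOG, so the log-backup job can check that view and skip (or wait) while a full backup is in flight. The sketch below wraps the query in Python with pyodbc purely for illustration; the connection string is a placeholder, and the same SELECT can of course run directly in the T-SQL job step.

        import pyodbc

        # Placeholder connection details; adjust driver/server for your environment.
        conn = pyodbc.connect('DRIVER={SQL Server};SERVER=.;DATABASE=master;'
                              'Trusted_Connection=yes')

        rows = conn.cursor().execute("""
            SELECT d.name, r.command, r.percent_complete
            FROM sys.dm_exec_requests AS r
            JOIN sys.databases AS d ON d.database_id = r.database_id
            WHERE r.command IN ('BACKUP DATABASE', 'BACKUP LOG')
        """).fetchall()

        if rows:
            for name, command, pct in rows:
                print('%s: %s (%.0f%% complete)' % (name, command, pct))
        else:
            print('No backup currently running.')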

    Read the article

  • Backup Exec tape rotation guidelines

    - by HannesFostie
    Hi We use Backup Exec to take care of our backups for our data server, exchange server, and one more set of systems. Each of these 3 is being done on a separate "set" of tapes. Our goal is to be able to roll back a full 2 weeks, with 1 full backup each weekend and differential/incremental backups in between (the difference between the two in our case isn't very big, because the employees mostly use a very similar set of files throughout the week). While playing around with the settings on how to achieve this, we set the time for BE to keep the full backup to 14 days, but because we have too much data this would require manual intervention each time to erase a certain tape and use that. What I would like to know is what kind of guidelines, tricks, tips and general "stuff to think about" you keep in mind when designing your backup schedule. The type of backups (full/diff/incr) isn't of that much importance in our case as it's more or less set in stone. Made this community wiki as it's not a very specific question. Thanks in advance!

    Read the article

  • Symantec NetBackup restore - Incremental backup

    - by w0051977
    We are using NetBackup as a corporate solution. Incremental backups are taken daily during the week and then a full backup is done each weekend (Saturday). My colleague has restored a folder to how it stood at 14:00 on a Tuesday. The problem is that the restore is taking files from the weekend full backup even if they did not exist at the point in time of the restore. For example, the folder we are restoring should look like this (this is how it looked on Tuesday at 14:00):

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt

    The folder looked like this at the weekend, when the full backup ran (so Test3.txt did exist at that point):

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt
            Test3.txt

    The actual folder restored looks like this:

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt
            Test3.txt

    Test3.txt should not be restored, because it did not exist at the point in time we restored to. Is there a setting somewhere that we are missing? The folder in question is 200GB; the example above is a simplification. I realise this is a basic question.

    Read the article

  • Where are gnome keyboard shortcuts stored

    - by Evan Plaice
    I usually install a new version for every release to keep my OS fresh, while preserving the last version on another partition as a backup. I also employ a lot of custom key mappings (IMHO, the defaults suck). I've figured out how to transfer most of my configuration across systems so far, but I can't figure out where the custom keyboard shortcut mappings are stored. Does anybody know where GNOME puts these? Are there separate user config (i.e. ~/) and system config (i.e. /etc) files?

    Read the article

  • How to restore missing calendar data from Lightning/Thunderbird

    - by dev9
    Today out of nowhere all my events and tasks disappeared from my Thunderbird. However, I have a full backup of .thunderbird folder. How can I restore my calendar data? I reverted these files to previous versions: /home/me/.thunderbird/xxx.default/calendar-data/local.sqlite /home/me/.thunderbird/xxx.default/prefs.js but I still cannot see any data in my Thunderbird. What else should I do?

    Read the article

  • I forgot the password to a cbz/zip file

    - by hurley
    I forgot the password to a cbz file; when I open it, it says it only contains empty pages. So I renamed it to .zip, since I read it will open anyway, and entered what I supposed was the password. It started extracting some 100 files, but then it stopped and asked for a password again, and none of my known passwords work. Help? It's a backup of over 2 years of work. I'm using Archive Manager on Ubuntu 13.
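    If the aim is to test variations of passwords you half-remember against your own archive, a minimal Python sketch is below. The filename and candidate list are hypothetical, and note that Python's zipfile module only handles the classic ZipCrypto scheme, not AES-encrypted archives.

        import zipfile
        import zlib

        candidates = ['password1', 'Password1!', 'hunter2']  # your own guesses

        with zipfile.ZipFile('comics-backup.cbz') as zf:
            # Test against the largest archived file (skips empty directory entries).
            name = max(zf.infolist(), key=lambda i: i.file_size).filename
            for pwd in candidates:
                try:
                    zf.read(name, pwd=pwd.encode())
                    print('Password appears to be:', pwd)
                    break
                except (RuntimeError, zipfile.BadZipFile, zlib.error):
                    continue  # wrong password, try the next one
            else:
                print('None of the candidates worked.')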

    Read the article

  • SQL Server 2008 Recovery Mode reverts from FULL to SIMPLE

    - by Eric Hazen
    Three of our SQL databases have their recovery model change every night from FULL to SIMPLE. The only jobs that I'm aware of are two Backup Exec jobs that run nightly. Why would the recovery model change?

        Backup jobs:   SQL FULL BACKUP, SQL LOG BACKUP
        Event Manager: Event 5084: Setting Database option RECOVERY to SIMPLE for database databaseName

    Read the article
