Search Results

Search found 13175 results on 527 pages for 'live backup'.

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related. First, my Trash would not empty. It seems to be getting stuck on certain files: when I restart my MacBook, some of the files get deleted, and if I then remove a file or two at random, more can be deleted. Some of these files had strange characters in their names; I tried changing the names to single characters, but this did not help. Next, I attempted to back up my MacBook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the Trash problem as well, and whether or not these problems are related. EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.

        6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
        6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
        6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
        6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
        6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
        6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
        6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
        6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
        6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
        6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
        6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19
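
    A sketch of the first-aid steps commonly suggested for a backup stuck at "Preparing to Back Up" (the computer name in the path below is a placeholder; the volume name is taken from the log above). The idea is to check the destination disk and clear any half-finished backup bundle so backupd can start its deep scan cleanly:

        # Hedged sketch -- turn Time Machine off before running any of this.
        diskutil verifyVolume "/Volumes/Seagate 3TB Mac"
        # Remove a stale in-progress bundle, if one exists (YourMac is a placeholder):
        sudo rm -rf "/Volumes/Seagate 3TB Mac/Backups.backupdb/YourMac/"*.inProgress
        # Watch what backupd does on the next attempt:
        grep backupd /var/log/system.log | tail -n 20

    The "Event store UUIDs don't match" lines are normal after an unclean unmount; they just force the deep scan, which can itself look like a very long "Preparing to Back Up" phase.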

    Read the article

  • jQuery and .live() act weird but work OK after a forced refresh in IE

    - by Y.G.J
    Hi guys, something is wrong with my code and I can't figure out what it is. I have a div with id="personaltab" containing a form to log the user in with a username and password. On success, the jQuery empties the div and inserts the bidding form. If the user then tries to bid, the other AJAX call assigned to the button runs, but for some reason it skips the empty() and just appends the AJAX response to the div. I have checked this in IE and Chrome, and it works fine in Chrome. Here is my code:

        $("#login").click(function() {
            var id = $("input#pid").val();
            var user = $("input#puser").val();
            var pass = $("input#ppass").val();
            var dataString = 'id=' + id + '&user=' + user + '&pass=' + pass;
            if (user == "") {
                alert("error");
                $("input#puser").focus();
                return false;
            }
            if (pass == "") {
                alert("error");
                $("input#ppass").focus();
                return false;
            }
            $.ajax({
                type: "POST",
                url: "loginpersonal.asp",
                data: dataString,
                success: function(msg) {
                    if (msg == "False") {
                        alert("error");
                        $("#personaltab").show();
                    } else {
                        $("#personaltab").fadeOut("normal", function() {
                            $("#personaltab").empty();
                            $("#personaltab").append(msg);
                            $("#personaltab").slideDown();
                        });
                    }
                },
                error: function(XMLHttpRequest, textStatus, errorThrown) {
                    alert('error');
                }
            });
            return false;
        });

        $("#sendbid").live("click", function() {
            var startat = $("input[name=startat]").val();
            var sprice = $("input[name=sprice]").val();
            if (parseInt(sprice) <= parseInt(startat)) {
                alert("error");
                $("input[name=sprice]").focus();
                return false;
            } else {
                var payment = $("select[name=payment]").val();
                if ($('input[name=credit]').is(':checked')) {
                    var credit = true;
                }
                var prodid = $("input[name=id]").val();
                var dataString = 'id=' + prodid + '&price=' + sprice + '&payment=' + payment + '&credit=' + credit;
                $.ajax({
                    type: "POST",
                    url: "loginpersonal.asp",
                    data: dataString,
                    success: function(msg) {
                        $("#personaltab").empty();
                        $("#personaltab").append(msg);
                        $("#personaltab").show();
                    },
                    error: function(XMLHttpRequest, textStatus, errorThrown) {
                        alert('error');
                    }
                });
            }
            return false;
        });

    Read the article

  • Overlay bitmap on live video

    - by sijith
    Hi, I want to overlay a bitmap on live video. I am trying to do this with the DirectShow samples: I edited the PlayCapMoniker sample and added some functions to enable this, following the procedure explained in this link: http://www.ureader.com/msg/1471251.aspx Now I am getting these errors:

        Errors 2, 3, 5, 6, 8, 9, 21, 22, 26, 27: error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
        Error 36: error C2228: left of '.m_alpha' must have class/struct/union
        Error 38: error C2227: left of '->SetAlphaBitmap' must point to class/struct/union/generic type
        Error 7: error C2146: syntax error: missing ';' before identifier 'Pool'
        Error 4: error C2146: syntax error: missing ';' before identifier 'Format' (c:\Program Files\Microsoft Platform SDK\include\Vmr9.h, line 368)
        Errors 1, 20, 25: error C2143: syntax error: missing ';' before '*'
        Errors 30, 33, 37: error C2065: 'g_pMixerBitmap': undeclared identifier
        Errors 31, 32: error C2065: 'g_hbm': undeclared identifier
        Error 35: error C2065: 'config': undeclared identifier
        Errors 10, 11, 12, 13, 16, 19, 23, 24, 28, 29: error C2061: syntax error: identifier 'IDirect3DSurface9'
        Errors 14, 15, 17, 18: error C2061: syntax error: identifier 'IDirect3DDevice9'
        Error 34: error C2039: 'pDDS': is not a member of '_VMR9AlphaBitmap' (SDK\Samples\Multimedia\DirectShow\Capture\PlayCapMoniker\PlayCapMoniker.cpp, line 263)

    Read the article

  • Updating / refreshing a live GeoJSON layer | JSON & JS variable

    - by Ozaki
    TL;DR: I am trying to get my GeoJSON layer to update. Currently it will: 1. create the vector mark, 2. remove the vector mark, 3. set the JS variable for lat and lon, 4. unset the variable??? :S

    Hey S O. I have a GeoJSON layer set up as follows:

        // GeoJSON layer
        var layer1 = new OpenLayers.Layer.GML("My GeoJSON Layer", "coordinates",
            {format: OpenLayers.Format.GeoJSON, styleMap: style_red});

    My features are set up as follows:

        var latitude = 0.0;   // as 0.0 it will draw the point in
        var longitude = 0.0;  // as 0.0 it will draw the point in
        // var longitude = getlongitude; where getlongitude = JSON string of longitude
        var point = new OpenLayers.Geometry.Point(longitude, latitude);
        pointFeature = new OpenLayers.Feature.Vector(point, null, style_red);
        var style_red = OpenLayers.Util.extend({}, layer_style);
        style_red.strokeColor = "red";
        style_red.fillColor = "black";
        style_red.fillOpacity = 0.5;
        style_red.graphicName = "circle";
        style_red.pointRadius = 3.8;
        style_red.strokeWidth = 2;
        style_red.strokeLinecap = "butt";

    and my layer-updating function:

        function UpdateLayer() {
            var p = new OpenLayers.Format.GeoJSON({
                'internalProjection': new OpenLayers.Projection("EPSG:900913"),
                'externalProjection': new OpenLayers.Projection("EPSG:4326")
            });
            var url = "coordinates";
            OpenLayers.loadURL(url, {}, null, function(r) {
                var f = p.read(r.responseText);
                map.layers[2].destroyFeatures();
                map.layers[2].addFeatures(pointFeature);
            });
            setTimeout("UpdateLayer()", 1000);
        }

    Any idea what I am doing wrong or what I am missing?

    Edit 1: It now removes the feature (it was map.layers[1] previously), but will not add the new feature.

    Edit 2: I managed to get it to redraw a point, but not with live data. It should draw the point at whatever (latitude) & (longitude) are equal to. I am trying to set latitude & longitude from some JSON string, but every time, straight after it sets the variable, it changes back to "undefined" as soon as it passes the line after var latitude? (using Firebug & FireQuery to debug)

    Read the article

  • HTML5 <audio> Safari live broadcast vs not

    - by Peter Parente
    I'm attempting to embed an HTML5 audio element pointing to MP3 or OGG data served by a PHP file. When I view the page in Safari, the controls appear, but the UI says "Live Broadcast." When I click play, the audio starts as expected. Once it ends, however, I can't start it playing again by clicking play. Even using the JS API on the audio element and setting currentTime to 0 fails with an index error exception. I suspected the headers from the PHP script were the problem, particularly a missing content length. But that's not the case: the response headers include a proper Content-Length indicating the audio has finite size. Furthermore, everything works as expected in Firefox 3.5+; I can click play on the audio element multiple times to hear the sound replay. If I remove the PHP script from the equation and serve up a static copy of the MP3 file, everything works fine in Safari. Does this mean Safari is treating audio src URLs with query parameters differently than URLs that don't have them? Anyone have any luck getting this to work? My simple example page is:

        <!DOCTYPE html>
        <html>
        <head></head>
        <body>
        <audio controls autobuffer>
          <source src="say.php?text=this%20is%20a%20test&format=.ogg" />
          <source src="say.php?text=this%20is%20a%20test&format=.mp3" />
        </audio>
        </body>
        </html>

    HTTP headers from the PHP script:

        HTTP/1.x 200 OK
        Date: Sun, 03 Jan 2010 15:39:34 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.10
        Content-Length: 8993
        Keep-Alive: timeout=2, max=98
        Connection: Keep-Alive
        Content-Type: audio/mpeg

    HTTP headers from direct file access:

        HTTP/1.x 200 OK
        Date: Sun, 03 Jan 2010 20:06:59 GMT
        Server: Apache
        Last-Modified: Sun, 03 Jan 2010 03:20:02 GMT
        Etag: "a404b-c3f-47c3a14937c80"
        Accept-Ranges: bytes
        Content-Length: 8993
        Keep-Alive: timeout=2, max=100
        Connection: Keep-Alive
        Content-Type: audio/mpeg

    I tried hard-coding the Accept-Ranges header into the script too, but no luck.
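
    One concrete difference between the two header dumps above is that only the static file advertises Accept-Ranges: bytes. Safari decides whether a media resource is seekable and finite partly from byte-range support, so a quick check worth running (a sketch; the host is a placeholder, the path is the one from the page above) is:

        # Does the PHP endpoint honor byte-range requests?
        curl -s -o /dev/null -D - \
             -H "Range: bytes=0-99" \
             "http://localhost/say.php?text=this%20is%20a%20test&format=.mp3"

    If the script answers 200 with the full body instead of 206 Partial Content, it is ignoring Range requests, and hard-coding the Accept-Ranges header alone would not help: the script would actually have to parse the Range header and serve the requested byte range.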

    Read the article

  • Objective-C design advice for using different data sources, swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, depending on other people to build an API that the app can use to retrieve data. While I was thinking about how to set up this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations. Since my work depends on other people completing their tasks, and on a test server, this slows work down at my end. So the question is: what's the best practice for creating test repositories and classes, implementing them, and not having to depend on altering several places in the code to swap between the test classes and the actual repositories / proper API calls? Contemplate the following scenario:

        GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc] init];
        getDataCommand.delegate = self;
        [getDataCommand getData];

    Once the data is available via the API, GetDataFromApiCommand could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData]. There might be multiple instances of this in various places in the code, so replacing all of them wherever they are is a slow and painful process which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In Objective-C a factory pattern could be implemented, but is that the best route to go for this?

        GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand];
        getDataCommand.delegate = self;
        [getDataCommand getData];

    What are the best practices to achieve this result? Since this would be most useful, even with the actual API available, for running tests or working offline, the ApiCommands would not necessarily have to be replaced permanently; rather, there would be the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between "all commands are test" and "all commands use the live API", rather than selecting them one by one; however, that would also be useful for testing one or two actual API commands, mixing them with test data.

    EDIT: The way I have chosen to go with this is to use the factory pattern. I set up the factory as follows:

        @implementation ApiCommandFactory
        + (ApiCommand *)newApiCommand {
            // return [[ApiCommand alloc] init];
            return [[ApiCommandMock alloc] init];
        }
        @end

    And anywhere I want to use the ApiCommand class:

        GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand];

    When the actual API call is required, the comment can be removed and the mock commented out. Using new in the message name implies that whoever uses the factory to get an object is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration, e.g.:

        [ApiCommandFactory newSecondApiCommand:@"param1"];

    This will work quite well with repositories as well.

    Read the article

  • Sybase bcp import fails

    - by chromeplatedbanana
    We're trying to export some tables from our production database to our test database using bcp. The bcp export seems to work fine, but the import always fails with a data type error (see below). We tested on our test database by exporting the table contents to a file and then importing them again immediately, but that failed too. For example,

        bcp TABLENAME out ~/tempfile -S servername -U username

    generates a file as expected. If we use the -c option, the number of lines is as expected. However,

        bcp TABLENAME in ~/tempfile -S servername -U username

    fails with

        CTLIB Message: - L0/0D/S0/N0/0/0: blk_int(): blk_layer: CT library error: Cannot find an equivalent CS_TYPE for this TDS data type 49
        blk_init failed.

    We get this whenever we try to copy into TABLENAME, whether from the production or the test table dump file. I don't understand why export and import for the same TABLENAME generates a data type error. What am I doing wrong here? Thanks
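
    A sketch of one commonly suggested workaround (server, login and password below are placeholders from the question, not verified values): run both the export and the import in character mode with -c, so the data travels as text rather than in native TDS encodings. The "Cannot find an equivalent CS_TYPE" error typically means the client library is too old to map a newer server datatype, so character mode, or upgrading Open Client, tends to sidestep it:

        # Hedged sketch: export and import in character mode.
        bcp MYDB..TABLENAME out ~/tempfile -c -S servername -U username -P password
        bcp MYDB..TABLENAME in  ~/tempfile -c -S servername -U username -P password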

    Read the article

  • What's the most efficient way to reclaim disk space after deleting lots of data from a database on Sybase ASE 15?

    - by Ernie Longmire
    As I understand it, based on some research but zero real-world experience with Sybase ASE, the only way to reclaim disk space once it's been allocated to a database is to export that database, create a new DB with the same schema, and reload all the exported data to the new database. Is this correct, or is there some other method? Then: assuming the above is correct and a full export-recreate-reload is required, what's the most efficient way to do that? Are there tools that will automate all or part of that process? I'm being told we would have to write separate bcp export and import commands for each and every object in the database, which if true sounds easily scriptable by someone who knows Sybase ASE well enough. (I don't.) This seems to me like a really basic housekeeping task, and it feels like I'm missing something obvious.
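
    For what it's worth, the per-table bcp step can be scripted rather than hand-written. A rough sketch, assuming isql and bcp are on the path (server, login, password and database names are all placeholders, and isql's output decorations usually need some cleanup):

        # Hedged sketch: list user tables, then bcp each one out in native mode.
        echo "use mydb
        go
        set nocount on
        select name from sysobjects where type = 'U'
        go" | isql -S MYSERVER -U sa -P secret -w 200 > tables.raw
        # Crude cleanup of isql's headers and separators; adjust to taste.
        grep -v -e '---' -e '^ *$' -e 'name' -e 'affected' tables.raw | awk '{print $1}' > tables.txt
        mkdir -p dump
        while read -r t; do
            bcp "mydb..$t" out "dump/$t.bcp" -n -S MYSERVER -U sa -P secret
        done < tables.txt

    The matching import loop would run bcp in against the rebuilt database, loading the same files in the same order.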

    Read the article

  • Export shared services from MOSS

    - by vittocia
    Hello, using the stsadm command I have been able to export a MOSS website and restore it on a different server, which works fine. I tried the same for the shared services; it gave no errors, but it does not have all the import connections when I check around. Is there a better way to export and restore shared services, or a way to sync the import connections and user list?
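
    One route worth trying (a sketch only; the UNC path is a placeholder) is stsadm's catastrophic backup and restore, which captures the SSP databases as part of the farm rather than exporting a single site:

        stsadm -o backup -directory \\backupserver\mossbackup -backupmethod full
        stsadm -o restore -directory \\backupserver\mossbackup -restoremethod overwrite

    Profile import connections and audience data live in the SSP database itself, so whether they come across may depend on restoring the SSP as a whole rather than piece by piece.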

    Read the article

  • MySQL equivalent to .pgpass, or automatic authentication in a cron job for MySQL

    - by Ibrahim
    I'm writing a bash script to back up my databases. Most are PostgreSQL, and in Postgres there's a way to avoid having to authenticate by creating a ~/.pgpass file which contains the Postgres password. I put this in root's home directory and made it chmod 0600, so that root can dump the Postgres databases without having to authenticate. Now I want to do something similar for MySQL, although I only have one MySQL database. How can I do this? I don't want to specify the password on the command line for mysqldump, because this is part of a script that might be somewhat visible to other users. Is there a better way (i.e. built into MySQL) to do this than making a file that only root can read, reading that to get the MySQL password, and then using that in the bash script as a variable?
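
    MySQL's close analogue to ~/.pgpass is a client option file, which mysql and mysqldump read automatically. A minimal sketch (the password is a placeholder):

        # /root/.my.cnf, chmod 0600 -- picked up by mysql and mysqldump
        [client]
        user = root
        password = PLACEHOLDER

    With that file in place, the backup script needs no credentials on the command line, e.g.:

        mysqldump --all-databases > /var/backups/mysql/all-databases.sql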

    Read the article

  • Easy Oracle Log-Shipping

    - by ItsAMystery
    Hi all, I am looking for a decent way of keeping a secondary Oracle database up to date without exporting and importing the database each time. There are 3 users on the instance that I would essentially like to 'log ship', if that's what it is called in Oracle! Can anyone suggest anything? The database is well under a GB total, and we are running 10g Express (although I have thought about using 10g Standard, as we have a spare license). Cheers, Chris
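
    Since Data Guard is an Enterprise Edition feature, the closest do-it-yourself equivalent people usually describe is a manually recovered standby: run the primary in ARCHIVELOG mode, seed the standby from a cold copy of the datafiles, ship the archived redo logs across, and periodically roll the standby forward. A sketch of the shipping step only (paths and host are placeholders, and whether this is workable on 10g Express is something to verify):

        # Hedged sketch: push new archived redo logs to the standby host.
        rsync -av /u01/app/oracle/flash_recovery_area/XE/archivelog/ \
              standby:/u01/app/oracle/standby_archivelog/

    On the standby, the logs would then be applied with RECOVER STANDBY DATABASE in SQL*Plus. Given that the database is under a gigabyte, though, a nightly Data Pump export may honestly be less fragile than hand-rolled log shipping.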

    Read the article

  • Automate uploading of videos to YouTube

    - by John
    Here's the problem: I would like to keep lots of home made videos. Of course, they are subject to being lost, or somebody could steal the the computer, or water or fire could destroy them. Secondly, I have to plug in my hard drive every time I want to watch something, which I find slow and cumbersome. I was thinking that perhaps I could upload the videos to Youtube with the privacy set to invite only and then delete the video from the hard drive automatically. Could this be done?
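
    It can be scripted around an upload tool. A sketch using the third-party youtube-upload command-line client (an assumption on my part -- any YouTube API client would do; the directory and title scheme are placeholders), deleting each file only after a successful upload:

        # Hedged sketch: upload each video as private, then remove the local copy.
        for f in "$HOME"/videos/*.mp4; do
            youtube-upload --title="$(basename "$f" .mp4)" --privacy=private "$f" \
                && rm -- "$f"
        done

    One caveat before relying on this as the only copy: YouTube re-encodes uploads, so the originals would not be recoverable bit-for-bit.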

    Read the article

  • Windows Home Server style redundancy/multi-disk-support on Windows Server 2008 R2?

    - by user19597
    I'm setting up a fileserver for our department. It'll be connected to the domain. I want it to have a very large amount of storage (several TB). Ideally, it should also preserve disk space by identifying identical files and only storing them once. It should be fault tolerant, so that if one of the drives fails, that drive can be replaced without losing any data. All of these features are available in Microsoft's consumer offering, Windows Home Server. However, I can't find this kind of feature within the enterprise Windows Server 2008 R2. Am I missing something? I know that I could buy a Drobo, or similar, and use that instead. However, I would prefer to use a built-in feature of Windows Server, should it exist. It seems surprising to me that these features should be available in Home Server but not in an enterprise fileserver.

    Read the article

  • LTO(4) tape shelf life estimation?

    - by emilp
    LTO tapes, Maxell in this case, are often marketed as having 30 years or more of shelf life when stored under "optimal conditions". Is there a way to get a good estimate of the shelf life, given parameters such as relative humidity, temperature, etc.? Obsolescence of the tapes aside, is there a way of determining the impact on shelf life of any deviation from the optimal? In other words, how many years are lost when storing, say, 1 degree above the specified range? Regards, Emil

    Read the article

  • Bacula v5.0.2 Windows Installation Issues

    - by JohnyD
    First off, I am very new to Bacula, but I'm very intrigued by what I've read. I'm looking to set up Bacula 5.0.2 on a Windows 2008 R2 server. I've run the installer, and at the end it asks me to configure the DIR name, DIR password, and DIR address. Windows documentation is somewhat hard to come by, and I'm not certain what exactly I'm supposed to enter here. Do I need to create a local account that matches this info? Will the installation process create the account for me? Will this be the account that handles the FD daemon/service? I'm also not certain whether Address means a network location or a local directory. I apologize for my ignorance. Currently I'm trying to use the following information:

        Name: john
        Pass: john
        Address: thin1 (server name, although I have also tried thin1.fqdm.local and 10.0.0.104)

    This info allows the installer to complete successfully. However, when I run BAT it hangs at "Connecting to Director thin1:9101". The Bacula File Service is currently running under the local system account. What am I doing wrong? What do I have yet to do? Once I get this working properly, I assume I will need to install clients on all my Windows boxes? Also, this is a 64-bit CPU but I am installing the 32-bit client. Are there any issues with this? Should I be using the 32-bit client? Thanks very much for the help.
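
    As far as Bacula's model goes, the DIR name and password are not a Windows account at all: they are shared secrets that must match across the daemons' config files, and Address is the Director's network host name, not a directory. A sketch of the two places they have to line up (values taken from the question):

        # bacula-dir.conf on the Director machine
        Director {
          Name = john
          Password = "john"
          # ...
        }

        # bconsole.conf / bat.conf on the machine running the console
        Director {
          Name = john
          DIRport = 9101
          Address = thin1
          Password = "john"
        }

    A hang at "Connecting to Director thin1:9101" with matching credentials usually points at the Director service not running or port 9101 being blocked by the Windows firewall, which is worth checking before touching the configs.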

    Read the article

  • Ubuntu: Move fsbackup backups to Amazon S3

    - by Alexander Gladysh
    I have a legacy server (Ubuntu 9.10 Karmic, x86), where the previous admin set up backups with fsbackup. This server lives in a VPS (under some kind of Xen), and it is low on HDD space (16 GB total). It has now come to the point where the fsbackup backups take more space than the rest of the data in the system. The filesystem is 100% full, and I have already cleaned up all that I could, aside from the actual backups. I do not have any experience managing fsbackup, and I do not want to break or lose the backups. Googling fsbackup gives surprisingly low-quality results... Here is what my backups look like:

        $ sudo ls -lh /var/archives
        total 8.1G
        -rw-rw---- 1 root root  318 2011-01-06 06:26 myserver-20110106.md5
        -rw-rw---- 1 root root  258 2011-01-07 06:26 myserver-20110107.md5
        -rw-rw---- 1 root root  318 2011-01-08 06:26 myserver-20110108.md5
        -rw-rw---- 1 root root  318 2011-01-09 06:26 myserver-20110109.md5
        -rw-rw---- 1 root root  346 2011-01-10 06:43 myserver-20110110.md5
        -rw-rw---- 1 root root  14M 2011-01-06 06:26 myserver-all-mysql-databases.20110106.sql.bz2
        -rw-rw---- 1 root root  14M 2011-01-07 06:26 myserver-all-mysql-databases.20110107.sql.bz2
        -rw-rw---- 1 root root  14M 2011-01-08 06:26 myserver-all-mysql-databases.20110108.sql.bz2
        -rw-rw---- 1 root root  14M 2011-01-09 06:26 myserver-all-mysql-databases.20110109.sql.bz2
        -rw-rw---- 1 root root  862 2011-01-10 06:43 myserver-all-mysql-databases.20110110.sql.bz2
        -rw-rw---- 1 root root 827K 2011-01-03 06:25 myserver-etc.20110103.master.tar.gz
        -rw-rw---- 1 root root  16K 2011-01-06 06:25 myserver-etc.20110106.tar.gz
        -rw-rw---- 1 root root  16K 2011-01-07 06:25 myserver-etc.20110107.tar.gz
        -rw-rw---- 1 root root  16K 2011-01-08 06:25 myserver-etc.20110108.tar.gz
        -rw-rw---- 1 root root  16K 2011-01-09 06:25 myserver-etc.20110109.tar.gz
        -rw-rw---- 1 root root 827K 2011-01-10 06:25 myserver-etc.20110110.master.tar.gz
        -rw------- 1 root root  36K 2011-01-10 06:25 myserver-etc.incremental.bin
        -rw-rw---- 1 root root  29M 2011-01-03 06:25 myserver-home.20110103.master.tar.gz
        -rw-rw---- 1 root root  11K 2011-01-06 06:25 myserver-home.20110106.tar.gz
        -rw-rw---- 1 root root  14K 2011-01-07 06:25 myserver-home.20110107.tar.gz
        -rw-rw---- 1 root root  11K 2011-01-08 06:25 myserver-home.20110108.tar.gz
        -rw-rw---- 1 root root  11K 2011-01-09 06:25 myserver-home.20110109.tar.gz
        -rw-rw---- 1 root root 2.0M 2011-01-10 06:25 myserver-home.20110110.master.tar.gz
        -rw------- 1 root root  27K 2011-01-10 06:25 myserver-home.incremental.bin
        -rw-rw---- 1 root root 1.5G 2011-01-03 06:29 myserver-opt.20110103.master.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-06 06:25 myserver-opt.20110106.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-07 06:25 myserver-opt.20110107.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-08 06:25 myserver-opt.20110108.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-09 06:25 myserver-opt.20110109.tar.gz
        -rw-rw---- 1 root root 1.5G 2011-01-10 06:30 myserver-opt.20110110.master.tar.gz
        -rw------- 1 root root 201K 2011-01-10 06:30 myserver-opt.incremental.bin
        -rw-rw---- 1 root root 2.3G 2011-01-03 06:41 myserver-srv.20110103.master.tar.gz
        -rw-rw---- 1 root root  44M 2011-01-06 06:26 myserver-srv.20110106.tar.gz
        -rw-rw---- 1 root root  27M 2011-01-07 06:25 myserver-srv.20110107.tar.gz
        -rw-rw---- 1 root root  39M 2011-01-08 06:26 myserver-srv.20110108.tar.gz
        -rw-rw---- 1 root root 2.0M 2011-01-09 06:25 myserver-srv.20110109.tar.gz
        -rw-rw---- 1 root root 2.7G 2011-01-10 06:42 myserver-srv.20110110.master.tar.gz
        -rw------- 1 root root 3.4M 2011-01-10 06:42 myserver-srv.incremental.bin

    I'm thinking about moving the backups to Amazon S3, but before that I have to free some space so the server can work. Perhaps I can mount /var/archives on an Amazon S3 bucket somehow... Any advice?
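
    Almost all of the 8 GB is in the .master.tar.gz files, so pushing the oldest masters to S3 first and deleting each one only after a successful upload would free space quickly. A sketch, assuming s3cmd is installed and already configured with your AWS keys (the bucket name is a placeholder):

        # Hedged sketch: upload then delete, oldest masters first.
        for f in /var/archives/*.20110103.master.tar.gz; do
            s3cmd put "$f" "s3://my-archive-bucket/myserver/" && sudo rm -- "$f"
        done

    The .md5 files alongside the archives give you a way to verify each upload by re-downloading and checksumming before trusting the deletion.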

    Read the article

  • Auto-archive IMAP mail folders on OS X

    - by Pradeep
    Hi, I am trying to achieve the following: download all messages from the mail server (and remove downloaded messages from the server). Downloaded messages should be in a local mailbox, preserving the folder structure as it was defined on the server. The download process should be automatic and shouldn't create duplicates. I am on OS X and looking for solutions using Apple Mail or Thunderbird or similar. So far I have found that POP is not the way to go (as it loses folder structure and can potentially cause duplicates). The solution described here seems very good but isn't yet available for Thunderbird or Apple Mail: http://getsatisfaction.com/mozilla_messaging/topics/auto_archive_and_keep_folder_structure. Another alternative is Outlook, which has auto-archive, but it is paid and I think exports to PST instead of the more common mbox format. Yet another alternative is http://www.pop4.org/, which adds folder management to POP, but I don't think it is going to become usable soon. Any other better solutions? Thank you
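
    Another tool worth a look is OfflineIMAP, which mirrors the server's folder tree into a local Maildir over IMAP and can be run from cron. A minimal sketch of a config (host and user are placeholders; note that it syncs rather than deletes, so pruning the server side would still be a separate step):

        # ~/.offlineimaprc -- minimal sketch, placeholders throughout
        [general]
        accounts = Archive

        [Account Archive]
        localrepository = LocalMail
        remoterepository = ServerMail

        [Repository LocalMail]
        type = Maildir
        localfolders = ~/MailArchive

        [Repository ServerMail]
        type = IMAP
        remotehost = imap.example.com
        remoteuser = username

    Running offlineimap repeatedly is idempotent, which covers the no-duplicates requirement, and both Apple Mail and Thunderbird can read the resulting Maildir after an import.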

    Read the article

  • How to warehouse data that is not needed from MS SQL Server

    - by I__
    I have been asked to truncate a large table in MS SQL Server 2008. The data is not needed but might be needed once every two years. It will NEVER have to be changed, only viewed. The question is, since I don't need the data on a day-to-day basis, what do I do with it to protect and back it up? Please keep in mind that I will need to have it accessible maybe once every two years, and it is FINE for us if the recovery process takes a few hours. The entire table is about 3 million rows and I need to truncate it to about 1 million rows.
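
    One way this is often handled (a sketch; every name below is a placeholder) is to bcp the old rows out in native format before deleting them, compress the file, and archive it somewhere redundant; recovery is then just a bcp in to a staging table:

        # Hedged sketch: export rows older than the cutoff in native format (-n),
        # using a trusted connection (-T).
        bcp "SELECT * FROM MyDb.dbo.BigTable WHERE CreatedAt < '2010-01-01'" \
            queryout BigTable_pre2010.dat -n -T -S MYSQLSERVER
        gzip BigTable_pre2010.dat
        # Recovery, years later:
        #   gunzip BigTable_pre2010.dat.gz
        #   bcp MyDb.dbo.BigTable_Archive in BigTable_pre2010.dat -n -T -S MYSQLSERVER

    A few hours of recovery time is well within what this gives you, and the flat file can live on cheap backup storage with the rest of your archives.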

    Read the article

  • How to replicate Windows Server 2008?

    - by Sem Dendoncker
    Hello, we have 2 servers; let's call them Server A and Server B. Server B is never used unless something goes wrong with Server A. I need a system that can replicate Server A to Server B, but this doesn't have to be continuous. It only has to be done once a day (let's say at 1 am or so). Also, it's not necessary to take over the server automatically; this can be done manually. IIS and SQL Server 2008 are running on this server, so all web applications and databases must be synced/replicated. What tools can I use? I hope someone can help me with this. Cheers, Sem
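
    For the file side, a nightly scheduled task around robocopy is one possible sketch (the paths are placeholders); the databases are better handled with SQL Server's own tooling, such as a nightly BACKUP DATABASE restored onto Server B, or log shipping, rather than copying live .mdf files:

        robocopy D:\inetpub \\ServerB\inetpub$ /MIR /COPYALL /R:2 /W:5 /LOG:C:\logs\mirror.log

    /MIR makes the destination an exact mirror (including deletions), so it is worth testing against a scratch share before pointing it at Server B.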

    Read the article

  • How to merge-copy multiple folders in Outlook?

    - by user553702
    In MS Outlook, I need to be able to incrementally copy items in multiple folders in the Exchange account to a local PST file with a mirrored folder structure. I need the items in each folder to be combined into the destination. For example, let's say on the server account I have a folder tree like this: Inbox SortedEmails1 SortedEmails2 SortedEmails3 I also have these same four folders in the local PST file, which I want to keep growing as I incrementally pull more messages from the Exchange server. Messages from "Inbox" should go to the local "Inbox", messages from "SortedEmails1" should go into "SortedEmails1" in the local PST, etc. I'd like to avoid manually iterating into every single folder and copying items. How can I do this?

    Read the article

  • Setting up a Windows Server 2008 R2 DC + fileserver: native or virtual?

    - by user126890
    I want to deploy a new DC + fileserver using Windows Server 2008 R2 SP1 Standard Edition on a Dell PowerEdge R410 with iSCSI storage for a small business (~30 people). Should I install the system natively on the server or use a virtualization layer? I don't have a budget for virtualization, so I've got to go with something free... What's the better working routine: taking snapshots of VMs, or taking backups (Acronis/CloneZilla) of systems? If I use a virtualization system, I need a GUI so some people in the business can reset the system to an earlier state in emergency situations. I started installing phpVirtualBox once but never finished; is it suitable in a production environment? Server specs: Intel Xeon E5620 CPU (2.40GHz, 4 cores, 12MB cache), 8GB RAM (dual-rank LV RDIMMs, 1333MHz), 2x 1TB SATA 7.2K 3.5" in RAID 1.

    Read the article

  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the 'clicking' noise that most failing hard drives make, and I am able to view the data, but I am unable to actually retrieve it. I attempted to use SpinRite to access the data on the drive, but it didn't like the external drive. When I view the drive's property page, the drive shows that its used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I attempt to run Windows' "Error Checking" tool with "Scan for and attempt recovery of bad sectors" checked, the tool begins to run and then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I try to copy any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows the drive as 'Healthy.' I dropped the drive off at a data recovery store and they said that "the data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to give me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments), but it returns the following error:

        The type of the file system is RAW.
        CHKDSK is not available for RAW drives.

    Before going the route of more expensive data recovery, I'm wondering if these symptoms sound familiar to anyone? Other questions: I'm willing to keep trying tools such as TestDisk and/or PhotoRec (as the majority of the data I'd like to salvage is photos), but how long should I expect either tool to run given approximately 400GB of data? I'm also comfortable using Linux, so I welcome any suggestions for utilities or tools and strategies with which you've had success.
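
    Since CHKDSK is seeing the filesystem as RAW, one path often suggested before paid recovery (a sketch; the device and destination names are placeholders, and the destination must be a separate, healthy disk with at least 400GB free) is to image the drive with GNU ddrescue and then run PhotoRec against the image instead of the dying hardware:

        # Hedged sketch: two-pass rescue image of the failing drive.
        # /dev/sdX is the failing disk, /mnt/rescue is on a healthy disk.
        sudo ddrescue -f -n /dev/sdX /mnt/rescue/disk.img /mnt/rescue/disk.map   # fast pass, skip scraping
        sudo ddrescue -f -r3 /dev/sdX /mnt/rescue/disk.img /mnt/rescue/disk.map  # retry bad areas 3 times
        # Then carve files from the image rather than the drive:
        #   photorec /mnt/rescue/disk.img

    On runtime: healthy regions copy at roughly bus speed, so 400GB can finish in a few hours, but time spent in bad regions is unpredictable; the map file lets you interrupt and resume, which is the main reason to image first and carve later.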

    Read the article
