Search Results

Search found 6101 results on 245 pages for 'incremental backup'.


  • How to back up a Windows Home Server over the network?

    - by Jay Bazuzi
    One of the very few reasons I have to physically interact with my Windows Home Server is to back it up to an external hard drive with the "Backup Server" feature. It would be more convenient to plug the external drive into a desktop PC and then do the backup over the network. Is there a way to do this? I've heard a little about iSCSI, but as far as I can tell it costs money, and I'm hoping for something free.

    Read the article

  • How long would this file transfer take?

    - by CT
    I have 12 hours to back up 2 TB of data. I would like to back up to a network share on a computer using consumer WD 2 TB Black 7200 rpm hard drives, over Gigabit Ethernet. What other variables would I need to consider to see if this is feasible? How would I set up this calculation?
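
    A rough starting point for the calculation, sketched in Python (decimal units assumed; real-world figures for protocol overhead and drive throughput vary):

        # Back-of-the-envelope check: 2 TB over Gigabit Ethernet in 12 hours.
        data_bytes = 2e12                    # 2 TB (decimal)
        window_s = 12 * 3600                 # the 12-hour backup window
        required_mb_s = data_bytes / window_s / 1e6
        print("required: %.1f MB/s" % required_mb_s)   # ~46.3 MB/s
        # Gigabit Ethernet yields roughly 110-118 MB/s after protocol
        # overhead, so the wire has about 2.5x headroom; the drives and the
        # file-size mix (many small files vs. large sequential streams)
        # are the likelier bottlenecks to check.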

    Read the article

  • `find` command not available in web host, how to implement a delete based on modification time using other commands?

    - by CalumJEadie
    I'm creating a simple database backup solution for a client using web hosting at DataFlame. The web hosting account provides access to cron but not a shell. I have a database backup script creating regular backups, and I want to automatically remove those more than N days old. I attempted to use

        find -v $backup_dir -mtime +$keep_days -name "*db.tar.gz" -delete

    but the user executing the script does not have permission to run find. Can you suggest how to implement this without using the find command?
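
    One hedged workaround, assuming the host lets cron invoke a scripting runtime such as Python: replicate find's mtime filter with a glob and a timestamp comparison. A minimal sketch; the path and retention period are placeholders to adapt.

        #!/usr/bin/env python
        # Cron-friendly stand-in for `find ... -mtime +N -delete`.
        import glob
        import os
        import time

        backup_dir = "/home/user/backups"   # hypothetical path
        keep_days = 7                       # placeholder retention period
        cutoff = time.time() - keep_days * 86400

        for path in glob.glob(os.path.join(backup_dir, "*db.tar.gz")):
            if os.path.getmtime(path) < cutoff:
                os.remove(path)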

    Read the article

  • C# Monte Carlo Incremental Risk Calculation optimisation, random numbers, parallel execution

    - by m3ntat
    My current task is to optimise a Monte Carlo simulation that calculates capital adequacy figures by region for a set of obligors. It runs about 10x too slow for where it will need to be in production, given the number of daily runs required, and the granularity of the result figures will at some stage need to improve down to desk (possibly book) level. The code I've been given is basically a prototype, used by business units in a semi-production capacity.

    The application is currently single-threaded, so I'll need to make it multi-threaded. I may look at System.Threading.ThreadPool or the Microsoft Parallel Extensions library, but I'm constrained to .NET 2 on the server at this bank, so I may have to consider this port: http://www.codeproject.com/KB/cs/aforge_parallel.aspx. I'm trying my best to get them to upgrade to .NET 3.5 SP1, but it's a major exercise in an organisation of this size and might not be possible within my contract time frame.

    I've profiled the application using the trial of dotTrace (http://www.jetbrains.com/profiler). What other good profilers exist? Free ones?

    A lot of the execution time is spent generating uniform random numbers and then translating them to normally distributed random numbers. The code uses a C# Mersenne twister implementation; I'm not sure where it came from, or whether it's the best way to go about this (or the best implementation) for generating the uniform random numbers. These are then translated to a normally distributed version for use in the calculation (I haven't delved into the translation code yet).

    Also, what is the experience using the following? http://quantlib.org, http://www.qlnet.org (a C# port of QuantLib), or http://www.boost.org. Any alternatives you know of? I'm a C# developer, so I'd prefer C#, but a wrapper to C++ shouldn't be a problem, should it? Maybe it's even faster leveraging the C++ implementations. I'm thinking some of these libraries will have the fastest method to directly generate normally distributed random numbers, without the translation step, and they may have other functions that will be helpful in the subsequent calculations.

    Also, the machine this runs on is a quad-core Opteron 275 with 8 GB of memory, but Windows Server 2003 Enterprise 32-bit. Should I advise them to upgrade to a 64-bit OS? Any links to articles supporting that decision would really be appreciated. Anyway, any advice and help you may have is really appreciated.
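
    On generating normals directly: a common technique is the Marsaglia polar variant of Box-Muller, which maps pairs of uniforms straight to pairs of independent N(0,1) deviates with no separate translation step. A minimal sketch in Python for illustration only; a C# version (fed by the Mersenne twister, say) would be structurally identical.

        import math
        import random

        def gauss_pair(rng=random.random):
            # Marsaglia polar method: draw points uniformly in the unit
            # disc, then scale; yields two independent standard normal
            # deviates per accepted pair, without the trig calls of the
            # basic Box-Muller transform.
            while True:
                u = 2.0 * rng() - 1.0
                v = 2.0 * rng() - 1.0
                s = u * u + v * v
                if 0.0 < s < 1.0:
                    f = math.sqrt(-2.0 * math.log(s) / s)
                    return u * f, v * f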

    Read the article

  • Incremental hot deployment on Tomcat with Maven and NetBeans

    - by deamon
    I'm using NetBeans 6.8, Tomcat 6, and Maven 2.2, and I want to see changes in my code immediately in the browser (showing http://localhost:8080) after saving a file. The tomcat-maven-plugin has the following configuration:

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>tomcat-maven-plugin</artifactId>
            <version>1.0-beta-1</version>
        </plugin>

    According to the output, it performs in-place deployment. What can I do to see changes to my Java code immediately in the browser?

    Read the article

  • How to merge-copy multiple folders in Outlook?

    - by user553702
    In MS Outlook, I need to be able to incrementally copy items in multiple folders in the Exchange account to a local PST file with a mirrored folder structure. I need the items in each folder to be combined into the destination. For example, let's say on the server account I have a folder tree like this:

        Inbox
        SortedEmails1
        SortedEmails2
        SortedEmails3

    I also have these same four folders in the local PST file, which I want to keep growing as I incrementally pull more messages from the Exchange server. Messages from "Inbox" should go to the local "Inbox", messages from "SortedEmails1" should go into "SortedEmails1" in the local PST, etc. I'd like to avoid manually iterating into every single folder and copying items. How can I do this?

    Read the article

  • Android - incremental status bar notification icon

    - by DroidIn.net
    You know what I'm talking about: for example, when you get multiple new emails, the notification icon in the status bar is augmented with a little red circle that contains the number of unread mails. Twitroid has the same icon. Any idea how it's done? I don't think (or so I hope) there are 10,000 similar icons. Is this red circle generated and overlaid on the notification icon? If so, any code snippets would be much appreciated.

    Read the article

  • Incremental Timer

    - by Donal Rafferty
    I'm currently using a Timer and TimerTask to perform some work every 30 seconds. My problem is that after each run I want to increase the Timer's interval. So, for example, it starts off with 30 seconds between firings, but I then want to add 10 seconds to the interval, so that the Timer waits 40 seconds before it fires the next time. Here is my current code:

        public void StartScanning() {
            scanTask = new TimerTask() {
                public void run() {
                    handler.post(new Runnable() {
                        public void run() {
                            wifiManager.startScan();
                            scanCount++;
                            if (SCAN_INTERVAL_TIME <= SCAN_MAX_INTERVAL) {
                                SCAN_INTERVAL_TIME = SCAN_INTERVAL_TIME + SCAN_INCREASE_INTERVAL;
                                t.schedule(scanTask, 0, SCAN_INTERVAL_TIME);
                            }
                        }
                    });
                }
            };
            Log.d("SCAN_INTERVAL_TIME ** ", "SCAN_INTERVAL_TIME ** = " + SCAN_INTERVAL_TIME);
            t.schedule(scanTask, 0, SCAN_INTERVAL_TIME);
        }

    But the above gives the following error:

        05-26 11:48:02.472: ERROR/AndroidRuntime(4210): java.lang.IllegalStateException: TimerTask is scheduled already

    Calling cancel or purge doesn't help. So I was wondering if anyone can help me find a solution? Is a Timer even the right way to approach this?
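
    For what it's worth, a java.util.TimerTask instance can only be scheduled once, which is what the IllegalStateException complains about. A common pattern for a growing interval is a chain of one-shot timers, where each run schedules a fresh task with the longer delay. A sketch of that pattern in Python, with threading.Timer standing in for Java's Timer/TimerTask pair; do_scan and the interval constants are placeholders.

        import threading

        def do_scan():
            # Placeholder for the real work (e.g. kicking off a Wi-Fi scan).
            print("scanning")

        def start_scanning(interval=30.0, increase=10.0, max_interval=120.0):
            # Chain of one-shot timers: each run schedules a *new* timer with
            # the grown interval instead of rescheduling the same task. The
            # Java analogue is creating a fresh TimerTask for every
            # Timer.schedule(task, delay) call.
            def scan():
                do_scan()
                start_scanning(min(interval + increase, max_interval),
                               increase, max_interval)
            threading.Timer(interval, scan).start()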

    Read the article

  • Handling incremental Data Modeling Changes in Functional Programming

    - by Adam Gent
    Most of the problems I have to solve in my job as a developer have to do with data modeling. For example, in an OOP web application world I often have to change the data properties of an object to meet new requirements. If I'm lucky, I don't even need to programmatically add new "behavior" code (functions, methods); instead I can declaratively add validation and even UI options by annotating the property (Java). In functional programming it seems that adding new data properties requires lots of code changes because of pattern matching and data constructors (Haskell, ML). How do I minimize this problem? This seems to be a recognized problem, as Xavier Leroy states nicely on page 24 of "Objects and Classes vs. Modules". To summarize for those who don't have a PostScript viewer: it basically says FP languages are better than OOP languages for adding new behavior over data objects, but OOP languages are better for adding new data objects/properties. Are there any design patterns used in FP languages to help mitigate this problem? I have read Philip Wadler's recommendation of using monads to help with this modularity problem, but I'm not sure I understand how.

    Read the article

  • Incremental build with NetBeans and Maven for jetty hot deployment

    - by deamon
    After my unsuccessful attempt to run Tomcat with hot deployment from NetBeans with Maven, I've tried Jetty. The jetty-maven-plugin doc gave me an important hint: "The plugin will automatically ensure the classes are rebuilt and up-to-date before deployment. If you change the source of a class and your IDE automatically compiles it in the background, the plugin will pick up the changed class." If I look at $myproject/target/classes/... in the project directory, I can see that NetBeans doesn't compile and refresh the class file on saving. I need to build the project explicitly to update the file, and then Jetty picks up the change. (The plugin param "scanIntervalSeconds" is set to 1.) How can I tell NetBeans to compile on save and update the class file so that Jetty can pick up the change?

    Read the article

  • incremental OL using letters (jQuery)

    - by jquery n00b
    Hi, I'm trying to dynamically add a span to an ol, where the counter should be in letters, e.g.:

        A result
        B result
        C result

    I've got this code, which works great for numbers, but I've no idea what to do to it to turn the numbers into letters:

        jQuery(document).ready(function() {
            jQuery('.results ol').each(function () {
                jQuery(this).find('li').each(function (i) {
                    i = i + 1;
                    jQuery(this).prepend('<span class="marker">' + i + '</span>');
                });
            });
        });

    Any help is greatly appreciated!
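
    The usual trick is to map the zero-based index to a character code; in JavaScript that would be String.fromCharCode(65 + i) for A through Z. Here is a sketch of the mapping in Python for illustration, extended spreadsheet-style so labels keep working past the 26th item (AA, AB, ...):

        def letter_label(i):
            # 0 -> 'A', 1 -> 'B', ..., 25 -> 'Z', 26 -> 'AA' (spreadsheet style)
            label = ""
            i += 1                      # work 1-based, like column numbers
            while i:
                i, r = divmod(i - 1, 26)
                label = chr(ord("A") + r) + label
            return label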

    Read the article

  • OpenCL - incremental summation during compute

    - by user1721997
    I'm an absolute novice in OpenCL programming. For my app (a molecular simulation) I wrote a kernel to calculate the intermolecular potential of a Lennard-Jones liquid. In this kernel I need to compute the cumulative value of the potential of all particles against one:

        __kernel void Molsim(__global const float* inmatrix, __global float* fi,
                             const int c, const float r1, const float r2,
                             const float r3, const float rc, const float epsilon,
                             const float sigma, const float h1, const float h23)
        {
            float fi0;
            float fi1;
            float d;
            unsigned int i = get_global_id(0); // number of particles (typically 2000)

            if (c != i) {
                // potential before particle movement
                d = sqrt(pow((0.5*h1 - fabs(0.5*h1 - fabs(inmatrix[c*3] - inmatrix[i*3]))), 2.0) +
                         pow((0.5*h23 - fabs(0.5*h23 - fabs(inmatrix[c*3+1] - inmatrix[i*3+1]))), 2.0) +
                         pow((0.5*h23 - fabs(0.5*h23 - fabs(inmatrix[c*3+2] - inmatrix[i*3+2]))), 2.0));
                if (d < rc) {
                    fi0 = 4.0*epsilon*(pow(sigma/d, 12.0) - pow(sigma/d, 6.0));
                } else {
                    fi0 = 0;
                }

                // potential after particle movement
                d = sqrt(pow((0.5*h1 - fabs(0.5*h1 - fabs(r1 - inmatrix[i*3]))), 2.0) +
                         pow((0.5*h23 - fabs(0.5*h23 - fabs(r2 - inmatrix[i*3+1]))), 2.0) +
                         pow((0.5*h23 - fabs(0.5*h23 - fabs(r3 - inmatrix[i*3+2]))), 2.0));
                if (d < rc) {
                    fi1 = 4.0*epsilon*(pow(sigma/d, 12.0) - pow(sigma/d, 6.0));
                } else {
                    fi1 = 0;
                }

                // cumulative difference of potentials
                fi[0] += fi1 - fi0;
            }
        }

    My problem is in the line fi[0] += fi1 - fi0;: the one-element vector fi[0] ends up with wrong results. I've read something about sum reduction, but I don't know how to do it during the calculation. Is there any simple solution to my problem?
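
    For context: every work item in the kernel above reads and updates fi[0] without synchronisation, so the additions race and updates are lost. A common fix is to have each work item write its own partial result to fi[i] and then sum the partials, either on the host or with a tree reduction. The reduction logic, sketched in Python under the assumption of a power-of-two element count (a real kernel would run each pass over local memory with a barrier(CLK_LOCAL_MEM_FENCE) between strides):

        def tree_reduce(vals):
            # Work-group style tree reduction: each pass folds the upper half
            # of the active range into the lower half, halving the range,
            # just as a reduction kernel does with a barrier between strides.
            # Assumes len(vals) is a power of two; pad with zeros otherwise.
            stride = len(vals) // 2
            while stride > 0:
                for i in range(stride):
                    vals[i] += vals[i + stride]
                stride //= 2
            return vals[0]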

    Read the article

  • incremental way of counting quantiles for large set of data

    - by Gacek
    I need to compute the quantiles for a large set of data. Let's assume we can get the data only in portions (i.e. one row of a large matrix at a time). To compute the Q3 quantile, one needs to get all the portions of the data, store them somewhere, then sort and index:

        List<double> allData = new List<double>();
        // this is only an example; in fact the portions of data are not rows of some matrix
        foreach (var row in matrix)
        {
            allData.AddRange(row);
        }
        allData.Sort();
        double p = 0.75 * allData.Count;
        int idQ3 = (int)Math.Ceiling(p) - 1;
        double Q3 = allData[idQ3];

    Now, I would like to find a way of computing this without storing the data in a separate variable. The best solution would be to compute some mid-result parameters for the first row and then adjust them step by step for the next rows. Notes:

        - These datasets are really big (ca. 5000 elements in each row).
        - Q3 can be estimated; it doesn't have to be an exact value.
        - I call the portions of data "rows", but they can have different lengths! Usually the variation is small (+/- a few hundred samples), but it varies!

    This question is similar to http://stackoverflow.com/questions/1058813/on-line-iterator-algorithms-for-estimating-statistical-median-mode-skewness but I need quantiles. There are also a few articles on this topic, e.g. http://web.cs.wpi.edu/~hofri/medsel.pdf and http://portal.acm.org/citation.cfm?id=347195&dl. But before I try to implement these, I wanted to ask whether there are any other, quicker ways of computing the 0.25/0.75 quantiles?
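
    Given that an estimate is acceptable, one lightweight option is a stochastic-approximation estimator: keep a single running value per quantile and nudge it on every sample, with no storage or sorting. A sketch in Python; the fixed step size is a tuning knob, and a slowly decaying step is needed for true convergence on stationary data.

        def stream_quantile(stream, tau=0.75, step=0.1):
            # Running tau-quantile estimate: move up by step*tau when a sample
            # lands above the estimate, down by step*(1 - tau) when it lands
            # at or below it. The two drifts balance exactly where a fraction
            # tau of the data lies below the estimate, i.e. at the
            # tau-quantile. Gives an estimate, not an exact order statistic.
            q = None
            for x in stream:
                if q is None:
                    q = float(x)        # seed with the first sample
                elif x > q:
                    q += step * tau
                else:
                    q -= step * (1.0 - tau)
            return q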

    Read the article

  • Incremental Union?

    - by cam
    I'm trying to describe a Sudoku board in C++ with a union:

        union Board {
            int board[9][9];
            int sec1[3][3];
            int sec2[3][3];
            int sec3[3][3];
            int sec4[3][3];
            int sec5[3][3];
            int sec6[3][3];
            int sec7[3][3];
            int sec8[3][3];
            int sec9[3][3];
        };

    Would each section of the board correspond to the correct part of the array? I.e., would sec4 correspond to board[4-6][0-3]? Is there a better way to do this sort of thing (specifically, describing a Sudoku board)?
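
    For reference, all members of a C/C++ union start at the same address, so each secN would alias the same first nine ints of board rather than a distinct 3x3 box; plain index arithmetic over the 9x9 array is the usual alternative. The arithmetic, sketched in Python for illustration:

        def box(board, b):
            # Extract 3x3 box b (0-8, numbered row-major across the grid)
            # from a 9x9 board via index arithmetic -- the per-box views a
            # union cannot provide, since union members all overlap.
            r0, c0 = 3 * (b // 3), 3 * (b % 3)
            return [row[c0:c0 + 3] for row in board[r0:r0 + 3]]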

    Read the article

  • Outlook 2007 Backup to D:\Outlook Fails - Access Denied, Write-Protected or File In Use

    - by nicorellius
    I can successfully save the Outlook PST file to the default location on the C drive (C:\Documents and Settings\user\ ... \Outlook), but when I change the backup directory to Outlook on the D drive I get the error:

        Cannot copy Outlook: Access is denied. Make sure the disk is not full or write-protected and that the file is not currently in use.

    I suppose it is not that crucial that I save this file here, but I have never seen this problem before and I have made this same change in the past. I did some searching in this knowledge exchange as well as elsewhere on changing permissions, etc., but this didn't help. I discovered that the folder on my D drive (called Outlook) is neither write-protected nor read-only, as I can save to and modify files in that directory, as well as rename and delete the directory itself. At the time when I installed this version of Outlook, I used a previously saved Personal Folder (a backup PST file), and I thought having this still open in Outlook was causing the trouble. But I closed it and still have the same problem. I know this is probably a silly error on my part, but I would like to figure it out. I'm new to Super User, but the answers I see are usually very good, so I thought I would post my first question. Thanks in advance.

    Read the article

  • SQL 2000 Backup/Export Process - Can't find SQL 2000 Enterprise Manager, Can't use Mgmt Studio Express

    - by 1nsane
    I need to make a backup of a client's SQL 2000 database, however there are a few issues preventing me from doing so using the traditional methods. I've tried using SQL Management Studio Express, but the host doesn't give sufficient privileges to create a backup and I'm getting some strange error messages. I've also tried doing the "Generate Scripts" to recreate the schema, then using the DTS Wizard to migrate the data, but the IDs set up with the identity specification property are not consistent with the live database once copied over. This results in some foreign key breakage... If I remember correctly, I was able to use Microsoft SQL 2000 Enterprise Manager to perform the task before, but I can't find this anywhere... it seems Microsoft has pulled most SQL Server 2000 stuff from their site. Does anyone know where I can find a copy of Enterprise Manager (or a trial of SQL Server 2000, which I believe comes with the component)? Or conversely, does anyone know of any other tools (preferably non-commercial) that are capable of mirroring remote SQL Server 2000 DBs? Thanks!

    Read the article

  • Backup Picasa 'people' tags data

    - by pelms
    OK, so I've spent a fair amount of time putting names to faces in Picasa 3.5, but in a few days (hopefully) my copy of Windows 7 should arrive and I'll need to reinstall Windows. So, does anyone know what I need to back up so that I don't have to re-enter all those name tags? N.B. I'm on Windows 7 RC and know that I don't have to do a clean reinstall, but I would prefer to.

    Outcome: I clean-installed Windows 7 and downloaded and installed Picasa. Unfortunately, the download link on the UK Picasa homepage still pointed to Picasa 3.0 (rather than 3.5), which doesn't have face recognition. This scanned my photo folders and overwrote the picasa.ini files along with the people information :¬( Fortunately I'd backed up the photos before installing Win 7, so after uninstalling Picasa 3.0 (along with its database), restoring the photos from backup and installing Picasa 3.5, I finally got my face names back.

    Extra: Google has now posted advice on how to migrate to Windows 7 and keep your Picasa database, meaning that it will not need to rescan your photos and will retain all information about them, including name tags. They have a method for upgrading and for a clean install of Win 7. Basically you need to back up "C:\Users\%username%\AppData\Local\Google\Picasa2" and "C:\Users\%username%\AppData\Local\Google\Picasa2Albums".

    Read the article

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long-term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives ripping the tape out). So... write to 2 tapes, I hear you say. Is that what others do? Have 2 (or more) tapes of the same data for redundancy? But then the other issue is that tapes usually cannot be read by a different backup software vendor's product. E.g. if you go from Arcserve to Backup Exec to Commvault over 10 years, you would need to keep all 3 systems so that you could restore old data. Likewise for hardware: old tapes might not be barcoded, might not be compatible with the new library, etc. So do you keep old tape hardware AND old software, just in case you might need to restore a 10-year-old file? Or, when you move to a new backup system, do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

    Read the article

  • PostgreSQL continuous archiving not running archive_command

    - by Whatsit
    I've been trying to set up continuous archiving for a simple test PostgreSQL 9.0 database, as per the documentation. In postgresql.conf I've set:

        wal_level = archive
        archive_mode = on
        archive_command = 'touch /home/myusername/backup/testtouch'
        archive_timeout = 30s

    ...and restarted PostgreSQL. The file listed by touch never appears. I can manually run the touch command and it works as expected. If I try to create a backup, it waits forever for the archive_command. In psql:

        postgres=# SELECT pg_start_backup('touchtest');
         pg_start_backup
        -----------------
         0/14000020
        (1 row)

        postgres=# SELECT pg_stop_backup();
        NOTICE:  pg_stop_backup cleanup done, waiting for required WAL segments to be archived
        WARNING:  pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)
        HINT:  Check that your archive_command is executing properly.  pg_stop_backup can be cancelled safely, but the database backup will not be usable without all the WAL segments.

    What would cause this? How can I troubleshoot it? Additional info: running on CentOS 5.4, with PostgreSQL 9.0.2 installed as root.

    Read the article

  • Why should one have a secondary DNS server?

    - by Sam Levin
    I'm very confused. I basically understand how DNS works. Here's an example that helps illustrate what I'm having trouble understanding. Right now, I run a small web server. I use my provider's DNS manager, so I don't have a DNS server hosted on the machine. Let's say for a second that I don't use my host's DNS, and I decide to set up a DNS server on my server instead. Hypothetical scenario: my entire server goes down, DNS included. Why do I need backup DNS? If the server is down, who cares that the DNS server is down too? Even if DNS were still up (hosted somewhere other than the crashed server), it wouldn't help anyway, since the server it points to would be down. Is the point of having secondary DNS to be able to change the IP addresses that your DNS server points to, so that if your web server was down you could redirect traffic to a backup? How would you switch to the secondary provider in the event that your main DNS provider becomes unavailable? Is a backup DNS system basically up all the time? How is it configured? Is it just an exact clone of the DNS server you would have on your server? Do they run simultaneously? Hopefully someone can see what I'm hung up on and provide some guidance. Thanks

    Read the article

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution, but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however it's acting pretty strange with a few files (note: the company name has been changed in the paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL

        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%

        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Company name data\ but I altered the script a bit to test the problem. The following files in the source are not copied to the dest:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named:

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't open when I ran Robocopy, so it's not a locking issue. Robocopy is running as administrator, so it's not a permissions issue. There's no trace that these files were even attempted, as no errors are output in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.

    Read the article

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and already have a 2008 model. I wonder how I move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems to be a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade this afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs? My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the HDD, this should be the fastest way. This also has the benefit that the 2008 MacBook then contains a brand-new installation, and I don't have to remove all my stuff or reinstall Mac OS before I give it away. My question on that second idea: does Snow Leopard handle this correctly? I reboot with the new hardware and everything just works fine? So in a nutshell: what would you do, restore from backup or swap drives? And what about the new software?

    Read the article
