Search Results

Search found 34513 results on 1381 pages for 'end task'.

Page 605 of 1381

  • Why did File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images to a different location while querying each file's name to retrieve some relevant info and store it elsewhere. The script that used File::Find finished after only 20,462 files. Out of curiosity I wrote a tiny recursive function to count the items, which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test. The problem itself can be solved differently by querying for file names first instead of traversing the directory. I'm just wondering what could have caused File::Find to finish at a small fraction of all the files. The data is stored on an NTFS file system.
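
    As a way to double-check counts like these independently of File::Find, here is a minimal Python sketch (not from the original question; the directory path is hypothetical) that tallies plain files and directories separately while walking the tree, so a files-only count can be compared against the Explorer total that includes folders.

        import os

        root = r"D:\images"   # hypothetical path, substitute the real directory
        files = dirs = 0
        for _, dirnames, filenames in os.walk(root):
            dirs += len(dirnames)    # subdirectories seen at this level
            files += len(filenames)  # plain files seen at this level
        print(f"files={files} dirs={dirs} total={files + dirs}")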

    Read the article

  • SSIS: Update a RecordSet passed into a VB.NET ScriptTask

    - by Zambouras
    What I am trying to accomplish is using this script task to continually insert into a generated RecordSet. I know how to access it in the script; however, I do not know how to update it after my changes to the DataTable have been made. The code is below:

        Dim EmailsToSend As New OleDb.OleDbDataAdapter
        Dim EmailsToSendDt As New DataTable("EmailsToSend")
        Dim CurrentEmailsToSend As New DataTable
        Dim EmailsToSendRow As DataRow

        EmailsToSendDt.Columns.Add("SiteMgrUserId", System.Type.GetType("System.Integer"))
        EmailsToSendDt.Columns.Add("EmailAddress", System.Type.GetType("System.String"))
        EmailsToSendDt.Columns.Add("EmailMessage", System.Type.GetType("System.String"))

        EmailsToSendRow = EmailsToSendDt.NewRow()
        EmailsToSendRow.Item("SiteMgrUserId") = siteMgrUserId
        EmailsToSendRow.Item("EmailAddress") = siteMgrEmail
        EmailsToSendRow.Item("EmailMessage") = EmailMessage.ToString

        EmailsToSend.Fill(CurrentEmailsToSend, Dts.Variables("EmailsToSend").Value)
        EmailsToSendDt.Merge(CurrentEmailsToSend, True)

    Basically my goal is to create a single row in a new data table, get the current record set, and merge the results so I have my result DataTable. Now I just need to update the ReadWriteVariable for my script. I do not know if I have to do anything special or if I can just assign the DataTable directly to it, i.e. Dts.Variables("EmailsToSend").Value = EmailsToSendDt. Thanks for the help in advance.

    Read the article

  • Optimal Activity Stack Order for a Main Menu button?

    - by kefs
    I'm developing an app that starts with a main menu and then continues through three different steps (activities) to a final activity where the task is marked complete. On this last activity I have several additional options (add note, share, etc.) and I also have a "return to main menu" button. My question is: how do I stack the activities so that calling finish() on the final activity will return to the first activity launched? I am currently just starting the new activity via an intent, so pressing back on this screen doesn't return me to home as I would like. Sorry in advance for the convoluted description.

    Read the article

  • Make a usable Join relationship with LINQ on top of a database CSV design error

    - by jdk
    I'm looking for a way to fix and/or abstract away a comma-separated values (CSV) list in a database field in order to reconstruct a usable relationship such that I can properly join the two tables below and query them using LINQ and its Join method. Following is a sample showing the Person table, with the CsvArticleIds field having a CSV value to represent a one-to-many association with Article records.

        TABLE [dbo].[Person]
        Id  Name    CsvArticleIds
        --  ------  -------------
        1   Joe     "15,22"
        5   Ed      "22"
        10  Arnie   "8,15,22"

    (Of course a link table should have been created; nonetheless the relationship with articles is trapped inside that list of CSV values.)

        TABLE [dbo].[Article]
        Id  Title
        --  -----------------------------------------
        8   Beginning C#
        15  A Historic look at Programming in the 90s
        22  Gardening in January

    Additional info:
    - The fix can be at any level: C#.NET or SQL Server.
    - Something easy, because I will be repeating the solution for many other CSV values in other tables. Elegant is nice too.
    - Not looking for efficiency, because this is part of a one-time data migration task and can take as long as it wants to run.
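
    For illustration only (the question asks about LINQ/C#; this is a hypothetical Python sketch with made-up variable names), the usual shape of the workaround is to split the CSV field into (person id, article id) pairs in memory and join through that derived link set:

        people = {1: ("Joe", "15,22"), 5: ("Ed", "22"), 10: ("Arnie", "8,15,22")}
        articles = {8: "Beginning C#", 15: "A Historic look at Programming in the 90s", 22: "Gardening in January"}

        # Derive the link "table" the schema never had: one (person_id, article_id) row per CSV entry.
        links = [(pid, int(aid)) for pid, (_, csv) in people.items() for aid in csv.split(",")]

        # Now an ordinary join is possible.
        for pid, aid in links:
            print(people[pid][0], "->", articles[aid])

    In C# the same shape would typically be a SelectMany over the split ids followed by a Join, but the exact LINQ calls should be checked against the actual data model.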

    Read the article

  • Can't access my files in ASP.NET web site

    - by jumbojs
    I'm having a very difficult time. I am running Windows Server 2008, and I have an AbleCommerce site using ASP.NET with C#. I'm writing an automated task that FTPs some XML files down into a local directory on our web server; the program then parses each XML file and saves information to our database. The problem is that once I save the files to our local directory, my program has no access to them. The NETWORK SERVICE user's permissions aren't being inherited by the XML files, so my program can't do anything with them. I can manually change the permissions, but that isn't automated and won't work. How can I get this to work? Help please, it's very frustrating.

    Read the article

  • Drupal - Use lightbox with Views (Rel attribute) - in output link

    - by kilrizzy
    In Drupal I have two image fields: one to act as a thumbnail, and the other the image that will open when the thumbnail is clicked. The only way I could find to link the two was to use the "Output this field as a link" option and link to the image field. This works, so when I click the thumbnail it opens the larger image; however, I would like to use Lightbox2 for this task, but in the "Output this field as a link" options there is no way to set the "rel" attribute. Is there a way to either set the rel attribute or invoke the lightbox by setting a class?

    Read the article

  • How to specify schema location in an xsd file?

    - by Manoj
    I have an XSD file, Foo.xsd. I tried the following ways to refer to it in a WSDL file, but neither works.

    1) Placed the XSD file on the local file system and imported it as:

        <xsd:import namespace="http://ws.test.com/" schemaLocation="file:///D:/wsdl/Foo.xsd"></xsd:import>

    2) Placed the XSD file in the web root folder and imported it as:

        <xsd:import namespace="http://ws.test.com/" schemaLocation="http://localhost:8080/Xfire/Foo.xsd"></xsd:import>

    When I run the client I get null for the fields of the response object, but it works when I embed the type definition inside the WSDL itself. How do we specify the path to external XSDs? I am using XFire 1.2.6 for generating web services. The client is generated using the XFire WSGen ant task.

    Read the article

  • Error 5 partition table invalid or corrupt

    - by Clodoaldo
    I'm trying to add a second SSD to a CentOS 6 system, but I get "Error 5 partition table invalid or corrupt" at boot. The system already has a single SSD (sdb) and a pair of HDDs (sd{a,c}) in a RAID 1 array from which it boots. It is as if the new SSD takes the place of one of the devices of the RAID array. Is that what is happening? How do I avoid that or rearrange the setup?

        # cat fstab
        UUID=967b4035-782d-4c66-b22f-50244fe970ca /     ext4 defaults         1 1
        UUID=86fd06e9-cdc9-4166-ba9f-c237cfc43e02 /boot ext4 defaults         1 2
        UUID=72552a7a-d8ae-4f0a-8917-b75a6239ce9f /ssd  ext4 discard,relatime 1 2
        UUID=8000e5e6-caa2-4765-94f8-9caeb2bda26e swap  swap defaults         0 0
        tmpfs   /dev/shm  tmpfs  defaults        0 0
        devpts  /dev/pts  devpts gid=5,mode=620  0 0
        sysfs   /sys      sysfs  defaults        0 0
        proc    /proc     proc   defaults        0 0

        # ll /dev/disk/by-id/
        total 0
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 ata-OCZ-VERTEX3_OCZ-43DSRFTNCLE9ZJXX -> ../../sdb
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 ata-OCZ-VERTEX3_OCZ-43DSRFTNCLE9ZJXX-part1 -> ../../sdb1
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 ata-ST3500413AS_5VMT49E3 -> ../../sdc
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 ata-ST3500413AS_5VMT49E3-part1 -> ../../sdc1
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 ata-ST3500413AS_5VMT49E3-part2 -> ../../sdc2
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 ata-ST3500413AS_5VMT49E3-part3 -> ../../sdc3
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 ata-ST3500413AS_5VMTJNAJ -> ../../sda
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 ata-ST3500413AS_5VMTJNAJ-part1 -> ../../sda1
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 ata-ST3500413AS_5VMTJNAJ-part2 -> ../../sda2
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 ata-ST3500413AS_5VMTJNAJ-part3 -> ../../sda3
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 md-name-localhost.localdomain:0 -> ../../md0
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 md-name-localhost.localdomain:1 -> ../../md1
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 md-name-localhost.localdomain:2 -> ../../md2
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 md-uuid-a04d7241:8da6023e:f9004352:107a923a -> ../../md1
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 md-uuid-a22c43b9:f1954990:d3ddda5e:f9aff3c9 -> ../../md0
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 md-uuid-f403a2d0:447803b5:66edba73:569f8305 -> ../../md2
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 scsi-SATA_OCZ-VERTEX3_OCZ-43DSRFTNCLE9ZJXX -> ../../sdb
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 scsi-SATA_OCZ-VERTEX3_OCZ-43DSRFTNCLE9ZJXX-part1 -> ../../sdb1
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 scsi-SATA_ST3500413AS_5VMT49E3 -> ../../sdc
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 scsi-SATA_ST3500413AS_5VMT49E3-part1 -> ../../sdc1
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 scsi-SATA_ST3500413AS_5VMT49E3-part2 -> ../../sdc2
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 scsi-SATA_ST3500413AS_5VMT49E3-part3 -> ../../sdc3
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 scsi-SATA_ST3500413AS_5VMTJNAJ -> ../../sda
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 scsi-SATA_ST3500413AS_5VMTJNAJ-part1 -> ../../sda1
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 scsi-SATA_ST3500413AS_5VMTJNAJ-part2 -> ../../sda2
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 scsi-SATA_ST3500413AS_5VMTJNAJ-part3 -> ../../sda3
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 wwn-0x5000c500383621ff -> ../../sdc
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 wwn-0x5000c500383621ff-part1 -> ../../sdc1
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 wwn-0x5000c500383621ff-part2 -> ../../sdc2
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 wwn-0x5000c500383621ff-part3 -> ../../sdc3
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 wwn-0x5000c5003838b2e7 -> ../../sda
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 wwn-0x5000c5003838b2e7-part1 -> ../../sda1
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 wwn-0x5000c5003838b2e7-part2 -> ../../sda2
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 wwn-0x5000c5003838b2e7-part3 -> ../../sda3
        lrwxrwxrwx. 1 root root  9 Jun 15 23:50 wwn-0x5e83a97f592139d6 -> ../../sdb
        lrwxrwxrwx. 1 root root 10 Jun 15 23:50 wwn-0x5e83a97f592139d6-part1 -> ../../sdb1

        # fdisk -l
        Disk /dev/sdb: 120.0 GB, 120034123776 bytes
        255 heads, 63 sectors/track, 14593 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x79298ec9

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1       14594   117219328   83  Linux

        Disk /dev/sdc: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000d99de

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1        1275    10240000   fd  Linux raid autodetect
        /dev/sdc2   *        1275        1339      512000   fd  Linux raid autodetect
        /dev/sdc3            1339       60802   477633536   fd  Linux raid autodetect

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000b3327

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1        1275    10240000   fd  Linux raid autodetect
        /dev/sda2   *        1275        1339      512000   fd  Linux raid autodetect
        /dev/sda3            1339       60802   477633536   fd  Linux raid autodetect

        Disk /dev/md0: 10.5 GB, 10484641792 bytes
        2 heads, 4 sectors/track, 2559727 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

        Disk /dev/md2: 489.1 GB, 489095557120 bytes
        2 heads, 4 sectors/track, 119408095 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md2 doesn't contain a valid partition table

        Disk /dev/md1: 524 MB, 524275712 bytes
        2 heads, 4 sectors/track, 127997 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md1 doesn't contain a valid partition table

        # cat /etc/grub.conf
        default=0
        timeout=5
        splashimage=(hd2,1)/grub/splash.xpm.gz
        hiddenmenu
        title CentOS (2.6.32-220.17.1.el6.x86_64)
                root (hd2,1)
                kernel /vmlinuz-2.6.32-220.17.1.el6.x86_64 ro root=UUID=967b4035-782d-4c66-b22f-50244fe970ca rd_MD_UUID=f403a2d0:447803b5:66edba73:569f8305 rd_MD_UUID=a22c43b9:f1954990:d3ddda5e:f9aff3c9 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=br-abnt2 crashkernel=auto rhgb quiet
                initrd /initramfs-2.6.32-220.17.1.el6.x86_64.img

    Read the article

  • Cilk or Cilk++ or OpenMP

    - by Aman Deep Gautam
    I'm creating a multi-threaded application on Linux. Here is the scenario: suppose I have x instances of a class BloomFilter and some y GB of data (more than the available memory). I need to test membership of this y GB of data against each of the Bloom filter instances. It is pretty clear that parallel programming will help speed up the task; moreover, since I am only reading the data, it can be shared across all processes or threads. Now I am confused about which one to use: Cilk, Cilk++ or OpenMP (which one is better)? I am also confused about whether to go for multithreading or multiprocessing.
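
    The question is about Cilk/Cilk++/OpenMP, but the read-only sharing argument is language-independent. Here is a small hypothetical Python sketch (the class and data are invented stand-ins) showing the shape of it: the data is loaded once, and every worker only reads it, so nothing needs to be copied or locked.

        from concurrent.futures import ThreadPoolExecutor

        class BloomFilter:                       # stand-in for the real filter class
            def __init__(self, seed):
                self.seed = seed
            def __contains__(self, item):
                return hash((self.seed, item)) % 10 == 0   # toy membership test

        filters = [BloomFilter(s) for s in range(4)]        # the "x instances"
        data = ["record-%d" % i for i in range(1_000_000)]  # stands in for the y GB of input

        def count_members(bf):
            # Read-only access to the shared `data` list: no copies, no locks needed.
            return sum(1 for item in data if item in bf)

        with ThreadPoolExecutor(max_workers=len(filters)) as pool:
            print(list(pool.map(count_members, filters)))

    Note that CPython's GIL keeps these threads from running CPU-bound work truly in parallel; the point here is only the shared read-only data pattern, which OpenMP or Cilk threads in C/C++ would exploit with real parallelism and without duplicating the data per worker.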

    Read the article

  • Same network, same switch, but computers can't ping each other [closed]

    - by Sue
    Possible Duplicate: How does IPv4 Subnetting Work? Each computer (there are only two) can ping the router but can't ping the other, and the firewall is off. They have the same default gateway and very similar IP addresses (just two apart at the end), but the subnet mask is different between the two computers: one ends in 192, the other in 224. There is a switch between them that then connects to the router. Why can't the computers ping each other?
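
    The mismatched masks are the likely culprit. A quick Python sketch (the actual addresses are not given in the question, so these are hypothetical) shows how a .192 (/26) mask and a .224 (/27) mask can make two nearby addresses disagree about whether they share a subnet:

        import ipaddress

        # Hypothetical addresses two apart, as described; masks ending in .192 and .224.
        a = ipaddress.ip_interface("192.168.1.31/255.255.255.192")   # host A, /26
        b = ipaddress.ip_interface("192.168.1.33/255.255.255.224")   # host B, /27

        print(a.network)               # 192.168.1.0/26
        print(b.network)               # 192.168.1.32/27
        print(b.ip in a.network)       # True:  A thinks B is on its own subnet
        print(a.ip in b.network)       # False: B thinks A is off-link and replies via the gateway

    Giving both machines the same mask (whichever one actually matches the router's subnet) lets each compute the same network and ARP for the other directly.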

    Read the article

  • Get filename for puppet template

    - by Noodles
    I have a file that I'd like to reuse for a few different purposes. The file is 90% the same across uses, just slight differences. I'd rather not replicate the content across multiple files in puppet, so is there a way to do something like:

        file { "/tmp/file1":
          content => template("module/template.erb"),
        }
        file { "/tmp/file2":
          content => template("module/template.erb"),
        }

    And in the template:

        Jack John James
        <% if file == "/tmp/file2" %>
        Jim
        <% end %>

    Read the article

  • Versioning CommonAssemblyInfo.cs and MSBuild

    - by James Thigpen
    So I have a CommonAssemblyInfo.cs linked into all the projects in my solution; it is dynamically generated by my rake/albacore scripts and is not checked into source control. I also have a CommonAssemblyInfo.cs.local for use when there is no Ruby available, mainly to be used by devs. Is it possible to have an MSBuild task or something that runs before any of the other projects compile and copies CommonAssemblyInfo.cs.local to CommonAssemblyInfo.cs before trying to compile my solution? I hate having a command you have to just know about and type in order to open and build the solution in Visual Studio.

    Read the article

  • MSSQLServer2008\Instance, Why?

    - by Ice
    Hi, I'm aware of the possibility of creating instances, but I don't know a really good reason to do it. This way one has, by definition, at least two SQL Server services running, but what would that be good for? The two instances have to share all the resources, mainly the RAM. And if you have to rename the server, you will end up with an access path like \NEWSQLServer\OldInstanceName. So what is the use case for instances?

    Read the article

  • Future RAID upgrade in a NAS and HP ProLiant alternatives

    - by edwardmlyte
    I'm thinking of buying the HP ProLiant MicroServer*. My question: if I just put in a single 2TB drive, how easy would it be in the future to upgrade to a second 2TB drive in a RAID-1 setup? Can this be done without formatting the original 2TB drive? *It looks like the £100 cashback offer ends at the end of August, making this system cost around £260 without HDDs. Also, are there any other brands anyone would recommend for all-in-one hardware solutions?

    Read the article

  • Storing millions of URLs in a database for fast pattern matching

    - by Paras Chopra
    I am developing a web-analytics kind of system which needs to log the referring URL, landing page URL and search keywords for every visitor on the website. What I want to do with this collected data is to allow the end user to query it, such as "Show me all visitors who came from Bing.com searching for a phrase that contains 'red shoes'" or "Show me all visitors who landed on a URL that contained 'campaign=twitter_ad'", etc. Because this system will be used on many big websites, the amount of data that needs to be logged will grow really, really fast. So, my questions: a) what would be the best strategy for logging so that scaling the system doesn't become a pain, and b) how do I use that architecture for rapid querying of arbitrary requests? Is there a special method of storing URLs so that querying them gets faster? In addition to the MySQL database that I use, I am exploring (and open to) other alternatives better suited for this task.
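
    One common answer to the "special method of storing URLs" part is to decompose each URL into indexable pieces at log time instead of storing only one opaque string. A hypothetical Python sketch (function and field names invented) of that decomposition:

        from urllib.parse import urlparse, parse_qs

        def decompose(url):
            """Split a URL into separately storable, indexable parts (illustrative only)."""
            parts = urlparse(url)
            return {
                "host": parts.netloc.lower(),
                "path": parts.path,
                "query_params": parse_qs(parts.query),   # e.g. {"campaign": ["twitter_ad"]}
            }

        record = decompose("http://example.com/landing?campaign=twitter_ad&x=1")
        print(record["host"], record["path"], record["query_params"].get("campaign"))

    With the host, path and individual query parameters stored in their own indexed columns or tables, queries like "campaign=twitter_ad" or "referrer host is bing.com" become equality lookups rather than LIKE '%...%' scans over millions of rows.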

    Read the article

  • How do I control script execution time in PHP

    - by mathew
    For example, I have 5 PHP functions on a page which execute when it loads. Each function has its own processing time, and some of them sometimes take longer to complete their task; hence the total loading time of the page is slow. My question is: how do I control the execution time of each function and set a time limit for it? I am aware that there is a built-in function in PHP called set_time_limit(), but it gives a fatal error if the time goes beyond the maximum limit...

    Read the article

  • Integrating Hudson with MS Test?

    - by hangy
    Is it possible to integrate Hudson with MSTest? I am setting up a smaller CI server on my development machine with Hudson right now, just so that I can have some statistics (i.e. FxCop and compiler warnings). Of course, it would also be nice if it could just run my unit tests and present their output. Up to now, I have added the following batch task to Hudson, which makes it run the tests properly:

        "%PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" /runconfig:LocalTestRun.testrunconfig /testcontainer:Tests\bin\Debug\Tests.dll

    However, as far as I know, Hudson does not support analysis of MSTest results yet. Does anyone know whether the TRX files generated by MSTest.exe can be transformed to the JUnit or NUnit result format (because those are supported by Hudson), or whether there is any other way to integrate MSTest unit tests with Hudson?

    Read the article

  • ant cpptask with ivy

    - by AC
    A company I am working for has some C binaries built with Ant using cpptask. They use Ivy to retrieve shared C libraries every time we start a build, which wastes a significant amount of time comparing revisions and downloading, when they only need to be downloaded if the header files have changed. I have added a target which sets a variable that causes the build to skip over the Ivy steps, but I'd like a better solution. I see that cpptask creates a file, history.xml, and only rebuilds the binary if any of the sources have changed. I'd like to know if there is a way to independently test whether the binary needs to be built and, if it does, to fire off the Ivy targets. I'd also like a variable to be set if the binary was rebuilt, so that I can conditionally start an RPM generation task.

    Read the article

  • C# - Thread does not abort on application closing

    - by michal
    Hi, I have an application which does some background task (network listening & reading) in a separate Thread. It seems, however, that the Thread is not being terminated/aborted when I close the application (click the "x" button on the title bar ;)). Is that because the thread's main routine is while(true) {...}? What is the solution here? I was looking for some "interruption" flag on the Thread to use as the condition of the while loop, but didn't find any...

    Read the article

  • Customize keyboard in a Hackintosh

    - by user36912
    I have Mac OS X Snow Leopard installed on an Intel machine. My shortcut keys for Home, End, Copy, Paste and Undo are not working. How can I customize the keyboard so that I can get Copy with Ctrl+C, Paste with Ctrl+V, Undo with Ctrl+Z and so on?

    Read the article

  • Data replication between two web nodes

    - by HTF
    I have a WordPress installation running on two web servers (Nginx). There is unidirectional synchronization from server A to server B, and I'm using lsyncd for this purpose. With this configuration I have to add blog posts from the first web server so the data is replicated to the second one. How can I force access to the WordPress back end to go only through the first web server? Please note that both servers use the same domain for WordPress. Regards

    Read the article

  • Handover document for complete systems

    - by viraptor
    Hi, I need to create a handover document for a fairly large system consisting of all the stuff you'd expect from a telecom deployment: many servers, database clusters which copy some data between them in specific ways, tons of log files, both off-the-shelf and locally developed software, scripts, network configurations, local know-how, etc. It really has as many sysadmin-typical elements as development ones. The target audience of this document is, in the first place, sysadmins who take over the day-to-day operation tasks and some problem solving, and in the second place people who want to learn about the system in general. Is there some place I can learn how to write something like that? It could just as easily be a 10-page "what's where" as a 500-page book about "all things telephony". Maybe it should be more than one document, really. Please link some useful resources / books I could use for this task. PS: this is intended to be internal only; customer interactions etc. are out of scope here.

    Read the article

  • Can I use one VirtualBox disk for multiple machines?

    - by mxp
    I'm not sure what search term to use, and skimming through the VirtualBox manual didn't help me either, so I'll ask my two questions here.

    My setup is this: a PC with dual boot into Windows 7 and a Debian operating system (both 64-bit). I've created a virtual machine (Kubuntu, 64-bit) under Windows and put its VDI file on an SMB share of my NAS. Then I created a VM under Linux using the same settings for memory etc. and assigned the existing VDI file to it. My idea was that I could use that virtual machine from Windows and from Linux as well.

    (1) Is this generally something that should work without problems?

    I noticed that snapshots get me into trouble because they appear not to be visible from the other operating system: the snapshots I took after installing the guest system are not visible under Linux. That's why I shut down the VM after usage and do not save its state while it's running.

    My current problem is this: I used the VM under Windows first, then under Linux. Now it will only start on Linux. When trying this on Windows, the guest OS detects some kind of hard disk error and fails to boot because it cannot mount its drive. Obviously the virtual hard disk won't fail, so it must have something to do with me using it under Linux.

    (2) How can I fix that?

    Update: It also looks like any changes I made in the VM under Linux have been reset by trying to boot it under Windows. It looks like it's back to the latest snapshot. I'm confused...

    Update: The answer to my first question can be found below. In short: it works, as long as you don't use snapshots. The answer to my second question is this: under Windows, set the VM back to the latest snapshot and then discard the snapshot so it gets merged. There should be no snapshots left at the end. If you have multiple snapshots, discard the earliest ones first (Snapshot 1, then 2, 3, ...). I'm not sure what happens if you start at the end (..., 3, 2, 1). This of course leads to some data loss, since you revert all changes made since the last snapshot, but at least the VM is usable again.

    Read the article
