Search Results

Search found 10549 results on 422 pages for 'recovery console'.

Page 145/422

  • jquery-ui .draggable is not a function error

    - by niczoom
    I am getting the following error (using Firefox 3.5.9): $("#dragMe_" + myCount).draggable is not a function $("#dragMe_"+myCount).draggable({ containment: 'parent', axis: 'y' }); Line 231 http://www.liamharding.com/pgi/pgi.php Link to page in question : http://www.liamharding.com/pgi/pgi.php For example, click the 2 checkbox's 'R25 + R50 Random Walk' then click Show/Refresh Graphs. Two graphs should be displayed, both with draggable thin horizontal red lines. Re-open the options panel and de-select R50 Random Walk, now click Show/Refresh Graphs again, 1 graph is removed and the other updated; now re-select R50 Random Walk and click Show/Refresh, you will find the still checked R25 graph gets updated ok the above error occurs and i cant figure out why. Initially, when displaying the first 2 graphs it uses the same code and it works just fine. The error occurs on this line: //********* ERROR OCCURS HERE ********** $("#dragMe_"+myCount).draggable({ containment: 'parent', axis: 'y' }); Here is the code for the Show/Refresh Graphs.click() event: $("#btnShowGraphs").click(function(){ // Hide 'Options' panel (only if open AND an index is checked) if (IsOptionsPanelOpen && ($("#indexCheck:checked").length != 0)) {$('#optionImgDiv').click();}; var myCount = 0; var divIsNew = false; var gif_loader_small = '<div id="gif_loader_small"></div>'; var gif_loader_big = '<div id="gif_loader_big"></div>'; $("input:checkbox[id=indexCheck]").each(function() { if (this.checked) { // check for an existing wrapper div for the current forex item, using the current checkbox value (foxex name) if ( $("#"+this.value).length == 0 ) { console.log("New 'graphContainer' div : "+this.value); divIsNew = true; // Create new divs for graph image, drag bar and heading var $structure = " \ <li id=\""+this.value+"\" class=\"graphContainer\"> \ <div id=\"dragMe_"+myCount+"\" class=\"dragMe\"></div> \ <div id=\"image_"+myCount+"\" class=\"image\"></div> \ <div id=\"heading_"+myCount+"\" class=\"heading\"></div> \ </li> \ "; $('#graphResults').append($structure); // Hide dragMe DIV $('#dragMe_'+myCount).hide(); // Make 'dragMe' draggable div //********* ERROR OCCURS HERE ********** $("#dragMe_"+myCount).draggable({ containment: 'parent', axis: 'y' }); } // Display small loading gif $(gif_loader_small).clone().appendTo( $(this).parent() ); // Display large circular loading gif var $loader = $(gif_loader_big); // add temporary css attributes onto existing graph divs as they need to be displayed diffrently if(!divIsNew){ console.log("Reposition existing 'gif_loader_big' div"); $loader = $(gif_loader_big).css({ "position" : "absolute", "top" : "35%", "opacity" : ".85"}); } // add newly styled big-loader-gif to index div $loader.clone().prependTo( $("#"+this.value) ); // Call function to fetch image using ajax get_graph(this, myCount, divIsNew); } else { // REMOVE 'graphContainer' DIVS NOT CHECKED // check for div existance if ( $("#"+this.value).length != 0 ) { console.log("DESTROY: #dragMe_"+myCount+", REMOVE: #"+this.value); // DESTROY draggable //$("#dragMe_"+myCount).draggable("destroy"); // remove div $("#"+this.value).remove(); } } // reset counters and other variables myCount++; divIsNew = false; console.log("Complete: "+this.value+", NEXT index"); }); });
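    A common cause of "draggable is not a function" appearing only after the first round is that jQuery or jQuery UI gets loaded again by content pulled in over AJAX, which replaces $.fn and drops the UI plugins. That is an assumption about this page, not a confirmed diagnosis, but a small defensive guard makes it easy to verify. A sketch:

        // Sketch only: confirm the jQuery UI plugin is still attached before initialising.
        var $dragMe = $("#dragMe_" + myCount);
        if ($.fn.draggable) {
            $dragMe.draggable({ containment: 'parent', axis: 'y' });
        } else {
            // If this branch fires on the second pass, jquery-ui.js is no longer
            // registered on the current jQuery object (e.g. jquery.js was re-included).
            console.log("jQuery UI draggable missing for #dragMe_" + myCount);
        }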

    Read the article

  • what's wrong with my producer-consumer queue design?

    - by toasteroven
    I'm starting with the C# code example here. I'm trying to adapt it for a couple reasons: 1) in my scenario, all tasks will be put in the queue up-front before consumers will start, and 2) I wanted to abstract the worker into a separate class instead of having raw Thread members within the WorkerQueue class. My queue doesn't seem to dispose of itself though, it just hangs, and when I break in Visual Studio it's stuck on the _th.Join() line for WorkerThread #1. Also, is there a better way to organize this? Something about exposing the WaitOne() and Join() methods seems wrong, but I couldn't think of an appropriate way to let the WorkerThread interact with the queue. Also, an aside - if I call q.Start(#) at the top of the using block, only some of the threads every kick in (e.g. threads 1, 2, and 8 process every task). Why is this? Is it a race condition of some sort, or am I doing something wrong? using System; using System.Collections.Generic; using System.Text; using System.Messaging; using System.Threading; using System.Linq; namespace QueueTest { class Program { static void Main(string[] args) { using (WorkQueue q = new WorkQueue()) { q.Finished += new Action(delegate { Console.WriteLine("All jobs finished"); }); Random r = new Random(); foreach (int i in Enumerable.Range(1, 10)) q.Enqueue(r.Next(100, 500)); Console.WriteLine("All jobs queued"); q.Start(8); } } } class WorkQueue : IDisposable { private Queue _jobs = new Queue(); private int _job_count; private EventWaitHandle _wh = new AutoResetEvent(false); private object _lock = new object(); private List _th; public event Action Finished; public WorkQueue() { } public void Start(int num_threads) { _job_count = _jobs.Count; _th = new List(num_threads); foreach (int i in Enumerable.Range(1, num_threads)) { _th.Add(new WorkerThread(i, this)); _th[_th.Count - 1].JobFinished += new Action(WorkQueue_JobFinished); } } void WorkQueue_JobFinished(int obj) { lock (_lock) { _job_count--; if (_job_count == 0 && Finished != null) Finished(); } } public void Enqueue(int job) { lock (_lock) _jobs.Enqueue(job); _wh.Set(); } public void Dispose() { Enqueue(Int32.MinValue); _th.ForEach(th = th.Join()); _wh.Close(); } public int GetNextJob() { lock (_lock) { if (_jobs.Count 0) return _jobs.Dequeue(); else return Int32.MinValue; } } public void WaitOne() { _wh.WaitOne(); } } class WorkerThread { private Thread _th; private WorkQueue _q; private int _i; public event Action JobFinished; public WorkerThread(int i, WorkQueue q) { _i = i; _q = q; _th = new Thread(DoWork); _th.Start(); } public void Join() { _th.Join(); } private void DoWork() { while (true) { int job = _q.GetNextJob(); if (job != Int32.MinValue) { Console.WriteLine("Thread {0} Got job {1}", _i, job); Thread.Sleep(job * 10); // in reality would to actual work here if (JobFinished != null) JobFinished(job); } else { Console.WriteLine("Thread {0} no job available", _i); _q.WaitOne(); } } } } }
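    As a point of comparison, the same producer-consumer shape can be built on BlockingCollection, which removes the hand-rolled AutoResetEvent, sentinel value and Join bookkeeping that the question struggles with. This is a minimal sketch assuming .NET 4 or later, not a drop-in replacement for the code above:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        class QueueSketch
        {
            static void Main()
            {
                var jobs = new BlockingCollection<int>();
                var rnd = new Random();
                for (int i = 0; i < 10; i++)
                    jobs.Add(rnd.Next(100, 500));
                jobs.CompleteAdding(); // signals "no more work" - no Int32.MinValue sentinel needed

                var workers = new Task[8];
                for (int w = 0; w < workers.Length; w++)
                {
                    int id = w + 1;
                    workers[w] = Task.Factory.StartNew(() =>
                    {
                        // Blocks while the collection is empty and completes cleanly
                        // once it is drained and CompleteAdding() has been called.
                        foreach (int job in jobs.GetConsumingEnumerable())
                        {
                            Console.WriteLine("Thread {0} Got job {1}", id, job);
                            Thread.Sleep(job); // stand-in for real work
                        }
                    }, TaskCreationOptions.LongRunning);
                }
                Task.WaitAll(workers); // replaces the manual Join()/WaitOne() plumbing
                Console.WriteLine("All jobs finished");
            }
        }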

    Read the article

  • Deleting unreferenced child records with nhibernate

    - by Chev
    Hi There I am working on a mvc app using nhibernate as the orm (ncommon framework) I have parent/child entities: Product, Vendor & ProductVendors and a one to many relationship between them with Product having a ProductVendors collection Product.ProductVendors. I currently am retrieving a Product object and eager loading the children and sending these down the wire to my asp.net mvc client. A user will then modify the list of Vendors and post the updated Product back. I am using a custom model binder to generate the modified Product entity. I am able to update the Product fine and insert new ProductVendors. My problem is that dereferenced ProductVendors are not cascade deleted when specifying Product.ProductVendors.Clear() and calling _productRepository.Save(product). The problem seems to be with attaching the detached instance. Here are my mapping files: Product <?xml version="1.0" encoding="utf-8" ?> <id name="Id"> <generator class="guid.comb" /> </id> <version name="LastModified" unsaved-value="0" column="LastModified" /> <property name="Name" type="String" length="250" /> ProductVendors <?xml version="1.0" encoding="utf-8" ?> <id name="Id"> <generator class="guid.comb" /> </id> <version name="LastModified" unsaved-value="0" column="LastModified" /> <property name="Price" /> <many-to-one name="Product" class="Product" column="ProductId" lazy="false" not-null="true" /> <many-to-one name="Vendor" class="Vendor" column="VendorId" lazy="false" not-null="true" /> Custom Model Binder: using System; using Test.Web.Mvc; using Test.Domain; namespace Spoked.MVC { public class ProductUpdateModelBinder : DefaultModelBinder { private readonly ProductSystem ProductSystem; public ProductUpdateModelBinder(ProductSystem productSystem) { ProductSystem = productSystem; } protected override void OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext) { var product = bindingContext.Model as Product; if (product != null) { product.Category = ProductSystem.GetCategory(new Guid(bindingContext.ValueProvider["Category"].AttemptedValue)); product.Brand = ProductSystem.GetBrand(new Guid(bindingContext.ValueProvider["Brand"].AttemptedValue)); product.ProductVendors.Clear(); if (bindingContext.ValueProvider["ProductVendors"] != null) { string[] productVendorIds = bindingContext.ValueProvider["ProductVendors"].AttemptedValue.Split(','); foreach (string id in productVendorIds) { product.AddProductVendor(ProductSystem.GetVendor(new Guid(id)), 90m); } } } } } } Controller: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Update(Product product) { using (var scope = new UnitOfWorkScope()) { //product.ProductVendors.Clear(); _productRepository.Save(product); scope.Commit(); } using (new UnitOfWorkScope()) { IList<Vendor> availableVendors = _productSystem.GetAvailableVendors(product); productDetailEditViewModel = new ProductDetailEditViewModel(product, _categoryRepository.Select(x => x).ToList(), _brandRepository.Select(x => x).ToList(), availableVendors); } return RedirectToAction("Edit", "Products", new {id = product.Id.ToString()}); } The following test does pass though: [Test] [NUnit.Framework.Category("ProductTests")] public void Can_Delete_Product_Vendors_By_Dereferencing() { Product product; using(UnitOfWorkScope scope = new UnitOfWorkScope()) { Console.Out.WriteLine("Selecting..."); product = _productRepository.First(); Console.Out.WriteLine("Adding Product Vendor..."); product.AddProductVendor(_vendorRepository.First(), 0m); scope.Commit(); } Console.Out.WriteLine("About to 
delete Product Vendors..."); using (UnitOfWorkScope scope = new UnitOfWorkScope()) { Console.Out.WriteLine("Clearing Product Vendor..."); _productRepository.Save(product); // seems to be needed to attach entity to the persistence manager product.ProductVendors.Clear(); scope.Commit(); } } Going nuts here as I almost have a very nice solution between MVC, custom model binders and NHibernate. Just not seeing my deletes cascaded. Any help greatly appreciated. Chev
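    The question does not show how the ProductVendors collection itself is mapped on Product, and that is where orphan deletion is normally switched on in NHibernate. A hedged sketch of what that collection element might look like in the Product mapping (names are taken from the question; the rest is an assumption, not the poster's actual file):

        <!-- Sketch only: children removed from the collection are deleted ("orphan delete"). -->
        <bag name="ProductVendors" cascade="all-delete-orphan" inverse="true" lazy="false">
          <key column="ProductId" />
          <one-to-many class="ProductVendors" />
        </bag>

    With delete-orphan cascading in place, clearing the collection on an attached Product and committing should issue DELETEs for the dereferenced rows; without it, NHibernate only tries to break the association, which the not-null ProductId column cannot represent.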

    Read the article

  • WP7: play MP3 using Media with phonegap/Cordova

    - by Loda
    My problem: I use the Media Class from Cordova. The MP3 file is only played once (the first time). Code: Add this code to the Cordova Starter project to reproduce my problem: var playCounter = 0; function playMP3(){ console.log("playMP3() counter " + playCounter); var my_media = new Media("app/www/test.mp3");//ressource buildAction == content my_media.play(); playCounter++; } [...] <p onclick="playMP3();">Click to Play MP3</p> VS output: [...] GapBrowser_Navigated :: /app/www/index.html 'UI Task' (Managed): Loaded 'System.ServiceModel.Web.dll' 'UI Task' (Managed): Loaded 'System.ServiceModel.dll' Log:"onDeviceReady. You should see this message in Visual Studio's output window." 'UI Task' (Managed): Loaded 'Microsoft.Xna.Framework.dll' Log:"playMP3() counter 0" 'UI Task' (Managed): Loaded 'System.SR.dll' Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 1, \"value\": 1}" A first chance exception of type 'System.ArgumentException' occurred in mscorlib.dll Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 1, \"value\": 2}" Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 2, \"value\": 2.141}" Log:"media on status :: {\"id\": \"fa123123-bc55-a266-f447-8881bd32e2aa\", \"msg\": 1, \"value\": 4}" Log:"playMP3() counter 1" A first chance exception of type 'System.ArgumentException' occurred in mscorlib.dll A first chance exception of type 'System.IO.IOException' occurred in mscorlib.dll A first chance exception of type 'System.IO.IsolatedStorage.IsolatedStorageException' occurred in mscorlib.dll Log:"media on status :: {\"id\": \"2de3388c-bbb6-d896-9e27-660f1402bc2a\", \"msg\": 9, \"value\": 5}" My Config: cordova-1.6.1.js Lumia 800 WP 7.5 (7.10.7740.16) WorkAround (kind of): Desactivate the app (turn off the screen) reactivate the app (turn on the screen) - you get one more shot. Any help is welcome as I am blocked on this since may days and I found no usefull information anywhere. Also, Can you tell me if this code work on your config ? . . . Update: add a demo code using a global var. Keeping the instance alive. result The test2.mp3 is played and can replay fine. the test.mp3 is not played at all. It is the first file you play that will work. Code function onDeviceReady() { document.getElementById("welcomeMsg").innerHTML += "Cordova is ready! version=" + window.device.cordova; console.log("onDeviceReady. You should see this message in Visual Studio's output window."); my_media = new Media("app/www/test.mp3");//ressource buildAction == content my_media2 = new Media("app/www/test2.mp3");//ressource buildAction == content } var playCounter = 0; var my_media = null; function playMP3(){ console.log("playMP3() counter " + playCounter); my_media.play(); playCounter++; } var my_media2 = null; function playMP32(){ console.log("playMP32() counter " + playCounter); my_media2.play(); playCounter++; } </script> [...] <p onclick="playMP3();">Click to Play MP3</p> <p onclick="playMP32();">Click to Play MP3 2</p> VS output: Log:"onDeviceReady. You should see this message in Visual Studio's output window." 
INFO: startPlayingAudio could not find mediaPlayer for 71888b14-86fe-4769-95c9-a9bb05d5555b Log:"playMP32() counter 0" INFO: startPlayingAudio could not find mediaPlayer for 71888b14-86fe-4769-95c9-a9bb05d5555b Log:"playMP32() counter 1" Log:"playMP3() counter 2" INFO: startPlayingAudio could not find mediaPlayer for b60fa266-d105-a295-a5be-fa2c6b824bc1 A first chance exception of type 'System.ArgumentException' occurred in System.Windows.dll Error: El parámetro es incorrecto. Log:"playMP32() counter 3" INFO: startPlayingAudio could not find mediaPlayer for 71888b14-86fe-4769-95c9-a9bb05d5555b Can anybody reproduce this ? link to bug report: https://issues.apache.org/jira/browse/CB-941
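    One pattern worth trying, sketched here from the generic Cordova Media API rather than anything verified against WP7 or CB-941: keep a single Media instance per file, stop it before replaying, and release it only when the page is done with audio, so a new native player is not created on every tap.

        // Sketch only: reuse one Media instance and release it explicitly.
        var my_media = null;

        function playMP3() {
            if (my_media === null) {
                my_media = new Media("app/www/test.mp3",
                    function () { console.log("play ok"); },
                    function (err) { console.log("play error: " + JSON.stringify(err)); });
            } else {
                my_media.stop();      // rewind so play() starts from the beginning
            }
            my_media.play();
        }

        function releaseAudio() {     // call when leaving the page / pausing the app
            if (my_media !== null) {
                my_media.release();   // frees the underlying native audio resources
                my_media = null;
            }
        }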

    Read the article

  • Internet Explorer Automation: how to suppress Open/Save dialog?

    - by Vladimir Dyuzhev
    When controlling IE instance via MSHTML, how to suppress Open/Save dialogs for non-HTML content? I need to get data from another system and import it into our one. Due to budget constraints no development (e.g. WS) can be done on the other side for some time, so my only option for now is to do web scrapping. The remote site is ASP.NET-based, so simple HTML requests won't work -- too much JS. I wrote a simple C# application that uses MSHTML and SHDocView to control an IE instance. So far so good: I can perform login, navigate to desired page, populate required fields and do submit. Then I face a couple of problems: First is that report is opening in another window. I suspect I can attach to that window too by enumerating IE windows in the system. Second, more troublesome, is that report itself is CSV file, and triggers Open/Save dialog. I'd like to avoid it and make IE save the file into given location OR I'm fine with programmatically clicking dialog buttons too (how?) I'm actually totally non-Windows guy (unix/J2EE), and hope someone with better knowledge would give me a hint how to do those tasks. Thanks! UPDATE I've found a promising document on MSDN: http://msdn.microsoft.com/en-ca/library/aa770041.aspx Control the kinds of content that are downloaded and what the WebBrowser Control does with them once they are downloaded. For example, you can prevent videos from playing, script from running, or new windows from opening when users click on links, or prevent Microsoft ActiveX controls from downloading or executing. Slowly reading through... UPDATE 2: MADE IT WORK, SORT OF... Finally I made it work, but in an ugly way. Essentially, I register a handler "before navigate", then, in the handler, if the URL is matching my target file, I cancel the navigation, but remember the URL, and use WebClient class to access and download that temporal URL directly. I cannot copy the whole code here, it contains a lot of garbage, but here are the essential parts: Installing handler: _IE2.FileDownload += new DWebBrowserEvents2_FileDownloadEventHandler(IE2_FileDownload); _IE.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(IE_OnBeforeNavigate2); Recording URL and then cancelling download (thus preventing Save dialog to appear): public string downloadUrl; void IE_OnBeforeNavigate2(Object ob1, ref Object URL, ref Object Flags, ref Object Name, ref Object da, ref Object Head, ref bool Cancel) { Console.WriteLine("Before Navigate2 "+URL); if (URL.ToString().EndsWith(".csv")) { Console.WriteLine("CSV file"); downloadUrl = URL.ToString(); } Cancel = false; } void IE2_FileDownload(bool activeDocument, ref bool cancel) { Console.WriteLine("FileDownload, downloading "+downloadUrl+" instead"); cancel = true; } void IE_OnNewWindow2(ref Object o, ref bool cancel) { Console.WriteLine("OnNewWindow2"); _IE2 = new SHDocVw.InternetExplorer(); _IE2.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(IE_OnBeforeNavigate2); _IE2.Visible = true; o = _IE2; _IE2.FileDownload += new DWebBrowserEvents2_FileDownloadEventHandler(IE2_FileDownload); _IE2.Silent = true; cancel = false; return; } And in the calling code using the found URL for direct download: ... driver.ClickButton(".*_btnRunReport"); driver.WaitForComplete(); Thread.Sleep(10000); WebClient Client = new WebClient(); Client.DownloadFile(driver.downloadUrl, "C:\\affinity.dump"); (driver is a simple wrapper over IE instance = _IE) Hope that helps someone.

    Read the article

  • multiple inner joins (3 or more) crash mysql server 5.1.30 opensolaris

    - by user331849
    when doing simple query on 4 inner joined tables, the server crashes with the output below appearing in the the mysql .err file. eg. select * from table1 inner join table2 on table1.a = table2.a and table1.b = table2.b inner join table3 on table2.a = table3.a and table2.c = table3.c inner join table4 on table3.a = table4.a and table3.d = table4.d If i remove one of the tables it executes fine. Likewise if I remove a different table, it executes fine. Though all tables have been checked anyway, this would suggest that it is not a problem specifically with one of the tables. mysql.err trace: 100503 18:13:19 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=1572864000 read_buffer_size=2097152 max_used_connections=11 max_threads=151 threads_connected=10 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2155437 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x72febda8 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = fe07efb0 thread_stack 0x40000 Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd-query at be1021f0 = explain select * from business inner join timetable on business.id = timetable.business_id inner join timetableentry on timetable.business_id = timetableentry.business_id and timetable.kid = timetableentry.parent inner join staff on timetable.business_id = staff.business_id and timetable.staf f_person = staff.kid where business.id = '3050bb04fda41df64a9c1c149150026c' thd-thread_id=9 thd-killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 100503 18:13:19 mysqld_safe mysqld restarted 100503 18:13:20 InnoDB: Failed to set DIRECTIO_ON on file ./ibdata1: OPEN: Inap propriate ioctl for device, continuing anyway 100503 18:13:20 InnoDB: Failed to set DIRECTIO_ON on file ./ibdata1: OPEN: Inap propriate ioctl for device, continuing anyway InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100503 18:13:20 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... InnoDB: Last MySQL binlog file position 0 2731, file name ./mysql-bin.000093 100503 18:13:20 InnoDB: Started; log sequence number 0 2650338426 100503 18:13:20 [Note] Recovering after a crash using mysql-bin 100503 18:13:20 [Note] Starting crash recovery... 100503 18:13:20 [Note] Crash recovery finished. 
This is on OpenSolaris SunOS 5.11 snv_111b i86pc i386 i86pc, MySQL 5.1.30. Here is a snippet from the my.cnf file: key_buffer = 1500M, max_allowed_packet = 1M, thread_stack = 256K, thread_cache_size = 8, sort_buffer_size = 2M, read_buffer_size = 2M, read_rnd_buffer_size = 8M, table_cache = 512, tmp_table_size = 400M, max_heap_table_size = 64M, query_cache_limit = 20M, query_cache_size = 200M. Is this a bug or a configuration issue?
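    On the configuration side, a hedged sketch of more conservative settings (illustrative values only, not tuned for this workload) that keep the worst-case estimate printed in the crash report - key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads - comfortably inside RAM. Note the error log itself says a signal 11 may simply be a server bug, so this is about ruling out memory pressure, not a guaranteed fix:

        # Sketch only: smaller global and per-thread buffers
        key_buffer           = 512M    # was 1500M
        sort_buffer_size     = 1M      # per connection; was 2M
        read_buffer_size     = 1M      # per connection; was 2M
        read_rnd_buffer_size = 2M      # per connection; was 8M
        query_cache_size     = 64M     # was 200M
        tmp_table_size       = 64M     # was 400M
        max_heap_table_size  = 64M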

    Read the article

  • PHP Form - Empty input enter this text - Validation

    - by James Skelton
    No doubt very simple question for someone with php knowledge. I have a form with a datepicker, all is fine when a user has selected a date the email is send with: Date: 2012 04 10 But i would like if the user has skipped this and left blank (as i have not made this required) to send as: Date: Not Entered (<-- Or something) Instead at the minute of course it reads: Date: Form input <input type="text" class="form-control" id="datepicker" name="datepicker" size="50" value="Date Of Wedding" /> This is the validator $(document).ready(function(){ //validation contact form $('#submit').click(function(event){ event.preventDefault(); var fname = $('#name').val(); var validInput = new RegExp(/^[a-zA-Z0-9\s]+$/); var email = $('#email').val(); var validEmail = new RegExp(/^([a-zA-Z0-9_\.\-])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/); var message = $('#message').val(); if(fname==''){ showError('<div class="alert alert-danger">Please enter your name.</div>', $('#name')); $('#name').addClass('required'); return;} if(!validInput.test(fname)){ showError('<div class="alert alert-danger">Please enter a valid name.</div>', $('#name')); $('#name').addClass('required'); return;} if(email==''){ showError('<div class="alert alert-danger">Please enter an email address.</div>', $('#email')); $('#email').addClass('required'); return;} if(!validEmail.test(email)){ showError('<div class="alert alert-danger">Please enter a valid email.</div>', $('#email')); $('#email').addClass('required'); return;} if(message==''){ showError('<div class="alert alert-danger">Please enter a message.</div>', $('#message')); $('#message').addClass('required'); return;} // setup some local variables var request; var form = $(this).closest('form'); // serialize the data in the form var serializedData = form.serialize(); // fire off the request to /contact.php request = $.ajax({ url: "contact.php", type: "post", data: serializedData }); // callback handler that will be called on success request.done(function (response, textStatus, jqXHR){ $('.contactWrap').show( 'slow' ).fadeIn("slow").html(' <div class="alert alert-success centered"><h3>Thank you! Your message has been sent.</h3></div> '); }); // callback handler that will be called on failure request.fail(function (jqXHR, textStatus, errorThrown){ // log the error to the console console.error( "The following error occured: "+ textStatus, errorThrown ); }); }); //remove 'required' class and hide error $('input, textarea').keyup( function(event){ if($(this).hasClass('required')){ $(this).removeClass('required'); $('.error').hide("slow").fadeOut("slow"); } }); // show error showError = function (error, target){ $('.error').removeClass('hidden').show("slow").fadeIn("slow").html(error); $('.error').data('target', target); $(target).focus(); console.log(target); console.log(error); return; } });
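    The substitution being asked about is easiest to do on the server, just before the value is written into the email body. A minimal PHP sketch (the field name comes from the form above; the surrounding mail-building code is assumed, and the form's default text "Date Of Wedding" is treated the same as an empty field):

        <?php
        // Sketch only: fall back to a placeholder when the date was not entered.
        $date = isset($_POST['datepicker']) ? trim($_POST['datepicker']) : '';
        if ($date === '' || $date === 'Date Of Wedding') {
            $date = 'Not Entered';
        }
        $body = "Date: " . $date . "\n";   // then append the rest of the message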

    Read the article

  • Followup: Python 2.6, 3 abstract base class misunderstanding

    - by Aaron
    I asked a question at Python 2.6, 3 abstract base class misunderstanding. My problem was that python abstract base classes didn't work quite the way I expected them to. There was some discussion in the comments about why I would want to use ABCs at all, and Alex Martelli provided an excellent answer on why my use didn't work and how to accomplish what I wanted. Here I'd like to address why one might want to use ABCs, and show my test code implementation based on Alex's answer. tl;dr: Code after the 16th paragraph. In the discussion on the original post, statements were made along the lines that you don't need ABCs in Python, and that ABCs don't do anything and are therefore not real classes; they're merely interface definitions. An abstract base class is just a tool in your tool box. It's a design tool that's been around for many years, and a programming tool that is explicitly available in many programming languages. It can be implemented manually in languages that don't provide it. An ABC is always a real class, even when it doesn't do anything but define an interface, because specifying the interface is what an ABC does. If that was all an ABC could do, that would be enough reason to have it in your toolbox, but in Python and some other languages they can do more. The basic reason to use an ABC is when you have a number of classes that all do the same thing (have the same interface) but do it differently, and you want to guarantee that that complete interface is implemented in all objects. A user of your classes can rely on the interface being completely implemented in all classes. You can maintain this guarantee manually. Over time you may succeed. Or you might forget something. Before Python had ABCs you could guarantee it semi-manually, by throwing NotImplementedError in all the base class's interface methods; you must implement these methods in derived classes. This is only a partial solution, because you can still instantiate such a base class. A more complete solution is to use ABCs as provided in Python 2.6 and above. Template methods and other wrinkles and patterns are ideas whose implementation can be made easier with full-citizen ABCs. Another idea in the comments was that Python doesn't need ABCs (understood as a class that only defines an interface) because it has multiple inheritance. The implied reference there seems to be Java and its single inheritance. In Java you "get around" single inheritance by inheriting from one or more interfaces. Java uses the word "interface" in two ways. A "Java interface" is a class with method signatures but no implementations. The methods are the interface's "interface" in the more general, non-Java sense of the word. Yes, Python has multiple inheritance, so you don't need Java-like "interfaces" (ABCs) merely to provide sets of interface methods to a class. But that's not the only reason in software development to use ABCs. Most generally, you use an ABC to specify an interface (set of methods) that will likely be implemented differently in different derived classes, yet that all derived classes must have. Additionally, there may be no sensible default implementation for the base class to provide. Finally, even an ABC with almost no interface is still useful. We use something like it when we have multiple except clauses for a try. Many exceptions have exactly the same interface, with only two differences: the exception's string value, and the actual class of the exception. 
In many exception clauses we use nothing about the exception except its class to decide what to do; catching one type of exception we do one thing, and another except clause catching a different exception does another thing. According to the exception module's doc page, BaseException is not intended to be derived by any user defined exceptions. If ABCs had been a first class Python concept from the beginning, it's easy to imagine BaseException being specified as an ABC. But enough of that. Here's some 2.6 code that demonstrates how to use ABCs, and how to specify a list-like ABC. Examples are run in ipython, which I like much better than the python shell for day to day work; I only wish it was available for python3. Your basic 2.6 ABC: from abc import ABCMeta, abstractmethod class Super(): __metaclass__ = ABCMeta @abstractmethod def method1(self): pass Test it (in ipython, python shell would be similar): In [2]: a = Super() --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /home/aaron/projects/test/<ipython console> in <module>() TypeError: Can't instantiate abstract class Super with abstract methods method1 Notice the end of the last line, where the TypeError exception tells us that method1 has not been implemented ("abstract methods method1"). That was the method designated as @abstractmethod in the preceding code. Create a subclass that inherits Super, implement method1 in the subclass and you're done. My problem, which caused me to ask the original question, was how to specify an ABC that itself defines a list interface. My naive solution was to make an ABC as above, and in the inheritance parentheses say (list). My assumption was that the class would still be abstract (can't instantiate it), and would be a list. That was wrong; inheriting from list made the class concrete, despite the abstract bits in the class definition. Alex suggested inheriting from collections.MutableSequence, which is abstract (and so doesn't make the class concrete) and list-like. I used collections.Sequence, which is also abstract but has a shorter interface and so was quicker to implement. First, Super derived from Sequence, with nothing extra: from abc import abstractmethod from collections import Sequence class Super(Sequence): pass Test it: In [6]: a = Super() --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /home/aaron/projects/test/<ipython console> in <module>() TypeError: Can't instantiate abstract class Super with abstract methods __getitem__, __len__ We can't instantiate it. A list-like full-citizen ABC; yea! Again, notice in the last line that TypeError tells us why we can't instantiate it: __getitem__ and __len__ are abstract methods. They come from collections.Sequence. But, I want a bunch of subclasses that all act like immutable lists (which collections.Sequence essentially is), and that have their own implementations of my added interface methods. In particular, I don't want to implement my own list code, Python already did that for me. So first, let's implement the missing Sequence methods, in terms of Python's list type, so that all subclasses act as lists (Sequences). 
First let's see the signatures of the missing abstract methods: In [12]: help(Sequence.__getitem__) Help on method __getitem__ in module _abcoll: __getitem__(self, index) unbound _abcoll.Sequence method (END) In [14]: help(Sequence.__len__) Help on method __len__ in module _abcoll: __len__(self) unbound _abcoll.Sequence method (END) __getitem__ takes an index, and __len__ takes nothing. And the implementation (so far) is: from abc import abstractmethod from collections import Sequence class Super(Sequence): # Gives us a list member for ABC methods to use. def __init__(self): self._list = [] # Abstract method in Sequence, implemented in terms of list. def __getitem__(self, index): return self._list.__getitem__(index) # Abstract method in Sequence, implemented in terms of list. def __len__(self): return self._list.__len__() # Not required. Makes printing behave like a list. def __repr__(self): return self._list.__repr__() Test it: In [34]: a = Super() In [35]: a Out[35]: [] In [36]: print a [] In [37]: len(a) Out[37]: 0 In [38]: a[0] --------------------------------------------------------------------------- IndexError Traceback (most recent call last) /home/aaron/projects/test/<ipython console> in <module>() /home/aaron/projects/test/test.py in __getitem__(self, index) 10 # Abstract method in Sequence, implemented in terms of list. 11 def __getitem__(self, index): ---> 12 return self._list.__getitem__(index) 13 14 # Abstract method in Sequence, implemented in terms of list. IndexError: list index out of range Just like a list. It's not abstract (for the moment) because we implemented both of Sequence's abstract methods. Now I want to add my bit of interface, which will be abstract in Super and therefore required to implement in any subclasses. And we'll cut to the chase and add subclasses that inherit from our ABC Super. from abc import abstractmethod from collections import Sequence class Super(Sequence): # Gives us a list member for ABC methods to use. def __init__(self): self._list = [] # Abstract method in Sequence, implemented in terms of list. def __getitem__(self, index): return self._list.__getitem__(index) # Abstract method in Sequence, implemented in terms of list. def __len__(self): return self._list.__len__() # Not required. Makes printing behave like a list. def __repr__(self): return self._list.__repr__() @abstractmethod def method1(): pass class Sub0(Super): pass class Sub1(Super): def __init__(self): self._list = [1, 2, 3] def method1(self): return [x**2 for x in self._list] def method2(self): return [x/2.0 for x in self._list] class Sub2(Super): def __init__(self): self._list = [10, 20, 30, 40] def method1(self): return [x+2 for x in self._list] We've added a new abstract method to Super, method1. This makes Super abstract again. A new class Sub0 which inherits from Super but does not implement method1, so it's also an ABC. Two new classes Sub1 and Sub2, which both inherit from Super. They both implement method1 from Super, so they're not abstract. Both implementations of method1 are different. Sub1 and Sub2 also both initialize themselves differently; in real life they might initialize themselves wildly differently. So you have two subclasses which both "is a" Super (they both implement Super's required interface) although their implementations are different. Also remember that Super, although an ABC, provides four non-abstract methods. 
So Super provides two things to subclasses: an implementation of collections.Sequence, and an additional abstract interface (the one abstract method) that subclasses must implement. Also, class Sub1 implements an additional method, method2, which is not part of Super's interface. Sub1 "is a" Super, but it also has additional capabilities. Test it: In [52]: a = Super() --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /home/aaron/projects/test/<ipython console> in <module>() TypeError: Can't instantiate abstract class Super with abstract methods method1 In [53]: a = Sub0() --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /home/aaron/projects/test/<ipython console> in <module>() TypeError: Can't instantiate abstract class Sub0 with abstract methods method1 In [54]: a = Sub1() In [55]: a Out[55]: [1, 2, 3] In [56]: b = Sub2() In [57]: b Out[57]: [10, 20, 30, 40] In [58]: print a, b [1, 2, 3] [10, 20, 30, 40] In [59]: a, b Out[59]: ([1, 2, 3], [10, 20, 30, 40]) In [60]: a.method1() Out[60]: [1, 4, 9] In [61]: b.method1() Out[61]: [12, 22, 32, 42] In [62]: a.method2() Out[62]: [0.5, 1.0, 1.5] [63]: a[:2] Out[63]: [1, 2] In [64]: a[0] = 5 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /home/aaron/projects/test/<ipython console> in <module>() TypeError: 'Sub1' object does not support item assignment Super and Sub0 are abstract and can't be instantiated (lines 52 and 53). Sub1 and Sub2 are concrete and have an immutable Sequence interface (54 through 59). Sub1 and Sub2 are instantiated differently, and their method1 implementations are different (60, 61). Sub1 includes an additional method2, beyond what's required by Super (62). Any concrete Super acts like a list/Sequence (63). A collections.Sequence is immutable (64). Finally, a wart: In [65]: a._list Out[65]: [1, 2, 3] In [66]: a._list = [] In [67]: a Out[67]: [] Super._list is spelled with a single underscore. Double underscore would have protected it from this last bit, but would have broken the implementation of methods in subclasses. Not sure why; I think because double underscore is private, and private means private. So ultimately this whole scheme relies on a gentleman's agreement not to reach in and muck with Super._list directly, as in line 65 above. Would love to know if there's a safer way to do that.
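    On that closing question, one safer arrangement is to keep the list genuinely private to the base class and let subclasses hand their initial data up through __init__, reading it back only through the Sequence interface. A sketch along those lines (abstract method1 left out for brevity; this is an alternative, not the only answer):

        from collections import Sequence  # collections.abc.Sequence on Python 3

        class Super(Sequence):
            def __init__(self, items=None):
                # Name-mangled to _Super__list, so subclasses cannot clobber it
                # by accident; they pass data up instead of assigning _list.
                self.__list = list(items or [])

            def __getitem__(self, index):
                return self.__list[index]

            def __len__(self):
                return len(self.__list)

            def __repr__(self):
                return repr(self.__list)

        class Sub1(Super):
            def __init__(self):
                super(Sub1, self).__init__([1, 2, 3])

            def method1(self):
                # Subclasses read their data through the Sequence protocol
                # (indexing, len(), iteration) rather than touching the list.
                return [x ** 2 for x in self]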

    Read the article

  • How To Clear An Alert - Part 2

    - by werner.de.gruyter
    There were some interesting comments and remarks on the original posting, so I decided to do a follow-up and address some of the issues that got raised... Handling Metric Errors First of all, there is a significant difference between an 'error' and an 'alert'. An 'alert' is the violation of a condition (a threshold) specified for a given metric. That means that the Agent is collecting and gathering the data for the metric, but there is a situation that requires the attention of an administrator. An 'error' on the other hand however, is a failure to collect metric data: The Agent is throwing the error because it cannot determine the value for the metric Whereas the 'alert' guarantees continuity of the metric data, an 'error' signals a big unknown. And the unknown aspect of all this is what makes an error a lot more serious than a regular alert: If you don't know what the current state of affairs is, there could be some serious issues brewing that nobody is aware of... The life-cycle of a Metric Error Clearing a metric error is pretty much the same workflow as a metric 'alert': The Agent signals the error after it failed to execute the metric The error is uploaded to the OMS/repository, where it becomes visible in the Console The error will remain active until the Agent is able to execute the metric successfully. Even though the metric is still getting scheduled and executed on a regular basis, the error will remain outstanding as long as the Agent is not capable of executing the metric correctly Knowing this, the way to fix the metric error should be obvious: Take the 'problem' away, and as soon as the metric is executed again (based on the frequency of the metric), the error will go away. The same tricks used to clear alerts can be used here too: Wait for the next scheduled execution. For those metrics that are executed regularly (like every 15 minutes or so), it's just a matter of waiting those minutes to see the updates. The 'Reevaluate Alert' button can be used to force a re-execution of the metric. In case a metric is executed once a day, this will be a better way to make sure that the underlying problem has been solved. And if it has been, the metric error will be removed, and the regular data points will be uploaded to the repository. And just in case you have to 'force' the issue a little: If you disable and re-enable a metric, it will get re-scheduled. And that means a new metric execution, and an update of the (hopefully) fixed problem. Database server-generated alerts and problem checkers There are various ways the Agent can collect metric data: Via a script or a SQL statement, reading a log file, getting a value from an SNMP OID or listening for SNMP traps or via the DBMS_SERVER_ALERTS mechanism of an Oracle database. For those alert which are generated by the database (like tablespace metrics for 10g and above databases), the Agent just 'waits' for the database to report any new findings. If the Agent has lost the current state of the server-side metrics (due to an incomplete recovery after a disaster, or after an improper use of the 'emctl clearstate' command), the Agent might be still aware of an alert that the database no longer has (or vice versa). The same goes for 'problem checker' alerts: Those metrics that only report data if there is a problem (like the 'invalid objects' metric) will also have a problem if the Agent state has been tampered with (again, the incomplete recovery, and after improper use of 'emctl clearstate' are the two main causes for this). 
The best way to deal with these kinds of mismatches is to simply disable and re-enable the metric: the disabling will clear the state of the metric, and the re-enabling will force a re-execution of the metric, so the new and updated results can get uploaded to the repository. Starting with 10gR5, the Agent performs additional checks and verifications after each restart of the Agent and/or each state change of the database (shutdown/startup, or failover in the case of Data Guard) to catch these kinds of mismatches.

    Read the article

  • Look Inside WebLogic Server Embedded LDAP with an LDAP Explorer

    - by james.bayer
    Today a question came up on our internal WebLogic Server mailing lists about an issue deleting a Group from WebLogic Server.  The group had a special character in the name. The WLS console refused to delete the group with the message a java.net.MalformedURLException and another message saying “Errors must be corrected before proceeding.” as shown below. The group aa:bb is the one with the issue.  Click to enlarge. WebLogic Server includes an embedded LDAP server that can be used for managing users and groups for “reasonably small environments (10,000 or fewer users)”.  For organizations scaling larger or using more high-end features, I recommend looking at one of Oracle’s very popular enterprise directory services products like Oracle Internet Directory or Oracle Directory Server Enterprise Edition.  You can configure multiple authenicators in WebLogic Server so that you can use multiple directories at the same time. I am not sure WebLogic Server supports special characters in group names for the Embedded LDAP server, but in this case both the console and WLST reported the same issue deleting the group with the special character in the name.  Here’s the WLST output: wls:/hotspot_domain/serverConfig/SecurityConfiguration/hotspot_domain/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator> cmo.removeGroup('aa:bb') Traceback (innermost last): File "<console>", line 1, in ? weblogic.security.providers.authentication.LDAPAtnDelegateException: [Security:090296]invalid URL ldap:///ou=people,ou=myrealm,dc=hotspot_domain??sub?(&(objectclass=person)(wlsMemberOf=cn=aa:bb,ou=groups,ou=myrealm,dc=hotspot_domain)) at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.advance(LDAPAtnGroupMembersNameList.java:254) at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.<init>(LDAPAtnGroupMembersNameList.java:119) at weblogic.security.providers.authentication.LDAPAtnDelegate.listGroupMembers(LDAPAtnDelegate.java:1392) at weblogic.security.providers.authentication.LDAPAtnDelegate.removeGroup(LDAPAtnDelegate.java:1989) at weblogic.security.providers.authentication.DefaultAuthenticatorImpl.removeGroup(DefaultAuthenticatorImpl.java:242) at weblogic.security.providers.authentication.DefaultAuthenticatorMBeanImpl.removeGroup(DefaultAuthenticatorMBeanImpl.java:407) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at weblogic.management.jmx.modelmbean.WLSModelMBean.invoke(WLSModelMBean.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449) at java.security.AccessController.doPrivileged(Native Method) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.invoke(WLSMBeanServerInterceptorBase.java:447) at weblogic.management.mbeanservers.internal.JMXContextInterceptor.invoke(JMXContextInterceptor.java:263) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449) at java.security.AccessController.doPrivileged(Native Method) at 
weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.invoke(WLSMBeanServerInterceptorBase.java:447) at weblogic.management.mbeanservers.internal.SecurityInterceptor.invoke(SecurityInterceptor.java:444) at weblogic.management.jmx.mbeanserver.WLSMBeanServer.invoke(WLSMBeanServer.java:323) at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder$11$1.run(JMXConnectorSubjectForwarder.java:663) at java.security.AccessController.doPrivileged(Native Method) at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder$11.run(JMXConnectorSubjectForwarder.java:661) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363) at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder.invoke(JMXConnectorSubjectForwarder.java:654) at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427) at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72) at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265) at java.security.AccessController.doPrivileged(Native Method) at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1367) at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788) at javax.management.remote.rmi.RMIConnectionImpl_WLSkel.invoke(Unknown Source) at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:667) at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:522) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146) at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:518) at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:207) at weblogic.work.ExecuteThread.run(ExecuteThread.java:176) Caused by: java.net.MalformedURLException at netscape.ldap.LDAPUrl.readNextConstruct(LDAPUrl.java:651) at netscape.ldap.LDAPUrl.parseUrl(LDAPUrl.java:277) at netscape.ldap.LDAPUrl.<init>(LDAPUrl.java:114) at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.advance(LDAPAtnGroupMembersNameList.java:224) ... 41 more It’s fairly clear that in order to work that the : character needs to be URL encoded to %3A or similar.  But all is not lost, there is another way.  You can configure an LDAP Explorer like JXplorer to WebLogic Server Embedded LDAP and browse/edit the entries. Follow the instructions here, being sure to change the authentication credentials to the Embedded LDAP server to some value you know, as by default they are some unknown value.  You’ll need to reboot the WebLogic Server Admin Server after making this change. Now configure JXplorer to connect as described in the documentation.  I’ve circled the important inputs.  In this example, my domain name is “hotspot_domain” which listens on the localhost listen address and port 7001.  The cn=Admin user name is a constant identifier for the Administrator of the embedded LDAP and that does not change, but you need to know what it is so you can enter it into the tool you use. Once you connect successfully, you can explore the entries and in this case delete the group that is no longer desired.
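    For anyone who prefers a command line over JXplorer, the same inspection and cleanup can be sketched with the standard OpenLDAP client tools, using the connection details described above for the example domain (localhost:7001, cn=Admin). The password placeholder is whatever credential you set for the embedded LDAP, and this is illustrative rather than tested against this exact group:

        ldapsearch -x -H ldap://localhost:7001 \
            -D "cn=Admin" -w '<embedded-ldap-credential>' \
            -b "ou=groups,ou=myrealm,dc=hotspot_domain" "(cn=aa:bb)" dn

        ldapdelete -x -H ldap://localhost:7001 \
            -D "cn=Admin" -w '<embedded-ldap-credential>' \
            "cn=aa:bb,ou=groups,ou=myrealm,dc=hotspot_domain"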

    Read the article

  • Clean Up the New Ubuntu Grub2 Boot Menu

    - by Trevor Bekolay
    Ubuntu adopted the new version of the Grub boot manager in version 9.10, getting rid of the old problematic menu.lst. Today we look at how to change the boot menu options in Grub2. Grub2 is a step forward in a lot of ways, and most of the annoying menu.lst issues from the past are gone. Still, if you’re not vigilant with removing old versions of the kernel, the boot list can still end up being longer than it needs to be. Note: You may have to hold the SHIFT button on your keyboard while booting up to get this menu to show. If only one operating system is installed on your computer, it may load it automatically without displaying this menu. Remove Old Kernel Entries The most common clean up task for the boot menu is to remove old kernel versions lying around on your machine. In our case we want to remove the 2.6.32-21-generic boot menu entries. In the past, this meant opening up /boot/grub/menu.lst…but with Grub2, if we remove the kernel package from our computer, Grub automatically removes those options. To remove old kernel versions, open up Synaptic Package Manager, found in the System > Administration menu. When it opens up, type the kernel version that you want to remove in the Quick search text field. The first few numbers should suffice. For each of the entries associated with the old kernel (e.g. linux-headers-2.6.32-21 and linux-image-2.6.32-21-generic), right-click and choose Mark for Complete Removal. Click the Apply button in the toolbar and then Apply in the summary window that pops up. Close Synaptic Package Manager. The next time you boot up your computer, the Grub menu will not contain the entries associated with the removed kernel version. Remove Any Option by Editing /etc/grub.d If you need more fine-grained control, or want to remove entries that are not kernel versions, you must change the files located in /etc/grub.d. /etc/grub.d contains files that hold the menu entries that used to be contained in /boot/grub/menu.lst. If you want to add new boot menu entries, you would create a new file in this folder, making sure to mark it as executable. If you want to remove boot menu entries, as we do, you would edit files in this folder. If we wanted to remove all of the memtest86+ entries, we could just make the 20_memtest86+ file non-executable, with the terminal command sudo chmod –x 20_memtest86+ Followed by the terminal command sudo update-grub Note that memtest86+ was not found by update-grub because it will only consider executable files. However, instead, we’re going to remove the Serial console 115200 entry for memtest86+… Open a terminal window Applications > Accessories > Terminal. In the terminal window, type in the command: sudo gedit /etc/grub.d/20_memtest86+ The menu entries are found at the bottom of this file. Comment out the menu entry for serial console 115200 by adding a “#” to the start of each line. Save and close this file. In the terminal window you opened, enter in the command sudo update-grub Note: If you don’t run update-grub, the boot menu options will not change! Now, the next time you boot up, that strange entry will be gone, and you’re left with a simple and clean boot menu. Conclusion While changing Grub2’s boot menu may seem overly complicated to legacy Grub masters, for normal users, Grub2 means that you won’t have to change the boot menu that often. Fortunately, if you do have to do it, the process is still pretty easy. For more detailed information about how to change entries in Grub2, this Ubuntu forum thread is a great resource. 
If you’re using an older version of Ubuntu, check out our article on how to clean up the Ubuntu Grub boot menu after upgrades.
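    The same kernel clean-up can also be done from a terminal instead of Synaptic. A sketch, using the example version from this article - substitute the old kernel versions actually present on your machine, and leave the kernel you are currently running alone:

        sudo apt-get remove --purge linux-image-2.6.32-21-generic \
            linux-headers-2.6.32-21 linux-headers-2.6.32-21-generic
        sudo update-grub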

    Read the article

  • How to create a new Team Project Collection in TFS2010:

    - by jehan
TFS 2010 has introduced the notion of a Team Project Collection (TPC). I have already discussed TPCs in an earlier post, which you can check out here. In this post, I will demonstrate how to create a new Team Project Collection in TFS 2010. First, open the TFS Administration Console (Start → All Programs → Microsoft Team Foundation Server 2010 → Team Foundation Server Administration Console), expand the Application Tier node in the TFS Administration Console and click on Team Project Collections. Here you will see the TPCs that already exist; I have only one TPC, named New Collection, and I am going to create a new TPC called Demo Collection. To create a new Team Project Collection, click on Create Collection; this opens the Create New Team Project Collection window.

Under the Name tab, enter the name you want to give your new TPC (I am naming it Demo Collection). You can also provide an optional description in the Description tab, then click Next. On the next screen, enter the name of the SQL Server instance where you want the new TPC's data to reside. You can either have a new database created for this TPC or use an existing empty database; then click Next.

The next screen covers the SharePoint configuration. You can configure a SharePoint site for the TPC at the default location, specify your own existing SharePoint site, or choose not to configure SharePoint for this collection at all. If you choose the last option, you will not be able to configure SharePoint sites for the Team Projects under this Project Collection. You do have the flexibility to add a SharePoint site for this TPC later on, but in that case you will have to configure SharePoint sites for the existing team projects manually.

The next screen covers the Reports configuration. You can configure reports for the TPC at the default path, specify an existing Reports folder, or choose not to configure reporting for this collection. If you choose the last option, reports cannot be created for the Team Projects under this Project Collection, although here too you can enable reporting for the TPC later on.

The next screen relates to the Lab Management configuration. Lab Management is a new feature in TFS 2010 that lets users create and manage virtual test environments where you can deploy and test your application. No options are available here because I don't have Lab Management configured for my Team Foundation Server.

The next screen is the Review Configuration window, which shows all the configuration settings you have specified so that you can review them before creating the Team Project Collection. If you want to make any changes, you can go back to the previous windows and adjust them. After reviewing the configuration settings, click the Verify button. This checks whether your Team Project Collection is ready to be created and lists any errors and warnings that could make the creation fail. You can then choose to create the Team Project Collection if the verify step doesn't raise any warnings or errors.

If the verify step does report issues, it is strongly recommended that you rectify them first and only then go ahead with TPC creation; this applies to warnings especially, since it is all too common to overlook them. If you choose the Create option, the process of creating the Team Project Collection starts, and once it completes you can check the configuration status of the different components. You can see in the screen below that all the components are configured successfully. On the next screen, you can find the location of the log file created for this Team Project Collection creation; this log file is really important if the creation fails, because it helps you find the root cause of the failure. Now you can see that the newly created Team Project Collection (Demo Collection) is available in the Team Project Collections tab and its status is Online. You can now try to connect to this Team Project Collection from Team Explorer: choose the newly created Team Project Collection and click Connect. This Team Project Collection is empty because no Team Projects have been created yet. Now you can create new Team Projects and start working.
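As a quick follow-up – and purely as a sketch, not something from the original walkthrough – this is roughly how you could connect to the newly created collection from code using the TFS 2010 client object model. The server name in the URL is a placeholder for your own application tier:

// Sketch: connect to the new "Demo Collection" with the TFS 2010 client API.
// The server name/URL below is hypothetical - substitute your own.
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Server;

class ConnectToDemoCollection
{
    static void Main()
    {
        // "Demo Collection" contains a space, so it is URL-encoded here
        var collectionUri = new Uri("http://tfsserver:8080/tfs/Demo%20Collection");

        using (var collection = new TfsTeamProjectCollection(collectionUri))
        {
            collection.Authenticate();
            Console.WriteLine("Connected to: {0}", collection.Name);

            // List the team projects in the collection (empty right after creation)
            var css = collection.GetService<ICommonStructureService>();
            foreach (var project in css.ListProjects())
            {
                Console.WriteLine("Team project: {0}", project.Name);
            }
        }
    }
}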

    Read the article

  • AWS: setting up auto-scale for EC2 instances

    - by Elton Stoneman
Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/16/aws-setting-up-auto-scale-for-ec2-instances.aspx

With Amazon Web Services, there’s no direct equivalent to Azure Worker Roles – no Elastic Beanstalk-style application for .NET background workers. But you can get the auto-scale part by configuring an auto-scaling group for your EC2 instance. This is a step-by-step guide that shows you how to create the auto-scaling configuration (which for EC2 you need to do with the command line) and then link your scaling policies to CloudWatch alarms in the Web Console. I’m using queue size as my metric for CloudWatch, which is a good fit if your background workers are pulling messages from a queue and processing them. If the queue is getting too big, the “high” alarm will fire and spin up a new instance to share the workload. If the queue is draining down, the “low” alarm will fire and shut down one of the instances.

To start with, you need to manually set up your app in an EC2 VM; for a background worker that would mean hosting your code in a Windows Service (I always use Topshelf). If you’re dual-running Azure and AWS, then you can isolate your logic in one library, with a generic entry point that has Start() and Stop() functions, so your Worker Role and Windows Service are essentially using the same code. When you have your instance set up with the Windows Service running automatically, and you’ve tested that it starts up and works properly from a reboot, shut the machine down and take an image of the VM, using Create Image (EBS AMI) from the Web Console. When that completes, you’ll have your own AMI which you can use to spin up new instances, and you’re ready to create your auto-scaling group. You need to dip into the command-line tools for this, so follow this guide to set up the AWS autoscale command line tool. Now we’re ready to go.

1. Create a launch configuration

This launch configuration tells AWS what to do when a new instance needs to be spun up. You create it with the as-create-launch-config command, which looks like this:

as-create-launch-config sc-xyz-launcher  # name of the launch config
  --image-id ami-7b9e9f12                # id of the AMI you extracted from your VM
  --region eu-west-1                     # which region the new instance gets created in
  --instance-type t1.micro               # size of the instance to create
  --group quicklaunch-1                  # security group for the new instance

2. Create an auto-scaling group

The auto-scaling group links to the launch config, and defines the overall configuration of the collection of instances:

as-create-auto-scaling-group sc-xyz-asg  # auto-scaling group name
  --region eu-west-1                     # region to create in
  --launch-configuration sc-xyz-launcher # name of the launch config to invoke for new instances
  --min-size 1                           # minimum number of nodes in the group
  --max-size 5                           # maximum number of nodes in the group
  --default-cooldown 300                 # period to wait (in seconds) after each scaling event, before checking if another scaling event is required
  --availability-zones eu-west-1a eu-west-1b eu-west-1c  # which availability zones you want your instances to be allocated in – multiple entries mean EC2 will use any of them

3. Create a scale-up policy

The policy dictates what will happen in response to a scaling event being triggered by a “high” alarm being breached.
It links to the auto-scaling group; this sample results in one additional node being spun up:

as-put-scaling-policy scale-up-policy    # policy name
  -g sc-xyz-asg                          # auto-scaling group the policy works with
  --adjustment 1                         # size of the adjustment
  --region eu-west-1                     # region
  --type ChangeInCapacity                # type of adjustment; this specifies a fixed number of nodes, but you can use PercentChangeInCapacity to make an adjustment relative to the current number of nodes, e.g. increasing by 50%

4. Create a scale-down policy

The policy dictates what will happen in response to a scaling event being triggered by a “low” alarm being breached. It links to the auto-scaling group; this sample results in one node from the group being taken offline:

as-put-scaling-policy scale-down-policy
  -g sc-xyz-asg
  "--adjustment=-1"                      # in Windows, use double-quotes to surround a negative adjustment value
  --type ChangeInCapacity
  --region eu-west-1

5. Create a “high” CloudWatch alarm

We’re done with the command line now. In the Web Console, open up the CloudWatch view and create a new alarm. This alarm will monitor your metrics and invoke the scale-up policy from your auto-scaling group when the group is working too hard. Configure your metric – this example will fire the alarm if there are more than 10 messages in my queue for over a minute – then link the alarm to the scale-up policy in your group.

6. Create a “low” CloudWatch alarm

The opposite of step 5, this alarm will trigger when the instances in your group don’t have enough work to do (e.g. fewer than 2 messages in the queue for 1 minute), and will invoke the scale-down policy.

And that’s it. You don’t need your original VM any more, as the auto-scale group maintains a minimum number of nodes. You can test out the scaling by flexing your CloudWatch metric – in this example, filling up a queue from a stub publisher – and watching AWS create new nodes as required, then stopping the publisher and watching AWS kill off the spare nodes.
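The as-* tools have since been superseded by the unified AWS CLI. For reference, here is a rough equivalent of steps 1–3 plus the “high” alarm using the aws command – this is a sketch, not from the original post, and the queue name, account id and policy ARN are placeholders you would replace with your own values:

# Sketch using the unified AWS CLI instead of the legacy as-* tools.
# Queue name, account id and the policy ARN below are placeholders.

# 1. Launch configuration
aws autoscaling create-launch-configuration \
  --launch-configuration-name sc-xyz-launcher \
  --image-id ami-7b9e9f12 \
  --instance-type t1.micro \
  --security-groups quicklaunch-1 \
  --region eu-west-1

# 2. Auto-scaling group
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name sc-xyz-asg \
  --launch-configuration-name sc-xyz-launcher \
  --min-size 1 --max-size 5 --default-cooldown 300 \
  --availability-zones eu-west-1a eu-west-1b eu-west-1c \
  --region eu-west-1

# 3. Scale-up policy (note the policy ARN this command returns)
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name sc-xyz-asg \
  --policy-name scale-up-policy \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity \
  --region eu-west-1

# 4. "High" alarm on queue depth, wired to the scale-up policy ARN from step 3
aws cloudwatch put-metric-alarm \
  --alarm-name my-queue-high \
  --namespace AWS/SQS \
  --metric-name ApproximateNumberOfMessagesVisible \
  --dimensions Name=QueueName,Value=my-queue \
  --statistic Average --period 60 --evaluation-periods 1 \
  --threshold 10 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:autoscaling:eu-west-1:123456789012:scalingPolicy:EXAMPLE \
  --region eu-west-1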

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process.

Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows:

Requirements
Analysis
Design and Development
Implementation
Testing
Deployment to Production
Maintenance

In an on-premise environment, this often equates to the following process map:

Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
Design and Development: Code written according to the organization’s chosen methodology, either on-premise or by multiple development teams on and off premise.
Implementation: Code checked into main branch. Code forked as needed.
Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing performed.
Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity is available, standard budgeting and procurement process followed, then systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
Maintenance: Code changes evaluated and altered according to need.

In a distributed computing environment like Windows Azure, the process maps a bit differently:

Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
Design and Development: Code written according to the organization’s chosen methodology, either on-premise or by multiple development teams on and off premise.
Implementation: Code checked into main branch. Code forked as needed.
Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. testing performed.
Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place. (HA/DR and automated backups are already present in the Azure fabric.)
Maintenance: Code changes evaluated and altered according to need.

This means that several steps can be removed or expedited.
It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the “Azure Marketplace”. In effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt into their current code, possibly saving development time.

Resources:

Whitepaper download – What is ALM? http://go.microsoft.com/?linkid=9743693
Whitepaper download – ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • Microsoft Sql Server driver for Nodejs - Part 2

    - by chanderdhall
Nodejs, Sql Server and JSON response with REST

This post is part 2 of Microsoft Sql Server driver for Node js. In this post we will look at the JSON responses from the Microsoft Sql Server driver for Node js.

Pre-requisites: If you have read Part 1 of the series, you should be good. We will be using a framework for REST within Nodejs – Restify – but that needs no prior learning.

Restify: Restify is a simple node module for building RESTful services. It is slimmer than Express. Express is a complete module that has everything you need to create a full-blown browser app. However, Restify does not have the additional overhead of templating, rendering etc. that would be needed if your app has views. So, as the name suggests, it's an awesome framework for building RESTful services and is very light-weight.

Set up – You can continue with the same directory or project structure we had in the previous post, or start a new one. Install restify using npm and you are good to go.

npm install restify

Go to Server.js and include Restify in your solution. Then create the server object using restify.createServer() – SLICK – ha?

var restify = require('restify');
var server = restify.createServer();
server.listen(8080, function () {
    console.log('%s listening at %s', server.name, server.url);
});

Then make sure you provide a port for the server to listen at. The callback function is optional but helps you for debugging purposes. Once you are done, save the file, then go to the command prompt and hit 'node server.js' and you should see the server start up.

To test the server, go to your browser and type the address 'http://localhost:8080/' and oops – you will see an error. Why is that? Well, because we haven't defined any routes. Let's go ahead and create a route. To begin with, I'd like to return whatever is typed in the url after my name, and the following code should do it.

server.get('/ChanderDhall/:name', function respond(req, res, next) {
    res.end("hello " + req.params.name + "");
});

You can also avoid writing callbacks inline. Something like this:

function respond(req, res, next) {
    res.end("Chander Dhall " + req.params.name + "");
}
server.get('/hello/:name', respond);

Now if you go ahead and type http://localhost:8080/ChanderDhall/LovesNode you will get the response 'hello LovesNode'. NOTE: Make sure your url has the right case, as it's case-sensitive. You could have also typed it in as 'server.get('/chanderdhall/:name', respond);'

Stored procedure: We've talked a lot about Restify now, but keep in mind the post is about being able to use Sql Server with Node and return JSON. To see this in action, let's go ahead and create another route that returns a list of Employees from a stored procedure.

server.get('/Employees', Employees);

The following code will return a JSON response.

function Employees(req, res, next) {
    res.header("Content-Type: application/json"); // Need to specify the Content-Type, which is JSON in our case.
    sql.open(conn_str, function (err, conn) {
        if (err) {
            // Logs an error
            console.log("Error opening the database connection!");
            return;
        }
        console.log("before query!");
        conn.queryRaw("exec sp_GetEmployees", function (err, results) {
            if (err) {
                // Connection is open but an error occurs while querying
                ...

What else can be done? Maybe create a formatter, or maybe even come up with a hypermedia type, but that may upset some pragmatists. Well, that's going to be a totally different discussion and is really not part of this series.
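The code excerpt above stops mid-callback, so here is a rough sketch – not from the original post – of how the rest of the Employees handler could look: handle the query error, then reshape the raw rows into objects and serialize them to JSON. The column-mapping logic and error messages are my own choices.

conn.queryRaw("exec sp_GetEmployees", function (err, results) {
    if (err) {
        // Connection is open but an error occurred while running the query
        console.log("Error running the stored procedure!");
        res.send(500);
        return next(err);
    }
    // queryRaw returns column metadata in results.meta and the data in results.rows;
    // turn each row (an array of values) into an object keyed by column name.
    var employees = results.rows.map(function (row) {
        var employee = {};
        results.meta.forEach(function (column, i) {
            employee[column.name] = row[i];
        });
        return employee;
    });
    res.end(JSON.stringify(employees));
    return next();
});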
Summary: We've discussed how to execute a stored procedure using Microsoft Sql Server driver for Node. Also, we have discussed how to format and send out a clean JSON to the app calling this API.  

    Read the article

  • SQL SERVER – master Database Log File Grew Too Big

    - by pinaldave
A couple of days ago, I received the following email, which I found very interesting, and I feel like sharing it with all of you. Note: Please read the whole email before providing your suggestions.

“Hi Pinal,

If you can share these details on your blog, it will help many. We understand the value of the master database and we take its regular backup (every day at midnight). Yesterday we noticed that our master database log file has grown very large. This is the very first time that we have encountered such an issue. The master database is in simple recovery mode, so we assumed that it would never grow big; however, we now have a big log file. We ran the following command

USE [master]
GO
DBCC SHRINKFILE (N'mastlog' , 0, TRUNCATEONLY)
GO

We know this command will break the chain of LSNs, but as per our understanding it should not matter as we are in the simple recovery model. After running this, the log file became very small. Just to be cautious, we took a full backup of the master database right away. We totally understand that this is not the normal practice; so if you are going to tell us the same, we are aware of it. However, here is the question for you: what operation in the master database could have caused our log file to grow so large?

Thanks, [name and company name removed as per request]“

Here was my response to them:

“Hi [name removed],

It is great that you are aware of all the right steps and methods. Taking a full backup when you are not sure is always a good practice. Regarding your question of what could have caused your master database log to grow so large, let me try to guess what could have happened. Do you have any user tables in the master database? If yes, this is not recommended and NOT a good practice. If you have user tables in the master database and you are running any long operations against them (maybe lots of inserts, updates, deletes or rebuilds), then that can cause this situation. You have made me curious about your scenario; do revert back.

Kind Regards, Pinal”

Within a few minutes I received a reply:

“That was it Pinal. We had one of the maintenance task log tables created in the master database, which had many long transactions during the night. We moved it to a newly created database named ‘maintenance’, and we will keep you updated.”

I was very glad to receive the email. I do not suggest that any user table should be created in the master database; it should be kept free of user objects. Now here is the question for you – can you think of any other reason for master log file growth?

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
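As a quick aside to the exchange above, one way to check whether any user tables have crept into master is a query along these lines (a sketch, not from the original post):

-- List user-created (non-system) tables that live in the master database
USE master;
SELECT name, create_date
FROM sys.tables
WHERE is_ms_shipped = 0;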

    Read the article

  • Can't boot 12.04 installed alongside Windows 7

    - by PalaceChan
    I realize there are other questions like this one here, but I have visited them and tried several things and nothing is helping. One of them had a suggestion to boot the liveCD, and sudo mount /dev/sda* /mnt and to then chroot and reinstall grub. I did this and it did not help. Then on the Windows side, I downloaded a free version of easyBCD and chose to add a Grub2 Ubuntu 12.04 entry. On restart I saw this entry, but when I click on it it takes me to a Windows failed to boot error, as if it wasn't even trying to boot Ubuntu. I have booted from Ubuntu liveCD once again and have a snapshot of my GParted I ran this bootinfoscript thing from the liveCD, here are my results: It seems grub is on sda. I just want to be able to boot into my Ubuntu on startup. Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== = Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1041658947 of the same hard drive for core.img. core.img is at this location and looks for (,gpt7)/boot/grub on this drive. sda1: __________________________________________ File system: vfat Boot sector type: Windows 7: FAT32 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /efi/Boot/bootx64.efi sda2: __________________________________________ File system: Boot sector type: - Boot sector info: Mounting failed: mount: unknown filesystem type '' sda3: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda4: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda5: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /bootmgr /boot/bcd sda6: __________________________________________ File system: BIOS Boot partition Boot sector type: Grub2's core.img Boot sector info: sda7: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda7 and looks at sector 1046637581 of the same hard drive for core.img. core.img is at this location and looks for (,gpt7)/boot/grub on this drive. Operating System: Ubuntu 12.04 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sda8: __________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 1 1,465,149,167 1,465,149,167 ee GPT GUID Partition Table detected. 
Partition Start Sector End Sector # of Sectors System /dev/sda1 2,048 411,647 409,600 EFI System partition /dev/sda2 411,648 673,791 262,144 Microsoft Reserved Partition (Windows) /dev/sda3 673,792 533,630,975 532,957,184 Data partition (Windows/Linux) /dev/sda4 533,630,976 1,041,658,946 508,027,971 Data partition (Windows/Linux) /dev/sda5 1,412,718,592 1,465,147,391 52,428,800 Windows Recovery Environment (Windows) /dev/sda6 1,041,658,947 1,041,660,900 1,954 BIOS Boot partition /dev/sda7 1,041,660,901 1,396,174,572 354,513,672 Data partition (Windows/Linux) /dev/sda8 1,396,174,573 1,412,718,591 16,544,019 Swap partition (Linux) blkid output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 B498-319E vfat SYSTEM /dev/sda3 820C0DA30C0D92F9 ntfs OS /dev/sda4 168410AB84108EFD ntfs DATA /dev/sda5 AC7A43BA7A438056 ntfs Recovery /dev/sda7 42a5b598-4d8b-471b-987c-5ce8a0ce89a1 ext4 /dev/sda8 5732f1c7-fa51-45c3-96a4-7af3bff13278 swap /dev/sr0 iso9660 Ubuntu 12.04 LTS i386 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sr0 /cdrom iso9660 (ro,noatime) =========================== sda7/boot/grub/grub.cfg: =========================== How can I get this option? When I was using easyBCD, it kept saying I had no entries at all, so I did the add entry thing for Ubuntu many times and I see several of those on boot screen now. I'd love to get rid of all those unusable options.
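For reference, the chroot-and-reinstall procedure mentioned at the top of the question usually looks roughly like the following. This is just a sketch, and the device names assume the layout shown above (Ubuntu root on /dev/sda7, GRUB going to /dev/sda):

# Mount the Ubuntu root partition and the pseudo-filesystems, then chroot
sudo mount /dev/sda7 /mnt
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt

# Inside the chroot: reinstall GRUB to the disk and regenerate its menu
grub-install /dev/sda
update-grub
exit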

    Read the article

  • Cross-language Extension Method Calling

    - by Tom Hines
    Extension methods are a concise way of binding functions to particular types. In my last post, I showed how Extension methods can be created in the .NET 2.0 environment. In this post, I discuss calling the extensions from other languages. Most of the differences I find between the Dot Net languages are mainly syntax.  The declaration of Extensions is no exception.  There is, however, a distinct difference with the framework accepting excensions made with C++ that differs from C# and VB.  When calling the C++ extension from C#, the compiler will SOMETIMES say there is no definition for DoCPP with the error: 'string' does not contain a definition for 'DoCPP' and no extension method 'DoCPP' accepting a first argument of type 'string' could be found (are you missing a using directive or an assembly reference?) If I recompile, the error goes away. The strangest problem with calling the C++ extension from C# is that I first must make SOME type of reference to the class BEFORE using the extension or it will not be recognized at all.  So, if I first call the DoCPP() as a static method, the extension works fine later.  If I make a dummy instantiation of the class, it works.  If I have no forward reference of the class, I get the same error as before and recompiling does not fix it.  It seems as if this none of this is supposed to work across the languages. I have made a few work-arounds to get the examples to compile and run. Note the following examples: Extension in C# using System; namespace Extension_CS {    public static class CExtension_CS    {  //in C#, the "this" keyword is the key.       public static void DoCS(this string str)       {          Console.WriteLine("CS\t{0:G}\tCS", str);       }    } } Extension in C++ /****************************************************************************\  * Here is the C++ implementation.  It is the least elegant and most quirky,  * but it works. \****************************************************************************/ #pragma once using namespace System; using namespace System::Runtime::CompilerServices;     //<-Essential // Reference: System.Core.dll //<- Essential namespace Extension_CPP {        public ref class CExtension_CPP        {        public:               [Extension] // or [ExtensionAttribute] /* either works */               static void DoCPP(String^ str)               {                      Console::WriteLine("C++\t{0:G}\tC++", str);               }        }; } Extension in VB ' Here is the VB implementation.  This is not as elegant as the C#, but it's ' functional. Imports System.Runtime.CompilerServices ' Public Module modExtension_VB 'Extension methods can be defined only in modules.    <Extension()> _       Public Sub DoVB(ByVal str As String)       Console.WriteLine("VB" & Chr(9) & "{0:G}" & Chr(9) & "VB", str)    End Sub End Module   Calling program in C# /******************************************************************************\  * Main calling program  * Intellisense and VS2008 complain about the CPP implementation, but with a  * little duct-tape, it works just fine. 
\******************************************************************************/ using System; using Extension_CPP; using Extension_CS; using Extension_VB; // vitual namespace namespace TestExtensions {    public static class CTestExtensions    {       /**********************************************************************\        * For some reason, this needs a direct reference into the C++ version        * even though it does nothing than add a null reference.        * The constructor provides the fake usage to please the compiler.       \**********************************************************************/       private static CExtension_CPP x = null;   // <-DUCT_TAPE!       static CTestExtensions()       {          // Fake usage to stop compiler from complaining          if (null != x) {} // <-DUCT_TAPE       }       static void Main(string[] args)       {          string strData = "from C#";          strData.DoCPP();          strData.DoCS();          strData.DoVB();       }    } }   Calling program in VB  Imports Extension_CPP Imports Extension_CS Imports Extension_VB Imports System.Runtime.CompilerServices Module TestExtensions_VB    <Extension()> _       Public Sub DoCPP(ByVal str As String)       'Framework does not treat this as an extension, so use the static       CExtension_CPP.DoCPP(str)    End Sub    Sub Main()       Dim strData As String = "from VB"       strData.DoCS()       strData.DoVB()       strData.DoCPP() 'fake    End Sub End Module  Calling program in C++ // TestExtensions_CPP.cpp : main project file. #include "stdafx.h" using namespace System; using namespace Extension_CPP; using namespace Extension_CS; using namespace Extension_VB; void main(void) {        /*******************************************************\         * Extension methods are called like static methods         * when called from C++.  There may be a difference in         * syntax when calling the VB extension as VB Extensions         * are embedded in Modules instead of classes        \*******************************************************/     String^ strData = "from C++";     CExtension_CPP::DoCPP(strData);     CExtension_CS::DoCS(strData);     modExtension_VB::DoVB(strData); //since Extensions go in Modules }

    Read the article

  • Autoscaling in a modern world&hellip;. Part 2

    - by Steve Loethen
When we last left off, we had a web application spinning away in the cloud, and a local console application watching it and reacting to changes in demand – reactions that were specified by a set of rules. Let’s talk about those rules.

Constraints. The first set of rules this application answers to are the constraints. Here is what they look like:

<constraintRules>
  <rule name="default" enabled="true" rank="1" description="The default constraint rule">
    <actions>
      <range min="1" max="4" target="AutoscalingApplicationRole"/>
    </actions>
  </rule>
</constraintRules>

Pretty basic. We have one role, the “AutoscalingApplicationRole”, and we have decided to have it live within a range of 1 to 4. This rule does not adjust anything; instead, it sets limits on what other rules can do. It has a rank, so you can specify other sets of constraints, perhaps based on time or date, to allow for deviations from this set. But for now, let’s keep it simple. In the real world, you would probably use the minimum to set a lower-end SLA. A common value might be 2, to prevent the reactive rules from ever taking you down to 1 role. The maximum is often used to keep a rule from driving the cost up, setting an upper limit so you don't wake up one morning and find a bill for hundreds of instances you didn’t expect. So, here we have the range we want our application to live inside. This is good for our investigation and testing.

Next, let’s take a look at the reactive rules. These rules are what you use to react (hence reactive rules) to changing demands on your application. The HOL has two simple rules: one that looks at a queue depth, and one that looks at a performance counter that reports CPU utilization. The XML in the rules file looks like this:

<reactiveRules>
  <rule name="ScaleUp" rank="10" description="Scale Up the web role" enabled="true">
    <when>
      <any>
        <greaterOrEqual operand="Length_05_holqueue" than="10"/>
        <greaterOrEqual operand="CPU_05_holwebrole" than="65"/>
      </any>
    </when>
    <actions>
      <scale target="AutoscalingApplicationRole" by="1"/>
    </actions>
  </rule>
  <rule name="ScaleDown" rank="10" description="Scale down the web role" enabled="true">
    <when>
      <all>
        <less operand="Length_05_holqueue" than="5"/>
        <less operand="CPU_05_holwebrole" than="40"/>
      </all>
    </when>
    <actions>
      <scale target="AutoscalingApplicationRole" by="-1"/>
    </actions>
  </rule>
</reactiveRules>
<operands>
  <performanceCounter alias="CPU_05_holwebrole" performanceCounterName="\Processor(_Total)\% Processor Time" source="AutoscalingApplicationRole" timespan="00:05:00" aggregate="Average" />
  <queueLength alias="Length_05_holqueue" queue="hol-queue" timespan="00:05:00" aggregate="Average"/>
</operands>

These rules are currently contained in a file called rules.xml that sits in the root of the console application. The console app starts up, grabs the rules and starts watching the two operands. When it detects that a rule has been satisfied, it performs the desired action (here, scale up or down by 1). But I want to host the autoscaler in the cloud. For my first trick, I will move the rules (and another file called services.xml) to Azure blob storage – a rough sketch of that upload is below. Look for part 3.
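To give an idea of what that move looks like, here is a minimal sketch – not from this post, part 3 covers the real thing – of pushing rules.xml into a blob container with the Azure storage client library of that era. The connection string, container name and blob name are my own assumptions:

// Minimal sketch: upload rules.xml to Azure blob storage so the autoscaler
// can read its rules from the cloud. Names and connection string are placeholders.
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class UploadRules
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY");
        var blobClient = account.CreateCloudBlobClient();

        // Container that will hold the autoscaling artifacts
        var container = blobClient.GetContainerReference("autoscaling-rules");
        container.CreateIfNotExist();

        // Push the local rules file up as a blob
        var blob = container.GetBlobReference("rules.xml");
        blob.UploadText(File.ReadAllText("rules.xml"));
    }
}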

    Read the article

  • Tech Ed/BI Conference 2010: A Recovering Industry in a Recovering City

    - by andrewbrust
    I tried writing a post for this blog last night, while at the this year’s Microsoft Tech Ed and Business Intelligence conferences, in New Orleans. But I literally fell asleep while writing it.  That’s probably a sign that my readers might have done the same while reading it. Why the writer’s block? This was a very good show for me, but I think I was having trouble figuring out exactly why.  Now that I’m on the flight home, I’m starting to piece it together. One reason, for sure, was that I’ve spent years in both the developer and the BI worlds, and a show that combined the two was really enjoyable for me.  Typically, the subject matter, the attendees, the Microsoft execs and managers, and even the social circles have been separate.  This year’s Tech Ed facilitated a fusion of each of these previously segregated groups.  That was good for me as a speaker; for example, I facilitated a Birds of a Feather session on PowerPivot (Microsoft’s new self-service BI offering) which was well-attended, and by a large number of non-BI pros.  The fusion was good for me as an attendee too, as Microsoft BI, in the form of a new Pivot Viewer control, made it into the Day 1 keynote, demoed by Microsoft’s key BI champion, Amir Netz.  And it was good for me socially, as I was able to meet with peers in both camps, and at one location. Speaking of meeting with industry colleagues, I did a lot of that at this show.  Probably for the first time ever, I carefully scheduled and conducted a series of meetings with friends and business acquaintances in the developer tools, data visualization, utilities, publishing and training areas of the Microsoft ecosystem.  Beside the time efficiencies in conducting so many meetings, I discovered another benefit. I got a real handle on the tech industry’s economic health. The news here is good.  First of all, 2010 has been a great year for just about everyone I spoke to.  The mood is positive, energy is high, and people are working really hard.  This is, of course, refreshing to see, and it’s a huge relief.  Add to that the fact that this year’s Tech Ed was about 2.5 times larger in headcount than last year’s (based on numbers from unofficial, but reliable, sources), and the economic prognosis seems excellent.  But there’s more to it than that. Here’s the thing: everyone I talked to seems to be working, and succeeding, at changing their business models to adapt to changes in the industry.  Whether it’s the Internet’s impact on publishing and training, the increased importance of the developer audience in South Asia, the shift of affordable developer and business talent to unfamiliar locales abroad, or even lapses in Microsoft’s performance in the market, partner companies aren’t just rolling with the punches; they’re welcoming the changes and working them to their advantage.  No one seemed downtrodden, or even fatigued.  Even for businesses who have seen core revenue streams become commoditized, everyone seems to be changing their market strategy and winning.  Even Microsoft, of whom I have been critical recently, showed signs of successful hard work and playbook change, in the maturing of their cloud strategy, their commitment to it and their excitement around it.  And the embedded, managed, self-service BI strategy that Microsoft has been touting looks like it’s already being embraced by customers, even though PowerPivot, and other new Microsoft BI products, were released only recently. 
The collective optimism I have witnessed, and that I have felt, tells me good things about this industry and the economy.  The stock market had huge mood swings during my stay, and that may yet subdue the industry recovery I have seen this week.  Nonetheless, I am convinced that a strong foundation of hard work, innovative thinking and, if I may,  true renaissance is underlying this industry’s success. That kind of strength will generate a strong recovery, I am certain, whether now or once we’re past another round of choppy weather in the broader economy.  The fundamentals are good.

    Read the article

  • Backup Your Windows Home Server Off-Site with Asus Webstorage

    - by Mysticgeek
    Windows Home Server lets you backup machines on your network easily. But what about backing up the server data? Today we take a look at ASUS WebStorage for Windows Home Server, which provides you with secure off-site backup for WHS. To use the ASUS WebStorage service you’ll need to sign up for a free account. It offers 1GB of free storage, then you can purchase an unlimited backup package for $39.99 for a year subscription. Note: They also offer online storage for individual PCs as well. Install ASUS WebStorage for WHS Browse to your shared folders on the server and open the Add-Ins folder and copy over the WHSConnectorSetup2.2.4.088.msi file (link below) then close out of the folder. Now launch Windows Home Server Console from one of the computers on your network, click Settings, then Add-ins. Under Available Add-ins click the Available tab and you’ll see the Asus WebStorage installer file we just copied over. Click the Install button. Installation kicks off and when it’s complete, you’ll need to close out of the console and reconnect. Using ASUS WebStorage WHS Connector  When you reconnect to WHS Console, scroll over to the ASUS WebStorage icon and click on Settings. Now log into your ASUS account… Now select the folders you want to backup to the WebStorage service. Select the radio button next to Enable to initialize the backup process… The backup process begins. You can change which folders are backed up simply by disabling the backup process, uncheck the folder(s), then enable the backup again. ASUS WebStorage Site After you have files backed up to the ASUS site, log into your account, and your presented with an overview of the amount of storage you’re using. It also shows what type of files are taking certain amounts of space.   You can browse through your backed up files and folders. It allows you to share and sync backed up data as well. Navigate to the file you want and you can easily download it by clicking on it, or share it out by clicking the share link below it. If you choose to share it, you’re provided with a link to the file to send out to other users.   Conclusion Users of Windows Home Server have been looking for an inexpensive cloud backup solution for quite some time. There are services such as JungleDisk, KeepVault, Wuala…etc. These services probably do a better job, but can start getting expensive once you start uploading a GBs of data. Another disappointment of ASUS WebStorage is you can only backup your WHS shares (from what we’ve been able to determine), it’s an “all or nothing” type of thing. You cannot go in and select individual files and folders. The initial upload speeds can be a bit slow as well, although that might have something to do with limited upload speeds on the DSL connection we used to test it. Retrieving your data from the ASUS site is a breeze though, and all the data files are organized quite well. The WHS Addin is very easy to install and use. If you’re looking for an off-site solution to backup your WHS data, you can test out ASUS WebStorage for free with a 1GB limit. This is good for testing the service and it might be exactly what you’re looking for. Other users may want a more advanced solution like KeepVault or CloudBerry…which is a front end for Amazon S3 storage. 
Download ASUS WebStorage WHS Addin

Other WHS Offsite Backup Solutions: CloudBerry, JungleDisk, KeepVault, Wuala

    Read the article

  • Oracle OpenWorld Update: Oracle GoldenGate for High Availability

    - by Doug Reid
One of our primary themes this year for the Oracle OpenWorld Sessions featuring Oracle GoldenGate is High Availability. This is a pretty wide theme, but the focus will be on ways of maximizing uptime for critical systems during planned and unplanned events. We have a number of very informative sessions dedicated to exploring this theme in detail; from deep product implementation strategies up to lessons learned by our customers when using Oracle GoldenGate to meet strict SLAs. We kick this track off with our Customer Panel on Zero Downtime Operations on Monday, which I overviewed in my last posting. This is followed by Comcast, who will be hosting a session at 1:45PM in Moscone West 3014. Their session will discuss using Oracle GoldenGate to reduce downtime during a database upgrade. Here’s an overview: CON8571 - Oracle Database Upgrade with Oracle GoldenGate: Best Practices from Comcast Does your business demand high availability? In this session, Comcast, among the largest telecom firms in the world, shares tips on how to achieve zero downtime while upgrading to Oracle Database 11g Release 2, using a combination of Oracle technologies: Oracle Real Application Clusters (Oracle RAC), Oracle Database’s Grid Infrastructure and Oracle Recovery Manager (Oracle RMAN) features, Oracle GoldenGate, and Oracle Active Data Guard. This successful upgrade took place on a mission-critical system that handles more than 60 million business requests and service calls a day. You’ll also hear how Comcast leverages Oracle Advanced Customer Support Services, including an Oracle Solution Support Center, to maximize performance and availability of its Oracle technologies. On Tuesday, Joydip Kundu (Director of Software Development) will be presenting “Oracle GoldenGate and Oracle Data Guard: Working Together Seamlessly” at 10:15AM in Moscone South 3005. This session focuses on how both modes of Oracle GoldenGate extract (Classic and Integrated Capture) can be used with Oracle Data Guard for disaster recovery purposes or to offload extract processing. That afternoon at 1:15PM Comcast takes the stage again to discuss firsthand lessons learned implementing Oracle GoldenGate in a heterogeneous, highly available environment. Here’s a rundown of their session: CON8750 - High-Volume OLTP with Oracle GoldenGate: Best Practices from Comcast Does your business demand high availability in a mission-critical environment? In this session, Comcast, one of the largest telecom firms, shares best practices for leveraging Oracle GoldenGate to replicate high-volume online transaction processing data from Tandem NSK SQL/MX to Teradata. Hear critical success factors from Comcast for overall platform and component architectures as well as configuration and tuning techniques. Learn how it met the challenges of replication in a complex heterogeneous environment.
You’ll also hear how Comcast leverages Oracle Advanced Customer Support Services, including an Oracle Solution Support Center, to provide mission-critical support for maximized performance and availability of its Oracle environment. The final session on the high availability track will be hosted by Patricia Mcelroy (Distinguished Product Manager) and Stephan Haisly (Principle Member of Technical Staff). Their session (CON8401 - Tuning and Troubleshooting Oracle GoldenGate on Oracle Database) covers techniques for performance tuning and troubleshooting of Oracle GoldenGate on Oracle Database. Using various types of workloads (OLTP, batch, Oracle’s PeopleSoft Enterprise), the presentation steps through the process of monitoring and troubleshooting the configuration to maximize performance and replication throughput within and between Oracle clouds. Join us at our sessions or stop by our demo pods in Moscone south and meet the product management and development teams.

    Read the article

  • eSTEP Newsletter November 2012

    - by uwes
    Dear Partners,We would like to inform you that the November '12 issue of our Newsletter is now available.The issue contains information to the following topics: News from CorpOracle Celebrates 25 Years of SPARC Innovation; IDC White Papers Finds Growing Customer Comfort with Oracle Solaris Operating System; Oracle Buys Instantis; Pillar Axiom OpenWorld Highlights; Announcement Oracle Solaris 11.1 Availability (data sheet, new features, FAQ's, corporate pages, internal blog, download links, Oracle shop); Announcing StorageTek VSM 6; Announcement Oracle Solaris Cluster 4.1 Availability (new features, FAQ's, cluster corp page, download site, shop for media); Announcement: Oracle Database Appliance 2.4 patch update becomes available Technical SectionOracle White papers on SPARC SuperCluster; Understanding Parallel Execution; With LTFS, Tape is Gaining Storage Ground with additional link to How to Create Oracle Solaris 11 Zones with Oracle Enterprise Manager Ops Center; Provisioning Capabilities of Oracle Enterprise Ops Center Manager 12c; Maximizing your SPARC T4 Oracle Solaris Application Performance with the following articles: SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark, SPARC T4-Based Highly Scalable Solutions Posts New World Record on SPECjEnterprise2010 Benchmark, SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; Oracle SUN ZFS Storage Appliance Reference Architecture for VMware vSphere4;  Why 4K? - George Wilson's ZFS Day Talk; Pillar Axiom 600 with connected subjects: Oracle Introduces Pillar Axiom Release 5 Storage System Software, Driving down the high cost of Storage, This Provisioning with Pilar Axiom 600, Pillar Axiom 600- System overview and architecture; Migrate to Oracle;s SPARC Systems; Top 5 Reasons to Migrate to Oracle's SPARC Systems Learning & EventsRecently delivered Techcasts: Learning Paths; Oracle Database 11g: Database Administration (New) - Learning Path; Webcast: Drill Down on Disaster Recovery; What are Oracle Users Doing to Improve Availability and Disaster Recovery; SAP NetWeaver and Oracle Exadata Database Machine ReferencesARTstor Selects Oracle’s Sun ZFS Storage 7420 Appliances To Support Rapidly Growing Digital Image Library, Scottish Widows Cuts Sales Administration 20%, Reduces Time to Prepare Reports by 75%, and Achieves Return on Investment in First Year, Oracle's CRM Cloud Service Powers Innovation: Applications on Demand; Technology on Demand, How toHow to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11; How to prepare a Sun ZFS Storage Appliance to Serve as a Storage Devise with Oracle Enterprise Manager Ops Center 12c; Command Summary: Basic Operations with the Image Packaging System In Oracle Solaris 11; How to Update to Oracle Solaris 11.1 Using the Image Packaging System, How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11;  Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; Ease the Chaos with Automated Patching: Oracle Enterprise Manager Cloud Control 12c; Book excerpt: Oracle Exalogic Elastic Cloud Handbook You find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the pin below to get access. 
Link to the portal is shown below.URL: http://launch.oracle.com/PIN: eSTEP_2011Previous published Newsletters can be found under the Archived Newsletters section and more useful information under the Events, Download and Links tab. Feel free to explore and any feedback is appreciated to help us improve the service and information we deliver.Thanks and best regards,Partner HW Enablement EMEA

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 20-26, 2013

    - by OTN ArchBeat
    Here's this week's list of the Top 10 items shared on the OTN ArchBeat Facebook Page from October 27 - November 2, 2013. Visualizing and Process (Twitter) Events in Real Time with Oracle Coherence | Noah Arliss This OTN Virtual Developer Day session explores in detail how to create a dynamic HTML5 Web application that interacts with Oracle Coherence as it’s processing events in real time, using the Avatar project and Oracle Coherence’s Live Events feature. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now! HTML5 Application Development with Oracle WebLogic Server | Doug Clarke This free OTN Virtual Developer Day session covers the support for WebSockets, RESTful data services, and JSON infrastructure available in Oracle WebLogic Server. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now! Video: ADF BC and REST services | Frederic Desbiens Spend a few minutes with Oracle ADF principal product manager Frederic Desbiens and learn how to publish ADF Business Components as RESTful web services. One Client Two Clusters | David Felcey "Sometimes its desirable to have a client connect to multiple clusters, either because the data is dispersed or for instance the clusters are in different locations for high availability," says David Felcey. David shows you how in this post, which includes a simple example. Exceptions Handling and Notifications in ODI | Christophe Dupupet Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps. Securing WebSocket applications on Glassfish | Pavel Bucek WebSocket is a key capability standardized into Java EE 7. Many developers wonder how WebSockets can be secured. One very nice characteristic for WebSocket is that it in fact completely piggybacks on HTTP. In this post Pavel Bucek demonstrates how to secure WebSocket endpoints in GlassFish using TLS/SSL. Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence." Non-programmatic Authentication Using Login Form in JSF (For WebCenter & ADF) | JayJay Zheng Oracle ACE JayJay Zheng shares an approach that "avoids the programmatic authentication and works great for having a custom login page developed in WebCenter Portal integrated with OAM authentication." The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go. http://pub.vitrue.com/PUxT Tech Article: SOA in Real Life: Mobile Solutions The ACE Director Thing | Dr. Frank Munz Frank Munz finally gets around to blogging about achieving Oracle ACE Director status and shares some interesting insight into what will change—and what won't—thanks to that new status. A good, short read for those interested in learning more about the Oracle ACE program. Thought for the Day "Even if you're on the right track, you'll get run over if you just sit there." 
— Will Rogers, American humorist (November 4, 1879 – August 15, 1935) Source: brainyquote.com

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
We’re all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn’t the same point in time for all databases. This common relative point in time might be now in DB1, now-1 hour in DB2 and yesterday in DB3. And we don’t know the exact times. Let me introduce you to Transaction Marks. When we run a marked transaction using the WITH MARK option, a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don’t even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in different databases. Let’s see how this works with an example. The code comments say what’s going on.

USE master
GO
CREATE DATABASE TestTxMark1
GO
USE TestTxMark1
GO
CREATE TABLE TestTable1(ID INT, VALUE UNIQUEIDENTIFIER)

-- insert some data into the table so we can have a starting point
INSERT INTO TestTable1
SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
FROM master..spt_values
ORDER BY RN

SELECT *
FROM TestTable1
GO

-- TAKE A FULL BACKUP of the database
BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
GO

USE master
GO
CREATE DATABASE TestTxMark2
GO
USE TestTxMark2
GO
CREATE TABLE TestTable2(ID INT, VALUE UNIQUEIDENTIFIER)

-- insert some data into the table so we can have a starting point
INSERT INTO TestTable2
SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
FROM master..spt_values
ORDER BY RN

SELECT *
FROM TestTable2
GO

-- TAKE A FULL BACKUP of our database
BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
GO

-- start a marked transaction that modifies both databases
BEGIN TRAN TxDb WITH MARK
    -- update values from NULL to random value
    UPDATE TestTable1 SET VALUE = NEWID();
    -- update first 100 values from random value
    -- to NULL in different DB
    UPDATE TestTxMark2.dbo.TestTable2 SET VALUE = NULL WHERE ID <= 100;
COMMIT
GO

-- some time goes by here
-- with various database activity...

-- We see two entries for marks in each database.
-- This is just informational and has no bearing on the restore itself.
SELECT *
FROM msdb..logmarkhistory

USE master
GO
-- create a log backup to restore to mark point
BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
GO
-- drop the database so we can restore it back
DROP DATABASE TestTxMark1
GO

USE master
GO
-- create a log backup to restore to mark point
BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
GO
-- drop the database so we can restore it back
DROP DATABASE TestTxMark2
GO

-- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
-- restore the full backup
RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
-- restore the log backup to the transaction mark
RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn'
WITH RECOVERY,
     -- recover to state before the transaction
     STOPBEFOREMARK = 'TxDb';
     -- recover to state after the transaction
     -- STOPATMARK = 'TxDb';
GO

-- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
-- restore the full backup
RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
-- restore the log backup to the transaction mark
RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn'
WITH RECOVERY,
     -- recover to state before the transaction
     STOPBEFOREMARK = 'TxDb';
     -- recover to state after the transaction
     -- STOPATMARK = 'TxDb';
GO

USE TestTxMark1
-- we restored to time before the transaction
-- so we have NULL values in our table
SELECT *
FROM TestTable1

USE TestTxMark2
-- we restored to time before the transaction
-- so we DON'T have NULL values in our table
SELECT *
FROM TestTable2

Transaction marks can be used like a crude sync mechanism for cross-database operations. With them we can mark our databases with a common “restore to” point, so we know we have a valid state between all databases to restore to.

    Read the article

< Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >