Search Results

Search found 2229 results on 90 pages for 'conditions'.


  • Effective optimization strategies on modern C++ compilers

    - by user168715
    I'm working on scientific code that is very performance-critical. An initial version of the code has been written and tested, and now, with profiler in hand, it's time to start shaving cycles from the hot spots. It's well known that some optimizations, e.g. loop unrolling, are handled much more effectively these days by the compiler than by a programmer meddling by hand. Which techniques are still worthwhile? Obviously I'll run everything I try through a profiler, but if there's conventional wisdom as to what tends to work and what doesn't, it would save me significant time. I know that optimization is very compiler- and architecture-dependent. I'm using Intel's C++ compiler targeting the Core 2 Duo, but I'm also interested in what works well for gcc, or for "any modern compiler." Here are some concrete ideas I'm considering:

    Is there any benefit to replacing STL containers/algorithms with hand-rolled ones? In particular, my program includes a very large priority queue (currently a std::priority_queue) whose manipulation is taking a lot of total time. Is this something worth looking into, or is the STL implementation already likely the fastest possible? Along similar lines, for std::vectors whose needed sizes are unknown but have a reasonably small upper bound, is it profitable to replace them with statically allocated arrays?

    I've found that dynamic memory allocation is often a severe bottleneck, and that eliminating it can lead to significant speedups. As a consequence I'm interested in the performance tradeoffs of returning large temporary data structures by value vs. returning by pointer vs. passing the result in by reference. Is there a way to reliably determine whether or not the compiler will use RVO for a given method (assuming the caller doesn't need to modify the result, of course)?

    How cache-aware do compilers tend to be? For example, is it worth looking into reordering nested loops?

    Given the scientific nature of the program, floating-point numbers are used everywhere. A significant bottleneck in my code used to be conversions from floating point to integers: the compiler would emit code to save the current rounding mode, change it, perform the conversion, then restore the old rounding mode --- even though nothing in the program ever changed the rounding mode! Disabling this behavior significantly sped up my code. Are there any similar floating-point-related gotchas I should be aware of?

    One consequence of C++ being compiled and linked separately is that the compiler is unable to do what would seem to be very simple optimizations, such as moving calls like strlen() out of the termination conditions of loops. Are there any optimizations like this one that I should look out for because they can't be done by the compiler and must be done by hand? On the flip side, are there any techniques I should avoid because they are likely to interfere with the compiler's ability to automatically optimize code?

    Lastly, to nip certain kinds of answers in the bud: I understand that optimization has a cost in terms of complexity, reliability, and maintainability. For this particular application, increased performance is worth these costs. I understand that the best optimizations are often improvements to the high-level algorithms, and this has already been done.

    Read the article

  • Ajax Calendar Date Range with JavaScript

    - by hungrycoder
    I have the following code to compare two dates under the following conditions.

    Scenario: On load there are two text boxes (FromDate, ToDate) with Ajax calendar extenders, and FromDate shows today's date. When a date earlier than today is selected in either text box (FromDate, ToDate), the user is alerted with "You cannot select a day earlier than today!" When ToDate's selected date < FromDate's selected date, the user is alerted with "To Date must be Greater than From date." and, at the same time, the selected date in the ToDate text box is cleared.

    Code block (ASP.NET, AJAX):

    <asp:TextBox ID="txtFrom" runat="server" ReadOnly="true"></asp:TextBox>
    <asp:ImageButton ID="imgBtnFrom" runat="server" ImageUrl="~/images/Cal20x20.png" Width="20" Height="20" ImageAlign="TextTop" />
    <asp:CalendarExtender ID="txtFrom_CalendarExtender" PopupButtonID="imgBtnFrom" runat="server" Enabled="True"
        OnClientDateSelectionChanged="checkDate" TargetControlID="txtFrom" Format="MMM d, yyyy">
    </asp:CalendarExtender>
    <asp:TextBox ID="txtTo" runat="server" ReadOnly="true"></asp:TextBox>
    <asp:ImageButton ID="imgBtnTo" runat="server" ImageUrl="~/images/Cal20x20.png" Width="20" Height="20" ImageAlign="TextTop" />
    <asp:CalendarExtender ID="txtTo_CalendarExtender" OnClientDateSelectionChanged="compareDateRange" PopupButtonID="imgBtnTo"
        runat="server" Enabled="True" TargetControlID="txtTo" Format="MMM d, yyyy">
    </asp:CalendarExtender>
    <asp:HiddenField ID="hdnFrom" runat="server" />
    <asp:HiddenField ID="hdnTo" runat="server" />

    C# code:

    protected void Page_Load(object sender, EventArgs e)
    {
        txtFrom.Text = string.Format("{0: MMM d, yyyy}", DateTime.Today);
        if (Page.IsPostBack)
        {
            if (!String.IsNullOrEmpty(hdnFrom.Value as string))
            {
                txtFrom.Text = hdnFrom.Value;
            }
            if (!String.IsNullOrEmpty(hdnTo.Value as string))
            {
                txtTo.Text = hdnTo.Value;
            }
        }
    }

    JavaScript code:

    <script type="text/javascript">
        function checkDate(sender, args) {
            document.getElementById('<%=txtTo.ClientID %>').value = "";
            if (sender._selectedDate < new Date()) {
                alert("You cannot select a day earlier than today!");
                sender._selectedDate = new Date(); // set the date back to the current date
                sender._textbox.set_Value(sender._selectedDate.format(sender._format));
                // assign the value to the hidden field
                document.getElementById('<%=hdnFrom.ClientID %>').value = sender._selectedDate.format(sender._format);
                // reset the to date to blank
                document.getElementById('<%=txtTo.ClientID %>').value = "";
            }
            else {
                document.getElementById('<%=hdnFrom.ClientID %>').value = sender._selectedDate.format(sender._format);
            }
        }

        function compareDateRange(sender, args) {
            var fromDateString = document.getElementById('<%=txtFrom.ClientID %>').value;
            var fromDate = new Date(fromDateString);
            if (sender._selectedDate < new Date()) {
                alert("You cannot select a Date earlier than today!");
                sender._selectedDate = "";
                sender._textbox.set_Value(sender._selectedDate)
            }
            if (sender._selectedDate <= fromDate) {
                alert("To Date must be Greater than From date.");
                sender._selectedDate = "";
                sender._textbox.set_Value(sender._selectedDate)
            }
            else {
                document.getElementById('<%=hdnTo.ClientID %>').value = sender._selectedDate.format(sender._format);
            }
        }
    </script>

    Error screen (hmmm :X): Now in ToDate, when you select a date earlier than today or a date less than FromDate, the ToDate calendar shows NaN for every date and ,0NaN for the year.

    Read the article

  • Synchronous communication using NSOperationQueue

    - by chip_munks
    I am new to Objective-C programming. I have created two threads, called add and display, using NSInvocationOperation and added them to an NSOperationQueue. I make the display thread run first and then the add thread. The display thread, after printing "Welcome to display", has to wait for the result printed by the add method, so I have used the waitUntilFinished method. Both operations are on the same queue. If I use waitUntilFinished for operations on the same queue, a deadlock may happen (from Apple's developer documentation). Is that so? To wait for a particular time interval there is a method called waitUntilDate:. But what if I need to wait like this: wait(min(100, dmax)), with dmax = 20? How do I wait on conditions like these? It would be very helpful if anyone can explain with an example.

    EDITED:

    threadss.h
    ------------
    #import <Foundation/Foundation.h>

    @interface threadss : NSObject
    {
        BOOL m_bRunThread;
        int a, b, c;
        NSOperationQueue* queue;
        NSInvocationOperation* operation;
        NSInvocationOperation* operation1;
        NSConditionLock* theConditionLock;
    }
    -(void)Thread;
    -(void)add;
    -(void)display;
    @end

    threadss.m
    ------------
    #import "threadss.h"

    @implementation threadss

    -(id)init
    {
        if (self = [super init]) {
            queue = [[NSOperationQueue alloc] init];
            operation = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(display) object:nil];
            operation1 = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(add) object:nil];
            theConditionLock = [[NSConditionLock alloc] init];
        }
        return self;
    }

    -(void)Thread
    {
        m_bRunThread = YES;
        //[operation addDependency:operation1];
        if (m_bRunThread) {
            [queue addOperation:operation];
        }
        //[operation addDependency:operation1];
        [queue addOperation:operation1];
        //[self performSelectorOnMainThread:@selector(display) withObject:nil waitUntilDone:YES];
        //NSLog(@"I'm going to do the asynchronous communication btwn the threads!!");
        //[self add];
        //[operation addDependency:self];
        sleep(1);
        [queue release];
        [operation release];
        //[operation1 release];
    }

    -(void)add
    {
        NSLog(@"Going to add a and b!!");
        a = 1;
        b = 2;
        c = a + b;
        NSLog(@"Finished adding!!");
    }

    -(void)display
    {
        NSLog(@"Into the display method");
        [operation1 waitUntilFinished];
        NSLog(@"The Result is:%d", c);
    }

    @end

    main.m
    -------
    #import <Foundation/Foundation.h>
    #import "threadss.h"

    int main (int argc, const char * argv[])
    {
        NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
        threadss* thread = [[threadss alloc] init];
        [thread Thread];
        [pool drain];
        return 0;
    }

    This is what I have tried with a sample program. Output:

    2011-06-03 19:40:47.898 threads_NSOperationQueue[3812:1503] Going to add a and b!!
    2011-06-03 19:40:47.898 threads_NSOperationQueue[3812:1303] Into the display method
    2011-06-03 19:40:47.902 threads_NSOperationQueue[3812:1503] Finished adding!!
    2011-06-03 19:40:47.904 threads_NSOperationQueue[3812:1303] The Result is:3

    Is this way of invoking the threads correct? 1. Will there be any deadlock condition? 2. How do I do wait(min(100,dmax)) where dmax = 50?

    Read the article

  • linked list problem (with insert)

    - by JohnWong
    The problem appears in the insert function that I wrote. Three conditions must work: I tested inserting between 1 and 2, between 2 and 3, and as the last element, and they worked. But inserting between 3 and 4 did not work: it only displays up to the newly added record and does not show the fourth element. Efficiency is not my concern here (not yet). Please guide me through this debugging process. Thank you very much.

    #include <iostream>
    #include <string>
    using namespace std;

    struct List // we create a structure called List
    {
        string name;
        string tele;
        List *nextAddr;
    };

    void populate(List *);
    void display(List *);
    void insert(List *);

    int main()
    {
        const int MAXINPUT = 3;
        char ans;
        List * data, * current, * point; // create two pointers
        data = new List;
        current = data;
        for (int i = 0; i < (MAXINPUT - 1); i++)
        {
            populate(current);
            current->nextAddr = new List;
            current = current->nextAddr;
        }
        // last record we want to do separately
        populate(current);
        current->nextAddr = NULL;

        cout << "The current list consists of the following data records: " << endl;
        display(data);

        // now ask whether user wants to insert new record or not
        cout << "Do you want to add a new record (Y/N)?";
        cin >> ans;
        if (ans == 'Y' || ans == 'y')
        {
            /* To insert b/w first and second, use point as parameter
               between second and third uses point->nextAddr
               between third and fourth uses point->nextAddr->nextAddr
               and insert as last element, uses current instead */
            point = data;
            insert(());
            display(data);
        }
        return 0;
    }

    void populate(List *data)
    {
        cout << "Enter a name: ";
        cin >> data->name;
        cout << "Enter a phone number: ";
        cin >> data->tele;
        return;
    }

    void display(List *content)
    {
        while (content != NULL)
        {
            cout << content->name << " " << content->tele;
            content = content->nextAddr;
            cout << endl; // we skip to next line
        }
        return;
    }

    void insert(List *last)
    {
        List * temp = last->nextAddr;   // save the next address to temp
        last->nextAddr = new List;      // now modify the address pointed to new allocation
        last = last->nextAddr;
        populate(last);
        last->nextAddr = temp;          // now link all three together, e.g. 1-NEW-2
        return;
    }

    Read the article

  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
    In the Windows Azure platform there are 3 storage services we can use to save our data in the cloud: Table, Blob and Queue. Before the Chinese New Year, Microsoft announced that Azure SDK 1.1 had been released and that it supports a new type of storage, Drive, which allows us to operate on NTFS files in the cloud. I will cover it in the coming few posts, but for now I would like to talk a bit about Table Storage.

    Concept of Table Storage Service

    The most common development scenario is to retrieve, create, update and remove data from a data store; normally we communicate with a database. When we attempt to move our application to the cloud, the most common requirement is therefore a storage service. Windows Azure provides a built-in service that allows us to store structured data, called the Windows Azure Table Storage Service. The data stored in the table service is a collection of entities, and the entities are similar to rows or records in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key and the timestamp as the identifier used to solve concurrency problems. Unlike a table in a database, the table service does not enforce a schema for tables, which means you can have 2 entities in the same table with different property sets.

    The partition key is used by the Azure OS for load balancing and for entity group transactions. As you know, in the cloud you will never know which machine is hosting your application and your data; it could be moved based on the transaction weight and the number of requests. If the Azure OS finds that there are many requests hitting your Book entities with the partition key "Novel", it will move them to another idle machine to increase performance. So when choosing the partition key for your entities you need to make sure it indicates the category or group information, so that the Azure OS can perform the load balancing as you wish.

    Consuming the Table

    Although the table service looks like a database, you cannot access it the way you are used to, with either ADO.NET or ODBC. The table service exposes itself through the ADO.NET Data Services protocol, which allows you to consume it in a RESTful style via HTTP requests. The Azure SDK provides a set of classes for us to connect to it. There are 2 classes we need: TableServiceContext and TableServiceEntity. The TableServiceContext inherits from DataServiceContext, which represents the runtime context of an ADO.NET data service. It provides 4 methods we mainly use:

    CreateQuery: creates an IQueryable instance for a given type of entity.
    AddObject: adds the specified entity to the Table Service.
    UpdateObject: updates an existing entity in the Table Service.
    DeleteObject: deletes an entity from the Table Service.

    Before you operate on the table service you need to provide valid account information. It's something like the connection string of a database, but with the account name and the account key you got when you created the storage service on the Windows Azure Development Portal. After getting the CloudStorageAccount you can create the CloudTableClient instance, which provides a set of methods for using the table service. A very useful method is CreateTableIfNotExist: it will create the table container for you if it does not exist.
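    As a rough sketch of that bootstrapping step (this is only an illustration under assumptions: it uses the SDK 1.1 Microsoft.WindowsAzure.StorageClient library, development storage instead of a real account, and the table name "Account" and helper class are made up):

    using Microsoft.WindowsAzure;                 // CloudStorageAccount
    using Microsoft.WindowsAzure.StorageClient;   // CloudTableClient

    public static class TableSetup
    {
        // Ensure the "Account" table container exists before any entity is written to it.
        public static CloudTableClient EnsureAccountTable()
        {
            // Development storage here; in the cloud this would come from the service configuration.
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            tableClient.CreateTableIfNotExist("Account");   // no-op if the table already exists
            return tableClient;
        }
    }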
    You can then operate on the entities in that table through the methods I mentioned above. Let me explain a bit more through an example; we always prefer code to sentences.

    Straightforward Access to the Table

    Here I would like to build a WCF service on the Windows Azure platform with, for now, just one requirement: it should allow the client to create an account entity in the table service. The WCF service has a method named Register that accepts an instance of the account the client wants to create. After performing some validation it adds the entity to the table service. So the first thing I did was create a Cloud Application in Visual Studio 2010 RC. (The Azure SDK 1.1 only supports VS2008 and VS2010 RC.) The solution should look like the one below. Then I added a configuration item for the storage account through the Settings section under the cloud project. (Double-click the Services file under the Roles folder and navigate to the Settings section.) This setting will be used to retrieve my storage account information. Since for now I am just in the development phase, I selected "UseDevelopmentStorage=true".

    Then I navigated to the WebRole.cs file under my WCF project. If you have read my previous posts you know that this file defines what happens when the application starts and terminates on the cloud. What I need to do when the application starts is set the configuration publisher to load my config file with the config name I specified, so the code looks like the snippet below. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class, AccountService. I added the service method Register with the parameters email and password; it returns a boolean value to indicate the result, which is very simple. At this moment, if I press F5 the application is deployed to my local development fabric and I can see that my service runs well in the browser.

    Let's implement the service method Register, which adds a new entity to the table service. As I said before, the entities you want to store in the table service must have 3 properties: partition key, row key and timestamp. You could create a class with these 3 properties yourself, but the Azure SDK provides a base class for that, named TableServiceEntity, in the Microsoft.WindowsAzure.StorageClient namespace. So what we need to do is simpler: create a class named Account, derive it from TableServiceEntity, and add my own properties: Email, Password, DateCreated and DateDeleted. The DateDeleted is a nullable date-time value to indicate whether this entity has been deleted, and when. Did you notice that I missed something here? Yes, it's the partition key and row key that I didn't assign. The TableServiceEntity base class defines 2 constructors: one is a parameter-less constructor, used to fill values into the properties from the table service when retrieving data; the other takes 2 parameters, partition key and row key. As I said above, the partition key may affect load balancing and the row key must be unique, so here I use the email as the partition key and the email plus a Guid as the row key. OK, now we have finished the entity class we need to store in the table service. The next step is to create a data access class to add it. The Azure SDK gives us a base class for this named TableServiceContext, as I mentioned above. So let's create a class to operate on the Account entities.
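    A rough sketch of the two classes just described might look like the following (my own illustration, not the post's original code; it assumes the SDK 1.1 StorageClient library and the ADO.NET Data Services client, and the row key format is one possible choice):

    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    // Entity: partition key = email, row key = email plus a Guid, as described above.
    public class Account : TableServiceEntity
    {
        public Account() { }   // parameter-less constructor, used when entities are materialized

        public Account(string email, string password)
            : base(email, email + "_" + Guid.NewGuid().ToString())
        {
            Email = email;
            Password = password;
            DateCreated = DateTime.UtcNow;
        }

        public string Email { get; set; }
        public string Password { get; set; }
        public DateTime DateCreated { get; set; }
        public DateTime? DateDeleted { get; set; }
    }

    // Data access class derived from TableServiceContext, as the post goes on to describe.
    public class AccountDataContext : TableServiceContext
    {
        public AccountDataContext(CloudStorageAccount account)
            : base(account.TableEndpoint.AbsoluteUri, account.Credentials)
        {
            // Make sure the "Account" table container exists before it is used.
            account.CreateCloudTableClient().CreateTableIfNotExist("Account");
        }

        public void Add(Account account)
        {
            AddObject("Account", account);   // AddObject/SaveChanges come from DataServiceContext
            SaveChanges();
        }
    }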
    The TableServiceContext needs the storage account information for its constructor: the combination of the storage service URI that we create on the Windows Azure platform and the relevant account name and key. The TableServiceContext uses this information to find the right address and to verify the account when operating on the storage entities. Hence in my AccountDataContext class I need to override this constructor and pass the storage account into it. All entities are saved in table storage in one or more tables, which we call "table containers". Before we operate on an entity we need to make sure that the table container has been created in the storage. There's a method we can use for that: CloudTableClient.CreateTableIfNotExist. So in the constructor I call it first, to make sure all other methods are invoked only after the table has been created. Notice that I passed the storage account endpoint URI and the credentials to specify where my storage is located and who I am. Another piece of advice: make your entity class name the same as the table name when you create the table. It will improve performance when you operate on it over the cloud, especially when querying.

    Since the Register WCF method adds a new account to the table service, here I create a corresponding method to add the account entity. Before implementing it, I should add a reference to System.Data.Services.Client to the project. This reference provides common methods of the ADO.NET Data Services client library which can be used with the Windows Azure Table Service. I use its AddObject method to create my account entity. Since the table service does not fully implement ADO.NET Data Services, there are some methods in System.Data.Services.Client that TableServiceContext doesn't support, such as AddLinks, etc. Then I implemented the service method to add the account entity through the AccountDataContext. You can see in the service implementation that I load the storage account information from my configuration file and create the account table entity from the parameters. Then I create the AccountDataContext. If it's the first time this method is invoked, the constructor of the AccountDataContext creates the table container for me. Then I use the Add method to add the account entity to the table.

    Next, let's create a fairly simple client application to test this service. I created a Windows console application and added a service reference to my WCF service. The metadata of the WCF service cannot be retrieved once it's deployed on Windows Azure, even though <serviceMetadata httpGetEnabled="true"/> has been set. If we need its metadata we can deploy it on the local development fabric first and then change the endpoint to the address in the cloud. In the client-side app.config file I pointed the endpoint to the local development fabric address, and then implemented the client to let me input an email and a password and invoke the WCF service to add my account. Let's run my application and see the result. Of course it should return TRUE to me, and in the local SQL Express I can see the data has been saved in the table.

    Summary

    In this post I explained more about the Windows Azure Table Storage Service. I also created a small application to demonstrate how to connect to it and consume it through the ADO.NET Data Services managed library provided with the Azure SDK. I only showed how to create an entity in the storage service.
    In the next post I would like to explain how to query the entities with conditions through LINQ. I also would like to refactor my AccountDataContext class to make it dynamic for any kind of entity.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization.

    The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

    double min = double.MaxValue;
    foreach(var item in collection)
    {
        double value = item.PerformComputation();
        min = System.Math.Min(min, value);
    }

    This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer. Let's look at what happens here:

    // Buggy code! Do not use!
    double min = double.MaxValue;
    Parallel.ForEach(collection, item =>
    {
        double value = item.PerformComputation();
        min = System.Math.Min(min, value);
    });

    This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, then element 2 sets the min variable, we'll detect a min value of 2 instead of 1. This can lead to wrong answers.

    Unfortunately, fixing this, with the Parallel.ForEach call we're using, would require adding locking. We would need to rewrite this like:

    // Safe, but slow
    double min = double.MaxValue;
    // Make a "lock" object
    object syncObject = new object();
    Parallel.ForEach(collection, item =>
    {
        double value = item.PerformComputation();
        lock(syncObject)
            min = System.Math.Min(min, value);
    });

    This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation.
    This leads to two observations I'll make: when parallelizing a routine, try to avoid locks; that being said, always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time.

    However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead, and helps the overall speed. In general, the Parallel class will only use one thread per PE in the system.

    Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design. We can parallelize our algorithm more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform it in a given order. We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results.

    This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.

    To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as:

    public static ParallelLoopResult ForEach<TSource, TLocal>(
        IEnumerable<TSource> source,
        Func<TLocal> localInit,
        Func<TSource, ParallelLoopState, TLocal, TLocal> body,
        Action<TLocal> localFinally
    )

    The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations.

    The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
    This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed.

    The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation.

    Now that I've explained how this works, let's look at the code:

    // Safe, and fast!
    double min = double.MaxValue;
    // Make a "lock" object
    object syncObject = new object();
    Parallel.ForEach(
        collection,
        // First, we provide a local state initialization delegate.
        () => double.MaxValue,
        // Next, we supply the body, which takes the original item, loop state,
        // and local state, and returns a new local state
        (item, loopState, localState) =>
        {
            double value = item.PerformComputation();
            return System.Math.Min(localState, value);
        },
        // Finally, we provide an Action<TLocal>, to "merge" results together
        localState =>
        {
            // This requires locking, but it's only once per used thread
            lock(syncObject)
                min = System.Math.Min(min, localState);
        }
    );

    Although this is a bit more complicated than the previous version, it is now both thread-safe and has minimal locking. This same approach can be used by Parallel.For, although now, it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results. Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since it does a sum operation on a long variable, which is possible via Interlocked.Add.

    By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
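    As a rough sketch of that Parallel.For<TLocal> plus Interlocked variant (this is not the MSDN sample itself; the input array and its size are invented for illustration):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class LocalStateSum
    {
        static void Main()
        {
            long[] data = new long[1000000];          // made-up input, just for illustration
            for (int i = 0; i < data.Length; i++) data[i] = i;

            long total = 0;

            Parallel.For<long>(0, data.Length,
                // Local state: each thread starts its own running sum at zero.
                () => 0L,
                // Body: add the current element to this thread's local sum; no locking needed here.
                (i, loopState, localSum) => localSum + data[i],
                // Merge: one atomic add per thread instead of one lock per element.
                localSum => Interlocked.Add(ref total, localSum));

            Console.WriteLine(total);
        }
    }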

    Read the article

  • SQL Server and Hyper-V Dynamic Memory - Part 1

    - by SQLOS Team
    SQL and Dynamic Memory Blog Post Series

    Hyper-V Dynamic Memory is a new feature in Windows Server 2008 R2 SP1 that allows the memory assigned to guest virtual machines to vary according to demand. Using this feature with SQL Server is supported, but how well does it work in an environment where available memory can vary dynamically, especially since SQL Server likes memory, and is not very eager to let go of it? The next three posts will look at this question in detail. In Part 1 Serdar Sutay, a program manager in the Windows Hyper-V team, introduces Dynamic Memory with an overview of the basic architecture, configuration and monitoring concepts. In subsequent parts we will look at SQL Server memory handling, and develop some guidelines on using SQL Server with Dynamic Memory.

    Part 1: Dynamic Memory Introduction

    In virtualized environments memory is often the bottleneck for reaching higher VM densities. In Windows Server 2008 R2 SP1, Hyper-V introduced a new feature, "Dynamic Memory", to improve VM densities on Hyper-V hosts. Dynamic Memory increases memory utilization in virtualized environments by enabling VM memory to be changed dynamically while the VM is running.

    This brings up the question of how to utilize this feature with SQL Server VMs, as SQL Server performance is very sensitive to the memory being used. In the next three posts we'll discuss the internals of Dynamic Memory, SQL Server memory management, and how to use Dynamic Memory with SQL Server VMs.

    Memory Utilization Efficiency in Virtualized Environments

    The primary reason memory is usually the bottleneck for higher VM densities is that users tend to be generous when assigning memory to their VMs. Here are some memory sizing practices we've heard from customers:

    - I assign 4 GB of memory to my VMs. I don't know if all of it is being used by the applications but no one complains.
    - I take the minimum system requirements and add 50% more.
    - I go with the recommendations provided by my software vendor.

    In reality, correctly sizing a virtual machine requires significant effort to monitor the memory usage of the applications. Since this is not done in most environments, VMs are usually over-provisioned in terms of memory. In other words, a SQL Server VM that is assigned 4 GB of memory may not need to use 4 GB.

    How does Dynamic Memory help?

    Dynamic Memory improves memory utilization by removing the requirement to determine the memory needs of an application. Hyper-V determines the memory needed by applications in the VM by evaluating the memory usage information in the guest with Dynamic Memory. VMs can start with a small amount of memory and can be assigned more memory dynamically based on the workload of the applications running inside.

    Overview of Dynamic Memory Concepts

    - Startup Memory: the starting amount of memory when Dynamic Memory is enabled for a VM. Dynamic Memory will make sure that this amount of memory is always assigned to the VM by default.
    - Maximum Memory: specifies the maximum amount of memory that a VM can grow to with Dynamic Memory.
    - Memory Demand: the amount determined by Dynamic Memory as the memory needed by the applications in the VM. In Windows Server 2008 R2 SP1, this is equal to the total amount of committed memory of the VM.
    - Memory Buffer: the amount of memory assigned to the VM in addition to its memory demand, to satisfy immediate memory requirements and file cache needs.

    Once Dynamic Memory is enabled for a VM, it will start with the "Startup Memory". After the boot process Dynamic Memory will determine the "Memory Demand" of the VM. Based on this memory demand it will determine the amount of "Memory Buffer" that needs to be assigned to the VM. Dynamic Memory will assign the total of "Memory Demand" and "Memory Buffer" to the VM, as long as this value is less than "Maximum Memory" and as long as physical memory is available on the host.

    What happens when there is not enough physical memory available on the host?

    Once there is not enough physical memory on the host to satisfy VM needs, Dynamic Memory will assign less memory than needed to the VMs, based on their importance. A concept known as "Memory Weight" is used to determine how much VMs should be penalized based on their needed amount of memory. "Memory Weight" is a configuration setting on the VM. It can be configured to be higher for VMs with high performance requirements. Under high memory pressure on the host, the "Memory Weight" of the VMs is evaluated in a relative manner, and the VMs with lower relative "Memory Weight" will be penalized more than the ones with higher "Memory Weight".

    Dynamic Memory Configuration

    Based on these concepts, "Startup Memory", "Maximum Memory", "Memory Buffer" and "Memory Weight" can be configured as shown below in Windows Server 2008 R2 SP1 Hyper-V Manager. Memory Demand is automatically calculated by Dynamic Memory once VMs start running.

    Dynamic Memory Monitoring

    In Windows Server 2008 R2 SP1, Hyper-V Manager displays the memory status of VMs in the following three columns:

    - Assigned Memory represents the current physical memory assigned to the VM. In regular conditions this will be equal to the sum of the "Memory Demand" and "Memory Buffer" assigned to the VM. When there is not enough memory on the host, this value can go below the Memory Demand determined for the VM.
    - Memory Demand displays the current "Memory Demand" determined for the VM.
    - Memory Status displays the current memory status of the VM. This column can show three values for a VM:
      - OK: the VM is assigned the total of the Memory Demand and Memory Buffer it needs.
      - Low: the VM is assigned all of the Memory Demand and a certain percentage of the Memory Buffer it needs.
      - Warning: the VM is assigned less memory than its Memory Demand. When VMs are running in this condition, it's likely that they will exhibit performance problems due to internal paging happening in the VM.

    So far so good! But how does it work with SQL Server?

    SQL Server is aggressive in terms of memory usage, for good reasons. This raises the question: how do SQL Server and Dynamic Memory work together? To understand the full story, we'll first need to understand how SQL Server memory management works. This will be covered in the second post in the "SQL and Dynamic Memory" series.
    Meanwhile, if you want to dive deeper into Dynamic Memory you can check the posts below from the Windows Virtualization Team Blog:

    http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx
    http://blogs.technet.com/virtualization/archive/2010/03/25/dynamic-memory-coming-to-hyper-v-part-2.aspx
    http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx
    http://blogs.technet.com/b/virtualization/archive/2010/04/21/dynamic-memory-coming-to-hyper-v-part-4.aspx
    http://blogs.technet.com/b/virtualization/archive/2010/05/20/dynamic-memory-coming-to-hyper-v-part-5.aspx
    http://blogs.technet.com/b/virtualization/archive/2010/07/12/dynamic-memory-coming-to-hyper-v-part-6.aspx

    - Serdar Sutay

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article


  • CodePlex Daily Summary for Saturday, March 10, 2012

    CodePlex Daily Summary for Saturday, March 10, 2012Popular ReleasesPlayer Framework by Microsoft: Player Framework for Windows 8 Metro (BETA): Player Framework for HTML/JavaScript and XAML/C# Metro Style Applications.WPF Application Framework (WAF): WAF for .NET 4.5 (Experimental): Version: 2.5.0.440 (Experimental): This is an experimental release! It can be used to investigate the new .NET Framework 4.5 features. The ideas shown in this release might come in a future release (after 2.5) of the WPF Application Framework (WAF). More information can be found in this dicussion post. Requirements .NET Framework 4.5 (The package contains a solution file for Visual Studio 11) The unit test projects require Visual Studio 11 Professional Changelog All: Upgrade all proje...SSH.NET Library: 2012.3.9: There are still few outstanding issues I wanted to include in this release but since its been a while and there are few new features already I decided to create a new release now. New Features Add SOCKS4, SOCKS5 and HTTP Proxy support when connecting to remote server. For silverlight only IP address can be used for server address when using proxy. Add dynamic port forwarding support using ForwardedPortDynamic class. Add new ShellStream class to work with SSH Shell. Add supports for mu...Test Case Import Utilities for Visual Studio 2010 and Visual Studio 11 Beta: V1.2 RTM: This release (V1.2 RTM) includes: Support for connecting to Hosted Team Foundation Server Preview. Support for connecting to Team Foundation Server 11 Beta. Fix to issue with read-only attribute being set for LinksMapping-ReportFile which may have led to problems when saving the report file. Fix to issue with “related links” not being set properly in certain conditions. Fix to ensure that tool works fine when the Excel file contained rich text data. Note: Data is still imported in pl...Audio Pitch & Shift: Audio Pitch And Shift 3.5.0: Modules (mod, xm, it, etc..) supportcallisto: callisto 2.0.19: BUG FIX: Autorun.load() function in scripting now has sandboxed path (Thanks Mikey!) BUG FIX: UserObject.Name property now allows full 20 byte string replacements. FEATURE REQUEST: File.* script functions now allow file extensions.EntitiesToDTOs - Entity Framework DTO Generator: EntitiesToDTOs.v2.1: Changelog Fixed template file access issue on Win7. Fix on configuration load when target project was not found and "Use project default namespace" was checked. Minor fix on loading latest configuration at startup. Minor fix in VisualStudioHelper class. DTO's properties accessors are now in one line. Improvements in PropertyHelper to get a cleaner and more performant code. Added Website project type as a not supported project type. Using Error List pane from VS IDE to show Enti...DotNetNuke® Community Edition CMS: 06.01.04: Major Highlights Fixed issue with loading the splash page skin in the login, privacy and terms of use pages Fixed issue when searching for words with special characters in them Fixed redirection issue when the user does not have permissions to access a resource Fixed issue when clearing the cache using the ClearHostCache() function Fixed issue when displaying the site structure in the link to page feature Fixed issue when inline editing the title of modules Fixed issue with ...Mayhem: Mayhem Developer Preview: This is the developer preview of Mayhem. 
Enjoy!Team Foundation Server Process Template Customization Guidance: v1 - For Visual Studio 11: Welcome to the BETA release of the Team Foundation Server Process Template Customization preview. As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has not been through an independent technical review Documentation has not been rev...Magelia WebStore Open-source Ecommerce software: Magelia WebStore 1.2: Medium trust compliant lot of small change for medium trust compliance full refactoring of user management refactoring of Client Refactoring of user management Magelia.WebStore.Client no longer reference Magelia.WebStore.Services.Contract Refactoring page category multi parent category added copy category feature added Refactoring page catalog copy catalog feature added variant management improvement ability to define a default variant for a variable product ability to ord...PDFsharp - A .NET library for processing PDF: PDFsharp and MigraDoc Foundation 1.32: PDFsharp and MigraDoc Foundation 1.32 is a stable version that fixes a few bugs that were found with version 1.31. Version 1.32 includes solutions for Visual Studio 2010 only (but it should be possible to add the project files to existing solutions for VS 2005 or VS 2008). Users of VS 2005 or VS 2008 can still download version 1.31 with the solutions for those versions that allow them to easily try the samples that are included. While it may create smaller PDF files than version 1.30 because...Terminals: Version 2.0 - Release: Changes since version 1.9a:New art works New usability in Organize favorites window Improved usability of imports/exports and scans Large number of fixes Improvements in single instance mode Comparing November beta 4, this corrects: New application icons Doesn't show Logon error codes Fixed command line arguments exception for single instance mode Fixed detaching of tabs improved usability in detached window Fixed option settings for Capture manager Fixed system tray noti...MFCMAPI: March 2012 Release: Build: 15.0.0.1032 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeTortoiseHg: TortoiseHg 2.3.1: bugfix releaseSimple Injector: Simple Injector v1.4.1: This release adds two small improvements to the SimpleInjector.Extensions.dll. No changes have been made to the core library. New features and improvements in this release for the SimpleInjector.Extensions.dll The RegisterManyForOpenGeneric extension methods now accept non-generic decorator, as long as they implement the given open generic service type. GetTypesToRegister methods added to the OpenGenericBatchRegistrationExtensions class which allows to customize the behavior. Note that the...CommonLibrary: Code: CodeVidCoder: 1.3.1: Updated HandBrake core to 0.9.6 release (svn 4472). Removed erroneous "None" container choice. Change some logic and help text to stop assuming you have to pick the VIDEO_TS folder for a DVD scan. 
This should make previewing DVD titles on the Queue Multiple Titles window possible when you've picked the root DVD directory.Google Books Downloader for Windows: Google Books Downloader: Google Books Downloader 1.8ExtAspNet: ExtAspNet v3.1.0: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-03-04 v3.1.0 -??Hidden???????(〓?〓)。 -?PageManager??...New ProjectsAres Backup: Ares Backup is a Backup software which can save bytediffs and provides several storage plugins.BackItUp: Backup-Tool für Visual Studio Projektebinbin domain: binbin domain Blexus Service Plattform: Some cool stuff about Wcf Services. - can communicate files - can communicate xaml objects (generate dynamically Gui) CardPlay - a Solitaire Framework for .Net: CardPlay is a C# framework for developing Solitaire card games. The solution includes a sample WPF client along with over 100 games.Cloud Files Upload: Windows application to script cloud file uploads.Code First API Library, Scaffolding & Guidance for Coded UI Tests: Code first Coded UI Tests for web apps. Library, Scaffolding and Guidance.CPEBook by FMUG & TPAY: CPEBook by MUG & TPAY Projet dot NET CPEBookDot Net Application String Resources Viewer: Dot Net Application String Resources ViewerFilter for SharePoint Web Settings Page: This solution show a simple way to integrate a filter box by using jquery, a global farm feature and a simple delegate control for AdditionalPageHead.Google Books Downloader for Windows: Save Google books in PDF, JPEG or PNG format.GUIToolkit: C++ Windowless GUI,DirectUIHarvest Sports: Harvest SportsInfoPath Analyzer: InfoPath Analyzer makes InfoPath form development and troubleshooting much easier. You're easy to find the relationship between controls and data fields, search data fields or controls by name, edit InfoPath inner html directly.Kinectsignlanguage project: This project will help kinect be used for sign language to speech so that sign language people can be understood while talking to important people. LotteryVote: ????manager123: Trying out CodePlexMcRegister: McRegister is an asp.net mvc 3 razor website that enables you to register users on your minecraft server it works in conjunction with a minecraft mod called EasyAuth.MetroTipi: HelloTipi Sous l'interface Metro de Windows 8Microsoft AppFactory: AppFactory is a powerful data-driven build system for Windows Phone (and soon Windows 8) projects. Its purpose is to help developers start with template projects and turn them into suites of applications.MiniStock: MiniStock is an experimental reference architecture for scalable cloud-based architectures. Implemented in .net.online book shopping: online book shoppingScopa: Carousel Team Scopa per WP7 XNA testtom03092012tfs01: testtom03092012tfs01UsingLib: A library of automatically removed utilities: 1.Changing cursor to hourglass in Windows Forms 2.Logging It's developed in C#WHMCS Library: WHMCS is an all-in-one client management, billing & support solution for online businesses. Handling everything from signup to termination, WHMCS is a powerful business automation tool that puts you firmly in control. 
The WHMCS Library is a .NET wrapper for the WHMCS API. It is written entirely in C#, but it is easy to port to VB.NET; coming from a VB.NET background, we tried hard to make sure porting would be simple for VB.NET community members. ZoneEditService: Windows service to update ZoneEdit for dynamic DNS.

    Read the article

  • Customize the Windows Media Center Start Menu with Media Center Studio

    - by DigitalGeekery
    Do you ever wish you could change the WMC start menu? Maybe move some of the tiles and strips around to different locations, add new ones, or eliminate some altogether? Today we look at how to do it using Media Center Studio. Download and install Media Center Studio. (Download link below) You’ll also want to make sure you have Windows Media Center closed before running Media Center Studio. Many of the actions cannot be performed with Media Center open. Once installed, you can open Media Center Studio from the Windows Start Menu. When you first open Media Center Studio you’ll be on the Themes tab. Click on the Start Menu tab. It should be noted that Media Center Studio is a Beta application, and it did crash on us a few times, so it’s a good idea to save your work frequently. You can save your changes by selecting Save on the Home tab, or by clicking the small disk icon at the top left. We also found that that trying to launch Media Center from the Start Media Center button on the application ribbon typically didn’t work. Opening Windows Media Center from the Windows Start Menu is preferred.   When you’re on the Start Menu tab you will see the Windows Media Center menu strips and tiles. Click the arrows located at the right, left, top, and bottom of the screen to scroll through the various menu strips.   Hiding and Removing Tiles and Menu Strips. If there is an entire menu strip that you never use and would like to remove from Media Center, simply uncheck the box to the left of the the title above that menu strip. If you’d like to hide individual tiles, uncheck the box next to the name of the individual tile. Renaming Tiles and Strips To rename a tile or menu strip, click on the small notepad icon next to the title. Note: If you do not see a small notepad icon next to the title, then the title is not editable. This applies to many of the “Promo” tiles. The title will turn into a text input box so that you can edit the name. Click away from the text box when finished. Here we will change the title of the default Movie strip to “Flicks.” Change the Default Tile and Menu Strip The Default menu strip is the strip that is highlighted, or on focus, when you open Media Center.   To change the default strip, simply click once on another strip to highlight it, and then save your work. In our example, I’m going to make our newly renamed “Flicks” strip the default.   Each menu strip has a default tile. This is the tile that is active, or on focus, when you select the menu strip. To change the default tile on a strip, click once on the tile. You will see it outlined in light blue. Now just simply save your changes. In our example below, we’ve changed the default tile on the TV strip to “guide.”   Moving Tiles and Menu Strips You can move an entire Menu Strip up or down on the screen. When you hover your mouse over the a menu strip, you will see up and down arrows appear to the right and left of the title. Click on the arrows to move the strip up or down.   You will see the menu strip appear in it’s new position.   To move a tile to a new menu strip, click and drag the tile you’d like to move. When you begin to drag the tile, green plus (+) signs will appear in between the tiles. Drag and drop the tile onto to any of these green plus signs to move it to that location. When you’ve dragged the tile over an acceptable position, you’ll see the  red “Move” label next to your cursor turn to a blue “Move to” label. Now you can drop the tile into position. You’ll see the tile located in it’s new position.   
Adding a New Custom Menu Strip Click on the Start Menu tab and then select the Menu Strip button.   You will see a new Custom Menu strip appear on your Start Menu with the default name of Custom menu. You can change the name by clicking on the notepad icon just as we did earlier. For our example, we’ll change the name of the new strip to Add-ins. To add a new tile, click on Entry Points at the lower left of the application window. This will reveal all of your available Entry Points that can be added to the Media Center Menu. You should see the built-in Media Center Games and any Media Center Plug-ins you have added to your system. You can then drag and drop any of the Entry Points onto any of the Menu Strips. Below we’ve added Media Browser to our custom Add-ins menu strip. You can also add additional applications to launch directly from Media Center. Click on the Application button on the Start Menu tab. Note: Many applications may not work with your remote, but with keyboard and mouse only.    Type in a title which will appear under the tile in Media Center, and then type the path to the application. In our example, we will add Internet Explorer 8. Note: Be sure to add the actual path to the application and not just a link on the desktop. Click any of the check boxes to select any options under Required Capabilities. You can also browse to choose an image if you don’t care for the image that appears automatically.   Next, you can select keyboard strokes to press to exit the application and return to Media Center. Click the green plus (+) button. When prompted, press a key you’ll use to close the program. Repeat the process if you’d also like to select a keystroke to kill the program.   You’ll see your button programs listed below. When you’re finished, save your work and close out of Media Center Studio.   Now your new program entry point will appear in the Entry Points section. Drag the icon to the desired position on the Start Menu and save again before exiting Media Center Studio. When you open Media Center you will see your new application on the start menu. Click the tile to open the application just as you would any other tile. The application will open and minimize Media Center. When you press the key you choose to close the program, Windows Media Center will automatically be restored. Note: You can also exit the application through normal methods by clicking the red “X” or File > Exit. Conclusion Media Center Studio is a Beta application which the developer freely admits still has some bugs. Despite it’s flaws Media Center Studio is a powerful tool, and when it comes to customizing your Media Center start menu, it’s pretty much the only game in town. It works with both Vista and Windows 7, and according to the developer, has not been officially tested with extenders. Media Center Studio can also be used to add custom themes to Windows 7 Media Center and we’ll be covering that in a future article. Looking for more ways to customize your Media Center experience? Be sure to check out our earlier posts on Media Browser, as well as how to add Hulu, Boxee, and weather conditions your Windows 7 Media Center. 
Download Media Center Studio

    Read the article

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
    Introduction This will be the first of a number of posts on ASP.NET extensibility. At this moment I don’t know exactly how many will be and I only know a couple of subjects that I want to talk about, so more will come in the next days. I have the sensation that the providers offered by ASP.NET are not widely know, although everyone uses, for example, sessions, they may not be aware of the extensibility points that Microsoft included. This post won’t go into details of how to configure and extend each of the providers, but will hopefully give some pointers on that direction. Canonical These are the most widely known and used providers, coming from ASP.NET 1, chances are, you have used them already. Good support for invoking client side, either from a .NET application or from JavaScript. Lots of server-side controls use them, such as the Login control for example. Membership The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations, one that uses a SQL Server database and another that uses the Active Directory. The base class is Membership and new providers are registered on the membership section on the Web.config file, as well as parameters for specifying minimum password lengths, complexities, maximum age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine. 1: <membership defaultProvider="MyProvider"> 2: <providers> 3: <add name="MyProvider" type="MyClass, MyAssembly"/> 4: </providers> 5: </membership> Role The Role provider assigns roles to authenticated users. The base class is Role and there are three out of the box implementations: XML-based, SQL Server and Windows-based. Also registered on Web.config through the roleManager section, where you can also say if your roles should be cached on a cookie. If you want your roles to come from a different place, implement a custom provider. 1: <roleManager defaultProvider="MyProvider"> 2: <providers> 3: <add name="MyProvider" type="MyClass, MyAssembly" /> 4: </providers> 5: </roleManager> Profile The Profile provider allows defining a set of properties that will be tied and made available to authenticated or even anonymous ones, which must be tracked by using anonymous authentication. The base class is Profile and the only included implementation stores these settings in a SQL Server database. Configured through profile section, where you also specify the properties to make available, a custom provider would allow storing these properties in different locations. 1: <profile defaultProvider="MyProvider"> 2: <providers> 3: <add name="MyProvider" type="MyClass, MyAssembly"/> 4: </providers> 5: </profile> Basic OK, I didn’t know what to call these, so Basic is probably as good as a name as anything else. Not supported client-side (doesn’t even make sense). Session The Session provider allows storing data tied to the current “session”, which is normally created when a user first accesses the site, even when it is not yet authenticated, and remains all the way. 
The base class and only included implementation is SessionStateStoreProviderBase and it is capable of storing data in one of three locations: In the process memory (default, not suitable for web farms or increased reliability); A SQL Server database (best for reliability and clustering); The ASP.NET State Service, which is a Windows Service that is installed with the .NET Framework (ok for clustering). The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations – think for example of a distributed cache. 1: <sessionState customProvider=”MyProvider”> 2: <providers> 3: <add name=”MyProvider” type=”MyClass, MyAssembly” /> 4: </providers> 5: </sessionState> Resource A not so known provider, allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from some place else, such as a database. The base class is ResourceProviderFactory and there’s only one internal implementation which uses these RESX files. Configuration is through the globalization section. 1: <globalization resourceProviderFactoryType="MyClass, MyAssembly" /> Health Monitoring Health Monitoring is also probably not so well known, and actually not a good name for it. First, in order to understand what it does, you have to know that ASP.NET fires “events” at specific times and when specific things happen, such as when logging in, an exception is raised. These are not user interface events and you can create your own and fire them, nothing will happen, but the Health Monitoring provider will detect it. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out of the box implementations include sending mails, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 Trace infrastructure or the debugger Trace. Its configuration is achieved by the healthMonitoring section and a reason for implementing a custom provider would be, for example, locking down a web application in the event of a significant number of failed login attempts occurring in a small period of time. 1: <healthMonitoring> 2: <providers> 3: <add name="MyProvider" type="MyClass, MyAssembly"/> 4: </providers> 5: </healthMonitoring> Sitemap The Sitemap provider allows defining the site’s navigation structure and associated required permissions for each node, in a tree-like fashion. Usually this is statically defined, and the included provider allows it, by supplying this structure in a Web.sitemap XML file. The base class is SiteMapProvider and you can extend it in order to supply you own source for the site’s structure, which may even be dynamic. Its configuration must be done through the siteMap section. 1: <siteMap defaultProvider="MyProvider"> 2: <providers><add name="MyProvider" type="MyClass, MyAssembly" /> 3: </providers> 4: </siteMap> Web Part Personalization Web Parts are better known by SharePoint users, but since ASP.NET 2.0 they are included in the core Framework. 
Web Parts are server-side controls that offer certain possibilities of configuration by clients visiting the page where they are located. The infrastructure handles this configuration per user or globally for all users and this provider is responsible for just that. The base class is PersonalizationProvider and the only included implementation stores settings on SQL Server. Add new providers through the personalization section. 1: <webParts> 2: <personalization defaultProvider="MyProvider"> 3: <providers> 4: <add name="MyProvider" type="MyClass, MyAssembly"/> 5: </providers> 6: </personalization> 7: </webParts> Build The Build provider is responsible for compiling whatever files are present on your web folder. There’s a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML Schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser) and so on. You would write a build provider if you wanted to generate code from any kind of non-code file so that you have strong typing at development time. Configuration goes on the buildProviders section and it is per extension. 1: <buildProviders> 2: <add extension=".ext" type="MyClass, MyAssembly” /> 3: </buildProviders> New in ASP.NET 4 Not exactly new since they exist since 2010, but in ASP.NET terms, still new. Output Cache The Output Cache for ASPX pages and ASCX user controls is now extensible, through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example, in a distributed fashion. The base class is OutputCacheProvider and the only implementation is private. Configuration goes on the outputCache section and on each page and web user control you can choose the provider you want to use. 1: <caching> 2: <outputCache defaultProvider="MyProvider"> 3: <providers> 4: <add name="MyProvider" type="MyClass, MyAssembly"/> 5: </providers> 6: </outputCache> 7: </caching> Request Validation A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is the introduction of extensible request validation, by means of a Request Validation provider. This means we are not limited to either enabling or disabling event validation for all pages or for a specific page, but we now have fine control over each of the elements of the request, including cookies, headers, query string and form values. The base provider class is RequestValidator and the configuration goes on the httpRuntime section. 1: <httpRuntime requestValidationType="MyClass, MyAssembly" /> Browser Capabilities The Browser Capabilities provider is new in ASP.NET 4, although the concept exists from ASP.NET 2. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .Browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register in on the browserCaps section. 1: <browserCaps provider="MyClass, MyAssembly" /> Encoder The Encoder provider is responsible for encoding every string that is sent to the browser on a page or header. This includes for example converting special characters for their standard codes and is implemented by the base class HttpEncoder. 
Another implementation takes care of Anti Cross Site Scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to add some additional processing to these strings. The configuration will go on the httpRuntime section. 1: <httpRuntime encoderType="MyClass, MyAssembly" /> Conclusion That’s about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest on this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!
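To make the newer extensibility points a little more concrete, here is a minimal sketch of a custom Request Validation provider of the kind described above. RequestValidator and IsValidRequestString are the real ASP.NET 4 types; the namespace, class name and form-field name (MyApp, AllowJsonValidator, jsonPayload) are invented purely for illustration, and the policy shown is only an example, not a recommendation.

using System.Web;
using System.Web.Util;

namespace MyApp
{
    // Hypothetical validator: relax validation for one named form field only,
    // and defer to the built-in checks for everything else.
    public class AllowJsonValidator : RequestValidator
    {
        protected override bool IsValidRequestString(
            HttpContext context,
            string value,
            RequestValidationSource requestValidationSource,
            string collectionKey,
            out int validationFailureIndex)
        {
            if (requestValidationSource == RequestValidationSource.Form &&
                collectionKey == "jsonPayload")   // invented field name
            {
                validationFailureIndex = -1;      // nothing to report
                return true;
            }

            return base.IsValidRequestString(
                context, value, requestValidationSource,
                collectionKey, out validationFailureIndex);
        }
    }
}

It would then be registered exactly as in the httpRuntime example above: <httpRuntime requestValidationType="MyApp.AllowJsonValidator, MyApp" />.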

    Read the article

  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by setting all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2, there are 28 kinds of activities, which fall into three categories: Control activities, OWB specific activities and Utility activities. For more information about Process Flow activities, please refer to OWB online doc. Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution an OWB mapping in Process Flow and the FTP activity allows an interaction between the local host and a remote FTP server. Besides those activities for specific purposes, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take an amazing tour of using the User Defined activity. Let's start. Enable execution of User Defined activity Let's start this section from creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of activity USER_DEFINED unchanged except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in /tmp directory. Here is the content of /tmp/test.sh (this article is demonstrating a scenario in Linux system, and /tmp/test.sh is a Bash shell script): echo Hello World! > /tmp/test.txt Note: don't forget to grant the execution privilege on /tmp/test.sh to OS Oracle user. For simplicity, we just use the following command. chmod +x /tmp/test.sh OK, it's so simple that we’ve almost done it. Now deploy the Process Flow and run it. For a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". See below. That's because, by default, the User Defined activity is DISABLED. Configuration about this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties: property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISBALED. Where NATIVE_JAVA uses the Java 'Runtime.exec' interface, SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner which is executed by the default operating system user configured by the DBA. DISABLED prevents execution via these operators. We enable the execution of User Defined activity by setting: property.RuntimePlatform.0.NativeExecution.Shell.security_constraint= NATIVE_JAVA Restart the Control Center service for the change of setting to take effect. cd <ORACLE_HOME>/owb/rtp/sql sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql sqlplus OWBSYS/<password of OWBSYS> @start_service.sql And then run the Process Flow again. We will see that the Process Flow completes successfully. The execution of /tmp/test.sh successfully generated a file /tmp/test.txt, containing the line Hello World!. Pass parameters to User Defined Activity The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB nor does it give any meaningful results back to OWB. That's to say, it lacks interaction. Maybe, sometimes such a Process Flow can fulfill the business requirement. 
But for most of the time, we need to get the User Defined activity executed according to some information prior to that step. In this section, we will see how to pass parameters to the User Defined activity and pass them into the to-be-executed shell script. First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character on the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc,' 'def,' and 'ghi' you can use the following equivalent: ?abc?def?ghi? or !abc!def!ghi! or |abc|def|ghi| If the token character or '\' needs to be included as part of the parameter, then it must be preceded with '\'. For example '\\'. If '\' is the token character, then '/' becomes the escape character. Let's configure the PARAMETER_LIST parameter as below: And modify the shell script /tmp/test.sh as below: echo $1 is saying hello to $2! > /tmp/test.txt Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line: Bob is saying hello to Alice! In the example above, the parameters passed into the shell script are static. This case is not so useful because: instead of passing parameters, we can directly write the value of the parameters in the shell script. To make the case more meaningful, we can pass two dynamic parameters, that are obtained from the previous activity, to the shell script. Prepare the Process Flow as below: The Mapping activity MAPPING_1 has two output parameters: FROM_USER, TO_USER. The User Defined activity has two input parameters: FROM_USER, TO_USER. All the four parameters are of String type. Additionally, the Process Flow has two string variables: VARIABLE_FOR_FROM_USER, VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets value from output parameter FROM_USER of MAPPING_1. We achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. See the two figures below. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets value from output parameter TO_USER of MAPPING_1. Also, we need to change the PARAMETER_LIST of the User Defined activity like below: Now, the shell script is getting input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependees then run the Process Flow. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A! 'USER B' and 'USER A' are two outputs of the Mapping execution. Write the shell script within Oracle Warehouse Builder In the previous section, the shell script is located in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring these three parameters of a User Defined activity properly: COMMAND: Set the path of interpreter, by which the shell script will be interpreted. PARAMETER_LIST: Set it blank. SCRIPT: Enter the shell script content. Note that in Linux the shell script content is passed into the interpreter as standard input at runtime. About how to actually pass parameters to the shell script, we can utilize variable substitutions. 
As in the following figure, ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity. So will the ${TO_USER} symbol. Besides the custom substitution variables, OWB also provide some system pre-defined substitution variables. You can refer to the online document for that. Deploy the Process Flow and run it. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A! Leverage the return value of User Defined activity All of the previous sections are connecting the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results? 1.  The simplest way is to add three simple-conditioned out-going transitions for the User Defined activity just like the figure below. In the figure, to simplify the scenario, we connect the User Defined activity to three End activities. Basically, if the shell script ends successfully, the whole Process Flow will end at END_SUCCESS, otherwise, the whole Process Flow will end at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic. 2.  Or we can utilize complex conditions to work with different results of the User Defined activity. Previously, in our script, we only have this line: echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt We can add more logic in it and return different values accordingly. echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt if CONDITION_1 ; then ...... exit 0 fi if CONDITION_2 ; then ...... exit 2 fi if CONDITION_3 ; then ...... exit 3 fi After that we can leverage the result by checking RESULT_CODE in condition expression of those out-going transitions. Let's suppose that we have the Process Flow as the following graph (SUB_PROCESS_n stands for more different further processes): We can set complex condition for the transition from USER_DEFINED to SUB_PROCESS_1 like this: Other transitions can be set in the same way. Note that, in our shell script, we return 0, 2 and 3, but not 1. As in Linux system, if the shell script comes across a system error like IO error, the return value will be 1. We can explicitly handle such a return value. Summary Let's summarize what has been discussed in this article: How to create a Process Flow with a User Defined activity in it How to pass parameters from the prior activity to the User Defined activity and finally into the shell script How to write the shell script within Oracle Warehouse Builder How to do variable substitutions How to let the User Defined activity return different values and in what way can we leverage
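As a side note on the PARAMETER_LIST convention described earlier (the first character defines the token and the string must end with that token), here is a tiny illustrative sketch of how such a value decomposes. This is not OWB code and OWB does the parsing itself; the C# below, with '\' escape handling deliberately omitted, is just a way to see how ?Bob?Alice? yields the two values passed to the script.

using System;

class ParameterListDemo
{
    // Split a token-delimited list such as "?Bob?Alice?" into its values.
    // The first character is taken as the separator token; escapes are ignored here.
    static string[] SplitParameterList(string parameterList)
    {
        if (string.IsNullOrEmpty(parameterList))
            return new string[0];

        char token = parameterList[0];                  // e.g. '?'
        return parameterList.Trim(token).Split(token);  // "?Bob?Alice?" -> { "Bob", "Alice" }
    }

    static void Main()
    {
        foreach (string value in SplitParameterList("?Bob?Alice?"))
            Console.WriteLine(value);                   // prints Bob, then Alice
    }
}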

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Spatial Database Definition and Research Documents Here is the definition from Wikipedia about spatial database : A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. Select Only Date Part From DateTime – Best Practice A very common question which I receive is how to only get Date or Time part from datetime value. In this blog post I explain the same in very simple words. T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table I have received few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was is this can be done using CTE? Absolutely! What about Performance? It is identical! Please refer above mentioned article for the history of paging. SQL SERVER – Cannot resolve collation conflict for equal to operation One of the very first error I ever encountered in my career was to resolve this conflict. I have blogged about it and I have realized that many others like me who are facing this error. LEN and DATALENGTH of NULL Simple Example Here is the question for you what is the LEN of NULL value? Well it is very easy – just read the blog. Recovery Models and Selection Very simple and easy explanation of the Database Backup Recovery Model and how to select the best option for you. Explanation SQL SERVER Hash Join Hash join gives best performance when two more join tables are joined and at-least one of them have no index or is not sorted. It is also expected that smaller of the either of table can be read in memory completely (though not necessary). Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns NorthWind Database or AdventureWorks Database – Samples Databases In this blog post we learn how to install Northwind database. I also shared the source where one can download this database as that is used in many examples on MSDN help files. sp_HelpText for sp_HelpText – Puzzle A simple quick puzzle – do you know the answer of it? If not, go ahead and read the blog. 2008 SQL SERVER – 2008 – Step By Step Installation Guide With Images When SQL Server 2008 was newly introduced lots of people had no clue how to install SQL Server 2008 and the amount of the question which I used to receive were so much. I wrote this blog post with the spirit that this will help all the newbies to install SQL Server 2008 with the help of images. Still today this blog post has been bible for all of the people who are confused with SQL Server installation. Inline Variable Assignment I loved this feature. I have always wanted this feature to be present in SQL Server. The last time when I met developers from Microsoft SQL Server, I had talked about this feature. I think this feature saves some time but make the code more readable. 
Introduction to Policy Management – Enforcing Rules on SQL Server If our company policy is to create all the Stored Procedure with prefix ‘usp’ that developers should be just prevented to create Stored Procedure with any other prefix. Let us see a small tutorial how to create conditions and policy which will prevent any future SP to be created with any other prefix. 2009 Performance Counters from System Views – By Kevin Mckenna Many of you are not aware of this fact that access to performance information is readily available in SQL Server and that too without querying performance counters using a custom application or via perfmon. Till now, this fact has remained undisclosed but through this post I would like to explain you can easily access SQL Server performance counter information. Without putting much effort you will come across the system viewsys.dm_os_performance_counters. As the name suggests, this provides you easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via tsql. Customize Toolbar – Remove Debug Button from Toolbar I was fond of SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The button of the debugger is similar to a play button and is used to run debugging commands of Visual Studio. Because of this reason, it gets very much infuriating for developers when they are developing on both – Visual Studio and SSMS. Let us now see how we can remove debugging button from SQL Server Management Studio. Effect of Normalization on Index and Performance A very interesting conversation which started from twitter. If you want to read one link this is the link I encourage you to read it. SSMS Feature – Multi-server Queries Using SQL Server Management Studio (SSMS) DBAs can now query multiple servers from one window. It is quite common for DBAs with large amount of servers to maintain and gather information from multiple SQL Servers and create report. This feature is a blessing for the DBAs, as they can now assemble all the information instantaneously without going anywhere. Query Optimizer Hint ROBUST PLAN – Question to You “ROBUST PLAN” is a kind of query hint which works quite differently than other hints. It does not improve join or force any indexes to use; it just makes sure that a query does not crash due to over the limit size of row. Let me elaborate upon it in the blog post. 2010 Do you really know the difference between various date functions available in SQL Server 2012? Here is a three part story where we explored the same with examples: Fastest Way to Restore the Database Difference Between DATETIME and DATETIME2 Difference Between DATETIME and DATETIME2 – WITH GETDATE Shrinking NDF and MDF Files – Readers’ Opinion Shrinking Database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database Operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. 
2011 Solution – Puzzle – SELECT * vs SELECT COUNT(*) This is very interesting question and I am very confident that not every one knows the answer to this question. Let me ask you again – Which will be faster SELECT* or SELECT COUNT (*) or do you think this is apples and oranges comparison. 2012 Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage In SQL Server 2012 there are a few enhancements with regards to SQL Server Resource Governor. One of the enhancement is how the resources are allocated. Let me explain you with examples. Let us understand the entire discussion with the help of three different examples. Finding Size of a Columnstore Index Using DMVs One of the very common question I often see is need of the list of columnstore index along with their size and corresponding table name. I quickly re-wrote a script using DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there will be advanced script to retrieve details related to components associated with the columnstore index. However, I believe following script is sufficient to start getting an idea of columnstore index size. Developer Training Resources and Summary Roundup Developer Training - Importance and Significance - Part 1 In this part we discussed the importance of training in the real world. The most important and valuable resource any company is its employee. Employees who have been well-trained will be better at their jobs and produce a better product.  An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion about training employees and it is the most important task. Developer Training – Employee Morals and Ethics – Part 2 In this part we discussed the most crucial components of training. Often employees are expecting the company to pay for their training and the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and employer. Developer Training – Difficult Questions and Alternative Perspective - Part 3 This part was the most difficult to write as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers when not receiving chance for training think about leaving the organization. Developer Training – Various Options for Developer Training – Part 4 In this part I tried to explore a few methods and options for training. The generic feedback I received on this blog post was short and I should have explored each of the subject of the training in details. I believe there are two big buckets of training 1) Instructor Lead Training and 2) Self Lead Training. Developer Training – A Conclusive Summary- Part 5 There is no better motivation than a personal desire to learn new technology. Honestly there is nothing more personal learning. That “change is the only constant” and “adapt & overcome” are the essential lessons of life. One cannot stop the learning and resist the change. In the IT industry “ego of knowing all” and the “resistance to change” are the most challenging issues. A Quick Look at Logging and Ideas around Logging Question: What is the first thing comes to your mind when you hear the word “Logging”? Strange enough I got a different answer every single time. Let me just list what answer I got from my friends. Let us go over them one by one. 
Beginning Performance Tuning with SQL Server Execution Plan Solution of Puzzle – Swap Value of Column Without Case Statement Earlier this week I asked how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself and requested the help of the community to come up with alternate solutions; honestly, I am stunned and amazed by the quality of the entries. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • .htaccess file size causes 500 Internal Server Error

    - by moobot
    As soon as my .htaccess goes over approx 8410 bytes, I get a 500 Internal Server Error. I don't think this is due to a bad redirect, as I have experimented with redirects in the .htaccess and then with just text that is commented out #. (no actual commands in the .htaccess file) Is there anything obvious that can cause this? Update: The site is on WordPress. Here are the redirects I was originally trying to add: RewriteEngine On ## 301 Redirects of old URLs to new # 301 Redirect 1 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^accesseries/underlay/prod_37\.html$ /product-category/accessories/underlays? [R=301,NE,NC,L] # 301 Redirect 2 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^accessories/acoustic-underlay/prod_29\.html$ /product/acoustic-underlay/? [R=301,NE,NC,L] # 301 Redirect 3 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^accessories/cat_4\.html$ /product-category/accessories/? [R=301,NE,NC,L] # 301 Redirect 4 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/accessories/cat_8\.html$ /product-category/accessories/? [R=301,NE,NC,L] # 301 Redirect 5 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/bamboo-floor/natural-strandwoven-bamboo-semi-gloss-wide-board-135mm-click/prod_151\.html$ /product/natural-strand-woven-bamboo-semi-gloss-wide-board-135mm-click/? [R=301,NE,NC,L] # 301 Redirect 6 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/bamboo-floor/strandwoven-chocolate-135mm-bamboo-flooring/prod_174\.html$ /product/strand-woven-chocolate-135mm-bamboo-flooring/? [R=301,NE,NC,L] # 301 Redirect 7 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/bamboo-floor/strand-woven-kempas-bamboo-flooring/prod_173\.html$ /product/strand-woven-kempas-bamboo-flooring/? [R=301,NE,NC,L] # 301 Redirect 8 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/bamboo-floor/strandwoven-walnut-wired-135mm-bamboo-flooring/prod_176\.html$ /product/strand-woven-walnut-wired-135mm-bamboo-flooring/? [R=301,NE,NC,L] # 301 Redirect 9 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/cat_7\.html$ /product-category/bamboo-floor/? [R=301,NE,NC,L] # 301 Redirect 10 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-installation/info_8\.html$ /bamboo-installation/? [R=301,NE,NC,L] # 301 Redirect 11 RewriteCond %{QUERY_STRING} ^act=cart$ [NC] RewriteRule ^cart\.php$ /cart/? [R=301,NE,NC,L] # 301 Redirect 12 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^contact-us/info_2\.html$ /contact-us/? [R=301,NE,NC,L] # 301 Redirect 13 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^faqs/info_9\.html$ /faqs/? [R=301,NE,NC,L] # 301 Redirect 14 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-floating-timber-floor/black-butt-engineered-floating-timber/prod_213\.html$ /product/black-butt-engineered-floating-timber/? [R=301,NE,NC,L] # 301 Redirect 15 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-floating-timber-floor/doussie-engineered-floating-timber/prod_208\.html$ /product/doussie-engineered-floating-timber/? [R=301,NE,NC,L] # 301 Redirect 16 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-floating-timber-floor/smoked-oak-engineered-floating-timber/prod_217\.html$ /product/smoked-oak-engineered-floating-timber/? [R=301,NE,NC,L] # 301 Redirect 17 RewriteCond %{QUERY_STRING} ^act=thanks$ [NC] RewriteRule ^index\.php$ http://www.xxxxxxxxxx.com/? [R=301,NE,NC,L] # 301 Redirect 18 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=13$ [NC] RewriteRule ^index\.php$ /product-category/samples/bamboo-flooring-samples/? 
[R=301,NE,NC,L] # 301 Redirect 19 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=18$ [NC] RewriteRule ^index\.php$ /product/bamboo-plastic-composite/? [R=301,NE,NC,L] # 301 Redirect 20 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=2$ [NC] RewriteRule ^index\.php$ /product-category/bamboo-floor/? [R=301,NE,NC,L] # 301 Redirect 21 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=20$ [NC] RewriteRule ^index\.php$ /products/? [R=301,NE,NC,L] # 301 Redirect 22 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=3$ [NC] RewriteRule ^index\.php$ /product-category/floating-timber-floor/? [R=301,NE,NC,L] # 301 Redirect 23 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=5$ [NC] RewriteRule ^index\.php$ /product-category/laminate-flooring/? [R=301,NE,NC,L] # 301 Redirect 24 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=6$ [NC] RewriteRule ^index\.php$ /product-category/accessories/? [R=301,NE,NC,L] # 301 Redirect 25 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=saleItems$ [NC] RewriteRule ^index\.php$ /product-category/clearance-sale/? [R=301,NE,NC,L] # 301 Redirect 26 RewriteCond %{QUERY_STRING} ^act=viewDoc&docId=3$ [NC] RewriteRule ^index\.php$ /faqs/? [R=301,NE,NC,L] # 301 Redirect 27 RewriteCond %{QUERY_STRING} ^act=viewDoc&docId=4$ [NC] RewriteRule ^index\.php$ /faqs/? [R=301,NE,NC,L] # 301 Redirect 28 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=137$ [NC] RewriteRule ^index\.php$ /product/laminate-flooring-goustein-wood/? [R=301,NE,NC,L] # 301 Redirect 29 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=164$ [NC] RewriteRule ^index\.php$ /product/modern-black-brushed-finish-strand-woven-flooring/? [R=301,NE,NC,L] # 301 Redirect 30 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=165$ [NC] RewriteRule ^index\.php$ /product/lime-wash-strand-woven-bamboo-flooring/? [R=301,NE,NC,L] # 301 Redirect 31 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=168$ [NC] RewriteRule ^index\.php$ /product/country-bark/? [R=301,NE,NC,L] # 301 Redirect 32 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=173$ [NC] RewriteRule ^index\.php$ /product-category/bamboo-floor/14mm-bamboo-flooring/? [R=301,NE,NC,L] # 301 Redirect 33 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=178$ [NC] RewriteRule ^index\.php$ /product/blue-gum-136-floating-timber/? [R=301,NE,NC,L] # 301 Redirect 34 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=199$ [NC] RewriteRule ^index\.php$ /product/jarrah-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 35 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=205$ [NC] RewriteRule ^index\.php$ /product/elm-12mm-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 36 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=209$ [NC] RewriteRule ^index\.php$ /product/iroko-engineered-floating-timber/? [R=301,NE,NC,L] # 301 Redirect 37 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=222$ [NC] RewriteRule ^index\.php$ /product/european-oak-engineered-floating-timber-sample/? [R=301,NE,NC,L] # 301 Redirect 38 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=236$ [NC] RewriteRule ^index\.php$ /product/black-forest-5mm-vinyl-flooring/? [R=301,NE,NC,L] # 301 Redirect 39 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=65$ [NC] RewriteRule ^index\.php$ /product/stair-nose/? [R=301,NE,NC,L] # 301 Redirect 40 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=83$ [NC] RewriteRule ^index\.php$ /product/laminate-flooring-warm-teak/? 
[R=301,NE,NC,L] # 301 Redirect 41 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/12mm-laminate-flooring/blackbutt/prod_156\.html$ /product/blackbutt-12mm-laminate-floor/? [R=301,NE,NC,L] # 301 Redirect 42 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/12mm-laminate-flooring/tasmanian-oak/prod_171\.html$ /product/tasmanian-oak/? [R=301,NE,NC,L] # 301 Redirect 43 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/8-3mm-laminate-flooring/laminate-flooring-warm-teak/prod_8\.html$ /product/laminate-flooring-warm-teak/? [R=301,NE,NC,L] # 301 Redirect 44 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/accessories/cat_6\.html$ /product-category/accessories/? [R=301,NE,NC,L] # 301 Redirect 45 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/cat_5\.html$ /product-category/laminate-flooring/? [R=301,NE,NC,L] # 301 Redirect 46 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/country-classic-12mm-laminate/cat_19\.html$ /product-category/laminate-flooring/12mm-country-classic-laminate-floor/? [R=301,NE,NC,L] # 301 Redirect 47 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-installation/info_7\.html$ /laminate-installation/? [R=301,NE,NC,L] # 301 Redirect 48 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^privacy-policy/info_4\.html$ /faqs/? [R=301,NE,NC,L] # 301 Redirect 49 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-quotation-request/info_5\.html$ /quotation-request/? [R=301,NE,NC,L] # 301 Redirect 50 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^rainbow-flooring/cat_16\.html$ /product-category/rainbow-flooring/? [R=301,NE,NC,L] # 301 Redirect 51 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^rainbow-flooring/walnut-rainbow-flooring/prod_112\.html$ /product/walnut-rainbow-flooring/? [R=301,NE,NC,L] # 301 Redirect 52 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/12mm-laminate-floor-samples/kempas-laminate-floor-sample/prod_195\.html$ /product/kempas-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 53 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/12mm-laminate-floor-samples/spotted-gum-laminate-floor-sample/prod_196\.html$ /product/spotted-gum-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 54 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/12mm-laminate-floor-samples/tasmanian-oak-laminate-floor-sample/prod_197\.html$ /product/tasmanian-oak-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 55 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/bamboo-flooring-samples/cat_13\.html$ /product-category/samples/bamboo-flooring-samples/? [R=301,NE,NC,L] # 301 Redirect 56 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/bamboo-flooring-samples/rosewood-strandwoven-bamboo-floor-135mm-click-sample/prod_191\.html$ /product/rosewood-strand-woven-bamboo-floor-135mm-click-sample/? [R=301,NE,NC,L] # 301 Redirect 57 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/cat_9\.html$ /samples/? [R=301,NE,NC,L] # 301 Redirect 58 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/floating-timber-floor-samples/iroko-engineered-floating-timber-floor-sample/prod_223\.html$ /product/iroko-engineered-floating-timber-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 59 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/floating-timber-floor-samples/jarrah-engineered-floating-timber-sample/prod_224\.html$ /product/jarrah-engineered-floating-timber-sample/? 
[R=301,NE,NC,L] # 301 Redirect 60 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/floating-timber-floor-samples/merbau-engineered-floating-timber-sample/prod_226\.html$ /product/merbau-engineered-floating-timber-sample/? [R=301,NE,NC,L] # 301 Redirect 61 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/floating-timber-floor-samples/spotted-gum-engineered-floating-timber-sample/prod_228\.html$ /product/spotted-gum-engineered-floating-timber-sample/? [R=301,NE,NC,L] # 301 Redirect 62 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/floating-timber-floor-samples/sydney-blue-gum-engineered-floating-timber-sample/prod_220\.html$ /product/sydney-blue-gum-engineered-floating-timber-sample/? [R=301,NE,NC,L] # 301 Redirect 63 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/-laminate-flooring/accessories/laminate-flooring-accessories-click-stairnose/prod_251\.html$ /product/stair-nose/? [R=301,NE,NC,L] # 301 Redirect 64 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/-laminate-flooring/country-classic-12mm-laminate/country-classic-polar-white/prod_243\.html$ /product/country-classic-polar-white/? [R=301,NE,NC,L] # 301 Redirect 65 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/samples/12mm-laminate-floor-samples/country-classic-polar-white/prod_244\.html$ /product/country-classic-polar-white-sample/? [R=301,NE,NC,L] # 301 Redirect 66 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/samples/12mm-laminate-floor-samples/rustic-oak-12mm-laminate-floor/prod_248\.html$ /product/rustic-oak-12mm-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 67 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/samples/vinyl-flooring-samples/cat_25\.html$ /product-category/samples/vinyl-flooring-samples/? [R=301,NE,NC,L] # 301 Redirect 68 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/vinyl-flooring/cat_24\.html$ /product-category/vinyl-floor/? [R=301,NE,NC,L] # 301 Redirect 69 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^solardeck-tiles/cat_22\.html$ /product-category/solardeck-tiles/? [R=301,NE,NC,L] # 301 Redirect 70 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^solardeck-tiles/solardeck-tiles/prod_206\.html$ /product/solardeck-tiles/? [R=301,NE,NC,L] # 301 Redirect 71 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^terms-conditions/info_3\.html$ /faqs/? [R=301,NE,NC,L] I'm getting errors like this in my log: Invalid command 'aminate-flooring/tasmanian-oak/prod_171\\.html$', perhaps misspelled or defined by a module not included in the server configuration, referer: http://www.xxxxxxxx.com/laminate-installation/ Invalid command ',NE,NC,L]', perhaps misspelled or defined by a module not included in the server configuration Invalid command ',L]#', perhaps misspelled or defined by a module not included in the server configuration

    Read the article

  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us hadn’t even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions: In case you can’t watch the video, I’ve transcribed the talk below, although I’d recommend watching the video if you can — I didn’t have much time to do the transcribing! So, what is a coderetreat? So it’s not just something in Red Gate, there’s a website and everything, although it’s not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you’ve got one day, and you split it into one hour sections. You spend three quarters of that coding, and do a little retrospective at the end. You’re supposed to start fresh each, we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway’s Game of Life is the only task mentioned anywhere that I find for coderetreat. So I don’t know what we’ll do next time, or if we’re meant to do the same thing again. There are some guiding principles which felt to us like restrictions, that you have to code in crazy ways to encourage better code. Final thing is that it’s supposed to be free for outsiders to join. It’s meant to be a kind of networking thing, where you link up with people from other companies. We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate. We didn’t chat to anybody else for the initial one. The task was Conway’s Game of Life, which most of you have probably heard of it, all but one of us knew about it when did the coderetreat. I won’t got into the details of what it is, but it felt like the right size of task, basically one or two groups actually produced something working by the end of the day, and of course that doesn’t mean it’s necessarily a day’s work to produce that because we were starting again every hour. The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it’s really good to stop and try again because there are so many what-ifs when you’ve finished writing something, “what if I’d done it this way?”. You can answer all those questions at a coderetreat because it’s not about getting a product out the door, it’s about learning and playing with ideas. So we had all these different practices we were trying. I’ll try and go through most of these. Single responsibility is this idea that everything should do just one thing. It was the very first session, we were still trying to figure out how do you go about the Game of Life? So by the end of forty-five minutes hadn’t produced very much for that first session. We were still thinking, “Do we start with a board, how do we represent all these squares? It can be infinitely big, help, this is getting really difficult!”. So, most of us didn’t really get anywhere on the first one. 
Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that. Another thing we tried was TDD (test-driven development). I’m sure most of you know what TDD is: Watch a test, watch it fail for the right reason Write code to pass the test, watch it pass Refactor, check the test still passes Repeat! It basically worked, we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you’ve even written your first assertion, which is supposed to be the very first thing you write, because you write your tests backwards from the assertions back to the initial conditions, you’ve already constrained the logic of the code in some way by the time you’ve done that. You then get to this situation of, “Well, we actually want to go in a slightly different direction. Can we do this?”. Can we write tests that don’t constrain the architecture? Wrapping up all primitives: it’s kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You’ve got pages of code before you’ve even done anything. No getters and setters (use tell don’t ask instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can’t just check the internal state of the code. And people found that really challenging and it made them think in a different way which I think is really good. Not having mutable state: that was kind of confusing because we weren’t quite sure what fitted within that rule and what didn’t, and I think we were trying too hard to follow the rule rather than the guideline. No if-statements: supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this: public T If(bool condition, Func<T> left, Func<T> right) { var dict = new Dictionary<bool, Func<T>> {{true, left}, {false, right}}; return dict[condition].Invoke(); } That is not really polymorphism, is it? For-loops: you can always replace a for-loop with recursion, but it doesn’t tend to make it any more readable unless it’s the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn’t make it easier it’s the kind of tree-structure algorithm where that would help. Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn’t actually a challenge because you just extract methods. That’s quite a useful thing because you can apply that to real code and say, “Okay, should this method really be going crazy like this?” No talking: we hated that. It’s like there’s two of you at a computer, and one of you is doing the typing, what does the other guy do if they’re not allowed to talk. The answer is TDD ping-pong – one person writes the tests, and then the other person writes the code to pass the test. And that creates communication without actually having to have discussion about things which is kind of cool. No code comments: just makes no difference to anything. It’s a forty-five minute exercise, so what are you going to put comments in code for? Finally, this is my fault. 
    I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so I didn’t even manage to work out how to do that convolution. It’s trivial in some high-level languages, but you need something matrix-orientated for it to really work. That’s most of it, really. The thoughts that people went away with: we put down our answers to questions like “What have you learnt?”, “What surprised you?” and “How are you going to do things differently?”, and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can’t change, so being able to attack something three different ways in an environment where the end-product isn’t important: that’s something people really enjoyed. Pair-programming: people also said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good: they promote functional programming. And TDD, people found really hard. “Tell, don’t ask” people found really, really hard, and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.
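    As an aside, here is roughly how the “no for-loops” rule tends to play out in practice. This is a hedged sketch in C# rather than code from the actual sessions, and the class and method names are made up purely for illustration:

    static class LifeCounting
    {
        // Counting live cells with a plain for-loop
        static int CountAlive(bool[] cells)
        {
            int count = 0;
            for (int i = 0; i < cells.Length; i++)
            {
                if (cells[i]) count++;
            }
            return count;
        }

        // The same calculation with recursion; a conditional expression stands in for the if-statement
        static int CountAliveRecursive(bool[] cells, int index)
        {
            return index >= cells.Length
                ? 0
                : (cells[index] ? 1 : 0) + CountAliveRecursive(cells, index + 1);
        }
    }

    Both methods return the same result; whether the recursive version is actually clearer is exactly the kind of question the retreat format is good at surfacing.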

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful, are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations, and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right-hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
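    One extra, hedged aside before the script: if you want to see the ‘Scan Count = 0’ behaviour described above for singleton lookups, wrap the test queries in SET STATISTICS IO ON and OFF and compare the Scan Count values reported in the Messages tab. The table and queries match the script that follows; for example:

    SET STATISTICS IO ON;

    -- Singleton lookup: single-value equality seek on the unique index
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 32;

    -- Seek plus range scan, for comparison
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 10 AND E.key_col < 15 ORDER BY E.key_col ASC;

    SET STATISTICS IO OFF;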
    IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ;

    -- Test table is a heap
    -- Non-clustered primary key on 'key_col'
    CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ;

    -- Non-unique non-clustered index on the 'data' column
    CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ;

    -- Add 100 rows
    INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ;

    -- ================
    -- Singleton lookup
    -- ================
    ;
    -- Single value equality seek in a unique index
    -- Scan count = 0 when STATISTICS IO is ON
    -- Check the XML SHOWPLAN
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 32 ;

    -- ===========
    -- Range Scans
    -- ===========
    ;
    -- Query 1
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col <= 5 ORDER BY E.key_col ASC ;

    -- Query 2
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col > 95 ORDER BY E.key_col DESC ;

    -- Query 3
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 10 AND E.key_col < 15 ORDER BY E.key_col ASC ;

    -- Query 4
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 20 AND E.key_col < 25 ORDER BY E.key_col DESC ;

    -- Final query (singleton + range = 2 range scans)
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 10 OR E.key_col BETWEEN 15 AND 20 ORDER BY E.key_col ASC ;

    -- === TIDY UP ===
    DROP TABLE dbo.Example;

    © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
    As part of my professional development, I’ve created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, called Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I’m reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx  Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I’ve found that at least 3 books are necessary to get the right amount of information to me. This is a “technical” work, meaning that it deals with technology and not business, writing or other facets of my career. I’ll have a mix of all of those as I read along. I chose this work in addition to others I’ve read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it’s dated, many examples normally translate. I also saw that it had pretty good reviews. What I learned: I learned a great deal about storage, and picked up many useful code snippets. I do think that there could have been more of a focus on the application fabric - but of course that wasn’t as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric I would rather see sensitive data left in a hybrid fashion on premise, and connected to from the Azure application. Even so, the examples were very useful. If you’re looking for a good “starter” Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It’s not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being “stale”. Even so, I found this a useful book, which I believe will help me work with Azure better. Raw Notes: I take notes as I read, calling that process “reading with a pencil”. I find that when I do that I pay attention better, and record some things that I need to know later. I’ll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process. Programming Windows Azure by Sriram Krishnan: Learning about how to select applications suitable for Distributed Technology. Application Fabric gets the least attention; probably because it was newer at the time. Very clear (Chapter One) Good foundation Background and history, but not too much I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts. 
    Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3) Place Storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separated configs for staging and testing. Easy to switch in and switch out.  (Chapter 4) There are two Runtime APIs, one for external and one for internal. Realizing how powerful this paradigm really is. Some places seem light and seem to drop off, but perhaps that’s best. Managing API is not charged, which is nice. I don’t often think about the price, until it comes to an actual deployment (Chapter 5) Csmanage is something I want to dig into deeper. API requires package moves to Blob storage first, so it needs a URL. Csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file  Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well thought out. (Chapter 6) Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly. Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. Plumbing for this is internal; you are not required to code for it. (Chapter 7) REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about CloudDrive code – PowerShell sample that mounts Blob storage as a local provider. Works against Dev fabric by default, can be switched to Account. Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated. No, blobs are not of any size or number. Not a good statement (Chapter 8) Blob storage is probably Azure’s closest play to Infrastructure as a Service (IaaS). Blob change operations must be authenticated, even when public. Chapters on storage are pretty in-depth. Queue Messages are base-64 encoded (Chapter 9) The visibility timeout ensures processing of a message in a disconnected system. Order is not guaranteed for a message, so if you need that, set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system. Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10) An entity can have up to 255 Properties  Use “ID” for the class to indicate the key value, or use the [DataServiceKey] Attribute.  LINQ makes working with the Azure Table Service much easier, although Interop is certainly possible. Good description on the process of selecting the Partition and Row Key.  
    When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.  On deleting a storage object, it is instantly unavailable; however, a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state.  Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion a-la the Porter Stemming Algorithm. (Chapter 11)  Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBAs in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, make a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the Etag and allows you to take action appropriately. I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on, even so. The chapter seems out of place, and should be combined with the Blob chapter.  (Chapter 13) on SQL Azure is dated, although the base concepts are OK.  Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server, really) database.  

    Read the article

  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language Commands (TCL): COMMIT. Committing a Transaction. When working with SQL we use transaction control language commands very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. This statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back. Until you commit a transaction: · You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit · You can roll back (undo) any changes made during the transaction with the ROLLBACK statement. Note: Many people think that when you type COMMIT, the data changes you have made are written to the data files, but that is not what happens. When you type COMMIT you are saying that your unit of work is complete; the Oracle engine then verifies that the transaction achieves consistency and, once it is satisfied, it acknowledges the commit to the user from the log buffer, not from the data buffer. The redo for the change is written from the log buffer to the redo log files, and the database writer flushes the modified data buffers to the data files separately, when it is efficient to do so. This is how it works. Before a transaction that modifies data is committed, the following has occurred: · Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction · Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before a transaction is committed · The changes have been made to the database buffers of the SGA. These changes may go to disk before a transaction is committed. Note: The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, it can happen some time after the transaction commits. When a transaction is committed, the following occurs: 1. The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table 2. The log writer process (LGWR) writes redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction 3. Oracle releases locks held on rows and tables 
    4. Oracle marks the transaction complete. Note: The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency application developers can specify that redo be written asynchronously and that transactions do not need to wait for the redo to be on disk. The syntax of the COMMIT statement is COMMIT [WORK] [COMMENT ‘your comment’]; · WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent. Examples Committing an insert: INSERT INTO table_name VALUES (val1, val2); COMMIT WORK; · COMMENT The COMMENT clause is also optional. This clause is supported for backward compatibility. Oracle recommends that you use named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. Examples The following statement commits the current transaction and associates a comment with it: COMMIT COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637'; · WRITE Clause Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use this clause to improve response time in environments with stringent response time requirements where the following conditions apply: The volume of update transactions is large, requiring that the redo log be written to disk frequently. The application can tolerate the loss of an asynchronously committed transaction. The latency contributed by waiting for the redo log write to occur contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order. Examples To commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following COMMIT statement: COMMIT WRITE BATCH; Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user. WAIT | NOWAIT Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit will return only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client. In this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput. 
    With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost. Caution: with NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior. IMMEDIATE | BATCH Use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior. · FORCE Clause Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction. · In a distributed database system, the FORCE 'string' [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN. · The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to specify this clause. · Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause. Examples Forcing an in-doubt transaction: the following statement manually commits a hypothetical in-doubt distributed transaction. Query the DBA_2PC_PENDING data dictionary view to find the IDs of in-doubt transactions, and note that you need the appropriate privileges to issue this statement. COMMIT FORCE '22.57.53';
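    To pull these clauses together, here is a short, hedged set of examples showing the commit variations discussed above. The syntax follows the descriptions in this post; the table name and comment text are made up for illustration:

    -- Default behaviour: wait for the redo to reach the online redo log before returning
    INSERT INTO orders VALUES (1001, 'PENDING');
    COMMIT;

    -- Equivalent form with the optional WORK keyword and a diagnostic comment
    COMMIT WORK COMMENT 'Order batch 1001 posted';

    -- Lowest latency: return immediately and let the redo be written in a group (a crash can lose the commit)
    COMMIT WRITE NOWAIT BATCH;

    -- Force the redo write now and wait for it to complete before returning
    COMMIT WRITE WAIT IMMEDIATE;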

    Read the article

  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    Unfortunately, there was no monthly meetup during May, which meant it was even more important and interesting to go forward with a great topic for this month. Earlier this year I already spoke to Nayar Joolfoo about doing a presentation on version control systems (VCS), and he gladly agreed; since then it was just about finding the right date for the action. Furthermore, it was also a great coincidence that Avinash Meetoo announced on social media networks that Knowledge 7 is about to have a new training on "Effective git" - which correlates to a book title Avinash is currently working on - all the best with your approach on this and reach out to our MSCC craftsmen for recessions. Once again a big Thank you to Orange Ebene Accelerator for providing the venue for us, and to the MSCC members involved in securing the time slot for our event. Unfortunately, it's kind of tough to get an early confirmation for our meetups these days. I'll keep you posted on that one as there are some interesting and exciting options coming up soon. Okay, let's talk about the meeting and version control systems again. As usual, I'm going to start with my first impression of the meetup: "Absolutely great topic, questions and discussions on version control systems, like git or VSO. I was also highly pleased by the number of first timers and female IT geeks. Hopefully, we will be able to keep this trend for future get-togethers." And I really have to emphasise the amount of fresh blood coming to our gathering. Also, during the initial phase it was surprising to see that exactly those first-timers, most of them students at various campuses here on the island, had absolutely no idea about version control systems. More about that further down... Reactions of other attendees If I counted correctly, we had a total of 17 attendees this month, and I'd like to give you feedback from some of them: "Inspiring. Helped me understand more about GIT." -- Sean on event comments "Joined the meetup today with literally no idea what is a version control system. I have several reasons why I should be starting to use VCS as from NOW in my projects. Thanks Nayar, Jochen and other participants :)" -- Yudish on event comments "Was present today and I'm very satisfied.I was not aware if there was a such tool like git available. Thanks to those who contributed for this meetup.It was great. Learned a lot from this meetup!!" -- Leonardo on event comments "Seriously, I can see how it’s going to ease my task and help me save time. Gone are the issues with files backups.  And since I’ll be doing my dissertation this year, using Git would help me a lot for my backups and I’m grateful to Nayar for the great explanation." -- Swan-Iyah on MSCC meetup : Version Controls Hopefully, I'll be able to get some other sources - personal blogs preferred - on our meeting. Geeks, thank you so much for those encouraging comments. It's really great to experience that we, all members of the MSCC, are doing the right thing to get more IT information out, and to help each other to improve and evolve in our professional careers. Our agenda of the day Honestly, we had a bumpy start... First, I was battling a little bit with the movable room divider in order to maximize the space. I mean, we had 24 RSVPs and usually there might be additional people coming along. Then, for whatever reason, we were facing power outages - actually twice in short periods. Not too good for the projector after all, but hey, it went smoothly for the rest of the time. And last but not least... 
    our first speaker Nayar got stuck somewhere on the road. ;-) Anyway, not a real show-stopper, and we used the time until Nayar's arrival to introduce ourselves a little bit. It is always important for me to get to know the "newbies" a little bit, and as a result we had lots of university students - first year, second year and recent graduates - among them. Surprisingly, none of them had ever been in contact with version control systems at all. I mean, this is a shocking discovery! Similar to the ability to touch-type, I'd say that being able to use (and master) any kind of version control system is compulsory in any job in the IT industry. Seriously, I'm wondering what is being taught during the classes on the campus. All of them have to work on semester assessments or final projects, even in small teams of 2-4 people. That's the perfect occasion to get started with VCS. Already in this phase, we had great input from more experienced VCS users, like Sean, Avinash and myself. git - a modern approach to VCS - Nayar What a tour! Nayar gave us the full round of git from start to finish, even touching some more advanced techniques. First, he started by explaining the importance of version control systems as an essential tool for software developers, even when working alone on a project, and the ability to have a kind of "time machine" that allows you to inspect and revert to a previous version of source code at any time. Then he showed how easy it is to install git on an Ubuntu-based system, but also mentioned that git is literally available for any operating system, like Windows, Mac OS X and of course other Linux distributions. Next, he showed us how to set the initial configuration values of user name and email address, which simplifies the daily usage of the git client while working with your repositories. Then he initialised and added a new repository for some local development of blogging software. All commands were done using the command line interface (CLI) so that they can be repeated on any system as reference. The syntax and the procedure are always the same, and Nayar clearly mentioned this to the attendees. Now, having a git repository in place, it was about time to work on some "important" changes to the blogging software - just for the sake of demonstrating the ease of use and power of git. One interesting question came very early: "How many commands do we have to learn? It looks quite difficult at the moment" - Well, rest assured that during daily development cycles you will need less than 10 git commands on a regular basis: git add, commit, push, pull, checkout, and merge. And Nayar demo'd all of them. Much to the delight of everyone, he also showed gitk, which is the git repository browser. It's a UI tool to display changes in a repository or a selected set of commits. This includes visualizing the commit graph, showing information related to each commit, and the files in the trees of each revision. Using gitk to display and browse information of a local git repository And last but not least, we took advantage of the internet connectivity and reached out to various online portals offering git hosting for free. Nayar showed us how to push the local repository to a remote system on github. He showed the web-based git browser and history handling, and then also explained and demo'd how to connect to existing online repositories in order to get access to either your own source code or other people's open source projects. 
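    For reference, the day-to-day cycle Nayar walked through looks roughly like this on the command line. This is a hedged sketch; the remote URL, branch and commit message are placeholders:

    git init                                  # create a new local repository
    git add .                                 # stage your changes
    git commit -m "Describe what changed"     # record a snapshot locally
    git remote add origin https://github.com/youruser/yourproject.git   # hypothetical remote URL
    git push -u origin master                 # publish your commits to the remote
    git pull                                  # fetch and merge changes made by others
    git checkout -b experiment                # create and switch to a branch
    git checkout master                       # switch back to the main branch
    git merge experiment                      # merge the branch into the current one

    That is essentially the fewer-than-ten commands mentioned above.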
    Next to github, we also spoke about bitbucket and gitlab as potential online platforms for your projects. Have a look at the conditions and details about their free service packages and what you can get additionally as a paying customer. Usually, you already get a lot of services for up to five users for free, but there might be other important aspects that could have an impact on your decision. Anyways, moving git-based repositories between systems is a piece of cake, and changing online platforms is possible at any stage of your development. Visual Studio Online (VSO) - Jochen Well, Nayar literally covered all elements of working with git during his session, including the use of external online platforms. So, what would be the advantage of talking about Visual Studio Online (VSO)? First of all, VSO is "just another" online platform for hosting and managing git repositories on remote systems, equivalent to github, bitbucket, or any other web site. At the moment (of writing), Microsoft also provides a free package of up to five users / developers on a git repository, but there is more in that package. Of course, it is related to software development on Windows systems and it is closely tied to the use of Visual Studio, but from experience you are absolutely not restricted to that. Connecting a Linux or Mac OS X machine with a git client or an integrated development environment (IDE) like Eclipse or Xcode works as smoothly as expected. So, why should one opt for VSO? Well, one of the main aspects that I would like to mention here is that VSO integrates Microsoft's Application Lifecycle Management (ALM) in the platform. This means that you get agile project management with Backlogs, Sprints and Burn-down charts, as well as the ability to track tasks, bug reports and work items, next to collaborative team chats. It's the whole package of agile development you'll get. And, something I mentioned briefly during the beginning of our meeting, VSO gives you the possibility of an automated continuous integration (CI) process which builds, and can run tests against, your source code after each commit of changes. Having a proper CI strategy is also part of the Clean Code Developer practices - on Level Green actually - and not only simplifies your life as a software developer but also reduces the sources of potential errors. Seamless integration and automated deployment between Microsoft Azure Web Sites and a git repository But my favourite feature is the seamless continuous deployment to Microsoft Azure. Especially while working on web projects, it's absolutely astonishing that as soon as you commit your changes it takes just a couple of seconds until your modifications are deployed and available on your Azure-hosted web sites. Upcoming Events and networking Due to the adjusted times, everybody was kind of hungry and we didn't follow up on networking or upcoming events - very unfortunate in my opinion, and this will have an impact on future planning of our meetups. I would rather see more conversations during and at the end of our meetings than everyone just packing up their laptops, bags and accessories and rushing off to grab some food. I was hoping to get some information regarding this year's Code Challenge - supposedly to be organised during July? Maybe someone could leave a comment on that - but I couldn't get any updates. Well, I'll keep digging... 
    In case you would like to get more into git and how to use it effectively, please check out Knowledge 7's upcoming course on "Effective git". Thanks Avinash for your vital input into today's conversation, and I'm looking forward to getting a grip on your book very soon. My resume of the day Do not work in IT without any kind of version control system! Seriously, without a VCS in place you're doing it wrong. It's like driving a car without seat belts attached or riding your bike without a safety helmet. You don't do that! End of discussion. ;-) Nowadays, having access to free (as in cost) tools to install on your machine and numerous online platforms to host your source code for free for up to five users, it's a no-brainer to get yourself familiar with VCS. Today's sessions gave a good overview of how to start using git and how to connect to various remote services like github or VSO.

    Read the article

  • CodePlex Daily Summary for Thursday, May 20, 2010

    CodePlex Daily Summary for Thursday, May 20, 2010New ProjectsAlphaChannel: Closed projectAragon Online Client: The Aragon Online Client is a front-end application allowing users to play the online game http://aragon-online.net The client fetches game data a...BISBCarManager: Car managerBlammo.Net: Blammo.Net is a simple logging system that allows for multiple files, has simple configuration, and is modular.C# IMAPI2 Samples: This project is my effort to port the VB Script IMAPI2 samples to C# 4.0 and provide CD/DVD Burning capabilities in C#. This takes advantage of th...DemotToolkit: A toolbox to help you enjoy the demotivators.FMI Silverlight Course: This is the site for the final project of the Silverlight course taught at the Sofia University in the summer semester of 2010.InfoPath Publisher Helper: Building a large set of InfoPath Templates? Bored of the repetive stsadm commands to deploy an online form? This tool will allow you to submit ...JSBallBounce - HTML5 Stereocopy: A demo of basic stereoscopy in HTML5Kindler: Kindler allows you to easily convert simply HTML into documents that can be easily read on the Amazon Kindle.Maybe: Maybe or IfNotNull using lambdas for deep expressions. int? CityId= employee.Maybe(e=>e.Person.Address.City);PopCorn Project : play music with system beeps: PopCorn is an application that can play monophonic music through system beeps. You can launch music on the local machine, or on a remote server thr...RuneScape 2 Chronos - Emulation done right.: RuneScape 2 Chronos is a RuneScape 2 Emulator framework. It is completely open-source and is programmed in a way in which should be simpler for it'...RWEntregas: Projeto do Rogrigo e Wender referente a entregasSilverAmp: A media player to demonstrate lots of new Silverlight 4 features. Running out of browser and reading files from the MyMusic folder are two of them....Silverlight Scheduler: A light-weight silverlight scheduler control.SimpleContainer: SimpleContainer is very simple IoC container that consists of one class that implements two remarkable methods: one for registration and one for re...sqlserverextensions: This project will provide some use operations for files and directories. Further development will include extended string operations.TechWorks Portugal Sample BI Project: Techworks is a complete Microsoft BI sample project customized for Portugal to be used in demo and learning scenarios. It is based in SQL Server 20...Test4Proj: Test4ProjTV Show Renamer: TV Show Renamer is a program that is designed to take files downloaded off the internet and rename them to a more user friendly file name. It is fo...UnFreeZeMouSeW7: This small application disable or enable the standby mode on Windows 7 devices. As the mouse pointer freezes or the latency increase on some device...VianaNET - Videoanalysis for physical motion: The VianaNET project targets physics education. The software enables to analyze the motion of colored objects in life-video and video files. The da...Visual Studio 2010 extension for helping SharePoint 2010 debugging: This is a Visual Studio 2010 extensibility project that extends the debugging support for the built-in SharePoint 2010 tools with new menu items an...Visual Studio 2010 Load Test Plugins Library: Useful plugin library for Visual Studio Load Test 2010 version. 
(Best for web tests).VMware Lab Manager Powershell Cmdlet: This is a simple powershell cmdlet which connects you with the VMware lab manager internal soap api.Webgame UI: Bot to webgameNew ReleasesActipro WPF Controls Contrib: v2010.1: Minor tweaks and updated to target Actipro WPF Studio 2010.1. Addition of Editors.Interop.Datagrid project, which allows Actipro Editors for WPF t...Blammo.Net: Blammo.Net 1.0: This is the initial release of Blammo.Net.Build Version Increment Add-In Visual Studio: Shared Assembly Info Setup: Example solution that makes use of one shared assembly info file.CSS 360 Planetary Calendar: Zero Feature Release: =============================================================================== Zero Feature Release Version: 0.0 Description: This is a binar...DotNetNuke® Community Edition: 05.04.02: Updated the Installation Wizard's Polish & German language packs. Improved performance of Sql script for listing modules by portal. Improved De...DotNetNuke® Store: 02.01.36: What's New in this release? Bugs corrected: - The reference to resource.zip has been commented in the Install package. Sorry for that it's my mista...Extend SmallBasic: Teaching Extensions v.016: added turtle tree quizExtremeML: ExtremeML v1.0 Beta 2: Following the decision to terminate development of the Premium Edition of ExtremeML, this release includes all code previously restricted to the Pr...InfoPath Publisher Helper: 1st Release: InfoPath Publisher Helper Tool The version is mostly stable. There are some UI errors, which can be ignored. The code has not been cleaned up so t...JSBallBounce - HTML5 Stereocopy: HTML5 Stereoscopic Bouncing Balls Demo: Stereoscopic rendering in HTML5kdar: KDAR 0.0.22: KDAR - Kernel Debugger Anti Rootkit - signature's bases updated - ALPC port jbject check added - tcpip internal critical area checks added - some ...Lightweight Fluent Workflow: Objectflow core: Release Notes Fixed minor defects Framework changes Added IsFalse for boolean conditions Defects fixed IsTrue works with Not operator Installati...LiquidSilver SharePoint Framework: LiquidSilver 0.2.0.0 Beta: Changes in this version: - Fixed a bug in HgListItem when parsing double and int fields. - Added the LiquidSilver.Extra project. - Added the Liquid...Matrix: MatrixPlugin-0.5.1: Works with UniqueRoutes plugin on Google Code Works in Linux (path changes, variable usage etc) Builds .st2plugin by default Adapted to ST3MRDS Samples: MRDS Samples 1.0: Initial Release Please read the installation instructions on the Read Me page carefully before you unzip the samples. This ZIP file contains sourc...PopCorn Project : play music with system beeps: Popcorn v0.1: 1st beta releaseRuneScape 2 Chronos - Emulation done right.: Revision 0: Alpha stage of the Chronos source.SCSI Interface for Multimedia and Block Devices: Release 13 - Integrated Shell, x64 Fixes, and more: Changes from the previous version: - Added an integrated shell browser to the program and removed the Add/Remove File/Folder buttons, since the she...Silverlight Console: Silverlight Console Release 2: Release contains: - Orktane.Console.dll - Orktane.ScriptActions.dll Release targets: - Silverlight 4 Dependencies: - nRoute.Toolkit for Silverlig...SimpleContainer: SimpleContainer: Initial release of SimpleContainer library.Springshield Sample Site for EPiServer CMS: Springshield 1.0: City of Springshield - The accessible sample site for EPiServer CMS 6. 
Read the readme.txt on how to install.SQL Trim: 20100519: Improved releasesqlserverextensions: V 0.1 alpha: Version 0.1 Alphasvchost viewer: svchost viewer ver. 0.5.0.1: Got some feedback from a user, with some nice ideas so here they are: • Made the program resizable. • Program now saves the size and position when...Tribe.Cache: Tribe.Cache RC: Release Candidate There are breaking changes between BETA and RC :- 1) Cache Dictionary is now not exposed to the client. 2) Completly Thread Sa...TV Show Renamer: TV Show Renamer: This is the first public release. Don't worry that it is version 2.1 that is because i keep adding features to it and then upping the version numbe...UnFreeZeMouSeW7: UnFreeZeMouSeW7 0.1: First releaseVCC: Latest build, v2.1.30519.0: Automatic drop of latest buildVianaNET - Videoanalysis for physical motion: VianaNET V1.1 - alpha: This is the first alpha release of the completely rewritten Viana. Known issues are: - sometimes black frame at initial load of video - no abilit...visinia: visinia_1: The beta is gone, the visinia is here with visinia 1. now you can confidently install visinia use visinia and enjoy visinia. This version of visini...visinia: visinia_1_Src: The beta is gone, the visinia is here with visinia 1. now you can confidently install visinia use visinia and enjoy visinia. This version of visini...Visual Studio 2010 extension for helping SharePoint 2010 debugging: 1.0 First public release: The extension is released as a Visual Studio 2010 solution. See my related blog post at http://pholpar.wordpress.com/2010/05/20/visual-studio-2010-...Visual Studio 2010 Load Test Plugins Library: version 1 stable: version 1 stableVMware Lab Manager Powershell Cmdlet: LMCmdlet 1.0.0: first Release. You need to be an Administrator to install this cmdlet. After you run setup open powershell type: Get-PSSnapin -Registered you sh...WF Personalplaner: Personalplaner v1.7.29.10139: - Wenn ein Schema erstellt wird mit der Checkbox "Als neues Schema speichern" wurde pro Person ein Schema erstellt - Wenn ein Pensum geändert wurde...XAM.NET: XAM 1.0p2 + Issue Tracker 8396: Patch release for Issue Tracker 8396Xrns2XMod: Xrns2XMod 1.2: Fixed 32 bit flac conversion - Thanks to Yuri for updating FlacBox librariesMost Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryRawrPHPExcelGMap.NET - Great Maps for Windows Forms & PresentationCustomer Portal Accelerator for Microsoft Dynamics CRMBlogEngine.NETWindows Azure Command-line Tools for PHP DevelopersCassiniDev - Cassini 3.5/4.0 Developers EditionSQL Server PowerShell ExtensionsFluent Ribbon Control Suite

    Read the article

  • Critique of SEO of this HTML

    - by Tom Gullen
    I'm designing a new site which I want to be as SEO friendly as possible, fast and responsive, semantic and very accessible. A lot of these things, embarrassingly are quite new to me. Have I miss applied anything? I want the template to be perfect. Live demo: http://69.24.73.172/demos/newDemo/ HTML: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <title>Welcome to Scirra.com</title> <meta name="description" content="Construct 2, the HTML5 games creator." /> <meta name="keywords" content="game maker, game builder, html5, create games, games creator" /> <link rel="stylesheet" href="css/default.css" type="text/css" /> <link rel="stylesheet" href="plugins/coin-slider/coin-slider-styles.css" type="text/css" /> </head> <body> <div class="topBar"></div> <div class="mainBox"> <header> <div class="headWrapper"> <div class="s searchWrap"> <input type="text" name="SearchBox" id="SearchBox" tabindex="1" /> <div class="s searchIco"></div> </div> <!-- Logo placeholder --> </div> <div class="menuWrapper"><nav> <ul class="mainMenu"> <li><a href="#">Home</a></li> <li><a href="#">Forum</a></li> <li><a href="#" class="mainSelected">Construct</a></li> <li><a href="#">Arcade</a></li> <li><a href="#">Manual</a></li> </ul> <ul class="underMenu"> <li><a href="#">Homepage</a></li> <li><a href="#" class="underSelected">Construct</a></li> <li><a href="#">Products</a></li> <li><a href="#">Community Forum</a></li> <li><a href="#">Contact Us</a></li> </ul> </nav></div> </header> <div class="contentWrapper"> <div class="wideCol"> <div id="coin-slider" class="slideShowWrapper"> <a href="#" target="_blank"> <img src="images/screenshot1.jpg" alt="Screenshot" /> <span> Scirra software allows you to bring your imagination to life </span> </a> <a href="#"> <img src="images/screenshot2.jpg" alt="Screenshot" /> <span> Export your creations to HTML5 pages </span> </a> <a href="#"> <img src="images/screenshot3.jpg" alt="Screenshot" /> <span> Another description of some image </span> </a> <a href="#"> <img src="images/screenshot4.jpg" alt="Screenshot" /> <span> Something motivational to tell people </span> </a> </div> <div class="newsWrapper"> <h2>Latest from Twitter</h2> <div id="twitterFeed"> <p>The news on the block is this. Something has happened some news or something. <span class="smallDate">About 6 hours ago</span></p> <p>Another thing has happened lets tell the world some news or something. Lots to think about. Lots to do.<span class="smallDate">About 6 hours ago</span></p> <p>Shocker! Santa Claus is not real. This is breaking news, we must spread it. <span class="smallDate">About 6 hours ago</span></p> </div> </div> </div> <div class="thinCol"> <h1>Main Heading</h1> <p>Some paragraph goes here. It tells you about the picture. Cool! Have you thought about downloading Construct 2? Well you can download it with the link below. This column will expand vertically.</p> <h3>Help Me!</h3> <p>This column will keep expanging and expanging. It pads stuff out to make other things look good imo.</p> <h3>Why Download?</h3> <p>As well as other features, we also have some other features. Check out our <a href="#">other features</a>. 
Each of our other features is really cool and there to help everyone suceed.</p> <a href="#" class="s downloadBox" title="Download Construct 2 Now"> <div class="downloadHead">Download</div> <div class="downloadSize">24.5 MB</div> </a> </div> <div class="clear"></div> <h2>This Weeks Spotlight</h2> <div class="halfColWrapper"> <img src="images/spotlight1.png" class="spotLightImg" alt="Spotlight User" /> <p>Our spotlight member this week is Pooh-Bah. He writes good stuff. Read it. <a class="moreInfoLink" href="#">Learn More</a></p> </div> <div class="halfColWrapper r"> <img src="images/spotlight2.png" class="spotLightImg" alt="Spotlight Game" /> <p>Killer Bears is a scary ass game from JimmyJones. How many bears can you escape from? <a class="moreInfoLink" href="#">Learn More</a></p> </div> <div class="clear"></div> </div> </div><div class="mainEnder"></div> <footer> <div class="footerWrapper"> <div class="footerBox"> <div class="footerItem"> <h4>Community</h4> <ul> <li><a href="#">The Blog</a></li> <li><a href="#">Community Forum</a></li> <li><a href="#">RSS Feed</a></li> <li> <a class="s footIco facebook" href="http://www.facebook.com/ScirraOfficial" target="_blank" title="Visit Scirra on Facebook"></a> <a class="s footIco twitter" href="http://twitter.com/Scirra" target="_blank" title="Follow Scirra on Twitter"></a> <a class="s footIco youtube" href="http://www.youtube.com/user/ScirraVideos" target="_blank" title="Visit Scirra on Youtube"></a> </li> </ul> </div> <div class="footerItem"> <h4>About Us</h4> <ul> <li><a href="#">Contact Information</a></li> <li><a href="#">Advertising</a></li> <li><a href="#">History</a></li> <li><a href="#">Privacy Policy</a></li> <li><a href="#">Terms and Conditions</a></li> </ul> </div> <div class="footerItem"> <h4>Want to Help?</h4> <p>You can contribute to the community <a href="#">in lots of ways</a>. We have a large active friendly community, and there are lots of ways to join in!</p> <a href="#" class="ralign"><strong>Learn More</strong></a> </div> <div class="clear"></div> </div> </div> <div class="copyright"> Copyright &copy; 2011 Scirra.com. All rights reserved. </div> </footer> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script> <script type="text/javascript" src="js/common.js"></script> <script type="text/javascript" src="plugins/coin-slider/coin-slider.min.js"></script> <script type="text/javascript" src="js/homepage.js"></script> </body> </html>

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - i.e. non-ASP.NET-served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that could be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way that managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests, such as static files or requests from other frameworks like PHP or ASP classic, and that is the topic of this blog post. Native and Managed Modules The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code and there is a host of native-mode modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are supported and fired, without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more confusing, as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Unmanaged modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request. When running in IIS 7 the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class), which, unlike the core HttpRuntime classes in ASP.NET, receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance. The good news about all of this for .NET devs is that ASP.NET-style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application and enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (i.e. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content, including static content like HTML files, images, JavaScript and CSS resources. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications that come with it. Running through the ASP.NET pipeline does add some overhead. 
ASP.NET and Your Own Modules When you create a new ASP.NET project, the Visual Studio templates typically create the modules section like this: <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true" > </modules> </system.webServer> Specifically, the interesting thing about this is the runAllManagedModulesForAllRequests="true" flag, which seems to indicate that it controls whether any registered managed modules run for every request. Realistically though this flag does not control whether managed code is fired for all requests or not. Rather it is an override for the managedHandler preCondition on a particular module. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests. Now so far so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You probably would expect that the non-ASP.NET requests would immediately no longer get funnelled through the ASP.NET module pipeline. But that's not what actually happens. For example, if I create a module like this:<add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" /> by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even if runAllManagedModulesForAllRequests="false", the module is fired. Not quite what you would expect. So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my module in web.config like this:<add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" /> and runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, my module now ends up handling all IIS requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting. runAllManagedModulesForAllRequests and Http Application Events Another place the runAllManagedModulesForAllRequests attribute has an effect is the global Http Application object (typically in global.asax) and the Application_XXXX events that you can hook up there. So while the events there are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every Http request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false". What does all that mean? Configuring an application to handle requests for both ASP.NET and other content can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set a preCondition="managedHandler" on it. 
This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests. Look really carefully to see whether you actually need runAllManagedModulesForAllRequests="true" in your applications, as set by the default new project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0, you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true": <handlers> <remove name="ExtensionlessUrlHandler-Integrated-4.0" /> <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" /> </handlers> Oddly this is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non-ASP.NET requests. As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non-ASP.NET resources - for example, I can't do the same with ASP classic requests - but it makes for an interesting demo when injecting HTML content into a static HTML page :-) Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a HotFix in order for the ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work. Plan for non-ASP.NET Requests It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily ASP.NET creates a full Request and full Response object for you even for non-ASP.NET content. So even for static files, and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with. 
As always with module design, make sure you check for the conditions in your code that make the module applicable, and if a filter fails, exit immediately - minimize the code that runs if your module doesn't need to process the request. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, ASP.NET
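As an illustration of that early-exit advice, here is a minimal sketch of a managed module that checks Request.FilePath and bails out immediately for content it does not care about. This is my own example, not code from the post; the module name and the extension list are assumptions made purely for illustration.

using System;
using System.IO;
using System.Web;

// Illustrative sketch only: a managed module that exits as early as possible
// for requests it does not need to process (e.g. static image and css files).
public class SelectiveContentModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        string extension = Path.GetExtension(app.Request.FilePath);

        // Filter check first: if this is not a request we care about, return
        // right away so the module adds as little overhead as possible.
        if (extension == ".jpg" || extension == ".png" || extension == ".css")
            return;

        // ...actual module logic for the requests we do want to handle...
        app.Context.Items["SelectiveContentModule"] = "processed";
    }

    public void Dispose() { }
}

The module would still be registered in the <modules> section as shown earlier; whether it also gets a preCondition="managedHandler" depends on whether it should see non-ASP.NET requests at all.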

    Read the article

  • Organizations & Architecture UNISA Studies – Chap 7

    - by MarkPearl
Learning Outcomes Name different device categories Discuss the functions and structure of I/O modules Describe the principles of Programmed I/O Describe the principles of Interrupt-driven I/O Describe the principles of DMA Discuss the evolution characteristics of I/O channels Describe different types of I/O interface Explain the principles of point-to-point and multipoint configurations Discuss the way in which a FireWire serial bus functions Discuss the principles of InfiniBand architecture External Devices An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories… Human readable – e.g. video display Machine readable – e.g. magnetic disk Communications – e.g. wifi card I/O Modules An I/O module has two major functions… Interface to the processor and memory via the system bus or central switch Interface to one or more peripheral devices by tailored data links Module Functions The major functions or requirements for an I/O module fall into the following categories… Control and timing Processor communication Device communication Data buffering Error detection The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following… Command decoding Data Status reporting Address recognition The I/O module must be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor. I/O Module Structure An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. There are 3 techniques possible for I/O operations… Programmed I/O Interrupt-driven I/O Direct Memory Access (DMA) Programmed I/O When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register. The I/O module takes no further actions to alert the processor. I/O Commands To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command. 
There are four types of I/O commands that an I/O module may receive when it is addressed by a processor… Control – used to activate a peripheral and tell it what to do Test – used to test various status conditions associated with an I/O module and its peripherals Read – causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer Write – causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly. I/O Instructions With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system – each device is given a unique identifier or address – when the processor issues an I/O command, the command contains the address of the desired device, thus each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible… Memory-mapped I/O Isolated I/O (for a detailed explanation read page 245 of the book) The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up. Interrupt-driven I/O Interrupt-driven I/O works as follows… The processor issues an I/O command to a module and then goes on to do some other useful work The I/O module then interrupts the processor to request service when it is ready to exchange data with the processor The processor then executes the data transfer and then resumes its former processing Interrupt Processing The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs… The device issues an interrupt signal to the processor The processor finishes execution of the current instruction before responding to the interrupt The processor tests for an interrupt – determines that there is one – and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed. The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers because the interrupt operation may modify these The interrupt handler processes the interrupt – this includes examination of status information relating to the I/O operation or other event that caused an interrupt When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers Finally, the PSW and program counter values from the stack are restored. 
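To make the contrast between programmed I/O and interrupt-driven I/O concrete, here is a small C# simulation. It is purely my own illustration, not part of the study notes: the MockDevice class stands in for an I/O module with a status register and an interrupt line.

using System;
using System.Threading;

// Illustrative simulation only: a mock I/O module with a status flag ("register")
// and an event that plays the role of an interrupt line.
class MockDevice
{
    private volatile bool _ready;
    public event Action TransferComplete;   // "interrupt" signal

    public bool Ready => _ready;            // "status register"

    public void StartRead()
    {
        _ready = false;
        new Thread(() =>
        {
            Thread.Sleep(100);              // pretend the transfer takes time
            _ready = true;
            TransferComplete?.Invoke();     // interrupt the "processor"
        }).Start();
    }
}

class Program
{
    static void Main()
    {
        var device = new MockDevice();

        // Programmed I/O: issue the command, then poll the status register,
        // doing no useful work until the device is ready.
        device.StartRead();
        while (!device.Ready) { /* busy wait - processor time wasted */ }
        Console.WriteLine("Programmed I/O: data ready after polling");

        // Interrupt-driven I/O: issue the command, carry on with other work,
        // and handle the data only when the device signals completion.
        var done = new ManualResetEventSlim();
        device.TransferComplete += () =>
        {
            Console.WriteLine("Interrupt-driven I/O: handler runs on the device's signal");
            done.Set();
        };
        device.StartRead();
        Console.WriteLine("Processor does other useful work in the meantime...");
        done.Wait();                        // only so the demo does not exit early
    }
}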
Design Issues Two design issues arise in implementing interrupt I/O… Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt? If multiple interrupts have occurred, how does the processor decide which one to process? Addressing device recognition, 4 general categories of techniques are in common use… Multiple interrupt lines Software poll Daisy chain Bus arbitration For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks… The I/O transfer rate is limited by the speed with which the processor can test and service a device The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer Direct Memory Access When large volumes of data are to be moved, an efficient technique is direct memory access (DMA). DMA Function DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (most common – referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending to the DMA module the following information… Whether a read or write is requested, using the read or write control line between the processor and the DMA module The address of the I/O device involved, communicated on the data lines The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register The number of words to be read or written, communicated via the data lines and stored in the data count register The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor; thus the processor is involved only at the beginning and end of the transfer. I/O Channels and Processors Characteristics of I/O Channels As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions – such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common… A selector channel controls multiple high-speed devices. A multiplexor channel can handle I/O with multiple devices at the same time, accepting or transmitting characters as fast as possible to multiple devices. 
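The four pieces of information the processor hands to the DMA module map naturally onto a small command structure. The sketch below is my own illustration of that hand-off and of the single interrupt at the end of the block transfer; none of the type or member names come from the notes.

using System;

// Illustrative sketch only: the command a processor sends to a DMA module and
// the module moving a whole block before raising a single completion interrupt.
enum DmaOperation { Read, Write }

class DmaModule
{
    public void Transfer(DmaOperation operation,   // read or write control line
                         int deviceAddress,        // which I/O device is involved
                         int[] memory,             // main memory (simulated)
                         int memoryStartAddress,   // held in the module's address register
                         int wordCount,            // held in the module's data count register
                         Action raiseInterrupt)    // interrupt line back to the processor
    {
        // The DMA module moves the entire block word by word without involving
        // the processor (simulated here as a simple copy loop).
        for (int i = 0; i < wordCount; i++)
        {
            if (operation == DmaOperation.Read)
                memory[memoryStartAddress + i] = ReadWordFromDevice(deviceAddress);
            else
                WriteWordToDevice(deviceAddress, memory[memoryStartAddress + i]);
        }

        raiseInterrupt();   // the processor is involved again only at this point
    }

    private int ReadWordFromDevice(int deviceAddress) => 0;           // placeholder
    private void WriteWordToDevice(int deviceAddress, int word) { }   // placeholder
}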
The external interface: FireWire and InfiniBand Types of Interfaces One major characteristic of the interface is whether it is serial or parallel… Parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously Serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows… The I/O module sends a control signal requesting permission to send data The peripheral acknowledges the request The I/O module transfers data The peripheral acknowledges receipt of data For a detailed explanation of FireWire and InfiniBand technology read pages 264 – 270 of the textbook

    Read the article

  • WSDL-world vs CLR-world – some differences

    - by nmarun
A change in mindset is required when switching between a typical CLR application and a web service application. There are some things in a CLR environment that just don’t add up in a WSDL arena (and vice-versa). I’m listing some of them here. When I say WSDL-world, I’m mostly talking with respect to a WCF Service and / or a Web Service. No (direct) Method Overloading: You definitely can have overloaded methods in a, say, Console application, but when it comes to a WCF / Web Services application, you need to adorn these overloaded methods with a special attribute so the service knows which specific method to invoke. When you’re working with WCF, use the Name property of the OperationContract attribute to provide unique names. 1: [OperationContract(Name = "AddInt")] 2: int Add(int arg1, int arg2); 3:  4: [OperationContract(Name = "AddDouble")] 5: double Add(double arg1, double arg2); By default, the proxy generates the code for this as: 1: [System.ServiceModel.OperationContractAttribute( 2: Action="http://tempuri.org/ILearnWcfService/AddInt", 3: ReplyAction="http://tempuri.org/ILearnWcfService/AddIntResponse")] 4: int AddInt(int arg1, int arg2); 5: 6: [System.ServiceModel.OperationContractAttribute( 7: Action="http://tempuri.org/ILearnWcfServiceExtend/AddDouble", 8: ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/AddDoubleResponse")] 9: double AddDouble(double arg1, double arg2); With Web Services, though, the story is slightly different. Even after setting the MessageName property of the WebMethod attribute, the proxy does not change the name of the method; only the underlying soap message changes. 1: [WebMethod] 2: public string HelloGalaxy() 3: { 4: return "Hello Milky Way!"; 5: } 6:  7: [WebMethod(MessageName = "HelloAnyGalaxy")] 8: public string HelloGalaxy(string galaxyName) 9: { 10: return string.Format("Hello {0}!", galaxyName); 11: } The one thing you need to remember is to set the WebServiceBinding accordingly. 1: [WebServiceBinding(ConformsTo = WsiProfiles.None)] The proxy is: 1: [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://tempuri.org/HelloGalaxy", 2: RequestNamespace="http://tempuri.org/", 3: ResponseNamespace="http://tempuri.org/", 4: Use=System.Web.Services.Description.SoapBindingUse.Literal, 5: ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)] 6: public string HelloGalaxy() 7:  8: [System.Web.Services.WebMethodAttribute(MessageName="HelloGalaxy1")] 9: [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://tempuri.org/HelloAnyGalaxy", 10: RequestElementName="HelloAnyGalaxy", 11: RequestNamespace="http://tempuri.org/", 12: ResponseElementName="HelloAnyGalaxyResponse", 13: ResponseNamespace="http://tempuri.org/", 14: Use=System.Web.Services.Description.SoapBindingUse.Literal, 15: ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)] 16: [return: System.Xml.Serialization.XmlElementAttribute("HelloAnyGalaxyResult")] 17: public string HelloGalaxy(string galaxyName) 18:  You see the calling method name is the same in the proxy; however, the soap message that gets generated is different. Using interchangeable data types: See details on this here. Type visibility: In a CLR-based application, if you mark a field as private, well we all know, it’s ‘private’. Coming to the WSDL side of things, in a Web Service, private fields and web methods will not get generated in the proxy. In WCF, however, all your operation contracts will be public as they get implemented from an interface. 
Even if your ServiceContract interface is declared internal/private, you will see it as a public interface in the proxy. This is because type visibility is a CLR concept and has no bearing on WCF. Also, if a private field has the [DataMember] attribute in a data contract, it will get emitted in the proxy class as a public property for the very same reason. 1: [DataContract] 2: public struct Person 3: { 4: [DataMember] 5: private int _x; 6:  7: [DataMember] 8: public int Id { get; set; } 9:  10: [DataMember] 11: public string FirstName { get; set; } 12:  13: [DataMember] 14: public string Header { get; set; } 15: } 16: } See, the ‘_x’ field is a private member with the [DataMember] attribute, but the proxy class shows it as below: 1: [System.Runtime.Serialization.DataMemberAttribute()] 2: public int _x { 3: get { 4: return this._xField; 5: } 6: set { 7: if ((this._xField.Equals(value) != true)) { 8: this._xField = value; 9: this.RaisePropertyChanged("_x"); 10: } 11: } 12: } Passing derived types to web methods / operation contracts: Once again, in a CLR application, I can have a derived class passed as a parameter where a base class is expected. I have the following set up for my WCF service. 1: [DataContract] 2: public class Employee 3: { 4: [DataMember(Name = "Id")] 5: public int EmployeeId { get; set; } 6:  7: [DataMember(Name="FirstName")] 8: public string FName { get; set; } 9:  10: [DataMember] 11: public string Header { get; set; } 12: } 13:  14: [DataContract] 15: public class Manager : Employee 16: { 17: [DataMember] 18: private int _x; 19: } 20:  21: // service contract 22: [OperationContract] 23: Manager SaveManager(Employee employee); 24:  25: // in my calling code 26: Manager manager = new Manager {_x = 1, FirstName = "abc"}; 27: manager = LearnWcfServiceClient.SaveManager(manager); The above will throw an exception; in short, it says that a Manager type was found where an Employee type was expected! Hierarchy flattening of interfaces in WCF: See details on this here. In the CLR world, you’ll see the entire hierarchy as is. That’s another difference. Using ref parameters: * you can use ref for parameters, but the operation contract should not be one-way (gives an error when you do an update service reference) => bad programming; create a return object that is composed of everything you need! This one kind of stumped me. Not sure why I tried this, but you can pass parameters prefixed with the ref keyword* (* terms and conditions apply). The main issue is this: how would we know that the changes made to a ‘ref’ input parameter are returned back from the service and updated in the local variable? Turns out both Web Services and WCF make this tracking happen by passing the input parameter in the response soap. This way, when the deserializer does its magic, it maps all the elements of the response xml, thereby updating our local variable. Here’s what I’m talking about. 1: [WebMethod(MessageName = "HelloAnyGalaxy")] 2: public string HelloGalaxy(ref string galaxyName) 3: { 4: string output = string.Format("Hello {0}", galaxyName); 5: if (galaxyName == "Andromeda") 6: { 7: galaxyName = string.Format("{0} (2.5 million light-years away)", galaxyName); 8: } 9: return output; 10: } This is how the request and response look in soapUI. As I said above, the behavior is quite similar for WCF as well. But the catch comes when you have one-way web methods / operation contracts. 
If you have an operation contract whose return type is void, that is marked one-way and that has ref parameters, then you’ll get an error message when you try to reference such a service. 1: [OperationContract(Name = "Sum", IsOneWay = true)] 2: void Sum(ref double arg1, ref double arg2); 3:  4: public void Sum(ref double arg1, ref double arg2) 5: { 6: arg1 += arg2; 7: } This is what I got when I did an update to my service reference. Makes sense, because a OneWay operation is… one-way – there’s no returning from this operation. You can also have a one-way web method: 1: [SoapDocumentMethod(OneWay = true)] 2: [WebMethod(MessageName = "HelloAnyGalaxy")] 3: public void HelloGalaxy(ref string galaxyName) This will throw an exception message similar to the one above when you try to update your web service reference. In the CLR space, there’s no such concept of a ‘one-way’ street! Yes, there’s void, but you very well can have ref parameters returned through such a method. Just a point here: although the ref/out concept sounds cool, it generally is a code smell. The better approach is to always return an object that is composed of everything you need returned from a method. These are some of the differences that we need to bear in mind when dealing with services, which are different from our daily ‘CLR’ life.
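Following the author's closing advice - return a composed object rather than relying on ref parameters - a minimal WCF-style sketch of the earlier Sum example might look like this. The type and member names are mine, chosen purely for illustration.

using System.Runtime.Serialization;
using System.ServiceModel;

// Illustrative sketch: instead of 'void Sum(ref double arg1, ref double arg2)',
// return a single data contract that carries everything the caller needs.
[DataContract]
public class SumResult
{
    [DataMember]
    public double Sum { get; set; }

    [DataMember]
    public double Arg1 { get; set; }   // echo the inputs back if the caller needs them

    [DataMember]
    public double Arg2 { get; set; }
}

[ServiceContract]
public interface ICalculatorService
{
    [OperationContract]
    SumResult Sum(double arg1, double arg2);
}

public class CalculatorService : ICalculatorService
{
    public SumResult Sum(double arg1, double arg2)
    {
        return new SumResult { Sum = arg1 + arg2, Arg1 = arg1, Arg2 = arg2 };
    }
}

Nothing here depends on ref tracking in the response soap, and the operation no longer has to choose between being one-way and returning data.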

    Read the article

  • Enabling XML-documentation for code contracts

    - by DigiMortal
One nice feature that code contracts offer is updating of code documentation. If you are using the source code documentation features of Visual Studio, then code contracts may automate some tasks you otherwise have to implement manually. In this posting I will show you some XML documentation files with documented contracts. I will also explain how this feature works. Enabling XML-documentation in project settings First, let's enable generation of code documentation under project settings. Open the project properties, move to the Build page and check the checkbox called "XML documentation file". Save the project settings and rebuild the project. When the project is built, go to the bin/Debug folder and open the XML file. Here is my XML. <?xml version="1.0"?> <doc>     <assembly>         <name>Eneta.Examples.CodeContracts.Testable</name>     </assembly>     <members>         <member name="T:Eneta.Examples.CodeContracts.Testable.Randomizer">             <summary>             Class for generating random integers in user specified range.             </summary>         </member>         <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.#ctor(Eneta.Examples.CodeContracts.Testable.IRandomGenerator)">             <summary>             Constructor of Randomizer. Initializes Randomizer class.             </summary>             <param name="generator">Instance of random number generator.</param>         </member>         <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.GetRandomFromRangeContracted(System.Int32,System.Int32)">             <summary>             Returns random integer in given range.             </summary>             <param name="min">Minimum value of random integer.</param>             <param name="max">Maximum value of random integer.</param>         </member>     </members> </doc> You can see nothing about code contracts here. Enabling code contracts documentation Code contracts have their own settings and conditions for documentation. Open the project properties and move to the Code Contracts tab. From the "Contract Reference Assembly" dropdown select Build and check the checkbox "Emit contracts into XML doc file". And again – save the project settings, build the project and go to the bin/Debug folder. Now you can see that there are two files for XML documentation: <assembly name>.XML <assembly name>.old.XML The first file is the documentation with contracts; the second file is the original documentation without contracts. Let's see now what is inside our new XML documentation file. <?xml version="1.0"?> <doc>   <assembly>     <name>Eneta.Examples.CodeContracts.Testable</name>   </assembly>   <members>     <member name="T:Eneta.Examples.CodeContracts.Testable.Randomizer">       <summary>             Class for generating random integers in user specified range.             </summary>     </member>     <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.#ctor(Eneta.Examples.CodeContracts.Testable.IRandomGenerator)">       <summary>             Constructor of Randomizer. Initializes Randomizer class.             </summary>       <param name="generator">Instance of random number generator.</param>     </member>     <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.GetRandomFromRangeContracted(System.Int32,System.Int32)">       <summary>             Returns random integer in given range.             </summary>       <param name="min">Minimum value of random integer.</param>       <param name="max">Maximum value of random integer.</param>       <requires description="Min must be less than max" exception="T:System.ArgumentOutOfRangeException">                 min &lt; max</requires>       <exception cref="T:System.ArgumentOutOfRangeException">                 min &gt;= max</exception>       <ensures description="Return value is out of range">                 Contract.Result&lt;int&gt;() &gt;= min &amp;&amp;                 Contract.Result&lt;int&gt;() &lt;= max</ensures>     </member>   </members> </doc> As you can see, code contracts are pretty well documented. The messages that I provided with code contracts are also available in the documentation. If I wrote very good and informative messages, then these messages are also very useful in the contracts documentation. Code contracts and Sandcastle Sandcastle knows nothing about code contracts by default. There is a separate package of files for Sandcastle that is provided by the code contracts installation. You can read from the code contracts manual: "Sandcastle (http://www.codeplex.com/Sandcastle) is a freely available tool that generates help files and web sites describing your APIs, based on the XML doc comments in your source code. The CodeContracts install contains a set of files that can be copied over a Sandcastle installation to take advantage of the additional contract information. The produced documentation adds a contract section to methods with declared requires and/or ensures. In order for Sandcastle to produce Contract sections, you need to patch a number of files in its installation. Please refer to the Sandcastle Readme.txt found under Start Menu/CodeContracts/Sandcastle for instructions. A future release of Sandcastle will hopefully support contract sections without the need for this patching step." Integrating code contracts documentation into Sandcastle will be one of my next postings about code contracts. Conclusion If you are using code documentation, then documentation about code contracts can be added very easily. All you have to do is enable XML documentation for contracts and build your project. Later you can use the Sandcastle files provided by the code contracts installer to integrate contracts documentation into your output documentation package.
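For reference, a method whose contracts would produce the requires/ensures entries shown above might look roughly like the sketch below. This is my reconstruction from the generated XML, not the author's original source; in particular, the IRandomGenerator interface and its Next method are assumed.

using System;
using System.Diagnostics.Contracts;

// Assumed shape of the generator dependency; only its existence is known from the XML.
public interface IRandomGenerator
{
    int Next(int min, int max);
}

/// <summary>
/// Class for generating random integers in user specified range.
/// </summary>
public class Randomizer
{
    private readonly IRandomGenerator _generator;

    /// <summary>
    /// Constructor of Randomizer. Initializes Randomizer class.
    /// </summary>
    /// <param name="generator">Instance of random number generator.</param>
    public Randomizer(IRandomGenerator generator)
    {
        _generator = generator;
    }

    /// <summary>
    /// Returns random integer in given range.
    /// </summary>
    /// <param name="min">Minimum value of random integer.</param>
    /// <param name="max">Maximum value of random integer.</param>
    public int GetRandomFromRangeContracted(int min, int max)
    {
        // These calls are what end up in the <requires> and <ensures>
        // elements of the contract-enriched XML documentation file.
        Contract.Requires<ArgumentOutOfRangeException>(min < max,
            "Min must be less than max");
        Contract.Ensures(Contract.Result<int>() >= min &&
                         Contract.Result<int>() <= max,
            "Return value is out of range");

        return _generator.Next(min, max);
    }
}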

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90  | Next Page >