Search Results

Search found 24117 results on 965 pages for 're write'.


  • C# async and actors

    - by Alex.Davies
    If you read my last post about async, you might be wondering what drove me to write such odd code in the first place. The short answer is that .NET Demon is written using NAct Actors. Actors are an old idea, which I believe deserve a renaissance under C# 5. The idea is to isolate each stateful object so that only one thread has access to its state at any point in time. That much should be familiar, it's equivalent to traditional lock-based synchronization. The different part is that actors pass "messages" to each other rather than calling a method and waiting for it to return. By doing that, each thread can only ever be holding one lock. This completely eliminates deadlocks, my least favourite concurrency problem. Most people who use actors take this quite literally, and there are plenty of frameworks which help you to create message classes and loops which can receive the messages, inspect what type of message they are, and process them accordingly. But I write C# for a reason. Do I really have to choose between using actors and everything I love about object orientation in C#? Type safety Interfaces Inheritance Generics As it turns out, no. You don't need to choose between messages and method calls. A method call makes a perfectly good message, as long as you don't wait for it to return. This is where asynchonous methods come in. I have used NAct for a while to wrap my objects in a proxy layer. As long as I followed the rule that methods must always return void, NAct queued up the call for later, and immediately released my thread. When I needed to get information out of other actors, I could use EventHandlers and callbacks (continuation passing style, for any CS geeks reading), and NAct would call me back in my isolated thread without blocking the actor that raised the event. Using callbacks looks horrible though. To remind you: m_BuildControl.FilterEnabledForBuilding(    projects,    enabledProjects = m_OutOfDateProjectFinder.FilterNeedsBuilding(        enabledProjects,             newDirtyProjects =             {                 ....... Which is why I'm really happy that NAct now supports async methods. Now, methods are allowed to return Task rather than just void. I can await those methods, and C# 5 will turn the rest of my method into a continuation for me. NAct will run the other method in the other actor's context, but will make sure that when my method resumes, we're back in my context. Neither actor was ever blocked waiting for the other one. Apart from when they were actually busy doing something, they were responsive to concurrent messages from other sources. To be fair, you could use async methods with lock statements to achieve exactly the same thing, but it's ugly. Here's a realistic example of an object that has a queue of data that gets passed to another object to be processed: class QueueProcessor {    private readonly ItemProcessor m_ItemProcessor = ...     private readonly object m_Sync = new object();    private Queue<object> m_DataQueue = ...    private List<object> m_Results = ...     public async Task ProcessOne() {         object data = null;         lock (m_Sync)         {             data = m_DataQueue.Dequeue();         }         var processedData = await m_ItemProcessor.ProcessData(data); lock (m_Sync)         {             m_Results.Add(processedData);         }     } } We needed to write two lock blocks, one to get the data to process, one to store the result. The worrying part is how easily we could have forgotten one of the locks. 
Compare that to the version using NAct: class QueueProcessorActor : IActor { private readonly ItemProcessor m_ItemProcessor = ... private Queue<object> m_DataQueue = ... private List<object> m_Results = ... public async Task ProcessOne()     {         // We are an actor, it's always thread-safe to access our private fields         var data = m_DataQueue.Dequeue();         var processedData = await m_ItemProcessor.ProcessData(data);         m_Results.Add(processedData);     } } You don't have to explicitly lock anywhere, NAct ensures that your code will only ever run on one thread, because it's an actor. Either way, async is definitely better than traditional synchronous code. Here's a diagram of what a typical synchronous implementation might do: The left side shows what is running on the thread that has the lock required to access the QueueProcessor's data. The red section is where that lock is held, but doesn't need to be. Contrast that with the async version we wrote above: Here, the lock is released in the middle. The QueueProcessor is free to do something else. Most importantly, even if the ItemProcessor sometimes calls the QueueProcessor, they can never deadlock waiting for each other. So I thoroughly recommend you use async for all code that has to wait a while for things. And if you find yourself writing lots of lock statements, think about using actors as well. Using actors and async together really takes the misery out of concurrent programming.
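
    NAct handles the proxying and thread marshalling for you, but the core message-pump idea is easy to sketch with plain .NET primitives. The following is a minimal, hypothetical illustration (it is not NAct's API, and it does not do the clever bit of resuming awaits back on the actor's context): every call posted to the mailbox runs one at a time on a single consumer task, so the state it touches never needs a lock.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        // Minimal "mailbox" sketch: each posted action runs one at a time on a
        // single consumer task, so state touched only inside Post(...) needs no locks.
        class SerialMailbox
        {
            private readonly BlockingCollection<Action> m_Messages = new BlockingCollection<Action>();

            public SerialMailbox()
            {
                Task.Factory.StartNew(() =>
                {
                    foreach (var message in m_Messages.GetConsumingEnumerable())
                        message();
                }, TaskCreationOptions.LongRunning);
            }

            // Fire-and-forget: the caller is never blocked, just like a void actor method.
            public void Post(Action message)
            {
                m_Messages.Add(message);
            }
        }

        // Usage sketch: the queue is only ever touched from inside the mailbox.
        class QueueProcessorSketch
        {
            private readonly SerialMailbox m_Mailbox = new SerialMailbox();
            private readonly Queue<object> m_DataQueue = new Queue<object>();

            public void Enqueue(object data)
            {
                m_Mailbox.Post(() => m_DataQueue.Enqueue(data));
            }
        }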

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below; I am trying to address some self inflicted design pain that is now leading to my application crashing due to out of memory errors. An abridged description of the problem domain is as follows; The application takes in a “dataset” that consists of numerous text files containing related data An individual text file within the dataset usually contains approx 20 “headers” that contain metadata about the data it contains. It also contains a large tab delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable from 2 to 256+ columns. The original application was written to allow users to load a dataset, map certain columns of each of the files which basically indicating key information on the files to show how they are related as well as identify a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset. The memory problem Given the fact the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from files into memory and then perform the following; perform all the validation whilst the data was in memory relate it using an object model Start DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns), then the Id of the newly written row is applied to all related data Once all related data had been updated with the key information to which it relates, these records are written using SqlBulkCopy. Due to our EAV model, we essentially have; x columns by y rows to write, where x can by 256+ and rows are often into the tens of thousands. Once all the data is written without error (can take several minutes for large datasets), Commit the transaction. The problem now comes from the fact we are now receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We’ve started seen datasets of around 100 megs coming in and I expect it is only going to get bigger from here on in. With files of this size, data can’t even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line and am not exactly decided on how to handle the import and transactions. Potential improvements I’ve wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. This would certainly increase the storage required though. Especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can’t be trusted to be unique across all submitters). 
    Use of staging tables to get data into the database and only performing the transaction to copy data from the staging area to the actual destination tables. Questions For systems like this that import large quantities of data, how do you go about keeping transactions small? I’ve kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution? The tab delimited data section is read into a DataTable to be viewed in a grid. I don’t need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight? Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above? Thanks for your kind attention.
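
    For the line-by-line parsing the poster anticipates moving to, a streaming approach along these lines is one option (a sketch only, with a hypothetical ValidateFile method and an assumed expected column count); it keeps a single row in memory at a time instead of materialising the whole file in a DataTable:

        using System;
        using System.IO;

        static class StreamingValidator
        {
            // Sketch: validate a large tab-delimited file one row at a time so the
            // file size no longer dictates the application's memory footprint.
            public static void ValidateFile(string path, int expectedColumns)
            {
                using (var reader = new StreamReader(path))
                {
                    string line;
                    int lineNumber = 0;
                    while ((line = reader.ReadLine()) != null)
                    {
                        lineNumber++;
                        var fields = line.Split('\t');
                        if (fields.Length != expectedColumns)
                            throw new InvalidDataException(string.Format(
                                "Line {0}: expected {1} columns but found {2}.",
                                lineNumber, expectedColumns, fields.Length));
                        // ...apply the remaining business rules to 'fields' here...
                    }
                }
            }
        }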

    Read the article

  • Sort Data in Windows Phone using Collection View Source

    - by psheriff
    When you write a Windows Phone application you will most likely consume data from a web service somewhere. If that service returns data to you in a sort order that you do not want, you have an easy alternative to sort the data without writing any C# or VB code. You use the built-in CollectionViewSource object in XAML to perform the sorting for you. This assumes that you can get the data into a collection that implements the IEnumerable or IList interfaces.For this example, I will be using a simple Product class with two properties, and a list of Product objects using the Generic List class. Try this out by creating a Product class as shown in the following code:public class Product {  public Product(int id, string name)   {    ProductId = id;    ProductName = name;  }  public int ProductId { get; set; }  public string ProductName { get; set; }}Create a collection class that initializes a property called DataCollection with some sample data as shown in the code below:public class Products : List<Product>{  public Products()  {    InitCollection();  }  public List<Product> DataCollection { get; set; }  List<Product> InitCollection()  {    DataCollection = new List<Product>();    DataCollection.Add(new Product(3,        "PDSA .NET Productivity Framework"));    DataCollection.Add(new Product(1,        "Haystack Code Generator for .NET"));    DataCollection.Add(new Product(2,        "Fundamentals of .NET eBook"));    return DataCollection;  }}Notice that the data added to the collection is not in any particular order. Create a Windows Phone page and add two XML namespaces to the Page.xmlns:scm="clr-namespace:System.ComponentModel;assembly=System.Windows"xmlns:local="clr-namespace:WPSortData"The 'local' namespace is an alias to the name of the project that you created (in this case WPSortData). The 'scm' namespace references the System.Windows.dll and is needed for the SortDescription class that you will use for sorting the data. Create a phone:PhoneApplicationPage.Resources section in your Windows Phone page that looks like the following:<phone:PhoneApplicationPage.Resources>  <local:Products x:Key="products" />  <CollectionViewSource x:Key="prodCollection"      Source="{Binding Source={StaticResource products},                       Path=DataCollection}">    <CollectionViewSource.SortDescriptions>      <scm:SortDescription PropertyName="ProductName"                           Direction="Ascending" />    </CollectionViewSource.SortDescriptions>  </CollectionViewSource></phone:PhoneApplicationPage.Resources>The first line of code in the resources section creates an instance of your Products class. The constructor of the Products class calls the InitCollection method which creates three Product objects and adds them to the DataCollection property of the Products class. Once the Products object is instantiated you now add a CollectionViewSource object in XAML using the Products object as the source of the data to this collection. A CollectionViewSource has a SortDescriptions collection that allows you to specify a set of SortDescription objects. Each object can set a PropertyName and a Direction property. As you see in the above code you set the PropertyName equal to the ProductName property of the Product object and tell it to sort in an Ascending direction.All you have to do now is to create a ListBox control and set its ItemsSource property to the CollectionViewSource object. 
The ListBox displays the data in sorted order by ProductName and you did not have to write any LINQ queries or write other code to sort the data!<ListBox    ItemsSource="{Binding Source={StaticResource prodCollection}}"   DisplayMemberPath="ProductName" />SummaryIn this blog post you learned that you can sort any data without having to change the source code of where the data comes from. Simply feed the data into a CollectionViewSource in XAML and set some sort descriptions in XAML and the rest is done for you! This comes in very handy when you are consuming data from a source where the data is given to you and you do not have control over the sorting.NOTE: You can download this article and many samples like the one shown in this blog entry at my website. http://www.pdsa.com/downloads. Select “Tips and Tricks”, then “Sort Data in Windows Phone using Collection View Source” from the drop down list.Good Luck with your Coding,Paul Sheriff** SPECIAL OFFER FOR MY BLOG READERS **We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!
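
    If you ever need the same sort from code-behind instead of XAML (say, when the sort column is chosen at runtime), the equivalent can be sketched in C# as below. This is my own illustration rather than part of the article; it assumes the same Products class, a ListBox named MyListBox, and that the code sits in the page constructor after InitializeComponent():

        using System.ComponentModel;
        using System.Windows.Data;

        // Sketch: build the sorted view in code instead of declaring it in XAML.
        var products = new Products();
        var viewSource = new CollectionViewSource { Source = products.DataCollection };
        viewSource.SortDescriptions.Add(
            new SortDescription("ProductName", ListSortDirection.Ascending));

        // Bind the ListBox to the sorted view rather than the raw list.
        MyListBox.ItemsSource = viewSource.View;
        MyListBox.DisplayMemberPath = "ProductName";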

    Read the article

  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL Scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly. You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL Scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build so you can fix and redeploy the database very quickly. The alternative approach (i.e. doing changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data – a common problem caused by changing a table without re-creating dependent views. Using SQL Projects in SSMS A great many developers fail to make use of SQL Projects in SSMS (SQL Server Management Studio). To me they are invaluable way of organizing your SQL Scripts. The screenshot below shows a typical SSMS solution made up of several projects – one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in file system because of the project name. The number in the folder or file name ensures that the projects the SQL scripts are concatenated together in the order that they need to be executed. Hence the script filenames start with 100, 110 etc. Concatenating SQL Scripts To concatenate the SQL Scripts together into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL Script files into a combined file. As the SQL Scripts are in several folders, I simply use several TYPE command multiple times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) means write output of the program into a file. Two angled brackets (>>) means append output of this program into a file. So the command-line DIR > filelist.txt would write the content of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql If, like me you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only" which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag. Using SQLCmd to execute the concatenated file Now that the SQL Scripts are all in one big file, we can execute the script against a database using SQLCmd using another batch file as shown below: SQLCmd has numerous options, but the script shown above simply executes the SS_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log. 
So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run the SQLCmd to rebuild the database. This two click operation allows you to quickly identify and fix errors in your entire database definition.Using SSIS to execute the concatenated file To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package and set the database connection as normal and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go.   Tips and TricksAdd a new-line at end of every fileThe most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command. The easy fix to this is to ensure all your files have a new-line at the end.Remove all USE database statementsThe SQLCmd identifies which database the script should be run against.  So you should remove all USE database commands from your scripts - otherwise you may get unintentional side effects!!Do the Create Database separatelyIf you are using SSIS to create the database as well as create the objects and populate the database, then invoke the CREATE DATABASE command against the master database using a separate package before calling the package that executes the concatenated SQL script.    
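
    If you would rather not maintain a batch file, the same concatenation (including the fix for the missing new-line at the end of each file) can be done with a few lines of managed code. This is an illustrative sketch, not part of DrJohn's solution; the folder layout and naming are assumed to follow the numeric-prefix convention described above:

        using System;
        using System.IO;
        using System.Linq;

        static class ScriptConcatenator
        {
            // Sketch: concatenate every .sql file under the solution folder into one
            // script, relying on the 100/110 style prefixes for execution order and
            // writing a newline after each file so GO statements stay on their own line.
            public static void Concatenate(string rootFolder, string outputFile)
            {
                var scripts = Directory
                    .GetFiles(rootFolder, "*.sql", SearchOption.AllDirectories)
                    .OrderBy(p => p, StringComparer.OrdinalIgnoreCase);

                using (var output = new StreamWriter(outputFile, false))
                {
                    foreach (var script in scripts)
                    {
                        output.WriteLine("-- Source: " + script);
                        output.WriteLine(File.ReadAllText(script).TrimEnd());
                        output.WriteLine();  // guarantees a line break before the next file
                    }
                }
            }
        }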

    Read the article

  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc. to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don’t have the option of custom templates. I have decided on using a string based field within the template that is not very visible and which we don’t use, writing a small set of custom codes into it which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure (Note: we are also using Visual Studio 2012). Now TFS Azure uses your Live ID and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won’t work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don’t appear to be answers at all given they didn’t work in my scenario. So for anyone else who wants to do this, here is a simple breakdown on what you have to do:

    Go here and get the “TFS Service Credential Viewer”. Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don’t bother trying to do anything else.

    In your MVC app, reference the following assemblies from “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0”: Microsoft.TeamFoundation.Client.dll, Microsoft.TeamFoundation.Common.dll, Microsoft.TeamFoundation.VersionControl.Client.dll, Microsoft.TeamFoundation.VersionControl.Common.dll, Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll, Microsoft.TeamFoundation.WorkItemTracking.Client.dll, Microsoft.TeamFoundation.WorkItemTracking.Common.dll.

    If hosting this in Internet Information Server, you will need to enable 32 bit support for the application pool this app runs under. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don’t do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below:

        <appSettings>
          <!-- Add reference to TFS Client Cache -->
          <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
        </appSettings>

    With all that in place, you can write the following code:

        var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential(
            "{your-service-account-name}", "{your-service-acct-password}");
        var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
        var currentCollection = new TfsTeamProjectCollection(
            new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
        currentCollection.EnsureAuthenticated();

    In the above code, note that the URL contains “defaultcollection” at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance.
    In addition, make sure the service account username and password that were generated in the first step are substituted in here. Note: if something is not right, the EnsureAuthenticated() call will throw an exception with a message saying you are not authorised. If you forget the “defaultcollection” on the URL, it will still fail, but with a similar yet different “not authorised” message. And that is it. You can then query the collection using something like:

        var service = currentCollection.GetService<WorkItemStore>();
        var proj = service.Projects[0];
        var allQueries = proj.StoredQueries;
        for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
        {
            var query = allQueries[qcnt];
            var queryDesc = string.Format("Query found named: {0}", query.Name);
        }

    You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method and it all looked too hard, since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.
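
    If you want to pull work items directly rather than walking the stored queries, a WIQL query against the same WorkItemStore is another option. The snippet below is my own sketch reusing the authenticated currentCollection from above; the project name and work item type are purely illustrative:

        using System;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        // Sketch: query work items with WIQL through the authenticated collection.
        var store = currentCollection.GetService<WorkItemStore>();
        var workItems = store.Query(
            "SELECT [System.Id], [System.Title] FROM WorkItems " +
            "WHERE [System.TeamProject] = 'MyProject' " +
            "AND [System.WorkItemType] = 'User Story'");

        foreach (WorkItem item in workItems)
        {
            Console.WriteLine("{0}: {1}", item.Id, item.Title);
        }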

    Read the article

  • C#/.NET Little Wonders: Static Char Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Often times in our code we deal with the bigger classes and types in the BCL, and occasionally forgot that there are some nice methods on the primitive types as well.  Today we will discuss some of the handy static methods that exist on the char (the C# alias of System.Char) type. The Background I was examining a piece of code this week where I saw the following: 1: // need to get the 5th (offset 4) character in upper case 2: var type = symbol.Substring(4, 1).ToUpper(); 3:  4: // test to see if the type is P 5: if (type == "P") 6: { 7: // ... do something with P type... 8: } Is there really any error in this code?  No, but it still struck me wrong because it is allocating two very short-lived throw-away strings, just to store and manipulate a single char: The call to Substring() generates a new string of length 1 The call to ToUpper() generates a new upper-case version of the string from Step 1. In my mind this is similar to using ToUpper() to do a case-insensitive compare: it isn’t wrong, it’s just much heavier than it needs to be (for more info on case-insensitive compares, see #2 in 5 More Little Wonders). One of my favorite books is the C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Sutter and Alexandrescu.  True, it’s about C++ standards, but there’s also some great general programming advice in there, including two rules I love:         8. Don’t Optimize Prematurely         9. Don’t Pessimize Prematurely We all know what #8 means: don’t optimize when there is no immediate need, especially at the expense of readability and maintainability.  I firmly believe this and in the axiom: it’s easier to make correct code fast than to make fast code correct.  Optimizing code to the point that it becomes difficult to maintain often gains little and often gives you little bang for the buck. But what about #9?  Well, for that they state: “All other things being equal, notably code complexity and readability, certain efficient design patterns and coding idioms should just flow naturally from your fingertips and are no harder to write then the pessimized alternatives. This is not premature optimization; it is avoiding gratuitous pessimization.” Or, if I may paraphrase: “where it doesn’t increase the code complexity and readability, prefer the more efficient option”. The example code above was one of those times I feel where we are violating a tacit C# coding idiom: avoid creating unnecessary temporary strings.  The code creates temporary strings to hold one char, which is just unnecessary.  I think the original coder thought he had to do this because ToUpper() is an instance method on string but not on char.  What he didn’t know, however, is that ToUpper() does exist on char, it’s just a static method instead (though you could write an extension method to make it look instance-ish). This leads me (in a long-winded way) to my Little Wonders for the day… Static Methods of System.Char So let’s look at some of these handy, and often overlooked, static methods on the char type: IsDigit(), IsLetter(), IsLetterOrDigit(), IsPunctuation(), IsWhiteSpace() Methods to tell you whether a char (or position in a string) belongs to a category of characters. 
    IsLower(), IsUpper() Methods that check if a char (or position in a string) is lower or upper case ToLower(), ToUpper() Methods that convert a single char to the lower or upper equivalent. For example, if you wanted to see if a string contained any lower case characters, you could do the following: 1: if (symbol.Any(c => char.IsLower(c))) 2: { 3: // ... 4: } Which, incidentally, we could use a method group to shorten the expression to: 1: if (symbol.Any(char.IsLower)) 2: { 3: // ... 4: } Or, if you wanted to verify that all of the characters in a string are digits: 1: if (symbol.All(char.IsDigit)) 2: { 3: // ... 4: } Also, for the IsXxx() methods, there are overloads that take either a char, or a string and an index; this means that these two calls are logically identical: 1: // check given a character 2: if (char.IsUpper(symbol[0])) { ... } 3:  4: // check given a string and index 5: if (char.IsUpper(symbol, 0)) { ... } Obviously, if you just have a char, then you’d just use the first form.  But if you have a string you can use either form equally well. As a side note, care should be taken when examining all the available static methods on the System.Char type, as some seem to be redundant but actually have very different purposes.  For example, there are IsDigit() and IsNumber() methods, which sound the same on the surface, but give you different results. IsDigit() returns true if it is a base-10 digit character (‘0’, ‘1’, … ‘9’), whereas IsNumber() returns true if it’s any numeric character, including the characters for ½, ¼, etc. Summary To come full circle back to our opening example, I would have preferred the code be written like this: 1: // grab 5th char and take upper case version of it 2: var type = char.ToUpper(symbol[4]); 3:  4: if (type == 'P') 5: { 6: // ... do something with P type... 7: } Not only is it just as readable (if not more so), but it performs over 3x faster on my machine:    1,000,000 iterations of char method took: 30 ms, 0.000050 ms/item.    1,000,000 iterations of string method took: 101 ms, 0.000101 ms/item. It’s not only immediately faster because we don’t allocate temporary strings, but as an added bonus there is less garbage to collect later as well.  To me this qualifies as a case where we are using a common C# performance idiom (don’t create unnecessary temporary strings) to make our code better. Technorati Tags: C#,CSharp,.NET,Little Wonders,char,string
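
    As the post notes in passing, you could write an extension method to make the static char helpers look instance-ish. A small sketch of what that might look like (my own illustration, not part of the BCL):

        public static class CharExtensions
        {
            // Thin, instance-looking wrappers over the static System.Char helpers.
            public static char ToUpper(this char c) { return char.ToUpper(c); }
            public static char ToLower(this char c) { return char.ToLower(c); }
            public static bool IsDigit(this char c) { return char.IsDigit(c); }
        }

        // Usage: still no temporary strings are allocated.
        // var type = symbol[4].ToUpper();
        // if (type == 'P') { ... }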

    Read the article

  • My Red Gate Experience

    - by Colin Rothwell
    I’m Colin, and I’ve been an intern working with Mike in publishing on Simple-Talk and SQLServerCentral for the past ten weeks. I’ve mostly been working “behind the scenes”, making improvements to the spam filtering, along with various other small tweaks. When I arrived at Red Gate, one of the first things Mike asked me was what I wanted to get out of the internship. It wasn’t a question I’d given a great deal of thought to, but my immediate response was the same as almost anybody: to support my growing family. Well, ok, not quite that, but money was certainly a motivator, along with simply making sure that I didn’t get bored over the summer. Three months is a long time to fill, and many of my friends end up getting bored, or worse, knitting obsessively. With the arrogance which seems fairly common among Cambridge people, I wasn’t expecting to really learn much here! In my mind, the part of the year where I am at Uni is the part where I learn things, whilst Red Gate would be an opportunity to apply what I’d learnt. Thankfully, the opposite is true: I’ve learnt a lot during my time here, and there has been a definite positive impact on the way I write code. The first thing I’ve really learnt is that test-driven development is, in general, a sensible way of working. Before coming, I didn’t really get it: how could you test something you hadn’t yet written? It didn’t make sense! My problem was seeing a test as having to test all the behaviour of a given function. Writing tests which test the bare minimum possible and building them up is a really good way of crystallising the direction the code needs to grow in, and ensures you never attempt to write too much code at time. One really good experience of this was early on in my internship when Mike and I were working on the query used to list active authors: I’d written something which I thought would do the trick, but by starting again using TDD we grew something which revealed that there were several subtle mistakes in the query I’d written. I’ve also been awakened to the value of pair programming. Whilst I could sort of see the point before coming, I also thought that it was impossible that two people would ever get more done at the same computer than if they were working separately. I still think that this is true for projects with pieces that developers can easily work on independently, and with developers who both know the codebase, but I’ve found that pair programming can be really good for learning a code base, and for building up small projects to the point where you can start working on separate components, as well as solving particularly difficult problems. Later on in my internship, for my down tools week project, I was working on adding Python support to Glimpse. Another intern and I we pair programmed the entire project, using ping pong pair programming as much as possible. One bonus that this brought which I wasn’t expecting was that I found myself less prone to distraction: with someone else peering over my shoulder, I didn’t have the ever-present temptation to open gmail, or facebook, or yammer, or twitter, or hacker news, or reddit, and so on, and so forth. I’m quite proud of this project: I think it’s some of the best code I’ve written. I’ve also been really won over to the value of descriptive variables names. In my pre-Red Gate life, as a lone-ranger style cowboy programmer, I’d developed a tendency towards laziness in variable names, sometimes abbreviating or, worse, using acronyms. 
I’ve swiftly realised that this is a bad idea when working with a team: saving a few key strokes is inevitably not worth it when it comes to reading code again in the future. Longer names also mean you can do away with a majority of comments. I appreciate that if you’ve come up with an O(n*log n) algorithm for something which seemed O(n^2), you probably want to explain how it works, but explaining what a variable name means is a big no no: it’s so very easy to change the behaviour of the code, whilst forgetting about the comments. Whilst at Red Gate, I took the opportunity to attend a code retreat, which really helped me to solidify all the things I’d learnt. To be completely free of any existing code base really lets you focus on best practises and think about how you write code. If you get a chance to go on a similar event, I’d highly recommend it! Cycling to Red Gate, I’ve also become much better at fitting inner tubes: if you’re struggling to get the tube out, or re-fit the tire, letting a bit of air out usually helps. I’ve also become quite a bit better at foosball and will miss having a foosball table! I’d like to finish off by saying thank you to everyone at Red Gate for having me. I’ve really enjoyed working with, and learning from, the team that brings you this web site. If you meet any of them, buy them a drink!

    Read the article

  • External USB 3 drive not recognized

    - by ilan123
    Ubuntu 12.10 64 bit seems not to recognize my external hard disk. It is a Vantec NST-310S3 external disk enclosure with a WD 3TB drive. The disk has two NTFS partitions. My PC is a dual boot system. Under Windows 7 the hard disk works fine but I can't make it work with Ubuntu. When the drive is connected to the PC then the command sudo fdisk -l seems to hang forever. Below are the output of lsusb and cat /proc/partitions without the external drive and then with it connected. I added also the last lines of the dmesg command at the end. First without the drive: ilan@linux:~$ lsusb Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 003: ID 13ba:0017 Unknown PS/2 Keyboard+Mouse Adapter Bus 001 Device 004: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver Bus 001 Device 005: ID 0ac8:3420 Z-Star Microelectronics Corp. Venus USB2.0 Camera ilan@linux:~$ cat /proc/partitions major minor #blocks name 8 0 1953514584 sda 8 1 102400 sda1 8 2 629043200 sda2 8 3 367001600 sda3 8 4 1 sda4 8 5 471859200 sda5 8 6 157286400 sda6 8 7 324115456 sda7 8 8 4101120 sda8 11 0 1048575 sr0 Second with the USB 3 drive: ilan@linux:~$ lsusb Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 004 Device 002: ID 174c:55aa ASMedia Technology Inc. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 003: ID 13ba:0017 Unknown PS/2 Keyboard+Mouse Adapter Bus 001 Device 004: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver Bus 001 Device 005: ID 0ac8:3420 Z-Star Microelectronics Corp. Venus USB2.0 Camera ilan@linux:~$ cat /proc/partitions major minor #blocks name 8 0 1953514584 sda 8 1 102400 sda1 8 2 629043200 sda2 8 3 367001600 sda3 8 4 1 sda4 8 5 471859200 sda5 8 6 157286400 sda6 8 7 324115456 sda7 8 8 4101120 sda8 11 0 1048575 sr0 8 16 2930266584 sdb ilan@linux:~$ lsusb -v -s 004:002 Bus 004 Device 002: ID 174c:55aa ASMedia Technology Inc. Couldn't open device, some information will be missing Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 9 idVendor 0x174c ASMedia Technology Inc. 
idProduct 0x55aa bcdDevice 1.00 iManufacturer 2 iProduct 3 iSerial 1 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 44 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk-Only iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 ilan@linux:~$ sudo fdisk -l [sudo] password for ilan: Disk /dev/sda: 2000.4 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xf1b4f1ee Device Boot Start End Blocks Id System /dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 206848 1258293247 629043200 7 HPFS/NTFS/exFAT /dev/sda3 1258293248 1992296447 367001600 7 HPFS/NTFS/exFAT /dev/sda4 1992298494 3907028991 957365249 f W95 Ext'd (LBA) /dev/sda5 1992298496 2936016895 471859200 7 HPFS/NTFS/exFAT /dev/sda6 2936018944 3250591743 157286400 7 HPFS/NTFS/exFAT /dev/sda7 3250593792 3898824703 324115456 83 Linux /dev/sda8 3898826752 3907028991 4101120 82 Linux swap / Solaris dmesg output after connecting the external drive: [ 23.740567] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx [ 23.740786] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 49.144673] usb 4-1: >new SuperSpeed USB device number 2 using xhci_hcd [ 49.163039] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted. [ 49.166789] usb 4-1: >New USB device found, idVendor=174c, idProduct=55aa [ 49.166793] usb 4-1: >New USB device strings: Mfr=2, Product=3, SerialNumber=1 [ 49.166796] usb 4-1: >Product: AS2105 [ 49.166799] usb 4-1: >Manufacturer: ASMedia [ 49.166801] usb 4-1: >SerialNumber: 0123456789ABCDEF [ 49.206372] usbcore: registered new interface driver uas [ 49.228891] Initializing USB Mass Storage driver... [ 49.229042] scsi6 : usb-storage 4-1:1.0 [ 49.229115] usbcore: registered new interface driver usb-storage [ 49.229116] USB Mass Storage support registered. [ 64.045528] scsi 6:0:0:0: >Direct-Access WDC WD30 EZRX-00MMMB0 80.0 PQ: 0 ANSI: 0 [ 64.046224] sd 6:0:0:0: >Attached scsi generic sg2 type 0 [ 64.046881] sd 6:0:0:0: >[sdb] Very big device. Trying to use READ CAPACITY(16). [ 64.047610] sd 6:0:0:0: >[sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB) [ 64.048368] sd 6:0:0:0: >[sdb] Write Protect is off [ 64.048373] sd 6:0:0:0: >[sdb] Mode Sense: 23 00 00 00 [ 64.048984] sd 6:0:0:0: >[sdb] No Caching mode page present [ 64.048987] sd 6:0:0:0: >[sdb] Assuming drive cache: write through [ 64.049297] sd 6:0:0:0: >[sdb] Very big device. Trying to use READ CAPACITY(16). 
[ 64.050942] sd 6:0:0:0: >[sdb] No Caching mode page present [ 64.050944] sd 6:0:0:0: >[sdb] Assuming drive cache: write through [ 94.245006] usb 4-1: >reset SuperSpeed USB device number 2 using xhci_hcd [ 94.262553] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted. [ 94.263805] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c00 [ 94.263808] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c40 [ 125.262722] usb 4-1: >reset SuperSpeed USB device number 2 using xhci_hcd [ 125.280304] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted. [ 125.281511] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c00 [ 125.281516] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c40

    Read the article

  • Caveat utilitor - Can I run two versions of Microsoft Project side-by-side?

    - by Martin Hinshelwood
    A number of out customers have asked if there are any problems in installing and running multiple versions of Microsoft Project on a single client. Although this is a case of Caveat utilitor (Let the user beware), as long as the user understands and accepts the issues that can occur then they can do this. Although Microsoft provide the ability to leave old versions of Office products (except Outlook) on your client when you are installing a new version of the product they certainly do not endorse doing so. Figure: For Project you can choose to keep the old stuff   That being the case I would have preferred that they put a “(NOT RECOMMENDED)” after the options to impart that knowledge to the rest of us, but they did not. The default and recommended behaviour is for the newer version installer to remove the older versions. Of course this does not apply in the revers. There are no forward compatibility packs for Office. There are a number of negative behaviours (or bugs) that can occur in this configuration: There is only one MS Project In Windows a file extension can only be associated with a single program.  In this case, MPP files can be associated with only one version of winproj.exe.  The executables are in different folders so if a user double-clicks a Project file on the desktop, file explorer, or Outlook email, Windows will launch the winproj.exe associated with MPP and then load the MPP file.  There are problems associated with this situation and in some cases workarounds. The user double-clicks on a Project 2010 file, Project 2007 launches but is unable to open the file because it is a newer version.  The workaround is for the user to launch Project 2010 from the Start menu then open the file.  If the file is attached to an email they will need to first drag the file to the desktop. All your linked MS Project files need to be of the same version There are a number of problems that occur when people use on Microsoft’s Object Linking and Embedding (OLE) technology.  The three common uses of OLE are: for inserted projects where a Master project contains sub-projects and each sub-project resides in its own MPP file shared resource pools where multiple MPP files share a common resource pool kept in a single MPP file cross-project links where a task or milestone in one MPP file has a  predecessor/successor relationship with a task or milestone in a different MPP file What I’ve seen happen before is that if you are running in a version of Project that is not associated with the MPP extension and then try and activate an OLE link then Project tries to launch the other version of Project.  Things start getting very confused since different MPP files are being controlled by different versions of Project running at the same time.  I haven’t tried this in awhile so I can’t give you exact symptoms but I suspect that if Project 2010 is involved the symptoms will be different then in a Project 2003/2007 scenario.  I’ve noticed that Project 2010 gives different error messages for the exact same problem when it occurs in Project 2003 or 2007.  -Anonymous The recommendation would be either not to use this feature if you have to have multiple versions of Project installed or to use only a single version of Project. You may get unexpected negative behaviours if you are using shared resource pools or resource pools even when you are not running multiple versions as I have found that they can get broken very easily. 
If you need these thing then it is probably best to use Project Server as it was created to solve many of these specific issues. Note: I would not even allow multiple people to access a network copy of a Project file because of the way Windows locks files in write mode. This can cause write-locks that get so bad a server restart is required I’ve seen user’s files get write-locked to the point where the only resolution is to reboot the server. Changing the default version to run for an extension So what if you want to change the default association from Project 2007 to Project 2010?   Figure: “Control Panel | Folder Options | Change the file associated with a file extension” Windows normally only lists the last version installed for a particular extension. You can select a specific version by selecting the program you want to change and clicking “Change program… | Browse…” and then selecting the .exe you want to use on the file system. Figure: You will need to select the exact version of “winproj.exe” that you want to run Conclusion Although it is possible to run multiple versions of Project on one system in the main it does not really make sense.

    Read the article

  • Session Report - Modern Software Development Anti-Patterns

    - by Janice J. Heiss
    In this standing-room-only session, building upon his 2011 JavaOne Rock Star “Diabolical Developer” session, Martijn Verburg, this time along with Ben Evans, identified and explored common “anti-patterns” – ways of doing things that keep developers from doing their best work. They emphasized the importance of social interaction and team communication, along with identifying certain psychological pitfalls that lead developers astray. Their emphasis was less on technical coding errors and more how to function well and to keep one’s focus on what really matters. They are the authors of the highly regarded The Well-Grounded Java Developer and are both movers and shakers in the London JUG community and on the Java Community Process. The large room was packed as they gave a fast-moving, witty presentation with lots of laughs and personal anecdotes. Below are a few of the anti-patterns they discussed.Anti-Pattern One: Conference-Driven DeliveryThe theme here is the belief that “Real pros hack code and write their slides minutes before their talks.” Their response to this anti-pattern is an expression popular in the military – PPPPPP, which stands for, “Proper preparation prevents piss-poor performance.”“Communication is very important – probably more important than the code you write,” claimed Verburg. “The more you speak in front of large groups of people the easier it gets, but it’s always important to do dry runs, to present to smaller groups. And important to be members of user groups where you can give presentations. It’s a great place to practice speaking skills; to gain new skills; get new contacts, to network.”They encouraged attendees to record themselves and listen to themselves giving a presentation. They advised them to start with a spouse or friends if need be. Learning to communicate to a group, they argued, is essential to being a successful developer. The emphasis here is that software development is a team activity and good, clear, accessible communication is essential to the functioning of software teams. Anti-Pattern Two: Mortgage-Driven Development The main theme here was that, in a period of worldwide recession and economic stagnation, people are concerned about keeping their jobs. So there is a tendency for developers to treat knowledge as power and not share what they know about their systems with their colleagues, so when it comes time to fix a problem in production, they will be the only one who knows how to fix it – and will have made themselves an indispensable cog in a machine so you cannot be fired. So developers avoid documentation at all costs, or if documentation is required, put it on a USB chip and lock it in a lock box. As in the first anti-pattern, the idea here is that communicating well with your colleagues is essential and documentation is a key part of this. Social interactions are essential. Both Verburg and Evans insisted that increasingly, year by year, successful software development is more about communication than the technical aspects of the craft. Developers who understand this are the ones who will have the most success. Anti-Pattern Three: Distracted by Shiny – Always Use the Latest Technology to Stay AheadThe temptation here is to pick out some obscure framework, try a bit of Scala, HTML5, and Clojure, and always use the latest technology and upgrade to the latest point release of everything. Don’t worry if something works poorly because you are ahead of the curve. 
Verburg and Evans insisted that there need to be sound reasons for everything a developer does. Developers should not bring in something simply because for some reason they just feel like it or because it’s new. They recommended a site run by a developer named Matt Raible with excellent comparison spread sheets regarding Web frameworks and other apps. They praised it as a useful tool to help developers in their decision-making processes. They pointed out that good developers sometimes make bad choices out of boredom, to add shiny things to their CV, out of frustration with existing processes, or just from a lack of understanding. They pointed out that some code may stay in a business system for 15 or 20 years, but not all code is created equal and some may change after 3 or 6 months. Developers need to know where the code they are contributing fits in. What is its likely lifespan? Anti-Pattern Four: Design-Driven Design The anti-pattern: If you want to impress your colleagues and bosses, use design patents left, right, and center – MVC, Session Facades, SOA, etc. Or the UML modeling suite from IBM, back in the day… Generate super fast code. And the more jargon you can talk when in the vicinity of the manager the better.Verburg shared a true story about a time when he was interviewing a guy for a job and asked him what his previous work was. The interviewee said that he essentially took patterns and uses an approved book of Enterprise Architecture Patterns and applied them. Verburg was dumbstruck that someone could have a job in which they took patterns from a book and applied them. He pointed out that the idea that design is a separate activity is simply wrong. He repeated a saying that he uses, “You should pay your junior developers for the lines of code they write and the things they add; you should pay your senior developers for what they take away.”He explained that by encouraging people to take things away, the code base gets simpler and reflects the actual business use cases developers are trying to solve, as opposed to the framework that is being imposed. He told another true story about a project to decommission a very long system. 98% of the code was decommissioned and people got a nice bonus. But the 2% remained on the mainframe so the 98% reduction in code resulted in zero reduction in costs, because the entire mainframe was needed to run the 2% that was left. There is an incentive to get rid of source code and subsystems when they are no longer needed. The session continued with several more anti-patterns that were equally insightful.

    Read the article

  • A Basic Thread

    - by Joe Mayo
    Most of the programs written are single-threaded, meaning that they run on the main execution thread. For various reasons such as performance, scalability, and/or responsiveness additional threads can be useful. .NET has extensive threading support, from the basic threads introduced in v1.0 to the Task Parallel Library (TPL) introduced in v4.0. To get started with threads, it's helpful to begin with the basics; starting a Thread. Why Do I Care? The scenario I'll use for needing to use a thread is writing to a file.  Sometimes, writing to a file takes a while and you don't want your user interface to lock up until the file write is done. In other words, you want the application to be responsive to the user. How Would I Go About It? The solution is to launch a new thread that performs the file write, allowing the main thread to return to the user right away.  Whenever the file writing thread completes, it will let the user know.  In the meantime, the user is free to interact with the program for other tasks. The following examples demonstrate how to do this. Show Me the Code? The code we'll use to work with threads is in the System.Threading namespace, so you'll need the following using directive at the top of the file: using System.Threading; When you run code on a thread, the code is specified via a method.  Here's the code that will execute on the thread: private static void WriteFile() { Thread.Sleep(1000); Console.WriteLine("File Written."); } The call to Thread.Sleep(1000) delays thread execution. The parameter is specified in milliseconds, and 1000 means that this will cause the program to sleep for approximately 1 second.  This method happens to be static, but that's just part of this example, which you'll see is launched from the static Main method.  A thread could be instance or static.  Notice that the method does not have parameters and does not have a return type. As you know, the way to refer to a method is via a delegate.  There is a delegate named ThreadStart in System.Threading that refers to a method without parameters or return type, shown below: ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile); I'll show you the whole program below, but the ThreadStart instance above goes in the Main method. The thread uses the ThreadStart instance, fileWriterHandlerDelegate, to specify the method to execute on the thread: Thread fileWriter = new Thread(fileWriterHandlerDelegate); As shown above, the argument type for the Thread constructor is the ThreadStart delegate type. The fileWriterHandlerDelegate argument is an instance of the ThreadStart delegate type. This creates an instance of a thread and what code will execute, but the new thread instance, fileWriter, isn't running yet. You have to explicitly start it, like this: fileWriter.Start(); Now, the code in the WriteFile method is executing on a separate thread. Meanwhile, the main thread that started the fileWriter thread continues on it's own.  You have two threads running at the same time. Okay, I'm Starting to Get Glassy Eyed. How Does it All Fit Together? The example below is the whole program, pulling all the previous bits together. It's followed by its output and an explanation. 
using System; using System.Threading; namespace BasicThread { class Program { static void Main() { ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile); Thread fileWriter = new Thread(fileWriterHandlerDelegate); Console.WriteLine("Starting FileWriter"); fileWriter.Start(); Console.WriteLine("Called FileWriter"); Console.ReadKey(); } private static void WriteFile() { Thread.Sleep(1000); Console.WriteLine("File Written"); } } } And here's the output: Starting FileWriter Called FileWriter File Written So, Why are the Printouts Backwards? The output above corresponds to Console.Writeline statements in the program, with the second and third seemingly reversed. In a single-threaded program, "File Written" would print before "Called FileWriter". However, this is a multi-threaded (2 or more threads) program.  In multi-threading, you can't make any assumptions about when a given thread will run.  In this case, I added the Sleep statement to the WriteFile method to greatly increase the chances that the message from the main thread will print first. Without the Thread.Sleep, you could run this on a system with multiple cores and/or multiple processors and potentially get different results each time. Interesting Tangent but What Should I Get Out of All This? Going back to the main point, launching the WriteFile method on a separate thread made the program more responsive.  The file writing logic ran for a while, but the main thread returned to the user, as demonstrated by the print out of "Called FileWriter".  When the file write finished, it let the user know via another print statement. This was a very efficient use of CPU resources that made for a more pleasant user experience. Joe
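
    One thing the example above leaves out is how to hand data to the thread and how to wait for it to finish. The variation below is my own sketch of the same program using the ParameterizedThreadStart delegate and Thread.Join; the file path is purely illustrative:

        using System;
        using System.Threading;

        class BasicThreadWithData
        {
            static void Main()
            {
                // ParameterizedThreadStart lets Start() pass a single object argument.
                var fileWriter = new Thread(new ParameterizedThreadStart(WriteFile));

                Console.WriteLine("Starting FileWriter");
                fileWriter.Start(@"C:\temp\output.txt");
                Console.WriteLine("Called FileWriter");

                fileWriter.Join();   // block here until the worker thread has finished
                Console.WriteLine("FileWriter finished, safe to exit.");
            }

            private static void WriteFile(object state)
            {
                var path = (string)state;   // cast back from object
                Thread.Sleep(1000);         // simulate a slow write, as in the original
                Console.WriteLine("File Written to " + path);
            }
        }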

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. There are times when it is desirable to know who called the method or property you are currently executing.  Some applications of this could include logging libraries, or possibly even something more advanced that may serve up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time. Determining the caller using the stack One of the ways of doing this is to examine the call stack.  The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain all the way back to the beginning for the thread that has called you. You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated), or by using StackFrame which gets a single frame of the stack trace.  Both involve examining the call stack, which is a non-trivial task, so care should be taken not to do this in a performance-sensitive situation. For our simple example let's say we are going to reinvent the wheel and construct our own logging framework.  Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller.  We could easily do this as follows: 1: static void Log(object message) 2: { 3: // frame 1, true for source info 4: StackFrame frame = new StackFrame(1, true); 5: var method = frame.GetMethod(); 6: var fileName = frame.GetFileName(); 7: var lineNumber = frame.GetFileLineNumber(); 8: 9: // we'll just use a simple Console write for now 10: Console.WriteLine("{0}({1}):{2} - {3}", 11: fileName, lineNumber, method.Name, message); 12: } So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available) and then taking the information from the frame.  This works fine, and if we tested it out by calling from a file such as this: 1: // File c:\projects\test\CallerInfo\CallerInfo.cs 2: public class CallerInfo 3: { 4: static void Main() 5: { Log("Hello Logger!"); } 6: } We'd see this: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! This works well, and in fact StackTrace and StackFrame are still the best ways to examine deeper into the call stack.  But if you only want to get information on the caller of your method, there is another option… Determining the caller at compile-time In C# 5 (.NET 4.5), Microsoft added some attributes that can be supplied to optional parameters on a method to receive caller information.  These attributes can only be applied to optional parameters with explicit defaults.  Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time. 
These are the currently supported attributes, available in the System.Runtime.CompilerServices namespace: CallerFilePathAttribute – The path and name of the file that is calling your method. CallerLineNumberAttribute – The line number in the file where your method is being called. CallerMemberNameAttribute – The member that is calling your method. So let’s take a look at how our Log method would look using these attributes instead: 1: static void Log(object message, 2: [CallerMemberName] string memberName = "", 3: [CallerFilePath] string fileName = "", 4: [CallerLineNumber] int lineNumber = 0) 5: { 6: // we'll just use a simple Console write for now 7: Console.WriteLine("{0}({1}):{2} - {3}", 8: fileName, lineNumber, memberName, message); 9: } Again, calling this from our sample Main would give us the same result: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member.  Thus, it does not give you as rich a level of detail as a StackFrame (which can give you the calling type as well as deeper frames, for example).  Also, these are supported through optional parameters, which means we could call our new Log method like this: 1: // They're defaults, why not fill 'em in 2: Log("My message.", "Some member", "Some file", -13); In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods. These caveats aside, they do let you get similar information inside of methods at a much greater speed!  How much greater?  Well, let's crank through 1,000,000 iterations of each.  Instead of logging to the console, I’ll return the formatted string length of each.  Doing this, we get: 1: Time for 1,000,000 iterations with StackTrace: 5096 ms 2: Time for 1,000,000 iterations with Attributes: 196 ms So you see, using the attributes is much, much faster!  Roughly 26x faster, in fact.  Summary There are a few ways to get caller information for a method.  The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost.  On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it. Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName
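    As an aside that is not part of the original post: a very common day-to-day use of [CallerMemberName] is implementing INotifyPropertyChanged without hard-coding property names as strings. A minimal sketch (the Person class and its Name property are made up for illustration):

    using System.ComponentModel;
    using System.Runtime.CompilerServices;

    public class Person : INotifyPropertyChanged
    {
        private string _name;

        public event PropertyChangedEventHandler PropertyChanged;

        public string Name
        {
            get { return _name; }
            set { _name = value; OnPropertyChanged(); }   // compiler supplies "Name"
        }

        protected void OnPropertyChanged([CallerMemberName] string propertyName = "")
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }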

    Read the article

  • Null Values And The T-SQL IN Operator

    - by Jesse
    I came across some unexpected behavior while troubleshooting a failing test the other day. It took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the IN T-SQL operator to exclude a certain set of values. Here’s a very simple example table to illustrate the issue: Customers: CustomerId INT NOT NULL Primary Key; CustomerName NVARCHAR(100) NOT NULL; SalesRegionId INT NULL.  The ‘SalesRegionId’ column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time, but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows: CustomerId CustomerName SalesRegionId 1 Customer A 1 2 Customer B NULL 3 Customer C 4 4 Customer D 2 5 Customer E 3   How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this: 1: SELECT 2: CustomerId, 3: CustomerName, 4: SalesRegionId 5: FROM Customers 6: WHERE SalesRegionId NOT IN (2,4)   Will this work? In short, no; at least not in the way that you might expect. Here’s what this query will return given the example data we’re working with: CustomerId CustomerName SalesRegionId 1 Customer A 1 5 Customer E 3   I was expecting that this query would also return ‘Customer B’, since that customer has a NULL SalesRegionId. In my mind, a customer with no sales region should be included in the set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn’t suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test, and I simply made a poor assumption that the NOT IN would work the way that I thought it should. So why doesn’t this work the way that I thought it should? From the MSDN documentation on the T-SQL IN operator: If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression. The key phrase out of that quote is, “… is equal to any expression from the comma-separated list…”. The NULL SalesRegionId isn’t included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS: The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name. In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in sub-queries or expressions that are used with the IN operator: Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values together with IN or NOT IN can produce unexpected results. 
If I were to include a ‘SET ANSI_NULLS OFF’ command right above my SELECT statement I would get ‘Customer B’ returned in the results, but that’s definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause: 1: SELECT 2: CustomerId, 3: CustomerName, 4: SalesRegionId 5: FROM Customers 6: WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)   This query works and properly includes ‘Customer B’ in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we’d end up with: 1: DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT) 2: INSERT @regionsToIgnore values (2),(4) 3:  4: SELECT 5: c.CustomerId, 6: c.CustomerName, 7: c.SalesRegionId 8: FROM Customers c 9: LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId 10: WHERE r.IgnoredRegionId IS NULL By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable we can simply exclude any rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large and it also will correctly include any customers that do not yet have a sales region.
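    As a companion to the refactoring above, and not something from the original post, here is a hypothetical C# sketch of how a large exclusion list could be passed from application code as a table-valued parameter instead of being hard-coded in the batch. It assumes a user-defined table type has been created first (CREATE TYPE dbo.IntList AS TABLE (IgnoredRegionId INT NOT NULL)) and that System.Data.SqlClient is available:

    using System.Data;
    using System.Data.SqlClient;

    static class CustomerQueries
    {
        public static DataTable GetCustomersOutsideRegions(string connectionString, int[] regionsToIgnore)
        {
            // Rows for the table-valued parameter.
            var regions = new DataTable();
            regions.Columns.Add("IgnoredRegionId", typeof(int));
            foreach (var id in regionsToIgnore)
                regions.Rows.Add(id);

            const string sql = @"
                SELECT c.CustomerId, c.CustomerName, c.SalesRegionId
                FROM Customers c
                LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId
                WHERE r.IgnoredRegionId IS NULL;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                var parameter = command.Parameters.AddWithValue("@regionsToIgnore", regions);
                parameter.SqlDbType = SqlDbType.Structured;   // table-valued parameter
                parameter.TypeName = "dbo.IntList";

                var results = new DataTable();
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    results.Load(reader);
                }
                return results;
            }
        }
    }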

    Read the article

  • Why bother writing a Windows 8 app?

    - by Dennis Vroegop
    So you want to know more about development for Window 8. Great! There are lots of reasons you should be excited about this. Since I don’t know why YOU are interested in this, I’ll make a list of reasons people can choose from. (as a side note: whenever I talk about Win8 development I am referring to the Metro Style / WinRt side of things. Apps for the ‘classic’ desktop side of Win8 on Intel are business as usual…) So… Why would you care about making an app for Windows 8? 1. It’s cool. Let’s not beat around the bush: if you like development for a hobby then you’ll love to work on this new platform. You can create apps in a relative short time (short time as in compared to writing a new CRM system) and that makes it great for a hobby product. 2. You’ll stand out. Hey, we all need an ego boost every now and then. We all need to feel special. So if you can manage to be one of the first to have you app in the Store then you’ll likely to be noticed. Just close your eyes for a moment and image you standing in a bar. It’s crowded, and then you casually say “Oh yeah, I just had my app certified and it’s in the Win8 store now”. People will stop talking, will offer you drinks and beautiful women / gorgeous man / furry creatures from Alpha Centauri (whatever your preferences are) will propose. Or maybe not. Anyway…. 3. Make some cash! IDC predicts there will be about 350,000,000 Windows 8 licenses sold in the next year. Think about that number. 350,000,000. And they all have access to the Store. Where you’re app will be. With one little click they can select it, download and somehow magically $1.00 or $2.00 from their bank account is transferred to yours. Now, I am not saying that all of those people will download and buy your app but what if only 1% of them did? Remember: there aren’t that many apps available yet….. 4. Learn. Creating new small apps is a great way to learn new stuff. Yes, you could read about it (on this blog for instance) but the only way to learn something is to do it. So be prepared for the future and learn something new by doing it.Write an app! Now! 5. The biggie (for me at least): it’s fun. Even if you remove the points above it’s still fun to write for these devices and this platform. Now some of you will say : “But why not write a great app for IOS or Android?” I think this is a valid question. Of course the novelty of the platform wears out and points 2 and 3 from above list will not be as relevant as it is today. But still 1 4 and 5 remain. And don’t forget: if you already work on the Microsoft platform it’s not that hard to learn this new Win8 stuff. If you have done some XAML development (be it WPF or Silverlight) you are almost there in becoming a good Win8 developer. So you’ll be more productive much sooner than when you have to learn Objective C or Java. Even if you’re a HTML / Javascript developer (I say developer here, not designer) you’ll be up to speed on Win8 development pretty soon. Yes, you, that funky Web Developer who lives and breathes HTML5, CSS3 and JavaScript / Node.Js / JQuery: you too can be a Win8 developer. A first class Win8 developer! So.. Download the stuff you need from http://dev.windows.com install Windows 8 and Visual Studio 12 and by the time you’re ready I’ll be working on the next article: how to do all this? Happy coding!

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager: Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache. Use of TransactionMap interface via the included resource adapter. Use of the new ACID transaction framework, introduced in Coherence 3.6.   Each of these may have significant drawbacks for certain workloads. Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those workloads that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency as well as a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions) since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries) since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation. TransactionMap is generally used when Coherence acts as system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be either used to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being: The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects). If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound since the commit phase is usually very short (all locks having been previously acquired). 
Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity). The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove(). The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect of this is that this feature has not seen significant adoption, meaning that any use of this is subject to the usual headaches associated with being an early adopter (greater chance of bugs and greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature. In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.

    Read the article

  • splitting code in java - don't know what's wrong [closed]

    - by ???? ?????
    I'm writing a code to split a file into many files with a size specified in the code, and then it will join these parts later. The problem is with the joining code, it doesn't work and I can't figure what is wrong! This is my code: import java.io.*; import java.util.*; public class StupidSplit { static final int Chunk_Size = 10; static int size =0; public static void main(String[] args) throws IOException { String file = "b.txt"; int chunks = DivideFile(file); System.out.print((new File(file)).delete()); System.out.print(JoinFile(file, chunks)); } static boolean JoinFile(String fname, int nChunks) { /* * Joins the chunks together. Chunks have been divided using DivideFile * function so the last part of filename will ".partxxxx" Checks if all * parts are together by matching number of chunks found against * "nChunks", then joins the file otherwise throws an error. */ boolean successful = false; File currentDirectory = new File(System.getProperty("user.dir")); // File[] fileList = currentDirectory.listFiles(); /* populate only the files having extension like "partxxxx" */ List<File> lst = new ArrayList<File>(); // Arrays.sort(fileList); for (File file : fileList) { if (file.isFile()) { String fnm = file.getName(); int lastDot = fnm.lastIndexOf('.'); // add to list which match the name given by "fname" and have //"partxxxx" as extension" if (fnm.substring(0, lastDot).equalsIgnoreCase(fname) && (fnm.substring(lastDot + 1)).substring(0, 4).equals("part")) { lst.add(file); } } } /* * sort the list - it will be sorted by extension only because we have * ensured that list only contains those files that have "fname" and * "part" */ File[] files = (File[]) lst.toArray(new File[0]); Arrays.sort(files); System.out.println("size ="+files.length); System.out.println("hello"); /* Ensure that number of chunks match the length of array */ if (files.length == nChunks-1) { File ofile = new File(fname); FileOutputStream fos; FileInputStream fis; byte[] fileBytes; int bytesRead = 0; try { fos = new FileOutputStream(ofile,true); for (File file : files) { fis = new FileInputStream(file); fileBytes = new byte[(int) file.length()]; bytesRead = fis.read(fileBytes, 0, (int) file.length()); assert(bytesRead == fileBytes.length); assert(bytesRead == (int) file.length()); fos.write(fileBytes); fos.flush(); fileBytes = null; fis.close(); fis = null; } fos.close(); fos = null; } catch (FileNotFoundException fnfe) { System.out.println("Could not find file"); successful = false; return successful; } catch (IOException ioe) { System.out.println("Cannot write to disk"); successful = false; return successful; } /* ensure size of file matches the size given by server */ successful = (ofile.length() == StupidSplit.size) ? 
true : false; } else { successful = false; } return successful; } static int DivideFile(String fname) { File ifile = new File(fname); FileInputStream fis; String newName; FileOutputStream chunk; //int fileSize = (int) ifile.length(); double fileSize = (double) ifile.length(); //int nChunks = 0, read = 0, readLength = Chunk_Size; int nChunks = 0, read = 0, readLength = Chunk_Size; byte[] byteChunk; try { fis = new FileInputStream(ifile); StupidSplit.size = (int)ifile.length(); while (fileSize > 0) { if (fileSize <= Chunk_Size) { readLength = (int) fileSize; } byteChunk = new byte[readLength]; read = fis.read(byteChunk, 0, readLength); fileSize -= read; assert(read==byteChunk.length); nChunks++; //newName = fname + ".part" + Integer.toString(nChunks - 1); newName = String.format("%s.part%09d", fname, nChunks - 1); chunk = new FileOutputStream(new File(newName)); chunk.write(byteChunk); chunk.flush(); chunk.close(); byteChunk = null; chunk = null; } fis.close(); System.out.println(nChunks); // fis = null; } catch (FileNotFoundException fnfe) { System.out.println("Could not find the given file"); System.exit(-1); } catch (IOException ioe) { System.out .println("Error while creating file chunks. Exiting program"); System.exit(-1); }System.out.println(nChunks); return nChunks; } } }

    Read the article

  • Implementing History Support using jQuery for AJAX websites built on asp.net AJAX

    - by anil.kasalanati
    Problem Statement: Most modern day website use AJAX for page navigation and gone are the days of complete HTTP redirection so it is imperative that we support back and forward buttons on the browser so that end users navigation is not broken. In this article we discuss about solutions which are already available and problems with them. Microsoft History Support: Post .Net 3.5 sp1 Microsoft’s Script manager supports history for websites using Update panels. This is achieved by enabling the ENABLE HISTORY property for the script manager and then the event “Page_Browser_Navigate” needs to be handled. So whenever the browser buttons are clicked the event is fired and the application can write code to do the navigation. The following articles provide good tutorials on how to do that http://www.asp.net/aspnet-in-net-35-sp1/videos/introduction-to-aspnet-ajax-history http://www.codeproject.com/KB/aspnet/ajaxhistorymanagement.aspx And Microsoft api internally creates an IFrame and changes the bookmark of the url. Unfortunately this has a bug and it does not work in Ie6 and 7 which are the major browsers but it works in ie8 and Firefox. And Microsoft has apparently fixed this bug in .Net 4.0. Following is the blog http://weblogs.asp.net/joshclose/archive/2008/11/11/asp-net-ajax-addhistorypoint-bug.aspx For solutions which are still running on .net 3.5 sp1 there is no solution which Microsoft offers so there is  are two way to solve this o   Disable the back button. o   Develop custom solution.   Disable back button Even though this might look like a very simple thing to do there are issues around doing this  because there is no event which can be manipulated from the javascript. The browser does not provide an api to do this. So most of the technical solution on internet offer work arounds like doing a history.forward(1) so that even if the user clicks a back button the destination page redirects the user to the original page. This is not a good customer experience and does not work for asp.net website where there are different views in the same page. There are other ways around detecting the window unload events and writing code there. So there are 2 events onbeforeUnload and onUnload and we can write code to show a confirmation message to the user. If we write code in onUnLoad then we can only show a message but it is too late to stop the navigation. And if we write on onBeforeUnLoad we can stop the navigation if the user clicks cancel but this event would be triggered for all AJAX calls and hyperlinks where the href is anything other than #. We can do this but the website has to be checked properly to ensure there are no links where href is not # otherwise the user would see a popup message saying “you are leaving the website”. Believe me after doing a lot of research on the back button disable I found it easier to support it rather than disabling the button. So I am going to discuss a solution which work  using jQuery with some tweaking. Custom Solution JQuery already provides an api to manage the history of a AJAX website - http://plugins.jquery.com/project/history We need to integrate this with Microsoft Page request manager so that both of them work in tandem. The page state is maintained in the cookie so that it can be passed to the server and I used jQuery cookie plug in for that – http://plugins.jquery.com/node/1386/release Firstly when the page loads there is a need to hook up all the events on the page which needs to cause browser history and following is the code to that. 
jQuery(document).ready(function() {             // Initialize history plugin.             // The callback is called at once by present location.hash.             jQuery.history.init(pageload);               // set onlick event for buttons             jQuery("a[@rel='history']").click(function() {                 //                 var hash = this.page;                 hash = hash.replace(/^.*#/, '');                 isAsyncPostBack = true;                 // moves to a new page.                 // pageload is called at once.                 jQuery.history.load(hash);                 return true;             });         }); The above scripts basically gets all the DOM objects which have the attribute rel=”history” and add the event. In our test page we have the link button  which has the attribute rel set to history. <asp:LinkButton ID="Previous" rel="history" runat="server" onclick="PreviousOnClick">Previous</asp:LinkButton> <asp:LinkButton ID="AsyncPostBack" rel="history" runat="server" onclick="NextOnClick">Next</asp:LinkButton> <asp:LinkButton ID="HistoryLinkButton" runat="server" style="display:none" onclick="HistoryOnClick"></asp:LinkButton>   And you can see that there is an hidden HistoryLinkButton which used to send a sever side postback in case of browser back or previous buttons. And note that we need to use display:none and not visible= false because asp.net AJAX would disallow any post backs if visible=false. And in general the pageload event get executed on the client side when a back or forward is pressed and the function is shown below function pageload(hash) {                   if (hash) {                         if (!isAsyncPostBack) {                           jQuery.cookie("page", hash);                     __doPostBack("HistoryLinkButton", "");                 }                isAsyncPostBack = false;                             } else {                 // start page             jQuery("#load").empty();             }         }   As you can see in case there is an hash in the url we are basically do an asp.net AJAX post back using the following statement __doPostBack("HistoryLinkButton", ""); So whenever the user clicks back or forward the post back happens using the event statement we provide and Previous event code is invoked in the code behind.  We need to have the code to use the pageId present in the url to change the page content. And there is an important thing to note – because the hash is worked out using the pageId’s there is a need to recalculate the hash after every AJAX post back so following code is plugged in function ReWorkHash() {             jQuery("a[@rel='history']").unbind("click");             jQuery("a[@rel='history']").click(function() {                 //                 var hash = jQuery(this).attr("page");                 hash = hash.replace(/^.*#/, '');                 jQuery.cookie("page", hash);                 isAsyncPostBack = true;                                   // moves to a new page.                 // pageload is called at once.                 
jQuery.history.load(hash);                 return true;             });        }   This code is executed from the code behind using ScriptManager RegisterClientScriptBlock as shown below –       ScriptManager.RegisterClientScriptBlock(this, typeof(_Default), "Recalculater", "ReWorkHash();", true);   A sample application is available to be downloaded at the following location – http://techconsulting.vpscustomer.com/Source/HistoryTest.zip And a working sample is available at – http://techconsulting.vpscustomer.com/Samples/Default.aspx
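    For completeness, here is a hypothetical sketch of the code-behind side that the article refers to but does not list inline (the real version is in the downloadable sample). The idea is that the hidden HistoryLinkButton's click handler reads the "page" cookie written by pageload(), restores the matching view, and then asks the client to recalculate its hashes. The SwitchToPage helper is an assumption standing in for whatever view-switching logic the page actually uses:

    using System;
    using System.Web;
    using System.Web.UI;

    public partial class _Default : Page
    {
        protected void HistoryOnClick(object sender, EventArgs e)
        {
            // The pageload() script stores the hash of the page to restore in this cookie.
            HttpCookie pageCookie = Request.Cookies["page"];
            string pageId = (pageCookie != null) ? pageCookie.Value : string.Empty;

            SwitchToPage(pageId);

            // Re-register the script so hashes are recalculated after this async postback.
            ScriptManager.RegisterClientScriptBlock(this, typeof(_Default), "Recalculater", "ReWorkHash();", true);
        }

        private void SwitchToPage(string pageId)
        {
            // Hypothetical placeholder: show/hide panels or set the active view for pageId.
        }
    }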

    Read the article

  • Writing Unit Tests for ASP.NET Web API Controller

    - by shiju
    In this blog post, I will write unit tests for an ASP.NET Web API controller in the EFMVC reference application. Let me introduce the EFMVC app, in case you haven't heard about it. EFMVC is a simple app, developed as a reference implementation for demonstrating ASP.NET MVC, EF Code First, ASP.NET Web API, Domain-Driven Design (DDD), and Test-Driven Development (TDD). The current version is built with ASP.NET MVC 4, EF Code First 5, ASP.NET Web API, Autofac, AutoMapper, NUnit and Moq. All unit tests were written with NUnit and Moq. You can download the latest version of the reference app from http://efmvc.codeplex.com/ Unit Test for HTTP Get Let’s write a unit test class for verifying the behaviour of an ASP.NET Web API controller named CategoryController. Let’s define mock implementations for the repository class and for the command bus that is used for executing write operations.  [TestFixture] public class CategoryApiControllerTest { private Mock<ICategoryRepository> categoryRepository; private Mock<ICommandBus> commandBus; [SetUp] public void SetUp() {     categoryRepository = new Mock<ICategoryRepository>();     commandBus = new Mock<ICommandBus>(); } The code block below provides the unit test for an HTTP Get operation. [Test] public void Get_All_Returns_AllCategory() {     // Arrange        IEnumerable<CategoryWithExpense> fakeCategories = GetCategories();     categoryRepository.Setup(x => x.GetCategoryWithExpenses()).Returns(fakeCategories);     CategoryController controller = new CategoryController(commandBus.Object, categoryRepository.Object)     {         Request = new HttpRequestMessage()                 {                     Properties = { { HttpPropertyKeys.HttpConfigurationKey, new HttpConfiguration() } }                 }     };     // Act     var categories = controller.Get();     // Assert     Assert.IsNotNull(categories, "Result is null");     Assert.IsInstanceOf(typeof(IEnumerable<CategoryWithExpense>), categories, "Wrong Model");     Assert.AreEqual(3, categories.Count(), "Got wrong number of Categories"); } The GetCategories method is provided below: private static IEnumerable<CategoryWithExpense> GetCategories() {     IEnumerable<CategoryWithExpense> fakeCategories = new List<CategoryWithExpense> {     new CategoryWithExpense {CategoryId=1, CategoryName = "Test1", Description="Test1Desc", TotalExpenses=1000},     new CategoryWithExpense {CategoryId=2, CategoryName = "Test2", Description="Test2Desc",TotalExpenses=2000},     new CategoryWithExpense { CategoryId=3, CategoryName = "Test3", Description="Test3Desc",TotalExpenses=3000}       }.AsEnumerable();     return fakeCategories; } In the unit test method Get_All_Returns_AllCategory, we set up the mocked type ICategoryRepository so that a call to the GetCategoryWithExpenses method returns dummy data. We create an instance of the ApiController, where we have specified the Request property of the ApiController since the Request property is used to create a new HttpResponseMessage that will provide the appropriate HTTP status code along with response content data. Unit tests are used for specifying the behaviour of components, so we have specified that the Get operation will use the model type IEnumerable<CategoryWithExpense> for sending the content data. 
The implementation of HTTP Get in the CategoryController is provided below: public IQueryable<CategoryWithExpense> Get() {     var categories = categoryRepository.GetCategoryWithExpenses().AsQueryable();     return categories; } Unit Test for HTTP Post The following are the behaviours we are going to implement for the HTTP Post: A successful HTTP Post  operation should return HTTP status code Created An empty Category should return HTTP status code BadRequest A successful HTTP Post operation should provide correct Location header information in the response for the newly created resource. Writing unit test for HTTP Post is required more information than we write for HTTP Get. In the HTTP Post implementation, we will call to Url.Link for specifying the header Location of Response as shown in below code block. var response = Request.CreateResponse(HttpStatusCode.Created, category); string uri = Url.Link("DefaultApi", new { id = category.CategoryId }); response.Headers.Location = new Uri(uri); return response; While we are executing Url.Link from unit tests, we have to specify HttpRouteData information from the unit test method. Otherwise, Url.Link will get a null value. The code block below shows the unit tests for specifying the behaviours for the HTTP Post operation. [Test] public void Post_Category_Returns_CreatedStatusCode() {     // Arrange        commandBus.Setup(c => c.Submit(It.IsAny<CreateOrUpdateCategoryCommand>())).Returns(new CommandResult(true));     Mapper.CreateMap<CategoryFormModel, CreateOrUpdateCategoryCommand>();          var httpConfiguration = new HttpConfiguration();     WebApiConfig.Register(httpConfiguration);     var httpRouteData = new HttpRouteData(httpConfiguration.Routes["DefaultApi"],         new HttpRouteValueDictionary { { "controller", "category" } });     var controller = new CategoryController(commandBus.Object, categoryRepository.Object)     {         Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost/api/category/")         {             Properties =             {                 { HttpPropertyKeys.HttpConfigurationKey, httpConfiguration },                 { HttpPropertyKeys.HttpRouteDataKey, httpRouteData }             }         }     };     // Act     CategoryModel category = new CategoryModel();     category.CategoryId = 1;     category.CategoryName = "Mock Category";     var response = controller.Post(category);               // Assert     Assert.AreEqual(HttpStatusCode.Created, response.StatusCode);     var newCategory = JsonConvert.DeserializeObject<CategoryModel>(response.Content.ReadAsStringAsync().Result);     Assert.AreEqual(string.Format("http://localhost/api/category/{0}", newCategory.CategoryId), response.Headers.Location.ToString()); } [Test] public void Post_EmptyCategory_Returns_BadRequestStatusCode() {     // Arrange        commandBus.Setup(c => c.Submit(It.IsAny<CreateOrUpdateCategoryCommand>())).Returns(new CommandResult(true));     Mapper.CreateMap<CategoryFormModel, CreateOrUpdateCategoryCommand>();     var httpConfiguration = new HttpConfiguration();     WebApiConfig.Register(httpConfiguration);     var httpRouteData = new HttpRouteData(httpConfiguration.Routes["DefaultApi"],         new HttpRouteValueDictionary { { "controller", "category" } });     var controller = new CategoryController(commandBus.Object, categoryRepository.Object)     {         Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost/api/category/")         {             Properties =             {                 { 
HttpPropertyKeys.HttpConfigurationKey, httpConfiguration },                 { HttpPropertyKeys.HttpRouteDataKey, httpRouteData }             }         }     };     // Act     CategoryModel category = new CategoryModel();     category.CategoryId = 0;     category.CategoryName = "";     // The ASP.NET pipeline doesn't run, so validation don't run.     controller.ModelState.AddModelError("", "mock error message");     var response = controller.Post(category);     // Assert     Assert.AreEqual(HttpStatusCode.BadRequest, response.StatusCode);   } In the above code block, we have written two unit methods, Post_Category_Returns_CreatedStatusCode and Post_EmptyCategory_Returns_BadRequestStatusCode. The unit test method Post_Category_Returns_CreatedStatusCode  verifies the behaviour 1 and 3, that we have defined in the beginning of the section “Unit Test for HTTP Post”. The unit test method Post_EmptyCategory_Returns_BadRequestStatusCode verifies the behaviour 2. For extracting the data from response, we call Content.ReadAsStringAsync().Result of HttpResponseMessage object and deserializeit it with Json Convertor. The implementation of HTTP Post in the CategoryController is provided below: // POST /api/category public HttpResponseMessage Post(CategoryModel category) {       if (ModelState.IsValid)     {         var command = new CreateOrUpdateCategoryCommand(category.CategoryId, category.CategoryName, category.Description);         var result = commandBus.Submit(command);         if (result.Success)         {                               var response = Request.CreateResponse(HttpStatusCode.Created, category);             string uri = Url.Link("DefaultApi", new { id = category.CategoryId });             response.Headers.Location = new Uri(uri);             return response;         }     }     else     {         return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);     }     throw new HttpResponseException(HttpStatusCode.BadRequest); } The unit test implementation for HTTP Put and HTTP Delete are very similar to the unit test we have written for  HTTP Get. 
The complete unit tests for the CategoryController is given below: [TestFixture] public class CategoryApiControllerTest { private Mock<ICategoryRepository> categoryRepository; private Mock<ICommandBus> commandBus; [SetUp] public void SetUp() {     categoryRepository = new Mock<ICategoryRepository>();     commandBus = new Mock<ICommandBus>(); } [Test] public void Get_All_Returns_AllCategory() {     // Arrange        IEnumerable<CategoryWithExpense> fakeCategories = GetCategories();     categoryRepository.Setup(x => x.GetCategoryWithExpenses()).Returns(fakeCategories);     CategoryController controller = new CategoryController(commandBus.Object, categoryRepository.Object)     {         Request = new HttpRequestMessage()                 {                     Properties = { { HttpPropertyKeys.HttpConfigurationKey, new HttpConfiguration() } }                 }     };     // Act     var categories = controller.Get();     // Assert     Assert.IsNotNull(categories, "Result is null");     Assert.IsInstanceOf(typeof(IEnumerable<CategoryWithExpense>),categories, "Wrong Model");             Assert.AreEqual(3, categories.Count(), "Got wrong number of Categories"); }        [Test] public void Get_CorrectCategoryId_Returns_Category() {     // Arrange        IEnumerable<CategoryWithExpense> fakeCategories = GetCategories();     categoryRepository.Setup(x => x.GetCategoryWithExpenses()).Returns(fakeCategories);     CategoryController controller = new CategoryController(commandBus.Object, categoryRepository.Object)     {         Request = new HttpRequestMessage()         {             Properties = { { HttpPropertyKeys.HttpConfigurationKey, new HttpConfiguration() } }         }     };     // Act     var response = controller.Get(1);     // Assert     Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);     var category = JsonConvert.DeserializeObject<CategoryWithExpense>(response.Content.ReadAsStringAsync().Result);     Assert.AreEqual(1, category.CategoryId, "Got wrong number of Categories"); } [Test] public void Get_InValidCategoryId_Returns_NotFound() {     // Arrange        IEnumerable<CategoryWithExpense> fakeCategories = GetCategories();     categoryRepository.Setup(x => x.GetCategoryWithExpenses()).Returns(fakeCategories);     CategoryController controller = new CategoryController(commandBus.Object, categoryRepository.Object)     {         Request = new HttpRequestMessage()         {             Properties = { { HttpPropertyKeys.HttpConfigurationKey, new HttpConfiguration() } }         }     };     // Act     var response = controller.Get(5);     // Assert     Assert.AreEqual(HttpStatusCode.NotFound, response.StatusCode);            } [Test] public void Post_Category_Returns_CreatedStatusCode() {     // Arrange        commandBus.Setup(c => c.Submit(It.IsAny<CreateOrUpdateCategoryCommand>())).Returns(new CommandResult(true));     Mapper.CreateMap<CategoryFormModel, CreateOrUpdateCategoryCommand>();          var httpConfiguration = new HttpConfiguration();     WebApiConfig.Register(httpConfiguration);     var httpRouteData = new HttpRouteData(httpConfiguration.Routes["DefaultApi"],         new HttpRouteValueDictionary { { "controller", "category" } });     var controller = new CategoryController(commandBus.Object, categoryRepository.Object)     {         Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost/api/category/")         {             Properties =             {                 { HttpPropertyKeys.HttpConfigurationKey, httpConfiguration },                 { 
HttpPropertyKeys.HttpRouteDataKey, httpRouteData }             }         }     };     // Act     CategoryModel category = new CategoryModel();     category.CategoryId = 1;     category.CategoryName = "Mock Category";     var response = controller.Post(category);               // Assert     Assert.AreEqual(HttpStatusCode.Created, response.StatusCode);     var newCategory = JsonConvert.DeserializeObject<CategoryModel>(response.Content.ReadAsStringAsync().Result);     Assert.AreEqual(string.Format("http://localhost/api/category/{0}", newCategory.CategoryId), response.Headers.Location.ToString()); } [Test] public void Post_EmptyCategory_Returns_BadRequestStatusCode() {     // Arrange        commandBus.Setup(c => c.Submit(It.IsAny<CreateOrUpdateCategoryCommand>())).Returns(new CommandResult(true));     Mapper.CreateMap<CategoryFormModel, CreateOrUpdateCategoryCommand>();     var httpConfiguration = new HttpConfiguration();     WebApiConfig.Register(httpConfiguration);     var httpRouteData = new HttpRouteData(httpConfiguration.Routes["DefaultApi"],         new HttpRouteValueDictionary { { "controller", "category" } });     var controller = new CategoryController(commandBus.Object, categoryRepository.Object)     {         Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost/api/category/")         {             Properties =             {                 { HttpPropertyKeys.HttpConfigurationKey, httpConfiguration },                 { HttpPropertyKeys.HttpRouteDataKey, httpRouteData }             }         }     };     // Act     CategoryModel category = new CategoryModel();     category.CategoryId = 0;     category.CategoryName = "";     // The ASP.NET pipeline doesn't run, so validation don't run.     controller.ModelState.AddModelError("", "mock error message");     var response = controller.Post(category);     // Assert     Assert.AreEqual(HttpStatusCode.BadRequest, response.StatusCode);   } [Test] public void Put_Category_Returns_OKStatusCode() {     // Arrange        commandBus.Setup(c => c.Submit(It.IsAny<CreateOrUpdateCategoryCommand>())).Returns(new CommandResult(true));     Mapper.CreateMap<CategoryFormModel, CreateOrUpdateCategoryCommand>();     CategoryController controller = new CategoryController(commandBus.Object, categoryRepository.Object)     {         Request = new HttpRequestMessage()         {             Properties = { { HttpPropertyKeys.HttpConfigurationKey, new HttpConfiguration() } }         }     };     // Act     CategoryModel category = new CategoryModel();     category.CategoryId = 1;     category.CategoryName = "Mock Category";     var response = controller.Put(category.CategoryId,category);     // Assert     Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);    } [Test] public void Delete_Category_Returns_NoContentStatusCode() {     // Arrange              commandBus.Setup(c => c.Submit(It.IsAny<DeleteCategoryCommand >())).Returns(new CommandResult(true));     CategoryController controller = new CategoryController(commandBus.Object, categoryRepository.Object)     {         Request = new HttpRequestMessage()         {             Properties = { { HttpPropertyKeys.HttpConfigurationKey, new HttpConfiguration() } }         }     };     // Act               var response = controller.Delete(1);     // Assert     Assert.AreEqual(HttpStatusCode.NoContent, response.StatusCode);   } private static IEnumerable<CategoryWithExpense> GetCategories() {     IEnumerable<CategoryWithExpense> fakeCategories = new List<CategoryWithExpense> {     new 
CategoryWithExpense {CategoryId=1, CategoryName = "Test1", Description="Test1Desc", TotalExpenses=1000},     new CategoryWithExpense {CategoryId=2, CategoryName = "Test2", Description="Test2Desc",TotalExpenses=2000},     new CategoryWithExpense { CategoryId=3, CategoryName = "Test3", Description="Test3Desc",TotalExpenses=3000}       }.AsEnumerable();     return fakeCategories; } }  The complete implementation for the Api Controller, CategoryController is given below: public class CategoryController : ApiController {       private readonly ICommandBus commandBus;     private readonly ICategoryRepository categoryRepository;     public CategoryController(ICommandBus commandBus, ICategoryRepository categoryRepository)     {         this.commandBus = commandBus;         this.categoryRepository = categoryRepository;     } public IQueryable<CategoryWithExpense> Get() {     var categories = categoryRepository.GetCategoryWithExpenses().AsQueryable();     return categories; }   // GET /api/category/5 public HttpResponseMessage Get(int id) {     var category = categoryRepository.GetCategoryWithExpenses().Where(c => c.CategoryId == id).SingleOrDefault();     if (category == null)     {         return Request.CreateResponse(HttpStatusCode.NotFound);     }     return Request.CreateResponse(HttpStatusCode.OK, category); }   // POST /api/category public HttpResponseMessage Post(CategoryModel category) {       if (ModelState.IsValid)     {         var command = new CreateOrUpdateCategoryCommand(category.CategoryId, category.CategoryName, category.Description);         var result = commandBus.Submit(command);         if (result.Success)         {                               var response = Request.CreateResponse(HttpStatusCode.Created, category);             string uri = Url.Link("DefaultApi", new { id = category.CategoryId });             response.Headers.Location = new Uri(uri);             return response;         }     }     else     {         return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);     }     throw new HttpResponseException(HttpStatusCode.BadRequest); }   // PUT /api/category/5 public HttpResponseMessage Put(int id, CategoryModel category) {     if (ModelState.IsValid)     {         var command = new CreateOrUpdateCategoryCommand(category.CategoryId, category.CategoryName, category.Description);         var result = commandBus.Submit(command);         return Request.CreateResponse(HttpStatusCode.OK, category);     }     else     {         return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);     }     throw new HttpResponseException(HttpStatusCode.BadRequest); }       // DELETE /api/category/5     public HttpResponseMessage Delete(int id)     {         var command = new DeleteCategoryCommand { CategoryId = id };         var result = commandBus.Submit(command);         if (result.Success)         {             return new HttpResponseMessage(HttpStatusCode.NoContent);         }             throw new HttpResponseException(HttpStatusCode.BadRequest);     } } Source Code The EFMVC app can download from http://efmvc.codeplex.com/ . The unit test project can be found from the project EFMVC.Tests and Web API project can be found from EFMVC.Web.API.
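    One small addition that is not in the original post: the failure path of Delete (when the command bus reports failure, the controller throws an HttpResponseException with a BadRequest status) can be covered with a test written in the same style as the ones above. This sketch assumes CommandResult can be constructed with false, mirroring the new CommandResult(true) calls used elsewhere in the fixture:

    [Test]
    public void Delete_Category_CommandFails_Throws_BadRequest()
    {
        // Arrange
        commandBus.Setup(c => c.Submit(It.IsAny<DeleteCategoryCommand>())).Returns(new CommandResult(false));
        var controller = new CategoryController(commandBus.Object, categoryRepository.Object)
        {
            Request = new HttpRequestMessage()
            {
                Properties = { { HttpPropertyKeys.HttpConfigurationKey, new HttpConfiguration() } }
            }
        };

        // Act / Assert
        var exception = Assert.Throws<HttpResponseException>(() => controller.Delete(1));
        Assert.AreEqual(HttpStatusCode.BadRequest, exception.Response.StatusCode);
    }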

    Read the article

  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended most prestigious SQL Server event SQLPASS between Nov 8-11, 2010 at Seattle. I have only one expression for the event - Best Summit Ever This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about it! Best Summit Ever Trip to Seattle! This was my second trip to Seattle this year and the journey is always long. Here is the travel stats on how long it takes to get to Seattle: 24 hours official air time 36 hours total travel time (connection waits and airport commute) Every time I travel to USA I gain a day and when I travel back to home, I lose a day. However, the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you’re attending an event like SQLPASS. Here are few things I carry when I travel for a long journey: Dry Snack packs – I like to have some good Indian Dry Snacks along with me in my backpack so I can have my own snack when I want Amazon Kindle – Loaded with 80+ books A physical book – This is usually a very easy to read book I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer to not to spend time in conversation with the guy sitting next to me because usually I end up listening to their biography, which I cannot blog about. Sheraton Seattle SQLPASS In any case, I love to go to Seattle as the city is great and has everything a brilliant metropolis has to offer. The new Light Train is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 each round trip due to the Light Train. Sessions I attended! Well, I really wanted to attend most of the sessions but there was great dilemma of which ones to choose. There were many, many sessions to be attended and at any given time there was more than one good session being presented. I had decided to attend sessions in area performance tuning and I attended quite a few sessions this year, compared to what I was able to do last year. Here are few names of the speakers whose sessions I attended (please note, following great speakers are not listed in any order. I loved them and I enjoyed their sessions): Conor Cunningham Rushabh Mehta Buck Woody Brent Ozar Jonathan Kehayias Chris Leonard Bob Ward Grant Fritchey I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session but I have found that the insights learned in Conor Cunningham’s sessions are the highlight of the PASS Summit. Rushabh Mehta at Keynote SQLPASS   Bucky Woody and Brent Ozar I always like the sessions where the speaker is much closer to the audience and has real world experience. I think speakers who have worked in the real world deliver the best content and most useful information. Sessions I did not like! Indeed there were few sessions I did not like it and I am not going to name them here. However, there were strong reasons I did not like their sessions, and here is why: Sessions were all theory and had no real world connections. 
All technical questions ended with confusing answers (lots of “I will get back to you on it,” “it depends,” “let us take this offline” and many more…) “I am God” kind of attitude in the speakers For example, I attended a session of one very well known speaker who is a specialist for one particular area. I was bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker. One on One Talk! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends and spending time with them one on one and LEARNING! Here is the quick list of the people I met during this event and spent more than 30 minutes with each of them talking about various subjects: Pinal Dave and Brad Shulz Pinal Dave and Rushabh Mehta Michael Coles and Pinal Dave Rushabh Mehta – It is always pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wanted to turn PASS into resource for learning for every SQL Server Developer and Administrator in the world. I had great in-depth discussion regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting him every single time. Glenn Berry – A real friend of everybody. He is very a simple person and very true to his heart. I think there is not a single person in whole community who does not like him. He is a friends of all and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMV. Brad Schulz – I always wanted to meet him but never got chance until today. I had great time meeting him in person and we have spent considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas but I enjoy reading his blog and sharing his wisdom with me. Jonathan Kehayias – He is drill sergeant in US army. If you get the impression that he is a giant with very strong personality – you are wrong. He is very kind and soft spoken DBA with strong performance tuning skills. I asked him how he has kept his two jobs separate and I got very good answer – just work hard and have passion for what you do. I attended his sessions and his presentation style is very unique.  I feel like he is speaking in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I had talked with him for long time and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and forth normal form. Honestly I did not know earlier, but now I do. Erland Sommarskog – This man needs no introduction, he is very well known and very clear in conveying his ideas. I learned a lot from him during the course of year. Every time I meet him, I learn something new and this time was no exception. 
Joe Webb – Joey is all about community and people; we had an interesting conversation about community, MVPs, and how one can be helpful to the community for a long time without losing passion. It is always pleasant to talk to him, and of course, I had a fun time. Ross Mistry – I often call him my brother because he indeed looks like my cousin. He gave me lots of insight into how one can write a book and how he keeps his books simple to appeal to all readers. A wonderful person and a great friend. Ola Hallgren - I did not know he was coming to the summit. I had a great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be an integral part of the SQL Server community and PASS HQ. It was wonderful to meet her again and re-connect. She is a wonderful person, and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide who to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish-speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me. Party, Party and Parties Every evening there were various parties. I attended almost all of them. Every party had a different theme, but the goal of all the parties was the same – networking. Here are a few of the parties where I had lots of fun: Dell Reception Party Exhibitor Party Solid Quality Fun Party Red Gate Friends Party MVP Dinner Microsoft Party MVP Dinner Quest Party Gameworks PASS Party Volunteer Party at Garage Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had a great time meeting people at the various parties. There were a few people everywhere – well, I will admit I am among them – who hopped parties. NDA – Not Decided Agenda During the event there were a few meetings marked “NDA.” Someone asked me “why are these things NDA?” My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda. Toys, Giveaways and Luggage I admit, I was like a child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft toys for my daughter, and it was fun. Also, thanks to Angel, who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something, and I got so much stuff that my luggage got quite a bit bigger when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear. I have seen better magic before, but this time it caught me unexpectedly, and I was taken by surprise. I won many ducks again. The Common Question I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name; how do you pronounce your last name again? – Da-way How old are you? – I am as old as I can be. Are you an Indian because you look like one? – I did not answer this one. Where are you from? This question was usually asked after looking at my badge, which says India. So did you really fly from India? 
– Yes, because I get seasick, so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference) Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter; she looks like you. – Is this even a question? Of course, she is daddy’s little girl. There were so many other questions that I will have to write another blog post about them. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

  • Error in installing ZTE AC2738 on ubuntu 3.0.0-12-generic

    - by Netro
    I am getting this error ,struct usb_serial_driver has no member named shutdown. I am installing on 64bit ubuntu 3.0.0-12-generic ... Beginning Verify CD ... ... Verify CD Succeed! ... Beginning Copy Install Package Files ... ... will take a long time, waiting 5 seconds, please ... Copy Install Package Files Succeed! ... 'ztemtApp' previous version not found. and install now Beginning install ... ... Current linux release version is 'Ubuntu' ... Checking 'App' process ... Checking old installation ... Installing ... Current Path is : . : /tmp/ztemt_datacard/Linux 1. Checking Previous Version ... 2. Copying Data Bin ... ... will take a few seconds, please waiting ... /tmp/ztemt_datacard/Linux 3. Auto Load Usb Driver Module ... Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service acpid restart Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop acpid ; start acpid. The restart(8) utility is also available. acpid stop/waiting acpid start/running, process 11802 4. Changing pppd Options ... 5. Changing File Permission ... 6. Deleting Qt lib When Local QT Vertion > V4.4.0 ... ... Package 'libqtgui4' exist ... QT_VERSION = 4 7. Deleting process id file: EVDOApp.pid ... 8. Making USB Serial Driver Module : ztemt.ko ... ... will take a few seconds, please waiting ... make -C /lib/modules/3.0.0-12-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules make[1]: Entering directory `/usr/src/linux-headers-3.0.0-12-generic' CC [M] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘destroy_serial’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:159:14: error: ‘struct usb_serial_driver’ has no member named ‘shutdown’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:165:18: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_open’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:241:36: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:246:8: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:251:6: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:253:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: warning: passing argument 1 of ‘serial->type->open’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: warning: passing argument 2 of ‘serial->type->open’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: note: expected ‘struct usb_serial_port *’ but argument is of type ‘struct file *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:270:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:276:6: error: ‘struct usb_serial_port’ has no member named 
‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:278:6: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:279:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_close’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:355:18: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:357:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:358:21: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:371:8: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:372:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:375:3: error: too many arguments to function ‘port->serial->type->close’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:377:11: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:378:12: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:379:9: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:380:8: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:386:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_write’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:407:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 1 of ‘port->serial->type->write’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 2 of ‘port->serial->type->write’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘struct usb_serial_port *’ but argument is of type ‘const unsigned char *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 3 of ‘port->serial->type->write’ makes pointer from integer without a cast [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘const unsigned char *’ but argument is of type ‘int’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: error: too few arguments to function ‘port->serial->type->write’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_write_room’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:429:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:435:2: warning: passing argument 1 of ‘port->serial->type->write_room’ from incompatible pointer 
type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:435:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_chars_in_buffer’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:451:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:457:2: warning: passing argument 1 of ‘port->serial->type->chars_in_buffer’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:457:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_throttle’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:472:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:479:3: warning: passing argument 1 of ‘port->serial->type->throttle’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:479:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_unthrottle’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:491:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:498:3: warning: passing argument 1 of ‘port->serial->type->unthrottle’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:498:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_ioctl’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:511:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: warning: passing argument 1 of ‘port->serial->type->ioctl’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: warning: passing argument 2 of ‘port->serial->type->ioctl’ makes integer from pointer without a cast [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: note: expected ‘unsigned int’ but argument is of type ‘struct file *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: error: too many arguments to function ‘port->serial->type->ioctl’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_set_termios’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:535:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: warning: passing argument 1 of ‘port->serial->type->set_termios’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ 
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: warning: passing argument 2 of ‘port->serial->type->set_termios’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: note: expected ‘struct usb_serial_port *’ but argument is of type ‘struct termios *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: error: too few arguments to function ‘port->serial->type->set_termios’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_break’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:554:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:561:3: warning: passing argument 1 of ‘port->serial->type->break_ctl’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:561:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_tiocmget’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:629:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: warning: passing argument 1 of ‘port->serial->type->tiocmget’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: error: too many arguments to function ‘port->serial->type->tiocmget’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_tiocmset’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:651:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: warning: passing argument 1 of ‘port->serial->type->tiocmset’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: warning: passing argument 2 of ‘port->serial->type->tiocmset’ makes integer from pointer without a cast [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: note: expected ‘unsigned int’ but argument is of type ‘struct file *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: error: too many arguments to function ‘port->serial->type->tiocmset’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_port_work’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:697:12: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘port_release’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:709:2: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_probe’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:858:2: error: implicit declaration of function ‘lock_kernel’ [-Werror=implicit-function-declaration] 
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:861:3: error: implicit declaration of function ‘unlock_kernel’ [-Werror=implicit-function-declaration] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1034:3: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1182:23: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1182:51: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1183:3: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_disconnect’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1257:13: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1258:21: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: At top level: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1280:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1280:2: warning: (near initialization for ‘serial_ops.ioctl’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1281:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1281:2: warning: (near initialization for ‘serial_ops.set_termios’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1284:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1284:2: warning: (near initialization for ‘serial_ops.break_ctl’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: error: unknown field ‘read_proc’ specified in initializer /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: warning: (near initialization for ‘serial_ops.ioctl’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1287:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1287:2: warning: (near initialization for ‘serial_ops.tiocmget’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1288:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1288:2: warning: (near initialization for ‘serial_ops.tiocmset’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_init’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1352:2: error: implicit declaration of function ‘info’ [-Werror=implicit-function-declaration] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘fixup_generic’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:2: error: ‘struct usb_serial_driver’ has no member named ‘shutdown’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:2: error: ‘struct 
usb_serial_driver’ has no member named ‘shutdown’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:1: error: ‘usb_serial_generic_shutdown’ undeclared (first use in this function) /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:1: note: each undeclared identifier is reported only once for each function it appears in cc1: some warnings being treated as errors make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1 make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.0.0-12-generic' make: *** [modules] Error 2 Install finished! Any suggestions? Update: from the installation document: In some special cases, the setup package can’t automatically compile the driver, so you need to change the configurations yourself and manually compile the driver. Method: enter the directory /usr/local/bin/ztemtApp/zteusbserial and find the corresponding kernel version of the current system. I can see 2.6.27 2.6.28 2.6.29 2.6.30 2.6.31 2.6.32 2.6.33 2.6.34 2.6.35 2.6.36 2.6.37 2.6.38 2.6.39 below2.6.2 in the directory. Which version should I pick for the installation? The readme file says we have to do this to insert ztemt.ko into the kernel.
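For context only, and not part of the ZTE package or its documentation: the errors above are consistent with USB serial API changes between the 2.6.2x kernels the vendor sources target and the 3.0 headers they are being compiled against. Around 2.6.27 the tty_struct began to be passed to the port callbacks explicitly, and around 2.6.31 the single shutdown callback was replaced by disconnect and release, which is roughly why ‘shutdown’, ‘tty’ and ‘open_count’ no longer exist. A hypothetical sketch (identifiers are mine, not from the ztemt sources) of how a driver written against 3.0-era headers wires up these callbacks:

#include <linux/tty.h>
#include <linux/usb.h>
#include <linux/usb/serial.h>

/* Sketch only: illustrates the 3.0-era callback signatures, not the ZTE driver. */
static int sketch_open(struct tty_struct *tty, struct usb_serial_port *port)
{
        return 0;   /* the tty is now passed in explicitly */
}

static void sketch_disconnect(struct usb_serial *serial)
{
        /* first half of what the old .shutdown callback used to do */
}

static void sketch_release(struct usb_serial *serial)
{
        /* second half of what the old .shutdown callback used to do */
}

static struct usb_serial_driver sketch_driver = {
        .open       = sketch_open,
        .disconnect = sketch_disconnect,   /* .shutdown no longer exists */
        .release    = sketch_release,
};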

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events. CLSF - 17th October XFS update (led by Jeff Liu) XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS/Ext4+JBD2/Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock. Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime to reduce the CPU overhead. User namespace support. Both XFS and USERNS can be enabled in the kernel configuration beginning from Linux 3.10. Thanks to Dwight Engen for his efforts on this. Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e., with CRC enabled. CONFIG_XFS_WARN, a new lightweight runtime debugging option which can be deployed in a production environment. Readahead for log object recovery; this change can speed up the log replay process significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2). Bitter arguments ensued from this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because: Stable – XFS has a good record of stability over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe it to the XFS upstream code reviewers; they always perform serious code review as well as testing. Good performance for large/small files – the claim that XFS does not work very well for small files has already been an old story for years. Best choice (maybe) for distributed PB-scale filesystems – e.g., Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size. Best choice for large storage (>16TB) – Ext4 does not support a single file larger than around 15.95TB. Scalability – any objection that XFS is best on this point? :) XFS deals with transaction concurrency better than Ext4 – why? The maximum size of the log in XFS is 2038MB compared to 128MB in Ext4. Misc – Ext4 is widely used and has been proven fast/stable in various loads and scenarios, XFS just needs more customers, and Btrfs is still on the road to maturity. 
Ceph Introduction (led by Li Wang) This was a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems that it has not yet been widely deployed in production environments. Their major work is focused on inline data support to separate the metadata and data storage and reduce file access time; i.e., a file access needs two communications, fetching the metadata from the MDS and then getting the data from the OSD, and small file access is also limited by network latency. The solution is, for small files, to store the data with the metadata so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature, they have only run some small-scale testing but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in a production environment for large-scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even widely deployed at Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future. Improve Linux swap for flash storage (led by Shaohua Li) Because of high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap. SWAPOUT swap_map scan: swap_map is the in-memory data structure used to track swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages in the SSD case. Shaohua Li has changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSDs. IO pattern: In most cases, swap IO has an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps a reclaimer do sequential IO, and the block layer finds it easier to merge the IO. TLB flush: If we're reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESS) bit, second the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline. SWAPIN: A page fault does iodepth=1 sync IO, but it's a little wasteful to issue only a page-sized IO. The obvious solution is to do swap readahead. 
But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't always perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the change happens in the userspace API and leaves the in-kernel readahead unchanged (but I think some improvement can also be done here). SWAP discard: As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to an async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to the cluster. Misc: lock contention. With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But there is still some lock contention on very high speed SSDs because of the swapcache address_space lock. Zproject (led by Bob Liu) Bob gave us a very nice introduction to the current memory compression status. There are now 3 projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but they all have their own pros and cons. ZSWAP: It is implemented on top of the frontswap API and it uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied", and it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: it decompresses the page frame into two newly allocated pages and then writes them to the real swap device, but this can fail when allocating the two pages. ZRAM: Acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems which may not have any real swap device. Currently both ZRAM and ZSWAP are in the drivers/staging tree, and in the mm community there are some discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet. ZCACHE: Handles file page compression, but it was removed from staging recently. From industry (led by Tang Jie, LSI) An LSI engineer introduced several new products to us. The first is a RAID 5/6 card that uses full stripe writes to improve performance. The second one he introduced is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite, and the typical WA is 0.5. What's more, if you enable its Dynamic Logical Capacity function module, the controller can do data compression which is transparent to the upper layers. LSI testing shows that with this virtual capacity enabled a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing I think is worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address - using the standard system call mmap(). He said that it is very useful for database logging now. 
These kinds of NVM products have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize these nonvolatile memories. OCFS2 (led by Canquan Shen) Without a doubt, HuaWei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on a 32/64-node cluster, but they have also tried 128 nodes at the experimental stage. The major work they are doing is to support ATS (atomic test and set); it can work together with DLM at the same time. It looks like this idea is inspired by the VMware VMFS locking, i.e., http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html CLK - 18th October 2013 Improving Linux Development with Better Tools (Andi Kleen) This talk focused on how to find/solve bugs as the complexity of Linux grows. Generally, we can do this with the following kinds of tools: Static code checker tools, e.g., sparse, smatch, coccinelle, the clang checker, checkpatch, gcc -W/LTO, stanse. These can help check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow, false positives, and it may take a concentrated effort to get the false positives down. In particular, no static checker I found can follow indirect calls (“OO in C”, common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); } foo->do_foo(foo); Dynamic runtime checkers, e.g., thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers. Fuzzers/test suites, e.g., Trinity is a great tool, it finds many bugs, but it needs a manual model for each syscall. Modern fuzzers are built around automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf Debuggers/tracers to understand code, e.g., ftrace, which can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run it continuously during debugging. Tools to read/understand source, e.g., grep/cscope work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel); give us all “do_foo” instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers. XFS: The High Performance Enterprise File System (Jeff Liu) [slides] I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts reflecting the performance of XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability and comparison with other file systems. Linux Containers (Feng Gao) The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/PID/User, as well as the system API/ABI. For the userspace tools, he mainly focused on libvirt LXC rather than LXC itself. 
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned another two possible new namespaces in the future: the first is audit, though it is not yet clear whether it should be assigned to the user namespace or not. The other is about syslog, but the question is, do we really need it? In-memory Compression (Bob Liu) Same as at CLSF, a nice introduction that I have already mentioned above. Misc There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things. -- Jeff Liu

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
    This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the “Windows Azure Service Bus Developer Guide”, along with some other patterns I am working on. I’ve taken the pattern descriptions from the book “Enterprise Integration Patterns” by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, “Enterprise Integration Patterns: Past, Present and Future”, which is well worth a look. I’ll be covering more patterns in the coming weeks; I’m currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my “SOA, Connectivity and Integration using the Windows Azure Service Bus” course. There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages. Splitter A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows: How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item. The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or the trading partner that the sub message is to be sent to. Aggregator An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows: How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages. The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application. 
The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration. Scenario The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus currently supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations. Implementation The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below. Implementing the Splitter The splitter will take a large brokered message, and split the messages into a sequence of smaller sub-messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation. public class LargeMessageSender {     private static int SubMessageBodySize = 192 * 1024;     private QueueClient m_QueueClient;       public LargeMessageSender(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public void Send(BrokeredMessage message)     {         // Calculate the number of sub messages required.         long messageBodySize = message.Size;         int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);         if (messageBodySize % SubMessageBodySize != 0)         {             nrSubMessages++;         }           // Create a unique session Id.         string sessionId = Guid.NewGuid().ToString();         Console.WriteLine("Message session Id: " + sessionId);         Console.Write("Sending {0} sub-messages", nrSubMessages);           Stream bodyStream = message.GetBody<Stream>();         for (int streamOffest = 0; streamOffest < messageBodySize;             streamOffest += SubMessageBodySize)         {                                     // Get the stream chunk from the large message             long arraySize = (messageBodySize - streamOffest) > SubMessageBodySize                 ? 
SubMessageBodySize : messageBodySize - streamOffest;             byte[] subMessageBytes = new byte[arraySize];             int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);             MemoryStream subMessageStream = new MemoryStream(subMessageBytes);               // Create a new message             BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);             subMessage.SessionId = sessionId;               // Send the message             m_QueueClient.Send(subMessage);             Console.Write(".");         }         Console.WriteLine("Done!");     }} The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session, this session Id will be used for correlation in the aggregator. A for loop in then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true. Implementing the Aggregator The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.   public class LargeMessageReceiver {     private QueueClient m_QueueClient;       public LargeMessageReceiver(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public BrokeredMessage Receive()     {         // Create a memory stream to store the large message body.         MemoryStream largeMessageStream = new MemoryStream();           // Accept a message session from the queue.         MessageSession session = m_QueueClient.AcceptMessageSession();         Console.WriteLine("Message session Id: " + session.SessionId);         Console.Write("Receiving sub messages");           while (true)         {             // Receive a sub message             BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));               if (subMessage != null)             {                 // Copy the sub message body to the large message stream.                 Stream subMessageStream = subMessage.GetBody<Stream>();                 subMessageStream.CopyTo(largeMessageStream);                   // Mark the message as complete.                 subMessage.Complete();                 Console.Write(".");             }             else             {                 // The last message in the sequence is our completeness criteria.                 Console.WriteLine("Done!");                 break;             }         }                     // Create an aggregated message from the large message stream.         BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);         return largeMessage;     } }   The LargeMessageReceiver initialized using a QueueClient that is created by the receiving application. The receive method creates a memory stream that will be used to aggregate the large message body. 
The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As the AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session as accepted, the sub messages in the session are received, and their message body streams copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message, that is then returned to the receiving application. Testing the Implementation The splitter and aggregator are tested by creating a message sender and message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/, the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver console is fairly basic. The implementation of the main method of the sending application is shown below.   static void Main(string[] args) {     // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Open the input file.     FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);       // Create a BrokeredMessage for the file.     BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);       Console.WriteLine("Sending: " + AccountDetails.TestFile);     Console.WriteLine("Message body size: " + largeMessage.Size);     Console.WriteLine();         // Send the message with a LargeMessageSender     LargeMessageSender sender = new LargeMessageSender(queueClient);     sender.Send(largeMessage);       // Close the messaging facory.     factory.Close();  } The implementation of the main method of the receiving application is shown below. static void Main(string[] args) {       // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Create a LargeMessageReceiver and receive the message.     
LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);     BrokeredMessage largeMessage = receiver.Receive();       Console.WriteLine("Received message");     Console.WriteLine("Message body size: " + largeMessage.Size);       string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");     Console.WriteLine("Saving file: " + testFile);       // Save the message body as a file.     Stream largeMessageStream = largeMessage.GetBody<Stream>();     largeMessageStream.Seek(0, SeekOrigin.Begin);     FileStream fileOut = new FileStream(testFile, FileMode.Create);     largeMessageStream.CopyTo(fileOut);     fileOut.Close();       Console.WriteLine("Done!"); } In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and the message was sent as a sequence of 51 sub messages. When the receiving application is executed the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was created in the sending application. The messages have been aggregated to form a message with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file. Improvements to the Implementation The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design. Copying Message Header Properties When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then use the values from this first sub message to set the properties in the message header of the large message during the aggregation process; a rough sketch of this is shown at the end of this article. Using Asynchronous Methods The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods; however, doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence. Handling Exceptions In order to keep the code readable, no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.
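As a rough illustration of the header-copying improvement described above (a sketch only; the helper class name is mine and is not part of the guide's code, and it assumes that only the user-defined Properties collection needs to be carried across):

using System.Collections.Generic;
using Microsoft.ServiceBus.Messaging;

public static class MessagePropertyCopier
{
    // Copies the user-defined header properties from one BrokeredMessage to another.
    // The splitter could call this with (largeMessage, firstSubMessage) before sending,
    // and the aggregator with (firstSubMessage, largeMessage) once aggregation completes.
    public static void CopyProperties(BrokeredMessage source, BrokeredMessage target)
    {
        foreach (KeyValuePair<string, object> property in source.Properties)
        {
            target.Properties[property.Key] = property.Value;
        }
    }
}

System properties such as the SessionId would still be managed by the splitter and aggregator themselves, as they are in the implementation above.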

    Read the article

  • Developing custom MBeans to manage J2EE Applications (Part III)

    - by philippe Le Mouel
    This is the third and final part in a series of blog posts that demonstrates how to add management capability to your own application using JMX MBeans. In Part I we saw: How to implement a custom MBean to manage configuration associated with an application. How to package the resulting code and configuration as part of the application's ear file. How to register MBeans upon application startup, and unregister them upon application stop (or undeployment). How to use generic JMX clients such as JConsole to browse and edit our application's MBean. In Part II we saw: How to add localized descriptions to our MBean, MBean attributes, MBean operations and MBean operation parameters. How to specify meaningful names for our MBean operation parameters. We also touched on future enhancements that will simplify how we can implement localized MBeans. In this third and last part, we will re-write our MBean to simplify how we added localized descriptions. To do so we will take advantage of the functionality we already described in part II, which is now part of WebLogic 10.3.3.0. We will show how to take advantage of WebLogic's localization support to localize our MBeans based on the client's Locale, independently of the server's Locale. Each client will see MBean descriptions localized based on his/her own Locale. We will show how to achieve this using JConsole, and also using a sample programmatic JMX Java client. The complete code sample and associated build files for part III are available as a zip file. The code has been tested against WebLogic Server 10.3.3.0 and JDK6. To build and deploy our sample application, please follow the instructions provided in Part I, as they also apply to part III's code and associated zip file. Providing custom descriptions, take II In part II we localized our MBean descriptions by extending the StandardMBean class and overriding its many getDescription methods. WebLogic 10.3.3.0, similarly to JDK 7, can automatically localize MBean descriptions as long as those are specified according to the following conventions: Description resource bundle keys are named according to: MBean description: <MBeanInterfaceClass>.mbean MBean attribute description: <MBeanInterfaceClass>.attribute.<AttributeName> MBean operation description: <MBeanInterfaceClass>.operation.<OperationName> MBean operation parameter description: <MBeanInterfaceClass>.operation.<OperationName>.<ParameterName> MBean constructor description: <MBeanInterfaceClass>.constructor.<ConstructorName> MBean constructor parameter description: <MBeanInterfaceClass>.constructor.<ConstructorName>.<ParameterName> We also purposely named our resource bundle class MBeanDescriptions and included it as part of the same package as our MBean. 
We already followed the above conventions when creating our resource bundle in part II, and our default resource bundle class with English descriptions looks like: package blog.wls.jmx.appmbean; import java.util.ListResourceBundle; public class MBeanDescriptions extends ListResourceBundle { protected Object[][] getContents() { return new Object[][] { {"PropertyConfigMXBean.mbean", "MBean used to manage persistent application properties"}, {"PropertyConfigMXBean.attribute.Properties", "Properties associated with the running application"}, {"PropertyConfigMXBean.operation.setProperty", "Create a new property, or change the value of an existing property"}, {"PropertyConfigMXBean.operation.setProperty.key", "Name that identify the property to set."}, {"PropertyConfigMXBean.operation.setProperty.value", "Value for the property being set"}, {"PropertyConfigMXBean.operation.getProperty", "Get the value for an existing property"}, {"PropertyConfigMXBean.operation.getProperty.key", "Name that identify the property to be retrieved"} }; } } We have now also added a resource bundle with French localized descriptions: package blog.wls.jmx.appmbean; import java.util.ListResourceBundle; public class MBeanDescriptions_fr extends ListResourceBundle { protected Object[][] getContents() { return new Object[][] { {"PropertyConfigMXBean.mbean", "Manage proprietes sauvegarde dans un fichier disque."}, {"PropertyConfigMXBean.attribute.Properties", "Proprietes associee avec l'application en cour d'execution"}, {"PropertyConfigMXBean.operation.setProperty", "Construit une nouvelle proprietee, ou change la valeur d'une proprietee existante."}, {"PropertyConfigMXBean.operation.setProperty.key", "Nom de la propriete dont la valeur est change."}, {"PropertyConfigMXBean.operation.setProperty.value", "Nouvelle valeur"}, {"PropertyConfigMXBean.operation.getProperty", "Retourne la valeur d'une propriete existante."}, {"PropertyConfigMXBean.operation.getProperty.key", "Nom de la propriete a retrouver."} }; } } So now we can just remove the many getDescriptions methods from our MBean code, and have a much cleaner: package blog.wls.jmx.appmbean; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.File; import java.net.URL; import java.util.Map; import java.util.HashMap; import java.util.Properties; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.MBeanRegistration; import javax.management.StandardMBean; import javax.management.MBeanOperationInfo; import javax.management.MBeanParameterInfo; public class PropertyConfig extends StandardMBean implements PropertyConfigMXBean, MBeanRegistration { private String relativePath_ = null; private Properties props_ = null; private File resource_ = null; private static Map operationsParamNames_ = null; static { operationsParamNames_ = new HashMap(); operationsParamNames_.put("setProperty", new String[] {"key", "value"}); operationsParamNames_.put("getProperty", new String[] {"key"}); } public PropertyConfig(String relativePath) throws Exception { super(PropertyConfigMXBean.class , true); props_ = new Properties(); relativePath_ = relativePath; } public String setProperty(String key, String value) throws IOException { String oldValue = null; if (value == null) { oldValue = String.class.cast(props_.remove(key)); } else { oldValue = String.class.cast(props_.setProperty(key, value)); } save(); return oldValue; } public String 
getProperty(String key) { return props_.getProperty(key); } public Map getProperties() { return (Map) props_; } private void load() throws IOException { InputStream is = new FileInputStream(resource_); try { props_.load(is); } finally { is.close(); } } private void save() throws IOException { OutputStream os = new FileOutputStream(resource_); try { props_.store(os, null); } finally { os.close(); } } public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception { // MBean must be registered from an application thread // to have access to the application ClassLoader ClassLoader cl = Thread.currentThread().getContextClassLoader(); URL resourceUrl = cl.getResource(relativePath_); resource_ = new File(resourceUrl.toURI()); load(); return name; } public void postRegister(Boolean registrationDone) { } public void preDeregister() throws Exception {} public void postDeregister() {} protected String getParameterName(MBeanOperationInfo op, MBeanParameterInfo param, int sequence) { return operationsParamNames_.get(op.getName())[sequence]; } } The only reason we are still extending the StandardMBean class, is to override the default values for our operations parameters name. If this isn't a concern, then one could just write the following code: package blog.wls.jmx.appmbean; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.File; import java.net.URL; import java.util.Properties; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.MBeanRegistration; import javax.management.StandardMBean; import javax.management.MBeanOperationInfo; import javax.management.MBeanParameterInfo; public class PropertyConfig implements PropertyConfigMXBean, MBeanRegistration { private String relativePath_ = null; private Properties props_ = null; private File resource_ = null; public PropertyConfig(String relativePath) throws Exception { props_ = new Properties(); relativePath_ = relativePath; } public String setProperty(String key, String value) throws IOException { String oldValue = null; if (value == null) { oldValue = String.class.cast(props_.remove(key)); } else { oldValue = String.class.cast(props_.setProperty(key, value)); } save(); return oldValue; } public String getProperty(String key) { return props_.getProperty(key); } public Map getProperties() { return (Map) props_; } private void load() throws IOException { InputStream is = new FileInputStream(resource_); try { props_.load(is); } finally { is.close(); } } private void save() throws IOException { OutputStream os = new FileOutputStream(resource_); try { props_.store(os, null); } finally { os.close(); } } public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception { // MBean must be registered from an application thread // to have access to the application ClassLoader ClassLoader cl = Thread.currentThread().getContextClassLoader(); URL resourceUrl = cl.getResource(relativePath_); resource_ = new File(resourceUrl.toURI()); load(); return name; } public void postRegister(Boolean registrationDone) { } public void preDeregister() throws Exception {} public void postDeregister() {} } Note: The above would also require changing the operations parameters name in the resource bundle classes. 
For instance: PropertyConfigMXBean.operation.setProperty.key would become: PropertyConfigMXBean.operation.setProperty.p0 Client based localization When accessing our MBean using JConsole started with the following command line: jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar: $WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -debug We see that our MBean descriptions are localized according to the WebLogic's server Locale. English in this case: Note: Consult Part I for information on how to use JConsole to browse/edit our MBean. Now if we specify the client's Locale as part of the JConsole command line as follow: jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar: $WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -J-Dweblogic.management.remote.locale=fr-FR -debug We see that our MBean descriptions are now localized according to the specified client's Locale. French in this case: We use the weblogic.management.remote.locale system property to specify the Locale that should be associated with the cient's JMX connections. The value is composed of the client's language code and its country code separated by the - character. The country code is not required, and can be omitted. For instance: -Dweblogic.management.remote.locale=fr We can also specify the client's Locale using a programmatic client as demonstrated below: package blog.wls.jmx.appmbean.client; import javax.management.MBeanServerConnection; import javax.management.ObjectName; import javax.management.MBeanInfo; import javax.management.remote.JMXConnector; import javax.management.remote.JMXServiceURL; import javax.management.remote.JMXConnectorFactory; import java.util.Hashtable; import java.util.Set; import java.util.Locale; public class JMXClient { public static void main(String[] args) throws Exception { JMXConnector jmxCon = null; try { JMXServiceURL serviceUrl = new JMXServiceURL( "service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime"); System.out.println("Connecting to: " + serviceUrl); // properties associated with the connection Hashtable env = new Hashtable(); env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote"); String[] credentials = new String[2]; credentials[0] = "weblogic"; credentials[1] = "weblogic"; env.put(JMXConnector.CREDENTIALS, credentials); // specifies the client's Locale env.put("weblogic.management.remote.locale", Locale.FRENCH); jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env); jmxCon.connect(); MBeanServerConnection con = jmxCon.getMBeanServerConnection(); Set mbeans = con.queryNames( new ObjectName( "blog.wls.jmx.appmbean:name=myAppProperties,type=PropertyConfig,*"), null); for (ObjectName mbeanName : mbeans) { System.out.println("\n\nMBEAN: " + mbeanName); MBeanInfo minfo = con.getMBeanInfo(mbeanName); System.out.println("MBean Description: "+minfo.getDescription()); System.out.println("\n"); } } finally { // release the connection if (jmxCon != null) jmxCon.close(); } } } The above client code is part of the zip file associated with this blog, and can be run using the provided client.sh script. 
The resulting output is shown below: $ ./client.sh Connecting to: service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime MBEAN: blog.wls.jmx.appmbean:type=PropertyConfig,name=myAppProperties MBean Description: Manage proprietes sauvegarde dans un fichier disque. $ Miscellaneous Using Description annotation to specify MBean descriptions Earlier we have seen how to name our MBean descriptions resource keys, so that WebLogic 10.3.3.0 automatically uses them to localize our MBean. In some cases we might want to implicitly specify the resource key, and resource bundle. For instance when operations are overloaded, and the operation name is no longer sufficient to uniquely identify a single operation. In this case we can use the Description annotation provided by WebLogic as follow: import weblogic.management.utils.Description; @Description(resourceKey="myapp.resources.TestMXBean.description", resourceBundleBaseName="myapp.resources.MBeanResources") public interface TestMXBean { @Description(resourceKey="myapp.resources.TestMXBean.threshold.description", resourceBundleBaseName="myapp.resources.MBeanResources" ) public int getthreshold(); @Description(resourceKey="myapp.resources.TestMXBean.reset.description", resourceBundleBaseName="myapp.resources.MBeanResources") public int reset( @Description(resourceKey="myapp.resources.TestMXBean.reset.id.description", resourceBundleBaseName="myapp.resources.MBeanResources", displayNameKey= "myapp.resources.TestMXBean.reset.id.displayName.description") int id); } The Description annotation should be applied to the MBean interface. It can be used to specify MBean, MBean attributes, MBean operations, and MBean operation parameters descriptions as demonstrated above. Retrieving the Locale associated with a JMX operation from the MBean code There are several cases where it is necessary to retrieve the Locale associated with a JMX call from the MBean implementation. For instance this can be useful when localizing exception messages. This can be done as follow: import weblogic.management.mbeanservers.JMXContextUtil; ...... // some MBean method implementation public String setProperty(String key, String value) throws IOException { Locale callersLocale = JMXContextUtil.getLocale(); // use callersLocale to localize Exception messages or // potentially some return values such a Date .... } Conclusion With this last part we conclude our three part series on how to write MBeans to manage J2EE applications. We are far from having exhausted this particular topic, but we have gone a long way and are now capable to take advantage of the latest functionality provided by WebLogic's application server to write user friendly MBeans.

    Read the article

  • MacBook Pro Late 2009 SATA Resets, Slowness (Is my motherboard dying on both machines?)

    - by A Student at a University
    My MacBook Pro runs slower the longer it's on. I am getting kernel warnings. Some, but not all, resets correlate with AC power connects and disconnects. I don't think the warnings do. (How do I tell?) What are these errors? What causes them? Can this damage the drive or corrupt data? What is it seeing that motivates these? 02:37:16[ 0.791992] ahci 0000:00:0b.0: PCI INT A -> Link[LSI0] -> GSI 20 (level, low) -> IRQ 20 02:37:16[ 0.792047] ahci 0000:00:0b.0: irq 43 for MSI/MSI-X 02:37:16[ 0.792053] ahci 0000:00:0b.0: controller can't do PMP, turning off CAP_PMP 02:37:16[ 0.792104] ahci 0000:00:0b.0: AHCI 0001.0200 32 slots 6 ports 1.5 Gbps 0x3 impl IDE mode 02:37:16[ 0.792107] ahci 0000:00:0b.0: flags: 64bit ncq sntf pm led pio slum part boh 02:37:16[ 0.792111] ahci 0000:00:0b.0: setting latency timer to 64 02:37:16[ 0.813473] scsi0 : ahci 02:37:16[ 0.823340] scsi1 : ahci 02:37:16[ 0.848164] ata1: SATA max UDMA/133 abar m8192@0xe7484000 port 0xe7484100 irq 43 02:37:16[ 0.848166] ata2: SATA max UDMA/133 abar m8192@0xe7484000 port 0xe7484180 irq 43 02:37:16[ 1.190132] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16[ 1.190153] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16[ 1.213568] ata1.00: ATA-8: OCZ-VERTEX2, 1.23, max UDMA/133 02:37:16[ 1.213572] ata1.00: 195371568 sectors, multi 1: LBA48 NCQ (depth 31/32) 02:37:16[ 1.227293] ata2.00: ATA-8: ST9500420ASG, 0002SDM1, max UDMA/133 02:37:16[ 1.227297] ata2.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 31/32) 02:37:16[ 1.229570] ata2.00: configured for UDMA/133 02:37:16[ 1.240120] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5850000 action 0xe frozen 02:37:16[ 1.240123] ata2: irq_stat 0x00000040, connection status changed 02:37:16[ 1.240127] ata2: SError: { PHYRdyChg CommWake LinkSeq TrStaTrns DevExch } 02:37:16[ 1.240133] ata2: hard resetting link 02:37:16[ 1.260738] ata1.00: configured for UDMA/133 02:37:16[ 1.280111] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5850000 action 0xe frozen 02:37:16[ 1.280114] ata1: irq_stat 0x00000040, connection status changed 02:37:16[ 1.280118] ata1: SError: { PHYRdyChg CommWake LinkSeq TrStaTrns DevExch } 02:37:16[ 1.280122] ata1: hard resetting link 02:37:16[ 1.990101] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16[ 1.994215] ata2.00: configured for UDMA/133 02:37:16[ 1.994220] ata2: EH complete 02:37:16[ 2.030097] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16[ 2.090773] ata1.00: configured for UDMA/133 02:37:16[ 2.090778] ata1: EH complete 02:37:16[ 2.090931] scsi 0:0:0:0: Direct-Access ATA OCZ-VERTEX2 1.23 PQ: 0 ANSI: 5 02:37:16[ 2.091045] sd 0:0:0:0: Attached scsi generic sg0 type 0 02:37:16[ 2.091121] sd 0:0:0:0: [sda] 195371568 512-byte logical blocks: (100 GB/93.1 GiB) 02:37:16[ 2.091159] scsi 1:0:0:0: Direct-Access ATA ST9500420ASG 0002 PQ: 0 ANSI: 5 02:37:16[ 2.091163] sd 0:0:0:0: [sda] Write Protect is off 02:37:16[ 2.091165] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 02:37:16[ 2.091183] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 02:37:16[ 2.091252] sd 1:0:0:0: Attached scsi generic sg1 type 0 02:37:16[ 2.091446] sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB) 02:37:16[ 2.091580] sd 1:0:0:0: [sdb] Write Protect is off 02:37:16[ 2.091582] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 02:37:16[ 2.091637] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 02:37:16[ 2.093140] sd 0:0:0:0: [sda] Attached SCSI disk 02:37:16[ 2.093773] sd 
1:0:0:0: [sdb] Attached SCSI disk 02:37:16[ 2.693899] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) 02:37:16[ 5.483492] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro 02:37:16[ 7.905040] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null) 02:37:25[ 19.553095] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:37:25[ 19.555266] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:37:25[ 19.641532] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen t4 02:37:25[ 19.641532] ata1: irq_stat 0x00000040, connection status changed 02:37:25[ 19.641532] ata1: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:37:25[ 19.641533] ata1: hard resetting link 02:37:25[ 19.642076] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen t4 02:37:25[ 19.642078] ata2: irq_stat 0x00000040, connection status changed 02:37:25[ 19.642081] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:37:25[ 19.642084] ata2: hard resetting link 02:37:26[ 20.392606] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:26[ 20.392610] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:26[ 20.396697] ata2.00: configured for UDMA/133 02:37:26[ 20.396703] ata2: EH complete 02:37:26[ 20.451491] ata1.00: configured for UDMA/133 02:37:26[ 20.451498] ata1: EH complete 02:37:30[ 24.563725] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:37:30[ 24.565939] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:37:30[ 24.627236] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5900000 action 0xe frozen t4 02:37:30[ 24.627240] ata1: irq_stat 0x00000040, connection status changed 02:37:30[ 24.627242] ata1: SError: { Dispar LinkSeq TrStaTrns DevExch } 02:37:30[ 24.627246] ata1: hard resetting link 02:37:30[ 24.632241] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen t4 02:37:30[ 24.632244] ata2: irq_stat 0x00000040, connection status changed 02:37:30[ 24.632247] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:37:30[ 24.632250] ata2: hard resetting link 02:37:31[ 25.372582] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:31[ 25.382615] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:31[ 25.386782] ata2.00: configured for UDMA/133 02:37:31[ 25.386788] ata2: EH complete 02:37:31[ 25.431668] ata1.00: configured for UDMA/133 02:37:31[ 25.431674] ata1: EH complete 02:45:54[ 529.141844] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:45:55[ 529.544529] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 02:45:55[ 529.622561] ata1: limiting SATA link speed to 1.5 Gbps 02:45:55[ 529.622568] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5850000 action 0xe frozen 02:45:55[ 529.622572] ata1: irq_stat 0x00400040, connection status changed 02:45:55[ 529.622576] ata1: SError: { PHYRdyChg CommWake LinkSeq TrStaTrns DevExch } 02:45:55[ 529.622583] ata1: hard resetting link 02:45:55[ 529.622609] ata2: limiting SATA link speed to 1.5 Gbps 02:45:55[ 529.622613] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen 02:45:55[ 529.622616] ata2: irq_stat 0x00000040, connection status changed 02:45:55[ 529.622620] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:45:55[ 529.622624] ata2: hard resetting link 02:45:56[ 530.380135] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:56[ 530.380157] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:56[ 530.384305] ata2.00: configured for UDMA/133 02:45:56[ 530.384314] ata2: EH complete 02:45:56[ 530.399225] ata1.00: configured for UDMA/133 02:45:56[ 530.399233] ata1: EH complete 02:45:58[ 532.395990] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:45:58[ 532.518270] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:45:58[ 532.590968] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5840000 action 0xe frozen t4 02:45:58[ 532.590973] ata1: irq_stat 0x00000040, connection status changed 02:45:58[ 532.590977] ata1: SError: { CommWake LinkSeq TrStaTrns DevExch } 02:45:58[ 532.590983] ata1: hard resetting link 02:45:58[ 532.591034] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen t4 02:45:58[ 532.591037] ata2: irq_stat 0x00000040, connection status changed 02:45:58[ 532.591041] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:45:58[ 532.591045] ata2: hard resetting link 02:45:59[ 533.340147] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:59[ 533.340168] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:59[ 533.344416] ata2.00: configured for UDMA/133 02:45:59[ 533.344424] ata2: EH complete 02:45:59[ 533.360839] ata1.00: configured for UDMA/133 02:45:59[ 533.360847] ata1: EH complete 02:45:59[ 533.584449] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:45:59[ 533.586999] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:45:59[ 533.660117] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen 02:45:59[ 533.660122] ata2: irq_stat 0x00000040, connection status changed 02:45:59[ 533.660126] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:45:59[ 533.660132] ata2: hard resetting link 02:45:59[ 533.660141] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5850000 action 0xe frozen 02:45:59[ 533.660143] ata1: irq_stat 0x00000040, connection status changed 02:45:59[ 533.660147] ata1: SError: { PHYRdyChg CommWake LinkSeq TrStaTrns DevExch } 02:45:59[ 533.660151] ata1: hard resetting link 02:46:00[ 534.412536] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:00[ 534.412562] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:00[ 534.416768] ata2.00: configured for UDMA/133 02:46:00[ 534.416777] ata2: EH complete 02:46:00[ 534.431396] ata1.00: configured for UDMA/133 02:46:00[ 534.431401] ata1: EH complete 02:46:03[ 537.384649] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:03[ 537.504214] EXT4-fs (dm-2): re-mounted. 
Opts: commit=600 02:46:03[ 537.585992] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5900000 action 0xe frozen t4 02:46:03[ 537.585996] ata1: irq_stat 0x00000040, connection status changed 02:46:03[ 537.585999] ata1: SError: { Dispar LinkSeq TrStaTrns DevExch } 02:46:03[ 537.586002] ata1: hard resetting link 02:46:03[ 537.586028] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen t4 02:46:03[ 537.586030] ata2: irq_stat 0x00000040, connection status changed 02:46:03[ 537.586033] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:46:03[ 537.586036] ata2: hard resetting link 02:46:04[ 538.330147] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:04[ 538.330168] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:04[ 538.334389] ata2.00: configured for UDMA/133 02:46:04[ 538.334398] ata2: EH complete 02:46:04[ 538.343511] ata1.00: configured for UDMA/133 02:46:04[ 538.343519] ata1: EH complete 02:46:04[ 538.456413] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:04[ 538.459404] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:46:04[ 538.540138] ata1.00: limiting speed to UDMA/100:PIO4 02:46:04[ 538.540144] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5850000 action 0xe frozen 02:46:04[ 538.540148] ata1: irq_stat 0x00000040, connection status changed 02:46:04[ 538.540153] ata1: SError: { PHYRdyChg CommWake LinkSeq TrStaTrns DevExch } 02:46:04[ 538.540159] ata1: hard resetting link 02:46:04[ 538.540202] ata2.00: limiting speed to UDMA/100:PIO4 02:46:04[ 538.540207] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen 02:46:04[ 538.540211] ata2: irq_stat 0x00000040, connection status changed 02:46:04[ 538.540215] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:46:04[ 538.540220] ata2: hard resetting link 02:46:05[ 539.290054] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:05[ 539.290041] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:05[ 539.294100] ata2.00: configured for UDMA/100 02:46:05[ 539.294106] ata2: EH complete 02:46:05[ 539.314125] ata1.00: configured for UDMA/100 02:46:05[ 539.314132] ------------[ cut here ]------------ 02:46:05[ 539.314140] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 02:46:05[ 539.314144] Hardware name: MacBookPro5,3 02:46:05[ 539.314146] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 02:46:05[ 539.314221] Pid: 202, comm: scsi_eh_0 Tainted: P 2.6.35-25-generic #44-Ubuntu 02:46:05[ 539.314224] Call Trace: 02:46:05[ 539.314233] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 02:46:05[ 539.314237] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 02:46:05[ 539.314242] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 02:46:05[ 539.314246] 
[<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 02:46:05[ 539.314256] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 02:46:05[ 539.314261] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 02:46:05[ 539.314266] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 02:46:05[ 539.314270] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 02:46:05[ 539.314275] [<ffffffff8107f266>] kthread+0x96/0xa0 02:46:05[ 539.314280] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 02:46:05[ 539.314284] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 02:46:05[ 539.314288] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10 02:46:05[ 539.314291] ---[ end trace 76dbffc2d5d49d9b ]--- 02:46:05[ 539.314296] ata1: EH complete 02:46:12[ 547.040091] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen 02:46:12[ 547.040098] ata1.00: failed command: IDENTIFY DEVICE 02:46:12[ 547.040106] ata1.00: cmd ec/00:00:00:00:00/00:00:00:00:00/40 tag 0 pio 512 in 02:46:12[ 547.040108] res 40/00:01:00:00:00/00:00:00:00:00/e0 Emask 0x4 (timeout) 02:46:12[ 547.040111] ata1.00: status: { DRDY } 02:46:12[ 547.040117] ata1: hard resetting link 02:46:13[ 547.390144] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:13[ 547.408430] ata1.00: configured for UDMA/100 02:46:13[ 547.408438] ------------[ cut here ]------------ 02:46:13[ 547.408447] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 02:46:13[ 547.408451] Hardware name: MacBookPro5,3 02:46:13[ 547.408453] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 02:46:13[ 547.408528] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 02:46:13[ 547.408531] Call Trace: 02:46:13[ 547.408540] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 02:46:13[ 547.408544] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 02:46:13[ 547.408549] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 02:46:13[ 547.408553] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 02:46:13[ 547.408563] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 02:46:13[ 547.408567] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 02:46:13[ 547.408572] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 02:46:13[ 547.408577] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 02:46:13[ 547.408582] [<ffffffff8107f266>] kthread+0x96/0xa0 02:46:13[ 547.408587] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 02:46:13[ 547.408591] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 02:46:13[ 547.408595] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10 02:46:13[ 547.408598] ---[ end trace 76dbffc2d5d49d9c ]--- 02:46:13[ 547.408620] ata1: EH complete 02:46:13[ 547.562470] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:13[ 547.671380] EXT4-fs (dm-2): re-mounted. 
Opts: commit=600 02:46:13[ 547.738198] ata1.00: limiting speed to UDMA/33:PIO4 02:46:13[ 547.738204] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5800000 action 0xe frozen t4 02:46:13[ 547.738208] ata1: irq_stat 0x00000040, connection status changed 02:46:13[ 547.738212] ata1: SError: { LinkSeq TrStaTrns DevExch } 02:46:13[ 547.738218] ata1: hard resetting link 02:46:13[ 547.738262] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5900000 action 0xe frozen t4 02:46:13[ 547.738265] ata2: irq_stat 0x00000040, connection status changed 02:46:13[ 547.738269] ata2: SError: { Dispar LinkSeq TrStaTrns DevExch } 02:46:13[ 547.738274] ata2: hard resetting link 02:46:14[ 548.482561] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:14[ 548.484083] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:14[ 548.486809] ata2.00: configured for UDMA/100 02:46:14[ 548.486818] ata2: EH complete 02:46:14[ 548.498998] ata1.00: configured for UDMA/33 02:46:14[ 548.499004] ata1: EH complete 02:46:18[ 552.410499] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:18[ 552.522521] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:18[ 552.529674] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5800000 action 0xe frozen t4 02:46:18[ 552.529678] ata1: irq_stat 0x00000040, connection status changed 02:46:18[ 552.529680] ata1: SError: { LinkSeq TrStaTrns DevExch } 02:46:18[ 552.529684] ata1: hard resetting link 02:46:18[ 552.529716] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5800000 action 0xe frozen t4 02:46:18[ 552.529718] ata2: irq_stat 0x00000040, connection status changed 02:46:18[ 552.529720] ata2: SError: { LinkSeq TrStaTrns DevExch } 02:46:18[ 552.529723] ata2: hard resetting link 02:46:19[ 553.280059] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:19[ 553.280068] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:19[ 553.284141] ata2.00: configured for UDMA/100 02:46:19[ 553.284150] ata2: EH complete 02:46:19[ 553.301629] ata1.00: configured for UDMA/33 02:46:19[ 553.301637] ata1: EH complete

    Read the article

  • Creating Item Templates as Visual Studio 2010 Extensions

    - by maziar
    Technorati Tags: Visual Studio 2010 Extension, T4 Template, VSIX, Item Template Wizard

This blog post briefly introduces the creation of an item template as a Visual Studio 2010 extension.

Problem specification

Assume you are writing a framework for data-oriented applications and you decide to keep all your application messages in a SQL Server database table. After creating the table, you create a class in your framework for getting a message by a string key:

var message = FrameworkMessages.Get("ChangesSavedSuccess");

Everyone would say this code is error prone, because the message keys are not strong-typed, and mistyped keys create errors in the application that are not caught in tests. So we look for a way to make it strong-typed, i.e. create a class we can use like this:

var message = Messages.ChangesSavedSuccess;

In the Messages class the code looks like this:

public string ChangesSavedSuccess
{
    get { return FrameworkMessages.Get("ChangesSavedSuccess"); }
}

And clearly, we are not going to create the Messages class manually; we need a class generator for it.

Now assume that the applications that use our framework contain multiple sub-systems, so each sub-system needs its own strong-typed message class that calls the FrameworkMessages.Get method. We would therefore like to make our code generator an Item Template, so that each developer can easily add the item to his project with no other work necessary.

Solution

We create a T4 Text Template to generate our strong-typed class from the database, then create a Visual Studio Item Template with this generator and publish it.

What Are T4 Templates

You might already be familiar with T4 templates; if so, you can skip this section. A T4 Text Template is a Visual Studio file type (.tt) that generates output text. The file is a mixture of text blocks and code logic (in C# or VB). For example, you can generate HTML files, C# classes, resource files and so on with a T4 template.

Syntax highlighting

In Visual Studio 2010 a T4 template will not be syntax highlighted by default, and no auto-complete is supported for it. However, there is a fine Visual Studio extension named 'Visual T4' which can be downloaded for free from the VisualStudioGallery. This tool offers IntelliSense, syntax coloring, validation, transformation preview and more for T4 templates.

How Item Templates work in Visual Studio

Visual Studio extensions allow us to add functionality to Visual Studio. In our case we need to create a .vsix file that adds a template to the Visual Studio item templates. Item templates are zip files containing the template file and a metadata file with the .vstemplate extension; the .vstemplate file is an XML file that provides information about the template. A .vsix file is also a zip file (renamed to .vsix) that is opened with the Visual Studio extension installer. (Re-installing a .vsix file requires the old one to be uninstalled from VS: Tools > Extension Manager.) Installing a .vsix requires Visual Studio to be closed and re-opened to take effect. The Visual Studio extension installer will find the item template's zip file and copy it to Visual Studio's item templates folder. You can find the other Visual Studio templates in [<VS Install Path>\Common7\IDE\ItemTemplates] and you can edit them, but be very careful with the VS default templates.

How Can I Create a VSIX file

1. Visual Studio SDK

Depending on your Visual Studio version, you need to download the Microsoft Visual Studio SDK.
Note that if you have VS 2010 SP1, you will need to download and install VS 2010 SP1 SDK; VS 2010 SDK will not be installed (unless you change registry value that indicated your service pack number). Here is the link for VS 2010 SP1 SDK. After downloading, Run it and follow the wizard to complete the installation.   2. Create the file you want to make it an Item Template Create a project (or use an existing one) and add you file, edit it to make it work fine.   Back to our own problem, we need to create a T4 (.tt) template. VS-Prok: Add > New Item > General > Text Template Type a file name, ex. Message.tt, and press Add. Create the T4 template file (in this blog I do not intend to include T4 syntaxes so I just write down the code which is clear enough for you to understand)   <#@ template debug="false" hostspecific="true" language="C#" #> <#@ output extension=".cs" #> <#@ Assembly Name="System.Data" #> <#@ Import Namespace="System.Data.SqlClient" #> <#@ Import Namespace="System.Text" #> <#@ Import Namespace="System.IO" #> <#     var connectionString = "server=Maziar-PC; Database=MyDatabase; Integrated Security=True";     var systemName = "Sys1";     var builder = new StringBuilder();     using (var connection = new SqlConnection(connectionString))     {         connection.Open();         var command = connection.CreateCommand();         command.CommandText = string.Format("SELECT [Key] FROM [Message] WHERE System = '{0}'", systemName);         var reader = command.ExecuteReader();         while (reader.Read())         {             builder.AppendFormat("        public static string {0} {{ get {{ return FrameworkMessages.Get(\"{0}\"); }} }}\r\n", reader[0]);         }     } #> namespace <#= System.Runtime.Remoting.Messaging.CallContext.LogicalGetData("NamespaceHint") #> {     public static class <#= Path.GetFileNameWithoutExtension(Host.TemplateFile) #>     { <#= builder.ToString() #>     } } As you can see the T4 template connects to a database, reads message keys and generates a class. Here is the output: namespace MyProject.MyFolder {     public static class Messages     {         public static string ChangesSavedSuccess { get { return FrameworkMessages.Get("ChangesSavedSuccess"); } }         public static string ErrorSavingChanges { get { return FrameworkMessages.Get("ErrorSavingChanges"); } }     } }   The output looks fine but there is one problem. The connectionString and systemName are hard coded. so how can I create an flexible item template? One of features of item templates in visual studio is that you can create a designer wizard for your item template, so I can get connection information and system name there. now lets go on creating the vsix file.   3. Create Template In visual studio click on File > Export Template a wizard will show up. if first step click on Item Template on in the combo box select the project that contains Messages.tt. click next. Select Messages.tt from list in second step. click next. In the third step, you should choose References. For this template, System and System.Data are needed so choose them. click next. write down template name, description, if you like add a .ico file as the icon file and also preview image. Uncheck automatically add the templare … . Copy the output location in clip board. click finish.     4. Create VSIX Project In VS, Click File > New > Project > Extensibility > VSIX Project Type a name, ex. FrameworkMessages, Location, etc. The project will include a .vsixmanifest file. 
Fill in fields like Author, Product Name, Description, etc.   In Content section, click on Add Content. choose content type as Item Template. choose source as file. remember you have the template file address in clipboard? now paste it in front of file. click OK.     5. Build VSIX Project That’s it, build the project and go to output directory. You see a .vsix file. you can run it now. After restarting VS, if you click on a project > Add > New Item, you will see your item in list and you can add it. When you add this item to a project, if it has not references to System and System.Data they will be added. but the problem mentioned in step 2 is seen now.     6. Create Design Wizard for your Item Template Create a project i.e. Windows Application named ‘Framework.Messages.Design’, but better change its output type to Class Library. Add References to Microsoft.VisualStudio.TemplateWizardInterface and envdte Add a class Named MessagesDesigner in your project and Implement IWizard interface in it. This is what you should write: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.VisualStudio.TemplateWizard; using EnvDTE; namespace Framework.Messages.Design {     class MessageDesigner : IWizard     {         private bool CanAddProjectItem;         public void RunStarted(object automationObject, Dictionary<string, string> replacementsDictionary, WizardRunKind runKind, object[] customParams)         {             // Prompt user for Connection String and System Name in a Windows form (ShowDialog) here             // (try to provide good interface)             // if user clicks on cancel of your windows form return;             string connectionString = "connection;string"; // Set value from the form             string systemName = "system;name"; // Set value from the form             CanAddProjectItem = true;             replacementsDictionary.Add("$connectionString$", connectionString);             replacementsDictionary.Add("$systemName$", systemName);         }         public bool ShouldAddProjectItem(string filePath)         {             return CanAddProjectItem;         }         public void BeforeOpeningFile(ProjectItem projectItem)         {         }         public void ProjectFinishedGenerating(Project project)         {         }         public void ProjectItemFinishedGenerating(ProjectItem projectItem)         {         }         public void RunFinished()         {         }     } }   before your code runs  replacementsDictionary contains list of default template parameters. After that, two other parameters are added. Now build this project and copy the output assembly to [<VS Install Path>\Common7\IDE] folder.   your designer is ready.     The template that you had created is now added to your VSIX project. In windows explorer open your template zip file (extract it somewhere). open the .vstemplate file. first of all remove <ProjectItem SubType="Code" TargetFileName="$fileinputname$.cs" ReplaceParameters="true">Messages.cs</ProjectItem> because the .cs file is not to be intended to be a part of template and it will be generated. change value of ReplaceParameters for your .tt file to true to enable parameter replacement in this file. now right after </TemplateContent> end element, write this: <WizardExtension>   <Assembly>Framework.Messages.Design</Assembly>   <FullClassName>Framework.Messages.Design.MessageDesigner</FullClassName> </WizardExtension>   one other thing that you should do is to edit your .tt file and remove your .cs file. 
Lines 8 and 9 of your .tt file should be:

    var connectionString = "$connectionString$";
    var systemName = "$systemName$";

These parameters will be replaced when the item is added to a project. Save the contents to a zip file with the same file name and replace the original file.

Now build your VSIX project again, uninstall your extension, close VS, and run the new .vsix file. Open VS and add an item of type Messages to your project: your wizard form will show up. Fill in the fields and click OK, and the values are replaced in the .tt file that gets added.

That's it. I tried hard to keep this post brief; I hope it was not too long.

Cheers Maziar
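The RunStarted method shown earlier leaves the connection-string and system-name prompt as a comment. Below is a minimal sketch of such a dialog, assuming a plain WinForms form; the form and control names are illustrative and not part of the original post.

using System.Windows.Forms;

namespace Framework.Messages.Design
{
    // Minimal prompt used by MessageDesigner.RunStarted (sketch only).
    public class ConnectionPromptForm : Form
    {
        private readonly TextBox connectionStringBox = new TextBox { Left = 10, Top = 10, Width = 360 };
        private readonly TextBox systemNameBox = new TextBox { Left = 10, Top = 40, Width = 360 };

        public ConnectionPromptForm()
        {
            Text = "Messages item settings";
            var ok = new Button { Text = "OK", Left = 10, Top = 70, DialogResult = DialogResult.OK };
            var cancel = new Button { Text = "Cancel", Left = 90, Top = 70, DialogResult = DialogResult.Cancel };
            Controls.Add(connectionStringBox);
            Controls.Add(systemNameBox);
            Controls.Add(ok);
            Controls.Add(cancel);
            AcceptButton = ok;
            CancelButton = cancel;
        }

        public string ConnectionString { get { return connectionStringBox.Text; } }
        public string SystemName { get { return systemNameBox.Text; } }
    }
}

// Inside RunStarted (sketch):
//     using (var prompt = new ConnectionPromptForm())
//     {
//         if (prompt.ShowDialog() != DialogResult.OK) return;   // user cancelled; item will not be added
//         replacementsDictionary.Add("$connectionString$", prompt.ConnectionString);
//         replacementsDictionary.Add("$systemName$", prompt.SystemName);
//         CanAddProjectItem = true;
//     }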

    Read the article

< Previous Page | 174 175 176 177 178 179 180 181 182 183 184 185  | Next Page >