Search Results


  • Using deprecated binders and C++0x lambdas

    - by Sumant
    C++0x has deprecated the old binders such as bind1st and bind2nd in favor of the generic std::bind. C++0x lambdas bind nicely with std::bind, but they don't bind with the classic bind1st and bind2nd, because by default lambdas don't have the nested typedefs argument_type, first_argument_type, second_argument_type, and result_type. So I thought std::function could serve as a standard way to bind lambdas to the old binders, because it exposes the necessary typedefs. However, std::function is hard to use in this context because it forces you to spell out the function type when instantiating it:
      auto bound = std::bind1st(std::function<int (int, int)>([](int i, int j){ return i < j; }), 10); // hard to use
      auto bound = std::bind1st(std::make_function([](int i, int j){ return i < j; }), 10); // nice to have, but does not compile
    I could not find a convenient object generator for std::function; something like std::make_function would be nice to have. Does such a thing exist? If not, is there a better way of binding lambdas to the classic binders?
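
    There is no std::make_function in the standard library. Below is a minimal sketch of the std::bind route the deprecation points to (assuming a C++11 compiler); std::bind takes the lambda directly and needs none of the nested typedefs:

      #include <functional>
      #include <iostream>

      int main() {
          using namespace std::placeholders;
          // Bind the first argument to 10; _1 stands for the remaining argument.
          auto bound = std::bind([](int i, int j) { return i < j; }, 10, _1);
          std::cout << std::boolalpha << bound(20) << std::endl;   // prints "true" (10 < 20)
          return 0;
      }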

    Read the article

  • Callback from static library

    - by MortenHN
    I think this should be simple, but I'm having a really hard time finding information about this topic. I have made a static library and have no problem getting the basics to work, but I'm having a hard time figuring out how to make a callback from the static library to the main app. I would like my static library to expose only one header as its front; this header should contain functions like:
      requestImage:(NSString *)path;
      requestListOfSomething:(NSString *)guid;
    and so on. These functions should do the necessary work, start an async NSURLConnection, and call back to the main application when the call has finished. How do you guys do this? What is the best way to call back from a static library when an async method finishes? Should I do this with delegates (is that possible?), notifications, or key/value observers? I really want to know how you have solved this and what you regard as best practice. I'm going to have 20-25 different calls, so I want the static library header file to be as simple as possible, preferably just a list of the 20-25 functions. UPDATE: My question is not how to use the delegate pattern, but which way is best to do callbacks from static libraries. I would like to use delegates, but I don't want 20-25 protocol declarations in the public header file. I would prefer to have only one function for each request. Thanks in advance. Best regards, Morten
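
    One common shape for this is a completion block per call, which keeps the public header a flat list of methods and avoids 20-25 delegate protocols. A rough sketch, with illustrative names (MYLClient and the handler typedefs are not from the question) and assuming blocks are available on the deployment target:

      // MyLibrary.h, the single public header (illustrative)
      #import <UIKit/UIKit.h>

      typedef void (^MYLImageHandler)(UIImage *image, NSError *error);
      typedef void (^MYLListHandler)(NSArray *items, NSError *error);

      @interface MYLClient : NSObject
      - (void)requestImage:(NSString *)path completion:(MYLImageHandler)completion;
      - (void)requestListOfSomething:(NSString *)guid completion:(MYLListHandler)completion;
      @end

      // Caller side: the block runs when the async NSURLConnection inside the library finishes.
      [client requestImage:@"/images/logo.png" completion:^(UIImage *image, NSError *error) {
          if (image) {
              // update the UI
          }
      }];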

    Read the article

  • How do I load PersistentDocuments into the same window

    - by Brad Stone
    I want to open NSPersistentDocuments and load them into the same window one at a time. I'm almost there but missing some steps. Hopefully someone can help me. I have a few saved documents on the hard drive. On launch my app opens to an untitled NSPersistentDocument and creates a separate NSWindowController. When I press the button to load file 1 off the hard drive the data appears in the fields but two things are wrong that I can see: 1) changing the data doesn't make the document dirty 2) choosing save updates the persistentstore (I know this because when I open the file again I see the changes) but I get an error: +entityForName: could not locate an NSManagedObjectModel for entity name 'Book' Here's my code which is in the WindowController that was launched initially with the untitled document. This code isn't perfect. For example, I know I should processPendingChanges and save the current doc before I load the new one. This is test code to try to get over this hurdle. - (IBAction)newBookTwo:(id)sender { NSDocumentController *dc = [NSDocumentController sharedDocumentController]; NSURL *url = [NSURL fileURLWithPath:[@"~/Desktop/File 2.binary" stringByExpandingTildeInPath]]; NSError *error; MainWindowDocument *thisDoc = [dc openDocumentWithContentsOfURL:url display:NO error:&error]; [self setDocument:thisDoc]; [self setManagedObjectContext:[thisDoc managedObjectContext]]; } Thanks!

    Read the article

  • Where does the delete control go in my Cocoa user interface?

    - by Graham Lee
    Hi, I have a Cocoa application managing a collection of objects. The collection is presented in an NSCollectionView, with a "new object" button nearby so users can add to the collection. Of course, I know that having a "delete object" button next to that button would be dangerous, because people might accidentally knock it when they mean to create something. I don't like having "are you sure you want to..." dialogues, so I dispensed with the "delete object". There's a menu item under Edit for removing an object, and you can hit Cmd-backspace to do the same. The app supports undoing delete actions. Now I'm getting support emails ranging from "does it have to be so hard to delete things" to "why can't I delete objects?". That suggests I've made it a bit too hard, so what's the happy middle ground? I see applications from Apple that do it my way, or with the add/remove buttons next to each other, but I hate that latter option. Is there another good (and preferably common) convention for delete controls? I thought about an action menu but I don't think I have any other actions that would go in it, rendering the menu a bit thin.

    Read the article

  • C++ STL Map vs Vector speed

    - by sub
    In the interpreter for my experimental programming language I have a symbol table. Each symbol consists of a name and a value (the value can be e.g. of type string, int, function, etc.). At first I represented the table with a vector and iterated through the symbols, checking whether the given symbol name matched. Then I thought using a map, in my case map<string,symbol>, would be better than iterating through the vector all the time, but: It's a bit hard to explain this part, but I'll try. When a variable is retrieved for the first time in a program in my language, its position in the symbol table of course has to be found (I'm using a vector now). If I iterated through the vector every time the line gets executed (think of a loop), it would be terribly slow (as it currently is, nearly as slow as Microsoft's batch). So I could use a map to retrieve the variable: SymbolTable[ myVar.Name ]. But think of the following: if the variable, still using a vector, is found the first time, I can store its exact integer position in the vector along with it. That means the next time it is needed, my interpreter knows that it has been "cached" and doesn't search the symbol table for it, but does something like SymbolTable.at( myVar.CachedPosition ). Now my (rather hard?) question: Should I use a vector for the symbol table together with caching the position of the variable in the vector? Should I rather use a map? Why? How fast is the [] operator? Should I use something completely different?
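
    For scale, a small sketch of the two lookups being weighed (illustrative types, not the interpreter's real ones): the map's operator[] pays a string-keyed O(log n) search on every access, while a vector plus a cached index is O(1) after the first search:

      #include <cstddef>
      #include <iostream>
      #include <map>
      #include <string>
      #include <vector>

      struct Symbol { std::string name; int value; };

      int main() {
          // Option 1: map keyed by name; every access does an O(log n) string-compare lookup.
          std::map<std::string, Symbol> byName;
          byName["x"] = Symbol{"x", 42};
          std::cout << byName["x"].value << std::endl;

          // Option 2: vector plus a position cached with the variable after the first linear search.
          std::vector<Symbol> table;
          table.push_back(Symbol{"x", 42});
          std::size_t cachedPosition = 0;                       // stored alongside the variable once found
          std::cout << table.at(cachedPosition).value << std::endl;   // O(1) on every later access
          return 0;
      }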

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that my project's problems came from not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder, yes, silly me. After a while I managed to get all the .jar files where they belong, and they come to a whopping 12.5 MB, more than 30 of them! That concerns me, though perhaps it shouldn't. How does Java operate with these JAR files? They take up quite a bit of hard drive space, considering that they are compressed, compiled code, so they could fill a lot of RAM very quickly. My questions are: Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about the parts of the .jar that never get used? Do .jars get cached somehow, to optimize application performance? When a single .jar is loaded, I understand that it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the running server instance), unlike PHP, where objects are created on the fly with each request. Is this assumption correct? Since using Spring meant including all those fiddly .jars, wouldn't I be better off just using plain Java with, say, an ORM solution like Hibernate? So far Spring has cost extra configuration time, extra hard drive space, extra memory and CPU, so I'm concerned the framework will cost too much application performance just to get, for example, IoC working with my BlazeDS server. An ORM, a unit testing framework and various bits and pieces still have to be added. It's just so easy to bloat up a project quickly. Where do I draw the line?

    Read the article

  • Help Me With This MS-Access Query

    - by yae
    I have 2 tables: "products" and "pieces".
      PRODUCTS: idProd, product, price
      PIECES: id, idProdMain, idProdChild, quant
    idProdMain and idProdChild are related to the "products" table. Note also that one product can have several pieces, and a product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. The "products" table contains all products.
    EXAMPLE:
      TABLE PRODUCTS (idProd - product - price)
      1 - Computer - 300€
      2 - Hard Disk - 100€
      3 - Memory - 50€
      4 - Main Board - 100€
      5 - Software - 50€
      6 - CDroms 100 un. - 30€
      TABLE PIECES (id - idProdMain - idProdChild - Quant.)
      1 - 1 - 2 - 1
      2 - 1 - 3 - 2
      3 - 1 - 4 - 1
    WHAT I NEED: I need to update the price of the main product when the price of a child product (piece) changes. Following the example above, if I change the price of the product "Memory" (which is also a piece) to 60€, then the product "Computer" must change its price to 320€. How can I do this using queries? I have already tried the following to obtain the price of the main product, but it doesn't work; this query returns no rows:
      SELECT Sum(products.price*pieces.quant) AS Expr1
      FROM products LEFT JOIN pieces
        ON (products.idProd = pieces.idProdChild) AND (products.idProd = pieces.idProdChild) AND (products.idProd = pieces.idProdMain)
      WHERE (((pieces.idProdMain)=5));
    MORE INFO: The "products" table contains all the products for sale in the shop. The "pieces" table keeps track of compound products, i.e. which products are children of which. An example of a compound product is a computer: it is composed of other products (motherboard, hard disk, memory, cpu, etc.).
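
    For reference, a sketch of the aggregate that prices one compound product from its pieces; it joins products on the child id only and groups on the parent. With the sample data, idProdMain = 1 gives 100*1 + 50*2 + 100*1 = 300 (note also that idProdMain = 5 has no rows in pieces, which is one reason the query above returns nothing):

      SELECT pieces.idProdMain,
             Sum(products.price * pieces.quant) AS ComputedPrice
      FROM pieces
      INNER JOIN products ON products.idProd = pieces.idProdChild
      WHERE pieces.idProdMain = 1
      GROUP BY pieces.idProdMain;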

    Read the article

  • Error when feeding a mysql db with a python-parsed data

    - by Barnabe
    I use this bit of code to feed some data I have parsed from a web page into a MySQL database:
      c = db.cursor()
      c.executemany(
          """INSERT INTO data (SID, Time, Value1, Level1, Value2, Level2, Value3, Level3, Value4, Level4, Value5, Level5, ObsDate)
             VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
          clean_data)
    The parsed data looks like this (there are several hundred such lines):
      clean_data = [(161,00:00:00,8.19,1,4.46,4,7.87,4,6.54,null,4.45,6,2010-04-12),(162,00:00:00,7.55,1,9.52,1,1.90,1,4.76,null,0.14,1,2010-04-12),(164,00:00:00,8.01,1,8.09,1,0,null,8.49,null,0.20,2,2010-04-12),(166,00:00:00,8.30,1,4.77,4,10.99,5,9.11,null,0.36,2,2010-04-12)]
    If I hard-code the data as above, MySQL accepts my request (except for some quibbles about formatting), but if the variable clean_data is instead defined as the result of the parsing code, like this:
      cleaner = [(""" $!!'""", ')]'),(' $!!', ') etc etc]
      def processThis(str, lst):
          for find, replace in lst:
              str = str.replace(find, replace)
          return str
      clean_data = processThis(data, cleaner)
    then I get the dreaded "TypeError: not enough arguments for format string". After playing with formatting options for a few hours (I am very new to this) I am confused: what is the difference between the hard-coded data and the result of the processThis function as far as MySQL is concerned? Any ideas are greatly appreciated.
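
    A sketch of the shape executemany expects: a list of real Python tuples, one per row (the rows below reuse the sample values from the question, and c is the cursor from the code above). A chain of str.replace calls returns a single string, so the driver sees one argument for thirteen %s placeholders, which is what triggers the TypeError:

      # Each row is a real tuple; the driver quotes strings and turns None into NULL itself.
      clean_data = [
          (161, "00:00:00", 8.19, 1, 4.46, 4, 7.87, 4, 6.54, None, 4.45, 6, "2010-04-12"),
          (162, "00:00:00", 7.55, 1, 9.52, 1, 1.90, 1, 4.76, None, 0.14, 1, "2010-04-12"),
      ]
      c.executemany(
          """INSERT INTO data (SID, Time, Value1, Level1, Value2, Level2, Value3, Level3,
                               Value4, Level4, Value5, Level5, ObsDate)
             VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
          clean_data)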

    Read the article

  • Parsing Windows Event Logs, is it possible?

    - by xceph
    Hello, I am doing a little research into the feasibility of a project I have in mind. It involves doing a little forensic work on images of hard drives, and I have been looking for information on how to analyze saved Windows event log files. I do not need the ability to monitor current events; I simply want to be able to view events that have been created and record the time and the application/process that created them. However, I do not have much experience with Windows internals, and am wondering whether this is possible. The plan is to create images of a hard drive and then do the analysis on a second machine, ideally in either Java or Python, as they are my most proficient languages. My main concerns are: Is this information encrypted in any way? Are there any existing APIs for parsing this data directly? Is there information available about the format in which these logs are stored, and how does it differ between Windows versions? This must be possible by analyzing the drive itself, since ideally the installation of Windows on the drive would not be running (it would be a mounted image on another system). The closest thing I could find in my searches is http://www.j-interop.org/ but that seems to be aimed at remote clients. Ideally nothing would have to be installed on the imaged drive. The other solution that kept popping up is the JNI library, but that also seems to be aimed more at monitoring a running system. Any help at all is greatly appreciated. :)

    Read the article

  • Theme-aware XAML resources in a WP7 project

    - by SandRock
    I'm making a Windows Phone 7 application and I'm a bit confused by dark/light themes. With a panorama, you very often set a background image. The issue is that it's very hard to make a picture that looks right with both the dark and the light theme. How are we supposed to proceed? Is there a way to force a dark/light theme for a panorama? That would avoid making theme-specific panorama background pictures. If so, how do I do it? I found XAML files in C:\Program Files (x86)\Microsoft SDKs\Windows Phone\v7.0\Design. If this is the right way to proceed, how can I import them for my panorama? Or, if there is no way (or it is the wrong way) to force a dark/light theme: how do I write conditional XAML to set the correct resources? Right now I have the following XAML, which is fine with the dark theme:
      <ImageBrush x:Key="PageBackground" ImageSource="Resources/PageBackground.png" Stretch="None" />
      <ImageBrush x:Key="PanoramaBackground" ImageSource="Resources/PanoramaBackground.png" Stretch="None" />
    But when I use a light theme, black controls and black text are hard to read against my dark background pictures. So I made different pictures that I can use this way:
      <ImageBrush x:Key="PageBackground" ImageSource="Resources/PageBackgroundLight.png" Stretch="None" />
      <ImageBrush x:Key="PanoramaBackground" ImageSource="Resources/PanoramaBackgroundLight.png" Stretch="None" />
    Now my issue is making the XAML conditional, so it declares the right brush depending on the current theme. I found no relevant way to do it on the Internet. I would prefer not to use code or code-behind for that, because I believe XAML can do this (I just don't know how).
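
    For what it's worth, if code-behind were acceptable as a fallback, the usual detection reads one of the documented WP7 theme resources and picks the image accordingly. A sketch (myPanorama is an assumed x:Name; the namespaces are the standard System.Windows, System.Windows.Media and System.Windows.Media.Imaging ones):

      // Run e.g. in the page constructor, after InitializeComponent().
      var lightTheme = (Visibility)Application.Current.Resources["PhoneLightThemeVisibility"];
      bool isLight = (lightTheme == Visibility.Visible);

      string imagePath = isLight ? "Resources/PageBackgroundLight.png"
                                 : "Resources/PageBackground.png";

      var brush = new ImageBrush
      {
          ImageSource = new BitmapImage(new Uri(imagePath, UriKind.Relative)),
          Stretch = Stretch.None
      };
      myPanorama.Background = brush;   // "myPanorama" is an assumed name for the Panorama control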

    Read the article

  • Ruby on Rails or PHP... Somehow uncertain

    - by Clamm
    Hi everybody... I've been thinking about this for a long time now, but I want to hear your opinion because I always receive the best answers here. So, in advance... thank you. Right now I have to make this decision: shift a prototype web service to production quality, and choose either Ruby or PHP (background: a friend of mine is joining the project and prefers Rails). I've already played around a bit with RoR (only basic stuff), but I am really disappointed by the documentation of Rails and Ruby. Compared to PHP, I find only fragments, or references that are hard to use. In the end I am a bit scared: I don't want to waste my time only to realize that I am not capable of doing something in Ruby that I could do with PHP, maybe only because I am too stupid and can't find a proper explanation ;-) Did anyone experience this shift, and can you tell me how easy or hard it was to switch from PHP to Ruby? E.g., would you recommend programming it in PHP and using MVC as a base pattern? Thanks for your opinion!!!

    Read the article

  • Most efficient way to make an activity log

    - by Nathan
    I am making a "recent activity" tab for profiles on my site, and I am also going to have a log for moderators to see everything that happens on the site. This requires making an activity log of some sort; I just don't know which approach is better. I have 2 options:
    1. Make a table called "activity" and then, every time someone does something, add a record to it with the type of action, user id, timestamp, etc. Problem: the table could get very long.
    2. Join all 3 tables (questions, answers, answer_comments) and then somehow show all of these on the page in the order in which the actions were taken. Problem: this would be extremely hard, because I have no clue how I could make it say "John commented on an answer on Question Title Here" by just joining 3 tables.
    Does anyone know of a better way of making an activity log in this situation? I am using PHP and MySQL. If this is either too inefficient or too hard I will probably just drop the Recent Activity tab on profiles, but I still need an activity log for moderators. Here's some SQL that I started writing for option 2, but it would not work because there is no way of telling whether a row is a comment, question, or answer when I echo the info in a while loop:
      SELECT q.*, a.*, ac.*
      FROM questions q
      JOIN answers a ON a.questionid = q.qid
      JOIN answer_comments ac ON ac.answerid = a.ans_id
      WHERE q.user = $userid AND a.userid = $userid AND ac.userid = $userid
      ORDER BY q.created DESC, a.created DESC, ac.created DESC
    Thanks in advance for any help!
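
    A sketch of option 2 done with UNION instead of joins: a literal type column tells the page loop what each row is (column names beyond those visible in the query above, such as answer_comments.id, are assumptions):

      SELECT 'question' AS activity_type, q.qid AS item_id, q.created
      FROM questions q WHERE q.user = $userid
      UNION ALL
      SELECT 'answer', a.ans_id, a.created
      FROM answers a WHERE a.userid = $userid
      UNION ALL
      SELECT 'comment', ac.id, ac.created
      FROM answer_comments ac WHERE ac.userid = $userid
      ORDER BY created DESC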

    Read the article

  • Properties in User-Control Ctor are duplicated to the hosting form

    - by fortis
    An annoying behavior of the Visual Studio 2008 designer is that it duplicates any property of a control that is set inside the control's constructor code into the InitializeComponent() method of the hosting form. For example, if I create a new user control and write the following line in its ctor:
      this.Text = "Hard Coded Name";
    then this same line will appear inside the InitializeComponent() method of any form hosting this control. For this kind of property it is merely annoying: if I were to change the Text property inside the control to "Better Hard Coded Name", I'd have to go over all hosting forms and manually change the Text value there too. The real problem is with ".Add(something)" properties, for example if my control is a TableLayoutPanel and I want it to have a certain number of columns and rows. Any column or row style set inside the user control is duplicated in the hosting form, and we end up with twice as many as intended. The count (e.g. ColumnCount) will be as planned, but if I add columns later on, things get messy. Is there a way to signal the designer not to duplicate properties? What can I do to avoid this behavior?
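
    There is no global switch for this, but for simple properties the usual mitigation is to hide the property from designer serialization, so the value set in the constructor stays in the control and the hosting form's InitializeComponent() gets no copy of it. A sketch on an illustrative control (collection contents such as row and column styles are a separate problem this does not cover; for inherited properties like Text you can override the property and apply the same attributes):

      using System.ComponentModel;
      using System.Windows.Forms;

      public class CaptionedControl : UserControl      // illustrative, not the control from the question
      {
          public CaptionedControl()
          {
              Caption = "Hard Coded Name";
          }

          // Hidden from designer serialization: the hosting form does not persist this value,
          // so a later change here takes effect everywhere the control is used.
          [Browsable(false)]
          [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
          public string Caption { get; set; }
      }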

    Read the article

  • PHP include taking too long

    - by wxiiir
    I have a PHP file of around 100 MB which is full of arrays (only arrays). I've made a script that includes this file (for processing); first it exhausted XAMPP's default 128 MB memory limit, and after I raised the limit to 1024 MB it just takes forever and doesn't do anything. I'm sure the problem is caused by the sheer size of the file, because I've tried removing all lines of code and leaving just the include and an echo (so I know when it finishes executing), and it does the same thing: it takes forever. I've also tried running the 100 MB file on its own, same thing again. A 10 MB file also takes forever, but a similar 1 MB file is read and executed almost instantly, so the problem must be more than just the file size. I was avoiding using C++ for a simple project like this and would rather not, as PHP is easier for me and the task doesn't need the extra speed it would gain from C++, but if I have no luck solving this problem I guess I'll have to. EDIT: Reasons for not using a database:
    1. Whoever made the file didn't use a database, and it will be pretty hard to store this in an organized database if I can't do anything with it first, like just reading it, copying parts of it, or loading it into memory.
    2. I don't have experience working with databases, because pretty much all the PHP I've ever written didn't need large amounts of stored data, 50 KB at best. If I were planning a big project or huge chunks of data like this one, I definitely would use one, but I didn't make this mess to start with and now I have to undo it.
    3. The logic of having to store a small amount of data like 10 MB on the hard drive, when every computer now has enough RAM to fit the whole OS in it, is pretty much incomprehensible to me unless someone gives a good explanation for it. If I had to access a lot of such files simultaneously I would understand, but like I said, this is a simple project: this is the only file that will be accessed at a given time. This isn't even for some kind of website; it's to run a few times and be done with it.

    Read the article

  • Silverlight Tree View with Multiple Levels

    - by psheriff
    There are many examples of the Silverlight Tree View that you will find on the web, however, most of them only show you how to go to two levels. What if you have more than two levels? This is where understanding exactly how the Hierarchical Data Templates works is vital. In this blog post, I am going to break down how these templates work so you can really understand what is going on underneath the hood. To start, let’s look at the typical two-level Silverlight Tree View that has been hard coded with the values shown below: <sdk:TreeView>  <sdk:TreeViewItem Header="Managers">    <TextBlock Text="Michael" />    <TextBlock Text="Paul" />  </sdk:TreeViewItem>  <sdk:TreeViewItem Header="Supervisors">    <TextBlock Text="John" />    <TextBlock Text="Tim" />    <TextBlock Text="David" />  </sdk:TreeViewItem></sdk:TreeView> Figure 1 shows you how this tree view looks when you run the Silverlight application. Figure 1: A hard-coded, two level Tree View. Next, let’s create three classes to mimic the hard-coded Tree View shown above. First, you need an Employee class and an EmployeeType class. The Employee class simply has one property called Name. The constructor is created to accept a “name” argument that you can use to set the Name property when you create an Employee object. public class Employee{  public Employee(string name)  {    Name = name;  }   public string Name { get; set; }} Finally you create an EmployeeType class. This class has one property called EmpType and contains a generic List<> collection of Employee objects. The property that holds the collection is called Employees. public class EmployeeType{  public EmployeeType(string empType)  {    EmpType = empType;    Employees = new List<Employee>();  }   public string EmpType { get; set; }  public List<Employee> Employees { get; set; }} Finally we have a collection class called EmployeeTypes created using the generic List<> class. It is in the constructor for this class where you will build the collection of EmployeeTypes and fill it with Employee objects: public class EmployeeTypes : List<EmployeeType>{  public EmployeeTypes()  {    EmployeeType type;            type = new EmployeeType("Manager");    type.Employees.Add(new Employee("Michael"));    type.Employees.Add(new Employee("Paul"));    this.Add(type);     type = new EmployeeType("Project Managers");    type.Employees.Add(new Employee("Tim"));    type.Employees.Add(new Employee("John"));    type.Employees.Add(new Employee("David"));    this.Add(type);  }} You now have a data hierarchy in memory (Figure 2) which is what the Tree View control expects to receive as its data source. Figure 2: A hierachial data structure of Employee Types containing a collection of Employee objects. To connect up this hierarchy of data to your Tree View you create an instance of the EmployeeTypes class in XAML as shown in line 13 of Figure 3. The key assigned to this object is “empTypes”. This key is used as the source of data to the entire Tree View by setting the ItemsSource property as shown in Figure 3, Callout #1. Figure 3: You need to start from the bottom up when laying out your templates for a Tree View. The ItemsSource property of the Tree View control is used as the data source in the Hierarchical Data Template with the key of employeeTypeTemplate. In this case there is only one Hierarchical Data Template, so any data you wish to display within that template comes from the collection of Employee Types. The TextBlock control in line 20 uses the EmpType property of the EmployeeType class. 
You specify the name of the Hierarchical Data Template to use in the ItemTemplate property of the Tree View (Callout #2). For the second (and last) level of the Tree View control you use a normal <DataTemplate> with the name of employeeTemplate (line 14). The Hierarchical Data Template in lines 17-21 sets its ItemTemplate property to the key name of employeeTemplate (Line 19 connects to Line 14). The source of the data for the <DataTemplate> needs to be a property of the EmployeeTypes collection used in the Hierarchical Data Template. In this case that is the Employees property. In the Employees property there is a “Name” property of the Employee class that is used to display the employee name in the second level of the Tree View (Line 15). What is important here is that your lowest level in your Tree View is expressed in a <DataTemplate> and should be listed first in your Resources section. The next level up in your Tree View should be a <HierarchicalDataTemplate> which has its ItemTemplate property set to the key name of the <DataTemplate> and the ItemsSource property set to the data you wish to display in the <DataTemplate>. The Tree View control should have its ItemsSource property set to the data you wish to display in the <HierarchicalDataTemplate> and its ItemTemplate property set to the key name of the <HierarchicalDataTemplate> object. It is in this way that you get the Tree View to display all levels of your hierarchical data structure. Three Levels in a Tree View Now let’s expand upon this concept and use three levels in our Tree View (Figure 4). This Tree View shows that you now have EmployeeTypes at the top of the tree, followed by a small set of employees that themselves manage employees. This means that the EmployeeType class has a collection of Employee objects. Each Employee class has a collection of Employee objects as well. Figure 4: When using 3 levels in your TreeView you will have 2 Hierarchical Data Templates and 1 Data Template. The EmployeeType class has not changed at all from our previous example. However, the Employee class now has one additional property as shown below: public class Employee{  public Employee(string name)  {    Name = name;    ManagedEmployees = new List<Employee>();  }   public string Name { get; set; }  public List<Employee> ManagedEmployees { get; set; }} The next thing that changes in our code is the EmployeeTypes class. The constructor now needs additional code to create a list of managed employees. Below is the new code. public class EmployeeTypes : List<EmployeeType>{  public EmployeeTypes()  {    EmployeeType type;    Employee emp;    Employee managed;     type = new EmployeeType("Manager");    emp = new Employee("Michael");    managed = new Employee("John");    emp.ManagedEmployees.Add(managed);    managed = new Employee("Tim");    emp.ManagedEmployees.Add(managed);    type.Employees.Add(emp);     emp = new Employee("Paul");    managed = new Employee("Michael");    emp.ManagedEmployees.Add(managed);    managed = new Employee("Sara");    emp.ManagedEmployees.Add(managed);    type.Employees.Add(emp);    this.Add(type);     type = new EmployeeType("Project Managers");    type.Employees.Add(new Employee("Tim"));    type.Employees.Add(new Employee("John"));    type.Employees.Add(new Employee("David"));    this.Add(type);  }} Now that you have all of the data built in your classes, you are now ready to hook up this three-level structure to your Tree View. Figure 5 shows the complete XAML needed to hook up your three-level Tree View. 
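
Figure 5 is an image in the original article; based on the description that follows, the resources section it shows would look roughly like the sketch below (the namespace prefixes, the in-XAML instantiation of EmployeeTypes, and the exact binding paths are reconstructed, so treat this as a sketch rather than the article's verbatim markup):

  <UserControl.Resources>
    <local:EmployeeTypes x:Key="empTypes" />

    <!-- Lowest level: the employees managed by an employee -->
    <DataTemplate x:Key="managedEmployeeTemplate">
      <TextBlock Text="{Binding Path=Name}" />
    </DataTemplate>

    <!-- Second level: employees, feeding their ManagedEmployees to the template above -->
    <sdk:HierarchicalDataTemplate x:Key="employeeTemplate"
                                  ItemsSource="{Binding Path=ManagedEmployees}"
                                  ItemTemplate="{StaticResource managedEmployeeTemplate}">
      <TextBlock Text="{Binding Path=Name}" />
    </sdk:HierarchicalDataTemplate>

    <!-- Top level: employee types, feeding their Employees to the template above -->
    <sdk:HierarchicalDataTemplate x:Key="employeeTypeTemplate"
                                  ItemsSource="{Binding Path=Employees}"
                                  ItemTemplate="{StaticResource employeeTemplate}">
      <TextBlock Text="{Binding Path=EmpType}" />
    </sdk:HierarchicalDataTemplate>
  </UserControl.Resources>

  <sdk:TreeView ItemsSource="{StaticResource empTypes}"
                ItemTemplate="{StaticResource employeeTypeTemplate}" />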
You can see in the XAML that there are now two Hierarchical Data Templates and one Data Template. Again you list the Data Template first since that is the lowest level in your Tree View. The next Hierarchical Data Template listed is the next level up from the lowest level, and finally you have a Hierarchical Data Template for the first level in your tree. You need to work your way from the bottom up when creating your Tree View hierarchy. XAML is processed from the top down, so if you attempt to reference a XAML key name that is below where you are referencing it from, you will get a runtime error. Figure 5: For three levels in a Tree View you will need two Hierarchical Data Templates and one Data Template. Each Hierarchical Data Template uses the previous template as its ItemTemplate. The ItemsSource of each Hierarchical Data Template is used to feed the data to the previous template. This is probably the most confusing part about working with the Tree View control. You are expecting the content of the current Hierarchical Data Template to use the properties set in the ItemsSource property of that template. But you need to look to the template lower down in the XAML to see the source of the data as shown in Figure 6. Figure 6: The properties you use within the Content of a template come from the ItemsSource of the next template in the resources section. Summary Understanding how to put together your hierarchy in a Tree View is simple once you understand that you need to work from the bottom up. Start with the bottom node in your Tree View and determine what that will look like and where the data will come from. You then build the next Hierarchical Data Template to feed the data to the previous template you created. You keep doing this for each level in your Tree View until you get to the last level. The data for that last Hierarchical Data Template comes from the ItemsSource in the Tree View itself. NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “Silverlight TreeView with Multiple Levels” from the drop down list.

    Read the article

  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others some of the advantages of code reviews are, Bugs are found faster Forces developers to write readable code (code that can be read without explanation or introduction!) Optimization methods/tricks/productive programs spread faster Programmers as specialists "evolve" faster It's fun “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” Wikipedia No where does the definition mention whether its better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request for a code review both before check in and also request for a review after check in. Let’s weigh the pros and cons of the approaches independently. Code Review Before Check In or Code Review After Check In? Approach 1 – Code Review before Check in Developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate if the code abides to the recommended best practices, will not result in any defects due to common coding mistakes and whether any optimizations can be made to improve the code quality.                                             Image 1 – code review before check in Pros Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs, remember, the earlier you find them, the lesser the pain in fixing them. Cons Development Code Freeze – Since the changes aren’t in the source control yet. Further development can only be done off-line. The changes have not been through a CI build, hard to say whether the code abides to all build quality standards. Inconsistent! Cumbersome to track the actual code review process.  Not every change to the code base is worth reviewing, a lot of effort is invested for very little gain. Approach 2 – Code Review after Check in Developer checks in, random code reviews are performed on the checked in code.                                                      Image 2 – Code review after check in Pros The code has already passed the CI build and run through any code analysis plug ins you may have running on the build server. Instruct the developer to ensure ZERO fx cop, style cop and static code analysis before check in. Code is cleaner and smell free even before the code review. No Offline development, developers can continue to develop against the source control. Cons Bad code can easily make its way into the code base. Since the review take place much later in the cycle, the cost of fixing issues can prove to be much higher. Approach 3 – Hybrid Approach The community advocates a more hybrid approach, a blend of tooling and human accountability quotient.                                                               Image 3 – Hybrid Approach 1. Code review high impact check ins. 
It is not possible to review everything, by setting up code review check in policies you can end up slowing your team. More over, the code that you are reviewing before check in hasn't even been through a green CI build either. 2. Tooling. Let the tooling work for you. By running static analysis, fx cop, style cop and other plug ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame more often. 3. During Merge. I would prefer eliminating some of the other code issues during merge from Main branch to the release branch. In a scrum project this is still easier because cheery picking the merges is a possibility and the size of code being reviewed is still limited. Let the tooling work for you, if some one breaks the CI build often, put them on a gated check in build course until you see improvement. If some one appears on the top 10 list of shame generated via the build then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few expects what they thought of “Code Review quality gate before Checking in code?" Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin’s. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release.  But that would all be on checked-in code.  Branching is absolutely one way to ease the pain.   Another way we are using is automatic quality builds, running metrics, coverage, static code analysis.  Unfortunately it takes some time, would be great to be on CI’s – but…., so it’s done scheduled every night. Based on this we get, among other stuff,  top 10 lists of suspicious code, which is then subjected to reviews.  If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps.   None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don’t disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today’s enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics)   - The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests. 
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits - You can easily tweak to reach the level you desire together with customer - It is easy to measure for both developers/management - It is 100% consistent across code base - It gets validated all the time so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure team sees total # issues. Do not allow #issues it to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means -  You have to have clean compile (or CA wont run) so this is extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues which you REALLY do not want in your app Tip: Place your custom CA rules files as part of solution. That  way it works when you do branching etc. (path to CA file is relative in VS) Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues in start it is IMO a MUCH better (and much cheaper) approach from helicopter perspective Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (means peer reviews) in distributed team I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and its a great practice for all in the team to learn to perform and accept. In my experience you soon learn who's code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging, when a file has been changed in both branches. It offers a good opportunity to actually review to changed code. So my advice is: Merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tools exist: Ndepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject on the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/After checking is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talk/sit together or peer review, there’s no need to enforce a before-checkin policy. The peer peer-programming and regular feedback during development can take care of most of the review requirements as long as the team isn’t under stress. 
- Under stress, enforce pre-checkin reviews, it might sound strange, if you’re already under time or budgetary constraints, but it is under such conditions most real issues start to be created or pile up. - Use tools to catch most common errors, Code Analysis/FxCop was already mentioned. HP Fortify, Resharper, Coderush etc can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I’ve written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It’s much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It’s still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn’t make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It’s too easy to just click suppress all to ignore the warnings. - Under stress, tighten process, it’s under stress that the problems of late reviews will really surface - Most importantly if you do reviews do them as early as possible, but never later than needed. In other words, pre-checkin/post checking doesn’t really matter, as long as the review is done before the code is released. It’ll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!

    Read the article

  • SQLAuthority News – Job Interviewing the Right Way (and for the Right Reasons) – Guest Post by Feodor Georgiev

    - by pinaldave
    Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. Feodor has written excellent article on Job Interviewing the Right Way. Here is his article in his own language. A while back I was thinking to start a blog post series on interviewing and employing IT personnel. At that time I had just read the ‘Smart and gets things done’ book (http://www.joelonsoftware.com/items/2007/06/05.html) and I was hyped up on some debatable topics regarding finding and employing the best people in the branch. I have no problem with hiring the best of the best; it’s just the definition of ‘the best of the best’ that makes things a bit more complicated. One of the fundamental books one can read on the topic of interviewing is the one mentioned above. If you have not read it, then you must do so; not because it contains the ultimate truth, and not because it gives the answers to most questions on the subject, but because the book contains an extensive set of questions about interviewing and employing people. Of course, a big part of these questions have different answers, depending on location, culture, available funds and so on. (What works in the US may not necessarily work in the Nordic countries or India, or it may work in a different way). The only thing that is valid regardless of any external factor is this: curiosity. In my belief there are two kinds of people – curious and not-so-curious; regardless of profession. Think about it – professional success is directly proportional to the individual’s curiosity + time of active experience in the field. (I say ‘active experience’ because vacations and any distractions do not count as experience :)  ) So, curiosity is the factor which will distinguish a good employee from the not-so-good one. But let’s shift our attention to something else for now: a few tips and tricks for successful interviews. Tip and trick #1: get your priorities straight. Your status usually dictates your priorities; for example, if the person looking for a job has just relocated to a new country, they might tend to ignore some of their priorities and overload others. In other words, setting priorities straight means to define the personal criteria by which the interview process is lead. For example, similar to the following questions can help define the criteria for someone looking for a job: How badly do I need a (any) job? Is it more important to work in a clean and quiet environment or is it important to get paid well (or both, if possible)? And so on… Furthermore, before going to the interview, the candidate should have a list of priorities, sorted by the most importance: e.g. I want a quiet environment, x amount of money, great helping boss, a desk next to a window and so on. Also it is a good idea to be prepared and know which factors can be compromised and to what extent. Tip and trick #2: the interview is a two-way street. A job candidate should not forget that the interview process is not a one-way street. What I mean by this is that while the employer is interviewing the potential candidate, the job seeker should not miss the chance to interview the employer. Usually, the employer and the candidate will meet for an interview and talk about a variety of topics. 
In a quality interview the candidate will be presented to key members of the team and will have the opportunity to ask them questions. By asking the right questions both parties will define their opinion about each other. For example, if the candidate talks to one of the potential bosses during the interview process and they notice that the potential manager has a hard time formulating a question, then it is up to the candidate to decide whether working with such person is a red flag for them. There are as many interview processes out there as there are companies and each one is different. Some bigger companies and corporates can afford pre-selection processes, 3 or even 4 stages of interviews, small companies usually settle with one interview. Some companies even give cognitive tests on the interview. Why not? In his book Joel suggests that a good candidate should be pampered and spoiled beyond belief with a week-long vacation in New York, fancy hotels, food and who knows what. For all I can imagine, an interview might even take place at the top of the Eifel tower (right, Mr. Joel, right?) I doubt, however, that this is the optimal way to capture the attention of a good employee. The ‘curiosity’ topic What I have learned so far in my professional experience is that opinions can be subjective. Plus, opinions on technology subjects can also be subjective. According to Joel, only hiring the best of the best is worth it. If you ask me, there is no such thing as best of the best, simply because human nature (well, aside from some physical limitations, like putting your pants on through your head :) ) has no boundaries. And why would it have boundaries? I have seen many curious and interesting people, naturally good at technology, though uninterested in it as one  can possibly be; I have also seen plenty of people interested in technology, who (in an ideal world) should have stayed far from it. At any rate, all of this sums up at the end to the ‘supply and demand’ factor. The interview process big-bang boils down to this: If there is a mutual benefit for both the employer and the potential employee to work together, then it all sorts out nicely. If there is no benefit, then it is much harder to get to a common place. Tip and trick #3: word-of-mouth is worth a thousand words Here I would just mention that the best thing a job candidate can get during the interview process is access to future team members or other employees of the new company. Nowadays the world has become quite small and everyone knows everyone. Look at LinkedIn, look at other professional networks and you will realize how small the world really is. Knowing people is a good way to become more approachable and to approach them. Tip and trick #4: Be confident. It is true that for some people confidence is as natural as breathing and others have to work hard to express it. Confidence is, however, a key factor in convincing the other side (potential employer or employee) that there is a great chance for success by working together. But it cannot get you very far if it’s not backed up by talent, curiosity and knowledge. Tip and trick #5: The right reasons What really bothers me in Sweden (and I am sure that there are similar situations in other countries) is that there is a tendency to fill quotas and to filter out candidates by criteria different from their skill and knowledge. In job ads I see quite often the phrases ‘positive thinker’, ‘team player’ and many similar hints about personality features. 
So my guess here is that discrimination has evolved to a new level. Let me clear up the definition of discrimination: ‘unfair treatment of a person or group on the basis of prejudice’. And prejudice is the ‘partiality that prevents objective consideration of an issue or situation’. In other words, there is not much difference whether a job candidate is filtered out by race, gender or by personality features – it is all a bad habit. And in reality, there is no proven correlation between the technology knowledge paired with skills and the personal features (gender, race, age, optimism). It is true that a significantly greater number of Darwin awards were given to men than to women, but I am sure that somewhere there is a paper or theory explaining the genetics behind this. J This topic actually brings to mind one of my favorite work related stories. A while back I was working for a big company with many teams involved in their processes. One of the teams was occupying 2 rooms – one had the team members and was full of light, colorful posters, chit-chats and giggles, whereas the other room was dark, lighted only by a single monitor with a quiet person in front of it. Later on I realized that the ‘dark room’ person was the guru and the ultimate problem-solving-brain who did not like the chats and giggles and hence was in a separate room. In reality, all severe problems which the chatty and cheerful team members could not solve and all emergencies were directed to ‘the dark room’. And thus all worked out well. The moral of the story: Personality has nothing to do with technology knowledge and skills. End of story. Summary: I’d like to stress the fact that there is no ultimately perfect candidate for a job, and there is no such thing as ‘best-of-the-best’. From my personal experience, the main criteria by which I measure people (co-workers and bosses) is the curiosity factor; I know from experience that the more curious and inventive a person is, the better chances there are for great achievements in their field. Related stories: (for extra credit) 1) Get your priorities straight. A while back as a consultant I was working for a few days at a time at different offices and for different clients, and so I was able to compare and analyze the work environments. There were two different places which I compared and recently I asked a friend of mine the following question: “Which one would you prefer as a work environment: a noisy office full of people, or a quiet office full of faulty smells because the office is rarely cleaned?” My friend was puzzled for a while, thought about it and said: “Hmm, you are talking about two different kinds of pollution… I will probably choose the second, since I can clean the workplace myself a bit…” 2) The interview is a two-way street. One time, during a job interview, I met a potential boss that had a hard time phrasing a question. At that particular time it was clear to me that I would not have liked to work under this person. According to my work religion, the properly asked question contains at least half of the answer. And if I work with someone who cannot ask a question… then I’d be doing double or triple work. At another interview, after the technical part with the team leader of the department, I was introduced to one of the team members and we were left alone for 5 minutes. 
I immediately jumped at the opportunity and asked the blunt question: ‘What have you learned here over the past year, and how do you like your job?’ The team member looked at me and said: ‘Nothing, really. I like playing with my cats at home, so I am out of here at 5pm and I don’t have time for much.’ I was disappointed at the time and did not take the job offer. I wasn’t all that shocked when the company went bankrupt a few months later.

3) The right reasons to take a job: personality check. A while back I was asked to serve as a job reference for a coworker. I agreed, and after some weeks I got a phone call from the company where my colleague was applying for a job. The conversation started with the manager’s questions about my colleague’s personality and social skills. (You can probably guess what my internal reaction was… :) ) So, after 30 minutes of pouring common sense into the interviewer’s head, we finally agreed that a shy or quiet personality has nothing to do with work skills and knowledge. Some years down the road, my former colleague took over the manager’s position when the manager was demoted to a different department.

Reference: Feodor Georgiev, Pinal Dave (http://blog.SQLAuthority.com)

Filed under: PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Loosely coupled .NET Cache Provider using Dependency Injection

    - by Rhames
I have recently been reading the excellent book “Dependency Injection in .NET”, written by Mark Seemann. I do not generally buy software development related books, as I never seem to have the time to read them, but I have found the time to read Mark’s book, and it was time well spent, I think. Reading the ideas around Dependency Injection made me realise that the Cache Provider code I wrote about earlier (see http://geekswithblogs.net/Rhames/archive/2011/01/10/using-the-asp.net-cache-to-cache-data-in-a-model.aspx) could be refactored to use Dependency Injection, which should produce cleaner code.

The goals are to:

- Separate the cache provider implementation (using the ASP.NET data cache) from the consumers (loose coupling). This also means that the dependency on System.Web for the cache provider does not ripple down into the layers where it is being consumed (such as the domain layer).
- Provide a decorator pattern to allow a consumer of the cache provider to be implemented separately from the base consumer (i.e. if we have a base repository, we can decorate this with a caching version). Although I used the term repository, in reality the cache consumer could be just about anything.
- Use constructor injection to provide the Dependency Injection, with a suitable DI container (I use Castle Windsor).

The sample code for this post is available on github, https://github.com/RobinHames/CacheProvider.git

ICacheProvider

In the sample code, the key interface is ICacheProvider, which is in the domain layer.

    using System;
    using System.Collections.Generic;

    namespace CacheDiSample.Domain
    {
        public interface ICacheProvider<T>
        {
            T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
            IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
        }
    }

This interface contains two methods to retrieve data from the cache, either as a single instance or as an IEnumerable. The second parameter is of type Func<T>; this is the method used to retrieve the data if nothing is found in the cache.

The ASP.NET implementation of the ICacheProvider interface needs to live in a project that has a reference to System.Web. Typically this will be the root UI project, or it could be a separate project. The key thing is that the domain and data access layers do not need System.Web references added to them. In my sample MVC application, the CacheProvider is implemented in the UI project, in a folder called “CacheProviders”:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Caching;
    using CacheDiSample.Domain;

    namespace CacheDiSample.CacheProvider
    {
        public class CacheProvider<T> : ICacheProvider<T>
        {
            public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
            {
                return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
            }

            public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
            {
                return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
            }

            #region Helper Methods

            private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
            {
                U value;
                if (!TryGetValue<U>(key, out value))
                {
                    value = retrieveData();
                    if (!absoluteExpiry.HasValue)
                        absoluteExpiry = Cache.NoAbsoluteExpiration;

                    if (!relativeExpiry.HasValue)
                        relativeExpiry = Cache.NoSlidingExpiration;

                    HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value);
                }
                return value;
            }

            private bool TryGetValue<U>(string key, out U value)
            {
                object cachedValue = HttpContext.Current.Cache.Get(key);
                if (cachedValue == null)
                {
                    value = default(U);
                    return false;
                }
                else
                {
                    try
                    {
                        value = (U)cachedValue;
                        return true;
                    }
                    catch
                    {
                        value = default(U);
                        return false;
                    }
                }
            }

            #endregion
        }
    }

The FetchAndCache helper method checks whether the specified cache key exists; if it does not, the Func<U> retrieveData method is called and the results are added to the cache.

Using Castle Windsor to register the cache provider

In the MVC UI project (my application root), Castle Windsor is used to register the CacheProvider implementation, using a Windsor installer:

    using Castle.MicroKernel.Registration;
    using Castle.MicroKernel.SubSystems.Configuration;
    using Castle.Windsor;

    using CacheDiSample.Domain;
    using CacheDiSample.CacheProvider;

    namespace CacheDiSample.WindsorInstallers
    {
        public class CacheInstaller : IWindsorInstaller
        {
            public void Install(IWindsorContainer container, IConfigurationStore store)
            {
                container.Register(
                    Component.For(typeof(ICacheProvider<>))
                             .ImplementedBy(typeof(CacheProvider<>))
                             .LifestyleTransient());
            }
        }
    }

Note that the cache provider is registered as an open generic type.

Consuming a Repository

I have an existing couple of repository interfaces defined in my domain layer:

IRepository.cs

    using System;
    using System.Collections.Generic;

    using CacheDiSample.Domain.Model;

    namespace CacheDiSample.Domain.Repositories
    {
        public interface IRepository<T>
            where T : EntityBase
        {
            T GetById(int id);
            IList<T> GetAll();
        }
    }

IBlogRepository.cs

    using System;
    using CacheDiSample.Domain.Model;

    namespace CacheDiSample.Domain.Repositories
    {
        public interface IBlogRepository : IRepository<Blog>
        {
            Blog GetByName(string name);
        }
    }

These two repositories are implemented in the DataAccess layer, using Entity Framework to retrieve data (the details are not important here). One important point is that in the BaseRepository implementation of IRepository the methods are virtual, which allows the decorator to override them. The BlogRepository is registered in a RepositoriesInstaller, again in the MVC UI project.

    using Castle.MicroKernel.Registration;
    using Castle.MicroKernel.SubSystems.Configuration;
    using Castle.Windsor;

    using CacheDiSample.Domain.CacheDecorators;
    using CacheDiSample.Domain.Repositories;
    using CacheDiSample.DataAccess;

    namespace CacheDiSample.WindsorInstallers
    {
        public class RepositoriesInstaller : IWindsorInstaller
        {
            public void Install(IWindsorContainer container, IConfigurationStore store)
            {
                container.Register(Component.For<IBlogRepository>()
                                            .ImplementedBy<BlogRepository>()
                                            .LifestyleTransient()
                                            .DependsOn(new
                                            {
                                                nameOrConnectionString = "BloggingContext"
                                            }));
            }
        }
    }

Now I can inject a dependency on the IBlogRepository into a consumer, such as a controller in my sample code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;

    using CacheDiSample.Domain.Repositories;
    using CacheDiSample.Domain.Model;

    namespace CacheDiSample.Controllers
    {
        public class HomeController : Controller
        {
            private readonly IBlogRepository blogRepository;

            public HomeController(IBlogRepository blogRepository)
            {
                if (blogRepository == null)
                    throw new ArgumentNullException("blogRepository");

                this.blogRepository = blogRepository;
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Welcome to ASP.NET MVC!";

                var blogs = blogRepository.GetAll();

                return View(new Models.HomeModel { Blogs = blogs });
            }

            public ActionResult About()
            {
                return View();
            }
        }
    }

Consuming the Cache Provider via a Decorator

I used a decorator pattern to consume the cache provider. This means my repositories follow the open/closed principle, as they do not require any modifications to implement the caching. It also means that my controllers have no knowledge of the caching taking place, as the DI container simply injects the decorator instead of the root implementation of the repository.

The first step is to implement a BlogRepository decorator with the caching logic in it. Note that this can reside in the domain layer, as it does not require any knowledge of the data access methods.

BlogRepositoryWithCaching.cs

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    using CacheDiSample.Domain.Model;
    using CacheDiSample.Domain;
    using CacheDiSample.Domain.Repositories;

    namespace CacheDiSample.Domain.CacheDecorators
    {
        public class BlogRepositoryWithCaching : IBlogRepository
        {
            // The generic cache provider, injected by DI
            private ICacheProvider<Blog> cacheProvider;
            // The decorated blog repository, injected by DI
            private IBlogRepository parentBlogRepository;

            public BlogRepositoryWithCaching(IBlogRepository parentBlogRepository, ICacheProvider<Blog> cacheProvider)
            {
                if (parentBlogRepository == null)
                    throw new ArgumentNullException("parentBlogRepository");

                this.parentBlogRepository = parentBlogRepository;

                if (cacheProvider == null)
                    throw new ArgumentNullException("cacheProvider");

                this.cacheProvider = cacheProvider;
            }

            public Blog GetByName(string name)
            {
                string key = string.Format("CacheDiSample.DataAccess.GetByName.{0}", name);
                // hard code 5 minute expiry!
                TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
                return cacheProvider.Fetch(key, () =>
                    {
                        return parentBlogRepository.GetByName(name);
                    },
                    null, relativeCacheExpiry);
            }

            public Blog GetById(int id)
            {
                string key = string.Format("CacheDiSample.DataAccess.GetById.{0}", id);

                // hard code 5 minute expiry!
                TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
                return cacheProvider.Fetch(key, () =>
                    {
                        return parentBlogRepository.GetById(id);
                    },
                    null, relativeCacheExpiry);
            }

            public IList<Blog> GetAll()
            {
                string key = string.Format("CacheDiSample.DataAccess.GetAll");

                // hard code 5 minute expiry!
                TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
                return cacheProvider.Fetch(key, () =>
                    {
                        return parentBlogRepository.GetAll();
                    },
                    null, relativeCacheExpiry)
                    .ToList();
            }
        }
    }

The key things in this caching repository are:

- I inject the ICacheProvider<Blog> implementation into the repository via the constructor. This makes the cache provider functionality available to the repository.
- I inject the parent IBlogRepository implementation (which has the actual data access code) via the constructor. This allows the methods implemented in the parent to be called if nothing is found in the cache.
- I override each of the methods implemented in the repository, including those implemented in the generic BaseRepository. Each override follows the same pattern: it calls CacheProvider.Fetch and passes in the parentBlogRepository implementation of the method as the retrieval method, to be used if nothing is present in the cache.

Configuring the Caching Repository in the DI Container

The final piece of the jigsaw is to tell Castle Windsor to use the BlogRepositoryWithCaching implementation of IBlogRepository, but to inject the actual data access implementation into this decorator. This is easily achieved by modifying the RepositoriesInstaller to use Windsor’s implicit decorator wiring:

    using Castle.MicroKernel.Registration;
    using Castle.MicroKernel.SubSystems.Configuration;
    using Castle.Windsor;

    using CacheDiSample.Domain.CacheDecorators;
    using CacheDiSample.Domain.Repositories;
    using CacheDiSample.DataAccess;

    namespace CacheDiSample.WindsorInstallers
    {
        public class RepositoriesInstaller : IWindsorInstaller
        {
            public void Install(IWindsorContainer container, IConfigurationStore store)
            {
                // Use Castle Windsor implicit wiring for the blog repository decorator
                // Register the outermost decorator first
                container.Register(Component.For<IBlogRepository>()
                                            .ImplementedBy<BlogRepositoryWithCaching>()
                                            .LifestyleTransient());
                // Next register the IBlogRepository implementation to inject into the outer decorator
                container.Register(Component.For<IBlogRepository>()
                                            .ImplementedBy<BlogRepository>()
                                            .LifestyleTransient()
                                            .DependsOn(new
                                            {
                                                nameOrConnectionString = "BloggingContext"
                                            }));
            }
        }
    }

This is all that is needed. Now if the consumer of the repository makes a call to the repository’s methods, the call is routed via the caching mechanism. You can test this by stepping through the code and seeing that the DataAccess.BlogRepository code is only called if there is no data in the cache, or if it has expired. The next step is to add SQL Cache Dependency support into this pattern; this will be a future post.
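Not part of the original post, but for readers wanting to see how these pieces are pulled together, here is a minimal sketch of a composition root, assuming the CacheInstaller and RepositoriesInstaller shown above. The Bootstrapper class name is purely illustrative, and resolving IBlogRepository directly is only for demonstration – in the real MVC sample the container would normally be hooked up to a controller factory instead.

    using Castle.Windsor;
    using CacheDiSample.Domain.Repositories;
    using CacheDiSample.WindsorInstallers;

    namespace CacheDiSample
    {
        // Illustrative composition root (e.g. called from Application_Start).
        public static class Bootstrapper
        {
            public static IWindsorContainer Bootstrap()
            {
                var container = new WindsorContainer();

                // Run the installers defined earlier; Windsor's implicit decorator
                // wiring means the first IBlogRepository registration (the caching
                // decorator) wraps the second (the EF-backed repository).
                container.Install(new CacheInstaller(), new RepositoriesInstaller());

                return container;
            }
        }
    }

    // Example usage: resolving IBlogRepository yields BlogRepositoryWithCaching,
    // with the plain BlogRepository and CacheProvider<Blog> injected by Windsor.
    // var container = Bootstrapper.Bootstrap();
    // var blogs = container.Resolve<IBlogRepository>().GetAll();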

    Read the article

  • VBoxManage: The given path [UUID] is not fully qualified

    - by Tgr
$ vboxmanage clonehd foo.vmdk bar.vmdk
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VMDK'. UUID: f9dffd47-4907-44a5-b3f9-40eba3953d24
$ vboxmanage showhdinfo "f9dffd47-4907-44a5-b3f9-40eba3953d24"
VBoxManage: error: The given path 'f9dffd47-4907-44a5-b3f9-40eba3953d24' is not fully qualified

I can just use the path name instead, but it is annoying. What do I need to do to make the hd commands work with a UUID?

    Read the article

  • Replace dual-XP installs with single-XP install and repartition drive?

    - by caeious
Hello,

The Current Situation

I have a hard drive that is currently split up like so:

Primary Partition C: 9.77 GB NTFS Healthy (System) with XP Pro (in Polish) installed
Extended Partition D: 39.82 GB NTFS Healthy (Boot) with XP Pro (in English) installed
6.30 GB Free space

When I start my computer I get a black and white Windows Boot Manager dual boot screen with 2 choices, both being Microsoft Windows XP. The first choice is the English version of XP and the second choice is the Polish version of XP. Images of my Computer Management window and Dual Boot screen

The Mission

What I need to do is get rid of the entire extended partition (D: 39.82 GB & 6.30 GB free space) and just have the one primary C: drive, which I assume will be somewhere around 55 GB. So in the end I just want XP Pro in English running on my C: drive and no black and white boot screen showing up when starting my laptop.

The Question

How do I go about successfully completing The Mission without making my computer a useless pile of silicon, plastic and metal?

UPDATE: So I went ahead and tried to follow Neal's suggestion but hit a wall. I got to a Windows XP Pro install screen that had the 3 following options as well as my drive data:

To set up Windows XP on the selected item, press Enter
To create a partition in the unpartitioned space, press C
To delete the selected partition, press D

57232 MB Disk 0 at Id 0 on bus 0 on atapi [MBR]
C: Partition1 [NTFS] 10001 MB ( 4642 MB free )
Unpartitioned space 6448 MB
D: Partition2 [NTFS] 40774 MB ( 26225 MB free )
Unpartitioned space 8 MB

I figured I would go with the first choice ((To set up Windows XP on the selected item, press Enter)) because I just wanted to set up Windows XP on C: Partition1 (which was preselected), so I pressed Enter, which brought me to a screen displaying this message:

You chose to install Windows XP on a partition that contains another operating system. Installing Windows XP on this partition might cause the other operating system to function improperly. CAUTION: Installing multiple operating systems on a single partition is not recommended.

So this leads me to 2 new questions:

How do I get rid of the Windows XP (Polish language) install on C: Partition1 so that I can cleanly and safely install Windows XP (English language) on it? Neal, is this what you meant by me possibly having to delete the partition that the Windows XP (Polish language) install was located on?

Since I have the option to delete partitions with the 3rd choice ((To delete the selected partition, press D)), should I do that on this screen, or wait until I have Windows XP (English language) safely installed on C: Partition1? I have to ask these questions because I have read that it is possibly dangerous to delete hard drive partitions. Just being cautious.

    Read the article

  • hp dl580 g5 diag error

    - by maruti
Server disk access is slow, and running Insight Diagnostics reports these errors:

Error: 640003 DST Error
Error: 640006: The Read and/or Write HARD error rate is above threshold. This drive has experienced/recorded error conditions reported by diagnosis and requires replacement.

    Read the article

  • What does the condition "new pull" mean?

    - by Nathan DeWitt
    I'm looking for a hard drive, and some of the conditions are listed as "New Pull" or "System Pull". I figure the System Pull means "taken from a computer and now sold separately" but what does New Pull mean? Does this mean it was assembled and never used? Or maybe it has been freshly pulled from a used machine?

    Read the article

  • Is the Core i5 Processor from Intel like the Celerons of yesteryear?

    - by Chris
The title pretty much says it. I know that the Core i7's are Quad Core and Hyper-threaded (so 4 cores, and 8 logical), and the Core i5's are Quad Core as well but not Hyper-threaded. Does this really make a difference? Or are the only people who are going to care the ones doing CPU-intensive operations? I'm a developer, so I'm more concerned about hard drive speed most of the time than CPU speed. Any thoughts?

    Read the article

< Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >