Search Results

Search found 3195 results on 128 pages for 'tip and tricks'.


  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be "one table for every persistent entity class." This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is the most visible structural mismatch between the object-oriented and relational worlds, because object-oriented systems model both "is a" and "has a" relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don't support type inheritance—and even where it is available, it's usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

    • Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema and using a type discriminator column that holds type information.
    • Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
    • Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

    I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each strategy and learn why to choose it as well as how to implement it. Hopefully that will give you a better idea of which strategy to choose in a particular scenario.

    Inheritance Mapping with Entity Framework Code First
    All of the inheritance mapping strategies discussed in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a powerful code-centric development workflow for working with data. I'm a big fan of the EF Code First approach, and I'm excited about the productivity and power it brings. When it comes to inheritance mapping, Code First not only fully supports all the strategies but also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4.

    A Note For Those Who Follow Other Entity Framework Approaches
    If you are following EF's "Database First" or "Model First" approach, I still recommend reading this series: although the implementation is Code First specific, the explanation of each strategy applies equally well to all approaches, be it Code First or otherwise.

    A Note For Those Who Are New to Entity Framework and Code First
    If you choose to learn EF, you've chosen well. If you choose to learn EF with Code First, you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another by the ADO.NET team here. In this post, I assume you have already set up your machine for Code First development and that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First, like Complex Types and Shared Primary Key Associations.

    A Top-Down Development Scenario
    These posts take a top-down approach: they assume that you're starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C#, and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you're working bottom up, starting with existing database tables.
    I'll show some tricks along the way that help you deal with imperfect table layouts. Let's start with the mapping of entity inheritance.

    The Domain Model
    In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We allow various billing types and represent them as subclasses of BillingDetail. For now, we support CreditCard and BankAccount:

    Implement the Object Model with Code First
    As always, we start with the POCO classes. Note that in our DbContext I define only one DbSet, for the base class BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }
        }

    This object model is all that is needed to enable inheritance with Code First. If you put this in your application, you can immediately start working with the database and perform CRUD operations. Before going into the details of how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

    Polymorphic Queries
    LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

        IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
        List<BillingDetail> billingDetails = linqQuery.ToList();

    Or the same query in EntitySQL:

        string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
        ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                                .CreateQuery<BillingDetail>(eSqlQuery);
        List<BillingDetail> billingDetails = objectQuery.ToList();

    linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

    Non-polymorphic Queries
    All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and that return only instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
    For example, the following query returns only instances of BankAccount:

        IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;

    EntitySQL has an OFTYPE operator that does the same thing:

        string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";

    In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

        string eSqlQuery = @"SELECT VALUE TREAT(b AS Model.BankAccount)
                             FROM BillingDetails AS b
                             WHERE b IS OF(Model.BankAccount)";

    (Note that in the above queries, Model.BankAccount is the fully qualified name of the BankAccount class. You need to replace "Model" with your own namespace name.)

    Table per Class Hierarchy (TPH)
    An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don't have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It's the best-performing way to represent polymorphism—both polymorphic and non-polymorphic queries perform well—and it's even easy to implement by hand. Ad hoc reporting is possible without complex joins or unions, and schema evolution is straightforward.

    Discriminator Column
    As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn't a property of the persistent class in our object model; it's used internally by EF Code First. By default, the column name is "Discriminator" and its type is string. The values default to the persistent class names—in this case, "BankAccount" or "CreditCard". EF Code First automatically sets and retrieves the discriminator values.

    TPH Requires Properties in Subclasses to Be Nullable in the Database
    TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property of the CreditCard class. In a typical mapping scenario, Code First would create an (INT, NOT NULL) column for an int property of a persistent class; but in this case, since a BankAccount instance won't have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

    TPH Violates the Third Normal Form
    Another important issue is normalization. We've created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key of the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
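    To see how TPH behaves at runtime, here is a minimal sketch—assuming the InheritanceMappingContext and entity classes defined above, with Code First creating its default local database; the sample values are purely illustrative—that inserts one instance of each subclass and then queries both polymorphically and non-polymorphically:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                using (var context = new InheritanceMappingContext())
                {
                    // Both rows land in the single BillingDetails table,
                    // distinguished only by the Discriminator column.
                    context.BillingDetails.Add(new BankAccount
                    {
                        Owner = "Morteza", Number = "123",
                        BankName = "Contoso Bank", Swift = "CTSOUS33"
                    });
                    context.BillingDetails.Add(new CreditCard
                    {
                        Owner = "Morteza", Number = "456",
                        CardType = 1, ExpiryMonth = "10", ExpiryYear = "2012"
                    });
                    context.SaveChanges();

                    // Polymorphic query: returns BankAccount and CreditCard objects.
                    foreach (BillingDetail detail in context.BillingDetails)
                        Console.WriteLine("{0}: {1}", detail.GetType().Name, detail.Owner);

                    // Non-polymorphic query: the generated SQL filters on the discriminator.
                    List<BankAccount> accounts = context.BillingDetails
                                                        .OfType<BankAccount>()
                                                        .ToList();
                }
            }
        }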
    Generated SQL Query
    Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw earlier generates the following SQL statement:

        SELECT
         [Extent1].[Discriminator] AS [Discriminator],
         [Extent1].[BillingDetailId] AS [BillingDetailId],
         [Extent1].[Owner] AS [Owner],
         [Extent1].[Number] AS [Number],
         [Extent1].[BankName] AS [BankName],
         [Extent1].[Swift] AS [Swift],
         [Extent1].[CardType] AS [CardType],
         [Extent1].[ExpiryMonth] AS [ExpiryMonth],
         [Extent1].[ExpiryYear] AS [ExpiryYear]
        FROM [dbo].[BillingDetails] AS [Extent1]
        WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

    And the non-polymorphic query for the BankAccount subclass generates this SQL statement:

        SELECT
         [Extent1].[BillingDetailId] AS [BillingDetailId],
         [Extent1].[Owner] AS [Owner],
         [Extent1].[Number] AS [Number],
         [Extent1].[BankName] AS [BankName],
         [Extent1].[Swift] AS [Swift]
        FROM [dbo].[BillingDetails] AS [Extent1]
        WHERE [Extent1].[Discriminator] = 'BankAccount'

    Note how Code First adds a restriction on the discriminator column, and also how it selects only those columns that belong to the BankAccount entity.

    Change Discriminator Column Data Type and Values With Fluent API
    Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code changes the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard, respectively:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BillingDetail>()
                        .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                        .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
        }

    Changing the data type of the discriminator column is also interesting. In the above code we passed strings to the HasValue method, but this method is defined to accept any object:

        public void HasValue(object value);

    Therefore, if we pass it a value of type int, for example, Code First not only uses our desired values (i.e. 1 and 2) in the discriminator column but also changes the column type to (INT, NOT NULL):

        modelBuilder.Entity<BillingDetail>()
                    .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
                    .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

    Summary
    In this post we learned about Table per Hierarchy, the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.

    References
    ADO.NET team blog
    Java Persistence with Hibernate book


  • 10 Essential Tools for building ASP.NET Websites

    - by Stephen Walther
    I recently put together a simple public website with ASP.NET for my company at Superexpert.com. I was surprised by the number of free tools that I ended up using to put it together. Therefore, I thought it would be interesting to create a list of essential tools for building ASP.NET websites. These tools work equally well with both ASP.NET Web Forms and ASP.NET MVC.

    Performance Tools
    After reading Steve Souders' two (very excellent) books on front-end website performance, High Performance Web Sites and Even Faster Web Sites, I have been super sensitive to front-end website performance. According to Souders' Performance Golden Rule: "Optimize front-end performance first, that's where 80% or more of the end-user response time is spent." You can use the tools below to reduce the size of the images, JavaScript files, and CSS files used by an ASP.NET application.

    1. Sprite and Image Optimization Framework
    CSS sprites were first described in an article written for A List Apart entitled CSS Sprites: Image Slicing's Kiss of Death. When you use sprites, you combine multiple images used by a website into a single image. Next, you use CSS trickery to display particular sub-images from the combined image in a webpage. The primary advantage of sprites is that they reduce the number of requests required to display a webpage. Requesting a single large image is faster than requesting multiple small images. In general, the more resources—images, JavaScript files, CSS files—that must be moved across the wire, the slower your website. However, most people avoid using sprites because they require a lot of work: you need to combine all of the images and write just the right CSS rules to display the sub-images. The Microsoft Sprite and Image Optimization Framework enables you to avoid all of this work. The framework combines the images for you automatically. Furthermore, the framework includes an ASP.NET Web Forms control and an ASP.NET MVC helper that make it easy to display the sub-images. You can download the Sprite and Image Optimization Framework from CodePlex at http://aspnet.codeplex.com/releases/view/50869.

    The Sprite and Image Optimization Framework was written by Morgan McClean, who worked in the office next to mine at Microsoft. Morgan was a scary-smart intern from Canada, and we discussed the framework while he was building it (I was really excited to learn that he was working on it). Morgan added some great advanced features to this framework. For example, the Sprite and Image Optimization Framework supports something called image inlining. When you use image inlining, the actual image is stored in the CSS file. Here's an example of what image inlining looks like:

        .Home_StephenWalther_small-jpg {
            width: 75px;
            height: 100px;
            background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAABkCAIAAABB1lpeAAAABGdBTUEAALGOfPtRkwAAACBjSFJNAACHDwAAjA8AAP1SAACBQAAAfXkAAOmLAAA85QAAGcxzPIV3AAAKLs+zNfREAAAAASUVORK5CYII=) no-repeat 0% 0%;
        }

    The actual image (in this case a picture of me that is displayed on the home page of the Superexpert.com website) is stored in the CSS file. If you visit the Superexpert.com website, very few separate images are downloaded. For example, all of the images with a red border in the screenshot below take advantage of CSS sprites:

    Unfortunately, there are some significant gotchas that you need to be aware of when using the Sprite and Image Optimization Framework. There are workarounds for these gotchas.
    I plan to write about these gotchas and workarounds in a future blog entry.

    2. Microsoft Ajax Minifier
    Whenever possible you should combine, minify, compress, and cache (with a far-future header) all of your JavaScript and CSS files. The Microsoft Ajax Minifier makes it easy to minify JavaScript and CSS files. Don't confuse minification and compression; you need to do both. According to Souders, you can reduce the size of a JavaScript file by an additional 20% (on average) by minifying it on top of compressing it. When you minify a JavaScript or CSS file, you use various tricks to reduce the size of the file before you compress it. For example, you can minify a JavaScript file by replacing long JavaScript variable names with short variable names and removing unnecessary white space and comments. You can minify a CSS file by doing such things as replacing long color names such as #ffffff with shorter equivalents such as #fff.

    The Microsoft Ajax Minifier was created by Microsoft employee Ron Logan. Internally, this tool was being used by several large Microsoft websites. We also used the tool heavily on the ASP.NET team. I convinced Ron to publish the tool on CodePlex so that everyone in the world could take advantage of it. You can download the tool from the ASP.NET Ajax website and read documentation for the tool here. I created the installer for the Microsoft Ajax Minifier. When creating the installer, I also created a Visual Studio build task to make it easy to automatically minify all of your JavaScript and CSS files whenever you do a build within Visual Studio. Read the Ajax Minifier Quick Start to learn how to configure the build task.

    3. YSlow
    The YSlow tool is a free add-on for Firefox, created by Yahoo, that enables you to test the front end of your website. For example, here are the current test results for the Superexpert.com website: The Superexpert.com website has an overall score of B (not perfect, but not bad). The YSlow tool is not perfect. For example, the Superexpert.com website received a failing grade of F for not using a Content Delivery Network, even though the website uses the Microsoft Ajax Content Delivery Network for JavaScript files such as jQuery.

    Uptime
    After publishing a website live to the world, you want to ensure that the website does not encounter any issues and that it stays live. I use the following tools to monitor the Superexpert.com website now that it is live.

    4. ELMAH
    ELMAH stands for Error Logging Modules and Handlers for ASP.NET. ELMAH enables you to record any errors that happen at your website so you can review them in the future. You can download ELMAH for free from the ELMAH project website. ELMAH works great with both ASP.NET Web Forms and ASP.NET MVC. You can configure ELMAH to store errors in a number of different stores including XML files, the Event Log, an Access database, a SQL database, an Oracle database, or computer RAM. You also can configure ELMAH to email error messages to you when they happen. By default, you can access ELMAH by requesting the elmah.axd page from a website with ELMAH installed. Here's what the ELMAH page looks like for the Superexpert.com website (this page is password-protected because secret information can be revealed in an error message): If you click on a particular error message, you can view the original Yellow Screen ASP.NET error message (even when the error message was never displayed to the actual user).
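    ELMAH picks up unhandled exceptions automatically, but you can also record exceptions that you catch and handle yourself. Here is a small sketch using ELMAH's ErrorSignal API; the page and its ProcessOrder() operation are hypothetical stand-ins:

        using System;
        using Elmah;

        public partial class CheckoutPage : System.Web.UI.Page
        {
            protected void btnSubmit_Click(object sender, EventArgs e)
            {
                try
                {
                    ProcessOrder(); // hypothetical operation that may throw
                }
                catch (Exception ex)
                {
                    // Record the handled exception in ELMAH's error log
                    // without letting it bubble up as a Yellow Screen.
                    ErrorSignal.FromCurrentContext().Raise(ex);
                }
            }

            private void ProcessOrder() { /* ... */ }
        }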
    I installed ELMAH by taking advantage of the new package manager for ASP.NET named NuGet (originally named NuPack). You can read the details about NuGet in the following blog entry by Scott Guthrie. You can download NuGet from CodePlex.

    5. Pingdom
    I use Pingdom to verify that the Superexpert.com website is always up. You can sign up for Pingdom by visiting Pingdom.com. You can use Pingdom to monitor a single website for free. At the Pingdom website, you configure the frequency with which your website gets pinged. I verify that the Superexpert.com website is up every 5 minutes. I have the Pingdom service verify that it can retrieve the string "Contact Us" from the website homepage. If your website goes down, you can configure Pingdom to send an email, Twitter, SMS, or iPhone alert. I use the Pingdom iPhone app, which looks like this:

    6. Host Tracker
    If your website does go down, then you need some way of determining whether it is a problem with your local network or whether your website is down for everyone. I use a website named Host-Tracker.com to check how widely a website is down. Here's what the Host-Tracker website displays for the Superexpert.com website when the website can be successfully pinged from everywhere in the world: Notice that Host-Tracker pinged the Superexpert.com website from 68 locations, including Roubaix, France and Scranton, PA.

    Debugging
    I mean debugging in the broadest possible sense. I use the following tools when building a website to verify that I have not made a mistake.

    7. HTML Spell Checker
    Why doesn't Visual Studio have a built-in spell checker? I don't know—I've always found this mysterious. Fortunately, however, a former member of the ASP.NET team wrote a free spell checker that you can use with your ASP.NET pages. I find a spell checker indispensable. It is easy to delude yourself that you are capable of perfect spelling. I'm always super embarrassed when I actually run the spell-checking tool and discover all of my spelling mistakes. The fastest way to add the HTML Spell Checker extension to Visual Studio is to select the menu option Tools, Extension Manager within Visual Studio. Click on Online Gallery and search for HTML Spell Checker.

    8. IIS SEO Toolkit
    If people cannot find your website through Google, then you should not even bother to create it. Microsoft has a great extension for IIS named the IIS Search Engine Optimization Toolkit that you can use to identify issues with your website that would hurt its page rank. You also can use this tool to quickly create a sitemap for your website that you can submit to Google or Bing. You can even generate the sitemap for an ASP.NET MVC website. Here's what the report overview for the Superexpert.com website looks like: Notice that the Superexpert.com website had plenty of violations. For example, there are 65 cases in which a page has a broken hyperlink. You can drill into these violations to identify the exact page and location where each violation occurs.

    9. LinqPad
    If your ASP.NET website accesses a database, then you should be using LINQ to Entities with the Entity Framework. Using LINQ involves some magic: LINQ queries written in C# get converted into SQL queries for you. If you are not careful about how you write your LINQ queries, you could unintentionally build a really badly performing website. LinqPad is a free tool that enables you to experiment with your LINQ queries. It even works with Microsoft SQL CE 4 and Azure. You can use LinqPad to execute a LINQ to Entities query and see the results.
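    As a quick illustration of the kind of mistake that LinqPad helps you catch—the context and its Products set here are hypothetical—compare these two queries, which return the same rows but perform very differently:

        // Good: the Where clause is translated into SQL, so only
        // the matching rows come back from the database.
        var cheapProducts = context.Products
                                   .Where(p => p.Price < 100m)
                                   .ToList();

        // Bad: ToList() executes first and pulls every row across
        // the wire; the filter then runs in memory on the web server.
        var cheapProductsSlow = context.Products
                                       .ToList()
                                       .Where(p => p.Price < 100m)
                                       .ToList();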
    You also can use it to see the resulting SQL that gets executed against the database.

    10. .NET Reflector
    I use .NET Reflector daily. The .NET Reflector tool enables you to take any assembly and disassemble it into C# or VB.NET code. You can use .NET Reflector to see the "source code" of an assembly even when you do not have the actual source code. You can download a free version of .NET Reflector from the Redgate website. I use .NET Reflector primarily to help me understand what code is doing internally. For example, I used .NET Reflector with the Sprite and Image Optimization Framework to better understand how the MVC Image helper works. Here's part of the disassembled code from the Image helper class:

    Summary
    In this blog entry, I've discussed several of the tools that I used to create the Superexpert.com website. These are tools that I use to improve the performance, improve the SEO, verify the uptime, or debug the Superexpert.com website. All of the tools discussed in this blog entry are free. Furthermore, all of these tools work with both ASP.NET Web Forms and ASP.NET MVC. Let me know if there are any tools that you use daily when building ASP.NET websites.


  • Silverlight Tree View with Multiple Levels

    - by psheriff
    There are many examples of the Silverlight Tree View on the web; however, most of them only show you how to go two levels deep. What if you have more than two levels? This is where understanding exactly how Hierarchical Data Templates work is vital. In this blog post, I am going to break down how these templates work so you can really understand what is going on underneath the hood. To start, let's look at the typical two-level Silverlight Tree View that has been hard-coded with the values shown below:

        <sdk:TreeView>
          <sdk:TreeViewItem Header="Managers">
            <TextBlock Text="Michael" />
            <TextBlock Text="Paul" />
          </sdk:TreeViewItem>
          <sdk:TreeViewItem Header="Supervisors">
            <TextBlock Text="John" />
            <TextBlock Text="Tim" />
            <TextBlock Text="David" />
          </sdk:TreeViewItem>
        </sdk:TreeView>

    Figure 1 shows you how this tree view looks when you run the Silverlight application.

    Figure 1: A hard-coded, two-level Tree View.

    Next, let's create three classes to mimic the hard-coded Tree View shown above. First, you need an Employee class and an EmployeeType class. The Employee class simply has one property called Name. The constructor accepts a "name" argument that is used to set the Name property when you create an Employee object.

        public class Employee
        {
            public Employee(string name)
            {
                Name = name;
            }

            public string Name { get; set; }
        }

    Next you create an EmployeeType class. This class has one property called EmpType and contains a generic List<> collection of Employee objects. The property that holds the collection is called Employees.

        public class EmployeeType
        {
            public EmployeeType(string empType)
            {
                EmpType = empType;
                Employees = new List<Employee>();
            }

            public string EmpType { get; set; }
            public List<Employee> Employees { get; set; }
        }

    Finally, we have a collection class called EmployeeTypes, created using the generic List<> class. It is in the constructor of this class that you build the collection of EmployeeTypes and fill it with Employee objects:

        public class EmployeeTypes : List<EmployeeType>
        {
            public EmployeeTypes()
            {
                EmployeeType type;

                type = new EmployeeType("Manager");
                type.Employees.Add(new Employee("Michael"));
                type.Employees.Add(new Employee("Paul"));
                this.Add(type);

                type = new EmployeeType("Project Managers");
                type.Employees.Add(new Employee("Tim"));
                type.Employees.Add(new Employee("John"));
                type.Employees.Add(new Employee("David"));
                this.Add(type);
            }
        }

    You now have a data hierarchy in memory (Figure 2), which is what the Tree View control expects to receive as its data source.

    Figure 2: A hierarchical data structure of EmployeeType objects, each containing a collection of Employee objects.

    To connect this hierarchy of data to your Tree View, you create an instance of the EmployeeTypes class in XAML, as shown in line 13 of Figure 3. The key assigned to this object is "empTypes". This key is used as the source of data for the entire Tree View by setting the ItemsSource property, as shown in Figure 3, Callout #1.

    Figure 3: You need to start from the bottom up when laying out your templates for a Tree View.

    The ItemsSource property of the Tree View control is used as the data source in the Hierarchical Data Template with the key of employeeTypeTemplate. In this case there is only one Hierarchical Data Template, so any data you wish to display within that template comes from the collection of Employee Types. The TextBlock control in line 20 uses the EmpType property of the EmployeeType class.
    You specify the name of the Hierarchical Data Template to use in the ItemTemplate property of the Tree View (Callout #2). For the second (and last) level of the Tree View control you use a normal <DataTemplate> with the key name employeeTemplate (line 14). The Hierarchical Data Template in lines 17-21 sets its ItemTemplate property to the key name employeeTemplate (line 19 connects to line 14). The source of the data for the <DataTemplate> needs to be a property of the EmployeeType objects used in the Hierarchical Data Template—in this case, the Employees property. Each Employee in the Employees collection has a Name property that is used to display the employee name in the second level of the Tree View (line 15).

    What is important here is that the lowest level in your Tree View is expressed in a <DataTemplate> and should be listed first in your Resources section. The next level up in your Tree View should be a <HierarchicalDataTemplate> which has its ItemTemplate property set to the key name of the <DataTemplate> and its ItemsSource property set to the data you wish to display in the <DataTemplate>. The Tree View control should have its ItemsSource property set to the data you wish to display in the <HierarchicalDataTemplate> and its ItemTemplate property set to the key name of the <HierarchicalDataTemplate> object. It is in this way that you get the Tree View to display all levels of your hierarchical data structure.

    Three Levels in a Tree View
    Now let's expand upon this concept and use three levels in our Tree View (Figure 4). This Tree View now has EmployeeTypes at the top of the tree, followed by a small set of employees who themselves manage employees. This means that the EmployeeType class has a collection of Employee objects, and each Employee class has a collection of Employee objects as well.

    Figure 4: When using three levels in your Tree View you will have two Hierarchical Data Templates and one Data Template.

    The EmployeeType class has not changed at all from our previous example. However, the Employee class now has one additional property, as shown below:

        public class Employee
        {
            public Employee(string name)
            {
                Name = name;
                ManagedEmployees = new List<Employee>();
            }

            public string Name { get; set; }
            public List<Employee> ManagedEmployees { get; set; }
        }

    The next thing that changes is the EmployeeTypes class. The constructor now needs additional code to create a list of managed employees. Below is the new code.

        public class EmployeeTypes : List<EmployeeType>
        {
            public EmployeeTypes()
            {
                EmployeeType type;
                Employee emp;
                Employee managed;

                type = new EmployeeType("Manager");
                emp = new Employee("Michael");
                managed = new Employee("John");
                emp.ManagedEmployees.Add(managed);
                managed = new Employee("Tim");
                emp.ManagedEmployees.Add(managed);
                type.Employees.Add(emp);

                emp = new Employee("Paul");
                managed = new Employee("Michael");
                emp.ManagedEmployees.Add(managed);
                managed = new Employee("Sara");
                emp.ManagedEmployees.Add(managed);
                type.Employees.Add(emp);
                this.Add(type);

                type = new EmployeeType("Project Managers");
                type.Employees.Add(new Employee("Tim"));
                type.Employees.Add(new Employee("John"));
                type.Employees.Add(new Employee("David"));
                this.Add(type);
            }
        }

    Now that you have all of the data built in your classes, you are ready to hook up this three-level structure to your Tree View. Figure 5 shows the complete XAML needed to hook up your three-level Tree View.
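    (The original figure is an image and is not reproduced in this text. Based on the description that follows, the resources section and Tree View would look roughly like the sketch below; the key names of the two upper templates come from the two-level example, while managedEmployeeTemplate and the namespace prefixes are assumed.)

        <UserControl.Resources>
            <!-- Lowest level: a plain DataTemplate for a managed employee -->
            <DataTemplate x:Key="managedEmployeeTemplate">
                <TextBlock Text="{Binding Name}" />
            </DataTemplate>

            <!-- Middle level: employees, feeding ManagedEmployees to the template above -->
            <sdk:HierarchicalDataTemplate x:Key="employeeTemplate"
                                          ItemTemplate="{StaticResource managedEmployeeTemplate}"
                                          ItemsSource="{Binding ManagedEmployees}">
                <TextBlock Text="{Binding Name}" />
            </sdk:HierarchicalDataTemplate>

            <!-- Top level: employee types, feeding Employees to the template above -->
            <sdk:HierarchicalDataTemplate x:Key="employeeTypeTemplate"
                                          ItemTemplate="{StaticResource employeeTemplate}"
                                          ItemsSource="{Binding Employees}">
                <TextBlock Text="{Binding EmpType}" />
            </sdk:HierarchicalDataTemplate>

            <local:EmployeeTypes x:Key="empTypes" />
        </UserControl.Resources>

        <sdk:TreeView ItemsSource="{Binding Source={StaticResource empTypes}}"
                      ItemTemplate="{StaticResource employeeTypeTemplate}" />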
    You can see in the XAML that there are now two Hierarchical Data Templates and one Data Template. Again, you list the Data Template first since that is the lowest level in your Tree View. The next Hierarchical Data Template listed is the next level up from the lowest level, and finally you have a Hierarchical Data Template for the first level in your tree. You need to work your way from the bottom up when creating your Tree View hierarchy. XAML is processed from the top down, so if you attempt to reference a XAML key name that is below the point where you are referencing it from, you will get a runtime error.

    Figure 5: For three levels in a Tree View you will need two Hierarchical Data Templates and one Data Template.

    Each Hierarchical Data Template uses the previous template as its ItemTemplate. The ItemsSource of each Hierarchical Data Template is used to feed the data to the previous template. This is probably the most confusing part about working with the Tree View control. You expect the content of the current Hierarchical Data Template to use the properties set in the ItemsSource property of that same template, but you actually need to look at the template lower down in the XAML to see the source of the data, as shown in Figure 6.

    Figure 6: The properties you use within the content of a template come from the ItemsSource of the next template in the resources section.

    Summary
    Understanding how to put together your hierarchy in a Tree View is simple once you understand that you need to work from the bottom up. Start with the bottom node in your Tree View and determine what that will look like and where the data will come from. You then build the next Hierarchical Data Template to feed the data to the previous template you created. You keep doing this for each level in your Tree View until you get to the last level. The data for that last Hierarchical Data Template comes from the ItemsSource in the Tree View itself.

    NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select "Tips & Tricks", then select "Silverlight TreeView with Multiple Levels" from the drop-down list.


  • Read XML Files using LINQ to XML and Extension Methods

    - by psheriff
    In previous blog posts I have discussed how to use XML files to store data in your applications. I showed you how to read those XML files from your project and how to get XML from a WCF service. One of the problems with reading XML files is when elements or attributes are missing. If you try to read that missing data, a null value is returned. This can cause a problem if you are trying to load that data into an object and a null is read. This blog post will show you how to create extension methods to detect null values and return valid values to load into your object.

    The XML Data
    An XML data file called Product.xml is located in the \Xml folder of the Silverlight sample project for this blog post. This XML file contains several rows of product data that will be used in each of the samples for this post. Each row has four attributes: ProductId, ProductName, IntroductionDate, and Price.

        <Products>
          <Product ProductId="1"
                   ProductName="Haystack Code Generator for .NET"
                   IntroductionDate="07/01/2010" Price="799" />
          <Product ProductId="2"
                   ProductName="ASP.Net Jumpstart Samples"
                   IntroductionDate="05/24/2005" Price="0" />
          ...
        </Products>

    The Product Class
    Just as you create an entity class to map each column in a table to a property in a class, you should do the same for an XML file too. In this case you will create a Product class with a property for each of the attributes in each element of product data. The following code listing shows the Product class.

        public class Product : CommonBase
        {
            public const string XmlFile = @"Xml/Product.xml";

            private string _ProductName;
            private int _ProductId;
            private DateTime _IntroductionDate;
            private decimal _Price;

            public string ProductName
            {
                get { return _ProductName; }
                set
                {
                    if (_ProductName != value)
                    {
                        _ProductName = value;
                        RaisePropertyChanged("ProductName");
                    }
                }
            }

            public int ProductId
            {
                get { return _ProductId; }
                set
                {
                    if (_ProductId != value)
                    {
                        _ProductId = value;
                        RaisePropertyChanged("ProductId");
                    }
                }
            }

            public DateTime IntroductionDate
            {
                get { return _IntroductionDate; }
                set
                {
                    if (_IntroductionDate != value)
                    {
                        _IntroductionDate = value;
                        RaisePropertyChanged("IntroductionDate");
                    }
                }
            }

            public decimal Price
            {
                get { return _Price; }
                set
                {
                    if (_Price != value)
                    {
                        _Price = value;
                        RaisePropertyChanged("Price");
                    }
                }
            }
        }

    NOTE: The CommonBase class that the Product class inherits from simply implements the INotifyPropertyChanged interface in order to inform your XAML UI of any property changes. You can see this class in the sample you download for this blog post.

    Reading Data
    When using LINQ to XML, you call the Load method of the XElement class to load the XML file. Once the XML file has been loaded, you write a LINQ query to iterate over the "Product" descendants in the XML file. The "select" portion of the LINQ query creates a new Product object for each row in the XML file. You retrieve each attribute by passing the attribute name to the Attribute() method and reading the Value property. The Value property returns a null if there is no data, or the string value of the attribute otherwise. The Convert class is used to convert the value retrieved into the appropriate data type required by the Product class.
        private void LoadProducts()
        {
            XElement xElem = null;

            try
            {
                xElem = XElement.Load(Product.XmlFile);

                // The following will NOT work if you have missing attributes
                var products =
                    from elem in xElem.Descendants("Product")
                    orderby elem.Attribute("ProductName").Value
                    select new Product
                    {
                        ProductId = Convert.ToInt32(
                            elem.Attribute("ProductId").Value),
                        ProductName = Convert.ToString(
                            elem.Attribute("ProductName").Value),
                        IntroductionDate = Convert.ToDateTime(
                            elem.Attribute("IntroductionDate").Value),
                        Price = Convert.ToDecimal(elem.Attribute("Price").Value)
                    };

                lstData.DataContext = products;
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

    This is where the problem comes in. If any attributes are missing from any of the rows in the XML file, or if the data in ProductId or IntroductionDate is not of the appropriate type, this code will fail! The reason? There is no built-in check to ensure that the correct type of data is contained in the XML file. This is where extension methods can come in really handy.

    Using Extension Methods
    Instead of using the Convert class to perform type conversions as you just saw, create a set of extension methods attached to the XAttribute class. These extension methods will perform null checking and ensure that a valid value is passed back instead of an exception being thrown if there is invalid data in your XML file.

        private void LoadProducts()
        {
            var xElem = XElement.Load(Product.XmlFile);

            var products =
                from elem in xElem.Descendants("Product")
                orderby elem.Attribute("ProductName").Value
                select new Product
                {
                    ProductId = elem.Attribute("ProductId").GetAsInteger(),
                    ProductName = elem.Attribute("ProductName").GetAsString(),
                    IntroductionDate =
                        elem.Attribute("IntroductionDate").GetAsDateTime(),
                    Price = elem.Attribute("Price").GetAsDecimal()
                };

            lstData.DataContext = products;
        }

    Writing Extension Methods
    To create extension methods you create a class with any name you like. The code listing below shows a class named XmlExtensionMethods. This listing just shows a couple of the available methods, namely GetAsString and GetAsInteger. These methods are just like any other methods you would write, except that you prefix the type of the parameter with the keyword "this". This lets the compiler know that it should add the method to the class specified in that parameter.

        public static class XmlExtensionMethods
        {
            public static string GetAsString(this XAttribute attr)
            {
                string ret = string.Empty;

                if (attr != null && !string.IsNullOrEmpty(attr.Value))
                {
                    ret = attr.Value;
                }

                return ret;
            }

            public static int GetAsInteger(this XAttribute attr)
            {
                int ret = 0;
                int value = 0;

                if (attr != null && !string.IsNullOrEmpty(attr.Value))
                {
                    if (int.TryParse(attr.Value, out value))
                        ret = value;
                }

                return ret;
            }

            ...
        }

    Each of the methods in the XmlExtensionMethods class should inspect the XAttribute to ensure it is not null and that the value in the attribute is not null. If the value is null, a default value is returned, such as an empty string or a 0 for a numeric value.

    Summary
    Extension methods are a great way to simplify your code and provide protection to ensure problems do not occur when reading data.
    You will probably want to create more extension methods to handle XElement objects as well, for when you use element-based XML. Feel free to extend these extension methods to accept a parameter which would be the default value to use when a null value is detected (see the sketch at the end of this entry), or any other parameters you wish.

    NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose "Tips & Tricks", then "Read XML Files using LINQ to XML and Extension Methods" from the drop-down.

    Good luck with your coding,
    Paul D. Sheriff
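    Picking up the suggestion above, here is a sketch of what a default-value overload of GetAsString might look like (this overload is not part of the original sample; it is an assumed extension):

        using System.Xml.Linq;

        public static class XmlExtensionMethodsWithDefaults
        {
            public static string GetAsString(this XAttribute attr, string defaultValue)
            {
                // Return the caller-supplied default instead of string.Empty
                // when the attribute is missing or empty.
                if (attr != null && !string.IsNullOrEmpty(attr.Value))
                    return attr.Value;

                return defaultValue;
            }
        }

    You would call it like this: elem.Attribute("ProductName").GetAsString("(unknown)").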


  • A Communication System for XAML Applications

    - by psheriff
    In any application, you want to keep the coupling between any two or more objects as loose as possible. Coupling happens when one class contains a property that is used in another class, or uses another class in one of its methods. When you have this situation, the classes are said to be strongly (or tightly) coupled. One popular design pattern that helps keep objects loosely coupled is the Mediator design pattern. The basics of this pattern are very simple: avoid having one object talk directly to another object, and instead use another class to mediate between the two. As with most of my blog posts, the purpose is to introduce you to a simple approach to using a message broker, not all of the fine details.

    IPDSAMessageBroker Interface
    As with most implementations of a design pattern, you typically start with an interface or an abstract base class. In this particular instance, an interface will work just fine. The interface for our message broker class contains just a single method, SendMessage, and one event, MessageReceived.

        public delegate void MessageReceivedEventHandler(
            object sender, PDSAMessageBrokerEventArgs e);

        public interface IPDSAMessageBroker
        {
            void SendMessage(PDSAMessageBrokerMessage msg);

            event MessageReceivedEventHandler MessageReceived;
        }

    PDSAMessageBrokerMessage Class
    As you can see in the interface, the SendMessage method requires a PDSAMessageBrokerMessage to be passed to it. This class has a MessageName property of type string and a MessageBody property of type object, so you can pass whatever you want in the body. You might pass a string in the body, or a complete Customer object. The MessageName property helps the receiver of the message know what is in the MessageBody property.

        public class PDSAMessageBrokerMessage
        {
            public PDSAMessageBrokerMessage()
            {
            }

            public PDSAMessageBrokerMessage(string name, object body)
            {
                MessageName = name;
                MessageBody = body;
            }

            public string MessageName { get; set; }

            public object MessageBody { get; set; }
        }

    PDSAMessageBrokerEventArgs Class
    As our message broker class will be raising an event that others can respond to, it is a good idea to create your own event argument class. This class inherits from the System.EventArgs class and adds a couple of additional properties: MessageName, a string value, and Message, of type PDSAMessageBrokerMessage.

        public class PDSAMessageBrokerEventArgs : EventArgs
        {
            public PDSAMessageBrokerEventArgs()
            {
            }

            public PDSAMessageBrokerEventArgs(string name,
                PDSAMessageBrokerMessage msg)
            {
                MessageName = name;
                Message = msg;
            }

            public string MessageName { get; set; }

            public PDSAMessageBrokerMessage Message { get; set; }
        }

    PDSAMessageBroker Class
    Now that you have an interface and a class to pass a message through an event, it is time to create the actual PDSAMessageBroker class. This class implements the SendMessage method and also creates the event for the delegate declared in the interface.
        public class PDSAMessageBroker : IPDSAMessageBroker
        {
            public void SendMessage(PDSAMessageBrokerMessage msg)
            {
                PDSAMessageBrokerEventArgs args;

                args = new PDSAMessageBrokerEventArgs(
                    msg.MessageName, msg);

                RaiseMessageReceived(args);
            }

            public event MessageReceivedEventHandler MessageReceived;

            protected void RaiseMessageReceived(
                PDSAMessageBrokerEventArgs e)
            {
                if (null != MessageReceived)
                    MessageReceived(this, e);
            }
        }

    The SendMessage method takes a PDSAMessageBrokerMessage object as an argument. It then creates an instance of the PDSAMessageBrokerEventArgs class, passing two items to the constructor: the MessageName from the PDSAMessageBrokerMessage object and the object itself. It may seem a little redundant to pass in the message name when that same message name is part of the message, but it does make consuming the event and checking for the message name a little cleaner—as you will see in the next section.

    Create a Global Message Broker
    In your WPF application, create an instance of this message broker class in the App class located in the App.xaml file. Create a public property in the App class and create a new instance of that class in the OnStartup event procedure, as shown in the following code:

        public partial class App : Application
        {
            public PDSAMessageBroker MessageBroker { get; set; }

            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                MessageBroker = new PDSAMessageBroker();
            }
        }

    Sending and Receiving Messages
    Let's assume you have a user control that you load into a control on your main window, and you want to send a message from that user control to the main window. You might have the main window display a message box, or put a string into a status bar, as shown in Figure 1.

    Figure 1: The main window can receive and send messages.

    The first thing you do in the main window is to hook up an event procedure to the MessageReceived event of the global message broker. This is done in the constructor of the main window:

        public MainWindow()
        {
            InitializeComponent();

            (Application.Current as App).MessageBroker.
                MessageReceived += new MessageReceivedEventHandler(
                    MessageBroker_MessageReceived);
        }

    One piece of code you might not be familiar with is accessing a property defined in the App class of your XAML application. Within the App.xaml file is a class named App that inherits from the Application object. You access the global instance of this App class by using Application.Current, and you cast Application.Current to App prior to accessing any of the public properties or methods you defined in the App class. Thus, the code (Application.Current as App).MessageBroker allows you to get at the MessageBroker property defined in the App class.

    In the MessageReceived event procedure in the main window (shown below) you can now check to see if the MessageName property of the PDSAMessageBrokerEventArgs is equal to "StatusBar", and if it is, display the message body in the status bar text block control.

        void MessageBroker_MessageReceived(object sender,
            PDSAMessageBrokerEventArgs e)
        {
            switch (e.MessageName)
            {
                case "StatusBar":
                    tbStatus.Text = e.Message.MessageBody.ToString();
                    break;
            }
        }

    In the Page 1 user control's Loaded event procedure you will send the message "StatusBar" through the global message broker to any listener using the following code:
        private void UserControl_Loaded(object sender,
            RoutedEventArgs e)
        {
            // Send Status Message
            (Application.Current as App).MessageBroker.
                SendMessage(new PDSAMessageBrokerMessage("StatusBar",
                    "This is Page 1"));
        }

    Since the main window is listening for the message "StatusBar", it will display the value "This is Page 1" in the status bar at the bottom of the main window.

    Sending a Message to a User Control
    The previous example sent a message from the user control to the main window. You can also send messages from the main window to any listener as well. Remember that the global message broker is really just a broadcaster to anyone who has hooked into the MessageReceived event. In the constructor of the user control named ucPage1 you can hook into the global message broker's MessageReceived event. You can then listen for any messages that are sent to this control by using a switch-case structure similar to the one in the main window.

        public ucPage1()
        {
            InitializeComponent();

            // Hook to the Global Message Broker
            (Application.Current as App).MessageBroker.
                MessageReceived += new MessageReceivedEventHandler(
                    MessageBroker_MessageReceived);
        }

        void MessageBroker_MessageReceived(object sender,
            PDSAMessageBrokerEventArgs e)
        {
            // Look for messages intended for Page 1
            switch (e.MessageName)
            {
                case "ForPage1":
                    MessageBox.Show(e.Message.MessageBody.ToString());
                    break;
            }
        }

    Once the ucPage1 user control has been loaded into the main window, you can send it a message using the following code:

        private void btnSendToPage1_Click(object sender,
            RoutedEventArgs e)
        {
            PDSAMessageBrokerMessage arg =
                new PDSAMessageBrokerMessage();

            arg.MessageName = "ForPage1";
            arg.MessageBody = "Message For Page 1";

            // Send a message to Page 1
            (Application.Current as App).MessageBroker.SendMessage(arg);
        }

    Since the MessageName matches what is in the ucPage1 MessageReceived event procedure, ucPage1 can do anything in response to that event. It is important to note that when the message gets sent, it is sent to all MessageReceived event procedures, not just the one that is looking for a message called "ForPage1". If the user control ucPage1 is not loaded when this message is broadcast and no other code is listening for it, then it is simply ignored.

    Remove Event Handler
    In each class where you add an event handler to the MessageReceived event, you need to make sure to remove that event handler when you are done. Failure to do so can cause a strong reference to the class and thus prevent that object from being garbage collected. In each of your user controls, make sure to remove the event handler in the Unloaded event.

        private void UserControl_Unloaded(object sender, RoutedEventArgs e)
        {
            if (_MessageBroker != null)
                _MessageBroker.MessageReceived -=
                    MessageBroker_MessageReceived;
        }

    Problems with Message Brokering
    As with most "global" classes, or classes that hook up events to other classes, garbage collection is something you need to consider. Just the simple act of hooking up an event procedure to a global event handler creates a reference between your user control and the message broker in the App class. This means that even when your user control is removed from your UI, the class will still be in memory because of the reference to the message broker. This can cause messages to still be handled even though the UI is not being displayed. It is up to you to make sure you remove those event handlers as discussed in the previous section. If you don't, the garbage collector cannot release those objects.
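    Note that the Unloaded handler above references a _MessageBroker field that the earlier snippets never initialize; presumably the constructor caches the broker so the Unloaded event can unhook the exact same instance. A sketch of what that might look like (the field and its initialization are assumed, not part of the original listing):

        public partial class ucPage1 : UserControl
        {
            // Cache the broker so Unloaded can unhook the same instance.
            private PDSAMessageBroker _MessageBroker;

            public ucPage1()
            {
                InitializeComponent();

                _MessageBroker = (Application.Current as App).MessageBroker;
                _MessageBroker.MessageReceived +=
                    new MessageReceivedEventHandler(
                        MessageBroker_MessageReceived);
            }

            void MessageBroker_MessageReceived(object sender,
                PDSAMessageBrokerEventArgs e)
            {
                // ... handle messages as shown above ...
            }
        }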
    Instead of using events to send messages from one object to another, you might consider registering your objects with a central message broker. This message broker then becomes a collection class into which you pass an object and the messages that object wishes to receive. You do end up with the same problem, however: you have to un-register your objects, otherwise they stay in memory. To alleviate this problem, you can look into using the WeakReference class as a way to store your objects so they can be garbage collected if need be. Discussing weak references is beyond the scope of this post, but you can look the topic up on the web.

    Summary
    In this blog post you learned how to create a simple message broker system that allows you to send messages from one object to another without having to reference objects directly. This reduces the coupling between objects in your application. You do need to remember to get rid of any event handlers prior to your objects going out of scope, or you run the risk of memory leaks and of events being called even though you can no longer access the object that is responding to them.

    NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select "Tips & Tricks", then select "A Communication System for XAML Applications" from the drop-down list.


  • Get XML from Server for Use on Windows Phone

    - by psheriff
    When working with mobile devices you always need to take into account bandwidth usage and power consumption. If you are constantly connecting to a server to retrieve data for an input screen, you might think about moving some of that data down to the phone and caching it there. An example would be a static list of US state codes that you are asking the user to select from. Since this data does not change very often, it is a great candidate to cache on the phone. Since the Windows Phone does not have an embedded database, you can just use an XML string stored in Isolated Storage. Of course, you then need to figure out how to get the data down to the phone. You can either ship it with the application, or connect and retrieve the data from your server one time and thereafter read it from the cache (a sketch of this cache-first check appears at the end of this post). In this blog post you will see how to create a WCF service that retrieves data from a Product table in a database and sends that data as XML to the phone, where it is stored in Isolated Storage. You will then read that data from Isolated Storage using LINQ to XML and display it in a ListBox.

    Step 1: Create a Windows Phone Application
    The first step is to create a Windows Phone application called WP_GetXmlFromDataSet (or whatever you want to call it). On the MainPage.xaml add the following XAML within the "ContentPanel" grid:

        <StackPanel>
          <Button Name="btnGetXml"
                  Content="Get XML"
                  Click="btnGetXml_Click" />
          <Button Name="btnRead"
                  Content="Read XML"
                  IsEnabled="False"
                  Click="btnRead_Click" />
          <ListBox Name="lstData"
                   Height="430"
                   ItemsSource="{Binding}"
                   DisplayMemberPath="ProductName" />
        </StackPanel>

    Now it is time to create the WCF Service application that you will call to get the XML from a table in a SQL Server database.

    Step 2: Create a WCF Service Application
    Add a new project to your solution called WP_GetXmlFromDataSet.Services. Delete the IService1.* and Service1.* files and the App_Data folder, as you generally don't need these items. Add a new WCF service class called ProductService. In the IProductService interface, replace the DoWork() method with the following code:

        [OperationContract]
        string GetProductXml();

    Open the code-behind in ProductService.svc and create the GetProductXml() method. This method (shown below) connects to a database and retrieves data from a Product table.

        public string GetProductXml()
        {
            string ret = string.Empty;
            string sql = string.Empty;
            SqlDataAdapter da;
            DataSet ds = new DataSet();

            sql = "SELECT ProductId, ProductName,";
            sql += " IntroductionDate, Price";
            sql += " FROM Product";

            da = new SqlDataAdapter(sql,
                ConfigurationManager.ConnectionStrings["Sandbox"].ConnectionString);

            da.Fill(ds);

            // Create attribute-based XML
            foreach (DataColumn col in ds.Tables[0].Columns)
            {
                col.ColumnMapping = MappingType.Attribute;
            }

            ds.DataSetName = "Products";
            ds.Tables[0].TableName = "Product";
            ret = ds.GetXml();

            return ret;
        }

    After retrieving the data from the Product table using a DataSet, you set each column's ColumnMapping property to Attribute. Using attribute-based XML makes the data transferred across the wire a little smaller. You then set the DataSetName property to the top-level element name you want in the XML, and the TableName property on the DataTable to the name you want each element to have.
The last thing you need to do is call the GetXml() method on the DataSet, which returns the data in your DataSet as an XML string. This is the value you return from the service call. The XML returned from the above call looks like the following:

<Products>
  <Product ProductId="1"
           ProductName="PDSA .NET Productivity Framework"
           IntroductionDate="9/3/2010"
           Price="5000" />
  <Product ProductId="3"
           ProductName="Haystack Code Generator for .NET"
           IntroductionDate="7/1/2010"
           Price="599.00" />
  ...
</Products>

The GetProductXml() method uses a connection string from the Web.Config file, so add a <connectionStrings> element to the Web.Config file in your WCF Service application. Modify the settings shown below as needed for your server and database name.

<connectionStrings>
  <add name="Sandbox"
       connectionString="Server=Localhost;Database=Sandbox;
                         Integrated Security=Yes"/>
</connectionStrings>

The Product Table

You will need a Product table to read data from. I used the following structure for my Product table. Add any data you want to this table after you create it in your database.

CREATE TABLE Product
(
  ProductId int PRIMARY KEY IDENTITY(1,1) NOT NULL,
  ProductName varchar(50) NOT NULL,
  IntroductionDate datetime NULL,
  Price money NULL
)

Step 3: Connect to the WCF Service from the Windows Phone Application

Back in your Windows Phone application you now need to add a Service Reference to the WCF Service application you just created. Right-click on the Windows Phone project and choose Add Service Reference… from the context menu. Click on the Discover button. In the Namespace text box enter “ProductServiceReference”, then click the OK button. If you entered everything correctly, Visual Studio will generate the code that allows you to connect to your Product service.

In the MainPage.xaml designer window, double-click on the Get XML button to generate the Click event procedure for this button. In the Click event procedure, make a call to a GetXmlFromServer() method. This method also needs a “Completed” event procedure, since all communication with a WCF Service from Windows Phone must be asynchronous. Write these two methods as follows:

private const string KEY_NAME = "ProductData";

private void GetXmlFromServer()
{
  ProductServiceClient client = new ProductServiceClient();

  client.GetProductXmlCompleted += new
    EventHandler<GetProductXmlCompletedEventArgs>
      (client_GetProductXmlCompleted);

  client.GetProductXmlAsync();
  client.CloseAsync();
}

void client_GetProductXmlCompleted(object sender,
                                   GetProductXmlCompletedEventArgs e)
{
  // Store XML data in Isolated Storage
  IsolatedStorageSettings.ApplicationSettings[KEY_NAME] = e.Result;

  btnRead.IsEnabled = true;
}

As you can see, this is a fairly standard call to a WCF Service. In the Completed event you get the Result from the event argument, which is the XML, and store it in Isolated Storage using the IsolatedStorageSettings.ApplicationSettings class. Notice the constant I added to specify the name of the key. You will use this constant later to read the data from Isolated Storage.
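Since the whole point is to avoid unnecessary server trips, in a real application you would typically check Isolated Storage first and only call the service when nothing is cached yet. The helper below is a minimal sketch of that check, not part of the original sample; it assumes the KEY_NAME constant above and the GetXmlFromServer() and ReadProductXml() methods shown in Steps 3 and 5 of this post.

// Hypothetical startup helper: use the cached XML when present,
// and only hit the WCF service when Isolated Storage is empty.
private void LoadProductData()
{
  if (IsolatedStorageSettings.ApplicationSettings.Contains(KEY_NAME))
  {
    // A cached copy exists; parse and bind it (see Step 5).
    ReadProductXml();
  }
  else
  {
    // No cached copy yet; fetch it once from the server (see Step 3).
    GetXmlFromServer();
  }
}

You could call a method like this from the page's Loaded event instead of wiring everything to the two buttons.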
Step 4: Create a Product Class

Even though you stored the XML data in Isolated Storage, when you read it back out you will want to convert each element in the XML into an actual Product object. This means you need to create a Product class in your Windows Phone application. Add a Product class to your project that looks like the code below:

public class Product
{
  public string ProductName { get; set; }
  public int ProductId { get; set; }
  public DateTime IntroductionDate { get; set; }
  public decimal Price { get; set; }
}

Step 5: Read Settings from Isolated Storage

Now that you have the XML data stored in Isolated Storage, it is time to use it. Go back to the MainPage.xaml design view and double-click on the Read XML button to generate the Click event procedure. From the Click event procedure, call a method named ReadProductXml(). Create this method as shown below:

private void ReadProductXml()
{
  XElement xElem = null;

  if (IsolatedStorageSettings.ApplicationSettings.Contains(KEY_NAME))
  {
    xElem = XElement.Parse(
     IsolatedStorageSettings.ApplicationSettings[KEY_NAME].ToString());

    // Create a list of Product objects
    var products =
        from prod in xElem.Descendants("Product")
        orderby prod.Attribute("ProductName").Value
        select new Product
        {
          ProductId = Convert.ToInt32(prod.Attribute("ProductId").Value),
          ProductName = prod.Attribute("ProductName").Value,
          IntroductionDate =
            Convert.ToDateTime(prod.Attribute("IntroductionDate").Value),
          Price = Convert.ToDecimal(prod.Attribute("Price").Value)
        };

    lstData.DataContext = products;
  }
}

The ReadProductXml() method checks that the key name you saved your XML under exists in Isolated Storage before trying to open it. If the key name exists, you retrieve the value as a string and use the XElement's Parse method to convert the XML string into an XElement object. LINQ to XML is then used to iterate over each Product element in the XElement object and create a new Product object from each element's attributes. The LINQ to XML code also orders the XML data by ProductName. After the LINQ to XML code runs, you end up with an IEnumerable collection of Product objects in the variable named "products". You assign this collection of product data to the DataContext of the ListBox you created in XAML. The DisplayMemberPath property of the ListBox is set to "ProductName", so it displays the product name for each row in your products collection.

Summary

In this article you learned how to retrieve an XML string from a table in a database, return that string across a WCF Service, and store it in Isolated Storage on your Windows Phone. You then used LINQ to XML to create a collection of Product objects from the stored data and displayed that data in a Windows Phone list box. This same technique can be used in Silverlight or WPF applications too.

NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get XML From Server for Use on Windows Phone" from the drop-down.

Good luck with your coding,
Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS **
Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.

    Read the article

  • e-interview: SunSpace to WebCenter migration

    - by me
I had the pleasure of doing an e-interview with Ana Neves about the SunSpace to WebCenter migration project. Below is the English version of the interview. Enjoy.

Peter, you joined Oracle in 2009 through the acquisition of Sun. Becoming a part of Oracle meant many changes. The internal collaboration platform was one of them, as per a post you wrote back in 2011. Sun had SunSpace. How would you describe SunSpace?

SunSpace was the internal community and social collaboration platform for Sun's Global Sales and Services organization. SunSpace served around 600 communities with a main focus on technology, products and services. SunSpace was a big success: within 3 months of its launch SunSpace had over 20,000 users, and it won the Atlassian "Not just another wiki" Award for the best use of Confluence (https://blogs.oracle.com/peterreiser/entry/goodbye_sunspace_hello_webcenter).

What made SunSpace so special?

1. People-centric versus web-centric. The main concept of SunSpace put the person in the middle of everything. All relevant information, resources etc. were dynamically pushed to a person's myProfile (a Facebook-like interface) based on the person's interests and needs.

2. Easy to use. SunSpace was really easy to use. We spent a lot of time on social interaction design to optimize the user experience. We also integrated some sophisticated technology to hide complexity from the user. For example, when a user added a document to SunSpace, we analyzed the content of the document and suggested related metadata and tags to the user, based on a sophisticated algorithm that was integrated with the corporate taxonomy. Based on this metadata, the document was automatically shared with the relevant communities.

3. Easy to find. One of the main use cases for SunSpace was that a user could quickly find the content and information they needed for their job. The search implementation was based on:
- an optimized search engine algorithm using social-value-based ranking enhancements
- community-facilitated search optimization
- faceted search, which recommended highly relevant content like products, communities and experts

4. Social adoption - how to build vibrant communities. You can deploy the coolest social technology, but what if the users are not using it? To drive user adoption we implemented two complementary models:

4.1 Community methodology. We developed a set of best practices on how to create, run and sustain communities, including community structure and types (e.g. Community of Practice, Community of Interest etc.), tips and tricks on how to build vibrant communities, community health checks, etc. These best practices were constantly tuned and updated by the community of community drivers.

4.2 Social value system. To drive user adoption there is ONE key question you have to answer for each individual user: What's In It For Me (WIIFM)? We developed a social value system called Community Equity, which measures the social value flow between people, content and metadata. Based on this technology we added "gamification" techniques (although at that time this term did not exist) to SunSpace to honor people for their active contribution and participation. For example, all the social credentials a user earned through active community participation were dynamically displayed on her/his myProfile.

How would you describe WebCenter?

Oracle WebCenter (@oraclewebcenter) is Oracle's user engagement platform for social business.
It helps people work together more efficiently through contextual collaboration tools that optimize connections between people, information, and applications, and it ensures users have access to the right information in the context of the business process in which they are engaged. Oracle WebCenter can help your organization deliver contextual and targeted web experiences to users and enable employees to access information and applications through intuitive portals, composite applications, and mash-ups.

How does it compare to SunSpace in terms of functionality?

Before I answer this question, I would like to point out some limitations we had started to see with the SunSpace implementation. Due to the massive growth of the user population (>20,000 users), we experienced performance and scalability challenges with the existing technology. Also, at the time, Sun Internal Communications and SunIT planned to replace the entire Sun intranet with SunSpace, and we kicked off a project to evaluate the enterprise-level technology that would eventually replace the good old static intranet. And then Oracle acquired Sun. We had already defined the functional requirements for the intranet replacement with a Social Enterprise Stack, so we just needed to evaluate those requirements against WebCenter. Below is a summary of this evaluation, mapping each SunSpace feature to its WebCenter equivalent and how it works.

MyProfile features:
- Home (SunSpace) → MyProfile (WebCenter; to access, click on your name at the top of any WebCenter page): Your name, title, and reporting line are displayed. Sub-tabs show your activity stream (Activities); people in your network (Connections); files you have uploaded (Documents); your contact information (Organization); and any personal information you wish to share (About).
- Files (SunSpace) → MyFiles (WebCenter): Allows you to upload, download and store documents or wiki pages within folders and subfolders. The WebDav interface allows you to download/upload files and folders with a simple drag and drop to/from your local machine. Tagging is supported and recommended.
- Network (SunSpace) → Home and MyConnections (WebCenter): Home displays the activity stream of individuals in your network. MyConnections shows individuals in your network; click on a person's name to see their contact info and a link to their profile.
- Status Updates (SunSpace) → MyProfile > Activities (WebCenter): Add and display your recent activities and status updates.
- Watches (SunSpace) → Preferences > Subscriptions > Current Subscriptions (WebCenter): Receive email notifications when pages or spaces you watch are modified.
- Drafts (SunSpace) → N/A: WebCenter does not support drafts.
- Settings (SunSpace) → Preferences (WebCenter; to access, click on 'Preferences' at the top of any WebCenter page): Set your general preferences, as well as your WebCenter messaging, search and mail settings.
- MyCommunities (SunSpace) → MySpaces (WebCenter; to access, click on 'Spaces' at the top of any WebCenter page): Displays MySpaces (communities you are a member of) and Recent Spaces (communities you have recently visited).

Community features:
- Home (SunSpace) → Home (WebCenter): Displays a community introduction and activity stream. Members can add messages, links or documents via the Community Message Board. There is no Top Contributors widget.
- People (SunSpace) → Members (WebCenter): Lists members of the community. The Mail All Members feature allows moderators and participants to send a message to all members of the community.
Membership management can be found under Manage > Members.
- News (SunSpace) → News (WebCenter): Members can post and access the latest community news, and they can subscribe to news using an RSS reader.
- Documents (SunSpace) → Documents (WebCenter): Allows community members to upload, download and store documents or wiki pages within folders and subfolders. The WebDav interface allows participants to download/upload files and folders with a simple drag and drop to/from your local machine. Tagging is supported and recommended.
- Wiki (SunSpace) → Wiki (WebCenter): Allows community members to create and update web pages with a WYSIWYG editor. Note: WebCenter does not support macros or portlet embedding.
- Forum (SunSpace) → Forum (WebCenter): Post community forum topics and contribute to community forum conversations.
- N/A (SunSpace) → Calendar (WebCenter): Update and/or view the community calendar.
- N/A (SunSpace) → Analytics (WebCenter): Displays detailed analytics data (views, downloads, unique users etc.) for pages, wiki, documents, and forum in a given community space.

What is the adoption of WebCenter at Oracle?

The entire intranet, serving around 100,000 users, is running on WebCenter Content. For professional communities we use WebCenter Portal and Spaces. Currently we have around 6,000 community spaces with around 40,000 members.

Does Oracle have any metrics to assess usage and impact of WebCenter? Can you give us some examples?

Sure - we have a lot of metrics. For the intranet we use traditional metrics like page views, monthly unique visitors and unique visits. For communities we use the WebCenter Portal/Spaces analytics service, which gives us a wealth of data. The key metrics we track are:
- space traffic (page views, unique users)
- wiki and documents (views, downloads etc.)
- forum (users, views, posts etc.)
- registered members over time
Depending on the community, we can filter/segment the metrics by user properties, e.g. country, organization, job role etc.

What are you doing to improve usage and impact?

1. We are integrating the WebCenter social services/fabric into all main business applications. For example, the Fusion CRM deployment is seamlessly integrated with Oracle Social Network (OSN), and all conversation around an opportunity or customer engagement is done in OSN (see the YouTube video).
2. We drive social best practices through a program called the "Social Networking & Business Collaboration (SNBC) program".

You worked both with WebCenter and SunSpace. Knowing what you know today, if you had the chance to choose between the two, which one would you choose? Why?

That's a tricky question. In the early days of the Social Enterprise implementation (we started SunSpace in 2006), we needed an agile and easy-to-deploy technology to keep up with user requirements. Sometimes we pushed two releases per day, and we were in perpetual beta mode - SunSpace was perfect for that. After the social implementation matured, community-generated content became business critical, and we saw the requirements shift from agility to stability, scalability and reliability of the infrastructure. WebCenter is the right choice for such an enterprise-level deployment.

You are a WebCenter Evangelist at Oracle. What do you do as part of that role?

Our role is to help position Oracle as one of the key thought leaders and solution providers for social business. In addition, we drive social innovation through our Oracle AppsLab team.

Is that a full-time role?

Yes.

How many other Evangelists are there in Oracle?
We are currently 5 people in the WebCenter evangelist team (@webcentervoices):
- Christian Finn (@cfinn) leads the team. Christian came from the Microsoft SharePoint product management team and is a recognized expert in social business and enterprise collaboration.
- Noël Jaffré (@noeljaffre) is our Web Experience Management (WEM) guru and came to Oracle via the FatWire acquisition (now WebCenter Sites).
- Jake Kuramoto (@theapplab) is part of the Oracle AppsLab innovation team. Jake is well known as the driving force behind http://theappslab.com, a blog around social and innovation.
- Noel Portugal (@noelportugal) is a developer in the Oracle AppsLab innovation team and the inventor of OraTweet, Oracle's internal tweeting platform.
- Peter Reiser (@peterreiser) is a social business guru and the inventor of SunSpace and Community Equity.

What area of the business do you and the rest of the Evangelists sit in? What area of the organisation is responsible for WebCenter?

We are part of the WebCenter product management organization.

Is WebCenter part of the Knowledge Management strategy?

Oracle WebCenter is Oracle's user engagement platform for social business. It brings together the most complete portfolio of portal, web experience management, content, social and collaboration technologies into a single product suite, and it is the product foundation of the Oracle Knowledge Management strategy.

I am aware Oracle also uses Beehive internally. How would you describe Beehive?

Oracle Beehive provides an integrated set of communication and collaboration services built on a single scalable, secure, enterprise-class platform. Beehive is used internally for enterprise-wide mail, calendar and real-time collaboration (web conferencing) services.

Are Beehive and WebCenter connected?

Historically, Beehive and WebCenter Portal & Content had some overlap in functionality. (Hey - if a company has an acquisition strategy to strengthen its product offering and accelerate innovation, it's pretty normal that functional overlap exists. :-)) A key objective of the WebCenter strategy is to combine all social and collaboration offerings under the WebCenter product family. That means certain Beehive components will be integrated into the overall WebCenter product offering.

Are there any other internal collaboration tools at Oracle? Which ones?

There are two other main social tools which are widely used at Oracle.
- Oracle Connect was the first social tool the Oracle AppsLab team created, in 2007 (see Jake's blog post for details). It is still extensively used. And as a former Sun guy I like this quote from the blog post: "Traffic to Connect peaked right after the Sun merger in 2010, when it served several hundred thousand pageviews each month; since then, traffic has subsided, but still averages tens of thousands of pageviews to several thousand users each month."
- OraTweet, Oracle's internal microblogging platform, has been in use since June 2008 and is still growing. It's entirely written in Oracle Application Express (APEX), a rapid web application development tool for the Oracle database. Wanna try it out? Here you can download the code.

What is Oracle's strategy regarding (all these) collaboration tools?

Pretty straightforward: the strategy is to seamlessly integrate the WebCenter social and collaboration services into all business applications, to help customers socialize their enterprise.

    Read the article

  • CodePlex Daily Summary for Friday, June 15, 2012

CodePlex Daily Summary for Friday, June 15, 2012

Popular Releases

- StackBuilder: StackBuilder 1.0.8.0: Corrected a few bugs in the batch processor; corrected a bug in the layer thumbnail generator.
- AutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: New feature added that allows the user to select a remind-later interval.
- SharePoint KnowledgeBase (ISC 2012): ISC.KBase.Advanced: A full-trust version of the Base application. This version uses a Managed Metadata column for categories and contains other enhancements.
- Microsoft SQL Server Product Samples: Database: AdventureWorks 2008 OLTP Script: Install the AdventureWorks2008 OLTP database from script. The AdventureWorks database can be created by running the instawdb.sql DDL script contained in the AdventureWorks 2008 OLTP Script.zip file. The instawdb.sql script depends on two path environment variables: SqlSamplesDatabasePath and SqlSamplesSourceDataPath. The SqlSamplesDatabasePath environment variable is set to the default Microsoft SQL Server 2008 path. You will need to change the SqlSamplesSourceDataPath environment variable to th...
- HigLabo: HigLabo_20120613: Bug fix: HigLabo.Mail — decode headers encoded with CP1252.
- WipeTouch, a jQuery touch plugin: 1.2.0: Changes since 1.1.0: New: wipeMove event, triggered while moving the mouse/finger. New: added "source" to the result object. Bug fix: sometimes vertical wipe events would not trigger correctly. Bug fix: improved tapToClick handler. General code refactoring. Windows Phone 7 is not supported, yet! Its behaviour is completely broken and would require some special tricks to make it work. Maybe in the future...
- Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3026 (June 2012): Fixes: round(0.0); local TimeZone name; TimeZone search; compiling multi-script-assemblies; PhpString serialization; DOMDocument::loadHTMLFile(); token_get_all(); parse_url().
- BlackJumboDog: Ver5.6.4: 2012.06.13.
- Yahoo! UI Library: YUI Compressor for .Net: Version 2.0.0.0 - Ferret: Merging both 3.5 and 2.0 codebases to a single .NET 2.0 assembly; MSBuild Task; NAnt Task.
- ExcelFileEditor: .CS File: nothing.
- BizTalk Scheduled Task Adapter: Release 4.0: Works with BizTalk Server 2010. Compiled in .NET Framework 4.0. This new version contains small improvements compared to the current version (3.0). We can highlight the following improvements or changes: 24-hour support in the "start time" property (previous versions had an issue with setting the start time, as it showed a 12-hour clock but no AM/PM); daily scheduler review; solved a small bug on Daily Properties (unable to switch between "Every day" and "on these days"); installation e...
- Weapsy - ASP.NET MVC CMS: 1.0.0 RC: Upgrade to Entity Framework 4.3.1; added AutoMapper custom version (by the nopCommerce team); added missed model properties and localization resources of Plugin Definitions; minor changes; fixed some bugs.
- WebSocket4Net: WebSocket4Net 0.7: Changes included in this release: updated ClientEngine; added proper exception handling code; added state support for callback; added property AllowUnstrustedCertificate for JsonWebSocket; added properties for sending ping automatically; improved JsBridge; fixed a URI compatibility issue.
- Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and publication reviews and ratings; store language packs in the database; improve reporting system; improve import/export system; a lot of WebAdmin app UI improvements; initial implementation of the WebForum app; DB indexes; improve and simplify architecture; less abstractions; modernize architecture; improve, simplify and unify API; simplify and improve testing; a lot of new unit tests; codebase refactoring and ReSharpering; utilize Castle Windsor; utilize NHibernate ORM...
- Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle the IE extension to the CSS3 grammar that allows multiple parameters to functional pseudo-class selectors. Add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output: the default, "new", puts them on their own new line; "same" outputs them at the end of the previous line. Add new optional values to the -inline switch, -inline:(force|noforce), which can be combined with the existing boolean value via comma separators; the value "force" (which...
- Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional downloads: SMFv2.7 Full Installer (MSI) - installs everything you need to develop your own SMF player application, including the IIS Smooth Streaming Client (it only includes the assemblies; if you want the source code please follow the link above). Smooth Streaming Sample Player - a pre-built player that includes support for IIS Smooth Streaming; you can configure the player to play back your content by simply editing a configuration file - no need to co...
- Liberty: v3.2.1.0 Release 10th June 2012: Change log: Added - Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. Added - Reach mass biped max health and shield changer. Fixed - H3/ODST: fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2). Fixed - Reach: made some tag names clearer and more consistent between m...
- Media Companion: Media Companion 3.503b: It has been a while, so it's about time we released another build! Major effort has gone into fixing trailer downloads, plus a little bit of work on the episode guide tag in TV show NFOs.
- Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview. Fix - Fixed Metro build error caused by an anonymous type. Fix - Fixed ItemConverter not being used when serializing dictionaries. Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries. Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandling.
- SP Sticky Notes: SPSticky Solution Package: Install steps: install the sandbox solution to the site collection; activate the solution; add a SPSticky web part (from the 'Custom' group) to a page that resides on the same web as the StickyList list. Initial version; supported functionality: moving a sticky around (position is saved in the list); resizing of the sticky (size is not saved); changing the text of the sticky (saved in the list); saving occurs when you click away from the sticky being edited; stickies are filtered based on the curre...

New Projects

- Advanced Utility Libs: This project provides many basic and advanced utility libs for developers.
- ATopSearchEngine: ATopSearchEngine, a class project.
- Audio Player 3D: A WPF 3D audio player.
- beginABC: this is a test project :)
- BookmarkSave For VB6: An add-in, written in VB.NET, for VB6 that preserves bookmarks and breakpoints between runs of the IDE.
- Centos 5 & 6 Managements Packs For System Center Operations Manager 2012
- ChildProcesses - Child Process Management for .NET: ChildProcesses.NET is a child process management library for the .NET Framework. It allows you to create child processes and provides bidirectional, extendable interprocess communication based on WCF and named pipes out of the box. Child and parent processes monitor each other and notify about termination or other events. Child processes are an alternative to AppDomains when the parent must continue if the child crashes. It is also possible to mix 32-bit and 64-bit processes.
- Cookie Free Analytics: Cookie Free Analytics (CFA) is a free server-side Google Analytics tracking solution for Windows-based websites running IIS & ASP.NET. 100% JavaScript & cookie free - with CFA you can track your visitors & file downloads in Google Analytics without cookies or JavaScript. Ideal for mobile websites or those affected by the EU cookie law.
- F# (FSharp) based Windows Phone 7.5 software to answer: Where am I?: Location Finder - F# (FSharp) based Windows Phone 7.5 software to answer the simple question: Where am I?
- Henyo Word: The summary is required.
- Houma-Thibodaux .NET User Group: The CodePlex home of the Houma-Thibodaux .NET user group.
- JqGridHelper: Some code for using jqGrid: 1. a JavaScript function for custom multi-search; 2. a C# function to parse the query string and build a SQL query command; 3. a SQL procedure template for searching.
- NetFluid.IRC: Mibbit clone in NetFluid with the possibility to use permanent eggdrops in C#.
- nGo: nGo framework.
- Ninja Client-Server Game: Naruto-based online client-server game.
- NoLinEq: NoLinEq is a mathematical tool for solving nonlinear equations using different methods.
- Oculus: Oculus is a real-time ETW listening service that is plugin-based.
- Pinoy Henyo: Response.Redirect (https://henyoword.codeplex.com)
- Razorblade: Razorblade is intended to be a template for web design/web development. It is based on HTML5Boilerplate and uses ASP.NET with the Razor syntax (C#). Original HTML5Boilerplate comments are intact, with some added comments to explain some of the Razor syntax.
- ServiceFramework: The ServiceFramework project is a framework built using Microsoft WCF to address architectural governance issues by tracking the activity of services.
- Sharp Console: Sharp Console is a Windows Command Line (WCL) alternative written in C#. It targets those who lack access to the WCL or simply wish to use the .NET Framework instead. It aims to provide the same (and more!) functionality as the WCL. Contribute anything you feel will make it better!
- SharpKit.on{X}: This project allows you to write on{x} rules using C#.
- shequ01: shequ01 website.
- Silena: Porta lectus ac sed scelerisque, pellentesque, cras a et urna enim ultricies, tristique magnis nec proin...
- Silverlight 5 Chat: Full-duplex Publish/Subscribe pattern on TCP/IP: F# (FSharp) WCF "PollingDuplex" web service with an SL5 F# client.
- Simple CQRS implemented with F#: Simple CQRS in F# (F-Sharp) 3.0.
- SOD: sustainable office designer.
- Solvis Control II Viewer: SolvisSC2Viewer can read the log data of the Solvis heating controller SolvisControl 2 and display it graphically.
- testdd061412012tfs01: bv
- testddgit061420123: eswr
- testddgit061420124: fg
- testddhg061420121: sd
- testgit061420121: xc
- testhg06142012hg3: dfs
- Turismo Digital
- Umbraco 5 File Picker
- WeatherFrame: Windows Phone weather app.

    Read the article

  • CodePlex Daily Summary for Wednesday, June 20, 2012

CodePlex Daily Summary for Wednesday, June 20, 2012

Popular Releases

- Apex: Apex 1.4: Apex 1.4 provides a framework for rapid MVVM development. Download Apex 1.4 to get the core binaries, Visual Studio extensions, project templates, samples and documentation. The 1.4 release provides a vast number of enhancements via the Apex Broker. The Apex Broker is an object that can be used to retrieve models, get the view for a view model and more, much like an IoC container. The new Zune-style application templates for WPF and Silverlight give a great starting point for makin...
- Auto Proxy Configuration: V 1.0: This tool consists of a Windows application that allows you to define a proxy server for each DNS domain, and a Windows service that matches the DNS domain against the database and sets the proxy accordingly.
- 51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.6.11: One-click install from NuGet. Changes to version 2.1.6.11: 1. Altered the GetIsCrawler method in MobileCapabilities.cs to return null if no value for IsCrawler is provided; previously the method returned false, breaking values provided via the BrowserCap file. 2. Enhanced Xml/Reader.cs to reduce the number of byte objects created when decompressing zipped XML files. 3. Altered the Location.cs GetUrl method to remove replacement string tags {0} from the new URL if no values were found to replace them...
- NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.3 - VS2010 + VS2012: This is a small maintenance release to support the new VS2012 as well as VS2010. This release also fixes the issue "The 'Comment Selection' includes the first line after the selection". If the new NShader version doesn't highlight your shader, you can try to: remove the registry entry HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\FontAndColors\Cache; remove all lines using "fx" or "hlsl" in the file C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Micr...
- asp.net membership: v1.0 Membership Management: Abmemunit is a membership management project which developers can use for learning purposes around various issues related to ASP.NET, MVC, ASP.NET Membership, Entity Framework, Code First, Ajax, unit testing, IoC containers, repositories, services etc. Though it is a very simple project, and all of these topics are used in an easy manner gathered from various big projects, it can be easily understood. User End Functional Specification: the functionalities of this project are very simple and straight...
- JSON Toolkit: JSON Toolkit 4.0: Up to 2.5x performance improvement in stringify operations; up to 1.7x performance improvement in parse operations; improved error messages when parsing invalid JSON strings; extended support to .NET 2.0, .NET 3.5, .NET 4.0, Silverlight 4, Windows Phone, Windows 8 Metro apps and Xbox; JSON namespace changed to the ComputerBeacon.Json namespace.
- Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0: System requirements — OS: Windows 7, Windows Vista, Windows Server 2008, Windows Server 2008 R2. Web server: Internet Information Services 7.0 or above. .NET Framework: .NET Framework 4.0 with the WCF Activation feature (HTTP Activation, Non-HTTP Activation for net.pipe/net.tcp WCF bindings). ASP.NET MVC: ASP.NET MVC 3.0. Database: Microsoft SQL Server 2005, 2008 or 2008 R2. Additional deployment configuration: started Windows Process Activation service; start...
- ASP.NET REST Services Framework: Release 1.3 - Standard version: The REST-services Framework v1.3 has important functional changes allowing the use of complex data types as service call parameters. These can be mapped to form or query string variables or the HTTP message body, which is especially useful when REST-style service URLs with the POST or PUT HTTP method are used. Beginning from v1.1, the REST-services Framework is compatible with the ASP.NET routing model as well as the CRUD (Create, Read, Update, and Delete) principle. These two are often important when buildin...
- NanoMVVM: a lightweight wpf MVVM framework: v0.10 stable beta: v0.10: minor fixes to UI and code, added an error example to async commands, separated the project into various releases (mainly into logical wholes), removed Expression Blend satellite assemblies.
- CrashReporter.NET : Exception reporting library for C# and VB.NET: CrashReporter.NET 1.1: Added screenshot support that takes a screenshot of the user's desktop on application crash and provides an option to include the screenshot with the crash report. Added Windows version to crash reports. Added email field and exception type field to the crash report dialog. Added exception type to crash reports. Added a screenshot tab that shows the crash screenshot.
- MFCMAPI: June 2012 Release: Build: 15.0.0.1034. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64-bit builds will only work on a machine with Outlook 2010 64-bit installed; all other machines should use the 32-bit builds, regardless of the operating system. Facebook badge.
- MonoGame - Write Once, Play Everywhere: MonoGame 2.5.1: The MonoGame team are pleased to announce that MonoGame v2.5.1 has been released. This release contains important bug fixes and minor updates. Recent additions include project templates for iOS and MacOS. The MonoDevelop.MonoGame add-in also works on Linux. We have removed the dependency on the third-party GamePad library to allow MonoGame to be included in the Debian/Ubuntu repositories. There has been a major bug fix to ensure textures are disposed of correctly, as well as some...
- Azure Storage Explorer: Azure Storage Explorer 5 Preview 1 (6.17.2012): Azure Storage Explorer version 5 is in development, and Preview 1 provides an early look at the new user interface and some of the new features. Here's what's new in v5 Preview 1: new UI, similar to the new Windows Azure HTML5 portal; support for configuring and viewing storage account logging; support for configuring and viewing storage account monitoring; uses the Windows Azure 1.7 SDK libraries; bug fixes.
- Codename 'Chrometro': Developer Preview: Welcome to the Codename 'Chrometro' Developer Preview! This is the very first public preview of the app. Please note that this is a highly primitive build and the app is not even half of what it is meant to be. The Developer Preview sports the following: 1) an easy-to-use application setup; 2) the Assistant, which simplifies your task of customization; 3) the partially complete Metro UI; 4) a variety of settings; 5) a partially complete web browsing experience. To get started, download the Ins...
- Cosmos (C# Open Source Managed Operating System): Release 92560: Prerequisites: Visual Studio 2010 - any version, including Express (Express users must also install the Visual Studio 2010 Integrated Shell runtime). VMWare - Cosmos can run on real hardware as well as other virtualization environments, but our default debug setup is configured for VMWare: VMWare Player (free) or Workstation. VMWare VIX API 1.11.
- AutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: New feature added that allows the user to select a remind-later interval.
- Microsoft SQL Server Product Samples: Database: AdventureWorks 2008 OLTP Script: Install the AdventureWorks2008 OLTP database from script. The AdventureWorks database can be created by running the instawdb.sql DDL script contained in the AdventureWorks 2008 OLTP Script.zip file. The instawdb.sql script depends on two path environment variables: SqlSamplesDatabasePath and SqlSamplesSourceDataPath. The SqlSamplesDatabasePath environment variable is set to the default Microsoft SQL Server 2008 path. You will need to change the SqlSamplesSourceDataPath environment variable to th...
- WipeTouch, a jQuery touch plugin: 1.2.0: Changes since 1.1.0: New: wipeMove event, triggered while moving the mouse/finger. New: added "source" to the result object. Bug fix: sometimes vertical wipe events would not trigger correctly. Bug fix: improved tapToClick handler. General code refactoring. Windows Phone 7 is not supported, yet! Its behaviour is completely broken and would require some special tricks to make it work. Maybe in the future...
- Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3026 (June 2012): Fixes: round(0.0); local TimeZone name; TimeZone search; compiling multi-script-assemblies; PhpString serialization; DOMDocument::loadHTMLFile(); token_get_all(); parse_url().

New Projects

- AMPlebrot: Sample code for Microsoft AMP, including an auto-zooming Mandelbrot set browser and various random number generator implementations.
- Aquasoft ISZR web services Proxy: A library for connecting to the Information System of Basic Registers (ISZR).
- Article Authoring Add-in for Word (NLM JATS): Use Microsoft Word to create, edit, save, and upload journal articles in the NLM Journal Article Tag Suite (JATS) DTD.
- Better Calculator: Scientific calculator with support for user variables and full expression evaluation.
- Cosmos Image Converter (CIC) GUI: CIC GUI is a program developed for COSMOS (http://cosmos.codeplex.com/) that processes an image and creates code compatible with COSMOS for you.
- CSNN
- Cuddy Chat Server Client: Small chat server & client, based on C# with sockets.
- demoASP: asp.net mvc
- Espera: Espera is a portable music player, specialized for parties. It is written in C# with WPF as the frontend technology.
- ExamEvaluator: Utility to create Excel spreadsheets that can be used to evaluate the results of an exam. (Dutch only for now)
- FTToolkit for WinRT: FTToolkit for WinRT is the Windows RT version of FTToolkit. The framework is designed for use in Metro-style apps on Windows 8.
- HydroServer Lite: HydroServer Lite is a lightweight version of the CUAHSI HydroServer written in PHP. It can be run on any web hosting service that supports PHP and MySQL.
- ImmunityBusterWP7: We will let you know about this soon...
- JavaVM for small microcontrollers: uJavaVM is a Java virtual machine for small microcontrollers written in portable C. The project is organized to support multiple microcontroller platforms.
- Jaw-Breaker: Hello,
- JIMS: Jangids Inventory Management System.
- Lee's Simple HTA Template: A simple HTA that can be used in various scripts and OSD. Very minimal, intended to be used as a starting template.
- Log4netExtensions: log4net extensions for non-static classes.
- Medical information Management System: The Medical Information Management System (MIMS) is a comprehensive solution for the various stakeholders in the medical industry in Nigeria.
- Minimap XNA game component for TemporalWars Indie Game Engine: This minimap XNA component is designed to show unit movement and structures placed in the game world, and to take direct orders (Windows platform).
- MusicCreator: This is a program that makes sounds. Just open a sound and add it into the program, then press the record button and the sound will be recorded.
- Osnova CMS: Osnova CMS, content management framework.
- PicBin: PicBin is like PasteBin, only for pics. The project uses .NET 4.5, ASP.NET MVC 4 and HTML5.
- PowerShell scripts to enable and disable tracing for Microsoft Dynamics CRM 2011: These are PowerShell scripts (files). There are 2 scripts in the zip file "CRM2011EnableDisableTrace.zip".
- PrepareQuery for MVC: Central.Linq.Mvc, an extension of Central.Linq for ASP.NET MVC.
- Proggy: This will be a code-driven CMS for .NET developers. If I ever get the time to finish it.
- QuickSL
- Temporal-Wars XNA Indie Game Engine: Temporal Wars 3D Engine includes a full suite of WYSIWYG tools designed for rapid creation of your game world. Created for myself (Ben), now available for FREE.
- VIPER: VIPER Runtime.
- Weibo API library: Sina Weibo API library is NOT based on OAuth, but it is much more powerful for operating Weibo activities via a program. Major usage: 1. Robot 2. Archive content.

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever.

When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data-fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire.

You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure.

The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism.

So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running locally or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle cache insertion, fetching and removal.
Mine looks like this:

public interface ICacheProvider
{
    void Add(string key, object item, int duration);
    T Get<T>(string key) where T : class;
    void Remove(string key);
}

Now we’ll create two implementations of this interface… one for the Azure cache, one for HttpRuntime:

public class AzureCacheProvider : ICacheProvider
{
    public AzureCacheProvider()
    {
        _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
    }

    private readonly DataCache _cache;

    public void Add(string key, object item, int duration)
    {
        _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
    }

    public T Get<T>(string key) where T : class
    {
        return _cache.Get(key) as T;
    }

    public void Remove(string key)
    {
        _cache.Remove(key);
    }
}

public class LocalCacheProvider : ICacheProvider
{
    public LocalCacheProvider()
    {
        _cache = HttpRuntime.Cache;
    }

    private readonly System.Web.Caching.Cache _cache;

    public void Add(string key, object item, int duration)
    {
        _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration), System.Web.Caching.Cache.NoSlidingExpiration);
    }

    public T Get<T>(string key) where T : class
    {
        return _cache[key] as T;
    }

    public void Remove(string key)
    {
        _cache.Remove(key);
    }
}

Feel free to expand these to use whatever cache features you want. I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls it a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention-based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

ObjectFactory.Initialize(x =>
{
    x.Scan(scan =>
    {
        scan.AssembliesFromApplicationBaseDirectory();
        scan.WithDefaultConventions();
    });
    if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
        x.For<ICacheProvider>().Use<AzureCacheProvider>();
    else
        x.For<ICacheProvider>().Use<LocalCacheProvider>();
});

If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider; otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository.
Let’s say your repository class looks like this:

public class MyRepo : IMyRepo
{
    public MyRepo(ICacheProvider cacheProvider)
    {
        _context = new MyDataContext();
        _cache = cacheProvider;
    }

    private readonly MyDataContext _context;
    private readonly ICacheProvider _cache;

    public SomeType Get(int someTypeID)
    {
        var key = "somename-" + someTypeID;
        var cachedObject = _cache.Get<SomeType>(key);
        if (cachedObject != null)
        {
            _context.SomeTypes.Attach(cachedObject);
            return cachedObject;
        }
        var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
        _cache.Add(key, someType, 60000);
        return someType;
    }
    ... // more stuff to update, delete or whatever, being sure to remove
        // from cache when you do so
}

When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is: Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework) and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-” plus the ID of the object, and then remove it from cache in any method that alters or deletes the object. That way, no matter which instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database.

So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution between the two caching providers, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet-packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. How you mix components like that, and how you configure them, is something a lot of people debate about. I kinda hate XML config files and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.
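One way to account for those differences in unit tests, since everything goes through ICacheProvider, is to swap in a trivial in-memory fake. The class below is a minimal sketch under the assumptions of this post (the ICacheProvider interface above, plus System.Collections.Generic); the dictionary-backed fake is hypothetical and deliberately ignores expiration, since a unit test usually doesn't care about it.

// Hypothetical test double for ICacheProvider, backed by a plain dictionary.
public class FakeCacheProvider : ICacheProvider
{
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

    public void Add(string key, object item, int duration)
    {
        _items[key] = item; // duration intentionally ignored in this fake
    }

    public T Get<T>(string key) where T : class
    {
        object value;
        return _items.TryGetValue(key, out value) ? value as T : null;
    }

    public void Remove(string key)
    {
        _items.Remove(key);
    }
}

A test can then construct MyRepo with the fake, call Get twice, and assert that the second call was served from cache, without ever touching Azure, HttpRuntime, or a database.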

    Read the article

  • Toorcon14

    - by danx
Toorcon 2012 Information Security Conference, San Diego, CA, http://www.toorcon.org/
Dan Anderson, October 2012

It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California.

The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks below, the ones I attended and that interested me.

- MalAndroid—the Crux of Android Infections, Aditya K. Sood
- Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
- Privacy at the Handset: New FCC Rules?, "Valkyrie"
- Hacking Measured Boot and UEFI, Dan Griffin
- You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
- What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
- Accessibility and Security, Anna Shubina
- Stop Patching, for Stronger PCI Compliance, Adam Brand
- McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

MalAndroid—the Crux of Android Infections
Aditya K. Sood, IOActive, Michigan State PhD candidate

Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit).

Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (Bouncer) and a vulnerability checker.

What security problems does Android have?
- User-centric security, which depends on the user to grant permissions and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility.
- Android had no "proper" encryption before Android 3.0.
- No built-in protection against social engineering and web tricks.
- Alternative Android app markets are unsafe. Simply visiting some markets can infect Android.

Aditya classified Android malware types as:
- Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
- Type K—Kernel. Exploits underlying Linux libraries or the kernel.
- Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions).

Android malware is frequently "masqueraded"—that is, repackaged inside a legit app. To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons—and then delete themselves. For example, the DKF Bootkit (China).

Android app problems:
- no code signing! All apps are self-signed
- native code execution
- the permission sandbox is all or nothing
- alternate marketplaces
- no robust Android malware detection at the network level
- delayed patch process

Programming Weird Machines with ELF Metadata
Rebecca "bx" Shapiro, Dartmouth College, NH
https://github.com/bx/elf-bf-tools
@bxsays on twitter

Definitions: "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

Bx then talked about using ELF metadata as an (unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted to execute viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:
- instructions: add, move, jump if not 0 (jnz)
- think of symbol table entries as "registers"; a symbol table value is its "contents"
- immediate values are constants; direct values are addresses (e.g., 0xdeadbeef)
- a move instruction is a relocation table entry
- an add instruction is a relocation table "addend" entry
- a jnz instruction takes multiple relocation table entries

The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions—pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print—and three registers.

Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so ping remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

$ whoami
bx
$ ping localhost -t backdoor.sh # executes backdoor
$ whoami
root
$

The modified code increased from 285948 bytes to 290209 bytes.
A BF tool compiles the "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

Privacy at the Handset: New FCC Rules?
"Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries!

Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated—but Verizon recommends, as an aggregation workaround, "recollating" the data against other databases to identify customers indirectly.

The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not the FCC's). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result, iPhones may be more regulated.

Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant here.

What to do? Carriers must be accountable. Allow opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding carriers liable and responsible for breaches on their watch. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (still a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House on board. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit.

Hacking Measured Boot and UEFI
Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

Dan talked about hacking measured UEFI boot. First some terms:
- UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting).
- UEFI protects devices against rootkits.
- TPM: a hardware security device that stores hashes and hardware-protected keys.
- "Secure boot" can control at the firmware level which boot images can boot.
- "Measured boot" is an OS feature that tracks hashes (from the BIOS, boot loader, kernel, and early drivers).
- "Remote attestation" allows remote validation and control based on policy on a remote attestation server.

Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source implementation for UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines. UEFI Weaknesses.
UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
- it assumes the user is an ally
- it trusts the TPM implicitly, and that it's attached to the computer
- the hibernate file is unprotected (disk encryption protects against this)
- protection is migrating from hardware to firmware
- delays in patching and whitelist updates
- will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)

You Can't Buy Security: Building the Open Source InfoSec Program
Boris Sverdlik, ISDPodcast.com co-host

Boris talked about problems typical of current security audits. "IT security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—you need soft skills.

Step 1: Research. First, Google and learn the company end-to-end before you start. Get to know the management team (not the IT team); meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., AV*EF=SLE). Learn the different business units and the legal/regulatory obligations; learn the business and where the money is made; verify the company is protected from script kiddies (easy); learn what information is sensitive (IP, internal use only); and start with low-hanging fruit (customer service reps and social engineering).

Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

Step 3: Implementation. Keep it simple, stupid. Open source, although useful, is not free (there's implementation cost).
- Access controls, with authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.
- Application security. Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.
- Network security. VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."
- Operational controls. Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.
- Security awareness. The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
- Pen test by crowdsourcing—test with logging.
- COSSP, http://www.cossp.org/ — the Comprehensive Open Source Security Project.

What Journalists Want: The Investigative Reporters' Perspective on Hacking
Dave Maas, San Diego CityBeat
Jason Leopold, Truthout.org

The difference between hackers and investigative journalists: for hackers, the motivation varies, but the method is the same—technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

J-School in 60 Seconds. Generic formula: a person or issue of public interest, new info, or a new angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

Media awareness of hackers and trends: journalists are becoming extremely aware of hackers through congressional debates (privacy, data breaches), the demand for data-mining journalists, the use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

Info gathering by investigative journalists includes:
- Public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act (CPRA) is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often you need to sue (especially the FBI). CPRA is faster, and requests can be vague.
- Dumps and leaks (a la Wikileaks).

Journalists want leads, protection for themselves and their sources, and tools adapted for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

Accessibility and Security, or: How I Learned to Stop Worrying and Love the Halting Problem
Anna Shubina, Dartmouth College

Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purposes, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML.

Accessibility is often the first, and maybe the only, sanity check with parsing. They have no choice, because someone may want to read what you write. Google, for example, is very particular about which web browser you use and is bad at supporting other browsers; it uses JavaScript instead of links, often requiring mouseover to display content.

PDF is a security nightmare: an executable format—embedded Flash, JavaScript, etc.—15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse, and the problem cannot be solved in general due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust." Not much help, though, except for "Robust," but here are some gems:
- all information should be parsable (paraphrasing)
- if it's not parsable, it cannot be converted to alternate formats
- maximize compatibility in new document formats

Executable webpages are bad for security and accessibility. They say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input.

Solutions:
- Accessibility and security problems come from the same source.
- Expose the "better user experience" myth.
- Keep your corner of the Internet parsable.
- Remember the "Halting Problem"—recognize false solutions (checking and verifying tools).

Stop Patching, for Stronger PCI Compliance
Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

Patching myths:
- Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
- Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead.
- Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities.

Adam says good recommendations come from NIST 800-40. Use sane patching instead, and focus on what's really important. From NIST 800-40:
- Proactive: use a proactive vulnerability management process: use change control and configuration management; monitor file integrity.
- Monitor: start with the NVD and other vulnerability alerts, not scanner results.
- Evaluate: public-facing system? workstation? internal server? (risk rank)
- Decide: on action and timeline.
- Test: pre-test patches (stability, functionality, rollback) for change control.
- Install: notify, change control, tickets.

McAfee Secure & Trustmarks — a Hacker's Best Friend
Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

"McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes McAfee's remote scanning. The problem is that removal of the trustmark acts as a flag that you're vulnerable. It's easy to spot status changes by watching the McAfee list on its website or on Google. "Secure TrustGuard" is similar to McAfee's offering. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, the site is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in trustmark status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag saying whether you're vulnerable.
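The Perl scripts themselves weren't published in these notes; as a rough sketch of the delta-scanning idea they describe (my own illustration, with a made-up seal URL), the check can be as small as fingerprinting the badge image once a day:

import hashlib
import urllib.request

SEAL_URL = "https://seals.example.com/site/12345.png"  # hypothetical URL

def seal_fingerprint(url):
    # Fetch the seal image and return (sha256 digest, size in bytes).
    data = urllib.request.urlopen(url).read()
    return hashlib.sha256(data).hexdigest(), len(data)

if __name__ == "__main__":
    digest, size = seal_fingerprint(SEAL_URL)
    # Compare with yesterday's stored values: a sudden change (e.g. the
    # badge replaced by a tiny placeholder image) is the status-change
    # signal the talk describes.
    print(digest, size)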

    Read the article

  • Rounded Corners and Shadows – Dialogs with CSS

    - by Rick Strahl
Well, it looks like we've finally arrived at a place where at least all of the latest versions of mainstream browsers support rounded corners and box shadows. The two CSS properties that make this possible are box-shadow and border-radius. Both of these CSS properties are now supported in all the major browsers, as shown in this chart from QuirksMode. In its simplest form you can use box-shadow and border-radius like this:

.boxshadow {
    -moz-box-shadow: 3px 3px 5px #535353;
    -webkit-box-shadow: 3px 3px 5px #535353;
    box-shadow: 3px 3px 5px #535353;
}
.roundbox {
    -moz-border-radius: 6px 6px 6px 6px;
    -webkit-border-radius: 6px;
    border-radius: 6px 6px 6px 6px;
}

box-shadow: horizontal-shadow-pixels vertical-shadow-pixels blur-distance shadow-color

The box-shadow attributes specify the horizontal and vertical offset of the shadow, the blur distance (to give the shadow a smooth, soft look) and a shadow color. The spec also supports multiple shadows separated by commas using the attributes above, but we're not using that functionality here.

border-radius: top-left-radius top-right-radius bottom-right-radius bottom-left-radius

border-radius takes a pixel size for the radius of each corner, going clockwise. CSS 3 also specifies each of the individual corner properties, such as border-top-left-radius, but support for these is much less prevalent, so I would recommend not using them for now until support improves. Instead use the single border-radius to specify all corners.

Browser specific Support in older Browsers

Notice that there are two variations: the actual CSS 3 properties (box-shadow and border-radius) and the browser-specific ones (the -moz and -webkit prefixes for Firefox and Chrome/Safari respectively), which work in slightly older versions of modern browsers before official CSS 3 support was added. The goal is to spread support as widely as possible, and the prefixed versions extend the range slightly further to those browsers that provided early support for these features. Notice that box-shadow and border-radius are used after the browser-specific versions, to ensure that the standard versions get precedence if the browser supports both (last assignment wins).

Use the .boxshadow and .roundbox Styles in HTML

To use these two styles to create a simple rounded box with a shadow, you can use HTML like this:

<!-- Simple Box with rounded corners and shadow -->
<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">
    <div class="boxcontenttext">
        Simple Rounded Corner Box.
    </div>
</div>

which looks like this in the browser: This works across browsers and it's pretty sweet and simple.

Watch out for nested Elements!

There are a couple of things to be aware of, however, when using rounded corners. Specifically, you need to be careful when you nest other non-transparent content into the rounded box. For example, check out what happens when I change the inside <div> to have a colored background:

<!-- Simple Box with rounded corners and shadow -->
<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">
    <div class="boxcontenttext" style="background: khaki;">
        Simple Rounded Corner Box.
    </div>
</div>

which renders like this: If you look closely you'll find that the inside <div>'s corners are not rounded and so 'poke out' slightly over the rounded corners. It looks like the rounded corners are broken up instead of forming a solid rounded line around the corner, which is pretty ugly. The bigger the radius, the more drastic this effect becomes.
To fix this issue, the inner <div> also has to have rounded corners at the same or a slightly smaller radius than the outer <div>. The simple fix is to also apply the roundbox style to the inner <div>, in addition to the boxcontenttext style already applied:

<div class="boxcontenttext roundbox" style="background: khaki;">

The fixed display now looks proper.

Separate Top and Bottom Elements

This gets even a little more tricky if you have an element at only the top or bottom of the rounded box. What if you need to add something like a header or footer <div> that has a non-transparent background—a pretty common scenario? In those cases you want only the top or bottom corners rounded, not both. To make this work, a couple of additional styles that round only the top or bottom corners can be created:

.roundbox-top {
    -moz-border-radius: 4px 4px 0 0;
    -webkit-border-radius: 4px 4px 0 0;
    border-radius: 4px 4px 0 0;
}
.roundbox-bottom {
    -moz-border-radius: 0 0 4px 4px;
    -webkit-border-radius: 0 0 4px 4px;
    border-radius: 0 0 4px 4px;
}

Notice that the radius used for the 'inside' rounding is smaller (4px) than the outside radius (6px). This is so the inner radius fills into the outer border—if you use the same size you may have some white space showing between the inner and outer rounded corners. Experiment with values to see what works—in my experimenting, the behavior across browsers here is consistent (thankfully).

These styles can be applied in addition to other styles to make only the top or bottom portions of an element rounded. For example, imagine I have styles like this:

.gridheader, .gridheaderbig, .gridheaderleft, .gridheaderright {
    padding: 4px 4px 4px 4px;
    background: #003399 url(images/vertgradient.png) repeat-x;
    text-align: center;
    font-weight: bold;
    text-decoration: none;
    color: khaki;
}
.gridheaderleft { text-align: left; }
.gridheaderright { text-align: right; }
.gridheaderbig { font-size: 135%; }

If I just apply, say, gridheaderleft by itself in HTML like this:

<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">
    <div class="gridheaderleft">Box with a Header</div>
    <div class="boxcontenttext" style="background: khaki;">
        Simple Rounded Corner Box.
    </div>
</div>

this results in a pretty funky display—again due to the fact that the inner elements render square rather than rounded corners. If you look close again, you can see that both the header and the main content have square edges, which jumps out at the eye. To fix this you can now apply roundbox-top and roundbox-bottom to the header and content respectively:

<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">
    <div class="gridheaderleft roundbox-top">Box with a Header</div>
    <div class="boxcontenttext roundbox-bottom" style="background: khaki;">
        Simple Rounded Corner Box.
    </div>
</div>

which now gives the proper display, with rounded corners both on the top and bottom. All of this is sweet to have supported—at least by the newest browsers—without having to resort to images and nasty JavaScript solutions. While this is not yet a mainstream feature across the majority of actually installed browsers, the majority of browser users are very likely to have this support, as most browsers other than IE are actively pushing users to upgrade to newer versions. Since this is a 'visual display only' feature, it degrades reasonably well in non-supporting browsers: you get an uninteresting square and non-shadowed box, but the display is still overall functional.
The main sticking point—as always—is Internet Explorer versions 8.0 and down, as well as older versions of other browsers. With those browsers you get a functional view that is a little less interesting to look at, obviously—but at least it's still functional. Maybe that's just one more incentive for people using older browsers to upgrade to a more modern browser :-)

Creating Dialog Related Styles

In a lot of my AJAX-based applications I use pop-up windows that effectively work like dialogs. Using the simple CSS behaviors above, it's really easy to create some fairly nice-looking overlaid windows with nothing but CSS. Here's what a typical 'dialog' I use looks like: The beauty of this is that it's plain CSS—no plug-ins or images (other than the gradients, which are optional) required. Add jQuery UI draggable (or ww.jquery.js as shown below) and you have a nice, simple inline implementation of a dialog represented by a simple <div> tag. Here's the HTML for this dialog:

<div id="divDialog" class="dialog boxshadow" style="width: 450px;">
    <div class="dialog-header">
        <div class="closebox"></div>
        User Sign-in
    </div>
    <div class="dialog-content">
        <label>Username:</label>
        <input type="text" name="txtUsername" value=" " />
        <label>Password</label>
        <input type="text" name="txtPassword" value=" " />
        <hr />
        <input type="button" id="btnLogin" value="Login" />
    </div>
    <div class="dialog-statusbar">Ready</div>
</div>

Most of this behavior is driven by the 'dialog' styles, which are fairly basic and easy to understand. They do use a few support images for the gradients, which are included in the sample I've provided. Here's what the CSS looks like:

.dialog {
    background: White;
    overflow: hidden;
    border: solid 1px steelblue;
    -moz-border-radius: 6px 6px 4px 4px;
    -webkit-border-radius: 6px 6px 4px 4px;
    border-radius: 6px 6px 3px 3px;
}
.dialog-header {
    background-image: url(images/dialogheader.png);
    background-repeat: repeat-x;
    text-align: left;
    color: cornsilk;
    padding: 5px;
    padding-left: 10px;
    font-size: 1.02em;
    font-weight: bold;
    position: relative;
    -moz-border-radius: 4px 4px 0px 0px;
    -webkit-border-radius: 4px 4px 0px 0px;
    border-radius: 4px 4px 0px 0px;
}
.dialog-top {
    -moz-border-radius: 4px 4px 0px 0px;
    -webkit-border-radius: 4px 4px 0px 0px;
    border-radius: 4px 4px 0px 0px;
}
.dialog-bottom {
    -moz-border-radius: 0 0 3px 3px;
    -webkit-border-radius: 0 0 3px 3px;
    border-radius: 0 0 3px 3px;
}
.dialog-content { padding: 15px; }
.dialog-statusbar, .dialog-toolbar {
    background: #eeeeee;
    background-image: url(images/dialogstrip.png);
    background-repeat: repeat-x;
    padding: 5px;
    padding-left: 10px;
    border-top: solid 1px silver;
    border-bottom: solid 1px silver;
    font-size: 0.8em;
}
.dialog-statusbar {
    -moz-border-radius: 0 0 3px 3px;
    -webkit-border-radius: 0 0 3px 3px;
    border-radius: 0 0 3px 3px;
    padding-right: 10px;
}
.closebox {
    position: absolute;
    right: 2px;
    top: 2px;
    background-image: url(images/close.gif);
    background-repeat: no-repeat;
    width: 14px;
    height: 14px;
    cursor: pointer;
    opacity: 0.60;
    filter: alpha(opacity=60);
}
.closebox:hover {
    opacity: 1;
    filter: alpha(opacity=100);
}

The main style is the dialog class, which is the outer box. It has the rounded border that serves as the outline. Note that I didn't add the box-shadow to this style, because in some situations I just want the rounded box in an inline display that doesn't have a shadow, so the shadow is still applied separately. dialog-header then has the rounded top corners and displays a typical dialog heading format.
dialog-bottom and dialog-top then provide the same functionality as the roundbox-top and roundbox-bottom described earlier, but are provided in the stylesheet mainly for consistency—to match the dialog's rounded edges and to make them easier to remember and find in IntelliSense, as they show up in the same dialog- group. dialog-statusbar and dialog-toolbar are two elements I use a lot for floating windows—the toolbar typically serves for buttons, options and filters, while the status bar provides information specific to the floating window. Since the status bar is always on the bottom of the dialog, it automatically handles the rounding of the bottom corners.

Finally, there's the closebox style, which is to be applied to an empty <div> tag, typically in the header. What this does is render a close image that is by default low-lighted with a low opacity value, and then highlighted when hovered over. All you'd have to do to handle the close operation is handle the onclick of the <div>. Note that the <div> right-aligns, so typically you should specify it before any other content in the header.

Speaking of closable—some time ago I created a closable jQuery plug-in that basically automates this process and can be applied against ANY element in a page, automatically removing or closing the element with some simple script code. Using this you can leave out the <div> tag for closable and just do the following: to make the above dialog closable (and draggable), which effectively makes it an overlay window, you'd add jQuery.js and ww.jquery.js to the page:

<script type="text/javascript" src="../../scripts/jquery.min.js"></script>
<script type="text/javascript" src="../../scripts/ww.jquery.min.js"></script>

and then simply call:

<script type="text/javascript">
    $(document).ready(function () {
        $("#divDialog")
            .draggable({ handle: ".dialog-header" })
            .closable({
                handle: ".dialog-header",
                closeHandler: function () {
                    alert("Window about to be closed.");
                    return true; // true closes - false leaves open
                }
            });
    });
</script>

* ww.jquery.js emulates the base features of jQuery UI's draggable. If jQuery UI is loaded, its draggable version will be used instead. And voila—you now have a draggable and closable window, here in mid-drag. The dragging and closable behaviors are of course optional, but they're the final touch that provides dialog-like window behavior.

Relief for older Internet Explorer Versions with CSS Pie

If you want to get these features to work with older versions of Internet Explorer, all the way back to version 6, you can check out CSS Pie. CSS Pie provides an Internet Explorer behavior file that attaches to specific CSS rules and simulates these behaviors using script code in IE (mostly by implementing filters). You can simply add the behavior to each CSS style that uses box-shadow and border-radius like this:

.boxshadow {
    -moz-box-shadow: 3px 3px 5px #535353;
    -webkit-box-shadow: 3px 3px 5px #535353;
    box-shadow: 3px 3px 5px #535353;
    behavior: url(scripts/PIE.htc);
}
.roundbox {
    -moz-border-radius: 6px 6px 6px 6px;
    -webkit-border-radius: 6px;
    border-radius: 6px 6px 6px 6px;
    behavior: url(scripts/PIE.htc);
}

CSS Pie requires that the PIE.htc file live on your server and be referenced from each CSS style that needs it.
Note that the url() for IE behaviors is NOT CSS-file relative like other CSS resources, but rather PAGE relative, so if you have more than one folder you probably need to reference the HTC file with a fixed path like this:

behavior: url(/MyApp/scripts/PIE.htc);

in the style. A small price to pay, but a royal pain if you have a common CSS file you use in many applications. Once the PIE.htc file has been copied and you have applied the behavior to each style that uses these new features, Internet Explorer will render rounded corners and box shadows! Yay!

Hurray for box-shadow and border-radius

All of this functionality is very welcome natively in the browser. If you think this is all frivolous visual candy, you might be right :-), but if you take a look around the Web and search for rounded-corner solutions that predate these CSS attributes, you'll find a boatload of stuff, from image files, to custom-drawn content, to JavaScript solutions that play tricks with a few images. It's sooooo much easier to have this functionality built in, and I for one am glad to see that it's finally becoming standard in the box. Still, remember that when you use these new CSS features, they are not universal, and are not going to be really soon. Legacy browsers, especially old versions of Internet Explorer that can't be updated, will continue to be around and won't work with this shiny new stuff. I say screw 'em: let them get a decent recent browser or see a degraded and ugly UI. We have the luxury with this functionality in that it doesn't typically affect usability—it just doesn't look as nice.

Resources

Download the Sample: the sample includes the styles, images and sample page, as well as ww.jquery.js for the draggable/closable example.
Online Sample: check out the sample described in this post online.
Closable and Draggable Documentation: documentation for the closable and draggable plug-ins in ww.jquery.js. You can also check out the full documentation for all the plug-ins contained in ww.jquery.js here.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in HTML  CSS

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from it and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a particular O/R mapper framework.

Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and gets frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time), and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster, and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster at X, others at Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to accept and others aren't. That's why the O/R mapper frameworks on the market today differ in many ways, even though they all fetch and save entities from and to a database.

I'm not suggesting there's no room for improvement in today's O/R mapper frameworks—there always is—but it's no longer a matter of 'the slowness of the application is caused by the O/R mapper'. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

Performance tuning basics and rules

Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, will lead you down the path of premature optimization and won't actually solve your problems, only create new ones.

The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
If I look solely at the Linq query and the code consuming the 10-row resultset, and then at the time it takes to complete the whole procedure, it will appear slow to me: all that time taken to produce and consume 10 rows? But if you look closer—if you analyze and interpret the situation—you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

The second most important rule you have to understand is based on the old saying "penny wise, pound foolish": the part which takes, e.g., 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of that same total time T. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you optimize that part away completely. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to solving performance problems successfully: no analysis -> no problem found -> no solution.

One warning up front: hunting for performance will always involve making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably face the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven big-O characteristics, so you know the performance you'll get, plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

Isolate

The first thing you need to do is isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page takes several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The remaining steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

Analyze

Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, web service, Windows, etc.), a part which controls the interface and business logic, the O/R mapper part, and the RDBMS, all connected with either a network or the inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. loading a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating, and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

Start with the code you wrote which starts the task; analyze the code and track the path it follows through your application. What does the code do along the way? Verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet; we're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters; computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

.NET profiling

.NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code—and thus they reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny-wise-pound-foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, and it won't matter.

As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions which are considered slow in your application, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling

.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and at the software making it possible to consume the database in your application: the O/R mapper. To understand how much the various parts of the O/R mapper and database contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos, or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor ships one. For example, for SQL Server the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; there are also 3rd-party tools.

Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on, and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also include the time it took to transport data across the network. This is important, because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

Another tool to use in this case, which is more low-level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate both support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other behind-the-scenes activity as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

Interpret

After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take part, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e.
don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to look first at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself, or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If just 1 query was executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data.

Interpreting means that you follow the path from begin to end through the collected data and determine where along the path the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My earlier example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the analysis results were unexpected, and there's room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that, when someone else looks at the application in the future and starts asking questions, you can answer them properly—and new analysis is only necessary if the situation has changed.

Fix

After interpreting the analysis results, you've concluded that some areas need adjustment. This is the fix step: you actively correct the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying and re-consuming data. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, both .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
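To make the caching compromise concrete: a toy sketch of my own (not taken from any specific O/R mapper; the table and query are invented, and Python's standard library stands in for .NET) of trading memory for fewer database round trips:

import sqlite3
from functools import lru_cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 7), (2, 7), (3, 9)])

@lru_cache(maxsize=256)  # the compromise: results stay resident in memory
def fetch_order_ids(customer_id):
    # Stand-in for an O/R mapper fetch; repeated calls with the same
    # customer_id are answered from the cache instead of re-querying.
    return tuple(row[0] for row in conn.execute(
        "SELECT id FROM orders WHERE customer_id = ?", (customer_id,)))

print(fetch_order_ids(7))  # hits the database
print(fetch_order_ids(7))  # served from the cache, no query executed

The catch, as with any cache, is invalidation: once cached, a change in the underlying rows is invisible until the entry is evicted, which is exactly the kind of compromise the interpret/fix write-up should document.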
Performance tuning is about dealing with compromises and making choices: to use one feature over another, to accept a higher memory footprint, to leave the strict-OO path and execute queries directly against the RDBMS—these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers, or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or compromise: which analysis data and which interpretation led you to that choice. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers

Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

- SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each row. This means you'll get, for the single list, not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading, using a prefetch path to fetch the customers with the orders (a sketch of the lazy-versus-eager pattern follows at the end of this post). SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

- Prefetch paths using many path nodes, or sorting, or limiting (an eager-loading problem). Prefetch paths can help with performance, but as 1 query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client within the parent. This is fast, but it can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

- Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

- Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch server-side paging on in the datasource control used. In this case, paging on the grid alone is not enough: that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g.
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do the DISTINCT filtering on the client. This is a little slower, but it does perform the paging on the datareader, so it won't fetch all rows even if the query suggests it does.

- Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

- Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data consumption happens on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in-memory to calculate a value.

- Using .ToList() constructs inside Linq queries. It might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T>, not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

- Fetching all entities to delete into memory first. To delete a set of entities, it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly, to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity has actually been removed from the DB and thus can be removed from the cache.

- Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.

Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built.
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
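As promised in the SELECT N+1 item above, here is a small, self-contained sketch of the lazy-versus-eager pattern. LLBLGen Pro's prefetch paths are .NET-specific, so this illustration swaps in Python's SQLAlchemy instead; the Customer/Order mapping and the data are invented for the example:

from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, joinedload, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    company_name = Column(String)

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customer.id"))
    customer = relationship(Customer)  # lazy-loaded by default: the N+1 trap

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all(Customer(id=i, company_name="Co %d" % i) for i in range(1, 4))
session.add_all(Order(id=i, customer_id=i) for i in range(1, 4))
session.commit()
session.expire_all()

# SELECT N+1: 1 query for the orders + 1 lazy-load query per distinct customer.
for order in session.query(Order):
    print(order.customer.company_name)

session.expire_all()  # clear cached objects so the second loop queries again

# Eager loading (the "prefetch path" idea): one joined query up front,
# so accessing order.customer issues no further SQL.
for order in session.query(Order).options(joinedload(Order.customer)):
    print(order.customer.company_name)

The delta shows up immediately in an O/R mapper profiler, or simply by echoing the SQL (create_engine("sqlite://", echo=True)): four queries for the lazy loop versus one for the eager loop.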

    Read the article

  • Help getting frame rate (fps) up in Python + Pygame

    - by Jordan Magnuson
I am working on a little card-swapping world-travel game that I sort of envision as a cross between Bejeweled and the 10 Days geography board games. So far the coding has been going okay, but the frame rate is pretty bad... currently I'm getting low 20s on my Core 2 Duo. This is a problem since I'm creating the game for Intel's March developer competition, which is squarely aimed at netbooks packing underpowered Atom processors. Here's a screen from the game: (screenshot: www.necessarygames.com/my_games/betraveled/betraveled-fps.png)

I am very new to Python and Pygame (this is the first thing I've used them for), and am sadly lacking in formal CS training... which is to say that I think there are probably A LOT of bad practices going on in my code, and A LOT that could be optimized. If some of you older Python hands wouldn't mind taking a look at my code and seeing if you can't find any obvious areas for optimization, I would be extremely grateful. You can download the full source code here: http://www.necessarygames.com/my_games/betraveled/betraveled_src0328.zip Compiled exe here: www.necessarygames.com/my_games/betraveled/betraveled_src0328.zip

One thing I am concerned about is my event manager, which I feel may have some performance holes in it, and another thing is my rendering... I'm pretty much just blitting everything to the screen all the time (see the render routines in my game_components.py below); I recently found out that you should only update the areas of the screen that have changed, but I'm still foggy on how that's accomplished exactly... could this be a huge performance issue? Any thoughts are much appreciated! As usual, I'm happy to "tip" you for your time and energy via PayPal.

Jordan

Here are some bits of the source:

Main.py

#Remote imports
import pygame
from pygame.locals import *

#Local imports
import config
import rooms
from event_manager import *
from events import *

class RoomController(object):
    """Controls which room is currently active (eg Title Screen)"""
    def __init__(self, screen, ev_manager):
        self.room = None
        self.screen = screen
        self.ev_manager = ev_manager
        self.ev_manager.register_listener(self)
        self.room = self.set_room(config.room)

    def set_room(self, room_const):
        #Unregister old room from ev_manager
        if self.room:
            self.room.ev_manager.unregister_listener(self.room)
            self.room = None
        #Set new room based on const
        if room_const == config.TITLE_SCREEN:
            return rooms.TitleScreen(self.screen, self.ev_manager)
        elif room_const == config.GAME_MODE_ROOM:
            return rooms.GameModeRoom(self.screen, self.ev_manager)
        elif room_const == config.GAME_ROOM:
            return rooms.GameRoom(self.screen, self.ev_manager)
        elif room_const == config.HIGH_SCORES_ROOM:
            return rooms.HighScoresRoom(self.screen, self.ev_manager)

    def notify(self, event):
        if isinstance(event, ChangeRoomRequest):
            if event.game_mode:
                config.game_mode = event.game_mode
            self.room = self.set_room(event.new_room)

    def render(self, surface):
        self.room.render(surface)

#Run game
def main():
    pygame.init()
    screen = pygame.display.set_mode(config.screen_size)
    ev_manager = EventManager()
    spinner = CPUSpinnerController(ev_manager)
    room_controller = RoomController(screen, ev_manager)
    pygame_event_controller = PyGameEventController(ev_manager)
    spinner.run()

# this runs the main function if this script is called to run.
# If it is imported as a module, we don't run the main function.
if __name__ == "__main__": main() event_manager.py #Remote imports import pygame from pygame.locals import * #Local imports import config from events import * def debug( msg ): print "Debug Message: " + str(msg) class EventManager: #This object is responsible for coordinating most communication #between the Model, View, and Controller. def __init__(self): from weakref import WeakKeyDictionary self.listeners = WeakKeyDictionary() self.eventQueue= [] self.gui_app = None #---------------------------------------------------------------------- def register_listener(self, listener): self.listeners[listener] = 1 #---------------------------------------------------------------------- def unregister_listener(self, listener): if listener in self.listeners: del self.listeners[listener] #---------------------------------------------------------------------- def post(self, event): if isinstance(event, MouseButtonLeftEvent): debug(event.name) #NOTE: copying the list like this before iterating over it, EVERY tick, is highly inefficient, #but currently has to be done because of how new listeners are added to the queue while it is running #(eg when popping cards from a deck). Should be changed. See: http://dr0id.homepage.bluewin.ch/pygame_tutorial08.html #and search for "Watch the iteration" for listener in list(self.listeners): #NOTE: If the weakref has died, it will be #automatically removed, so we don't have #to worry about it. listener.notify(event) #------------------------------------------------------------------------------ class PyGameEventController: """...""" def __init__(self, ev_manager): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.input_freeze = False #---------------------------------------------------------------------- def notify(self, incoming_event): if isinstance(incoming_event, UserInputFreeze): self.input_freeze = True elif isinstance(incoming_event, UserInputUnFreeze): self.input_freeze = False elif isinstance(incoming_event, TickEvent): #Share some time with other processes, so we don't hog the cpu pygame.time.wait(5) #Handle Pygame Events for event in pygame.event.get(): #If this event manager has an associated PGU GUI app, notify it of the event if self.ev_manager.gui_app: self.ev_manager.gui_app.event(event) #Standard event handling for everything else ev = None if event.type == QUIT: ev = QuitEvent() elif event.type == pygame.MOUSEBUTTONDOWN and not self.input_freeze: if event.button == 1: #Button 1 pos = pygame.mouse.get_pos() ev = MouseButtonLeftEvent(pos) elif event.type == pygame.MOUSEMOTION: pos = pygame.mouse.get_pos() ev = MouseMoveEvent(pos) #Post event to event manager if ev: self.ev_manager.post(ev) #------------------------------------------------------------------------------ class CPUSpinnerController: def __init__(self, ev_manager): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.clock = pygame.time.Clock() self.cumu_time = 0 self.keep_going = True #---------------------------------------------------------------------- def run(self): if not self.keep_going: raise Exception('dead spinner') while self.keep_going: time_passed = self.clock.tick() fps = self.clock.get_fps() self.cumu_time += time_passed self.ev_manager.post(TickEvent(time_passed, fps)) if self.cumu_time >= 1000: self.cumu_time = 0 self.ev_manager.post(SecondEvent()) pygame.quit() #---------------------------------------------------------------------- def notify(self, event): if isinstance(event, QuitEvent): #this will stop the while loop from 
running self.keep_going = False rooms.py #Remote imports import pygame #Local imports import config import continents from game_components import * from my_gui import * from pgu import high class Room(object): def __init__(self, screen, ev_manager): self.screen = screen self.ev_manager = ev_manager self.ev_manager.register_listener(self) def notify(self, event): if isinstance(event, TickEvent): pygame.display.set_caption('FPS: ' + str(int(event.fps))) self.render(self.screen) pygame.display.update() def get_highs_table(self): fname = 'high_scores.txt' highs_table = None config.all_highs = high.Highs(fname) if config.game_mode == config.TIME_CHALLENGE: if config.difficulty == config.EASY: highs_table = config.all_highs['time_challenge_easy'] if config.difficulty == config.MED_DIF: highs_table = config.all_highs['time_challenge_med'] if config.difficulty == config.HARD: highs_table = config.all_highs['time_challenge_hard'] if config.difficulty == config.SUPER: highs_table = config.all_highs['time_challenge_super'] elif config.game_mode == config.PLAN_AHEAD: pass return highs_table class TitleScreen(Room): def __init__(self, screen, ev_manager): Room.__init__(self, screen, ev_manager) self.background = pygame.image.load('assets/images/interface/background.jpg').convert() #Initialize #--------------------------------------- self.gui_form = gui.Form() self.gui_app = gui.App(config.gui_theme) self.ev_manager.gui_app = self.gui_app c = gui.Container(align=0,valign=0) #Quit Button #--------------------------------------- b = StartGameButton(ev_manager=self.ev_manager) c.add(b, 0, 0) self.gui_app.init(c) def render(self, surface): surface.blit(self.background, (0, 0)) #GUI self.gui_app.paint(surface) class GameModeRoom(Room): def __init__(self, screen, ev_manager): Room.__init__(self, screen, ev_manager) self.background = pygame.image.load('assets/images/interface/background.jpg').convert() self.create_gui() #Create pgu gui elements def create_gui(self): #Setup #--------------------------------------- self.gui_form = gui.Form() self.gui_app = gui.App(config.gui_theme) self.ev_manager.gui_app = self.gui_app c = gui.Container(align=0,valign=-1) #Mode Relaxed Button #--------------------------------------- b = GameModeRelaxedButton(ev_manager=self.ev_manager) self.b = b print b.rect c.add(b, 0, 200) #Mode Time Challenge Button #--------------------------------------- b = TimeChallengeButton(ev_manager=self.ev_manager) self.b = b print b.rect c.add(b, 0, 250) #Mode Think Ahead Button #--------------------------------------- # b = PlanAheadButton(ev_manager=self.ev_manager) # self.b = b # print b.rect # c.add(b, 0, 300) #Initialize #--------------------------------------- self.gui_app.init(c) def render(self, surface): surface.blit(self.background, (0, 0)) #GUI self.gui_app.paint(surface) class GameRoom(Room): def __init__(self, screen, ev_manager): Room.__init__(self, screen, ev_manager) #Game mode #--------------------------------------- self.new_board_timer = None self.game_mode = config.game_mode config.current_highs = self.get_highs_table() self.highs_dialog = None self.game_over = False #Images #--------------------------------------- self.background = pygame.image.load('assets/images/interface/game screen2-1.jpg').convert() self.logo = pygame.image.load('assets/images/interface/logo_small.png').convert_alpha() self.game_over_text = pygame.image.load('assets/images/interface/text_game_over.png').convert_alpha() self.trip_complete_text = 
pygame.image.load('assets/images/interface/text_trip_complete.png').convert_alpha() self.zoom_game_over = None self.zoom_trip_complete = None self.fade_out = None #Text #--------------------------------------- self.font = pygame.font.Font(config.font_sans, config.interface_font_size) #Create game components #--------------------------------------- self.continent = self.set_continent(config.continent) self.board = Board(config.board_size, self.ev_manager) self.deck = Deck(self.ev_manager, self.continent) self.map = Map(self.continent) self.longest_trip = 0 #Set pos of game components #--------------------------------------- board_pos = (SCREEN_MARGIN[0], 109) self.board.set_pos(board_pos) map_pos = (config.screen_size[0] - self.map.size[0] - SCREEN_MARGIN[0], 57); self.map.set_pos(map_pos) #Trackers #--------------------------------------- self.game_clock = Chrono(self.ev_manager) self.swap_counter = 0 self.level = 0 #Create gui #--------------------------------------- self.create_gui() #Create initial board #--------------------------------------- self.new_board = self.deck.deal_new_board(self.board) self.ev_manager.post(NewBoardComplete(self.new_board)) def set_continent(self, continent_const): #Set continent based on const if continent_const == config.EUROPE: return continents.Europe() if continent_const == config.AFRICA: return continents.Africa() else: raise Exception('Continent constant not recognized') #Create pgu gui elements def create_gui(self): #Setup #--------------------------------------- self.gui_form = gui.Form() self.gui_app = gui.App(config.gui_theme) self.ev_manager.gui_app = self.gui_app c = gui.Container(align=-1,valign=-1) #Timer Progress bar #--------------------------------------- self.timer_bar = None self.time_increase = None self.minutes_left = None self.seconds_left = None self.timer_text = None if self.game_mode == config.TIME_CHALLENGE: self.time_increase = config.time_challenge_start_time self.timer_bar = gui.ProgressBar(config.time_challenge_start_time,0,config.max_time_bank,width=306) c.add(self.timer_bar, 172, 57) #Connections Progress bar #--------------------------------------- self.connections_bar = None self.connections_bar = gui.ProgressBar(0,0,config.longest_trip_needed,width=306) c.add(self.connections_bar, 172, 83) #Quit Button #--------------------------------------- b = QuitButton(ev_manager=self.ev_manager) c.add(b, 950, 20) #Generate Board Button #--------------------------------------- b = GenerateBoardButton(ev_manager=self.ev_manager, room=self) c.add(b, 500, 20) #Board Size? #--------------------------------------- bs = SetBoardSizeContainer(config.BOARD_LARGE, ev_manager=self.ev_manager, board=self.board) c.add(bs, 640, 20) #Fill Board? #--------------------------------------- t = FillBoardCheckbox(config.fill_board, ev_manager=self.ev_manager) c.add(t, 740, 20) #Darkness? 
#--------------------------------------- t = UseDarknessCheckbox(config.use_darkness, ev_manager=self.ev_manager) c.add(t, 840, 20) #Initialize #--------------------------------------- self.gui_app.init(c) def advance_level(self): self.level += 1 print 'Advancing to next level' print 'New level: ' + str(self.level) if self.timer_bar: print 'Time increase: ' + str(self.time_increase) self.timer_bar.value += self.time_increase self.time_increase = max(config.min_advance_time, int(self.time_increase * 0.9)) self.board = self.new_board self.new_board = None self.zoom_trip_complete = None self.game_clock.unpause() def notify(self, event): #Tick event if isinstance(event, TickEvent): pygame.display.set_caption('FPS: ' + str(int(event.fps))) self.render(self.screen) pygame.display.update() #Wait to deal new board when advancing levels if self.zoom_trip_complete and self.zoom_trip_complete.finished: self.zoom_trip_complete = None self.ev_manager.post(UnfreezeCards()) self.new_board = self.deck.deal_new_board(self.board) self.ev_manager.post(NewBoardComplete(self.new_board)) #New high score? if self.zoom_game_over and self.zoom_game_over.finished and not self.highs_dialog: if config.current_highs.check(self.level) != None: self.zoom_game_over.visible = False data = 'time:' + str(self.game_clock.time) + ',swaps:' + str(self.swap_counter) self.highs_dialog = HighScoreDialog(score=self.level, data=data, ev_manager=self.ev_manager) self.highs_dialog.open() elif not self.fade_out: self.fade_out = FadeOut(self.ev_manager, config.TITLE_SCREEN) #Second event elif isinstance(event, SecondEvent): if self.timer_bar: if not self.game_clock.paused: self.timer_bar.value -= 1 if self.timer_bar.value <= 0 and not self.game_over: self.ev_manager.post(GameOver()) self.minutes_left = self.timer_bar.value / 60 self.seconds_left = self.timer_bar.value % 60 if self.seconds_left < 10: leading_zero = '0' else: leading_zero = '' self.timer_text = ''.join(['Time Left: ', str(self.minutes_left), ':', leading_zero, str(self.seconds_left)]) #Game over elif isinstance(event, GameOver): self.game_over = True self.zoom_game_over = ZoomImage(self.ev_manager, self.game_over_text) #Trip complete event elif isinstance(event, TripComplete): print 'You did it!' 
self.game_clock.pause() self.zoom_trip_complete = ZoomImage(self.ev_manager, self.trip_complete_text) self.new_board_timer = Timer(self.ev_manager, 2) self.ev_manager.post(FreezeCards()) print 'Room posted newboardcomplete' #Board Refresh Complete elif isinstance(event, BoardRefreshComplete): if event.board == self.board: print 'Longest trip needed: ' + str(config.longest_trip_needed) print 'Your longest trip: ' + str(self.board.longest_trip) if self.board.longest_trip >= config.longest_trip_needed: self.ev_manager.post(TripComplete()) elif event.board == self.new_board: self.advance_level() self.connections_bar.value = self.board.longest_trip self.connection_text = ' '.join(['Connections:', str(self.board.longest_trip), '/', str(config.longest_trip_needed)]) #CardSwapComplete elif isinstance(event, CardSwapComplete): self.swap_counter += 1 elif isinstance(event, ConfigChangeBoardSize): config.board_size = event.new_size elif isinstance(event, ConfigChangeCardSize): config.card_size = event.new_size elif isinstance(event, ConfigChangeFillBoard): config.fill_board = event.new_value elif isinstance(event, ConfigChangeDarkness): config.use_darkness = event.new_value def render(self, surface): #Background surface.blit(self.background, (0, 0)) #Map self.map.render(surface) #Board self.board.render(surface) #Logo surface.blit(self.logo, (10,10)) #Text connection_text = self.font.render(self.connection_text, True, BLACK) surface.blit(connection_text, (25, 84)) if self.timer_text: timer_text = self.font.render(self.timer_text, True, BLACK) surface.blit(timer_text, (25, 64)) #GUI self.gui_app.paint(surface) if self.zoom_trip_complete: self.zoom_trip_complete.render(surface) if self.zoom_game_over: self.zoom_game_over.render(surface) if self.fade_out: self.fade_out.render(surface) class HighScoresRoom(Room): def __init__(self, screen, ev_manager): Room.__init__(self, screen, ev_manager) self.background = pygame.image.load('assets/images/interface/background.jpg').convert() #Initialize #--------------------------------------- self.gui_app = gui.App(config.gui_theme) self.ev_manager.gui_app = self.gui_app c = gui.Container(align=0,valign=0) #High Scores Table #--------------------------------------- hst = HighScoresTable() c.add(hst, 0, 0) self.gui_app.init(c) def render(self, surface): surface.blit(self.background, (0, 0)) #GUI self.gui_app.paint(surface) game_components.py #Remote imports import pygame from pygame.locals import * import random import operator from copy import copy from math import sqrt, floor #Local imports import config from events import * from matrix import Matrix from textrect import render_textrect, TextRectException from hyphen import hyphenator from textwrap2 import TextWrapper ############################## #CONSTANTS ############################## SCREEN_MARGIN = (10, 10) #Colors BLACK = (0, 0, 0) WHITE = (255, 255, 255) RED = (255, 0, 0) YELLOW = (255, 200, 0) #Directions LEFT = -1 RIGHT = 1 UP = 2 DOWN = -2 #Cards CARD_MARGIN = (10, 10) CARD_PADDING = (2, 2) #Card types BLANK = 0 COUNTRY = 1 TRANSPORT = 2 #Transport types PLANE = 0 TRAIN = 1 CAR = 2 SHIP = 3 class Timer(object): def __init__(self, ev_manager, time_left): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.time_left = time_left self.paused = False def __repr__(self): return str(self.time_left) def pause(self): self.paused = True def unpause(self): self.paused = False def notify(self, event): #Pause Event if isinstance(event, Pause): self.pause() #Unpause Event elif isinstance(event, 
Unpause): self.unpause() #Second Event elif isinstance(event, SecondEvent): if not self.paused: self.time_left -= 1 class Chrono(object): def __init__(self, ev_manager, start_time=0): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.time = start_time self.paused = False def __repr__(self): return str(self.time_left) def pause(self): self.paused = True def unpause(self): self.paused = False def notify(self, event): #Pause Event if isinstance(event, Pause): self.pause() #Unpause Event elif isinstance(event, Unpause): self.unpause() #Second Event elif isinstance(event, SecondEvent): if not self.paused: self.time += 1 class Map(object): def __init__(self, continent): self.map_image = pygame.image.load(continent.map).convert_alpha() self.map_text = pygame.image.load(continent.map_text).convert_alpha() self.pos = (0, 0) self.set_color() self.map_image = pygame.transform.smoothscale(self.map_image, config.map_size) self.size = self.map_image.get_size() def set_pos(self, pos): self.pos = pos def set_color(self): image_pixel_array = pygame.PixelArray(self.map_image) image_pixel_array.replace(config.GRAY1, config.COLOR1) image_pixel_array.replace(config.GRAY2, config.COLOR2) image_pixel_array.replace(config.GRAY3, config.COLOR3) image_pixel_array.replace(config.GRAY4, config.COLOR4) image_pixel_array.replace(config.GRAY5, config.COLOR5)
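    Here's my current stab at the "only update what changed" idea, for anyone who can confirm I've understood it - a minimal, self-contained dirty-rect sketch with made-up surfaces and names (my reading of the pygame docs, not code from the actual game):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()

    # Draw the static background once; afterwards only changed rects are pushed.
    background = pygame.Surface(screen.get_size()).convert()
    background.fill((30, 30, 30))
    screen.blit(background, (0, 0))
    pygame.display.update()

    # A stand-in for one moving card sprite.
    card = pygame.Surface((60, 90)).convert()
    card.fill((200, 180, 60))
    card_rect = card.get_rect(topleft=(10, 195))

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        dirty = []
        # Erase the card at its old position by restoring that patch of background.
        dirty.append(screen.blit(background, card_rect, card_rect))
        # Move it and redraw it, remembering the new rect too.
        card_rect.move_ip(2, 0)
        if card_rect.left > 640:
            card_rect.right = 0
        dirty.append(screen.blit(card, card_rect))
        # Push only the changed rectangles instead of flipping the whole screen.
        pygame.display.update(dirty)
        clock.tick(60)  # also caps the frame rate so the loop doesn't spin the CPU

    pygame.quit()

    If I've read the docs right, pygame.sprite.RenderUpdates automates this same bookkeeping for sprite groups.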

    Read the article

  • Benchmark Linq2SQL, Subsonic2, Subsonic3 - Any other ideas to make them faster?

    - by Aristos
    I have been working with SubSonic 2 for more than 3 years now... After LINQ appeared, and then SubSonic 3, I started thinking about moving to the new LINQ features that connect to SQL. I began to move and port my SubSonic 2 code to SubSonic 3, and very soon I discovered that the speed was so slow I couldn't believe it - so I started all these tests. Then I tested Linq2Sql and also saw a delay compared with SubSonic 2. My question here, especially for Linq2Sql and the upcoming .NET version 4, is: what else can I do to speed it up? What else in the Linq2Sql settings or classes - not in the test code I used for my measurements? I include here the project I used to make the tests, plus screenshots of the results. How I made the tests - and how accurate my measurements are: I used only Google Chrome for this question, because it's difficult to show here the many other measurements I have made with more complex programs. This is the simplest one; I just measure the data read. How can I prove the method is sound? I added a simple Thread.Sleep(10 seconds) and checked whether I see those 10 seconds in Chrome's measurement - and yes, I see it. Here are more tests with this sleep thread to see what Chrome actually reports: 10 seconds delay, 100 ms delay, zero delay. There is only a small 15 ms overhead in the measurement, which is so small compared with the rest of my tests that I don't care about it. So what do I measure? Just the data read via each method - not counting the data or database delay, or any disk reads, or anything like that (a simplified sketch of the timing loop itself appears after the precompiled code below). Later, in the image with the results, I show that no disk activity occurred during the measurements. See this image to see what I really measure and whether it is correct. Why I chose this kind of test: it's simple, it's real, and it's close to my real problem - the delay of SubSonic 3 that I found in a real program with real data. Now let's test the DALs. Start by looking at this image. I make 4-5 calls with every method, one after the other. The results, for a loop of 100 iterations asking for 5 rows (one of which does not exist), are approximately: simple ADO.NET: 81ms; SubSonic 2: 210ms; Linq2Sql: 1.70sec; Linq2Sql using CompiledQuery.Compile: 239ms; SubSonic 3: 15.00sec (wow - extremely slow). The project: http://www.planethost.gr/DalSpeedTests.rar Can anyone confirm this benchmark, or suggest any optimizations to help me out? Other tests: someone posted the link http://ormbattle.net/ here (and then removed it - I don't know why). On that page you can find really useful, advanced tests for everything except the SubSonic 2 and SubSonic 3 versions that I have here! Optimizing: what I am really asking is whether anyone knows any trick to optimize the DALs - not by changing the test code, but by changing the code and the settings of each DAL. For example... Optimizing Linq2Sql: I started searching for ways to optimize Linq2Sql and found this article, and maybe more exist. I applied the tricks from that page and optimized the code using them all. The speed went to about 1.50sec from 1.70 - a big improvement, but still slow. Then I found a different way - a same-idea article - and wow! The speed shot up. Using this trick with CompiledQuery.Compile, the time dropped from 1.5sec to 239ms. Here is the code for the precompiled query... 
Func<DataClassesDataContext, int, IQueryable<Product>> compiledQuery = CompiledQuery.Compile((DataClassesDataContext meta, int IdToFind) => (from myData in meta.Products where myData.ProductID.Equals(IdToFind) select myData)); StringBuilder Test = new StringBuilder(); int[] MiaSeira = { 5, 6, 10, 100, 7 }; using (DataClassesDataContext context = new DataClassesDataContext()) { context.ObjectTrackingEnabled = false; for (int i = 0; i < 100; i++) { foreach (int EnaID in MiaSeira) { var oFindThat2P = compiledQuery(context, EnaID); foreach (Product One in oFindThat2P) { Test.Append("<br />"); Test.Append(One.ProductName); } } } } Optimizing SubSonic 3 and its problems: I did a lot of performance profiling and started changing one thing after another; the speed got better, but it is still too slow. I posted the findings to the SubSonic group, but they ignored the problem and said everything is fast... Here are some captures of my profiling and of the delay points inside the SubSonic source code. I have concluded that SubSonic 3 makes more calls for the structure of the database than for the data itself. It needs to reconsider the whole way it asks for data, and follow the SubSonic 2 idea if that is possible. I tried to precompile for SubSonic 3 as I did for Linq2Sql, but have failed so far... Optimizing SubSonic 2: after discovering that SubSonic 3 is extremely slow, I started checking SubSonic 2 - which I had never done before, believing it to be fast (and it is). Still, some points came up that can be made faster. For example, there are many loops like these that are actually slow because of the string manipulation and comparisons inside the loop. I must tell you that this code is called millions of times, over a period of a few minutes, as the program asks for data. With a small number of tables and small fields this may not be a big thing for some people, but with a large number of tables the delay grows even more. So I decided to optimize SubSonic 2 myself, by replacing the string comparisons with number comparisons. Simple. I did that at almost every point the profiler said was slow. I also changed all the small points that could be even a little faster, and disabled some rarely used features. The results: 5% faster on the Northwind database, and nearly 20% faster on my database with 250 tables. That comes to 500ms less in a 10-second process on Northwind, and 100ms faster in a 500ms process on my database. I don't have captures to show for that because I made them with different code at a different time, and tracked them down on paper. Anyway, this is my story and my question about all of this: what else do you know to make them even faster? For these measurements I used SubSonic 2.2 optimized by me, SubSonic 3.0.0.3 slightly optimized by me, and .NET 3.5.
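    For anyone who wants to reproduce the numbers without Chrome, my timing loop has roughly the following shape - a simplified sketch with illustrative names, not the exact code from DalSpeedTests.rar:

    using System;
    using System.Diagnostics;

    class DalTimer
    {
        static void Main()
        {
            // Same shape as the tests above: 100 iterations over 5 ids,
            // one of which does not exist in the database.
            int[] idsToFind = { 5, 6, 10, 100, 7 };

            Stopwatch watch = Stopwatch.StartNew();
            for (int i = 0; i < 100; i++)
            {
                foreach (int id in idsToFind)
                {
                    // Call the DAL under test here, e.g. the precompiled
                    // Linq2Sql query shown above.
                }
            }
            watch.Stop();

            // Stopwatch keeps browser and page-rendering overhead out of the number.
            Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);
        }
    }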

    Read the article

  • Different behavior of functors (copies, assignments) in VS2010 (compared with VS2005)

    - by Patrick
    When moving from VS2005 to VS2010 we noticed a performance decrease, which seemed to be caused by additional copies of a functor. The following code illustrates the problem. It is essential to have a map where the value itself is a set. On both the map and the set we defined a comparison functor (which is templated in the example). #include <iostream> #include <map> #include <set> class A { public: A(int i, char c) : m_i(i), m_c(c) { std::cout << "Construct object " << m_c << m_i << std::endl; } A(const A &a) : m_i(a.m_i), m_c(a.m_c) { std::cout << "Copy object " << m_c << m_i << std::endl; } ~A() { std::cout << "Destruct object " << m_c << m_i << std::endl; } void operator= (const A &a) { m_i = a.m_i; m_c = a.m_c; std::cout << "Assign object " << m_c << m_i << std::endl; } int m_i; char m_c; }; class B : public A { public: B(int i) : A(i, 'B') { } static const char s_c = 'B'; }; class C : public A { public: C(int i) : A(i, 'C') { } static const char s_c = 'C'; }; template <class X> class compareA { public: compareA() : m_i(999) { std::cout << "Construct functor " << X::s_c << m_i << std::endl; } compareA(const compareA &a) : m_i(a.m_i) { std::cout << "Copy functor " << X::s_c << m_i << std::endl; } ~compareA() { std::cout << "Destruct functor " << X::s_c << m_i << std::endl; } void operator= (const compareA &a) { m_i = a.m_i; std::cout << "Assign functor " << X::s_c << m_i << std::endl; } bool operator() (const X &x1, const X &x2) const { std::cout << "Comparing object " << x1.m_i << " with " << x2.m_i << std::endl; return x1.m_i < x2.m_i; } private: int m_i; }; typedef std::set<C, compareA<C> > SetTest; typedef std::map<B, SetTest, compareA<B> > MapTest; int main() { int i = 0; std::cout << "--- " << i++ << std::endl; MapTest mapTest; std::cout << "--- " << i++ << std::endl; SetTest &setTest = mapTest[0]; std::cout << "--- " << i++ << std::endl; } If I compile this code with VS2005 I get the following output: --- 0 Construct functor B999 Copy functor B999 Copy functor B999 Destruct functor B999 Destruct functor B999 --- 1 Construct object B0 Construct functor C999 Copy functor C999 Copy functor C999 Destruct functor C999 Destruct functor C999 Copy object B0 Copy functor C999 Copy functor C999 Copy functor C999 Destruct functor C999 Destruct functor C999 Copy object B0 Copy functor C999 Copy functor C999 Copy functor C999 Destruct functor C999 Destruct functor C999 Destruct functor C999 Destruct object B0 Destruct functor C999 Destruct object B0 --- 2 If I compile this with VS2010, I get the following output: --- 0 Construct functor B999 Copy functor B999 Copy functor B999 Destruct functor B999 Destruct functor B999 --- 1 Construct object B0 Construct functor C999 Copy functor C999 Copy functor C999 Destruct functor C999 Destruct functor C999 Copy object B0 Copy functor C999 Copy functor C999 Copy functor C999 Destruct functor C999 Destruct functor C999 Copy functor C999 Assign functor C999 Assign functor C999 Destruct functor C999 Copy object B0 Copy functor C999 Copy functor C999 Copy functor C999 Destruct functor C999 Destruct functor C999 Copy functor C999 Assign functor C999 Assign functor C999 Destruct functor C999 Destruct functor C999 Destruct object B0 Destruct functor C999 Destruct object B0 --- 2 The output for the first statement (constructing the map) is identical. 
    The output for the second statement (creating the first element in the map and getting a reference to it) is much bigger in the VS2010 case: the functor's copy constructor runs 10 times vs. 8, its assignment operator 2 times vs. 0, and its destructor 10 times vs. 8. My questions are: Why does the STL copy a functor at all? Isn't it enough to construct it once for every instantiation of the set? Why is the functor copied more in the VS2010 case than in the VS2005 case? (I didn't check VS2008.) And why is it assigned twice in VS2010 and not at all in VS2005? Are there any tricks to avoid the copies of functors? I saw a similar question at http://stackoverflow.com/questions/2216041/prevent-unnecessary-copies-of-c-functor-objects but I'm not sure it's the same question. Thanks in advance, Patrick
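    For the last question, one idea I'm experimenting with: the containers are still free to copy the comparator, but if it carries no state every copy is trivial. A minimal sketch with illustrative types (not our production code):

    #include <map>
    #include <set>

    struct Item { int key; };

    // Stateless comparator: copying it costs nothing, so the extra
    // copies VS2010 makes become harmless.
    struct CompareItem
    {
        bool operator()(const Item& a, const Item& b) const
        {
            return a.key < b.key;
        }
    };

    typedef std::set<Item, CompareItem> ItemSet;
    typedef std::map<int, ItemSet> ItemMap;

    int main()
    {
        ItemMap m;
        ItemSet& s = m[0]; // comparator copies may still happen, but each is trivial
        (void)s;
        return 0;
    }

    The trade-off is that any per-instance comparison state has to move elsewhere, e.g. behind a cheap-to-copy pointer member.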

    Read the article

  • "Class ref in pre-verified class resolved to unexpected implementation" when running android tests i

    - by Mike
    I have a module that builds an app called MyApp. I have another that builds some test cases for that app, called MyAppTests. They both build their own APKs, and they both work fine from within my IDE. I'd like to build them using ant so that I can take advantage of continuous integration. Building the app module works fine; I'm having difficulty getting the test module to compile and run. Using Christopher's tip from a previous question, I used android create test-project -p MyAppTests -m ../MyApp -n MyAppTests to create the necessary build files to build and run my test project. This seems to work great (once I removed an unnecessary test case that it constructed for me and reverted my AndroidManifest.xml to the one I was using before it got replaced by android create), but I have two problems. The first problem: the project doesn't compile because it's missing libraries. $ ant run-tests Buildfile: build.xml [setup] Project Target: Google APIs [setup] Vendor: Google Inc. [setup] Platform Version: 1.6 [setup] API level: 4 [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions. -install-tested-project: [setup] Project Target: Google APIs [setup] Vendor: Google Inc. [setup] Platform Version: 1.6 [setup] API level: 4 [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions. -compile-tested-if-test: -dirs: [echo] Creating output directories if needed... -resource-src: [echo] Generating R.java / Manifest.java from the resources... -aidl: [echo] Compiling aidl files into Java classes... compile: [javac] Compiling 1 source file to /Users/mike/Projects/myapp/android/MyApp/bin/classes -dex: [echo] Converting compiled files and external libraries into /Users/mike/Projects/myapp/android/MyApp/bin/classes.dex... [echo] -package-resources: [echo] Packaging resources [aaptexec] Creating full resource package... -package-debug-sign: [apkbuilder] Creating MyApp-debug-unaligned.apk and signing it with a debug key... [apkbuilder] Using keystore: /Users/mike/.android/debug.keystore debug: [echo] Running zip align on final apk... [echo] Debug Package: /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk install: [echo] Installing /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk onto default emulator or device... [exec] 1567 KB/s (288354 bytes in 0.179s) [exec] pkg: /data/local/tmp/MyApp-debug.apk [exec] Success -compile-tested-if-test: -dirs: [echo] Creating output directories if needed... [mkdir] Created dir: /Users/mike/Projects/myapp/android/MyAppTests/gen [mkdir] Created dir: /Users/mike/Projects/myapp/android/MyAppTests/bin [mkdir] Created dir: /Users/mike/Projects/myapp/android/MyAppTests/bin/classes -resource-src: [echo] Generating R.java / Manifest.java from the resources... -aidl: [echo] Compiling aidl files into Java classes... 
compile: [javac] Compiling 5 source files to /Users/mike/Projects/myapp/android/MyAppTests/bin/classes [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:4: package roboguice.test does not exist [javac] import roboguice.test.RoboUnitTestCase; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:8: package com.google.gson does not exist [javac] import com.google.gson.JsonElement; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:9: package com.google.gson does not exist [javac] import com.google.gson.JsonParser; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:11: cannot find symbol [javac] symbol: class RoboUnitTestCase [javac] public class GsonTest extends RoboUnitTestCase<MyApplication> { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:6: package roboguice.test does not exist [javac] import roboguice.test.RoboUnitTestCase; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:7: package roboguice.util does not exist [javac] import roboguice.util.RoboLooperThread; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:11: package com.google.gson does not exist [javac] import com.google.gson.JsonObject; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:15: cannot find symbol [javac] symbol: class RoboUnitTestCase [javac] public class HttpTest extends RoboUnitTestCase<MyApplication> { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:4: package roboguice.test does not exist [javac] import roboguice.test.RoboUnitTestCase; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:12: cannot find symbol [javac] symbol: class RoboUnitTestCase [javac] public class LinksTest extends RoboUnitTestCase<MyApplication> { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:4: package roboguice.test does not exist [javac] import roboguice.test.RoboUnitTestCase; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:5: package roboguice.util does not exist [javac] import roboguice.util.RoboAsyncTask; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:6: package roboguice.util does not exist [javac] import roboguice.util.RoboLooperThread; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:12: cannot find symbol [javac] symbol: class RoboUnitTestCase [javac] public class SafeAsyncTest extends RoboUnitTestCase<MyApplication> { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectResource': class file for roboguice.inject.InjectResource not found [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectResource' [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView': class file for 
roboguice.inject.InjectView not found [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView' [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView' [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView' [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:15: cannot find symbol [javac] symbol : class JsonParser [javac] location: class com.myapp.test.GsonTest [javac] final JsonParser parser = new JsonParser(); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:15: cannot find symbol [javac] symbol : class JsonParser [javac] location: class com.myapp.test.GsonTest [javac] final JsonParser parser = new JsonParser(); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:18: cannot find symbol [javac] symbol : class JsonElement [javac] location: class com.myapp.test.GsonTest [javac] final JsonElement e = parser.parse(s); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:20: cannot find symbol [javac] symbol : class JsonElement [javac] location: class com.myapp.test.GsonTest [javac] final JsonElement e2 = parser.parse(s2); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:19: cannot find symbol [javac] symbol : method getInstrumentation() [javac] location: class com.myapp.test.HttpTest [javac] assertEquals("MyApp", getInstrumentation().getTargetContext().getResources().getString(com.myapp.R.string.app_name)); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:62: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.HttpTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:82: cannot find symbol [javac] symbol : method assertTrue(java.lang.String,boolean) [javac] location: class com.myapp.test.HttpTest [javac] assertTrue(result[0], result[0].contains("Search")); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:87: cannot find symbol [javac] symbol : class JsonObject [javac] location: class com.myapp.test.HttpTest [javac] final JsonObject[] result = {null}; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:90: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.HttpTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:117: cannot find symbol [javac] symbol : class JsonObject [javac] location: class com.myapp.test.HttpTest [javac] final JsonObject[] result = {null}; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:120: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.HttpTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:27: cannot find symbol 
[javac] symbol : method assertTrue(boolean) [javac] location: class com.myapp.test.LinksTest [javac] assertTrue(m.matches()); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:28: cannot find symbol [javac] symbol : method assertEquals(java.lang.String,java.lang.String) [javac] location: class com.myapp.test.LinksTest [javac] assertEquals( map.get(url), m.group(1) ); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:19: cannot find symbol [javac] symbol : method getInstrumentation() [javac] location: class com.myapp.test.SafeAsyncTest [javac] assertEquals("MyApp", getInstrumentation().getTargetContext().getString(com.myapp.R.string.app_name)); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:27: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.SafeAsyncTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:65: cannot find symbol [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State) [javac] location: class com.myapp.test.SafeAsyncTest [javac] assertEquals(State.TEST_SUCCESS,state[0]); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:74: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.SafeAsyncTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:105: cannot find symbol [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State) [javac] location: class com.myapp.test.SafeAsyncTest [javac] assertEquals(State.TEST_SUCCESS,state[0]); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:113: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.SafeAsyncTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:144: cannot find symbol [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State) [javac] location: class com.myapp.test.SafeAsyncTest [javac] assertEquals(State.TEST_SUCCESS,state[0]); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:154: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.SafeAsyncTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:187: cannot find symbol [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State) [javac] location: class com.myapp.test.SafeAsyncTest [javac] assertEquals(State.TEST_SUCCESS,state[0]); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/StoriesTest.java:11: cannot access roboguice.activity.GuiceListActivity [javac] class file for roboguice.activity.GuiceListActivity not found [javac] public class StoriesTest extends ActivityUnitTestCase<Stories> { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/StoriesTest.java:21: cannot access 
roboguice.application.GuiceApplication [javac] class file for roboguice.application.GuiceApplication not found [javac] setApplication( new MyApplication( getInstrumentation().getTargetContext() ) ); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/StoriesTest.java:22: incompatible types [javac] found : com.myapp.activity.Stories [javac] required: android.app.Activity [javac] final Activity activity = startActivity(intent, null, null); [javac] ^ [javac] 39 errors [javac] 6 warnings BUILD FAILED /opt/local/android-sdk-mac/platforms/android-1.6/templates/android_rules.xml:248: Compile failed; see the compiler error output for details. Total time: 24 seconds That's not a hard problem to solve. I'm not sure it's the right thing to do, but I copied the missing libraries (roboguice and gson) from the MyApp/libs directory to the MyAppTests/libs directory and everything seems to compile fine. But that leads to the second problem, which I'm currently stuck on. The tests compile fine but they won't run: $ cp ../MyApp/libs/gson-r538.jar libs/ $ cp ../MyApp/libs/roboguice-1.1-SNAPSHOT.jar libs/ 0 10:23 /Users/mike/Projects/myapp/android/MyAppTests $ ant run-testsBuildfile: build.xml [setup] Project Target: Google APIs [setup] Vendor: Google Inc. [setup] Platform Version: 1.6 [setup] API level: 4 [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions. -install-tested-project: [setup] Project Target: Google APIs [setup] Vendor: Google Inc. [setup] Platform Version: 1.6 [setup] API level: 4 [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions. -compile-tested-if-test: -dirs: [echo] Creating output directories if needed... -resource-src: [echo] Generating R.java / Manifest.java from the resources... -aidl: [echo] Compiling aidl files into Java classes... compile: [javac] Compiling 1 source file to /Users/mike/Projects/myapp/android/MyApp/bin/classes -dex: [echo] Converting compiled files and external libraries into /Users/mike/Projects/myapp/android/MyApp/bin/classes.dex... [echo] -package-resources: [echo] Packaging resources [aaptexec] Creating full resource package... -package-debug-sign: [apkbuilder] Creating MyApp-debug-unaligned.apk and signing it with a debug key... [apkbuilder] Using keystore: /Users/mike/.android/debug.keystore debug: [echo] Running zip align on final apk... [echo] Debug Package: /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk install: [echo] Installing /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk onto default emulator or device... [exec] 1396 KB/s (288354 bytes in 0.201s) [exec] pkg: /data/local/tmp/MyApp-debug.apk [exec] Success -compile-tested-if-test: -dirs: [echo] Creating output directories if needed... -resource-src: [echo] Generating R.java / Manifest.java from the resources... -aidl: [echo] Compiling aidl files into Java classes... compile: [javac] Compiling 5 source files to /Users/mike/Projects/myapp/android/MyAppTests/bin/classes [javac] Note: /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java uses unchecked or unsafe operations. [javac] Note: Recompile with -Xlint:unchecked for details. -dex: [echo] Converting compiled files and external libraries into /Users/mike/Projects/myapp/android/MyAppTests/bin/classes.dex... [echo] -package-resources: [echo] Packaging resources [aaptexec] Creating full resource package... 
-package-debug-sign: [apkbuilder] Creating MyAppTests-debug-unaligned.apk and signing it with a debug key... [apkbuilder] Using keystore: /Users/mike/.android/debug.keystore debug: [echo] Running zip align on final apk... [echo] Debug Package: /Users/mike/Projects/myapp/android/MyAppTests/bin/MyAppTests-debug.apk install: [echo] Installing /Users/mike/Projects/myapp/android/MyAppTests/bin/MyAppTests-debug.apk onto default emulator or device... [exec] 1227 KB/s (94595 bytes in 0.075s) [exec] pkg: /data/local/tmp/MyAppTests-debug.apk [exec] Success run-tests: [echo] Running tests ... [exec] [exec] android.test.suitebuilder.TestSuiteBuilder$FailedToCreateTests:INSTRUMENTATION_RESULT: shortMsg=Class ref in pre-verified class resolved to unexpected implementation [exec] INSTRUMENTATION_RESULT: longMsg=java.lang.IllegalAccessError: Class ref in pre-verified class resolved to unexpected implementation [exec] INSTRUMENTATION_CODE: 0 BUILD SUCCESSFUL Total time: 38 seconds Any idea what's causing the "Class ref in pre-verified class resolved to unexpected implementation" error?
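    Update: my current theory is that copying those jars means the roboguice/gson classes are now packaged in both APKs, so the test APK's copies fail pre-verification against the versions already loaded from MyApp. A quick (unverified) way I'm checking for the overlap with the SDK's dexdump tool - paths are from my machine:

    $ unzip -p MyApp/bin/MyApp-debug.apk classes.dex > /tmp/app.dex
    $ unzip -p MyAppTests/bin/MyAppTests-debug.apk classes.dex > /tmp/tests.dex
    $ dexdump /tmp/app.dex | grep "Class descriptor" | sort > /tmp/app-classes.txt
    $ dexdump /tmp/tests.dex | grep "Class descriptor" | sort > /tmp/tests-classes.txt
    $ comm -12 /tmp/app-classes.txt /tmp/tests-classes.txt   # classes packaged in both APKs

    If the roboguice and gson classes show up in both lists, I presumably need to keep those jars on the test project's compile classpath without packaging them into the test APK - though I haven't found the right ant setup for that yet.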

    Read the article


    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving, so I'm republishing it here.

How I Got 15x Improvement Without Really Trying
John Feo, Sun Microsystems

Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.

Introduction

Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.

Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA.

Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor to student or student to student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties.
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection (a minimal sketch of such a timer appears below, just before the Array Accesses discussion). The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices, but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well-known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

  #   cache performance   redundant operations   loop structures   performance improvement
  1           x                     x                                       15.5
  2           x                                                              2.8
  3           x                     x                                        2.5
  4                                 x                                        2.1
  5           x                     x                                        2.0
  6                                 x                                        5.0
  7           x                                                              5.8
  8                                 x                                        6.3
  9                                                                          2.2
 10           x                                          x                   3.3

Table 1 — Area of improvement and performance gains of 10 codes

The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

Optimizing cache performance

Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
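Editorial aside — here is a minimal sketch of the kind of source-line timer the author mentions, assuming a POSIX system with clock_gettime (link with -lrt on older systems); the function names are hypothetical, not from the original study:

#include <stdio.h>
#include <time.h>

/* Crude source-line timer: bracket a suspect section with tic()/toc()
   to see where the time goes before reaching for a full profiler. */
static struct timespec t0;

static void tic(void) { clock_gettime(CLOCK_MONOTONIC, &t0); }

static void toc(const char *label) {
    struct timespec t1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double s = (double)(t1.tv_sec - t0.tv_sec)
             + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
    fprintf(stderr, "%s: %.6f s\n", label, s);
}

int main(void) {
    double sum = 0.0;
    tic();
    for (long i = 0; i < 100000000L; i++)  /* section under test */
        sum += (double)i;
    toc("accumulate loop");
    fprintf(stderr, "sum = %g\n", sum);    /* keep the loop from being optimized away */
    return 0;
}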
Six of the ten codes studied here benefited from such high-level optimizations.

Array Accesses

The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

do I = 0, 1010, delta_x
   IM = I - delta_x
   IP = I + delta_x
   do J = 5, 995, delta_x
      JM = J - delta_x
      JP = J + delta_x
      T1 = CA1(IP, J) + CA1(I, JP)
      T2 = CA1(IM, J) + CA1(I, JM)
      S1 = T1 + T2 - 4 * CA1(I, J)
      CA(I, J) = CA1(I, J) + D * S1
   end do
end do

In code 2, the culprit is conditionals:

do I = 1, N
   do J = 1, N
      if (IFLAG(I,J) .EQ. 0) then
         T1 = Value(I, J-1)
         T2 = Value(I-1, J)
         T3 = Value(I, J)
         T4 = Value(I+1, J)
         T5 = Value(I, J+1)
         Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
         Delta = ABS(T3 - Value(I,J))
         if (Delta .GT. MaxDelta) MaxDelta = Delta
      endif
   enddo
enddo

I fixed both programs by inverting the loops by hand.

Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above, the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

L1: for i          L2: for i          L3: for i
      for l              for l              for j
        for k              for j              for k
          for j              for k              for l

So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

Array Strides

When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes; a generic illustration of the stride effect follows before the two cases.
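Editorial aside — a minimal sketch of the stride effect, assuming 64-byte cache lines and 8-byte doubles, so one line holds 8 elements; the function and its names are hypothetical, not from the original study:

#include <stddef.h>

/* Sum n elements of a, visited with a given stride. With stride 1, each
 * cache miss brings in a 64-byte line whose 8 doubles are all used before
 * the next miss. With stride 8, every iteration touches a new line, so
 * the loop incurs roughly 8 times as many misses for the same arithmetic. */
double strided_sum(const double *a, size_t n, size_t stride) {
    double sum = 0.0;
    for (size_t i = 0; i < n * stride; i += stride)
        sum += a[i];
    return sum;
}

Timing strided_sum over the same n with stride 1 versus stride 8 makes the amortization argument concrete: identical work, very different memory traffic.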
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into contiguous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

do j = 1, GZ
   do i = 1, GZ
      T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
      T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
      S1 = T1 + T4 - 4 * CA1(i+0, j+0)
      CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
   enddo
enddo

where CA and CA1 are compressed arrays of size GZ.

Code 7 traverses a list of objects, selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions (a generic sketch of this gather pattern follows the code 1 unrolling example below). The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

Data reuse

In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4:

do J = 1, GZ-2, 2
   do I = 1, GZ-2, 2
      T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
      T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
      T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
      T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
      T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
      T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
      T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
      S1 = T1 + T4 - 4 * CA1(i+0, j+0)
      S2 = T2 + T5 - 4 * CA1(i+1, j+0)
      S3 = T3 + T6 - 4 * CA1(i+0, j+1)
      S4 = T4 + T7 - 4 * CA1(i+1, j+1)
      CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
      CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
      CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
      CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
   enddo
enddo

The loop body executes 12 reads (it references only 12 distinct CA1 elements), whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
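Editorial aside — the gather pattern from code 7, promised above, sketched generically; the struct, field, and function names are hypothetical, not the author's code:

/* Selection touches the object array with unit stride; copying the
 * fields needed later into packed temporaries lets the processing
 * loops also run at unit stride instead of hopping object to object. */
#include <stddef.h>

typedef struct { double mass, radius; int flags; } Object;

size_t select_and_pack(const Object *obj, size_t n,
                       double *mass, double *radius) {
    size_t m = 0;
    for (size_t i = 0; i < n; i++) {        /* unit-stride selection */
        if (obj[i].flags & 1) {
            mass[m]   = obj[i].mass;        /* gather the fields now...   */
            radius[m] = obj[i].radius;
            m++;
        }
    }
    return m;   /* ...so processing over mass[0..m) is stride 1 */
}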
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before:

for (k = 0; k < NK[u]; k++) {
    sum = 0.0;
    for (y = 0; y < NY; y++) {
        sum += W[y][u][k] * delta[y];
    }
    backprop[i++] = sum;
}

and here is the after, for one of the loops unrolled 8 times:

for (k = 0; k < KK - 8; k += 8) {
    sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
    sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
    for (y = 0; y < NY; y++) {
        sum0 += W[y][0][k+0] * delta[y];
        sum1 += W[y][0][k+1] * delta[y];
        sum2 += W[y][0][k+2] * delta[y];
        sum3 += W[y][0][k+3] * delta[y];
        sum4 += W[y][0][k+4] * delta[y];
        sum5 += W[y][0][k+5] * delta[y];
        sum6 += W[y][0][k+6] * delta[y];
        sum7 += W[y][0][k+7] * delta[y];
    }
    backprop[k+0] = sum0; backprop[k+1] = sum1;
    backprop[k+2] = sum2; backprop[k+3] = sum3;
    backprop[k+4] = sum4; backprop[k+5] = sum5;
    backprop[k+6] = sum6; backprop[k+7] = sum7;
}

Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

Reducing instruction count

Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

Memory operations

The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions, since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because, in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

for (y = 0; y < NY; y++) {
    i = 0;
    for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += delta[y] * I1[i++];
        }
    }
}

Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
In reality, dW and delta do not overlap in memory, so I rewrote the loop as

for (y = 0; y < NY; y++) {
    i = 0;
    Dy = delta[y];
    for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += Dy * I1[i++];
        }
    }
}

Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

#define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

a0 = MAT4D(a,q,0,j,k)

before the loop, and then replace all instances of

*MAT4D(a,q,i,j,k)

in the loop with

a0[i]

A similar problem appears in code 6, a Fortran program. The key loop in this program is

do n1 = 1, nh
   nx1 = (n1 - 1) / nz + 1
   nz1 = n1 - nz * (nx1 - 1)
   do n2 = 1, nh
      nx2 = (n2 - 1) / nz + 1
      nz2 = n2 - nz * (nx2 - 1)
      ndx = nx2 - nx1
      ndy = nz2 - nz1
      gxx = grn(1,ndx,ndy)
      gyy = grn(2,ndx,ndy)
      gxy = grn(3,ndx,ndy)
      balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
      balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
   end do
end do

The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 ≤ i < N, 0 ≤ j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

for (i = 0; i < N; i += 8) {
    for (j = 0; j < M; j++) {
        sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
        sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
        for (k = 0; k < K; k++) {
            sum0 += A[i+0][k] * B[j][k];
            sum1 += A[i+1][k] * B[j][k];
            sum2 += A[i+2][k] * B[j][k];
            sum3 += A[i+3][k] * B[j][k];
            sum4 += A[i+4][k] * B[j][k];
            sum5 += A[i+5][k] * B[j][k];
            sum6 += A[i+6][k] * B[j][k];
            sum7 += A[i+7][k] * B[j][k];
        }
        C[i+0][j] = sum0; C[i+1][j] = sum1;
        C[i+2][j] = sum2; C[i+3][j] = sum3;
        C[i+4][j] = sum4; C[i+5][j] = sum5;
        C[i+6][j] = sum6; C[i+7][j] = sum7;
    }
}

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

In code 5, we have the data version of the index optimization in code 6.
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

for (j = 0; j < N; j++) {
    for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0))
                           : low * pow(R/PRM[6], PRM[5]);
        < rest of loop omitted >
    }
}

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array:

for (i = 0; i < M; i++) {
    r = i * hrmax;
    TEMP[i] = pow(r, PRM[3]);
}

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

for (j = 0; j < N; j++) {
    R = rig[j] / 1000.;
    tmp1 = kcoeff * par[2] * beta[j] * par[4];
    tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
    tmp3 = 1.0 + (par[4] * par[4] * R * R);
    tmp4 = par[6] * par[6] / tmp2;
    tmp5 = R * R / tmp3;
    tmp6 = pow(R / par[6], par[5]);
    if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp5;
    } else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp4 * tmp6;
    } else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp5;
    } else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
    }
    for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
    }
}

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals in their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

MaxDelta = 0.0
do J = 1, N
   do I = 1, M
      < code omitted >
      Delta = abs(OldValue - NewValue)
      if (Delta > MaxDelta) MaxDelta = Delta
   enddo
enddo
if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

MaxDelta = .false.
do J = 1, N
   do I = 1, M
      < code omitted >
      Delta = abs(OldValue - NewValue)
      MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
   enddo
enddo
if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising, since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
I found such an occasion in code 10, where I split the loop

do i = 1, n
   do j = 1, m
      A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
      B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
      A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
      B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
      C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
      D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
      C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
      D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
   end do
end do

into two disjoint loops

do i = 1, n
   do j = 1, m
      A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
      B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
      A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
      B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
   end do
end do

do i = 1, n
   do j = 1, m
      C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
      D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
      C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
      D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
   end do
end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Passing variables to shopping cart with Javascript

    - by albatross
This question is an extension of this one: http://stackoverflow.com/questions/2359238/calculate-order-price-by-date-selection-value

I'm trying to make a conference registration page, based on the previous page, which passes the variables (name, email, price) to my organization's outdated shopping cart using JavaScript. I'm also using the Seminar Registration tutorial by CSS-Tricks (http://css-tricks.com/examples/SeminarRegTutorial/).

Currently, my proceed-to-payment button produces an 'element is undefined' error on line 298 (same thing on the unresolved previous question, linked above):

switch (document.Information.amount.value) {

Any help would be greatly appreciated. I'm at my wit's end with this. Here is the page:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
  <title>Seminar Registration Form with jQuery</title>
  <link rel="stylesheet" type="text/css" href="css/style.css" media="screen" />
  <script src="js/jquery-1.2.6.js" type="text/javascript" charset="utf-8"></script>
  <script src="js/form-fun.jquery.js" type="text/javascript" charset="utf-8"></script>
  <!--[if IE]>
  <style type="text/css">
    legend { position: relative; top: -30px; }
    fieldset { margin: 30px 10px 0 0; }
  </style>
  <script type="text/javascript">
    $(function(){
      $("#step_2 legend").css({ opacity: 0.5 });
      $("#step_3 legend").css({ opacity: 0.5 });
    });
  </script>
  <![endif]-->
</head>
<body>
<div id="page-wrap">
  <h1>Conference <span>Registration</span></h1>
  <form action="#" method="post">
    <fieldset id="step_1">
      <legend>Step 1</legend>
      <label for="num_attendees"> How cool are you? </label>
      <select id="amount">
        <option id="0" value="0">Please Choose</option>
        <option id="prof" value="90.00">Professional</option>
        <option id="grad" value="55.00">Graduate Student</option>
      </select>
      <br />
      <div id="attendee_1_wrap" class="name_wrap push">
        <h3>Who are you?</h3>
        <p>
          <label for="FirstName"> First Name: </label>
          <input type="text" id="FirstName" class="name_input"></input>
        </p>
        <p>
          <label for="LastName"> Last Name: </label>
          <input type="text" id="LastName" class="name_input"></input>
        </p>
        <p>
          <label for="OfficialTitle"> Official Title: </label>
          <input type="text" id="OfficialTitle" class="name_input"></input>
        </p>
        <h3>How do we find you?</h3>
        <label for="email">Email: </label>
        <input id="email" name="email" class="required email" />
        </p>
        <p>
          <label for="Address">Street Address: </label><input name="Address" id="Address" type="text" size="20" maxlength="75" />
        </p>
        <p>
          <label for="City">City: </label><input name="City" id="City" />
        </p>
        <p>
          <label for="State">State: </label>
          <select name="State" id="State">
            <option selected value="IL">IL</option>
            <option value="AL">AL</option> <option value="AK">AK</option> <option value="AZ">AZ</option> <option value="AR">AR</option>
            <option value="CA">CA</option> <option value="CO">CO</option> <option value="CT">CT</option> <option value="DE">DE</option>
            <option value="DC">DC</option> <option value="FL">FL</option> <option value="GA">GA</option> <option value="HI">HI</option>
            <option value="ID">ID</option> <option value="IN">IN</option> <option value="IA">IA</option> <option value="KS">KS</option>
            <option value="KY">KY</option> <option value="LA">LA</option> <option value="ME">ME</option> <option value="MD">MD</option>
            <option value="MA">MA</option> <option value="MI">MI</option> <option value="MN">MN</option> <option value="MS">MS</option>
            <option value="MO">MO</option> <option value="MT">MT</option> <option value="NE">NE</option> <option value="NV">NV</option>
            <option value="NH">NH</option> <option value="NJ">NJ</option> <option value="NM">NM</option> <option value="NY">NY</option>
            <option value="NC">NC</option> <option value="ND">ND</option> <option value="OH">OH</option> <option value="OK">OK</option>
            <option value="OR">OR</option> <option value="PA">PA</option> <option value="RI">RI</option> <option value="SC">SC</option>
            <option value="SD">SD</option> <option value="TN">TN</option> <option value="TX">TX</option> <option value="UT">UT</option>
            <option value="VT">VT</option> <option value="VA">VA</option> <option value="WA">WA</option> <option value="WV">WV</option>
            <option value="WI">WI</option> <option value="WY">WY</option>
          </select>
        </p>
        <p>
          <label for="Zip">Zip Code: </label><input name="Zip" id="Zip" type="text" value="" size="5" maxlength="10" />
        </p>
        <p>
          <label for="Phone">Telephone: </label><input name="Phone" id="Phone" type="text" value="" size="10" maxlength="13" />
        </p>
      </div>
    </fieldset>
    <fieldset id="step_2">
      <legend>Step 2</legend>
      <p> Do you work in Higher Education? </p>
      <input type="radio" id="company_name_toggle_on" name="company_name_toggle_group"></input>
      <label for="company_name_toggle_on">Yes</label>
      &emsp;
      <input type="radio" id="company_name_toggle_off" name="company_name_toggle_group"></input>
      <label for="company_name_toggle_off">No</label>
      <div id="company_name_wrap">
        <label for="company_name"> Which School? </label>
        <input type="text" id="company_name"></input>
      </div>
      <div class="push">
        <p> Will anyone in your group require special accommodations? </p>
        <input type="radio" id="special_accommodations_toggle_on" name="special_accommodations_toggle"></input>
        <label for="special_accommodations_toggle_on">Yes</label>
        &emsp;
        <input type="radio" id="special_accommodations_toggle_off" name="special_accommodations_toggle"></input>
        <label for="special_accommodations_toggle_off">No</label>
      </div>
      <div id="special_accommodations_wrap">
        <label for="special_accomodations_text"> Please explain below: </label>
        <textarea rows="10" cols="10" id="special_accomodations_text"></textarea>
      </div>
    </fieldset>
    <fieldset id="step_3">
      <legend>Step 3</legend>
      <label for="rock"> Are you ready to rock? </label>
      <input type="checkbox" id="rock"></input>
      <p>
        <INPUT onclick="javascript:PaymentButtonClick()" type=button value="Proceed to payment" name=PaymentButton>
        <img src="images/visa1.gif" />
        <img src="images/mastercard1.gif" />
      </p>
    </fieldset>
  </form>
</div>

<FORM name="emailForm" action="mailform.asp" method=post">
  <INPUT type="hidden" value="Conference Registration" name="mf_subject">
  <INPUT type="hidden" value="Yes" name="mf_email_results">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffa0" size="20" name="num_attendees">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffa0" size="17" name="FirstName">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffa0" size="22" name="LastName">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffff" size="64" name="OfficialTitle">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffff" size="40" name="email">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffff" size="48" name="Address">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffa0" size="17" name="City">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffa0" size="17" name="State">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffa0" size="17" name="Zip">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffa0" size="17" name="Phone">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffa0" size="17" name="company_name">
  <INPUT type="hidden" title="" style="BACKGROUND-COLOR: #ffffff" size="20" name="special_accomodations_text">
  <INPUT type="hidden" value="[email protected]" name="mf_from">
  <INPUT type="hidden" value="[email protected]" name="mf_to">
</FORM>

<FORM name="addform" action="https://webcluster.niu.edu/CreditCard/servlet/Shopping_Cart_Add_Item_Servlet" method="post">
  <INPUT type="hidden" value="orient" name="Dept_ID">
  <INPUT type="hidden" value="Orientation" name="Product_Name">
  <INPUT type="hidden" value="z000000" name="Product_Code">
  <INPUT type="hidden" value="" name="amount">
  <INPUT type="hidden" value="/orientation/index.shtml" name="return_link">
  <INPUT type="hidden" value="http://www.niu.edu" name="return_server">
  <INPUT type="hidden" value="1" name="quantity">
  <INPUT type="hidden" value="0" name="tax">
  <INPUT type="hidden" value="0" name="ship">
  <INPUT type="hidden" value="DQ83225" name="sale_id">
  <INPUT type="hidden" value="XXXXXX" name="sale_acct">
</FORM>

<SCRIPT language="Javascript">
function PaymentButtonClick() {
    switch (document.Information.amount.value) {
        case 'prof':
            document.Information.amount.value = 90.00;
            break;
        case 'grad':
            document.Information.amount.value = 55.00;
            break;
    }
    document.addform.Product_Name.value = document.Information.FirstName.value + "," +
        document.Information.LastName.value + "," +
        document.Information.OfficialTitle.value + "," +
        document.Information.email.name + "," + "," +
        document.Information.Address.value + "," +
        document.Information.City.value + "," +
        document.Information.State.value + "," +
        document.Information.Zip.value + "," +
        document.Information.Phone.value + "," +
        document.Information.company_name.value + "," +
        document.Information.special_accomodations_text.value;
    document.addform.Product_Code.value = document.Information.LastName.value;
    if ((document.Information.UCheck.checked==true) && (document.Information.altDate1.value != "") && (document.Information.altDate1.value != "x")) {
        if (document.Information.StudentLastName.value != "" || document.Information.StudentFirstName.value != "" || document.Information.StudentID.value != "" ) {
            document.addform.submit();
        } else {
            alert("Please enter missing information");
        }
    }
}
</SCRIPT>
</body>
</html>

    Read the article

< Previous Page | 124 125 126 127 128