Search Results

Search found 108518 results on 4341 pages for 'reading source code'.


  • A Bridge to Open Source

    Next week, several members of the Open Source Programs office will be in Portland, OR for the second Open Source Bridge conference which takes place over four days...

    Read the article

  • Posting source code in blogger- fails with C# containers

    - by Lirik
    I tried the solutions that are posted in this related SO question and for the most part the code snippets are working, but there are some cases that are still getting garbled by Blogger when it publishes the blog. In particular, declaring generic containers seems to be most troublesome. Please see the code examples on my blog and in particular the section where I define the dictionary (http://mlai-lirik.blogspot.com/). I want to display this:
        static Dictionary<int, List<Delegate>> _delegate = new Dictionary<int, List<Delegate>>();
    But Blogger publishes this:
        static Dictionary<int, list=""><delegate>> _delegate = new Dictionary<int, list=""><delegate>>();
    And it caps the end of my code section with this:
        </delegate></delegate></int,></delegate></int,>
    Apparently Blogger thinks that the <int and <delegate> portions of the dictionary are some sort of HTML tags, and it automatically attempts to close them at the end of the code snippet. Does anybody know how to get around this problem?
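    One common workaround (a suggestion, not from the original question) is to HTML-encode the snippet before pasting it into the post, so the angle brackets reach the browser as entities instead of being parsed as tags. A minimal C# sketch:
        using System;
        using System.Net;

        // Encodes a code snippet so Blogger treats < > & as literal text.
        class EncodeSnippet
        {
            static void Main()
            {
                string code =
                    "static Dictionary<int, List<Delegate>> _delegate = " +
                    "new Dictionary<int, List<Delegate>>();";

                // WebUtility.HtmlEncode turns < > & into &lt; &gt; &amp;,
                // which render literally inside a <pre> block.
                Console.WriteLine(WebUtility.HtmlEncode(code));
            }
        }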

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to mapping an inheritance hierarchy with EF Code First. I've described these strategies in previous posts:
    Part 1 – Table per Hierarchy (TPH)
    Part 2 – Table per Type (TPT)
    In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines for choosing an inheritance strategy, based mainly on what we've learned in this series.
    TPC and Entity Framework in the Past
    Table per Concrete Type is arguably the simplest approach of the three, yet using TPC with EF is one of those concepts that has not been covered very well so far, and some resources have even discouraged it. The reason is simply that the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means that if you are following EF's Database-First or Model-First approaches, configuring TPC requires manually writing XML in the EDMX file, which is not considered a fun practice. Well, no more. You'll see that with Code First, creating a TPC mapping is perfectly possible with the fluent API, just like the other strategies, and you don't need to avoid TPC for lack of designer support as you probably would in the other EF approaches.
    Table per Concrete Type (TPC)
    In Table per Concrete Type (aka Table per Concrete Class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, are mapped to columns of this table. The SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class were concrete, an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns.
    TPC Implementation in Code First
    Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on the EntityMappingConfiguration class called MapInheritedProperties that does exactly this for us.
    Here is the complete object model, as well as the fluent API to create a TPC mapping:
        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                modelBuilder.Entity<BankAccount>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("BankAccounts");
                });
                modelBuilder.Entity<CreditCard>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("CreditCards");
                });
            }
        }
    The Importance of the EntityMappingConfiguration Class
    As a side note, it is worth mentioning that the EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:
        namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
        {
            public class EntityMappingConfiguration<TEntityType> where TEntityType : class
            {
                public ValueConditionConfiguration Requires(string discriminator);
                public void ToTable(string tableName);
                public void MapInheritedProperties();
            }
        }
    As we have seen in this series, we used its Requires method to customize TPH, we used its ToTable method to create a TPT mapping, and now we are using MapInheritedProperties along with ToTable to create our TPC mapping.
    TPC Configuration is Not Done Yet!
    We are not quite done with our TPC configuration; there is more to this story, even though the fluent API above creates a perfectly good TPC mapping in the database. To see why, let's start working with the object model. For example, the following code creates two new objects of the BankAccount and CreditCard types and tries to add them to the database:
        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount();
            CreditCard creditCard = new CreditCard() { CardType = 1 };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }
    Running this code throws an InvalidOperationException with this message:
        The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.
    The reason we get this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method, in turn, by default calls AcceptAllChanges after it has performed the database modifications, and AcceptAllChanges merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them.
    Since the entities are in the Added state, AcceptChanges replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database, and that's where the problem occurs: the database assigned both entities the same primary key value (BillingDetailId = 1 for both), and the ObjectStateManager cannot track two objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence the exception. Looking at the TPC SQL schema, you can see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables has been marked as identity.
    How to Solve the Identity Problem in TPC
    As you saw, SQL Server's int identity columns don't work very well together with TPC, since there will be duplicate entity keys when inserting into subclass tables that all share the same identity seed. Therefore, to solve this, we either need spread seeds (where each table has its own initial seed value), or a mechanism other than SQL Server's int identity. Some other RDBMSes have mechanisms that allow a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. Using GUID keys, or int identity keys with different starting seeds, would solve the problem, but yet another solution is to switch off identity on the primary key property altogether. As a result, we take on the responsibility of providing unique keys when inserting records into the database. We will go with this solution since it works regardless of which database engine is used.
    Switching Off Identity in Code First
    We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. DatabaseGenerated is a new data annotation added to the System.ComponentModel.DataAnnotations namespace in CTP5:
        public abstract class BillingDetail
        {
            [DatabaseGenerated(DatabaseGenerationOption.None)]
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }
    As always, we can achieve the same result with the fluent API, if you prefer that:
        modelBuilder.Entity<BillingDetail>()
                    .Property(p => p.BillingDetailId)
                    .HasDatabaseGenerationOption(DatabaseGenerationOption.None);
    Working With the Object Model
    Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:
        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount()
            {
                BillingDetailId = 1
            };
            CreditCard creditCard = new CreditCard()
            {
                BillingDetailId = 2,
                CardType = 1
            };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }
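    Hand-assigning ids (1, 2, ...) is fine for a demo, but application code usually centralizes key generation. The helper below is a hypothetical sketch, not from the original post: it assumes a single application instance and a counter seeded from the database at startup; a multi-server system would need a shared sequence table, GUID keys, or a HiLo-style allocator instead.
        using System.Threading;

        // Hypothetical helper, not part of EF: hands out ids that are unique
        // across all BillingDetail tables. Seed it once at startup, e.g. from
        // SELECT MAX(BillingDetailId) over BankAccounts and CreditCards.
        public static class BillingDetailKeys
        {
            private static int _lastId;

            public static void Seed(int lastUsedId)
            {
                _lastId = lastUsedId;
            }

            public static int Next()
            {
                // Thread-safe increment, so concurrent inserts get distinct keys.
                return Interlocked.Increment(ref _lastId);
            }
        }
    With this in place, the example above becomes new BankAccount { BillingDetailId = BillingDetailKeys.Next() }.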
    Polymorphic Associations with TPC are Problematic
    The main problem with this approach is that it doesn't support polymorphic associations very well. After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced earlier in this series, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema: if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column that referred to both concrete subclass tables, which isn't possible with regular foreign key constraints.
    Schema Evolution with TPC is Complex
    A further conceptual problem with this mapping strategy is that several different columns, in different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.
    Generated SQL
    Let's examine the SQL output for polymorphic queries in a TPC mapping. For example, consider this polymorphic query for all BillingDetails (the original post shows the generated SQL as a screenshot):
        var query = from b in context.BillingDetails select b;
    Just like the SQL generated by the TPT mapping, the CASE statements at the beginning of the query merely ensure that columns that are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard).
    TPC's SQL Queries are Union Based
    The first SELECT uses a FROM-clause subquery to retrieve all instances of BillingDetail from all the concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; EF reads this literal to instantiate the correct class for the data in a particular row. A union requires that the queries being combined project over the same columns; hence, EF has to pad out the nonexistent columns with NULL. This query generally performs well, since we can let the database optimizer find the best execution plan for combining rows from several tables. There are also no joins involved, so it performs better than the SQL generated by TPT, where a join is required between the base and subclass tables.
    Choosing Strategy Guidelines
    Before we get into this discussion, I want to emphasize that there is no single best strategy that fits all scenarios. As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb for identifying the best strategy in a particular scenario:
    If you don't require polymorphic associations or queries, lean toward TPC; in other words, if you never or rarely query for BillingDetails and you have no class with an association to the BillingDetail base class. I recommend TPC only for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.
    If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH.
    Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.
    If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly in the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.
    By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it might not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.
    Summary
    In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, namely inheritance, and discussed how EF solves this problem as an ORM. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully this gives you better insight into mapping inheritance hierarchies, as well as into choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!
    References
    ADO.NET team blog
    Java Persistence with Hibernate book

    Read the article

  • Using Git in Enterprise environment

    - by sarat
    Git is an excellent version control system. If we exclude the fact that it doesn't have excellent GUI support, it's really good and fast. But source control systems like ClearCase have extensive support for enterprise customers, and companies invest huge amounts in source control servers and licenses. Of late, most of the large companies, like Google, have been adopting Git over the other version control systems. But such a company usually has a strong open source group which consistently provides development and support for the tool (they might even have a custom version of Git of their own). At the same time, most large companies are not really set up to adopt open source projects and make them work for their needs. Is Git really a reliable tool for an enterprise environment, especially on the Windows platform? Support is a concern for Git, as it's an open source version control system. Are there companies that provide solutions and support for it? How do the server costs compare to other version control systems like ClearCase?

    Read the article

  • UnsatisfiedLinkError on Websphere Application Server 6.1 Data Source

    - by user338154
    Hi, I am unable to start the installed app on my WAS instance. I believe the root cause is an UnsatisfiedLinkError, shown as follows:
        Caused by: java.lang.UnsatisfiedLinkError: no ocijdbc10 in java.library.path
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1682)
            at java.lang.Runtime.loadLibrary0(Runtime.java:822)
            at java.lang.System.loadLibrary(System.java:993)
            at oracle.jdbc.driver.T2CConnection$1.run(T2CConnection.java:3147)
            at java.security.AccessController.doPrivileged(Native Method)
            at oracle.jdbc.driver.T2CConnection.loadNativeLibrary(T2CConnection.java:3143)
            at oracle.jdbc.driver.T2CConnection.logon(T2CConnection.java:221)
            at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:441)
            at oracle.jdbc.driver.T2CConnection.<init>(T2CConnection.java:132)
            at oracle.jdbc.driver.T2CDriverExtension.getConnection(T2CDriverExtension.java:78)
            at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:801)
            at oracle.jdbc.pool.OracleDataSource.getPhysicalConnection(OracleDataSource.java:297)
            at oracle.jdbc.xa.client.OracleXADataSource.getPooledConnection(OracleXADataSource.java:515)
            at oracle.jdbc.xa.client.OracleXADataSource.getXAConnection(OracleXADataSource.java:159)
            at oracle.jdbc.xa.client.OracleXADataSource.getXAConnection(OracleXADataSource.java:133)
            at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper$1.run(InternalGenericDataStoreHelper.java:935)
            at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
            at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper.getPooledConnection(InternalGenericDataStoreHelper.java:972)
            at com.ibm.ws.rsadapter.spi.WSRdbDataSource.getPooledConnection(WSRdbDataSource.java:1625)
            at com.ibm.ws.rsadapter.spi.WSManagedConnectionFactoryImpl.createManagedConnection(WSManagedConnectionFactoryImpl.java:1220)
            at com.ibm.ejs.j2c.FreePool.createManagedConnectionWithMCWrapper(FreePool.java:1988)
            at com.ibm.ejs.j2c.FreePool.createOrWaitForConnection(FreePool.java:1660)
            at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:2341)
            at com.ibm.ejs.j2c.ConnectionManager.allocateMCWrapper(ConnectionManager.java:932)
            at com.ibm.ejs.j2c.ConnectionManager.allocateConnection(ConnectionManager.java:608)
            at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:449)
            at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:418)
            at org.apache.ojb.broker.accesslayer.ConnectionFactoryAbstractImpl.newConnectionFromDataSource(Unknown Source)
            at org.apache.ojb.broker.accesslayer.ConnectionFactoryAbstractImpl.lookupConnection(Unknown Source)
            at org.apache.ojb.broker.accesslayer.ConnectionFactoryManagedImpl.lookupConnection(Unknown Source)
            at org.apache.ojb.broker.accesslayer.ConnectionManagerImpl.getConnection(Unknown Source)
            at org.apache.ojb.broker.accesslayer.StatementManager.getPreparedStatement(Unknown Source)
            at org.apache.ojb.broker.accesslayer.JdbcAccessImpl.executeQuery(Unknown Source)
            at org.apache.ojb.broker.accesslayer.RsQueryObject.performQuery(Unknown Source)
            at org.apache.ojb.broker.accesslayer.RsIterator.<init>(Unknown Source)
            at org.apache.ojb.broker.core.RsIteratorFactoryImpl.createRsIterator(Unknown Source)
            at org.apache.ojb.broker.core.PersistenceBrokerImpl.getRsIteratorFromQuery(Unknown Source)
            at org.apache.ojb.broker.core.PersistenceBrokerImpl.getIteratorFromQuery(Unknown Source)
            at org.apache.ojb.broker.core.QueryReferenceBroker.getCollectionByQuery(Unknown Source)
            at org.apache.ojb.broker.core.QueryReferenceBroker.getCollectionByQuery(Unknown Source)
            at org.apache.ojb.broker.core.QueryReferenceBroker.getCollectionByQuery(Unknown Source)
            at org.apache.ojb.broker.core.PersistenceBrokerImpl.getCollectionByQuery(Unknown Source)
            at org.apache.ojb.broker.core.DelegatingPersistenceBroker.getCollectionByQuery(Unknown Source)
            at org.apache.ojb.broker.core.DelegatingPersistenceBroker.getCollectionByQuery(Unknown Source)
            at com.ascential.xmeta.persistence.orm.impl.ojb.OjbPersistentEObjectPersistenceRegistry.loadPackageCache(OjbPersistentEObjectPersistenceRegistry.java:371)
            ... 115 more
    My LD_LIBRARY_PATH variable for the 'was' user is /opt/oracle/product/10.2.0/lib. What else should I be checking to fix this error? Please help. Thanks.

    Read the article

  • Concentrating on tedious manuals

    - by intuited
    Reading manuals is often boring — sometimes so boring that it becomes all but impossible to focus on the task. Yet... so essential. As frustrating as it is to have to reread the same 2-page section four times in order to finally read the whole thing the whole way through without mentally skipping off to Marseilles in mid-paragraph, it's much worse to realize at some later point that you could have saved yourself hours of work by doing so. What sorts of jedi mind tricks — and not-so-jedi ones — can be employed to keep the mind focused on absorbing this critical matter?

    Read the article

  • Introduction to Reading Electronics Schematics [Video]

    - by Jason Fitzpatrick
    If you're interested in electronics tinkering but a bit overwhelmed by learning electronics schematics, this helpful introductory video will get you started. Courtesy of Make magazine, this video tutorial covers what a schematic is, how schematics are laid out, and the basics of reading a schematic and its component symbols. When you're done with the video you'll have a better grasp of electronic circuit schematics than most of the population and, hopefully, an increased comfort reading schematics for all those DIY projects we post here. Collin's Lab: Schematics [Make via Hacked Gadgets]

    Read the article

  • Using the Static Code Analysis feature of Visual Studio (Premium/Ultimate) to find memory leakage problems

    - by terje
    Memory for managed code is handled by the garbage collector, but if you use unmanaged resources of any kind (native resources, open files, streams, window handles), your application may leak memory if these are not properly released. To handle such resources, the classes that own them should implement the IDisposable interface, and preferably implement it according to the pattern described for that interface.
    When you suspect a memory leak, the immediate impulse is to start up a memory profiler and dig in. However, before you follow that impulse, do a Static Code Analysis run with a ruleset tuned to finding possible memory leak problems in your code. If you get any warnings from this, fix them before you go on with the profiling.
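    For reference, here is a minimal sketch of that dispose pattern in C#. It is a generic illustration, not code from this post; the FileStream stands in for any managed resource and the IntPtr for an assumed native one.
        using System;
        using System.IO;

        public class ResourceHolder : IDisposable
        {
            private FileStream _stream = File.OpenRead("data.bin"); // managed resource
            private IntPtr _nativeBuffer;                           // assumed native resource
            private bool _disposed;

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this); // finalizer no longer needed (see CA1816)
            }

            protected virtual void Dispose(bool disposing)
            {
                if (_disposed) return;
                if (disposing)
                {
                    _stream.Dispose(); // dispose managed resources only when called via Dispose()
                }
                ReleaseNativeBuffer(); // release native resources on both paths
                _disposed = true;
            }

            ~ResourceHolder() // backstop in case Dispose is never called
            {
                Dispose(false);
            }

            private void ReleaseNativeBuffer()
            {
                // Hypothetical: free _nativeBuffer via the matching native API.
            }
        }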
    How to Use a Ruleset
    In Visual Studio 2010 (Premium and Ultimate editions) you can define your own rulesets containing a list of Static Code Analysis checks. I have defined the memory checks shown in the lists below as ruleset files, which can be downloaded (see the bottom of this post). Once you have them, you can easily attach them to every project in your solution using the Solution Properties dialog: right-click the solution and choose Properties at the bottom, or use the Analyze menu and choose "Configure Code Analysis for Solution". In this dialog you can choose the Memorycheck ruleset for every project you want to investigate. Pressing Apply or OK opens every project file and changes the project's code analysis ruleset to the one specified here.
    How to Define Your Own Ruleset (skip this if you just download my predefined rulesets)
    If you want to define the ruleset yourself: open the properties of any project, choose the Code Analysis tab near the bottom, choose any ruleset in the drop box, and press Open. Clear out all the rules by selecting "Source Rule Sets" in the Group By box and unchecking the box. Then change the Group By box to ID, and select the checks you want to include from the lists below. Note that you can change the action for each check to Warning, Error, or None, None being the same as unchecking the check. Now go to the properties window, set a new name and description for your ruleset, save it (File/Save As) using the new name, and use it for your projects as detailed above. It can also be wise to add the ruleset to your solution as a solution item; that way it's there if you want to enable Code Analysis in some of your TFS builds.
    Running the Code Analysis
    In Visual Studio 2010 you can either run code analysis project by project, using the context menu in the Solution Explorer and choosing "Run Code Analysis", or you can define a new solution configuration (call it, for example, Debug (Code Analysis)) and, for each project in it, enable "Enable Code Analysis on Build". In Visual Studio Dev-11 it is all much simpler: just go to the solution root in the Solution Explorer, right-click, and choose "Run code analysis on solution".
    The Ruleset Checks
    The following are the essential and critical memory checks. Each entry lists the check ID, the message, whether it can be ignored, and a link to its description with fix suggestions.
        CA1001  Types that own disposable fields should be disposable.
                Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182172.aspx
        CA1049  Types that own native resources should be disposable.
                Can be ignored: Only if the pointers assumed to point to unmanaged resources point to something else. http://msdn.microsoft.com/en-us/library/ms182173.aspx
        CA1063  Implement IDisposable correctly.
                Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms244737.aspx
        CA2000  Dispose objects before losing scope.
                Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182289.aspx
        CA2115 (see note 1)  Call GC.KeepAlive when using native resources.
                Can be ignored: See description. http://msdn.microsoft.com/en-us/library/ms182300.aspx
        CA2213  Disposable fields should be disposed.
                Can be ignored: If you are not responsible for their release, or if Dispose occurs at a deeper level. http://msdn.microsoft.com/en-us/library/ms182328.aspx
        CA2215  Dispose methods should call base class dispose.
                Can be ignored: Only if the call to base happens at a deeper calling level. http://msdn.microsoft.com/en-us/library/ms182330.aspx
        CA2216  Disposable types should declare a finalizer.
                Can be ignored: Only if the type does not implement IDisposable for the purpose of releasing unmanaged resources. http://msdn.microsoft.com/en-us/library/ms182329.aspx
        CA2220  Finalizers should call base class finalizers.
                Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182341.aspx
    Note 1: Does not result in a memory leak, but may cause the application to crash.
    The list below is a set of optional checks that may be enabled for your ruleset, because the issues they point to often arise as a result of attempting to fix the warnings from the first set. Each entry lists the check ID, the message, the type of fault, whether it can be ignored, and a link to its description with fix suggestions.
        CA1060  Move P/Invokes to NativeMethods class.
                Fault: Security. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182161.aspx
        CA1816  Call GC.SuppressFinalize correctly.
                Fault: Performance. Can be ignored: Sometimes, see description. http://msdn.microsoft.com/en-us/library/ms182269.aspx
        CA1821  Remove empty finalizers.
                Fault: Performance. Can be ignored: No. http://msdn.microsoft.com/en-us/library/bb264476.aspx
        CA2004  Remove calls to GC.KeepAlive.
                Fault: Performance and maintainability. Can be ignored: Only if it is not technically correct to convert to SafeHandle. http://msdn.microsoft.com/en-us/library/ms182293.aspx
        CA2006  Use SafeHandle to encapsulate native resources.
                Fault: Security. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182294.aspx
        CA2202  Do not dispose of objects multiple times.
                Fault: Exception (System.ObjectDisposedException). Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182334.aspx
        CA2205  Use managed equivalents of Win32 API.
                Fault: Maintainability and complexity. Can be ignored: Only if the replacement doesn't provide the needed functionality. http://msdn.microsoft.com/en-us/library/ms182365.aspx
        CA2221  Finalizers should be protected.
                Fault: Incorrect implementation, only possible in MSIL coding. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182340.aspx
    Downloadable Ruleset Definitions
    I have defined three rulesets: Inmeta.Memorycheck, with the rules in the first list above; Inmeta.Memorycheck.Optionals, containing the rules in the second list; and Inmeta.Memorycheck.All, containing the sum of the two. All three rulesets can be found in the zip archive "Inmeta.Memorycheck", downloadable from here.
    Links to Some Other Resources Relevant to Static Code Analysis
    MSDN Magazine article by Mickey Gousset on Static Code Analysis in VS2010
    MSDN: Analyzing Managed Code Quality by Using Code Analysis (root of the documentation for this)
    Preventing generated code from being analyzed using attributes
    Online training course on Using Code Analysis with VS2010
    Blog post by Tatham Oddie on custom code analysis rules
    How to write custom rules, from the Microsoft Code Analysis Team Blog
    Microsoft Code Analysis Team Blog

    Read the article

  • Oracle ATG Web Commerce 10 Implementation Developer Boot Camp - Reading (UK) - October 1-12, 2012

    - by Richard Lefebvre
    REGISTER NOW: Oracle ATG Web Commerce 10 Implementation Developer Boot Camp, Reading, UK, October 1-12, 2012! OPN invites you to join us for a 10-day implementation boot camp on Oracle ATG Web Commerce in Reading, UK from October 1-12, 2012. This 10-day boot camp is designed to provide partners with hands-on experience and technical training to successfully build and deploy Oracle ATG Web Commerce 10 applications. This particular boot camp is focused on helping partners develop the essential skills needed to implement every aspect of an ATG Commerce application from scratch (not CRS-based), with the specific goal of giving experienced Java/J2EE developers a path towards becoming functional, effective and contributing members of an ATG implementation team. Built for new and experienced ATG developers alike, the collaborative nature of this program and its exercises has proven to be highly effective and extremely valuable in learning the best practices for implementing ATG solutions. Though not required, this boot camp provides a structured path to earning a Certified Oracle ATG Web Commerce 10 Specialization!
    What Is Covered: This boot camp is for Application Developers and Software Architects wanting to gain valuable insight into ATG application development best practices, as well as relevant and applicable implementation experience on projects modeled after four of the most common types of applications built on the ATG platform. The following learning objectives are all critical, and are of equal priority. This boot camp will help with:
    - Building a basic, functional, transaction-ready ATG Web Commerce 10 application.
    - Utilizing ATG's platform features, such as scenarios, slots, targeters, user profiles and segments, to create a personalized user experience.
    - Building Nucleus components to support and/or extend application functionality.
    - Understanding the intricacies of ATG order checkout and fulfillment.
    - Specifying, designing and implementing new commerce features in ATG 10.
    - Building a functional commerce application modeled after four of the most common types of applications built on the ATG platform, within an agile-based project team environment and under simulated real-world project conditions.
    Duration: The Oracle ATG Web Commerce 10 Implementation Developer Boot Camp is an instructor-led workshop spanning 10 days.
    Audience: Application Developers; Software Architects.
    Prerequisite Training and Environment Requirements:
    - Programming and markup experience with Java/J2EE, JavaScript, XML, HTML and CSS.
    - Completion of the Oracle ATG Web Commerce 10 Implementation Specialist Development Guided Learning Path modules.
    - Participants must bring their own laptop meeting the minimum specifications: 64-bit PC and OS (e.g. Windows 7 64-bit), 4GB RAM or more, 40GB hard disk space. Laptops will require access to the Internet through Remote Desktop via Windows.
    Agenda Topics
    Week 1 – Days 1 through 5: Build a Basic Commerce Application
    In week one of the boot camp training, we will apply knowledge learned from the ATG Web Commerce 10 Implementation Developer Guided Learning Path modules towards building a basic, transaction-ready commerce application. There will be little to no lecturing in this boot camp, as developers will be fully engaged in ATG application development activities and best practices.
    Developers will work independently on the following lab assignments during days 1 through 5:
    1. Environment Setup
    2. Build a dynamic Home Page
    3. Site Authentication
    4. Build Customer Registration
    5. Display Top Level Categories
    6. Display Product Sub-Categories
    7. Display Product List Page
    8. Display Product Detail Page
    9. ATG Inventory
    10. Build "Add to Cart" Functionality
    11. Build Shopping Cart
    12. Build Checkout Page
    13. Build Checkout Review Page
    14. Create an Order and Build Order Confirmation Page
    15. Implement Slots and Targeters for Personalization
    16. Implement Pricing and Promotions
    17. Order Fulfillment
    Week 2 – Days 6 through 10: Team-based Case Project
    In the second week of the boot camp training, participants will be asked to join a project team that will select a case project for the team to implement. Teams will be able to choose from four of the most common application types developed and deployed on the ATG platform: hard goods with physical fulfillment; soft goods with electronic fulfillment; a service or subscription case example; or a course/event registration case example. Team projects will have approximately 160 hours of use cases/stories for each team to build (40 hours per developer). Each day's use cases/stories will build upon the prior day's work, and therefore must be fully completed at the end of each day. Please note that this boot camp intends to simulate real-world project conditions, and as such will likely require project teams to work beyond normal business hours. To promote further collaboration and group learning, each team will be asked to present their work and share the methodologies and solutions they have applied to their cases at the end of each day.
    Location: Oracle Reading CVC, TPC510 Room: Wraysbury, Reading, UK, 9:00 AM – 5:00 PM
    Registration Fee (10 Days): US $3,375
    Please click on the following link to REGISTER, or visit the Oracle ATG Web Commerce 10 Implementation Developer Boot Camp page for more information.
    Questions: Patrick Ty, Partner Enablement, Oracle Commerce. Phone: 310.343.7687, Mobile: 310.633.1013, Email: [email protected]

    Read the article

  • Source of parsers for programming languages?

    - by Arkaaito
    I'm dusting off an old project of mine which calculates a number of simple metrics about large software projects. One of the metrics is the length of files/classes/methods. Currently my code "guesses" where class/method boundaries are based on a very crude algorithm (traverse the file, maintaining a "current depth" and adjusting it whenever you encounter unquoted brackets; when you return to the level a class or method began on, consider it exited). However, there are many problems with this procedure, and a "simple" way of detecting when your depth has changed is not always effective. To make this give accurate results, I need to use the canonical way (in each language) of detecting function definitions, class definitions and depth changes. This amounts to writing a simple parser to generate parse trees containing at least these elements for every language I want my project to be applicable to. Obviously parsers have been written for all these languages before, so it seems like I shouldn't have to duplicate that effort (even though writing parsers is fun). Is there some open-source project which collects ready-to-use parser libraries for a bunch of source languages? Or should I just be using ANTLR to make my own from scratch? (Note: I'd be delighted to port the project to another language to make use of a great existing resource, so if you know of one, it doesn't matter what language it's written in.)
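    For concreteness, here is a sketch of the kind of crude depth-tracking the question describes. It is hypothetical code written for this page, not taken from the project, and it shows why the approach is fragile: there is no handling of comments, escape sequences, or brace-less blocks.
        using System;

        // Naive brace-depth tracker: counts '{' and '}' outside of quotes.
        class DepthGuesser
        {
            static void Main()
            {
                string source = "class C { void M() { var s = \"}\"; } }";
                int depth = 0;
                bool inString = false;

                foreach (char c in source)
                {
                    if (c == '"') inString = !inString; // no escape handling!
                    if (inString) continue;
                    if (c == '{') depth++;
                    if (c == '}')
                    {
                        depth--;
                        Console.WriteLine("block closed at depth " + depth);
                    }
                }
            }
        }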

    Read the article

  • Microsoft and the open source community

    - by Charles Young
    For the last decade, I have repeatedly, in my imitable Microsoft fan boy style, offered an alternative view to commonly held beliefs about Microsoft's stance on open source licensing.  In earlier times, leading figures in Microsoft were very vocal in resisting the idea that commercial licensing is outmoded or morally reprehensible.  Many people interpreted this as all-out corporate opposition to open source licensing.  I never read it that way. It is true that I've met individual employees of Microsoft who are antagonistic towards FOSS (free and open source software), but I've met more who are supportive or at least neutral on the subject.  In any case, individual attitudes of employees don't necessarily reflect a corporate stance.  The strongest opposition I've encountered has actually come from outside the company.  It's not a charitable thought, but I sometimes wonder if there are people in the .NET community who are opposed to FOSS simply because they believe, erroneously, that Microsoft is opposed. Here, for what it is worth, are the points I've repeated endlessly over the years and which have often been received with quizzical scepticism. a)  A decade ago, Microsoft's big problem was not FOSS per se, or even with copyleft.  The thing which really kept them awake at night was the fear that one day, someone might find, deep in the heart of the Windows code base, some code that should not be there and which was published under GPL.  The likelihood of this ever happening has long since faded away, but there was a time when MS was running scared.  I suspect this is why they held out for a while from making Windows source code open to inspection.  Nowadays, as an MVP, I am positively encouraged to ask to see Windows source. b)  Microsoft has never opposed the open source community.  They have had problems with specific people and organisations in the FOSS community.  Back in the 1990s, Richard Stallman gave time and energy to a successful campaign to launch antitrust proceedings against Microsoft.  In more recent times, the negative attitude of certain people to Microsoft's submission of two FOSS licences to the OSI (both of which have long since been accepted), and the mad scramble to try to find any argument, however tenuous, to block their submission was not, let us say, edifying. c) Microsoft has never, to my knowledge, written off the FOSS model.  They certainly don't agree that more traditional forms of licensing are inappropriate or immoral, and they've always been prepared to say so.  One reason why it was so hard to convince people that Microsoft is not rabidly antagonistic towards FOSS licensing is that so many people think they have no involvement in open source.  A decade ago, there was virtually no evidence of any such involvement.  However, that was a long time ago.  Quietly over the years, Microsoft has got on with the job of working out how to make use of FOSS licensing and how to support the FOSS community.  For example, as well as making increasingly extensive use of Github, they run an important FOSS forge (CodePlex) on which they, themselves, host many hundreds of distinct projects.  The total count may even be in the thousands now.  I suspect there is a limit of about 500 records on CodePlex searches because, for the past few years, whenever I search for Microsoft-specific projects on CodePlex, I always get approx. 500 hits.  
Admittedly, a large volume of the stuff they publish under FOSS licences amounts to code samples, but many of those 'samples' have grown into useful and fully featured frameworks, libraries and tools. All this is leading up to the observation that yesterday's announcement by Scott Guthrie marks a significant milestone and should not go unnoticed.  If you missed it, let me summarise.   From the first release of .NET, Microsoft has offered a web development framework called ASP.NET.  The core libraries are included in the .NET framework which is released free of charge, but which is not open source.   However, in recent years, the number of libraries that constitute ASP.NET have grown considerably.  Today, most professional ASP.NET web development exploits the ASP.NET MVC framework.  This, together with several other important parts of the ASP.NET technology stack, is released on CodePlex under the Apache 2.0 licence.   Hence, today, a huge swathe of web development on the .NET/Azure platform relies four-square on the use of FOSS frameworks and libraries. Yesterday, Scott Guthrie announced the next stage of ASP.NET's journey towards FOSS nirvana.  This involves extending ASP.NET's FOSS stack to include Web API and the MVC Razor view engine which is rapidly becoming the de facto 'standard' for building web pages in ASP.NET.  However, perhaps the more important announcement is that the ASP.NET team will now accept and review contributions from the community.  Scott points out that this model is already in place elsewhere in Microsoft, and specifically draws attention to development of the Windows Azure SDKs.  These SDKs are central to Azure development.   The .NET and Java SDKs are published under Apache 2.0 on Github and Microsoft is open to community contributions.  Accepting contributions is a more profound move than simply releasing code under FOSS licensing.  It means that Microsoft is wholeheartedly moving towards a full-blooded open source approach for future evolution of some of their central and most widely used .NET and Azure frameworks and libraries.  In conjunction with Scott's announcement, Microsoft has also released Git support for CodePlex (at long last!) and, perhaps more importantly, announced significant new investment in their own FOSS forge. Here at Solidsoft we have several reasons to be very interested in Scott's announcement. I'll draw attention to one of them.  Earlier this year we wrote the initial version of a new UK Government web application called CloudStore.  CloudStore provides a way for local and central government to discover and purchase applications and services. We wrote the web site using ASP.NET MVC which is FOSS.  However, this point has been lost on the ladies and gentlemen of the press and, I suspect, on some of the decision makers on the government side.  They announced a few weeks ago that future versions of CloudStore will move to a FOSS framework, clearly oblivious of the fact that it is already built on a FOSS framework.  We are, it is fair to say, mildly irked by the uninformed and badly out-of-date assumption that “if it is Microsoft, it can't be FOSS”.  Old prejudices live on.

    Read the article

  • Database source control

    - by Bojan Skrchevski
    Should database files (scripts, etc.) be under source control? If so, what is the best method for keeping and updating them there? Is there even a need for database files to be in source control, since we can put the database on a development server where everyone can use it and make changes to it if needed? But then we can't get it back if someone messes it up. What approach is best for databases and source control?

    Read the article

  • Do you still limit line length in code?

    - by Noldorin
    This is a matter on which I would like to gauge the opinion of the community: do you still limit the length of lines of code to a fixed maximum? This was certainly a convention of the past for many languages; one would typically cap the number of characters per line at a value such as 80 (and more recently 100 or 120, I believe). As far as I understand, the primary reasons for limiting line length are:
    - Readability: you don't have to scroll horizontally when you want to see the end of some lines.
    - Printing: admittedly (at least in my experience), most code that you work on does not get printed out on paper, but by limiting the number of characters you can ensure that formatting doesn't get messed up when printed.
    - Past editors (?): not sure about this one, but I suspect that at some point in the distant past of programming, (at least some) text editors may have been based on a fixed-width buffer.
    I'm sure there are points that I am still missing, so feel free to add to these... Now, when I observe C or C# code nowadays, I often see a number of different styles, the main ones being:
    - Line length capped to 80, 100, or even 120 characters. As far as I understand, 80 is the traditional length, but the longer limits of 100 and 120 have appeared because of the widespread use of high resolutions and widescreen monitors nowadays.
    - No line length capping at all. This tends to be pretty horrible to read, and I don't see it too often, though it's certainly not too rare either.
    - Inconsistent capping of line length: the length of some lines is limited to a fixed maximum (or even a maximum that changes depending on the file/location in code), while others (possibly comments) are not limited at all.
    My personal preference here (at least recently) has been to cap the line length to 100 in the Visual Studio editor. This means that in a decently sized window (on a non-widescreen monitor), the ends of lines are still fully visible. I can however see a few disadvantages to this, especially when you end up writing code that's indented 3 or 4 levels and then have to include a long string literal, though I often take this as a sign to refactor my code! In particular, I am curious what C and C# coders (or anyone who uses Visual Studio, for that matter) think about this point, though I would be interested in hearing anyone's thoughts on the subject.
    Edit: Thanks for all the answers. I appreciate the variety of opinions here, all presenting sound reasons. Consensus does seem to be tipping in the direction of always (or almost always) limiting line length. Interestingly, line-length limits appear in various coding standards; judging by some of the answers, both the Python and Google C++ guidelines set the limit at 80 characters. I haven't seen anything similar regarding C# or VB.NET, but I would be curious to see if such standards exist anywhere.

    Read the article

  • Creating reproducible builds to verify Free Software

    - by mikkykkat
    Free Software is about freedom and privacy. Open Source software is great, but making that fully practical usually doesn't happen. Most Free Software developers publish binaries that we can't verify were really compiled from the source code, or that haven't had something bad injected already! We have the freedom to change the code, but privacy for ordinary users is missing. For desktop software there are a lot of languages and opportunities to create Free Software with a reproducible build process (compiling the source code so that it always produces the exact same binary), but for mobile computing I don't know if the same thing is possible. Mobile devices are probably the future of computing, and Android is the only Open Source environment so far which accepts Java for coding. Compiling the same Android application won't result in the exact same binary every time. For Open Source Android apps, how can we verify that the produced binary (.apk) is really compiled from the source code? Is there any way to create reproducible builds with the Android SDK, or does Java fail here for Free Software? Has any Java software ever been written with a reproducible build?
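    The verification step itself is simple once builds are reproducible: rebuild from source and compare cryptographic hashes of the published and locally built binaries. A minimal C# sketch (the file names are placeholders, not real artifacts):
        using System;
        using System.IO;
        using System.Security.Cryptography;

        // Rebuild-and-compare check: with a reproducible build process, the
        // locally built artifact hashes identically to the published one.
        class VerifyBuild
        {
            static string Sha256Of(string path)
            {
                using (var sha = SHA256.Create())
                using (var stream = File.OpenRead(path))
                {
                    return BitConverter.ToString(sha.ComputeHash(stream));
                }
            }

            static void Main()
            {
                string official = Sha256Of("app-official.apk");
                string rebuilt  = Sha256Of("app-rebuilt.apk");

                Console.WriteLine(official == rebuilt
                    ? "Hashes match: the binary corresponds to the source."
                    : "Hashes differ: the build is not reproducible, or the binary was altered.");
            }
        }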

    Read the article

  • Shared Data Source name error underscore characters added

    - by mick
    The name of our shared data source in RS (Report Server) is "AF1 Live Database" (no underscore characters, just spaces between words), and it is the same in Report Builder in VS. However, the following error pops up when the RDL of this report is uploaded onto our company site and run:
        The report server cannot process the report or shared dataset. The shared data source 'AF1_Live_Database' for the report server or SharePoint site is not valid. Browse to the server or site and select a shared data source. (rsInvalidDataSourceReference)
    We have no idea why the error reports the shared data source as 'AF1_Live_Database', with underscore characters. As this appears to be the problem that keeps the report from running, we are seeking your help. Thanks.

    Read the article

  • Who is code wanderer?

    - by DigiMortal
    In every area of life there are people with bad habits or misbehaviors that affect the work process. Software development is not free of this kind of people either. Today I will introduce you to the code wanderer.
    Who is the code wanderer? Code wandering is more of a bad habit than a serious diagnosis. Code wanderers tend to review and "fix" source code in files written by others. When a code wanderer has some free moments, he or she starts opening code files he or she has never seen before and making little fixes to them.
    Why is the code wanderer dangerous? These fixes seem correct, and are usually the first things one would do to make the code nicer. But as the changes are made by a coder who has no idea about the code he or she is "fixing", the "fixing" usually ends up messing up working code written by others. Often these "fixes" are not found immediately, because they don't introduce errors that compilers detect. So these "fixes" easily find their way to production environments, because there is also a very good chance that the "fixed" code goes through all tests without any problems.
    How to stop the code wanderer? The first thing is to talk with the person and explain why those changes are dangerous. It is also good to establish rules that state clearly why, when and how somebody can change code written by other people. If this does not work, it is possible to isolate this person so he or she posts changes to the code repository as patches, and somebody reviews those changes before applying them.

    Read the article

  • What would you add to Code Complete 3rd Edition?

    - by Peter Turner
    It's been quite a few years since Code Complete was published. I really love the book, I keep it in the bathroom at the office and read a little out of it once or twice a day. I was just wondering for the sake of wonderment, what kinds of things need to be added to Code Complete 3e, and for the sake of reductionism, what kinds of things would be removed. Also, what languages would you use for code examples?

    Read the article

  • Sweet and Sour Source Control

    - by Tony Davis
    Most database developers don't use source control. A recent anonymous poll on SQL Server Central asked its readers "Which Version Control system do you currently use to store your database scripts?" The winner, with almost 30% of the vote, was... none: "We don't use source control for database scripts". In second place, with almost 28% of the vote, was Microsoft's VSS. VSS? Given its reputation for being buggy, unstable and lacking most of the basic features required of a proper source control system, answering VSS is really just another way of saying "I don't use source control". At first glance, it's a surprising thought. You wonder how database developers can work in a team without it: how they find out what changed when the system worked before but is now broken; work out what happened to changes that now seem to have vanished; roll back a mistake quickly so that the rest of the team has a functioning build; find instantly whether a suspect change has been deployed to production. Unfortunately, the survey didn't ask about the scale of the database development, and correlate the two questions. If there is only one database developer within a schema, who has an automated approach to regularly generating build scripts, then the need for a formal source control system is questionable. After all, a database carries far more of its own metadata than a traditional compiled application does. However, what is meat for a small development is poison for a team-based development. Here, we need a form of source control that can reconcile simultaneous changes, store the history of changes, derive versions and builds, and cope with forks and merges. The problem comes when one borrows a solution that was designed for conventional programming. A database is not thought of as a "file", but as a vast, interdependent and intricate matrix of tables, indexes, constraints, triggers, enumerations, static data and so on, all subtly interconnected. It is an awkward fit. Subversion, with its support for merges and forks and its tolerance of different work practices, can be made to work well, if used carefully. It has a standards-based architecture that allows it to be used on all platforms, such as Windows, Mac, and Linux. In the words of Erland Sommarskog, developers should "just do it". What's in a database is akin to a "binary file", and the developer must work only from the file. You check out the file, edit it, and save it to disk to compile it. Dependencies are validated at this point, and if you've broken anything (e.g. you renamed a column and broke all the objects that reference it), you'll find out about it right away, and you'll be forced to fix it. Nevertheless, for many this is an alien way of working with SQL Server. Subversion is the powerhouse, not the GUI. It doesn't work seamlessly with your existing IDE, and that usually means SSMS. So the question then becomes more subtle: would developers be less reluctant to use a fully featured source (revision) control system for team database development if they had a turn-key, reliable system that fitted in with their existing work practices? I'd love to hear what you think. Cheers, Tony.

    Read the article

  • How to preserve &amp; in <pre><code>

    - by Marcy Sutton
    I am having trouble preserving an ampersand in a code example on my blog, because all HTML entities start with &. Any tips? For example:
        <pre>
        <code>
        $pageTitle = str_replace('&', ' &amp;', $page->attributes()->title);
        </code>
        </pre>
    Renders as:
        $pageTitle = str_replace('&', '&', $page->attributes()->title);
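    To keep a literal &amp; visible in the rendered page, it has to be escaped once more in the HTML source, i.e. written as &amp;amp;. The sketch below (hypothetical C# rather than the blog's PHP, just to illustrate the escaping) shows how one pass of HTML encoding produces exactly that:
        using System;
        using System.Net;

        class DoubleEncode
        {
            static void Main()
            {
                string code =
                    "$pageTitle = str_replace('&', ' &amp;', $page->attributes()->title);";

                // Encoding once turns & into &amp; (and > into &gt;), so the
                // existing "&amp;" in the snippet becomes "&amp;amp;" and the
                // browser renders the original text unchanged inside <pre><code>.
                Console.WriteLine(WebUtility.HtmlEncode(code));
            }
        }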

    Read the article
