Search Results

Search found 1068 results on 43 pages for 'strategies'.

Page 9 of 43

  • Strategies for generating Zend Cache Keys

    - by emeraldjava
    At the moment I'm manually generating a cache key based on the method name and parameters, then following the normal cache pattern. This is all done in the controller, and I'm calling a model class that extends Zend_Db_Table_Abstract.

        public function indexAction() {
            $cache = Zend_Registry::get('cache');
            $individualleaguekey = sprintf("getIndividualLeague_%d_%s", $leagueid, $division->code);
            if (!$leaguetable = $cache->load($individualleaguekey)) {
                $table = new Model_DbTable_Raceresult();
                $leaguetable = $table->getIndividualLeague($leagueid, $division, $races);
                $cache->save($leaguetable, $individualleaguekey);
            }
            $this->view->leaguetable = $leaguetable;
            ....

    I want to avoid duplicating the parameters between the cache-key creation and the model method, so I'm thinking of moving the caching logic out of my controller class and into the model class packaged in './model/DbTable', but this seems incorrect since the DB model should only handle SQL operations. Any suggestions on how I can implement a clean, patterned solution?
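    The flow described above is the classic cache-aside pattern (build a key from the method name and arguments, try the cache, fall back to the real call, store the result). One common way to keep it out of both the controller and the Zend_Db_Table model is to wrap the model in a thin caching decorator in a service layer. The sketch below is only an illustration of that shape, written in C# rather than PHP, with hypothetical ICache, IRaceResultModel and LeagueTable types standing in for Zend_Cache and the DbTable model; it is not Zend-specific code.

        // Minimal cache-aside decorator sketch; all types here are hypothetical.
        public class LeagueTable { /* placeholder result type */ }

        public interface ICache
        {
            bool TryGet(string key, out object value);
            void Save(string key, object value);
        }

        public interface IRaceResultModel
        {
            LeagueTable GetIndividualLeague(int leagueId, string divisionCode, int races);
        }

        public class CachingRaceResultModel : IRaceResultModel
        {
            private readonly IRaceResultModel _inner;
            private readonly ICache _cache;

            public CachingRaceResultModel(IRaceResultModel inner, ICache cache)
            {
                _inner = inner;
                _cache = cache;
            }

            public LeagueTable GetIndividualLeague(int leagueId, string divisionCode, int races)
            {
                // The key is derived from the method name and its arguments in one place,
                // so neither the controller nor the data-access model repeats it.
                string key = string.Format("GetIndividualLeague_{0}_{1}_{2}", leagueId, divisionCode, races);
                object cached;
                if (_cache.TryGet(key, out cached))
                    return (LeagueTable)cached;

                LeagueTable result = _inner.GetIndividualLeague(leagueId, divisionCode, races);
                _cache.Save(key, result);
                return result;
            }
        }

    The controller then depends only on the model interface and never sees the cache; in the Zend world the same shape is usually achieved with a service class sitting between the controller and the Zend_Db_Table model.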

    Read the article

  • What are the strategies behind closing unresolved issues in different issue tracking process definitions

    - by wonko realtime
    Recently I've noticed that a good number of "administrators" tend to close "issues" in their bug- and issue-tracking systems with the reason that they don't fit into "their next release". One example of this can be found here: https://connect.microsoft.com/VisualStudio/feedback/details/640440/c-projects-add-option-to-remove-unused-references Because I fear that I have a fundamental lack of understanding of this approach, I'm wondering if someone can point me to information that could give some insight into the rationale behind such processes.

    Read the article

  • Strategies for Mapping Views in NHibernate

    - by Nathan Fisher
    It seems that NHibernate needs to have an id tag specified as part of the mapping. This presents a problem for views, as most of the time (in my experience) a view will not have an Id. I have mapped views before in NHibernate, but the way I did it seems messy to me. Here is a contrived example of how I am doing it currently.

    Mapping:

        <class name="ProductView" table="viewProduct" mutable="false" >
          <id name="Id" type="Guid">
            <generator class="guid.comb" />
          </id>
          <property name="Name" />
          <!-- more properties -->
        </class>

    View SQL:

        Select NewID() as Id,
               ProductName as Name,
               --More columns
        From Product

    Class:

        public class ProductView
        {
            public virtual Guid Id { get; set; }
            public virtual string Name { get; set; }
        }

    I don't need an Id for the product, and in the case of some views I may not have an id for the view at all, depending on whether I have control over the view. Is there a better way of mapping views to objects in NHibernate?
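    One commonly used alternative, sketched below under two assumptions that may not hold for your views (a natural key can be picked from the view's columns, and Fluent NHibernate is available; the plain XML <composite-id> element expresses the same idea), is to map the view read-only and key it on its natural columns instead of a synthetic NewID(). The ProductCode column here is made up for the example.

        // Sketch only: hypothetical natural key, Fluent NHibernate mapping.
        using FluentNHibernate.Mapping;

        public class ProductView
        {
            public virtual string ProductCode { get; set; }  // natural key column from the view
            public virtual string Name { get; set; }

            // NHibernate expects value equality when the entity acts as its own key.
            public override bool Equals(object obj)
            {
                var other = obj as ProductView;
                return other != null && other.ProductCode == ProductCode;
            }

            public override int GetHashCode()
            {
                return ProductCode == null ? 0 : ProductCode.GetHashCode();
            }
        }

        public class ProductViewMap : ClassMap<ProductView>
        {
            public ProductViewMap()
            {
                Table("viewProduct");
                ReadOnly();                                   // equivalent to mutable="false"
                CompositeId().KeyProperty(x => x.ProductCode);
                Map(x => x.Name);
            }
        }

    If the view genuinely has no stable key, the NewID() trick from the question remains the usual fallback.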

    Read the article

  • Error handling strategies in C?

    - by Leo
    Given the code below:

        typedef struct { int a; } test_t;

        arbitrary_t test_dosomething(test_t* test)
        {
            if (test == NULL) {
                // options:
                // 1. print an error and let it crash
                //    e.g. fprintf(stderr, "null ref at %s:%u", __FILE__, __LINE__);
                // 2. stop the world
                //    e.g. exit(1);
                // 3. return (i.e. the function does nothing)
                // 4. attempt to re-init test
            }
            printf("%d", test->a); // do something with test
        }

    I want to get a compiler error if test is ever NULL, but I guess that's not possible in C. Since I need to do null checking at runtime, which option is the most proper way to handle it?

    Read the article

  • URL Controller Mapping Strategies (PHP)

    - by sunwukung
    This is kind of an academic question, so feel free to exit now. I've had a dig through Stack Overflow for threads pertaining to URL/controller mapping in MVC frameworks, in particular this one: http://stackoverflow.com/questions/125677/php-application-url-routing So far, I can identify two practices: 1) dynamic mapping by parsing the URL string (exploded on '/'); 2) pattern matching the URL against a config file containing the available routes. I wanted to get some feedback (or links to some other threads/articles) from folks regarding their views on how best to approach this task.
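    As a rough, framework-neutral illustration of the two practices mentioned (sketched in C# rather than PHP; the controller, action and route names are made up for the example), the first approach derives the controller/action directly from URL segments, while the second walks a configured table of patterns:

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        public static class Routing
        {
            // Practice 1: dynamic mapping -- split the path and treat the
            // segments as /controller/action/params...
            public static Tuple<string, string> MapBySegments(string path)
            {
                var segments = path.Trim('/').Split('/');
                string controller = segments.Length > 0 && segments[0] != "" ? segments[0] : "home";
                string action = segments.Length > 1 ? segments[1] : "index";
                return Tuple.Create(controller, action);
            }

            // Practice 2: pattern matching -- try each configured route pattern in order.
            public static Tuple<string, string> MapByRouteTable(string path)
            {
                var routes = new List<Tuple<Regex, string, string>>
                {
                    Tuple.Create(new Regex(@"^/articles/(\d+)$"), "articles", "show"),
                    Tuple.Create(new Regex(@"^/login$"),          "auth",     "login"),
                };
                foreach (var route in routes)
                    if (route.Item1.IsMatch(path))
                        return Tuple.Create(route.Item2, route.Item3);
                return Tuple.Create("error", "notfound");
            }
        }

    The first keeps configuration minimal but couples URLs to class names; the second costs a config file (or route table) but lets URLs evolve independently of the code.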

    Read the article

  • Strategies for "Always-Connected" Windows Client Data Architecture

    - by magz2010
    Hi. Let me start by saying: this is my first post here, it's a bit lengthy, and I haven't done Windows Forms development in years... with that in mind, please excuse me if this isn't directly a programming question, and please bear with me, as I really need the help! I have been asked to develop a Windows Forms app for our company that talks to a central (local area network) Linux server hosting a PostgreSQL database. The app is to allow users to authenticate themselves into the system and thereafter conduct the usual transactions with the PG database. Ordinarily, I would propose writing a web forms app against Mono, but the clients need to utilise local resources such as USB peripheral devices, so that is out of the question. My questions are listed below. Dilemma #1: The application is meant to be always connected. How should I structure my DAL/BLL - should this reside on the server or with the client? Dilemma #2: I have been reading up on Client Application Services (CAS), and it seems like a great fit for authentication, as everything is exposed via URIs. I know that a .NET data provider exists for PostgreSQL, but I'm not too sure whether CAS will work on a Linux (Debian) server. Believe me, I would get my hands dirty and try it myself, but I need to come up with a logical design first before resources are allocated to me for "trial purposes"! Dilemma #3: If the DAL/BLL is to reside on the server, is there any way I can create data services and expose only these services to authenticated clients? There is a (security) requirement whereby a connection string with username and password for the database cannot be present on any client machines, even if security on the database side is quite rigid. I'm guessing that the only way for this to work would be to create the various CRUD data service methods exposed by an ASP.NET app, and have the Windows Forms app request or persist data through the ASP.NET app (via a URI), which would return a result set or value. Would I be correct in assuming this? Should I be looking into WCF Data Services? And will WCF work with a non-SQL Server database? Thank you for taking the time to read this, but know that I am desperately seeking any advice on this! THANKS A MILLION!!!!
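    Regarding Dilemma #3, the shape being described (no connection string on the client; only service operations exposed to authenticated callers) is what a small service contract gives you. The following is only a hedged sketch of that idea using a plain WCF service contract with made-up operation and type names, not the poster's actual design or a recommendation of a specific hosting model; WCF itself is database-agnostic, so the PostgreSQL access stays entirely on the server side behind the service.

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Hypothetical contract: the client only ever sees these operations,
        // never a database connection string.
        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            IList<OrderDto> GetOrdersForCustomer(int customerId);

            [OperationContract]
            int SaveOrder(OrderDto order);
        }

        [DataContract]
        public class OrderDto
        {
            [DataMember] public int OrderId { get; set; }
            [DataMember] public int CustomerId { get; set; }
            [DataMember] public decimal Total { get; set; }
        }

        // Server-side implementation (sketch): this is the only place that talks to
        // PostgreSQL, e.g. via an ADO.NET provider such as Npgsql, with credentials
        // that never leave the server.
        public class OrderService : IOrderService
        {
            public IList<OrderDto> GetOrdersForCustomer(int customerId)
            {
                // ... query PostgreSQL here and map rows to OrderDto ...
                return new List<OrderDto>();
            }

            public int SaveOrder(OrderDto order)
            {
                // ... insert/update in PostgreSQL here ...
                return order.OrderId;
            }
        }

    The Windows Forms client then references only the contract (via a generated proxy or a shared assembly) and authenticates to the service, which keeps the "no credentials on the client" requirement intact regardless of which database sits behind it.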

    Read the article

  • Decimal rounding strategies in enterprise applications

    - by Sapphire
    Well, I am wondering about rounding decimals and storing them in the DB. The problem is like this: let's say we have a customer and an invoice. The invoice has a total price of $100.495 (due to some discount percentage which is not an integer number), but it is shown as $100.50 (rounded, just for printing on the invoice). It is stored in the DB with the price of $100.495, which means that when the customer makes a deposit of $100.50 they will have $0.005 extra on the account. If this is rounded, it will appear as $0.00, but after a couple of invoices it keeps accumulating, which looks wrong (although it actually is not). What is best to do in this case: store the value of $100.50, or leave everything as-is?
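    To make the accumulation concrete, here is a small worked example (a sketch using the question's $100.495 total and an invented count of three invoices) showing how storing the unrounded total while billing the rounded one leaves a residue that only becomes visible after several invoices. C#'s decimal type is used so the arithmetic itself introduces no extra floating-point error.

        using System;

        class RoundingResidue
        {
            static void Main()
            {
                decimal storedBalance = 0m;   // ledger keeps full precision
                decimal paidBalance = 0m;     // customer pays the rounded amount

                // Three hypothetical invoices whose exact totals end in half a cent.
                decimal[] invoiceTotals = { 100.495m, 100.495m, 100.495m };

                foreach (decimal exact in invoiceTotals)
                {
                    decimal billed = Math.Round(exact, 2, MidpointRounding.AwayFromZero); // 100.50
                    storedBalance += exact;   // what the DB accumulates
                    paidBalance += billed;    // what the customer actually deposits
                }

                // storedBalance = 301.485, paidBalance = 301.50 -> residue of 0.015
                Console.WriteLine("stored: {0}, paid: {1}, residue: {2}",
                                  storedBalance, paidBalance, paidBalance - storedBalance);
            }
        }

    Whether to keep the residue or round at storage time is ultimately an accounting/business decision; the example only shows why the two ledgers drift apart.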

    Read the article

  • MySQL Single Query Benchmarking Strategies

    - by Pepper
    Hello, I have a slow MySQL query in my application that I need to rewrite. The problem is, it's only slow on my production server and only when it's not cached. The first time I run it, it will take 12 seconds; any time after that it'll take 500 milliseconds. Is there an easy way to test this query without it hitting the query cache, so I can see the results of my refactoring? Thanks!

    Read the article

  • Database Schema Versioning Strategies

    - by Jack Ryan
    I work on a project that uses a reasonably large database, the live version weighing in at somewhere around 60-80GB. The live database is the only real definitive source of our schema, and because of its size, duplicating this database is too slow to be done often. This means we have ended up developing our database schema in a pretty ad hoc way, using SQL Compare to migrate changes from dev DBs to the live system, and only wiping our dev DBs every month or two. I am hoping to get some pointers on how to improve our database development workflow so that we have a little more control. Some things to think about: Currently nobody is really in charge of the database schema; all developers can change it if they need to, though generally these decisions are talked about before they are made. There are stored procedures, functions, and views in the database. These should probably be dumped to files so they can be reloaded on every build. Schema changes should probably be checked in as scripts. We have started to do this recently. However, all our scripts must then be numbered (because there may be dependencies between them), and must be re-runnable (because our build script currently runs them all in order). This makes them hard to read because they are full of conditionals that check whether tables or columns already exist. This is a step that is often forgotten by developers. Getting a new database should be quick and easy. This is currently a big problem; it takes several hours to get a copy of last night's backup and restore it onto a dev machine. Some mechanism needs to be in place to allow developers to update static data. We have tables that contain data that is never updated through the application, but does potentially need to be changed when we do a new release (often this data drives dropdowns). The whole thing needs to be runnable as part of a build script. Are there any tools that can be used to help with this? Eventually I would like to be at a point where a new DB can be built from scratch without copying any data from the live system. I don't mind writing some scripts to glue all the steps together, but each part should be easily editable so that we continue to use it rather than making changes directly on DBs.
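    As an illustration of the "numbered, checked-in scripts" idea described above, a very small migration runner can make each script run-once instead of re-runnable, which removes the need for all the existence-checking conditionals. The sketch below is a bare-bones example (hypothetical table and folder names, placeholder connection string, no GO-batch splitting or transactions) using System.Data.SqlClient against SQL Server; dedicated migration tools cover the same ground, but the core loop is roughly this:

        using System;
        using System.Data.SqlClient;
        using System.IO;
        using System.Linq;

        class MigrationRunner
        {
            static void Main()
            {
                const string connectionString = "Server=.;Database=Dev;Integrated Security=true"; // placeholder
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();

                    // Track which numbered scripts have already been applied.
                    Exec(conn, @"IF OBJECT_ID('SchemaVersions') IS NULL
                                 CREATE TABLE SchemaVersions (ScriptName nvarchar(260) PRIMARY KEY, AppliedOn datetime NOT NULL)");

                    // Scripts are named 0001_xxx.sql, 0002_yyy.sql, ... so ordering is just a sort.
                    foreach (var file in Directory.GetFiles(@".\migrations", "*.sql").OrderBy(f => f))
                    {
                        string name = Path.GetFileName(file);
                        using (var check = new SqlCommand("SELECT COUNT(*) FROM SchemaVersions WHERE ScriptName = @n", conn))
                        {
                            check.Parameters.AddWithValue("@n", name);
                            if ((int)check.ExecuteScalar() > 0) continue;   // already applied, skip
                        }

                        Exec(conn, File.ReadAllText(file));                 // apply once
                        using (var record = new SqlCommand("INSERT INTO SchemaVersions VALUES (@n, GETDATE())", conn))
                        {
                            record.Parameters.AddWithValue("@n", name);
                            record.ExecuteNonQuery();
                        }
                        Console.WriteLine("applied " + name);
                    }
                }
            }

            static void Exec(SqlConnection conn, string sql)
            {
                using (var cmd = new SqlCommand(sql, conn)) cmd.ExecuteNonQuery();
            }
        }

    Off-the-shelf tools in the same spirit are worth evaluating before building this in-house; the point of the sketch is only that a tracking table plus sorted file names addresses the numbering and re-runnability concerns in one place.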

    Read the article

  • Indexing/Performance strategies for vast amount of the same value

    - by DrColossos
    Base information: This is in the context of the indexing process of OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with the values "W", "R", "N" (VARCHAR(1)). The table has somewhere around ~75M rows; the rows with "W" make up ~42M of them. Existing indexes are not relevant to this question. Now the question itself: the indexing of the data is done via a procedure. Inside this procedure, there are some loops that do the following: [...] SELECT * FROM table WHERE the_key = "W"; [...] The results get looped over again, and the above query itself is also in a loop. This takes a lot of time and slows down the process massively. An index on the_key is obviously useless, since all the values that the index might use are the same ("W"). The script itself runs at a speed that is OK; only the SELECTing takes very long. Do I need to create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one? Do I need to tune some of the server parameters (they are already tuned and the results they deliver seem to be good; if needed, I can post them)? Do I have to live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)? Any alternatives to the above points (except rewriting it or not using it)?

    Read the article

  • Effective optimization strategies on modern C++ compilers

    - by user168715
    I'm working on scientific code that is very performance-critical. An initial version of the code has been written and tested, and now, with profiler in hand, it's time to start shaving cycles from the hot spots. It's well known that some optimizations, e.g. loop unrolling, are handled these days much more effectively by the compiler than by a programmer meddling by hand. Which techniques are still worthwhile? Obviously, I'll run everything I try through a profiler, but if there's conventional wisdom as to what tends to work and what doesn't, it would save me significant time. I know that optimization is very compiler- and architecture-dependent. I'm using Intel's C++ compiler targeting the Core 2 Duo, but I'm also interested in what works well for gcc, or for "any modern compiler." Here are some concrete ideas I'm considering: Is there any benefit to replacing STL containers/algorithms with hand-rolled ones? In particular, my program includes a very large priority queue (currently a std::priority_queue) whose manipulation is taking a lot of total time. Is this something worth looking into, or is the STL implementation already likely the fastest possible? Along similar lines, for std::vectors whose needed sizes are unknown but have a reasonably small upper bound, is it profitable to replace them with statically allocated arrays? I've found that dynamic memory allocation is often a severe bottleneck, and that eliminating it can lead to significant speedups. As a consequence, I'm interested in the performance trade-offs of returning large temporary data structures by value vs. returning by pointer vs. passing the result in by reference. Is there a way to reliably determine whether or not the compiler will use RVO for a given method (assuming the caller doesn't need to modify the result, of course)? How cache-aware do compilers tend to be? For example, is it worth looking into reordering nested loops? Given the scientific nature of the program, floating-point numbers are used everywhere. A significant bottleneck in my code used to be conversions from floating point to integers: the compiler would emit code to save the current rounding mode, change it, perform the conversion, then restore the old rounding mode --- even though nothing in the program ever changed the rounding mode! Disabling this behavior significantly sped up my code. Are there any similar floating-point-related gotchas I should be aware of? One consequence of C++ being compiled and linked separately is that the compiler is unable to do what would seem to be very simple optimizations, such as moving method calls like strlen() out of the termination conditions of loops. Are there any optimizations like this that I should look out for because they can't be done by the compiler and must be done by hand? On the flip side, are there any techniques I should avoid because they are likely to interfere with the compiler's ability to automatically optimize code? Lastly, to nip certain kinds of answers in the bud: I understand that optimization has a cost in terms of complexity, reliability, and maintainability. For this particular application, increased performance is worth these costs. I understand that the best optimizations are often to improve the high-level algorithms, and this has already been done.

    Read the article

  • Security strategies for storing password on disk

    - by Mike
    I am building a suite of batch jobs that require regular access to a database running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it. I am trying to make sure things are properly secured on our end. Since the processing is automated (i.e. we can't prompt for a password), and I can't store anything outside the disk, I need a strategy for storing our password securely. Here are some basic rules: The system has multiple users. We can assume that our permissions are properly enforced (i.e. if a file is chmod'd to 600, it won't be publicly readable). I don't mind anyone with superuser access looking at our stored password. Here is what I've got so far: store the password in password.txt; chmod 600 password.txt; the process reads from password.txt when it's needed; the buffer is overwritten with zeros when it's no longer needed. Although I'm sure there is a better way.

    Read the article

  • Strategies for testing reactive, asynchronous code

    - by Arne
    I am developing a data-flow oriented domain-specific language. To simplify, let's just look at Operations. Operations have a number of named parameters and can be asked to compute their result using their current state. To decide when an Operation should produce a result, it gets a Decision that is sensitive to which parameter got a value from whom. When this Decision decides that it is fulfilled, it emits a Signal using an Observer. An Accessor listens for this Signal and in turn calls the Result method of the Operation in order to multiplex it to the parameters of other Operations. So far, so good: a nicely decoupled design, composable and reusable and, depending on the specific Observer used, as asynchronous as you want it to be. Now here's my problem: I would love to start coding actual tests against this design. But with an asynchronous Observer... how should I know that the whole signal-and-parameters plumbing worked? Do I need to use timeouts while waiting for a Signal in order to say whether it was emitted successfully or not? How can I be, formally, sure that the Signal will not be emitted if I just wait a little longer (halting problem? ;-))? And how can I be sure that the Signal was emitted because it was me who set a parameter, and not another Operation? It might well be that my test comes too early and sees a Signal that was emitted way before my setting a parameter caused a Decision to emit it. Currently, I guess the trivial cases are easy to test, but as soon as I want to test complex many-to-many situations between operations I must resort to hoping that the design Just Works (tm)...
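    For the "wait with a bounded timeout and make sure it was my parameter that triggered the Signal" part, one common shape (sketched here in C#; the Operation/Signal names and the payload carried by the signal are assumptions about the DSL for illustration, not its real API) is to subscribe before acting, complete a TaskCompletionSource from the handler, and assert both that it completed within a bounded time and that the payload identifies the expected cause:

        using System;
        using System.Threading.Tasks;

        // Hypothetical stand-ins for the DSL's types, for illustration only.
        public class SignalEventArgs : EventArgs
        {
            public string CausedByParameter { get; set; }
        }

        public class FakeOperation
        {
            public event EventHandler<SignalEventArgs> SignalEmitted;

            public void SetParameter(string name, object value)
            {
                // In the real system a Decision would fire this; here we emit directly.
                var handler = SignalEmitted;
                if (handler != null)
                    handler(this, new SignalEventArgs { CausedByParameter = name });
            }
        }

        public static class AsyncSignalTest
        {
            public static void SettingParameterEmitsSignal()
            {
                var operation = new FakeOperation();
                var emitted = new TaskCompletionSource<SignalEventArgs>();

                // Subscribe BEFORE acting, so an earlier unrelated signal can't satisfy the test.
                operation.SignalEmitted += (s, e) => emitted.TrySetResult(e);

                operation.SetParameter("input", 42);

                // Bounded wait: fail deterministically instead of hanging forever.
                if (!emitted.Task.Wait(TimeSpan.FromSeconds(2)))
                    throw new Exception("Signal was not emitted within the timeout");

                // Correlate the signal with its cause instead of just "a signal happened".
                if (emitted.Task.Result.CausedByParameter != "input")
                    throw new Exception("Signal was emitted, but not for the parameter this test set");
            }
        }

    The timeout can never prove that a signal will never arrive (the halting-problem point stands); it only turns "wait forever" into a bounded, repeatable failure, and carrying cause information in the signal deals with signals emitted for other reasons.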

    Read the article

  • Best strategies for reading J code

    - by estanford
    I've been using J for a few months now, and I find that reading unfamiliar code (e.g. code that I didn't write myself) is one of the most challenging aspects of the language, particularly when it's written in tacit style. After a while, I came up with this strategy: 1) Copy the code segment into a word document. 2) Take each operator from (1) and place it on a separate line, so that it reads vertically. 3) Replace each operator with its verbal description from the Vocabulary page. 4) Do a rough translation from J syntax into English grammar. 5) Use the translation to identify conceptually related components and separate them with line breaks. 6) Write a description of what each component from (5) is supposed to do, in plain English prose. 7) Write a description of what the whole program is supposed to do, based on (6). 8) Write an explanation of why the code from (1) can be said to represent the design concept from (7). Although I learn a lot from this process, I find it rather arduous and time-consuming -- especially if someone designed their program using a concept I've never encountered before. So I wonder: do other people in the J community have favorite ways to figure out obscure code? If so, what are the advantages and disadvantages of these methods?

    Read the article

  • How to manage test fixtures for end-to-end testing?

    - by Peter Becker
    Having just set up a test framework for a new web application, I realized I missed one of the big questions: "How do I make tests independent from each other?" Years ago I set up some complicated Ant scripting to do full cycles of deleting all database tables, creating the schema again, adding test data, starting the application, running one test and then stopping the application. That was a pain to maintain and restricted us to nightly tests due to the time it took to run the full suite. It was still worth it, but I wonder if there is an easier way. Are there alternatives to this approach? The main criterion is that each test should not be affected by any other test in the suite, no matter whether it failed or succeeded.

    Read the article

  • How do you unit test a class that's meant to talk to data?

    - by Arda Xi
    I have a few repository classes that are meant to talk to different kinds of data, deriving from an IRepository interface laid out like so: In implementations, the code talks to a data source, be it a directory of XML files, a database, or even just a cache. Is it possible to reliably unit test any of these implementations? I don't see a mock implementation working, because then I'm only testing the mock code and not the actual code.
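    One way to test the real implementations rather than a mock is to point each implementation at a disposable instance of its own kind of data source: a temp directory for the XML-backed repository, an in-memory or throwaway database for the SQL one. The sketch below uses a hypothetical, deliberately simplified IRepository and XML repository (the original interface isn't shown in the excerpt, so these are assumptions for illustration); the test exercises the actual persistence code end to end.

        using System;
        using System.IO;

        // Simplified, hypothetical version of the interface under discussion.
        public interface IRepository<T>
        {
            void Save(string id, T item);
            T Load(string id);
        }

        // Real implementation under test: persists items as files on disk.
        public class XmlFileRepository : IRepository<string>
        {
            private readonly string _directory;
            public XmlFileRepository(string directory) { _directory = directory; }

            public void Save(string id, string item)
            {
                File.WriteAllText(Path.Combine(_directory, id + ".xml"), item);
            }

            public string Load(string id)
            {
                return File.ReadAllText(Path.Combine(_directory, id + ".xml"));
            }
        }

        public static class XmlFileRepositoryTests
        {
            public static void RoundTripsAnItem()
            {
                // Disposable data source: a unique temp directory per test run.
                string dir = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N"));
                Directory.CreateDirectory(dir);
                try
                {
                    var repository = new XmlFileRepository(dir);
                    repository.Save("42", "<item>hello</item>");

                    if (repository.Load("42") != "<item>hello</item>")
                        throw new Exception("round trip failed");
                }
                finally
                {
                    Directory.Delete(dir, recursive: true);   // keep tests independent
                }
            }
        }

    Tests written this way are slower than pure unit tests but exercise the code that actually runs in production; mocks of IRepository remain useful for testing the classes that consume repositories, rather than the repositories themselves.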

    Read the article

  • Is there a way to only backup a SQL 2005 database structure fully, but only the data in a certain se

    - by TheSoftwareJedi
    I have several schemas in my database, and the largest one ("large" meaning disk space consumed) is my "web" schema, which is a denormalized copy of data in the operational schemas. This denormalized data can be reconstructed at any time and is merely there for extremely fast read purposes. Since the data is redundant, and VERY large, I'd like to exclude it from being backed up. I already have stored procedures that can regenerate all of the data in that schema in a couple of hours, for use in the event of a failure. I assume I can split the tables in this schema out to another data file or such (ideally even on another drive for faster reads), but is there a way to never have that data file backed up, yet still, in the event of a failure, have its structure restored (and other DDL such as procs, views, etc.)? Somewhat related: can I also have these tables skip transaction logging if I go to "Full" backup mode for the rest of the database?

    Read the article

  • Should I be backing up a webapp's data to another host continuously?

    - by user196289
    I have a webapp in development. I need to plan for what happens if the host goes down. I will lose some very recent session status (which I can live with), and everything else should be persistently stored in the database. If I am starting up again after an outage, can I expect a good host to reconstruct the database to within minutes of where I was up to? Or seconds? Or should I build in a background process to continually mirror the database elsewhere? What is normal / sensible? Obviously a good host will have RAID and other redundancy, so the likelihood of total loss should be low, and if they have periodic backups I should lose only very recent stuff, but this is presumably designed with almost-static web content in mind, and my site is transactional, with new data being filed continuously (and a customer expectation that I don't ever lose it). Any suggestions / advice? Are there off-the-shelf frameworks for doing this? (I'm primarily working in Java.) And should I just plan to save the data, or should I plan to have an alternative usable host implementation ready to launch in case the host doesn't come back up in a suitable timeframe?

    Read the article

  • A Better Way to Plan, Execute and Manage Enterprise Architecture

    - by JuergenKress
    IT Strategies from Oracle is an authorized library of guidelines and reference architectures that will help you better plan, execute, and manage your enterprise architecture and IT initiatives. The IT Strategies from Oracle library offers two types of best practice documents: practitioner guides containing pragmatic advice and approaches, and reference architectures containing the proven technology patterns to jumpstart your initiative. The IT Strategies from Oracle library can help you establish a reliable set of principles and standards to guide your use of Oracle technology. We will expand this library over time across all of Oracle's technologies. Today, you can access: Overview documents providing an introduction to all the resources available in the library and best practices maturity models; Oracle Reference Architectures covering the application infrastructure foundation, management and monitoring, security, software engineering, service-oriented integration, service orientation, user interaction, engineered systems, and a master glossary; Enterprise Technology Strategies for Service-Oriented Architecture, offering practitioner guides on creating a SOA roadmap, frameworks for governance, determining ROI, identifying services, software engineering, and white papers; Enterprise Technology Strategies for Event-Driven Architecture, offering practitioner guides on creating an EDA roadmap and reference architectures on an EDA foundation and EDA infrastructure; Enterprise Technology Strategies for Business Process Management, including practitioner guides on creating a BPM roadmap, business process engineering, governance, and reference architectures on a BPM foundation and BPM infrastructure; Enterprise Technology Strategies for Cloud Computing, including reference architectures on a Cloud foundation and Cloud infrastructure; Enterprise Technology Strategies for Business Analytics, including a practitioner guide for creating a BA roadmap and reference architectures for a BA foundation and BA infrastructure. Get the Oracle Enterprise Architecture content here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to mapping an inheritance hierarchy with EF Code First. I've described these strategies in previous posts: Part 1 – Table per Hierarchy (TPH) and Part 2 – Table per Type (TPT). In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines for choosing an inheritance strategy, mainly based on what we've learned in this series.

    TPC and Entity Framework in the Past

    Table per Concrete Type is somehow the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far, and I've seen in some resources that it was even discouraged. The reason for that is just because the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means that if you are following EF's Database-First or Model-First approaches, then configuring TPC requires manually writing XML in the EDMX file, which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with the fluent API, just like the other strategies, and you don't need to avoid TPC due to the lack of designer support as you would probably do in other EF approaches.

    Table per Concrete Type (TPC)

    In Table per Concrete Type (aka Table per Concrete Class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure. As you can see, the SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class were concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns.

    TPC Implementation in Code First

    Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on the EntityMappingConfiguration class called MapInheritedProperties that does exactly this for us. Here is the complete object model as well as the fluent API to create a TPC mapping:

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                modelBuilder.Entity<BankAccount>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("BankAccounts");
                });
                modelBuilder.Entity<CreditCard>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("CreditCards");
                });
            }
        }

    The Importance of the EntityMappingConfiguration Class

    As a side note, it is worth mentioning that the EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:

        namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
        {
            public class EntityMappingConfiguration<TEntityType> where TEntityType : class
            {
                public ValueConditionConfiguration Requires(string discriminator);
                public void ToTable(string tableName);
                public void MapInheritedProperties();
            }
        }

    As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT, and now we are using its MapInheritedProperties along with the ToTable method to create our TPC mapping.

    TPC Configuration is Not Done Yet!

    We are not quite done with our TPC configuration, and there is more to this story even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of the BankAccount and CreditCard types and tries to add them to the database:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount();
            CreditCard creditCard = new CreditCard() { CardType = 1 };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Running this code throws an InvalidOperationException with this message: "The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state." Inner exception message: "AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges."

    The reason we got this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method in turn by default calls AcceptAllChanges after it has performed the database modifications. The AcceptAllChanges method merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them. Since the entities are in the Added state, the AcceptChanges method replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database, and that's where the problem occurs: both entities have been assigned the same value for their primary key by the database (i.e. BillingDetailId = 1 on both), and the ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence it throws. If you take a closer look at the TPC SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables has been marked as identity.

    How to Solve the Identity Problem in TPC

    As you saw, using SQL Server's int identity columns doesn't work very well together with TPC, since there will be duplicate entity keys when inserting into subclass tables that all have the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. Using GUID keys, or int identity keys with different starting seeds, will solve the problem, but yet another solution is to completely switch off identity on the primary key property. As a result, we need to take on the responsibility of providing unique keys when inserting records into the database. We will go with this solution since it works regardless of which database engine is used.

    Switching Off Identity in Code First

    We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. DatabaseGenerated is a new data annotation which has been added to the System.ComponentModel.DataAnnotations namespace in CTP5:

        public abstract class BillingDetail
        {
            [DatabaseGenerated(DatabaseGenerationOption.None)]
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

    As always, we can achieve the same result by using the fluent API, if you prefer that:

        modelBuilder.Entity<BillingDetail>()
                    .Property(p => p.BillingDetailId)
                    .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

    Working With the Object Model

    Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount()
            {
                BillingDetailId = 1
            };
            CreditCard creditCard = new CreditCard()
            {
                BillingDetailId = 2,
                CardType = 1
            };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Polymorphic Associations with TPC are Problematic

    The main problem with this approach is that it doesn't support polymorphic associations very well. After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced here, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column which would have to refer to both concrete subclass tables. This isn't possible with regular foreign key constraints.

    Schema Evolution with TPC is Complex

    A further conceptual problem with this mapping strategy is that several different columns, in different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

    Generated SQL

    Let's examine the SQL output for polymorphic queries in a TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements that are executed in the database:

        var query = from b in context.BillingDetails
                    select b;

    Just like the SQL query generated by the TPT mapping, the CASE statements that you see at the beginning of the query are merely there to ensure that columns that are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPC's SQL Queries are Union Based

    As you can see in the above screenshot, the first SELECT uses a FROM-clause subquery (selected with a red rectangle) to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result (look at the lines highlighted in yellow). EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries that are combined project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will really perform well, since here we can let the database optimizer find the best execution plan to combine rows from several tables. There are also no joins involved, so it performs better than the SQL queries generated by TPT, where a join is required between the base and subclass tables.

    Choosing Strategy Guidelines

    Before we get into this discussion, I want to emphasize that there is no single "best strategy fits all scenarios". As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario:

    If you don't require polymorphic associations or queries, lean toward TPC—in other words, if you never or rarely query for BillingDetails and you have no class that has an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.

    If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.

    If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.

    By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

    Summary

    In this series we focused on one of the main structural aspects of the object/relational paradigm mismatch, which is inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you a better insight into the mapping of inheritance hierarchies as well as choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

    References: ADO.NET team blog; Java Persistence with Hibernate (book)

    Read the article

  • Google presents the three most important SEO strategies for 2011: speed, internal control and social networks

    Google presents the three most important SEO strategies for 2011: speed, internal control and web marketing on social networks. On its YouTube channel, the Google Webmaster Help team regularly answers the most relevant questions coming from professionals across the various web trades. This week, Matt Cutts, head of Google's anti-spam team, answers a particularly interesting question: "If you were a search engine optimization expert at a large company, what are the three things you would include in your strategy for 2011?" For Matt Cutts, the first thing that...

    Read the article

  • Does Using ASP Or PHP Affect Your SEO Strategies?

    We often hear web developers as well as website design and development companies asking in forums and developer boards about the use of ASP, PHP and other scripting languages and their possible negative effects on search engine optimization and effective SEO strategies for a website. There are many server-side scripting languages, such as ASP, PHP, ColdFusion, Python, and Perl, among which PHP and ASP are the most common.

    Read the article

  • WCF RIA Services DomainContext Abstraction Strategies–Say That 10 Times!

    - by dwahlin
    The DomainContext available with WCF RIA Services provides a lot of functionality that can help track object state and handle making calls from a Silverlight client to a DomainService. One of the questions I get quite often in our Silverlight training classes (and see often in various forums and other areas) is how the DomainContext can be abstracted out of ViewModel classes when using the MVVM pattern in Silverlight applications. It's not something that's super obvious at first, especially if you don't work with delegates a lot, but it can definitely be done. There are various techniques and strategies that can be used, but I thought I'd share some of the core techniques I find useful. To start, let's assume you have the following ViewModel class (this is from my Silverlight Firestarter talk, available to watch online here if you're interested in getting started with WCF RIA Services):

        public class AdminViewModel : ViewModelBase
        {
            BookClubContext _Context = new BookClubContext();

            public AdminViewModel()
            {
                if (!DesignerProperties.IsInDesignTool)
                {
                    LoadBooks();
                }
            }

            private void LoadBooks()
            {
                _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, null);
            }

            private void LoadBooksCallback(LoadOperation<Book> books)
            {
                Books = new ObservableCollection<Book>(books.Entities);
            }
        }

    Notice that BookClubContext is being used directly in the ViewModel class. There's nothing wrong with that, of course, but if other ViewModel objects need to load books then code would be duplicated across classes. Plus, the ViewModel has direct knowledge of how to load data, and I like to make it more loosely coupled. To do this I create what I call a "Service Agent" class. This class is responsible for getting data from the DomainService and returning it to a ViewModel. It only knows how to get and return data, but doesn't know how data should be stored and isn't used with data binding operations. An example of a simple ServiceAgent class is shown next. Notice that I'm using the Action<T> delegate to handle callbacks from the ServiceAgent to the ViewModel object. Because LoadBooks accepts an Action<ObservableCollection<Book>>, the callback method in the ViewModel must accept ObservableCollection<Book> as a parameter. The callback is initiated by calling the Invoke method exposed by Action<T>:

        public class ServiceAgent
        {
            BookClubContext _Context = new BookClubContext();

            public void LoadBooks(Action<ObservableCollection<Book>> callback)
            {
                _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, callback);
            }

            public void LoadBooksCallback(LoadOperation<Book> lo)
            {
                // Check for errors of course...keeping this brief
                var books = new ObservableCollection<Book>(lo.Entities);
                var action = (Action<ObservableCollection<Book>>)lo.UserState;
                action.Invoke(books);
            }
        }

    This can be simplified by taking advantage of lambda expressions. Notice that in the following code I don't have a separate callback method and don't have to worry about passing or casting any user state (the user state is the 3rd parameter in the _Context.Load method call shown above):

        public class ServiceAgent
        {
            BookClubContext _Context = new BookClubContext();

            public void LoadBooks(Action<ObservableCollection<Book>> callback)
            {
                _Context.Load(_Context.GetBooksQuery(), (lo) =>
                {
                    var books = new ObservableCollection<Book>(lo.Entities);
                    callback.Invoke(books);
                }, null);
            }
        }

    A ViewModel class can then call into the ServiceAgent to retrieve books, yet never know anything about the DomainContext object or even know how data is loaded behind the scenes:

        public class AdminViewModel : ViewModelBase
        {
            ServiceAgent _ServiceAgent = new ServiceAgent();

            public AdminViewModel()
            {
                if (!DesignerProperties.IsInDesignTool)
                {
                    LoadBooks();
                }
            }

            private void LoadBooks()
            {
                _ServiceAgent.LoadBooks(LoadBooksCallback);
            }

            private void LoadBooksCallback(ObservableCollection<Book> books)
            {
                Books = books;
            }
        }

    You could also handle the LoadBooksCallback method using a lambda if you wanted to minimize code, just like I did earlier with the LoadBooks method in the ServiceAgent class. If you're into Dependency Injection (DI), you could create an interface for the ServiceAgent type, reference it in the ViewModel and then inject in the object to use at runtime. There are certainly other techniques and strategies that can be used, but the code shown here provides an introductory look at the topic that should help get you started abstracting the DomainContext out of your ViewModel classes when using WCF RIA Services in Silverlight applications.
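    As a small follow-on to the Dependency Injection remark above, the sketch below (my illustration, not code from the original post) shows what extracting an IServiceAgent interface and constructor-injecting it might look like; ViewModelBase and Book are the types from the example above, and the concrete agent or a test fake is supplied from the outside, e.g. by an IoC container or by a unit test.

        using System;
        using System.Collections.ObjectModel;

        // Interface extracted from the ServiceAgent shown above.
        public interface IServiceAgent
        {
            void LoadBooks(Action<ObservableCollection<Book>> callback);
        }

        public class ServiceAgent : IServiceAgent
        {
            // ... same implementation as shown earlier ...
            public void LoadBooks(Action<ObservableCollection<Book>> callback) { /* elided */ }
        }

        public class AdminViewModel : ViewModelBase
        {
            private readonly IServiceAgent _serviceAgent;

            // The agent is injected rather than newed up inside the ViewModel.
            public AdminViewModel(IServiceAgent serviceAgent)
            {
                _serviceAgent = serviceAgent;
                _serviceAgent.LoadBooks(books => Books = books);
            }

            // In the real ViewModel this setter would raise change notification via ViewModelBase.
            public ObservableCollection<Book> Books { get; private set; }
        }

    With the interface in place, a test can pass a fake IServiceAgent that invokes the callback synchronously with canned data, so the ViewModel can be exercised without WCF RIA Services at all.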

    Read the article
