Search Results

Search found 2667 results on 107 pages for 'peopletools strategy'.

Page 13 of 107

  • Strategy for Offline/Online data synchronization

    - by Adi
    My requirement: I have a server-side J2EE web application and a client J2EE web application. The client can sometimes go offline, and when it comes back online it should be able to synchronize changes in both directions. I also need to be able to control which rows/tables get synchronized, based on filters/rules. Are there any existing Java frameworks for doing this? If I need to implement it on my own, what strategies can you suggest? One solution I have in mind is maintaining SQL logs and replaying the same statements on the other side during synchronization. Do you see any problems with this strategy?
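
    A minimal sketch of the SQL-log idea the asker proposes, in Java/JDBC. The change_log table (id, sql_text, applied) is hypothetical, invented for illustration; note also the usual caveat that replayed statements must be deterministic (no NOW() or auto-increment values) for the two sides to converge.

        import java.sql.*;
        import java.util.ArrayList;
        import java.util.List;

        // Sketch: replay logged SQL statements on the other side, in original
        // order, inside one transaction so a failed sync can simply be retried.
        // The change_log schema is hypothetical.
        public class ChangeLogReplayer {
            public void replay(Connection local, Connection remote) throws SQLException {
                List<Long> replayed = new ArrayList<Long>();
                remote.setAutoCommit(false);
                try (Statement read = local.createStatement();
                     ResultSet rs = read.executeQuery(
                         "SELECT id, sql_text FROM change_log WHERE applied = 0 ORDER BY id");
                     Statement write = remote.createStatement()) {
                    while (rs.next()) {
                        write.executeUpdate(rs.getString("sql_text")); // replay in order
                        replayed.add(rs.getLong("id"));
                    }
                    remote.commit(); // all-or-nothing, so a failed sync can be retried
                } catch (SQLException e) {
                    remote.rollback();
                    throw e;
                }
                // Only mark entries applied once the remote commit succeeded.
                try (PreparedStatement ps = local.prepareStatement(
                        "UPDATE change_log SET applied = 1 WHERE id = ?")) {
                    for (long id : replayed) {
                        ps.setLong(1, id);
                        ps.executeUpdate();
                    }
                }
            }
        }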

    Read the article

  • Advice on e-commerce logging strategy

    - by yalestar
    I recently inherited an e-commerce app (Java/Struts) that I'm porting to Rails. The thing is, we frequently have to do forensics on orders by poring through the log files, and with the old app's logs (log4j wall of text) it's pretty hard to make sense of the individual orders when several people are placing orders simultaneously. So I'm soliciting advice on a good strategy for logging of these orders, like maybe logging each individual order to its own MongoDB collection based on unique cart ID? Or maybe group them by IP address? Something different entirely? Essentially, what is the best approach for logging of an online store so that it's easy to backtrace each user's interaction with the site?
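
    Since the legacy app already logs through log4j, one low-effort option (a sketch, not the only answer) is to stamp every log line with the cart ID via log4j's Mapped Diagnostic Context, so a single grep reconstructs one order even when several orders interleave:

        import org.apache.log4j.Logger;
        import org.apache.log4j.MDC;

        // Sketch: tag all log lines emitted while handling an order with its
        // cart ID; a PatternLayout containing %X{cartId} prints the tag.
        public class OrderLogging {
            private static final Logger log = Logger.getLogger(OrderLogging.class);

            public void processOrder(String cartId) {
                MDC.put("cartId", cartId);
                try {
                    log.info("payment authorized");   // now carries the cart ID
                    log.info("inventory reserved");
                } finally {
                    MDC.remove("cartId");             // don't leak into the next request
                }
            }
        }

    The same correlation-ID idea carries over to the Rails port, whether the sink is a custom log formatter or, as suggested above, one MongoDB document per cart ID.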

    Read the article

  • Strategy to structure a search index in a relational database

    - by neilc
    I am interested in suggestions for building an efficient and robust structure for indexing products in a new database I am building (I'm using MySQL). When a product is entered through the form, there are three parts I am interested in indexing for searching purposes: the product title, the product description, and tags. The most important is the title, followed by tags, followed by the description. I was thinking of using the following structure:

        CREATE TABLE `searchindex` (
          `id` INT NOT NULL,
          `word` VARCHAR(255) NOT NULL,
          `weighting` INT NOT NULL,
          `product_id` INT NOT NULL,
          PRIMARY KEY (`id`)
        );

    Then each time a product is created I would split apart the title, description, and tags (removing common words) and award them a weighting. It is then trivial to select out the words and corresponding products and order them by weighting. Is there a better way to do this? I worry that this strategy would slow down over time, as the database fills up.
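
    A minimal sketch of the proposed write path, assuming the table above with `id` made AUTO_INCREMENT (the DDL leaves that unspecified); the stop-word set and the weights (title 3, tags 2, description 1) are illustrative only.

        import java.sql.*;
        import java.util.*;

        // Sketch: tokenize each field, drop stop words, and insert one weighted
        // row per (word, product). Weights and stop words are placeholders.
        public class ProductIndexer {
            private static final Set<String> STOP_WORDS =
                new HashSet<String>(Arrays.asList("the", "a", "and", "of"));

            public void index(Connection db, int productId,
                              String title, String description, String tags) throws SQLException {
                try (PreparedStatement ps = db.prepareStatement(
                        "INSERT INTO searchindex (word, weighting, product_id) VALUES (?, ?, ?)")) {
                    addWords(ps, title, 3, productId);        // title weighs most
                    addWords(ps, tags, 2, productId);
                    addWords(ps, description, 1, productId);
                    ps.executeBatch();
                }
            }

            private void addWords(PreparedStatement ps, String text,
                                  int weight, int productId) throws SQLException {
                for (String word : text.toLowerCase().split("\\W+")) {
                    if (word.isEmpty() || STOP_WORDS.contains(word)) continue;
                    ps.setString(1, word);
                    ps.setInt(2, weight);
                    ps.setInt(3, productId);
                    ps.addBatch();
                }
            }
        }

    An index on searchindex(word) is what keeps the SELECT cheap as the table grows; MySQL's built-in MyISAM FULLTEXT indexing is also worth benchmarking against a hand-rolled table like this before committing.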

    Read the article

  • automatic push to CDN deployment strategy

    - by imanc
    Does anyone have ideas for a strategy to push content to a CDN upon deployment? The key issue I'm facing is that we have a site that is available in various contexts: local development, development server, staging, and finally live. The live version of the site needs to load assets from a domain which will be pointed at a CDN: assets.domain.com. However, we will have numerous references to the assets using a relative folder, e.g. /images/, in CSS, possibly in JS, and in HTML source. Our new site will use Capistrano for deployment, and it may be that we can hook in another build tool (Apache Ant?) or some custom script to search/replace paths. I am wondering if anyone has had to deal with this issue before, and what solutions you put in place to automate managing the CDN, both in terms of pushing content up to it and managing HTML and CSS references to assets on it. Thanks, Imanc
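
    Capistrano can shell out to a small rewriter (or an Ant task) before the upload step. A hedged Java sketch of that search/replace pass; the CDN host comes from the question, while the file types and rewritten patterns are assumptions:

        import java.io.IOException;
        import java.nio.file.*;

        // Sketch: rewrite root-relative asset paths to the CDN host in the
        // built output before pushing. Patterns are illustrative; a real
        // build would match more carefully (srcset, @import, JS strings).
        public class CdnPathRewriter {
            private static final String CDN = "https://assets.domain.com";

            public static void main(String[] args) throws IOException {
                Files.walk(Paths.get(args[0]))
                     .filter(p -> {
                         String n = p.toString();
                         return n.endsWith(".css") || n.endsWith(".html") || n.endsWith(".js");
                     })
                     .forEach(CdnPathRewriter::rewrite);
            }

            private static void rewrite(Path file) {
                try {
                    String text = new String(Files.readAllBytes(file), "UTF-8");
                    // Only rewrite known asset folders, not arbitrary strings.
                    text = text.replace("url(/images/", "url(" + CDN + "/images/")
                               .replace("src=\"/images/", "src=\"" + CDN + "/images/");
                    Files.write(file, text.getBytes("UTF-8"));
                } catch (IOException e) {
                    throw new RuntimeException("failed rewriting " + file, e);
                }
            }
        }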

    Read the article

  • Need better Java application deployment strategy

    - by Jeff Post
    I have various Java Swing applications that are used by multiple users. My deployment strategy is to locate the .jar file on a network share, and users create shortcuts to that file. When the user launches an application, the file is copied to their machine and executed locally. This method allows for a single copy of the code and easy updating. The problem is that I can't update the file on the network share while any user is running the application. I can't use Web Start because I don't have access to a cert for signing the jar. My current workaround is a separate application that copies the desired app to the user's local machine, launches it, and then exits. There is a several-second delay between when the launcher app exits and when the user's app becomes visible. Can anyone suggest a better deployment method, one where I can easily update a central copy of the application and where Windows XP won't maintain a lock on the file?
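
    A sketch of one refinement of the launcher approach, under assumptions (the share path and cache directory are invented): since clients only ever run a local copy, the jar on the share is never locked, and skipping the copy when timestamps match removes most of the startup delay.

        import java.io.IOException;
        import java.nio.file.*;

        // Sketch: copy the shared jar to a per-user local cache only when it
        // has changed, then launch the local copy. The share path and the
        // timestamp-based staleness check are assumptions.
        public class JarLauncher {
            public static void main(String[] args) throws IOException {
                Path shared = Paths.get("\\\\server\\apps\\myapp.jar");
                Path local = Paths.get(System.getProperty("user.home"), ".myapp", "myapp.jar");
                Files.createDirectories(local.getParent());

                boolean stale = !Files.exists(local)
                        || Files.getLastModifiedTime(shared).compareTo(
                               Files.getLastModifiedTime(local)) > 0;
                if (stale) {
                    Files.copy(shared, local, StandardCopyOption.REPLACE_EXISTING);
                }
                // Clients hold a lock only on the local copy, so the jar on the
                // share can be replaced at any time.
                new ProcessBuilder("javaw", "-jar", local.toString()).start();
            }
        }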

    Read the article

  • Strategy for Storing Multiple Nullable Booleans in SQL

    - by Eric J.
    I have an object (happens to be C#) with about 20 properties that are nullable booleans. There will be perhaps a few million such objects persisted to a SQL database (currently SQL Server 2008 R2, but MySQL may need to be supported in the future). The instances themselves are relatively large because they contain about a paragraph of text as well as some other unrelated properties. For a given object instance, most of the properties will be null most of the time. When users search for instances of such objects, they will select perhaps 1-3 of the nullable boolean properties and search for instances where at least one of those 1-3 properties is non-null (OR search). My first thought is to persist the object to a single table with nullable BIT columns representing the nullable boolean properties. However, this strategy will require one index per BIT column to avoid performing a table scan when searching. Further, each index would not be particularly selective since there are only three possible values per index. Is there a better way to approach this problem?
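
    One alternative worth weighing (a sketch, not a recommendation): store only the flags that are actually set in a narrow side table, so the OR-search becomes a selective lookup on one indexed column. The table and column names below are invented for illustration.

        import java.sql.*;
        import java.util.*;

        // Sketch: instead of 20 nullable BIT columns, keep one row per set
        // flag in widget_flags(widget_id, flag_id, value). A single index on
        // flag_id turns the 1-3 flag OR-search into one range scan, instead
        // of 20 low-selectivity single-column indexes. Schema is hypothetical.
        public class FlagSearch {
            public List<Long> findWithAnyFlag(Connection db, int... flagIds) throws SQLException {
                StringBuilder in = new StringBuilder();
                for (int i = 0; i < flagIds.length; i++) in.append(i == 0 ? "?" : ",?");
                String sql = "SELECT DISTINCT widget_id FROM widget_flags WHERE flag_id IN ("
                           + in + ")";
                try (PreparedStatement ps = db.prepareStatement(sql)) {
                    for (int i = 0; i < flagIds.length; i++) ps.setInt(i + 1, flagIds[i]);
                    List<Long> ids = new ArrayList<Long>();
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) ids.add(rs.getLong(1));
                    }
                    return ids;
                }
            }
        }

    Since most properties are null most of the time, the side table stays small relative to millions of mostly-null indexed columns, and plain ANSI SQL like this ports between SQL Server and MySQL.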

    Read the article

  • strategy to allocate/free lots of small objects

    - by aaa
    Hello, I am toying with a caching algorithm, which is somewhat challenging. Basically, it needs to allocate lots of small objects (double arrays, < 256 elements), with objects accessible through a mapped value, map[key] = array. The time to initialize an array may be quite large, generally more than 10,000 CPU cycles. By lots I mean around a gigabyte in total. Objects may need to be popped/pushed as needed, generally in random places, one object at a time. The lifetime of an object is generally long, minutes or more; however, an object may be subject to allocation/deallocation several times during the run of the program. What would be a good strategy to avoid memory fragmentation, while still maintaining reasonable allocation/deallocation speed? I am using C++, so I can use new and malloc. Thanks. I know there are similar questions on the site, such as http://stackoverflow.com/questions/2156745/efficiently-allocating-many-short-lived-small-objects, but mine is somewhat different: thread safety is not an immediate issue for me.

    Read the article

  • Optimal strategy to make a C++ hash table, thread safe

    - by Ajeet
    (I am interested in the design of an implementation, NOT a ready-made construct that will do it all.) Suppose we have a class HashTable (not a hash map implemented as a tree, but a real hash table) and say there are eight threads. Suppose the read-to-write ratio is about 100:1, or even better, 1000:1.

    Case A) Only one thread is a writer, and the others, including the writer, can read from the HashTable (they may simply iterate over the entire hash table).
    Case B) All threads are identical and all may read/write.

    Can someone suggest the best strategy to make the class thread safe, with the following priorities: 1. least lock contention, 2. fewest locks. My understanding so far is this: one BIG reader-writer lock (semaphore). Specialize the semaphore so that there can be eight instances of the writer resource for case B, where each writer resource locks one row (or range, for that matter) - so I guess 1 + 8 mutexes. Please let me know if I am thinking along the correct lines, and how we could improve on this solution.
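
    The question targets C++, but the row/range idea above is language-neutral lock striping; here is a minimal Java sketch purely to illustrate the strategy (the stripe count of 8 mirrors the question's eight writer resources):

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        // Sketch of lock striping: keys hash to one of N stripes, each guarded
        // by its own reader-writer lock. Readers in different stripes never
        // contend, and a writer blocks only the stripe it touches.
        public class StripedHashTable<K, V> {
            private static final int STRIPES = 8;
            private final ReentrantReadWriteLock[] locks = new ReentrantReadWriteLock[STRIPES];
            private final java.util.HashMap<K, V>[] buckets;

            @SuppressWarnings("unchecked")
            public StripedHashTable() {
                buckets = new java.util.HashMap[STRIPES];
                for (int i = 0; i < STRIPES; i++) {
                    locks[i] = new ReentrantReadWriteLock();
                    buckets[i] = new java.util.HashMap<K, V>();
                }
            }

            private int stripe(Object key) { return (key.hashCode() & 0x7fffffff) % STRIPES; }

            public V get(K key) {
                int s = stripe(key);
                locks[s].readLock().lock();            // many concurrent readers per stripe
                try { return buckets[s].get(key); }
                finally { locks[s].readLock().unlock(); }
            }

            public void put(K key, V value) {
                int s = stripe(key);
                locks[s].writeLock().lock();           // writer blocks only its stripe
                try { buckets[s].put(key, value); }
                finally { locks[s].writeLock().unlock(); }
            }
        }

    The main cost shows up in case A's full iteration: walking the whole table needs the read locks of all stripes, which is exactly the contention/lock-count trade-off the question asks about.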

    Read the article

  • What strategy do you use to sync your code when working from home

    - by Ben Daniel
    At my work I currently have my development environment inside a virtual machine. When I need to do work from home, I copy my VM and any databases I need onto a laptop-drive-sized external USB drive. After about 10 minutes of copying I put the drive in my pocket and head home, copy the VM and databases back onto my personal computer, and I'm ready to work. I follow the same steps to take the work back with me. So if I count the total amount of time I spend waiting around for files to finish copying so I can take work home and bring it back again, it comes to around 40 minutes! I do have a VPN connection to my work from home (provided the internet is up at both sites) and a decent internet speed (8mbits down/?up), but I find remote desktop sessions into my work machine laggy enough that I prefer to work on my VM directly. So, in looking at what other options I have, or how I could improve my existing one, I'm interested in what strategy you use or recommend for working at home while keeping your code/environment in sync. EDIT: I'd prefer an option where I don't have to commit my changes into version control before I leave work; as I like to make meaningful, descriptive comments in my commits, committing would take longer than just copying my VM onto a portable drive! lol Also, I'd prefer a solution where my dev environment stays in sync too. Having said that, I'm still very interested in your own solutions, even if they don't solve my problem exactly as I'd like. :)

    Read the article

  • PHP XML Strategy: Parsing DOM to fill "Bean"

    - by Mike
    I have a question concerning a good strategy for filling a data "bean" with data from inside an XML file. The bean might look like this:

        class Person {
            var $id;
            var $forename = "";
            var $surname = "";
            var $bio;           // a Biography instance, assigned in the constructor
        }

        class Biography {
            var $url = "";
            var $id;
        }

    The XML subtree containing the info might look like this:

        <root>
          <!-- some more parent elements before node(s) of interest -->
          <person>
            <name pre="forename"> Foo </name>
            <name pre="surname"> Bar </name>
            <id> 1254 </id>
            <biography>
              <url> http://www.someurl.com </url>
              <id> 5488 </id>
            </biography>
          </person>
        </root>

    At the moment I have one approach using DOMDocument: a method iterates over the entries and fills the bean by "remembering" the last node. I don't think that's a good approach. What I have in mind is something like preconstructing some XPath expression(s) and then iterating over the subtrees/node lists, eventually returning an array containing the beans as defined above. However, it does not seem possible to reuse a subtree/DOMNode as a DOMXPath constructor parameter. Has anyone of you encountered such a problem?

    Read the article

  • Getter and Setter vs. Builder strategy

    - by Extrakun
    I was reading a JavaWorld article on getters and setters whose basic premise is that getters expose the internal content of an object, thereby tightening coupling, and which goes on to provide examples using builder objects. I was rather leery of abolishing getters/setters, but on a second reading of the article I came to quite like the idea. However, sometimes I just need one crucial element of an entity class, such as the user's ID, and writing a whole class just to extract that crucial element seems like overkill. It also implies that a different importer/exporter must be implemented for each different view (or the whole data of the class must be exported, resulting in waste). Usually I tend toward filtering the result of a getter; for example, if I need to output the price of a product in a different currency, I would code it as:

        return CurrencyOutput::convertTo($product->price(), 'USD');

    This is with the understanding that the raw output of a getter is not necessarily the final result to be pushed onto a screen or into a database. Are getters/setters really as bad as they are portrayed to be? When should one adopt a builder strategy, or a 'get the result and filter it' approach? And how do you avoid having a class that needs to know about every other object if you are not using getters/setters?
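
    For readers who haven't seen the cited article, a minimal Java sketch of the builder/exporter idea it advocates (the names here are invented; the shape is simply "the object pushes its state into a view-specific builder instead of exposing getters"):

        // Sketch: Product exports its price into whatever builder the caller
        // supplies, so no getter leaks the internal representation.
        interface PriceExporter {
            void addPrice(long cents, String currency);
        }

        class Product {
            private long priceCents = 1999;       // internal representation
            private String currency = "EUR";

            void export(PriceExporter builder) {  // Product decides what to hand over
                builder.addPrice(priceCents, currency);
            }
        }

        class UsdDisplayBuilder implements PriceExporter {
            private String rendered;

            public void addPrice(long cents, String currency) {
                // Currency conversion stubbed out; this is where a filter
                // like CurrencyOutput::convertTo would plug in.
                rendered = String.format("%.2f USD", cents / 100.0);
            }

            String build() { return rendered; }
        }

    The filter-the-getter approach shown above and this exporter shape are not mutually exclusive; the exporter just moves the decision about what to reveal into the entity itself.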

    Read the article

  • Memcache key generation strategy

    - by Maxim Veksler
    Given a function f1 which receives n String arguments, which of the following would be the better key generation strategy for memcache in the scenario described below? Our memcache client does internal md5sum hashing on the keys it gets:

        public class MemcacheClient {
            public Object get(String key) {
                String md5 = Md5sum.md5(key);
                // Talk to memcached to get the serialization...
                return memcached(md5);
            }
        }

    First option:

        public static String f1(String s1, String s2, String s3, String s4) {
            String key = s1 + s2 + s3 + s4;
            return get(key);
        }

    Second option:

        /**
         * Calculate hash from Strings
         *
         * @param strings vararg list of Strings
         * @return calculated md5sum hash
         */
        public static String stringHash(Object... strings) {
            if (strings == null)
                throw new NullPointerException("D'oh! Can't calculate hash for null");
            MD5 md5sum = new MD5();
            // if (prevHash != null)
            //     md5sum.Update(prevHash);
            for (int i = 0; i < strings.length; i++) {
                if (strings[i] != null) {
                    md5sum.Update("_" + strings[i] + "_"); // Convert to String...
                } else {
                    // If the object is null, allow minimum entropy by hashing its position
                    md5sum.Update("_" + i + "_");
                }
            }
            return md5sum.asHex();
        }

        public static String f1(String s1, String s2, String s3, String s4) {
            String key = stringHash(s1, s2, s3, s4);
            return get(key);
        }

    Note that the possible problem with the second option is that we end up doing a second md5sum (in the memcache client) on an already md5summed digest. Thanks for reading, Maxim.

    Read the article

  • Best strategy for moving data between physical tiers in ASP.net

    - by Pete Lunenfeld
    Building a new ASP.net application, and planning to separate the DB, 'service' tier, and Web/UI tier into separate physical layers. What is the best/easiest strategy for moving serialized objects between the service tier and the UI tier? I was considering serializing POCOs into JSON, using simple ASP.net pages to serve the middle tier, meaning that the UI/Web tier will request data from a web server (hidden to the outside user) that will return a JSON string. This kind of JSON 'emitter' seems easily testable. It also seems easily compressible for moving data efficiently over the WAN between tiers. I know that some folks use .asmx web services for this kind of task, but that seems to carry excess overhead with SOAP, and the payload is not as human-readable (testable) as POCOs serialized as JSON. Others are using more complex technology like WCF, which we have never used. Does anyone have advice for choosing a method for moving data/objects between the data (DB) tier and the web (UI) tier over the WAN using .NET technologies? Thanks!!!

    Read the article

  • ID3D10Device Memory Allocation Strategy and E_OUTOFMEMORY

    - by Buzz
    Hi guys, I want to know more detail about the memory allocation strategy in ID3D10Device. Could you give me some help? First question: I know D3D10 has done some work on memory virtualization, which means the client doesn't need to consider where a buffer is reserved: GPU memory, AGP memory, or process system memory. Is this correct? Second question: when I use ID3D10Device to call CreateBuffer repeatedly, no matter what the buffer desc type is, for example:

        ID3D10Device::CreateBuffer( ... D3D10_USAGE_DEFAULT ... );
        ID3D10Device::CreateBuffer( ... D3D10_USAGE_IMMUTABLE ... );
        ID3D10Device::CreateBuffer( ... D3D10_USAGE_DYNAMIC ... );
        ID3D10Device::CreateBuffer( ... D3D10_USAGE_STAGING ... );

    if CreateBuffer returns the error code E_OUTOFMEMORY, does that mean the process's virtual memory is exhausted? And at that point, would memory allocation on the process default heap also fail? Thanks in advance!

    Read the article

  • Strategy for locale sensitive sort with pagination

    - by Thom Birkeland
    Hi, I work on an application that is deployed on the web. Part of the app is search functionality where the result is presented in a sorted list. The application targets users in several countries with different locales (= sorting rules). I need to find a solution for sorting correctly for all users. I currently sort with ORDER BY in my SQL query, so the sorting follows the locale (LC_COLLATE) set for the database. These rules are incorrect for users whose locale differs from the one set for the database. To further complicate the issue, I use pagination in the application, so when I query the database I ask for rows 1-15, 16-30, etc. depending on the page I need. However, since the sorting is wrong, each page contains entries that are incorrectly sorted. In a worst-case scenario, the entire result set for a given page could be out of order, depending on the locale/sorting rules of the current user. If I were to sort in (server-side) code, I would need to retrieve all rows from the database and then sort, which is a tremendous performance hit given the amount of data, so I would like to avoid it. Does anyone have a strategy (or even a technical solution) for attacking this problem that results in correctly sorted lists without the performance hit of loading all data? Tech details: the database is PostgreSQL 8.3; the application is an EJB3 app using EJB QL for data queries, running on JBoss 4.5.
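
    One strategy worth considering (a sketch under assumptions, not a drop-in fix): precompute a binary collation key per row for each supported locale with java.text.Collator, store it in an indexed column, and ORDER BY that column, so the database still paginates while the locale rules live in Java. The column and table names are invented.

        import java.text.Collator;
        import java.util.Locale;

        // Sketch: derive a byte-comparable sort key from a display string for
        // a given locale. Stored in an indexed bytea column per locale (e.g.
        // name_sort_sv), it lets "ORDER BY name_sort_sv LIMIT 15 OFFSET 30"
        // paginate with that locale's ordering. Names are illustrative.
        public class CollationKeys {
            public static byte[] sortKey(String value, Locale locale) {
                Collator collator = Collator.getInstance(locale);
                collator.setStrength(Collator.SECONDARY); // case-insensitive, accent-aware
                return collator.getCollationKey(value).toByteArray();
            }

            public static void main(String[] args) {
                byte[] sv = sortKey("Öberg", new Locale("sv", "SE")); // Ö sorts after z in Swedish
                byte[] de = sortKey("Öberg", Locale.GERMANY);         // Ö sorts near o in German
                System.out.println(sv.length + " / " + de.length);
            }
        }

    The cost is one extra column per supported locale, maintained on write; the trade is against sorting entire result sets in the application tier on every page view.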

    Read the article

  • Game engine deployment strategy for the Android?

    - by Jeremy Bell
    In college, my senior project was to create a simple 2D game engine, complete with a scripting language which compiled to bytecode that was then interpreted. For fun, I'd like to port the engine to Android. I'm new to Android development, so I'm not sure which way to go as far as deploying the engine on the phone. The easiest way, I suppose, would be to require the engine/interpreter to be bundled with every game that uses it. This solves any versioning issues. There are two problems with this. One: it makes each game app larger. Two: I originally released the engine under the LGPL license (unfortunately), and this deployment strategy makes it difficult to conform to the rules of that license, particularly with respect to allowing users to easily replace the lib with another version. So my other option is to somehow have the engine stand alone as an Activity or Service that responds to intents raised by game apps, and somehow give the engine app permission to read the scripts and other assets to "run" the game. The user would then be able to replace the engine app with a different version (possibly one they made themselves). Is this even possible? What would you recommend? How could I handle it in a secure way?
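
    A minimal sketch of the intent handoff described above (the action string and extras are invented purely to illustrate the decoupling; asset sharing and permissions need their own design, e.g. a ContentProvider):

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;

        // In the game app: ask whatever stand-alone engine app is installed
        // to run this game's compiled script. The engine app would declare a
        // matching intent-filter for the (hypothetical) action below.
        public class GameLauncherActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                Intent run = new Intent("com.example.engine.ACTION_RUN_GAME");
                run.putExtra("gamePackage", getPackageName());   // where the assets live
                run.putExtra("entryScript", "scripts/main.bc");  // bytecode entry point
                startActivity(run); // resolves to whichever engine version is installed
            }
        }

    Because intent resolution picks whatever installed app matches the filter, the user can swap in their own engine build, which is exactly the replaceability the LGPL concern calls for.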

    Read the article

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of web applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web-root application. Each application maintains a connection pool to its own database, plus one to a central database for shared data (logins, etc.). A co-worker has been positing that this strategy will not scale, because having so many different connection pools is not scalable, and that we should refactor the database so that all of the different applications use a single central database. Any modifications unique to a system would need to be reflected from that one database, and we would then use a single pool powered by Tomcat. He has posited that there is a lot of "metadata" that goes back and forth across the network to maintain a connection pool. My understanding is that, with proper tuning so we use only as many connections as necessary across the different pools (low-volume apps getting fewer connections, high-volume apps getting more, etc.), the number of pools doesn't matter compared to the number of connections; more formally, the difference in overhead between maintaining 3 pools of 10 connections and 1 pool of 30 connections is negligible. The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely to be differences between the apps, and each system could modify its schema as needed. It also eliminated the possibility of system data bleeding through to other apps. Unfortunately there is no strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connection pools versus one large database/connection pool.
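
    For concreteness, a hedged sketch of the two layouts being debated, written against Commons DBCP 1.x (a pool library commonly paired with Tomcat in this era); the sizes are the hypothetical 3 pools of 10 versus 1 pool of 30 from the discussion, and the URLs are invented:

        import org.apache.commons.dbcp.BasicDataSource;

        // Sketch comparing the layouts. Pool overhead (validation queries,
        // idle-connection bookkeeping) scales, to first order, with the
        // connection count, not with how many pools hold those connections.
        public class PoolLayouts {
            static BasicDataSource pool(String url, int max) {
                BasicDataSource ds = new BasicDataSource();
                ds.setUrl(url);
                ds.setMaxActive(max);               // tune per app's volume
                ds.setValidationQuery("SELECT 1");
                return ds;
            }

            public static void main(String[] args) {
                // Per-app pools: each app tuned to its own volume.
                BasicDataSource orders  = pool("jdbc:mysql://db/orders", 15);
                BasicDataSource reports = pool("jdbc:mysql://db/reports", 5);
                BasicDataSource admin   = pool("jdbc:mysql://db/admin", 10);
                // vs. the proposed single shared pool:
                BasicDataSource central = pool("jdbc:mysql://db/central", 30);
                // Either way the database server sees roughly 30 connections.
            }
        }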

    Read the article

  • Resource placement (optimal strategy)

    - by blackened
    I know that this is not exactly the right place to ask this question, but maybe a wise guy comes across it and has the solution. I'm trying to write a computer game and I need an algorithm to solve this question: The game is played between 2 players. Each side has 1,000 dollars. There are three "boxes" and each player writes down the amount of money he is going to place into each of those boxes. Then the amounts are compared. Whoever placed more money in a box scores 1 point (on a draw, half a point each). Whoever scores more points wins his opponent's 1,000 dollars. Example game: Player A: [500, 500, 0]; Player B: [333, 333, 334]. Player A wins because he won Box A and Box B (but lost Box C). Question: What is the optimal strategy for placing the money? I have more questions to ask (algorithm-related, not math-related) but I need to know the answer to this one first. Update (1): After some more research I've learned that this type of problem/game is called a Colonel Blotto game. I did my best and found a few (highly technical) documents on the subject. In short, the problem I have (as described above) is called a simple Blotto game (only three battlefields with symmetric resources). The difficult ones are those with, say, 10+ battlefields and non-symmetric resources. All the documents I've read say that the simple Blotto game is easy to solve. The thing is, none of them actually says what that "easy" solution is.
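
    For the record, the classical solution I recall from the Blotto literature (Borel's three-battlefield game, later formalized by Gross and Wagner) is that no pure strategy is optimal: any fixed split can be beaten, e.g. [500, 500, 0] loses 1-2 to [501, 0, 499]. The stated equilibrium is a mixed strategy whose marginal allocation to each box, for budget B, is uniform on [0, 2B/3]; treat this as a pointer to verify against those papers rather than a definitive statement. In LaTeX notation:

        x_i \sim \mathrm{Uniform}\left[0,\ \tfrac{2B}{3}\right], \quad i \in \{1,2,3\},
        \qquad x_1 + x_2 + x_3 = B \quad (B = 1000)

    Intuitively, randomizing with these marginals spreads mass so that no fixed allocation beats the mixture in expectation, which is why the documents call the symmetric three-box case "easy": the hard part is only constructing a joint distribution with those marginals that sums to B.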

    Read the article

  • Which design pattern fits - strategy makes sense ?

    - by user554833
    (Bumping this; one desperate try to get someone's attention.) I have a simple database table that stores a list of users who have subscribed to folders, either by email OR to show up on the site (web UI only). In the storage table this is controlled by a number (1 = show on site, 2 = by email). When rendering the UI I need to show a checkbox next to each folder the user has subscribed to (both email and on-site). There is a separate table that stores a set of default subscriptions, which apply to a user if that user has not expressed his own subscription. This is basically a folder ID and a virtual group name. But email subscriptions do not count toward applying these default groups. So if no "on site" subscription exists, apply the default group. That's the rule. Would a strategy pattern fit here? In pseudocode:

        Interface ISubscription
            ArrayList GetSubscriptionData(query object)

        Public Class SubscriptionWithDefaultGroup
            Implements ArrayList GetSubscriptionData(query object)

        Public Class SubscriptionWithoutDefaultGroup
            Implements ArrayList GetSubscriptionData(query object)

        Public Class SubscriptionOnlyDefaultGroup
            Implements ArrayList GetSubscriptionData(query object)

    Does this even make sense? I would be more than glad to receive any criticism / help / notes. I am learning. Cheers
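
    As one concrete reading of the pseudocode above, a minimal Java sketch of the strategy shape, collapsed to two strategies for brevity (Subscription and SubscriptionQuery are placeholder types; the only rule encoded is the stated one, that defaults apply when no on-site subscription exists and email rows don't count):

        import java.util.List;

        class Subscription {}

        interface SubscriptionQuery {
            int userId();
            List<Subscription> onSiteSubscriptionsFor(int userId);
            List<Subscription> defaultGroupSubscriptions();
        }

        interface SubscriptionStrategy {
            List<Subscription> getSubscriptionData(SubscriptionQuery query);
        }

        class ExplicitSubscriptions implements SubscriptionStrategy {
            public List<Subscription> getSubscriptionData(SubscriptionQuery query) {
                return query.onSiteSubscriptionsFor(query.userId()); // user's own rows
            }
        }

        class DefaultGroupSubscriptions implements SubscriptionStrategy {
            public List<Subscription> getSubscriptionData(SubscriptionQuery query) {
                return query.defaultGroupSubscriptions();            // the defaults table
            }
        }

        class SubscriptionService {
            // The context picks the strategy: defaults only when the user has
            // no on-site subscriptions (email subscriptions don't count).
            SubscriptionStrategy choose(SubscriptionQuery query) {
                boolean hasOnSite = !query.onSiteSubscriptionsFor(query.userId()).isEmpty();
                return hasOnSite ? new ExplicitSubscriptions()
                                 : new DefaultGroupSubscriptions();
            }
        }

    Whether this earns its keep over a single if/else depends on whether more subscription rules are coming; with only two branches, the pattern mostly buys a seam for testing.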

    Read the article

  • What is "Call By Name"?

    - by forellana
    Hi everyone! I'm working on a homework assignment, and the professor asked me to implement, in Scheme, the evaluation strategy called "call by name" for a certain language that we developed. He gave us an example at http://www.scala-lang.org/node/138 in the Scala language, but I don't understand what the call-by-name evaluation strategy consists of. What differences does it have from call by need? Thanks, greetings
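
    Not an answer in Scheme, but a hedged illustration of the difference in Java: under call by name the argument expression is re-evaluated at every use, while under call by need it is evaluated at most once and the result is cached (memoized).

        import java.util.function.Supplier;

        // Illustration: an argument wrapped in a Supplier is a "thunk".
        // Call by name forces the thunk on every use; call by need forces it
        // once and memoizes, so side effects run once and later uses are free.
        public class EvaluationStrategies {
            static int useTwiceByName(Supplier<Integer> arg) {
                return arg.get() + arg.get();          // expression evaluated twice
            }

            static int useTwiceByNeed(Supplier<Integer> arg) {
                Supplier<Integer> memo = memoize(arg);
                return memo.get() + memo.get();        // expression evaluated once
            }

            static <T> Supplier<T> memoize(final Supplier<T> s) {
                return new Supplier<T>() {
                    private T value;
                    private boolean forced = false;
                    public T get() {
                        if (!forced) { value = s.get(); forced = true; }
                        return value;
                    }
                };
            }

            public static void main(String[] args) {
                Supplier<Integer> noisy = () -> { System.out.println("evaluating!"); return 21; };
                System.out.println(useTwiceByName(noisy)); // prints "evaluating!" twice, then 42
                System.out.println(useTwiceByNeed(noisy)); // prints "evaluating!" once, then 42
            }
        }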

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to mapping an inheritance hierarchy with EF Code First. I've described these strategies in previous posts:

        Part 1 – Table per Hierarchy (TPH)
        Part 2 – Table per Type (TPT)

    In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines for choosing an inheritance strategy, based mainly on what we've learned in this series.

    TPC and Entity Framework in the Past

    Table per Concrete Type is perhaps the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far, and some resources have even discouraged it. The reason is simply that the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means that if you are following EF's Database-First or Model-First approaches, configuring TPC requires manually writing XML in the EDMX file, which is not considered a fun practice. Well, no more. You'll see that with Code First, creating a TPC mapping is perfectly possible with the fluent API, just like the other strategies, and you don't need to avoid TPC due to the lack of designer support as you probably would in the other EF approaches.

    Table per Concrete Type (TPC)

    In Table per Concrete Type (aka Table per Concrete Class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, are mapped to columns of this table, as shown in the following figure:

    [Figure: the TPC schema, with separate BankAccounts and CreditCards tables, each carrying the inherited BillingDetail columns.]

    As you can see, the SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class were concrete, an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except that they share some similar columns.

    TPC Implementation in Code First

    Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5 there is a new helper method on the EntityMappingConfiguration class, called MapInheritedProperties, that does exactly this for us.
    Here is the complete object model, as well as the fluent API to create a TPC mapping:

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                modelBuilder.Entity<BankAccount>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("BankAccounts");
                });
                modelBuilder.Entity<CreditCard>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("CreditCards");
                });
            }
        }

    The Importance of the EntityMappingConfiguration Class

    As a side note, it is worth mentioning that the EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:

        namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
        {
            public class EntityMappingConfiguration<TEntityType> where TEntityType : class
            {
                public ValueConditionConfiguration Requires(string discriminator);
                public void ToTable(string tableName);
                public void MapInheritedProperties();
            }
        }

    As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT mapping, and now we are using its MapInheritedProperties along with ToTable to create our TPC mapping.

    TPC Configuration is Not Done Yet!

    We are not quite done with our TPC configuration; there is more to this story, even though the fluent API above perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of the BankAccount and CreditCard types and tries to add them to the database:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount();
            CreditCard creditCard = new CreditCard() { CardType = 1 };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Running this code throws an InvalidOperationException with this message: "The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges."

    The reason we got this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method, in turn, by default calls AcceptAllChanges after it has performed the database modifications, and AcceptAllChanges merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them.
    Since the entities are in the Added state, AcceptChanges replaces each entity's temporary EntityKey with a regular EntityKey based on the primary key value (i.e. BillingDetailId) that comes back from the database, and that's where the problem occurs: the database assigned both entities the same primary key value (BillingDetailId = 1 for both), and the ObjectStateManager cannot track two objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence the exception. If you take a closer look at the TPC SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables is marked as identity.

    How to Solve the Identity Problem in TPC

    As you saw, SQL Server's int identity columns don't work very well together with TPC, since there will be duplicate entity keys when inserting into subclass tables that all use the same identity seed. To solve this, either a spread seed (where each table has its own initial seed value) is needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms that allow a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. GUID keys, or int identity keys with different starting seeds, would solve the problem, but yet another solution is to switch off identity on the primary key property altogether. As a result, we take on the responsibility of providing unique keys when inserting records into the database. We will go with this solution, since it works regardless of which database engine is used.

    Switching Off Identity in Code First

    We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. DatabaseGenerated is a new data annotation added to the System.ComponentModel.DataAnnotations namespace in CTP5:

        public abstract class BillingDetail
        {
            [DatabaseGenerated(DatabaseGenerationOption.None)]
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

    As always, we can achieve the same result with the fluent API, if you prefer that:

        modelBuilder.Entity<BillingDetail>()
                    .Property(p => p.BillingDetailId)
                    .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

    Working With the Object Model

    Our TPC mapping is ready, and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount()
            {
                BillingDetailId = 1
            };
            CreditCard creditCard = new CreditCard()
            {
                BillingDetailId = 2,
                CardType = 1
            };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Polymorphic Associations with TPC are Problematic

    The main problem with this approach is that it doesn't support polymorphic associations very well.
    After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced earlier in this series, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema: if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column that refers to both concrete subclass tables, which isn't possible with regular foreign key constraints.

    Schema Evolution with TPC is Complex

    A further conceptual problem with this mapping strategy is that several different columns, in different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

    Generated SQL

    Let's examine the SQL output for polymorphic queries under the TPC mapping. For example, consider this polymorphic query for all BillingDetails:

        var query = from b in context.BillingDetails select b;

    Just like the SQL generated by the TPT mapping, the CASE statements at the beginning of the generated query merely ensure that columns that are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPC's SQL Queries are Union Based

    In the generated SQL, the first SELECT uses a FROM-clause subquery to retrieve all instances of BillingDetail from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; EF reads this literal to instantiate the correct class for a given row. A union requires that the queries being combined project over the same columns, hence EF has to pad out the nonexistent columns with NULL. This query will perform well, since we can let the database optimizer find the best execution plan to combine rows from several tables. There are also no joins involved, so it performs better than the SQL generated by TPT, where a join between the base and subclass tables is required.

    Choosing Strategy Guidelines

    Before we get into this discussion, I want to emphasize that there is no single best strategy that fits all scenarios. As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb for identifying the best strategy in a particular scenario:

    If you don't require polymorphic associations or queries, lean toward TPC; in other words, choose it if you never or rarely query for BillingDetails and you have no class with an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.

    If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH.
    Your goal then is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.

    If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly in the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.

    By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

    Summary

    In this series we focused on one of the main structural aspects of the object/relational paradigm mismatch, inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully this gives you better insight into the mapping of inheritance hierarchies, as well as into choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

    References: the ADO.NET team blog and the book Java Persistence with Hibernate.

    Read the article

  • Load balancing and HTTPS strategies

    - by Dan
    I am faced with the following problem: servers get saturated, since the current load-balancing strategy is based on client IP. Some corporate clients access our servers from behind large proxies, so all of their users appear to our load balancer with the same IP. I think we are using a hardware load-balancing device (I can investigate further if necessary). We need to maintain session affinity (the site is built in ASP), so all requests with the same IP get routed to the same node. Since all the communication goes over HTTPS, no request data (like a session ID) is available to the balancer as a client discriminator. Is there a way to use some data other than the IP to distinguish between clients, and route clients to different nodes even when they come from the same IP? Note: I need to keep the traffic between the balancer and the nodes safe (encrypted).

    Read the article

  • Partner Showcase -- GreyHeller

    - by PeopleTools Strategy
    This is the next in a series of posts spotlighting some of our creative partners. GreyHeller is a PeopleSoft-focused software company founded by PeopleTools alumni Larry Grey and Chris Heller. GreyHeller's products focus on addressing the technology needs of PeopleSoft customers in the areas of mobile enablement, reporting/business intelligence, security, and change management. The company helps customers protect and extend their investment in PeopleSoft. GreyHeller's products and services are in use by nearly 100 PeopleSoft customers on 6 continents. Their product solutions are lightweight bolt-ons: extensions to a customer's PeopleSoft environment requiring no new infrastructure, which makes for rapid implementations. A major area of interest for PeopleSoft customers these days is mobile enablement. GreyHeller's current mobile implementations include the following customers:

        Texas Christian University (live; TCU student newspaper article here)
        Coppin State University (live)
        University of Cambridge (June go-live)
        HealthSouth (June go-live)
        Frostburg State University (Q3 go-live)
        Amedisys (Q3 go-live)

    GreyHeller maintains a PeopleTools-focused blog that provides tips, techniques, and code snippets aimed at helping PeopleSoft customers make the most of their PeopleSoft system. In addition to their blog, the GreyHeller team conducts and records weekly webinars that demonstrate the latest PeopleTools features, tips, and techniques. Recordings of these webinars can be accessed here. Visit GreyHeller's web site for more information on the company and its work.

    Read the article

  • Fastest Strategy for Drawing Flash Particles

    - by user146543
    I am wondering what the fastest approach might be for using Flash/AS3 to render a 2D point plot (a sonar lofargram): basically screens full of tiny 2D dots, drawn as fast as possible. Any suggestions on where to start? Is Flash capable of displaying tens of thousands of 2D points (or more) at an acceptable frame rate?

    Read the article

  • Mercurial Tagging/Branching Strategy

    - by Tony Trozzo
    My current project is broken down into 3 parts: Website, Desktop Client, and a Plug-in for a third party program. We had started out originally with Subversion for our source control but decided to try Mercurial after reading Joel Spolsky's final post. Considering we haven't really used the majority of svn's potential before, we figured starting fresh with some basic ideas of how source control worked would make this transition easy. However, after setting up our initial repository, we're lost as to how tagging and branching should work on a project like this. Essentially, we're working on all 3 of these parts at the same time. We want a release to be a combination of the 3 parts. Currently we're working in one repository. For the Plug-in part, we have the first iteration finished which we've been referring to as Plug-In v0.1. For the first official build of the other two parts, we'd also like to refer to them as Website v0.1 and Desktop Client v0.1. When all three parts are at v0.1, we'd like to have a Full Project v0.1. Our problem is we're not sure how to manage all of this in the Hg repository. Would the best way to handle this be to create 3 separate repositories for the 3 stable versions and then 3 more repositories for the current developments? Currently we have this all in one repository. Should we do this in branches (are branches any different from cloning repositories?) and tags? Any help is greatly appreciated.

    Read the article
