Search Results

Search found 2536 results on 102 pages for 'entities'.


  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) One of the things one might take for granted but which has a huge impact on the time spent in an entity modeling environment is the way the system creates names for elements out of the information provided, in short: automatic element name construction. Element names are created in both directions of modeling: database first and model first and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine grained system for creating element names out of the meta-data available, which I'll describe more in detail below. First the model element related element naming features are highlighted, in the section Automatic model element naming features and after that I'll go more into detail about the relational model element naming features LLBLGen Pro has to offer in the section Automatic relational model element naming features. Automatic model element naming features When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also takes into effect with model first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings. Open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction. Strippers! The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro offers you to define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names. Underscores Be Gone Another thing you might get rid of are underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow for underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to to so. PasCal everywhere... or not, your call LLBLGen Pro can automatically PasCal case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break and a case difference marks a word break. It will remove spaces in all cases, and based on the underscore removal setting, keep or remove the underscores, and upper-case the first character of a word break fragment, and lower case the rest. 
Say, we keep the defaults, which is remove underscores and PasCal case always and strip the TBL_ fragment, we get with our example TBL_ORDER_LINES, after stripping TBL_ from the table name two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased, the rest lower-cased, so this results in OrderLines. Almost there! Pluralization and Singularization In general entity names are singular, like Customer or OrderLine so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after. Show me the patterns! There are other situations in which you want more flexibility. Say, you have an entity Customer and an entity Order and there's a foreign key constraint defined from the target of Order and the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto the relationship in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define how the auto-created navigator names will look like. For example, if you rather have Customer.OrderCollection, you can do so, by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed. When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities: Customer and Order, and they have their fields setup properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate the foreign key fields in your classes, on NHibernate and EF these can be hidden in the generated code if you want to). A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as PK field, you might want to have a different name than Id as foreign key field. In our Customer - Order example, you might want to have CustomerId instead as foreign key name in Order. The pattern for foreign key fields gives you that freedom. Abbreviations... make sense of OrdNr and friends I already described word breaks in the PasCal casing paragraph, how they're used for the PasCal casing in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office has a hate-hate relationship with his keyboard: he can't stand it: typing is something he avoids like the plague. This has resulted in tables and fields which have names which are very short, but also very unreadable. Example: our TBL_ORDER_LINES example has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you also like is to not have to rename that field manually. 
There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation - full word pairs and during reverse engineering model elements from tables/views, LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two values: ORD as abbreviation and Order as full word, and NR as abbreviation and Number as full word. LLBLGen Pro will now convert every word fragment found with the word breaks which matches an abbreviation to the given full word. They're case sensitive and can be found in the Project Settings: Navigate to Conventions -> Element Name Construction -> Abbreviations. Automatic relational model element naming features Not everyone works database first: it may very well be the case you start from scratch, or have to add additional tables to an existing database. For these situations, it's key you have the flexibility that you can control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development. Underscores, welcome back! Not every database is case insensitive, and not every organization requires PasCal cased table/field names, some demand all lower or all uppercase names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with: case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower case characters there and you need all caps. No worries, see below Casing directives so everyone can sleep well at night For case sensitive databases and case insensitive databases there is one setting for each of them which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example, we need to get all upper case characters, so we select AllUpperCase for the setting for case sensitive databases. This will produce the name ORDER_LINE. Sequence naming after a pattern Some databases support sequences, and using model-first development it's key to have sequences, when needed, to be created automatically and if possible using a name which shows where they're used. Say you have an entity Order and you want to have the PK values be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle) and as you want all numeric PK fields to be sequenced, you have enabled this by the setting Auto assign sequences to integer pks. 
When you're using LLBLGen Pro's auto-map feature, to create new tables and constraints from the model, it will create a new table, ORDER, based on your settings I previously discussed above, with a PK field ID and it also creates a sequence, SEQ_ORDER, which is auto-assigns to the ID field mapping. The name of the sequence is created by using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like with the other patterns previously discussed. Grouping and schemas When you start from scratch, and you're working model first, the tables created by LLBLGen Pro will be in a catalog and / or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique for these groups, e.g. Employee in HRM, Customer in CRM. When auto-mapping this model to create tables, you might want to have the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you check the setting Set schema name after group name to true (default). This gives you total control over where what is placed in the database from your model. But I want plural table names... and TBL_ prefixes! For now we follow best practices which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making it possible to have that in a future version. Conclusion LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and that it has given you new insights in the smaller features available to you in LLBLGen Pro, ones you might not have thought off in the first place. Enjoy!
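    A minimal C# sketch of the name-construction pipeline described above (strip patterns, word breaks, Pascal casing, abbreviation expansion, singularization). It is illustrative only and not LLBLGen Pro's actual implementation; the strip pattern and abbreviation pairs are the examples from the post.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      static class NameConstructor
      {
          // Abbreviation -> full word pairs, case sensitive, as described in the post
          static readonly Dictionary<string, string> Abbreviations =
              new Dictionary<string, string> { { "ORD", "Order" }, { "NR", "Number" } };

          public static string Construct(string dbName, string[] stripPrefixes, bool singularize)
          {
              // 1. Strip prefix fragments such as "TBL_"
              foreach (string prefix in stripPrefixes)
              {
                  if (dbName.StartsWith(prefix)) { dbName = dbName.Substring(prefix.Length); break; }
              }

              // 2. Split on word breaks (underscores here), expand abbreviations, Pascal-case each fragment
              IEnumerable<string> fragments = dbName
                  .Split(new[] { '_' }, StringSplitOptions.RemoveEmptyEntries)
                  .Select(f => Abbreviations.ContainsKey(f) ? Abbreviations[f] : f)
                  .Select(f => char.ToUpperInvariant(f[0]) + f.Substring(1).ToLowerInvariant());
              string name = string.Concat(fragments);

              // 3. Naive singularization; the real designer uses a proper pluralization service
              if (singularize && name.EndsWith("s")) name = name.Substring(0, name.Length - 1);
              return name;
          }
      }

      // NameConstructor.Construct("TBL_ORDER_LINES", new[] { "TBL_" }, true)  -> "OrderLine"
      // NameConstructor.Construct("ORD_NR", new string[0], false)             -> "OrderNumber"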

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
    In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1 Shaping the feed The Product feed we created earlier returns too much information about our products. Let’s change this so that only the following properties are returned – ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only Products that are not discontinued.  Splitting the Entity To shape our data according to the requirements above, we are going to split our Product Entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other Entity in our Query Interceptor to pre-filter the data so that discontinued products are not returned. Go to the design surface for the Entity Model and make a copy of the Product entity. A “Product1” Entity gets created.   Rename Product1 to ProductDetail. Right click on the Product entity and select “Add Association” Make a one to one association between Product and ProductDetails.   Keep only the properties we wish to expose on the Product entity and delete all other properties on it (see diagram below). You delete a property on an Entity by right clicking on the property and selecting “delete”. Keep the ProductID on the ProductDetail. Delete any other property on the ProductDetail entity that is already present in the Product entity. Your design surface should look like below:    Mapping Entity to Database Tables Right click on “ProductDetail” and go to “Table Mapping”   Add a mapping to the “Products” table in the Mapping Details.   After mapping ProductDetail, you should see the following.   Add a referential constraint. Lets add a referential constraint which is similar to a referential integrity constraint in SQL. Double click on the Association between the Entities and add the constraint with “Principal” set to “Product”. Let us review what we did so far. We made a copy of the Product entity and called it ProductDetail We created a one to one association between these entities Excluding the ProductID, we made sure properties were not duplicated between these entities  We added a ProductDetail entity to Products table mapping (Entity to Database). We added a referential constraint between the entities. Lets build our project. We get the following error: ”'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found …" The reason for this error is because our Product Entity no longer has a “Discontinued” property. We “moved” it to the ProductDetail entity since we want our Product Entity to contain only properties that will be exposed by our feed. Since we have a one to one association between the entities, we can easily rewrite our Query Interceptor like so: [QueryInterceptor("Products")] public Expression<Func<Product, bool>> OnReadProducts() { return o => o.ProductDetail.Discontinued == false; } Similarly, all “hidden” properties of the Product table are available to us internally (through the ProductDetail Entity) for any additional logic we wish to implement. Compile the project and view the feed. We see that the feed returns only the properties that were part of the requirement.   
To see the data in JSON format, you have to create a request with the following request header Accept: application/json, text/javascript, */* (easy to do in jQuery) The result should look like this: { "d" : { "results": [ { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(1)", "type": "NorthwindModel.Product" }, "ProductID": 1, "ProductName": "Chai", "QuantityPerUnit": "10 boxes x 20 bags", "UnitPrice": "18.0000", "UnitsInStock": 39 }, { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(2)", "type": "NorthwindModel.Product" }, "ProductID": 2, "ProductName": "Chang", "QuantityPerUnit": "24 - 12 oz bottles", "UnitPrice": "19.0000", "UnitsInStock": 17 }, { ... ... If anyone has the $format operation working, please post a comment. It was not working for me at the time of writing this.  We have successfully pre-filtered our data to expose only products that have not been discontinued and shaped our data so that only certain properties of the Entity are exposed. Note that there are several other ways you could implement this like creating a QueryView, Stored Procedure or DefiningQuery. You have seen how easy it is to create an OData feed, shape the data and pre-filter it by hardly writing any code of your own. For more details on OData, Google it with your favorite search engine :-) Also check out the one of the most passionate persons I have ever met, Pablo Castro – the Architect of Aristoria WCF Data Services. Watch his MIX 2010 presentation titled “OData: There's a Feed for That” here. Download Sample Project for VS 2010 RTM NortwindODataFeed.zip
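    If you would rather request the JSON format from C# than from jQuery, setting the same Accept header on a plain HttpWebRequest is enough. A small sketch follows; the port number is taken from the sample URIs above and will differ for your own service.

      using System;
      using System.IO;
      using System.Net;

      class Program
      {
          static void Main()
          {
              var request = (HttpWebRequest)WebRequest.Create("http://localhost:2576/DataService.svc/Products");
              // The Accept header is what makes WCF Data Services return JSON instead of Atom
              request.Accept = "application/json, text/javascript, */*";

              using (var response = request.GetResponse())
              using (var reader = new StreamReader(response.GetResponseStream()))
              {
                  Console.WriteLine(reader.ReadToEnd()); // {"d": {"results": [ ... ]}}
              }
          }
      }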

    Read the article

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it but I would like your opinion The first approach I considered was: class Customer : EntityBase<Customer> { public void ChangeEmail(string email) { if(string.IsNullOrWhitespace(email)) throw new DomainException(“...”); if(!email.IsEmail()) throw new DomainException(); if(email.Contains(“@mailinator.com”)) throw new DomainException(); } } I actually do not like this validation because even when I am encapsulating the validation logic in the correct entity, this is violating the Open/Close principle (Open for extension but Close for modification) and I have found that violating this principle, code maintenance becomes a real pain when the application grows up in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, hard to maintain but the real reason why I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example but in RL the validation could be more complex So following the philosophy of Udi Dahan, making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer> { private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver; public bool IsSatisfiedBy(Customer customer) { return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email); } } But then I realize that in order to follow this approach I had to mutate my entities first in order to pass the value being valdiated, in this case the email, but mutating them would cause my domain events being fired which I wouldn’t like to happen until the new email is valid So after considering these approaches, I came out with this one, since I am going to implement a CQRS architecture: class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand> { public void IsValid(Customer entity, ChangeEmailCommand command) { if(!command.Email.HasValidDomain()) throw new DomainException(“...”); } } Well that’s the main idea, the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user and since the validators are considered injectable objects they could have external dependencies injected if the validation requires it. Now the dilemma, I am happy with a design like this because my validation is encapsulated in individual objects which brings many advantages: easy unit test, easy to maintain, domain invariants are explicitly expressed using the Ubiquitous Language, easy to extend, validation logic is centralized and validators can be used together to enforce complex domain rules. And even when I know I am placing the validation of my entities outside of them (You could argue a code smell - Anemic Domain) but I think the trade-off is acceptable But there is one thing that I have not figured out how to implement it in a clean way. How should I use this components... 
Since they will be injected, they won’t fit naturally inside my domain entities, so basically I see two options: Pass the validators to each method of my entity Validate my objects externally (from the command handler) I am not happy with the option 1 so I would explain how I would do it with the option 2 class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand> { public void Execute(ChangeEmailCommand command) { private IEnumerable<IDomainInvariantValidator> validators; // here I would get the validators required for this command injected, and in here I would validate them, something like this using (var t = this.unitOfWork.BeginTransaction()) { var customer = this.unitOfWork.Get<Customer>(command.CustomerId); this.validators.ForEach(x =. x.IsValid(customer, command)); // here I know the command is valid // the call to ChangeEmail will fire domain events as needed customer.ChangeEmail(command.Email); t.Commit(); } } } Well this is it. Can you give me your thoughts about this or share your experiences with Domain entities validation EDIT I think it is not clear from my question, but the real problem is: Hiding the domain rules has serious implications in the future maintainability of the application, and also domain rules change often during the life-cycle of the app. Hence implementing them with this in mind would let us extend them easily. Now imagine in the future a rules engine is implemented, if the rules are encapsulated outside of the domain entities, this change would be easier to implement
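    For reference, a sketch of how the IDomainInvariantValidator abstraction used above could be declared and consumed in option 2. The supporting types (ICommandHandler, IUnitOfWork, Customer, ChangeEmailCommand, DomainException) are assumed from the question, and the injected set of validators stands in for whatever your IoC container resolves.

      public interface IDomainInvariantValidator<TEntity, TCommand>
      {
          // Throws DomainException when the invariant does not hold
          void IsValid(TEntity entity, TCommand command);
      }

      public class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand>
      {
          private readonly IEnumerable<IDomainInvariantValidator<Customer, ChangeEmailCommand>> validators;
          private readonly IUnitOfWork unitOfWork;

          // All validators registered for (Customer, ChangeEmailCommand) are injected as a set
          public ChangeEmailCommandHandler(
              IEnumerable<IDomainInvariantValidator<Customer, ChangeEmailCommand>> validators,
              IUnitOfWork unitOfWork)
          {
              this.validators = validators;
              this.unitOfWork = unitOfWork;
          }

          public void Execute(ChangeEmailCommand command)
          {
              using (var t = this.unitOfWork.BeginTransaction())
              {
                  var customer = this.unitOfWork.Get<Customer>(command.CustomerId);

                  foreach (var validator in this.validators)
                      validator.IsValid(customer, command); // throws on the first broken invariant

                  customer.ChangeEmail(command.Email);      // safe to mutate and raise domain events now
                  t.Commit();
              }
          }
      }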

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft MVP in Asp.Net James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chalk full of sessions, presentations, workshops, conversations and, of course, questions.  One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, if it was different from SQL server, if it was the same as EF proper and if there was a way to override whatever the default was.  So we built a set of examples and figured out that the answer is simple: it depends.  (Download Samples) Consider the example of a hockey league. You have several different entities in the league including games, teams that play the games and players that make up the teams. Each team also has a mascot.  If you delete a team, we need a couple of things to happen: The team, games and mascot will be deleted, and The players for that team will remain in the league (and therefore the database) but they should no longer be assigned to a team. So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project. The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach First up let’s take a look at the DB first approach.  In the database, we defined 4 tables: Teams, Players, Mascots, and Games.  We also defined 4 foreign keys as follows: Players.Team_Id (NULL) –> Teams.Id Mascots.Id (NOT NULL) –> Teams.Id (ON DELETE CASCADE) Games.HomeTeam_Id (NOT NULL) –> Teams.Id Games.AwayTeam_Id (NOT NULL) –> Teams.Id Note that by specifying ON DELETE CASCADE for the Mascots –> Teams foreign key, the database will automatically delete the team’s mascot when the team is deleted.  While we want the same behaviour for the Games –> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server.  Specifying a ON DELETE CASCADE on these foreign keys would cause a circular reference error: The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE – MSDN When we create an entity data model from the above database, we get the following:   In order to get the Games to be deleted when the Team is deleted, we need to specify End1 OnDelete action of Cascade for the HomeGames and AwayGames associations.   Now, we have an Entity Data Model that accomplishes what we set out to do.  One caveat here is that Entity Framework will only properly handle the cascading delete when the the players and games for the team have been loaded into memory.  For a more detailed look at Cascade Delete in EF Database First, take a look at this blog post by Alex James.   Building The Same Sample with EF Code First Next, we're going to build up the model with the code first approach.  
EF Code First is defined on the Ado.Net team blog as such: Code First allows you to define your model using C# or VB.Net classes, optionally additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database. Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship.  More details can be found on MSDN: If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship. If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null. The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First). Our DbContext consists of 4 DbSets: public DbSet<Team> Teams { get; set; } public DbSet<Player> Players { get; set; } public DbSet<Mascot> Mascots { get; set; } public DbSet<Game> Games { get; set; } When we set the Mascot –> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted.  This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API. Data Annotations: public class Mascot { public int Id { get; set; } public string Name { get; set; } [Required] public virtual Team Team { get; set; } } Fluent API: protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Entity<Mascot>().HasRequired(m => m.Team); } The Player –> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null.  No additional configuration is required, however all the Player entities must be loaded into memory for the cascading to work properly. The Game –> Team relationship causes some grief in our Code First example.  If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set On Cascade Delete for the HomeTeam and AwayTeam foreign keys when creating the database tables.  As we saw in the database first example, this causes a circular reference error and throws the following SqlException: Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. To solve this problem, we need to disable the default cascade delete behaviour using the fluent API: protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Entity<Mascot>().HasRequired(m => m.Team); modelBuilder.Entity<Team>() .HasMany(t => t.HomeGames) .WithRequired(g => g.HomeTeam) .WillCascadeOnDelete(false); modelBuilder.Entity<Team>() .HasMany(t => t.AwayGames) .WithRequired(g => g.AwayTeam) .WillCascadeOnDelete(false); base.OnModelCreating(modelBuilder); } Unfortunately, this means we need to manually manage the cascade delete behaviour.  When a Team is deleted, we need to manually delete all the home and away Games for that Team. 
foreach (Game awayGame in jets.AwayGames.ToArray()) { entities.Games.Remove(awayGame); } foreach (Game homeGame in homeGames) { entities.Games.Remove(homeGame); } entities.Teams.Remove(jets); entities.SaveChanges();   Overriding the Defaults – When and How To As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API.  This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity.  More information is available on MSDN.   Going Further These were simple examples but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that: Entity Framework Code First also works seamlessly with SQL Azure (MSDN) Database creation defaults can be overridden using a variety of IDatabaseInitializers  (Understanding Database Initializers) You can use Code Based migrations to manage database upgrades as your model continues to evolve (MSDN) Next Steps There's no time like the present to start the learning, so here's what you need to do: Get up-to-date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012) Build yourself a project to try these concepts out (or download the sample project) Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!). Good luck! About the Authors David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta where he builds commercial software products for the energy industry.  Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community giving presentations both locally and at conferences. David also serves as the President of Calgary .Net User Group. James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba where he resists the urge to cheer for the Jets and maintains he allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.
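    For context, these are roughly the entity classes the fluent configuration above is working against; a sketch based on the conventions described in the post rather than the downloadable sample itself.

      using System.Collections.Generic;
      using System.ComponentModel.DataAnnotations;

      public class Team
      {
          public int Id { get; set; }
          public string Name { get; set; }
          public virtual Mascot Mascot { get; set; }
          public virtual ICollection<Player> Players { get; set; }
          public virtual ICollection<Game> HomeGames { get; set; }
          public virtual ICollection<Game> AwayGames { get; set; }
      }

      public class Player
      {
          public int Id { get; set; }
          public string Name { get; set; }
          public virtual Team Team { get; set; }   // optional reference: nullable FK, so no cascade; set to null on delete
      }

      public class Mascot
      {
          public int Id { get; set; }
          public string Name { get; set; }
          [Required]                               // required reference: Code First cascades the delete by convention
          public virtual Team Team { get; set; }
      }

      public class Game
      {
          public int Id { get; set; }
          public virtual Team HomeTeam { get; set; }   // required via the fluent WithRequired calls,
          public virtual Team AwayTeam { get; set; }   // with WillCascadeOnDelete(false) to avoid the cycle
      }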

    Read the article

  • Entity component system -> handling components that depend on one another

    - by jtedit
    I really like the idea of an entity component system and feel it has great flexibility, but have a question. How should dependent components be handled? I'm not talking about how components should communicate with other components they depend on, I have that sorted, but rather how to ensure components are present. For example, an entity cannot have a "velocity" component if it doesn't have a "position" component, in the same way it can't have an "acceleration" component if it doesn't have a "velocity" component. My first idea was that every component class overrides an "onAddedToEntity(Entity ent)" function. Then in that function it checks that prerequisite components are also added to the entity, eg: struct EntCompVelocity : public EntityComponent { // member variables here void onAddedToEntity(Entity ent) { if(!ent.hasComponent(EntCompPosition::Id)) { ent.addComponent(new EntCompPosition()); } } }; This has the nice property that if the acceleration component adds the velocity component, the velocity component will itself add the position component to the entity, so dependency "trees" will sort themselves out. However my concern is that if I do this, components will silently be added with default values and, in the example of adding position, many entities will appear at the origin. Another idea was to simply have the Entity.addComponent() function return false if the component's prerequisite components aren't already on the entity; this would force you to manually add the position component and set its value before adding the velocity component. Finally, I could simply not ensure a component's prerequisite components are added: the "UpdatePosition" system only deals with entities with both a position and velocity component, so adding a velocity component without having a position component won't be a problem (it won't cause crashes due to null pointers/etc), but it does mean entities will carry useless unused data if you add components but not their prerequisite components. Does anyone have experience with this problem and/or any of these methods to solve it? How did you solve the problem?
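    A sketch of the second idea (addComponent returning false when prerequisites are missing), written in C# rather than the question's C++; the Prerequisites property is an assumed convention, not part of any particular framework.

      using System;
      using System.Collections.Generic;

      public abstract class EntityComponent
      {
          // Each component declares the component types it depends on
          public virtual Type[] Prerequisites { get { return new Type[0]; } }
      }

      public class Entity
      {
          private readonly Dictionary<Type, EntityComponent> components =
              new Dictionary<Type, EntityComponent>();

          public bool HasComponent(Type componentType)
          {
              return components.ContainsKey(componentType);
          }

          public bool AddComponent(EntityComponent component)
          {
              // Refuse the add instead of silently creating prerequisites with default values
              foreach (Type required in component.Prerequisites)
              {
                  if (!HasComponent(required)) return false;
              }
              components[component.GetType()] = component;
              return true;
          }
      }

      // A velocity component would override Prerequisites to return new[] { typeof(PositionComponent) },
      // forcing the caller to add (and initialize) the position component first.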

    Read the article

  • NHibernate Tools: Visual NHibernate

    - by Ricardo Peres
    You probably know that I’m a big fan of Slyce Software’s Visual NHibernate. To me, it is the best tool for generating your entities and mappings from an existing database (it also allows you to go the other way, but I honestly have never used it that way). What I like most about it: Great support: folks at Slyce always listen to your suggestions, give you feedback in a timely manner, and I was even lucky enough to have some of my suggestions implemented! The templating engine, which is very powerful, and more user-friendly than, for example, MyGeneration’s; one of the included templates is Sharp Architecture; Advanced model validations: it even warns you about having lazy properties declared in non-lazy entities; Integration with NHibernate Validator and generation of validation rules automatically based on the database, or on user-defined model settings; The designer: they opted for not displaying all entities in a single screen, which I think was a good decision; has support for all inheritance strategies (table per class hierarchy, table per class, table per concrete class); Generation of FluentNHibernate mappings as well as hbm.xml. I could name others, but… why don’t you see for yourself? There is a demo version available for downloading. By the way, I am in no way related to Slyce, I just happen to like their software!

    Read the article

  • How to update entity states and animations in a component-based game

    - by mivic
    I'm trying to design a component-based entity system for learning purposes (and later use on some games) and I'm having some troubles when it comes to updating entity states. I don't want to have an update() method inside the Component to prevent dependencies between Components. What I currently have in mind is that components hold data and systems update components. So, if I have a simple 2D game with some entities (e.g. player, enemy1, enemy 2) that have Transform, Movement, State, Animation and Rendering components I think I should have: A MovementSystem that moves all the Movement components and updates the State components And a RenderSystem that updates the Animation components (the animation component should have one animation (i.e. a set of frames/textures) for each state and updating it means selecting the animation corresponding to the current state (e.g. jumping, moving_left, etc), and updating the frame index). Then, the RenderSystem updates the Render components with the texture corresponding to the current frame of each entity's Animation and renders everything on screen. I've seen some implementations like Artemis framework, but I don't know how to solve this situation: Let's say that my game has the following entities. Each entity have a set of states and one animation for each state: player: "idle", "moving_right", "jumping" enemy1: "moving_up", "moving_down" enemy2: "moving_left", "moving_right" What are the most accepted approaches in order to update the current state of each entity? The only thing that I can think of is having separate systems for each group of entities and separate State and Animation components so I would have PlayerState, PlayerAnimation, Enemy1State, Enemy1Animation... PlayerMovementSystem, PlayerRenderingSystem... but I think this is a bad solution and breaks the purpose of having a component-based system. As you can see, I'm quite lost here, so I'd very much appreciate any help.
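    One common way to keep a single State and Animation component type for all entities is to key the animation clips by state name, so the same AnimationSystem can serve the player and both enemies. A sketch under that assumption (plain data components, update logic in the system):

      using System.Collections.Generic;

      public class StateComponent
      {
          public string CurrentState;            // e.g. "idle", "moving_right", "jumping"
      }

      public class AnimationClip
      {
          public int[] FrameIndices;
          public float FrameDuration;            // assumed > 0
      }

      public class AnimationComponent
      {
          public Dictionary<string, AnimationClip> ClipsByState = new Dictionary<string, AnimationClip>();
          public string PlayingState;
          public int CurrentFrame;
          public float TimeInFrame;
      }

      public class AnimationSystem
      {
          // Called once per frame for every entity that has both components
          public void Update(StateComponent state, AnimationComponent animation, float elapsedSeconds)
          {
              if (animation.PlayingState != state.CurrentState)
              {
                  animation.PlayingState = state.CurrentState;   // state changed: restart the matching clip
                  animation.CurrentFrame = 0;
                  animation.TimeInFrame = 0f;
              }

              AnimationClip clip = animation.ClipsByState[animation.PlayingState];
              animation.TimeInFrame += elapsedSeconds;
              while (animation.TimeInFrame >= clip.FrameDuration)
              {
                  animation.TimeInFrame -= clip.FrameDuration;
                  animation.CurrentFrame = (animation.CurrentFrame + 1) % clip.FrameIndices.Length;
              }
          }
      }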

    Read the article

  • Quaternion LookAt for camera

    - by Homar
    I am using the following code to rotate entities to look at points. glm::vec3 forwardVector = glm::normalize(point - position); float dot = glm::dot(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector); float rotationAngle = (float)acos(dot); glm::vec3 rotationAxis = glm::normalize(glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector)); rotation = glm::normalize(glm::quat(rotationAxis * rotationAngle)); This works fine for my usual entities. However, when I use this on my Camera entity, I get a black screen. If I flip the subtraction in the first line, so that I take the forward vector to be the direction from the point to my camera's position, then my camera works but naturally my entities rotate to look in the opposite direction of the point. I compute the transformation matrix for the camera and then take the inverse to be the View Matrix, which I pass to my OpenGL shaders: glm::mat4 viewMatrix = glm::inverse( cameraTransform->GetTransformationMatrix() ); The orthographic projection matrix is created using glm::ortho. What's going wrong?

    Read the article

  • Proper method to update and draw from game loop?

    - by Lost_Soul
    Recently I've took up the challenge for myself to create a basic 2d side scrolling monster truck game for my little brother. Which seems easy enough in theory. After working with XNA it seems strange jumping into Java (which is what I plan to program it in). Inside my game class I created a private class called GameLoop that extends from Runnable, then in the overridden run() method I made a while loop that handles time and such and I implemented a targetFPS for drawing as well. The loop looks like this: @Override public void run() { long fpsTime = 0; gameStart = System.currentTimeMillis(); lastTime = System.currentTimeMillis(); while(game.isGameRunning()) { currentTime = System.currentTimeMillis(); long ellapsedTime = currentTime - lastTime; if(mouseState.leftIsDown) { que.add(new Dot(mouseState.getPosition())); } entities.addAll(que); game.updateGame(ellapsedTime); fpsTime += ellapsedTime; if(fpsTime >= (1000 / targetedFPS)) { game.drawGame(ellapsedTime); } lastTime = currentTime; } The problem I've ran into is adding of entities after a click. I made a class that has another private class that extends MouseListener and MouseMotionListener then on changes I have it set a few booleans to tell me if the mouse is pressed or not which seems to work great but when I add the entity it throws a CME (Concurrent Modification Exception) sometimes. I have all the entities stored in a LinkedList so later I tried adding a que linkedlist where I later add the que to the normal list in the update loop. I think this would work fine if it was just the update method in the gameloop but with the repaint() method (called inside game.drawGame() method) it throws the CME. The only other thing is that I'm currently drawing directly from the overridden paintComponent() method in a custom class that extends JPanel. Maybe there is a better way to go about this? As well as fix my CME? Thanks in advance!!!

    Read the article

  • Entity system in Lua, communication with C++ and level editor. Need advice.

    - by Notbad
    Hi! I know this is a really difficult subject. I have been reading a lot these days about entity systems, etc... And now I'm ready to ask some questions (if you don't mind answering them) because I'm really confused. First of all, I have a basic 2D editor written in Qt, and I'm in the process of adding entity editing. I want the editor to be able to receive RTTI information from entities to change properties, create some logic by linking published events to published actions (e.g. a level-activate event triggers a door-open action), etc... Because of all this I guess my entity system should be written in a scripting language, in my case Lua. On the other hand I want to use a component-based design for my entities, and here start my questions: 1) Should I define my components in C++? If I do this in C++, won't I lose all the RTTI information I want for my editor? On the other hand, I use Box2D for physics; if I define all my components in script, won't it be a lot of work to expose third-party libs to Lua? 2) Where should I place the message system for my game engine? Lua? C++? I'm tempted to just have C++ objects behave as servers, offering services to the Lua business logic. Things like the physics system, rendering system, input system, World class, etc... And everything else in Lua: creation/composition of entities based on components, game logic, etc... Could anyone give any insight on how to accomplish this? And which approach is better? Thanks in advance, HexDump.

    Read the article

  • Creating shooting arrow class [on hold]

    - by I.Hristov
    OK, I am trying to write an XNA game with one player-controlled entity, while the rest are bots (enemy and friendly) wandering around and... shooting each other from range. Now the shooting I suppose should be done with a separate class Arrow (for example). The resulting object would be the arrow appearing on screen, moving from the shooting entity to the target entity. When the target is reached, the arrow is no longer active and is probably removed from the list. I plan to make a class with fields: Vector2 shootingEntity; Vector2 targetEntity; float arrowSpeed; float arrowAttackSpeed; int damageDone; bool isActive; Then when enemy entities get closer than an int rangeToShoot (which each entity will have as a field/prop) I plan to make a list of arrows emerging from each entity and going to the closest opposing one. I wonder if that logic will later make it possible for many entities to shoot independently at different enemy entities at the same time. I know the question is broad, but it would be wise to ask whether the foundations of the idea are correct.
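    As a starting point, a sketch of the Arrow class built from the fields listed above; here the target is captured as a fixed point when the arrow is fired (tracking a moving entity would instead store a reference to it and read its position each update).

      using Microsoft.Xna.Framework;

      public class Arrow
      {
          public Vector2 Position;
          public Vector2 Target;
          public float Speed;          // pixels per second
          public int Damage;
          public bool IsActive = true;

          public Arrow(Vector2 shooterPosition, Vector2 targetPosition, float speed, int damage)
          {
              Position = shooterPosition;
              Target = targetPosition;
              Speed = speed;
              Damage = damage;
          }

          public void Update(GameTime gameTime)
          {
              if (!IsActive) return;

              float step = Speed * (float)gameTime.ElapsedGameTime.TotalSeconds;
              Vector2 toTarget = Target - Position;

              if (toTarget.Length() <= step)
              {
                  Position = Target;
                  IsActive = false;    // reached the target; apply Damage and remove from the list
              }
              else
              {
                  toTarget.Normalize();
                  Position += toTarget * step;
              }
          }
      }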

    Read the article

  • Faster 2D Collision detection

    - by eShredder
    Recently I've been working on a fast-paced 2D shooter and I came across a mighty problem: collision detection. Sure, it is working, but it is very slow. My goal is: have lots of enemies on screen and have them not touch each other. All of the enemies are chasing the player entity. Most of them have the same speed, so sooner or later they all end up occupying the same space while chasing the player. This really drops the fun factor since, for the player, it looks like you are being chased by one enemy only. To prevent them from occupying the same space I added collision detection (a very basic 2D detection, the only method I know of), which is: in the Enemy class update method, loop through all enemies (continue if the loop points at this object); if an enemy object intersects with this object, push it away from this object. This works fine, as long as I only have <200 enemy entities that is. When I get closer to 300-350 enemy entities my frame rate begins to drop heavily. First I thought it was bad rendering, so I removed their draw call. This did not help at all, so of course I realised it was the update method. The only heavy part in their update method is this each-enemy-loops-through-every-enemy part. When I get closer to 300 enemies, the game does a 90,000-step (300x300) iteration. My my~ I'm sure there must be another way to approach this collision detection, though I have no idea how. The pages I find are about how to actually do the collision between two objects or how to check collision between an object and a tile. I already know those two things. tl;dr? How do I approach collision detection between LOTS of entities? Quick edit: If it is of any help, I'm using C# XNA.
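    One common answer (not taken from the question itself) is to add a broad phase such as a uniform grid: rebucket the enemies each frame and only test entities that share a cell or a neighbouring cell, which keeps the pairwise checks local instead of 300x300. A C#/XNA-flavoured sketch, with Enemy assumed to expose a Position:

      using System.Collections.Generic;
      using Microsoft.Xna.Framework;

      public class UniformGrid
      {
          private readonly int cellSize;
          private readonly Dictionary<long, List<Enemy>> cells = new Dictionary<long, List<Enemy>>();

          public UniformGrid(int cellSize) { this.cellSize = cellSize; }

          private long Key(int cellX, int cellY) { return ((long)cellX << 32) ^ (uint)cellY; }

          public void Clear() { cells.Clear(); }

          public void Insert(Enemy enemy)
          {
              long key = Key((int)(enemy.Position.X / cellSize), (int)(enemy.Position.Y / cellSize));
              List<Enemy> bucket;
              if (!cells.TryGetValue(key, out bucket)) cells[key] = bucket = new List<Enemy>();
              bucket.Add(enemy);
          }

          // Returns enemies in the same cell and the 8 neighbouring cells
          public IEnumerable<Enemy> Neighbours(Enemy enemy)
          {
              int cx = (int)(enemy.Position.X / cellSize), cy = (int)(enemy.Position.Y / cellSize);
              for (int x = cx - 1; x <= cx + 1; x++)
                  for (int y = cy - 1; y <= cy + 1; y++)
                  {
                      List<Enemy> bucket;
                      if (cells.TryGetValue(Key(x, y), out bucket))
                          foreach (Enemy other in bucket)
                              if (other != enemy) yield return other;
                  }
          }
      }

      // Each update: grid.Clear(); insert every enemy; then each enemy only resolves
      // overlaps against grid.Neighbours(this) instead of the whole enemy list.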

    Read the article

  • [EF + Oracle]Object Context

    - by JTorrecilla
    Prologue After EF episodes I and II, we are going to look at the Object Context. What is the Object Context? It is a class which manages the DB connection and the different Entities of our model. When Visual Studio creates the EF model, as I explained previously, it also generates a class that extends ObjectContext. ObjectContext provides: - DB connection - Add, update and delete functions - Object Sets of Entities - State of pending changes. This class exposes a function for each Entity, of the form "AddTo{ENTITY}({Entity_Type} value)", which adds an Entity to the related ObjectSet. In addition, it has a property for each Entity, like "ObjectSet<TEntity> Entity", which holds the related record set. It is filled with the CreateObjectSet<TEntity> function of the base class (ObjectContext). What is an ObjectSet? It is a class that allows us to manage the Entity Set for a Type. It inherits from: · ObjectQuery<TEntity> · IObjectSet<TEntity> · IQueryable<TEntity> · IEnumerable<TEntity> · IQueryable · IEnumerable An ObjectSet is a class property that allows you to query, insert, delete and update records of a given Entity. In following chapters we will see how to query Entities. LazyLoadingEnabled A very important property of the Context is "LazyLoadingEnabled". This Boolean property indicates whether data loading is lazy, in other words, whether the object will not be created and queried until it is needed. Finally In this post we have seen what the VS-generated context is, some of its characteristics, and where to see Entity data. In the next chapters we will see CRUD operations and how to query ObjectSets.
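    For illustration, roughly what such a generated context looks like; a trimmed sketch rather than the exact designer output, with Customer standing in for one of your model's entities.

      using System.Data.Objects;

      public class MyModelEntities : ObjectContext
      {
          private ObjectSet<Customer> customers;

          public MyModelEntities()
              : base("name=MyModelEntities", "MyModelEntities")
          {
              // Lazy loading: related data is only queried when the navigation property is touched
              this.ContextOptions.LazyLoadingEnabled = true;
          }

          // One ObjectSet property per entity, created on demand from the base ObjectContext
          public ObjectSet<Customer> Customers
          {
              get { return customers ?? (customers = CreateObjectSet<Customer>("Customers")); }
          }

          // One AddTo{Entity} helper per entity, as described in the post
          public void AddToCustomers(Customer customer)
          {
              Customers.AddObject(customer);
          }
      }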

    Read the article

  • what's a good approach to working with multiple databases?

    - by Riz
    I'm working on a project that has its own database call it InternalDb, but also it queries two other databases, call them ExternalDb1 and ExternalDb2. Both ExternalDb1 and ExternalDb2 are actually required by a few other projects. I'm wondering what the best approach for dealing with this is? Currently, I've just created a project for each of these external databases and then generated Edmx and entities using the entity-framework approach. My thought was that I could then include these projects in any of my solutions that require access to these databases. Also, I don't have any separate business layers. I just have a solution like below: Project.Domain ExternalDb1Project.Domain ExternalDb2Project.Domain Project.Web So my Domain projects contain the data access as well as the POCOs generated by Entity Framework and any business logic. But I'm not sure if this is a good approach. For example if I want to do Validation in my Project.Domain on the entities in the InternalDb, it's fine. But if I want to do Validation for entities from either of the ExternalDbs, then I wonder where it should go? To be more specific, I retrieve Employees from ExternalDb1Project.Domain. However, I want to make sure they are Active. Where should this Validation go? How to architect a project like this at a high level? Also, I want to make sure that I use IoC for my data contexts so I can create Fakes when writing tests. I wonder where the interfaces for these various data contexts would reside?
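    One possible shape for the "fakeable context" part of the question: each Domain project exposes an interface over its data context, and consumers take that interface through constructor injection. The names below (IExternalDb1Context, ExternalDb1Entities, Employee, IsActive) are placeholders, not from the question.

      using System.Linq;

      public interface IExternalDb1Context
      {
          IQueryable<Employee> Employees { get; }
      }

      // EF-backed implementation lives next to the generated entities in ExternalDb1Project.Domain
      public class ExternalDb1Context : IExternalDb1Context
      {
          private readonly ExternalDb1Entities context = new ExternalDb1Entities();
          public IQueryable<Employee> Employees { get { return context.Employees; } }
      }

      // Validation such as "only active employees" then sits in a service that only knows
      // about the interface, which makes it easy to substitute an in-memory fake in tests.
      public class EmployeeService
      {
          private readonly IExternalDb1Context externalDb;

          public EmployeeService(IExternalDb1Context externalDb) { this.externalDb = externalDb; }

          public IQueryable<Employee> GetActiveEmployees()
          {
              return externalDb.Employees.Where(e => e.IsActive);
          }
      }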

    Read the article

  • EF4 CTP5 Conflicting changes detected. This may happen when trying to insert multiple entities with the same key.

    - by user658332
    hi, I am new to EF4 CTP5. I am just hanging at one problem.. I am using CodeFirst without Database, so when i execute application, it generated DB for me. here is my scenario, i have following Class structure... public class KVCalculationWish { public KVCalculationWish() { } public int KVCalculationWishId { get; set; } public string KVCalculationWishName { get; set; } public int KVSingleOfferId { get; set; } public virtual KVSingleOffer SingleOffer { get; set; } public int KVCalculationsForPersonId { get; set; } public virtual KVCalculationsForPerson CaculationsForPerson { get; set; } } public class KVSingleOffer { public KVSingleOffer() { } public int KVSingleOfferId { get; set; } public string KVSingleOfferName { get; set; } public KVCalculationWish CalculationWish { get; set; } } public class KVCalculationsForPerson { public KVCalculationsForPerson() { } public int KVCalculationsForPersonId { get; set; } public string KVCalculationsForPersonName { get; set; } public KVCalculationWish CalculationWish { get; set; } } public class EntiyRelation : DbContext { public EntiyRelation() { } public DbSet<KVCalculationWish> CalculationWish { get; set; } public DbSet<KVSingleOffer> SingleOffer { get; set; } public DbSet<KVCalculationsForPerson> CalculationsForPerson { get; set; } protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder) { base.OnModelCreating(modelBuilder); modelBuilder.Entity<KVCalculationWish>().HasOptional(m => m.SingleOffer).WithRequired(p => p.CalculationWish); modelBuilder.Entity<KVCalculationWish>().HasOptional(m => m.CaculationsForPerson).WithRequired(p => p.CalculationWish); } } i want to use KVCalcuationWish object in KVCalcuationsForPerson and KVSingleOffer class. So when i am creating object of KVCalcuationForPerson and KVSingleOffer class i initialize both object with New KVCalcuationWish object. like this KVCalculationsForPerson calcPerson = new KVCalculationsForPerson(); KVCalculationWish wish = new KVCalculationWish() { CaculationsForPerson = calcPerson }; calcPerson.KVCalculationsForPersonName = "Person Name"; calcPerson.CalculationWish = wish; KVSingleOffer singleOffer = new KVSingleOffer(); KVCalculationWish wish1 = new KVCalculationWish() { SingleOffer = singleOffer }; singleOffer.KVSingleOfferName = "Offer Name"; singleOffer.CalculationWish = wish1; but my problem is when i save this records using following code try { db.CalculationsForPerson.Add(calcPerson); db.SingleOffer.Add(singleOffer); db.SaveChanges(); } catch (Exception ex) { } i can save successfully in DB, but in Table KVCalcuationWish i am not able to get the ID of SingleOffer and CalcuationsForPerson class object. Following is the data of KVCalcuationWish table. KVCalcuationWishID KVCalcuationWishName KVSingleOfferID KVCalcuationsForPersonID 1 NULL 0 0 Following is the data of KVSingleOFfer Table KVSingleOfferID KVSingleOfferName 1 Offer Name Follwing is the data of KVCalcuationsForPerson Table KVSingleOfferID KVSingleOfferName 1 Person Name I want to have following possible output in KVCalcuationWish table. KVCalcuationWishID KVCalcuationWishName KVSingleOfferID KVCalcuationsForPersonID 1 NULL 1 NULL 2 NULL NULL 1 so what i want to achieve is ...... when i am save KVSingleOffer object i want separate record to be inserted and when i save KVCalcuationsForPerson object another separate record should be save to KVCalcuationwish table. Is that possible? Sorry for long description... but i really hang on this situation... Thanks & Regards, Joyous Suhas

    Read the article

  • ASP.NET MVC 2: Updating a Linq-To-Sql Entity with an EntitySet

    - by Simon
    I have a Linq to Sql Entity which has an EntitySet. In my View I display the Entity with it's properties plus an editable list for the child entites. The user can dynamically add and delete those child entities. The DefaultModelBinder works fine so far, it correctly binds the child entites. Now my problem is that I just can't get Linq To Sql to delete the deleted child entities, it will happily add new ones but not delete the deleted ones. I have enabled cascade deleting in the foreign key relationship, and the Linq To Sql designer added the "DeleteOnNull=true" attribute to the foreign key relationships. If I manually delete a child entity like this: myObject.Childs.Remove(child); context.SubmitChanges(); This will delete the child record from the DB. But I can't get it to work for a model binded object. I tried the following: // this does nothing public ActionResult Update(int id, MyObject obj) // obj now has 4 child entities { var obj2 = _repository.GetObj(id); // obj2 has 6 child entities if(TryUpdateModel(obj2)) //it sucessfully updates obj2 and its childs { _repository.SubmitChanges(); // nothing happens, records stay in DB } else ..... return RedirectToAction("List"); } and this throws an InvalidOperationException, I have a german OS so I'm not exactly sure what the error message is in english, but it says something along the lines of that the entity needs a Version (Timestamp row?) or no update check policies. I have set UpdateCheck="Never" to every column except the primary key column. public ActionResult Update(MyObject obj) { _repository.MyObjectTable.Attach(obj, true); _repository.SubmitChanges(); // never gets here, exception at attach } I've read alot about similar "problems" with Linq To Sql, but it seems most of those "problems" are actually by design. So am I right in my assumption that this doesn't work like I expect it to work? Do I really have to manually iterate through the child entities and delete, update and insert them manually? For such a simple object this may work, but I plan to create more complex objects with nested EntitySets and so on. This is just a test to see what works and what not. So far I'm disappointed with Linq To Sql (maybe I just don't get it). Would be the Entity Framework or NHibernate a better choice for this scenario? Or would I run into the same problem?

    Read the article

  • Using MVC2 to update an Entity Framework v4 object with foreign keys fails

    - by jbjon
    With the following simple relational database structure: An Order has one or more OrderItems, and each OrderItem has one OrderItemStatus. Entity Framework v4 is used to communicate with the database and entities have been generated from this schema. The Entities connection happens to be called EnumTestEntities in the example. The trimmed down version of the Order Repository class looks like this: public class OrderRepository { private EnumTestEntities entities = new EnumTestEntities(); // Query Methods public Order Get(int id) { return entities.Orders.SingleOrDefault(d => d.OrderID == id); } // Persistence public void Save() { entities.SaveChanges(); } } An MVC2 app uses Entity Framework models to drive the views. I'm using the EditorFor feature of MVC2 to drive the Edit view. When it comes to POSTing back any changes to the model, the following code is called: [HttpPost] public ActionResult Edit(int id, FormCollection formValues) { // Get the current Order out of the database by ID Order order = orderRepository.Get(id); var orderItems = order.OrderItems; try { // Update the Order from the values posted from the View UpdateModel(order, ""); // Without the ValueProvider suffix it does not attempt to update the order items UpdateModel(order.OrderItems, "OrderItems.OrderItems"); // All the Save() does is call SaveChanges() on the database context orderRepository.Save(); return RedirectToAction("Details", new { id = order.OrderID }); } catch (Exception e) { return View(order); // Inserted while debugging } } The second call to UpdateModel has a ValueProvider suffix which matches the auto-generated HTML input name prefixes that MVC2 has generated for the foreign key collection of OrderItems within the View. The call to SaveChanges() on the database context after updating the OrderItems collection of an Order using UpdateModel generates the following exception: "The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted." When debugging through this code, I can still see that the EntityKeys are not null and seem to be the same value as they should be. This still happens when you are not changing any of the extracted Order details from the database. Also the entity connection to the database doesn't change between the act of Getting and the SaveChanges so it doesn't appear to be a Context issue either. Any ideas what might be causing this problem? I know EF4 has done work on foreign key properties but can anyone shed any light on how to use EF4 and MVC2 to make things easy to update; rather than having to populate each property manually. I had hoped the simplicity of EditorFor and DisplayFor would also extend to Controllers updating data. Thanks

    Read the article

  • Cross-table linq query with EF4/POCO

    - by Basiclife
    Hi all, I'm new to EF (any version) and POCO. I'm trying to use POCO entities with a generic repository in a "code-first" mode(?). I've got some POCO entities (no proxies, no lazy loading, nothing). I have a Repository(Of T As Entity) which provides me with basic Get/GetSingle/GetFirst functionality and takes a lambda as a parameter (specifically a System.Func(Of T, Boolean)). Now, as I'm returning the simplest possible POCO object, none of the relationship properties work once they've been retrieved from the database (as I would expect). However, I had assumed (wrongly) that the lambda query passed to the repository would be able to use the links between entities, as it would be executed against the DB before the simple POCO entities are generated. The flow is: the GUI calls

    Public Function GetAllTypesForCategory(ByVal CategoryID As Guid) As IEnumerable(Of ItemType)
        Return ItemTypeRepository.Get(Function(x) x.Category.ID = CategoryID)
    End Function

    Get is defined in Repository(Of T As Entity):

    Public Function [Get](ByVal Query As System.Func(Of T, Boolean)) As IEnumerable(Of T) Implements Interfaces.IRepository(Of T).Get
        Return ObjectSet.Where(Query).ToList()
    End Function

    The code doesn't error when this method is called, but it does when I try to use the result set. (This seems to be lazy-loading behaviour, so I tried adding the .ToList() to force eager loading - no difference.) I'm using Unity/IoC to wire it all up, but I believe that's irrelevant to the issue I'm having. NB: relationships between entities are being configured properly, and if I turn on proxies/lazy loading/etc. this all just works. I'm intentionally leaving all that turned off, as some calls to the BL will be from a website but some will be via WCF - so I want the simplest possible objects. Also, I don't want a change in an object passed to the UI to be committed to the DB if another BL method calls Commit(). Can someone please either point out how to make this work or explain why it's not possible? All I want to do is make sure the lambda I pass in is performed against the DB before the results are returned. Many thanks. In case it matters, the container is being populated with everything as shown below:

    Container.AddNewExtension(Of EFRepositoryExtension)()
    Container.Configure(Of IEFRepositoryExtension)().
        WithConnection(ConnectionString).
        WithContextLifetime(New HttpContextLifetimeManager(Of IObjectContext)()).
        ConfigureEntity(New CategoryConfig(), "Categories").
        ConfigureEntity(New ItemConfig()).
        ...
    )
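    The usual explanation here is the parameter type: System.Func(Of T, Boolean) is a compiled delegate, so Where binds to Enumerable.Where and the filter runs in memory over bare POCOs whose navigation properties were never loaded. Declaring the parameter as an expression tree lets the EF provider translate x.Category.ID = CategoryID into SQL before any entities are materialized. A small sketch of that signature change, shown in C# for brevity and with an illustrative class name:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    public class ExpressionRepository<T> where T : class
    {
        private readonly IQueryable<T> _objectSet; // e.g. context.CreateObjectSet<T>()

        public ExpressionRepository(IQueryable<T> objectSet)
        {
            _objectSet = objectSet;
        }

        // Expression<Func<T, bool>> keeps the predicate as data for the provider
        // to translate; Func<T, bool> would silently fall back to LINQ to Objects.
        public IEnumerable<T> Get(Expression<Func<T, bool>> query)
        {
            return _objectSet.Where(query).ToList();
        }
    }

    In VB the parameter would be declared as Expression(Of Func(Of T, Boolean)); the lambda syntax at the call site stays exactly the same.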

    Read the article

  • Web Services, Memory Leaks and CRM

    - by Neil
    Hi, I have a website that allows users to upload a csv file. This calls a service that reads the information from the csv, puts it into DynamicEntity objects and calls the CRM service to create/update entities in CRM. When this service creates/updates an entity, this kicks off other plugins to apply certain business rules. These rules can also create or update entities in CRM. The issue here is that the handle count of the w3wp.exe process that the website is calling increases every time an entity is created or updated, and it never comes back down. I tried putting garbage-collection code in the business rules, and this reduces the handle count of the CRM w3wp process (run by the Network Service), but not the other w3wp process. Should I have Dispose methods on the web service that calls the CRM service? I hope that makes sense. I'm not overly familiar with memory management issues, so any help is appreciated. Can anybody give me some tips on how to stop this from occurring? Thanks, Neil

    -- EDIT
    Okay, the handle count goes up when I call the Service.Create(DynamicEntity) method. I don't think placing any code here would be beneficial. When I exit the method/class/service that contains this call, the handle count stays as it is. What I need to know is whether this is something I should be managing, or something CRM takes care of (or doesn't take care of, but which I can't do anything about).

    -- Another Edit
    Right, this is how it works.
    1) We have CRM and its related services.
    2) We have another service, independent of CRM, that uses the CRM services (number 1 above) to create entities based on csv info passed into it.
    3) We have a website that allows a user to upload a csv, and calls service 2 above to create/update entities in CRM.
    4) We have plugins fired by CRM which use service 1 above to create/update entities.
    So the user uploads a csv to the website (3), and this fires service (2). When service 2 creates an entity using service 1, the plugins (4) fire. They also use service 1 to create entities, and when these services are called (using the Service.Create() method) the handle count of the process increases. When the methods/classes/services finish, the handle count remains the same, and so when the whole process occurs again the handle count increases further.
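    On the Dispose question: generated SOAP proxies such as CrmService derive from SoapHttpClientProtocol, which is a Component and therefore IDisposable, so scoping each proxy in a using block is the conventional way to release its handles deterministically rather than waiting for finalization. A generic, hedged sketch of that pattern (it may or may not account for the specific handle growth described here):

    using System;
    using System.Web.Services.Protocols;

    public static class ProxyHelper
    {
        // Runs one unit of work against a proxy and disposes it immediately afterwards.
        public static TResult Call<TProxy, TResult>(TProxy proxy, Func<TProxy, TResult> work)
            where TProxy : SoapHttpClientProtocol
        {
            using (proxy) // Dispose() is inherited from System.ComponentModel.Component
            {
                return work(proxy);
            }
        }
    }

    If the proxies are created inside the CRM plugins themselves, the same scoping applies there; handle counts that only fall after a forced GC are a classic sign of relying on finalizers instead of Dispose.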

    Read the article

  • How to reflect over T to build an expression tree for a query?

    - by Alex
    Hi all, I'm trying to build a generic class to work with entities from EF. This class talks to repositories, but it's this class that creates the expressions sent to the repositories. Anyway, I'm just trying to implement one virtual method that will act as a base for common querying. Specifically, it will accept an int and only needs to perform a query over the primary key of the entity in question. I've been experimenting and I've built a reflection-based version which may or may not work. I say that because I get a NotSupportedException with the message "LINQ to Entities does not recognize the method 'System.Object GetValue(System.Object, System.Object[])' method, and this method cannot be translated into a store expression." So then I tried another approach and it produced the same exception, but with the error "The LINQ expression node type 'ArrayIndex' is not supported in LINQ to Entities." I know it's because EF will not parse the expression the way L2S will. Anyway, I'm hoping someone with a bit more experience can point me in the right direction on this. I'm posting the entire class with both attempts I've made.

    public class Provider<T> where T : class
    {
        protected readonly Repository<T> Repository = null;
        private readonly string TEntityName = typeof(T).Name;

        [Inject]
        public Provider(Repository<T> Repository)
        {
            this.Repository = Repository;
        }

        public virtual void Add(T TEntity)
        {
            this.Repository.Insert(TEntity);
        }

        public virtual T Get(int PrimaryKey)
        {
            // The LINQ expression node type 'ArrayIndex' is not supported in
            // LINQ to Entities.
            return this.Repository.Select(
                t => (((int)(t as EntityObject).EntityKey.EntityKeyValues[0].Value) == PrimaryKey)).Single();

            // LINQ to Entities does not recognize the method
            // 'System.Object GetValue(System.Object, System.Object[])' method,
            // and this method cannot be translated into a store expression.
            return this.Repository.Select(
                t => (((int)t.GetType().GetProperties().Single(
                    p => (p.Name == (this.TEntityName + "Id"))).GetValue(t, null)) == PrimaryKey)).Single();
        }

        public virtual IList<T> GetAll()
        {
            return this.Repository.Select().ToList();
        }

        protected virtual void Save()
        {
            this.Repository.Update();
        }
    }
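    One direction that usually works: build the comparison as an expression tree yourself, so LINQ to Entities only ever sees a plain property access and an equality - no GetValue call, no array index. The helper below is a hypothetical sketch that leans on the same "<TypeName>Id" naming convention the class above already uses:

    using System;
    using System.Linq.Expressions;

    public static class PrimaryKeyPredicate
    {
        public static Expression<Func<T, bool>> Build<T>(int primaryKey)
        {
            // t =>
            ParameterExpression t = Expression.Parameter(typeof(T), "t");

            // t.CustomerId (or whatever "<TypeName>Id" resolves to)
            MemberExpression key = Expression.Property(t, typeof(T).Name + "Id");

            // t.CustomerId == primaryKey
            BinaryExpression body = Expression.Equal(key, Expression.Constant(primaryKey, typeof(int)));

            return Expression.Lambda<Func<T, bool>>(body, t);
        }
    }

    For this to help, Repository<T>.Select has to accept an Expression<Func<T, bool>> rather than a compiled Func<T, bool>; otherwise the tree is compiled away before the provider ever gets a chance to translate it.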

    Read the article

  • Getting 2D Platformer entity collision Response Correct (side-to-side + jumping/landing on heads)

    - by jbrennan
    I've been working on a 2D (tile-based) platformer for iOS and I've got basic entity collision detection working, but there's just something not right about it and I can't quite figure out how to solve it. There are two forms of collision between player entities as far as I can tell: either the two players (human controlled) are hitting each other side-to-side (i.e. pushing against one another), or one player has jumped on the head of the other player. (Naturally, if I wanted to expand this to player vs. enemy, the effects would be different, but the types of collisions would be identical; just the reaction should be a little different.) In my code I believe I've got the side-to-side case working: if two entities press against one another, then they are both moved back on either side of the intersection rectangle so that they are just pushing on each other. I also have the "landed on the other player's head" part working. The real problem is: if the two players are currently pushing up against each other and one player jumps, then at some point during the jump the height-difference threshold that counts as a "land on head" is passed, and it incorrectly registers as a landing. As a life-long player of 2D Mario Bros style games, this feels wrong to me, but I can't quite figure out how to solve it. My code (it's really Objective-C but I've put it in pseudo C-style code just to be simpler for non-ObjC readers):

    void checkCollisions()
    {
        // For each entity in the scene, compare it with all other entities
        // (but not with one it's already compared against)
        for (int i = 0; i < _allGameObjects.count(); i++) {
            // GameObject is an Entity
            GEGameObject *firstGameObject = _allGameObjects.objectAtIndex(i);

            // Don't check against yourself or any previous entity
            for (int j = i + 1; j < _allGameObjects.count(); j++) {
                GEGameObject *secondGameObject = _allGameObjects.objectAtIndex(j);

                // Get the collision bounds for both entities, then see if they intersect.
                // CGRect is a C-struct with an origin Point (x, y) and a Size (w, h)
                CGRect firstRect = firstGameObject.collisionBounds();
                CGRect secondRect = secondGameObject.collisionBounds();

                // Collision of any sort
                if (CGRectIntersectsRect(firstRect, secondRect)) {
                    // Check for jumping first (???)
                    if (firstRect.origin.y > (secondRect.origin.y + (secondRect.size.height * 0.7))) {
                        // the top entity could be pretty far down/in to the bottom entity....
                        firstGameObject.didLandOnEntity(secondGameObject);
                    } else if (secondRect.origin.y > (firstRect.origin.y + (firstRect.size.height * 0.7))) {
                        // second entity was actually on top....
                        secondGameObject.didLandOnEntity(firstGameObject);
                    } else if (firstRect.origin.x > secondRect.origin.x && firstRect.origin.x < (secondRect.origin.x + secondRect.size.width)) {
                        // Hit from the RIGHT
                        CGRect intersection = CGRectIntersection(firstRect, secondRect);
                        // The NUDGE just offsets either object back to the left or right.
                        // After the nudging, they are exactly pressing against each other with no intersection.
                        firstGameObject.nudgeToRightOfIntersection(intersection);
                        secondGameObject.nudgeToLeftOfIntersection(intersection);
                    } else if ((firstRect.origin.x + firstRect.size.width) > secondRect.origin.x) {
                        // Hit from the LEFT
                        CGRect intersection = CGRectIntersection(firstRect, secondRect);
                        secondGameObject.nudgeToRightOfIntersection(intersection);
                        firstGameObject.nudgeToLeftOfIntersection(intersection);
                    }
                }
            }
        }
    }

    I think my collision detection code is pretty close, but obviously I'm doing something a little wrong. I really think it's to do with the way my jumps are checked (I wanted to make sure that a jump could happen from an angle, instead of only when the falling player is at a right angle to the player below). Can someone please help me here? I haven't been able to find many resources on how to do this properly (and thinking like a game developer is new to me). Thanks in advance!
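    One way to keep a jump that starts from a side-by-side push from flipping into a head landing is to classify the contact using where the mover was on the previous frame (or its vertical velocity) rather than only the current overlap. A small illustrative sketch, written in C# rather than the question's Objective-C, with made-up type and field names and assuming a y-axis that grows upward:

    public struct Box
    {
        public float X, Y, Width, Height; // origin at the bottom-left corner
        public float Top { get { return Y + Height; } }
    }

    public static class ContactClassifier
    {
        // "first landed on second" only if first is falling and its bottom edge
        // was at or above second's top edge on the previous frame; otherwise the
        // overlap is treated as a side-to-side push, even if a height threshold
        // happens to be crossed mid-jump.
        public static bool IsLanding(Box firstPrevious, float firstVelocityY, Box second)
        {
            const float tolerance = 0.001f;
            bool movingDown = firstVelocityY < 0f;
            bool wasAbove = firstPrevious.Y >= second.Top - tolerance;
            return movingDown && wasAbove;
        }
    }

    The same check with the roles swapped covers the second player landing on the first; everything that fails both checks falls through to the existing nudge-apart logic.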

    Read the article

  • How to setup linux permissions for the WWW folder?

    - by Xeoncross
    Updated Summary: The /var/www directory is owned by root:root, which means that no one can use it and it's entirely useless. Since we all want a web server that actually works (and no one should be logging in as "root"), we need to fix this. Only two entities need access:
    1. PHP/Perl/Ruby/Python, which all need access to the folders and files since they create many of them (i.e. /uploads/). These scripting languages should be running under nginx or apache (or even some other thing like FastCGI for PHP).
    2. The developers.
    How do they get access? I know that someone, somewhere has done this before. With however-many billions of websites out there, you would think that there would be more information on this topic. I know that 777 is full read/write/execute permission for owner/group/other, so this doesn't seem to be needed, as it leaves random users full permissions. What permissions need to be used on /var/www so that:
    - source control like git or svn,
    - users in a group like "websites" (or even added to "www-data"),
    - servers like apache or lighttpd,
    - and PHP/Perl/Ruby
    can all read, create, and run files (and directories) there? If I'm correct, Ruby and PHP scripts are not "executed" directly but passed to an interpreter, so there is no need for execute permission on files in /var/www...? Therefore, it seems like the correct permission would be chmod -R 1660, which would:
    - make all files shareable by these four entities,
    - keep all files from being executable by mistake,
    - block everyone else from the directory entirely,
    - set the permission mode to "sticky" for all future files.
    Is this correct?
    Update: I just realized that files and directories might need different permissions - I was talking about files above, so I'm not sure what the directory permissions would need to be.
    Update 2: The folder structure of /var/www changes drastically, as one of the four entities above is always adding (and sometimes removing) folders and sub-folders many levels deep. They also create and remove files that the other three entities might need read/write access to. Therefore, the permissions need to do the four things above for both files and directories. Since none of them should need execute permission (see the question about ruby/php above), I would assume that rw-rw-r-- permission would be all that is needed and completely safe, since these four entities are run by trusted personnel (see #2) and all other users on the system only have read access.
    Update 3: This is for personal development machines and private company servers. No random "web customers" like a shared host.
    Update 4: This article by slicehost seems to be the best at explaining what is needed to set up permissions for your www folder. However, I'm not sure what user or group apache/nginx with PHP or svn/git run as, and how to change them.
    Update 5: I have (I think) finally found a way to get this all to work (answer below). However, I don't know if this is the correct and SECURE way to do this. Therefore I have started a bounty. The person who has the best method of securing and managing the www directory wins.

    Read the article

  • iPhone Core Data problem: referenceData64 only defined for abstract class

    - by occe
    I have an application that downloads/parses a big XML file and stores the information using Core Data (approx. 4000 objects/entities). The XML is loaded/parsed in a different thread, which has its own NSManagedObjectContext. When trying to save the entities to the persistent store, I sometimes get the following error (about 20% of the time):

    2010-03-03 23:41:42.802 xxx[7487:4203] Exception in XML saving
    2010-03-03 23:41:42.802 xxx[7487:4203] Description: * -_referenceData64 only defined for abstract class. Define -[NSTemporaryObjectID_default _referenceData64]!
    2010-03-03 23:41:42.803 xxx[7487:4203] Name: NSInvalidArgumentException
    2010-03-03 23:41:42.804 xxx[7487:4203] UserInfo: (null)
    2010-03-03 23:41:42.805 xxx[7487:4203] Reason: * -_referenceData64 only defined for abstract class. Define -[NSTemporaryObjectID_default _referenceData64]!

    I keep a simple integer count of the entities the application creates and compare it to the insertedObjects property of the NSManagedObjectContext before saving; when I get the error, these numbers do not match - insertedObjects in the NSManagedObjectContext is missing about 10 entities. I do not know how I should continue to investigate this problem; does anyone have any idea how to fix it? Thanks /oscar

    Read the article

  • Entity Framework POCO template in an n-tier design question

    - by bryan
    Hi all, I was trying to follow the POCO Template walkthrough, and now I am having problems using it in an n-tier design. Following the article, I put my edmx model and the template-generated context.tt in my DAL project, and moved the generated model.tt entity classes to my Business Logic Layer (BLL) project. By doing this, I could use those entities inside my BLL without referencing the DAL, which I guess is the idea of PI (persistence ignorance) - knowing nothing about the data source. Now I want to extend the entities (inside model.tt) to perform some CUD actions in the BLL project, so I added a new partial class with the same name as the one generated from the template:

    public partial class Company
    {
        public static IEnumerable AllCompanies()
        {
            using (var context = new Entities())
            {
                var q = from p in context.Companies
                        select p;
                return q.ToList();
            }
        }
    }

    However, Visual Studio won't let me do that, and I think it's because the context.tt is in the DAL project, and the BLL project can't add a reference to the DAL because the DAL already has a reference to the BLL. So I tried adding this class to the DAL and it compiled, but IntelliSense won't show BLL.Company.AllCompanies() in my web service method in my web services project, which has a reference to my BLL project. What should I do now? I want to add CUD methods to the template-generated entities in my BLL project and call them in my web services from another project. I have been looking for this answer for a few days already, and I really need some guidance here please. Bryan
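    One common way out of the circular reference is to keep only the POCO entities plus a repository interface in the BLL, implement that interface in the DAL (which already references the BLL), and let the web service project resolve the implementation through an IoC container. The sketch below uses illustrative names; Entities and Companies stand for the generated context and its entity set from the question:

    // requires System.Collections.Generic and System.Linq usings

    // BLL project: entities and the contract only - no reference to the DAL.
    public interface ICompanyRepository
    {
        IEnumerable<Company> AllCompanies();
    }

    // DAL project (references the BLL): the persistence-aware implementation.
    public class CompanyRepository : ICompanyRepository
    {
        public IEnumerable<Company> AllCompanies()
        {
            using (var context = new Entities())
            {
                return context.Companies.ToList();
            }
        }
    }

    The web service then takes an ICompanyRepository (injected or resolved from the container) instead of calling a static method on the entity, so the entity classes stay persistence-ignorant.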

    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >