Search Results

Search found 2536 results on 102 pages for 'entities'.

Page 29 of 102

  • Fluent NHibernate automap a HasManyToMany using a generic type

    - by zulkamal
    I have a bunch of domain entities that can be keyword tagged (a Tag is also an entity). I want to do a normal many-to-many (Tag -> TagReview <- Review) table relationship but I don't want to have to create a new concrete relationship on both the Entity and Tag every single time I add a new entity. I was hoping to do a generic-based Tag and do this: // Tag public class Tag<T> { public virtual int Id { get; private set; } public virtual string Name { get; set; } public virtual IList<T> Entities { get; set; } public Tag() { Entities = new List<T>(); } } // Review public class Review { public virtual string Id { get; private set; } public virtual string Title { get; set; } public virtual string Content { get; set; } public virtual IList<Tag<Review>> Tags { get; set; } public Review() { Tags = new List<Tag<Review>>(); } } Unfortunately I get an exception: ----> System.ArgumentException : Cannot create an instance of FluentNHibernate.Automapping.AutoMapping`1[Example.Entities.Tag`1[T]] because Type.ContainsGenericParameters is true. I anticipate there will be maybe 5-10 entities, so mapping normally would be OK, but is there a way to do something like this?

    Read the article

  • Creating self-referential tables with polymorphism in SQLAlchemy

    - by Jace
    I'm trying to create a db structure in which I have many types of content entities, of which one, a Comment, can be attached to any other. Consider the following: from datetime import datetime from sqlalchemy import create_engine from sqlalchemy import Column, ForeignKey from sqlalchemy import Unicode, Integer, DateTime from sqlalchemy.orm import relation, backref from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Entity(Base): __tablename__ = 'entities' id = Column(Integer, primary_key=True) created_at = Column(DateTime, default=datetime.utcnow, nullable=False) edited_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False) type = Column(Unicode(20), nullable=False) __mapper_args__ = {'polymorphic_on': type} # <...insert some models based on Entity...> class Comment(Entity): __tablename__ = 'comments' __mapper_args__ = {'polymorphic_identity': u'comment'} id = Column(None, ForeignKey('entities.id'), primary_key=True) _idref = relation(Entity, foreign_keys=id, primaryjoin=id == Entity.id) attached_to_id = Column(Integer, ForeignKey('entities.id'), nullable=False) #attached_to = relation(Entity, remote_side=[Entity.id]) attached_to = relation(Entity, foreign_keys=attached_to_id, primaryjoin=attached_to_id == Entity.id, backref=backref('comments', cascade="all, delete-orphan")) text = Column(Unicode(255), nullable=False) engine = create_engine('sqlite://', echo=True) Base.metadata.bind = engine Base.metadata.create_all(engine) This seems about right, except SQLAlchemy doesn't like having two foreign keys pointing to the same parent. It says ArgumentError: Can't determine join between 'entities' and 'comments'; tables have more than one foreign key constraint relationship between them. Please specify the 'onclause' of this join explicitly. How do I specify onclause?

    Read the article

  • Turning a JSON list into a POJO

    - by Josh L
    I'm having trouble getting this bit of JSON into a POJO. I'm using Jackson configured like this: protected ThreadLocal<ObjectMapper> jparser = new ThreadLocal<ObjectMapper>(); public void receive(Object object) { try { if (object instanceof String && ((String)object).length() != 0) { ObjectDefinition t = null ; if (parserChoice==0) { if (jparser.get()==null) { jparser.set(new ObjectMapper()); } t = jparser.get().readValue((String)object, ObjectDefinition.class); } Object key = t.getKey(); if (key == null) return; transaction.put(key,t); } } catch (Exception e) { e.printStackTrace(); } } Here's the JSON that needs to be turned into a POJO: { "id":"exampleID1", "entities":{ "tags":[ { "text":"textexample1", "indices":[ 2, 14 ] }, { "text":"textexample2", "indices":[ 31, 36 ] }, { "text":"textexample3", "indices":[ 37, 43 ] } ] } And lastly, here's what I currently have for the java class: protected Entities entities; @JsonIgnoreProperties(ignoreUnknown = true) protected class Entities { public Entities() {} protected Tags tags; @JsonIgnoreProperties(ignoreUnknown = true) protected class Tags { public Tags() {} protected String text; public String getText() { return text; } public void setText(String text) { this.text = text; } }; public Tags getTags() { return tags; } public void setTags(Tags tags) { this.tags = tags; } }; //Getters & Setters ... I've been able to translate the more simple objects into a POJO, but the list has me stumped. Any help is appreciated. Thanks!
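
    One common way to handle this with Jackson is to declare the "tags" property as a List of element objects rather than a single nested object, and to make the nested classes static so Jackson can instantiate them. The sketch below only illustrates that shape (class and field names are assumptions, and the annotation package shown is Jackson 2.x; in Jackson 1.x the equivalents live under org.codehaus.jackson), not the poster's final code:

        import java.util.List;
        import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
        import com.fasterxml.jackson.databind.ObjectMapper;

        // Maps: { "id": "...", "entities": { "tags": [ { "text": "...", "indices": [..] }, ... ] } }
        @JsonIgnoreProperties(ignoreUnknown = true)
        public class ObjectDefinition {
            private String id;
            private Entities entities;

            public String getId() { return id; }
            public void setId(String id) { this.id = id; }
            public Entities getEntities() { return entities; }
            public void setEntities(Entities entities) { this.entities = entities; }

            @JsonIgnoreProperties(ignoreUnknown = true)
            public static class Entities {
                // "tags" is a JSON array, so it binds to a List rather than a single Tags object
                private List<Tag> tags;
                public List<Tag> getTags() { return tags; }
                public void setTags(List<Tag> tags) { this.tags = tags; }
            }

            @JsonIgnoreProperties(ignoreUnknown = true)
            public static class Tag {
                private String text;
                private List<Integer> indices;
                public String getText() { return text; }
                public void setText(String text) { this.text = text; }
                public List<Integer> getIndices() { return indices; }
                public void setIndices(List<Integer> indices) { this.indices = indices; }
            }

            public static ObjectDefinition parse(String json) throws Exception {
                return new ObjectMapper().readValue(json, ObjectDefinition.class);
            }
        }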

    Read the article

  • flex/actionscript client entity state refresh on JPA update using Pimento EntityManager

    - by Chris
    My Flex application uses a client-side pimento EntityManager which fetches quite a few objects and associations. It does this by forcing eager fetching of particular association ends in the form of fetch plans. I would like to update the client whenever a change has been made to an entity existing in the EntityManager's cache. Is it possible to update the state of the changed entity ONLY, including updating which entities are associated, without resetting the state of these associated entities? I have setup an EntityListener with a JPA post-update method that notifies clients when a persisted entity has been updated. I want this to trigger a refresh for the modified client-side entity, but calling EntityManager.refresh(entity) resets all lazy associations to proxies. Initializing these proxies resets the associated entities, even if they were loaded previously. I'm looking for an efficient way to keep the client's state in synch with the server's state, at least with respect to the entities that have already been retrieved by the initial load.

    Read the article

  • help with DOMDocument class in php

    - by noname
    Here is my XML DOM: <entities> <entity id="7006276"> <title>xxx</title> <entities> <entity id="7877579"> <title>xxx</title> <entities> I want to get the 'entity' with id 7006276 so I can access its child 'entities' and create some 'entity' elements for it. I have tried: $xmlElementById = $this->DOMDocument_object->getElementById('7006276'); but it doesn't seem to work. Any idea how I can get it so I can create more 'entity' elements?

    Read the article

  • Java OO design confusion: how to handle actions modified by states modified by actions...

    - by Arvanem
    Hi folks, Given an entity, whose action is potentially modified by states (of the entity and other entities) in turn potentially modified by other actions (of the entity and other entities) , what is the best way to code or design to handle the potential existence of the modifiers? Speaking metaphorically, I am coding a Java application representing a piano. As you know a piano has keys (which, when pressed, emit sound) and pedals (which, when pressed, modify the keys' sounds). My base class structure is as follows: Entity (for keys and pedals) State (this holds each entity's states, e.g. name such as "soft pedal", and boolean "Pressed"), Action (this holds each entity's actions, e.g. play sound when pressed, or modify others sounds). By composition, the Entity class has a copy of each of State and Action inside it. e.g.: public class Entity { State entityState = new State(); Action entityAction = new Action(); Thus I have coded a "C-Sharp" key Entity. When I "press" that entity (set its "Pressed" state to true), its action plays a "C-Sharp" sound and then sets its "Pressed" state to false. At the same time, if the "C-Sharp" key entity is not "tuned", its sound deviates from "C-Sharp". Meanwhile I have coded a "soft pedal" Entity. When that entity is "pressed", no sound plays but its action is to make softer the sound of the "C-Sharp" and other key entities. I have also coded a "sustain pedal" Entity. When that entity is "pressed", no sound plays but its action is to enable reverberation of the sound of the "C-Sharp" and other key entities. Both the "soft" and "sustain pedals" can be pressed at the same time with the result that keys entities become both softened and reverberating. In short, I do not understand how to make this simultaneous series of states and actions modify each other in a sensible OO way. I am wary of coding a massive series of "if" statements or "switches". Thanks in advance for any help or links you can offer.
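
    One hedged way to avoid a growing chain of "if" statements is to model each pedal's effect as its own modifier object and have every key run its base sound through whichever modifiers are currently active (essentially the strategy/decorator idea). The sketch below only illustrates that structure, not a drop-in replacement for the Entity/State/Action classes described above; all names and numbers are made up:

        import java.util.ArrayList;
        import java.util.List;

        interface SoundModifier {
            double modifyVolume(double volume);
            double modifySustain(double sustainSeconds);
        }

        class SoftPedal implements SoundModifier {
            public double modifyVolume(double volume) { return volume * 0.5; }                // softer
            public double modifySustain(double sustainSeconds) { return sustainSeconds; }
        }

        class SustainPedal implements SoundModifier {
            public double modifyVolume(double volume) { return volume; }
            public double modifySustain(double sustainSeconds) { return sustainSeconds * 4; } // reverberates
        }

        class Key {
            private final String note;
            Key(String note) { this.note = note; }

            // The key only knows its own base sound; the active modifiers are passed in.
            void press(List<SoundModifier> activeModifiers) {
                double volume = 1.0;
                double sustain = 0.5;
                for (SoundModifier m : activeModifiers) {
                    volume = m.modifyVolume(volume);
                    sustain = m.modifySustain(sustain);
                }
                System.out.printf("Playing %s at volume %.2f for %.1fs%n", note, volume, sustain);
            }
        }

        public class Piano {
            public static void main(String[] args) {
                List<SoundModifier> active = new ArrayList<SoundModifier>();
                active.add(new SoftPedal());
                active.add(new SustainPedal());   // both pedals pressed at once
                new Key("C#").press(active);      // softened and reverberating
            }
        }

    An out-of-tune key could be handled the same way, as one more modifier attached to that particular key rather than as a special case in the playing code.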

    Read the article

  • What's the best way to validate Entity Framework 4.0 classes?

    - by lsb
    Hi! I've done a fair amount of searching but I've yet to find an easy way to validate Entity Framework 4.0 entities passed across the wire via WCF Data Services. Basically, I want to do something on the client like: Proxy.MyEntities entities = new Proxy.MyEntities( new Uri("http://localhost:2679/Service.svc")); Proxy.Vendor vendor = new Proxy.Vendor(); vendor.Code = "ABC/XYZ"; vendor.Status = "ACTIVE"; // I'd like to do something like the following: vendor.Validate(); entities.AddToVendors(vendor); entities.SaveChanges(); Any help in this regard would be greatly appreciated!

    Read the article

  • When to call Dispose in Entity Framework?

    - by Abdel Olakara
    Hi All, In my application I am making use of Spring.Net for IoC. The service objects are called from the ASP.Net files to perform CRUD operations using these service objects. For example, I have CustomerService to do all CRUD operations on the Customer table. I use Entity Framework and the entities are injected. My question is: where do I call the dispose method? As far as I understood from the API documentation, unless I call Dispose() there is no guarantee it will be garbage collected! So where and how do I do it? Example Service Class: public class CustomerService { public ecommEntities entities = {get; set;} public bool AddCustomer(Customer customer) { try { entities.AddToCustomer(customer); entities.SaveChanges(); return true; } catch (Exception e) { log.Error("Error occured during creation of new customer: " + e.Message + e.StackTrace); return false; } } public bool UpdateCustomer(Customer customer) { entities.SaveChanges(); return true; } public bool DeleteCustomer(Customer customer) . . . And I just create an object of CustomerService in the ASP partial class and call the necessary methods. Thanks in advance for the best practice and ideas. Regards, Abdel Raoof

    Read the article

  • Hibernate 3.5.0 causes extreme performance problems

    - by user303396
    I've recently updated from hibernate 3.3.1.GA to hibernate 3.5.0 and I'm having a lot of performance issues. As a test, I added around 8000 entities to my DB (which in turn cause other entities to be saved). These entities are saved in batches of 20 so that the transactions aren't too large for performance reasons. When using hibernate 3.3.1.GA all 8000 entities get saved in about 3 minutes. When using hibernate 3.5.0 it starts out slower than with hibernate 3.3.1. But it gets slower and slower. At around 4,000 entities, it sometimes takes 5 minutes just to save a batch of 20. If I then go to a mysql console and manually type in an insert statement from the mysql general query log, half of them run perfect in 0.00 seconds. And half of them take a long time (maybe 40 seconds) or timeout with "ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction" from MySQL. Has something changed in hibernate's transaction management in version 3.5.0 that I should be aware of? The ONLY thing I changed to experience these unusable performance issues is replace the following hibernate 3.3.1.GA jar files: com.springsource.org.hibernate-3.3.1.GA.jar, com.springsource.org.hibernate.annotations-3.4.0.GA.jar, com.springsource.org.hibernate.annotations.common-3.3.0.ga.jar, com.springsource.javassist-3.3.0.ga.jar with the new hibernate 3.5.0 release hibernate3.jar and javassist-3.9.0.GA.jar. Thanks.
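
    For reference, the usual "save in batches of 20, one transaction per batch" pattern the question describes looks roughly like the sketch below in plain Hibernate (entity and session-factory names are placeholders). It does not explain the 3.3.1-to-3.5.0 regression by itself, but flushing and clearing the session per batch rules out first-level-cache growth as the cause of the progressive slowdown:

        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;

        public class BatchSaver {
            public static void saveInBatches(SessionFactory sessionFactory, List<?> entities) {
                Session session = sessionFactory.openSession();
                Transaction tx = session.beginTransaction();
                try {
                    for (int i = 0; i < entities.size(); i++) {
                        session.save(entities.get(i));
                        if ((i + 1) % 20 == 0) {
                            session.flush();               // push the pending inserts to the database
                            session.clear();               // detach them so the session stays small
                            tx.commit();                   // keep each transaction small, as described above
                            tx = session.beginTransaction();
                        }
                    }
                    tx.commit();
                } finally {
                    session.close();
                }
            }
        }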

    Read the article

  • domain modeling naming problem

    - by cherouvim
    Hello, There are some simple entities in an application (e.g. containing only id and title) which rarely change and are referenced by the more complex entities of the application. These are usually entities such as Country, City, Language etc. What are these called? I've used the following names for them in the past but I'm not sure which is best: reference data, lookup values, dictionaries. Thanks

    Read the article

  • WCF data services (OData), query with inheritance limitation?

    - by Mathieu Hétu
    Project: WCF Data service using internally EF4 CTP5 Code-First approach. I configured entities with inheritance (TPH). See previous question on this topic: Previous question about multiple entities- same table The mapping works well, and unit test over EF4 confirms that queries runs smoothly. My entities looks like this: ContactBase (abstract) Customer (inherits from ContactBase), this entity has also several Navigation properties toward other entities Resource (inherits from ContactBase) I have configured a discriminator, so both Customer and Resource map to the same table. Again, everythings works fine on the Ef4 point of view (unit tests all greens!) However, when exposing this DBContext over WCF Data services, I get: - CustomerBases sets exposed (Customers and Resources sets seems hidden, is it by design?) - When I query over Odata on Customers, I get this error: Navigation Properties are not supported on derived entity types. Entity Set 'ContactBases' has a instance of type 'CodeFirstNamespace.Customer', which is an derived entity type and has navigation properties. Please remove all the navigation properties from type 'CodeFirstNamespace.Customer'. Stacktrace: at System.Data.Services.Serializers.SyndicationSerializer.WriteObjectProperties(IExpandedResult expanded, Object customObject, ResourceType resourceType, Uri absoluteUri, String relativeUri, SyndicationItem item, DictionaryContent content, EpmSourcePathSegment currentSourceRoot) at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, ResourceType expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target) at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__b.MoveNext() at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer) at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved) at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved) at System.Data.Services.ResponseBodyWriter.Write(Stream stream) Seems like a limitation of WCF Data services... is it? Not much documentation can be found on the web about WCF Data services (OData) and inheritance specifications. How can I overpass this exception? I need these navigation properties on derived entities, and inheritance seems the only way to provide mapping of 2 entites on the same table with Ef4 CTP5... Any thoughts?

    Read the article

  • Hibernate: how to maintain insertion order

    - by jwaddell
    I have a list of entities where creation order is important, but they do not contain a timestamp to use for sorting. Entities are added to the end of the list as they are created so they will be ordered correctly in the list itself. After persisting the list using Hibernate the entities appear in the database table in the order that they were created. However, when retrieving the list using a new Hibernate session the list is now in reverse order of insertion/creation. Is this expected behaviour? Is there any way to retrieve the list in the same order as it appears in the table? The primary key is a UUID, and the list of entities should always have been created on the same IP address and JVM. This means sorting by UUID is a possibility but I'd rather not make assumptions. Another possibility is if the list is guaranteed to always come out in reverse order I could always just work through it backwards.
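
    If changing the mapping is an option, one way to avoid relying on the table's physical row order is to persist the list position itself. The sketch below uses JPA 2.0's @OrderColumn (with older Hibernate Annotations the equivalent is @org.hibernate.annotations.IndexColumn); the entity names and the Long id are placeholders, since the question mentions UUID keys:

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.CascadeType;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.OneToMany;
        import javax.persistence.OrderColumn;

        @Entity
        public class Parent {
            @Id
            @GeneratedValue
            private Long id;

            // The list index is stored in its own column, so reads come back in
            // insertion order regardless of how the database returns the rows.
            @OneToMany(cascade = CascadeType.ALL)
            @OrderColumn(name = "list_index")
            private List<Child> children = new ArrayList<Child>();

            public List<Child> getChildren() { return children; }
        }

        @Entity
        class Child {
            @Id
            @GeneratedValue
            private Long id;
        }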

    Read the article

  • Strategies for dealing with Circular references caused by JPA relationships?

    - by ams
    I am trying to partition my application into modules by features packaged into separate jars such as feature-a.jar, feature-b.jar, ... etc. Individual feature jars such as feature-a.jar should contain all the code for feature A, including JPA entities, business logic, REST APIs, unit tests, integration tests ... etc. The problem I am running into is that bi-directional relationships between JPA entities cause circular references between the jar files. For example, the Customer entity should be in customer.jar and the Order entity should be in order.jar, but Customer references Order and Order references Customer, making it hard to split them into separate jars / Eclipse projects. Options I see for dealing with the circular dependencies in JPA entities: Option 1: Put all the entities into one jar / one project. Option 2: Don't map certain bi-directional relationships to avoid circular dependencies across projects. Questions: What rules / principles have you used to decide when to do bi-directional mapping vs. not? Have you been able to break JPA entities into their own projects / jars by feature? If so, how did you avoid the circular dependency issues?
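
    As an illustration of Option 2, the relationship can be kept unidirectional so the compile-time dependency only points from the order module to the customer module; a customer's orders are then fetched with a query instead of a mapped collection. Class, table and column names below are assumptions, not the project's actual code:

        import java.util.List;
        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.FetchType;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;
        import javax.persistence.Table;

        // customer.jar: Customer knows nothing about orders.
        @Entity
        public class Customer {
            @Id
            @GeneratedValue
            private Long id;
            private String name;
        }

        // order.jar: depends on customer.jar, so the dependency points one way only.
        @Entity
        @Table(name = "customer_orders")   // "order" is a reserved word in SQL
        class CustomerOrder {
            @Id
            @GeneratedValue
            private Long id;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "customer_id")
            private Customer customer;
        }

        // Where a Customer.getOrders() collection would have been used, query instead:
        class OrderQueries {
            static List<CustomerOrder> ordersFor(EntityManager em, Customer customer) {
                return em.createQuery(
                        "select o from CustomerOrder o where o.customer = :c", CustomerOrder.class)
                        .setParameter("c", customer)
                        .getResultList();
            }
        }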

    Read the article

  • populate CoreData data model from JSON files prior to app start

    - by johannes_d
    I am creating an iPad App that displays data I got from an API in JSON format. My Core Data model has several entities(Countries, Events, Talks, ...). For each entity I have one .json file that contains all instances of the entity and its attributes as well as its relationships. I would like to populate my Core Data data model with these entities before the start of the App (otherwise it takes about 15 minutes for the iPad to create all the instances of the entities from the several JSON files using factory methods). I am currently importing the data into CoreData like this: -(void)fetchDataIntoDocument:(UIManagedDocument *)document { dispatch_queue_t dataQ = dispatch_queue_create("Data import", NULL); dispatch_async(dataQ, ^{ //Fetching data from application bundle NSURL *tedxgroupsurl = [[NSBundle mainBundle] URLForResource:@"contries" withExtension:@"json"]; NSURL *tedxeventsurl = [[NSBundle mainBundle] URLForResource:@"events" withExtension:@"json"]; //converting the JSON files to NSDictionaries NSError *error = nil; NSDictionary *countries = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:countriesurl] options:kNilOptions error:&error]; countries = [countries objectForKey:@"countries"]; NSDictionary *events = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:eventsurl] options:kNilOptions error:&error]; events = [events objectForKey:@"events"]; //creating entities using factory methods in NSManagedObject Subclasses (Country / Event) [document.managedObjectContext performBlock:^{ NSLog(@"creating countries"); for (NSDictionary *country in countries) { [Country countryWithCountryInfo:country inManagedObjectContext:document.managedObjectContext]; //creating Country entities } NSLog(@"creating events"); for (NSDictionary *event in events) { [Event eventWithEventInfo:event inManagedObjectContext:document.managedObjectContext]; // creating Event entities } NSLog(@"done creating, saving document"); [document saveToURL:document.fileURL forSaveOperation:UIDocumentSaveForOverwriting completionHandler:NULL]; }]; }); dispatch_release(dataQ); } This combines the different JSON files into one UIManagedDocument which i can then perform fetchRequests on to populate tableViews, mapView, etc. I'm looking for a way to create this document outside my application & add it to the mainBundle. Then I could copy it once to the apps DocumentsDirectory and be able I use it (instead of creating the Document within the app from the original JSON files). Any help is appreciated!

    Read the article

  • Handling large (object) datasets with PHP

    - by Aron Rotteveel
    I am currently working on a project that extensively relies on the EAV model. Both entities and their attributes are individually represented by a model, sometimes extending other models (or at least, base models). This has worked quite well so far since most areas of the application only rely on filtered sets of entities, and not the entire dataset. Now, however, I need to parse the entire dataset (i.e. all entities and all their attributes) in order to provide a sorting/filtering algorithm based on the attributes. The application currently consists of approximately 2200 entities, each with approximately 100 attributes. Every entity is represented by a single model (for example Client_Model_Entity) and has a protected property called $_attributes, which is an array of Attribute objects. Each entity object is about 500KB, which results in an incredible load on the server. With 2000 entities, this means a single task would take 1GB of RAM (and a lot of CPU time) in order to work, which is unacceptable. Are there any patterns or common approaches to iterating over such large datasets? Paging is not really an option, since everything has to be taken into account in order to provide the sorting algorithm.

    Read the article

  • I never really understood: what is an Application Binary Interface (ABI)?

    - by claws
    I never clearly understood what is an ABI. I'm sorry for such a lengthy question. I just want to clearly understand things. Please don't point me to wiki article, If could understand it, I wouldn't be here posting such a lengthy post. This is my mindset about different interfaces: TV remote is an interface between user and TV. It is an existing entity but useless (doesn't provide any functionality) by itself. All the functionality for each of those buttons on the remote is implemented in the Television set. Interface: It is a "existing entity" layer between the functionality and consumer of that functionality. An, interface by itself is doesn't do anything. It just invokes the functionality lying behind. Now depending on who the user is there are different type of interfaces. Command Line Interface(CLI) commands are the existing entities, consumer is the user and functionality lies behind. functionality: my software functionality which solves some purpose to which we are describing this interface. existing entities: commands consumer: user Graphical User Interface(GUI) window,buttons etc.. are the existing entities, again consumer is the user and functionality lies behind. functionality: my software functionality which solves some purpose to which we are describing this interface. existing entities: window,buttons etc.. consumer: user Application Programming Interface(API) functions or to be more correct, interfaces (in interfaced based programming) are the existing entities, consumer here is another program not a user. and again functionality lies behind this layer. functionality: my software functionality which solves some purpose to which we are describing this interface. existing entities: functions, Interfaces(array of functions). consumer: another program/application. Application Binary Interface (ABI) Here is my problem starts. functionality: ??? existing entities: ??? consumer: ??? I've wrote few softwares in different languages and provided different kind of interfaces (CLI, GUI, API) but I'm not sure, if I ever, provided any ABI. http://en.wikipedia.org/wiki/Application_binary_interface says: ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; Other ABIs standardize details such as the C++ name mangling,[2] . exception propagation,[3] and calling convention between compilers on the same platform, but do not require cross-platform compatibility. Who needs these details? Please don't say, OS. I know assembly programming. I know how linking & loading works. I know what exactly happens inside. Where did C++ name mangling come in between? I thought we are talking at the binary level. Where did languages come in between? anyway, I've downloaded the [PDF] System V Application Binary Interface Edition 4.1 (1997-03-18) to see what exactly it contains. Well, most of it didn't make any sense. Why does it contain 2 chapters (4th & 5th) which describe the ELF file format.Infact, these are the only 2 significant chapters that specification. Rest of all the chapters "Processor Specific". Anyway, I thought that it is completely different topic. Please don't say that ELF file format specs are the ABI. It doesn't qualify to be Interface according to the definition. I know, since we are talking at such low level it must be very specific. 
But I'm not sure how it is "Instruction Set Architecture (ISA)" specific. Where can I find MS Windows' ABI? So, these are the major queries that are bugging me.

    Read the article

  • Linqbuilder Query with an OrderBy

    - by Renshai
    I have a 1 : M relationship. I built a dynamic query based on input from users to return the listing of parent entities along with their children (using PredicateBuilder; done successfully: new TDataContext().Ps.Where(predicate))... but I need to order the results by a field found only on the child entities. I'm at a loss: new TDataContext().Ps.Where(predicate).OrderBy(p => p.Cs. ??) where Ps = parent collection with a relationship to Cs = child entities. Any help appreciated.

    Read the article

  • How to automatically display relationships in logical diagram?

    - by Cliff Chaney
    Consider a Logical Model where Entity A and Entity B are connected via Relationship Z. If I create a Logical Diagram (note: not another logical MODEL), I am able to drag Entity A and Entity B onto the diagram. Since the Logical Model already "knows" that Entity A and Entity B have a relationship, I would like to be able to easily add it to my logical diagram. I am already aware of the "Show Symbols" option whereby I can select the specific relationship that I want and have it appear. That is not a solution for me. The problem is that I have a LARGE logical model consisting of HUNDREDS of Entities and their various relationships. I then need to create application-specific diagrams consisting of a subset of those entities. I can easily identify and drag the 50+ entities onto a new diagram. But identifying and selecting all the associations is an exercise in frustration. Is there an option or some sort of feature that I'm missing - or any other trick - that would allow me to add only the relationships between selected entities or all entities on a diagram?

    Read the article

  • Criteria: search for two different entity classes...

    - by RoCMe
    Hi! I have a "super entity" SuperEntity and three entities ChildEntity1, ..., ChildEntity3 which extend the super class. It's easy to search for all entities in the database, i.e. we could use session.createCriteria(SuperEntity.class); It's no problem to search for one specific entity type, too, just replace the SuperEntity with any of the children to look for entities of that type. But I have a problem when allowing 'multiple choice' for the types. I.e., it could be necessary to search for all entities of type 1 and 2, but not of type 3. A first idea was to create two independent queries and join the results in a final list - but that would destroy the paging which uses the offset and limit functionality of the database... Is there a possibility in Criteria to join two different queries in one single result list? Kind regards, RoCMe
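
    One approach that appears to work with the classic Criteria API (assuming the children are mapped as subclasses of SuperEntity) is to query the superclass and restrict on Hibernate's implicit "class" property, so a single query covers both subtypes and offset/limit paging keeps working. A minimal sketch, with method and variable names made up:

        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.criterion.Restrictions;

        public class PolymorphicSearch {
            @SuppressWarnings("unchecked")
            public static List<SuperEntity> findType1And2(Session session, int offset, int limit) {
                return session.createCriteria(SuperEntity.class)
                        .add(Restrictions.or(
                                Restrictions.eq("class", ChildEntity1.class),
                                Restrictions.eq("class", ChildEntity2.class)))
                        .setFirstResult(offset)   // paging stays a single database query
                        .setMaxResults(limit)
                        .list();
            }
        }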

    Read the article

  • Azure – Part 5 – Repository Pattern for Table Service

    - by Shaun
    In my last post I created a very simple WCF service with the user registration functionality. I created an entity for the user data and a DataContext class which provides some methods for operating the entities such as add, delete, etc. And in the service method I utilized it to add a new entity into the table service. But I didn’t have any validation before registering which is not acceptable in a real project. So in this post I would firstly add some validation before perform the data creation code and show how to use the LINQ for the table service.   LINQ to Table Service Since the table service utilizes ADO.NET Data Service to expose the data and the managed library of ADO.NET Data Service supports LINQ we can use it to deal with the data of the table service. Let me explain with my current example: I would like to ensure that when register a new user the email address should be unique. So I need to check the account entities in the table service before add. If you remembered, in my last post I mentioned that there’s a method in the TableServiceContext class – CreateQuery, which will create a IQueryable instance from a given type of entity. So here I would create a method under my AccountDataContext class to return the IQueryable<Account> which named Load. 1: public class AccountDataContext : TableServiceContext 2: { 3: private CloudStorageAccount _storageAccount; 4:  5: public AccountDataContext(CloudStorageAccount storageAccount) 6: : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials) 7: { 8: _storageAccount = storageAccount; 9:  10: var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri, 11: _storageAccount.Credentials); 12: tableStorage.CreateTableIfNotExist("Account"); 13: } 14:  15: public void Add(Account accountToAdd) 16: { 17: AddObject("Account", accountToAdd); 18: SaveChanges(); 19: } 20:  21: public IQueryable<Account> Load() 22: { 23: return CreateQuery<Account>("Account"); 24: } 25: } The method returns the IQueryable<Account> so that I can perform the LINQ operation on it. And back to my service class, I will use it to implement my validation. 1: public bool Register(string email, string password) 2: { 3: var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString"); 4: var accountToAdd = new Account(email, password) { DateCreated = DateTime.Now }; 5: var accountContext = new AccountDataContext(storageAccount); 6:  7: // validation 8: var accountNumber = accountContext.Load() 9: .Where(a => a.Email == accountToAdd.Email) 10: .Count(); 11: if (accountNumber > 0) 12: { 13: throw new ApplicationException(string.Format("Your account {0} had been used.", accountToAdd.Email)); 14: } 15:  16: // create entity 17: try 18: { 19: accountContext.Add(accountToAdd); 20: return true; 21: } 22: catch (Exception ex) 23: { 24: Trace.TraceInformation(ex.ToString()); 25: } 26: return false; 27: } I used the Load method to retrieve the IQueryable<Account> and use Where method to find the accounts those email address are the same as the one is being registered. If it has I through an exception back to the client side. Let’s run it and test from my simple client application. Oops! Looks like we encountered an unexpected exception. It said the “Count” is not support by the ADO.NET Data Service LINQ managed library. That is because the table storage managed library (aka. TableServiceContext) is based on the ADO.NET Data Service and it supports very limit LINQ operation. 
Although I didn’t find a full list or documentation about which LINQ methods it supports I could even refer a page on msdn here. It gives us a roughly summary of which query operation the ADO.NET Data Service managed library supports and which doesn't. As you see the Count method is not in the supported list. Not only the query operation, there inner lambda expression in the Where method are limited when using the ADO.NET Data Service managed library as well. For example if you added (a => !a.DateDeleted.HasValue) in the Where method to exclude those deleted account it will raised an exception said "Invalid Input". Based on my experience you should always use the simple comparison (such as ==, >, <=, etc.) on the simple members (such as string, integer, etc.) and do not use any shortcut methods (such as string.Compare, string.IsNullOrEmpty etc.). 1: // validation 2: var accountNumber = accountContext.Load() 3: .Where(a => a.Email == accountToAdd.Email) 4: .ToList() 5: .Count; 6: if (accountNumber > 0) 7: { 8: throw new ApplicationException(string.Format("Your account {0} had been used.", accountToAdd.Email)); 9: } We changed the a bit and try again. Since I had created an account with my mail address so this time it gave me an exception said that the email had been used, which is correct.   Repository Pattern for Table Service The AccountDataContext takes the responsibility to save and load the account entity but only for that specific entity. Is that possible to have a dynamic or generic DataContext class which can operate any kinds of entity in my system? Of course yes. Although there's no typical database in table service we can threat the entities as the records, similar with the data entities if we used OR Mapping. As we can use some patterns for ORM architecture here we should be able to adopt the one of them - Repository Pattern in this example. We know that the base class - TableServiceContext provide 4 methods for operating the table entities which are CreateQuery, AddObject, UpdateObject and DeleteObject. And we can create a relationship between the enmity class, the table container name and entity set name. So it's really simple to have a generic base class for any kinds of entities. Let's rename the AccountDataContext to DynamicDataContext and make the type of Account as a type parameter if it. 1: public class DynamicDataContext<T> : TableServiceContext where T : TableServiceEntity 2: { 3: private CloudStorageAccount _storageAccount; 4: private string _entitySetName; 5:  6: public DynamicDataContext(CloudStorageAccount storageAccount) 7: : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials) 8: { 9: _storageAccount = storageAccount; 10: _entitySetName = typeof(T).Name; 11:  12: var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri, 13: _storageAccount.Credentials); 14: tableStorage.CreateTableIfNotExist(_entitySetName); 15: } 16:  17: public void Add(T entityToAdd) 18: { 19: AddObject(_entitySetName, entityToAdd); 20: SaveChanges(); 21: } 22:  23: public void Update(T entityToUpdate) 24: { 25: UpdateObject(entityToUpdate); 26: SaveChanges(); 27: } 28:  29: public void Delete(T entityToDelete) 30: { 31: DeleteObject(entityToDelete); 32: SaveChanges(); 33: } 34:  35: public IQueryable<T> Load() 36: { 37: return CreateQuery<T>(_entitySetName); 38: } 39: } I saved the name of the entity type when constructed for performance matter. The table name, entity set name would be the same as the name of the entity class. 
The Load method returned a generic IQueryable instance which supports the lazy load feature. Then in my service class I changed the AccountDataContext to DynamicDataContext and that's all. 1: var accountContext = new DynamicDataContext<Account>(storageAccount); Run it again and register another account. The DynamicDataContext now can be used for any entities. For example, I would like the account has a list of notes which contains 3 custom properties: Account Email, Title and Content. We create the note entity class. 1: public class Note : TableServiceEntity 2: { 3: public string AccountEmail { get; set; } 4: public string Title { get; set; } 5: public string Content { get; set; } 6: public DateTime DateCreated { get; set; } 7: public DateTime? DateDeleted { get; set; } 8:  9: public Note() 10: : base() 11: { 12: } 13:  14: public Note(string email) 15: : base(email, string.Format("{0}_{1}", email, Guid.NewGuid().ToString())) 16: { 17: AccountEmail = email; 18: } 19: } And no need to tweak the DynamicDataContext we can directly go to the service class to implement the logic. Notice here I utilized two DynamicDataContext instances with the different type parameters: Note and Account. 1: public class NoteService : INoteService 2: { 3: public void Create(string email, string title, string content) 4: { 5: var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString"); 6: var accountContext = new DynamicDataContext<Account>(storageAccount); 7: var noteContext = new DynamicDataContext<Note>(storageAccount); 8:  9: // validate - email must be existed 10: var accounts = accountContext.Load() 11: .Where(a => a.Email == email) 12: .ToList() 13: .Count; 14: if (accounts <= 0) 15: throw new ApplicationException(string.Format("The account {0} does not exsit in the system please register and try again.", email)); 16:  17: // save the note 18: var noteToAdd = new Note(email) { Title = title, Content = content, DateCreated = DateTime.Now }; 19: noteContext.Add(noteToAdd); 20: } 21: } And updated our client application to test the service. I didn't implement any list service to show all notes but we can have a look on the local SQL database if we ran it at local development fabric.   Summary In this post I explained a bit about the limited LINQ support for the table service. And then I demonstrated about how to use the repository pattern in the table service data access layer and make the DataContext dynamically. The DynamicDataContext I created in this post is just a prototype. In fact we should create the relevant interface to make it testable and for better structure we'd better separate the DataContext classes for each individual kind of entity. So it should have IDataContextBase<T>, DataContextBase<T> and for each entity we would have class AccountDataContext<Account> : IDataContextBase<Account>, DataContextBase<Account> { … } class NoteDataContext<Note> : IDataContextBase<Note>, DataContextBase<Note> { … }   Besides the structured data saving and loading, another common scenario would be saving and loading some binary data such as images, files. In my next post I will show how to use the Blob Service to store the bindery data - make the account be able to upload their logo in my example.   Hope this helps, Shaun   All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data. Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema.  If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.  Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business. Then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation. Oracle has deep knowledge of applications dating back to the early 1990s.  It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema. GSS Requirements Gathering GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.  Business Process Automation EBusiness is about maximizing operational efficiency. 
At the highest level, these 'operations' span all that you do as an organization.  The following figure illustrates some of these high-level business processes. Enterprise Business Processes Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place. ·            The same accurate and timely customer data will be provided to all your operational applications from the call center to the point of sale. ·            The same accurate and timely supplier data will be provided to all your operational applications from supply chain planning to procurement. ·            The same accurate and timely product information will be available to all your operational applications from demand chain planning to marketing. You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise. Global Single Schema Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations, and extensibility.  The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a services (DaaS) to business processes and enable operations in heterogeneous IT environments. ·         Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility.  o    This enables model extensions that reflect business entities unique to your organization's operations. ·         The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries. ·         It uses variable byte Unicode to support over 31 languages. ·         The schema encodes flexible date and flexible address formats for easy localizations. No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.   
    Interconnected Business Entities Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric 2D game with moderate-scale multiplayer, approximately 20-30 players connected at once to a persistent server. I've had some difficulty getting a good movement prediction implementation in place. Physics/Movement The game doesn't have a true physics implementation, but uses the basic principles to implement movement. Rather than continually polling input, state changes (ie/ mouse down/up/move events) are used to change the state of the character entity the player is controlling. The player's direction (ie/ north-east) is combined with a constant speed and turned into a true 3D vector - the entity's velocity. In the main game loop, "Update" is called before "Draw". The update logic triggers a "physics update task" that tracks all entities with a non-zero velocity uses very basic integration to change the entities position. For example: entity.Position += entity.Velocity.Scale(ElapsedTime.Seconds) (where "Seconds" is a floating point value, but the same approach would work for millisecond integer values). The key point is that no interpolation is used for movement - the rudimentary physics engine has no concept of a "previous state" or "current state", only a position and velocity. State Change and Update Packets When the velocity of the character entity the player is controlling changes, a "move avatar" packet is sent to the server containing the entity's action type (stand, walk, run), direction (north-east), and current position. This is different from how 3D first person games work. In a 3D game the velocity (direction) can change frame to frame as the player moves around. Sending every state change would effectively transmit a packet per frame, which would be too expensive. Instead, 3D games seem to ignore state changes and send "state update" packets on a fixed interval - say, every 80-150ms. Since speed and direction updates occur much less frequently in my game, I can get away with sending every state change. Although all of the physics simulations occur at the same speed and are deterministic, latency is still an issue. For that reason, I send out routine position update packets (similar to a 3D game) but much less frequently - right now every 250ms, but I suspect with good prediction I can easily boost it towards 500ms. The biggest problem is that I've now deviated from the norm - all other documentation, guides, and samples online send routine updates and interpolate between the two states. It seems incompatible with my architecture, and I need to come up with a better movement prediction algorithm that is closer to a (very basic) "networked physics" architecture. The server then receives the packet and determines the players speed from it's movement type based on a script (Is the player able to run? Get the player's running speed). Once it has the speed, it combines it with the direction to get a vector - the entity's velocity. Some cheat detection and basic validation occurs, and the entity on the server side is updated with the current velocity, direction, and position. Basic throttling is also performed to prevent players from flooding the server with movement requests. After updating its own entity, the server broadcasts an "avatar position update" packet to all other players within range. The position update packet is used to update the client side physics simulations (world state) of the remote clients and perform prediction and lag compensation. 
Prediction and Lag Compensation As mentioned above, clients are authoritative for their own position. Except in cases of cheating or anomalies, the client's avatar will never be repositioned by the server. No extrapolation ("move now and correct later") is required for the client's avatar - what the player sees is correct. However, some sort of extrapolation or interpolation is required for all remote entities that are moving. Some sort of prediction and/or lag-compensation is clearly required within the client's local simulation / physics engine. Problems I've been struggling with various algorithms, and have a number of questions and problems: Should I be extrapolating, interpolating, or both? My "gut feeling" is that I should be using pure extrapolation based on velocity. State change is received by the client, client computes a "predicted" velocity that compensates for lag, and the regular physics system does the rest. However, it feels at odds to all other sample code and articles - they all seem to store a number of states and perform interpolation without a physics engine. When a packet arrives, I've tried interpolating the packet's position with the packet's velocity over a fixed time period (say, 200ms). I then take the difference between the interpolated position and the current "error" position to compute a new vector and place that on the entity instead of the velocity that was sent. However, the assumption is that another packet will arrive in that time interval, and it's incredibly difficult to "guess" when the next packet will arrive - especially since they don't all arrive on fixed intervals (ie/ state changes as well). Is the concept fundamentally flawed, or is it correct but needs some fixes / adjustments? What happens when a remote player stops? I can immediately stop the entity, but it will be positioned in the "wrong" spot until it moves again. If I estimate a vector or try to interpolate, I have an issue because I don't store the previous state - the physics engine has no way to say "you need to stop after you reach position X". It simply understands a velocity, nothing more complex. I'm reluctant to add the "packet movement state" information to the entities or physics engine, since it violates basic design principles and bleeds network code across the rest of the game engine. What should happen when entities collide? There are three scenarios - the controlling player collides locally, two entities collide on the server during a position update, or a remote entity update collides on the local client. In all cases I'm uncertain how to handle the collision - aside from cheating, both states are "correct" but at different time periods. In the case of a remote entity it doesn't make sense to draw it walking through a wall, so I perform collision detection on the local client and cause it to "stop". Based on point #2 above, I might compute a "corrected vector" that continually tries to move the entity "through the wall" which will never succeed - the remote avatar is stuck there until the error gets too high and it "snaps" into position. How do games work around this?
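
    For the extrapolation question specifically, one widely used compromise is to keep extrapolating remote entities from their last known velocity (dead reckoning) and, when an update arrives, blend the drawn position toward the newly projected authoritative position over a short window instead of snapping. The sketch below only illustrates that idea; Vec2, the blend time and the latency handling are placeholders, not the engine described above:

        class Vec2 {
            final double x, y;
            Vec2(double x, double y) { this.x = x; this.y = y; }
            Vec2 add(Vec2 o) { return new Vec2(x + o.x, y + o.y); }
            Vec2 scale(double s) { return new Vec2(x * s, y * s); }
            Vec2 lerp(Vec2 o, double t) { return new Vec2(x + (o.x - x) * t, y + (o.y - y) * t); }
        }

        class RemoteAvatar {
            Vec2 renderPos = new Vec2(0, 0);   // what is currently drawn
            Vec2 serverPos = new Vec2(0, 0);   // last authoritative position, projected forward
            Vec2 velocity  = new Vec2(0, 0);
            double blendRemaining = 0;         // seconds left to smooth out the error
            static final double BLEND_TIME = 0.25;

            // Called for both state-change and routine position-update packets.
            void onServerUpdate(Vec2 position, Vec2 newVelocity, double latencySeconds) {
                velocity = newVelocity;
                // Compensate for the packet's age by projecting the server state forward.
                serverPos = position.add(newVelocity.scale(latencySeconds));
                blendRemaining = BLEND_TIME;
            }

            // Called every frame from the normal physics update.
            void update(double dt) {
                // Both the drawn and the authoritative positions keep extrapolating.
                serverPos = serverPos.add(velocity.scale(dt));
                renderPos = renderPos.add(velocity.scale(dt));
                if (blendRemaining > 0) {
                    // Remove a fraction of the remaining error each frame instead of snapping.
                    double t = Math.min(1.0, dt / blendRemaining);
                    renderPos = renderPos.lerp(serverPos, t);
                    blendRemaining -= dt;
                }
            }
        }

    A zero velocity in the update handles the "remote player stops" case: the entity stops extrapolating immediately and the remaining positional error is blended out over the same short window.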

    Read the article

  • TomEE Integration in NetBeans Next

    - by Geertjan
    At JavaOne 2013, there was a lot of buzz around the TomEE server, e.g., many Tweets, nice party, and a new TomEE consulting company. For those tracking TomEE developments, it is interesting to note that recently the NetBeans IDE development builds have had added to them... TomEE support. Note: The TomEE support described here is not in NetBeans IDE 7.4, but in development builds for the next release of NetBeans IDE.For example, with NetBeans IDE development builds you're able to: register TomEE as a server in the Services window (TomEE has several distributions, e.g., one can use the "with JAX-RS" one, for example) create a Java EE 6 web project (e.g., Maven based) against this server create JPA entities from database create JAX-RS classes from JPA entities create JSF pages from JPA entities the IDE lets you create a new data source for TomEE and deploy it to the server the IDE figures out the components that are already packaged in TomEE, and the fact that (unlike with regular Tomcat), it does not need to package any components such as JSF implementation, persistence provider, or JAX-RS runtime, so that the resulting WAR file is very small the IDE can also do "deploy on save" with TomEE, so that your development cycle is very fast Adam Bien blogged about how he set up TomEE sometime ago, here. The official support in NetBeans IDE will be much more tightly integrated, simplifying the steps Adam describes. For example, the IDE does step 2 from Adam's blog for you, i.e., it sets up TomEE deployment roles. Moreover, it knows about all the technologies included in TomEE so that it can optimize the packaging; it knows about TomEE's persistence setup; it can work with TomEE data sources, etc. Below you see a Maven-based Java EE 6 PrimeFaces application (all entities and JSF pages generated from a database) deployed to TomEE in NetBeans IDE: And here's the management console for configuring and finetuning TomEE in NetBeans IDE: When I tried out the NetBeans IDE development build and TomEE, to see how everything fits together, I was surprised at how fast TomEE started up. Not sure what they did to it, but seems like a server on steroids. And setting it up in NetBeans IDE was trivial. Add the simple set up of TomEE in NetBeans IDE to the many benefits that the widely praised out of the box NetBeans Maven tools make possible, together with the fact that not one single plugin had to be installed to get everything you see described here up and running... and you have a really powerful combination of dev tools, all for free.

    Read the article

  • How to generate a metamodel for POJOs

    - by Stefan Haberl
    I'm looking for a metamodel generation library for simple Java POJOs. I'm thinking about something along the lines of JPA 2.0's metamodel, which can be used to generate type-safe criteria queries for JPA entities (see this Question on SO for JPA-specific implementations). Does something similar exist for general-purpose JavaBeans, i.e., POJOs, that are not JPA entities? Specifically, I want to use Spring MVC's WebDataBinder.setAllowedFields() method in a type-safe way in my Spring MVC @Controllers.
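
    For comparison, a JPA 2.0 canonical metamodel class declares one typed attribute per property (e.g. public static volatile SingularAttribute<Person, String> firstName;) and is produced by an annotation processor. Absent such a library for plain POJOs, a hand-maintained companion class at least centralizes the property names used for binding. The sketch below is an assumed example, not an existing library, and is only as "type safe" as the discipline of keeping the constants in sync with the POJO:

        import org.springframework.web.bind.WebDataBinder;
        import org.springframework.web.bind.annotation.InitBinder;

        // A plain form-backing POJO.
        class PersonForm {
            private String firstName;
            private String lastName;
            public String getFirstName() { return firstName; }
            public void setFirstName(String firstName) { this.firstName = firstName; }
            public String getLastName() { return lastName; }
            public void setLastName(String lastName) { this.lastName = lastName; }
        }

        // Hand-rolled companion in the spirit of a generated metamodel: one constant per property.
        final class PersonForm_ {
            static final String FIRST_NAME = "firstName";
            static final String LAST_NAME = "lastName";
            private PersonForm_() {}
        }

        class PersonController {
            @InitBinder
            protected void initBinder(WebDataBinder binder) {
                binder.setAllowedFields(PersonForm_.FIRST_NAME, PersonForm_.LAST_NAME);
            }
        }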

    Read the article

  • [EF + Oracle] Inserting Data (1/2)

    - by JTorrecilla
    Prologue Following the EF series (I, II and III), in this chapter we will see how to create a DB record from EF. Inserting Data As we indicated in the 2nd post: “One Entity matches a DB record, and one property matches a Table Column”. To start, we need to create an object from one of the Entities: 1: EMPLEADOS empleado = new EMPLEADOS(); Also, as I mentioned previously, there is the possibility to use the Static Function defined by VS for each Entity. Once we have created the object, we can access its properties and fill them like those of a common class: 1: empleado.NOMBRE = "Javier Torrecilla"; After finishing filling our Entity properties, the object must be added to the appropriate ObjectSet in the ObjectContext: 1: enti.EMPLEADOS.AddObject(empleado); or 1: enti.AddToEMPLEADOS(empleado); Both methods do the same action: create an insert statement. Have we finished? No. Any Entity has a property called “EntityState”. This prop is an Enum of type “EntityState”, which has the following values: Detached: the Entity is created, but not added to the Context. Unchanged: There are no pending changes in the Entity. Added: The entity is added to the ObjectSet, but it is not yet sent to the DB. Deleted: The object is deleted from the ObjectSet, but not yet from the DB. Modified: There are pending changes to confirm. Let’s see the several values of the property during the creation steps: 1. While the Object is created and we are filling the props: EntityState.Detached; 2. After adding to the ObjectSet: EntityState.Added. This does not indicate that the record is in the DB. 3. Saving the Data: To save the data in the DB, we are going to call the “SaveChanges” method of the Object Context. After invoking it, the property will be EntityState.Unchanged.

    Read the article
