Search Results

Search found 7266 results on 291 pages for 'entity relationship'.


  • How to link a C++ object to a local variable in Lua

    - by MahanGM
    I'm completing my scripting interface with Lua, but recently I've gotten stuck at one point. I have several functions for my entity events, like Update(). I have a function called create_entity() which instantiates a new entity from a given entity index: function Update() local bullet = create_entity(0, 0, "obj_bullet") end. create_entity returns a table holding the properties of the created entity. Now how can I make a connection between the bullet variable and my newly created object? Right now, for objects previously added to the scene, I simply set a global table for each of them, and then after every call to Update() I go through the registered names to find the object tables and apply any new changes. Like the one below: function Update() if keyboard_key_press(vk_right) then obj_player.x += 3 end end. I can get the obj_player table because I know its name from C++; plus I can get it as a global table and simply reach for the first instance named obj_player. Is there any solution that lets the bullet variable act like this? I was thinking of getting all local variables in the Update() function and checking each one to see whether it is a table and has a unique field attached to it, like an id; this way I can determine that it is an object table and do the rest of the process. By the way, would this interface be easier to implement with Luabind? Bottom line: how can I make a local variable in Lua that receives a table from the create_entity function, and track that local variable so I can capture it from C++? e.g. function Update() local bullet = create_entity(0, 0, "obj_bullet") bullet.x = 10 <== commit a change in the table end. Now I want to get the variable bullet from C++. And it's not just this one variable; there might be a ton of these local variables with different names.

    Read the article

  • Working with QuickBooks using LINQPad

    - by dataintegration
    The RSSBus ADO.NET Providers can be used from many applications and development environments. In this article, we show how to use LINQPad to connect to QuickBooks using the RSSBus ADO.NET Provider for QuickBooks. Although this example uses the QuickBooks Data Provider, the same process applies to any of our ADO.NET Providers. Create the Data Model. Step 1: Download and install both the Data Provider from RSSBus and LINQPad (available at www.linqpad.net). Step 2: Create a new project in Visual Studio and create a data model for it using the ADO.NET Entity Data Model wizard. Step 3: Create a new connection by clicking "New Connection", specify the connection string options, and click Next. Step 4: Select the desired tables and views and click Finish to create the data model. Step 5: Right-click on the entity diagram and select 'Add Code Generation Item'. Choose the 'ADO.NET DbContext Generator'. Step 6: Now build the project. The generated files can be used to create a QuickBooks connection in LINQPad. Create the connection to QuickBooks in LINQPad. Step 7: Open LINQPad and click 'Add New Connection'. Step 8: Choose 'Entity Framework DbContext POCO'. Step 9: Choose the data model assembly ('.dll') created by Visual Studio as the 'Path to Custom Assembly'. Choose the name of the custom DbContext, the path to the config file, and assign a name to the connection that will allow you to recognize its purpose. Step 10: Congratulations! Now you have a connection to QuickBooks, and you can query data through LINQPad.
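    Once the connection appears in LINQPad, queries go directly against the context's entity sets. The snippet below is a hedged illustration only: the Customers set and its columns are placeholders for whatever tables were selected in Step 4.

        // Run as a LINQPad "C# Statements" query against the new connection.
        // Entity set and column names are hypothetical.
        var bigSpenders =
            from c in Customers
            where c.Balance > 1000
            orderby c.Name
            select new { c.Name, c.Balance };

        bigSpenders.Dump(); // LINQPad's Dump() renders the results in the grid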

    Read the article

  • How to draw an E-R diagram?

    - by Appy
    I am learning DBMS in my college. Recently I was given an assignment to draw an E-R model of a bus reservation system which handles reservations, ticketing and cancellations. I understand the theory of the E-R model when I study from a book, but it gets confusing when I try to draw one from scratch. How should one proceed? There seem to be a lot of ways to model an E-R diagram for a particular requirement, which is really confusing. Can anyone explain, taking a bus reservation system as an example? Here is the model I made (but I am not confident with it, because at every step I could think of many more alternatives!): Entity_Set Passenger(passengerID, name, age, gender); Entity_Set Ticket(ticketID, status) // status is either WaitingList, Confirmed or Cancelled; Entity_Set Bus(busID, MaxSeats, Type) // Type is AC or Non-AC; Entity_Set Route(routeID, ArrivalTime, DepartureTime, Source, Destination). And a ternary relationship between Passenger, Ticket and Bus with attributes passengerID, ticketID, busID; and a binary relationship between Bus and Route with attributes busID, routeID. I have a few doubts: 1. Should we take Time as a composite attribute with Arrival and Departure as its components (what's the difference if we take it that way)? 2. The same with Source and Destination: should they be made into a composite attribute "Place" or something like "Location"? 3. Are there any weak entity sets here? Can you please 'create' a weak entity set and explain? I have no idea at all what to take as a weak entity set.

    Read the article

  • Alternatives to type casting in your domain

    - by Mr Happy
    In my domain I have an entity Activity which has a list of ITasks. Each implementation of this task has its own properties besides the implementation of ITask itself. Now each operation of the Activity entity (e.g. Execute()) only needs to loop over this list and call an ITask method (e.g. ExecuteTask()). Where I'm having trouble is when a specific task's properties need to be updated. How do I get an instance of that task? The options I see are: get the Activity by Id and cast the task I need, which will sprinkle my code with either Tasks.OfType<SpecificTask>().Single(t => t.Id == taskId) or Tasks.Single(t => t.Id == taskId) as SpecificTask; or make each task unique in the whole system (make each task an entity), and create a new repository for each ITask implementation. I don't like either option: the first because I don't like casting (I'm using NHibernate and I'm sure this'll come back and bite me when I start using lazy loading, since NHibernate currently uses proxies to implement it), and the second because there are/will be dozens of different kinds of tasks, which would mean I'd have to create as many repositories. Am I missing a third option here? Or are any of my objections to the two options not justified? How have you solved this problem in the past?
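    One frequently suggested third option, sketched below with invented member names rather than anything from the question, is to keep the type test in exactly one place behind the aggregate root so that callers never repeat the cast:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public interface ITask
        {
            Guid Id { get; }
            void ExecuteTask();
        }

        public class Activity
        {
            private readonly IList<ITask> tasks = new List<ITask>();

            public void Add(ITask task) { tasks.Add(task); }

            public void Execute()
            {
                foreach (var task in tasks)
                    task.ExecuteTask();
            }

            // The single, centralized lookup-and-cast. Note: with NHibernate
            // lazy loading, 'as T' can fail against an uninitialized proxy, so
            // the collection may need to be unproxied or fetched eagerly.
            public T GetTask<T>(Guid taskId) where T : class, ITask
            {
                var task = tasks.SingleOrDefault(t => t.Id == taskId) as T;
                if (task == null)
                    throw new InvalidOperationException(
                        "No task " + taskId + " of type " + typeof(T).Name);
                return task;
            }
        }

    Callers then write activity.GetTask<SpecificTask>(taskId).SomeProperty = newValue; the casting concern lives in one method instead of being sprinkled through the codebase.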

    Read the article

  • A tip: Updating Data in SharePoint 2010 using REST API

    - by Sahil Malik
    SharePoint 2010 Training: more information. Here is a little tip that will save you hours of head scratching. There are two ways to update data in SharePoint using the REST-based API. A PUT request is used to update an entire entity: if no values are specified for fields in the entity, the fields will be set to default values. A MERGE request is used to update only those field values that have changed; any fields that are not specified by the operation will remain set to their current value. Now, sit back and think about it. You are going to update the entire entity! Hmm. Which means you need to a) specify every column value, and b) ensure that the read-only values match what was supplied to you. What a pain in the donkey! So 99/100 times, a PUT request will give you an "HTTP 500 internal server error occurred", which is just so helpful. Read full article ....
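    To make the difference concrete, here is a rough C# sketch of a MERGE against the SharePoint 2010 listdata.svc endpoint, using the WCF Data Services convention of tunneling MERGE through POST via the X-HTTP-Method header; the server URL, list name, item id and field are all placeholders:

        // Send a MERGE (partial update): only the fields in the body are
        // touched, everything else keeps its current value.
        using System;
        using System.Net;
        using System.Text;

        class MergeDemo
        {
            static void Main()
            {
                var req = (HttpWebRequest)WebRequest.Create(
                    "http://server/_vti_bin/listdata.svc/Tasks(42)");
                req.Method = "POST";                      // MERGE is tunneled over POST
                req.Headers["X-HTTP-Method"] = "MERGE";
                req.Headers["If-Match"] = "*";            // skip optimistic-concurrency check
                req.ContentType = "application/json";
                req.Credentials = CredentialCache.DefaultCredentials;

                byte[] body = Encoding.UTF8.GetBytes("{ \"Title\": \"Updated title\" }");
                req.ContentLength = body.Length;
                using (var s = req.GetRequestStream())
                    s.Write(body, 0, body.Length);

                using (var resp = (HttpWebResponse)req.GetResponse())
                    Console.WriteLine(resp.StatusCode);   // expect 204 No Content
            }
        }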

    Read the article

  • State Changes in a Component Based Architecture [closed]

    - by Maxem
    I'm currently working on a game using the naive component-based architecture thingie (entities are a bag of components; entity.Update() calls Update on each updateable component). While the addition of new features is really simple, it makes a few things really difficult: a) multithreading / concurrency, b) networking, c) unit testing. Multithreading / concurrency is difficult because I basically have to do poor man's concurrency (running the entity updates in separate threads while locking only stuff that crashes (like lists) and ignoring the staleness of read state (some states are already updated, others aren't)). Networking: there are no explicit state changes that I could efficiently push over the net. Unit testing: all updates may or may not conflict, so automated testing is at least awkward. I was thinking about these issues a bit and would like your input on these changes / ideas: 1. switch from the naive CBA to a CBA with subsystems that work on lists of components; 2. make all state changes explicit; 3. combine 1 and 2 :p Example world update:

        statePostProcessing.Wait() // ensure that post processing has finished
        Apply(postProcessedState)
        state = new StateBag()
        Concurrently(
            () => LifeCycleSubSystem.Update(state), // populates the state bag
            () => MovementSubSystem.Update(state),  // populates the state bag
            ....
        )
        statePostProcessing = Future(() => PostProcess(state))
        statePostProcessing.Start() // Tick is finished, the post processing happens in the background

    So basically the changes are (consistently) based on the data for the last tick; the post processing can a) generate network packages and b) fix conflicts / remove useless changes (example: the entity has been destroyed, so ignore its movement, etc.). EDIT: To clarify the granularity of the state changes: if I save these post-processed state bags and apply them to an empty world, I see exactly what has happened in the game these state bags originated from, which gives "free" replay capability. EDIT2: I guess I should have used the term Event instead of State Change, and point out that I kind of want to use the Event Sourcing pattern.
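    As a hedged sketch of point 2 (explicit state changes collected into a bag and applied between ticks), with all type names invented rather than taken from the post:

        // Subsystems never mutate entities during a tick; they emit explicit
        // change objects into a shared bag. Applying a saved bag to an empty
        // world replays the tick, which is what enables the replay capability
        // and gives the networking layer concrete packets to serialize.
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        public class Entity2D
        {
            public int Id;
            public float X, Y;
            public void SetPosition(float x, float y) { X = x; Y = y; }
        }

        public class World
        {
            private readonly Dictionary<int, Entity2D> entities =
                new Dictionary<int, Entity2D>();
            public Entity2D GetEntity(int id) { return entities[id]; }
        }

        public abstract class StateChange
        {
            public int EntityId;
            public abstract void ApplyTo(World world);
        }

        public sealed class PositionChanged : StateChange
        {
            public float X, Y;
            public override void ApplyTo(World world)
            {
                world.GetEntity(EntityId).SetPosition(X, Y);
            }
        }

        public interface ISubSystem
        {
            // Reads the previous tick's (effectively immutable) world and
            // appends proposed changes; post-processing later drops or merges
            // conflicting changes before they are applied.
            void Update(World lastTick, ConcurrentBag<StateChange> changes);
        }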

    Read the article

  • Repository query conditions, dependencies and DRY

    - by vFragosop
    To keep it simple, let's suppose an application which has Accounts and Users. Each account may have any number of users. There are also three consumers of UserRepository: an admin interface which may list all users, a public front-end which may list all users, and an account-authenticated API which should only list its own users. Assuming UsersRepository is something like this:

        class UsersRepository extends DatabaseAbstraction {
            private function query() {
                return $this->database()->select('users.*');
            }

            public function getAll() {
                return $this->query()->exec();
            }

            // IMPORTANT:
            // Tons of other methods for searching, filtering,
            // joining of other tables, ordering and such...
        }

    Keeping in mind the comment above, and the necessity to abstract user querying conditions, how should I handle querying of users filtered by account_id? I can picture three possible roads: 1. Should I create an AccountUsersRepository?

        class AccountUsersRepository extends UserRepository {
            public function __construct(Account $account) {
                $this->account = $account;
            }

            private function query() {
                return parent::query()
                    ->where('account_id', '=', $this->account->id);
            }
        }

    This has the advantage of reducing the duplication of UsersRepository methods, but doesn't quite fit into anything I've read about DDD so far (I'm a rookie, by the way). 2. Should I put it as a method on AccountsRepository?

        class AccountsRepository extends DatabaseAbstraction {
            public function getAccountUsers(Account $account) {
                return $this->database()
                    ->select('users.*')
                    ->where('account_id', '=', $account->id)
                    ->exec();
            }
        }

    This requires the duplication of all UserRepository methods and may need another UserQuery layer that implements that querying logic in a chainable way. 3. Should I query UserRepository from within my Account entity?

        class Account extends Entity {
            public function getUsers() {
                return UserRepository::findByAccountId($this->id);
            }
        }

    This feels more like an aggregate root to me, but introduces a dependency of the Account entity on UserRepository, which may violate a few principles. 4. Or am I missing the point completely? Maybe there's an even better solution? Footnote: besides permissions being a Service concern, in my understanding they shouldn't implement SQL queries but should leave that to repositories, since those may not even be SQL-driven.
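    For what it's worth, a fourth road is a criteria/specification style: keep a single repository and pass the account filter in explicitly, so each consumer composes only the conditions it is allowed to use. A minimal sketch follows, in C# rather than the question's PHP, with every name invented:

        using System.Collections.Generic;
        using System.Linq;

        public class User
        {
            public int Id;
            public int AccountId;
        }

        public interface ICriterion<T>
        {
            IQueryable<T> Apply(IQueryable<T> query);
        }

        public sealed class ByAccount : ICriterion<User>
        {
            private readonly int accountId;
            public ByAccount(int accountId) { this.accountId = accountId; }
            public IQueryable<User> Apply(IQueryable<User> query)
            {
                return query.Where(u => u.AccountId == accountId);
            }
        }

        public class UsersRepository
        {
            private readonly IQueryable<User> users; // stands in for the database abstraction
            public UsersRepository(IQueryable<User> users) { this.users = users; }

            // The admin UI and public front-end pass no criteria; the
            // account-authenticated API always passes a ByAccount criterion.
            public IList<User> GetAll(params ICriterion<User>[] criteria)
            {
                var query = users;
                foreach (var c in criteria)
                    query = c.Apply(query);
                return query.ToList();
            }
        }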

    Read the article

  • Learning project: custom C# CMS [closed]

    - by user313378
    I want to start a new project, a custom CMS, because I think it's a good starting point to apply my collected knowledge of C#, DDD, NHibernate, MVC3 and JS. It would be great to hear some guidelines from experienced users here. I will use C# ASP.NET MVC3 with the Razor view engine. I was also thinking of the NHibernate ORM; I don't know whether using NHibernate will hurt performance. Initially MSSQL 2008 will be used, but using an ORM layer means I can switch to some other database with no pain. I was thinking of creating a News entity which will have the properties Id, Name, Created, Updated, IntroText, Content, Title, Author, ListPhotos. Every input will be validated with unobtrusive JavaScript in the view, and it will be validated at the DB level as well. Maybe the best approach is to create some interface which will be implemented by my CMS client entities, like NewsEntity. Everything that might be requested by my client in future would be included in this interface; at least some stuff which is not in the entity right now, such as data consumed via RSS feed, WCF, etc. So basically, share anything you think is a good idea, from documenting the project to coding. Everyone is welcome to brainstorm about the custom CMS.
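    For illustration only, a minimal C# sketch of that interface idea, with invented names (ISyndicatable is not an existing API): the entity implements a small contract that future consumers such as an RSS feed or a WCF service can rely on, so adding a delivery channel doesn't force changes to News itself.

        using System;
        using System.Collections.Generic;

        public interface ISyndicatable
        {
            int Id { get; }
            string Title { get; }
            string Content { get; }
            DateTime Created { get; }
        }

        public class News : ISyndicatable
        {
            // Members are virtual so NHibernate can generate lazy-loading proxies.
            public virtual int Id { get; protected set; }
            public virtual string Name { get; set; }
            public virtual string Title { get; set; }
            public virtual string IntroText { get; set; }
            public virtual string Content { get; set; }
            public virtual string Author { get; set; }
            public virtual DateTime Created { get; set; }
            public virtual DateTime Updated { get; set; }
            public virtual IList<string> ListPhotos { get; set; }
        }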

    Read the article

  • JPQL: unknown state or association field (EclipseLink)

    - by Kawu
    I have an Employee entity which inherits from Person and OrganizationalUnit: OrganizationalUnit: @MappedSuperclass public abstract class OrganizationalUnit implements Serializable { @Id private Long id; @Basic( optional = false ) private String name; public Long getId() { return this.id; } public void setId( Long id ) { this.id = id; } public String getName() { return this.name; } public void setName( String name ) { this.name = name; } // ... } Person: @MappedSuperclass public abstract class Person extends OrganizationalUnit { private String lastName; private String firstName; public String getLastName() { return this.lastName; } public void setLastName( String lastName ) { this.lastName = lastName; } public String getFirstName() { return this.firstName; } public void setFirstName( String firstName ) { this.firstName = firstName; } /** * Returns names of the form "John Doe". */ @Override public String getName() { return this.firstName + " " + this.lastName; } @Override public void setName( String name ) { throw new UnsupportedOperationException( "Name cannot be set explicitly!" ); } /** * Returns names of the form "Doe, John". */ public String getFormalName() { return this.lastName + ", " + this.firstName; } // ... } Employee entity: @Entity @Table( name = "EMPLOYEES" ) @AttributeOverrides ( { @AttributeOverride( name = "id", column = @Column( name = "EMPLOYEE_ID" ) ), @AttributeOverride( name = "name", column = @Column( name = "LASTNAME", insertable = false, updatable = false ) ), @AttributeOverride( name = "firstName", column = @Column( name = "FIRSTNAME" ) ), @AttributeOverride( name = "lastName", column = @Column( name = "LASTNAME" ) ), } ) @NamedQueries ( { @NamedQuery( name = "Employee.FIND_BY_FORMAL_NAME", query = "SELECT emp " + "FROM Employee emp " + "WHERE emp.formalName = :formalName" ) } ) public class Employee extends Person { @Column( name = "EMPLOYEE_NO" ) private String nbr; // lots of other stuff... } I then attempted to find an employee by its formal name, e.g. "Doe, John" using the query above: SELECT emp FROM Employee emp WHERE emp.formalName = :formalName However, this gives me an exception on deploying to EclipseLink: Exception while preparing the app : Exception [EclipseLink-8030] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.JPQLException Exception Description: Error compiling the query [Employee.FIND_BY_CLIENT_AND_FORMAL_NAME: SELECT emp FROM Employee emp JOIN FETCH emp.client JOIN FETCH emp.unit WHERE emp.client.id = :clientId AND emp.formalName = :formalName], line 1, column 115: unknown state or association field [formalName] of class [de.bnext.core.common.entity.Employee]. Local Exception Stack: Exception [EclipseLink-8030] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.JPQLException Exception Description: Error compiling the query [Employee.FIND_BY_CLIENT_AND_FORMAL_NAME: SELECT emp FROM Employee emp JOIN FETCH emp.client JOIN FETCH emp.unit WHERE emp.client.id = :clientId AND emp.formalName = :formalName], line 1, column 115: unknown state or association field [formalName] of class [de.bnext.core.common.entity.Employee]. Qs: What's wrong? Is it prohibited to use "artificial" properties in JPQL, here the WHERE clause? What are the premises here? I checked the capitalization and spelling many times, I'm out of luck.

    Read the article

  • Insert a doctype into an XML document (Java/SAX)

    - by Thom Nichols
    Imagine you have an XML document and imagine you have the DTD but the document itself doesn't actually specify a DOCTYPE ... How would you insert the DOCTYPE declaration, preferably by specifying it on the parser (similar to how you can set the schema for a document that will be parsed) or by inserting the necessary SAX events via an XMLFilter or the like? I've found many references to EntityResolver, but that is what's invoked once a DOCTYPE is found during parsing and it's used to point to a local DTD file. EntityResolver2 appears to have what I'm looking for but I haven't found any examples of usage. This is the closest I've come thus far: (code is Groovy, but close enough that you should be able to understand it...) import org.xml.sax.* import org.xml.sax.ext.* import org.xml.sax.helpers.* class XmlFilter extends XMLFilterImpl { public XmlFilter( XMLReader reader ) { super(reader) } @Override public void startDocument() { super.startDocument() super.resolveEntity( null, 'file:///./entity.dtd') println "filter startDocument" } } class MyHandler extends DefaultHandler2 { public InputSource resolveEntity(String name, String publicId, String baseURI, String systemId) { println "entity: $name, $publicId, $baseURI, $systemId" return new InputSource(new StringReader('<!ENTITY asdf "&#161;">')) } } def handler = new MyHandler() def parser = XMLReaderFactory.createXMLReader() parser.setFeature 'http://xml.org/sax/features/use-entity-resolver2', true def filter = new XmlFilter( parser ) filter.setContentHandler( handler ) filter.setEntityResolver( handler ) filter.parse( new InputSource(new StringReader('''<?xml version="1.0" ?> <test>one &asdf; two! &nbsp; &iexcl;&pound;&cent;</test>''')) ); I see resolveEntity called but still hit org.xml.sax.SAXParseException: The entity "asdf" was referenced, but not declared. at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1231) at org.xml.sax.helpers.XMLFilterImpl.parse(XMLFilterImpl.java:333) I guess this is because there's no way to add SAX events that the parser knows about, I can only add events via a filter that's upstream from the parser which are passed along to the ContentHandler. So the document has to be valid going into the XMLReader. Any way around this? I know I can modify the raw stream to add a doctype or possibly do a transform to set a DTD... Any other options?

    Read the article

  • Passing a ManagedObjectContext to a second view

    - by amo
    I'm writing my first iPhone/Cocoa app. It has two table views inside a navigation view. When you touch a row in the first table view, you are taken to the second table view. I would like the second view to display records from the CoreData entities related to the row you touched in the first view. I have the CoreData data showing up fine in the first table view. You can touch a row and go to the second table view. I'm able to pass info from the selected object from the first to the second view. But I cannot get the second view to do its own CoreData fetching. For the life of me I cannot get the managedObjectContext object to pass to the second view controller. I don't want to do the lookups in the first view and pass a dictionary because I want to be able to use a search field to refine results in the second view, as well as insert new entries to the CoreData data from there. Here's the function that transitions from the first to the second view. - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // Navigation logic may go here -- for example, create and push another view controller. NSManagedObject *selectedObject = [[self fetchedResultsController] objectAtIndexPath:indexPath]; SecondViewController *secondViewController = [[SecondViewController alloc] initWithNibName:@"SecondView" bundle:nil]; secondViewController.tName = [[selectedObject valueForKey:@"name"] description]; secondViewController.managedObjectContext = [self managedObjectContext]; [self.navigationController pushViewController:secondViewController animated:YES]; [secondViewController release]; } And this is the function inside SecondViewController that crashes: - (void)viewDidLoad { [super viewDidLoad]; self.title = tName; NSError *error; if (![[self fetchedResultsController] performFetch:&error]) { // <-- crashes here // Handle the error... } } - (NSFetchedResultsController *)fetchedResultsController { if (fetchedResultsController != nil) { return fetchedResultsController; } /* Set up the fetched results controller. */ // Create the fetch request for the entity. NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; // Edit the entity name as appropriate. // **** crashes on the next line because managedObjectContext == 0x0 NSEntityDescription *entity = [NSEntityDescription entityForName:@"SecondEntity" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // <snip> ... more code here from Apple template, never gets executed because of the crashing return fetchedResultsController; } Any ideas on what I am doing wrong here? managedObjectContext is a retained property. UPDATE: I inserted a NSLog([[managedObjectContext registeredObjects] description]); in viewDidLoad and it appears managedObjectContext is being passed just fine. Still crashing, though. Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '+entityForName: could not locate an NSManagedObjectModel for entity name 'SecondEntity''

    Read the article

  • Filtering a collection based on filtering rules

    - by Mike
    I have an observable collection of Entities, with each entity having a status added, deleted, modified and cancelled. I have four buttons (toggle) when clicked should filter my collection as below: If I select the button Added, then my collection should contain entities with status added. If I select the button Deleted and Added, then my collection should contain entities with status Deleted AND entities with status Added, none of the rest. If I select the button Deleted,Added and Modified, then my collection should contain entities with status Deleted, Added AND Modified. . . so on. If I unselect one of the buttons, it should remove those entities from the collection with that status. For example if I unselect Deleted, but select Added and Modified, then my collection should contain items with Added and Modified status and NOT Deleted ones. For implementing this I have created a master collection and a filtered collection. The Filter collection gets filtered based on the selections and unselections. The following is my code: private bool _clickedAdded; public bool ClickedAdded { get { return _clickedAdded; } set { _clickedAdded = value; if(!_clickedAdded) FilterAny(typeof(Added)); } } private bool _clickedDeleted; public bool ClickedDeleted { get { return _clickedDeleted; } set { _clickedDeleted = value; if (!_clickedDeleted) FilterAny(typeof(Deleted)); } } private bool _clickedModified; public bool ClickedModified { get { return _clickedModified; } set { _clickedModified = value; if (!_clickedModified) FilterAny(typeof(Modified)); } } private void FilterAny(Type status) { Func<Entity, bool> predicate = entity => entity.Status.GetType() != status; var filteredItems = MasterEntites.Where(predicate); FilteredEntities = new ObservableCollection<Entity>(filteredItems); } This however breaks the above rules - for example if I have all selected, and then I remove Added followed by deleted then it still shows the list of Added, Modified and Cancelled. It should be just Modified and Cancelled in the filtered collection. Can you please help me in solving this issue? Also do I need 2 different list to solve this. Please note that I'm using .NET 3.5.

    Read the article

  • org.hibernate.TransientObjectException during Criteria.list()

    - by rancidfishbreath
    I have seen posts all over the internet that talk about how to fix the TransientObjectExceptions during save/update/delete but I am having this problem when calling list on my Criteria. I have two objects A and B. A has a field named b which is of type B. In my mapping b is mapped as a many-to-one. This all runs in a larger persistence framework (the framework is kind of like Core Data) and so I don't use any cascades in my hibernate mappings since cascades are handled at a higher level. This is the interesting code surrounding my criteria: A a = new A(); B b = new B(); a.setB(b); session.save("B", b); // Actually handled by the higher level session.save("A", a); // framework, this is just for clarity // transaction committed and session closed ... // new session opened Criteria criteria = session.createCriteria(A.class); criteria.add(Restrictions.eq("b", b)); List<?> objects = criteria.list(); Basically I am looking for all objects of type A such that A.b equals a particular instance of b (I actually tried restructuring a query so that I was passing in the id of b just to make sure that b wasn't causing me problems). Here is the stack trace that occurs when I call criteria.list(): org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: B at org.hibernate.engine.ForeignKeys.getEntityIdentifierIfNotUnsaved(ForeignKeys.java:244) at org.hibernate.type.EntityType.getIdentifier(EntityType.java:449) at org.hibernate.type.ManyToOneType.nullSafeSet(ManyToOneType.java:141) at org.hibernate.loader.Loader.bindPositionalParameters(Loader.java:1769) at org.hibernate.loader.Loader.bindParameterValues(Loader.java:1740) at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1612) at org.hibernate.loader.Loader.doQuery(Loader.java:717) at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:270) at org.hibernate.loader.Loader.doList(Loader.java:2294) at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2172) at org.hibernate.loader.Loader.list(Loader.java:2167) at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:119) at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1706) at org.hibernate.impl.CriteriaImpl.list(CriteriaImpl.java:347) Here is my mapping: <class entity-name="A" lazy="false"> <tuplizer entity-mode="dynamic-map" class="MyTuplizer" /> <id type="long" column="id"> <generator class="native" /> </id> <many-to-one name="b" entity-name="B" column="b_id" lazy="false" /> </class> <class entity-name="B" lazy="false"> <tuplizer entity-mode="dynamic-map" class="MyTuplizer" /> <id type="long" column="id"> <generator class="native" /> </id> </class> Can anyone help me figure out why I would be getting a TransientObjectException during a fetch? Preferably I would like to find a solution that does not rely on cascades since they tend to mask problems that occur in the higher level framework.

    Read the article

  • Different behaviour using unidirectional or bidirectional relation

    - by sinuhepop
    I want to persist a mail entity which has some resources (inline or attachment). First I related them as a bidirectional relation: @Entity public class Mail extends BaseEntity { @OneToMany(mappedBy = "mail", cascade = CascadeType.ALL, orphanRemoval = true) private List<MailResource> resource; private String receiver; private String subject; private String body; @Temporal(TemporalType.TIMESTAMP) private Date queued; @Temporal(TemporalType.TIMESTAMP) private Date sent; public Mail(String receiver, String subject, String body) { this.receiver = receiver; this.subject = subject; this.body = body; this.queued = new Date(); this.resource = new ArrayList<>(); } public void addResource(String name, MailResourceType type, byte[] content) { resource.add(new MailResource(this, name, type, content)); } } @Entity public class MailResource extends BaseEntity { @ManyToOne(optional = false) private Mail mail; private String name; private MailResourceType type; private byte[] content; } And when I saved them: Mail mail = new Mail("[email protected]", "Hi!", "..."); mail.addResource("image", MailResourceType.INLINE, someBytes); mail.addResource("documentation.pdf", MailResourceType.ATTACHMENT, someOtherBytes); mailRepository.save(mail); Three inserts were executed: INSERT INTO MAIL (ID, BODY, QUEUED, RECEIVER, SENT, SUBJECT) VALUES (?, ?, ?, ?, ?, ?) INSERT INTO MAILRESOURCE (ID, CONTENT, NAME, TYPE, MAIL_ID) VALUES (?, ?, ?, ?, ?) INSERT INTO MAILRESOURCE (ID, CONTENT, NAME, TYPE, MAIL_ID) VALUES (?, ?, ?, ?, ?) Then I thought it would be better using only a OneToMany relation. No need to save which Mail is in every MailResource: @Entity public class Mail extends BaseEntity { @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true) @JoinColumn(name = "mail_id") private List<MailResource> resource; ... public void addResource(String name, MailResourceType type, byte[] content) { resource.add(new MailResource(name, type, content)); } } @Entity public class MailResource extends BaseEntity { private String name; private MailResourceType type; private byte[] content; } Generated tables are exactly the same (MailResource has a FK to Mail). The problem is the executed SQL: INSERT INTO MAIL (ID, BODY, QUEUED, RECEIVER, SENT, SUBJECT) VALUES (?, ?, ?, ?, ?, ?) INSERT INTO MAILRESOURCE (ID, CONTENT, NAME, TYPE) VALUES (?, ?, ?, ?) INSERT INTO MAILRESOURCE (ID, CONTENT, NAME, TYPE) VALUES (?, ?, ?, ?) UPDATE MAILRESOURCE SET mail_id = ? WHERE (ID = ?) UPDATE MAILRESOURCE SET mail_id = ? WHERE (ID = ?) Why this two updates? I'm using EclipseLink, will this behaviour be the same using another JPA provider as Hibernate? Which solution is better?

    Read the article

  • LINQ to XML query returning a list of strings

    - by objectsbinding
    Hi all, I have the following class public class CountrySpecificPIIEntity { public string Country { get; set; } public string CreditCardType { get; set; } public String Language { get; set; } public List<String> PIIData { get; set; } } I have the following XML that I want to query using Linq-to-xml and shape in to the class above <?xml version="1.0" encoding="utf-8" ?> <piisettings> <piifilter country="DE" creditcardype="37" language="ALL" > <filters> <filter>FIRSTNAME</filter> <filter>SURNAME</filter> <filter>STREET</filter> <filter>ADDITIONALADDRESSINFO</filter> <filter>ZIP</filter> <filter>CITY</filter> <filter>STATE</filter> <filter>EMAIL</filter> </filters> </piifilter> <piifilter country="DE" creditcardype="37" Language="en" > <filters> <filter>EMAIL</filter> </filters> </piifilter> </piisettings> I'm using the following query but I'm having trouble with the last attribute i.e. PIIList. var query = from pii in xmlDoc.Descendants("piifilter") select new CountrySpecificPIIEntity { Country = pii.Attribut("country").Value, CreditCardType = pii.Attribute("creditcardype").Value, Language = pii.Attribute("Language").Value, PIIList = (List<string>)pii.Elements("filters") }; foreach (var entity in query) { Debug.Write(entity.Country); Debug.Write(entity.CreditCardType); Debug.Write(entity.Language); Debug.Write(entity.PIIList); } How Can I get the query to return a List to be hydrated in to the PIIList property.
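    For reference, a sketch of a projection that compiles and fills the list. Note three pitfalls in the excerpt: the class declares PIIData while the query assigns PIIList, Attribute is misspelled as Attribut, and the sample XML uses both language and Language as the attribute name, so both casings are probed here:

        // Assumes the question's context: using System.Linq; using System.Xml.Linq;
        // and xmlDoc already loaded.
        var query =
            from pii in xmlDoc.Descendants("piifilter")
            select new CountrySpecificPIIEntity
            {
                Country = (string)pii.Attribute("country"),
                CreditCardType = (string)pii.Attribute("creditcardype"),
                Language = (string)pii.Attribute("language")
                           ?? (string)pii.Attribute("Language"),
                // Descend from <filters> into its <filter> children and
                // materialize each element's text content as a string.
                PIIData = pii.Element("filters")
                             .Elements("filter")
                             .Select(f => (string)f)
                             .ToList()
            };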

    Read the article

  • JPA Database structure for internationalisation

    - by IrishDubGuy
    I am trying to get a JPA implementation of a simple approach to internationalisation. I want to have a table of translated strings that I can reference in multiple fields in multiple tables. So all text occurrences in all tables will be replaced by a reference to the translated-strings table. In combination with a language id, this gives a unique row in the translated-strings table for that particular field. For example, consider a schema that has entities Course and Module as follows: Course: int course_id, int name, int description. Module: int module_id, int name. The course.name, course.description and module.name fields all reference the id field of the translated-strings table: TranslatedString: int id, String lang, String content. That all seems simple enough: I get one table for all strings that could be internationalised, and that table is used across all the other tables. How might I do this in JPA, using EclipseLink 2.4? I've looked at an embedded ElementCollection, à la "JPA 2.0: Mapping a Map" - it isn't exactly what I'm after, because it looks like it relates the translated-strings table to the PK of the owning table. This means I can only have one translatable string field per entity (unless I add new join columns into the translatable-strings table, which defeats the point; it's the opposite of what I am trying to do). I'm also not clear on how this would work across entities; presumably the id of each entity would have to use a database-wide sequence to ensure uniqueness in the translatable-strings table. BTW, I tried the example as laid out in that link and it didn't work for me: as soon as the entity had a localizedString map added, persisting it caused the client side to bomb, with no obvious error on the server side and nothing persisted in the DB :S. I've been around the houses on this for about 9 hours so far. I've looked at "Internationalization with Hibernate", which appears to be trying to do the same thing as the link above (without the table definitions it's hard to see what he achieved). Any help would be gratefully received at this point... Edit 1 - re AMS's answer below: I'm not sure that really addresses the issue. In his example he leaves the storing of the description text to some other process. The idea of this type of approach is that the entity object takes the text and locale and this (somehow!) ends up in the translatable-strings table. In the first link I gave, the guy is attempting to do this by using an embedded map, which I feel is the right approach. His way, though, has two issues: one, it doesn't seem to work! And two, if it did work, it would store the FK in the embedded table instead of the other way round (I think; I can't get it to run, so I can't see exactly how it persists). I suspect the correct approach ends up with a map reference in place of each text that needs translating (the map being locale-to-content), but I can't see how to do this in a way that allows for multiple maps in one entity (without having corresponding multiple columns in the translatable-strings table)...

    Read the article

  • AuthnRequest Settings in OIF / SP

    - by Damien Carru
    In this article, I will list the various OIF/SP settings that affect how an AuthnRequest message is created in OIF in a Federation SSO flow. The AuthnRequest message is used by an SP to start a Federation SSO operation and to indicate to the IdP how the operation should be executed: How the user should be challenged at the IdP Whether or not the user should be challenged at the IdP, even if a session already exists at the IdP for this user Which NameID format should be requested in the SAML Assertion Which binding (Artifact or HTTP-POST) should be requested from the IdP to send the Assertion Which profile should be used by OIF/SP to send the AuthnRequest message Enjoy the reading! Protocols The SAML 2.0, SAML 1.1 and OpenID 2.0 protocols define different message elements and rules that allow an administrator to influence the Federation SSO flows in different manners, when the SP triggers an SSO operation: SAML 2.0 allows extensive customization via the AuthnRequest message SAML 1.1 does not allow any customization, since the specifications do not define an authentication request message OpenID 2.0 allows for some customization, mainly via the OpenID 2.0 extensions such as PAPE or UI SAML 2.0 OIF/SP allows the customization of the SAML 2.0 AuthnRequest message for the following elements: ForceAuthn: Boolean indicating whether or not the IdP should force the user for re-authentication, even if the user has still a valid session By default set to false IsPassive Boolean indicating whether or not the IdP is allowed to interact with the user as part of the Federation SSO operation. If false, the Federation SSO operation might result in a failure with the NoPassive error code, because the IdP will not have been able to identify the user By default set to false RequestedAuthnContext Element indicating how the user should be challenged at the IdP If the SP requests a Federation Authentication Method unknown to the IdP or for which the IdP is not configured, then the Federation SSO flow will result in a failure with the NoAuthnContext error code By default missing NameIDPolicy Element indicating which NameID format the IdP should include in the SAML Assertion If the SP requests a NameID format unknown to the IdP or for which the IdP is not configured, then the Federation SSO flow will result in a failure with the InvalidNameIDPolicy error code If missing, the IdP will generally use the default NameID format configured for this SP partner at the IdP By default missing ProtocolBinding Element indicating which SAML binding should be used by the IdP to redirect the user to the SP with the SAML Assertion Set to Artifact or HTTP-POST By default set to HTTP-POST OIF/SP also allows the administrator to configure the server to: Set which binding should be used by OIF/SP to redirect the user to the IdP with the SAML 2.0 AuthnRequest message: Redirect or HTTP-POST By default set to Redirect Set which binding should be used by OIF/SP to redirect the user to the IdP during logout with SAML 2.0 Logout messages: Redirect or HTTP-POST By default set to Redirect SAML 1.1 The SAML 1.1 specifications do not define a message for the SP to send to the IdP when a Federation SSO operation is started. As such, there is no capability to configure OIF/SP on how to affect the start of the Federation SSO flow. 
OpenID 2.0 OpenID 2.0 defines several extensions that can be used by the SP/RP to affect how the Federation SSO operation will take place: OpenID request: mode: String indicating if the IdP/OP can visually interact with the user checkid_immediate does not allow the IdP/OP to interact with the user checkid_setup allows user interaction By default set to checkid_setup PAPE Extension: max_auth_age : Integer indicating in seconds the maximum amount of time since when the user authenticated at the IdP. If MaxAuthnAge is bigger that the time since when the user last authenticated at the IdP, then the user must be re-challenged. OIF/SP will set this attribute to 0 if the administrator configured ForceAuthn to true, otherwise this attribute won't be set Default missing preferred_auth_policies Contains a Federation Authentication Method Element indicating how the user should be challenged at the IdP By default missing Only specified in the OpenID request if the IdP/OP supports PAPE in XRDS, if OpenID discovery is used. UI Extension Popup mode Boolean indicating the popup mode is enabled for the Federation SSO By default missing Language Preference String containing the preferred language, set based on the browser's language preferences. By default missing Icon: Boolean indicating if the icon feature is enabled. In that case, the IdP/OP would look at the SP/RP XRDS to determine how to retrieve the icon By default missing Only specified in the OpenID request if the IdP/OP supports UI Extenstion in XRDS, if OpenID discovery is used. ForceAuthn and IsPassive WLST Command OIF/SP provides the WLST configureIdPAuthnRequest() command to set: ForceAuthn as a boolean: In a SAML 2.0 AuthnRequest, the ForceAuthn field will be set to true or false In an OpenID 2.0 request, if ForceAuthn in the configuration was set to true, then the max_auth_age field of the PAPE request will be set to 0, otherwise, max_auth_age won't be set IsPassive as a boolean: In a SAML 2.0 AuthnRequest, the IsPassive field will be set to true or false In an OpenID 2.0 request, if IsPassive in the configuration was set to true, then the mode field of the OpenID request will be set to checkid_immediate, otherwise set to checkid_setup Test In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration. 
Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>   <samlp:NameIDPolicy AllowCreate="true"/></samlp:AuthnRequest> Let's configure OIF/SP for that IdP Partner, so that the SP will require the IdP to re-challenge the user, even if the user is already authenticated: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the configureIdPAuthnRequest() command:configureIdPAuthnRequest(partner="AcmeIdP", forceAuthn="true") Exit the WLST environment:exit() After the changes, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ForceAuthn="true" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>   <samlp:NameIDPolicy AllowCreate="true"/></samlp:AuthnRequest> To display or delete the ForceAuthn/IsPassive settings, perform the following operatons: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the configureIdPAuthnRequest() command: To display the ForceAuthn/IsPassive settings on the partnerconfigureIdPAuthnRequest(partner="AcmeIdP", displayOnly="true") To delete the ForceAuthn/IsPassive settings from the partnerconfigureIdPAuthnRequest(partner="AcmeIdP", delete="true") Exit the WLST environment:exit() Requested Fed Authn Method In my earlier "Fed Authentication Method Requests in OIF / SP" article, I discussed how OIF/SP could be configured to request a specific Federation Authentication Method from the IdP when starting a Federation SSO operation, by setting elements in the SSO request message. 
WLST Command The OIF WLST commands that can be used are: setIdPPartnerProfileRequestAuthnMethod() which will configure the requested Federation Authentication Method in a specific IdP Partner Profile, and accepts the following parameters: partnerProfile: name of the IdP Partner Profile authnMethod: the Federation Authentication Method to request displayOnly: an optional parameter indicating if the method should display the current requested Federation Authentication Method instead of setting it delete: an optional parameter indicating if the method should delete the current requested Federation Authentication Method instead of setting it setIdPPartnerRequestAuthnMethod() which will configure the specified IdP Partner entry with the requested Federation Authentication Method, and accepts the following parameters: partner: name of the IdP Partner authnMethod: the Federation Authentication Method to request displayOnly: an optional parameter indicating if the method should display the current requested Federation Authentication Method instead of setting it delete: an optional parameter indicating if the method should delete the current requested Federation Authentication Method instead of setting it This applies to SAML 2.0 and OpenID 2.0 protocols. See the "Fed Authentication Method Requests in OIF / SP" article for more information. Test In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration. Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>   <samlp:NameIDPolicy AllowCreate="true"/></samlp:AuthnRequest> Let's configure OIF/SP for that IdP Partner, so that the SP will request the IdP to use a mechanism mapped to the urn:oasis:names:tc:SAML:2.0:ac:classes:X509 Federation Authentication Method to authenticate the user: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setIdPPartnerRequestAuthnMethod() command:setIdPPartnerRequestAuthnMethod("AcmeIdP", "urn:oasis:names:tc:SAML:2.0:ac:classes:X509") Exit the WLST environment:exit() After the changes, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>   <samlp:NameIDPolicy AllowCreate="true"/>   <samlp:RequestedAuthnContext Comparison="minimum">      <saml:AuthnContextClassRef xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">         urn:oasis:names:tc:SAML:2.0:ac:classes:X509      </saml:AuthnContextClassRef>   </samlp:RequestedAuthnContext></samlp:AuthnRequest> NameID Format The SAML 2.0 protocol allows for the SP to request from the IdP a specific NameID format to be used when the Assertion is issued by the IdP. 
Note: SAML 1.1 and OpenID 2.0 do not provide such a mechanism Configuring OIF The administrator can configure OIF/SP to request a NameID format in the SAML 2.0 AuthnRequest via: The OAM Administration Console, in the IdP Partner entry The OIF WLST setIdPPartnerNameIDFormat() command that will modify the IdP Partner configuration OAM Administration Console To configure the requested NameID format via the OAM Administration Console, perform the following steps: Go to the OAM Administration Console: http(s)://oam-admin-host:oam-admin-port/oamconsole Navigate to Identity Federation -> Service Provider Administration Open the IdP Partner you wish to modify In the Authentication Request NameID Format dropdown box with one of the values None The NameID format will be set Default Email Address The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress X.509 Subject The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName Windows Name Qualifier The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:WindowsDomainQualifiedName Kerberos The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:kerberos Transient The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:transient Unspecified The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified Custom In this case, a field would appear allowing the administrator to indicate the custom NameID format to use The NameID format will be set to the specified format Persistent The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:persistent I selected Email Address in this example Save WLST Command To configure the requested NameID format via the OIF WLST setIdPPartnerNameIDFormat() command, perform the following steps: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setIdPPartnerNameIDFormat() command:setIdPPartnerNameIDFormat("PARTNER", "FORMAT", customFormat="CUSTOM") Replace PARTNER with the IdP Partner name Replace FORMAT with one of the following: orafed-none The NameID format will be set Default orafed-emailaddress The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress orafed-x509 The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName orafed-windowsnamequalifier The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:WindowsDomainQualifiedName orafed-kerberos The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:kerberos orafed-transient The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:transient orafed-unspecified The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified orafed-custom In this case, a field would appear allowing the administrator to indicate the custom NameID format to use The NameID format will be set to the specified format orafed-persistent The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:persistent customFormat will need to be set if the FORMAT is set to orafed-custom An example would be:setIdPPartnerNameIDFormat("AcmeIdP", "orafed-emailaddress") Exit the WLST environment:exit() Test In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration. 
Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer> <samlp:NameIDPolicy AllowCreate="true"/></samlp:AuthnRequest> After the changes performed either via the OAM Administration Console or via the OIF WLST setIdPPartnerNameIDFormat() command where Email Address would be requested as the NameID Format, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ForceAuthn="false" IsPassive="false" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer> <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress" AllowCreate="true"/></samlp:AuthnRequest> Protocol Binding The SAML 2.0 specifications define a way for the SP to request which binding should be used by the IdP to redirect the user to the SP with the SAML 2.0 Assertion: the ProtocolBinding attribute indicates the binding the IdP should use. It is set to: Either urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST for HTTP-POST Or urn:oasis:names:tc:SAML:2.0:bindings:Artifact for Artifact The SAML 2.0 specifications also define different ways to redirect the user from the SP to the IdP with the SAML 2.0 AuthnRequest message, as the SP can send the message: Either via HTTP Redirect Or HTTP POST (Other bindings can theoretically be used such as Artifact, but these are not used in practice) Configuring OIF OIF can be configured: Via the OAM Administration Console or the OIF WLST configureSAMLBinding() command to set the Assertion Response binding to be used Via the OIF WLST configureSAMLBinding() command to indicate how the SAML AuthnRequest message should be sent Note: the binding for sending the SAML 2.0 AuthnRequest message will also be used to send the SAML 2.0 LogoutRequest and LogoutResponse messages. 
OAM Administration Console To configure the SSO Response/Assertion Binding via the OAM Administration Console, perform the following steps: Go to the OAM Administration Console: http(s)://oam-admin-host:oam-admin-port/oamconsole Navigate to Identity Federation -> Service Provider Administration Open the IdP Partner you wish to modify Check the "HTTP POST SSO Response Binding" box to request the IdP to return the SSO Response via HTTP POST, otherwise uncheck it to request artifact Save WLST Command To configure the SSO Response/Assertion Binding as well as the AuthnRequest Binding via the OIF WLST configureSAMLBinding() command, perform the following steps: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the configureSAMLBinding() command:configureSAMLBinding("PARTNER", "PARTNER_TYPE", binding, ssoResponseBinding="httppost") Replace PARTNER with the Partner name Replace PARTNER_TYPE with the Partner type (idp or sp) Replace binding with the binding to be used to send the AuthnRequest and LogoutRequest/LogoutResponse messages (should be httpredirect in most case; default) httppost for HTTP-POST binding httpredirect for HTTP-Redirect binding Specify optionally ssoResponseBinding to indicate how the SSO Assertion should be sent back httppost for HTTP-POST binding artifactfor for Artifact binding An example would be:configureSAMLBinding("AcmeIdP", "idp", "httpredirect", ssoResponseBinding="httppost") Exit the WLST environment:exit() Test In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration which requests HTTP-POST from the IdP to send the SSO Assertion. Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>   <samlp:NameIDPolicy AllowCreate="true"/></samlp:AuthnRequest> In the next article, I will cover the various crypto configuration properties in OIF that are used to affect the Federation SSO exchanges.Cheers,Damien Carru

    Read the article

  • A basic T4 template for generating Model Metadata in ASP.NET MVC2

    - by rajbk
    I have been learning about T4 templates recently by looking at the awesome ADO.NET POCO entity generator. By using the POCO entity generator template as a base, I created a T4 template which generates metadata classes for a given Entity Data Model. This speeds coding by reducing the amount of typing required when creating view specific model and its metadata. To use this template, Download the template provided at the bottom. Set two values in the template file. The first one should point to the EDM you wish to generate metadata for. The second is used to suffix the namespace and classes that get generated. string inputFile = @"Northwind.edmx"; string suffix = "AutoMetadata"; Add the template to your MVC 2 Visual Studio 2010 project. Once you add it, a number of classes will get added to your project based on the number of entities you have.    One of these classes is shown below. Note that the DisplayName, Required and StringLength attributes have been added by the t4 template. //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------   using System; using System.ComponentModel; using System.ComponentModel.DataAnnotations;   namespace NorthwindSales.ModelsAutoMetadata { public partial class CustomerAutoMetadata { [DisplayName("Customer ID")] [Required] [StringLength(5)] public string CustomerID { get; set; } [DisplayName("Company Name")] [Required] [StringLength(40)] public string CompanyName { get; set; } [DisplayName("Contact Name")] [StringLength(30)] public string ContactName { get; set; } [DisplayName("Contact Title")] [StringLength(30)] public string ContactTitle { get; set; } [DisplayName("Address")] [StringLength(60)] public string Address { get; set; } [DisplayName("City")] [StringLength(15)] public string City { get; set; } [DisplayName("Region")] [StringLength(15)] public string Region { get; set; } [DisplayName("Postal Code")] [StringLength(10)] public string PostalCode { get; set; } [DisplayName("Country")] [StringLength(15)] public string Country { get; set; } [DisplayName("Phone")] [StringLength(24)] public string Phone { get; set; } [DisplayName("Fax")] [StringLength(24)] public string Fax { get; set; } } } The gen’d class can be used from your project by creating a partial class with the entity name and setting the MetadataType attribute.namespace MyProject.Models{ [MetadataType(typeof(CustomerAutoMetadata))] public partial class Customer { }} You can also copy the code in the metadata class generated and create your own ViewModel class. Note that the template is super basic  and does not take into account complex properties. I have tested it with the Northwind database. This is a work in progress. Feel free to modify the template to suite your requirements. Standard disclaimer follows: Use At Your Own Risk, Works on my machine running VS 2010 RTM/ASP.NET MVC 2 AutoMetaData.zip Mr. Incredible: Of course I have a secret identity. I don't know a single superhero who doesn't. Who wants the pressure of being super all the time?

    Read the article

  • Introducing a new Umbraco datatype for Multi-lingual websites.

    - by Vizioz Limited
    Over the last 6 months we have been building various multi-lingual sites for different clients, and for some of the clients they have 1-to-1 relationships between some or all of their pages.
    Within Umbraco, you can copy a page (or a whole tree of pages) and keep a relationship between each of the pages and their new copy; this allows content editors to subscribe to change notifications that Umbraco can create if one of the linked pages is changed.
    Unfortunately, one thing that is missing in Umbraco is any way to see which pages are related to each other, and a quick and easy way to jump between the related pages.
    We created a datatype that solves these problems and thought we would release it as an open source project (which we are still maintaining).
    Currently you can:
    1) See current relationships
    2) Add relationships
    3) Limit the number of relationships that can be added (by the data type)
    4) See the Country flag (assuming a culture has been set on each of your top level site nodes for each country site)
    5) Link between the documents
    6) Change or delete the links
    An example where multiple languages are allowed: [screenshot in the original post]
    An example where only 2 languages exist (1 relationship): [screenshot in the original post]
    You can download the datatype from the Umbraco project page: Vizioz Relationships for Umbraco
    Please do let us know what you think :)

    Read the article

  • Podcast interview with Michael Kane

    - by mhornick
    In this podcast interview with Michael Kane, Data Scientist and Associate Researcher at Yale University, Michael discusses the R statistical programming language, computational challenges associated with big data, and two data analysis projects he conducted: one on the stock market "flash crash" of May 6, 2010, and one on the tracking of transportation routes of the bird flu H5N1. Michael also worked with Oracle on Oracle R Enterprise, a component of the Advanced Analytics option to Oracle Database Enterprise Edition. In the closing segment of the interview, Michael comments on the relationship between the data analyst and the database administrator and how Oracle R Enterprise provides secure data management, transparent access to data, and improved performance to facilitate this relationship. Listen now...

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound with an ASP.NET web site map file? Ever wanted to have cool CSS design stuff implemented on your ASP.NET data bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound with ASP.NET data controls? If your answers to the above questions are 'yes', then you will probably be happy to try out the latest release (v5.0) of Employee Info Starter Kit, which is intended to address different types of real world challenges faced by web application developers when performing common CRUD operations. Using a single database table 'Employee', the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context.
    Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the concept of the 'Pareto Principle' or 80-20 rule, in that it is targeted to enable a web developer to gain 80% productivity with 20% of the effort with respect to learning curve and production. This project template is titled "Employee Info Starter Kit"; it was initially hosted on Microsoft Code Gallery and has been downloaded 150,000+ times since. The latest version of this starter kit is hosted on CodePlex.
    Release Highlights
    User End Functional Specification
    The user end functionalities of this starter kit are pretty simple and straightforward, focused on performing CRUD operations on employee records as listed below (a sketch of the corresponding data access code follows at the end of this entry):
    - Creating a new employee record
    - Reading existing employee records
    - Updating an existing employee record
    - Deleting existing employee records
    Architectural Overview
    - Simple 3 layer architecture (presentation, business logic and data access layer)
    - ASP.NET web form based user interface
    - Built-in code generators for logical layers, implemented in the Visual Studio default template engine (T4)
    - Built-in Entity Framework entities as business entities (aka data containers)
    - Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework
    - Domain Model design pattern based Business Logic Layer, implemented in C#
    - Object Model for Cross Cutting Concerns (such as validation, logging, exception management)
    Minimum System Requirements
    - Visual Studio 2010 (Web Developer Express Edition) or higher
    - SQL Server 2005 (Express Edition) or higher
    Technology Utilized
    Programming Languages/Scripts
    - Browser side: JavaScript
    - Web server side: C#
    - Code Generation Template: T-4 Template
    Frameworks
    - .NET Framework 4.0
    - JavaScript Framework: jQuery 1.5.1
    - CSS Framework: 960 grid system
    .NET Framework Components
    - .NET Entity Framework
    - .NET Optional/Named Parameters (new in .NET 4.0)
    - .NET Tuple (new in .NET 4.0)
    - .NET Extension Method
    - .NET Lambda Expressions
    - .NET Anonymous Type
    - .NET Query Expressions
    - .NET Automatically Implemented Properties
    - .NET LINQ
    - .NET Partial Classes and Methods
    - .NET Generic Type
    - .NET Nullable Type
    - ASP.NET Meta Description and Keyword Support (new in .NET 4.0)
    - ASP.NET Routing (new in .NET 4.0)
    - ASP.NET Grid View (CSS support for sorting, new in .NET 4.0)
    - ASP.NET Repeater
    - ASP.NET Form View
    - ASP.NET Login View
    - ASP.NET Site Map Path
    - ASP.NET Skin
    - ASP.NET Theme
    - ASP.NET Master Page
    - ASP.NET Object Data Source
    - ASP.NET Role Based Security
    Getting Started Guide
    Seeing Employee Info Starter Kit in action is pretty easy!
    1. Download the latest version.
    2. Extract the file.
    3. From the extracted folder, click the C# project file (Eisk.Web.csproj) to open it in Visual Studio 2010.
    4. Hit Ctrl+F5!
    The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in detail, just check the following links:
    - Release Home Page
    - Installation Walkthrough
    - Hands-on Coding Walkthrough
    - Technical Reference
    Enjoy!
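    To make the Data Mapper based Data Access Layer described above concrete, here is a minimal sketch of what a CRUD mapper over the single Employee table could look like with Entity Framework 4.0. The EmployeeContext, Employees set, Employee entity and EmployeeId property are hypothetical stand-ins for illustration; the names generated in the actual kit may differ.

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        // Minimal Data Mapper sketch for the Employee table over an EF 4.0
        // ObjectContext. All type and member names here are assumptions.
        public class EmployeeDataMapper
        {
            // Create a new employee record.
            public void Create(Employee employee)
            {
                using (var context = new EmployeeContext())
                {
                    context.Employees.AddObject(employee);
                    context.SaveChanges();
                }
            }

            // Read existing employee records.
            public List<Employee> GetAll()
            {
                using (var context = new EmployeeContext())
                {
                    return context.Employees.ToList();
                }
            }

            // Update an existing employee record.
            public void Update(Employee employee)
            {
                using (var context = new EmployeeContext())
                {
                    context.Employees.Attach(employee);
                    context.ObjectStateManager.ChangeObjectState(employee, EntityState.Modified);
                    context.SaveChanges();
                }
            }

            // Delete an existing employee record.
            public void Delete(int employeeId)
            {
                using (var context = new EmployeeContext())
                {
                    var employee = context.Employees.Single(e => e.EmployeeId == employeeId);
                    context.Employees.DeleteObject(employee);
                    context.SaveChanges();
                }
            }
        }

    Each method opens a short-lived context, which keeps the layer stateless and easy to unit test, in line with the "easily tested, customized and bound" goal of the kit.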

    Read the article

  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the Cloud in the Big Data story. In this article we will understand the role of Operational Databases in supporting the Big Data story. Even though we keep on talking about Big Data architecture, it is extremely crucial to understand that a Big Data system can't just exist in isolation. Many needs of the business can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem.
    Real World Example
    Think about it this way: you are using Facebook and you have just updated the information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes among their friends, your change of relationship will also show up in the same list. Now here is the question – do you think Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship change to your family members is also because of the technology used in Big Data? Actually the answer is that Facebook uses MySQL to do the various updates in the timeline as well as the various events we do on their homepage. It is really difficult to part from operational databases in any real world business.
    Now we will see a few examples of operational databases:
    - Relational Databases (this blog post)
    - NoSQL Databases (this blog post)
    - Key-Value Pair Databases (tomorrow's post)
    - Document Databases (tomorrow's post)
    - Columnar Databases (the day after's post)
    - Graph Databases (the day after's post)
    - Spatial Databases (the day after's post)
    Relational Databases
    We have earlier discussed the RDBMS role in the Big Data story in detail, so we will not cover it extensively here. Relational databases are pretty much everywhere in most businesses which have been around for many years. The importance and existence of relational databases are always going to be there as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open source and widely accepted database, I suggest trying MySQL, as that has been very popular in the last few years. I also suggest you try out PostgreSQL as well. Besides many other essential qualities, PostgreSQL has very interesting licensing policies. PostgreSQL licenses allow modifications and distribution of the application in open or closed (source) form. One can make any modifications and keep them private, as well as contribute to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future.
    Nonrelational Databases (NoSQL)
    We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL databases. There are plenty of NoSQL databases out in the market, and selecting the right one is always very challenging. Here are a few of the properties which are very essential to consider when selecting the right NoSQL database for operational purposes:
    - Data and Query Model
    - Persistence of Data and Design
    - Eventual Consistency
    - Scalability
    Though all of the above properties are interesting to have in any NoSQL database, the one which most attracts me is Eventual Consistency.
    Eventual Consistency
    RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as the key mechanism for ensuring data consistency, whereas nonrelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing, which often has to expect the unexpected. In a large distributed system there are always various nodes joining and various nodes being removed, as such systems often run on commodity servers. This happens either intentionally or accidentally. Even though one or more nodes are down, it is expected that the entire system still functions normally. Applications should be able to do various updates as well as retrieval of data successfully without any issue. Additionally, this also means that the system is expected to return the same updated data anytime from all the functioning nodes. Irrespective of when any node joins the system, if it is marked to hold some data it should contain the same updated data eventually. (A toy model illustrating this follows at the end of this entry.)
    As per Wikipedia – eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words – informally, if no additional updates are made to a given data item, all reads to that item will eventually return the same value.
    Tomorrow
    In tomorrow's blog post we will discuss various other operational databases supporting Big Data.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
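    As a toy illustration of the eventual consistency guarantee described above (not tied to any particular NoSQL product; every name below is hypothetical), the following sketch acknowledges a write on a single replica, lets reads against other replicas return stale data, and converges all replicas in a propagation pass:

        using System.Collections.Generic;
        using System.Linq;

        // Toy model of BASE-style eventual consistency. Writes land on one
        // replica, reads may hit any replica, and an anti-entropy pass
        // eventually makes every replica return the last updated value.
        public class EventuallyConsistentStore
        {
            private readonly List<Dictionary<string, string>> _replicas;

            public EventuallyConsistentStore(int replicaCount)
            {
                _replicas = new List<Dictionary<string, string>>();
                for (int i = 0; i < replicaCount; i++)
                    _replicas.Add(new Dictionary<string, string>());
            }

            // Acknowledged as soon as one replica accepts it ("Basically Available").
            public void Write(string key, string value)
            {
                _replicas[0][key] = value;
            }

            // A read routed to a not-yet-updated replica sees stale data ("Soft state").
            public string Read(string key, int replicaIndex)
            {
                string value;
                return _replicas[replicaIndex].TryGetValue(key, out value) ? value : null;
            }

            // If no new updates arrive, one propagation pass makes all replicas
            // agree on the last written value ("Eventual consistency").
            public void Propagate()
            {
                foreach (var replica in _replicas.Skip(1))
                    foreach (var pair in _replicas[0])
                        replica[pair.Key] = pair.Value;
            }
        }

    Right after Write("status", "married"), Read("status", 1) can still return null; once Propagate() has run and no new updates are made, every replica returns "married", which is exactly the informal guarantee quoted from Wikipedia above.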

    Read the article

  • IDC Analyst Report Touts Oracle–Accenture Strategic Initiative

    - by kristin.jellison
    Hi there, partners! Oracle Engineered Systems have been getting some love lately, and we want to share it with you! The market intelligence and advisory firm IDC recently released a report lauding Oracle and Accenture's strategic initiative to bring the performance and flexibility of Oracle Engineered Systems to clients. The report, "Oracle and Accenture Strategic Alliance Places Big Bet on Engineered Systems," by Steve White, reflects a largely positive analysis of the relationship. White notes that the alliance is "one of the largest in the industry." Under the relationship, Accenture has incorporated Oracle Engineered Systems (including Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle SuperCluster, and Oracle Exalytics In-Memory Machine) into its leading datacenter transformation consulting services. Together, the two companies have also created bespoke platforms, such as the Accenture Foundation Platform for Oracle, which helps clients accelerate deployments on Oracle Fusion Middleware, running Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine.
    Oracle Engineered Systems deliver a single, engineered platform, spanning server to storage and networking. This makes it easier and cheaper for Accenture clients around the world to prepare their datacenters for managing, processing and analyzing the massive amounts of data they (rightly) anticipate seeing in the next decade. The new solutions can help reduce the effort and cost to migrate any vendor database to an Oracle Engineered Systems platform, which can lower the cost of ownership by up to 50 percent.
    For its part, Accenture has built a team of 300 consultants to implement and increase the flexibility and stability of client datacenters. This move further expands one of the fastest-growing full-service Oracle Enterprise solutions. Over 52,000 Accenture consultants are qualified to implement, upgrade and outsource the Oracle product suite. Accenture is a Diamond-level member of Oracle PartnerNetwork (OPN).
    For Oracle Partners, this update should give you at least two things to walk away with. First, this initiative is showing signs of success. As Marty Cole, group chief executive for Accenture's Technology growth platform, put it, "We are seeing an increasing number of clients recognizing the value of consolidating their databases and taking advantage of the cost and performance benefits delivered by these solutions." The pipeline is there, and not just for Accenture. Use this example to show your clients that investments in Oracle Engineered Systems are on the rise.
    Second, recognize that Oracle Engineered Systems represent one of the biggest platforms for growth that Oracle has to offer partners. As part of the agreement, Accenture is able to provide:
    - Platform Readiness Assessments
    - Platform Implementation
    - App Rationalization
    - Database Rationalization
    - Managed Services
    These are all enablement opportunities you can offer customers under Oracle's partner programs, to continue building the value of their investments, and the value of your relationship with Oracle.
    Take a read through the IDC report. To learn more about the partnership, see this press release. Happy selling!
    The OPN Communications Team

    Read the article

  • Architecture for Social Graph data that has a Time Frame Associated?

    - by Jay Stevens
    I am adding some "social" type features to an existing application. There is a limited number of node and edge types. Overall, the data itself is relatively small (50,000 - 70,000 instances of each type of node), and there will be a number of edges (relationships) between them (almost all directional). This, I know, is relatively easy to represent with an RDF store (such as BrightstarDB) or something like Microsoft's Trinity (or really many of the NoSQL options).
    The thing that, I think, makes this a unique use case is that each relationship will have a timeframe associated with it (start and end dates). Right now, I'm thinking of just storing this in a relational structure and dealing with the headaches of "traversing the graph", but I'm looking for suggestions on a better approach (both in terms of data structure and server). The relational structure would have the following columns (a sketch of one possible mapping follows below):

        Column
        ================
        From_Node_ID
        Relationship
        To_Node_ID
        StartDate
        EndDate

    Any suggestions or thoughts are welcomed.
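    For concreteness, here is a minimal sketch of how that time-framed edge table might map onto an Entity Framework entity, with one illustrative helper that restricts a traversal step to edges active at a given instant. The class and property names simply mirror the columns above; everything else (the nullable EndDate for still-active relationships, the query shape) is an assumption for illustration, not something from the original question.

        using System;
        using System.Linq;

        // Sketch of the time-framed edge from the question as a POCO entity.
        // Column names mirror the question; all other details are assumptions.
        public class Relationship
        {
            public int FromNodeId { get; set; }
            public string RelationshipType { get; set; }
            public int ToNodeId { get; set; }
            public DateTime StartDate { get; set; }
            public DateTime? EndDate { get; set; }   // null is assumed to mean "still active"
        }

        public static class GraphQueries
        {
            // One hop of "traversing the graph": the neighbors of a node whose
            // relationship was active at the given instant.
            public static IQueryable<int> NeighborsAt(
                IQueryable<Relationship> edges, int nodeId, DateTime instant)
            {
                return edges.Where(e => e.FromNodeId == nodeId
                                     && e.StartDate <= instant
                                     && (e.EndDate == null || e.EndDate >= instant))
                            .Select(e => e.ToNodeId);
            }
        }

    With a composite index on (From_Node_ID, StartDate, EndDate), each hop of a traversal stays a cheap range scan, which softens the usual relational "graph traversal" pain for data of this size.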

    Read the article

< Previous Page | 117 118 119 120 121 122 123 124 125 126 127 128  | Next Page >