Search Results

Search found 8349 results on 334 pages for 'entity groups'.

  • Is dynamically casting entities a good design?

    - by Milo
    For my game, everything inherits from Entity: Player, PhysicsObject, etc. The physics engine sends collision callbacks that carry an Entity* to the entity B that entity A collided with. Then, let's say A is a Bullet: A tries to cast the other entity to a Player, and if the cast succeeds, it reduces the player's health. Is this a good design? The problem I have with a message system is that I'd need messages for everything, like entity.sendMessage(SET_PLAYER_HEALTH, 16); so that's why I think casting is cleaner.
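
    A minimal sketch of the cast-per-collision approach, written here in Java with hypothetical class names (the question's engine is C++-style, but the shape is the same):

    ```java
    // Hypothetical names; type-tested dispatch inside a collision callback.
    public void onCollision(Entity a, Entity b) {
        if (a instanceof Bullet && b instanceof Player) {
            Player player = (Player) b;   // safe: guarded by the instanceof check
            Bullet bullet = (Bullet) a;
            player.reduceHealth(bullet.getDamage());
        }
        // Every new interaction needs another type check here, which is the
        // usual argument for messages/components over casting.
    }
    ```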

  • Under what circumstances would a LINQ-to-SQL Entity "lose" a changed field?

    - by John Rudy
    I'm going nuts over what should be a very simple situation. In an ASP.NET MVC 2 app (not that I think this matters), I have an edit action which takes a very small entity and makes a few changes. The key portion (outside of error handling/security) looks like this: Todo t = Repository.GetTodoByID(todoID); UpdateModel(t); Repository.Save(); Todo is the very simple, small entity with the following fields: ID (primary key), FolderID (foreign key), PercentComplete, TodoText, IsDeleted and SaleEffortID (foreign key). Each of these obviously corresponds to a field in the database. When UpdateModel(t) is called, t does get correctly updated for all fields which have changed. When Repository.Save() is called, by the time the SQL is written out, FolderID reverts back to its original value. The complete code to Repository.Save(): public void Save() { myDataContext.SubmitChanges(); } myDataContext is an instance of the DataContext class created by the LINQ-to-SQL designer. Nothing custom has been done to this aside from adding some common interfaces to some of the entities. I've validated that the FolderID is getting lost before the call to Repository.Save() by logging out the generated SQL: UPDATE [Todo].[TD_TODO] SET [TD_PercentComplete] = @p4, [TD_TodoText] = @p5, [TD_IsDeleted] = @p6 WHERE ([TD_ID] = @p0) AND ([TD_TDF_ID] = @p1) AND /* Folder ID */ ([TD_PercentComplete] = @p2) AND ([TD_TodoText] = @p3) AND (NOT ([TD_IsDeleted] = 1)) AND ([TD_SE_ID] IS NULL) /* SaleEffort ID */ -- @p0: Input BigInt (Size = -1; Prec = 0; Scale = 0) [5] -- @p1: Input BigInt (Size = -1; Prec = 0; Scale = 0) [1] /* this SHOULD be 4 and in the update list */ -- @p2: Input TinyInt (Size = -1; Prec = 0; Scale = 0) [90] -- @p3: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [changing text] -- @p4: Input TinyInt (Size = -1; Prec = 0; Scale = 0) [0] -- @p5: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [changing text foo] -- @p6: Input Bit (Size = -1; Prec = 0; Scale = 0) [True] -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 4.0.30319.1 So somewhere between UpdateModel(t) (where I've validated in the debugger that FolderID updated) and the output of this SQL, the FolderID reverts. The other fields all save. (Well, OK, I haven't validated SaleEffortID yet, because that subsystem isn't really ready yet, but everything else saves.) I've exhausted my own means of research on this: Does anyone know of conditions which would cause a partial entity reset (EG, something to do with long foreign keys?), and/or how to work around this?

  • Are there ways to improve NHibernate's performance regarding entity instantiation?

    - by denny_ch
    Hi folks, while profiling NHibernate with NHProf I noticed that a lot of time is spent on entity building, or at least spent outside the query duration (the database roundtrip). The project I'm currently working on prefetches some static data (which goes into the 2nd-level cache) at application start. There are about 3000 rows in the result set (and maybe 30 columns), which is queried in 75 ms. The overall duration observed by NHProf is about 13 SECONDS! Is this typical behaviour? I know that NHibernate shouldn't be used for bulk operations, but I didn't think that entity instantiation would be so expensive. Are there ways to improve performance in such situations, or do I have to live with it? Thx, denny_ch
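
    For what it's worth, the usual bulk-read escape hatch in the Hibernate family is a stateless session, which skips the first-level cache, dirty checking and most per-entity bookkeeping. Sketched below with the Java Hibernate API against an assumed StaticData entity and an existing SessionFactory; NHibernate's IStatelessSession is the direct analogue:

    ```java
    // Bulk read via StatelessSession: no persistence context, no dirty checking.
    StatelessSession session = sessionFactory.openStatelessSession();
    try {
        List<?> rows = session.createQuery("from StaticData").list();
        // Entities come back detached: cheaper to build, but no lazy loading,
        // no caching and no automatic flush.
    } finally {
        session.close();
    }
    ```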

  • Accessing managers from game entities/components

    - by Boreal
    I'm designing an entity-component engine in C# right now, and all components need to have access to the global event manager, which sends off inter-entity events (every entity also has a local event manager). I'd like to be able to simply call functions like this: GlobalEventManager.Publish("Foo", new EventData()); GlobalEventManager.Subscribe("Bar", OnBarEvent); without having to do this: class HealthComponent { private EventManager globalEventManager; public HealthComponent(EventManager gEM) { globalEventManager = gEM; } } // later on... EventManager globalEventManager = new EventManager(); Entity playerEntity = new Entity(); playerEntity.AddComponent(new HealthComponent(globalEventManager)); How can I accomplish this? EDIT: I solved it by creating a singleton called GlobalEventManager. It derives from the local EventManager class and I use it like this: GlobalEventManager.Instance.Publish("Foo", new EventData());
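
    The singleton from the edit is a language-agnostic pattern; for illustration, the same idea sketched in Java (all names hypothetical):

    ```java
    // A minimal global event-manager singleton, mirroring the C# solution above.
    public final class GlobalEventManager extends EventManager {
        private static final GlobalEventManager INSTANCE = new GlobalEventManager();

        private GlobalEventManager() { }   // no outside construction

        public static GlobalEventManager instance() {
            return INSTANCE;
        }
    }

    // Usage: GlobalEventManager.instance().publish("Foo", new EventData());
    ```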

  • Entity Framework insert error ("The Version field is required.")

    - by Graham
    I am using Silverlight 4 and RIA services. When I try to insert into my database, I get the following error: "Submit operation failed validation. Please inspect Entity.ValidationErrors for each entity in EntitiesInError for more information." Upon inspecting the ValidationErrors, I see: "The Version field is required." Isn't the Version field updated and maintained by the framework? If so, why is it null? If not, how am I supposed to set it?

  • How do I use a .dat file as a model for an entity in Minecraft?

    - by user1835502
    The entity does not have to do anything, just stand there like a statue; or it doesn't have to be an entity at all, just so the .dat file renders in Minecraft. I saw this http://pastebin.com/jP4diLJ9 in a Java file that came with a .exe, and in the .exe you can choose one of the .dat files and it will show it to you as a model. I also asked on the Minecraft forums and someone said it was possible, but I have no idea how. Maybe it has something to do with this: try { FileOutputStream fileoutputstream = new FileOutputStream("./Models/"+id+".dat"); fileoutputstream.write(b, 0, b.length); fileoutputstream.close(); } catch(Throwable _ex) {} or this: if(i > 0) { //grabs all models System.out.println("Sending request for Model: "+i); addRequest( 7, i); byte b[] = pack.unpack(download().buffer); if(pack.unpacked) { savemodel(i,b); System.out.println("Model Saved"); } else { System.out.println("Model Grabbing Complete"); break;

  • What is the "Entity" referring to in Fluent NHibernate mapping configuration?

    - by percent20
    I am trying to learn Fluent NHibernate better, so I am doing a basic sample application from scratch instead of using someone else's framework. However, I am finding I really don't understand what is going on in assigning mapping files. I have seen a lot of code examples which all show the same code, but nothing that spells it out; no description of how it works, just that it works. Here is a code example that I see often: return Fluently.Configure() .Database(config) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Entity>()) .BuildSessionFactory(); So in the code example, what is Entity, and how does that piece of code work? Part of me thinks it is the name of the assembly, but seeing as the namespace I am using is usually the name of the assembly, the compiler complains that I am using a namespace as a type. I feel this is important and am rather flustered by the fact I can't figure it out. Thanks

  • Database modeling for Google App Engine for multiple revisions of an entity

    - by iamgopal
    Hi, in my application (a kind of wiki clone) an article changes frequently, and I need to track all changes that are made to that article (text only). One crude way I have done it is to add a datetime property and create a new entity every time something changes, which wastes too much database space (and also wastes unnecessary indexes), and I also need to re-create parent-child and entity relationships each time. I also have a log which can show changes, but I want something easier, so that jumping from one version to another could be simpler. Ideas? Thanks.
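
    One common answer on App Engine is to store each revision as its own child entity of the article, keyed by an increasing version number, so jumping between versions is a single ancestor query. A rough sketch with the Java low-level datastore API; the kind and property names are illustrative:

    ```java
    import com.google.appengine.api.datastore.*;
    import java.util.Date;
    import java.util.List;

    public class RevisionStore {
        private final DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

        // Append a new revision inside the article's entity group.
        public void saveRevision(Key articleKey, long version, String text) {
            Entity revision = new Entity("ArticleRevision", articleKey);
            revision.setProperty("version", version);       // monotonically increasing
            revision.setProperty("text", new Text(text));   // Text allows long strings
            revision.setProperty("created", new Date());
            ds.put(revision);
        }

        // Fetch the newest revision (or drop the sort/limit to list them all).
        public Entity latestRevision(Key articleKey) {
            Query q = new Query("ArticleRevision", articleKey)
                    .addSort("version", Query.SortDirection.DESCENDING);
            List<Entity> top = ds.prepare(q).asList(FetchOptions.Builder.withLimit(1));
            return top.isEmpty() ? null : top.get(0);
        }
    }
    ```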

  • How to find the entity with the greatest primary key?

    - by simpatico
    I have an entity LearningUnit that has an int primary key; actually, it has nothing more. The entity Concept has the following relationship with it: @ManyToOne @Size(min=1,max=7) private LearningUnit learningUnit; In a constructor of Concept I need to retrieve the LearningUnit with the greatest primary key; if no LearningUnit exists yet, I instantiate one. I then set this.learningUnit to the retrieved/instantiated one. Finally, I call the empty constructor of Concept in a try-catch block, to have the entity manager do the cardinality check. I expect an exception to be thrown in the case that another 7 Concepts are already referring to the same LearningUnit; in that case, I instantiate a new LearningUnit with a new, greater primary key. Please also point out any clear pitfalls in my outlined algorithm above.
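
    For the retrieval step, a minimal sketch (assuming JPA 2.0 typed queries and an available EntityManager em):

    ```java
    // Fetch the LearningUnit with the greatest primary key, if one exists.
    List<LearningUnit> top = em.createQuery(
            "SELECT l FROM LearningUnit l ORDER BY l.id DESC", LearningUnit.class)
        .setMaxResults(1)
        .getResultList();
    LearningUnit unit = top.isEmpty() ? new LearningUnit() : top.get(0);
    ```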

  • How to load an entity by a key other than primary key?

    - by stacker
    In a customized servlet (Seam 2.1.2) this works fine: TableNameHome tableNameHome = (TableNameHome) Component.getInstance( "tableNameHome" ); TableName entity = tableNameHome.getInstance(); entity.setXXX(); tableNameHome.persist(); However, this one fails: entityManager = tableNameHome.getEntityManager(); Query query = entityManager.createQuery( "SELECT b FROM tablename b WHERE b.key2nd = :key2nd" ); query.setParameter( "key2nd", value); List results = query.getResultList(); and leads to this error message: org.hibernate.hql.ast.QuerySyntaxException: tablename is not mapped [SELECT b FROM tablename b WHERE b.key2nd = :key2nd] In EJB 2.1 I could implement other finder methods; EntityHome.find() searches only by primary key. What do I need to do in order to query by a criterion other than the primary key?
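
    The error message points at the cause: JPQL is written against mapped entity names (by default the class name, e.g. TableName), not database table names. A sketch of the corrected query, reusing the names from the question:

    ```java
    // JPQL uses the entity name, not the table name, in the FROM clause.
    Query query = entityManager.createQuery(
            "SELECT b FROM TableName b WHERE b.key2nd = :key2nd");
    query.setParameter("key2nd", value);
    List results = query.getResultList();
    ```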

  • How do I check that an entity is unreferenced in JPA?

    - by Martin
    I have the following model @Entity class Element { @Id int id; @Version int version; @ManyToOne Type type; } @Entity class Type { @Id int id; @Version int version; @OneToMany(mappedBy="type") Collection<Element> elements; @Basic(optional=false) boolean disabled; } and would like to allow Type.disabled = true only if Type.elements is empty. Is there a way to do it atomically? I would like to prevent an insertion of an Element in a transaction while the corresponding Type is being disabled by an other transaction.
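
    One way to get the atomicity, assuming JPA 2.0 locking and that both transactions cooperate, is to force a version increment on the Type in each of them, so a concurrent insert and disable cannot both commit:

    ```java
    // Disabling transaction: bump Type's @Version so concurrent writers conflict.
    Type type = em.find(Type.class, typeId);
    em.lock(type, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
    long count = em.createQuery(
            "SELECT COUNT(e) FROM Element e WHERE e.type = :t", Long.class)
        .setParameter("t", type)
        .getSingleResult();
    if (count == 0) {
        type.disabled = true;   // field access, matching the model above
    }
    // The Element-inserting transaction must lock its Type the same way; one of
    // two conflicting transactions then fails with an OptimisticLockException.
    ```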

  • GWT Editor: How to set last modified time on the entity when saved?

    - by Mike
    Suppose on the client side I have an entity proxy to edit in the UI, and when I click the save button the last modified time should be saved in the entity as a field. //start MyEntityProxy proxy = getProxy();//fetched from server Request<Void> saveRequest = requestFact.myEntityProxyRequest().save(proxy); editorDriver.edit(proxy, saveRequest.getRequestContext()); editorDriver.flush(); //user modifies UI .... //save editorDriver.flush(); saveRequest.fire(); The problem is: where do I insert the proxy.setLastModifiedTime(data) call? I always get java.lang.IllegalStateException: The AutoBean has been frozen. Thanks.
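
    The frozen-AutoBean error usually means the mutation happened on the original (frozen) proxy rather than on the editable copy owned by the RequestContext. A hedged sketch of the reordering, with MyRequestContext standing in for the actual context type:

    ```java
    // Obtain the context first, edit the proxy into a mutable copy, save that copy.
    MyRequestContext ctx = requestFact.myEntityProxyRequest();
    MyEntityProxy editable = ctx.edit(proxy);        // unfrozen copy bound to ctx
    Request<Void> saveRequest = ctx.save(editable);
    editorDriver.edit(editable, ctx);
    // ... user modifies UI ...
    editorDriver.flush();
    editable.setLastModifiedTime(new Date());        // legal: editable is not frozen
    saveRequest.fire();
    ```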

  • AutoMapper is not working for a Container class

    - by rboarman
    Hello, I have an AutoMapper issue that has been driving me crazy for way too long now. A similar question was also posted on the AutoMapper user site but has not gotten much love. The summary is that I have a container class that holds a Dictionary of components. The components are a derived object of a common base class. I also have a parallel structure that I am using as DTO objects to which I want to map. The error that gets generated seems to say that the mapper cannot map between two of the classes that I have included in the CreateMap calls. I think the error has to do with the fact that I have a Dictionary of objects that are not part of the container‘s hierarchy. I apologize in advance for the length of the code below. My simple test cases work. Needless to say, it’s only the more complex case that is failing. Here are the classes: #region Dto objects public class ComponentContainerDTO { public Dictionary<string, ComponentDTO> Components { get; set; } public ComponentContainerDTO() { this.Components = new Dictionary<string, ComponentDTO>(); } } public class EntityDTO : ComponentContainerDTO { public int Id { get; set; } } public class ComponentDTO { public EntityDTO Owner { get; set; } public int Id { get; set; } public string Name { get; set; } public string ComponentType { get; set; } } public class HealthDTO : ComponentDTO { public decimal CurrentHealth { get; set; } } public class PhysicalLocationDTO : ComponentDTO { public Point2D Location { get; set; } } #endregion #region Domain objects public class ComponentContainer { public Dictionary<string, Component> Components { get; set; } public ComponentContainer() { this.Components = new Dictionary<string, Component>(); } } public class Entity : ComponentContainer { public int Id { get; set; } } public class Component { public Entity Owner { get; set; } public int Id { get; set; } public string Name { get; set; } public string ComponentType { get; set; } } public class Health : Component { public decimal CurrentHealth { get; set; } } public struct Point2D { public decimal X; public decimal Y; public Point2D(decimal x, decimal y) { X = x; Y = y; } } public class PhysicalLocation : Component { public Point2D Location { get; set; } } #endregion The code: var entity = new Entity() { Id = 1 }; var healthComponent = new Health() { CurrentHealth = 100, Owner = entity, Name = "Health", Id = 2 }; entity.Components.Add("1", healthComponent); var locationComponent = new PhysicalLocation() { Location = new Point2D() { X = 1, Y = 2 }, Owner = entity, Name = "PhysicalLocation", Id = 3 }; entity.Components.Add("2", locationComponent); Mapper.CreateMap<ComponentContainer, ComponentContainerDTO>() .Include<Entity, EntityDTO>(); Mapper.CreateMap<Entity, EntityDTO>(); Mapper.CreateMap<Component, ComponentDTO>() .Include<Health, HealthDTO>() .Include<PhysicalLocation, PhysicalLocationDTO>(); Mapper.CreateMap<Component, ComponentDTO>(); Mapper.CreateMap<Health, HealthDTO>(); Mapper.CreateMap<PhysicalLocation, PhysicalLocationDTO>(); Mapper.AssertConfigurationIsValid(); var targetEntity = Mapper.Map<Entity, EntityDTO>(entity); The error when I call Map() (abbreviated stack crawls): AutoMapper.AutoMapperMappingException was unhandled Message=Trying to map MapperTest1.Entity to MapperTest1.EntityDTO. Using mapping configuration for MapperTest1.Entity to MapperTest1.EntityDTO Exception of type 'AutoMapper.AutoMapperMappingException' was thrown. 
Source=AutoMapper StackTrace: at AutoMapper.MappingEngine.AutoMapper.IMappingEngineRunner.Map(ResolutionContext context) . . . InnerException: AutoMapper.AutoMapperMappingException Message=Trying to map System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[MapperTest1.Component, ElasticTest1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]] to System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[MapperTest1.ComponentDTO, ElasticTest1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]. Using mapping configuration for MapperTest1.Entity to MapperTest1.EntityDTO Destination property: Components Exception of type 'AutoMapper.AutoMapperMappingException' was thrown. Source=AutoMapper StackTrace: at AutoMapper.Mappers.TypeMapObjectMapperRegistry.PropertyMapMappingStrategy.MapPropertyValue(ResolutionContext context, IMappingEngineRunner mapper, Object mappedObject, PropertyMap propertyMap) . . InnerException: AutoMapper.AutoMapperMappingException Message=Trying to map System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[MapperTest1.Component, ElasticTest1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]] to System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[MapperTest1.ComponentDTO, ElasticTest1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]. Using mapping configuration for MapperTest1.Entity to MapperTest1.EntityDTO Destination property: Components Exception of type 'AutoMapper.AutoMapperMappingException' was thrown. Source=AutoMapper StackTrace: at AutoMapper.MappingEngine.AutoMapper.IMappingEngineRunner.Map(ResolutionContext context) . InnerException: AutoMapper.AutoMapperMappingException Message=Trying to map MapperTest1.Component to MapperTest1.ComponentDTO. Using mapping configuration for MapperTest1.Health to MapperTest1.HealthDTO Destination property: Components Exception of type 'AutoMapper.AutoMapperMappingException' was thrown. Source=AutoMapper StackTrace: at AutoMapper.MappingEngine.AutoMapper.IMappingEngineRunner.Map(ResolutionContext context) . . InnerException: AutoMapper.AutoMapperMappingException Message=Trying to map System.Decimal to System.Decimal. Using mapping configuration for MapperTest1.Health to MapperTest1.HealthDTO Destination property: CurrentHealth Exception of type 'AutoMapper.AutoMapperMappingException' was thrown. Source=AutoMapper StackTrace: at AutoMapper.Mappers.TypeMapObjectMapperRegistry.PropertyMapMappingStrategy.MapPropertyValue(ResolutionContext context, IMappingEngineRunner mapper, Object mappedObject, PropertyMap propertyMap) . . InnerException: System.InvalidCastException Message=Unable to cast object of type 'MapperTest1.ComponentDTO' to type 'MapperTest1.HealthDTO'. Source=Anonymously Hosted DynamicMethods Assembly StackTrace: at SetCurrentHealth(Object , Object ) . . Thank you in advance. Rick

  • Is this a bug in Profiler or Entity Framework?

    - by AjarnMark
    Using Entity Framework 4 with stored procedures and SQL Server 2008 SP1... When running SQL Server Profiler (TSQL_SPs template), the lines that show my stored procedure call and its statements say that they executed in DatabaseID = 1 (Master) but it is actually happening in my application database (ID = 8). The procedures execute properly and return the data, and they only exist in my application database, so why does Profiler mark those lines as being in Master? Is this a bug in Profiler? Is it a bug in EF4? Note that running the same code against a SQL 2000 instance, Profiler correctly shows the application's database ID.

  • setIncludesSubentities: in an NSFetchRequest is broken for entities across multiple persistent stores

    - by SG
    Prior art which doesn't quite address this: http://stackoverflow.com/questions/1774359/core-data-migration-error-message-model-does-not-contain-configuration-xyz I have narrowed this down to a specific issue. It takes a minute to set up, though; please bear with me. The gist of the issue is that a persistentStoreCoordinator (apparently) cannot preserve the part of an object graph where a managedObject is marked as a subentity of another when they are stored in different files. Here goes... 1) I have 2 xcdatamodel files, each containing a single entity. In runtime, when the managed object model is constructed, I manually define one entity as subentity of another using setSubentities:. This is because defining subentities across multiple files in the editor is not supported yet. I then return the complete model with modelByMergingModels. //Works! [mainEntity setSubentities:canvasEntities]; NSLog(@"confirm %@ is super for %@", [[[canvasEntities lastObject] superentity] name], [[canvasEntities lastObject] name]); //Output: "confirm Note is super for Browser" 2) I have modified the persistentStoreCoordinator method so that it sets a different store for each entity. Technically, it uses configurations, and each entity has one and only one configuration defined. //Also works! for ( NSString *configName in [[HACanvasPluginManager shared].registeredCanvasTypes valueForKey:@"viewControllerClassName"] ) { storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:[configName stringByAppendingPathExtension:@"sqlite"]]]; //NSLog(@"entities for configuration '%@': %@", configName, [[[self managedObjectModel] entitiesForConfiguration:configName] valueForKey:@"name"]); //Output: "entities for configuration 'HATextCanvasController': (Note)" //Output: "entities for configuration 'HAWebCanvasController': (Browser)" if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:configName URL:storeUrl options:options error:&error]) //etc 3) I have a fetchRequest set for the parent entity, with setIncludesSubentities: and setAffectedStores: just to be sure we get both 1) and 2) covered. When inserting objects of either entity, they both are added to the context and they both are fetched by the fetchedResultsController and displayed in the tableView as expected. // Create the fetch request for the entity. NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; [fetchRequest setEntity:entity]; [fetchRequest setIncludesSubentities:YES]; //NECESSARY to fetch all canvas types [fetchRequest setSortDescriptors:sortDescriptors]; [fetchRequest setFetchBatchSize:20]; // Set the batch size to a suitable number. [fetchRequest setAffectedStores:[[managedObjectContext persistentStoreCoordinator] persistentStores]]; [fetchRequest setReturnsObjectsAsFaults:NO]; Here is where it starts misbehaving: after closing and relaunching the app, ONLY THE PARENT ENTITY is fetched. If I change the entity of the request using setEntity: to the entity for 'Note', all notes are fetched. If I change it to the entity for 'Browser', all the browsers are fetched. Let me reiterate that during the run in which an object is first inserted into the context, it will appear in the list. It is only after save and relaunch that a fetch request fails to traverse the hierarchy. Therefore, I can only conclude that it is the storage of the inheritance that is the problem. 
Let's recap why: - Both entities can be created, inserted into the context, and viewed, so the model is working - Both entities can be fetched with a single request, so the inheritance is working - I can confirm that the files are being stored separately and objects are going into their appropriate stores, so saving is working - Launching the app with either entity set for the request works, so retrieval from the store is working - This also means that traversing different stores with the request is working - By using a single store instead of multiple, the problem goes away completely, so creating, storing, fetching, viewing etc is working correctly. This leaves only one culprit (to my mind): the inheritance I'm setting with setSubentities: is effective only for objects creating during the session. Either objects/entities are being stored stripped of the inheritance info, or entity inheritance as defined programmatically only applies to new instances, or both. Either of these is unacceptable. Either it's a bug or I am way, way off course. I have been at this every which way for two days; any insight is greatly appreciated. The current workaround - just using a single store - works completely, except it won't be future-proof in the event that I remove one of the models from the app etc. It also boggles the mind because I can't see why you would have all this infrastructure for storing across multiple stores and for setting affected stores in fetch requests if it by core definition (of setSubentities:) doesn't work.

  • Hibernate - Persisting polymorphic joins

    - by Marty Pitt
    Hi, I'm trying to understand how best to implement a polymorphic one-to-many in Hibernate. E.g.: @MappedSuperclass public class BaseEntity { Integer id; // etc... } @Entity public class Author extends BaseEntity {} @Entity public class Post extends BaseEntity {} @Entity public class Comment extends BaseEntity {} Now I'd also like to persist audit information, with the following class: @Entity public class AuditEvent { @ManyToOne // ? BaseEntity entity; } What is the appropriate mapping for auditEvent.entity? Also, how will Hibernate actually persist this? Would a series of join tables be generated (AuditEvent_Author, AuditEvent_Post, AuditEvent_Comment), or is there a better way? Note: I'd rather not have my other entity classes expose the other side of the join (e.g., List<AuditEvent> events on BaseEntity) - but if that's the cleanest way to implement it, then it will suffice.
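
    On the mapping question: Hibernate has a proprietary answer outside the JPA spec, @Any, which persists a discriminator column plus an id column on the AuditEvent table instead of generating per-type join tables. A sketch with assumed column names:

    ```java
    import org.hibernate.annotations.Any;
    import org.hibernate.annotations.AnyMetaDef;
    import org.hibernate.annotations.MetaValue;

    @Entity
    public class AuditEvent {
        @Any(metaColumn = @Column(name = "entity_type"))   // stores "AUTHOR", "POST", ...
        @AnyMetaDef(idType = "integer", metaType = "string", metaValues = {
            @MetaValue(value = "AUTHOR", targetEntity = Author.class),
            @MetaValue(value = "POST", targetEntity = Post.class),
            @MetaValue(value = "COMMENT", targetEntity = Comment.class)
        })
        @JoinColumn(name = "entity_id")                    // the referenced row's id
        BaseEntity entity;
    }
    ```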

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc. I also listed a module named “Wind (zh-CN)”, which was created by a friend of mine, Jeff Zhao (zh-CN). Now I would like to use a separate post to introduce this module, since I feel it brings a new async programming style not only to Node.js but to the JavaScript world. If you know or have heard about the new feature in C# 5.0 called “async and await”, or you have learnt F#, you will find that “Wind” brings a similar async programming experience to JavaScript. By using “Wind”, we can write async code that looks like sync code. The callbacks, async states and exceptions will be handled by “Wind” automatically and transparently.   What’s the Problem: Dense “Callback” Phobia Let’s first go back to my second post in this series. As I mentioned in that post, when we want to read some records from SQL Server we need to open the database connection and then execute the query. In Node.js all IO operations are designed around the async callback pattern, which means that when the operation is done, it will invoke a callback function passed as the last parameter. For example, the database connection opening code would be like this. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: }); And then if we need to query the database the code would be like this. It is nested in the previous function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: }); Assuming we need to copy some data from this database to another, we then need to open another connection and execute the command within the function nested under the query function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: }); This is just an example. In a real project the logic would be more complicated. This means our application might become messed up, and the business process will be fragmented across many callback functions. I would like to call this “Dense Callback Phobia”. The challenge is how to make the code straightforward and easy to read, something like below.
1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }   What’s the Problem: Sync-styled Async Programming Similar as the previous problem, the callback-styled async programming model makes the upcoming operation as a part of the current operation, and mixed with the error handling code. So it’s very hard to understand what on earth this code will do. And since Node.js utilizes non-blocking IO mode, we cannot invoke those operations one by one, as they will be executed concurrently. For example, in this post when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the “Finished” message, I will see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and print operation executed synchronously I introduced a module named “async” and the code was changed as below. 1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: }); It ensured that the “Finished” message will be printed when all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. It might be another challenge to make the code looks like in sync-style? 1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }   How “Wind” Helps “Wind” is a JavaScript library which provides the control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It’s available in NPM so that we can install it through “npm install wind”. Now let’s create a very simple Node.js application as the example. This application will take some website URLs from the command arguments and tried to retrieve the body length and print them in console. Then at the end print “Finish”. I’m going to use “request” module to make the HTTP call simple so I also need to install by the command “npm install request”. The code would be like this. 
1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main(); Let’s execute this application. (I made them in multi-lines for better reading.) 1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx" Below is the output. As you can see the finish message was printed at the beginning, and the pages’ length retrieved in a different order than we specified. This is because in this code the request command, console logging command are executed asynchronously and concurrently. Now let’s introduce “Wind” to make them executed in order, which means it will request the websites one by one, and print the message at the end.   First of all we need to import the “Wind” package and make sure the there’s only one global variant named “Wind”, and ensure it’s “Wind” instead of “wind”. 1: var Wind = require("wind");   Next, we need to tell “Wind” which code will be executed asynchronously so that “Wind” can control the execution process. In this case the “request” operation executed asynchronously so we will create a “Task” by using a build-in helps function in “Wind” named Wind.Async.Task.create. 1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: }; The code above created a “Task” from the original request calling code. In “Wind” a “Task” means an operation will be finished in some time in the future. A “Task” can be started by invoke its start() method, but no one knows when it actually will be finished. The Wind.Async.Task.create helped us to create a task. The only parameter is a function where we can put the actual operation in, and then notify the task object it’s finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task as “success” with the URL and body length. If it failed I set this task as “failure” and pass the error out.   Next, we will change the main() function. In “Wind” if we want a function can be controlled by Wind we need to mark it as “async”. This should be done by using the code below. 
1: var main = eval(Wind.compile("async", function() { 2: })); When the application is running, Wind will detect “eval(Wind.compile(“async”, function” and generate an anonymous code from the body of this original function. Then the application will run the anonymous code instead of the original one. In our example the main function will be like this. 1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: })); As you can see, when I tried to request the URL I use a new command named “$await”. It tells Wind, the operation next to $await will be executed asynchronously, and the main thread should be paused until it finished (or failed). So in this case, my application will be pause when the first response was received, and then print its body length, then try the next one. At the end, print the finish message.   Finally, execute the main function. The full code would be like this. 1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();   Run our new application. At the beginning we will see the compiled and generated code by Wind. Then we can see the pages were requested one by one, and at the end the finish message was printed. Below is the code Wind generated for us. As you can see the original code, the output code were shown. 
1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })   How Wind Works Someone may raise a big concern when you find I utilized “eval” in my code. Someone may assume that Wind utilizes “eval” to execute some code dynamically while “eval” is very low performance. But I would say, Wind does NOT use “eval” to run the code. It only use “eval” as a flag to know which code should be compiled at runtime. When the code was firstly been executed, Wind will check and find “eval(Wind.compile(“async”, function”. So that it knows this function should be compiled. Then it utilized parse-js to analyze the inner JavaScript and generated the anonymous code in memory. Then it rewrite the original code so that when the application was running it will use the anonymous one instead of the original one. Since the code generation was done at the beginning of the application was started, in the future no matter how long our application runs and how many times the async function was invoked, it will use the generated code, no need to generate again. So there’s no significant performance hurt when using Wind.   Wind in My Previous Demo Let’s adopt Wind into one of my previous demonstration and to see how it helps us to make our code simple, straightforward and easy to read and understand. In this post when I implemented the functionality that copied the records from my WASD to table storage, the logic would be like this. 1, Open database connection. 2, Execute a query to select all records from the table. 3, Recreate the table in Windows Azure table storage. 4, Create entities from each of the records retrieved previously, and then insert them into table storage. 5, Finally, show message as the HTTP response. But as the image below, since there are so many callbacks and async operations, it’s very hard to understand my logic from the code. Now let’s use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files into project and mark them as “Copy always”. Add the Wind package into the source code. Pay attention to the variant name, you must use “Wind” instead of “wind”. 
1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind"); Now we need to create some async functions by using Wind. All async functions should be wrapped so that it can be controlled by Wind which are open database, retrieve records, recreate table (delete and create) and insert entity in table. Below are these new functions. All of them are created by using Wind.Async.Task.create. 1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: }; Then in order to use these functions we will create a new function which contains all steps for data copying. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: })); Let’s execute steps one by one with the “$await” keyword introduced by Wind so that it will be invoked in sequence. First is to open the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: })); Then retrieve all records from the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: })); After recreated the table, we need to create the entities and insert them into table storage. 
1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: } 24: } 25: catch (ex) { 26: console.log(ex); 27: res.send(500, "Internal error."); 28: } 29: })); Finally, send response back to the browser. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: // send response 24: console.log("all done"); 25: res.send(200, "All done!"); 26: } 27: } 28: catch (ex) { 29: console.log(ex); 30: res.send(500, "Internal error."); 31: } 32: })); If we compared with the previous code we will find now it became more readable and much easy to understand. It’s very easy to know what this function does even though without any comments. When user go to URL “/was/copyRecords” we will execute the function above. The code would be like this. 1: app.get("/was/copyRecords", function (req, res) { 2: copyRecords(req, res).start(); 3: }); And below is the logs printed in local compute emulator console. As we can see the functions executed one by one and then finally the response back to me browser.   Scaffold Functions in Wind Wind provides not only the async flow control and compile functions, but many scaffold methods as well. We can build our async code more easily by using them. I’m going to introduce some basic scaffold functions here. In the code above I created some functions which wrapped from the original async function such as open database, create table, etc.. All of them are very similar, created a task by using Wind.Async.Task.create, return error or result object through Task.complete function. In fact, Wind provides some functions for us to create task object from the original async functions. If the original async function only has a callback parameter, we can use Wind.Async.Binding.fromCallback method to get the task object directly. For example the code below returned the task object which wrapped the file exist check function. 
1: var Wind = require("wind"); 2: var fs = require("fs"); 3:  4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists); In Node.js a very popular async function pattern is that, the first parameter in the callback function represent the error object, and the other parameters is the return values. In this case we can use another build-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created from the code below. 1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open); 2:  3: /* 4: sql.openAsync = function (connectionString) { 5: return Wind.Async.Task.create(function (t) { 6: sql.open(connectionString, function (error, conn) { 7: if (error) { 8: t.complete("failure", error); 9: } 10: else { 11: t.complete("success", conn); 12: } 13: }); 14: }); 15: }; 16: */ When I was testing the scaffold functions under Wind.Async.Binding I found for some functions, such as the Azure SDK insert entity function, cannot be processed correctly. So I personally suggest writing the wrapped method manually.   Another scaffold method in Wind is the parallel tasks coordination. In this example, the steps of open database, retrieve records and recreated table should be invoked one by one, but it can be executed in parallel when copying data from database to table storage. In Wind there’s a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task. It will be returned only when all tasks had been completed, or any errors occurred. For example in the code below I used the Task.whenAll to make all copy operation executed at the same time. 1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage in parallal 14: var tasks = new Array(results.rows.length); 15: for (var i = 0; i < results.rows.length; i++) { 16: var entity = { 17: "PartitionKey": results.rows[i][1], 18: "RowKey": results.rows[i][0], 19: "Value": results.rows[i][2] 20: }; 21: tasks[i] = azure.insertEntityAsync(tableName, entity); 22: } 23: $await(Wind.Async.Task.whenAll(tasks)); 24: // send response 25: console.log("all done"); 26: res.send(200, "All done!"); 27: } 28: } 29: catch (ex) { 30: console.log(ex); 31: res.send(500, "Internal error."); 32: } 33: })); 34:  35: app.get("/was/copyRecordsInParallel", function (req, res) { 36: copyRecordsInParallel(req, res).start(); 37: });   Besides the task creation and coordination, Wind supports the cancellation solution so that we can send the cancellation signal to the tasks. It also includes exception solution which means any exceptions will be reported to the caller function.   Summary In this post I introduced a Node.js module named Wind, which created by my friend Jeff Zhao. As you can see, different from other async library and framework, adopted the idea from F# and C#, Wind utilizes runtime code generation technology to make it more easily to write async, callback-based functions in a sync-style way. 
By using Wind there will be almost no callbacks, and the code will be very easy to understand. Currently Wind is still being developed and improved. There might be some problems, but the author, Jeff, should be very happy and enthusiastic to hear about your problems, feedback, suggestions and comments. You can contact Jeff by - Email: [email protected] - Group: https://groups.google.com/d/forum/windjs - GitHub: https://github.com/JeffreyZhao/wind/issues   Source code can be downloaded here.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle´s Hyperion Planning the Process Management has not only got a new name: “Approvals” now is offering the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called Promotional Path. I´d like to introduce you to changes and enhancements in this new process management and arouse your curiosity for checking out more details on it. One reason of using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn´t responsible for all data being entered into that entity, but for only part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. Complementing new Shared Services roles for Planning have been created in order to manage set up and use of Approvals: The Approvals Administrator consisting of the following roles: Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities). Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned. Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies and can edit validation rules on data forms for which access is assigned. (this includes as well Planner and Ownership Assigner responsibilities) Set up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUH´s or edit existing ones. The following window displays: After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are: All, which pre-selects all entities to be included for the definitions on the subsequent tabs None, manual entity selections will be made subsequently Custom, which offers the selection for an ancestor and the relative generations, that should be included for further definitions. Finally a pattern needs to be selected, which will determine the general flow of ownership: Free-form, uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2 In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path. Distributed, uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process. 
Proceeding to the next step, a secondary dimension and the respective members from that dimension may now be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. To refine this list, you can click on the icon right beside the Selected Members field and use the check-boxes in the appearing list to deselect members. -------------------------------------------------------------------------------------------------------- TIP: In order to reduce maintenance of the PUH due to changes in the included dimensions (members added, moved or removed), you should consider dynamically linking those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria are applied by right-clicking the name of an entity activated as a planning unit, then selecting an item from the shown list of include or exclude options (children, descendants, etc.). In any case, in order to apply dimension changes impacting the PUH, a synchronization must be run. Whether this is really necessary is shown on the first screen after selecting, from the menu, Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates that another user is currently changing or synchronizing the PUH. Select one of the non-synchronized PUHs (status No) and click the Synchronize option in order to execute. -------------------------------------------------------------------------------------------------------- In the next step, owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right beside the columns for Owner and Reviewer, the respective assignments can be made in the order that you want them to review the planning unit. While it is possible to assign only one owner per entity or combination of entity + member of the secondary dimension, the selection for reviewers may consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition, optional users may be defined to be notified about promotions for a planning unit. -------------------------------------------------------------------------------------------------------- TIP: Reviewers cannot change data; they can only review data according to their data access permissions and reject or promote planning units. -------------------------------------------------------------------------------------------------------- In order to complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, setup is complete for starting the approvals process.
Starting, stopping and controlling the approvals process is now done under the Tools menu, via Manage Approvals. The new PUH feature is complemented by various additional settings and features; at least some of them should be mentioned here:

- Export and import of PUHs
- An Out of Office agent
- Validation rules that change the promotional/approval path if violated (including the use of User-Defined Attributes (UDAs))
- Various new and helpful reviewer actions with corresponding approval states

About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Hibernate - how to delete bidirectional many-to-many association

    - by slomir
Problem: I have a many-to-many association between two entities, A and B. I made A the owner of the relationship (inverse="true" is set on the collection of A entities in b.hbm.xml). When I delete an A entity, the corresponding records in the join table are deleted. When I delete a B entity, the corresponding records in the join table are not deleted (integrity violation exception). Consider a very simple example:

class A {
    Set<B> bset = new HashSet<B>();
    //...
}

class B {
    Set<A> aset = new HashSet<A>();
    //...
}

File a.hbm.xml (many-to-many mappings only):

<set name="bset" table="AB">
    <key column="a_id"/>
    <many-to-many column="b_id" class="B"/>
</set>

File b.hbm.xml (many-to-many mappings only):

<set name="aset" table="AB" inverse="true">
    <key column="b_id"/>
    <many-to-many column="a_id" class="A"/>
</set>

Database relations:

A(id, ...)
B(id, ...)
AB(a_id, b_id)

Suppose we have some records in the AB join table, for example AB = {(1,1), (1,2)}, where each pair is (a_id, b_id).

Situation 1 - works, presumably because A is the owner of the relationship:

A a = aDao.read(1);  // read the A entity with id=1
aDao.delete(a);      // deletes 'a' and both of its relations to B entities

Situation 2 - doesn't work:

B b = bDao.read(1);  // read the B entity with id=1
bDao.delete(b);      // foreign key integrity violation

On the one hand this is somewhat logical to me, because the A entity is responsible for its relation with B. On the other hand, it is not logical - or at least not ORM-like - that I have to explicitly remove every join-table record in which a concrete B entity appears before I can delete that B entity, as shown in situation 3.

Situation 3 - works, but is not elegant:

B b = bDao.read(1);
Set<A> aset = b.getA();        // get the set of related A entities
Iterator<A> i = aset.iterator();
// the loop removes 'b' from all related A entities,
// breaking the relationship on the A side (A is the owner)
while (i.hasNext()) {
    A a = i.next();
    a.bset.remove(b);  // remove entity 'b' from the related 'a' entity
    aDao.update(a);    // key point: this line breaks the relation in the database
}
bDao.delete(b);        // 'b' can be deleted because no related A entities remain

So, my question: is there a more convenient way to delete the non-owner entity (B in my example) of a bidirectional many-to-many association, together with all of its relations in the join table?
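One possible answer - a sketch of a common workaround rather than anything from the original thread: since the AB join table is not mapped as an entity, the cleanup can be pushed into the B DAO as a native SQL statement, so callers never have to walk the A side. The class name BDao, the injected SessionFactory, the getId() accessor and the presence of an active transaction are assumptions:

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class BDao {
    private final SessionFactory sessionFactory;

    public BDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Deletes 'b' together with its rows in the join table, without
    // loading and updating every related A entity first.
    public void delete(B b) {
        Session session = sessionFactory.getCurrentSession(); // assumes an active transaction
        session.createSQLQuery("delete from AB where b_id = :id")
               .setLong("id", b.getId())
               .executeUpdate();
        session.delete(b);
    }
}

The trade-off is that the native delete bypasses the session's in-memory state: any A instance already loaded with b in its bset keeps that stale reference until it is refreshed or evicted, so this fits best when the related A collections are not held in the same session.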

    Read the article

  • Kohana 3 ORM limitations

    - by yoda
Hi, what are the limitations of the Kohana 3 ORM regarding table relationships? I'm trying to modify the built-in Auth module so that it additionally accepts groups of users, which gives me the following tables:

groups
groups_users
roles
roles_groups
users
user_tokens

By default the module works without groups, linking users to roles through a third table named roles_users, but I need to add groups in between. As the table names suggest, I'm linking groups to users and roles to groups; what I'm failing at is building the ORM code for it. So that's pretty much the question here: is the ORM limited to relationships spanning two tables, or can it handle a chain of three (users to groups to roles) in this case? Cheers!

    Read the article

  • Django many-to-many relationship to self with extra data, how do I select from a certain direction?

    - by Jake
I have some hierarchical data where each Set can have many members and can belong to more than one Set (group). Here are the models:

class Set(models.Model):
    ...
    groups = models.ManyToManyField('self', through='Membership', symmetrical=False)
    members = models.ManyToManyField('self', through='Membership', symmetrical=False)

class Membership(models.Model):
    group = models.ForeignKey(Set, related_name='Members')
    member = models.ForeignKey(Set, related_name='Groups')
    order = models.IntegerField(default=-1)

I want to know how to get all the members or all the groups for a Set instance. I think I can do it as follows, but it's not very logical - can anyone tell me what's going on and how I should be doing it?

# This gives me a set of Sets,
# which seems to be the groups this Set belongs to
set_instance.set_set.all()

# These give me a set of Memberships, not Sets
set_instance.Members.all()
set_instance.Groups.all()

# These both return a set of Sets,
# which seem to be the members of this one
set_instance.members.all()
set_instance.groups.all()

    Read the article

  • How to organise a many to many relationship in MongoDB

    - by Gareth Elms
I have two collections: Users and Groups. A user can be a member of any number of groups, and a user can also be an owner of any number of groups. In a relational database I'd probably have a third table called UserGroups with a UserID column, a GroupID column and an IsOwner column. I'm using MongoDB, and I'm sure there is a different approach to this kind of relationship in a document database. Should I embed the list of groups and the list of groups owned inside the Users collection as two arrays of ObjectIds? Should I also store the lists of members and owners in the Groups collection as two arrays, effectively mirroring the relationship and duplicating the relationship information? Or is a bridging UserGroups collection a legitimate concept in document databases for many-to-many relationships? Thanks
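For illustration, a minimal sketch of the embedding approach using the MongoDB Java driver. The database name, the memberOf/ownerOf field names and the sample data are assumptions, not anything prescribed by MongoDB:

import java.util.List;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import org.bson.types.ObjectId;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.addToSet;

public class GroupMembershipSketch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> users =
                    client.getDatabase("app").getCollection("users");

            ObjectId userId = new ObjectId();
            ObjectId groupId = new ObjectId();
            users.insertOne(new Document("_id", userId).append("name", "gareth"));

            // Membership and ownership live in two embedded arrays of group ids;
            // $addToSet keeps them duplicate-free.
            users.updateOne(eq("_id", userId), addToSet("memberOf", groupId));
            users.updateOne(eq("_id", userId), addToSet("ownerOf", groupId));

            // "Which groups does this user belong to?" is a single document read.
            Document u = users.find(eq("_id", userId)).first();
            List<ObjectId> groups = u.getList("memberOf", ObjectId.class);
            System.out.println(groups);

            // "Who are the members of this group?" queries the embedded array;
            // eq() on an array field matches documents containing the value.
            for (Document member : users.find(eq("memberOf", groupId))) {
                System.out.println(member.getString("name"));
            }
        }
    }
}

With this shape the relationship is stored on the user side only, so nothing has to be kept in sync; an index on memberOf (a multikey index, since the field is an array) keeps the reverse lookup of a group's members cheap. A bridging collection mainly starts to pay off when the relationship itself carries a lot of data or the arrays grow without bound.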

    Read the article

  • I Admit I Misspoke

    - by Patrick Liekhus
OK.  I admit it.  In my last post I mentioned that we moved the XAF DSL to the Entity Framework.  This has caused a lot of confusion.  I meant to say that we have used the ADO.NET Entity Data Model extensions.  This is the design surface that can be tailored to create Entity Framework models. We leveraged the code generation within the ADO.NET Entity Data Model (EDMX) file to generate XAF/XPO classes.  This allows you to visually create the entity model, set a few XAF properties and then generate the business objects from there.  I am presenting all these topics at the Kansas City Developers Conference on June 19th and will post the presentation after the conference.  I have a full presentation that demonstrates the power of the ADO.NET Entity Data Model extensions, creates a small project and then adds the OData layer to XAF to connect to PowerPivot in Excel 2010. The latest code can be found at http://efxaf.codeplex.com. More details to come soon.  Sorry for the confusion in the last post. Thanks again.

    Read the article
