Search Results

Search found 14590 results on 584 pages for 'entity manager'.


  • Databinding to an Entity Framework in WPF

    - by King Chan
    Is it a good idea to databind directly to Entity Framework entities in WPF? I created a singleton Entity Framework context so there is only one connection that isn't opened and closed all the time. That lets me pass an entity to any class, modify it, and push the changes to the database. Having every ViewModel pull entities from the same context and bind them to its View saves me from mapping to new objects, but now I see the problem of never using a fresh context: a ViewModel binds to an entity, someone else updates the data, and the ViewModel keeps displaying the stale values because the context is never disposed and refreshed. The alternative is to create a new context and dispose of it for each operation, but if I want to pass entities around, that creates conflicts between the context and the entities. What is the suggested way of handling this?
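
    For contrast, here is a minimal sketch of the per-operation context pattern the question is weighing; OrdersEntities, Customers and the property names are hypothetical and not taken from the question.

        using System;
        using System.Linq;

        class CustomerService
        {
            public void RenameCustomer(int customerId, string newName)
            {
                // One short-lived context per unit of work: the row is re-read fresh,
                // modified, and saved before the context is disposed.
                using (var context = new OrdersEntities())
                {
                    var customer = context.Customers.First(c => c.CustomerID == customerId);
                    customer.Name = newName;
                    context.SaveChanges();
                }
            }
        }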

    Read the article

  • What is an Entity? Why is it called Entity?

    - by rkrauter
    What is the deal with entities (when talking about the Entity Framework)? From what I understand, an entity is pretty much an in-memory representation of a data store, like SQL tables. Entities are smart enough to track changes and apply those changes to the data store. Is there anything more to it? Thanks in advance.
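
    As a small illustration of that change tracking, here is a sketch assuming a hypothetical NorthwindEntities model generated by the designer (the names are not from the question):

        using System;
        using System.Linq;

        class ChangeTrackingDemo
        {
            static void Main()
            {
                using (var context = new NorthwindEntities())
                {
                    var product = context.Products.First(p => p.ProductID == 1);
                    product.UnitPrice = 19.99m;

                    // The context noticed the change; the entity's state entry is now Modified.
                    var entry = context.ObjectStateManager.GetObjectStateEntry(product);
                    Console.WriteLine(entry.State);

                    // SaveChanges turns the tracked change into an UPDATE against the store.
                    context.SaveChanges();
                }
            }
        }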

    Read the article

  • Entity Framework self referencing entity deletion.

    - by Viktor
    Hello. I have a structure of folders like this:

        Folder1
            Folder1.1
            Folder1.2
        Folder2
            Folder2.1
                Folder2.1.1

    and so on. The question is how to cascade-delete them (i.e. when I remove Folder2, all of its children are deleted as well). I can't set an ON DELETE action because MSSQL does not allow it here. Can you give some suggestions?

    UPDATE: I wrote this stored procedure; can I just leave it as is, or does it need some modifications?

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE PROCEDURE sp_DeleteFoldersRecursive
            @parent_folder_id int
        AS
        BEGIN
            SET NOCOUNT ON;

            IF @parent_folder_id = 0
                RETURN;

            CREATE TABLE #temp(fid INT);
            DECLARE @Count INT;

            INSERT INTO #temp(fid)
            SELECT FolderId FROM Folders WHERE FolderId = @parent_folder_id;
            SET @Count = @@ROWCOUNT;

            WHILE @Count > 0
            BEGIN
                INSERT INTO #temp(fid)
                SELECT FolderId FROM Folders
                WHERE EXISTS (SELECT FolderId FROM #temp WHERE Folders.ParentId = #temp.fid)
                  AND NOT EXISTS (SELECT FolderId FROM #temp WHERE Folders.FolderId = #temp.fid);
                SET @Count = @@ROWCOUNT;
            END

            DELETE Folders
            FROM Folders
            INNER JOIN #temp ON Folders.FolderId = #temp.fid;

            DROP TABLE #temp;
        END
        GO
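
    As a rough client-side alternative to the stored procedure (not the asker's approach), the same cascade can be done by walking the self-reference from Entity Framework; the Folder entity, Children navigation property and MyEntities context are assumed names:

        using System.Linq;

        static class FolderCascade
        {
            // Depth-first delete of a folder subtree through the self-referencing association.
            public static void DeleteFolderTree(MyEntities context, Folder folder)
            {
                if (!folder.Children.IsLoaded)
                {
                    folder.Children.Load();               // EF v1 needs explicit loading
                }
                foreach (var child in folder.Children.ToList())
                {
                    DeleteFolderTree(context, child);     // delete the subtree first
                }
                context.DeleteObject(folder);             // then the folder itself
            }
            // Usage: DeleteFolderTree(context, root); context.SaveChanges();
        }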

    Read the article

  • Entity Framework Version 1 – Brief Synopsis and Tips – Part 1

    - by Rohit Gupta
    To do eager loading, use projections (e.g. from c in context.Contacts select new { c, c.Addresses }) or use the Include query builder method (Include("Addresses")). If the data is a multi-level hierarchy, eager load all the relationships with include paths such as customers.Include("Order.OrderDetail") to include the Order and OrderDetail collections, or customers.Include("Order.OrderDetail.Location") to include the Order, OrderDetail and Location collections with a single Include statement.
    ============================================================
    - If the query uses joins, the Include() query builder method will be ignored; use nested queries instead.
    - If the query does projections, the Include() query builder method will also be ignored.
    - Use Address.ContactReference.Load() or Contact.Addresses.Load() if you need to deferred-load a specific entity; this results in extra round trips to the database.
    - ObjectQuery<> cannot return anonymous types; it will return an ObjectQuery<DbDataRecord>.
    - Only the Include method can be appended to LINQ query methods, but any LINQ query method can be appended to query builder methods. If you need to append a query builder method (other than Include) after a LINQ method, cast the IQueryable<Contact> to ObjectQuery<Contact> and then append the query builder method to it.
    ============================================================
    - Query builder methods are the Select, Where and Include methods, which take Entity SQL as parameters, e.g. "it.StartDate, it.EndDate".
    - When query builder methods do a projection they return ObjectQuery<DbDataRecord>, so to iterate over that collection use contact.Item["Name"].ToString(). When LINQ to Entities methods do a projection they return a collection of anonymous types, so the collection is strongly typed and supports IntelliSense.
    - The EF object context can track changes only on entities, not on anonymous types.
    - If you use a defining query for an EntitySet, the EntitySet becomes read-only, since a defining query is the same as a view (which is treated as read-only by default). However, if you want to use that EntitySet for inserts/updates/deletes, you need to map stored procs (as created in the DB) to the insert/update/delete functions of the entity in the designer.
    - You can use either the Execute method or the ToList() method to bind data to data sources/binding sources. If you use the Execute method, remember that you can traverse the ObjectResult<> collection (returned by Execute) only once.
    - In WPF, use ObservableCollection to bind to data sources, to keep track of changes and let EF send updates to the DB automatically.
    - Use extension methods to add logic to entities, e.g. create extension methods for the EntityObject class, or create a method in an ObjectContext partial class and pass the entity as a parameter, then call that method as desired from within each entity.
    ============================================================
    DefiningQueries and stored procedures: for custom entities, one can use a DefiningQuery or a stored procedure, so the custom entity collection will be populated using the DefiningQuery (of the EntitySet) or the sproc.
    - If you use a sproc to populate the entity collection, query execution is immediate, the execution happens on the server side, and any filters will be applied in the client app.
    - If we use a DefiningQuery instead, the queries are composable, meaning any filters applied to the EntitySet are sent together as a single query to the DB, returning only filtered results.
    - If the sproc returns results that cannot be mapped to an existing entity, first create the entity/EntitySet in the CSDL using the designer, then create a dummy entity/EntitySet using XML in the SSDL. When creating the EntitySet in the SSDL for this dummy entity, use T-SQL that does not return any rows but does return the relevant columns, e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2. Also ensure that the entity created in the SSDL uses the SQL data types and not the .NET data types. If you are unable to open the EDMX file in the designer, note the errors; they give precise information on what is wrong.
    - The third option is to simply create a native query in the SSDL using a Function element:

          <Function Name="PaymentsforContact" IsComposable="false">
            <CommandText>
              SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities
            </CommandText>
          </Function>

      Then map this Function to an existing entity. This is a quick way to get a custom entity that is a regular entity with renamed columns or additional (computed) columns. The disadvantage is that it will return all the rows from the defining query, and any filter (if defined) will be applied only on the client side (after getting all the rows from the DB). If you use a DefiningQuery instead, it can be used as a composable query.
    - The fourth option (for mapping read stored proc results to a non-existent entity) is to create a view in the database that returns all the fields the sproc returns, update the model so that it contains this view as an entity, and then map the read sproc to this view entity. The other option would be to simply create the view and remove the sproc altogether.
    ============================================================
    To execute a sproc that does not return an entity, use an EntityCommand to execute it. You cannot call a sproc FunctionImport that does not return entities from code; the only way is to use SSDL function calls via EntityCommand. This changes with Entity Framework version 4, where you can return scalar types, complex types, entities, or nothing (non-query).
    ============================================================
    When a UDF is created as a Function in the SSDL, we need to set the Name and IsComposable properties on the Function element. IsComposable is always false for sprocs; for UDFs set it to true. You cannot call a UDF Function from code, since you cannot import a UDF Function into the CSDL model (with version 1 of EF); only stored procedures can be imported and then mapped to an entity.
    ============================================================
    Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (insert, update and delete). Because Payment has an association to Reservation, we need to pass both the PaymentId and the ReservationId to the delete sproc, even though just the PaymentId is the PK on the Payment table.
    ============================================================
    When mapping insert, update and delete sprocs to an entity, ensure that either all three or none are mapped.
    Further, if you have a base class and a derived class in the CSDL, you must map (insert, update, delete) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation (that base and derived entity methods must all be mapped) does not apply when you are mapping read stored procedures.
    ============================================================
    You can write stored procedure SQL directly into the SSDL by creating a Function element in the SSDL; once created, you can map this Function to a CSDL entity directly in the designer during Function Import.
    ============================================================
    You can do entity splitting such that one entity maps to multiple tables in the DB. For example, the Customer entity currently derives from the Contact entity and in addition also references the ContactPersonalInfo entity. One can copy all properties from the ContactPersonalInfo entity into the Customer entity, then delete the ContactPersonalInfo entity, and finally map the copied properties to the ContactPersonalInfo table in Table Mapping (by adding another table, ContactPersonalInfo, to the Table Mapping); this is called entity splitting. Now when you insert a Customer record, it will automatically create SQL to insert records into the Contact, Customers and ContactPersonalInfo tables, even though you have a single entity called Customer in the CSDL.
    ============================================================
    There is Table per Type inheritance, where one EDM entity derives from another EDM entity and absorbs the inherited entity's properties. For example, in the BreakAway Geek Adventures EDM, the Customer entity derives (inherits) from the Contact entity and absorbs all the properties of the Contact entity. Thus when you create a Customer entity in code and then call context.SaveChanges, the object context will first create the T-SQL to insert into the Contact table, followed by T-SQL to insert into the Customer table.
    ============================================================
    Then there is Table per Hierarchy inheritance, where different types are created based on a condition (similar to applying a condition to filter an entity to contain filtered records); the difference is that the filter condition populates a new entity type (derived from the base entity). In the BreakAway sample the example is the Lodging entity, which is an abstract entity, and the Resort and NonResort entities, which derive from Lodging; records are filtered based on the value of the Resort boolean field.
    ============================================================
    Then there is Table per Concrete Type inheritance, where we create a concrete entity for each table in the database. In the BreakAway sample there is an entity for the Reservation table and another entity for the OldReservation table, even though both tables contain the same number of fields.
    The OldReservation entity can then inherit from the Reservation entity and be configured to remove all scalar properties from the entity (since it inherits the properties from Reservation and filters based on the ReservationDate field).
    ============================================================
    Complex types (complex properties): entities in EF can also contain complex properties (in addition to scalar properties), and these complex properties reference a ComplexType (not an EntityType). DropDownList, ListBox, RadioButtonList, CheckBoxList and BulletedList are examples of list server controls (not data-bound controls); these controls cannot use complex properties during databinding, they need scalar properties. So if an entity contains complex properties and you need to bind those to list server controls, use projections to return scalar properties and bind them to the control (the disadvantage is that projected collections are not tracked by the object context and hence cannot persist changes made to the projected collections bound to controls). ObjectDataSource and EntityDataSource do account for complex properties: one can bind entities with complex properties to data source controls and they will be tracked for changes, with no additional plumbing needed to persist changes to the collections bound to controls. So data-bound controls like GridView and FormView need to use EntityDataSource or ObjectDataSource as the data source for entities that contain complex properties, so that changes made to the data source through the GridView can be persisted to the DB (enabling the controls for updates). If you cannot use the EntityDataSource, you need to flatten the complex type properties using projections. With EF version 4, complex types are supported by the designer and you can add/remove/compose complex types directly in the designer.
    ============================================================
    Conditional mapping is like Table per Hierarchy inheritance, where entities inherit from a base class and conditions are then used to populate the EntitySet. Conditional mapping has limitations, since you can only use =, IS NULL and IS NOT NULL conditions. If you need more operators for filtering/mapping conditionally, use a QueryView (or possibly a DefiningQuery) to create a read-only entity. QueryViews are read-only by default; the EntitySet created by the QueryView is enabled for change tracking by the ObjectContext, but the ObjectContext cannot create insert/update/delete T-SQL statements for these entities when SaveChanges is called, since it is a QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the designer.
    ============================================================
    Difference between QueryView and DefiningQuery: a QueryView is defined in the mapping (MSL) section of the EDM XML, whereas a DefiningQuery is defined in the store schema (SSDL). A QueryView is written in Entity SQL and is thus database agnostic and can be used against any database/data layer; a DefiningQuery is written in the database's own language (e.g. T-SQL or PL/SQL), so you have more control.
    ============================================================
    Performance: lazy loading is deferred loading done automatically. Lazy loading is supported with EF version 4 and is on by default.
    If you need to turn it off, use context.ContextOptions.LazyLoadingEnabled = false. To improve performance, consider precompiling the ObjectQuery using the CompiledQuery.Compile method (a short sketch follows).
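
    To make the Include and CompiledQuery tips above concrete, here is a minimal sketch in the spirit of the post; the BAEntities context and the Contact/Addresses/LastName members echo the BreakAway-style model the post refers to and are assumptions here, not code from the article.

        using System;
        using System.Data.Objects;   // ObjectQuery, CompiledQuery
        using System.Linq;

        class EagerLoadingAndPrecompiledQueries
        {
            // Compiled once; later calls skip the LINQ-to-Entities translation cost.
            static readonly Func<BAEntities, string, IQueryable<Contact>> ContactsByLastName =
                CompiledQuery.Compile((BAEntities ctx, string lastName) =>
                    ctx.Contacts.Where(c => c.LastName == lastName));

            static void Main()
            {
                using (var context = new BAEntities())
                {
                    // Eager load the Addresses collection in the same round trip;
                    // LINQ operators may follow the Include query builder method.
                    var contacts = context.Contacts.Include("Addresses")
                                                   .Where(c => c.LastName == "Lerman")
                                                   .ToList();

                    var smiths = ContactsByLastName(context, "Smith").ToList();
                }
            }
        }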

    Read the article

  • Ubuntu 12.04 - after update it boots in command line only (no windows manager)

    - by ZSW
    I have Ubuntu 12.04, which had been working great for me since I installed it a year ago. Yesterday I ran the updates that Update Manager recommended (the last time I ran updates was about a week ago). After the updates were installed, it asked me to reboot. I rebooted, and it stopped at a login prompt (a command line, with no windows). I logged in and tried to start the window manager manually; the screen went blank and stayed blank. I waited several minutes and then turned the machine off. How can I figure out what the problem is? Which logs should I check?

    Read the article

  • Using unordered_multimap as entity and component storage

    - by natebot13
    The Setup: I've made a few games (more like animations) using the object-oriented method, with base classes for objects that extend them and objects that extend those, and found I couldn't wrap my head around expanding that system to larger game ideas. So I did some research and discovered the entity-component system of designing games. I really like the idea, and thoroughly understood the usefulness of it after reading Byte54's perfect answer here: Role of systems in entity systems architecture. With that said, I have decided to create my current game idea using the described entity-component system. Having basic knowledge of C++ and SFML, I would like to implement the backbone of this entity-component system using an unordered_multimap, without classes for the entities themselves.

    Here's the idea: an unordered_multimap stores entity IDs as the key, while the value is an inherited Component object. Example:

        ID | Component
        ---+------------
        0  | Movable
        0  | Accelable
        0  | Renderable
        1  | Movable
        1  | Renderable
        2  | Renderable

    So, according to this map of objects, the entity with ID 0 has three components: Movable, Accelable, and Renderable. These component objects store the entity-specific data, such as the location, the acceleration, and render flags. The entity is simply an ID, with the components attached to that ID describing its attributes.

    Problem: I want to store the component objects within the map, allowing the map to have full ownership of the components. The problem I'm having is that I don't quite understand enough about pointers, shared pointers, and references to get that set up. How can I go about initializing these components, with their various member variables, within the unordered_multimap? Can the base component class take on the member variables of its child classes when defining the map as unordered_multimap<int, Component>?

    Requirements: I need to be able to grab an entity, with all of its attached components, and access members from the components in order to do the necessary calculations and reassignments for position, velocity, etc. Need a clarification? Post a comment with your concerns and I will gladly edit or comment back! Thanks in advance! natebot13

    Read the article

  • Entity Framework won't SaveChanges on new entity with two-level relationship

    - by Tim Rourke
    I'm building an ASP.NET MVC site using the ADO.NET Entity Framework. I have an entity model that includes these entities, associated by foreign keys:

    Report (ID, Date, Heading, Report_Type_ID, etc.)
    SubReport (ID, ReportText, etc.) - one-to-one relationship with Report.
    ReportSource (ID, Name, Description) - one-to-many relationship with SubReport.
    ReportSourceType (ID, Name, Description) - one-to-many relationship with ReportSource.
    Contact (ID, Name, Address, etc.) - one-to-one relationship with ReportSource.

    There is a Create.aspx page for each type of SubReport. The post event method returns a new SubReport entity. In my post method, I followed this process:

    1. Set the properties for a new Report entity from the page's fields.
    2. Set the SubReport entity's specific properties from the page's fields.
    3. Set the SubReport entity's Report to the new Report entity created in 1.
    4. Given an ID provided by the page, look up the ReportSource and set the SubReport entity's ReportSource to the found entity.
    5. SaveChanges.

    This workflow succeeded just fine for a couple of weeks. Then last week something changed and it doesn't work any more. Now, instead of saving, I get this exception:

    UpdateException: "Entities in 'DIR2_5Entities.ReportSourceSet' participate in the 'FK_ReportSources_ReportSourceTypes' relationship. 0 related 'ReportSourceTypes' were found. 1 'Report_Source_Types' is expected."

    The debug visualizer shows the following: the SubReport's ReportSource is set and loaded, and all of its properties are correct; the ReportSource has a valid ReportSourceType entity attached. In SQL Profiler the prepared SQL statement looks OK. Can anybody point me to what obvious thing I'm missing? TIA

    Notes: The Report and SubReport are always new entities in this case. The Report entity contains properties common to many types of reports and is used for generic queries. SubReports are specific reports with extra parameters varying by type. There is actually a different entity set for each type of SubReport, but this question applies to all of them, so I use SubReport as a simplified example.
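
    For reference, here is a sketch of the workflow described above with the related source loaded through the same context; the names beyond those quoted in the question (ReportSourceSet, ID) are assumptions, and this is offered only as an illustration, not as the confirmed fix.

        using System;
        using System.Linq;

        class ReportPosting
        {
            public SubReport CreateSubReport(DateTime date, string heading, string text, int sourceId)
            {
                using (var context = new DIR2_5Entities())
                {
                    var report = new Report { Date = date, Heading = heading };
                    var subReport = new SubReport { ReportText = text, Report = report };

                    // Querying the source through the same context (and including its type)
                    // keeps the existing ReportSource -> ReportSourceType link tracked, so
                    // SaveChanges does not see a source with zero related types.
                    subReport.ReportSource = context.ReportSourceSet
                                                    .Include("ReportSourceType")
                                                    .First(s => s.ID == sourceId);

                    context.AddToSubReportSet(subReport);   // generated AddTo method; name assumed
                    context.SaveChanges();
                    return subReport;
                }
            }
        }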

    Read the article

  • Cannot connect to Onda GSM phone using network-manager

    - by marcobra
    On Oneiric (unstable, fully updated/upgraded), using a Network Manager broadband connection with: ID 19d2:1015 ONDA Communication S.p.A. I would like some hints to make an Onda GSM USB key (19d2:1015) work. route shows me:

        Destination     Gateway     Genmask         Flags  Metric  Ref  Use  Iface
        default         *           0.0.0.0         U      0       0    0    usb0
        link-local      *           255.255.0.0     U      1000    0    0    usb0

    It connects and the usb0 device gets an IP address from the provider, but I cannot get web pages (even when typing a known IP into Firefox). It seems to be something about the default gateway route. There is no active firewall on this PC, by the way; everything works using sakis3g, wvdial or gnome-ppp.

    Read the article

  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens?

    At the moment, I have a GameManager class which owns a component manager and an entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity, an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists.

    A problem with this is that the collection of entities operated on by each system is determined solely by the individual systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only Update/Draw those collections. They'd have to either be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or I'd have to enable/disable entities in a group per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges.

    What's a good way to handle this? A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome because some things are always visible, or I might want to continue to display the game under a translucent menu window.

    Another option would be to add a "layer" key to the GameManager class, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, it kinda feels like a hack (see the sketch below).

    If I've coded myself into a corner, feel free to throw tomatoes as long as they're labeled with a helpful suggestion. Hooray for learning curves.
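
    A rough sketch of that layer idea, with invented names, might look like this on the system side:

        using System.Collections.Generic;

        enum Layer { World, Menu, Hud }

        class RenderSystem
        {
            // Entity ids this system already tracks (via UpdateEntity), grouped by layer.
            private readonly Dictionary<Layer, List<int>> entitiesByLayer =
                new Dictionary<Layer, List<int>>();

            public void Draw(IList<Layer> activeLayerStack)
            {
                // Draw bottom-to-top, e.g. the game world first, then a translucent menu over it.
                foreach (var layer in activeLayerStack)
                {
                    List<int> ids;
                    if (!entitiesByLayer.TryGetValue(layer, out ids))
                        continue;

                    foreach (var entityId in ids)
                        DrawEntity(entityId);
                }
            }

            private void DrawEntity(int entityId)
            {
                // Look up this entity's renderable/position components and draw it.
            }
        }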

    Read the article

  • Entity Framework 4.1 Release Candidate Released

    - by shiju
    The Microsoft ADO.NET team has announced the Release Candidate of Entity Framework 4.1. You can download the Entity Framework 4.1 Release Candidate from here, or use NuGet to get it. The Code First approach is available with Entity Framework 4.1. I have upgraded my ASP.NET MVC 3 / EF Code First demo web app (http://efmvc.codeplex.com) to the Entity Framework 4.1 version.

    Read the article

  • Entity Framework 4.0: My Favorite Books

    - by nannette
    I'm in the process of reading several Entity Framework 4.0 books. I'm going to recommend two such books:

    1) Programming Entity Framework: Building Data Centric Apps with the ADO.NET Entity Framework by Julia Lerman
    2) Entity Framework 4.0 Recipes: A Problem-Solution Approach by Larry Tenny and Zeeshan Hirani

    Visit these Entity Framework 4.0 Quick Start videos by Julia Lerman. The book links include numerous detailed reviews. If you can only afford one of the books, I'd recommend Julia's book as the...(read more)

    Read the article

  • Many sources of movement in an entity system

    - by Sticky
    I'm fairly new to the idea of entity systems, having read a bunch of stuff (most usefully, this great blog and this answer). Though I'm having a little trouble understanding how something as simple as manipulating the position of an object by an undefined number of sources should work.

    That is, I have my entity, which has a position component. I then have some event in the game which tells this entity to move a given distance, in a given time. These events can happen at any time, and will have different values for position and time. The result is that they'd be compounded together. In a traditional OO solution, I'd have some sort of MoveBy class that contains the distance/time, and an array of those inside my game object class. Each frame, I'd iterate through all the MoveBys and apply them to the position. If a MoveBy has reached its finish time, remove it from the array.

    With the entity system, I'm a little confused as to how I should replicate this sort of behaviour. If there were just one of these at a time, instead of being able to compound them together, it'd be fairly straightforward (I believe) and look something like this:

    - PositionComponent containing x, y
    - MoveByComponent containing x, y, time
    - Entity which has both a PositionComponent and a MoveByComponent
    - MoveBySystem that looks for an entity with both these components, and adds the value of the MoveByComponent to the PositionComponent. When the time is reached, it removes the component from that entity.

    I'm a bit confused as to how I'd do the same thing with many MoveBys. My initial thoughts are that I would have:

    - PositionComponent, MoveByComponent the same as above
    - MoveByCollectionComponent which contains an array of MoveByComponents
    - MoveByCollectionSystem that looks for an entity with a PositionComponent and a MoveByCollectionComponent, iterating through the MoveByComponents inside it, applying/removing as necessary.

    I guess this is a more general problem of having many of the same component and wanting a corresponding system to act on each one. My entities contain their components inside a hash of component type -> component, so strictly I have only one component of a particular type per entity. Is this the right way to be looking at this? Should an entity only ever have one component of a given type at all times?
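
    A compact sketch of the MoveByCollection idea described above (all names invented) could look like this:

        using System.Collections.Generic;

        class MoveBy
        {
            public float X, Y;       // total offset to apply
            public float Duration;   // seconds over which to apply it
            public float Elapsed;    // seconds applied so far
        }

        class MoveByCollectionComponent
        {
            public readonly List<MoveBy> Moves = new List<MoveBy>();
        }

        class PositionComponent
        {
            public float X, Y;
        }

        class MoveByCollectionSystem
        {
            // Called once per frame for every entity holding both components.
            public void Update(PositionComponent position, MoveByCollectionComponent moves, float dt)
            {
                for (int i = moves.Moves.Count - 1; i >= 0; i--)
                {
                    var move = moves.Moves[i];
                    float step = System.Math.Min(dt, move.Duration - move.Elapsed);
                    float fraction = step / move.Duration;

                    position.X += move.X * fraction;   // all active moves compound
                    position.Y += move.Y * fraction;
                    move.Elapsed += step;

                    if (move.Elapsed >= move.Duration)
                        moves.Moves.RemoveAt(i);       // finished moves drop out
                }
            }
        }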

    Read the article

  • Oracle Enterprise Manager 12c Integration With Oracle Enterprise Manager Ops Center 11g

    - by Scott Elvington
    In a blog entry earlier this year, we announced the availability of the Ops Center 11g plug-in for Enterprise Manager 12c. In this article I will walk you through the process of deploying the plug-in on your existing Enterprise Manager agents and show you some of the capabilities the plug-in provides. We'll also look at the integration from the Ops Center perspective. I will show you how to set up the connection to Enterprise Manager and give an overview of the information that is available.

    Installing and Configuring the Ops Center Plug-in

    The plug-in is available for download from the Self Update page (Setup -> Extensibility -> Self Update). The plug-in name is "Ops Center Infrastructure Stack". Once you have downloaded the plug-in, you can navigate to the Plug-in management page (Setup -> Extensibility -> Plug-ins) to begin deployment. The plug-in must first be deployed on the Management Server. You will need to provide the repository password of the SYS user in order to deploy the plug-in to the Management Server.

    There are a few prerequisites that need to be completed on the Ops Center side before the plug-in can be deployed and configured on the desired Enterprise Manager agents. Any servers, whether physical or virtual, for which you wish to see metrics and alerts need to be managed by Ops Center. This means that the operating system needs to have an Ops Center management agent installed as a minimum. The plug-in can provide even more value when Ops Center is also managing the other "layers of the stack", for example the service processor, the blade chassis or the XSCF of an M-Series server. The more information that Ops Center has about the stack, the more information that will be visible within Enterprise Manager via the plug-in.

    In order to access the information within Ops Center, the plug-in requires a user to connect as. This user does not require any particular Ops Center permissions or roles; it simply needs to exist. You can create a specific "EMPlugin" user within Ops Center or use an existing user. Oracle recommends creating a specific, non-privileged user account within Ops Center for this purpose. From the Ops Center Administration section, select Enterprise Controller, click the Users tab and finally click the Add User icon to create the desired user account.

    For the purpose of this article I have discovered and managed the OS and service processor of the server where my Enterprise Manager 12c installation is hosted. With the plug-in deployed to the Management Server and the setup done within Ops Center, we're now ready to deploy the plug-in to the agents and configure the targets to communicate with the Ops Center Enterprise Controller.

    From the Setup menu, select Add Targets, then Add Targets Manually. Select the bottom radio button "Add Targets Manually by Specifying Target Monitoring Properties", select Infrastructure Stack from the Target Type dropdown and finally, select the Monitoring Agent where you wish to deploy the plug-in. Click the Add Manually... button and fill in the details for the new target, using the appropriate hostname for your Enterprise Controller and the user and password details for the plug-in access user.

    After the target has been added to the agent, you will need to allow a few minutes for the initial data collection to complete. Once completed, you can see the new target in the All Targets list. All metric collections are enabled by default except one.
    To enable Infrastructure Stack Alarms collection, navigate to the newly added target and then to Target -> Monitoring -> Metric and Collection Settings. There you can click the "Disabled" link under Collection Schedule to enable collection and set your desired collection frequency.

    By default, a Warning level alert in Ops Center will equate to a Warning level event in Enterprise Manager, and a Critical alert will equate to a Critical event. This mapping can be altered in the Metric and Collection Settings as well. The default incident rules in Enterprise Manager only create incidents from Critical events, so keep this in mind in case you want to see incidents generated for Warning or Info level alerts from Ops Center. Also, because Enterprise Manager already monitors the OS through its Host target type, the plug-in does not pull OS alerts from Ops Center, so as to prevent duplication.

    In addition to alert propagation, the plug-in also provides data for several reports detailing the topology and configuration of the stack, as well as any hardware sensor data that is available. These are available from the Information Publisher Reports. Navigate there from the Enterprise -> Reports menu or directly from the Infrastructure Stack target of interest. As an example, here is a sample of the Hardware Sensors report showing some of the available sensor data. The report can also be exported to a CSV file format if desired.

    Connecting Ops Center to the Enterprise Manager Repository

    For an Enterprise Manager user, the plug-in provides deeper visibility into the state of the infrastructure underlying the databases and middleware. On the Ops Center side, there is also greater visibility into the targets running on the infrastructure. To set up the Ops Center data collection, just navigate to the Administration section and select the Grid Control link. Select the Configure/Connect action from the right-hand menu and complete the wizard forms to enable the connection to the Enterprise Manager repository and UI. Be sure to use the sysman account when configuring the database connection.

    Once the job completes and the initial data synchronization is done, you will see new Target tabs on your OS assets. The new tab lists all the Enterprise Manager targets and any alerts, availability and performance data specific to the selected target. It is also possible to use the GoTo icon to launch the Enterprise Manager BUI in the context of the specific target or alert to drill into more detail.

    Hopefully this brief overview of the integration between Enterprise Manager and Ops Center has provided a jumpstart to getting a more complete view of the full stack of your enterprise systems.

    Read the article

  • Organizing an entity system with external component managers?

    - by Gustav
    I'm designing a game engine for a top-down multiplayer 2D shooter game, which I want to be reasonably reusable for other top-down shooter games. At the moment I'm thinking about how something like an entity system in it should be designed.

    First I thought about this: I have a class called EntityManager. It should implement a method called Update and another one called Draw. The reason for separating logic and rendering is that I can then omit the Draw method when running a standalone server.

    EntityManager owns a list of objects of type BaseEntity. Each entity owns a list of components such as EntityModel (the drawable representation of an entity), EntityNetworkInterface, and EntityPhysicalBody. EntityManager also owns a list of component managers like EntityRenderManager, EntityNetworkManager and EntityPhysicsManager. Each component manager keeps references to the entity components. There are various reasons for moving this code out of the entity's own class and doing it collectively instead. For example, I'm using an external physics library, Box2D, for the game. In Box2D, you first add the bodies and shapes to a world (owned by the EntityPhysicsManager in this case) and add collision callbacks (which would be dispatched to the entity object itself in my system). Then you run a function which simulates everything in the system. I find it hard to find a better solution than doing this in an external component manager like this.

    Entity creation is done like this: EntityManager implements the method RegisterEntity(entityClass, factory), which registers how to create an entity of that class. It also implements the method CreateEntity(entityClass), which returns an object of type BaseEntity.

    Now comes my problem: how would the reference to a component be registered with the component managers? I have no idea how I would reference the component managers from a factory/closure.
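
    One way to wire this up (all names invented, sketched under the assumption that the EntityManager owns the component managers) is to route every added component back through the EntityManager, so the factories never need direct references to the managers:

        using System;
        using System.Collections.Generic;

        interface IComponentManager
        {
            void Register(int entityId, object component);
        }

        class BaseEntity
        {
            private readonly EntityManager owner;
            public int Id { get; private set; }

            public BaseEntity(EntityManager owner, int id) { this.owner = owner; Id = id; }

            public BaseEntity AddComponent(object component)
            {
                owner.OnComponentAdded(Id, component);   // registration goes through the manager
                return this;
            }
        }

        class EntityManager
        {
            private readonly Dictionary<Type, IComponentManager> managers =
                new Dictionary<Type, IComponentManager>();
            private readonly Dictionary<string, Func<BaseEntity, BaseEntity>> factories =
                new Dictionary<string, Func<BaseEntity, BaseEntity>>();
            private int nextId;

            public void RegisterManager(Type componentType, IComponentManager manager)
            {
                managers[componentType] = manager;
            }

            // The factory receives a bare entity and attaches whatever components it needs.
            public void RegisterEntity(string entityClass, Func<BaseEntity, BaseEntity> factory)
            {
                factories[entityClass] = factory;
            }

            public BaseEntity CreateEntity(string entityClass)
            {
                return factories[entityClass](new BaseEntity(this, nextId++));
            }

            public void OnComponentAdded(int entityId, object component)
            {
                IComponentManager manager;
                if (managers.TryGetValue(component.GetType(), out manager))
                    manager.Register(entityId, component);
            }
        }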

    Read the article

  • Prevent Custom Entity being deleted from Entity Framework during Update Model Wizard

    - by rbg
    In Entity Framework v1, if you create a custom entity to map it to a stored procedure during Function Import, and then select "Update Model from Database", the Update Model Wizard removes the custom entity (as added manually in the SSDL XML prior to the Function Import) from the SSDL. Does anyone know whether this limitation has been dealt with in Entity Framework v4? In other words, is there a way to prevent the wizard from removing the custom entity from the storage schema (SSDL) during a model update from the database?

    Read the article

  • Qualcomm Gobi WWAN modem no longer appears in Network Manager

    - by Andy E
    Not sure what happened here. I installed Cinnamon on my Ubuntu 12.10 environment yesterday, rebooted when finished and everything was working fine. I even used my WWAN modem after my fixed line broadband went down. However, after starting my machine this morning and seeing that my fixed line is still having problems (intermittently), I clicked the network applet and my WWAN device wasn't listed. It's not in the main network manager window either. It is still present on the system, however:

        $ lsusb
        Bus 001 Device 003: ID 05ca:18b0 Ricoh Co., Ltd Sony Vaio Integrated Webcam
        Bus 001 Device 004: ID 05c6:9221 Qualcomm, Inc. Gobi Wireless Modem (QDL mode)
        Bus 002 Device 005: ID 04e8:6865 Samsung Electronics Co., Ltd
        Bus 002 Device 002: ID 0409:005a NEC Corp. HighSpeed Hub
        Bus 003 Device 002: ID 147e:1000 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor
        Bus 008 Device 002: ID 044e:3017 Alps Electric Co., Ltd BCM2046 Bluetooth Device

    Debug output from modem-manager refers to a device that is blacklisted:

        modem-manager[10186]: [1355478137.024491] [mm-manager.c:866] device_added(): (tty/ttyUSB0): port's parent device is blacklisted
        modem-manager[10186]: [1355478137.024607] [mm-manager.c:875] device_added(): (tty/ttyS0): port's parent platform driver is not whitelisted
        modem-manager[10186]: [1355478137.024700] [mm-manager.c:875] device_added(): (tty/ttyS1): port's parent platform driver is not whitelisted
        ...

    I couldn't see anything relevant in the debug output for network-manager, but I've created a paste for it just in case. In /lib/udev/rules.d/77-mm-qdl-device-blacklist.rules, I found the following line that matches the device IDs from the lsusb output:

        # Generic Gobi QDL device
        ATTRS{idVendor}=="05c6", ATTRS{idProduct}=="9221", ENV{ID_MM_DEVICE_IGNORE}="1"

    I tried commenting out the second line and restarting network-manager/modem-manager but it didn't help. I've no idea where to go from here; does anyone else have any ideas?

    Read the article

  • Hibernate is persisting entity during flush when the entity has not changed

    - by Preston
    I'm having a problem where the entity manager is persisting an entity that I don't think has changed during the flush. I know the following code is the problem, because if I comment it out the entity isn't persisted. In this code all I'm doing is loading the entity and calling some getters.

        Query qry = em.createNamedQuery("Clients.findByClientID");
        qry.setParameter("clientID", clientID);
        Clients client = (Clients) qry.getSingleResult();
        results.setFname(client.getFirstName());
        results.setLname(client.getLastName());
        ...
        return results;

    Later, in a different method, I run another named query, which causes the entity manager to flush. For some reason the client loaded above is persisted. The reason this is a problem is that in the middle of all this, there is some old code making changes to the client via straight JDBC. When the entity manager persists, the changes made by the straight JDBC are lost. The theory we have at the moment is that the entity manager is comparing the entity to the underlying record, sees that it's different, and then persists it. Can someone explain or confirm the behavior we're seeing?

    Read the article

  • Entity System with C++

    - by Dono
    I'm working on a game engine using the entity system approach and I have some questions.

    How I see the entity system:

    - Components: a class with attributes, setters and getters (e.g. Sprite, PhysicBody, SpaceShip, ...)
    - Systems: a class with a list of components, i.e. the component logic (e.g. EntityManager, Renderer, Input, Camera, ...)
    - Entity: just an empty class with a list of components.

    What I've done: currently, I have a program that allows me to do this:

        // Create a new entity.
        Entity* entity = game.createEntity();

        // Add some components.
        entity->addComponent( new TransformableComponent() )
              ->setPosition( 15, 50 )
              ->setRotation( 90 )
              ->addComponent( new PhysicComponent() )
              ->setMass( 70 )
              ->addComponent( new SpriteComponent() )
              ->setTexture( "name.png" )
              ->addToSystem( new RendererSystem() );

    My question: does the system store a list of components or a list of entities? In the case where it stores a list of entities, I need to get the components of those entities on each frame; that's probably heavy, isn't it?

    Read the article

  • Entity framework, adding custom entity to diagram

    - by molgan
    Hello. I'm using the Entity Framework that comes with 3.5 SP1, and I'm trying to add a custom entity that I will populate with search results from a stored procedure. But I have problems with the designer: when I pick "Add -> Entity" and give it a name, and hit save, the whole diagram stops working with errors along the lines of "blah entities namespace could not be found", and all the other entities lose their references. It works when I add a table to it that exists in the database, but when the entity is custom it screws everything up. How should I fix this? I've imported the function; I just need to add an entity for it. I've read http://blogs.microsoft.co.il/blogs/gilf/archive/2009/03/13/mapping-stored-procedure-results-to-a-custom-entity-in-entity-framework.aspx but I can't get to step 3 because of all the errors when saving at step 2. /M

    Read the article

  • Getting MySQL to work with Entity Framework 4.0

    - by DigiMortal
    Does MySQL work with Entity Framework 4.0? The answer is: yes, it works! I just put up one experimental project to play with MySQL and Entity Framework 4.0, and in this posting I will show you how to get MySQL data into EF. I will also give some suggestions on how to deploy your applications to hosting and cloud environments.

    MySQL stuff

    As you may guess, you need MySQL running somewhere. I have MySQL installed on my development machine so I can also develop when I'm offline. The other thing you need is the MySQL Connector for the .NET Framework. Currently there is a development version of MySQL Connector/NET 6.3.5 available that supports Visual Studio 2010. Before you start, download MySQL and Connector/NET:

    - MySQL Community Server
    - Connector/NET 6.3.5

    If you are not a big fan of phpMyAdmin then you can try out a free desktop client for MySQL - HeidiSQL. I am using it and I am really happy with this program.

    NB! If you have just set up MySQL then also create a database with a couple of tables in it. To use all the features of Entity Framework 4.0, I suggest you use InnoDB or another engine that has support for foreign keys.

    Connecting MySQL to Entity Framework 4.0

    Now create a simple console project using Visual Studio 2010 and go through the following steps.

    1. Add a new ADO.NET Entity Data Model to your project. For the model, insert a name that is informative and that you are able to recognize later. Now you can choose how you want to create your model. Select "Generate from database" and click OK.

    2. Set up the database connection. Change the data connection and select MySQL Database as the data source. You may also need to set the provider; there is only one choice, so select it if the data provider combo shows an empty value. Click OK and insert the connection information you are asked for. Don't forget to click the test connection button to see if your connection data is okay. If everything works, click OK.

    3. Insert the context name. Now you should see the following dialog. Insert your data model name for the application configuration file and click OK. Click the next button.

    4. Select tables for the model. Now you can select the tables and views your classes are based on. I have a small database with events data. Uncheck the checkbox "Include foreign key columns in the model" - it is damn annoying to get them out of the model later. Also insert an informative and easy-to-remember name for your model. Click the finish button.

    5. Define your classes. Now it's time to define your classes. Here you can see what Entity Framework generated for you. Relations were detected automatically - that's why we needed foreign keys. The names of the classes and their members are not nice yet. After some modifications my class model looks like the following diagram. Note that I removed the attendees navigation property from the person class. Now my classes look nice and they follow the conventions I use when naming classes and their members. NB! Don't forget to check the properties of your classes (properties window) and modify their set names if the set names contain numbers (I changed the set name for Entity from Entity1 to Entities).

    6. Let's test! Now let's write a simple test program to see if MySQL data runs through Entity Framework 4.0 as expected. My program looks for events that I attended.
        using(var context = new MySqlEntities())
        {
            var myEvents = from e in context.Events
                           from a in e.Attendees
                           where a.Person.FirstName == "Gunnar" &&
                                 a.Person.LastName == "Peipman"
                           select e;

            Console.WriteLine("My events: ");

            foreach(var e in myEvents)
            {
                Console.WriteLine(e.Title);
            }
        }

        Console.ReadKey();

    And when I run it, I get the result shown in the screenshot on the right. I checked against the database and these results are correct. On the first run the connector seems slow, but this is only an effect of the first run: once the connector has been loaded into memory by Entity Framework, it works fast from that point on. Now let's see what we have to do to get our program to work in hosting and cloud environments where the MySQL connector is not installed.

    Deploying the application to hosting and cloud environments

    If your hosting or cloud environment has no MySQL connector installed, you have to ship the MySQL connector assemblies with your project. Add the following assemblies to your project's bin folder and include them in your project (otherwise they are not packaged by WebDeploy and the Azure tools):

    - MySQL.Data
    - MySQL.Data.Entity
    - MySQL.Web

    You can also add references to these assemblies and mark the references as Copy Local so that the assemblies are copied to the binary folder of your application. If you have references to these assemblies, then you don't have to include them in your project from the bin folder. Also add the following block to your application configuration file:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          ...
          <system.data>
            <DbProviderFactories>
              <add
                  name="MySQL Data Provider"
                  invariant="MySql.Data.MySqlClient"
                  description=".Net Framework Data Provider for MySQL"
                  type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data,
                        Version=6.2.0.0, Culture=neutral,
                        PublicKeyToken=c5687fc88969c44d"
              />
            </DbProviderFactories>
          </system.data>
          ...
        </configuration>

    Conclusion

    It was not hard to get the MySQL connector installed and MySQL connected to Entity Framework 4.0. To use the full power of Entity Framework we used the InnoDB engine, because it supports foreign keys. It was also easy to query our model. To get our project online we needed only a few easy modifications to our project and configuration files.

    Read the article

  • Keep Xubuntu Network Manager from overwriting resolv.conf

    - by leeand00
    How do I keep Xubuntu 11.10 from overwriting resolv.conf every time I reboot my machine? Every time I reboot, I get an overwritten resolv.conf that contains the words "# Generated by Network Manager" and no nameservers specified. I ran the following to get rid of Network Manager, but it still replaces my resolv.conf when I restart the machine:

        sudo apt-get --purge remove network-manager
        sudo apt-get --purge remove network-manager-gnome
        sudo apt-get --purge remove network-manager-pptp
        sudo apt-get --purge remove network-manager-pptp-gnome

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
    The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperable ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services) was the first Microsoft technology to support the Open Data Protocol, in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other applications are in the works.*

    This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2.

    Create the Web Application

    File –› New –› Project, select "ASP.NET Empty Web Application".

    Add the Entity Data Model

    Right click on the Web Application in the Solution Explorer and select "Add New Item..". Select "ADO.NET Entity Data Model" under "Data". Name the model "Northwind" and click "Add". In the "Choose Model Contents" dialog, select "Generate Model From Database" and click "Next". Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed, so select "Products" in the "Choose your Database Object" screen. Click "Finish". We are done creating our Entity Data Model. Save the Northwind.edmx file created.

    Add the WCF Data Service

    Right click on the Web Application in the Solution Explorer and select "Add New Item..". Select "WCF Data Service" from the list and call the service "DataService" (creative, huh?). Click "Add".

    Enable Access to the Data Service

    Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps.

        public class DataService : DataService< /* TODO: put your data source class name here */ >
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
                // Examples:
                // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
                // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Replace the comment that starts with "/* TODO:" with "NorthwindEntities" (the entity container name of the model we created earlier). WCF Data Services is locked down by default, FTW! No data is exposed without you explicitly setting it. You have to explicitly specify which entity sets you wish to expose and what rights are allowed by using SetEntitySetAccessRule. SetServiceOperationAccessRule, on the other hand, sets rules for a specified operation. Let us define an access rule to expose the Products entity set we created earlier. We use EntitySetRights.AllRead since we want to give read-only access. Our modified code is shown below.
        public class DataService : DataService<NorthwindEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    We are done setting up our OData feed! Compile your project. Right click on DataService.svc and select "View in Browser" to see the OData feed. To view the feed in IE, you must make sure that "Feed Reading View" is turned off. You set this under Tools -› Internet Options -› Content tab. If you navigate to "Products", you should see the Products feed. Note also that URIs are case sensitive, i.e. Products works but products doesn't.

    Filtering our data

    OData has a set of system query options you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request:

        /DataService.svc/Products?$filter=CategoryID eq 2

    At the time of this writing, the supported options are $orderby, $top, $skip, $filter, $expand, $format†, $select and $inlinecount.

    Pre-filtering our data using Query Interceptors

    The Products feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF introduces the concept of interceptors, which allow us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below.

        [QueryInterceptor("Products")]
        public Expression<Func<Product, bool>> OnReadProducts()
        {
            return o => o.Discontinued == false;
        }

    Viewing the feed after compilation will only show products that have not been discontinued. We can also confirm this by looking at the WHERE clause in the SQL generated by the Entity Framework:

        SELECT
            [Extent1].[ProductID] AS [ProductID],
            ...
            [Extent1].[Discontinued] AS [Discontinued]
        FROM [dbo].[Products] AS [Extent1]
        WHERE 0 = [Extent1].[Discontinued]

    Other examples of query/change interceptors can be seen here, including an example that filters data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data.

    Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    Foot Notes

    * http://msdn.microsoft.com/en-us/data/aa937697.aspx
    † $format did not work for me. The way to get a JSON response is to include the following in the request header when making the request: "Accept: application/json, text/javascript, */*". This is easily done with most JavaScript libraries.

    Read the article

  • Role of systems in entity systems architecture

    - by bio595
    I've been reading a lot about entity components and systems and have thought that the idea of an entity just being an ID is quite interesting. However, I don't know how this completely works with the components aspect or the systems aspect.

    A component is just a data object managed by some relevant system. A collision system uses some BoundsComponent together with a spatial data structure to determine if collisions have happened. All good so far, but what if multiple systems need access to the same component? Where should the data live? An input system could modify an entity's BoundsComponent, but the physics system(s) need access to the same component, as does some rendering system.

    Also, how are entities constructed? One of the advantages I've read so much about is flexibility in entity construction. Are systems intrinsically tied to a component? If I want to introduce some new component, do I also have to introduce a new system or modify an existing one?

    Another thing that I've read often is that the 'type' of an entity is inferred by what components it has. If my entity is just an ID, how can I know that my robot entity needs to be moved or rendered and thus modified by some system?

    Sorry for the long post (or at least it seems so from my phone screen)!
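
    On the "where should the data live" question, one common arrangement is a central component store that every system reads from, so each component exists exactly once and the systems only hold lookups into it; a minimal sketch with invented names:

        using System;
        using System.Collections.Generic;

        class ComponentStore
        {
            private readonly Dictionary<Type, Dictionary<int, object>> byType =
                new Dictionary<Type, Dictionary<int, object>>();

            public void Add<T>(int entityId, T component)
            {
                Dictionary<int, object> bucket;
                if (!byType.TryGetValue(typeof(T), out bucket))
                {
                    bucket = new Dictionary<int, object>();
                    byType[typeof(T)] = bucket;
                }
                bucket[entityId] = component;
            }

            public bool TryGet<T>(int entityId, out T component)
            {
                component = default(T);
                Dictionary<int, object> bucket;
                object value;
                if (byType.TryGetValue(typeof(T), out bucket) && bucket.TryGetValue(entityId, out value))
                {
                    component = (T)value;
                    return true;
                }
                return false;
            }
        }

        // Both the physics system and the render system ask this one store for the same
        // BoundsComponent instance, so there is a single source of truth for the data.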

    Read the article

  • Actually utilizing relational databases for entity systems

    - by Marc Müller
    Recently I was researching several entity systems and obviously I came across T=Machine's fantastic articles on the subject. In Part 5 of the series the author uses a relational schema to explain how an entity system is built and works. Since reading this, I have been wondering whether actually using a compact SQL library would be fast enough for real-time usage in video games. Performance seems to be the main issue with a full-blown SQL database for management of all entities and components. However, as mentioned in T=Machine's post, basically all access to data inside the SQL DB is done sequentially by each system over each component. Additionally, using a library like SQLite, one could easily improve performance by storing the entity data exclusively in RAM to increase access speeds. Disregarding possible performance issues, using a SQL database would, in my opinion, allow for a very intuitive implementation of entity systems and bring along certain other benefits, like easy de/serialization of game states and consistency checks such as the uniqueness of entity IDs. Edit for clarification: The main question was whether using a SQL database for the actual entity management (not just storing the game state on the disk) in a real-time game would still yield a framerate appropriate for a game, and whether anyone is aware of projects that demonstrate SQL in a video game.
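    As a rough, hedged illustration of the idea being asked about (not a claim about its real-time performance), an in-memory SQLite layout could look like the sketch below; it assumes the Microsoft.Data.Sqlite package, and the table and column names are invented:

    using Microsoft.Data.Sqlite;

    class SqlBackedEntities
    {
        public static void Demo()
        {
            // ":memory:" keeps all entity/component data in RAM, as the post suggests.
            using (var conn = new SqliteConnection("Data Source=:memory:"))
            {
                conn.Open();

                var ddl = conn.CreateCommand();
                ddl.CommandText =
                    "CREATE TABLE entity   (id INTEGER PRIMARY KEY); " +
                    "CREATE TABLE position (entity_id INTEGER REFERENCES entity(id), x REAL, y REAL);";
                ddl.ExecuteNonQuery();

                var insert = conn.CreateCommand();
                insert.CommandText = "INSERT INTO entity DEFAULT VALUES";
                insert.ExecuteNonQuery();

                var addComponent = conn.CreateCommand();
                addComponent.CommandText =
                    "INSERT INTO position (entity_id, x, y) VALUES (last_insert_rowid(), 0.0, 0.0)";
                addComponent.ExecuteNonQuery();

                // A 'movement system' tick: one set-based, sequential pass over a component table.
                var tick = conn.CreateCommand();
                tick.CommandText = "UPDATE position SET x = x + @dx";
                tick.Parameters.AddWithValue("@dx", 0.016 * 5.0);
                tick.ExecuteNonQuery();
            }
        }
    }

    Saving or inspecting the game state then amounts to copying the database to disk, which is the kind of side benefit (easy de/serialization, enforced ID uniqueness) the question mentions.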

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market-leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements.

    General Enhancements
    This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities:
    - Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control.
    - Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate whether the appropriate patches, privileges, setups, and profile options have been configured, reducing the time needed to get up and operational.

    Lifecycle Automation Enhancements
    Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.

    Change Management:
    - Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate whether the appropriate patches, privileges, setups, and profile options have been configured, reducing the time needed to get up and operational.

    Customization Manager:
    - Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system.
    - Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings and E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change their settings.
    - Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page.
    - Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages.
    - Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction.
    - OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances.
    - Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects.
    - Enhanced Standard Checker: Provides a greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.).
    - HTML Package Readme: The package Readme is in HTML format and includes the file listing.
    - Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced package search (that is, Public, Last Updated by, File Source Mapping, and E-Business Suite Mapping).
    - Enhanced Package Build Notifications: More detailed information on the results of a package build process, and better, more detailed troubleshooting guidance in the event of build failures.

    Patch Manager:
    - Staged Patches: Ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply, which supports highly secured production environments that prohibit external internet connections.
    - Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the Patch Run, producing more comprehensive and correct Patch Runs and removing many manual and laborious tasks, freeing up Apps DBAs for higher value-added work.
    - Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process; available for Release 12 only.

    Setup Manager:
    - Preview Extract Results: Ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this feature can provide better and more accurate fine-tuning of extracts.
    - Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse uploaded extracts to provision new instances.
    - Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data, allowing customers to reuse existing extracts to provision new instances and enabling comparative historical reporting of identical APIs executed at different times.
    - Support for BR100 Formats: Setup Manager can now automatically produce reports in the BR100 format, providing native support for industry-standard formats.
    - Concurrent Manager API Support: General Foundation now provides an API for management of "Concurrent Manager" configuration data, with the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; there is no need to redefine the Concurrent Managers.

    User Experience Management Enhancements
    Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. This latest release of the management suite includes numerous improvements in real user monitoring support.
    - KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values (two decimal places by default). It is now also possible to view KPI numerator and denominator information and to have it available for export.
    - Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note that this is only available for XPath-based (not literal) message contents.
    - Data Export: The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but is now stored within the Reporter database; however, it is possible to configure an alternative database for its storage. Access to the export data is through SQL. With this enhancement, it is now easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources.
    - SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes.
    - Performance Improvements:
      - Enhanced dashboard performance: the dashboard facility has been enhanced to support the parallel loading of items; in the case of dashboards containing large numbers of items, this can result in a significant performance improvement.
      - Initial period selection within Data Browser and reports: the User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility. The default is the last hour.
      - Performance improvement when querying the all sessions group.

    Technical Prerequisites, Download and Installation Instructions
    The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:
    * Oracle Solaris on SPARC (64-bit) (9, 10)
    * HP-UX Itanium (11.23, 11.31)
    * HP-UX PA-RISC (64-bit) (11.23, 11.31)
    * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article
