Search Results

Search found 14297 results on 572 pages for 'self tracking entities'.


  • Access Control Lists for Roles

    - by Kyle Hatlestad
    Back in an earlier post, I wrote about how to enable entity security (access control lists, aka ACLs) for UCM 11g PS3. There was actually an additional security option included in that release but not yet fully supported (only for Fusion Applications): the ability to define Roles as ACLs on entities (documents and folders). In PS5, that option is now fully supported.

    The benefit of defining Roles for ACLs is that those user roles come from the enterprise security directory (e.g. OID, Active Directory, etc.), so the WebCenter Content administrator does not need to define them the way they do with ACL Groups (Aliases). It's a bit of the best of both worlds. Users are managed through the LDAP repository and are automatically granted or denied access through their group membership, which is mapped to Roles in WCC. Another way to think about it is being able to add multiple Accounts to content items, which I often get asked about. Because LDAP groups can map to Accounts, there has always been an association between LDAP groups and access to an entity in WCC. But that mapping had to define a specific level of access (RWDA), and you could only apply one Account per content item or folder. Roles for ACLs removes both of those restrictions by allowing users to assign more than one Role and to define the level of access on the fly.

    To turn on ACLs for Roles, there is a component to enable. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component, then restart. This assumes the other configuration settings for ACLs described in the earlier post have already been made. Once enabled, a new metadata field called xClbraRoleList will be created. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

    For Users and Groups, values are automatically picked up from the corresponding database tables. In the case of Roles, there is an explicitly defined list of choices that are made available, and these values must match the roles coming from the enterprise security repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager. On the Views tab, edit the values for the ExternalRolesView. By default, 'guest' and 'authenticated' are added. Once added, you can assign the roles to your content or folder. If you can access the Security Group for that item and you belong to that particular Role, you now have access to that item. If you don't belong to that Role, you won't!

    [Extra] Because the selection mechanism for the list is a type-ahead field, users may not even know the possible choices they could start typing. To help them, one thing you can add to the form is a placeholder field which offers the entire list of roles as an option list they can scroll through (assuming it's a manageable size) to see what to type. Because it is a placeholder field, it won't need to be added to the custom metadata database table or the search engine.

    Read the article

  • Executing Stored Procedures in Visual Studio LightSwitch.

    - by dataintegration
    A LightSwitch project is a very easy way to visualize and manipulate information directly from one of our ADO.NET Providers, but when it comes to executing stored procedures it can be a bit more complicated. In this article, we will demonstrate how to execute a stored procedure in LightSwitch. For the purposes of this article we will be using the RSSBus Email Data Provider, but the same process will work with any of our ADO.NET Providers.

    Creating the RIA Service.

    Step 1: Open Visual Studio and create a new WCF RIA Service Class Project.

    Step 2: Add the reference to the RSSBus Email Data Provider dll in the (ProjectName).Web project.

    Step 3: Add a new Domain Service Class to the (ProjectName).Web project.

    Step 4: In the new Domain Service Class, create a new class with the attributes needed for the stored procedure's parameters. In this demo, the stored procedure we are executing is called SendMessage. The parameters we will need are as follows:

        public class NewMessage
        {
            [Key]
            public int ID { get; set; }
            public string FromEmail { get; set; }
            public string ToEmail { get; set; }
            public string Subject { get; set; }
            public string Text { get; set; }
        }

    Note: the created class must have an ID which will serve as the key value.

    Step 5: Create a new method that will be executed when the insert event fires. Inside this method you can use standard ADO.NET code to execute the stored procedure.

        [Insert]
        public void SendMessage(NewMessage newMessage)
        {
            try
            {
                EmailConnection conn = new EmailConnection(connectionString);
                EmailCommand comm = new EmailCommand("SendMessage", conn);
                comm.CommandType = System.Data.CommandType.StoredProcedure;
                if (!newMessage.FromEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@From", newMessage.FromEmail));
                if (!newMessage.ToEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@To", newMessage.ToEmail));
                if (!newMessage.Subject.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Subject", newMessage.Subject));
                if (!newMessage.Text.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Text", newMessage.Text));
                comm.ExecuteNonQuery();
            }
            catch (Exception exc)
            {
                Console.WriteLine(exc.Message);
            }
        }

    Step 6: Create a query method. We are not going to be using getNewMessages(), so it does not matter what it returns for the purposes of our example, but you will need to create a method for the query event as well.

        [Query(IsDefault=true)]
        public IEnumerable<NewMessage> getNewMessages()
        {
            return null;
        }

    Step 7: Rebuild the whole solution.

    Creating the LightSwitch Project.

    Step 8: Open Visual Studio and create a new LightSwitch Application Project.

    Step 9: On the Data Sources, add a new data source. Choose a WCF RIA Service.

    Step 10: Choose to add a new reference and select the (Project Name).Web.dll generated from the RIA Service.

    Step 11: Select the entities you would like to import. In this case, we are using the recently created NewMessage entity.

    Step 13: On the Screens section, create a new screen and select the NewMessage entity as the Screen Data.

    Step 14: After you run the project, you will be able to add a new record and save it. This will execute the stored procedure and send the new message. If you create a screen to check the sent messages, you can refresh this screen to see the mail you sent.

    Sample Project: To help you get started using stored procedures in LightSwitch, download the fully functional sample project. You will also need the RSSBus Email Data Provider to make the connection. You can download a free trial here.

    Read the article

  • Why is the framerate (fps) capped at 60?

    - by dennmat
    ISSUE: I recently moved a project from my laptop to my desktop (machine info below). On my laptop the exact same code displays the fps (and ms/f) correctly. On my desktop it does not. What I mean by this is that on the laptop it will display 300 fps (for example), whereas on my desktop it will show only up to 60. If I add 100 objects to the game on the laptop I'll see my frame rate drop accordingly; the same test on the desktop results in no change and the frames stay at 60. It takes a lot (~300) of entities before I'll see a frame drop on the desktop, and then it will descend. It seems as though its "theoretical" frames would be 400 or 500, but it will never actually get there and only does 60 until there's too much to handle at 60. This 60-frame cap is coming from nowhere. I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, but for the last couple of days I've been scratching my head trying to figure out how to debug this.

    SETUPS
    Desktop: Visual Studio Express 2012, Windows 7 Ultimate 64-bit
    Laptop: Visual Studio Express 2010, Windows 7 Ultimate 64-bit
    The libraries (allegro, box2d) are the same versions on both setups.

    CODE
    Main loop:

        while(!abort) {
            frameTime = al_get_time();
            if (frameTime - lastTime >= 1.0) {
                lastFps = fps/(frameTime - lastTime);
                lastTime = frameTime;
                avgMspf = cumMspf/fps;
                cumMspf = 0.0;
                fps = 0;
            }
            /** DRAWING/UPDATE CODE **/
            fps++;
            cumMspf += al_get_time() - frameTime;
        }

    Note: there is no blocking code in the loop at any point.

    Where I'm at: My understanding of al_get_time() is that it can return different resolutions depending on the system. However, the resolution is never worse than seconds, and the double is represented as [seconds].[finer-resolution]; seeing as I'm only checking for a whole second, al_get_time() shouldn't be responsible. My project settings and compiler options are the same, and I promise it's the same code on both machines. My googling really didn't help me much, and although technically it's not that big of a deal, I'd really like to figure this out or perhaps have it explained, whichever comes first. Even just an idea of how to go about figuring out possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.

    EDIT: Thanks all. For any others that find this, to disable vSync (Windows only) in OpenGL: first get "wglext.h" (it's all over the web). Then you can use a tool like GLee or just write your own quick extensions manager like:

        bool WGLExtensionSupported(const char *extension_name) {
            PFNWGLGETEXTENSIONSSTRINGEXTPROC _wglGetExtensionsStringEXT = NULL;
            _wglGetExtensionsStringEXT = (PFNWGLGETEXTENSIONSSTRINGEXTPROC) wglGetProcAddress("wglGetExtensionsStringEXT");
            if (strstr(_wglGetExtensionsStringEXT(), extension_name) == NULL) {
                return false;
            }
            return true;
        }

    and then create and set up your function pointers:

        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT = NULL;
        PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;
        if (WGLExtensionSupported("WGL_EXT_swap_control")) {
            // Extension is supported, init pointers.
            wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC) wglGetProcAddress("wglSwapIntervalEXT");
            // this is another function from the WGL_EXT_swap_control extension
            wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC) wglGetProcAddress("wglGetSwapIntervalEXT");
        }

    Then just call wglSwapIntervalEXT(0) to disable vSync and wglSwapIntervalEXT(1) to enable it. I found the reason this is Windows-only is that OpenGL doesn't actually deal with anything other than rendering; it leaves the rest up to the OS and hardware. Thanks everyone, saved me a lot of time!

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure Entity system, where entities are simply an ID number. Components are stored in a series of vectors - one for each Component type. However, I didn't want to have to add boilerplate code for every new Component type I added to the game. Nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!).

    All Components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived Component class, you must initialise the base with the name of your new class in a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component, mapped to a unique integer id. When you subsequently instantiate more Components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId("DerivedComponent"). Phew.

    The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper to an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array so we can find Components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The ComponentManager has template functions such as:

        template <typename ComponentType>
        void addProperty (ComponentType& component, int componentTypeId, int entityId)

    and:

        template <typename ComponentType>
        TypedComponentList<ComponentType>* getComponentList (int componentTypeId)

    which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of Component you call:

        TypedComponentList<MyComponent>* list = componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent"));

    Which I'll admit looks pretty ugly.

    Bad points of the design: If a user of the code writes a new Component class but supplies the wrong string to the base constructor, the whole system will fail. Each time a new Component is instantiated, we must check a hashed string to see if that component type has been instantiated before. It will probably generate a lot of assembly because of the extensive use of templates, and I don't know how well the compiler will be able to minimise this. You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant.

    Good points of the design: Components are stored in typed vectors but they can also be found by using their entity owner id as a hash. This means we can iterate them fast, and minimise cache misses, but also skip straight to the component we need if necessary. We can freely add Components of different types to the system without having to add and manage new Component vectors by hand.

    What do you think? Do the good points outweigh the bad?

    Read the article

  • RIA Services EntitySet does not support 'Edit' operation

    - by Savvas Sopiadis
    Hello everybody! Making my first steps in RIA Services (VS2010 Beta 2), I encountered this problem: I created an EF model (no POCOs), a generic repository on top of it, and a RIA Service (hosted in an ASP.NET MVC application), then tried to get data from within the ASP.NET MVC application: worked well. Next step: the Silverlight client. I got a reference to the RIA Service (through its context), queried for all the records of the repository and got them into the SL application as well, using this code sample:

        private ObservableCollection<Culture> _cultures = new ObservableCollection<Culture>();
        public ObservableCollection<Culture> cultures
        {
            get { return _cultures; }
            set
            {
                _cultures = value;
                RaisePropertyChanged("cultures");
            }
        }
        ....
        //Get cultures
        EntityQuery<Culture> queryCultures = from cu in dsCtxt.GetAllCulturesQuery()
                                             select cu;
        loCultures = dsCtxt.Load(queryCultures);
        loCultures.Completed += new EventHandler(lo_Completed);
        ....
        void loAnyCulture_Completed(object sender, EventArgs e)
        {
            ObservableCollection<Culture> temp = new ObservableCollection<Culture>(loAnyCulture.Entities);
            AnyCulture = temp[0];
        }

    The problem is this: whenever I try to edit some data of a record (in this example the first record) I get this error: This EntitySet of type 'Culture' does not support the 'Edit' operation. I thought that I did something weird, so I tried to create an object of type Culture and assign a value to it: it worked well! What am I missing? Do I have to declare an EntitySet? Do I have to mark it? Do I have to...what? Thanks in advance

    Read the article

  • MVC2 DataAnnotations on ViewModel - ModelState.IsValid Always Returns true

    - by ScottSEA
    I have an MVC2 application that uses the MVVM pattern. I am trying to use Data Annotations to validate form input. In my ThingsController I have two methods:

        [HttpGet]
        public ActionResult Index()
        {
            return View();
        }

        public ActionResult Details(ThingsViewModel tvm)
        {
            if (!ModelState.IsValid)
                return View(tvm);
            try
            {
                Query q = new Query(tvm.Query);
                ThingRepository repository = new ThingRepository(q);
                tvm.Airplanes = repository.All();
                return View(tvm);
            }
            catch (Exception)
            {
                return View();
            }
        }

    My Details.aspx view is strongly typed to the ThingsViewModel:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Config.Web.Models.ThingsViewModel>" %>

    The ViewModel is a class consisting of an IList of returned Thing objects and the Query string (which is submitted on the form), and it has the Required data annotation:

        public class ThingsViewModel
        {
            public IList<Thing> Things { get; set; }

            [Required(ErrorMessage="You must enter a query")]
            public string Query { get; set; }
        }

    When I run this and click the submit button on the form without entering a value, I get a YSOD with the following error: The model item passed into the dictionary is of type 'Config.Web.Models.ThingsViewModel', but this dictionary requires a model item of type 'System.Collections.Generic.IEnumerable`1[Config.Domain.Entities.Thing]'. How can I get Data Annotations to work with a ViewModel? I cannot see what I'm missing or where I'm going wrong - the VM was working just fine before I started mucking around with validation.

    Read the article

  • Dynamics CRM error "A currency is required if a value exists in a money field" after converting Activity to Case

    - by Evgeny
    We have a Dynamics CRM 4.0 instance with some custom attributes of type "money" on the Case entity and on all Activity entities (Email, Phone Call, etc.). When I use the built-in "Convert Activity to Case" functionality, I find that the resulting Case does not have a Currency set, even if the Activity it was created from does have one. Whenever the case is opened, the user then gets this JavaScript error: A currency is required if a value exists in a money field. Select a currency and try again. This is extremely annoying! How do I fix it? Is there any way I can set the currency? It needs to be done synchronously, because the Case is opened immediately when it's created from an Activity. So even if I started a workflow to set the currency, the user would still get that error at least once. Alternatively, can I just suppress the warning somehow? I don't really care about setting the Currency, I just want the error gone. Thanks in advance for any help!

    Read the article

  • What causes this org.hibernate.MappingException?

    - by stacker
    I'm trying to configure an EJB3 sample application; its entities were mapped to Postgres, and now I want the app to run on JBoss 4.3 and Informix using JPA. If the DDL creation

        <property name="hibernate.hbm2ddl.auto" value="create"/>

    is active, this error appears:

        WARN [ServiceController] Problem starting service persistence.units:ear=weblog.ear,jar=weblog.jar,unitName=weblog
        javax.persistence.PersistenceException: [PersistenceUnit: weblog] Unable to build EntityManagerFactory
            at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:677)
            at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:132)
            at org.jboss.ejb3.entity.PersistenceUnitDeployment.start(PersistenceUnitDeployment.java:246)

    followed by

        Caused by: org.hibernate.MappingException: No Dialect mapping for JDBC type: 2005
            at org.hibernate.dialect.TypeNames.get(TypeNames.java:56)
            at org.hibernate.dialect.TypeNames.get(TypeNames.java:81)
            at org.hibernate.dialect.Dialect.getTypeName(Dialect.java:291)
            at org.hibernate.mapping.Column.getSqlType(Column.java:182)
            at org.hibernate.mapping.Table.sqlCreateString(Table.java:394)
            at org.hibernate.cfg.Configuration.generateSchemaCreationScript(Configuration.java:854)
            at org.hibernate.tool.hbm2ddl.SchemaExport.<init>(SchemaExport.java:74)
            at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:311)
            at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1300)
            at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:874)
            at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:669)

    What does JDBC type 2005 mean? Any idea how I can track down the entity/column that causes the problem? Thanks

    Read the article

  • ASP.Net MVC2 DropDownListFor

    - by hermiod
    Hi all, I am trying to learn MVC2, C#, and LINQ to Entities all in one project (yes, I am mad) and I am experiencing some problems with DropDownListFor and passing the SelectList to it. This is the code in my controller:

        public ActionResult Create()
        {
            var Methods = te.Methods.Select(a => a);
            List<SelectListItem> MethodList = new List<SelectListItem>();
            foreach (Method me in Methods)
            {
                SelectListItem sli = new SelectListItem();
                sli.Text = me.Description;
                sli.Value = me.method_id.ToString();
                MethodList.Add(sli);
            }
            ViewData["MethodList"] = MethodList.AsEnumerable();
            Talkback tb = new Talkback();
            return View(tb);
        }

    and I am having trouble getting the DropDownListFor to take the MethodList in ViewData. When I try:

        <%:Html.DropDownListFor(model => model.method_id, new SelectList("MethodList", "method_id", "Description", Model.method_id)) %>

    it errors out with the following message: DataBinding: 'System.Char' does not contain a property with the name 'method_id'. I know why this is, as it is taking MethodList as a string, but I can't figure out how to get it to take the SelectList. If I do the following with a normal DropDownList:

        <%: Html.DropDownList("MethodList") %>

    it is quite happy with this. Can anyone help?
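
    A sketch of one way this is often handled (names follow the question; the cast, the AsEnumerable() call, and the exact overloads used are assumptions, not verified against this project): build the SelectList itself in the controller, store it in ViewData, and pass that object to DropDownListFor instead of a string key, which the helper would otherwise treat as the data source.

        // Controller (sketch): materialise first so ToString() runs in memory,
        // then keep a ready-made SelectList in ViewData.
        public ActionResult Create()
        {
            var methodItems = te.Methods
                .AsEnumerable()
                .Select(me => new SelectListItem
                {
                    Text = me.Description,
                    Value = me.method_id.ToString()
                })
                .ToList();

            ViewData["MethodList"] = new SelectList(methodItems, "Value", "Text");
            return View(new Talkback());
        }

        // View (sketch): cast the ViewData entry back to its real type.
        // <%: Html.DropDownListFor(model => model.method_id,
        //                          (SelectList)ViewData["MethodList"]) %>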

    Read the article

  • Entity Framework 4 POCOs stored procedure error - "The FunctionImport could not be found in the container"

    - by user331884
    Entity Framework with POCO Entities generated by T4 template. Added Function Import named it "procFindNumber" specified complex collection named it "NumberResult". Here's what got generated in Context.cs file: public ObjectResult<NumberResult> procFindNumber(string lookupvalue) { ObjectParameter lookupvalueParameter; if (lookupvalue != null) { lookupvalueParameter = new ObjectParameter("lookupvalue", lookupvalue); } else { lookupvalueParameter = new ObjectParameter("lookupvalue", typeof(string)); } return base.ExecuteFunction<NumberResult>("procFindNumber", lookupvalueParameter); } Here's the stored procedure: ALTER PROCEDURE [dbo].[procFindNumber] @lookupvalue varchar(255) AS BEGIN SET NOCOUNT ON; DECLARE @sql nvarchar(MAX); IF @lookupvalue IS NOT NULL AND @lookupvalue <> '' BEGIN SELECT @sql = 'SELECT dbo.HBM_CLIENT.CLIENT_CODE, dbo.HBM_MATTER.MATTER_NAME, dbo.HBM_MATTER.CLIENT_MAT_NAME FROM dbo.HBM_MATTER INNER JOIN dbo.HBM_CLIENT ON dbo.HBM_MATTER.CLIENT_CODE = dbo.HBM_CLIENT.CLIENT_CODE LEFT OUTER JOIN dbo.HBL_CLNT_CAT ON dbo.HBM_CLIENT.CLNT_CAT_CODE = dbo.HBL_CLNT_CAT.CLNT_CAT_CODE LEFT OUTER JOIN dbo.HBL_CLNT_TYPE ON dbo.HBM_CLIENT.CLNT_TYPE_CODE = dbo.HBL_CLNT_TYPE.CLNT_TYPE_CODE WHERE (LTRIM(RTRIM(dbo.HBM_MATTER.CLIENT_CODE)) <> '''')' SELECT @sql = @sql + ' AND (dbo.HBM_MATTER.MATTER_NAME like ''%' + @lookupvalue + '%'')' SELECT @sql = @sql + ' OR (dbo.HBM_MATTER.CLIENT_MAT_NAME like ''%' + @lookupvalue + '%'')' SELECT @sql = @sql + ' ORDER BY dbo.HBM_MATTER.MATTER_NAME' -- Execute the SQL query EXEC sp_executesql @sql END END In my WCF service I try to execute the stored procedure: [WebGet(UriTemplate = "number/{value}/?format={format}")] public IEnumerable<NumberResult> GetNumber(string value, string format) { if (string.Equals("json", format, StringComparison.OrdinalIgnoreCase)) { WebOperationContext.Current.OutgoingResponse.Format = WebMessageFormat.Json; } using (var ctx = new MyEntities()) { ctx.ContextOptions.ProxyCreationEnabled = false; var results = ctx.procFindNumber(value); return results.ToList(); } } Error message says "The FunctionImport ... could not be found in the container ..." What am I doing wrong?

    Read the article

  • Box2d: Set active and inactive

    - by Rosarch
    I'm writing an XNA game in C# using the XNA port of Box2d - Box2dx. Entities like trees or zombies are represented as GameObjects. GameObjectManager adds and removes them from the game world: /// <summary> /// Does the work of removing the GameObject. /// </summary> /// <param name="controller">The GameObject to be removed.</param> private void removeGameObjectFromWorld(GameObjectController controller) { controllers.Remove(controller); worldState.Models.Remove(controller.Model); controller.Model.Body.SetActive(false); } public void addGameObjectToWorld(GameObjectController controller) { controllers.Add(controller); worldState.Models.Add(controller.Model); controller.Model.Body.SetActive(true); } controllers is a collection of GameObjectController instances. worldState.Models is a collection of GameObjectModel instances. When I remove GameObjects from Box2d this way, this method gets called: void IContactListener.EndContact(Contact contact) { GameObjectController collider1 = worldQueryUtils.gameObjectOfBody(contact.GetFixtureA().GetBody()); GameObjectController collider2 = worldQueryUtils.gameObjectOfBody(contact.GetFixtureB().GetBody()); collisionRecorder.removeCollision(collider1, collider2); } worldQueryUtils: // this could be cached if we know bodies never change public GameObjectController gameObjectOfBody(Body body) { return worldQueryEngine.GameObjectsForPredicate(x => x.Model.Body == body).Single(); } This method throws an error: System.InvalidOperationException was unhandled Message="Sequence contains no elements" Source="System.Core" StackTrace: at System.Linq.Enumerable.Single[TSource](IEnumerable`1 source) at etc Why is this happening? What can I do to avoid it? This method has been called many times before the body.SetActive() was called. I feel that this may be messing it up.
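
    One defensive variant worth noting (a sketch against the code above, not tested): EndContact can fire for a fixture whose body belongs to a GameObject that removeGameObjectFromWorld has already dropped, in which case the predicate matches nothing and Single() throws. Using SingleOrDefault plus a null check tolerates that window.

        public GameObjectController gameObjectOfBody(Body body)
        {
            // null when the body's GameObject has already been removed
            return worldQueryEngine
                .GameObjectsForPredicate(x => x.Model.Body == body)
                .SingleOrDefault();
        }

        void IContactListener.EndContact(Contact contact)
        {
            GameObjectController collider1 = worldQueryUtils.gameObjectOfBody(contact.GetFixtureA().GetBody());
            GameObjectController collider2 = worldQueryUtils.gameObjectOfBody(contact.GetFixtureB().GetBody());

            if (collider1 == null || collider2 == null)
                return; // one side is no longer tracked; nothing to record

            collisionRecorder.removeCollision(collider1, collider2);
        }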

    Read the article

  • Entity Framework - Condition on one to many join (Lambda)

    - by nirpi
    Hi, I have 2 entities: Customer & Account, where a customer can have multiple accounts. On the account, I have a "PlatformTypeId" field, which I need to condition on (multiple values), among other criterions. I'm using Lambda expressions, to build the query. Here's a snippet: var customerQuery = (from c in context.CustomerSet.Include("Accounts") select c); if (criterions.UserTypes != null && criterions.UserTypes.Count() > 0) { List<short> searchCriterionsUserTypes = criterions.UserTypes.Select(i => (short)i).ToList(); customerQuery = customerQuery.Where(CommonDataObjects.LinqTools.BuildContainsExpression<Customer, short>(c => c.UserTypeId, searchCriterionsUserTypes)); } // Other criterions, including the problematic platforms condition (below) var customers = customerQuery.ToList(); I can't figure out how to build the accounts' platforms condition: if (criterions.Platforms != null && criterions.Platforms.Count() > 0) { List<short> searchCriterionsPlatforms = criterions.Platforms.Select(i => (short)i).ToList(); customerQuery = customerQuery.Where(c => c.Accounts.Where(LinqTools.BuildContainsExpression<Account, short>(a => a.PlatformTypeId, searchCriterionsPlatforms))); } (The BuildContainsExpression is a method we use to build the expression for the multi-select) I'm getting a compilation error: The type arguments for method 'System.Linq.Enumerable.Where(System.Collections.Generic.IEnumerable, System.Func)' cannot be inferred from the usage. Try specifying the type arguments explicitly. Any idea how to fix this? Thanks, Nir.
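
    For the accounts condition specifically, one possible shape is to express it with Any and Contains rather than the expression-building helper, since EF4 can translate Contains over an in-memory list (a sketch, assuming the same criterions/searchCriterionsPlatforms variables as above; not verified against the BuildContainsExpression helper):

        if (criterions.Platforms != null && criterions.Platforms.Count() > 0)
        {
            List<short> searchCriterionsPlatforms = criterions.Platforms.Select(i => (short)i).ToList();

            // keep customers that have at least one account on a selected platform
            customerQuery = customerQuery.Where(
                c => c.Accounts.Any(a => searchCriterionsPlatforms.Contains(a.PlatformTypeId)));
        }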

    Read the article

  • Entity Framework + AutoMapper ( Entity to DTO and DTO to Entity )

    - by vbobruisk
    Hello. I've got some problems using EF with AutoMapper. =/ For example, I have 2 related entities (Customers and Orders) and their DTO classes:

        class CustomerDTO
        {
            public string CustomerID { get; set; }
            public string CustomerName { get; set; }
            public IList<OrderDTO> Orders { get; set; }
        }

        class OrderDTO
        {
            public string OrderID { get; set; }
            public string OrderDetails { get; set; }
            public CustomerDTO Customers { get; set; }
        }

        // when mapping Entity to DTO the code works
        Customers cust = getCustomer(id);
        Mapper.CreateMap<Customers, CustomerDTO>();
        Mapper.CreateMap<Orders, OrderDTO>();
        CustomerDTO custDTO = Mapper.Map<Customers, CustomerDTO>(cust);

        // but when I try to map back from DTO to Entity it fails with AutoMapperMappingException.
        Mapper.Reset();
        Mapper.CreateMap<CustomerDTO, Customers>();
        Mapper.CreateMap<OrderDTO, Orders>();
        Customers customerModel = Mapper.Map<CustomerDTO, Customers>(custDTO); // exception is thrown here

    Am I doing something wrong? Thanks in advance!
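
    A sketch of one common workaround (assumes the classic static AutoMapper API shown above and that the entity navigation properties mirror the DTO names): define the reverse maps explicitly and ignore the navigation properties that make the CustomerDTO/OrderDTO pair circular, then handle the child collection separately if it is needed.

        Mapper.CreateMap<CustomerDTO, Customers>()
              .ForMember(dest => dest.Orders, opt => opt.Ignore());   // break the cycle

        Mapper.CreateMap<OrderDTO, Orders>()
              .ForMember(dest => dest.Customers, opt => opt.Ignore());

        Customers customerModel = Mapper.Map<CustomerDTO, Customers>(custDTO);
        // map/attach custDTO.Orders separately if the child entities are to be persisted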

    Read the article

  • Entity Framework 4 "Generate Database from Model" to SQLEXPRESS mdf results in "Could not locate entry in sysdatabases"

    - by InfinitiesLoop
    I'm using Visual Studio 2010 RTM. I want to do model-first, so I started a new MVC app and added a new blank edmx. Created a few entities. No problem. Then I "Generate Database from Model", and allow the dialog to create a new database for me, which it does successfully as 'mydatabase.mdf' in the app's App_Data directory. Then I open the generated sql file (in Visual Studio). To run it of course I have to give it a connection. I am not sure if it's right, but I used '.\SQLEXPRESS' and Windows authentication. No idea how I'd tell it where the MDF is. Then the problem -- upon executing it, I get: Msg 911, Level 16, State 1, Line 1 Could not locate entry in sysdatabases for database 'mydatabase'. No entry found with that name. Make sure that the name is entered correctly. And indeed there were no tables created in the MDF. So... what am I doing wrong, or am I off my rocker expecting this to work? :)
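
    One thing that commonly causes this (stated here as an assumption, since the generated script isn't shown): when the script runs against the .\SQLEXPRESS server instance, 'mydatabase' exists only as a loose .mdf file under App_Data, not as a database the instance knows about, so any reference to it by name fails. The application itself would typically reach that file with a user-instance connection string along the lines of:

        Data Source=.\SQLEXPRESS;
        AttachDbFilename=|DataDirectory|\mydatabase.mdf;
        Integrated Security=True;
        User Instance=True

    For running the generated DDL, attaching mydatabase.mdf to the SQLEXPRESS instance (for example from SQL Server Management Studio) and executing the script against that attached database is one straightforward route.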

    Read the article

  • How to properly test for constraint violation in hibernate?

    - by Cesar
    I'm trying to test Hibernate mappings, specifically a unique constraint. My POJO is mapped as follows: <property name="name" type="string" unique="true" not-null="true" /> What I want to do is to test that I can't persist two entities with the same name: @Test(expected=ConstraintViolationException.class) public void testPersistTwoExpertiseAreasWithTheSameNameIsNotAllowed(){ ExpertiseArea ea = new ExpertiseArea("Design"); ExpertiseArea otherEA = new ExpertiseArea("Design"); ead.setSession(getSessionFactory().getCurrentSession()); ead.getSession().beginTransaction(); ead.makePersistent(ea); ead.makePersistent(otherEA); ead.getSession().getTransaction().commit(); } On commiting the current transaction, I can see in the logs that a ConstraintViolationException is thrown: 16:08:47,571 DEBUG SQL:111 - insert into ExpertiseArea (VERSION, name, id) values (?, ?, ?) Hibernate: insert into ExpertiseArea (VERSION, name, id) values (?, ?, ?) 16:08:47,571 DEBUG SQL:111 - insert into ExpertiseArea (VERSION, name, id) values (?, ?, ?) Hibernate: insert into ExpertiseArea (VERSION, name, id) values (?, ?, ?) 16:08:47,572 WARN JDBCExceptionReporter:100 - SQL Error: -104, SQLState: 23505 16:08:47,572 ERROR JDBCExceptionReporter:101 - integrity constraint violation: unique constraint or index violation; SYS_CT_10036 table: EXPERTISEAREA 16:08:47,573 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update So I would expect the test to pass, since the expected ConstraintViolationException is thrown. However, the test never completes (neither pass nor fails) and I have to manually kill the test runner. What's the correct way to test this?

    Read the article

  • LinqToSql - Parallel - DataContext and Parallel

    - by Gregoire
    In .NET 4 and a multicore environment, does the LINQ to SQL DataContext object take advantage of the new parallel features if we use DataLoadOptions.LoadWith? EDIT: I know LINQ to SQL does not parallelize ordinary queries. What I want to know is: when we specify DataLoadOptions.LoadWith, does it use parallelization to perform the match between each entity and its sub-entities? Example:

        using(MyDataContext context = new MyDataContext())
        {
            DataLoadOptions options = new DataLoadOptions();
            options.LoadWith<Product>(p => p.Category);
            return this.DataContext.Products.Where(p => p.SomeCondition);
        }

    generates the following SQL:

        Select Id,Name from Categories
        Select Id,Name, CategoryId from Products where p.SomeCondition

    When all the products are created, will we have

        categories.ToArray();
        Parallel.ForEach(products, p =>
        {
            p.Category = categories.FirstOrDefault(c => c.Id == p.CategoryId);
        });

    or

        categories.ToArray();
        foreach(Product product in products)
        {
            product.Category = categories.FirstOrDefault(c => c.Id == product.CategoryId);
        }

    ?

    Read the article

  • C#: Delegate syntax?

    - by Rosarch
    I'm developing a game. I want to have game entities each have their own Damage() function. When called, they will calculate how much damage they want to do: public class CombatantGameModel : GameObjectModel { public int Health { get; set; } /// <summary> /// If the attack hits, how much damage does it do? /// </summary> /// <param name="randomSample">A random value from [0 .. 1]. Use to introduce randomness in the attack's damage.</param> /// <returns>The amount of damage the attack does</returns> public delegate int Damage(float randomSample); public CombatantGameModel(GameObjectController controller) : base(controller) {} } public class CombatantGameObject : GameObjectController { private new readonly CombatantGameModel model; public new virtual CombatantGameModel Model { get { return model; } } public CombatantGameObject() { model = new CombatantGameModel(this); } } However, when I try to call that method, I get a compiler error: /// <summary> /// Calculates the results of an attack, and directly updates the GameObjects involved. /// </summary> /// <param name="attacker">The aggressor GameObject</param> /// <param name="victim">The GameObject under assault</param> public void ComputeAttackUpdate(CombatantGameObject attacker, CombatantGameObject victim) { if (worldQuery.IsColliding(attacker, victim, false)) { victim.Model.Health -= attacker.Model.Damage((float) rand.NextDouble()); // error here Debug.WriteLine(String.Format("{0} hits {1} for {2} damage", attacker, victim, attackTraits.Damage)); } } The error is: 'Damage': cannot reference a type through an expression; try 'HWAlphaRelease.GameObject.CombatantGameModel.Damage' instead What am I doing wrong?
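
    The root of the compile error is that the delegate keyword declares a nested type, not an instance member, so attacker.Model.Damage(...) is an attempt to "call" a type name through an expression. One way to make it callable per instance (a sketch using the question's names; the example damage formula is made up) is to keep the delegate type but expose a property of that type:

        public class CombatantGameModel : GameObjectModel
        {
            public int Health { get; set; }

            /// <summary>The type: maps a random sample in [0..1] to a damage amount.</summary>
            public delegate int DamageFunction(float randomSample);

            /// <summary>The member the combat code actually invokes.</summary>
            public DamageFunction Damage { get; set; }

            public CombatantGameModel(GameObjectController controller) : base(controller) { }
        }

        // Wherever a combatant is configured (illustrative formula):
        //     model.Damage = randomSample => 10 + (int)(randomSample * 5);

        // The attack code can then stay as written:
        //     victim.Model.Health -= attacker.Model.Damage((float)rand.NextDouble());

    A System.Func<float, int> property would serve equally well if the named delegate type isn't needed elsewhere.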

    Read the article

  • F# Powerpack's Metadata doesn't recognize FSharp.Core as an F# library

    - by Nathan Sanders
    Here's my test code to isolate the problem: open Microsoft.FSharp.Metadata [<EntryPoint>] let main args = let core = FSharpAssembly.FromFile @"C:\Program Files\FSharp-2.0.0.0\\bin\FSharp.Core.dll" let core2 = FSharpAssembly.FSharpLibrary let core3 = System.AppDomain.CurrentDomain.GetAssemblies() |> Seq.find (fun a -> a.FullName.Contains "Core") |> FSharpAssembly.FromAssembly core.Entities |> Seq.iter (printfn "%A") 0 All three lets should give me the same FSharpAssembly. Instead, all 3 throw an exception that FSharp.Core is not an F# assembly (details below, re-formatted for readability). Two more clues: Using the core3 method, I get the same error for the test F# assembly itself I don't get the error at FSI after doing #r "@C:\Program Files...\FSharp.Powerpack.Metadata.dll". Any ideas? I'm using Visual Studio 2008, F# 2.0 and F# Powerpack 2.0.0.0 (May 20, 2010) release on an oldish XP VM, I think it's updated to SP3 though. (I got the error this morning with Powerpack 1.9.9.9, so I upgraded to 2.0.0.0. I thought that if 1.9.9.9 doesn't recognise F#'s 2.0.0.0's assemblies, then maybe bugfixes in Powerpack 2.0.0.0 would help.) Unhandled Exception: System.TypeInitializationException: The type initializer for 'Microsoft.FSharp.Metadata.AssemblyLoader' threw an exception. ---> System.TypeInitializationException: The type initializer for '<StartupCode$FSharp-PowerPack-Metadata>.$Metadata' threw an exception. ---> System.ArgumentException: could not produce an FSharpAssembly object for the assembly 'FSharp.Core' because this is not an F# assembly Parameter name: name at Microsoft.FSharp.Metadata.AssemblyLoader.Add(String name,Assembly assembly) at <StartupCode$FSharp-PowerPack-Metadata>.$Metadata..cctor() --- End of inner exception stack trace --- at Microsoft.FSharp.Metadata.AssemblyLoader..cctor() --- End of inner exception stack trace --- at Microsoft.FSharp.Metadata.AssemblyLoader.Get(Assembly assembly) at Microsoft.FSharp.Metadata.FSharpAssembly.FromAssembly(Assembly assembly) at Program.main(String[] args) in C:\Documents an...\FSMetadataTest\Program.fs:line 11 Press any key to continue . . .

    Read the article

  • NHibernate Query across multiple tables

    - by Dai Bok
    I am using NHibernate, and am trying to figure out how to write a query that searches the names of all my entities and lists the results. As a simple example, I have the following objects:

        public class Cat
        {
            public string name { get; set; }
        }
        public class Dog
        {
            public string name { get; set; }
        }
        public class Owner
        {
            public string firstname { get; set; }
            public string lastname { get; set; }
        }

    Eventually I want to create a query that, say, returns all pet owners with a name containing "ted", OR pets with a name containing "ted". Here is an example of the SQL I want to execute:

        SELECT TOP 10 d.*, c.*, o.*
        FROM owners AS o
        INNER JOIN dogs AS d ON o.id = d.ownerId
        INNER JOIN cats AS c ON o.id = c.ownerId
        WHERE o.lastname like '%ted%'
           OR o.firstname like '%ted%'
           OR c.name like '%ted%'
           OR d.name like '%ted%'

    When I do it using Criteria like this:

        var criteria = session.CreateCriteria<Owner>()
            .Add(
                Restrictions.Disjunction()
                    .Add(Restrictions.Like("FirstName", keyword, MatchMode.Anywhere))
                    .Add(Restrictions.Like("LastName", keyword, MatchMode.Anywhere))
            )
            .CreateCriteria("Dog").Add(Restrictions.Like("Name", keyword, MatchMode.Anywhere))
            .CreateCriteria("Cat").Add(Restrictions.Like("Name", keyword, MatchMode.Anywhere));
        return criteria.List<Owner>();

    the following query is generated:

        SELECT TOP 10 d.*, c.*, o.*
        FROM owners AS o
        INNER JOIN dogs AS d ON o.id = d.ownerId
        INNER JOIN cats AS c ON o.id = c.ownerId
        WHERE o.lastname like '%ted%'
           OR o.firstname like '%ted%'
          AND d.name like '%ted%'
          AND c.name like '%ted%'

    How can I adjust my query so that the .CreateCriteria("Dog") and .CreateCriteria("Cat") generate an OR instead of the AND? Thanks for your help.
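
    Each nested CreateCriteria contributes its restriction with AND, which is where the generated WHERE clause comes from. One commonly suggested shape (a sketch, not verified against this mapping; "Dog" and "Cat" are the association names used in the question's criteria) is to create aliases for the associations and put all four Like restrictions into a single Disjunction:

        var criteria = session.CreateCriteria<Owner>()
            .CreateAlias("Dog", "d")
            .CreateAlias("Cat", "c")
            .Add(Restrictions.Disjunction()
                .Add(Restrictions.Like("FirstName", keyword, MatchMode.Anywhere))
                .Add(Restrictions.Like("LastName", keyword, MatchMode.Anywhere))
                .Add(Restrictions.Like("d.Name", keyword, MatchMode.Anywhere))
                .Add(Restrictions.Like("c.Name", keyword, MatchMode.Anywhere)));

        return criteria.List<Owner>();

    Depending on the mappings, the joins may then produce duplicate Owner rows, in which case a distinct-root-entity result transformer is the usual companion to this query.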

    Read the article

  • AndroMDA maven code generation and JPA Annotations

    - by ArsenioM
    I am using the AndroMDA plugin for Maven to generate code from a UML diagram made in MagicDraw. When the code is generated, AndroMDA produces the JPA annotations for the persistence layer. I think that during the compilation process AndroMDA uses naming strategies to determine the Table and Column names for the database. I want to determine how AndroMDA derives these JPA annotations, because I need to display those database names based on the UML entity and attribute names. I was wondering if there is an API of AndroMDA that I could use to do this by giving it the UML diagram, or at least a way to know the naming strategies used by AndroMDA to achieve that. AndroMDA, at compilation time, generates the JPA annotations for the Entities, Attributes, etc. that are written in my Java classes under a series of rules that exist within the EJB3 cartridge of AndroMDA. (The database is then created using those JPA annotations.) I want to create a program that returns the same Table and Attribute names written in the JPA annotations, given the .xml file of the UML diagram of a project. I was hoping that I could take advantage of the EJB3 cartridge to generate those Table and Attribute names with my program. One way could be using an API of AndroMDA that does this (if it exists), or at least implementing the same rules used by the EJB3 cartridge for that matter. To be more illustrative, for example: if in my UML model I have an Entity called "CompanyGroup", AndroMDA would generate the following code for the class definition:

        @javax.persistence.Entity
        @javax.persistence.Table(name = "COMPANY_GR")
        public class CompanyGroup implements java.io.Serializable, Comparable<CompanyGroup>

    This is just an example (not a real case), but nevertheless, the way AndroMDA does the translation from "CompanyGroup" to "COMPANY_GR" has to be specified somewhere. Hope this explanation is useful enough. Thanks.

    Read the article

  • Serializing java objects with respect to xml schema loaded at runtime

    - by kohomologie
    I call an XML document three-layered if its structure is laid out as following: the root element contains some container elements (I'll call them entities), each of them has some simpleType elements inside (I'll call them properties). Something like that: <data> <spaceship> <number>1024</number> <name>KTHX</name> </spaceship> <spaceship> <number>1624</number> <name>LEXX</name> </spaceship> <knife> <length>10</length> </knife> </data> where spaceship is an entity, and number is a property. My problem is stated below: Given schema: an arbitrary xsd file describing a three-layered document, loaded at runtime. xmlDocument: an xml document conforming to the schema. Create A Map<String, Map <String, Object>> containing data from the xmlDocument, where first key corresponds to entity, second key correponds to this entity's property, and the value corresponds to this property's value, after casting it to a proper java type (for example, if the schema sets the property value to be xs:int, then it should be cast to Integer). What is the easiest way to achieve this result with existing libraries? P. S. JAXB is not really an option here. The schema might be arbitrary and unknown at compile-time. Also I wish to avoid an excessive use of reflection (associated with converting the beans to maps). I'm looking for something that would allow me to make the typecasts while xml is being parsed.

    Read the article

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored. In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write a update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki. Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly-written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, audit-able record is maintained?

    Read the article

  • Linq to SQL and concurrency with Rob Conery repository pattern

    - by David Hall
    I have implemented a DAL using Rob Conery's spin on the repository pattern (from the MVC Storefront project) where I map database objects to domain objects using Linq and use Linq to SQL to actually get the data. This is all working wonderfully giving me the full control over the shape of my domain objects that I want, but I have hit a problem with concurrency that I thought I'd ask about here. I have concurrency working but the solution feels like it might be wrong (just one of those gitchy feelings). The basic pattern is: private MyDataContext _datacontext private Table _tasks; public Repository(MyDataContext datacontext) { _dataContext = datacontext; } public void GetTasks() { _tasks = from t in _dataContext.Tasks; return from t in _tasks select new Domain.Task { Name = t.Name, Id = t.TaskId, Description = t.Description }; } public void SaveTask(Domain.Task task) { Task dbTask = null; // Logic for new tasks omitted... dbTask = (from t in _tasks where t.TaskId == task.Id select t).SingleOrDefault(); dbTask.Description = task.Description, dbTask.Name = task.Name, _dataContext.SubmitChanges(); } So with that implementation I've lost concurrency tracking because of the mapping to the domain task. I get it back by storing the private Table which is my datacontext list of tasks at the time of getting the original task. I then update the tasks from this stored Table and save what I've updated This is working - I get change conflict exceptions raised when there are concurrency violations, just as I want. However, it just screams to me that I've missed a trick. Is there a better way of doing this? I've looked at the .Attach method on the datacontext but that appears to require storing the original version in a similar way to what I'm already doing. I also know that I could avoid all this by doing away with the domain objects and letting the Linq to SQL generated objects all the way up my stack - but I dislike that just as much as I dislike the way I'm handling concurrency.
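
    For completeness, a sketch of the Attach-based route mentioned at the end (this assumes the Tasks table carries a timestamp/rowversion column whose value is also stored on the domain object when it is read; the RowVersion names here are illustrative): attaching a re-built entity as modified lets LINQ to SQL perform the optimistic-concurrency check from the version value alone, without keeping the original Table query around.

        public void SaveTask(Domain.Task task)
        {
            var updated = new Task
            {
                TaskId = task.Id,
                RowVersion = task.RowVersion,   // captured when the domain object was loaded
                Name = task.Name,
                Description = task.Description
            };

            _dataContext.Tasks.Attach(updated, true);  // attach as modified
            _dataContext.SubmitChanges();              // ChangeConflictException on a stale RowVersion
        }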

    Read the article

  • Kanban vs. Scrum

    - by Andrew Siemer
    Can someone with Kanban experience tell me how Kanban and Scrum differ? What are the pros and cons of each of these project management methodologies? Kanban seems to be getting a lot of press these days. I don't want to miss the hottest new way of tracking my team's failures (...and successes).

    Responses

    @S. Lott - "What part of this article wasn't clear enough? infoq.com/articles/hiranabe-lean-agile-kanban/…. Do you have a more specific question?" That is a great article, but technically, no, it is not clear enough. That article gives a great amount of detail about Kanban (and thank you for it... good read) but it does not specifically contrast Kanban vs. Scrum. That article will help someone like me make a decision, but it most certainly won't help someone like my boss or, in general, someone less experienced! I was hoping for a quick overview of Kanban pros and cons contrasted with Scrum pros and cons. Thanks though!

    @S. Lott - "Why do you say kanban vs. scrum? What leads you to conclude they are conflicting approaches? Can you make your question more specific?" I don't think that they are necessarily conflicting. But they are different enough for a user to adhere to one over the other. Perhaps one fits a project or company better than the other? How would I sell one over the other when presenting a project management approach? Say I went to a company that was currently stuck in the rut that is "waterfall" - why would I sell one approach over the other?

    Read the article

  • DDD/NHibernate Use of Aggregate root and impact on web design - ex. Editing children of aggregate root

    - by pbrophy
    Hopefully, this fictitious example will illustrate my problem: Suppose you are writing a system which tracks complaints for a software product, as well as many other attributes about the product. In this case the SoftwareProduct is our aggregate root and Complaints are entities that only can exist as a child of the product. In other words, if the software product is removed from the system, so shall the complaints. In the system, there is a dashboard like web page which displays many different aspects of a single SoftwareProduct. One section in the dashboard, displays a list of Complaints in a grid like fashion, showing only some very high level information for each complaint. When an admin type user chooses one of these complaints, they are directed to an edit screen which allows them to edit the detail of a single Complaint. The question is: what is the best way for the edit screen to retrieve the single Complaint, so that it can be displayed for editing purposes? Keep in mind we have already established the SoftwareProduct as an aggregate root, therefore direct access to a Complaint should not be allowed. Also, the system is using NHibernate, so eager loading is an option, but my understanding is that even if a single Complaint is eager loaded via the SoftwareProduct, as soon as the Complaints collection is accessed the rest of the collection is loaded. So, how do you get the single Complaint through the SoftwareProduct without incurring the overhead of loading the entire Complaints collection?

    Read the article
