Search Results

Search found 16253 results on 651 pages for 'entity framework ctp5'.


  • Too Many Left Outer Joins in Entity Framework 4?

    - by Adam
    I have a Product entity, which has 0 or 1 "BestSeller" entities. For some reason, when I say: db.Products.OrderBy(p => p.BestSeller.rating).ToList(); the SQL I get has an "extra" outer join (below). And if I add a second 0-or-1 relationship and order by both, I get 4 outer joins. It seems like each such entity is producing 2 outer joins rather than one. LINQ to SQL behaves exactly as you'd expect, with no extra join. Has anyone else experienced this, or does anyone know how to fix it? SELECT [Extent1].[id] AS [id], [Extent1].[ProductName] AS [ProductName] FROM [dbo].[Products] AS [Extent1] LEFT OUTER JOIN [dbo].[BestSeller] AS [Extent2] ON [Extent1].[id] = [Extent2].[id] LEFT OUTER JOIN [dbo].[BestSeller] AS [Extent3] ON [Extent2].[id] = [Extent3].[id] ORDER BY [Extent3].[rating] ASC
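
    A hedged sketch of one workaround sometimes suggested for questions like this: project the nullable rating out explicitly instead of navigating through p.BestSeller inside OrderBy, so the optional relationship is only traversed once. The Products/BestSeller/rating names follow the question (rating assumed to be an int); whether this actually removes the duplicate join depends on the EF version.

        // Sketch only: sort on an explicitly projected nullable rating.
        var products = db.Products
            .Select(p => new
            {
                Product = p,
                // null when there is no BestSeller row (0..1 relationship)
                Rating = (int?)p.BestSeller.rating
            })
            .OrderBy(x => x.Rating)
            .Select(x => x.Product)
            .ToList();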

    Read the article

  • Can I create many tables according to the same entity?

    - by jacob
    I want to create many tables dynamically that all share the same entity structure, and then refer to each dynamically created table by its table name. What I understood from the Hibernate reference documentation is that I can only create one table, and it must match the entity exactly, so I can't find a solution to my problem. If you know of any relevant open source project, tip, or web site related to this, please let me know. Thanks, as always.

    Read the article

  • Can I create a transaction using the ADO.NET Entity Data Model?

    - by Junior Mayhé
    Hi, is it possible, in the following try-catch, to execute a set of statements as a single transaction using the ADO.NET Entity Data Model? [ValidateInput(false)] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(Customer c) { try { c.Created = DateTime.Now; c.Active = true; c.FullName = Request.Form["FirstName"]; db.AddToCustomer(c); db.SaveChanges(); Log log = new Log(); // another entity model object log.Created = DateTime.Now; log.Message = string.Format(@"A new customer was created with customerID {0}", c.CustomerID); db.AddToLog(log); db.SaveChanges(); return RedirectToAction("CreateSuccess", "Customer"); } catch { return View(); } } Any thoughts would be very appreciated.
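
    One commonly cited approach for this situation (a sketch only, not verified against the asker's model) is to wrap both SaveChanges calls in a System.Transactions.TransactionScope so the customer and log rows commit or roll back together. The db, Customer and Log names follow the question; everything else is illustrative.

        using System.Transactions;

        // Sketch: both SaveChanges calls enlist in the same ambient transaction.
        using (var scope = new TransactionScope())
        {
            c.Created = DateTime.Now;
            c.Active = true;
            db.AddToCustomer(c);
            db.SaveChanges();

            var log = new Log
            {
                Created = DateTime.Now,
                Message = string.Format("A new customer was created with customerID {0}", c.CustomerID)
            };
            db.AddToLog(log);
            db.SaveChanges();

            scope.Complete(); // commit; disposing without Complete() rolls everything back
        }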

    Read the article

  • Foreign Key vs. Independent Relationships - is there improvement with Entity Framework 5?

    - by zam6ak
    I have read several articles and questions on the concept of foreign key vs. independent relationships when using Entity Framework, and I am still not 100% sure which way to go. I would prefer not to "pollute" my domain POCOs with a property that exists only for the FK relationship when I already have a reference property for the "has a" object. My questions are (looking at you, @EFTeam, @Ladislav Mrnka): are there any improvements on this subject in the upcoming Entity Framework 5? Are there more advantages to using FK associations instead of independent associations (particularly with code first)?
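
    For reference, a minimal sketch of the two mapping styles being compared (class and property names are hypothetical): a foreign key association exposes the key alongside the navigation property, while an independent association exposes only the navigation property and lets EF track the relationship itself.

        public class Customer { public int Id { get; set; } }

        // Foreign key association: the FK column is part of the POCO.
        public class OrderWithFk
        {
            public int Id { get; set; }
            public int CustomerId { get; set; }            // the "pollution" the question refers to
            public virtual Customer Customer { get; set; }
        }

        // Independent association: only the object reference is exposed.
        public class OrderIndependent
        {
            public int Id { get; set; }
            public virtual Customer Customer { get; set; }
        }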

    Read the article

  • Framework 4 Features: Support for Timed Jobs

    - by Anthony Shorten
    One of the new features of the Oracle Utilities Application Framework V4 is the ability of the batch framework to support Timed Batch. Traditionally, batch is associated with set processing in the background in a fixed time frame, for example billing customers. Over the last few versions, the products have needed a more monitoring-style batch process. A monitor is a batch process that looks for specific business events based upon record status or other pieces of data. For example, the framework contains a fact monitor (F1-FCTRN) that can be configured to look for specific statuses or other conditions; the batch process then uses the instructions on the object to determine what to do. To support monitor-style processing, you need to run the process regularly a number of times a day (for example, every ten minutes). Traditional batch could support this, but it was not as optimal as expected (if you are a site using the old Workflow subsystem, you understand what I mean). The batch framework was therefore extended with additional facilities to support timed batch (and continuous batch, which is another new feature for another blog entry). The new facilities include:
      • The batch control now defines the job as Timed or Not Timed. Non-timed batch jobs are traditional batch jobs.
      • The timer interval (the interval between executions) can be specified.
      • The timer can be made active or inactive. Only active timers are executed. Setting Timer Active to inactive will stop the job at the next time interval; setting it to active will start the execution of the timed job.
      • You can specify the credentials, the language used to view the messages, and an email address to send a summary of the execution to. The email address is optional and requires an email server to be specified in the relevant feature configuration.
      • You can specify the thread limits and commit intervals to be used for the multiple executions.
    Once a timer job is defined, it will be executed automatically by the Business Application Server process if the DEFAULT threadpool is active. This threadpool can be started using the online batch daemon (for non-production) or externally using the threadpoolworker utility. At that time, any batch process with Timer Active set to active and a Batch Control Type of Timed will begin executing. Because timed jobs are executed automatically, they do not appear in any external schedule and are not managed by an external scheduler (except via the DEFAULT threadpool itself, of course). If the job has no work to do as the timer interval is reached, that instance of the job is stopped and the next instance started at the next timer interval. If there is still work to complete when the timer interval is reached, the instance will continue processing until the work is complete, then the instance will be stopped and the next instance scheduled for the next timer interval. One of the key ways of optimizing this processing is to set the timer interval correctly for the expected workload. This is an interesting new feature of the batch framework and we anticipate it will come in handy for specific business situations with the monitor processes.

    Read the article

  • NHibernate & Cancelling Changes to Entities

    - by user129609
    Hi, this seems like it would be a common issue, but I don't know the best way to solve it. I want to be able to send an entity to a view, have changes made to the entity in the view, and then cancel (remove) those changes if the user cancels out of the view. What is the proper way to do this? Here are two options I have, but I think there should be others that are better: 1) Take an entity, create a clone, and send the clone to the view; if changes are accepted, update the original entity with the clone's values. 2) Send the entity to the view; if the user cancels, remove the entity from NHibernate's cache and reload it from the database. For (2), the issue for me is that the old entity could still be referenced throughout my project after it has been removed from the cache.
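
    A sketch of option 2 using the NHibernate ISession API (assuming the entity was loaded in the current session; MyEntity and Id are placeholder names). Refresh overwrites the in-memory state from the database and keeps the same instance, which avoids the stale-reference concern; Evict plus a fresh Get is the literal "remove from cache and reload" variant described in the question.

        // Sketch: two ways to discard in-memory edits when the user cancels.
        // "session" is the NHibernate ISession the entity was loaded from.

        // a) Keep the same instance but overwrite its state from the database,
        //    so references held elsewhere in the project stay valid.
        session.Refresh(entity);

        // b) Or detach the instance and load a fresh copy (the old reference
        //    will still hold the edited values, as the question notes).
        session.Evict(entity);
        var fresh = session.Get<MyEntity>(entity.Id);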

    Read the article

  • Why does Ruby have Rails while Python has no central framework?

    - by yar
    This is a historical question, not a comparison-between-languages question: this article from 2005 talks about the lack of a single, central framework for Python. For Ruby, that framework is clearly Rails. Why, historically speaking, did this happen for Ruby but not for Python? (Or did it happen, and that framework is Django?) Also, the hypothetical questions: would Python be more popular if it had one good framework? Would Ruby be less popular if it had no central framework? [Please avoid discussions of whether Ruby or Python is better, which is just too open-ended to answer.] Edit: Though I thought this was obvious, I'm not saying that other frameworks do not exist for Ruby, but rather that the big one in terms of popularity is Rails. Also, I should mention that I'm not saying that frameworks for Python are not as good as (or better than) Rails. Every framework has its pros and cons, but Rails seems, as Ben Blank says in one of the comments below, to have surpassed Ruby in terms of popularity. There are no examples of that on the Python side. WHY? That's the question.

    Read the article

  • Architecture: am I doing things right?

    - by Jeremy D
    I'm trying to use a '~classic' layered architecture using .NET and Entity Framework. We are starting from a legacy database which is a little bit crappy: inconsistent naming, unneeded views (views referencing other views, select * views, etc.), aggregated columns, potatoes and carrots in the same table, and so on. So I ended up fully isolating my database structure from my domain model. To do so, EF entities are hidden from the presentation layer. The goal is to permit easier database refactoring while lowering its impact on applications. I'm now facing a lot of challenges and I'm starting to ask myself if I'm doing things right. My domain model is highly volatile; it keeps evolving with the apps as new field needs arise. Its complexity keeps rising and the classes it contains are starting to get a lot of properties. Creating an include strategy and reprojecting to EF is very tricky (my domain objects don't have any kind of lazy/eager loading relationship properties): DomainInclude<Domain.Model.Bar>.Include("Customers").Include("Customers.Friends") // ...maps to... IFooContext.Bars.Include(...).Include(...).Where(...) Some frameworks break the isolation layers (DevExpress grids need either XPO or IQueryable for filtering and paging large data sets). I'm starting to ask myself whether: the isolation of EF auto-generated entities is an unneeded cost; I should allow frameworks to hit IQueryable (a slippery slope to hell? It's really hard to isolate the DevExpress framework; any successful experience?); the high volatility of my domain model is normal. Did you have similar difficulties? Any advice based on experience?
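
    For context, the isolation being described usually ends up as hand-written projection inside the repository layer, roughly like this sketch. Names such as IFooContext and Bar come from the question; the rest (repository class, Customer fields) is hypothetical. This manual reprojection is exactly where the include-strategy friction tends to come from.

        // Sketch: repository maps EF-generated entities to hand-rolled domain objects.
        public class BarRepository
        {
            private readonly IFooContext _context; // EF context, hidden from the presentation layer

            public BarRepository(IFooContext context) { _context = context; }

            public Domain.Model.Bar GetBar(int id)
            {
                var efBar = _context.Bars
                    .Include("Customers")
                    .Include("Customers.Friends")
                    .Single(b => b.Id == id);

                // Manual reprojection: every new domain field needs another mapping line.
                return new Domain.Model.Bar
                {
                    Id = efBar.Id,
                    Customers = efBar.Customers
                        .Select(c => new Domain.Model.Customer { Id = c.Id, Name = c.Name })
                        .ToList()
                };
            }
        }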

    Read the article

  • What's wrong with performing unit tests against concrete implementations if your frameworks are not going to change?

    - by palm snow
    First, a bit of background: we are re-architecting our product suite that was written 10 years ago and has served its purpose. One thing that we cannot change is the database schema, as we have a 500+ client base using this system, and our schema has over 150 tables. We have decided on using Entity Framework 4.1 as the DAL and are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go in setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing the Repository and Unit of Work patterns, mocking frameworks, etc. This means there will be a cost and an investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, that will mostly be hand written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repositories, mocking, etc.) when we are fairly certain of the storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to change it whenever you like; however, that is very likely a YAGNI in our case. What we need is more integration testing, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems more beneficial in our case. Any thoughts on whether I am missing anything here?
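
    A minimal sketch of the integration-test style being described: the test talks to the real EF 4.1 context against a known test database rather than a mocked repository. NUnit syntax is assumed, and the MyEntities/Customer names are placeholders, not the poster's actual types.

        using NUnit.Framework;

        [TestFixture]
        public class CustomerPersistenceTests
        {
            [Test]
            public void Saving_a_customer_assigns_an_id()
            {
                // Uses the concrete EF context pointed at a test database via configuration;
                // no repository abstraction or mocking framework involved.
                using (var db = new MyEntities())
                {
                    var customer = new Customer { FullName = "Integration Test" };
                    db.Customers.Add(customer);
                    db.SaveChanges();

                    Assert.That(customer.CustomerID, Is.GreaterThan(0));
                }
            }
        }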

    Read the article

  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
    I mentioned the product database users fleetingly in the last blog post and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework uses up to three users initially as part of the base operations of the product. The type of database supported (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For publishing brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other databases supported. For Oracle database customers we ship three distinct database users:
      • Administration User (SPLADM or CISADM by default) - This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than what is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for database administration use only.
      • Product Read Write User (SPLUSER or CISUSER by default) - This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permissions to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but instead use other DCL (Data Control Language) statements and facilities to simulate this user.
      • Product Read User (SPLREAD or CISREAD by default) - This is the database user that has read-only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permissions to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but instead use other DCL (Data Control Language) statements and facilities to simulate this user.
    You may notice the words "by default" in the list above. The values supplied with the installer are the defaults and can be changed to whatever the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database, pointing to the same schema. To manage the permissions for these users, a utility is provided with the installation (oragensec for Oracle, db2gensec for DB2 or msqlgensec for SQL Server) that generates the security definitions for the above users. It can be executed a number of times for each schema to give users the appropriate permissions. For example, it is possible to define more than one read/write user to access the database. This is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact, you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read-only role) and a CISUSER role (read/write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed it uses the role to determine the permissions.
    To give you a case study, my underpowered laptop has multiple installations of multiple products on it, but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long as I have set up the user IDs appropriately. This means:
      • Creating the users with the appropriate roles. I use the common CISUSER and CISREAD roles across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations and the CISREAD role with the user you wish to use for read-only operations. The role is treated as a tag that indicates to the oragensec utility which permissions to assign to the user. The utilities for the other database types essentially do the same, obviously using the technology available within those databases.
      • Running oragensec for the read/write user and read-only user against the appropriate administration user (I will abbreviate it to the ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user names for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I used MWMADM for the administration user and MWMUSER and MWMREAD for the associated users. You get the picture. When I run oragensec (once for each ADM user), I know which other users to associate with it.
      • Remembering to rerun oragensec against the users if I run upgrades, service packs or database-based single fixes. This ensures that the users stay in synchronization with the ADM user.
    As a side note, for those who do not understand the difference between DML, DCL and DDL:
      • DDL (Data Definition Language) - SQL statements that define the database schema and the structures within it. Statements such as CREATE and DROP are examples of DDL.
      • DCL (Data Control Language) - SQL statements that define the database-level permissions to DDL-maintained objects within the database. Statements such as GRANT and REVOKE are examples of DCL.
      • DML (Data Manipulation Language) - SQL statements that alter the data within the tables. Statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML.
    Hope this has clarified the database user support. Remember that in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER, which allows the database to still use the administration user for the main processing while making the database session more traceable.

    Read the article

  • Will TSQL become useless because of new ORMs? [closed]

    - by Saeed Neamati
    With the introduction of LINQ to SQL, I found myself and my .NET developer colleagues gradually moving from TSQL to C# to create queries against the database. Entity Framework made that shift almost permanent. It's now nearly 2 years that I have used LINQ to SQL and LINQ to Entities and I haven't used TSQL much. Yesterday, a colleague encountered a problem (he had to create a stored procedure) and we went to help him. We all found that our TSQL knowledge had certainly diminished, and a simple SP that seemed trivial to us 2 or 3 years ago was a challenge to solve yesterday. Thus it came to my mind that while TSQL's life is attached to SQL Server, and logically as long as SQL Server lives and doesn't change its SQL language TSQL will also live, in practice it might die, and soon very few people might know it. Am I right? Does the existence of ORMs like Entity Framework threaten TSQL's life and usability?

    Read the article

  • Visual Studio 2012 Release Candidate available, with .NET Framework 4.5 and Team Foundation Server 2012

    Visual Studio 2012 Release Candidate available with .NET Framework 4.5 and Team Foundation Server 2012. As has been customary since the publication of the Windows 8 Developer Preview, the OS always comes with Microsoft's development tools. The company is not departing from this rule and, following the Windows 8 Release Preview, is publishing the Release Candidate of Visual Studio 11, officially named Visual Studio 2012, along with the .NET Framework 4.5 and Team Foundation Server 2012. The development environment, which is entering the final stretch of its development cycle, sports for...

    Read the article

  • SubCut Scala Dependency Injection Framework

    - by kerry
    It’s no secret I am a fan of dependency injection. So I was happy to hear that Dick Wall of the Java Posse recently released a dependency injection framework for Scala. Called SubCut, or Scala Uniquely Bound Classes Under Traits, the project is a ‘mix of service locator and dependency injection patterns designed to provide an idiomatic way of providing configured dependencies to scala applications’. It’s hosted on GitHub, so ‘git’ (rimshot) over there and try it out: Dependency injection framework for Scala

    Read the article

  • Microsoft launches Hadoop for Windows Server and Windows Azure, first beta version of the "HDInsight" framework

    Microsoft launches Hadoop for Windows Server and Windows Azure: first beta version of the HDInsight framework. Microsoft is launching a public beta version of the Hadoop framework for Windows Server and Windows Azure. The two new products carry the official names Windows Azure HDInsight Service and Microsoft HDInsight Server for Windows. These products were born of a partnership between Microsoft and Hortonworks, a software vendor and provider of commercial Hadoop solutions. One month after the partnership was announced in autumn 2011, Microsoft gave up building its own big data solution, named Dryad.

    Read the article

  • Accessing Repositories from Domain

    - by Paul T Davies
    Say we have a task logging system; when a task is logged, the user specifies a category and the task defaults to a status of 'Outstanding'. Assume in this instance that Category and Status have to be implemented as entities. Normally I would do this: Application layer: public class TaskService { //... public void Add(Guid categoryId, string description) { var category = _categoryRepository.GetById(categoryId); var status = _statusRepository.GetById(Constants.Status.OutstandingId); var task = Task.Create(category, status, description); _taskRepository.Save(task); } } Entity: public class Task { //... public static Task Create(Category category, Status status, string description) { return new Task { Category = category, Status = status, Description = description }; } } I do it like this because I am consistently told that entities should not access the repositories, but it would make much more sense to me if I did this: Entity: public class Task { //... public static Task Create(Category category, string description) { return new Task { Category = category, Status = _statusRepository.GetById(Constants.Status.OutstandingId), Description = description }; } } The status repository is dependency-injected anyway, so there is no real dependency, and this feels more to me like it is the domain that is making the decision that a task defaults to outstanding. The previous version feels like it is the application layer making that decision. And why are repository contracts often in the domain if this should not be a possibility? Here is a more extreme example, where the domain decides urgency: Entity: public class Task { //... public static Task Create(Category category, string description) { var task = new Task { Category = category, Status = _statusRepository.GetById(Constants.Status.OutstandingId), Description = description }; if (someCondition) { if (someValue > anotherValue) { task.Urgency = _urgencyRepository.GetById(Constants.Urgency.UrgentId); } else { task.Urgency = _urgencyRepository.GetById(Constants.Urgency.SemiUrgentId); } } else { task.Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId); } return task; } } There is no way you would want to pass in all possible versions of Urgency, and no way you would want to calculate this business logic in the application layer, so surely this would be the most appropriate way? So is this a valid reason to access repositories from the domain?
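
    One compromise that keeps repository calls out of the entity itself, but still lets the domain own the defaulting rule, is a domain-level factory. This is a sketch only: the Task, Category, Status and Constants names follow the question, while the factory class and IStatusRepository interface name are hypothetical.

        // Sketch: the "tasks default to Outstanding" decision lives in the domain,
        // the Task entity stays repository-free.
        public class TaskFactory
        {
            private readonly IStatusRepository _statusRepository;

            public TaskFactory(IStatusRepository statusRepository)
            {
                _statusRepository = statusRepository;
            }

            public Task Create(Category category, string description)
            {
                var outstanding = _statusRepository.GetById(Constants.Status.OutstandingId);
                return Task.Create(category, outstanding, description);
            }
        }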

    Read the article

  • How does an Engine like Source process entities?

    - by Júlio Souza
    [Background information] In the Source engine (and its predecessors, GoldSrc and Quake's engine), game objects are divided into two types: world and entities. The world is the map geometry, and the entities are players, particles, sounds, scores, etc. (for the Source engine). Every entity has a think function, which does all the logic for that entity. So, if everything that needs to be processed derives from a base class with the think function, the game engine could store everything in a list and, on every frame, loop through it and call that function. At first look this idea is reasonable, but it can take too many resources if the game has a lot of entities. [End of background information] So, how does an engine like Source take care of (process, update, draw, etc.) the game objects?
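
    A toy sketch of the scheme the question describes (not the actual Source implementation): entities derive from a base class with a think method, and the engine walks the list each frame. Real engines typically soften the per-frame cost by letting each entity schedule its next think time instead of thinking every frame, which the sketch hints at.

        using System.Collections.Generic;

        // Toy sketch only, not Source engine code.
        public abstract class Entity
        {
            // Time (in seconds) at which this entity next wants to think;
            // idle entities push this far into the future.
            public float NextThinkTime { get; protected set; }

            public abstract void Think(float currentTime);
        }

        public class World
        {
            private readonly List<Entity> _entities = new List<Entity>();

            public void Add(Entity e) { _entities.Add(e); }

            public void Frame(float currentTime)
            {
                foreach (var entity in _entities)
                {
                    // Skip entities that scheduled their next think for later,
                    // so idle entities cost almost nothing per frame.
                    if (currentTime >= entity.NextThinkTime)
                        entity.Think(currentTime);
                }
            }
        }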

    Read the article

  • Java-based portal framework

    - by Jatin
    We have an application that needs to be built, and we are looking for a Java-based portal framework. In the last few days I have gone through over 10 different open source options: Liferay, Jetspeed 2, GateIn, etc. But they are all too complex to be judged so quickly. Can anyone suggest a framework which is easy to use but has the functionality to handle complex situations? Most importantly, the portlets will run Flash/HTML5 content. Thanks.

    Read the article

  • How to profile LINQ to Entities queries in your asp.net applications - part 3

    - by nikolaosk
    In this post I will continue exploring ways to profile database activity when using Entity Framework as the data access layer in our applications. If you want to read the first post of the series, click here. If you want to read the second post of the series, click here. In this post I will use the excellent Entity Framework Profiler (the best tool for EF profiling). You can download the trial, fully functional edition of this tool from here. I will use the previous example...(read more)

    Read the article

  • How to profile LINQ to Entities queries in your asp.net applications - part 2

    - by nikolaosk
    In this post I will continue exploring ways to profile database activity when using Entity Framework as the data access layer in our applications. I will use a simple asp.net web site and EF to demonstrate this. If you want to read the first post of the series, click here. In this post I will use the Tracing Provider Wrappers, which extend Entity Framework. You can download the whole solution/sample project from here. The providers were developed by Jaroslaw Kowalski. 1) Unzip...(read more)

    Read the article

  • qooxdoo 4.0: the JavaScript framework adopts Pointer Events, and the team unifies devices

    qooxdoo 4.0: the JavaScript framework adopts Pointer Events, and the team unifies devices (desktop, mobile and web site). qooxdoo is a JavaScript framework built on a class system. It is open source and enables the development of so-called "rich" web applications (RIA). The main new feature of version 4.0 concerns the GUI tooling for the three platform types (web site, mobile and desktop). Device-independent input events: qooxdoo applications can now...

    Read the article

  • Running Framework 4.0 with PowerShell

    - by Mike Koerner
    I had problems running scripts with .NET Framework 4.0 assemblies I had created. The error I was getting was: Add-Type : Could not load file or assembly 'file:///C:\myDLL.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. I had to add the supported framework to the powershell.exe.config file: <supportedRuntime version="v4.0.30319"/> I still had a problem running the assembly, so I had to recompile it and set "Generate serialization assembly" to off.

    Read the article

  • Detach an entity from a JPA persistence context (JPA 2.0 / Hibernate / EJB 3 / J2EE 6)

    - by Julien
    Hi, I wrote a stateless EJB method that allows a caller to get an entity in "read-only" mode. The way to do this is to get the entity with the EntityManager and then detach it (using the JPA 2.0 EntityManager). My code is the following: @PersistenceContext private EntityManager entityManager; public T getEntity(int entityId, Class<T> specificClass, boolean readOnly) throws Exception { try { T entity = (T) entityManager.find(specificClass, entityId); if (readOnly) { entityManager.detach(entity); } return entity; } catch (Exception e) { logger.error("", e); throw e; } } Getting the entity works fine, but the call to the detach method returns the following error: GRAVE: javax.ejb.EJBException at ... Caused by: java.lang.AbstractMethodError: org.hibernate.ejb.EntityManagerImpl.detach(Ljava/lang/Object;)V at com.sun.enterprise.container.common.impl.EntityManagerWrapper.detach(EntityManagerWrapper.java:973) at com.mycomp.dal.MyEJB.getEntity(MyEJB.java:37) I can't get more information and don't understand what the problem is... Could somebody help?

    Read the article
