Search Results

Search found 1488 results on 60 pages for 'kohana orm'.

Page 11/60

  • How do I display a dynamically resized image in Kohana without saving it?

    - by Ygam
    I have an image path string stored in a database. It looks like: img/uploads/imagename.jpg. In my controller I have:

        $this->image = new Image($wines->image); // assuming that I have a wines table with the image property
        $this->image->resize(60, 250, Image::AUTO);
        echo $this->image->render(); // the problem is that nothing is rendered

    The image path I am passing at the Image object instantiation is the result of a query. Is there a better way of doing this?

    Read the article

  • Combining 2 PHP frameworks that both implement __() internationalization function

    - by mclin
    Never had a question I couldn't google until now, and it may be a doozy: I have a PHP site that combines WordPress with Kohana - WordPress for the blog, and Kohana for the custom functionality. This is done using a WordPress plugin that stitches them together. This works great except they both define a __() internationalization function, with different arguments etc., so once WordPress has overridden Kohana's __(), if Kohana calls __() it explodes. I'm not that familiar with PHP so this might be naive, but shouldn't this stuff be namespaced? Is there any way, other than changing the source of one or the other framework, to allow them to call their own respective __()?

    Read the article

  • Thick Models vs. Business Logic: Where do you draw the distinction?

    - by TokenMacGuy
    Today I got into a heated debate with another developer at my organization about where and how to add methods to database mapped classes. We use sqlalchemy, and a major part of the existing code base in our database models is little more than a bag of mapped properties with a class name, a nearly mechanical translation from database tables to python objects. In the argument, my position was that the primary value of using an ORM was that you can attach low level behaviors and algorithms to the mapped classes. Models are classes first, and secondarily persistent (they could be persistent using xml in a filesystem, you don't need to care). His view was that any behavior at all is "business logic", and necessarily belongs anywhere but in the persistent model, which is to be used for database persistence only. I certainly do think that there is a distinction between business logic, which should be separated since it has some isolation from the lower level of how it gets implemented, and domain logic, which I believe is the abstraction provided by the model classes argued about in the previous paragraph, but I'm having a hard time putting my finger on what that distinction is. I have a better sense of what might be the API (which, in our case, is HTTP "ReSTful"), in that users invoke the API with what they want to do, distinct from what they are allowed to do, and how it gets done. tl;dr: What kinds of things can or should go in a method in a mapped class when using an ORM, and what should be left out, to live in another layer of abstraction?

    Read the article

  • Anemic Domain Model, Business Logic and DataMapper (PHP)

    - by sunwukung
    I've implemented a rudimentary ORM layer based on DataMapper (I don't want to use a full-blown ORM like Propel/Doctrine - for anything beyond simple fetch/save ops I prefer to access the data layer directly using a SQL abstraction layer). Following the DataMapper pattern, I've endeavoured to keep all persistence operations in the Mapper - including the location of related entities. My Entities have access to their Mapper, although I try not to call Mapper logic from the Entity interface (although this would be simple enough). The result is:

        // get a mapper and produce an entity
        $ProductMapper = $di->get('product_mapper');
        $Product = $ProductMapper->find('[email protected]', 'email');
        // .. mutate some values .. save
        $ProductMapper->save($Product);
        // uses __get to trigger relation acquisition
        $Manufacturer = $Product->manufacturer;

    I've read some articles regarding the concept of an Anemic Domain Model, i.e. a Model that does not contain any "business logic". When demonstrating the sort of business logic ideally suited to a Domain Model, however, acquiring related data items is a common example. Therefore I wanted to ask this question: Is persistence logic appropriate in Domain Model objects?

    Read the article

  • How should I model the database for this problem? And which ORM can handle it?

    - by Kristof Claes
    I need to build some sort of a custom CMS for a client of ours. These are some of the functional requirements:

    - Must be able to manage the list of Pages in the site
    - Each Page can contain a number of ColumnGroups
    - A ColumnGroup is nothing more than a list of Columns in a certain ColumnGroupLayout. For example: "one column taking up the entire width of the page", "two columns each taking up half of the width", ...
    - Each Column can contain a number of ContentBlocks
    - Examples of a ContentBlock are: TextBlock, NewsBlock, PictureBlock, ...
    - ContentBlocks can be given a certain sorting within a Column
    - A ContentBlock can be put in different Columns so that content can be reused without having to be duplicated.

    My first quick draft of how this could look in C# code (we're using ASP.NET 4.0 to develop the CMS) can be found at the bottom of my question. One of the technical requirements is that it must be as easy as possible to add new types of ContentBlocks to the CMS. So I would like to model everything as flexibly as possible. Unfortunately, I'm already stuck at trying to figure out how the database should look. One of the problems I'm having has to do with sorting different types of ContentBlocks in a Column. I guess each type of ContentBlock (like TextBlock, NewsBlock, PictureBlock, ...) should have its own table in the database because each has its own different fields. A TextBlock might only have a field called Text whereas a NewsBlock might have fields for the Text, the Summary, the PublicationDate, ... Since one Column can have ContentBlocks located in different tables, I guess I'll have to create a many-to-many association for each type of ContentBlock. For example: ColumnTextBlocks, ColumnNewsBlocks and ColumnPictureBlocks. The problem I have with this setup is the sorting of the different ContentBlocks in a column. This could be something like: TextBlock, NewsBlock, TextBlock, TextBlock, PictureBlock. Where do I store the sorting number? If I store them in the association tables, I'll have to update a lot of tables when changing the sorting order of ContentBlocks in a Column. Is this a good approach to the problem? Basically, my question is: What is the best way to model this, keeping in mind that it should be easy to add new types of ContentBlocks? My next question is: What ORM can deal with that kind of modeling? To be honest, we are ORM virgins at work. I have been reading a bit about Linq-to-SQL and NHibernate, but we have no experience with them. Because of the IList in the Column class (see code below) I think we can rule out Linq-to-SQL, right? Can NHibernate handle the mapping of data from many different tables to one IList? Also keep in mind that this is just a very small portion of the domain. Other parts are Users belonging to a certain UserGroup having certain Permissions on Pages, ColumnGroups, Columns and ContentBlocks.
    The code (just a quick first draft):

        public class Page
        {
            public int PageID { get; set; }
            public string Title { get; set; }
            public string Description { get; set; }
            public string Keywords { get; set; }
            public IList<ColumnGroup> ColumnGroups { get; set; }
        }

        public class ColumnGroup
        {
            public enum ColumnGroupLayout { OneColumn, HalfHalf, NarrowWide, WideNarrow }

            public int ColumnGroupID { get; set; }
            public ColumnGroupLayout Layout { get; set; }
            public IList<Column> Columns { get; set; }
        }

        public class Column
        {
            public int ColumnID { get; set; }
            public IList<IContentBlock> ContentBlocks { get; set; }
        }

        public interface IContentBlock
        {
            string GetSummary();
        }

        public class TextBlock : IContentBlock
        {
            public string GetSummary() { return "I am a piece of text."; }
        }

        public class NewsBlock : IContentBlock
        {
            public string GetSummary() { return "I am a news item."; }
        }
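    One possible way to handle the sorting question, sketched below with illustrative names (an assumption, not the asker's design): map every block type to a single ContentBlock base (table-per-hierarchy) and keep the sort number on the Column-to-ContentBlock association row, so only one association table has to change when the order changes.

        using System.Collections.Generic;

        public abstract class ContentBlock
        {
            public int ContentBlockID { get; set; }
            public abstract string GetSummary();
        }

        public class TextBlock : ContentBlock
        {
            public string Text { get; set; }
            public override string GetSummary() { return Text; }
        }

        // One row per placement of a block in a column; the same block can appear in
        // several columns, each placement carrying its own SortOrder.
        public class ColumnContentBlock
        {
            public int ColumnID { get; set; }
            public int ContentBlockID { get; set; }
            public int SortOrder { get; set; }
            public ContentBlock ContentBlock { get; set; }
        }

        public class Column
        {
            public int ColumnID { get; set; }
            public IList<ColumnContentBlock> Placements { get; set; } // order by SortOrder when loading
        }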

    Read the article

  • Would an ORM have any way of determining that a SQLite column contains date-times or booleans?

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm envisioning a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not as bad as I fear. If you have any experience with this or a similar scenario, how did you handle it?
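    If the generated properties really do come through as SQLite's storage types (dates as TEXT, booleans as INTEGER), one low-ceremony workaround is a partial class with strongly typed wrapper properties. A rough sketch only; the *Raw property names are hypothetical stand-ins for whatever the generator emits:

        using System;
        using System.Globalization;

        public partial class Order
        {
            // stand-ins for the generated, "dumbed down" members
            public string CreatedOnRaw { get; set; }   // ISO-8601 text in the database
            public long IsPaidRaw { get; set; }        // 0 or 1 in the database

            public DateTime CreatedOn
            {
                get { return DateTime.Parse(CreatedOnRaw, CultureInfo.InvariantCulture); }
                set { CreatedOnRaw = value.ToString("o", CultureInfo.InvariantCulture); }
            }

            public bool IsPaid
            {
                get { return IsPaidRaw != 0; }
                set { IsPaidRaw = value ? 1 : 0; }
            }
        }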

    Read the article

  • What are alternatives to standard ORM in a data access layer?

    - by swampsjohn
    We're all familiar with basic ORM with relational databases: an object corresponds to a row and an attribute in that object to a column (or some slight variation), though many ORMs add a lot of bells and whistles. I'm wondering what other alternatives there are (besides raw access to the database or whatever you're working with). Alternatives that just work with relational databases would be great, but ones that could encapsulate multiple types of backends besides just SQL (such as flat files, RSS, NoSQL, etc.) would be even better. I'm more interested in ideas than in specific implementations and what languages/platforms they work with, but please link to anything you think is interesting.

    Read the article

  • Is there an ORM that allows a "plugin" to extend the database?

    - by IP
    So, I've been searching for the answer to this, but I can't find anything. I have an Entity Framework Model (MyModel1) - for now, we'll say this contains a "Users" table. It's part of a big app that has a reference to an "Addresses" project. The addresses project contains an Entity Framework Model (MyModel2); this contains a Users table and an Addresses table (pointing to the same database). The main app has a control that edits the user, and in that control it has an "addresses" control which actually exists in the "Addresses" project. To make this work, the User control passes the User object down to the addresses control. However, as the User that's been passed belongs to MyModel1 and not MyModel2, another User object has to be loaded up before it can be used. This isn't ideal as I've had to load up the User twice. Is there a way of, say, MyModel2 extending MyModel1 that effectively just adds a relationship to "User"? Or is there an ORM that would handle this better? Or even a design pattern that would handle this better?

    Read the article

  • ColdFusion 9 ORM - Securing an object at a low level...

    Hiya: I wonder if anybody has an idea on this... I'm looking at securing a low level object in my model (a "member" object) so by default only certain information can be accessed from it. Here's a possible approach (damn sexy if it would work!):

    1) Add a property called "locked" - defaulting to "true" - to the object itself. It appears that the only option to do this, and not tie it to a db table column, is to use the formula attribute that takes a query. So to default locked to TRUE I've got:

        <cfproperty name="locked" formula="select 1" />

    2) Then I overwrite the existing setters and getters to use this, e.g.:

        <cffunction name="getFullname" returnType="string">
            <cfscript>
                if (this.getLocked()) {
                    return this.getScreenName();
                } else {
                    return this.getFullname();
                }
            </cfscript>
        </cffunction>

    3) When I use it like this:

        <p> #oMember.getFullName()# </p>

    it shows the ScreenName (great!), but when I do this:

        <cfset oMember.setLocked(false)>
        <p> #oMember.getFullName()# </p>

    it just hangs!!! It appears that attempting to set a property that's been defined using "formula" is a no-no. Any ideas? Any other way we can have properties attached to an ORM object that are gettable and settable without them being present in the db? Ideas appreciated!

    Read the article

  • Assembling an object graph without an ORM -- in the service layer or data layer?

    - by Hans Gruber
    At my current gig, our persistence layer uses IBatis going against SQL Server stored procedures (puke). IMHO, this approach has many disadvantages over the use of a "true" ORM such as NHibernate or EF, but the one I'm trying to address here revolves around all the boilerplate code needed to map data from a result set into an object graph. Say I have the following DTO object graph I want to return to my presentation layer:

        IEnumerable<CustomerDTO>
            |--> IEnumerable<AddressDTO>
            |--> LatestOrderDTO

    The way I've implemented this is to have a discrete method in my DAO class to return each IEnumerable<*DTO>, and then have my service class be responsible for orchestrating the calls to the DAO. It then returns the fully assembled object graph to the client:

        public class SomeService
        {
            private readonly IDao _someDao;

            public SomeService(IDao someDao)
            {
                this._someDao = someDao;
            }

            public IEnumerable<CustomerDTO> ListCustomersForHistory(int brokerId)
            {
                var customers = _someDao.ListCustomersForBroker(brokerId);
                foreach (var customer in customers)
                {
                    customer.Addresses = _someDao.ListCustomersAddresses(brokerId);
                    customer.LatestOrder = _someDao.GetCustomerLatestOrder(brokerId);
                }
                return customers;
            }
        }

    My question is: should this logic belong in the service layer, or should I make my DAO such that it instead returns the assembled object graph? If I was using NHibernate, I assume that this kind of relationship association between objects comes for "free"?
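    If the assembly stays in the service layer, one way to avoid a round trip per customer is to fetch each child set once for the broker and group it in memory. A sketch only: the extra DAO methods and the CustomerId properties are assumptions, and System.Linq is assumed to be imported.

        public IEnumerable<CustomerDTO> ListCustomersForHistory(int brokerId)
        {
            var customers = _someDao.ListCustomersForBroker(brokerId).ToList();

            // one query per child collection instead of one per customer
            var addressesByCustomer = _someDao.ListAddressesForBroker(brokerId)
                                              .ToLookup(a => a.CustomerId);
            var latestOrderByCustomer = _someDao.ListLatestOrdersForBroker(brokerId)
                                                .ToDictionary(o => o.CustomerId);

            foreach (var customer in customers)
            {
                customer.Addresses = addressesByCustomer[customer.CustomerId].ToList();

                LatestOrderDTO latestOrder;
                latestOrderByCustomer.TryGetValue(customer.CustomerId, out latestOrder);
                customer.LatestOrder = latestOrder;   // null when the customer has no orders
            }
            return customers;
        }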

    Read the article

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So, there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (eg. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object, On the other hand, this approach ends up leaking implementation details everywhere. 9 times out of 10 I'm happy just to read a JSON structure out of the DB and send it to the client, but 1 in every 10 times I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solves it.) There is a complicating factor in that our data schema is constantly changing rapidly, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form they are usually going to be used, or if it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.

    Read the article

  • Starter question about declarative-style SQLAlchemy relation()

    - by jfding
    I am quite new to SQLAlchemy, or even database programming, so maybe my question is too simple. Now I have two classes/tables:

        class User(Base):
            __tablename__ = 'users'
            id = Column(Integer, primary_key=True)
            name = Column(String(40))
            ...

        class Computer(Base):
            __tablename__ = 'comps'
            id = Column(Integer, primary_key=True)
            buyer_id = Column(None, ForeignKey('users.id'))
            user_id = Column(None, ForeignKey('users.id'))
            buyer = relation(User, backref=backref('buys', order_by=id))
            user = relation(User, backref=backref('usings', order_by=id))

    Of course, it cannot run. This is the backtrace:

        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/state.py", line 71, in initialize_instance
            fn(self, instance, args, kwargs)
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 1829, in _event_on_init
            instrumenting_mapper.compile()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 687, in compile
            mapper._post_configure_properties()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 716, in _post_configure_properties
            prop.init()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/interfaces.py", line 408, in init
            self.do_init()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/properties.py", line 716, in do_init
            self._determine_joins()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/properties.py", line 806, in _determine_joins
            "many-to-many relation, 'secondaryjoin' is needed as well." % (self))
        sqlalchemy.exc.ArgumentError: Could not determine join condition between parent/child tables on relation Package.maintainer. Specify a 'primaryjoin' expression. If this is a many-to-many relation, 'secondaryjoin' is needed as well.

    There are two foreign keys in class Computer, so the relation() calls cannot determine which one should be used. I think I must use extra arguments to specify it, right? But how? Thanks

    Read the article

  • What are the benefits of using ORM over XML Serialization/Deserialization?

    - by Tequila Jinx
    I've been reading about NHibernate and Microsoft's Entity Framework to perform Object Relational Mapping against my data access layer. I'm interested in the benefits of having an established framework to perform ORM, but I'm curious as to the performance costs of using it against standard XML Serialization and Deserialization. Right now, I develop stored procedures in Oracle and SQL Server that use XML Types for either input or output parameters and return or shred XML depending on need. I use a custom database command object that uses generics to deserialize the XML results into a specified serializable class. By using a combination of generics, XML (de)serialization and Microsoft's DAAB, I've got a process that's fairly simple to develop against regardless of the data source. Moreover, since I exclusively use Stored Procedures to perform database operations, I'm mostly protected from changes in the data structure. Here's an over-simplified example of what I've been doing:

        static void Main()
        {
            testXmlClass test = new testXmlClass(1);
            test.Name = "Foo";
            test.Save();
        }

        // Example Serializable Class ------------------------------------------------
        [XmlRootAttribute("test")]
        class testXmlClass
        {
            [XmlElement(Name = "id")]
            public int ID { set; get; }

            [XmlElement(Name = "name")]
            public string Name { set; get; }

            // create an instance of the class loaded with data.
            public testXmlClass(int id)
            {
                GenericDBProvider db = new GenericDBProvider();
                testXmlClass loaded = db.ExecuteSerializable<testXmlClass>("myGetByIDProcedure");
                this.ID = loaded.ID;
                this.Name = loaded.Name;
            }

            // save the class to the database...
            public void Save()
            {
                GenericDBProvider db = new GenericDBProvider();
                db.AddInParameter("myInputParameter", DbType.Xml, this);
                db.ExecuteSerializableNonQuery("mySaveProcedure");
            }
        }

        // Database Handler ----------------------------------------------------------
        class GenericDBProvider
        {
            public T ExecuteSerializable<T>(string commandText) where T : class
            {
                XmlSerializer xml = new XmlSerializer(typeof(T));
                // connection and command code is assumed for the purposes of this example.
                // the final results basically just come down to...
                return xml.Deserialize(commandResults) as T;
            }

            public void ExecuteSerializableNonQuery(string commandText)
            {
                // once again, connection and command code is assumed...
                // basically, just execute the command along with the specified
                // parameters which have been serialized.
            }

            public void AddInParameter(string name, DbType type, object value)
            {
                StringWriter w = new StringWriter();
                XmlSerializer x = new XmlSerializer(value.GetType());

                // handle serialization for serializable classes.
                if (type == DbType.Xml && (value.GetType() != typeof(System.String)))
                {
                    x.Serialize(w, value);
                    w.Close();
                    // store serialized object in a DbParameterCollection accessible
                    // to my other methods.
                }
                else
                {
                    // handle all other parameter types
                }
            }
        }

    I'm starting a new project which will rely heavily on database operations. I'm very curious to know whether my current practices will be sustainable in a high-traffic situation and whether or not I should consider switching to NHibernate or Microsoft's Entity Framework to perform what essentially seems to boil down to the same thing I'm currently doing. I appreciate any advice you may have.

    Read the article

  • How to convert an ORM entity to its subclass using Hibernate?

    - by Gaaston
    Hi everybody, for example, I have two classes: Person and Employee (Employee is a subclass of Person). Person has a lastname and a firstname. Employee also has a salary. On the client side, I have a single HTML form where I can fill in the person information (like lastname and firstname). I also have a "switch" between "Person" and "Employee", and if the switch is on Employee I can fill in the salary field. On the server side, servlets receive information from the client and use the Hibernate framework to create/update data in the database. The mapping I'm using is a single table for persons and employees, with a discriminator. I don't know how to convert a Person into an Employee. I first tried to: load the Person p from the database, create an empty Employee e object, copy the values from p into e, set the salary value, and save e into the database. But I couldn't, as I also copied the ID, so Hibernate told me there were two instances with the same id. And I can't cast a Person into an Employee directly, as Person is Employee's superclass. There seems to be a dirty way: delete the person and create an employee with the same information, but I don't really like it. So I'd appreciate any help on that :) Some details: The person class:

        public class Person {
            protected int id;
            protected String firstName;
            protected String lastName;
            // usual getters and setters
        }

    The employee class:

        public class Employee extends Person {
            // string for now
            protected String salary;
            // usual getters and setters
        }

    And in the servlet:

        // type is the "switch"
        if (request.getParameter("type").equals("Employee")) {
            Employee employee = daoPerson.getEmployee(Integer.valueOf(request.getParameter("ID")));
            modifyPerson(employee, request);
            employee.setSalary(request.getParameter("salary"));
            daoPerson.save(employee);
        } else {
            Person person = daoPerson.getPerson(Integer.valueOf(request.getParameter("ID")));
            modifyPerson(person, request);
            daoPerson.save(person);
        }

    And finally, the loading (in the DAO):

        public Person getPerson(int ID) {
            Session session = HibernateSessionFactory.getSession();
            Person p = (Person) session.load(Person.class, new Integer(ID));
            return p;
        }

        public Employee getEmployee(int ID) {
            Session session = HibernateSessionFactory.getSession();
            Employee e = (Employee) session.load(Employee.class, new Integer(ID));
            return e;
        }

    With this, I'm getting a ClassCastException when trying to load a Person using getEmployee. XML Hibernate mapping:

        <class name="domain.Person" table="PERSON" discriminator-value="P">
            <id name="id" type="int">
                <column name="ID" />
                <generator class="native" />
            </id>
            <discriminator column="type" type="character"/>
            <property name="firstName" type="java.lang.String">
                <column name="FIRSTNAME" />
            </property>
            <property name="lastName" type="java.lang.String">
                <column name="LASTNAME" />
            </property>
            <subclass name="domain.Employee" discriminator-value="E">
                <property name="salary" column="SALARY" type="java.lang.String" />
            </subclass>
        </class>

    Is it clear enough? :-/

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
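    As a small illustration of the .ToList() pitfall called out above (metaData.Customers is the article's own hypothetical IQueryable source), the difference is only where the filter runs:

        var inMemory = from c in metaData.Customers.ToList()  // fetches every customer first,
                       where c.Country == "Norway"            // then filters in memory
                       select c;

        var inDatabase = from c in metaData.Customers          // stays IQueryable<T>, so the
                         where c.Country == "Norway"           // provider translates the filter
                         select c;                             // into SQL (WHERE Country = ...)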

    Read the article

  • NDC2010: Rob Conery - I can't hear you, there is an ORM in my ear

    In this session Rob Conery tried to show off the new shiny toy NoSQL in as many forms as possible, discussing the pros and some cons. He based much of the talk on his experiences with building TekPub and the requirements they had around that. He tried to illustrate that this wasn't a magical [...]

    Read the article

  • Using heavyweight ORM implementation for light based games

    - by Holland
    I'm just about to engulf myself in an MVC-based/Component architecture in C#, using MySQL's connector/Net for the data storage, and probably some NHibernate/FluentNHibernate Object-relational-mapping to map out the data structure. The goal is to build a scalable 2D RPG. Then I think about it...and I can't help but think this seems a little "heavy weight" for a 2D RPG, especially one which, while I plan to incorporate a lot of functionality and entertaining gameplay, may be ported to something like Windows Phone or Android in the future. Yet, on the other hand even a 2-Dimensional RPG can become very complicated, and therefore must incorporate a lot of functionality. While this can be accomplished with text/XML/JSON for data storage, is there a better way? Is something such as Object-Relational-Mapping useful in such an application? So, what do you think? Would you say that there is a place for such technologies? I don't know what to think...

    Read the article

  • ORM Release History : Q1 2010 SP 1 (v2010.01.0527)

    Enhancements:
    - Full support for Visual Studio 2010 - the visual designer is now working in Visual Studio 2010
    - Ria Provider beta - supports all basic operations (query, insert, update, delete)
    - New Enhancer - the enhancer has been replaced by a new implementation based on Mono Cecil. This fixes all known enhancer bugs and speeds up the enhancing process as well.
    - Data Services Wizard integration - the Data Services Wizard is now integrated into the OpenAccess product. You can start it by using the...

    Read the article

  • Organizing your Data Access Layer

    - by nighthawk457
    I am using Entity Framework as my ORM in an ASP.NET application. I have my database already created, so I ended up generating the entity model from it. What is a good way to organize files/classes in the data access layer? My Entity Framework model is in a class library, and I was planning on adding additional classes per entity (i.e. per database table) and putting all the queries related to those tables in their respective classes. I am not sure if this is the right approach, and if it is, where do the queries requiring data from multiple tables go? Am I completely wrong in organizing my files based on entities/tables, and should I organize them based on functional areas instead?
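    For what it's worth, one common arrangement (a sketch with made-up names, not a prescription) is a repository/query class per entity for single-table work plus separate read-only query classes for reads that span several tables, so cross-entity queries have an obvious home:

        using System.Collections.Generic;
        using System.Linq;

        public class CustomerRepository
        {
            private readonly MyEntities _context;   // the generated Entity Framework context
            public CustomerRepository(MyEntities context) { _context = context; }

            public Customer GetById(int id)
            {
                return _context.Customers.FirstOrDefault(c => c.CustomerId == id);
            }
        }

        public class CustomerOrderSummary
        {
            public string Name { get; set; }
            public int OrderCount { get; set; }
        }

        public class ReportingQueries                // home for multi-table, read-only queries
        {
            private readonly MyEntities _context;
            public ReportingQueries(MyEntities context) { _context = context; }

            public IList<CustomerOrderSummary> OrderCountsPerCustomer()
            {
                return (from c in _context.Customers
                        join o in _context.Orders on c.CustomerId equals o.CustomerId
                        group o by c.Name into g
                        select new CustomerOrderSummary { Name = g.Key, OrderCount = g.Count() })
                       .ToList();
            }
        }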

    Read the article

  • Is there something like a "long running offline transaction" for NHibernate or any other ORM?

    - by Vilx-
    In essence this is a followup of this question. I'm beginning to feel that I should give up the whole idea, but I'll give it one more shot. What I want is pretty much like a DB transaction. It should track my changes to the DB and then in the end allow me to either commit or rollback them. If I insert an object, I should get it back in my next (appropriate) SELECT query. If I delete it, future SELECT queries should not return it. Etc. But there is one catch - this transaction would be very long running. It would start when the user opened a form (I'm talking about Windows Forms here), and the commit/rollback would be when the user closed it(with OK/Cancel). So it could take anywhere between seconds and days. This requirement rules out a standard DB transaction because that would lock the tables/rows it touched, and other users wouldn't be able to use the system. Also the transaction should not commit ANY changes to the DB until it was really committed. So if one user makes some changes, others don't see them until OK button is hit. This prevents errors in case the computer crashes or is disconnected from the network. I'm quite OK if the solution puts constraints on my model (I'm using MSSQL 2008, btw). I can design the DB/code any way I like. I'm also fine with the idea that a commit could fail because someone already modified one of the objects my transaction touched. Is there anything like this? I looked at NHibernate.Burrow, but I'm not sure that that's the thing I want. Added: It's the very beginning of the project so I'm not tied to NHibernate. I started out with it but I can still change easily.
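    The question itself doesn't name a ready-made feature for this, but the usual workaround is to keep the form's edits in memory as pending operations and only open a short database transaction when the user clicks OK; an optimistic-concurrency version column then catches the "someone else already modified it" case. A rough sketch with illustrative types, using the standard NHibernate ISession calls:

        using System;
        using System.Collections.Generic;
        using NHibernate;

        public class OfflineUnitOfWork
        {
            private readonly List<Action<ISession>> _pending = new List<Action<ISession>>();

            public void RegisterSave(object entity)   { _pending.Add(s => s.SaveOrUpdate(entity)); }
            public void RegisterDelete(object entity) { _pending.Add(s => s.Delete(entity)); }

            // Called from the OK button: everything hits the database in one short transaction.
            public void Commit(ISessionFactory factory)
            {
                using (var session = factory.OpenSession())
                using (var tx = session.BeginTransaction())
                {
                    foreach (var operation in _pending)
                        operation(session);
                    tx.Commit();
                }
                _pending.Clear();
            }

            // Called from Cancel: nothing was ever written, so there is nothing to undo.
            public void Rollback() { _pending.Clear(); }
        }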

    Read the article

  • How can I use an ORM tool like MyGeneration?

    - by ykaratoprak
    I downloaded MyGeneration from the net and ran it. It has got a connection string:

        Provider=SQLOLEDB.1;Persist Security Info=False;User ID=sa;Data Source=localhost

    On the other hand, my connection string is:

        Data Source=.\sqlexpress; Initial Catalog=NetTanitimTest; Integrated Security=True

    If I click the Save or Connect button it throws a connection error. Can anyone who has used it help me?
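    If the tool expects an OLE DB style connection string, the SQL Express instance and database named above can usually be reached with something along these lines (a guess based only on the details in the question; adjust the instance and database names as needed):

        Provider=SQLOLEDB.1;Data Source=.\SQLEXPRESS;Initial Catalog=NetTanitimTest;Integrated Security=SSPI;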

    Read the article

  • Cleanest way to log who updated what in CF-ORM / Hibernate?

    - by Henry
    One of the requirements of my project is to log who (which Staff member) updated what (from what version to what version). The UI needs to show who updated entity X, at what time, and what was updated. What is the cleanest way to implement this? For discussion purposes, imagine... Account has a Contact, and I need to store who updated the contact, when, and what was updated.

    Read the article

  • NHibernate (or another ORM worth recommending), real-life example?

    - by migajek
    I'd like to learn database application development in C# and I'm about to select a framework. I have heard many recommendations of NHibernate, but I haven't decided yet. Anyway, I'd like to know if there is any real-life example (with sources) of NHibernate in C#, to learn best practices etc. I know they are probably all covered in the docs, but a working example helps a lot in understanding the proper development patterns.

    Read the article

  • Where ORMs blur the lines between code and data, how do you decide what logic should be a stored procedure, and what should be coded?

    - by PhonicUK
    Take the following pseudocode:

        CreateInvoiceAndCalculate(ItemsAndQuantities, DispatchAddress, User);

    And say CreateInvoice does the following:

    1) Create a new entry in an Invoices table belonging to the specified User, to be sent to the given DispatchAddress.
    2) Create a new entry in an InvoiceItems table for each of the items in ItemsAndQuantities, storing the Item, the Quantity, and the cost of the item as of now (by looking it up from an Items table).
    3) Calculate the total amount of the invoice (ex shipping and taxes) and store it in the new Invoice row.

    At a glance you wouldn't be able to tell if this was a method in my application's code or a stored procedure in the database that is being exposed as a function by the ORM. And to some extent it doesn't really matter. Now technically none of this is business logic. You're not making any decisions - just performing a calculation and creating records. However some may argue that because you are performing a calculation that affects the business (the total amount to be invoiced), this isn't something that should be done in a stored procedure and instead should be in code. So for this specific example - why would it be more appropriate to do one or the other? And where do you draw the line? Or does it even particularly matter as long as it's sufficiently well documented?
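    If the decision goes the "keep it in code" way, the same pseudocode might look roughly like this; all of the entity, repository and IUnitOfWork types are hypothetical, and System.Linq is assumed for Sum:

        public Invoice CreateInvoiceAndCalculate(IDictionary<int, int> itemsAndQuantities,
                                                 string dispatchAddress, User user, IUnitOfWork uow)
        {
            var invoice = new Invoice { User = user, DispatchAddress = dispatchAddress };

            foreach (var pair in itemsAndQuantities)          // key: item id, value: quantity
            {
                var item = uow.Items.GetById(pair.Key);
                invoice.Lines.Add(new InvoiceLine
                {
                    Item = item,
                    Quantity = pair.Value,
                    UnitPriceAtPurchase = item.CurrentPrice   // snapshot of the price as of now
                });
            }

            // the business-affecting calculation stays next to the rest of the domain code
            invoice.Total = invoice.Lines.Sum(l => l.UnitPriceAtPurchase * l.Quantity);

            uow.Invoices.Add(invoice);
            uow.Commit();                                     // one transaction for the whole operation
            return invoice;
        }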

    Read the article
