Search Results

Search found 1973 results on 79 pages for 'orm profiler'.


  • Please recommend a Java profiler

    - by Yuval F
    I am looking for the Java equivalent of gprof. I did a little manual Java profiling using System.currentTimeMillis(), and the several GUI tools I have seen look like overkill. A good compromise would be a text-based Java profiler, preferably free or low-cost, that works on either Windows XP or Linux.

    Read the article

  • Profiler tool for web service

    - by Rotem
    Hi, I need a profiler that can measure the performance of web service execution. Our application has several layers, and ideally I would like to be able to dive into each web service request and see how much time was spent in each layer (server, SQL Server, etc.). Is there a tool that can help detect where the bottlenecks are? Is that something that can be done using VS Team System Test Edition?

    Read the article

  • Is there a memory usage profiler available?

    - by prosseek
    For time profiling of XYZ, I can just run 'time XYZ', or, if I have the source code in C/C++, I can even use gprof to get profiled results. Is there any similar tool for memory usage? Is there a tool I can run like 'memory XYZ' to get info such as min/max/median memory usage? What tool do you use for memory profiling with C++/Objective-C/C#/Java?
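    There is no standard 'memory XYZ' command, but as a rough illustration of the idea, here is a minimal C# sketch (the MemWatch name and the 100 ms sampling interval are my own choices, not from the question) that launches a program and samples its working set to report min/max/peak usage; a median would additionally require storing the samples:

        using System;
        using System.Diagnostics;
        using System.Threading;

        // Rough "memory XYZ": run a program and sample its memory use until it exits.
        // Usage: MemWatch.exe <program> [arguments]
        class MemWatch
        {
            static void Main(string[] args)
            {
                if (args.Length == 0) { Console.WriteLine("usage: MemWatch <program> [args]"); return; }

                var procArgs = args.Length > 1 ? string.Join(" ", args, 1, args.Length - 1) : "";
                var proc = Process.Start(args[0], procArgs);
                long min = long.MaxValue, max = 0, peak = 0;

                while (!proc.HasExited)
                {
                    try
                    {
                        proc.Refresh();                    // re-read the process counters
                        long ws = proc.WorkingSet64;       // current physical memory use
                        if (ws < min) min = ws;
                        if (ws > max) max = ws;
                        peak = proc.PeakWorkingSet64;      // OS-maintained peak
                    }
                    catch (InvalidOperationException) { break; }   // process exited mid-sample
                    Thread.Sleep(100);                     // sampling interval
                }

                if (max > 0)
                {
                    Console.WriteLine("min sampled working set: {0:N0} bytes", min);
                    Console.WriteLine("max sampled working set: {0:N0} bytes", max);
                    Console.WriteLine("peak working set:        {0:N0} bytes", peak);
                }
            }
        }

    A sampler like this only answers the basic min/max question; dedicated memory profilers go much deeper, which is what the question is really after.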

    Read the article

  • Profiling Startup Of VS2012 – dotTrace Profiler

    - by Alois Kraus
    JetBrains, which is famous for the ReSharper tool, also has a profiler in its portfolio. I downloaded dotTrace 5.2 Professional (569€+VAT) to check how far I can profile the startup of VS2012. The most interesting startup option is ".NET Process". With that you can profile the next started .NET process, which is very useful if you want to profile an application which is not started by you. I selected Tracing and Wall time as the options, to get similar settings across all profilers. For some reason the attach option did not work with .NET 4.5 on my home machine, but I am sure that it did work with .NET 4.0 some time ago. Since we are profiling devenv.exe we can also select "Standalone Application" and start it from the profiler. The startup time of VS does increase by about a factor of 3, but that is OK.

    You get mainly three windows to work with. The first one shows the threads, where you can drill down thread by thread to see where most time is spent. The next window is the call tree, which merges all threads together in a similar view. The last and, in my opinion, most useful view is the Plain List window, which is nearly the same as the Method Grid in Ants Profiler. But this time, when I enable the Show system functions checkbox, we get not 150 but 19,407 methods to choose from! I really tried with Ants Profiler to find out how VS works, but look how much we were missing! When I double-click on a method I get, in the lower pane, the called methods and their respective timings. This is really useful and I can nicely drill down to the most important stuff. The measured time seems to be wall clock time, which is a good thing for seeing where my time is really spent. You can also use Sampling as the profiling method, but this gives you much less information. Except for getting a first idea of where to look, this profiling mode is not very useful for understanding how your system interacts.

    The options have a good list of presets to hide many methods by default and gray them out so you can concentrate on your code. Nothing is filtered out if you enable Show system functions: by default, methods from these assemblies are hidden, or grayed out if the checkbox is checked. All in all, JetBrains has made a nice profiler which shows great detail and has nice drill-down capabilities. The only thing is that I do not trust its measured timings. With this one I fell several times into the trap of optimizing places which were already fast but where the profiler showed high times. After measuring with Tracing I was certain that the measured times were greatly exaggerated. Especially when IO is involved, it seems to have a hard time subtracting its own overhead. What I missed most was the possibility to profile not only the next started process but to select a process by name, and perhaps a count, to profile the next n processes of that name. Next: YourKit

    Read the article

  • Using MS Standalone profiler in VS2008 Professional

    - by fishdump
    I am trying to profile my .NET dll while running it from VS unit testing tools but I am having problems. I am using the standalone command-line profiler as VS2008 Professional does not come with an inbuilt profiler. I have an open CMD window and have run the following commands (I instrumented it earlier which is why vsinstr gave the warning that it did):

      C:\...\BusinessRules\obj\Debug>vsperfclrenv /samplegclife /tracegclife /globalsamplegclife /globaltracegclife
      Enabling VSPerf Sampling Attach Profiling. Allows to 'attaching' to managed applications.
      Current Profiling Environment variables are:
      COR_ENABLE_PROFILING=1
      COR_PROFILER={0a56a683-003a-41a1-a0ac-0f94c4913c48}
      COR_LINE_PROFILING=1
      COR_GC_PROFILING=2

      C:\...\BusinessRules\obj\Debug>vsinstr BusinessRules.dll
      Microsoft (R) VSInstr Post-Link Instrumentation 9.0.30729 x86
      Copyright (C) Microsoft Corp. All rights reserved.
      Error VSP1018 : VSInstr does not support processing binaries that are already instrumented.

      C:\...\BusinessRules\obj\Debug>vsperfcmd /start:trace /output:foo.vsp
      Microsoft (R) VSPerf Command Version 9.0.30729 x86
      Copyright (C) Microsoft Corp. All rights reserved.

      C:\...\BusinessRules\obj\Debug>

    I then ran the unit tests that exercised the instrumented code. When the unit tests were complete, I did...

      C:\...\BusinessRules\obj\Debug>vsperfcmd /shutdown
      Microsoft (R) VSPerf Command Version 9.0.30729 x86
      Copyright (C) Microsoft Corp. All rights reserved.
      Waiting for process 4836 ( C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\vstesthost.exe) to shutdown...

    It was clearly waiting for VS2008 to close so I closed it...

      Shutting down the Profile Monitor
      ------------------------------------------------------------

      C:\...\BusinessRules\obj\Debug>

    All looking good, there was a 3.2mb foo.vsp file in the directory. I next did...

      C:\...\BusinessRules\obj\Debug>vsperfreport foo.vsp /summary:all
      Microsoft (R) VSPerf Report Generator, Version 9.0.0.0
      Copyright (C) Microsoft Corporation. All rights reserved.
      VSP2340: Environment variables were not properly set during profiling run and managed symbols may not resolve. Please use vsperfclrenv before profiling.
      File opened
      Successfully opened the file.
      A report file, foo_Header.csv, has been generated.
      A report file, foo_MarksSummary.csv, has been generated.
      A report file, foo_ProcessSummary.csv, has been generated.
      A report file, foo_ThreadSummary.csv, has been generated.
      Analysis completed
      A report file, foo_FunctionSummary.csv, has been generated.
      A report file, foo_CallerCalleeSummary.csv, has been generated.
      A report file, foo_CallTreeSummary.csv, has been generated.
      A report file, foo_ModuleSummary.csv, has been generated.

      C:\...\BusinessRules\obj\Debug>

    Notice the warning about environment variables and using vsperfclrenv? But I had run it! Maybe I used the wrong switches? I don't know. Anyway, loading the csv files into Excel or using the perfconsole tool gives loads of useful info with useless symbol names:

      *** Loading commands from: C:\temp\PerfConsole\bin\commands\timebytype.dll
      *** Adding command: timebytype
      *** Loading commands from: C:\temp\PerfConsole\bin\commands\partition.dll
      *** Adding command: partition
      Welcome to PerfConsole 1.0 (for bugs please email: [email protected]), for help type: ?, for a quickstart type: ??
      > load foo.vsp
      *** Couldn't match to either expected sampled or instrumented profile schema, defaulting to sampled
      *** Couldn't match to either expected sampled or instrumented profile schema, defaulting to sampled
      *** Profile loaded from 'foo.vsp' into @foo
      >
      > functions @foo
      >>>>>
                 Exclusive             Inclusive   Function Name   Module Name
      --------------------  --------------------  --------------  -------------
      900,798,600,000.00 %  900,798,600,000.00 %      0x0600003F      20397910
       14,968,500,000.00 %   44,691,540,000.00 %      0x06000040      14736385
        8,101,253,000.00 %   14,836,330,000.00 %      0x06000041       5491345
        3,216,315,000.00 %    6,876,929,000.00 %      0x06000042       3924533
      <snip>
           71,449,430.00 %       71,449,430.00 %      0x0A000074         42572
           52,914,200.00 %       52,914,200.00 %      0x0A000073             0
               14,791.00 %       13,006,010.00 %      0x0A00007B             0
              199,177.00 %        6,082,932.00 %      0x2B000001       5350072
            2,420,116.00 %        2,420,116.00 %      0x0A00008A             0
                  836.00 %          451,888.00 %      0x0A000045             0
                9,616.00 %          399,436.00 %      0x0A000039             0
               18,202.00 %          298,223.00 %      0x06000046       1479900

    I am so close to being able to find the bottlenecks, if only it would give me the function and module names instead of hex numbers! What am I doing wrong?

    --- Alistair.

    Read the article

  • ANTS Memory Profiler 7.0 Review

    - by Michael B. McLaughlin
    (This is my first review as a part of the GeeksWithBlogs.net Influencers program. It's a program in which I (and the others who have been selected for it) get the opportunity to check out new products and services and write reviews about them. We don't get paid for this, but we do generally get to keep a copy of the software or retain an account for some period of time on the service that we review. In this case I received a copy of Red Gate Software's ANTS Memory Profiler 7.0, which was released in January. I don't have any upgrade rights, nor is my review guided, restrained, influenced, or otherwise controlled by Red Gate or anyone else. But I do get to keep the software license. I will always be clear about what I received whenever I do a review – I leave it up to you to decide whether you believe I can be objective. I believe I can be. If I used something and really didn't like it, keeping a copy of it wouldn't be worth anything to me. In that case, though, I would simply uninstall/deactivate/whatever the software or service and tell the company what I didn't like about it so they could (hopefully) make it better in the future. I don't think it'd be polite to write up a terrible review, nor do I think it would be a particularly good use of my time. There are people who review things for a living, so I leave it to them to tell you what they think is bad and why. I'll only spend my time telling you about things I think are good.)

    Overview of Common .NET Memory Problems

    When coming to the land of managed memory from the wilds of unmanaged code, it's easy to say to oneself, "Wow! Now I never have to worry about memory problems again!" But this simply isn't true. Managed code environments, such as .NET, make many, many things easier. You will never have to worry about memory corruption due to a bad pointer, for example (unless you're working with unsafe code, of course). But managed code has its own set of memory concerns. For example, failing to unsubscribe from events when you are done with them leaves the publisher of an event with a reference to the subscriber. If you eliminate all your own references to the subscriber, that memory is effectively lost, since the GC won't collect it because of the publishing object's reference. When the publishing object itself becomes subject to garbage collection you'll finally get that memory back, but that could take a very long time depending on the lifetime of the publisher.

    Another common source of resource leaks is failing to properly release unmanaged resources. When writing a class that contains members that hold unmanaged resources (e.g. any of the Stream-derived classes, IsolatedStorageFile, most classes ending in "Reader" or "Writer"), you should always implement IDisposable, making sure to use a properly written Dispose method. And when you are using an instance of a class that implements IDisposable, you should always make sure to use a 'using' statement in order to ensure that the object's unmanaged resources are disposed of properly. (A 'using' statement is a nicer, cleaner looking, and easier to use version of a try-finally block; the compiler actually translates it as though it were a try-finally block. Note that Code Analysis warning 2202 (CA2202) will often be triggered by nested using blocks. A properly written Dispose method ensures that it only runs once, so calling Dispose multiple times should not be a problem.)
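    To make those two leak patterns concrete, here is a small illustrative C# sketch (the ReportViewer class and the watched-file scenario are hypothetical, not from the review) showing an event handler being unsubscribed and an IDisposable resource wrapped in a 'using' statement:

        using System;
        using System.IO;

        // Hypothetical subscriber illustrating the two patterns described above.
        class ReportViewer : IDisposable
        {
            private readonly FileSystemWatcher _watcher;   // the event publisher

            public ReportViewer(FileSystemWatcher watcher)
            {
                _watcher = watcher;
                _watcher.Changed += OnChanged;             // the publisher now references us
            }

            private void OnChanged(object sender, FileSystemEventArgs e)
            {
                // 'using' guarantees the StreamReader's unmanaged handle is released
                // even if an exception is thrown while reading.
                using (var reader = new StreamReader(e.FullPath))
                {
                    Console.WriteLine(reader.ReadLine());
                }
            }

            public void Dispose()
            {
                // Unsubscribing removes the publisher's reference to this object,
                // so the GC can reclaim it once our own references are gone.
                _watcher.Changed -= OnChanged;
            }
        }

    Calling Dispose (ideally from a 'using' block or a containing class's Dispose) is what breaks the publisher-to-subscriber link that would otherwise keep the viewer alive.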
    Nonetheless, CA2202 exists, and if you want to avoid triggering it you should write your code so that only the innermost IDisposable object uses a 'using' statement, with any outer code making use of appropriate try-finally blocks instead. Then, of course, there are situations where you are operating in a memory-constrained environment, or where you want to limit or even eliminate allocations within a certain part of your program (e.g. within the main game loop of an XNA game) in order to avoid having the GC run. On the Xbox 360 and Windows Phone 7, for example, the GC runs for every 1 MB of heap allocations you make; the added time of a GC collection can cause a game to drop frames or run slowly, making it look bad. Eliminating allocations (or else minimizing them and calling an explicit Collect at an appropriate time) is a common way of avoiding this (the other way is to simplify your heap so that the GC's latency is low enough not to cause performance issues).

    ANTS Memory Profiler 7.0

    When the opportunity to review Red Gate's recently released ANTS Memory Profiler 7.0 arose, I jumped at it. In order to review it, I was given a free copy (which does not include upgrade rights for future versions) which I am allowed to keep. For those of you who are familiar with ANTS Memory Profiler, you can find a list of new features and enhancements here. If you are an experienced .NET developer who is familiar with .NET memory management issues, ANTS Memory Profiler is great. More importantly still, if you are new to .NET development or have little or no experience with memory profiling, ANTS Memory Profiler is awesome. From the very beginning, it guides you through the process of memory profiling. If you're experienced and just want to dive in, however, it doesn't get in your way. The help items are well designed and located right next to the UI controls so that they are easy to find without being intrusive. When you first launch it, it presents you with a "Getting Started" screen that contains links to "Memory profiling video tutorials", "Strategies for memory profiling", and the "ANTS Memory Profiler forum". I'm normally the kind of person who looks at a screen like that only to find the "Don't show this again" checkbox. Since I was doing a review, though, I decided I should examine them. I was pleasantly surprised.

    The overview video clocks in at three minutes and fifty seconds. It begins by showing you how to get started profiling an application. It explains that profiling is done by taking memory snapshots periodically while your program is running and then comparing them. ANTS Memory Profiler (I'm just going to call it "ANTS MP" from here) analyzes these snapshots in the background while your application is running. It briefly mentions a new feature in Version 7, a new API that gives you the ability to trigger snapshots from within your application's source code (more about this below). You can also, and this is the more common way you would do it, take a memory snapshot at any time from within the ANTS MP window by clicking the "Take Memory Snapshot" button in the upper right corner. The overview video goes on to demonstrate a basic profiling session on an application that pulls information from a database and displays it.
It shows how to switch which snapshots you are comparing, explains the different sections of the Summary view and what they are showing, and proceeds to show you how to investigate memory problems using the “Instance Categorizer” to track the path from an object (or set of objects) to the GC’s root in order to find what things along the path are holding a reference to it/them. For a set of objects, you can then click on it and get the “Instance List” view. This displays all of the individual objects (including their individual sizes, values, etc.) of that type which share the same path to the GC root. You can then click on one of the objects to generate an “Instance Retention Graph” view. This lets you track directly up to see the reference chain for that individual object. In the overview video, it turned out that there was an event handler which was holding on to a reference, thereby keeping a large number of strings that should have been freed in memory. Lastly the video shows the “Class List” view, which lets you dig in deeply to find problems that might not have been clear when following the previous workflow. Once you have at least one memory snapshot you can begin analyzing. The main interface is in the “Analysis” tab. You can also switch to the “Session Overview” tab, which gives you several bar charts highlighting basic memory data about the snapshots you’ve taken. If you hover over the individual bars (and the individual colors in bars that have more than one), you will see a detailed text description of what the bar is representing visually. The Session Overview is good for a quick summary of memory usage and information about the different heaps. You are going to spend most of your time in the Analysis tab, but it’s good to remember that the Session Overview is there to give you some quick feedback on basic memory usage stats. As described above in the summary of the overview video, there is a certain natural workflow to the Analysis tab. You’ll spin up your application and take some snapshots at various times such as before and after clicking a button to open a window or before and after closing a window. Taking these snapshots lets you examine what is happening with memory. You would normally expect that a lot of memory would be freed up when closing a window or exiting a document. By taking snapshots before and after performing an action like that you can see whether or not the memory is really being freed. If you already know an area that’s giving you trouble, you can run your application just like normal until just before getting to that part and then you can take a few strategic snapshots that should help you pin down the problem. Something the overview didn’t go into is how to use the “Filters” section at the bottom of ANTS MP together with the Class List view in order to narrow things down. The video tutorials page has a nice 3 minute intro video called “How to use the filters”. It’s a nice introduction and covers some of the basics. I’m going to cover a bit more because I think they’re a really neat, really helpful feature. Large programs can bring up thousands of classes. Even simple programs can instantiate far more classes than you might realize. In a basic .NET 4 WPF application for example (and when I say basic, I mean just MainWindow.xaml with a button added to it), the unfiltered Class List view will have in excess of 1000 classes (my simple test app had anywhere from 1066 to 1148 classes depending on which snapshot I was using as the “Current” snapshot). 
    This is amazing in some ways, as it shows you in stark detail just how immensely powerful the WPF framework is. But hunting through 1100 classes isn't productive, no matter how cool it is that there are that many classes instantiated and doing all sorts of awesome things. Let's say you wanted to examine just the classes your application contains source code for (in my simple example, that would be MainWindow and App). Under "Basic Filters", click on "Classes with source" under "Show only…". Voilà. Down from 1070 classes in the snapshot I was using as "Current" to 2 classes. If you then click on a class's name, it will show you (to the right of the class name) two little icon buttons. Hover over them and you will see that you can click one to view the Instance Categorizer for the class and another to view the Instance List for the class. You can also show classes based on which heap they live on.

    If you choose both a Baseline snapshot and a Current snapshot, you can use the "Comparing snapshots" filters to show only: "New objects"; "Surviving objects"; "Survivors in growing classes"; or "Zombie objects" (if you aren't sure what one of these means, you can click the helpful "?" in a green circle icon to bring up a popup that explains them and provides context). Remember that your selection(s) under the "Show only…" heading will still apply, so you should update those selections to make sure you are seeing the view you want. There are also links under the "What is my memory problem?" heading that can help you diagnose the problems you are seeing, including one for "I don't know which kind I have" for situations where you know generally that your application has some problems but aren't sure what is behind the behavior you have been seeing (OutOfMemoryExceptions, continually growing memory usage, larger memory use than expected at certain points in the program).

    The Basic Filters are not the only filters there are. "Filter by Object Type" gives you the ability to filter by: "Objects that are disposable"; "Objects that are/are not disposed"; "Objects that are/are not GC roots" (GC roots are things like static variables); and "Objects that implement _______". "Objects that implement" is particularly neat. Once you check the box, you can then add one or more classes and interfaces that an object must implement in order to survive the filtering. Lastly there is "Filter by Reference", which gives you the option to pare down the list based on whether an object is "Kept in memory exclusively by" a particular item, a class/interface, or a namespace; whether an object is "Referenced by" one or more of those choices; and whether an object is "Never referenced by" one or more of those choices. Remember that filtering is cumulative, so anything you had set in one of the filter sections remains in effect unless and until you go back and change it.

    There's quite a bit more to ANTS MP – it's a very full featured product – but I think I touched on all of the most significant pieces. You can use it to debug: a .NET executable; an ASP.NET web application (running on IIS); an ASP.NET web application (running on Visual Studio's built-in web development server); a Silverlight 4 browser application; a Windows service; a COM+ server; and even something called an XBAP (local XAML browser application). You can also attach to a .NET 4 process to profile an application that's already running.
    The startup screen also has a large number of "Charting Options" that let you adjust which statistics ANTS MP should collect. The default selection is a good, minimal set. It's worth your time to browse through the charting options to examine other statistics that may also help you diagnose a particular problem. The more statistics ANTS MP collects, the longer collection takes, so just turning everything on is probably a bad idea. But the option to selectively add in additional performance counters from the extensive list could be very helpful for your memory profiling, as it lets you see additional data that might provide clues about a particular problem that has been bothering you. ANTS MP integrates very nicely with all versions of Visual Studio that support plugins (i.e. all of the non-Express versions). Just note that if you choose "Profile Memory" from the "ANTS" menu, it will launch profiling for whichever project you have set as the Startup project.

    One quick tip from my experience so far using ANTS MP: if you want to properly understand your memory usage in an application you've written, first create an "empty" version of the type of project you are going to profile (a WPF application, an XNA game, etc.) and do a quick profiling session on that, so that you know the baseline memory usage of the framework itself. By "empty" I mean just create a new project of that type in Visual Studio, then compile it and run it with profiling – don't do anything special or add in anything (except perhaps for any external libraries you're planning to use). The first thing I tried ANTS MP out on was a demo XNA project of an editor that I've been working on for quite some time that involves a custom extension to XNA's content pipeline. The first time I ran it and saw the unmanaged memory usage, I was convinced I had some horrible bug that was creating extra copies of texture data (the demo project didn't have a lot of texture data, so when I saw a lot of unmanaged memory I instantly figured I was doing something wrong). Then I thought to run an empty project through, and when I saw that the amount of unmanaged memory was virtually identical, it dawned on me that the CLR itself sits in unmanaged memory and that (thankfully) there was nothing wrong with my code! Quite a relief.

    Earlier, when discussing the overview video, I mentioned the API that lets you take snapshots from within your application. I gave it a quick trial and it's very easy to integrate and make use of, and it is a really nice addition (especially for projects where you want to know what, if any, allocations there are in a specific, complicated section of code). The only concern I had was that if I hadn't watched the overview video I might never have known it existed. Even then it took me five minutes of hunting around Red Gate's website before I found the "Taking snapshots from your code" article that explains what DLL you need to add as a reference and what method of what class you should call in order to take an automatic snapshot (including the helpful warning to wrap it in a try-catch block since, under certain circumstances, it can raise an exception, such as trying to call it more than 5 times in 30 seconds). The difficulty in discovering and then finding information about the automatic snapshots API was one thing I thought could use improvement. Another thing I think would make it even better would be local copies of the webpages it links to.
    Although I'm generally always connected to the internet, I imagine there are more than a few developers who aren't, or who are behind very restrictive firewalls. For them (and for me, too, if my internet connection happens to be down), it would be nice to have those documents installed locally, or to have the option to download an additional "documentation" package that would add local copies.

    Another thing that I wish could be easier to manage is the Filters area. Finding and setting individual filters is very easy, as is understanding what those filters do. And breaking it up into three sections (basic, by object, and by reference) makes sense. But I could easily see myself running a long profiling session, forgetting that I had set some filter a long while earlier in a different filter section, and then spending quite a bit of time trying to figure out why some problem that was clearly visible in the data wasn't showing up in, e.g., the Instance List, before remembering to check all the filters for that one setting that was culling a few things from view. Some sort of indicator icon next to the filter section names that appears when you have at least one filter set in that area would be a nice visual clue to remind me that "oh yeah, I told it to only show objects on the Gen 2 heap! That's why I'm not seeing those instances of the SuperMagic class!"

    Something that would be nice (but that Red Gate cannot really do anything about) would be if this could be used in Windows Phone 7 development. If Microsoft and Red Gate could work together to make this happen (even if just on the WP7 emulator), that would be amazing. Especially given the memory constraints that apps and games running on mobile devices need to work within, a good memory profiler would be a phenomenally helpful tool. If anyone at Microsoft reads this, it'd be really great if you could make something like that happen. Perhaps even a (subsidized) custom version just for WP7 development. (For XNA games, of course, you can create a Windows version of the game and use ANTS MP on the Windows version in order to get a better picture of your memory situation. For Silverlight on WP7, though, there's quite a bit of educated guesswork and WeakReference creation followed by forced collections in order to find the source of a memory problem.)

    The only other thing I found myself wanting was a "Back" button. Between my Windows Phone 7, Zune, and other things, I've grown very used to having a "back stack" that lets me just navigate back to where I came from. The ANTS MP interface is surprisingly easy to use given how much it lets you do, and once you start using it for any amount of time, you learn all of the different areas such that you know where to go. And it does remember the state of the areas you were previously in, of course. So if you go to, e.g., the Instance Retention Graph from the Class List and then return to the Class List, it will remember which class you had selected and all that other state information. Still, a "Back" button would be a welcome addition to a future release.

    Bottom Line

    ANTS Memory Profiler is not an inexpensive tool. But my time is valuable. I can easily see ANTS MP saving me enough time tracking down memory problems to justify it on a cost basis.
    More important to me, it is a good feeling to know what is happening memory-wise in my programs and to be confident that my code doesn't have any hidden time bombs that will cause it to run out of memory if I leave it running longer than I do when I spin it up quickly for debugging or just to see how a new feature looks and feels. It's a feeling I like having and want to continue to have. I got the current version for free in order to review it. Having done so, I've now added it to my must-have tools and will gladly lay out the money for the next version when it comes out. It has a 14-day free trial, so if you aren't sure whether it's right for you, or if it seems interesting but you aren't really sure it's worth shelling out the money for, give it a try.

    Read the article

  • .NET ORM and Security

    - by Sphynx
    We're going to use an ORM tool with a .NET desktop application. The tool allows creation of persistent classes. It generates all database tables automatically. In addition to other data, our system needs to store user credentials, and deliver access control. The question is, is there any possibility of access control by means of ORM, without creating the database authentication mechanisms manually? Is there any product on the market which allows this? We thought of limiting the access in the program itself, but users can easily access the database directly, and bypass the program limitations. Thanks.

    Read the article

  • Is an ORM redundant with a NoSQL API?

    - by Earlz
    Hello, with MongoDB (and I assume other NoSQL database APIs worth their salt) the ways of querying the database are much simpler than SQL. There are no tedious SQL queries to generate and such. For instance, take this from mongodb-csharp:

        using MongoDB.Driver;

        Mongo db = new Mongo();
        db.Connect();    // Connect to localhost on the default port.
        Document query = new Document();
        query["field1"] = 10;
        Document result = db["tests"]["reads"].FindOne(query);
        db.Disconnect();

    How could an ORM even simplify that? Is an ORM or other "database abstraction device" required on top of a decent NoSQL API?
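    For comparison, here is a rough sketch (not a real ORM; the TestRead and TestReadRepository names are invented for illustration) of the kind of typed layer an ORM would put over that call, so calling code works with objects instead of string-keyed documents:

        using System;
        using MongoDB.Driver;

        // Hypothetical typed view of the "tests.reads" documents.
        public class TestRead
        {
            public int Field1 { get; set; }
        }

        public class TestReadRepository
        {
            private readonly Mongo _db;
            public TestReadRepository(Mongo db) { _db = db; }

            // Hides the string-keyed Document plumbing behind a typed method.
            public TestRead FindByField1(int value)
            {
                Document query = new Document();
                query["field1"] = value;
                Document result = _db["tests"]["reads"].FindOne(query);
                return result == null
                    ? null
                    : new TestRead { Field1 = Convert.ToInt32(result["field1"]) };
            }
        }

    Whether such a mapping layer earns its keep on top of an already object-shaped document API is exactly the question being asked.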

    Read the article

  • PHP - Frameworks, ORM, Encapsulation.

    - by Ian
    Programming languages/environments aside, are there many developers who use a PHP framework with an ORM and still abide by encapsulation for the DAL/BLL? I'm managing a team of a few developers and am finding that most of the frameworks require me to do daily code inspections because my developers are using the built-in ORM. Right now, I've been using a tool to generate the classes and CRUD myself, with an area for them to write additional queries/functions. What's been happening, though, is that they are creating vulnerabilities by not doing proper checks on data permissions, or by allowing key fields to be manipulated in the form. Any suggestions, other than getting a new team and a new language? (I've seen Python/Ruby frameworks have the same issues.)

    Read the article

  • Handling user security scope with nHibernate or other ORM

    - by Schotime
    How should one handle the situation where you need to filter by a group of users? Here is the scenario. I have an administrator role in my company. I should be able to see all the data belonging to me plus that of all the other users I have control over. A plain old user, however, should only be able to access their own data. If you are writing regular SQL statements you can have a security table listing every user and who they have access to, but I'm not sure how to handle this situation in the OO and ORM world. Has anyone dealt with this scenario in a web application using an ORM? Would love to hear your thoughts!
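    One common approach, sketched here in a hedged, ORM-agnostic way (the TimeSheet entity and helper names are hypothetical, not from the question), is to resolve the set of user IDs the current user may see and apply it as a filter on every query the ORM exposes as IQueryable:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical entity mapped by the ORM.
        public class TimeSheet
        {
            public int Id { get; set; }
            public int OwnerUserId { get; set; }
            public decimal Hours { get; set; }
        }

        public static class SecurityScope
        {
            // The IDs the current user may see: their own, plus (for administrators)
            // the IDs of the users they control.
            public static ISet<int> VisibleUserIds(int currentUserId, bool isAdmin,
                                                   IEnumerable<int> controlledUserIds)
            {
                var ids = new HashSet<int> { currentUserId };
                if (isAdmin)
                    ids.UnionWith(controlledUserIds);
                return ids;
            }

            // Applies the scope to any query the ORM hands back as IQueryable<T>;
            // most LINQ-capable ORMs translate Contains() into an IN clause.
            public static IQueryable<TimeSheet> ScopedTimeSheets(
                IQueryable<TimeSheet> all, ISet<int> visibleUserIds)
            {
                return all.Where(t => visibleUserIds.Contains(t.OwnerUserId));
            }
        }

    Many ORMs also offer a place to attach such a predicate centrally (NHibernate's filters, for instance) so individual queries cannot forget it; the sketch above just shows the shape of the check.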

    Read the article

  • Visual Studio 2008 profiler analysis - missing time

    - by Scott Vercuski
    I ran the Visual Studio 2008 profiler against my ASP.NET application and came up with the following result set.

      CURRENT FUNCTION                                    | TIME (msec)
      ----------------------------------------------------|------------
      Data.GetItem(params)                                | 10,158.12
      ----------------------------------------------------|------------
      Functions that were called by Data.GetItem(params)  | TIME (msec)
      ----------------------------------------------------|------------
      Model.GetSubItem(params)                            | 0.83
      Model.GetSubItem2(params)                           | 0.77
      Model.GetSubItem3(params)                           | 0.76
      etc.

    The issue I'm facing is that the sum of the functions called by Data.GetItem(params) does not add up to the 10,158.12 msec total. This would lead me to believe that the bulk of the time is actually spent executing the code within that method. My question is ... does Visual Studio provide a way to analyze the method itself, so I can see which sections of code are taking the longest? If it does not, are there any recommended tools to do this, or should I start writing my own timing scripts? Thank you
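    If it does come down to hand-rolled timing, a minimal sketch of the "own timing scripts" idea in C# (the helper and the section names are invented for illustration) is to wrap the suspect sections of the slow method in System.Diagnostics.Stopwatch measurements:

        using System;
        using System.Diagnostics;

        static class SectionTimer
        {
            // Times one section of code and prints where the milliseconds went.
            public static T Measure<T>(string section, Func<T> work)
            {
                var sw = Stopwatch.StartNew();
                try
                {
                    return work();
                }
                finally
                {
                    sw.Stop();
                    Console.WriteLine("{0,-20} {1,12:N2} ms", section, sw.Elapsed.TotalMilliseconds);
                }
            }
        }

        // Usage inside the slow method (hypothetical section names):
        // var rows   = SectionTimer.Measure("load rows",    () => LoadRows(id));
        // var mapped = SectionTimer.Measure("map objects",  () => MapToModel(rows));
        // var result = SectionTimer.Measure("post-process", () => PostProcess(mapped));

    A few probes like this are often the quickest way to see which part of a single method dominates, and they can be removed once the hotspot is found.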

    Read the article

  • Using NetBeans profiler with Guice classes

    - by Karussell
    How can I use the profiler from NetBeans 6.8 or 6.9 (choosing 'entire application') with Guice-enhanced classes? I am using Google Guice 2.0 (with warp-persist 2.0-20090214) for some classes and wanted to profile them, but I cannot see any results for those classes. The only result I can see is for one method, 'EnhancedClass.access$000', which is not very helpful. Other classes are working. Does somebody know a workaround, or what I am doing wrong?

    Read the article

  • looking for a good vc++ profiler, already checked previous posts

    - by coreSOLO
    I'm looking for a good profiler for VS2008 Professional Edition, free or reasonably priced. I've already checked previous posts and tried about 8 profilers, but most of them are too basic or too detailed. Kindly suggest something; my requirements are as follows: It can be compiled in, so that it's well integrated with my application; I'm not shying away from instrumenting my methods. The output should be simple; I only need call counts and the time taken by methods, nothing else. I am mostly concerned about things INSIDE a method; you may call it line-by-line profiling. I want to select a method and know which line (expression / method call) is eating most of the time.

    Read the article

  • best way to create tables with ORM?

    - by ajsie
    Assume that I start coding an application from scratch. When using an ORM (Doctrine), is the best way to create tables to manually create them in MySQL and then generate models from the tables, or is it the other way around, that is, to create the models in PHP and then generate tables from the models? And if I already have a database, will the generated models be optimal? I have heard some say that it's best to create the database from scratch when using an ORM, so that the relations are optimized for OOD. Share your thoughts!

    Read the article

  • Strange profiler behavior: same functions, different performances

    - by arthurprs
    I was learning to use gprof and got weird results for this code:

        int one(int a, int b) { return a / (b + 1); }

        int two(int a, int b) { return a / (b + 1); }

        int main() {
            for (int i = 1; i < 30000000; i++) {
                two(i, i * 2);
                one(i, i * 2);
            }
            return 0;
        }

    and this is the profiler output:

          %   cumulative   self              self     total
        time     seconds  seconds     calls  ns/call  ns/call  name
        48.39       0.90     0.90  29999999    30.00    30.00  one(int, int)
        40.86       1.66     0.76  29999999    25.33    25.33  two(int, int)
        10.75       1.86     0.20                              main

    If I call one and then two, the result is the inverse: two takes more time than one. Both are the same function, but calls to whichever one comes first always take less time than calls to the second. Why is that? Note: the assembly code is exactly the same, and the code is compiled with no optimizations.

    Read the article

  • To ORM or Not to ORM. That is the question…

    - by Patrick Liekhus
    UPDATE: Thanks for the feedback and comments. I have adjusted my table below with your recommendations. I had missed a point or two.

    I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and the reasoning behind why I choose to use these tools. Let's start by defining the term ORM, or Object-Relational Mapping. According to Wikipedia it is defined as the following: Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language.

    Why should you care? Basically it allows you to map your business objects in code to the persistence layer behind them. And better yet, why would you want to do this? Let me outline it in the following points:

    - Development speed. No more need for the repetitive task of mapping query results to object members. Once the map is created, the code is rendered for you.

    - Persistence portability. The ORM knows how to map SQL-specific syntax for the persistence engine you choose. It does not matter if it is SQL Server, Oracle or another database of your choosing.

    - Standard/boilerplate code is simplified. The basic CRUD operations are consistent and can use database metadata for basic operations.

    So how does this help? Well, let's compare some of the ORM tools that I have used and/or researched. I have been interested in ORM for some time now. My ORM of choice for a long time was NHibernate and I still believe it has a strong case in some business situations. However, you have to take business considerations into account, along with the law of diminishing returns. Because of these two factors, my recent activity and experience has been around DevExpress eXpress Persistent Objects (XPO). The primary reason for this is that they have the DevExpress eXpress Application Framework (XAF) that sits on top of XPO. With this added value, the data model can be created (either database first or code first) and the Web and Windows clients can be created from these maps. While out of the box they provide some simple list and detail screens, you can easily extend and modify these to your liking. DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it. This sounds worse than it really is. What I mean is that if you choose to follow the DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while not worrying about the basics.

    I have put together a list of the top features that I have used to compare the limited list of ORMs that I have exposure to. Again, the biggest selling point in my opinion is that XPO is just as solid as any of the other ORMs, but with the added layer of XAF they become unstoppable. And then couple that with the EDMX modeling tools and code generation, and it becomes a no-brainer.
    Designer Features                       | Entity Framework | NHibernate | Fluent w/ NHibernate | Telerik OpenAccess | DevExpress XPO | DevExpress XPO/XAF plus Liekhus Tools
    ----------------------------------------|------------------|------------|----------------------|--------------------|----------------|--------------------------------------
    Uses XML to map relationships           | -                | Yes        | -                    | -                  | -              |
    Visual class designer interface         | Yes              | -          | -                    | -                  | -              | Yes
    Management integrated w/ Visual Studio  | Yes              | -          | -                    | Yes                | -              | Yes
    Supports schema first approach          | Yes              | -          | -                    | Yes                | -              | Yes
    Supports model first approach           | Yes              | -          | -                    | Yes                | Yes            | Yes
    Supports code first approach            | Yes              | Yes        | Yes                  | Yes                | Yes            | Yes
    Attribute driven coding style           | Yes              | -          | Yes                  | -                  | Yes            | Yes

    I have a very small team and limited resources with a lot of responsibilities. In order to keep up with our customers, we must rely on tools like these. We use the EDMX tool so that we can create a visual representation of the applications with our customers. Second, we rely on the code generation so that we can focus on the business problems at hand and not on whether a field is mapped correctly. This keeps us from requiring as many junior-level developers on our team. I have also worked on multiple teams where they believed in writing their own "framework". In my experience and opinion this is not the route to take unless you have a team dedicated to supporting just the framework. Each time I have worked on custom frameworks, the framework eventually becomes old, outdated and full of "performance" enhancements specific to one or two requirements. With an ORM, there are a lot of people smarter than me working on the bigger issues of persistence and performance. Again, my recommendation would be to use an available framework and get to work on your business domain problems. If your coding is not making money for you, why are you working on it? Do you really need to be writing query-to-object-member mapping code again and again? Thanks
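    As a concrete illustration of the "Supports code first approach" and "Attribute driven coding style" rows above, here is a hedged sketch in the Entity Framework Code First style (the Customer/Order model and the ShopContext name are invented for illustration, not taken from the post); XPO, NHibernate and the other tools in the table express the same idea with their own base classes, attributes or mapping files:

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;
        using System.Data.Entity;   // EF 4.1+ Code First

        // The classes are the model; the ORM creates and maps the tables.
        public class Customer
        {
            [Key]
            public int CustomerId { get; set; }

            [Required, StringLength(100)]
            public string CompanyName { get; set; }

            public virtual ICollection<Order> Orders { get; set; }
        }

        public class Order
        {
            [Key]
            public int OrderId { get; set; }

            public decimal Total { get; set; }

            public virtual Customer Customer { get; set; }
        }

        public class ShopContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }
            public DbSet<Order> Orders { get; set; }
        }

    No CRUD plumbing or query-result-to-member mapping is written by hand here; queries come back as Customer and Order instances, which is the "development speed" point the post leads with.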

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'.

    I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application, as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
    If I solely look at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "penny wise, pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always involve making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get and you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
    This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, web service, Windows, etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. Look at what the code does along the way and verify whether it's correct. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code, and thus they reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter. As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
    You navigate to the particular part which is slow, start profiling in the profiler, perform the actions which are considered slow in your application, and afterwards get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database contribute how much to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor has a profiler. For example, for SQL Server the profiler is shipped with SQL Server, and for Oracle it's built into the RDBMS; there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually participate, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e.
    don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now first have to look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data.

    Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation changes.

    Fix

    After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
Performance tuning is about dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to move away from the strict OO path and execute queries directly against the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers

Below is an (incomplete) list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means that for the single list you'll get not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers together with the orders (see the sketch after this list). SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of near-identical queries executed in a row, you have this problem.

Prefetch paths using many path nodes, or using sorting or limiting: an eager-loading problem. Prefetch paths can help with performance, but as one query is executed per node, it can be that the amount of data fetched for a child node is bigger than you think. Also consider that the data in every node is merged on the client into the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the requested page). When using paging in a web application, be sure to switch server-side paging on in the datasource control used. In that case, paging on the grid alone is not enough: that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the data reader will then do the DISTINCT filtering on the client. This is a little slower, but it does perform the paging on the data reader, so it won't fetch all rows even if the query suggests it does.

Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data is consumed on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in memory to calculate a value.

Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and filter them in memory, as the Linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.
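To make the SELECT N+1 item above concrete, the sketch referenced in that item is shown below in plain ADO.NET, so it stays independent of any particular O/R mapper; the table and column names are invented. The first method is effectively what lazy loading does behind the scenes for a bound grid; the second shows the single-query shape an eager-loading / prefetch path fix produces.

    using System.Collections.Generic;
    using System.Data.SqlClient;

    static class SelectNPlusOneDemo
    {
        // Lazy-loading shape: 1 query for the orders, then 1 extra query per shown
        // order to get its customer's CompanyName.
        public static void LoadCompanyNamesOneByOne(SqlConnection connection, IEnumerable<int> customerIdsOfShownOrders)
        {
            foreach (int customerId in customerIdsOfShownOrders)
            {
                using (var command = new SqlCommand(
                    "SELECT CompanyName FROM Customer WHERE CustomerId = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", customerId);
                    command.ExecuteScalar();   // one round trip per order row in the grid
                }
            }
        }

        // Eager-loading shape: the related customer data comes back with the orders in one query.
        public static SqlCommand CreateEagerLoadCommand(SqlConnection connection)
        {
            return new SqlCommand(
                @"SELECT o.OrderId, o.OrderDate, c.CompanyName
                  FROM [Order] o
                  JOIN Customer c ON c.CustomerId = o.CustomerId", connection);
        }
    }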
Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key to success, as is knowing what's going on inside the application you built.
I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.

    Read the article

  • Suggestions on how to map from Domain (ORM) objects to Data Transfer Objects (DTO)

    - by FryHard
    The current system that I am working on makes use of Castle ActiveRecord to provide ORM (Object Relational Mapping) between the domain objects and the database. This is all well and good and most of the time actually works well! The problem comes about with Castle ActiveRecord's support for asynchronous execution, or more specifically the SessionScope that manages the session the objects belong to. Long story short, bad stuff happens! We are therefore looking for a way to easily convert (think automagically) from the domain objects (which know that a DB exists and care) to the DTO objects (which know nothing about the DB and care not for sessions, mapping attributes or all things ORM). Does anyone have suggestions on doing this? For a start I am looking for a basic one-to-one mapping of objects: domain object Person will be mapped to, say, PersonDTO. I do not want to do this manually since it is a waste. Obviously reflection comes to mind, but I am hoping, with some of the better IT knowledge floating around this site, that something "cooler" will be suggested. Oh, I am working in C#, and the ORM objects, as said before, are mapped with Castle ActiveRecord. Example code: At @ajmastrean's request I have linked to an example that I have (badly) mocked together. The example has a capture form, capture form controller, domain objects, ActiveRecord repository and an async helper. It is slightly big (3MB) because I included the ActiveRecord DLLs needed to get it running. You will need to create a database called ActiveRecordAsync on your local machine or just change the .config file. Basic details of the example: The Capture Form: the capture form has a reference to the controller, private CompanyCaptureController MyController { get; set; } When the form initialises it calls MyController.Load(): private void InitForm () { MyController = new CompanyCaptureController(this); MyController.Load(); } This will return back to a method called LoadCompleted(): public void LoadCompleted (Company loadCompany) { _context.Post(delegate { CurrentItem = loadCompany; bindingSource.DataSource = CurrentItem; bindingSource.ResetCurrentItem(); //TODO: This line will throw the exception since the session scope used to fetch loadCompany is now gone. grdEmployees.DataSource = loadCompany.Employees; }, null); } This is where the "bad stuff" occurs, since we are using the child list of Company that is set to lazy load. The Controller: the controller has a Load method that is called from the form; it then calls the async helper to asynchronously call the LoadCompany method and then return to the capture form's LoadCompleted method. public void Load () { new AsyncListLoad<Company>().BeginLoad(LoadCompany, Form.LoadCompleted); } The LoadCompany() method simply makes use of the repository to find a known company: public Company LoadCompany() { return ActiveRecordRepository<Company>.Find(Setup.company.Identifier); } The rest of the example is rather generic: it has two domain classes which inherit from a base class, a setup file to insert some data and the repository to provide the ActiveRecordMediator abilities.
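    One possible starting point, sketched under the assumption that the domain objects and DTOs share flat, identically named properties (Person / PersonDTO as in the question), is a small reflection-based property copier; nested objects and collections would need extra handling, and a mapping library such as AutoMapper covers those cases more completely. Usage would be something like PersonDTO dto = SimpleDtoMapper.Map<PersonDTO>(person);

    using System.Reflection;

    // Copies every readable property of the source onto a same-named, writable
    // property of TDto. Enough for flat Person -> PersonDTO style mappings.
    public static class SimpleDtoMapper
    {
        public static TDto Map<TDto>(object source) where TDto : new()
        {
            var dto = new TDto();
            foreach (PropertyInfo sourceProperty in source.GetType().GetProperties())
            {
                if (!sourceProperty.CanRead)
                {
                    continue;
                }
                PropertyInfo targetProperty = typeof(TDto).GetProperty(sourceProperty.Name);
                if (targetProperty != null && targetProperty.CanWrite
                    && targetProperty.PropertyType.IsAssignableFrom(sourceProperty.PropertyType))
                {
                    targetProperty.SetValue(dto, sourceProperty.GetValue(source, null), null);
                }
            }
            return dto;
        }
    }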

    Read the article

  • Looking for a .Net ORM

    - by SLaks
    I'm looking for a .Net 3.5 ORM framework with a rather unusual set of requirements: I need to create and alter tables at runtime with schemas defined by my end-users. (Obviously, that wouldn't be strongly-typed; I'm looking for something like a DataTable there) I also want regular strongly-typed partial classes for rows in non-dynamic tables, with custom validation and other logic. (Like normal ORMs) I want to load the entire database (or some entire tables) once, and keep it in memory throughout the life of the (WinForms) GUI. (I have a shared SQL Server with a relatively slow connection) I also want regular LINQ support (like LINQ-to-SQL) for ASP.Net on the shared server (which has a fast connection to SQL Server) In addition to SQL Server, I also want to be able to use a single-file database that would support XCopy deployment (without installing SQL CE on the end-user's machine). (Probably Access or SQLite) Finally, it has to be free (unless it's OpenAccess) I'll probably have to write it myself, as I don't think there is an existing ORM that meets these requirements. However, I don't want to re-invent the wheel if there is one, hence this question. I'm using VS2010, but I don't know when my webhost (LFC) will upgrade to .Net 4.0

    Read the article

  • What ORM for .NET should I use?

    - by eKek0
    I'm relatively new to .NET and have been using Linq2Sql for almost a year, but it lacks some of the features I'm looking for now. I'm going to start a new project in which I want to use an ORM with the following characteristics: It has to be very productive; I don't want to be dealing with the access layer to save or retrieve objects from or to the database, but it should allow me to easily tweak any object before actually committing it to the database, and it should also allow me to work easily with a changing database schema. It should allow me to extend the objects mapped from the database, for example to add virtual attributes to them (virtual columns to a table). It has to be (at least almost) database agnostic; it should allow me to work with different databases in a transparent way. It should require little configuration, or be based on conventions to make it work. It should allow me to work with Linq. So, do you know any ORM that I could use? Thank you for your help. EDIT I know that one option is to use NHibernate. This appears to be the de facto standard for enterprise-level applications, but it also seems that it is not very productive because of its steep learning curve. On the other hand, I have read in some other post here on SO that it doesn't integrate well with Linq. Is all of that true?

    Read the article

  • Sorting out POCO, Repository Pattern, Unit of Work, and ORM

    - by CoffeeAddict
    I'm reading a crapload on all these subjects: POCO, Repository Pattern, Unit of Work, using an ORM mapper. OK, I see the basic definitions of each in books, etc., but I can't visualize it all together. Meaning, an example structure (DL, BL, PL). So what, you have your DL objects that contain your CRUD methods, then your BL objects which are "mapped" using an ORM back to your DL objects? What about DTOs... they're your DL objects, right? I'm confused. Can anyone really explain all this together or send me example code? I'm just trying to put this together. I am determining whether to go LINQ to SQL or EF 4 (not sure about NHibernate yet). I'm just not getting the concepts, as in physical layers and code layers here, and what each type of object contains (just properties for DTOs, and CRUD methods for your core DL classes that match the table fields???). I just need some guidance here. I'm reading Fowler's books and starting to read Evans but I'm just not all there yet.
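    As a rough sketch of one common way these pieces fit together (the names are illustrative, not taken from any particular framework; LINQ to SQL, EF 4 or NHibernate would supply the concrete implementations behind the interfaces): the POCO entity and the DTO are separate classes, the repository gives collection-like access to entities, and the unit of work commits the tracked changes in one transaction.

    using System.Collections.Generic;

    // POCO entity: a plain class, no persistence base class or mapping attributes.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // DTO: only the data a consumer (UI, service) needs; no behaviour, no ORM awareness.
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Repository: collection-like access to entities; the ORM generates the SQL behind it.
    public interface ICustomerRepository
    {
        Customer GetById(int id);
        IEnumerable<Customer> GetAll();
        void Add(Customer customer);
        void Remove(Customer customer);
    }

    // Unit of work: tracks the changes made through the repositories and commits
    // them to the database in a single transaction.
    public interface IUnitOfWork
    {
        ICustomerRepository Customers { get; }
        void Commit();
    }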

    Read the article

  • SQL Server Express Profiler

    - by David Turner
    During a recent project, while waiting for our development database to be provisioned on the client's corporate SQL Server environment (these things can sometimes take weeks or months to be set up), we began our initial development against a local instance of SQL Server Express, just as an interim measure until the development database was live. This was going just fine, until we found that we needed to do some profiling to understand a problem we were having with the performance of our ORM-generated data access layer. The full version of SQL Server Management Studio includes a profiler that we could use to help with this kind of problem; however, the Express version does not, so I was really pleased to find that there is a freely available profiler for SQL Server Express, imaginatively titled 'SQL Server Express Profiler', and it worked great for us.  http://sites.google.com/site/sqlprofiler/

    Read the article

  • php framework with sqlite, orm, i18n, l10n search addon

    - by ddorian
    I'm trying to find a PHP framework to build small, multilingual sites. Do you know of a PHP framework with support for: 1. SQLite (these will be little sites, so there's no performance problem, and it's good for copy-paste deployment from development to production) 2. ORM 3. i18n & l10n 4. an easy search add-on 5. the ability to just copy-paste the site, with no need to change config when going from the development machine to production. And if you know of a CMS with those features, please mention it too. Thank you.

    Read the article
