Search Results

Search found 62280 results on 2492 pages for 're draw time'.

  • How to make a mutable ItemizedOverlay

    - by Hamy
    Hey all, I would like to make a Google map overlay with changeable pins. An easy way to visualize this would be to think of a near real-time overlay, where the pins are constantly changing location. However, I can't seem to think of a safe way to do this with ItemizedOverlay. The problem seems to be the call to populate: if size() is called by some maps thread, and then my data changes, the result when the maps call accesses getItem() can be an IndexOutOfBoundsException. Can anyone think of a better solution than overriding populate and wrapping super.populate in a synchronized block? Perhaps I would have better luck using a normal Overlay? The itemized one seems to exist to manage the data for you; perhaps I am making a fundamental mistake by using it? Thanks for any help, my brain is hurting! Hamy
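
    A commonly suggested approach (sketched below for the Maps v1 API that ItemizedOverlay belongs to) is to never mutate the backing list in place: build the new list on the worker thread, then swap it in and re-populate on the UI thread, so size() and createItem() can never observe a half-updated list. Resetting the focused index avoids a stale index after the list shrinks. A minimal sketch, assuming the map reads the overlay on the UI thread:

        import java.util.ArrayList;
        import java.util.List;

        import android.graphics.drawable.Drawable;

        import com.google.android.maps.ItemizedOverlay;
        import com.google.android.maps.OverlayItem;

        public class MutableItemizedOverlay extends ItemizedOverlay<OverlayItem> {
            private List<OverlayItem> items = new ArrayList<OverlayItem>();

            public MutableItemizedOverlay(Drawable marker) {
                super(boundCenterBottom(marker));
                populate();
            }

            // Call from the UI thread only; the map framework is assumed to read
            // size()/createItem() on that same thread, so no locking is needed.
            public void replaceItems(List<OverlayItem> newItems) {
                items = new ArrayList<OverlayItem>(newItems);
                setLastFocusedIndex(-1); // drop any focus index into the old list
                populate();
            }

            @Override
            protected OverlayItem createItem(int i) {
                return items.get(i);
            }

            @Override
            public int size() {
                return items.size();
            }
        }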

  • Real-time stock quotes: StreamReader performance optimization

    - by sean717
    I am working on a program that extracts real-time quotes for 900+ stocks from a website. I use HttpWebRequest to send an HTTP request to the site, store the response in a stream, and open the stream using the following code:

        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Stream stream = response.GetResponseStream();
        StreamReader reader = new StreamReader(stream);

    The received HTML is large (5000+ lines), so it takes a long time to parse it and extract the price. For 900 files, parsing and extracting takes about 6 minutes, which my boss isn't happy with; he told me he wants the whole process done in TWO minutes. I've identified parsing and extracting as the part of the program that takes the most time to finish. I've tried to optimize the code to make it faster; the following is what I have now after some optimization:

        // skip lines at the top
        for (int i = 0; i < 1500; ++i)
            reader.ReadLine();
        // read the line that contains the price
        string theLine = reader.ReadLine();
        // ... extract the price from the line

    Now it takes about 4 minutes to process all the files, which is still a significant gap from what my boss expects. So I am wondering: is there any other way to further speed up the parsing and extracting and have everything done within 2 minutes?
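
    Since each quote requires a network round trip, the six minutes may be dominated by 900 sequential requests rather than by string parsing; issuing the requests concurrently usually helps more than shaving the line loop. A rough sketch of that idea (in Java rather than C#, with the URL list and the "price" marker as placeholders):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class QuoteFetcher {
            // Scan for the marker line instead of counting 1500 lines blindly.
            static String fetchPriceLine(String pageUrl) throws Exception {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(new URL(pageUrl).openStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        if (line.contains("price")) {  // placeholder marker
                            return line;
                        }
                    }
                    return null;
                }
            }

            // Fetch all pages concurrently; 900 sequential round trips is the
            // likely bottleneck, not the parsing.
            static void fetchAll(List<String> urls) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(32);
                List<Future<String>> pending = new ArrayList<>();
                for (String url : urls) {
                    pending.add(pool.submit(() -> fetchPriceLine(url)));
                }
                for (Future<String> f : pending) {
                    System.out.println(f.get());
                }
                pool.shutdown();
            }
        }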

  • Page render time in ASP.NET MVC trace

    - by Pankaj
    Hello everyone. I want to check the render time of each page in an ASP.NET MVC application, using ASP.NET tracing. I have overridden the OnActionExecuting and OnActionExecuted methods in my BaseController class:

        protected override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            string controller = filterContext.RouteData.Values["controller"].ToString();
            string action = filterContext.RouteData.Values["action"].ToString();
            StartTime = System.DateTime.Now;
            System.Diagnostics.Trace.Write(string.Format("Start '{0}/{1}' on: {2}",
                controller, action, System.DateTime.Now.UtilToISOFormat()));
        }

        protected override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            string controller = filterContext.RouteData.Values["controller"].ToString();
            string action = filterContext.RouteData.Values["action"].ToString();
            var totalTime = System.DateTime.Now - this.StartTime;
            System.Diagnostics.Trace.Write(totalTime.ToString());
            System.Diagnostics.Trace.Write(string.Format("End '{0}/{1}' on: {2}",
                controller, action, System.DateTime.Now.UtilToISOFormat()));
        }

    In OnActionExecuted I get the total time. How can I show this time in my http://localhost:51335/Trace.axd report?

  • JavaScript library more efficient than Rickshaw for real-time visualizations

    - by dan kutz
    I want to visualize data as time-series graphs on mobile devices (tablets) and therefore stumbled upon Rickshaw, which is based on D3. First I must say I was a little confused when I realized that "real time" in web design is defined totally differently from real time in engineering, which has fixed (and often very short) timeframes. Anyway, my aim is to visualize the data as fast as possible, and on older tablets visualization with Rickshaw is quite slow. Can anybody recommend another library which may be more efficient at rendering? Or is there no way out, and I have to go native? Regards, Dan.

  • App crashes every second time a tableview row is selected in navigation controller setup

    - by Thaurin
    Disclaimer first: I'm pretty new to Objective-C and the retain model. I've been developing in a garbage-collected .NET environment for the last five years, so I've been spoiled. I'm still learning. My iPhone app is crashing with EXC_BAD_ACCESS. It happens in a navigation controller/tableview setup. When I select a row the first time, no problems: it switches to the child controller without issues. I go back and select the same row again, and the program then proceeds to crash. Every other row works fine, but every second time a row is accessed, it crashes. I've pinpointed the location where this happens. The child controller (which is a class that I reuse for every row of the same type) that's being switched in has an array of NSStrings representing the rows that will be displayed. I set it before pushing the child view controller, and it's there where this apparently happens. I'm having a hard time debugging this problem, still wrestling with Xcode and all. I fear there may be some vital information missing here, but maybe there is something you recognize.

  • Controlling a Bluetooth connection in MATLAB

    - by Mehreen Shahid
    We have a project in which we have to pick randomly generated numbers from a remote device, say a mobile phone. There is a simple Java application to generate the numbers. Next, we have to receive those numbers in our MATLAB program via a Bluetooth connection with the device. Assuming those numbers are temperature readings, we want to receive a new number every 10 seconds and display it in our MATLAB GUI program. The problem is: do we implement the Bluetooth protocol through our own programming, or use the MATLAB templates? Otherwise, whenever we want to transfer a file from a mobile to the computer, we have to manually click "receive a file" in the Bluetooth wizard, just like we normally do to transfer a file. We want to enable the connection once, and then receive text files every 10 seconds to be read by our MATLAB program. Can anyone please tell us whether this is even possible in MATLAB? If yes, how do we establish such an automatic real-time connection?

  • Why is it that I cannot insert this into Django correctly?

    - by alex
    new_thing = MyTable(last_updated=datetime.datetime.now())
    new_thing.save()

        mysql> select * from MyTable\G;
        last_updated: 2010-04-01 05:26:21

    However, in my Python console, this is what it says:

        >>> print datetime.datetime.now()
        2010-04-01 10:26:21.643041

    So obviously it's off by 5 hours. By the way, the database uses "SYSTEM" as its time zone, so they should match perfectly.

        mysql> SELECT current_time;
        +--------------+
        | current_time |
        +--------------+
        | 10:30:16     |
        +--------------+

        >>> print datetime.datetime.now()
        2010-04-01 10:30:17.793040

  • Getting the time interval between typing characters in a TextBox, in C#

    - by sama
    I have a form that has a TextBox and a Label, and I want to get the time when the first character is entered in the textbox. Then, once the user has entered ten characters, the time between the first character entered and the tenth character entered should be displayed in the label. Can anyone help me, please? I'm using C#. Here is the code (with the pieces I was missing filled in):

        using System;
        using System.Windows.Forms;

        namespace n
        {
            public partial class Form1 : Form
            {
                int count = 0;
                DateTime firstCharTime;   // the time when the first character was entered

                public Form1()
                {
                    InitializeComponent();
                }

                private void textBox1_TextChanged(object sender, EventArgs e)
                {
                    count++;
                    if (count == 1)
                    {
                        // remember when the first character was typed
                        firstCharTime = DateTime.Now;
                    }
                    else if (count == 10)
                    {
                        // subtracting two DateTimes yields a TimeSpan, not a DateTime
                        TimeSpan elapsed = DateTime.Now - firstCharTime;
                        label1.Text = elapsed.TotalSeconds.ToString("0.00") + " s";
                    }
                }
            }
        }

  • Load Testing Java Web Application - find TPS / Avg transaction response time

    - by Steve
    I would like to build my own load-testing tool in Java, with the goal of being able to load test a web application I am building throughout the development cycle. The web application will be receiving server-to-server HTTP POST requests, and I would like to find its starting transactions-per-second (TPS) capacity along with the average response time. The POST request and response messages will be in XML (I don't think that's really relevant, though). I have written a very simple Java app that sends transactions and counts how many it was able to send in one second (1000 ms); however, I don't think this is the best way to load test. What I really want is to send any number of transactions at exactly the same time, i.e. 10, 50, 100, etc. Any help would be appreciated! Oh, and here is my current test app code:

        // (startTime and count are static fields of the enclosing class)
        Thread[] t = new Thread[1];
        for (int a = 0; a < t.length; a++) {
            t[a] = new Thread(new MessageLoop());
        }
        startTime = System.currentTimeMillis();
        System.out.println(startTime);
        for (int a = 0; a < t.length; a++) {
            t[a].start();
        }
        while ((System.currentTimeMillis() - startTime) < 1000) {
        }
        if ((System.currentTimeMillis() - startTime) > 1000) {
            for (int a = 0; a < t.length; a++) {
                t[a].interrupt();
            }
        }
        long endTime = System.currentTimeMillis();
        System.out.println(endTime);
        System.out.println("Total time: " + (endTime - startTime));
        System.out.println("Total transactions: " + count);

        private static class MessageLoop implements Runnable {
            public void run() {
                try {
                    // Test number of transactions
                    while ((System.currentTimeMillis() - startTime) < 1000) {
                        // SEND TRANSACTION HERE
                        count++;
                    }
                } catch (Exception e) {
                }
            }
        }
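
    To release N senders at (almost) exactly the same instant, one option is to park every thread on a shared latch and open it once all of them are ready; a minimal sketch, with the actual send left as a stub:

        import java.util.concurrent.CountDownLatch;

        public class BurstTester {
            public static void main(String[] args) throws InterruptedException {
                final int n = 50;                        // burst size: 10, 50, 100...
                final CountDownLatch ready = new CountDownLatch(n);
                final CountDownLatch go = new CountDownLatch(1);
                final CountDownLatch done = new CountDownLatch(n);

                for (int i = 0; i < n; i++) {
                    new Thread(() -> {
                        ready.countDown();               // report "in position"
                        try {
                            go.await();                  // block until the gun fires
                            // sendTransaction();        // stub: the real HTTP POST
                        } catch (InterruptedException ignored) {
                        } finally {
                            done.countDown();
                        }
                    }).start();
                }

                ready.await();                           // wait until all n are poised
                long start = System.currentTimeMillis();
                go.countDown();                          // release the whole burst
                done.await();
                long elapsed = System.currentTimeMillis() - start;
                System.out.println(n + " transactions in " + elapsed + " ms");
            }
        }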

  • PHP second countdown

    - by jesop
    In my PHP page I have:

        <?php
        setcookie("game", "GOW2", time()+3);
        echo $_COOKIE["game"]."</br>";
        echo "<a href=\"/mypro/mypro2.php/\">Refresh</a></br>";
        ?>

    Now I want the following functionality at page load, or when the user clicks on 'Refresh': the timer should get reset and be displayed (as a countdown), and this countdown should run until it reaches 0 and stops. Can this be done?

  • SQL Server Reporting Services Format Hours as Hours:Minutes

    - by Frank Schmitt
    I'm writing reports on SQL Server Reporting Server that have a number of hours grouped by, say, user, with a total calculated from the sum of the values. Currently my query runs a stored proc that returns the hours in HH:MM format rather than decimal hours, as our users find that more intuitive. The problem occurs when I try to add up the column using an SSRS expression, because the SUM function isn't smart enough to handle adding up times in this format. Is there any way to: display a time interval (in minutes or hours) in HH:MM format while having it calculated in decimal form? Or split up and total the HH:MM text values to arrive at a sum as an expression? I'd like to avoid having to write/run a second query just to get the total.
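
    The usual workaround is to keep the duration as plain integer minutes for the SUM and only format it as HH:MM at display time. The conversion logic, sketched in plain Java purely for illustration (an SSRS expression or the stored proc would apply the same arithmetic):

        public class HoursTotal {
            static int toMinutes(String hhmm) {              // "7:30" -> 450
                String[] parts = hhmm.split(":");
                return Integer.parseInt(parts[0]) * 60 + Integer.parseInt(parts[1]);
            }

            static String toHhMm(int minutes) {              // 450 -> "7:30"
                return (minutes / 60) + ":" + String.format("%02d", minutes % 60);
            }

            public static void main(String[] args) {
                String[] rows = {"7:30", "8:15", "9:45"};    // sample HH:MM values
                int total = 0;
                for (String r : rows) total += toMinutes(r); // sum in minutes
                System.out.println(toHhMm(total));           // prints 25:30
            }
        }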

  • WPF tab switch / render takes too much time

    - by Socrates
    I have a WPF application with many tabs. In one tab I make a very complex vector drawing consisting of thousands of drawing visuals (it represents a machine, and all elements need to be interactable). It takes 3-4 seconds to draw this the first time; after the first draw it should be done. The problem is that if I switch to another tab and come back, it takes at least 2-3 seconds to show the tab page with the drawing again. Since there is no redraw, why should it take so much time?

  • NHibernate + Fluent long startup time

    - by PaRa
    Hi all, I am new to NHibernate. When performing the test below, it took 11.2 seconds (debug mode). I am seeing this large startup time in all my tests; basically, creating the first session takes a ton of time. Setup: Windows 2003 SP2 / Oracle 10gR2 (latest CPU) / ODP.NET 2.111.7.20 / FNH 1.0.0.636 / NHibernate 2.1.2.4000 / NUnit 2.5.2.9222 / VS2008 SP1.

        using System;
        using NUnit.Framework;
        using NHibernate;

        [Test()]
        public void GetEmailById()
        {
            Email result;
            using (EmailRepository repository = new EmailRepository())
            {
                result = repository.GetById(1111);
            }
            Assert.IsTrue(result != null);
        }

        public class EmailRepository : RepositoryBase<Email>
        {
            public EmailRepository() : base() { }
        }

    In my RepositoryBase:

        public T GetById(object id)
        {
            using (var session = sessionFactory.OpenSession())
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    T returnVal = session.Get<T>(id);
                    transaction.Commit();
                    return returnVal;
                }
                catch (HibernateException ex)
                {
                    // Logging here
                    transaction.Rollback();
                    return null;
                }
            }
        }

    The query time is very small, the resulting entity is really small, and subsequent queries are fine. The slow part seems to be getting the first session started. Has anyone else seen something similar?
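
    For comparison, the standard cure in the Java Hibernate world (and the same idea should apply to NHibernate) is to build the session factory exactly once, at startup, and share it afterwards, since compiling the mappings dominates first-use time while opening sessions is cheap. A minimal sketch, assuming a hibernate.cfg.xml on the classpath:

        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;

        public final class HibernateUtil {
            // built once when the class is first loaded; this is the expensive step
            private static final SessionFactory FACTORY =
                    new Configuration().configure()   // reads hibernate.cfg.xml
                                       .buildSessionFactory();

            private HibernateUtil() {}

            public static SessionFactory factory() {
                return FACTORY;                        // cheap: just returns the singleton
            }
        }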

  • Refresh div with jQuery at a fixed time

    - by Ben C
    I've got a PHP script that tells me when the next bus is due, and at the moment I'm refreshing this into a div, using jQuery, every minute or so. Now, because I know the time at which the data will change (after the bus has come), I want to refresh the div at that time (or just after; it doesn't really matter). I should point out that I'm fairly new to JS, but this is what I've got so far:

        var nextbustime = $('#bus').contents();
        var nextbustime = new Date(nextbustime);
        var now = new Date();
        var t = nextbustime.getTime() - now.getTime();
        var refreshId = setTimeout(function() {
            $('#bus').fadeOut("slow")
                     .load('modules/bus.php?randval=' + Math.random())
                     .fadeIn("slow");
        }, t);

    The div is loaded originally with a PHP include. Naturally, what I've done doesn't work at all. Do I need some loops going on? Do I need to refresh the time calculation? Please help! Thanks in advance...

  • Round time to the nearest 5 minutes in SQL Server

    - by Drako
    I don't know if it can be useful to somebody, but I went crazy looking for a solution and ended up writing it myself. Here is a function that, given a date passed as a parameter, returns the same date with the time rounded to the nearest multiple of 5 minutes. It is a slow query, so if anyone has a better solution, it is welcome. A greeting.

        CREATE FUNCTION [dbo].[RoundTime] (@Time DATETIME)
        RETURNS DATETIME
        AS
        BEGIN
            DECLARE @min nvarchar(50)
            DECLARE @val int
            DECLARE @hour int
            DECLARE @temp int
            DECLARE @day datetime
            DECLARE @date datetime

            SET @date = CONVERT(DATETIME, @Time, 120)
            SET @day  = (SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, @date)))
            SET @hour = (SELECT DATEPART(hour, @date))
            SET @min  = (SELECT DATEPART(minute, @date))

            IF LEN(@min) > 1
            BEGIN
                SET @val = CAST(SUBSTRING(@min, 2, 1) as int)
            END
            ELSE
            BEGIN
                SET @val = CAST(SUBSTRING(@min, 1, 1) as int)
            END

            IF @val <= 2
            BEGIN
                SET @val = CAST(CAST(@min as int) - @val as int)
            END
            ELSE
            BEGIN
                IF (@val <> 5)
                BEGIN
                    SET @temp = 5 - CAST(@min % 5 as int)
                    SET @val = CAST(CAST(@min as int) + @temp as int)
                END
                IF (@val = 60)
                BEGIN
                    SET @val = 0
                    SET @hour = @hour + 1
                END
                IF (@hour = 24)
                BEGIN
                    SET @day = DATEADD(day, 1, @day)
                    SET @hour = 0
                    SET @min = 0
                END
            END

            RETURN CONVERT(datetime,
                CAST(DATEPART(YYYY, @day) as nvarchar) + '-' +
                CAST(DATEPART(MM, @day) as nvarchar) + '-' +
                CAST(DATEPART(dd, @day) as nvarchar) + ' ' +
                CAST(@hour as nvarchar) + ':' +
                CAST(@val as nvarchar), 120)
        END
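
    For what it's worth, when the time is held as a plain number (minutes or milliseconds), rounding to the nearest 5-minute multiple is simple integer arithmetic; in T-SQL the same idea is often written with DATEDIFF/DATEADD on minutes instead of string manipulation. The arithmetic, sketched in Java purely for illustration:

        public class RoundTime {
            static final long FIVE_MINUTES_MS = 5 * 60 * 1000L;

            // nearest-multiple rounding: add half a bucket, truncate, scale back
            static long roundToNearest5Min(long millis) {
                return (millis + FIVE_MINUTES_MS / 2) / FIVE_MINUTES_MS * FIVE_MINUTES_MS;
            }

            public static void main(String[] args) {
                long now = System.currentTimeMillis();
                System.out.println(new java.util.Date(roundToNearest5Min(now)));
            }
        }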

  • JDBC program running for a long time: performance issue

    - by phyerbarte
    My program has an issue with Oracle query performance. I believe the SQL itself performs well, because it returns quickly in SQL*Plus. But when my program has been running for a long time, like a week, the SQL query (using JDBC) becomes slower (in my logs, the query time is much longer than when I originally started the program). When I restart my program, query performance comes back to normal. I think something could be wrong with the way I use the PreparedStatement, because the SQL I'm using does not use placeholders "?" at all; it is just a complex SELECT query. The query process is done by a util class. Here is the pertinent code building the query:

        public List<String[]> query(String sql, String[] args) {
            Connection conn = null;
            conn = openConnection();
            conn.setAutoCommit(true);
            ....
            PreparedStatement preStatm = null;
            ResultSet rs = null;
            ....  // set prepared statement arg code
            rs = preStatm.executeQuery();
            ....
            finally {
                // close rs
                // close preStatm
                // close connection
            }
        }

    In my case, args is always null, so a plain query SQL string is passed to this query method. Could this way of doing things slow down the DB query after the program has been running for a long time? Should I use Statement instead, or just pass args with "?" in the SQL? How can I find the root cause of my issue? Thanks.
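
    A slowdown that grows over days and disappears on restart often points at leaked statements or result sets (each open cursor costs the server resources) rather than at the SQL itself. A minimal sketch of the same method using try-with-resources, so every handle is closed even when an exception is thrown (connection details are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        public class QueryUtil {
            public List<String[]> query(String sql) throws SQLException {
                List<String[]> rows = new ArrayList<String[]>();
                // every resource is closed automatically, in reverse order,
                // even if executeQuery() or next() throws
                try (Connection conn = openConnection();
                     PreparedStatement stmt = conn.prepareStatement(sql);
                     ResultSet rs = stmt.executeQuery()) {
                    int cols = rs.getMetaData().getColumnCount();
                    while (rs.next()) {
                        String[] row = new String[cols];
                        for (int i = 0; i < cols; i++) {
                            row[i] = rs.getString(i + 1);
                        }
                        rows.add(row);
                    }
                }
                return rows;
            }

            private Connection openConnection() throws SQLException {
                // placeholder connection string; a pooled DataSource is preferable
                return DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "password");
            }
        }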

  • Long startup time... need help

    - by Jeff
    My app is all done and working great. But when I ran it on an old iPhone, the app took 17.3 seconds to start! I spent a lot of time looking into it, and I found that the reason it takes so long to load is that I have a lot of views, and each view has a PNG background image. All my views are made in IB, and in my code:

        #import "MyTestAppDelegate.h"
        #import "MyTestViewController.h"

        @implementation MyTestAppDelegate

        @synthesize window;
        @synthesize viewController;

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            // Override point for customization after app launch
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
        }

        - (void)dealloc {
            [viewController release];
            [window release];
            [super dealloc];
        }

        @end

    At the line [window addSubview:viewController.view]; the app seems to load all the views in the nib at the same time. The PNGs from all the views add up to about 12 MB. There is no need for the app to load all the views at the same time during startup. Is there a way I can load only the first "home" view at startup? (All the views are part of the same nib.)

  • How to avoid temporaries when copying a weakly typed object

    - by Truncheon
    Hi. I'm writing a series of classes that inherit from a base class using virtual functions. They are INT, FLOAT and STRING objects that I want to use in a scripting language. I'm trying to implement weak typing, but I don't want STRING objects to return copies of themselves when used in the following way (instead I would prefer to have a reference returned, which can be used in copying):

        a = "hello ";
        b = "world";
        c = a + b;

    I have written the following code as a mock example:

        #include <iostream>
        #include <string>
        #include <cstdio>
        #include <cstdlib>

        std::string dummy("<int object cannot return string reference>");

        struct BaseImpl
        {
            virtual bool is_string() = 0;
            virtual int get_int() = 0;
            virtual std::string get_string_copy() = 0;
            virtual std::string const& get_string_ref() = 0;
        };

        struct INT : BaseImpl
        {
            int value;
            INT(int i = 0) : value(i) { std::cout << "constructor called\n"; }
            INT(BaseImpl& that) : value(that.get_int()) { std::cout << "copy constructor called\n"; }
            bool is_string() { return false; }
            int get_int() { return value; }
            std::string get_string_copy() { char buf[33]; sprintf(buf, "%i", value); return buf; }
            std::string const& get_string_ref() { return dummy; }
        };

        struct STRING : BaseImpl
        {
            std::string value;
            STRING(std::string s = "") : value(s) { std::cout << "constructor called\n"; }
            STRING(BaseImpl& that)
            {
                if (that.is_string())
                    value = that.get_string_ref();
                else
                    value = that.get_string_copy();
                std::cout << "copy constructor called\n";
            }
            bool is_string() { return true; }
            int get_int() { return atoi(value.c_str()); }
            std::string get_string_copy() { return value; }
            std::string const& get_string_ref() { return value; }
        };

        struct Base
        {
            BaseImpl* impl;
            Base(BaseImpl* p = 0) : impl(p) {}
            ~Base() { delete impl; }
        };

        int main()
        {
            Base b1(new INT(1));
            Base b2(new STRING("Hello world"));
            Base b3(new INT(*b1.impl));
            Base b4(new STRING(*b2.impl));

            std::cout << "\n";
            std::cout << b1.impl->get_int() << "\n";
            std::cout << b2.impl->get_int() << "\n";
            std::cout << b3.impl->get_int() << "\n";
            std::cout << b4.impl->get_int() << "\n";

            std::cout << "\n";
            std::cout << b1.impl->get_string_ref() << "\n";
            std::cout << b2.impl->get_string_ref() << "\n";
            std::cout << b3.impl->get_string_ref() << "\n";
            std::cout << b4.impl->get_string_ref() << "\n";

            std::cout << "\n";
            std::cout << b1.impl->get_string_copy() << "\n";
            std::cout << b2.impl->get_string_copy() << "\n";
            std::cout << b3.impl->get_string_copy() << "\n";
            std::cout << b4.impl->get_string_copy() << "\n";

            return 0;
        }

    It was necessary to add an if check in the STRING class to determine whether it is safe to request a reference instead of a copy. Script code:

        a = "test";
        b = a;
        c = 1;
        d = "" + c; /* not safe to request reference by standard */

    C++ code:

        STRING(BaseImpl& that)
        {
            if (that.is_string())
                value = that.get_string_ref();
            else
                value = that.get_string_copy();
            std::cout << "copy constructor called\n";
        }

    I was hoping there's a way of moving that if check into compile time, rather than run time.

  • Measure the response time of a link

    - by Ahoura Ghotbi
    I am trying to create a simple load-balancing script, and I was wondering whether it is possible to find the response time of a server live. By that I mean: is it possible to measure how long it takes for a server to respond after the request has been sent? What I am trying to do is fairly simple: I want to send a request to a link/server and count down; if the server takes more than 5 seconds to reply, I would like to fall back on the backup server. Note that it doesn't have to be in pure PHP; I wouldn't mind using other languages such as JavaScript, C/C++ or ASP, but I prefer to do it in PHP. If this is possible, could you just point me in the right direction so I can read up on it? Clarification: what I want is not to download a file and see how long it took. My servers are under high load, and it takes a while for them to respond when you click on a file to download. What I want is to measure the time it takes the server to respond (in this situation, the time it takes the server to respond and allow the user to start the download), and if it takes longer than x seconds, to fall back on a backup server.
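
    One way to measure time-to-first-response without downloading the file is a HEAD request with a hard timeout; if the probe exceeds the budget, serve the backup URL instead. A sketch in Java (in PHP, curl with CURLOPT_TIMEOUT and curl_getinfo's CURLINFO_STARTTRANSFER_TIME gives a similar measurement); hostnames are placeholders:

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ResponseProbe {
            // returns elapsed milliseconds, or -1 on timeout/failure
            static long responseMillis(String url, int timeoutMs) {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL(url).openConnection();
                    conn.setRequestMethod("HEAD");   // headers only, no download
                    conn.setConnectTimeout(timeoutMs);
                    conn.setReadTimeout(timeoutMs);
                    long start = System.currentTimeMillis();
                    conn.getResponseCode();          // blocks until the server answers
                    return System.currentTimeMillis() - start;
                } catch (Exception e) {
                    return -1;                       // timed out or unreachable
                }
            }

            public static void main(String[] args) {
                String primary = "http://primary.example.com/file.zip";
                String backup = "http://backup.example.com/file.zip";
                long ms = responseMillis(primary, 5000);
                String chosen = (ms >= 0) ? primary : backup;
                System.out.println("redirect to " + chosen + " (probe: " + ms + " ms)");
            }
        }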

  • How to calculate the beginning of a day given milliseconds?

    - by conman
    I want to figure out the time of the beginning of the day, given a time in milliseconds. Say I'm given 1340323100024, which is around midday on 6/21/2012. Now I want the milliseconds for the beginning of that day, which would be 1340262000000 (at least I think that's what it's supposed to be). How do I get 1340262000000 from 1340323100024? I tried doing Math.floor(1340323100024 / 86400000) * 86400000, but that gives me 1340236800000, which, if I create a Date object out of it, says it's the 20th. I know I can create a Date object from 1340323100024, then get the month, year, and date, and create a new object which would give me 1340262000000, but I find it ridiculous that I can't figure out something so simple. Any help would be appreciated. BTW, I'm doing this in JavaScript, if it makes any difference.
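
    The flooring trick divides in UTC, so it returns UTC midnight; in a zone seven hours behind UTC that is 5 pm local time on the previous day, which matches the "20th" result above (1340262000000 - 1340236800000 = 25200000 ms = 7 hours). Local midnight needs the zone applied; in JavaScript that is what new Date(ms).setHours(0,0,0,0) does, and the same idea sketched in Java:

        import java.util.Calendar;
        import java.util.TimeZone;

        public class DayStart {
            static long startOfDay(long millis, TimeZone zone) {
                Calendar cal = Calendar.getInstance(zone);
                cal.setTimeInMillis(millis);
                cal.set(Calendar.HOUR_OF_DAY, 0);   // zero out the time-of-day fields
                cal.set(Calendar.MINUTE, 0);
                cal.set(Calendar.SECOND, 0);
                cal.set(Calendar.MILLISECOND, 0);
                return cal.getTimeInMillis();
            }

            public static void main(String[] args) {
                TimeZone pacific = TimeZone.getTimeZone("America/Los_Angeles");
                System.out.println(startOfDay(1340323100024L, pacific)); // 1340262000000
            }
        }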

  • Comparison between pointer and integer

    - by LawVS
    Right, basically I want to add two numbers together. It's for a working-hours calculator, and I've included parameters for a night-shift scenario as an if statement. However, it now messes up the day-shift pattern. So I want it so that if the start time is below 12, it reverts to the original equation shown in the code instead of the if statement.

        - (IBAction)done:(id)sender {
            int start = [startHours.text intValue];
            int result = [finishHours.text intValue] - start;
            totalHours.text = [NSString stringWithFormat:@"%d", result];
            if (result < 0) {
                totalHours.text = [NSString stringWithFormat:@"%d", result * -1];
            }
            if (result < 12) {
                totalHours.text = [NSString stringWithFormat:@"%d", result + 24];
            }
            // compare the numeric value, not the UITextField pointer itself
            // (comparing startHours < 12 is what triggers the compiler warning)
            if (start < 12) {
                totalHours.text = [NSString stringWithFormat:@"%d", result - 24];
            }
        }

  • Sleeping a thread blocking stdin

    - by Sid
    Hey, I'm running a function which evaluates commands passed in via stdin, and another function which runs a bunch of jobs. I need to make the latter function sleep at regular intervals, but that seems to be blocking stdin. Any advice on how to resolve this would be appreciated. The source code for the functions is:

        def runJobs(comps, jobQueue, numRunning, limit, lock):
            while len(jobQueue) >= 0:
                print(len(jobQueue))
                if len(jobQueue) > 0:
                    comp, tasks = find_computer(comps, 0)
                    # do something
                time.sleep(5)

        def manageStdin():
            print "Global Stdin Begins Now"
            for line in fileinput.input():
                try:
                    print(eval(line))
                except Exception, e:
                    print e

    --Thanks

  • Linux process scheduling delayed for a long time

    - by Medicine
    I ran strace on my multi-threaded C++ application running on Linux; after a couple of hours of running, none of the threads got to run for about 12 seconds. I saw that an unfinished select system call, which was called with a timeout, was pending before the thread was suspended, and after the thread resumed, strace reported that the operation took 11.x seconds to finish. This is a clear indication that the process was starved for a long time. All threads in the process are created with the default Linux scheduling policy (SCHED_OTHER) and default priority. There are another 5 similar apps running on the same box which are also heavily I/O bound, like this app, due to heavy data received on their sockets. But most of the time it is this app that suffers the scheduling delay. The other apps are created with the same scheduling policy and priority as this one, i.e. the defaults. Why does only this process get blocked almost all of the time? Could it be because this process is more I/O intensive, as in busier due to higher data rates, so that Linux's dynamic priority adjustment is in play here and pushes this process down?

  • Run two threads at the same time in Java

    - by user1805005
    I have used TimerTask to schedule my Java program. Now, when the run method of the TimerTask executes, I want to run two threads which run at the same time and do different things. Here is my code; please help me:

        import java.util.Calendar;
        import java.util.Date;
        import java.util.Timer;
        import java.util.TimerTask;

        public class timercheck extends TimerTask {

            public static void main(String[] args) {
                long ONCE_PER_DAY = 1000 * 60 * 60 * 24;

                Calendar calendar = Calendar.getInstance();
                calendar.set(Calendar.HOUR_OF_DAY, 12);
                calendar.set(Calendar.MINUTE, 05);
                calendar.set(Calendar.SECOND, 00);
                Date time = calendar.getTime();

                TimerTask check = new timercheck();
                Timer timer = new Timer();
                timer.scheduleAtFixedRate(check, time, ONCE_PER_DAY);
            }

            // run method of the timer task
            @Override
            public void run() {
                // create fresh Thread objects on every run: a java.lang.Thread
                // instance cannot be restarted once it has finished
                Thread t1 = new Thread() {   // my first thread
                    public void run() {
                        for (int i = 1; i <= 10; i++) {
                            System.out.println(i);
                        }
                    }
                };
                Thread t2 = new Thread() {   // my second thread
                    public void run() {
                        for (int i = 11; i <= 20; i++) {
                            System.out.println(i);
                        }
                    }
                };
                t1.start();
                t2.start();
            }
        }

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework.

    Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, will lead to the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. If I solely look at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software as possible, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm-implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task; analyze the code and track the path it follows through your application. What does the code do along the way? Verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task, but ultimately it lacks precision and overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts, and for example how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code, and thus reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database contribute how much to the total time taken for task T, we need different tools.

    There are two kinds of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to look at whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; however, there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

    Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate do), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on.

    Interpret

    After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at.

    We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken and the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place.

    In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected, and in the area where the most time is contributed to the total time taken there is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation changes.

    Fix

    After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results.

    After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.

    Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

    - SELECT N+1 (lazy-loading specific): lazy-loading-triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. (A sketch of this fix follows at the end of this article.)

    - Prefetch paths using many path nodes, or sorting, or limiting: eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be that the amount of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    - Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

    - Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you turn server-side paging on on the datasource control used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging functionality on the data reader, so it won't fetch all rows even if the query suggests it does.

    - Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    - Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory and then traverse the data in-memory to calculate a value.

    - Using .ToList() constructs inside Linq queries: it might be you use .ToList() somewhere in a Linq query which makes the query run partially in-memory. Example:

        var q = from c in metaData.Customers.ToList()
                where c.Country == "Norway"
                select c;

      This will actually fetch all customers in-memory and do in-memory filtering, as the Linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

    - Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

    - Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and afterwards saving them. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's a RDBMS somewhere.

    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as well as knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.
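
    As a concrete illustration of the SELECT N+1 item above, here is the lazy-versus-eager difference sketched with JPA/JPQL rather than LLBLGen Pro (entity names and mappings are illustrative only; the eager query plays the role of the prefetch path):

        import java.util.List;

        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.FetchType;
        import javax.persistence.Id;
        import javax.persistence.ManyToOne;
        import javax.persistence.Table;

        @Entity
        class Customer {
            @Id Long id;
            String companyName;
        }

        @Entity
        @Table(name = "orders") // "order" is a reserved word in SQL
        class Order {
            @Id Long id;
            @ManyToOne(fetch = FetchType.LAZY)
            Customer customer;
        }

        public class OrderQueries {
            // SELECT N+1: one query for the orders; touching order.customer later
            // triggers one extra query per order
            static List<Order> lazyOrders(EntityManager em) {
                return em.createQuery("select o from Order o", Order.class)
                         .getResultList();
            }

            // Eager fix: a single joined query fetches the customers together
            // with the orders
            static List<Order> ordersWithCustomers(EntityManager em) {
                return em.createQuery(
                            "select o from Order o join fetch o.customer", Order.class)
                         .getResultList();
            }
        }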
