Search Results

Search found 14282 results on 572 pages for 'performance counter'.


  • Table per subclass inheritance relationship: How to query against the Parent class without loading any subclass

    - by Arthur Ronald F D Garcia
    Suppose a table-per-subclass inheritance relationship, which can be described as below (from wikibooks.org - see here). Notice the Parent class is not abstract:

        @Entity
        @Inheritance(strategy=InheritanceType.JOINED)
        public class Project {
            @Id
            private long id;
            // Other properties
        }

        @Entity
        @Table(name="LARGEPROJECT")
        public class LargeProject extends Project {
            private BigDecimal budget;
        }

        @Entity
        @Table(name="SMALLPROJECT")
        public class SmallProject extends Project {
        }

    I have a scenario where I just need to retrieve the parent class. Because of performance issues, what HQL query should I run to retrieve the parent class, and just the parent class, without loading any subclass?
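
    One possibility (a sketch, not the asker's code) is JPQL's TYPE operator, available in JPA 2.0 and recent Hibernate, which restricts the result to rows whose concrete type is the parent:

        // Sketch: returns only plain Project rows, assuming a JPA 2.0 provider.
        // Hibernate's older HQL equivalent predicate is "p.class = Project".
        List<Project> projects = entityManager
            .createQuery("select p from Project p where type(p) = Project",
                         Project.class)
            .getResultList();

    Note that with the JOINED strategy the provider may still join the subclass tables to determine each row's type, so it is worth checking the generated SQL.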

    Read the article

  • How to find which method makes my iPhone app slow?

    - by Stewart Hou
    Currently I am working on a production app. One function acts like the Settings app on the iPhone. When the user clicks a cell of a tableView, as shown below http://www.penguintech.net/images/stackoverflow/1.png it pushes another view, which includes a textfield to let the user input something. However, on both the simulator and the device, just after the app has loaded, the delay between clicking and showing the second view is around 2 seconds. If the user then goes back to the previous view and clicks again, there is no delay at all. To detect which method causes the delay, I put an NSLog() in every involved method, but when I inspected the console while running the app, all the NSLog() messages showed up within 0.1 seconds, and the app still had the delay. Is there any other way to trace the performance of an app? Instruments shows CPU usage only on Mac OS, not on the iPhone.
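
    One manual alternative (a sketch in Objective-C, using Core Foundation's clock) is to bracket the suspect call and log wall-clock time, which also catches work that happens between log statements:

        // Sketch: time a single suspect call; pushDetailView is a
        // hypothetical stand-in for the method under suspicion.
        CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
        [self pushDetailView];
        NSLog(@"pushDetailView took %.3f s",
              CFAbsoluteTimeGetCurrent() - start);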

    Read the article

  • How can I write faster JavaScript?

    - by a paid nerd
    I'm writing an HTML5 canvas visualization. According to the Chrome Developer Tools profiler, 90% of the work is being done in (program), which I assume is the V8 interpreter at work calling functions and switching contexts and whatnot. Other than logic optimizations (e.g., only redrawing parts of the visualization that have changed), what can I do to optimize the CPU usage of my JavaScript? I'm willing to sacrifice some amount of readability and extensibility for performance. Is there a big list I'm missing because my Google skills suck? I have some ideas but I'm not sure if they're worth it:

    - Limit function calls
    - When possible, use arrays instead of objects and properties
    - Use variables for math operation results as much as possible
    - Cache common math operations such as Math.PI / 180
    - Use sin and cos approximation functions instead of Math.sin() and Math.cos()
    - Reuse objects when passing around data instead of creating new ones
    - Replace Math.abs() with ~~
    - Study jsperf.com until my eyes bleed
    - Use a preprocessor on my JavaScript to do some of the above operations
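
    Two of those ideas sketched together (illustrative code, not from the question): hoist invariant math out of the hot path and reuse a scratch object instead of allocating per call.

        // Sketch: cached constant plus a reused result object.
        var DEG2RAD = Math.PI / 180;      // computed once, not per call
        var scratch = { x: 0, y: 0 };     // reused to avoid per-frame allocation

        function rotate(p, degrees) {
          var r = degrees * DEG2RAD;
          var c = Math.cos(r), s = Math.sin(r);
          scratch.x = p.x * c - p.y * s;
          scratch.y = p.x * s + p.y * c;
          return scratch;                 // caller must copy if it keeps the value
        }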

    Read the article

  • Why is this JavaScript function so slow on Firefox?

    - by macrael
    This function was adapted from the website http://eriwen.com/javascript/measure-ems-for-layout/:

        function getEmSize(el) {
            var tempDiv = document.createElement("div");
            tempDiv.style.height = "1em";
            el.appendChild(tempDiv);
            var emSize = tempDiv.offsetHeight;
            el.removeChild(tempDiv);
            return emSize;
        }

    I am running this function as part of another function on window.resize, and it is causing performance problems on Firefox 3.6 that do not exist on current Safari or Chrome. Firefox's profiler says I'm spending the most time in this function, and I'm curious as to why that is. Is there a way to get the em size in JavaScript without doing all this work? I would like to recalculate the size on resize in case the user has changed it.
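
    One common mitigation (a sketch, not from the question) is to cache the measurement and recompute it only on a debounced resize, so the DOM write/read cycle that forces a reflow runs once per resize gesture instead of once per event:

        // Sketch: debounce the expensive measurement; relayout() is a
        // hypothetical function standing in for the rest of the resize work.
        var cachedEmSize = null;
        var resizeTimer = null;

        window.addEventListener("resize", function () {
            clearTimeout(resizeTimer);
            resizeTimer = setTimeout(function () {
                cachedEmSize = getEmSize(document.body);
                relayout(cachedEmSize);
            }, 100);
        }, false);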

    Read the article

  • Making LINQ avoid in-memory filtering where possible

    - by linqmonkey
    Consider these two LINQ to SQL data-retrieval methods. The first creates a 'proper' SQL statement that filters the data, but requires passing the data context into the method. The second has a nicer syntax but loads the entire list of that account's projects and then filters in memory. Is there any way to preserve the syntax of the second method but with the performance advantage of the first?

        public partial class Account
        {
            public IQueryable<Project> GetProjectsByYear(LinqDataContext context, int year)
            {
                return context.Projects
                    .Where(p => p.AccountID == this.AccountID && p.Year == year)
                    .OrderBy(p => p.ProjectNo);
            }

            public IQueryable<Project> GetProjectsByYear(int year)
            {
                return this.Projects
                    .Where(p => p.Year == year)
                    .OrderBy(p => p.ProjectNo)
                    .AsQueryable();
            }
        }
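
    One possible compromise (a sketch, assuming the call site can start from context.Projects): express the filter as an extension method on IQueryable<Project>, so it stays composable and is translated into a single SQL statement.

        public static class ProjectQueries
        {
            // Sketch: composes into the SQL query when applied to an
            // IQueryable from the DataContext, rather than to the in-memory
            // EntitySet exposed by the Account entity.
            public static IQueryable<Project> ByAccountAndYear(
                this IQueryable<Project> projects, int accountId, int year)
            {
                return projects
                    .Where(p => p.AccountID == accountId && p.Year == year)
                    .OrderBy(p => p.ProjectNo);
            }
        }

        // Usage: var q = context.Projects.ByAccountAndYear(account.AccountID, 2010);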

    Read the article

  • Oracle SQL: Multiple Subqueries Unioned Without Running the Original Query Multiple Times

    - by Bob
    So I've got a very large database and need to work on a subset, roughly 1% of the data, to dump into an Excel spreadsheet to make a graph. Ideally, I could select out the subset of data and then run multiple SELECT queries on that, which are then UNIONed together. Is this even possible? I can't seem to find anyone else trying to do this, and it would improve the performance of my current query quite a bit. Right now I have something like this:

        SELECT (
            SELECT (
                SELECT ( long list of requirements )
                UNION
                SELECT ( slightly different long list of requirements )
            )
        )

    and it would be nice if I could factor out the commonalities of the two long requirement lists and keep only the simple differences between the two SELECT statements being UNIONed.
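
    Oracle's subquery factoring (WITH) clause does exactly this: the shared subset is defined once and referenced by both branches. A sketch with illustrative table and column names:

        WITH subset AS (
            SELECT *
              FROM big_table
             WHERE shared_requirements = 'met'   -- the common ~1% filter
        )
        SELECT col_a, col_b FROM subset WHERE variant_condition = 1
        UNION
        SELECT col_a, col_b FROM subset WHERE variant_condition = 2;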

    Read the article

  • Should I use TabContainer for multiple pages?

    - by Tim
    I'm considering whether it is a good idea to use an ASP.NET TabContainer control in such a way that every TabPanel contains the content of a different page. For example: the next thing I want to implement in my application is master-data management. Normally I would create one .aspx page for every master-data table (e.g. Customer - MD_Customer.aspx) and then add a link to that page in my menu. Now I'm thinking of creating one .aspx page for all of them (Masterdata.aspx) with a TabContainer and an UpdatePanel for every type of master data. The link in the menu could have an additional MDType URL parameter. My main concerns are performance (one "page" for every TabPanel currently means 7 "pages" in one) and maintainability, because of the increasing complexity. Is this a good approach or a bad idea? Thanks

    Read the article

  • Interpreted languages: The higher-level the faster?

    - by immersion
    I have designed around five experimental languages and interpreters for them so far, for education, as a hobby, and for fun. One thing I noticed: the assembly-like language featuring only subroutines and conditional jumps as control structures was much slower than the high-level language featuring if, while and so on. I developed them both simultaneously and both were interpreted languages. I wrote the interpreters in C++ and tried to optimize the code-execution part to be as fast as possible. My hypothesis: in almost all cases, the performance of interpreted languages rises with their level (high vs. low). Am I basically right about this? (If not, why?)

    Read the article

  • PHP 5.3 namespaces: should I call every PHP function with a backslash?

    - by lhwparis
    Hi, I'm now using namespaces in PHP 5.3. There is a fallback mechanism for functions which don't exist in the namespace: PHP checks each time whether the function exists in the namespace and then tries to load it from the global space. So what about all the PHP internal functions, strstr for example? Should I now call every PHP internal function with a leading \ to avoid PHP first checking the namespace? Is this fallback a big performance drop? What do you think?
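
    For illustration, the two call styles look like this (a minimal sketch; the App namespace name is made up):

        <?php
        namespace App;

        // Unqualified: resolved against \App\strstr first, then falls
        // back to the global \strstr.
        $a = strstr('haystack', 'hay');

        // Fully qualified: goes straight to the global function, skipping
        // the namespace lookup.
        $b = \strstr('haystack', 'hay');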

    Read the article

  • Does a servlet-based stack have significant overheads?

    - by John
    I don't know if it's simply because page loads take a little time, or the way servlets have an abstraction framework above the 'bare metal' of HTTP, or just because of the "Enterprise" in Java EE, but in my head I have the notion that a servlet-based app inherently adds overhead compared to a Java app which simply deals with sockets directly. Forget web pages; imagine instead a Java server app where you send it a question over an HTTP request and it looks up an answer from memory and returns the answer in the response. You can easily write a Java socket-based app which does this; you can also take a servlet approach and get away from the "bare metal" of sockets. Is there any measurable performance impact to be expected when implementing the same approach using servlets rather than a custom socket-based HTTP listening app? And yes, I am hazy on the exact data sent in HTTP requests and I know it's a vague question. It's really about whether servlet implementations have lots of layers of indirection or anything else that would add up to a significant overhead per call, where by significant I mean maybe an additional 0.1s or more.

    Read the article

  • How to make my WPF application as FAST as Outlook

    - by Raul Otaño
    Common WPF applications take some time to load moderately complex views; once a view is loaded, it works fine. For example, in a master-detail view, if the detail view is very complex and uses different DataTemplates, it takes a few seconds (2-3 seconds) to load the view. When I open the Outlook application, for instance, it renders complex views and is relatively much faster. Is there a way to increase the performance of my WPF application? Maybe a way to avoid loading the templates' data every time the "master" item changes, and load it only once in the application's lifetime? I will appreciate any suggestion.

    Read the article

  • Distributing CPU-bound compression jobs to multiple computers?

    - by barnaby
    The other day I needed to archive a lot of data on our network, and I was frustrated that I had no immediate way to harness the power of multiple machines to speed up the process. I understand that creating a distributed job management system is a leap from a command-line archiving tool. I'm now wondering what the simplest solution to this type of distributed performance scenario could be. Would a custom tool always be a requirement, or are there ways to use standard utilities and somehow distribute their load transparently at a higher level? Thanks for any suggestions.
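
    One standard-utility sketch, assuming GNU parallel is installed and passwordless SSH works to the worker machines (host1 and host2 are made-up names): each input file is transferred to a remote host, compressed there, and the result copied back.

        # Sketch: distribute gzip jobs across two hosts, one file per job.
        parallel -S host1,host2 --transfer --return {}.gz --cleanup \
            gzip ::: *.dat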

    Read the article

  • Does adding more namespaces in the code file affect performance?

    - by Harikrishna
    If we import more namespaces in a code file (.cs file), does it affect performance? Should we add namespaces to the .cs file only as needed? That is, does adding more namespaces to a .cs file affect performance? For example:

        using System;
        using System.Data.Sql;
        using System.Collections.Generic;
        using System.Data;
        using System.IO;
        using System.Linq;
        using System.Text.RegularExpressions;
        using System.Windows.Forms;
        using System.Xml;
        using System.Data.SqlClient;
        using System.ComponentModel;

    Read the article

  • Minimizing calls to the database in Rails

    - by ming yeow
    Hi guys, I am familiar with memcached and eager loading, but neither seems to solve the problem I am facing. My main performance lag comes from hundreds of data-retrieval calls to the database. The tricky thing is that I do not know which set of users I need to retrieve until several steps of computation have run. I can refactor my code, but I was wondering how you experts handle this situation? I think it should be a fairly common situation:

        def newsfeed
          - find out which users I need
          - retrieve those users via DB
          - find out which events happened for these users
          - for each of those events
            - retrieve new set of users
          - find out which groups are relevant
          - for each of those groups
            - retrieve new set of users
          - etc., etc.
        end
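
    One batching pattern that fits this shape (a sketch with illustrative model names, assuming Rails 3+): let each computation step collect IDs, then load each new set of users with a single query and look them up from a hash.

        # Sketch: one query per step, not one query per record.
        user_ids = events.map(&:user_id).uniq
        users_by_id = User.where(:id => user_ids).index_by(&:id)

        events.each do |event|
          user = users_by_id[event.user_id]  # no extra DB hit here
          # ... next step of the computation ...
        end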

    Read the article

  • MVC: should more specialized models be populated by more precise queries too?

    - by KevinUK
    If you have a Car model with 20 or so properties (and several table joins) for a carDetail page, then your LINQ to SQL query will be quite large. If you have a carListing page which uses under 5 properties (all from one table), then you use a CarSummary model. Should the CarSummary model be populated using the same query as the Car model? Or should you use a separate LINQ to SQL query, which would be more precise? I am just thinking of performance, but LINQ uses lazy loading anyway, so I am wondering whether this is an issue or not.
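
    A separate, more precise query can project straight into the summary model, so only the needed columns appear in the generated SQL. A sketch (MyDataContext and CarSummary's properties are illustrative):

        public IQueryable<CarSummary> GetCarSummaries(MyDataContext context)
        {
            // Sketch: the projection limits the SELECT list to these columns.
            return context.Cars.Select(c => new CarSummary
            {
                Id    = c.Id,
                Make  = c.Make,
                Model = c.Model,
                Year  = c.Year,
                Price = c.Price
            });
        }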

    Read the article

  • Fastest method for the minimum of two numbers

    - by user85030
    I was going through MIT's OpenCourseWare material on performance engineering. The quickest method (requiring the fewest clock cycles) for finding the minimum of two numbers (say x and y) is stated as:

        min = y ^ ((x ^ y) & -(x < y))

    The output of the expression x < y can be 0 or 1 (assuming C is being used), which then changes to -0 or -1. I understand that xor can be used to swap two numbers. Questions:

    1. How is -0 different from 0 and -1 in terms of binary?
    2. How is that result used with the and operator to get the minimum?

    Thanks in advance.
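
    For reference, a worked illustration (a sketch in C, assuming two's complement integers): -0 is just 0 (all bits clear), while -1 is all bits set, so the mask either keeps x ^ y intact or clears it, and the final xor then yields either x or y.

        #include <stdio.h>

        int main(void) {
            int x = 3, y = 7;
            /* x < y is 1, so -(x < y) is -1 (all ones): the mask keeps x ^ y,
               and y ^ (x ^ y) collapses to x, the smaller value.
               If x >= y, the mask is 0 and the result is y ^ 0 = y. */
            int min = y ^ ((x ^ y) & -(x < y));
            printf("min = %d\n", min);  /* prints: min = 3 */
            return 0;
        }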

    Read the article

  • Best way to access nested data structures?

    - by Blackshark
    I would like to know the best way (performance-wise) to access a large data structure. There are about a hundred ways to do it, but which is the most accessible for the compiler to optimize? One can access a value by

        foo[someindex].bar[indexlist[i].subelement[j]].baz[0]

    or create pointer aliases like

        sometype_t* tmpfoo = &foo[someindex];
        tmpfoo->bar[indexlist[i].subelement[j]].baz[0]

    or create reference aliases like

        sometype_t &tmpfoo = foo[someindex];
        tmpfoo.bar[indexlist[i].subelement[j]].baz[0]

    and so forth...

    Read the article

  • One database or many?

    - by dsims
    I am developing a website that will manage data for multiple entities. No data is shared between entities, but they may be owned by the same customer, and a customer may want to manage all their entities from a single "dashboard". So should I have one database for everything, or keep the data separated into individual databases? Is there a best practice? What are the positives and negatives of having:

    - one database for the entire site (each entity has a "customerID", each data row an "entityID")
    - one database per customer (each data row has an "entityID")
    - one database per entity (the relation of database to customer is kept outside the databases)

    Multiple databases seem like they would give better performance (fewer rows and joins) but may eventually become a maintenance nightmare.

    Read the article

  • Understanding memory leaks in an Android app

    - by sat
    After going through a few articles about performance, I am not able to understand this statement exactly: "When a Drawable is attached to a view, the view is set as a callback on the drawable." And the suggested solution: "Setting the stored drawables' callbacks to null when the activity is destroyed." What does that mean? E.g., in my app I initialize an ImageButton in onCreate() like this:

        imgButton = (ImageButton) findViewById(R.id.imagebtn);

    At a later stage, I get an image from a URL, get the stream, convert it to a drawable, and set the image button like this:

        imgButton.setImageDrawable(drawable);

    According to the statement above, when I am exiting my app, say in onDestroy(), I have to set the stored drawables' callbacks to null, but I am not able to understand this part! In this simple case, what do I have to set to null? I am using Android 2.2 Froyo; is this technique required here or not?
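
    A sketch of the cleanup being described (standard Android API calls; whether it is needed depends on whether the drawable outlives the activity):

        @Override
        protected void onDestroy() {
            super.onDestroy();
            // Break the Drawable -> View reference so a long-lived drawable
            // cannot keep the destroyed activity's view tree in memory.
            Drawable d = imgButton.getDrawable();
            if (d != null) {
                d.setCallback(null);
            }
        }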

    Read the article

  • What can we do to make XML processing faster?

    - by adpd
    We work on an internal corporate system that has a web front-end as one of its interfaces. The front-end (Java + Tomcat + Apache) communicates with the back-end (a proprietary system written in a COBOL-like language) through SOAP web services. As a result, we pass large XML files back and forth. We believe that this architecture has a significant impact on performance due to the large overhead of XML transport and parsing. Unfortunately, we are stuck with this architecture. How can we make this XML set-up more efficient? Any tips or techniques are greatly appreciated.
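
    On the parsing side, one option (a sketch, assuming the Java front-end can move from a DOM-style parser to streaming StAX) is to process elements as they arrive, so the document is never fully materialized in memory; "record" is a made-up element name.

        import java.io.InputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        static void readRecords(InputStream in) throws Exception {
            XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(in);
            while (r.hasNext()) {
                if (r.next() == XMLStreamConstants.START_ELEMENT
                        && "record".equals(r.getLocalName())) {
                    // handle one record at a time here
                }
            }
            r.close();
        }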

    Read the article

  • Using temporary arrays to cut down on code - inefficient?

    - by tommaisey
    I'm new to C++ (and SO), so sorry if this is obvious. I've started using temporary arrays in my code to cut down on repetition and to make it easier to do the same thing to multiple objects. So instead of:

        MyObject obj1, obj2, obj3, obj4;
        obj1.doSomming(arg);
        obj2.doSomming(arg);
        obj3.doSomming(arg);
        obj4.doSomming(arg);

    I'm doing:

        MyObject obj1, obj2, obj3, obj4;
        MyObject* objs[] = {&obj1, &obj2, &obj3, &obj4};
        for (int i = 0; i != 4; ++i)
            objs[i]->doSomming(arg);

    Is this detrimental to performance? Like, does it cause unnecessary memory allocation? Is it good practice? Thanks.

    Read the article

  • Which of these Array Initializations is better in Ruby?

    - by Bragaadeesh
    Hi, which of these two forms of array initialization is better in Ruby?

    Method 1:

        DAYS_IN_A_WEEK = (0..6).to_a
        HOURS_IN_A_DAY = (0..23).to_a

        @data = Array.new(DAYS_IN_A_WEEK.size).map! { Array.new(HOURS_IN_A_DAY.size) }

        DAYS_IN_A_WEEK.each do |day|
          HOURS_IN_A_DAY.each do |hour|
            @data[day][hour] = 'something'
          end
        end

    Method 2:

        DAYS_IN_A_WEEK = (0..6).to_a
        HOURS_IN_A_DAY = (0..23).to_a

        @data = {}

        DAYS_IN_A_WEEK.each do |day|
          HOURS_IN_A_DAY.each do |hour|
            @data[day] ||= {}
            @data[day][hour] = 'something'
          end
        end

    The difference between the first method and the second is that the second one does not allocate memory initially. I feel the second one is a bit inferior when it comes to performance due to the numerous array copies that have to happen. However, it is not straightforward in Ruby to find out what is happening. So, if someone can explain to me which is better, it would be really great! Thanks

    Read the article

  • Why do these seemingly similar queries have such drastically different run times?

    - by Jherico
    I'm working with an Oracle DB, trying to tune some queries, and I'm having trouble understanding why writing a particular clause in a particular way has such a drastic impact on query performance. Here is a performant version of the query I'm doing:

        select * from (
            select a.*, rownum rn from (
                select * from table_foo
            ) a where rownum < 3
        ) where rn >= 2

    The same query with the last two lines replaced by

        ) a where rownum >= 2
              and rownum < 3
        )

    performs horribly; it is several orders of magnitude worse.

        ) a where rownum between 2 and 3
        )

    also performs horribly. I don't understand the magic in the first query or how to apply it to further similar queries.

    Read the article

  • How does Array.ForEach() compare to standard for loop in C#?

    - by DaveN59
    I pine for the days when, as a C programmer, I could type:

        memset(byte_array, 0xFF, sizeof(byte_array));

    and get a byte array filled with 0xFF bytes. So, I have been looking for a replacement for this:

        for (int i = 0; i < byteArray.Length; i++)
        {
            byteArray[i] = 0xFF;
        }

    Lately, I have been using some of the new C# features and have been using this approach instead:

        Array.ForEach<byte>(byteArray, b => b = 0xFF);

    Granted, the second approach seems cleaner and is easier on the eye, but how does the performance compare to the first approach? Am I introducing needless overhead by using LINQ and generics? Thanks, Dave

    Read the article

  • Faster way of initializing arrays in Delphi

    - by Max
    I'm trying to squeeze every bit of performance out of my Delphi application, and now I've come to a procedure which works with dynamic arrays. The slowest line in it is

        SetLength(Result, Len);

    which is used to initialize the dynamic array. When I look at the code for the SetLength procedure, I see that it is far from optimal. The call sequence is as follows:

        _DynArraySetLength -> DynArraySetLength

    DynArraySetLength gets the array length (which is zero for initialization) and then uses ReallocMem, which is also unnecessary for initialization. I have been using SetLength to initialize dynamic arrays all the time. Maybe I'm missing something? Is there a faster way to do this?

    Read the article
