Search Results

Search found 13300 results on 532 pages for 'exalytics performance tuning'.

Page 34/532 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Display another field in the referenced table for multiple columns with performance issues in mind

    - by israkir
    I have an edge table like this:

        -------------------------------
        | id | arg1 | relation | arg2 |
        -------------------------------
        |  1 |    1 |        3 |    4 |
        |  2 |    2 |        6 |    5 |
        -------------------------------

    where arg1, relation and arg2 reference the ids of objects in another object table:

        --------------------
        | id | object_name |
        --------------------
        |  1 | book        |
        |  2 | pen         |
        |  3 | on          |
        |  4 | table       |
        |  5 | bag         |
        |  6 | in          |
        --------------------

    What I want to do is, with performance issues in mind (it is a very big table with more than 50 million entries), display the object_name for each edge entry rather than the id, such as:

        ---------------------------
        | arg1 | relation | arg2  |
        ---------------------------
        | book | on       | table |
        | pen  | in       | bag   |
        ---------------------------

    What is the best SELECT query to do this? Also, I am open to suggestions for optimizing the query - adding more indexes on the tables, etc. EDIT: Based on the comments below: 1) @Craig Ringer: The PostgreSQL version is 8.4.13, and the only index is id on both tables. 2) @andrefsp: edge is almost 2x bigger than object.
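
    A triple join against the object table is the usual shape for this query; below is a minimal sketch, assuming the tables are named edge and object as above (the index name is illustrative, not from the original post):

        SELECT a1.object_name AS arg1,
               r.object_name  AS relation,
               a2.object_name AS arg2
        FROM edge e
        JOIN object a1 ON a1.id = e.arg1
        JOIN object r  ON r.id  = e.relation
        JOIN object a2 ON a2.id = e.arg2;

        -- For full-table output the planner will likely hash-join the small
        -- object table; single-column indexes on edge only start to pay off
        -- once the query filters, e.g.:
        CREATE INDEX edge_relation_idx ON edge (relation);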

    Read the article

  • Tool to monitor IE performance running JavaScript

    - by StefanE
    Hi, the company I work for is one of the largest betting companies in Europe, and the website has thousands of lines of JavaScript on all our pages. Lately, Internet Explorer versions earlier than version 9 have been running painfully slow, and I want to be able to monitor which parts of a page load (including scripts) are slow. I know that IE is slower in general, has DOM API issues, etc. What I want to accomplish is a way to quickly identify the slow parts and see if we can replace the code with IE-specific code that will render with higher performance. Cheers, Stefan
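
    Absent profiler support in old IE, one low-tech option is manual instrumentation; here is a minimal sketch (the function names are illustrative, and new Date() is used because IE 8 and earlier lack high-resolution timers):

        // Wrap suspect sections in a timing probe.
        function timeSection(label, fn) {
            var start = new Date().getTime();
            fn();
            var elapsed = new Date().getTime() - start;
            // window.console exists in IE8 only while the dev tools are open.
            if (window.console && window.console.log) {
                window.console.log(label + ': ' + elapsed + 'ms');
            }
        }

        timeSection('render-odds-table', function () {
            renderOddsTable(); // hypothetical page function
        });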

    Read the article

  • .NET Performance: Deep Recursion vs Queue

    - by JeffN825
    I'm writing a component that needs to walk large object graphs, sometimes 20-30 levels deep. What is the most performant way of walking the graph? A. Enqueueing "steps" so as to avoid deep recursion, or B. A DFS (depth-first search), which may step many levels deep and have a "deep" stack trace at times. I guess the question I'm asking is: Is there a performance hit in .NET for doing a DFS that causes a "deep" stack trace? If so, what is the hit? And would I be better off with some BFS by means of queueing up steps that would have been handled recursively in a DFS? Sorry if I'm being unclear. Thanks.
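
    One middle ground worth noting is an explicit-stack DFS, which keeps depth-first order but moves the pending frames to the heap; a minimal sketch, assuming a hypothetical Node type with a Children list (a visited set would also be needed if the graph can contain cycles):

        using System.Collections.Generic;

        public sealed class Node
        {
            public List<Node> Children { get; } = new List<Node>();
        }

        public static class GraphWalker
        {
            // Depth-first traversal without deep recursion: pending nodes
            // live in a heap-allocated Stack<T>, not on the call stack.
            public static IEnumerable<Node> DepthFirst(Node root)
            {
                var stack = new Stack<Node>();
                stack.Push(root);
                while (stack.Count > 0)
                {
                    Node node = stack.Pop();
                    yield return node;
                    // Push children in reverse to keep left-to-right order.
                    for (int i = node.Children.Count - 1; i >= 0; i--)
                        stack.Push(node.Children[i]);
                }
            }
        }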

    Read the article

  • C# Dynamic Form Components (Performance problem)

    - by Svisstack
    Hello, I have a performance problem with my code under Windows Forms. I have a form whose layout depends on data passed to the constructor, so the layout must be generated in OnLoad or in the constructor. The generation is simple: a base FlowLayoutPanel holds other FlowLayoutPanels, and each of those gets a Label and a TextBox with data binding. The problem is that this is VERY SLOW - up to 20 seconds - and I am drawing fewer than 100 controls. From a Performance Session I know that about 70% of the time is spent processing these functions: System.Windows.Forms.Control.ControlCollection.Add(System.Windows.Forms.Control) and System.Windows.Forms.ControlBindingsCollection.Add(System.Windows.Forms.Binding). What can I do about this? Can anyone help me with this problem? How do I solve the dynamic form layout problem?
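
    A common mitigation is to suspend layout while the controls are built and add them in one batch; below is a minimal sketch of the idea as a method on the form (the Field type and names are illustrative, not from the original code; assumes using System.Collections.Generic and System.Windows.Forms):

        private void BuildRows(FlowLayoutPanel basePanel, IEnumerable<Field> fields)
        {
            // Suspend layout so the panel does not re-layout per control.
            basePanel.SuspendLayout();
            try
            {
                var rows = new List<Control>();
                foreach (Field field in fields)
                {
                    var row = new FlowLayoutPanel { AutoSize = true };
                    row.Controls.Add(new Label { Text = field.Caption, AutoSize = true });
                    var box = new TextBox();
                    box.DataBindings.Add("Text", field, "Value");
                    row.Controls.Add(box);
                    rows.Add(row);
                }
                // One AddRange call instead of ~100 Add calls.
                basePanel.Controls.AddRange(rows.ToArray());
            }
            finally
            {
                basePanel.ResumeLayout(true);
            }
        }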

    Read the article

  • Function calls in virtual machine killing performance

    - by GenTiradentes
    I wrote a virtual machine in C, which has a call table populated by pointers to functions that provide the functionality of the VM's opcodes. When the virtual machine is run, it first interprets a program, creating an array of indexes corresponding to the appropriate function in the call table for the opcode provided. It then loops through the array, calling each function until it reaches the end. Each instruction is extremely small, typically one line. Perfect for inlining. The problem is that the compiler doesn't know when any of the virtual machine's instructions are going to be called, as it's decided at runtime, so it can't inline them. The overhead of function calls and argument passing is killing the performance of my VM. Any ideas on how to get around this?
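
    One standard workaround is to replace the runtime-populated call table with a dispatch loop the compiler can see into - a big switch (or GCC computed gotos); a minimal sketch with hypothetical opcodes:

        #include <stddef.h>

        enum op { OP_PUSH, OP_ADD, OP_HALT };

        /* Dispatch via a switch inside one function: each opcode body is
         * visible to the compiler at the dispatch site, so it can be
         * inlined, unlike functions reached through a call table. */
        long run(const int *code, size_t len)
        {
            long stack[256];
            size_t sp = 0, ip = 0;
            while (ip < len) {
                switch (code[ip++]) {
                case OP_PUSH: stack[sp++] = code[ip++]; break;
                case OP_ADD:  sp--; stack[sp-1] += stack[sp]; break;
                case OP_HALT: return stack[sp-1];
                }
            }
            return 0;
        }

    This also removes the argument-passing overhead, since the VM state (stack, ip) stays in locals the compiler can keep in registers.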

    Read the article

  • How to track IIS server performance

    - by Chris Brandsma
    I have a recurring issue where a customer calls up and complains that the web site is too slow. Specifically, if they are inactive for a short period of time and then go back to the site, there will be a one- to two-minute delay before the user sees a response. (The standard browser is Firefox in this case.) I have Perfmon up and running; the CPU utilization is usually below 20% (single proc... don't ask). The database is humming along. And I'm pulling my hair out. So, what metrics/tools do you find useful when evaluating IIS performance?

    Read the article

  • How can I measure file access performance (and volume) of a (Java) application

    - by stmoebius
    Given an application, how can I measure the amount of data read and written by that application, and the time spent reading/writing to disk? The specific application is Java-based (JBoss), multi-threaded, and running as a service on Windows 7/2008 x64. The overall goal I have is determining whether and why file access is a bottleneck in my application. Therefore, running the application in a defined and repeatable scenario is a given. File access may be local as well as on network shares. Windows Performance Monitor appears to be too hard to use (unless someone can point me to a helpful explanation). Any ideas?
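
    If instrumenting the code path is an option, a counting stream wrapper gives exact per-application numbers; a minimal sketch (the class name is illustrative; the other read() overloads are omitted for brevity but would need the same treatment):

        import java.io.FilterInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.concurrent.atomic.AtomicLong;

        // Wraps any InputStream and tallies bytes read and time spent in read().
        public class MeteredInputStream extends FilterInputStream {
            public static final AtomicLong BYTES = new AtomicLong();
            public static final AtomicLong NANOS = new AtomicLong();

            public MeteredInputStream(InputStream in) { super(in); }

            @Override
            public int read(byte[] b, int off, int len) throws IOException {
                long t0 = System.nanoTime();
                int n = super.read(b, off, len);
                NANOS.addAndGet(System.nanoTime() - t0);
                if (n > 0) BYTES.addAndGet(n);
                return n;
            }
        }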

    Read the article

  • Java generic Interface performance

    - by halfwarp
    Simple question, but tricky answer, I guess. Does using generic interfaces hurt performance? Example:

        public interface Stuff<T> {
            void hello(T var);
        }

    vs

        public interface Stuff {
            void hello(Integer var); // Integer used just as an example
        }

    My first thought is that it doesn't. Generics are just part of the language, and the compiler will optimize it as though there were no generics (at least in this particular case of generic interfaces). Is this correct?
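
    For reference, erasure means the generic version compiles to essentially the raw form below, plus a compiler-generated bridge method in implementing classes; this is a conceptual sketch of what javac produces, not literal output:

        // After type erasure, Stuff<T> is effectively:
        public interface Stuff {
            void hello(Object var); // T erases to its bound (Object here)
        }

        // A class that implemented Stuff<Integer> gets a synthetic bridge
        // that casts and delegates:
        class IntStuff implements Stuff {
            public void hello(Integer var) { /* real implementation */ }

            // synthetic bridge (conceptual):
            public void hello(Object var) { hello((Integer) var); }
        }

    In practice the only added work is the cast in the bridge, which the JIT handles cheaply, so there is essentially no performance hit.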

    Read the article

  • AS3: Performance question calling an event function with null param

    - by adehaas
    Lately I needed to call a listener function without an actual listener, like so:

        foo(null);

        private function foo(event:Event):void {
            //do something
        }

    So I was wondering if there is a significant difference regarding performance between this and the following, in which I can avoid the null when calling the function without a listener but am still able to call it with a listener as well:

        foo();

        private function foo(event:Event = null):void {
        }

    I am not sure whether it is just a question of style or actually bad practice, and whether I should write two similar functions, one with and one without the event param (which seems cumbersome to me). Looking forward to your opinions, thx.

    Read the article

  • C# performance - create font

    - by user85917
    I have performance issues in this code segment, which I think are caused by the "new Font". Will it be faster if the fonts are static/global?

        if (row.StartsWith(TILD_BEGIN))
        {
            rtbTrace.SelectionColor = Color.Maroon;
            rtbTrace.SelectionFont = new Font(myFont, (float)8.25, FontStyle.Regular);
            if (row.StartsWith(BEGIN))
                rtbTrace.AppendText(Environment.NewLine + row + Environment.NewLine);
            else
                rtbTrace.AppendText(Environment.NewLine + row.Substring(1) + Environment.NewLine);
            continue;
        }
        if (row.StartsWith(EXCL_BEGIN))
        {
            // -- similar block
        }
        if (row.StartsWith(DLR_BEGIN))
        {
            // -- similar block
        }
        ...
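
    Caching the Font objects is the usual fix - Font is immutable and wraps an unmanaged GDI+ handle, so constructing one per row is expensive; a minimal sketch (the family name is illustrative):

        // Create each distinct font once and reuse it for every row.
        private static readonly Font RegularFont =
            new Font("Tahoma", 8.25f, FontStyle.Regular);

        // ... inside the row loop:
        rtbTrace.SelectionFont = RegularFont;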

    Read the article

  • SSRS Performance Mystery

    - by user101654
    I have a stored procedure that returns about 50,000 records in 10 sec, using at most 2 cores, in SSMS. The SSRS report using the stored procedure was taking 20 min and would max out the processor on an 8-core server for the entire time. The report was relatively simple (i.e. no graphs, calculations). The report did not appear to be the issue, as I wrote the 50K rows to a temp table and the report could display the data in a few seconds. I tried many different ideas for testing, altering the stored procedure each time but keeping the original code in a separate window to revert back to. After one ALTER of the stored procedure, going back to the original code, the report and server utilization started running fast, comparable to the performance of the stored procedure alone. Everything is fine for now, but I would like to get to the bottom of what caused this in case it happens again. Any ideas?
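
    For what it's worth, the symptom (an ALTER suddenly restoring performance) is consistent with a stale cached plan / parameter sniffing, since ALTER PROCEDURE invalidates the cached plan. A hedged sketch of how to test that theory next time (the procedure name is illustrative):

        -- Evict the procedure's cached plan without editing its code;
        -- sp_recompile marks the object so its next execution recompiles.
        EXEC sp_recompile N'dbo.MyReportProc';

        -- Compare what plans the plan cache holds for the procedure's text:
        SELECT qs.plan_handle, qs.execution_count, qs.total_worker_time
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE st.text LIKE '%MyReportProc%';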

    Read the article

  • C# chart control Performance with large amounts of data

    - by user3642115
    I am using a chart control with a range bar graph to basically make a Gantt chart for lots of people and lots of projects, say about 1000 total series. The issue that I am running into is that once I have all my data added to the chart (which takes some time, but that is to be expected) and I go to scroll down on my graph, it freezes the whole application and takes a while before it unfreezes and scrolls down. Is there any way to improve the performance of this? I tried adding the graph to a panel, growing the graph size dynamically, and then scrolling down from the panel, but that caused a whole plethora of other issues. Any tips for speeding this up? I don't think it is my code, as it has already finished running when this issue happens. Thanks.
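
    Two things that sometimes help with WinForms Chart repaint cost are suspending collection updates while populating and trimming per-series extras; a hedged sketch (the data names are illustrative, not from the original code):

        // Suspend invalidation while the ~1000 series are added,
        // then let the chart repaint once at the end.
        chart.Series.SuspendUpdates();
        try
        {
            foreach (var person in people) // hypothetical data source
            {
                var s = new Series(person.Name)
                {
                    ChartType = SeriesChartType.RangeBar,
                    IsVisibleInLegend = false // huge legends are costly to lay out
                };
                foreach (var task in person.Tasks)
                    s.Points.AddXY(task.Row, task.Start.ToOADate(), task.End.ToOADate());
                chart.Series.Add(s);
            }
        }
        finally
        {
            chart.Series.ResumeUpdates();
        }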

    Read the article

  • OpenGL performance on rendering "virtual gallery" (textures)

    - by maticus
    I have a considerable (120-240) number of 640x480 images that will be displayed as textured flat surfaces (4-vertex polygons) in a 3D environment. About 30-50% of them will be visible in a given frame. It is possible for them to cross over. Nothing else will be present in the environment. The question is: will a modern and/or a few-years-old (let's say Radeon 9550) GPU cope with that, and what frame rate can I expect? I aim for 20 FPS, but 30-40 would be nice. Would changing the resolution to 320x240 make it more likely to work? I do not have any previous experience with performance issues of 3D graphics on modern GPUs, and unfortunately I must make a design choice. I don't want to waste time on doing something that could never have worked :-)
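
    As a rough memory sanity check (assuming uncompressed 32-bit RGBA textures and no mipmaps):

        240 textures x 640 x 480 px x 4 bytes ≈ 295 MB
        240 textures x 320 x 240 px x 4 bytes ≈  74 MB

    Cards of the Radeon 9550's generation typically shipped with 128-256 MB of VRAM, so the full-resolution set may not fit and would be paged over the bus, while 320x240 (or compressed 640x480) textures fit comfortably - and for a scene this simple, fitting in VRAM is likely to matter far more than polygon count.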

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. Why Columnstore? If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore, & documenting columnstore is a compelling reason to upgrade to SQL Server 2014. App: MSIT SONAR Aggregations At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism—a best-practices scenario for columnstore. The Win Compared to performance using classic indexing (which resulted in the expected query plan selection, including partition elimination), SQL Server 2012 nonclustered columnstore query performance increased significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity. Details The table provides the raw data & summarizes the performance deltas.

                                        Logical Reads (8K pages)   CPU (ms)   Durn (ms)
        Columnstore                                      160,323     20,360       9,786
        Conventional Table & Indexes                   9,053,423    549,608     193,903
        Δ                                                    x56        x27         x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data. Note on this linear display the magnitude of the conventional index performance vs. columnstore. The "Metrics (Δ)" chart expresses these values as a ratio. Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent posts in this series document performance enhancements that are even more significant.
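
    For reference, the SQL Server 2012 syntax for the kind of index described is a one-liner; the table and column names below are illustrative, not the actual SONAR schema:

        -- Nonclustered columnstore over the columns the reports aggregate.
        -- Note: in SQL Server 2012 this makes the table read-only until the
        -- index is dropped/disabled or a partition is switched.
        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_SonarPerf
        ON dbo.SonarPerfData (CounterId, SampleTime, SampledValue);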

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary The scalability enhancements delivered by extensions to the multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark. What’s New in MySQL Cluster 7.2 MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key/Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the figure below, and detailed in the MySQL Cluster New Features whitepaper. Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes. Multi-Threaded Data Node Extensions The MySQL Cluster 7.2 data node is now functionally divided into seven thread types:

        1) Local Data Manager threads (ldm). Note – these are sometimes also called LQH threads.
        2) Transaction Coordinator threads (tc)
        3) Asynchronous Replication threads (rep)
        4) Schema Management threads (main)
        5) Network receiver threads (recv)
        6) Network send threads (send)
        7) IO threads

    Each of these thread types is discussed in more detail below. MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads. The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2, which significantly enhanced complex query performance by pushing JOIN operations down to the data nodes. Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain.
    The schema management thread has little load, so it is implemented with a single thread. The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any of the send threads - useful for those workloads that are most sensitive to latency. The IO thread is the final thread type, and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured either to one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load. Benchmarking the Scalability Enhancements The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x. Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1 The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual-socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper - coming soon, so stay tuned! In a following blog post, I’ll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster. Conclusion MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.
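
    For reference, the data node thread counts discussed above are driven from config.ini; a hedged sketch (the counts are illustrative, not a recommendation - check the 7.2 manual for the thread-type keys your exact version supports):

        # [ndbd default] section of config.ini (MySQL Cluster 7.2, ndbmtd)
        # Simple knob: let the data node pick a thread split automatically.
        MaxNoOfExecutionThreads=24

        # Finer control (7.2.3+): explicit per-type counts, e.g.
        # ThreadConfig=ldm={count=8},tc={count=4},recv={count=2},send={count=2},main={count=1},rep={count=1}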

    Read the article

  • How to improve WinForms MSChart performance?

    - by Marcel
    Hi all, I have created some simple charts (of type FastLine) with MSChart and update them with live data. To do so, I bind an observable collection of a custom type to the chart like so:

        // set chart data source
        this._Chart.DataSource = value; // is of type ObservableCollection<SpectrumLevels>

        // define x and y value members for each series
        this._Chart.Series[0].XValueMember = "Index";
        this._Chart.Series[1].XValueMember = "Index";
        this._Chart.Series[0].YValueMembers = "Channel0Level";
        this._Chart.Series[1].YValueMembers = "Channel1Level";

        // bind data to chart
        this._Chart.DataBind(); // lasts 1.5 seconds for 8000 points per series

    At each refresh, the dataset completely changes; it is not a scrolling update! With a profiler I have found that the DataBind() call takes about 1.5 seconds. The other calls are negligible. How can I make this faster? Should I use another type than ObservableCollection? An array, probably? Should I use another form of data binding? Is there some tweak for MSChart that I may have missed? Should I use a sparse set of data, having one value per pixel only? Have I simply reached the performance limit of MSChart? Given the type of the application, to keep it "fluent" we should have multiple refreshes per second. Thanks for any hints!
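
    One alternative worth benchmarking is to bypass the DataSource/DataBind path and load each series directly from arrays; a minimal sketch (assumes the SpectrumLevels properties from the question and a using System.Linq directive; "levels" stands in for the collection):

        // DataBindXY replaces a series' points in one call and skips the
        // reflection-based DataSource binding that DataBind() performs.
        double[] xs  = levels.Select(l => (double)l.Index).ToArray();
        double[] ch0 = levels.Select(l => l.Channel0Level).ToArray();
        double[] ch1 = levels.Select(l => l.Channel1Level).ToArray();

        this._Chart.Series[0].Points.DataBindXY(xs, ch0);
        this._Chart.Series[1].Points.DataBindXY(xs, ch1);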

    Read the article

  • Silverlight combobox performance issue

    - by Vinzz
    Hi, I'm facing a performance issue with a crowded ComboBox (5000 items). Rendering of the drop-down list is really slow (as if it was computing all items before showing any). Do you have any trick to make this drop-down display lazily? XAML code:

        <Grid x:Name="LayoutRoot">
          <StackPanel Orientation="Horizontal" Width="200" Height="20">
            <TextBlock>Test Combo</TextBlock>
            <ComboBox x:Name="fooCombo" Margin="5,0,0,0"></ComboBox>
          </StackPanel>
        </Grid>

    Code-behind:

        public MainPage()
        {
            InitializeComponent();
            List<string> li = new List<string>();
            int Max = 5000;
            for (int i = 0; i < Max; ++i)
                li.Add("Item - " + i);
            fooCombo.ItemsSource = li;
        }
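
    The usual direction is UI virtualization, so item containers are created only as they scroll into view; a hedged sketch - swapping the ComboBox's ItemsPanel for a VirtualizingStackPanel (whether the ComboBox actually virtualizes varies by Silverlight version, so this needs verifying):

        <ComboBox x:Name="fooCombo" Margin="5,0,0,0">
          <ComboBox.ItemsPanel>
            <ItemsPanelTemplate>
              <!-- Generates containers lazily instead of all 5000 up front -->
              <VirtualizingStackPanel />
            </ItemsPanelTemplate>
          </ComboBox.ItemsPanel>
        </ComboBox>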

    Read the article

  • Increase performance of PDF rendering on iPhone

    - by burki
    Hi! I have a UITableView, and in every cell a UIImage created from a PDF is displayed. But the performance is very bad. Here's the code I use to generate the UIImage from the PDF. Creating the CGPDFDocumentRef and the UIImageView (in the cellForRowAtIndexPath method):

        ...
        CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), (CFStringRef)formula.icon, NULL, NULL);
        CGPDFDocumentRef documentRef = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL);
        CFRelease(pdfURL);
        UIImageView *imageView = [[UIImageView alloc] initWithImage:[self imageFromPDFWithDocumentRef:documentRef]];
        ...

    Generating the UIImage:

        - (UIImage *)imageFromPDFWithDocumentRef:(CGPDFDocumentRef)documentRef {
            CGPDFPageRef pageRef = CGPDFDocumentGetPage(documentRef, 1);
            CGRect pageRect = CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox);

            UIGraphicsBeginImageContext(pageRect.size);
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextTranslateCTM(context, CGRectGetMinX(pageRect), CGRectGetMaxY(pageRect));
            CGContextScaleCTM(context, 1, -1);
            CGContextTranslateCTM(context, -(pageRect.origin.x), -(pageRect.origin.y));
            CGContextDrawPDFPage(context, pageRef);

            UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return finalImage;
        }

    What can I do to increase the speed and keep memory usage low?
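
    Since cellForRowAtIndexPath: re-renders the PDF every time a cell scrolls into view, caching the rendered UIImage per resource is the first thing to try; a minimal sketch (imageCache is an assumed NSMutableDictionary ivar, not from the original code):

        - (UIImage *)cachedImageForResource:(NSString *)name {
            UIImage *image = [imageCache objectForKey:name];
            if (image == nil) {
                CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(),
                                                          (CFStringRef)name, NULL, NULL);
                CGPDFDocumentRef doc = CGPDFDocumentCreateWithURL(pdfURL);
                CFRelease(pdfURL);
                image = [self imageFromPDFWithDocumentRef:doc];
                // Also fixes the leak in the original code, which never
                // released documentRef.
                CGPDFDocumentRelease(doc);
                [imageCache setObject:image forKey:name];
            }
            return image;
        }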

    Read the article

  • Iterator performance contract (and use on non-collections)

    - by polygenelubricants
    If all that you're doing is a simple one-pass iteration (i.e. only hasNext() and next(), no remove()), are you guaranteed linear time performance and/or amortized constant cost per operation? Is this specified in the Iterator contract anywhere? Are there data structures/Java Collection which cannot be iterated in linear time? java.util.Scanner implements Iterator<String>. A Scanner is hardly a data structure (e.g. remove() makes absolutely no sense). Is this considered a design blunder? Is something like PrimeGenerator implements Iterator<Integer> considered bad design, or is this exactly what Iterator is for? (hasNext() always returns true, next() computes the next number on demand, remove() makes no sense). Similarly, would it have made sense for java.util.Random implements Iterator<Double>? Should a type really implement Iterator if it's effectively only using one-third of its API? (i.e. no remove(), always hasNext())
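
    For concreteness, the kind of generator being discussed looks like this; a minimal sketch (trial division, illustrative only):

        import java.util.Iterator;

        // An infinite generator: hasNext() is always true, next() computes on
        // demand, and remove() cannot be meaningfully supported.
        public class PrimeGenerator implements Iterator<Integer> {
            private int current = 1;

            @Override public boolean hasNext() { return true; }

            @Override public Integer next() {
                do { current++; } while (!isPrime(current));
                return current;
            }

            @Override public void remove() {
                throw new UnsupportedOperationException();
            }

            private static boolean isPrime(int n) {
                for (int i = 2; (long) i * i <= n; i++)
                    if (n % i == 0) return false;
                return n >= 2;
            }
        }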

    Read the article

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:

        pagegroup
          * pagegroupid
          * name
        has 3600 rows

        page
          * pageid
          * pagegroupid
          * data
        references pagegroup; has 10000 rows; can have anything between 1-700 rows per pagegroup; the data column is of type mediumtext and contains 100k-200k bytes of data per row

        userdata
          * userdataid
          * pageid
          * column1
          * column2
          * column9
        references page; has about 300,000 rows; can have about 1-50 rows per page

    The above structure is pretty straightforward; the problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all columns that should be indexed. The time needed to run a query for such a join (userdata inner join page inner join pagegroup) exceeds 3 minutes. This is terribly slow considering the fact that I am not selecting the data column at all. Example of the query that takes too long:

        SELECT userdata.column1, pagegroup.name
        FROM userdata
        INNER JOIN page USING( pageid )
        INNER JOIN pagegroup USING( pagegroupid )

    Please help by explaining why it takes so long and what I can do to make it faster.

    Edit #1 EXPLAIN returns the following gibberish:

        id  select_type  table      type    possible_keys        key      key_len  ref                         rows    Extra
        1   SIMPLE       userdata   ALL     pageid                                                             372420
        1   SIMPLE       page       eq_ref  PRIMARY,pagegroupid  PRIMARY  4        topsecret.userdata.pageid   1
        1   SIMPLE       pagegroup  eq_ref  PRIMARY              PRIMARY  4        topsecret.page.pagegroupid  1

    Edit #2

        SELECT u.field2, p.pageid
        FROM userdata u
        INNER JOIN page p ON u.pageid = p.pageid;
        /* 0.07 sec execution, 6.05 sec fetch */

        id  select_type  table  type    possible_keys  key      key_len  ref                 rows    Extra
        1   SIMPLE       u      ALL     pageid                                               372420
        1   SIMPLE       p      eq_ref  PRIMARY        PRIMARY  4        topsecret.u.pageid  1       Using index

        SELECT p.pageid, g.pagegroupid
        FROM page p
        INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid;
        /* 9.37 sec execution, 60.0 sec fetch */

        id  select_type  table  type   possible_keys  key          key_len  ref                       rows  Extra
        1   SIMPLE       g      index  PRIMARY        PRIMARY      4                                  3646  Using index
        1   SIMPLE       p      ref    pagegroupid    pagegroupid  5        topsecret.g.pagegroupid   3     Using where

    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.
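
    In line with that moral, one hedged way to act on it is to split the wide column out and give the page hop a covering index; a sketch (names are illustrative; note MyISAM does not enforce foreign keys):

        -- Move the ~100-200 KB mediumtext out of `page` so joins traverse
        -- narrow rows:
        CREATE TABLE page_data (
            pageid INT NOT NULL PRIMARY KEY,
            data   MEDIUMTEXT
        ) ENGINE=MyISAM;

        INSERT INTO page_data (pageid, data)
        SELECT pageid, data FROM page;

        ALTER TABLE page DROP COLUMN data;

        -- Composite index so the userdata -> page hop can be resolved from
        -- the index alone ("Using index" in EXPLAIN):
        ALTER TABLE page ADD INDEX idx_page_group (pageid, pagegroupid);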

    Read the article

  • Varying performance of MSVC release exe

    - by Andrew
    Hello everyone, I am curious what could be the reason for the highly varying performance of the same executable. Sometimes I run it and it takes 20 seconds, and sometimes 110. The source is compiled with MSVC in Release mode with standard options. The code is here:

        vector<double> Un;
        vector<double> Ucur;
        double *pUn, *pUcur;
        ...
        // time marching
        for (old_time=time-logfreq, time+=dt; time <= end_time; time+=dt)
        {
            for (i=1, j=Un.size()-1, pUn=&Un[1], pUcur=&Ucur[1]; i < j; ++i, ++pUn, ++pUcur)
            {
                *pUcur = (*pUn)*(1.0-0.5*alpha*( *(pUn+1) - *(pUn-1) ));
            }
            Ucur[0] = (Un[0])*(1.0-0.5*alpha*( Un[1] - Un[j] ));
            Ucur[j] = (Un[j])*(1.0-0.5*alpha*( Un[0] - Un[j-1] ));
            Un = Ucur;
        }

    Read the article

  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff in C. The following function is one example - it's an approximation of the second derivative of lgamma:

        double trigamma(double x)
        {
            double p;
            int i;

            x = x + 6;
            p = 1/(x*x);
            p = (((((0.075757575757576*p - 0.033333333333333)*p + 0.0238095238095238)
                  *p - 0.033333333333333)*p + 0.166666666666667)*p + 1)/x + 0.5*p;
            for (i = 0; i < 6; i++) {
                x = x - 1;
                p = 1/(x*x) + p;
            }
            return p;
        }

    I've translated this into more or less idiomatic Haskell as follows:

        trigamma :: Double -> Double
        trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p')
          where
            x' = x + 6
            p  = 1 / x' ^ 2
            p' = p / 2 + c / x'
            c  = foldr1 (\a b -> (a + b * p)) [1, 1/6, -1/30, 1/42, -1/30, 5/66]
            next (x, p) = (x - 1, 1 / x ^ 2 + p)

    The problem is that when I run both through Criterion, my Haskell version is six or seven times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse. I know practically nothing about Haskell performance, and I'm not terribly interested in digging through Core or anything like that, since I can always just call the handful of math-intensive C functions through FFI. But I'm curious whether there's low-hanging fruit that I'm missing - some kind of extension or library or annotation that I could use to speed up this numeric stuff without making it too ugly.
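
    For comparison, one common low-hanging fix is to replace the lazy iterate/tuple pipeline with a strict tail-recursive worker; a hedged sketch following the C version directly (same algorithm, but untested against the original for bit-exactness):

        {-# LANGUAGE BangPatterns #-}

        -- Strict worker loop: no tuples and no lazy list, so GHC can unbox
        -- the Doubles and compile the loop to straight arithmetic.
        trigamma :: Double -> Double
        trigamma x = go (6 :: Int) x' p'
          where
            x' = x + 6
            p  = 1 / (x' * x')
            p' = (((((0.075757575757576*p - 0.033333333333333)*p + 0.0238095238095238)
                   *p - 0.033333333333333)*p + 0.166666666666667)*p + 1) / x' + 0.5*p
            go :: Int -> Double -> Double -> Double
            go 0 _  !acc = acc
            go !n !y !acc = let y' = y - 1 in go (n - 1) y' (1 / (y' * y') + acc)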

    Read the article

  • Performance of Java matrix math libraries?

    - by dfrankow
    We are computing something whose runtime is bound by matrix operations. (Some details below, if interested.) This experience prompted the following question: Do folks have experience with the performance of Java libraries for matrix math (e.g., multiply, inverse, etc.)? For example: JAMA: http://math.nist.gov/javanumerics/jama/ COLT: http://acs.lbl.gov/~hoschek/colt/ Apache commons math: http://commons.apache.org/math/ I searched and found nothing. Details of our speed comparison: We are using Intel FORTRAN (ifort (IFORT) 10.1 20070913). We have reimplemented it in Java (1.6) using Apache commons math 1.2 matrix ops, and it agrees to all of its digits of accuracy. (We have reasons for wanting it in Java.) (Java doubles, Fortran real*8.) Fortran: 6 minutes; Java: 33 minutes, same machine. jvisualvm profiling shows much time spent in RealMatrixImpl.{getEntry,isValidCoordinate} (which appear to be gone in the unreleased Apache commons math 2.0, but 2.0 is no faster). Fortran is using Atlas BLAS routines (dpotrf, etc.). Obviously this could depend on our code in each language, but we believe most of the time is in equivalent matrix operations. In several other computations that do not involve libraries, Java has not been much slower, and sometimes much faster.

    Read the article

  • Performance of map overlay in conjunction with ItemizedOverlay is very poor

    - by oviroa
    I am trying to display one png (drawable) on a map at about 300 points. I am retrieving the coordinates from a Sqlite table, dumping them in a cursor. When I try to display them by parsing through the cursor, it takes forever for the images to be drawn - about 0.5 seconds per image. I find that to be suspiciously slow, so some insight on how I can increase performance would help. Here is the snippet of my code that does the rendering:

        while (!mFlavorsCursor.isAfterLast()) {
            Log.d("cursor", "" + (i++));
            point = new GeoPoint(
                (int)(mFlavorsCursor.getFloat(mFlavorsCursor.getColumnIndex(DataBaseHelper.KEY_LATITUDE))*1000000),
                (int)(mFlavorsCursor.getFloat(mFlavorsCursor.getColumnIndex(DataBaseHelper.KEY_LONGITUDE))*1000000));
            overlayitem = new OverlayItem(point, "", "");
            itemizedoverlay.addOverlay(overlayitem);
            itemizedoverlay.doPopulate();
            mFlavorsCursor.moveToNext();
        }
        mapOverlays.add(itemizedoverlay);

    I tried to isolate all the steps, and it looks like the slow one is this:

        itemizedoverlay.doPopulate();

    This is a public method in my class that extends ItemizedOverlay that runs the private populate() method.
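
    Since populate() re-processes every item added so far, calling it per item makes the loop quadratic; a minimal sketch of the usual fix - populate once after the loop, with the column lookups hoisted out as well:

        // Resolve column indexes once instead of per row.
        final int latCol = mFlavorsCursor.getColumnIndex(DataBaseHelper.KEY_LATITUDE);
        final int lonCol = mFlavorsCursor.getColumnIndex(DataBaseHelper.KEY_LONGITUDE);

        while (!mFlavorsCursor.isAfterLast()) {
            GeoPoint point = new GeoPoint(
                (int) (mFlavorsCursor.getFloat(latCol) * 1e6),
                (int) (mFlavorsCursor.getFloat(lonCol) * 1e6));
            itemizedoverlay.addOverlay(new OverlayItem(point, "", ""));
            mFlavorsCursor.moveToNext();
        }
        itemizedoverlay.doPopulate(); // one pass over all 300 items, not 300 passes
        mapOverlays.add(itemizedoverlay);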

    Read the article
