Search Results

Search found 13757 results on 551 pages for 'performance diagnostics'.


  • Performance-intensive string splitting and manipulation in Java

    - by juhanic
    What is the most efficient way to split a string by a very simple separator? Some background: I am porting a function I wrote in C with a bunch of pointer arithmetic to Java, and it is incredibly slow (after some optimisation it is still 5x slower). Having profiled it, it turns out a lot of that overhead is in String.split. The function in question takes a host name or IP address and makes it generic:
    123.123.123.123 -> *.123.123.123
    a.b.c.example.com -> *.example.com
    This can be run over several million items on a regular basis, so performance is an issue.
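
    A common way to cut this overhead is to skip the regex machinery behind String.split and scan for the separator directly with indexOf/lastIndexOf. A minimal sketch of the idea, assuming the masking rule implied by the two examples above (not the asker's original C logic):

        // Sketch only. Assumed rule: IPs mask the first octet, host names
        // keep only the last two labels.
        static String genericize(String s) {
            if (s.isEmpty()) return s;
            if (Character.isDigit(s.charAt(0))) {      // crude "is it an IP?" test
                int dot = s.indexOf('.');
                return dot < 0 ? s : "*" + s.substring(dot);
            }
            int last = s.lastIndexOf('.');
            int secondLast = last < 0 ? -1 : s.lastIndexOf('.', last - 1);
            return secondLast < 0 ? s : "*" + s.substring(secondLast);
        }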

    Read the article

  • jQuery.keypad Performance Issues

    - by John Duff
    I am working on a kiosk touch-screen application using the jQuery.keypad plugin and noticing some major performance issues. If you click a number of buttons in rapid succession, the CPU gets pegged, the buttons don't keep up with the clicking, and some button presses even get lost. On my dev machine this isn't as noticeable, but on the kiosk itself, with 1 GB of RAM, it's painful. Trying the demo keypad at http://keith-wood.name/keypad.html#inline, the one with multiple targets (which is the case with mine) has the exact same issues. Does anyone have any suggestions on how we might be able to improve this? The kiosk runs in Firefox only, so something specific to that browser would work. I'm using v1.2.1 of jquery.keypad and just upgraded to v1.4.2 of jQuery.
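
    One general mitigation, not specific to jquery.keypad, is to make each press as cheap as possible and batch the DOM work, so a burst of clicks updates the target field only once. A rough sketch of the pattern (#display stands in for the keypad's target field):

        var pending = [];
        var flushScheduled = false;

        function onKeyPress(ch) {
            pending.push(ch);                // cheap: just record the press
            if (!flushScheduled) {
                flushScheduled = true;
                setTimeout(function () {     // flush once, after the burst
                    $("#display").val($("#display").val() + pending.join(""));
                    pending = [];
                    flushScheduled = false;
                }, 0);
            }
        }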

    Read the article

  • Inline JavaScript performance

    - by Geromey
    I know it is better coding practice to avoid inline JavaScript like:
        <img id="the_image" onclick="do_this(true);return false;"/>
    I am thinking about switching this kind of stuff for bound jQuery click events like:
        $("#the_image").bind("click", function () {
            do_this(true);
            return false;
        });
    Will I lose any performance if I bind a ton of click events? I am not worried about the time it takes to initially bind the events, but the response times between clicking and it happening. I bet if there is a difference it is negligible, but I will have a ton of functions bound. I'm wondering if browsers treat the onclick attribute the same way as a bound event. Thanks
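
    If the concern is the sheer number of bound handlers, event delegation is the usual alternative: one handler on a shared ancestor serves every image. A sketch using .delegate(), which exists in the jQuery 1.4.2 the asker mentions (#container and the img.clickable selector are assumptions for illustration):

        // One delegated handler instead of one bound handler per element.
        $("#container").delegate("img.clickable", "click", function () {
            do_this(true);
            return false;
        });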

    Read the article

  • Finding the loading performance of a website

    - by pandora
    How do I measure site performance? There are tools like YSlow and Google's Speed Tracer that show the speed of a website. I have built a PHP project, an LMS on the Zend Framework, and everything is live. When a user posts content for a subject (the post may be around 200 KB in size), submitting it to the server is very slow, and sometimes the server goes down. I logged in to the server (via PuTTY) and found that something was occupying a lot of resources; it was using the full memory on the server. When I cleared it, the site loaded well. The site is on a dedicated server with 3 more domains and 4 GB of RAM, and because of this LMS website all the websites go down. I need to check what is wrong in my website. How do I start?

    Read the article

  • Performance problem loading a DataSet inside a virtual machine

    - by ShaneH
    Host configuration: HP EliteBook 8530w, 4 GB RAM, Win7 Ultimate 64-bit RC, SQL Server 2005 64-bit Developer Edition. Virtual machine: Windows Virtual PC, 1 GB RAM allocated, Integration Services installed, Windows XP 64-bit, up-to-date service packs and .NET Framework through 3.5 SP1, sharing the gigabit network adapter of the host. I have a simple .NET console application which loads a DataSet of approximately 37K rows. Running the application on the host executes in approximately 4 seconds; running inside the virtual machine takes 729 seconds. The size of the application grows to about 65 MB when the DataSet is finished loading; no calculated columns or event handlers are attached. [edit] I changed the virtual machine to use a loopback adapter to communicate with the host, and performance is now on par with running on hardware. Any ideas as to why going over the network adapter would be almost 200x slower? TraceRt shows that the connection is only one hop. Thanks, Shane Holder

    Read the article

  • Using "CASE" in Where clause to choose various column harm the performance

    - by zivgabo
    I have a query which needs to be dynamic on some of the columns, meaning I get a parameter and, according to its value, I decide which column to filter on in my WHERE clause. I've implemented this using a CASE expression:
        (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) >= DATEADD(mi, -@TZOffsetInMins, @sTime)
        AND (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) < DATEADD(mi, -@TZOffsetInMins, @fTime)
    If @isArrivalTime = 1 then choose the ArrivalTime column, else choose the PickedupTime column. I have a clustered index on ArrivalTime and a nonclustered index on PickedupTime. I've noticed that when I'm using this query (with @isArrivalTime = 1), performance is a lot worse compared to only using ArrivalTime. Maybe the query optimizer can't use or choose the index properly this way? I compared the execution plans and noticed that when I'm using the CASE, 32% of the time is spent on an index scan, but when I didn't use the CASE (just used ArrivalTime) only 3% was spent on that index scan. Does anyone know the reason for this?
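
    A CASE over a column is generally not sargable, so the optimizer falls back to scanning. A common workaround, sketched below with an invented table name (dbo.Trips; columns and parameters are from the question), is to split the predicate into two branches so each plain column comparison can seek on its own index:

        IF @isArrivalTime = 1
            SELECT *
            FROM dbo.Trips
            WHERE ArrivalTime >= DATEADD(mi, -@TZOffsetInMins, @sTime)
              AND ArrivalTime <  DATEADD(mi, -@TZOffsetInMins, @fTime);
        ELSE
            SELECT *
            FROM dbo.Trips
            WHERE PickedupTime >= DATEADD(mi, -@TZOffsetInMins, @sTime)
              AND PickedupTime <  DATEADD(mi, -@TZOffsetInMins, @fTime);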

    Read the article

  • Is there any performance overhead in using RaiseEvent in .NET?

    - by Sachin
    Is there any performance overhead in using RaiseEvent in .NET? I have code similar to the following:
        Dim _startTick As Integer = Environment.TickCount
        ' Do some task
        Dim duration As Integer = Environment.TickCount - _startTick
        Logger.Debug("Time taken : {0}", duration)
        RaiseEvent Datareceived()
    The above code logs
        Time taken : 1200
        Time taken : 1400
    but if I remove the RaiseEvent it logs
        Time taken : 110
        Time taken : 121
    I am surprised, because the event is raised after the time taken is logged. How can it affect the total time taken? I am working on the Compact Framework. Update: In the event handler I had a MsgBox. When I removed the message box it now shows the time taken as 110, 121, etc., i.e. less than 500 milliseconds. If I put the MsgBox back in the event handler it shows 1200, 1400, etc., i.e. more than a second. Even more surprised now (the event is raised after the logging part).
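
    For context, RaiseEvent runs every attached handler synchronously on the raising thread, so a blocking call such as a MsgBox in a handler stalls the raiser until it is dismissed. A minimal sketch of that behaviour (names invented; Thread.Sleep stands in for the blocking MsgBox):

        Module Demo
            Public Event DataReceived()

            Sub Main()
                AddHandler DataReceived, AddressOf OnData
                Dim start As Integer = Environment.TickCount
                RaiseEvent DataReceived()  ' returns only after OnData finishes
                Console.WriteLine("RaiseEvent took {0} ms", Environment.TickCount - start)
            End Sub

            Sub OnData()
                System.Threading.Thread.Sleep(1000) ' stand-in for a blocking MsgBox
            End Sub
        End Module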

    Read the article

  • Best performance approach to history mechanism?

    - by Royi Namir
    We are going to create a history mechanism for our changes in the DB (DART in the picture) via triggers. We have 600 tables. For each record that is changed, the trigger will insert the deleted version into XXX. Regarding XXX:
    Option 1: clone each table in the DART DB, so each table has a "sister table", e.g. Table1 will have Table1_History. Problems: we will have 1200 tables, and a programmer can make mistakes by working on the wrong tables.
    Option 2: make a new DB (DART_2005 in the picture) and put the history tables there.
    Option 3: use a linked server which stores the DB containing the history tables.
    Questions: 1) Which option gives the best performance (I guess 3 does not, but is it 1 or 2, or are they the same)? 2) Does option 2 act like a linked server (in queries we will need to select from both DBs)? 3) What is the best-practice approach?
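
    For reference, the per-record archiving the question describes is typically one trigger per table along these lines (Table1, its columns, and the history table are assumptions for illustration):

        CREATE TRIGGER trg_Table1_History
        ON dbo.Table1
        AFTER UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- "deleted" holds the pre-change version of each affected row.
            INSERT INTO dbo.Table1_History (Id, Col1, Col2, ArchivedAt)
            SELECT Id, Col1, Col2, GETDATE()
            FROM deleted;
        END;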

    Read the article

  • MongoDB: embedding performance question

    - by Alex
    I just started learning MongoDB, and I really like the idea of embedding collections instead of referencing them. MongoDB's documentation recommends embedding if performance is needed. I was thinking about a simple forum model. Let's say every board category has several boards, every board has several topics, and every topic has several messages, and all of these collections are embedded. After some time the size of a board category will be huge, way more than the 2MB limit. Does this mean that there's a flaw in this design?
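
    If a category document would outgrow the per-document size limit, the usual compromise is to embed only the small, bounded levels and reference the unbounded ones. A mongo-shell sketch (collection and field names are invented for illustration):

        // Boards embedded in their category: a small, bounded list.
        db.categories.insert({
            name: "General",
            boards: [{ name: "Announcements" }, { name: "Off-topic" }]
        });

        // Topics and messages live in their own collections and reference
        // upward, so an active board never grows one huge document.
        db.topics.insert({ board: "Off-topic", title: "Hello" });
        db.messages.insert({ topic: "Hello", body: "First post", ts: new Date() });
        db.messages.find({ topic: "Hello" });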

    Read the article

  • Performance characteristics of pthreads vs ucontext

    - by Robert Mason
    I'm trying to port a library that uses ucontext over to a platform which supports pthreads but not ucontext. The code is pretty well written, so it should be relatively easy to replace all the calls to the ucontext API with calls to pthread routines. However, does this introduce a significant amount of additional overhead, or is it a satisfactory replacement? I'm not sure how ucontext maps to operating system threads, and the purpose of this facility is to make coroutine spawning fairly cheap and easy. So, the question is: does replacing ucontext calls with pthread calls significantly change the performance characteristics of a library?
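
    For context on what is being swapped out: a ucontext switch stays entirely in user space, while parking and waking pthreads goes through the kernel scheduler. A minimal sketch of the ucontext coroutine pattern (error handling omitted):

        #include <stdio.h>
        #include <ucontext.h>

        static ucontext_t main_ctx, coro_ctx;
        static char coro_stack[64 * 1024];

        static void coroutine(void) {
            puts("in coroutine");
            /* returning resumes uc_link, i.e. main_ctx */
        }

        int main(void) {
            getcontext(&coro_ctx);
            coro_ctx.uc_stack.ss_sp = coro_stack;
            coro_ctx.uc_stack.ss_size = sizeof coro_stack;
            coro_ctx.uc_link = &main_ctx;
            makecontext(&coro_ctx, coroutine, 0);
            swapcontext(&main_ctx, &coro_ctx);   /* user-space switch, no syscall */
            puts("back in main");
            return 0;
        }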

    Read the article

  • django url tag performance

    - by zxygentoo
    I was trying to integrate django-voting into my project following the RedditStyleVoting instructions. In my urls.py, I did something like this:
        url(r'^sections/(?P<object_id>\d+)/(?P<direction>up|down|clear)vote/?$',
            vote_on_object, dict(
                model=Section,
                template_object_name='section',
                template_name='script/section_confirm_vote.html',
                allow_xmlhttprequest=True
            ),
            name="section_vote"),
    Then, in my template:
        {% vote_by_user user on section as vote %}
        {% score_for_object section as score %}
        {% vote_by_user user on section as vote %}
        {% score_for_object section as score %}
        {{ score.score|default:0 }}
    It takes over 1.3s to load the page, but by hard coding it like this:
        {% vote_by_user user on section as vote %}
        {% score_for_object section as score %}
        {{ score.score|default:0 }}
    I got 50ms. Just by avoiding the url tag resolving I got a 20+ times performance improvement. Is there something I did wrong? If not, what's the best practice here: should we do things the right way or the fast way?
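
    If per-row URL resolution is the cost, one middle ground between "the right way" and "the fast way" is to resolve the pattern once and fill in the per-row parts by string substitution. A sketch under that assumption (the helper and the placeholder trick are illustrations, not part of django-voting):

        from django.core.urlresolvers import reverse

        def vote_urls(sections, direction="up"):
            # Resolve once with a dummy id, then substitute per section.
            pattern = reverse("section_vote",
                              kwargs={"object_id": 0, "direction": direction})
            urls = {}
            for s in sections:
                urls[s.id] = pattern.replace("/0/", "/%d/" % s.id)
            return urls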

    Read the article

  • Displaying a field from the referenced table for multiple columns, with performance in mind

    - by israkir
    I have a table of edges like this:
        id | arg1 | relation | arg2
        ---+------+----------+-----
         1 |    1 |        3 |    4
         2 |    2 |        6 |    5
    where arg1, relation and arg2 reference the ids of objects in another object table:
        id | object_name
        ---+------------
         1 | book
         2 | pen
         3 | on
         4 | table
         5 | bag
         6 | in
    What I want to do, considering performance (a very big table of more than 50 million entries), is display the object_name for each edge entry rather than the id, such as:
        arg1 | relation | arg2
        -----+----------+------
        book | on       | table
        pen  | in       | bag
    What is the best SELECT query to do this? Also, I am open to suggestions for optimizing the query: adding more indexes on the tables, etc.
    EDIT: Based on the comments below:
    1) @Craig Ringer: PostgreSQL version 8.4.13, and the only index is id for both tables.
    2) @andrefsp: edge is almost 2x bigger than object.
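
    The straightforward query is a three-way join back to the object table, sketched here with the names from the question:

        SELECT o1.object_name AS arg1,
               o2.object_name AS relation,
               o3.object_name AS arg2
        FROM edge e
        JOIN object o1 ON o1.id = e.arg1
        JOIN object o2 ON o2.id = e.relation
        JOIN object o3 ON o3.id = e.arg2;

    With id as the primary key of object, each join is an index lookup; on a 50-million-row edge table the biggest saving usually comes from narrowing the edge rows with a WHERE clause before the joins run.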

    Read the article

  • Tool to monitor IE performance running JavaScript

    - by StefanE
    Hi, The company I work for is one of the largest betting companies in Europe, and the website has thousands of lines of JavaScript on all our pages. Lately, Internet Explorer versions earlier than 9 have been running painfully slowly, and I want to be able to monitor which parts of a page load (including scripts) are slow. I know that IE is slower in general, has DOM API issues, etc. What I want to accomplish is a way to quickly identify the slow parts and see if we can replace that code with IE-specific code that renders with higher performance. Cheers, Stefan

    Read the article

  • .NET Performance: Deep Recursion vs Queue

    - by JeffN825
    I'm writing a component that needs to walk large object graphs, sometimes 20-30 levels deep. What is the most performant way of walking the graph: A. enqueueing "steps" so as to avoid deep recursion, or B. a DFS (depth-first search), which may step many levels deep and have a "deep" call stack at times? I guess the question I'm asking is: is there a performance hit in .NET for doing a DFS that produces a deep call stack? If so, what is the hit? And would I be better off with a BFS, queueing up steps that would have been handled recursively in a DFS? Sorry if I'm being unclear. Thanks.
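
    For reference, the "enqueueing steps" alternative replaces call-stack frames with heap allocations, so depth stops being a stack concern. A sketch of an iterative DFS over an assumed acyclic Node type:

        using System;
        using System.Collections.Generic;

        class Node
        {
            public List<Node> Children = new List<Node>();
        }

        static class GraphWalker
        {
            // Iterative DFS: depth costs heap space in 'pending', not stack frames.
            public static void Walk(Node root, Action<Node> visit)
            {
                var pending = new Stack<Node>();
                pending.Push(root);
                while (pending.Count > 0)
                {
                    var node = pending.Pop();
                    visit(node);
                    foreach (var child in node.Children)
                        pending.Push(child);
                }
            }
        }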

    Read the article

  • C# dynamic form components (performance problem)

    - by Svisstack
    Hello, I have a problem with the performance of my code under Windows Forms. I have a form whose layout depends on constructor data, so the layout must be generated OnLoad or in the constructor. The generation is simple: a base FlowLayoutPanel contains other FlowLayoutPanels, and each of those has a Label and a TextBox with data binding. The problem is that this is VERY SLOW, up to 20 seconds, and I am drawing fewer than 100 controls. From a performance session I know that 70% of the time is spent in these functions:
        System.Windows.Forms.Control.ControlCollection.Add(class System.Windows.Forms.Control)
        System.Windows.Forms.ControlBindingsCollection.Add(class System.Windows.Forms.Binding)
    What can I do about this? Can anyone help me with this problem? How do I solve the dynamic form layout problem?
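
    A common mitigation is to suspend layout while building and add the controls in bulk, so the panel performs one layout pass instead of one per Add. A sketch with invented names:

        using System.Collections.Generic;
        using System.Windows.Forms;

        void BuildRows(FlowLayoutPanel root, IEnumerable<string> captions)
        {
            root.SuspendLayout();                   // no relayout on every Add
            var rows = new List<Control>();
            foreach (var caption in captions)
            {
                var row = new FlowLayoutPanel { AutoSize = true };
                row.Controls.Add(new Label { Text = caption, AutoSize = true });
                row.Controls.Add(new TextBox());
                rows.Add(row);
            }
            root.Controls.AddRange(rows.ToArray()); // one bulk insert
            root.ResumeLayout(true);                // single layout pass at the end
        }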

    Read the article

  • Function calls in virtual machine killing performance

    - by GenTiradentes
    I wrote a virtual machine in C which has a call table populated by pointers to functions that provide the functionality of the VM's opcodes. When the virtual machine runs, it first interprets a program, creating an array of indexes corresponding to the appropriate function in the call table for each opcode. It then loops through the array, calling each function until it reaches the end. Each instruction is extremely small, typically one line: perfect for inlining. The problem is that the compiler doesn't know when any of the virtual machine's instructions are going to be called, as that's decided at runtime, so it can't inline them. The overhead of function calls and argument passing is killing the performance of my VM. Any ideas on how to get around this?
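
    The classic workaround is to move dispatch into one interpreter loop (a switch, or computed goto where the compiler supports it), so the handlers become inlinable cases and VM state can stay in registers. A minimal switch-dispatch sketch with invented opcodes:

        #include <stddef.h>

        enum { OP_PUSH1, OP_ADD, OP_HALT };

        /* All handlers live in one function, so the compiler can inline them
           and keep sp/stack in registers across instructions. */
        static long run(const unsigned char *code)
        {
            long stack[256];
            size_t sp = 0;
            for (size_t pc = 0; ; pc++) {
                switch (code[pc]) {
                case OP_PUSH1: stack[sp++] = 1; break;
                case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
                case OP_HALT:  return stack[sp - 1];
                }
            }
        }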

    Read the article

  • How to track IIS server performance

    - by Chris Brandsma
    I have a recurring issue where a customer calls up and complains that the web site is too slow. Specifically, if they are inactive for a short period of time and then go back to the site, there is a one- to two-minute delay before the user sees a response (the standard browser is Firefox in this case). I have Perfmon up and running; the CPU utilization is usually below 20% (single proc... don't ask). The database is humming along. And I'm pulling my hair out. So, what metrics/tools do you find useful when evaluating IIS performance?

    Read the article

  • How can I measure file access performance (and volume) of a (Java) application

    - by stmoebius
    Given an application, how can I measure the amount of data read and written by that application, and the time it spends reading from/writing to disk? The specific application is Java-based (JBoss), multi-threaded, and running as a service on Windows 7/2008 x64. The overall goal is to determine whether, and why, file access is a bottleneck in my application; therefore, running the application in a defined and repeatable scenario is a given. File access may be local as well as on network shares. Windows Performance Monitor appears to be too hard to use (unless someone can point me to a helpful explanation). Any ideas?
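
    If instrumenting inside the JVM is acceptable, one application-level option is to route file access through a wrapper stream that tallies bytes and time. A sketch of the measuring idea (not a JBoss-specific integration):

        import java.io.FilterInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.concurrent.atomic.AtomicLong;

        // Counts bytes read and nanoseconds spent inside read() calls.
        class MeteredInputStream extends FilterInputStream {
            static final AtomicLong bytesRead = new AtomicLong();
            static final AtomicLong nanosReading = new AtomicLong();

            MeteredInputStream(InputStream in) { super(in); }

            @Override
            public int read(byte[] b, int off, int len) throws IOException {
                long t0 = System.nanoTime();
                int n = super.read(b, off, len);
                nanosReading.addAndGet(System.nanoTime() - t0);
                if (n > 0) bytesRead.addAndGet(n);
                return n;
            }
        }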

    Read the article

  • Java generic interface performance

    - by halfwarp
    Simple question, but tricky answer I guess. Does using generic interfaces hurt performance? Example:
        public interface Stuff<T> {
            void hello(T var);
        }
    vs
        public interface Stuff {
            void hello(Integer var);  // Integer used just as an example
        }
    My first thought is that it doesn't: generics are just part of the language, and the compiler will optimize it as though there were no generics (at least in this particular case of generic interfaces). Is this correct?
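
    For what it's worth, erasure makes the two forms nearly identical at the bytecode level: the type parameter erases to Object, and javac adds a synthetic bridge method in implementing classes. Roughly:

        interface Stuff<T> {
            void hello(T var);
        }

        class IntStuff implements Stuff<Integer> {
            public void hello(Integer var) { System.out.println(var); }
            // javac also emits a synthetic bridge, roughly:
            //     public void hello(Object var) { hello((Integer) var); }
            // so the per-call cost vs. a non-generic interface is one cast in
            // the bridge, which the JIT can typically inline away.
        }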

    Read the article

  • AS3: Performance question calling an event function with null param

    - by adehaas
    Lately I needed to call a listener function without an actual listener, like so:
        foo(null);
        private function foo(event:Event):void {
            // do something
        }
    So I was wondering if there is a significant difference in performance between this and the following, in which I can avoid the null when calling the function without the listener, but am still able to call it with a listener as well:
        foo();
        private function foo(event:Event = null):void {
        }
    I am not sure whether this is just a question of style, or actually bad practice and I should write two similar functions, one with and one without the event param (which seems cumbersome to me). Looking forward to your opinions, thx.

    Read the article

  • C# performance: creating fonts

    - by user85917
    I have performance issues in this code segment, which I think are caused by the "new Font". Will it be faster if the fonts are static/global?
        if (row.StartsWith(TILD_BEGIN))
        {
            rtbTrace.SelectionColor = Color.Maroon;
            rtbTrace.SelectionFont = new Font(myFont, (float)8.25, FontStyle.Regular);
            if (row.StartsWith(BEGIN))
                rtbTrace.AppendText(Environment.NewLine + row + Environment.NewLine);
            else
                rtbTrace.AppendText(Environment.NewLine + row.Substring(1) + Environment.NewLine);
            continue;
        }
        if (row.StartsWith(EXCL_BEGIN))
        {
            // similar block
        }
        if (row.StartsWith(DLR_BEGIN))
        {
            // similar block
        }
        . . .
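
    Since Font is immutable and wraps a GDI+ handle, a common fix is to create each needed font once and reuse it, rather than allocating one per row. A sketch (the font family is an assumption standing in for myFont):

        using System.Drawing;

        static class TraceFonts
        {
            // Created once and shared; Font is immutable, so reuse is safe.
            public static readonly Font Regular =
                new Font("Courier New", 8.25f, FontStyle.Regular);
        }

        // In the loop, instead of new Font(...):
        //     rtbTrace.SelectionFont = TraceFonts.Regular;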

    Read the article

  • SSRS Performance Mystery

    - by user101654
    I have a stored procedure that returns about 50,000 records in 10 seconds, using at most 2 cores, in SSMS. The SSRS report using the stored procedure was taking 20 minutes and would max out the processor on an 8-core server for the entire time. The report was relatively simple (i.e. no graphs or calculations). The report did not appear to be the issue, as I wrote the 50K rows to a temp table and the report could display that data in a few seconds. I tried many different ideas for testing, altering the stored procedure each time but keeping the original code in a separate window to revert back to. After one ALTER of the stored procedure and then going back to the original code, the report and server utilization started running fast, comparable to the performance of the stored procedure alone. Everything is fine for now, but I would like to get to the bottom of what caused this in case it happens again. Any ideas?

    Read the article

  • C# chart control performance with large amounts of data

    - by user3642115
    I am using a chart control with a range bar graph, basically to make a Gantt chart for lots of people and lots of projects, say about 1,000 total series. The issue that I am running into is that once I have all my data added to the chart (which takes some time, but that is to be expected) and I go to scroll down on my graph, it freezes the whole application and takes a while before it unfreezes and scrolls down. Is there any way to improve the performance of this? I tried adding the graph to a panel, growing the graph size dynamically, and then scrolling down from the panel, but that caused a whole plethora of other issues. Any tips for speeding this up? I don't think it is my code, as it has already finished running when this issue happens. Thanks.

    Read the article

  • OpenGL performance on rendering "virtual gallery" (textures)

    - by maticus
    I have a considerable number (120-240) of 640x480 images that will be displayed as textured flat surfaces (4-vertex polygons) in a 3D environment. About 30-50% of them will be visible in a given frame, and it is possible for them to cross over. Nothing else will be present in the environment. The question is: will a modern GPU, or one a few years old (let's say a Radeon 9550), cope with that, and what frame rate can I expect? I aim for 20 FPS, but 30-40 would be nice. Would changing the resolution to 320x240 make success more likely? I do not have any previous experience with performance issues of 3D graphics on modern GPUs, and unfortunately I must make a design choice. I don't want to waste time on doing something that couldn't have worked :-)

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble
    This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.
    Why Columnstore?
    If we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed, and they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document a compelling reason to upgrade to SQL Server 2014.
    App: MSIT SONAR Aggregations
    At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore: report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table-partitioning mechanism, a best-practices scenario for columnstore.
    The Win
    Compared to performance using classic indexing, which resulted in the expected query plan selection including partition elimination, the SQL Server 2012 nonclustered columnstore increased query performance significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.
    Details
    The table provides the raw data & summarizes the performance deltas.
                                      Logical Reads (8K pages)   CPU (ms)   Durn (ms)
        Columnstore                                    160,323     20,360       9,786
        Conventional Table & Indexes                 9,053,423    549,608     193,903
        Δ                                                  x56        x27         x20
    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data; note on that linear display the magnitude of the conventional index numbers vs. columnstore. The "Metrics (Δ)" chart expresses these values as ratios.
    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent features in this series document performance enhancements that are even more significant.
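
    For readers who want to try this, the SQL Server 2012 nonclustered columnstore described here is a single statement; a sketch against an invented fact table:

        -- Table and columns are invented for illustration. Note that in SQL
        -- Server 2012 a nonclustered columnstore makes the table read-only,
        -- which is why the nightly partition-switching refresh suits it.
        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_PerfData
        ON dbo.PerfData (CollectionDate, ServerId, CounterId, CounterValue);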

    Read the article
