Search Results

Search found 13341 results on 534 pages for '1 obiee performance tuning'.


  • Performance optimization strategies of last resort?

    - by jerryjvl
    There are plenty of performance questions on this site already, but it occurs to me that almost all are very problem-specific and fairly narrow. And almost all repeat the advice to avoid premature optimization. Let's assume: the code is already working correctly; the algorithms chosen are already optimal for the circumstances of the problem; the code has been measured and the offending routines isolated; and all attempts to optimize will also be measured, to ensure they do not make matters worse. What I am looking for here is strategies and tricks to squeeze out the last few percent in a critical algorithm when there is nothing else left to do but whatever it takes. Ideally, try to make answers language-agnostic, and indicate any downsides to the suggested strategies where applicable. I'll add a reply with my own initial suggestions, and look forward to whatever else the SO community can think of.

    Read the article

  • Testing perceived performance

    - by Josh Kelley
    I recently got a shiny new development workstation. The only disadvantage of this is that the desktop apps I'm developing now run very, very fast, and so I fear that parts of the code that would be annoyingly slow on end users' machines will go unnoticed during my testing. Is there a good way to slow down an application for testing? I've tried searching around, but all of the results I've been able to find seem pretty fiddly to set up (e.g., manually setting up a high-priority CPU-bound task on the same CPU core as the target app, or running a background process that rapidly interrupts and resumes the target app), and I don't know if the end result is actually a good representation of running on a slower computer (with its slower CPU, slower RAM, slower disk I/O...). I don't think that this is a job for a profiler; I'm interested in the user's perception of end-to-end performance rather than in where the time goes for particular operations.
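
    One crude way to approximate a slower machine, sketched below in C#: background threads that burn a fixed fraction of every core so the app under test has to compete for CPU. This is a sketch, not a calibrated tool; the duty-cycle numbers are illustrative, and it models only a slower CPU, not slower RAM or disk I/O.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CpuTax
    {
        // Burn roughly `load` (0..1) of one core: spin for a slice of each
        // 100 ms period, then sleep for the remainder.
        static void Burn(double load)
        {
            Stopwatch sw = new Stopwatch();
            while (true)
            {
                sw.Reset();
                sw.Start();
                while (sw.ElapsedMilliseconds < (int)(load * 100)) { /* spin */ }
                Thread.Sleep((int)((1 - load) * 100));
            }
        }

        static void Main()
        {
            double load = 0.7; // steal ~70% of each core
            for (int i = 0; i < Environment.ProcessorCount; i++)
            {
                Thread t = new Thread(delegate() { Burn(load); });
                t.IsBackground = true;
                t.Start();
            }
            Console.WriteLine("Burning CPU; run the app under test now. Press Enter to stop.");
            Console.ReadLine();
        }
    }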

    Read the article

  • Making DiveIntoPython3 work in IE8 (fixing a JavaScript performance issue)

    - by srid
    I am trying to fix the performance problem with Dive Into Python 3 on IE8. Visit the page in IE8 and, after a few moments, you will see a "Stop executing this script?" popup. I traced the culprit down to this line in j/dip3.js: find("tr:nth-child(" + (i+1) + ") td:nth-child(2)"); If I disable it (and return from the function immediately), the "Stop executing this script?" dialog does not appear, and the page loads fairly fast. I am no JavaScript/jQuery expert, so I ask you fellow developers: why does this query make IE slow, and is there a fix for it? Edit: you can download the entire webpage (980K) for local viewing/editing.

    Read the article

  • MD5 hash performance with big files when verifying copied files in a shared folder

    - by alhambraeidos
    Hi all, my Windows Forms .NET app on Win XP copies PDF files to a shared network folder on a Win 2003 server. The admin user on the Win 2003 box has found some corrupt PDF files in that shared folder, so I want to check whether a file was copied to the shared folder correctly. Andre Krijen tells me the best way is to compute an MD5 hash of the original file and, once the file is copied, compare the hash of the copy against the original. My PDF files are big: does computing an MD5 hash over a big file cause any performance problem? What if I only compare the lengths of the files (original and copy) without generating an MD5 hash? Thanks in advance.
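
    A minimal C# sketch of the hash check (class and method names are mine): MD5.ComputeHash reads the stream in chunks, so even a large PDF is never held in memory at once. The length comparison is a cheap first filter, but it cannot catch same-size corruption; the hash can.

    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class CopyCheck
    {
        // Streams the file through MD5; memory use stays constant
        // regardless of file size.
        public static string Md5OfFile(string path)
        {
            using (MD5 md5 = MD5.Create())
            using (FileStream stream = File.OpenRead(path))
            {
                byte[] hash = md5.ComputeHash(stream);
                return BitConverter.ToString(hash).Replace("-", "");
            }
        }

        public static bool CopyLooksGood(string original, string copy)
        {
            // Cheap length check first; hash only when the lengths match.
            if (new FileInfo(original).Length != new FileInfo(copy).Length)
                return false;
            return Md5OfFile(original) == Md5OfFile(copy);
        }
    }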

    Read the article

  • Timestamp as int field, query performance

    - by Kirzilla
    Hello, I'm storing a timestamp as an int field, and on a large table it takes too long to fetch the rows inserted on a given date because I'm using the MySQL function FROM_UNIXTIME:
    SELECT * FROM table
    WHERE FROM_UNIXTIME(timestamp_field, '%Y-%m-%d') = '2010-04-04'
    Is there any way to speed up this query? Maybe I should query for rows using timestamp_field >= x AND timestamp_field < y? Thank you. I've just tried this query...
    SELECT * FROM table
    WHERE timestamp_field >= UNIX_TIMESTAMP('2010-04-14 00:00:00')
      AND timestamp_field <= UNIX_TIMESTAMP('2010-04-14 23:59:59')
    ...but there is no performance gain. :(
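
    A hedged note on why the range form may not have helped: the predicate is only sargable if there is an index on timestamp_field to seek into; without one, both forms scan every row. A sketch of the half-open range query from client code (C# with MySQL Connector/NET, used for consistency with the other examples here; the table name and connection string are placeholders):

    using MySql.Data.MySqlClient; // MySQL Connector/NET

    // Assumes an index exists, e.g.: CREATE INDEX idx_ts ON `table` (timestamp_field);
    string connectionString = "server=...;database=...;uid=...;pwd=..."; // placeholder
    string sql =
        "SELECT * FROM `table` " +
        "WHERE timestamp_field >= UNIX_TIMESTAMP(@dayStart) " +
        "  AND timestamp_field <  UNIX_TIMESTAMP(@nextDay)"; // half-open: no 23:59:59 edge cases

    using (MySqlConnection conn = new MySqlConnection(connectionString))
    using (MySqlCommand cmd = new MySqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@dayStart", "2010-04-14 00:00:00");
        cmd.Parameters.AddWithValue("@nextDay", "2010-04-15 00:00:00");
        conn.Open();
        using (MySqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { /* consume rows */ }
        }
    }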

    Read the article

  • Bad performance from too many caught exceptions?

    - by Christopher Klein
    I have a large project in C# (.NET 2.0) which contains very large chunks of code generated by SubSonic. Is a try-catch like this causing a horrible performance hit?
    for (int x = 0; x < identifiers.Count; x++) {
        decimal target = 0;
        try {
            target = Convert.ToDecimal(assets[x + identifiers.Count * 2]); // target %
        } catch {
            targetEmpty = true;
        }
    }
    What is happening is that if the field being passed in cannot be converted to a decimal, a flag is set, which is then used further along in the record to determine something else. The problem is that the application is literally throwing tens of thousands of exceptions as I parse through 30k records. The whole process takes almost 10 minutes, my overall task is to improve that time, and this seemed like low-hanging fruit if it's a bad design idea. Any thoughts would be helpful (be kind, it's been a miserable day). Thanks, Chris
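
    A sketch of the usual fix, assuming the asset values are strings: decimal.TryParse reports failure through its return value instead of raising an exception per bad record, so the flag logic stays the same without the cost of thrown exceptions.

    for (int x = 0; x < identifiers.Count; x++)
    {
        decimal target;
        // No exception for non-numeric input; TryParse just returns false.
        if (!decimal.TryParse(assets[x + identifiers.Count * 2], out target))
        {
            target = 0;
            targetEmpty = true;
        }
        // ... use target ...
    }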

    Read the article

  • (My)SQL performance: updating one field vs many unnecessary fields

    - by changokun
    I'm processing a form that has a lot of fields for a user who is editing an existing record. The user may have changed only one field, yet I would typically run an update query that sets the values of all the fields, even though most of them haven't changed. I could do some sort of tracking to see which fields actually changed, and update only those few. Is there a performance difference between updating all the fields in a record and updating only the one that changed? Are there other reasons to prefer either method? The shotgun method is pretty easy...
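
    For a sense of what the tracking alternative costs in code, a sketch that diffs the submitted values against the loaded record and emits SET clauses only for the differences (C# used for consistency with the other examples here; names are illustrative, and column names must come from a known whitelist, never from user input):

    using System.Collections.Generic;
    using System.Text;

    // submitted and original both map column name -> value for one record.
    static string BuildUpdate(IDictionary<string, object> submitted,
                              IDictionary<string, object> original)
    {
        StringBuilder sql = new StringBuilder("UPDATE records SET ");
        bool any = false;
        foreach (KeyValuePair<string, object> field in submitted)
        {
            if (Equals(field.Value, original[field.Key])) continue; // unchanged: skip
            if (any) sql.Append(", ");
            sql.Append(field.Key).Append(" = @").Append(field.Key);  // one parameter per changed column
            any = true;
        }
        return any ? sql.Append(" WHERE id = @id").ToString() : null; // null: nothing to update
    }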

    Read the article

  • Performance optimization for SQL Server: decrease stored procedure execution time or unload the server?

    - by tim
    We have a web service which provides search over hotels. There is a problem with performance: a single request to the service takes around 5000 ms, and almost all of that time is spent in the database executing stored procedures. During a request our server (mssql2008) consumes ~90% of the processor time. When 2 requests are made in parallel the average time grows to around 7000 ms, and as the number of requests increases, the average response time increases as well. We have 20-30 requests per minute. Which kind of optimization is best in this case, keeping in mind that the goal is a stable response time for the service: 1) try to decrease the stored procedures' execution time, or 2) try to find a way to unload the server? It would be interesting to hear from people who work on booking sites.

    Read the article

  • SQL Server Express performance issue

    - by Developer IT
    Hi folks! I know my question will sound silly and probably nobody will have a perfect answer, but since I am at a complete dead end with the situation it will make me feel better to post it here. So... I have a SQL Server Express database that's 500 MB. It contains 5 tables and maybe 30 stored procedures. The database stores articles and is used for the Developer IT web site. Normally the web pages load quickly, let's say 2 or 3 seconds, BUT the sqlserver process uses 100% of the processor for those 2 or 3 seconds. I tried to find which stored procedure was the problem and could not pin one down; it seems to be every read from the table that contains the articles (there are about 155,000 of them, and 20 or so get added every 15 minutes). I added a few indexes but without luck... Is it because the table is full-text indexed? Should I order by the primary key instead of by date? I have never had any problems with ordering by dates... Should I use dynamic SQL? Should I add the primary key to the URLs of the articles? Should I use multiple indexes for separate columns or one big index? If you want more details or code bits, just ask. Basically, every little hint is much appreciated. Thanks.

    Read the article

  • Performance-intensive string splitting and manipulation in Java

    - by juhanic
    What is the most efficient way to split a string on a very simple separator? Some background: I am porting a function I wrote in C, with a bunch of pointer arithmetic, to Java, and it is incredibly slow (after some optimisation, still 5x slower). Having profiled it, it turns out a lot of that overhead is in String.split. The function in question takes a host name or IP address and makes it generic: 123.123.123.123 -> *.123.123.123, and a.b.c.example.com -> *.example.com. This can be run over several million items on a regular basis, so performance is an issue.
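
    A sketch of the usual remedy: String.split compiles a regex on every call, while manual indexOf/substring scanning allocates nothing but the result. Shown in C# to keep one language across these examples; Java's indexOf, lastIndexOf, and substring map one-to-one. The digits-only test on the last label is an illustrative simplification of "is this an IP?".

    // Generalizes a host the way the question describes, with no regex:
    //   123.123.123.123   -> *.123.123.123  (drop the first octet)
    //   a.b.c.example.com -> *.example.com  (keep the last two labels)
    static string Generalize(string host)
    {
        int lastDot = host.LastIndexOf('.');
        if (lastDot < 0) return host;

        int octet;
        if (int.TryParse(host.Substring(lastDot + 1), out octet))
            return "*" + host.Substring(host.IndexOf('.'));

        int secondLastDot = host.LastIndexOf('.', lastDot - 1);
        return secondLastDot < 0 ? host : "*" + host.Substring(secondLastDot);
    }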

    Read the article

  • Struts/JSP/J2EE performance and memory profiling issues

    - by Berlin Brown
    We are using Struts and having performance issues, and we make heavy use of JSP includes, Tiles, and EL expressions. I am sure this is eating up a lot of memory and processing time. What are some approaches to profiling the JSP pages? What tools could I use? What should I look for when profiling? I have seen the Java servlet code generated from the JSPs and I can see the bottlenecks, but I would rather measure them more accurately. This is under JDK 1.5 and IBM WebSphere 6.1 (RAD 7).

    Read the article

  • jQuery.keypad Performance Issues

    - by John Duff
    I am working on a kiosk touch-screen application using the jQuery.keypad plugin, and I am noticing some major performance issues. If you click a number of buttons in rapid succession, the CPU gets pegged, the button handling doesn't keep up with the clicking, and some button presses even get lost. On my dev machine this isn't as noticeable, but on the kiosk itself, with 1 GB of RAM, it's painful. The demo keypad at http://keith-wood.name/keypad.html#inline (the one with multiple targets, which is also my setup) has the exact same issues. Does anyone have any suggestions on how we might improve this? The kiosk runs in Firefox only, so something specific to that would work. I'm using v1.2.1 of jquery.keypad and just upgraded to v1.4.2 of jquery.

    Read the article

  • Inline JavaScript performance

    - by Geromey
    I know it is better coding practice to avoid inline JavaScript like: <img id="the_image" onclick="do_this(true);return false;"/> I am thinking about switching this kind of thing to bound jQuery click events like: $("#the_image").bind("click",function(){ do_this(true); return false; }); Will I lose any performance if I bind a ton of click events? I am not worried about the time it takes to initially bind the events, but about the response time between the click and the handler running. I bet that if there is a difference it is negligible, but I will have a ton of functions bound. I'm wondering if browsers treat the onclick attribute the same way as a bound event. Thanks

    Read the article

  • Finding the loading performance of a website

    - by pandora
    How do I measure site performance? There are tools like YSlow and Google's Speed Tracer that show the speed of a website. I have built a PHP LMS project with Zend Framework, and everything is live. When a user posts content for a subject, which may be around 200K in size, the submission to the server is too slow, and sometimes the server goes down. I logged in to the server (PuTTY) and found that something was hogging resources: it was using the full memory on the server. When I cleared the resources, the site loaded fine. The site is on a dedicated server with 3 more domains and 4 GB of RAM, and because of this LMS website all the websites go down. I need to check what is wrong with my website. How do I start?

    Read the article

  • Performance problem loading a DataSet inside a virtual machine

    - by ShaneH
    Host configuration: HP EliteBook 8530w, 4 GB RAM, Win7 Ultimate 64-bit RC, SQL Server 2005 64-bit Developer Edition. Virtual machine: Windows Virtual PC, 1 GB RAM allocated, Integration Services installed, Windows XP 64-bit, up-to-date service packs and .NET Framework through 3.5 SP1, sharing the gigabit network adapter of the host. I have a simple .NET console application which loads a DataSet of approximately 37K rows. Running the application on the host executes in approximately 4 seconds; running inside the virtual machine takes 729 seconds. The application grows to about 65 MB when the DataSet has finished loading; no calculated columns or event handlers are attached. [edit] I changed the virtual machine to use a loopback adapter to communicate with the host, and performance is now on par with running on hardware. Any ideas why going over the network adapter would take almost 200x longer? TraceRt shows that the connection is only one hop. Thanks, Shane Holder

    Read the article

  • Using "CASE" in Where clause to choose various column harm the performance

    - by zivgabo
    I have a query that needs to be dynamic in some of its columns: I get a parameter and, according to its value, decide which column to filter on in my WHERE clause. I've implemented this with a CASE expression:
    (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) >= DATEADD(mi, -@TZOffsetInMins, @sTime)
    AND (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) < DATEADD(mi, -@TZOffsetInMins, @fTime)
    If @isArrivalTime = 1 then the ArrivalTime column is chosen, otherwise the PickedupTime column. I have a clustered index on ArrivalTime and a nonclustered index on PickedupTime. I've noticed that when I use this query (with @isArrivalTime = 1), performance is a lot worse than when I use ArrivalTime directly. Maybe the query optimizer can't use or choose the indexes properly in this form? I compared the execution plans and noticed that with the CASE, 32% of the time goes to an index scan, but without the CASE (just using ArrivalTime), only 3% goes to that scan. Does anyone know the reason for this?
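
    A sketch of the common workaround, under the assumption (from the question) that only two fixed columns are possible: pick the column outside the predicate, so each branch is a plain range on one indexed column that the optimizer can seek into. Shown as the query text an application might build; table and variable names are illustrative. The same effect is available inside a stored procedure with an IF on @isArrivalTime and one query per branch.

    // The column name comes from a fixed two-way choice, never from user
    // input, so building the string here does not open an injection hole.
    string column = isArrivalTime ? "ArrivalTime" : "PickedupTime";
    string sql =
        "SELECT * FROM SomeRidesTable " +                       // hypothetical table name
        "WHERE " + column + " >= DATEADD(mi, -@TZOffsetInMins, @sTime) " +
        "  AND " + column + " <  DATEADD(mi, -@TZOffsetInMins, @fTime)";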

    Read the article

  • Is there any performance overhead in using RaiseEvent in .NET?

    - by Sachin
    Is there any performance overhead in using RaiseEvent in .NET? I have code similar to the following:
    Dim _startTick As Integer = Environment.TickCount
    ' Do some task
    Dim duration As Integer = Environment.TickCount - _startTick
    Logger.Debug("Time taken : {0}", duration)
    RaiseEvent Datareceived()
    The above code logs Time taken: 1200, Time taken: 1400; but if I remove the RaiseEvent it logs Time taken: 110, Time taken: 121. I am surprised, because the RaiseEvent happens after the duration is logged; how can it affect the measured time? I am working on the Compact Framework. Update: the event handler contained a MsgBox. When I removed the message box, the logged time dropped to 110, 121, etc., i.e. less than 500 milliseconds. If I put the MsgBox back in the event handler it shows 1200, 1400, etc., i.e. more than a second. Even more surprising, since the event is raised after the logging part.
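
    Raising an event is only a delegate invocation; the measurable cost normally lives in whatever handlers run (here, the MsgBox). A minimal C# sketch (C# used for consistency with the other examples; names are illustrative) that times the raise and its handlers separately from the work being measured:

    using System;

    class Receiver
    {
        public event EventHandler DataReceived;

        public void DoWork()
        {
            int workStart = Environment.TickCount;
            // ... do some task ...
            Console.WriteLine("work took {0} ms", Environment.TickCount - workStart);

            int raiseStart = Environment.TickCount;
            EventHandler handler = DataReceived; // local copy guards against unsubscribe races
            if (handler != null)
                handler(this, EventArgs.Empty);  // every subscribed handler runs synchronously here
            Console.WriteLine("raise + handlers took {0} ms", Environment.TickCount - raiseStart);
        }
    }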

    Read the article

  • Best performance approach to history mechanism?

    - by Royi Namir
    We are going to create a history mechanism for changes in our DB (DART) via triggers. We have 600 tables. For each record that changes, the trigger will insert the old version into XXX. Regarding the XXX: option 1: clone each table in the DART DB, so each table gets a "sister table", e.g. Table1 gets Table1_History. Problems: we would have 1,200 tables, and a programmer could make mistakes by working on the wrong tables... Option 2: create a new DB (DART_2005) and keep the history tables there. Option 3: use a linked server that stores the DB containing the history tables. Questions: 1) Which option gives the best performance (I guess not 3, but is it 1 or 2, or the same)? 2) Does option 2 behave like a linked server (in queries we will need to select from both DBs...)? 3) What is the best-practice approach?

    Read the article

  • Performance characteristics of pthreads vs ucontext

    - by Robert Mason
    I'm trying to port a library that uses ucontext over to a platform which supports pthreads but not ucontext. The code is pretty well written, so it should be relatively easy to replace all the calls to the ucontext API with calls to pthread routines. However, does this introduce a significant amount of additional overhead, or is it a satisfactory replacement? I'm not sure how ucontext maps to operating system threads, and the purpose of this facility is to make coroutine spawning fairly cheap and easy. So, the question is: does replacing ucontext calls with pthread calls significantly change the performance characteristics of a library?

    Read the article

  • MongoDB: embedding performance question

    - by Alex
    I just started learning MongoDB, and I really like the idea of embedding collections instead of referencing them. MongoDB's documentation recommends using embedding where performance is needed. I'm thinking about a simple forum model: every board category has several boards, every board has several topics, and every topic has several messages, all of them embedded. After some time, the board-category document will be huge, way more than the 2MB limit. Does this mean that there's a flaw in this design?

    Read the article

  • django url tag performance

    - by zxygentoo
    I was trying to integrate django-voting into my project following the RedditStyleVoting instructions. In my urls.py, I did something like this:
    url(r'^sections/(?P<object_id>\d+)/(?P<direction>up|down|clear)vote/?$',
        vote_on_object,
        dict(model=Section,
             template_object_name='section',
             template_name='script/section_confirm_vote.html',
             allow_xmlhttprequest=True),
        name="section_vote")
    Then, in my template:
    {% vote_by_user user on section as vote %} {% score_for_object section as score %} {% vote_by_user user on section as vote %} {% score_for_object section as score %} {{ score.score|default:0 }}
    It takes over 1.3s to load the page, but by hardcoding it like this:
    {% vote_by_user user on section as vote %} {% score_for_object section as score %} {{ score.score|default:0 }}
    I got 50ms. Just by avoiding the url tag resolution I got a 20+ times performance improvement. Did I do something wrong? If not, what's the best practice here: should we do things the right way or the fast way?

    Read the article

  • Display another field in the referenced table for multiple columns with performance issues in mind

    - by israkir
    I have an edge table like this:
    -------------------------------
    | id | arg1 | relation | arg2 |
    -------------------------------
    | 1  | 1    | 3        | 4    |
    -------------------------------
    | 2  | 2    | 6        | 5    |
    -------------------------------
    where arg1, relation and arg2 reference the ids of objects in another object table:
    --------------------
    | id | object_name |
    --------------------
    | 1  | book        |
    --------------------
    | 2  | pen         |
    --------------------
    | 3  | on          |
    --------------------
    | 4  | table       |
    --------------------
    | 5  | bag         |
    --------------------
    | 6  | in          |
    --------------------
    What I want, keeping performance in mind (a very big table, more than 50 million entries), is to display the object_name for each edge column rather than the id, like this:
    ---------------------------
    | arg1 | relation | arg2  |
    ---------------------------
    | book | on       | table |
    ---------------------------
    | pen  | in       | bag   |
    ---------------------------
    What is the best SELECT query to do this? Also, I am open to suggestions for optimizing the query: adding more indexes on the tables, etc... EDIT, based on the comments below: 1) @Craig Ringer: PostgreSQL version is 8.4.13, and the only index is id on both tables. 2) @andrefsp: edge is almost 2x bigger than object.
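
    A sketch of the three-way join the question calls for, written as the query text a client would send (a C# string, for consistency with the other examples here; only the SQL matters). Each id column joins to object independently; with object.id as the primary key, each of the three lookups can use that index, or the planner may choose a hash join over the whole table, which is often the right call when reading all 50 million edges.

    // Join object once per id column; the aliases keep the three lookups apart.
    const string sql = @"
        SELECT o1.object_name AS arg1,
               o2.object_name AS relation,
               o3.object_name AS arg2
        FROM   edge e
        JOIN   object o1 ON o1.id = e.arg1
        JOIN   object o2 ON o2.id = e.relation
        JOIN   object o3 ON o3.id = e.arg2;";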

    Read the article

  • Tool to monitor IE performance running JavaScript

    - by StefanE
    Hi, the company I work for is one of the largest betting companies in Europe, and the website has thousands of lines of JavaScript on every page. Lately, Internet Explorer versions earlier than 9 run painfully slowly, and I want to be able to monitor which parts of a page load (including scripts) are slow. I know that IE is slower in general and has DOM API issues, etc. What I want to accomplish is a way to quickly identify slow parts and see if we can replace that code with IE-specific code that renders with higher performance. Cheers, Stefan

    Read the article

  • .NET Performance: Deep Recursion vs Queue

    - by JeffN825
    I'm writing a component that needs to walk large object graphs, sometimes 20-30 levels deep. What is the most performant way of walking the graph: A) enqueueing "steps" so as to avoid deep recursion, or B) a DFS (depth-first search), which may step many levels deep and have a "deep" stack trace at times? I guess the question I'm asking is: is there a performance hit in .NET for doing a DFS that causes a "deep" stack trace? If so, what is the hit? And would I be better off with a BFS by means of queueing up the steps that would have been handled recursively in a DFS? Sorry if I'm being unclear. Thanks.
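
    A sketch of option A with an explicit stack, using a hypothetical Node type: the traversal order stays depth-first but the call stack never grows, and swapping Stack<Node> for Queue<Node> turns the same loop into a BFS. Either way the work is O(nodes + edges); the explicit container mostly trades call-frame overhead for pushes and pops. (For what it's worth, 20-30 recursion levels is well within the default 1 MB stack.)

    using System;
    using System.Collections.Generic;

    class Node
    {
        public List<Node> Children = new List<Node>(); // hypothetical shape
    }

    static class GraphWalker
    {
        // Assumes an acyclic graph; for general graphs, track visited
        // nodes in a HashSet<Node> to avoid revisiting.
        public static void Walk(Node root, Action<Node> visit)
        {
            Stack<Node> pending = new Stack<Node>(); // Queue<Node> here gives BFS instead
            pending.Push(root);
            while (pending.Count > 0)
            {
                Node current = pending.Pop();
                visit(current);
                foreach (Node child in current.Children)
                    pending.Push(child);
            }
        }
    }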

    Read the article

  • C# dynamic form components (performance problem)

    - by Svisstack
    Hello, I have a performance problem with my code under Windows Forms. I have a form whose layout depends on constructor data, so the layout must be generated in OnLoad or in the constructor. The generation is simple: a base FlowLayoutPanel contains other FlowLayoutPanels, each holding a Label and a TextBox with a data binding. The problem is that this is VERY SLOW, up to 20 seconds, and I am creating fewer than 100 controls. From a Performance Session I know that about 70% of the time is spent in these functions: System.Windows.Forms.Control.ControlCollection.Add(class System.Windows.Forms.Control) and System.Windows.Forms.ControlBindingsCollection.Add(class System.Windows.Forms.Binding). What can I do about this? Can anyone help me with this problem? How do I solve the dynamic form layout problem?
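
    A sketch of the standard mitigations, with hypothetical BuildRow/fieldSpecs names: suspend layout while building, and parent the controls with one AddRange call so the panel performs a single layout pass instead of one per Add. This addresses the ControlCollection.Add share of the profile; the ControlBindingsCollection.Add cost is separate.

    flowPanel.SuspendLayout();
    try
    {
        List<Control> rows = new List<Control>();
        foreach (FieldSpec spec in fieldSpecs)        // hypothetical source of the ~100 fields
            rows.Add(BuildRow(spec));                 // hypothetical: builds the Label + bound TextBox panel
        flowPanel.Controls.AddRange(rows.ToArray());  // one layout pass instead of one per Add
    }
    finally
    {
        flowPanel.ResumeLayout(true);                 // perform the single deferred layout
    }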

    Read the article
