Search Results

Search found 5602 results on 225 pages for 'tuning red gate'.


  • Performance statistics hooks

    - by tinny
    Let's be honest: most software that developers produce has quite modest performance requirements, e.g. systems serving perhaps hundreds of requests per second, if that. But let's assume for a moment (or even dream) that you were involved in the "next big thing" (whatever that means) and you wanted to put some sort of performance statistics logging in place to help you out when all those users come flying in. How would you approach this requirement? Perhaps you would use some sort of generic framework? Or roll your own solution? What would you log, and how granular would it be? Or would you not bother putting anything in place at all, and instead deal with this when it actually became an issue? It would be really interesting to hear your thoughts on this topic.

    Read the article

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and "normalize" them. For each value passed to the stored procedure, it checks whether the value is already in the table. If it is, it stores the id of that row in a variable. If the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the ids and inserts them into a table which is equivalent to the original de-normalized table, except that this table is fully normalized and consists mainly of foreign keys. My problem with this design is that the stored procedure takes approximately 10ms or so to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance problem lies in the way I'm doing the inserts, i.e.

        INSERT INTO TableA (first_value) VALUES (argument_from_sp)
          ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
        SET @TableAId = LAST_INSERT_ID();

    The "ON DUPLICATE KEY UPDATE" is a bit of a hack, because on a duplicate key I don't want to update anything but rather just get back the id value of the row. If you skip that step, though, the LAST_INSERT_ID() function returns the wrong value when you run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you. (A set-based alternative is sketched after this entry.)

    Read the article
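
    Not part of the original question, but one direction often suggested for this kind of per-row bottleneck is to replace the row-at-a-time stored-procedure call with a set-based load. A minimal sketch, assuming the raw values sit in a staging table (staging_raw, raw_value and NormalizedTable are hypothetical names; TableA is assumed to have a unique key on first_value):

        -- 1. Insert any values not seen before; rows hitting the unique key are ignored.
        INSERT IGNORE INTO TableA (first_value)
        SELECT DISTINCT raw_value
        FROM   staging_raw;

        -- 2. Build the normalized rows in one pass by joining back to pick up the ids.
        INSERT INTO NormalizedTable (first_value_id)
        SELECT a.id
        FROM   staging_raw s
        JOIN   TableA a ON a.first_value = s.raw_value;

    Two set-based statements per lookup column avoid the ~10ms per-call overhead, though whether this fits depends on how the rest of the procedure is structured.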

  • Performance-tune my SQL query by removing OR statements

    - by SmartestVEGA
    I need to performance-tune the following SQL query by removing the "OR" conditions. Please help ... (One common rewrite is sketched after this entry.)

        SELECT a.id, a.fileType, a.uploadTime, a.filename, a.userId, a.CID,
               ui.username, company.name companyName, a.screenName
        FROM   TM_transactionLog a, TM_userInfo ui, TM_company company, TM_airlineCompany ac
        WHERE  ( a.CID = 3049 )
        OR     ( a.CID = company.ID AND ac.SERVICECID = 3049 AND company.SPECIFICCID = ac.ID )
        OR     ( a.USERID = ui.ID AND ui.CID = 3049 );

    Read the article
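
    One common rewrite (not from the original thread) is to expand the OR branches into a UNION, so the optimizer can choose a separate access path for each branch; UNION rather than UNION ALL is used so a row satisfying more than one branch is not returned twice. A sketch using the tables and columns from the question; whether it is actually faster depends on the indexes available:

        SELECT a.id, a.fileType, a.uploadTime, a.filename, a.userId, a.CID,
               ui.username, company.name companyName, a.screenName
        FROM   TM_transactionLog a, TM_userInfo ui, TM_company company, TM_airlineCompany ac
        WHERE  a.CID = 3049
        UNION
        SELECT a.id, a.fileType, a.uploadTime, a.filename, a.userId, a.CID,
               ui.username, company.name companyName, a.screenName
        FROM   TM_transactionLog a, TM_userInfo ui, TM_company company, TM_airlineCompany ac
        WHERE  a.CID = company.ID
        AND    ac.SERVICECID = 3049
        AND    company.SPECIFICCID = ac.ID
        UNION
        SELECT a.id, a.fileType, a.uploadTime, a.filename, a.userId, a.CID,
               ui.username, company.name companyName, a.screenName
        FROM   TM_transactionLog a, TM_userInfo ui, TM_company company, TM_airlineCompany ac
        WHERE  a.USERID = ui.ID
        AND    ui.CID = 3049;

    Each branch keeps the full FROM list so it follows the shape of the original query, including its unconstrained joins; note that UNION also collapses fully identical result rows, which the OR form would keep, so row counts should be verified.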

  • Speeding up jQuery empty() or replaceWith() Functions When Dealing with Large DOM Elements

    - by Levi Hackwith
    Let me start off by apologizing for not giving a code snippet. The project I'm working on is proprietary and I'm afraid I can't show exactly what I'm working on. However, I'll do my best to be descriptive. Here's a breakdown of what goes on in my application: the user clicks a button; the server retrieves a list of images in the form of a data table; each row in the table contains 8 data cells that in turn each contain one hyperlink; each request by the user can contain up to 50 rows (I can change this number if need be); that means the table contains upwards of 800 individual DOM elements. My analysis shows that jQuery("#dataTable").empty() and jQuery("#dataTable").replaceWith(tableCloneObject) take up 97% of my overall processing time and take on average 4-6 seconds to complete. I'm looking for a way to speed up either of the above-mentioned jQuery functions when dealing with large numbers of DOM elements that need to be removed or replaced. I hope my explanation helps.

    Read the article

  • Having all Views in the Shared folder - works but is throwing "caught exceptions". Performance concerns?

    - by Scott
    Hi everyone, I have a simple but heavily used app done in VS2010/MVC2. I didn't like having separate folders for each view/controller, so I have all the views in the Shared folder. It's working fine, but while debugging in VS I noticed that it's throwing IO "caught exceptions", since it seems to look in the [FolderName]/[ViewName] folder before falling back to the Shared folder. Again, the app runs fine, but I'm concerned that all these "caught exceptions" will have a minor performance impact, since they do have a cost in the CLR. Is there any way I can configure the Routing so that it will only look in the Shared folder? Thanks.

    Read the article

  • SQL Server virtual memory usage and performance

    - by user365035
    Hello, I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory reported greatly exceeds the amount of physical memory available. Currently, physical memory is 10 GB (the query reports 10238 MB), whereas virtual memory is reported as significantly more: 8388607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed. (A note on what that column actually measures, plus a follow-up query, appears after this entry.)

        USE [master];
        GO
        SELECT cpu_count,
               hyperthread_ratio,
               physical_memory_in_bytes / 1048576 AS 'mem_MB',
               virtual_memory_in_bytes / 1048576  AS 'virtual_mem_MB',
               max_workers_count,
               os_error_mode,
               os_priority_class
        FROM   sys.dm_os_sys_info;

    Read the article
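
    For what it's worth, virtual_memory_in_bytes in sys.dm_os_sys_info reports the size of the user-mode virtual address space available to the process (on the order of 8 TB on x64), not memory actually consumed, so the 8388607 MB figure on its own is not alarming. On SQL Server 2008 and later, a query along these lines (a sketch, not from the original thread) shows what the process is really using:

        SELECT physical_memory_in_use_kb / 1024          AS physical_in_use_MB,
               virtual_address_space_committed_kb / 1024 AS vas_committed_MB,
               memory_utilization_percentage,
               process_physical_memory_low,
               process_virtual_memory_low
        FROM   sys.dm_os_process_memory;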

  • pipelined function

    - by user289429
    Can someone provide an example of how to use a parallel table function in Oracle PL/SQL? We need to run a massive query for each of 15 years and combine the results. SELECT * FROM TABLE(TableFunction(CURSOR(SELECT * FROM year_table))) ... is effectively what we want. The innermost select gives all the years, and the table function takes each year, runs the massive query and returns a collection. The problem we have is that all the years are being fed to a single invocation of the table function; we would prefer the table function to be called in parallel for each of the years. We tried all sorts of partitioning, by hash and by range, and it didn't help. Also, can we drop the keyword PIPELINED from the function declaration, since we are not performing any transformation and just need the aggregate of the result set? (A sketch of a parallel-enabled pipelined function follows this entry.)

    Read the article
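
    A minimal sketch of a parallel-enabled pipelined table function, assuming SQL-level types along the lines shown in the comments (the type, table and column names are hypothetical). PARTITION ... BY ANY lets Oracle spread the cursor rows across parallel slaves; the driving query also has to run in parallel (e.g. via a PARALLEL hint) for that to happen. Dropping PIPELINED would mean the whole collection is built before any rows are returned, so for streaming results back from parallel slaves PIPELINED is normally kept:

        -- assumed to exist already:
        --   CREATE TYPE year_result_t   AS OBJECT (yr NUMBER, total NUMBER);
        --   CREATE TYPE year_result_tab AS TABLE OF year_result_t;

        CREATE OR REPLACE FUNCTION per_year_report (p_years SYS_REFCURSOR)
          RETURN year_result_tab
          PIPELINED
          PARALLEL_ENABLE (PARTITION p_years BY ANY)
        AS
          l_year NUMBER;
        BEGIN
          LOOP
            FETCH p_years INTO l_year;
            EXIT WHEN p_years%NOTFOUND;
            -- run the heavy per-year query here and pipe each result row back
            FOR r IN (SELECT COUNT(*) AS total FROM some_big_table WHERE year_col = l_year) LOOP
              PIPE ROW (year_result_t(l_year, r.total));
            END LOOP;
          END LOOP;
          RETURN;
        END;
        /

        SELECT *
        FROM   TABLE(per_year_report(CURSOR(SELECT /*+ PARALLEL(y) */ yr FROM year_table y)));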

  • Python: faster way to read fixed-length fields from a file into a dictionary

    - by Martlark
    I have a file of names and addresses as follows (example line) OSCAR ,CANNONS ,8 ,STIEGLITZ CIRCUIT and I want to read it into a dictionary of name and value. Here self.field_list is a list of the name, length and start point of the fixed fields in the file. What ways are there to speed up this method? (Python 2.6)

        def line_to_dictionary(self, file_line, rec_num):
            file_line = file_line.lower()      # Make it all lowercase
            return_rec = {}                    # Return record as a dictionary
            for (field_start, field_length, field_name) in self.field_list:
                field_data = file_line[field_start:field_start + field_length]
                if (self.strip_fields == True):    # Strip off white spaces first
                    field_data = field_data.strip()
                if (field_data != ''):             # Only add non-empty fields to dictionary
                    return_rec[field_name] = field_data
            # Set hidden fields
            # return_rec['_rec_num_'] = rec_num
            return_rec['_dataset_name_'] = self.name
            return return_rec

    Read the article

  • Project Performance Evaluation and Finding Weak Areas

    - by pramodc84
    I'm working on a J2EE web project which has lots of Java, SQL scripts, JS and AJAX stuff. The project has been running fine for 5 years. I have been assigned the work of performance evaluation on the project, as there might be some memory usage issues, delays in the DB fetching logic and other similar weak performance areas. Where should I begin? Any best practices to make the project better?

    Read the article

  • Speed up math code in C# by writing a C dll?

    - by Projectile Fish
    I have a very large nested for loop in which some multiplications and additions are performed on floating point numbers.

        for (int i = 0; i < length1; i++)
        {
            s = GetS(i);
            c = GetC(i);
            for (int j = 0; j < length2; j++)
            {
                double oldU = u[j];
                u[j] = c * oldU + s * omega[i][j];
                omega[i][j] = c * omega[i][j] - s * oldU;
            }
        }

    This loop is taking up the majority of my processing time and is a bottleneck. Would I be likely to see any speed improvements if I rewrite this loop in C and interface to it from C#?

    Read the article

  • MySQL: Is it faster to use inserts and updates instead of insert on duplicate key update?

    - by Nir
    I have a cron job that updates a large number of rows in a database. Some of the rows are new and are therefore inserted, and some are updates of existing ones and are therefore updated. I use INSERT ... ON DUPLICATE KEY UPDATE for the whole data set and get it done in one call. But I actually know which rows are new and which are updated, so I could also do the inserts and updates separately. Will separating the inserts and updates have an advantage in terms of performance? What are the mechanics behind this? Thanks! (The two alternatives are sketched after this entry.)

    Read the article
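
    For illustration only (the table and column names below are made up), the two alternatives being compared look roughly like this; which one wins usually comes down to index maintenance, uniqueness checks and the number of statements sent, so measuring against the real data is the only reliable answer:

        -- one statement handling both cases, as the cron job does now:
        INSERT INTO prices (item_id, price)
        VALUES (1, 9.99), (2, 4.50)
        ON DUPLICATE KEY UPDATE price = VALUES(price);

        -- split into separate statements when the new rows are already known:
        INSERT INTO prices (item_id, price) VALUES (1, 9.99);   -- known-new rows
        UPDATE prices SET price = 4.50 WHERE item_id = 2;       -- known-existing rows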

  • Approximate timings for various operations on a "typical desktop PC" anno 2010

    - by knorv
    In the article "Teach Yourself Programming in Ten Years", Peter Norvig (Director of Research, Google) gives the following approximate timings for various operations on a typical 1GHz PC back in 2001:

        execute single instruction                =         1 nanosec = (1/1,000,000,000) sec
        fetch word from L1 cache memory           =         2 nanosec
        fetch word from main memory               =        10 nanosec
        fetch word from consecutive disk location =       200 nanosec
        fetch word from new disk location (seek)  = 8,000,000 nanosec = 8 millisec

    What would the corresponding timings be for your definition of a typical desktop PC anno 2010?

    Read the article

  • Temporary intermediate table

    - by user289429
    In our project, which generates massive reports from Oracle, we use some permanent tables to hold intermediate results. For example, to generate one report we run a few queries and populate the table; in the final step we join the intermediate table with huge application tables. These intermediate tables are cleared for the next report run. We have a few concerns in the performance area: these intermediate tables are transactional and don't have statistics, so is it a good idea to join them with application tables which are partitioned and have up-to-date statistics? We need the results stored in the intermediate tables to be available across requests from the UI, hence we are not in a position to use Oracle-provided temporary tables. Any thoughts on what could be done would be appreciated. (One option is sketched after this entry.)

    Read the article
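
    One option often mentioned for this situation (not from the original thread) is to gather statistics on the intermediate table right after it is populated, or to let the optimizer sample it at parse time. A sketch, with report_intermediate, big_application_table and the column names as hypothetical placeholders:

        -- gather fresh statistics once the intermediate table is loaded
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'REPORT_INTERMEDIATE');
        END;
        /

        -- or ask the optimizer to sample the table when parsing the final join
        SELECT /*+ DYNAMIC_SAMPLING(i 4) */ i.key_col, t.amount
        FROM   report_intermediate i
        JOIN   big_application_table t ON t.key_col = i.key_col;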

  • How to profile object creation in Java?

    - by gooli
    The system I work with creates a whole lot of objects and garbage collects them all the time, which results in a very steeply jagged graph of heap consumption. I would like to know which objects are being generated so I can tune the code, but I can't figure out a way to dump the heap at the moment garbage collection starts. When I tried to initiate dumpHeap via JConsole manually at random times, I always got results after GC had finished its run, and didn't get any useful data. Any notes on how to track down excessive temporary object creation are welcome.

    Read the article

  • How can I optimize MVC and IIS pipeline to obtain higher speed?

    - by Andy
    Hi, I am doing performance tweaking of a simple app that uses MVC on IIS 7.5. I have a StopWatch starting up in Application_BeginRequest and I take a snapshot at Controller.OnActionExecuting, so I measure the time spent in the entire IIS pipeline: from receipt of the request to the moment execution finally reaches my controller. I get 700 microseconds on my 3GHz quad-core (project compiled Release x64), and I wonder where the bottleneck is, especially after hearing some people say that one can get up to 8000 page loads per second with MVC. How can I optimize MVC and the IIS pipeline to obtain higher speed?

    Read the article

  • What are some good code optimization methods?

    - by esac
    I would like to understand good code optimization methods and methodology. How do I keep from doing premature optimization if I am thinking about performance already? How do I find the bottlenecks in my code? How do I make sure that over time my program does not become any slower? What are some common performance errors to avoid (e.g. I know it is bad in some languages to return from inside the catch portion of a try{} catch{} block)?

    Read the article

  • SQL query: how to translate IN() into a JOIN?

    - by tangens
    I have a lot of SQL queries like this:

        SELECT o.Id, o.attrib1, o.attrib2
        FROM   table1 o
        WHERE  o.Id IN ( SELECT DISTINCT Id
                         FROM table1, table2, table3
                         WHERE ... )

    These queries have to run on different database engines (MySQL, Oracle, DB2, MS-SQL, Hypersonic), so I can only use common SQL syntax. I have read that with MySQL the IN statement isn't optimized and is really slow, so I want to switch this into a JOIN. I tried:

        SELECT o.Id, o.attrib1, o.attrib2
        FROM   table1 o, table2, table3
        WHERE  ...

    But this does not take the DISTINCT keyword into account. Question: how do I get rid of the duplicate rows using the JOIN approach? (One option is sketched after this entry.)

    Read the article
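
    A sketch of one way to keep the duplicate elimination while moving away from IN: leave the original subquery, with its DISTINCT, inside a derived table and join to it. Derived tables are part of common SQL syntax, so this should stay portable across the engines listed; the "..." below stands for the original, unshown conditions:

        SELECT o.Id, o.attrib1, o.attrib2
        FROM   table1 o
               INNER JOIN ( SELECT DISTINCT Id
                            FROM   table1, table2, table3
                            WHERE  ... ) ids
                       ON ids.Id = o.Id;

    Whether this outperforms IN depends on the engine; some optimizers already rewrite the two forms into the same plan.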

  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

        user                (id, name)
        profile_stat        (id, name)
        profile_stat_value  (id, name)
        user_profile        (user_id, profile_stat_id, profile_stat_value_id)

    My question is: how do I write a query to find all users that have a given profile_stat_id and profile_stat_value_id for many stats at once? I've tried doing an inner self join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible? (A common pattern is sketched after this entry.)

    Read the article
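
    For reference (not from the original thread), the usual set-based pattern for "users having all of these stat/value pairs" is a single pass over user_profile with GROUP BY and HAVING, backed by a composite index on (profile_stat_id, profile_stat_value_id, user_id). A sketch with made-up id values, assuming each (user, stat, value) combination appears at most once:

        SELECT up.user_id
        FROM   user_profile up
        WHERE  (up.profile_stat_id = 1 AND up.profile_stat_value_id = 10)
            OR (up.profile_stat_id = 2 AND up.profile_stat_value_id = 20)
            OR (up.profile_stat_id = 3 AND up.profile_stat_value_id = 30)
        GROUP  BY up.user_id
        HAVING COUNT(*) = 3;   -- the user must match all three pairs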

  • java increase xmx dynamically at runtime

    - by Tomer
    Hi, I have a JVM server on my machine. Now I want to have two app servers of mine sitting on the same machine, but I want the standby one to have a really low amount of memory allocated via -Xmx because it is passive; once the main (active) server goes down, I want to allocate more memory to my passive server, which is already up, without restarting it. (I can't have both of them with a large -Xmx; note that they would consume memory at startup and I can't allow the possibility of OutOfMemory.) So I want the passive one to start with a low -Xmx, and once the active one goes down I want my passive one to receive a much larger -Xmx. Is there a way for me to achieve that? Thanks

    Read the article

  • How to decide between using PLINQ and LINQ at runtime?

    - by Hamish Grubijan
    Or decide between a parallel and a sequential operation in general. It is hard to know without testing whether a parallel or a sequential implementation is best, due to overhead. Obviously it will take some time to train "the decider" on which method to use. I would say that this method cannot be perfect, so it is probabilistic in nature. The x, y, z do influence "the decider". I think a very naive implementation would be to give both a 1/2 chance at the beginning and then start favoring them according to past performance. This disregards x, y, z, however. I suspect that this question would be better answered by academics than practitioners. Anyhow, please share your heuristic, your experience if any, and your tips on this. Sample code:

        public interface IComputer
        {
            decimal Compute(decimal x, decimal y, decimal z);
        }

        public class SequentialComputer : IComputer
        {
            public decimal Compute( ... // sequential implementation
        }

        public class ParallelComputer : IComputer
        {
            public decimal Compute( ... // parallel implementation
        }

        public class HybridComputer : IComputer
        {
            private SequentialComputer sc;
            private ParallelComputer pc;
            private TheDecider td;  // Helps to decide between the two.

            public HybridComputer()
            {
                sc = new SequentialComputer();
                pc = new ParallelComputer();
                td = new TheDecider();
            }

            public decimal Compute(decimal x, decimal y, decimal z)
            {
                decimal result;
                decimal time;
                if (td.PickOneOfTwo() == 0)
                {
                    // Time this and save the elapsed time into 'time'.
                    result = sc.Compute(...);
                }
                else
                {
                    // Time this and save the elapsed time into 'time'.
                    result = pc.Compute();
                }
                td.Train(time);
                return result;
            }
        }

    Read the article

  • How to set up a load/stress test for a web site?

    - by Ryan
    I've been tasked with stress/load testing our company web site out of the blue and know nothing about doing so. Every search I make on Google for "how to load test a web site" just comes back with various companies and software that physically do the load testing. For now I'm more interested in how to actually go about setting up a load test: what I should take into account prior to load testing, which pages within my site I should be testing load against, and what things I'm going to want to monitor when doing the test. Our web site is a multi-tier system complete with a separate database server (IIS 7 web server, SQL Server 2000 db). I imagine I'd want to monitor both the web server and the database server during testing; however, when setting up scenarios to load test the web server I'd have to use pages that query the database to see any load on the database server at the same time. Are web servers and database servers generally tested simultaneously, or are they done as separate tests? As you can see I'm pretty clueless about the whole operation, so any insight as to how to go about this would be very helpful. FYI I have been tinkering with Pylot and was able to create and run a scenario against our site, but I'm not sure what I should be looking for in the results, or whether the scenario I created is even a scenario worth measuring for our site. Thanks in advance.

    Read the article

  • Image size guidelines

    - by user502014
    Hi all, this may well be a bit of an open-ended question. The site I am working on needs to be optimised for performance. One of the key areas is optimising the file sizes of the images used on the site. Unfortunately these images are being created by employees who do not have the required knowledge for creating images for the web, and it is my job to produce a set of guidelines for them to use. I was wondering whether there is any resource/guideline/literature regarding typical image file sizes for images of different dimensions, as I would like to include something like this to help them ensure their images are being created properly. Any info would be greatly appreciated. Thanks in advance.

    Read the article
