Search Results

Search found 5602 results on 225 pages for 'tuning red gate'.

Page 22 of 225

  • EF4 performance tips and tricks

    - by Will
    I've gotten to that point in one of my projects, and haven't found much information out there. So if you've got some pointers for improving performance in the new Entity Framework 4, please let us know!

    Read the article

  • Python performance: iteration and operations on nested lists

    - by J.J.
    Problem: Hey folks. I'm looking for some advice on Python performance. Some background on my problem. Given:

      - A mesh of nodes of size (x, y), each with a value (0...255) starting at 0
      - A list of N input coordinates, each at a specified location within the range (0...x, 0...y)

    Increment the value of the node at each input coordinate, and of the node's neighbors within range Z, up to a maximum of 255. Neighbors beyond the mesh edge are ignored (no wrapping).

    BASE CASE: A mesh of 1024x1024 nodes, with 400 input coordinates and a range Z of 75 nodes. Processing should be O(x*y*Z*N). I expect x, y and Z to remain roughly around the values in the base case, but the number of input coordinates N could increase up to 100,000. My goal is to minimize processing time.

    Current results: I have two current implementations, f1 and f2. Running speed on my 2.26 GHz Intel Core 2 Duo with Python 2.6.1: f1 takes 2.9s, f2 takes 1.8s. f1 is the initial naive implementation with three nested for loops; f2 replaces the inner for loop with a list comprehension. Code is included below for your perusal.

    Question: How can I further reduce the processing time? I'd prefer sub-1.0s for the test parameters. Please keep the recommendations to native Python. I know I can move to a third-party package such as numpy, but I'm trying to avoid any third-party packages. Also, I've generated random input coordinates and simplified the definition of the node value updates to keep our discussion simple. The specifics have to change slightly and are outside the scope of my question. Thanks much!

    f1 is the initial naive implementation: three nested for loops. (2.9s)

        import random

        def f1(x,y,n,z):
            rows = []
            for i in range(x):
                rows.append([0 for i in xrange(y)])
            for i in range(n):
                inputX, inputY = (int(x*random.random()), int(y*random.random()))
                topleft = (inputX - z, inputY - z)
                for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
                    for j in xrange(max(0, topleft[1]), min(topleft[1]+(z*2), y)):
                        if rows[i][j] <= 255:
                            rows[i][j] += 1

    f2 replaces the inner for loop with a list comprehension. (1.8s)

        def f2(x,y,n,z):
            rows = []
            for i in range(x):
                rows.append([0 for i in xrange(y)])
            for i in range(n):
                inputX, inputY = (int(x*random.random()), int(y*random.random()))
                topleft = (inputX - z, inputY - z)
                for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
                    l = max(0, topleft[1])
                    r = min(topleft[1]+(z*2), y)
                    rows[i][l:r] = [j+1 for j in rows[i][l:r] if j < 255]

    Read the article

  • Performance penalty of typecasting and boxing/unboxing types in C# when storing generic values

    - by kitsune
    I have a set-up similar to WPF's DependencyProperty and DependencyObject system. My properties, however, are generic. A BucketProperty has a static GlobalIndex (defined in BucketPropertyBase) which tracks all BucketProperties. A Bucket can have many BucketProperties of any type. A Bucket saves and gets the actual values of these BucketProperties. Now my question is: how should I deal with the storage of these values, and what is the penalty of using a typecast when retrieving them? I currently use an array of BucketEntries that save the property values as simple objects. Is there any better way of saving and returning these values? Beneath is a simplified version:

        public class BucketProperty<T> : BucketPropertyBase
        {
        }

        public class Bucket
        {
            private BucketEntry[] _bucketEntries;

            public void SaveValue<T>(BucketProperty<T> property, T value)
            {
                SaveBucketEntry(property.GlobalIndex, value);
            }

            public T GetValue<T>(BucketProperty<T> property)
            {
                return (T)FindBucketEntry(property.GlobalIndex).Value;
            }
        }

        public class BucketEntry
        {
            private object _value;
            private uint _index;

            public BucketEntry(uint globalIndex, object value) { ... }
        }

    Read the article

  • Avoiding denormal values in C++

    - by Nathan
    Hi, after searching a long time for a performance bug, I read about denormal floating point values. I have an Intel Core 2 Duo and I am compiling with gcc, using "-O2". So what do I do? Can I somehow instruct g++ to avoid denormal values? If not, can I somehow test whether a float is denormal? Thanks! Nathan

    Read the article

  • How to optimize simple linked server select query?

    - by tomaszs
    Hello, I have a table called Table with columns ID (int, primary key, clustered, unique index) and TEXT (varchar(15)) on an MSSQL linked server called LS. The linked server is on the same server computer. When I call:

        SELECT ID, TEXT FROM OPENQUERY(LS, 'SELECT ID, TEXT FROM Table')

    it takes 400 ms. When I call:

        SELECT ID, TEXT FROM LS.dbo.Table

    it takes 200 ms. And when I call the query directly while on the LS server:

        SELECT ID, TEXT FROM dbo.Table

    it takes 100 ms. In many places I've read that OPENQUERY is faster, but in this simple case it does not seem to work. What can I do to make this query faster when I call it from another server, rather than on LS directly?
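    Not part of the original question, but one more variant that is sometimes worth measuring: pushing the statement to the linked server with EXECUTE ... AT, which runs it remotely as an RPC. This is only a sketch; it assumes the linked server is named LS as above and has its "rpc out" option enabled.

        -- Sketch: execute the statement entirely on the linked server LS.
        EXEC ('SELECT ID, TEXT FROM dbo.Table') AT LS;

        -- If needed (and you have permission), enable RPC Out on the linked server definition:
        EXEC sp_serveroption @server = 'LS', @optname = 'rpc out', @optvalue = 'true';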

    Read the article

  • LINQ To objects: Quicker ideas?

    - by SDReyes
    Do you see a better approach to obtain and concatenate item.Number in a single string? Current:

        var numbers = new StringBuilder();

        // group is the result of a previous group by
        var basenumbers = group.Select(item => item.Number);

        basenumbers.Aggregate(numbers, (res, element) => res.AppendFormat("{0:00}", element));

    Read the article

  • Slow Execution of an ASP.NET Web Page

    - by Sweta Jha
    I have a web page which brings back 13K+ records in 20 seconds. There is a menu on the page, clicking on which navigates me to another page which is very lightweight. Displaying the data (13K+ records) took only 20 seconds, whereas navigating away from that page took much longer, more than 2 minutes. Can you tell me why the latter takes so much time? I've stopped the page_load code execution on click of the menu, and I've disabled the viewstate for that page as well.

    Read the article

  • Will client side performance improve if images/scripts/styles on different subdomains?

    - by Andrey
    Hi, I have a domain specifically for static content, so cookies don't travel along with requests to images/scripts/css. Now, I think I've read somewhere that most browsers only open one download thread for each domain/subdomain, so different static content can't be downloaded in parallel if it's on the same domain. Will it make a difference to browsers if I place scripts in script.mycdn.com, styles in css.mycdn.com and images in images.mycdn.com? Will it allow the browser to download images at the same time as scripts and styles? mycdn.com is of course a made-up name :) Thanks! Andrey

    Read the article

  • Why does Joomla debug show 446 queries logged and 446 legacy queries logged?

    - by Darye
    I have been called in to fix the performance of a Joomla site that was already set up. I look at the debug output and it shows the same queries twice, once under "queries logged" and again under "legacy queries logged". My guess is that it is actually running the same queries twice, making for just under 900 queries per page (I hope I am wrong). The Legacy plugin is disabled, so Legacy mode is not on at all. The site uses VirtueMart as well (which, by the way, isn't working properly if the cache in the Global Config is turned on). Besides the fact that I don't think it should be running 446 queries anyway (sometimes even up to 650 per page), has anyone ever experienced this issue, and where would I look to fix it? Thanks

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL2k5: one that holds a large amount of static data (SQL Database 1, "SQLDB1"), which is never updated but frequently inserted into, and one that holds relational data (SQL Database 2, "SQLDB2") related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical. There are places in SQLDB2 where PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. But, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:)

        SELECT t.Id, t.Name, t2.Company
        FROM SQLDB1.table t
        INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId

    This query results in an eager spool that's 84% of the load of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face, to no avail.) Thanks! valkyrie

    Edit: also can't create an indexed view, because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.
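    One thing commonly checked in this situation (not something the original post confirms) is whether the referencing column in SQLDB2 has a supporting index, since the optimizer will sometimes build a spool to compensate for a missing one. A sketch only, assuming the referencing table and column are named as in the example above:

        -- Sketch: give the join a real index on the referencing column in SQLDB2,
        -- so the optimizer does not have to build its own worktable (spool) for it.
        USE SQLDB2;
        CREATE NONCLUSTERED INDEX IX_table_FKId ON dbo.[table] (FKId);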

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course and we discussed the usage scenarios of multiple filegroups and files when in use over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
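    In case it helps to see the mechanics, adding a file to a filegroup is plain DDL; this is only a sketch, and the database name, file name, path and sizes are made up for illustration.

        -- Sketch: add a second data file to the PRIMARY filegroup of a database named MyDb.
        ALTER DATABASE MyDb
        ADD FILE
        (
            NAME = MyDb_data2,
            FILENAME = 'E:\SQLData\MyDb_data2.ndf',
            SIZE = 10GB,
            FILEGROWTH = 1GB
        )
        TO FILEGROUP [PRIMARY];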

    Read the article

  • Strange C++ performance difference?

    - by STingRaySC
    I just stumbled upon a change that seems to have counterintuitive performance ramifications. Can anyone provide a possible explanation for this behavior? Original code:

        for (int i = 0; i < ct; ++i)
        {
            // do some stuff...
            int iFreq = getFreq(i);
            double dFreq = iFreq;

            if (iFreq != 0)
            {
                // do some stuff with iFreq...
                // do some calculations with dFreq...
            }
        }

    While cleaning up this code during a "performance pass," I decided to move the definition of dFreq inside the if block, as it was only used inside the if. There are several calculations involving dFreq, so I didn't eliminate it entirely, as it does save the cost of multiple run-time conversions from int to double. I expected no performance difference, or if any at all, a negligible improvement. However, the performance decreased by nearly 10%. I have measured this many times, and this is indeed the only change I've made. The code snippet shown above executes inside a couple of other loops. I get very consistent timings across runs and can definitely confirm that the change I'm describing decreases performance by ~10%. I would expect performance to increase because the int to double conversion would only occur when iFreq != 0. Changed code:

        for (int i = 0; i < ct; ++i)
        {
            // do some stuff...
            int iFreq = getFreq(i);

            if (iFreq != 0)
            {
                // do some stuff with iFreq...
                double dFreq = iFreq;
                // do some stuff with dFreq...
            }
        }

    Can anyone explain this? I am using VC++ 9.0 with /O2. I just want to understand what I'm not accounting for here.

    Read the article

  • SQL server virtual memory usage and performance

    - by user365035
    Hello, I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory reported greatly exceeds the amount of physical memory available. Currently, physical memory is 10 GB (the query reports 10238 MB), whereas the virtual memory figure is significantly more: 8388607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed.

        USE [master];
        GO
        SELECT cpu_count,
               hyperthread_ratio,
               physical_memory_in_bytes / 1048576 AS 'mem_MB',
               virtual_memory_in_bytes / 1048576 AS 'virtual_mem_MB',
               max_workers_count,
               os_error_mode,
               os_priority_class
        FROM sys.dm_os_sys_info;
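    Worth noting as context (not from the original post): virtual_memory_in_bytes in sys.dm_os_sys_info describes the virtual address space available to the process, which on 64-bit Windows is roughly 8 TB (hence 8388607 MB) regardless of how much memory SQL Server actually consumes. If the real concern is SQL Server taking too much physical memory, the usual knob is max server memory; a sketch, with an example value only:

        -- Sketch: inspect and cap SQL Server's memory target.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)';        -- show the current setting
        EXEC sp_configure 'max server memory (MB)', 8192;  -- example cap: 8 GB
        RECONFIGURE;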

    Read the article

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application that I'm going to talk about in the next few lines: XYZ is a data masking workbench, an Eclipse RCP application. You give it a source table column and a target table column, it applies a transformation (encryption/shuffling/etc.) and copies the row data from the source table to the target table. Now, when I mask n tables at a time, n threads are launched by this app. Here is the issue: I have run into a production issue on the first roll-out of the above-said app. Unfortunately, I don't have any logs to get to the root cause. However, I tried to run this app in the test region and do a stress test. When I collected .hprof files and ran 'em through an analyzer (YourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement were retaining heap. The analysis also tells me that one of my classes is holding a reference to this PreparedStatement object and thereby, n threads have n such objects. T4CPreparedStatement seemed to have character arrays: lastBoundChars and bindChars, each of size char[300000]. So, I researched a bit (google!), obtained ojdbc6.jar and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages the array sizes of lastBoundChars and bindChars. So, my questions here are: Have you ever run into an issue like this? Do you know the significance of lastBoundChars / bindChars? I am new to profiling, so do you think I am not doing it correctly? (I also ran the hprofs through MAT - and this was the main identified issue - so I don't really think I could be wrong?) I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681 Appreciate your suggestions / advice.

    Read the article

  • After writing SQL statements in MySQL, how to measure the speed / performance of them?

    - by Jian Lin
    I saw something from an "execution plan" article: 10 rows fetched in 0.0003s (0.7344s). How come there are two durations shown? And what if I don't have a large data set yet? For example, if I have only 20, 50, or even just 100 records, I can't really measure how two different SQL statements compare in terms of speed in a real-life situation. In other words, does there need to be at least hundreds of thousands of records, or even a million records, to accurately compare the performance of two different SQL statements?
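    One way to get comparable numbers without depending on a particular client's output format (not part of the original article): MySQL's session profiler times each statement you run. A sketch, where the table, column and index names are made up for illustration; SQL_NO_CACHE keeps the query cache from hiding the real cost of repeated runs.

        -- Sketch: compare two candidate statements with SHOW PROFILES (MySQL 5.0.37+).
        SET profiling = 1;

        SELECT SQL_NO_CACHE COUNT(*) FROM orders WHERE customer_id = 42;
        SELECT SQL_NO_CACHE COUNT(*) FROM orders FORCE INDEX (idx_customer) WHERE customer_id = 42;

        SHOW PROFILES;   -- one row per statement above, with its duration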

    Read the article

  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ): SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn FROM (SELECT /*+ FIRST_ROWS(1) */ mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn, ROWNUM AS ora_rn FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt, mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind, mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member, mbr.employment_start_dt AS mbr_employment_start_dt, mbr.employment_term_dt AS mbr_employment_term_dt, mbr.entity_active AS mbr_entity_active, mbr.ethnicity_idn AS mbr_ethnicity_idn, mbr.general_health_status_code AS mbr_general_health_status_code, mbr.hand_dominant_code AS mbr_hand_dominant_code, mbr.hgt_feet AS mbr_hgt_feet, mbr.hgt_inches AS mbr_hgt_inches, mbr.highest_edu_level AS mbr_highest_edu_level, mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id, mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin, mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip, mbr.lmbr_first_name AS mbr_lmbr_first_name, mbr.lmbr_last_name AS mbr_lmbr_last_name, mbr.marital_status_cd AS mbr_marital_status_cd, mbr.mbr_birth_dt AS mbr_mbr_birth_dt, mbr.mbr_death_dt AS mbr_mbr_death_dt, mbr.mbr_expired AS mbr_mbr_expired, mbr.mbr_first_name AS mbr_mbr_first_name, mbr.mbr_gender_cd AS mbr_mbr_gender_cd, mbr.mbr_idn AS mbr_mbr_idn, mbr.mbr_ins_type AS mbr_mbr_ins_type, mbr.mbr_isreadonly AS mbr_mbr_isreadonly, mbr.mbr_last_name AS mbr_mbr_last_name, mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name, mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id, mbr.preferred_am_pm AS mbr_preferred_am_pm, mbr.preferred_time AS mbr_preferred_time, mbr.prv_innetwork AS mbr_prv_innetwork, mbr.rep_addr_idn AS mbr_rep_addr_idn, mbr.rep_name AS 
mbr_rep_name, mbr.rp_mbr_id AS mbr_rp_mbr_id, mbr.same_mbr_ins AS mbr_same_mbr_ins, mbr.special_needs_cd AS mbr_special_needs_cd, mbr.timezone AS mbr_timezone, mbr.upd_dt AS mbr_upd_dt, mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt, mbr.work_status_idn AS mbr_work_status_idn
    FROM mbr
    JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn
    WHERE mbr_identfn.mbr_idn = mbr.mbr_idn
      AND mbr_identfn.identfd_type = :identfd_type_1
      AND mbr_identfn.identfd_number = :identfd_number_1
      AND mbr_identfn.entity_active = :entity_active_1)
    WHERE ROWNUM <= :ROWNUM_1)
    WHERE ora_rn > :ora_rn_1

        call     count       cpu    elapsed       disk      query    current        rows
        ------- ------  -------- ---------- ---------- ---------- ---------- ----------
        Parse     9936      0.46       0.49          0          0          0          0
        Execute   9936      0.60       0.59          0          0          0          0
        Fetch     9936    329.87     404.00          0  136966922          0          0
        ------- ------  -------- ---------- ---------- ---------- ---------- ----------
        total    29808    330.94     405.09          0  136966922          0          0

        Misses in library cache during parse: 0
        Optimizer mode: FIRST_ROWS
        Parsing user id: 36  (JIVA_DEV)

        Rows     Row Source Operation
        -------  ---------------------------------------------------
              0  VIEW  (cr=102 pr=0 pw=0 time=2180 us)
              0   COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us)
              0    NESTED LOOPS  (cr=102 pr=0 pw=0 time=2152 us)
              0     INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us)(object id 341053)
              0     TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us)
              0      INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us)(object id 334044)

        Rows     Execution Plan
        -------  ---------------------------------------------------
              0  SELECT STATEMENT   MODE: HINT: FIRST_ROWS
              0   VIEW
              0    COUNT (STOPKEY)
              0     NESTED LOOPS
              0      INDEX   MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE))
              0      TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE)
              0       INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE))

        ********************************************************************************

    Based on my reading of Oracle's documentation of skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first index of this column is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query?

    EDIT: I should also point out that the query's where clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.

    Read the article

  • Performance profiler for a java application

    - by Nitin Garg
    I need to optimize a Java application. It makes some 3rd-party calls, and I need a good tool to accurately measure the time taken by individual API calls. To give an idea of the complexity: the application takes a data source file containing 10 lakh (1 million) rows, and it takes around one hour to complete the processing. As part of the processing, it makes some 3rd-party calls (including some network calls). I need to identify which calls are taking more time than others, and based on that, find a way to optimize the application. Any suggestions would be appreciated.

    Read the article

  • SQL Server full text search with CONTAINSTABLE is very slow when used in a JOIN

    - by Bob
    Hello, I am using sql 2008 full text search and I am having serious issues with performance depending on how I use Contains or ContainsTable. Here are sample: (table one has about 5000 records and there is a covered index on table1 which has all the fields in the where clause. I tried to simplify the statements so forgive me if there is syntax issues.) Scenario 1: select * from table1 as t1 where t1.field1=90 and t1.field2='something' and Exists(select top 1 * from containstable(table1,*, 'something') as t2 where t2.[key]=t1.id) results: 10 second (very slow) Scenario 2: select * from table1 as t1 join containstable(table1,*, 'something') as t2 on t2.[key] = t1.id where t1.field1=90 and t1.field2='something' results: 10 second (very slow) Scenario 3: Declare @tbl Table(id uniqueidentifier primary key) insert into @tbl select {key] from containstable(table1,*, 'something') select * from table1 as t1 where t1.field1=90 and t1.field2='something' and Exists(select id from @tbl as tbl where id=req1.id) results: fraction of a second (super fast) Bottom line, it seems if I use Containstable in any kind of join or where clause condition of a select statement that also has other conditions, the performance is really bad. In addition if you look at profiler, the number of reads from the database goes to the roof. But if I first do the full text search and put results in a table variable and use that variable everything goes super fast. The number of reads are also much lower. It seems in "bad" scenarios, somehow it gets stuck in a loop which causes it to read many times from teh database but of course I don't understant why. Now the question is first of all whyis that happening? and question two is that how scalable table variables are? what if it results to 10s of thousands of records? is it still going to be fast. Any ideas? Thanks

    Read the article

  • How to scale MySQL with multiple machines?

    - by erotsppa
    I have a web app running LAMP. We recently had an increase in load and are now looking at solutions to scale. Scaling Apache is pretty easy: we are just going to have multiple machines hosting it and round-robin the incoming traffic. However, each instance of Apache will talk to MySQL, and eventually MySQL will be overloaded. How do you scale MySQL across multiple machines in this setup? I have already looked at this, but specifically we need the updates from the DB available immediately, so I don't think replication is a good strategy here? Also, hopefully this can be done with minimal code change. PS: We have around a 1:1 read-write ratio.
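    As a footnote to the replication concern (not something the original question measured): whether "available immediately" really rules replication out depends on the actual lag, which MySQL reports on each replica. A sketch:

        -- Sketch: run on a replica; the Seconds_Behind_Master column shows the current replication lag.
        SHOW SLAVE STATUS;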

    Read the article

  • Loading animation Memory leak

    - by Ayaz Alavi
    Hi, I have written a network class that manages all network calls for my application. There are two methods, showLoadingAnimationView and hideLoadingAnimationView, that show and hide a UIActivityIndicatorView in a view over my current view controller with a faded background. I am getting memory leaks somewhere in these two methods. Here is the code:

        -(void)showLoadingAnimationView
        {
            textmeAppDelegate *textme = (textmeAppDelegate *)[[UIApplication sharedApplication] delegate];
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:YES];

            if(wrapperLoading != nil) {
                [wrapperLoading release];
            }
            wrapperLoading = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];
            wrapperLoading.backgroundColor = [UIColor clearColor];
            wrapperLoading.alpha = 0.8;

            UIView *_loadingBG = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];
            _loadingBG.backgroundColor = [UIColor blackColor];
            _loadingBG.alpha = 0.4;

            circlingWheel = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhiteLarge];
            CGRect wheelFrame = circlingWheel.frame;
            circlingWheel.frame = CGRectMake(((320.0 - wheelFrame.size.width) / 2.0), ((480.0 - wheelFrame.size.height) / 2.0), wheelFrame.size.width, wheelFrame.size.height);

            [wrapperLoading addSubview:_loadingBG];
            [wrapperLoading addSubview:circlingWheel];
            [circlingWheel startAnimating];
            [textme.window addSubview:wrapperLoading];

            [_loadingBG release];
            [circlingWheel release];
        }

        -(void)hideLoadingAnimationView
        {
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:NO];
            wrapperLoading.alpha = 0.0;
            [self.wrapperLoading removeFromSuperview];
            //[NSTimer scheduledTimerWithTimeInterval:0.8 target:wrapperLoading selector:@selector(removeFromSuperview) userInfo:nil repeats:NO];
        }

    Here is how I am calling these two methods:

        [NSThread detachNewThreadSelector:@selector(showLoadingAnimationView) toTarget:self withObject:nil];

    and then somewhere later in the code I am using the following call to hide the animation:

        [self hideLoadingAnimationView];

    I am getting memory leaks when I call the showLoadingAnimationView function. Is anything wrong in the code, or is there a better technique to show a loading animation when we make network calls?

    Read the article

  • Is my understanding of "select distinct" correct?

    - by paxdiablo
    We recently discovered a performance problem with one of our systems and I think I have the fix, but I'm not certain my understanding is correct. In its simplest form, we have a table blah into which we accumulate various values based on a key field. The basic form is:

        recdate   date
        rectime   time
        system    varchar(20)
        count     integer
        accum1    integer
        accum2    integer

    There are a lot more accumulators than that, but they're all of the same form. The primary key is made up of recdate, rectime and system. As values are collected into the table, the count for a given recdate/rectime/system is incremented and the values for that key are added to the accumulators. That means the averages can be obtained by using accumN / count. Now we also have a view over that table, specified as follows:

        create view blah_v (
            recdate, rectime, system, count,
            accum1, accum2
        ) as select distinct
            recdate, rectime, system, count,
            value (case when count > 0 then accum1 / count end, 0),
            value (case when count > 0 then accum2 / count end, 0)
        from blah;

    In other words, the view gives us the average value of the accumulators rather than the sums. It also makes sure we don't get a divide-by-zero in those cases where the count is zero (these records do exist and we are not allowed to remove them, so don't bother telling me they're rubbish - you're preaching to the choir). We've noticed that the time taken by:

        select distinct recdate from XX

    varies greatly depending on whether we use the table or the view. I'm talking about the difference being 1 second for the table and 27 seconds for the view (with 100K rows). We actually tracked it back to the select distinct. What seems to be happening is that the DBMS is actually loading all the rows in and sorting them so as to remove duplicates. That's fair enough; it's what we stupidly told it to do. But I'm pretty sure the fact that the view includes every component of the primary key means that it's impossible to have duplicates anyway. We've validated the problem since, if we create another view without the distinct, it performs at the same speed as the underlying table. I just wanted to confirm my understanding that a select distinct cannot have duplicates if it includes all the primary key components. If that's so, then we can simply change the view appropriately.
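    For reference, a sketch of what "change the view appropriately" would look like: the same view with the distinct dropped. Since the select pulls every primary key column (recdate, rectime, system) from a single table, no two output rows can be identical, so the duplicate-removal sort buys nothing.

        create view blah_v (
            recdate, rectime, system, count,
            accum1, accum2
        ) as select
            recdate, rectime, system, count,
            value (case when count > 0 then accum1 / count end, 0),
            value (case when count > 0 then accum2 / count end, 0)
        from blah;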

    Read the article

  • Set of tools to optimize the overall performance of SQL Server

    - by Dave
    Hi, I know there are things out there to help optimize queries, etc., but is there anything else, something like a full package that can scan your database and highlight all the performance issues, naming conventions, tables not properly normalized, and so on? I know this is the job of a DBA, and if the DBA is good he shouldn't need a tool like that, but sometimes you start a new job, you get put in charge of an existing database and the DB is a mess, so you don't know where to start... Thanks to everyone. Dave
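    Not a full package, but as an illustration of the kind of starting point such a tool would give you: SQL Server's dynamic management views (2005 and later) already rank the most expensive queries since the last restart. A sketch; the column choice is just an example.

        -- Sketch: top 10 cached query plans by cumulative logical reads.
        SELECT TOP 10
            qs.total_logical_reads,
            qs.execution_count,
            st.text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;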

    Read the article

  • MySQL MEDIUMINT vs. INT performance?

    - by aviv
    Hi, I have a simple users table, and I guess the maximum number of users I am going to have is 300,000. Currently I am using:

        CREATE TABLE users (
            id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            ....

    Of course I have many other tables in which users(id) is a FOREIGN KEY. I read that since the id is not going to use the full range of INT, it is better to use MEDIUMINT, and it will give better performance. Is that true? (I am using MySQL on Windows Server 2008.) Thanks.
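    For comparison, a sketch of the MEDIUMINT version being asked about (illustrative only). MEDIUMINT UNSIGNED is 3 bytes per value with a maximum of 16,777,215, which comfortably covers 300,000 users, versus 4 bytes for INT UNSIGNED; note that every table holding a foreign key to users(id) would need the matching column type as well.

        CREATE TABLE users (
            id MEDIUMINT UNSIGNED AUTO_INCREMENT PRIMARY KEY
            -- remaining columns unchanged
        );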

    Read the article
