Search Results

Search found 21107 results on 845 pages for 'size optimization'.

  • Set UIImageView Size?

    - by Tronic
    Hi, I have the following problem: I generate subviews (UIImageViews) inside a UIScrollView. NSArray *bilder = [[NSArray alloc] initWithObjects:@"lol.jpg", @"lol2.jpg", nil]; NSArray *added = [[NSMutableArray alloc] init]; UIImageView *tmpView; for (NSString *name in bilder) { UIImage *image = [UIImage imageNamed:name]; tmpView = [[UIImageView alloc] initWithImage:image]; tmpView.contentMode = UIViewContentModeScaleAspectFit; tmpView.frame = CGRectMake(0, 0, 300, 300); [added addObject:tmpView]; [tmpView release]; } CGSize framy = mainFrame.frame.size; NSUInteger x = 0; for (UIView *view in added) { [mainFrame addSubview:view]; view.frame = CGRectMake(framy.height * x++, 0, framy.height, framy.width); } mainFrame.scrollEnabled = YES; mainFrame.pagingEnabled = YES; mainFrame.contentSize = CGSizeMake(framy.height * [added count], framy.width); mainFrame.backgroundColor = [UIColor blackColor]; I get the image file names out of an array. Now I want to resize the image views to a specific size, but it won't work. Any advice? Thanks + regards

  • Shadow volume shader optimization (GLSL)

    - by Soubok
    I'm wondering if there is a way to optimize this vertex shader. It projects a vertex (in the light direction) to the far plane if the vertex is in shadow. void main(void) { vec3 lightDir = (gl_ModelViewMatrix * gl_Vertex - gl_LightSource[0].position).xyz; // if the vertex is lit if ( dot(lightDir, gl_NormalMatrix * gl_Normal) < 0.01 ) { // don't move it gl_Position = ftransform(); } else { // move it far away, in the light direction vec4 fin = gl_ProjectionMatrix * ( gl_ModelViewMatrix * gl_Vertex + vec4(normalize(lightDir) * 100000.0, 0.0) ); if ( fin.z > fin.w ) // if fin is behind the far plane fin.z = fin.w; // move to the far plane (needed for z-fail algo.) gl_Position = fin; } }

  • MySql query optimization help

    - by rohitgu
    I have few queries and am not able to figure out how to optimize them, QUERY 1 select * from t_twitter_tracking where classified is null and tweetType='ENGLISH' order by id limit 500; QUERY 2 Select count(*) as cnt, DATE_FORMAT(CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30'),'%Y-%m-%d') as dat from t_twitter_tracking wrdTrk where wrdTrk.word like ('dell') and CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30') between '2010-12-12 00:00:00' and '2010-12-26 00:00:00' group by dat; Both these queries run on the same table, CREATE TABLE `t_twitter_tracking` ( `id` BIGINT(20) NOT NULL AUTO_INCREMENT, `word` VARCHAR(200) NOT NULL, `tweetId` BIGINT(100) NOT NULL, `twtText` VARCHAR(800) NULL DEFAULT NULL, `language` TEXT NULL, `links` TEXT NULL, `tweetType` VARCHAR(20) NULL DEFAULT NULL, `source` TEXT NULL, `sourceStripped` TEXT NULL, `isTruncated` VARCHAR(40) NULL DEFAULT NULL, `inReplyToStatusId` BIGINT(30) NULL DEFAULT NULL, `inReplyToUserId` INT(11) NULL DEFAULT NULL, `rtUsrProfilePicUrl` TEXT NULL, `isFavorited` VARCHAR(40) NULL DEFAULT NULL, `inReplyToScreenName` VARCHAR(40) NULL DEFAULT NULL, `latitude` BIGINT(100) NOT NULL, `longitude` BIGINT(100) NOT NULL, `retweetedStatus` VARCHAR(40) NULL DEFAULT NULL, `statusInReplyToStatusId` BIGINT(100) NOT NULL, `statusInReplyToUserId` BIGINT(100) NOT NULL, `statusFavorited` VARCHAR(40) NULL DEFAULT NULL, `statusInReplyToScreenName` TEXT NULL, `screenName` TEXT NULL, `profilePicUrl` TEXT NULL, `twitterId` BIGINT(100) NOT NULL, `name` TEXT NULL, `location` VARCHAR(100) NULL DEFAULT NULL, `bio` TEXT NULL, `url` TEXT NULL COLLATE 'latin1_swedish_ci', `utcOffset` INT(11) NULL DEFAULT NULL, `timeZone` VARCHAR(100) NULL DEFAULT NULL, `frenCnt` BIGINT(20) NULL DEFAULT '0', `createdAt` DATETIME NULL DEFAULT NULL, `createdOnGMT` VARCHAR(40) NULL DEFAULT NULL, `createdOnServerTime` DATETIME NULL DEFAULT NULL, `follCnt` BIGINT(20) NULL DEFAULT '0', `favCnt` BIGINT(20) NULL DEFAULT '0', `totStatusCnt` BIGINT(20) NULL DEFAULT NULL, `usrCrtDate` VARCHAR(200) NULL DEFAULT NULL, `humanSentiment` VARCHAR(30) NULL DEFAULT NULL, `replied` BIT(1) NULL DEFAULT NULL, `replyMsg` TEXT NULL, `classified` INT(32) NULL DEFAULT NULL, `createdOnGMTDate` DATETIME NULL DEFAULT NULL, `locationDetail` TEXT NULL, `geonameid` INT(11) NULL DEFAULT NULL, `country` VARCHAR(255) NULL DEFAULT NULL, `continent` CHAR(2) NULL DEFAULT NULL, `placeLongitude` FLOAT NULL DEFAULT NULL, `placeLatitude` FLOAT NULL DEFAULT NULL, PRIMARY KEY (`id`), INDEX `id` (`id`, `word`), INDEX `createdOnGMT_index` (`createdOnGMT`) USING BTREE, INDEX `word_index` (`word`) USING BTREE, INDEX `location_index` (`location`) USING BTREE, INDEX `classified_index` (`classified`) USING BTREE, INDEX `tweetType_index` (`tweetType`) USING BTREE, INDEX `getunclassified_index` (`classified`, `tweetType`) USING BTREE, INDEX `timeline_index` (`word`, `createdOnGMTDate`, `classified`) USING BTREE, INDEX `createdOnGMTDate_index` (`createdOnGMTDate`) USING BTREE, INDEX `locdetail_index` (`country`, `id`) USING BTREE, FULLTEXT INDEX `twtText_index` (`twtText`) ) COLLATE='utf8_general_ci' ENGINE=MyISAM ROW_FORMAT=DEFAULT AUTO_INCREMENT=12608048; The table has more than 10 million records. How can I optimize it?
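
    One possible direction (a sketch only; the index name is an assumption and the real gain has to be checked with EXPLAIN): give query 1 a composite index that covers both the filter and the ORDER BY, and rewrite query 2 so the indexed createdOnGMTDate column is left untouched and the range boundaries are shifted to UTC instead.

      -- Query 1: matches WHERE (tweetType, classified) and ORDER BY id
      ALTER TABLE t_twitter_tracking
        ADD INDEX idx_type_classified_id (tweetType, classified, id);

      -- Query 2: compare the raw UTC column against UTC boundaries so
      -- createdOnGMTDate_index can be used for the range scan
      SELECT COUNT(*) AS cnt,
             DATE_FORMAT(CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30'),'%Y-%m-%d') AS dat
      FROM t_twitter_tracking wrdTrk
      WHERE wrdTrk.word LIKE 'dell'
        AND wrdTrk.createdOnGMTDate BETWEEN CONVERT_TZ('2010-12-12 00:00:00','+05:30','+00:00')
                                        AND CONVERT_TZ('2010-12-26 00:00:00','+05:30','+00:00')
      GROUP BY dat;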

  • Mysql Sub Select Query Optimization

    - by Matt
    I'm running a query daily to compile stats - but it seems really inefficient. This is the query: SELECT a.id, tstamp, label_id, (SELECT author_id FROM b WHERE b.tid = a.id ORDER BY b.tstamp DESC LIMIT 1) AS author_id FROM a, b WHERE (status = '2' OR status = '3') AND category != 6 AND a.id = b.tid AND (b.type = 'C' OR b.type = 'R') AND a.tstamp1 BETWEEN {$timestamp_start} AND {$timestamp_end} ORDER BY b.tstamp DESC LIMIT 500 This query runs really slowly. Apologies for the crap naming - I've been asked not to reveal the actual table names. The reason there is a sub-select is that the outer select gets one row from table a and one row from table b, but I also need to know the latest author_id from table b, so I run a sub-select to return that. I don't want to run another select inside a PHP loop - that is also inefficient. It works correctly - I just need to find a much faster way of getting this data set.
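
    A common rewrite for this shape is the "latest row per group" pattern: pre-compute the newest b row per tid once in a derived table and join to it, instead of running the correlated sub-select per outer row. A sketch (it assumes status, category and tstamp1 live on table a, and it handles ties on tstamp slightly differently than LIMIT 1 would):

      SELECT a.id, a.tstamp, a.label_id, lastb.author_id
      FROM a
      JOIN b ON b.tid = a.id
      JOIN (SELECT tid, MAX(tstamp) AS max_ts FROM b GROUP BY tid) AS latest
           ON latest.tid = a.id
      JOIN b AS lastb
           ON lastb.tid = latest.tid AND lastb.tstamp = latest.max_ts
      WHERE (a.status = '2' OR a.status = '3')
        AND a.category != 6
        AND (b.type = 'C' OR b.type = 'R')
        AND a.tstamp1 BETWEEN {$timestamp_start} AND {$timestamp_end}
      ORDER BY b.tstamp DESC
      LIMIT 500;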

  • Compiler optimization of repeated accessor calls

    - by apocalypse9
    I've found recently that for some types of financial calculations the following pattern is much easier to follow and test, especially in situations where we may need to get numbers from various stages of the computation. public class nonsensical_calculator { ... double _rate; int _term; int _days; double monthlyRate { get { return _rate / 12; }} public double days { get { return (1 - i); }} double ar { get { return (1+ days) /(monthlyRate * days) double bleh { get { return Math.Pow(ar - days, _term) public double raar { get { return bleh * ar/2 * ar / days; }} .... } Obviously this often results in multiple calls to the same accessor within a given formula. I was curious as to whether or not the compiler is smart enough to optimize away these repeated calls with no intervening change in state, or whether this style causes a noticeable performance hit. Further reading suggestions are always appreciated.

  • Python optimization

    - by Rami Jarrar
    Hi, I do like this: f = open('wl4.txt', 'w') hh = 0 ###################################### for n in range(1,5): for l in range(33,127): if n==1: b = chr(l) + '\n' f.write(b) hh += 1 elif n==2: for s0 in range(33, 127): b = chr(l) + chr(s0) + '\n' f.write(b) hh += 1 elif n==3: for s0 in range(33, 127): for s1 in range(33, 127): b = chr(l) + chr(s0) + chr(s1) + '\n' f.write(b) hh += 1 elif n==4: for s0 in range(33, 127): for s1 in range(33, 127): for s2 in range(33,127): b = chr(l) + chr(s0) + chr(s1) + chr(s2) + '\n' f.write(b) hh += 1 ###################################### print "We Made %d Words." %(hh) ###################################### f.close() So, is there any method to make it faster?

  • Open space seating optimization algorithm

    - by Georgy Bolyuba
    As a result of changes in the company, we have to rearrange our seating plan: one room with 10 desks in it. Some desks are more popular than others for a number of reasons. One solution would be to draw a desk number from a hat. We think there is a better way to do it. We have 10 desks and 10 people. Let's give every person in this contest 50 hypothetical tokens to bid on the desks. There is no limit on how much you bid on one desk; you can put all 50 on it, which would be saying "I want to sit only here, period". You can also say "I do not care" by giving every desk 5 tokens. Important note: nobody knows what other people are doing. Everyone has to decide based only on his/her best interest (sounds familiar?) Now let's say we obtained these hypothetical results: # | Desk# >| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 1 | Alise | 30 | 2 | 2 | 1 | 0 | 0 | 0 | 15 | 0 | 0 | = 50 2 | Bob | 20 | 15 | 0 | 10 | 1 | 1 | 1 | 1 | 1 | 0 | = 50 ... 10 | Zed | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | = 50 Now, what we need to find is the one (or more) configuration(s) that gives us maximum satisfaction (i.e. people get the desks they wanted, taking into account all the bids and maximizing the total for the group. Naturally the assumption is that the more one bid on a desk, the more he/she wants it). Since there are only 10 people, I think we can brute force it by looking at all possible configurations, but I was wondering if there is a better algorithm for solving this kind of problem?

  • Optimization of running total calculation in SQL for multiple values per join condition

    - by Kiril
    I have the following table (test_table): date value --------------- d1 10.0 d1 20.0 d2 60.0 d2 10.0 d2 -20.0 d3 40.0 I calculate the running total as follows. I use the same query twice, because first I need to calculate the values for a specific date, and afterwards I can calculate the running total. Otherwise, joining the two tables where date is not unique, I would get too many results from the join: SELECT t1.date, SUM(t2.value) AS total FROM (SELECT date, SUM(value) AS value FROM test_table GROUP BY date) AS t1 JOIN (SELECT date, SUM(value) AS value FROM test_table GROUP BY date) AS t2 ON t1.date >= t2.date GROUP BY t1.date ORDER BY t1.date This gives me (which is fine): date total ------------- d1 30.0 d2 80.0 d3 120.0 BUT this query isn't very efficient, and I need to change any conditions in two places. In production, test_table is a lot bigger (4 million rows), and the query takes too much time to complete. Question: How can I avoid using the same query twice?
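
    If the server supports window functions (MySQL 8.0+ or MariaDB 10.2+), the per-date sums and the running total can be computed in a single pass, so the derived table only appears once. A sketch:

      SELECT date,
             SUM(SUM(value)) OVER (ORDER BY date) AS total
      FROM test_table
      GROUP BY date
      ORDER BY date;

    The inner SUM(value) is the per-date aggregate; the outer SUM(...) OVER accumulates those per-date sums in date order, so any condition only has to be written once.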

  • MVC code optimization

    - by user276640
    I have this code: var prj = _dataContext.Project.FirstOrDefault(p => p.isPopular == true); if (prj != null) { prj.isPopular = false; _dataContext.SaveChanges(); } prj = Details(id); prj.isPopular = true; _dataContext.SaveChanges(); The idea: I have only one record with the value true in the isPopular field, so I fetch it and set it to false, then I fetch the object by id and set its isPopular to true. I don't like making two calls to SaveChanges. Any ideas?
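
    If the two SaveChanges calls are only there to keep the flag consistent, one alternative is to push the flip into a single SQL statement executed through the context's raw-SQL method. A sketch (the table and column names just mirror the entity names and would have to match the real schema):

      -- Clear the old popular row and set the new one in a single statement
      UPDATE Project
      SET isPopular = CASE WHEN id = @id THEN 1 ELSE 0 END
      WHERE isPopular = 1 OR id = @id;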

  • Python: Memory usage and optimization when modifying lists

    - by xApple
    The problem My concern is the following: I am storing a relativity large dataset in a classical python list and in order to process the data I must iterate over the list several times, perform some operations on the elements, and often pop an item out of the list. It seems that deleting one item out of a Python list costs O(N) since Python has to copy all the items above the element at hand down one place. Furthermore, since the number of items to delete is approximately proportional to the number of elements in the list this results in an O(N^2) algorithm. I am hoping to find a solution that is cost effective (time and memory-wise). I have studied what I could find on the internet and have summarized my different options below. Which one is the best candidate ? Keeping a local index: while processingdata: index = 0 while index < len(somelist): item = somelist[index] dosomestuff(item) if somecondition(item): del somelist[index] else: index += 1 This is the original solution I came up with. Not only is this not very elegant, but I am hoping there is better way to do it that remains time and memory efficient. Walking the list backwards: while processingdata: for i in xrange(len(somelist) - 1, -1, -1): dosomestuff(item) if somecondition(somelist, i): somelist.pop(i) This avoids incrementing an index variable but ultimately has the same cost as the original version. It also breaks the logic of dosomestuff(item) that wishes to process them in the same order as they appear in the original list. Making a new list: while processingdata: for i, item in enumerate(somelist): dosomestuff(item) newlist = [] for item in somelist: if somecondition(item): newlist.append(item) somelist = newlist gc.collect() This is a very naive strategy for eliminating elements from a list and requires lots of memory since an almost full copy of the list must be made. Using list comprehensions: while processingdata: for i, item in enumerate(somelist): dosomestuff(item) somelist[:] = [x for x in somelist if somecondition(x)] This is very elegant but under-the-cover it walks the whole list one more time and must copy most of the elements in it. My intuition is that this operation probably costs more than the original del statement at least memory wise. Keep in mind that somelist can be huge and that any solution that will iterate through it only once per run will probably always win. Using the filter function: while processingdata: for i, item in enumerate(somelist): dosomestuff(item) somelist = filter(lambda x: not subtle_condition(x), somelist) This also creates a new list occupying lots of RAM. Using the itertools' filter function: from itertools import ifilterfalse while processingdata: for item in itertools.ifilterfalse(somecondtion, somelist): dosomestuff(item) This version of the filter call does not create a new list but will not call dosomestuff on every item breaking the logic of the algorithm. I am including this example only for the purpose of creating an exhaustive list. Moving items up the list while walking while processingdata: index = 0 for item in somelist: dosomestuff(item) if not somecondition(item): somelist[index] = item index += 1 del somelist[index:] This is a subtle method that seems cost effective. I think it will move each item (or the pointer to each item ?) exactly once resulting in an O(N) algorithm. Finally, I hope Python will be intelligent enough to resize the list at the end without allocating memory for a new copy of the list. Not sure though. 
Abandoning Python lists: class Doubly_Linked_List: def __init__(self): self.first = None self.last = None self.n = 0 def __len__(self): return self.n def __iter__(self): return DLLIter(self) def iterator(self): return self.__iter__() def append(self, x): x = DLLElement(x) x.next = None if self.last is None: x.prev = None self.last = x self.first = x self.n = 1 else: x.prev = self.last x.prev.next = x self.last = x self.n += 1 class DLLElement: def __init__(self, x): self.next = None self.data = x self.prev = None class DLLIter: etc... This type of object resembles a python list in a limited way. However, deletion of an element is guaranteed O(1). I would not like to go here since this would require massive amounts of code refactoring almost everywhere.

  • MySQL query optimization JOIN

    - by Pierre
    Hi, I need your help optimizing these two MySQL queries; both show up in my slow query log. SELECT a.nom, c.id_apps, c.id_commentaire, c.id_utilisateur, c.note_commentaire, u.nom_utilisateur FROM comments AS c LEFT JOIN apps AS a ON c.id_apps = a.id_apps LEFT JOIN users AS u ON c.id_utilisateur = u.id_utilisateur ORDER BY c.date_commentaire DESC LIMIT 5; There is a MySQL INDEX on c.id_apps, a.id_apps, c.id_utilisateur, u.id_utilisateur and c.date_commentaire. SELECT a.id_apps, a.id_itunes, a.nom, a.prix, a.resume, c.nom_fr_cat, e.nom_edit FROM apps AS a LEFT JOIN cat AS c ON a.categorie = c.id_cat LEFT JOIN edit AS e ON a.editeur = e.id_edit ORDER BY a.id_apps DESC LIMIT 20; There is a MySQL INDEX on a.categorie, c.id_cat, a.editeur, e.id_edit and a.id_apps. Thanks
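
    For the first query, the usual trick when ORDER BY ... LIMIT on a big comments table is slow is a deferred join: pick the 5 newest comment ids straight from the date index first, then join only those rows. A sketch (it assumes id_commentaire is the primary key of comments):

      SELECT a.nom, c.id_apps, c.id_commentaire, c.id_utilisateur, c.note_commentaire, u.nom_utilisateur
      FROM (SELECT id_commentaire
            FROM comments
            ORDER BY date_commentaire DESC
            LIMIT 5) AS last5
      JOIN comments AS c ON c.id_commentaire = last5.id_commentaire
      LEFT JOIN apps  AS a ON c.id_apps = a.id_apps
      LEFT JOIN users AS u ON c.id_utilisateur = u.id_utilisateur
      ORDER BY c.date_commentaire DESC;

    The second query can get the same treatment around its ORDER BY a.id_apps DESC LIMIT 20.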

  • Mysql Server Optimization

    - by Ish Kumar
    Hi Geeks, We are having serious MySQL (InnoDB) performance issues at the moment when we do: (10-20) insertions on TABLE1 (10-20) updates on TABLE2 Note: Both of the above operations happen within a fraction of a second, and this occurs every few (10-15) minutes. All online users (approx. 400-600) run a read operation on a join of TABLE1 & TABLE2 every second. Here is our MySQL configuration info: http://docs.google.com/View?id=dfrswh7c_117fmgcmb44 Issues: Lots of queries wait and expire later (seen in phpMyAdmin / processes). The MySQL server sometimes crashes. Questions Q1: Any suggestions to optimize at the MySQL level? Q2: I am thinking of using persistent connections at the application level; is that the right approach? Info Added Later: Database Engine: InnoDB TABLE1 : 400,000 rows (inserting 8,000 daily) & TABLE2: 8,000 rows 1 second query: SELECT b.id, b.user_id, b.description, b.debit, b.created, b.price, u.username, u.email, u.mobile FROM TABLE1 b, TABLE2 u WHERE b.credit = 0 AND b.user_id = u.id AND b.auction_id = "12345" ORDER BY b.id DESC LIMIT 10; // there are a few more but they are not so critical. Indexing is good; we are using indexes wisely. In the above query all the ids are indexed, TABLE1 has frequent insertions and TABLE2 has frequent updates.
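
    For the 1-second query, a composite index that matches both equality filters and the ORDER BY ... LIMIT usually helps more than separate single-column indexes. A sketch (the index name is made up; confirm with EXPLAIN that it is picked up):

      ALTER TABLE TABLE1
        ADD INDEX idx_auction_credit_id (auction_id, credit, id);

    With equality on auction_id and credit, MySQL can then walk the id part of the index backwards and stop after 10 rows instead of sorting.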

  • matlab: figure size, same axis size

    - by Art
    I have a figure with a fixed size, like this: hFig = figure(1); set(hFig, 'Position', [200 200 500 500]) But the thing is that I want my AXES to have a fixed size (I want them to be square), not (necessarily) the whole figure - see the image attached: the Y axis is a bit longer than the X axis (longer in terms of displayed length, of course... the X and Y axis ranges are set to the same values). How can I adjust this? Thanks!

  • Solver Foundation Optimization - 1D Bin Packing

    - by Val Nolav
    I want to optimize loading marbles into trucks. I do not know if I can use the Solver Foundation classes for that purpose; before I start writing code, I wanted to ask here. 1 - A marble can be any weight between 1 and 24 tons. 2 - A truck can hold a maximum of 24 tons. 3 - A truck can be loaded with as many marble cubes as it can take, up to 24 tons, which means there is no volume limitation. 4 - There can be between 200 and 500 different marbles depending on the time. GOAL - The goal is to load the marbles using the minimum number of truck shipments. How can I do that without writing a lot of if conditions and for loops? Can I use Microsoft Solver Foundation for that purpose? I read the documentation provided by Microsoft; however, I could not find a scenario similar to mine. M1 + M2 + M3 + .... + Mn <= 24 is the constraint for one truck shipment. Let's say there are 200 different marbles and marble weights are floats. Thanks

  • query optimization

    - by Gaurav
    I have a query of the form SELECT uid1,uid2 FROM friend WHERE uid1 IN (SELECT uid2 FROM friend WHERE uid1='.$user_id.') and uid2 IN (SELECT uid2 FROM friend WHERE uid1='.$user_id.') The problem now is that the nested query SELECT uid2 FROM friend WHERE uid1='.$user_id.' returns a very large number of ids(approx. 5000). The table structure of the friend table is uid1(int), uid2(int). This table is used to determine whether two users are linked together as friends. Any workaround? Can I write the query in a different way? Or is there some other way to solve this issue. I'm sure I am not the first person to face such a problem. Any help would be greatly appreciated.
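
    The two IN (...) sub-selects can be expressed as plain self-joins, which lets MySQL drive everything off a composite (uid1, uid2) index instead of materialising the ~5000-id list twice. A sketch (? stands for the $user_id value, and it assumes (uid1, uid2) pairs are unique; add the composite index if it does not exist yet):

      ALTER TABLE friend ADD INDEX idx_uid1_uid2 (uid1, uid2);

      SELECT f.uid1, f.uid2
      FROM friend AS f
      JOIN friend AS a ON a.uid1 = ? AND a.uid2 = f.uid1
      JOIN friend AS b ON b.uid1 = ? AND b.uid2 = f.uid2;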

  • Haskell optimization of a function looking for a bytestring terminator

    - by me2
    Profiling of some code showed that about 65% of the time I was inside the following code. What it does is use the Data.Binary.Get monad to walk through a bytestring looking for the terminator. If it detects 0xff, it checks if the next byte is 0x00. If it is, it drops the 0x00 and continues. If it is not 0x00, then it drops both bytes and the resulting list of bytes is converted to a bytestring and returned. Any obvious ways to optimize this? I can't see it. parseECS = f [] False where f acc ff = do b <- getWord8 if ff then if b == 0x00 then f (0xff:acc) False else return $ L.pack (reverse acc) else if b == 0xff then f acc True else f (b:acc) False

  • Mysql InnoDB performance optimization and indexing

    - by Davide C
    Hello everybody, I have 2 databases and I need to link information between two big tables (more than 3M entries each, continuously growing). The 1st database has a table 'pages' that stores various information about web pages, and includes the URL of each one. The column 'URL' is a varchar(512) and has no index. The 2nd database has a table 'urlHops' defined as: CREATE TABLE urlHops ( dest varchar(512) NOT NULL, src varchar(512) DEFAULT NULL, timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, KEY dest_key (dest), KEY src_key (src) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 Now, I need basically to issue (efficiently) queries like this: select p.id,p.URL from db1.pages p, db2.urlHops u where u.src=p.URL and u.dest=? At first, I thought to add an index on pages(URL). But it's a very long column, and I already issue a lot of INSERTs and UPDATEs on the same table (way more than the number of SELECTs I would do using this index). Other possible solutions I thought are: -adding a column to pages, storing the md5 hash of the URL and indexing it; this way I could do queries using the md5 of the URL, with the advantage of an index on a smaller column. -adding another table that contains only page id and page URL, indexing both columns. But this is maybe a waste of space, having only the advantage of not slowing down the inserts and updates I execute on 'pages'. I don't want to slow down the inserts and updates, but at the same time I would be able to do the queries on the URL efficiently. Any advice? My primary concern is performance; if needed, wasting some disk space is not a problem. Thank you, regards Davide
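
    A sketch of the md5-column idea from the question (the column and index names are made up): keep a fixed-width hash next to the URL, index it, and still compare the full URL to guard against hash collisions.

      ALTER TABLE db1.pages
        ADD COLUMN url_md5 CHAR(32) NOT NULL DEFAULT '',
        ADD INDEX url_md5_idx (url_md5);

      UPDATE db1.pages SET url_md5 = MD5(URL);

      SELECT p.id, p.URL
      FROM db2.urlHops u
      JOIN db1.pages  p ON p.url_md5 = MD5(u.src) AND p.URL = u.src
      WHERE u.dest = ?;

    The application (or a trigger) has to keep url_md5 in sync whenever URL changes; a prefix index such as INDEX (URL(200)) is a lighter alternative if the URLs differ early enough.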

  • Database query optimization

    - by hdx
    Ok my Giant friends once again I seek a little space in your shoulders :P Here is the issue, I have a python script that is fixing some database issues but it is taking way too long, the main update statement is this: cursor.execute("UPDATE jiveuser SET username = '%s' WHERE userid = %d" % (newName,userId)) That is getting called about 9500 times with different newName and userid pairs... Any suggestions on how to speed up the process? Maybe somehow a way where I can do all updates with just one query? Any help will be much appreciated! PS: Postgres is the db being used.
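
    Since this is PostgreSQL, the ~9500 statements can usually be collapsed into a single UPDATE ... FROM a VALUES list built by the script (which also avoids the string interpolation, which breaks on names containing quotes). A rough sketch with made-up sample rows:

      UPDATE jiveuser AS j
      SET username = v.newname
      FROM (VALUES
              ('alice', 42),
              ('bob',   43)   -- ... one row per (newName, userId) pair
           ) AS v(newname, userid)
      WHERE j.userid = v.userid;

    If one giant statement gets unwieldy, a parameterized executemany over batches of pairs is the next best thing.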

  • (x86) Assembler Optimization

    - by Pindatjuh
    I'm building a compiler/assembler/linker in Java for the x86-32 (IA32) processor targeting Windows. High-level concepts of a "language" (in essential a Java API for creating executables) are translated into opcodes, which then are wrapped and outputted to a file. The translation process has several phases, one is the translation between languages: the highest-level code is translated into the medium-level code which is then translated into the lowest-level code (probably more than 3 levels). My problem is the following; if I have higher-level code (X and Y) translated to lower-level code (x, y, U and V), then an example of such a translation is, in pseudo-code: x + U(f) // generated by X + V(f) + y // generated by Y (An easy example) where V is the opposite of U (compare with a stack push as U and a pop as V). This needs to be 'optimized' into: x + y (essentially removing the "useless" code) My idea was to use regular expressions. For the above case, it'll be a regular expression looking like this: x:(U(x)+V(x)):null, meaning for all x find U(x) followed by V(x) and replace by null. Imagine more complex regular expressions, for more complex optimizations. This should work on all levels. What do you suggest? What would be a good approach to optimize in these situations?

  • Execution Plan Optimization when where clause is removed then added back

    - by nmushov
    I have a stored procedure that uses a table valued function which executes in 9 seconds. If I alter the table valued function and remove the where clause, the stored procedure executes in 3 seconds. If I add the where clause back, the query still executes in 3 seconds. I took a look at the execution plans and it appears that after I remove the where clause, the execution plan includes parallelism and the scan count for 2 of my tables drops for 50000 and 65000 down to 5 and 3. After I add the where clause back, the optimized execution plan still runs unless I run DBCC FREEPROCCACHE. Questions 1. Why would SQL Server start using the optimized execution plan for both queries only when I first remove the where clause? Is there a way to force SQL Server to use this execution plan? Also, this is a paramaterized all-in-one query that uses the (Parameter is null or Parameter) in the where clause, which I believe is bad for performance. RETURNS TABLE AS RETURN ( SELECT TOP (@PageNumber * @PageSize) CASE WHEN @SortOrder = 'Expensive' THEN ROW_NUMBER() OVER (ORDER BY SellingPrice DESC) WHEN @SortOrder = 'Inexpensive' THEN ROW_NUMBER() OVER (ORDER BY SellingPrice ASC) WHEN @SortOrder = 'LowMiles' THEN ROW_NUMBER() OVER (ORDER BY Mileage ASC) WHEN @SortOrder = 'HighMiles' THEN ROW_NUMBER() OVER (ORDER BY Mileage DESC) WHEN @SortOrder = 'Closest' THEN ROW_NUMBER() OVER (ORDER BY P1.Distance ASC) WHEN @SortOrder = 'Newest' THEN ROW_NUMBER() OVER (ORDER BY [Year] DESC) WHEN @SortOrder = 'Oldest' THEN ROW_NUMBER() OVER (ORDER BY [Year] ASC) ELSE ROW_NUMBER() OVER (ORDER BY InventoryID ASC) END as rn, P1.InventoryID, P1.SellingPrice, P1.Distance, P1.Mileage, Count(*) OVER () RESULT_COUNT, dimCarStatus.[year] FROM (SELECT InventoryID, SellingPrice, Zip.Distance, Mileage, ColorKey, CarStatusKey, CarKey FROM facInventory JOIN @ZipCodes Zip ON Zip.DealerKey = facInventory.DealerKey) as P1 JOIN dimColor ON dimColor.ColorKey = P1.ColorKey JOIN dimCarStatus ON dimCarStatus.CarStatusKey = P1.CarStatusKey JOIN dimCar ON dimCar.CarKey = P1.CarKey WHERE (@ExteriorColor is NULL OR dimColor.ExteriorColor like @ExteriorColor) AND (@InteriorColor is NULL OR dimColor.InteriorColor like @InteriorColor) AND (@Condition is NULL OR dimCarStatus.Condition like @Condition) AND (@Year is NULL OR dimCarStatus.[Year] like @Year) AND (@Certified is NULL OR dimCarStatus.Certified like @Certified) AND (@Make is NULL OR dimCar.Make like @Make) AND (@ModelCategory is NULL OR dimCar.ModelCategory like @ModelCategory) AND (@Model is NULL OR dimCar.Model like @Model) AND (@Trim is NULL OR dimCar.Trim like @Trim) AND (@BodyType is NULL OR dimCar.BodyType like @BodyType) AND (@VehicleTypeCode is NULL OR dimCar.VehicleTypeCode like @VehicleTypeCode) AND (@MinPrice is NULL OR P1.SellingPrice >= @MinPrice) AND (@MaxPrice is NULL OR P1.SellingPrice < @MaxPrice) AND (@Mileage is NULL OR P1.Mileage < @Mileage) ORDER BY CASE WHEN @SortOrder = 'Expensive' THEN -SellingPrice WHEN @SortOrder = 'Inexpensive' THEN SellingPrice WHEN @SortOrder = 'LowMiles' THEN Mileage WHEN @SortOrder = 'HighMiles' THEN -Mileage WHEN @SortOrder = 'Closest' THEN P1.Distance WHEN @SortOrder = 'Newest' THEN -[YEAR] WHEN @SortOrder = 'Oldest' THEN [YEAR] ELSE InventoryID END )
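
    For the catch-all "(@Param IS NULL OR column LIKE @Param)" pattern, the usual workaround is to ask for a statement-level recompile at the call site, so the plan is built for the actual parameter values on each execution instead of being reused from the cache. Inline table-valued functions are expanded into the calling query, so the hint reaches them. A sketch - the function name here is a placeholder for whatever the stored procedure really calls:

      SELECT *
      FROM dbo.fn_SearchInventory(@PageNumber, @PageSize, @SortOrder, @MinPrice, @MaxPrice /* , ... */) AS inv
      OPTION (RECOMPILE);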

  • Java variables -> replace? RAM optimization

    - by poeschlorn
    Hi guys, I just wanted to know what happens behind the scenes when I declare and initialize a variable and later initialize it again with other values, e.g. an ArrayList or something similar. What happens in RAM when I write e.g. this: ArrayList<String> al = new ArrayList<String>(); ...add values, work with it and so on.... al = new ArrayList<String>(); So is my first ArrayList still held in RAM, or will the second ArrayList be stored in the same place where the first one was before? Or does it just change what the reference "al" points to? If it is not replaced... is there a way to manually free the RAM that was occupied by the first ArrayList (without waiting for the garbage collector)? Would it help to set it to null first? Nice greetings, poeschlorn

  • C# 'is' type check on struct - odd .NET 4.0 x86 optimization behavior

    - by Jacob Stanley
    Since upgrading to VS2010 I'm getting some very strange behavior with the 'is' keyword. The program below (test.cs) outputs True when compiled in debug mode (for x86) and False when compiled with optimizations on (for x86). Compiling all combinations in x64 or AnyCPU gives the expected result, True. All combinations of compiling under .NET 3.5 give the expected result, True. I'm using the batch file below (runtest.bat) to compile and test the code using various combinations of compiler .NET framework. Has anyone else seen these kind of problems under .NET 4.0? Does everyone else see the same behavior as me on their computer when running runtests.bat? #@$@#$?? Is there a fix for this? test.cs using System; public class Program { public static bool IsGuid(object item) { return item is Guid; } public static void Main() { Console.Write(IsGuid(Guid.NewGuid())); } } runtest.bat @echo off rem Usage: rem runtest -- runs with csc.exe x86 .NET 4.0 rem runtest 64 -- runs with csc.exe x64 .NET 4.0 rem runtest v3.5 -- runs with csc.exe x86 .NET 3.5 rem runtest v3.5 64 -- runs with csc.exe x64 .NET 3.5 set version=v4.0.30319 set platform=Framework for %%a in (%*) do ( if "%%a" == "64" (set platform=Framework64) if "%%a" == "v3.5" (set version=v3.5) ) echo Compiler: %platform%\%version%\csc.exe set csc="C:\Windows\Microsoft.NET\%platform%\%version%\csc.exe" set make=%csc% /nologo /nowarn:1607 test.cs rem CS1607: Referenced assembly targets a different processor rem This happens if you compile for x64 using csc32, or x86 using csc64 %make% /platform:x86 test.exe echo =^> x86 %make% /platform:x86 /optimize test.exe echo =^> x86 (Optimized) %make% /platform:x86 /debug test.exe echo =^> x86 (Debug) %make% /platform:x86 /debug /optimize test.exe echo =^> x86 (Debug + Optimized) %make% /platform:x64 test.exe echo =^> x64 %make% /platform:x64 /optimize test.exe echo =^> x64 (Optimized) %make% /platform:x64 /debug test.exe echo =^> x64 (Debug) %make% /platform:x64 /debug /optimize test.exe echo =^> x64 (Debug + Optimized) %make% /platform:AnyCPU test.exe echo =^> AnyCPU %make% /platform:AnyCPU /optimize test.exe echo =^> AnyCPU (Optimized) %make% /platform:AnyCPU /debug test.exe echo =^> AnyCPU (Debug) %make% /platform:AnyCPU /debug /optimize test.exe echo =^> AnyCPU (Debug + Optimized) Test Results When running the runtest.bat I get the following results on my Win7 x64 install. 
> runtest 32 v4.0 Compiler: Framework\v4.0.30319\csc.exe False => x86 False => x86 (Optimized) True => x86 (Debug) False => x86 (Debug + Optimized) True => x64 True => x64 (Optimized) True => x64 (Debug) True => x64 (Debug + Optimized) True => AnyCPU True => AnyCPU (Optimized) True => AnyCPU (Debug) True => AnyCPU (Debug + Optimized) > runtest 64 v4.0 Compiler: Framework64\v4.0.30319\csc.exe False => x86 False => x86 (Optimized) True => x86 (Debug) False => x86 (Debug + Optimized) True => x64 True => x64 (Optimized) True => x64 (Debug) True => x64 (Debug + Optimized) True => AnyCPU True => AnyCPU (Optimized) True => AnyCPU (Debug) True => AnyCPU (Debug + Optimized) > runtest 32 v3.5 Compiler: Framework\v3.5\csc.exe True => x86 True => x86 (Optimized) True => x86 (Debug) True => x86 (Debug + Optimized) True => x64 True => x64 (Optimized) True => x64 (Debug) True => x64 (Debug + Optimized) True => AnyCPU True => AnyCPU (Optimized) True => AnyCPU (Debug) True => AnyCPU (Debug + Optimized) > runtest 64 v3.5 Compiler: Framework64\v3.5\csc.exe True => x86 True => x86 (Optimized) True => x86 (Debug) True => x86 (Debug + Optimized) True => x64 True => x64 (Optimized) True => x64 (Debug) True => x64 (Debug + Optimized) True => AnyCPU True => AnyCPU (Optimized) True => AnyCPU (Debug) True => AnyCPU (Debug + Optimized) tl;dr

  • php parsing speed optimization

    - by Arnaud
    I would like to add tooltips or generate links according to the elements available in the database. For example, if the HTML page printed is: to reboot your linux host in single-user mode you can ... I will use explode(" ", $row[page]) and the idea is now to look up every single word of the page to find out if it has a related reference. In this example, let's say I've got a table reference with one entry for reboot and one for linux: reboot: restart a computer linux: operating system Now my output will look like (replaced < and > with @): to @a href="ref/reboot"@reboot@/a@ your @a href="ref/linux"@linux@/a@ host in single-user mode you can ... Instead of a static list generated when I saved the content, if I add more keywords in the future the text will become more interactive. My main concern and question is: how can I create an efficient enough process to do this? Should I store all the db entries in an array and compare them? Do an SQL query for each word (seems crazy)? Dump the table into a file and use a very long regex, or a "grep -f pattern data" way of doing it? Or, or, or... I'm sure there must be a better way of doing it, I just don't have a clue about it, or maybe this will be far too resource-unfriendly and I should avoid doing such things. Cheers!

  • Query with many CASE statements - optimization

    - by Nemanja Vujacic
    Hi guys, I have one very dirty query that surely can be optimized, because there are so many CASE statements in it! SELECT (CASE pa.KplusTable_Id WHEN 1 THEN sp.sp_id WHEN 2 THEN fw.fw_id WHEN 3 THEN s.sw_Id WHEN 4 THEN id.ia_id END) as Deal_Id, max(CASE pa.KplusTable_Id WHEN 1 THEN sp.Trans_Id WHEN 2 THEN fw.Trans_Id WHEN 3 THEN s.Trans_Id WHEN 4 THEN id.Trans_Id END) as TransId_CurrentMax INTO #MaxRazlicitOdNull FROM #PotencijalniAktuelni pa LEFT JOIN kplus_sp sp (nolock) on sp.sp_id=pa.Deal_Id AND pa.KplusTable_Id=1 LEFT JOIN kplus_fw fw (nolock) on fw.fw_id=pa.Deal_Id AND pa.KplusTable_Id=2 LEFT JOIN dev_sw s (nolock) on s.sw_Id=pa.Deal_Id AND pa.KplusTable_Id=3 LEFT JOIN kplus_ia id (nolock) on id.ia_id=pa.Deal_Id AND pa.KplusTable_Id=4 WHERE isnull(CASE pa.KplusTable_Id WHEN 1 THEN sp.BROJ_TIKETA WHEN 2 THEN fw.BROJ_TIKETA WHEN 3 THEN s.tiket WHEN 4 THEN id.BROJ_TIKETA END, '')<>'' GROUP BY CASE pa.KplusTable_Id WHEN 1 THEN sp.sp_id WHEN 2 THEN fw.fw_id WHEN 3 THEN s.sw_Id WHEN 4 THEN id.ia_id END Because I repeat the same CASE condition several times, do you have any ideas on how to optimize the query and make it simpler and better? All suggestions are welcome! TnX in advance! Nemanja
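
    One way to spell each CASE only once is to compute the shared expressions in a CROSS APPLY and reuse the aliases everywhere else. A sketch (untested against the real schema, but logically equivalent since the APPLY always returns exactly one row per input row):

      SELECT x.Deal_Id,
             MAX(x.Trans_Id) AS TransId_CurrentMax
      INTO #MaxRazlicitOdNull
      FROM #PotencijalniAktuelni pa
      LEFT JOIN kplus_sp sp (nolock) ON sp.sp_id = pa.Deal_Id AND pa.KplusTable_Id = 1
      LEFT JOIN kplus_fw fw (nolock) ON fw.fw_id = pa.Deal_Id AND pa.KplusTable_Id = 2
      LEFT JOIN dev_sw  s  (nolock) ON s.sw_Id  = pa.Deal_Id AND pa.KplusTable_Id = 3
      LEFT JOIN kplus_ia id (nolock) ON id.ia_id = pa.Deal_Id AND pa.KplusTable_Id = 4
      CROSS APPLY (SELECT
             CASE pa.KplusTable_Id WHEN 1 THEN sp.sp_id       WHEN 2 THEN fw.fw_id       WHEN 3 THEN s.sw_Id    WHEN 4 THEN id.ia_id       END,
             CASE pa.KplusTable_Id WHEN 1 THEN sp.Trans_Id    WHEN 2 THEN fw.Trans_Id    WHEN 3 THEN s.Trans_Id WHEN 4 THEN id.Trans_Id    END,
             CASE pa.KplusTable_Id WHEN 1 THEN sp.BROJ_TIKETA WHEN 2 THEN fw.BROJ_TIKETA WHEN 3 THEN s.tiket    WHEN 4 THEN id.BROJ_TIKETA END
           ) AS x (Deal_Id, Trans_Id, Tiket)
      WHERE ISNULL(x.Tiket, '') <> ''
      GROUP BY x.Deal_Id;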
