Search Results

Search found 3828 results on 154 pages for 'mathematical optimization'.

Page 48 of 154

  • Why doesn't g++ pay attention to __attribute__((pure)) for virtual functions?

    - by jchl
    According to the GCC documentation, __attribute__((pure)) tells the compiler that a function has no side-effects, and so it can be subject to common subexpression elimination. This attribute appears to work for non-virtual functions, but not for virtual functions. For example, consider the following code:

        extern void f( int );

        class C
        {
        public:
            int a1();
            int a2() __attribute__((pure));
            virtual int b1();
            virtual int b2() __attribute__((pure));
        };

        void test_a1( C *c ) { if( c->a1() ) { f( c->a1() ); } }
        void test_a2( C *c ) { if( c->a2() ) { f( c->a2() ); } }
        void test_b1( C *c ) { if( c->b1() ) { f( c->b1() ); } }
        void test_b2( C *c ) { if( c->b2() ) { f( c->b2() ); } }

    When compiled with optimization enabled (either -O2 or -Os), test_a2() only calls C::a2() once, but test_b2() calls b2() twice. Is there a reason for this? Is it because, even though the implementation in class C is pure, g++ can't assume that the implementation in every subclass will also be pure? If so, is there a way to tell g++ that this virtual function and every subclass's implementation will be pure?
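    Editor's note: the asker's second guess matches the usual explanation — the call is dispatched on the dynamic type, and an override in a subclass the compiler has not seen need not be pure, so g++ cannot fold the two calls. A hedged sketch of the common workaround, which simply performs the common subexpression elimination by hand:

        // Sketch: hoist the virtual call into a local so it is made only once.
        void test_b2_cached( C *c )
        {
            int v = c->b2(); // single virtual dispatch
            if( v )
            {
                f( v );
            }
        }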

  • Optimize Duplicate Detection

    - by Dave Jarvis
    Background: This is an optimization problem. Oracle Forms XML files have elements such as:

        <Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />

    where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:

        sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
        sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql

    I wrote a script to generate a list of exact duplicates using a brute-force approach. Problem: there are 37,497 files to compare against each other, and it takes 8 minutes to compare one file against all the others, so the script will complete in approximately 208 days. Logically, if A = B and A = C, then there is no need to check if B = C. So the problem is: how do you eliminate the redundant comparisons? The comparison script is as follows:

        #!/bin/bash
        echo Loading directory ...
        for i in $(find sql/ -type f -name \*.sql); do
            echo Comparing $i ...
            for j in $(find sql/ -type f -name \*.sql); do
                if [ "$i" = "$j" ]; then continue; fi
                # Case insensitive compare, ignore spaces
                diff -iEbwBaq $i $j > /dev/null
                # 0 = no difference (i.e., duplicate code)
                if [ $? = 0 ]; then
                    echo $i :: $j >> clones.txt
                fi
            done
        done

    Question: How would you optimize the script so that checking for cloned code is a few orders of magnitude faster? System constraints: quad-core CPU with an SSD; trying to avoid using cloud services if possible. The system is a Windows-based machine with Cygwin installed -- algorithms or solutions in other languages are welcome. Thank you!
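    Editor's sketch of the standard fix (not from the original post): instead of O(n²) pairwise diffs, normalize each file once, hash it, and group identical hashes — a single pass over the 37,497 files. The normalization below approximates the diff flags (case-insensitive, whitespace ignored); md5sum and GNU uniq ship with Cygwin:

        #!/bin/bash
        # Hash the normalized content of every file, then group by hash.
        find sql/ -type f -name '*.sql' | while read -r f; do
            h=$(tr '[:upper:]' '[:lower:]' < "$f" | tr -d '[:space:]' | md5sum | cut -d' ' -f1)
            printf '%s %s\n' "$h" "$f"
        done | sort | uniq -w32 --all-repeated=separate > clones.txt
        # Each blank-line-separated group in clones.txt is one set of duplicates
        # (lines sharing the same 32-character hash prefix).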

  • memcached: which is faster, doing an add (and checking the result), or doing a get (and set when it returns false)?

    - by Mike Sherov
    The title of this question isn't so clear, but the code and question are straightforward. Let's say I want to show my users an ad once per day. To accomplish this, every time they visit a page on my site, I check to see if a certain memcache key has any data stored in it. If so, don't show an ad. If not, store the value '1' in that key with an expiration of 86400. I can do this 2 ways:

        //version a
        $key = 'OPD_'.date('Ymd').'_'.$type.'_'.$user;
        if ($memcache->get($key) === false) {
            $memcache->set($key, '1', false, $expire);
            //show ad
        }

        //version b
        $key = 'OPD_'.date('Ymd').'_'.$type.'_'.$user;
        if ($memcache->add($key, '1', false, $expire)) {
            //show ad
        }

    Now, it might seem obvious that b is better: it always makes 1 memcache call. However, what is the overhead of "add" vs. "get"? These aren't the real comparisons... and I just made up these numbers, but let's say 1 add ~= 1 set ~= 5 get in terms of effort, and the average user views 5 pages a day:

        a: (5 get * 1 effort) + (1 set * 5 effort) = 10 units of effort
        b: (5 add * 5 effort) = 25 units of effort

    Would it make sense to always do the add call? Is this an unnecessary micro-optimization?
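    Editor's sketch of how one might settle this empirically instead of with invented weights (assumes the pecl/memcache extension used above and a local memcached; key names are placeholders):

        <?php
        // Crude timing of get vs. add round-trips; run against a test namespace.
        $memcache = new Memcache();
        $memcache->connect('127.0.0.1', 11211);

        $n = 10000;

        $t0 = microtime(true);
        for ($i = 0; $i < $n; $i++) { $memcache->get('bench_' . $i); }
        $tGet = microtime(true) - $t0;

        $t0 = microtime(true);
        for ($i = 0; $i < $n; $i++) { $memcache->add('bench_' . $i, '1', false, 60); }
        $tAdd = microtime(true) - $t0;

        printf("get: %.4fs, add: %.4fs for %d ops\n", $tGet, $tAdd, $n);

    In practice both get and add are a single round-trip to the server, so network latency dominates and version b's single call per page view is the cheaper pattern.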

  • PHP, ImageMagick, Google's Page Speed, & JPG File Size Optimization

    - by Sonny
    I have photo gallery code that does image resizing and thumbnail creation, using ImageMagick. I ran a gallery page through Google's Page Speed tool and it revealed that the resized images and thumbnails both have about an extra 10KB of data (JPEG files specifically). What can I add to my scripts to optimize the file size? Additional information: I am using the Imagick::FILTER_LANCZOS filter with a blur setting of 0.9 when calling the resizeImage() function. JPEGs have a quality setting of 80.
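    Editor's note: the ~10KB Page Speed flags in resized JPEGs is typically embedded metadata (EXIF, ICC profiles, comments) carried over from the source image. A hedged sketch of the common fix with the Imagick extension — file names and dimensions are placeholders:

        <?php
        $img = new Imagick('source.jpg');
        $img->resizeImage(800, 600, Imagick::FILTER_LANCZOS, 0.9);
        $img->setImageCompressionQuality(80);
        $img->stripImage(); // remove EXIF, ICC profiles, and comments
        $img->writeImage('resized.jpg');

    Note that stripImage() also removes color profiles, which can shift colors slightly on wide-gamut sources.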

  • SQL query optimization

    - by nvtthang
    I have a problem with my SQL query taking a long time to get all records from the database. Can anybody help me? Below is a sample of the database:

        order(order_id, order_nm)
        customer(customer_id, customer_nm)
        orderDetail(orderDetail_id, order_id, orderDate, customer_id, Comment)

    I want to get the latest customer and order detail information. Here is my solution: I've created a function GetLatestOrderByCustomer(@cust_id) to get the latest order information for a customer.

        CREATE FUNCTION [dbo].[GetLatestOrderByCustomer]
        (
            @cust_id int
        )
        RETURNS varchar(255)
        AS
        BEGIN
            DECLARE @ResultVar varchar(255)

            SELECT @ResultVar = tmp.comment
            FROM (
                SELECT TOP 1 orderDate, comment
                FROM orderDetail
                WHERE orderDetail.customer_id = @cust_id
                ORDER BY orderDate DESC
            ) tmp

            -- Return the result of the function
            RETURN @ResultVar
        END

    Below is my SQL query:

        SELECT customer.customer_id,
               customer.customer_nm,
               dbo.GetLatestOrderByCustomer(customer.customer_id)
        FROM Customer
        LEFT JOIN orderDetail ON orderDetail.customer_id = customer.customer_id

    It takes time to run the function. Could anybody suggest any solutions to make it better? Thanks in advance.
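    Editor's sketch of a set-based rewrite (T-SQL, assumes SQL Server 2005+): scalar functions run once per row, so inlining the lookup with OUTER APPLY usually performs far better while returning the same result set:

        SELECT c.customer_id,
               c.customer_nm,
               latest.Comment
        FROM customer c
        OUTER APPLY (
            SELECT TOP 1 od.Comment
            FROM orderDetail od
            WHERE od.customer_id = c.customer_id
            ORDER BY od.orderDate DESC
        ) latest;

    An index on orderDetail(customer_id, orderDate DESC) would let each per-customer lookup be a single seek.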

  • Compiler optimization of reference variables

    - by Phineas
    I often use references to simplify the appearance of code:

        vec3f& vertex = _vertices[index];

        // Calculate the vertex position
        vertex[0] = startx + col * colWidth;
        vertex[1] = starty + row * rowWidth;
        vertex[2] = 0.0f;

    Will compilers recognize and optimize this so it is essentially the following?

        _vertices[index][0] = startx + col * colWidth;
        _vertices[index][1] = starty + row * rowWidth;
        _vertices[index][2] = 0.0f;

  • algorithm optimization: returning object and sub-objects from single SQL statement in PHP

    - by pocketfullofcheese
    I have an object-oriented PHP application. I have a simple hierarchy stored in an SQL table (Chapters, and Authors that can be assigned to a chapter). I wrote the following method to fetch the chapters and the authors in a single query and then loop through the result, figuring out which rows belong to the same chapter and creating both Chapter objects and arrays of Author objects. However, I feel like this can be made a lot neater. Can someone help?

        function getChaptersWithAuthors($monographId, $rangeInfo = null) {
            $result =& $this->retrieveRange(
                'SELECT mc.chapter_id, mc.monograph_id, mc.chapter_seq,
                        ma.author_id, ma.monograph_id, mca.primary_contact, mca.seq,
                        ma.first_name, ma.middle_name, ma.last_name,
                        ma.affiliation, ma.country, ma.email, ma.url
                 FROM monograph_chapters mc
                 LEFT JOIN monograph_chapter_authors mca ON mc.chapter_id = mca.chapter_id
                 LEFT JOIN monograph_authors ma ON ma.author_id = mca.author_id
                 WHERE mc.monograph_id = ?
                 ORDER BY mc.chapter_seq, mca.seq',
                $monographId, $rangeInfo
            );

            $chapterAuthorDao =& DAORegistry::getDAO('ChapterAuthorDAO');
            $chapters = array();
            $authors = array();

            while (!$result->EOF) {
                $row = $result->GetRowAssoc(false);

                // initialize $currentChapterId for the first row
                if ( !isset($currentChapterId) ) $currentChapterId = $row['chapter_id'];

                if ( $row['chapter_id'] != $currentChapterId ) {
                    // we're on a new row. create a chapter from the previous one
                    $chapter =& $this->_returnFromRow($prevRow);
                    // set the authors with all the authors found so far
                    $chapter->setAuthors($authors);
                    // clear the authors array
                    unset($authors);
                    $authors = array();
                    // add the chapter to the returner
                    $chapters[$currentChapterId] =& $chapter;
                    // set the current id for this row
                    $currentChapterId = $row['chapter_id'];
                }

                // add every author to the authors array
                if ( $row['author_id'] )
                    $authors[$row['author_id']] =& $chapterAuthorDao->_returnFromRow($row);

                // keep a copy of the previous row for creating the chapter once we're on a new chapter row
                $prevRow = $row;

                $result->MoveNext();

                if ( $result->EOF ) {
                    // The result set is at the end
                    $chapter =& $this->_returnFromRow($row);
                    // set the authors with all the authors found so far
                    $chapter->setAuthors($authors);
                    unset($authors);
                    // add the chapter to the returner
                    $chapters[$currentChapterId] =& $chapter;
                }
            }
            $result->Close();
            unset($result);
            return $chapters;
        }

    PS: the _returnFromRow methods simply construct a Chapter or Author object given the SQL row. If needed, I can post those methods here.
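    Editor's sketch of a tidier shape (assumes the same DAO API as above): keying the result array by chapter_id removes the previous-row bookkeeping and the end-of-set special case entirely:

        function getChaptersWithAuthors($monographId, $rangeInfo = null) {
            // $sql stands in for the same SELECT ... ORDER BY mc.chapter_seq, mca.seq as above.
            $result =& $this->retrieveRange($sql, $monographId, $rangeInfo);
            $chapterAuthorDao =& DAORegistry::getDAO('ChapterAuthorDAO');

            $chapters = array();
            $authorsByChapter = array();

            while (!$result->EOF) {
                $row = $result->GetRowAssoc(false);
                $id = $row['chapter_id'];

                // First time this chapter appears: build it once.
                if (!isset($chapters[$id])) {
                    $chapters[$id] =& $this->_returnFromRow($row);
                    $authorsByChapter[$id] = array();
                }
                // Collect the author, if the LEFT JOIN produced one.
                if ($row['author_id']) {
                    $authorsByChapter[$id][$row['author_id']] =& $chapterAuthorDao->_returnFromRow($row);
                }
                $result->MoveNext();
            }
            $result->Close();

            // Attach the author lists in one pass at the end.
            foreach (array_keys($chapters) as $id) {
                $chapters[$id]->setAuthors($authorsByChapter[$id]);
            }
            return $chapters;
        }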

  • Query optimization (OR based)

    - by john194
    I have googled but I can't find answers for these questions; your advice is appreciated. Setup: CentOS on a VPS with 512MB RAM, nginx, PHP 5 (FastCGI), MySQL 5 (MyISAM, not InnoDB). I need to optimize this app created by some ex-employee. The app works, but it's slow. Table:

        t1(id [bigint(20)], c1 [mediumtext], c2 [mediumtext], c3 [mediumtext], c4 [mediumtext])

    id is some random big number and is the PK. The mediumtext columns look like this:

        c1 = "|box-002877|"
        c2 = "|ct-2348|rd-11124854|hw-3949|wd-8872|hw-119037736|...etc..."
        c3 = "|fg-2448|wd-11172|hw-1656|...etc..."
        c4 = "|hg-2448|qd-16667|...etc."

    (Some columns contain a lot of data, around 900 KiB; the database is around 300 MiB.) Yes, mediumtext "is bad", and (20) is too big... but I didn't create this. Those codes can be found in any of those 4 mediumtexts. He needs all the columns of the row containing $code, so he wrote this:

        function f1($code) {
            SELECT * FROM t1
            WHERE c1 LIKE '%$code%'
               OR c2 LIKE '%$code%'
               OR c3 LIKE '%$code%'
               OR c4 LIKE '%$code%';
        }

    Questions:

    Q1. If $code is found in c1, does MySQL automatically stop checking and return the row (id + c1 + c2 + c3 + c4), or will it continue (wasting time) checking c2, c3 and c4?
    Q2. MySQL is working with this table on disk (not RAM) because of the mediumtext, right? Is this the primary cause of slowness?
    Q3. Can that query be cached by MySQL (with a big query_cache_size=128M value in my.cnf), or is it uncacheable due to the mediumtexts, or due to the "OR LIKE"?
    Q4. Do you recommend rewriting this with MySQL's INSTR() / LOCATE() / MATCH...AGAINST [FULLTEXT]?
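    Editor's sketch for Q4 (hedged): MyISAM supports FULLTEXT indexes, which avoid the full-table scan that a leading-wildcard LIKE forces. Two caveats with this data that need testing first: the default parser splits hyphenated codes like rd-11124854 into two tokens, and ft_min_word_len (default 4) drops short tokens such as "rd", so boolean phrase search and/or parser settings may need tuning:

        ALTER TABLE t1 ADD FULLTEXT ft_all (c1, c2, c3, c4);

        SELECT id, c1, c2, c3, c4
        FROM t1
        WHERE MATCH(c1, c2, c3, c4) AGAINST ('"rd-11124854"' IN BOOLEAN MODE);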

  • SQL model optimization question

    - by supermogx
    I need to keep track of the number of "hits" on a particular item in a DB. The catch is that hits should stay unique per user ID, so if a user hits the item 3 times, it should still count as 1 hit. I also need to display the total number of hits for a particular item. Is there a better way than storing each hit, for each item, by each user in a separate table? Would keeping the user IDs in a comma-separated string be more efficient?
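    Editor's sketch (MySQL syntax assumed): the separate table is the usual answer, because a composite primary key makes the per-user uniqueness free, whereas a comma-separated string can be neither indexed nor constrained:

        CREATE TABLE item_hits (
            item_id INT NOT NULL,
            user_id INT NOT NULL,
            PRIMARY KEY (item_id, user_id)  -- one row per (item, user)
        );

        -- Record a hit; duplicate (item, user) pairs are silently ignored.
        INSERT IGNORE INTO item_hits (item_id, user_id) VALUES (42, 7);

        -- Total unique hits for an item.
        SELECT COUNT(*) FROM item_hits WHERE item_id = 42;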

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third-party web sites. This is its structure:

        id, site_id, unixtime, unixtime_last, ip_address, uid

    There are four indexes: id, (site_id, unixtime), (site_id, ip_address), and (site_id, uid). There are many different ways we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as to determine whether this is a new visitor or a returning visitor.

    Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient?

    I don't really understand B-trees beyond some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance -- correct? I considered making site_id the second column of the index for both ip_address and uid, but I think that would make the index less efficient, since the IP and UID vary more than the site ID: we only have about 8,000 unique sites per database server, but millions of unique visitors across all ~8,000 sites on a daily basis.

    I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine whether this is a new visitor to this site_id or not. The query would be something like:

        SELECT id FROM sessions WHERE uid = 'value' AND site_id = 123 LIMIT 1

    ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't necessarily be super fast, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID.

    Any insight on making this as efficient as possible would be appreciated :)

    Update: this is a MyISAM table with MySQL 5.0. My concerns are with both performance and storage space. This table is both read- and write-heavy. If I had to choose between performance and storage, my biggest concern is performance -- but both are important. We use memcached heavily in all areas of our service, but that's not an excuse not to care about the database design. I want the database to be as efficient as possible.
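    Editor's note with a hedged sketch: for this particular lookup the column order inside the composite index does not matter, because both predicates are equalities; order matters for the unixtime index (a time-range scan needs site_id first) and for any query that filters on only one of the two columns. The index name here is hypothetical:

        -- Either composite order satisfies the equality lookup with one seek:
        ALTER TABLE sessions ADD INDEX idx_uid_site (uid, site_id);

        SELECT id FROM sessions
        WHERE uid = 'value' AND site_id = 123
        LIMIT 1;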

  • Any specifications/docs around optimization of Google Apps Script, to avoid timeouts and "hangs"

    - by BruceM
    From my experience so far, it seems that if you write a script that makes lots of expensive calls close together, the functionality just "hangs", or you get inconsistent responses, and you have to refresh the browser because sheets stop updating, etc. Are there any docs or specs that clarify this? Releasing an app for real-world use is not feasible if users can only expect it to work most of the time, producing random results every now and then...

  • SSE SIMD Optimization For Loop

    - by Projectile Fish
    I have some code in a loop:

        for (int i = 0; i < n; i++) {
            u[i] = c * u[i] + s * b[i];
        }

    So, u and b are vectors of the same length, and c and s are scalars. Is this code a good candidate for vectorization for use with SSE in order to get a speedup?
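    Editor's note: yes — this is the classic "scaled vector sum" shape (essentially a BLAS axpy with two scalars), with no loop-carried dependence, so it vectorizes cleanly at four floats per instruction. A hedged sketch with SSE intrinsics, assuming float data and making no alignment assumptions:

        #include <xmmintrin.h>

        /* u[i] = c*u[i] + s*b[i], four lanes at a time. */
        void scale_add(float *u, const float *b, float c, float s, int n)
        {
            __m128 vc = _mm_set1_ps(c);
            __m128 vs = _mm_set1_ps(s);
            int i = 0;
            for (; i + 4 <= n; i += 4) {
                __m128 vu = _mm_loadu_ps(u + i);
                __m128 vb = _mm_loadu_ps(b + i);
                _mm_storeu_ps(u + i, _mm_add_ps(_mm_mul_ps(vc, vu),
                                                _mm_mul_ps(vs, vb)));
            }
            for (; i < n; i++)          /* scalar tail */
                u[i] = c * u[i] + s * b[i];
        }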

  • C#, weird optimization

    - by Snake
    Hi, I'm trying to read my compiled C# code. This is my code:

        using (OleDbCommand insertCommand = new OleDbCommand("...", connection))
        {
            // do super stuff
        }

    But! We all know that a using gets translated to this (since OleDbCommand is a reference type):

        {
            OleDbCommand insertCommand = new OleDbCommand("...", connection);
            try
            {
                // do super stuff
            }
            finally
            {
                if (insertCommand != null)
                    ((IDisposable)insertCommand).Dispose();
            }
        }

    But when I decompile my assembly (compiled with .NET 2.0) I get this in Resharper:

        try
        {
            insertCommand = new OleDbCommand("", connection);
        Label_0017:
            try
            {
                // do super stuff
            }
            finally
            {
            Label_0111:
                if ((insertCommand == null) != null)
                {
                    goto Label_0122;
                }
                insertCommand.Dispose();
            Label_0122:;
            }

    I'm talking about this line: if ((insertCommand == null) != null). True is not null, it never is, nor is false. So how is my object disposed properly? WTF? Thanks! -Kristof

  • Display a ranking grid for game : optimization of left outer join and find a player

    - by Jerome C.
    Hello, I want to build a ranking grid. I have a table with different values indexed by a key, and each player has several SimpleValues:

        Table SimpleValue: key varchar, value int, playerId int
        Table Player: id int, nickname varchar

    Now imagine these records:

        SimpleValue:
            key  value  playerId
            for  1      1
            int  2      1
            agi  2      1
            lvl  5      1
            for  6      2
            int  3      2
            agi  1      2
            lvl  4      2

        Player:
            id  nickname
            1   Bob
            2   John

    I want to display a ranking of these players on various SimpleValue keys. Something like:

        nickname  for  lvl
        Bob       1    5
        John      6    4

    At the moment I generate an SQL query based on which SimpleValue keys to display and which key to order players by. E.g., to display the 'lvl' and 'for' of each player and order them by 'lvl', the generated query is:

        SELECT p.nickname as nickname, v1.value as lvl, v2.value as for
        FROM Player p
        LEFT OUTER JOIN SimpleValue v1 ON p.id = v1.playerId and v1.key = 'lvl'
        LEFT OUTER JOIN SimpleValue v2 ON p.id = v2.playerId and v2.key = 'for'
        ORDER BY v1.value

    This query runs perfectly. BUT if I want to display 10 different values, it generates 10 'left outer join's. Is there a way to simplify this query? A second question: is there a way to display a portion of this ranking? Imagine I have 1000 players and I want to display the TOP 10 — I use the LIMIT keyword. Now I want to display the rank of the player Bob, which is 326/1000, with the 5 ranked players above and below him (so from positions 321 to 331). How can I achieve it? Thanks.
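    Editor's sketch for the first question (MySQL syntax assumed; note that key and for are reserved words and need quoting): conditional aggregation pivots any number of keys with a single join:

        SELECT p.nickname,
               MAX(CASE WHEN v.`key` = 'for' THEN v.value END) AS `for`,
               MAX(CASE WHEN v.`key` = 'lvl' THEN v.value END) AS lvl
        FROM Player p
        LEFT JOIN SimpleValue v ON v.playerId = p.id
        GROUP BY p.id, p.nickname
        ORDER BY lvl;

    For the second question, once Bob's rank is known (position 326), the same ordered query with LIMIT 320, 11 returns the 11-row window around him (LIMIT's offset is zero-based).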

  • MongoDB vs CouchDB (Speed optimization)

    - by Edward83
    Hi! I ran some speed tests to compare MongoDB and CouchDB. Only inserts were performed while testing. MongoDB came out about 15x faster than CouchDB. I know that this is because of sockets vs. HTTP, but it would be very interesting for me to know: how can I optimize inserts in CouchDB? Test platform: Windows XP SP3, 32-bit. I used the latest versions of MongoDB and the MongoDB C# driver, and the latest installation package of CouchDB for Windows. Thanks!
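    Editor's note: the standard lever for CouchDB insert throughput is the bulk document API, which amortizes one HTTP round-trip over many documents. A hedged sketch against the stock HTTP API (database name and documents are placeholders):

        curl -X POST http://127.0.0.1:5984/testdb/_bulk_docs \
             -H "Content-Type: application/json" \
             -d '{"docs": [{"value": 1}, {"value": 2}, {"value": 3}]}'

    Batching a few hundred to a few thousand documents per _bulk_docs call typically closes much of the gap that per-document POSTs leave.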

  • Simple neon optimization in Android

    - by Peter
    In my Android application, I use a bunch of open source libraries such as libyuv, libvpx, libcrypto, libssl, etc. Some of them come with an Android.mk; for others, I hand-crafted the Android.mk. The code is built only for ARM for now. Here is my Application.mk:

        APP_ABI := armeabi-v7a
        APP_OPTIM := release
        APP_STL := gnustl_static
        APP_CPPFLAGS := -frtti

    I am looking for a way to generate binaries that are optimized for NEON. Browsing the net, I found the following setting that someone is using in his Android.mk:

        LOCAL_CFLAGS += -mfloat-abi=softfp -mfpu=neon -march=armv7

    I wonder: if I simply put this setting in Application.mk, will it automatically get applied across all the libraries? A step before each library is built is the following:

        include $(CLEAR_VARS)

    Is it better to include the LOCAL_CFLAGS directive after this line (instead of including it in Application.mk)? Finally, why doesn't ndk-build automatically optimize for NEON when it sees armeabi-v7a in Application.mk? Regards.
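    Editor's note, hedged: LOCAL_* variables are reset by include $(CLEAR_VARS), so a LOCAL_CFLAGS line belongs after that include in each module. The ndk-build variables that apply globally are the APP_* ones in Application.mk, and NEON also has a dedicated per-module switch. A sketch:

        # Application.mk -- flags applied to every module in the project
        APP_CFLAGS += -mfloat-abi=softfp -mfpu=neon

        # Android.mk -- per-module alternative, placed after include $(CLEAR_VARS)
        LOCAL_ARM_NEON := true

    armeabi-v7a alone doesn't imply NEON because ARMv7-A CPUs may omit it (early Tegra devices famously lacked NEON), so the NDK makes it opt-in.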

  • url optimization in rails?

    - by piemesons
    Hello, I want to create SEO-optimized URLs in Rails, like Stack Overflow does. Right now my URL is:

        http://localhost:3000/questions/56

    I want to make it something like this:

        http://localhost:3000/questions/56/this-is-my-optimized-url

    I am using the RESTful approach. Is there any plugin available for this?
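    Editor's sketch: Rails has a built-in hook for this — override to_param on the model, and every generated route uses it, no plugin required. Assumes the model has a title attribute:

        class Question < ActiveRecord::Base
          def to_param
            # id first, so Question.find(params[:id]) still works:
            # String#to_i stops at the first non-digit.
            "#{id}-#{title.parameterize}"
          end
        end

        # question_path(@question)  =>  "/questions/56-this-is-my-optimized-url"

    This yields the id-slug form /questions/56-this-is-my-optimized-url; getting a separate /slug path segment instead would take a custom route, but id-slug is the common Rails idiom.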

  • Stored procedure optimization

    - by George Zacharia
    Hi, I have a stored procedure which takes a lot of time to execute. Can anyone suggest a better approach that achieves the same result set?

        ALTER PROCEDURE [dbo].[spFavoriteRecipesGET]
            @USERID INT,
            @PAGENUMBER INT,
            @PAGESIZE INT,
            @SORTDIRECTION VARCHAR(4),
            @SORTORDER VARCHAR(4),
            @FILTERBY INT
        AS
        BEGIN
            DECLARE @ROW_START INT
            DECLARE @ROW_END INT
            SET @ROW_START = (@PageNumber - 1) * @PageSize + 1
            SET @ROW_END = @PageNumber * @PageSize
            DECLARE @RecipeCount INT

            DECLARE @RESULT_SET_TABLE TABLE
            (
                Id INT NOT NULL IDENTITY(1,1),
                FavoriteRecipeId INT,
                RecipeId INT,
                DateAdded DATETIME,
                Title NVARCHAR(255),
                UrlFriendlyTitle NVARCHAR(250),
                [Description] NVARCHAR(MAX),
                AverageRatingId FLOAT,
                SubmittedById INT,
                SubmittedBy VARCHAR(250),
                RecipeStateId INT,
                RecipeRatingId INT,
                ReviewCount INT,
                TweaksCount INT,
                PhotoCount INT,
                ImageName NVARCHAR(50)
            )

            INSERT INTO @RESULT_SET_TABLE
            SELECT FavoriteRecipes.FavoriteRecipeId,
                   Recipes.RecipeId,
                   FavoriteRecipes.DateAdded,
                   Recipes.Title,
                   Recipes.UrlFriendlyTitle,
                   Recipes.[Description],
                   Recipes.AverageRatingId,
                   Recipes.SubmittedById,
                   COALESCE(users.DisplayName, users.UserName, Recipes.SubmittedBy) AS SubmittedBy,
                   Recipes.RecipeStateId,
                   RecipeReviews.RecipeRatingId,
                   COUNT(RecipeReviews.Review),
                   COUNT(RecipeTweaks.Tweak),
                   COUNT(Photos.PhotoId),
                   dbo.udfGetRecipePhoto(Recipes.RecipeId) AS ImageName
            FROM FavoriteRecipes
            INNER JOIN Recipes
                ON FavoriteRecipes.RecipeId = Recipes.RecipeId
                AND Recipes.RecipeStateId <> 3
            LEFT OUTER JOIN RecipeReviews
                ON RecipeReviews.RecipeId = Recipes.RecipeId
                AND RecipeReviews.ReviewedById = @UserId
                AND RecipeReviews.RecipeRatingId =
                    (SELECT MAX(RecipeReviews.RecipeRatingId)
                     FROM RecipeReviews
                     WHERE RecipeReviews.ReviewedById = @UserId
                       AND RecipeReviews.RecipeId = FavoriteRecipes.RecipeId)
                OR RecipeReviews.RecipeRatingId IS NULL
            LEFT OUTER JOIN RecipeTweaks
                ON RecipeTweaks.RecipeId = Recipes.RecipeId
                AND RecipeTweaks.TweakedById = @UserId
            LEFT OUTER JOIN Photos
                ON Photos.RecipeId = Recipes.RecipeId
                AND Photos.UploadedById = @UserId
                AND Photos.RecipeId = FavoriteRecipes.RecipeId
                AND Photos.PhotoTypeId = 1
            LEFT OUTER JOIN users
                ON Recipes.SubmittedById = users.UserId
            WHERE FavoriteRecipes.UserId = @UserId
            GROUP BY FavoriteRecipes.FavoriteRecipeId, Recipes.RecipeId, FavoriteRecipes.DateAdded,
                     Recipes.Title, Recipes.UrlFriendlyTitle, Recipes.[Description],
                     Recipes.AverageRatingId, Recipes.SubmittedById, Recipes.SubmittedBy,
                     Recipes.RecipeStateId, RecipeReviews.RecipeRatingId,
                     users.DisplayName, users.UserName, Recipes.SubmittedBy;

            WITH SortResults AS
            (
                SELECT ROW_NUMBER() OVER
                       (ORDER BY
                            CASE WHEN @SORTDIRECTION = 't'  AND @SORTORDER = 'a' THEN TITLE END ASC,
                            CASE WHEN @SORTDIRECTION = 't'  AND @SORTORDER = 'd' THEN TITLE END DESC,
                            CASE WHEN @SORTDIRECTION = 'r'  AND @SORTORDER = 'a' THEN AverageRatingId END ASC,
                            CASE WHEN @SORTDIRECTION = 'r'  AND @SORTORDER = 'd' THEN AverageRatingId END DESC,
                            CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER = 'a' THEN RecipeRatingId END ASC,
                            CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER = 'd' THEN RecipeRatingId END DESC,
                            CASE WHEN @SORTDIRECTION = 'd'  AND @SORTORDER = 'a' THEN DateAdded END ASC,
                            CASE WHEN @SORTDIRECTION = 'd'  AND @SORTORDER = 'd' THEN DateAdded END DESC
                       ) RowNumber,
                       FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle, [Description],
                       AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId, RecipeRatingId,
                       ReviewCount, TweaksCount, PhotoCount, ImageName
                FROM @RESULT_SET_TABLE
                WHERE ((@FILTERBY = 1 AND SubmittedById = @USERID)
                    OR (@FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL))
                    OR (@FILTERBY <> 1 AND @FILTERBY <> 2))
            )
            SELECT RowNumber, FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle,
                   [Description], AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId,
                   RecipeRatingId, ReviewCount, TweaksCount, PhotoCount, ImageName
            FROM SortResults
            WHERE RowNumber BETWEEN @ROW_START AND @ROW_END

            PRINT @ROW_START
            PRINT @ROW_END

            SELECT @RecipeCount = dbo.udfGetFavRecipesCount(@UserId)
            SELECT @RecipeCount AS RecipeCount

            SELECT COUNT(Id) AS FilterCount
            FROM @RESULT_SET_TABLE
            WHERE ((@FILTERBY = 1 AND SubmittedById = @USERID)
                OR (@FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL))
                OR (@FILTERBY <> 1 AND @FILTERBY <> 2))
        END

  • ControlCollection extension method optimization

    - by Johan Leino
    Hi, I've got a question regarding an extension method that I have written. It looks like this:

        public static IEnumerable<T> FindControlsOfType<T>(this ControlCollection instance)
            where T : class
        {
            T control;
            foreach (Control ctrl in instance)
            {
                if ((control = ctrl as T) != null)
                {
                    yield return control;
                }
                foreach (T child in FindControlsOfType<T>(ctrl.Controls))
                {
                    yield return child;
                }
            }
        }

        public static IEnumerable<T> FindControlsOfType<T>(this ControlCollection instance, Func<T, bool> match)
            where T : class
        {
            return FindControlsOfType<T>(instance).Where(match);
        }

    The idea here is to find all controls that match a specific criterion (hence the Func<...>) in the controls collection. My questions are: does the second method (the one that takes the Func) first find all the controls of type T and then perform the where condition, or does the runtime "optimize" the call so the where condition is applied while the whole collection is being enumerated (if you get what I mean)? Secondly, are there any other optimizations that I can do to make the code perform better? An example call looks like this:

        var checkbox = this.Controls.FindControlsOfType<MyCustomCheckBox>(
                ctrl => ctrl.CustomProperty == "Test"
            ).FirstOrDefault();
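    Editor's note, hedged: Enumerable.Where is deferred, so the filtered overload streams matches while the tree is walked — nothing is materialized up front, and FirstOrDefault() stops the traversal at the first match. One optimization worth measuring on deep control trees is replacing the recursive iterator (one compiler-generated state machine per nesting level) with an explicit stack; the method name below is hypothetical:

        using System.Collections.Generic;
        using System.Web.UI;

        public static class ControlCollectionExtensions
        {
            public static IEnumerable<T> FindControlsOfTypeIterative<T>(this ControlCollection instance)
                where T : class
            {
                var pending = new Stack<Control>();
                foreach (Control ctrl in instance)
                    pending.Push(ctrl);

                while (pending.Count > 0)
                {
                    Control current = pending.Pop();
                    T match = current as T;
                    if (match != null)
                        yield return match;
                    // Children are pushed rather than recursed into,
                    // so only one iterator state machine exists.
                    foreach (Control child in current.Controls)
                        pending.Push(child);
                }
            }
        }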

  • Database maintenance, big binary table optimization

    - by jgemedina
    Hi, I have a huge database, around 1 TB in size. Most of the space is consumed by a table which stores images; the table currently has almost 800k rows. Server response time has increased, and I would like to know which techniques I should use or you would recommend — partitioning? How should I reorganize the table? Every row is accessed by the image id column, which is the table's clustered index. I reorganize the index every two days and rebuild it every 7 days, but it seems not to be working. Any suggestions?
