Search Results

Search found 28685 results on 1148 pages for 'query performance'.


  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost the performance of a simple RSS feed aggregator. I will use only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how each optimization affects the aggregator's performance.

    Feed aggregator

    Our feed aggregator works as follows:

      1. Load the list of blogs
      2. Download each RSS feed
      3. Parse the feed XML
      4. Add new posts to the database

    The aggregator is run by a task scheduler, for example every 15 minutes. We will start our journey with a serial implementation of the feed aggregator. The second step is to use task parallelism to parallelize feed downloading and parsing. Our last step is to use data parallelism to parallelize the database operations. We will use the Stopwatch class to measure how much time it takes the aggregator to download and insert all posts from all registered blogs. After every run we empty the posts table in the database.

    Serial aggregation

    Before doing any parallel work, let's take a look at the serial implementation of the feed aggregator. All tasks happen one after another.

        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();

                for (var index = 0; index < blogs.Count; index++)
                {
                    ImportFeed(blogs[index]);
                }
            }

            private void ImportFeed(BlogDto blog)
            {
                if (blog == null)
                    return;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                var feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                foreach (var item in feed.Channel.Items)
                {
                    SaveRssFeedItem(item, blog.Id, blog.CreatedById);
                }
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                foreach (var item in feed.Entries)
                {
                    SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
                }
            }
        }

    The serial implementation of the feed aggregator downloads and inserts all posts in 25.46 seconds.

    Task parallelism

    Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism on the MSDN page Task Parallelism (Task Parallel Library) and the Wikipedia page Task parallelism. Although finding parts of code that can safely run in parallel without synchronization issues is not an easy task, we are lucky this time: feed download and parsing is a perfect candidate for parallel tasks. We can safely parallelize the feed imports because the import tasks don't share any resources and therefore don't need any synchronization.
    After getting the list of blogs we iterate through the collection and start a new TPL task for each blog feed.

        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();
                var tasks = new Task[blogs.Count];

                for (var index = 0; index < blogs.Count; index++)
                {
                    tasks[index] = new Task(ImportFeed, blogs[index]);
                    tasks[index].Start();
                }

                Task.WaitAll(tasks);
            }

            private void ImportFeed(object blogObject)
            {
                if (blogObject == null)
                    return;
                var blog = (BlogDto)blogObject;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                var feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                foreach (var item in feed.Channel.Items)
                {
                    SaveRssFeedItem(item, blog.Id, blog.CreatedById);
                }
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                foreach (var item in feed.Entries)
                {
                    SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
                }
            }
        }

    You should notice the first signs of the power of TPL: we made only minor changes to our code to parallelize feed aggregation. On my machine this modification gives a real performance boost – the time is now 17.57 seconds.

    Data parallelism

    There is one more way to parallelize activities. The previous section introduced task (operation) based parallelism; this section introduces data based parallelism. According to the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to scenarios in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – the imported feed entries. Because checking whether a feed entry exists in the database and inserting it when it is missing does not affect the other entries, the imported feed entries collection is an ideal candidate for parallelization.
        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();
                var tasks = new Task[blogs.Count];

                for (var index = 0; index < blogs.Count; index++)
                {
                    tasks[index] = new Task(ImportFeed, blogs[index]);
                    tasks[index].Start();
                }

                Task.WaitAll(tasks);
            }

            private void ImportFeed(object blogObject)
            {
                if (blogObject == null)
                    return;
                var blog = (BlogDto)blogObject;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                var feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                feed.Channel.Items.AsParallel().ForAll(a =>
                {
                    SaveRssFeedItem(a, blog.Id, blog.CreatedById);
                });
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                feed.Entries.AsParallel().ForAll(a =>
                {
                    SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
                });
            }
        }

    We made a small change again, and as a result we parallelized the checking and saving of feed items. This change was data centric: we applied the same operation to every element in a collection. On my machine I got better performance again – the time is now 11.22 seconds.

    Results

    Let's look at the measurement results (numbers are given in seconds): serial 25.46, task parallelism 17.57, task plus data parallelism 11.22. With task parallelism, feed aggregation takes about 30% less time than the original; adding data parallelism on top of task parallelism, aggregation takes about 2.3 times less time than the original.

    More about TPL and PLINQ

    Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code where you can safely switch to parallel processing, and even then you have to measure the effects to find out whether the parallel code actually performs better. If you are not careful, the troubles you face later are worse than the ones you saw before (imagine an error that occurs, on average, only once per 10,000 runs). Parallel programming is hard to ignore: effective programs must be able to use multiple processor cores. Using TPL you can also set the degree of parallelism so your application doesn't use all computing cores, leaving one or more of them free for the host system and other processes. And there are many more things in TPL that make it easier for you to start and carry on with parallel programming. In the next major version, .NET languages will have built-in support for parallel programming, including new language constructs.
    Currently you can download the Visual Studio Async CTP to get some idea of what is coming.

    Conclusion

    Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items serially. With two steps we parallelized feed importing and entry inserting, gaining a 2.3x improvement in performance. Although this number is specific to my test environment, it shows clearly that parallel programming can raise the performance of your application significantly.
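    As a footnote, the degree-of-parallelism setting mentioned above is a one-line change. Here is a minimal sketch (my illustration, not code from the posting; it reuses the blogs collection and ImportFeed method from above) that leaves one core free for the host system:

        // Cap TPL work at (cores - 1) so one core stays free for the host system.
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount - 1)
        };
        Parallel.ForEach(blogs, options, blog => ImportFeed(blog));

        // The PLINQ equivalent for the data-parallel step:
        feed.Channel.Items
            .AsParallel()
            .WithDegreeOfParallelism(Math.Max(1, Environment.ProcessorCount - 1))
            .ForAll(a => SaveRssFeedItem(a, blog.Id, blog.CreatedById));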

    Read the article

  • Programmer performance

    - by RSK
    I am a PHP programmer with 1 year of experience. As I am just starting my career, I am learning a lot of things now. I can say I am a little bit of a perfectionist. When I am assigned a problem I start off by Googling. Then, even when I find a solution, I keep trying for a better one until I find 2-3 options. Then I start learning it and choose the best performing solution. Even though I am learning a lot, this process gets me labeled as a low performer. My questions: As a novice, should I continue to use this learning process and not worry about my performance? Should I focus more on my performance and less on how the code performs?

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

      - Use GL face culling (the OpenGL function, not the scene logic)
      - Send only one matrix to the GPU (the projection-modelview combination), decreasing the MVP calculations from per vertex to once per model (as it should be)
      - Use interleaved vertices
      - Minimise the number of GL calls, batching where appropriate

    And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling around 20-50% during rendering, so I assume I am GPU bound. Note: I am looking into gDEBugger at the moment. Cross posted at StackOverflow.
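    For reference, a minimal sketch of the first and third tips above (my illustration, not code from the question; a previously created buffer object vbo and the attribute locations 0 and 1 are assumptions):

        /* Back-face culling: skip rasterising triangles facing away from the camera. */
        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);

        /* One interleaved VBO: position (3 floats) then normal (3 floats) per vertex,
           so each vertex is fetched from one contiguous region of memory. */
        GLsizei stride = 6 * sizeof(GLfloat);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(GLfloat)));
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);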

    Read the article

  • SQL SERVER – Relationship with Parallelism with Locks and Query Wait – Question for You

    - by Pinal Dave
    Today I have one very simple question, based on the SQL Server Advanced Properties screen. A full disclaimer is that I have no idea why it is like this. I tried to reach out to a few of my friends who know a lot about SQL Server, but no one has an answer. Here is the question: if you go to Server Properties and click on Advanced, you will see a Parallelism section with four options:

      - Cost Threshold for Parallelism
      - Locks
      - Max Degree of Parallelism
      - Query Wait

    I can clearly understand why Cost Threshold for Parallelism and Max Degree of Parallelism belong under Parallelism, but I am not sure why the two other options, Locks and Query Wait, belong in the Parallelism section. I can see that the options are ordered alphabetically, but I do not understand the reason for Locks and Query Wait to be listed under Parallelism. So here is the question for you – why are the Locks and Query Wait options listed under the Parallelism section in SQL Server Advanced Properties? Please leave a comment with your explanation. I will publish valid answers on this blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
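    The same four settings can also be read from T-SQL – a sketch (my addition, not from the post; all four are advanced options, so 'show advanced options' must be enabled first):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        EXEC sp_configure 'cost threshold for parallelism';
        EXEC sp_configure 'max degree of parallelism';
        EXEC sp_configure 'locks';
        EXEC sp_configure 'query wait (s)';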

    Read the article

  • Service and/or tool to monitor performance?

    - by chris
    I am seeing wildly different performance from a client's web site and would like to set up some sort of monitoring. What I'm looking for is a service that will issue requests to a couple of URLs and report on the time it took to process the page – both time to first byte (TTFB) and time to download the entire page – which means I need something that will process JavaScript and CSS. Are there services like this? I've seen a few that monitor uptime, but they don't seem to report on overall page performance.
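    As a rough baseline while looking for a full service (my suggestion, not part of the question – note that curl fetches only the base document and runs no JavaScript), curl can already report TTFB and total time:

        curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s  total: %{time_total}s\n" http://example.com/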

    Read the article

  • iOS game with many animated nodes – performance issues

    - by user31929
    I'm working on a large-map top-down game (not a tiled map); the map I use is a city. I have to insert many nodes to create the "life of the city" – people crossing the streets, cars, etc. Some of these characters are involved in physics and game logic, but others are purely graphical. As far as I know, the only way I can achieve this is to create each character node, with or without a physics body, and animate each character with a texture atlas. I think I'll run into performance problems this way (there will be something like 100-150 characters), even if I apply all the performance tips I know. My question is: with large numbers of characters, is there another programming pattern I should follow? What is the approach of games like SimCity or The Simpsons: Tapped Out for iOS, which run so many animations at the same time?
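    One common pattern, sketched below (my illustration, assuming SpriteKit, which the question doesn't name; the atlas name and frame names are invented): load the texture atlas once and share a single animation action across all purely cosmetic characters, instead of building textures and actions per node.

        import SpriteKit

        func populate(_ scene: SKScene) {
            let atlas = SKTextureAtlas(named: "citizens")
            let frames = (1...8).map { atlas.textureNamed("walk_\($0)") }
            // One shared action object; actions are stateless and can run on many nodes.
            let walk = SKAction.repeatForever(SKAction.animate(with: frames, timePerFrame: 0.1))

            for _ in 0..<120 {
                let citizen = SKSpriteNode(texture: frames[0])
                citizen.run(walk)   // purely cosmetic character: shared action, no physics body
                scene.addChild(citizen)
            }
        }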

    Read the article

  • Undocumented Query Plans: Equality Comparisons

    - by Paul White
    The diagram below shows two data sets, with differences highlighted: To find changed rows using TSQL, we might write a query like this: The logic is clear: join rows from the two sets together on the primary key column, and return rows where a change has occurred in one or more data columns.  Unfortunately, this query only finds one of the expected four rows: The problem, of course, is that our query does not correctly handle NULLs.  The ‘not equal to’ operators <> and != do not evaluate...(read more)
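    The article's own solution is behind the "read more" link, but one well-known NULL-safe rewrite (my sketch – the table and column names are invented) uses INTERSECT, which treats NULLs as equal:

        SELECT o.pk
        FROM OldSet AS o
        JOIN NewSet AS n ON n.pk = o.pk
        WHERE NOT EXISTS (SELECT o.col1, o.col2
                          INTERSECT
                          SELECT n.col1, n.col2);  -- rows differ when the column lists don't intersect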

    Read the article

  • Buzzword for "performance-aware" software development

    - by errantlinguist
    There seems to be an overabundance of buzzwords for software development styles and methodologies: Agile development, extreme programming, test-driven development, etc... well, is there any sort of buzzword for "performance-aware" development? By "performance awareness", I don't necessarily mean low-latency or low-level programming, although the former would logically fall under the blanket term I'm looking for. I mean development in which resources are recognised to be finite and so there is a general emphasis on low computational complexity, good resource management, etc. If I was to be snarky, I would say "good programming", but that doesn't seem to get the message across so well...

    Read the article

  • Performance Tuning and Query Optimisation–SQLBits Training Day

    - by simonsabin
    I will be doing a training day at SQLBits in April on Performance Tuning and Query Optimisation. This is the outline for the day. It's going to be an intense day; I look forward to seeing you there. To register, go to http://www.sqlbits.com/information/registration.aspx. Places are limited, so make sure you register soon. Outline of the day: most database performance issues are due to a combination of bad queries, bad database design or poor indexing – and all of them are related to each other. In this...(read more)

    Read the article

  • SQL SERVER – Using MAXDOP 1 for Single Processor Query – SQL in Sixty Seconds #008 – Video

    - by pinaldave
    Today's SQL in Sixty Seconds video is inspired by my presentation at TechEd India 2012 on Speed Up! – Parallel Processes and Unparalleled Performance. There are always special cases in SQL Server: some queries give optimal performance when executed on a single processor, and other queries give optimal performance when executed on multiple processors. I will be presenting how to identify such queries, as well as the best practices related to them. In this quick video I demonstrate how, when a query performs best on a single CPU, one can restrict it to a single CPU by using the hint OPTION (MAXDOP 1). Related posts: Difference Temp Table and Table Variable – Effect of Transaction; Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT; Debate – Table Variables vs Temporary Tables – Quiz – Puzzle – 13 of 31. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
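    For reference, the hint looks like this (a sketch – the AdventureWorks table is my stand-in, not taken from the video):

        SELECT SalesOrderID, SUM(LineTotal) AS OrderTotal
        FROM Sales.SalesOrderDetail
        GROUP BY SalesOrderID
        OPTION (MAXDOP 1);  -- run this query on a single CPU regardless of server settings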

    Read the article

  • EF4 performance tips and tricks

    - by Will
    I've gotten to that point in one of my projects, and haven't found much information out there. So if you've got some pointers for improving performance in the new Entity Framework 4, please let us know!
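    Two of the usual EF4 starting points, sketched below (my illustration – the context and entity names are invented): compiled LINQ queries, which avoid repeated query translation, and no-tracking reads for read-only data.

        using System.Data.Objects;  // CompiledQuery, MergeOption

        class CustomerQueries
        {
            // Compile the LINQ-to-Entities translation once, reuse it on every call.
            static readonly Func<MyEntities, int, Customer> customerById =
                CompiledQuery.Compile((MyEntities ctx, int id) =>
                    ctx.Customers.First(c => c.Id == id));

            public static List<Customer> ActiveCustomers()
            {
                using (var ctx = new MyEntities())
                {
                    // Read-only query: skip the change tracker entirely.
                    ctx.Customers.MergeOption = MergeOption.NoTracking;
                    return ctx.Customers.Where(c => c.IsActive).ToList();
                }
            }
        }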

    Read the article

  • specify query timeout when using toplink essential query hint

    - by yhzs8
    Hi, for GlassFish v2 I have searched the web and cannot find any way to specify a query timeout when using a TopLink Essentials query hint. We have the option of migrating to EclipseLink, but that is not feasible. I have tried the solution in http://forums.oracle.com/forums/thread.jspa?threadID=974732&tstart=-1, but it seems the DatabaseQuery on which one can set a timeout value is for TopLink proper, not TopLink Essentials. Do we have some other way to instruct the JDBC driver to use this timeout value, other than the query hint? I need to do it per query and not system-wide (which would just mean changing the value of DISTRIBUTED_LOCK_TIMEOUT).

    Read the article

  • VB.Net IO performance

    - by CFP
    Having read this page, I can't believe that VB.Net has such terrible I/O performance. Is this still true today? How does the .NET Framework 2.0 perform in terms of I/O (that's the version I'm targeting)?

    Read the article

  • Set of Tools to optimize the performance in general of SQL Server

    - by Dave
    Hi, I know there are things out there to help optimize queries, etc., but is there anything else – something like a full package that can scan your database and highlight all the performance issues, naming conventions, tables not properly normalized, and so on? I know this is the job of a DBA, and a good DBA shouldn't need a tool like that, but sometimes you start a new job, you are put in charge of an existing database, and the DB is a mess, so you don't know where to start... Thanks to everyone. Dave

    Read the article

  • mysql query query

    - by nightcoder1
    Basically I need to write a query for MySQL, but I have no experience with this and I can't find good tutorials on the internet. I have a table called rels with columns "host_id", "linkedhost_id" and "text_link", and a table called hostlist with columns "id" and "hostname". All I am trying to achieve is a query which outputs the "hostname" and "linkedhost_id" when "host_id" is equal to "id". Any help or pointers on syntax or code would be helpful, or even a good MySQL query guide.
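    A sketch of the join being described (assuming "hosd_id" in the original question was a typo for host_id):

        SELECT h.hostname, r.linkedhost_id
        FROM rels AS r
        INNER JOIN hostlist AS h ON r.host_id = h.id;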

    Read the article

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:

    pagegroup
      * pagegroupid
      * name
    3,600 rows.

    page
      * pageid
      * pagegroupid
      * data
    References pagegroup; has 10,000 rows; can have anything between 1-700 rows per pagegroup; the data column is of type mediumtext and contains 100-200 KB of data per row.

    userdata
      * userdataid
      * pageid
      * column1 ... column9
    References page; has about 300,000 rows; can have about 1-50 rows per page.

    The above structure is pretty straightforward. The problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all columns that should be indexed. The time needed to run such a join (userdata inner join page inner join pagegroup) exceeds 3 minutes. This is terribly slow considering that I am not selecting the data column at all. Example of the query that takes too long:

        SELECT userdata.column1, pagegroup.name
        FROM userdata
        INNER JOIN page USING (pageid)
        INNER JOIN pagegroup USING (pagegroupid)

    Please help by explaining why it takes so long and what I can do to make it faster.

    Edit #1 - EXPLAIN returns the following:

        id | select_type | table     | type   | possible_keys       | key     | key_len | ref                        | rows   | Extra
        1  | SIMPLE      | userdata  | ALL    | pageid              |         |         |                            | 372420 |
        1  | SIMPLE      | page      | eq_ref | PRIMARY,pagegroupid | PRIMARY | 4       | topsecret.userdata.pageid  | 1      |
        1  | SIMPLE      | pagegroup | eq_ref | PRIMARY             | PRIMARY | 4       | topsecret.page.pagegroupid | 1      |

    Edit #2

        SELECT u.field2, p.pageid
        FROM userdata u
        INNER JOIN page p ON u.pageid = p.pageid;
        /* 0.07 sec execution, 6.05 sec fetch */

        id | select_type | table | type   | possible_keys | key     | key_len | ref                | rows   | Extra
        1  | SIMPLE      | u     | ALL    | pageid        |         |         |                    | 372420 |
        1  | SIMPLE      | p     | eq_ref | PRIMARY       | PRIMARY | 4       | topsecret.u.pageid | 1      | Using index

        SELECT p.pageid, g.pagegroupid
        FROM page p
        INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid;
        /* 9.37 sec execution, 60.0 sec fetch */

        id | select_type | table | type  | possible_keys | key         | key_len | ref                     | rows | Extra
        1  | SIMPLE      | g     | index | PRIMARY       | PRIMARY     | 4       |                         | 3646 | Using index
        1  | SIMPLE      | p     | ref   | pagegroupid   | pagegroupid | 5       | topsecret.g.pagegroupid | 3    | Using where

    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.
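    A sketch of the split suggested by the moral above (my illustration – the new table name is invented): move the heavy mediumtext column into its own table keyed by pageid, so joins over page touch only short rows.

        CREATE TABLE page_content (
            pageid INT NOT NULL PRIMARY KEY,  -- same value as page.pageid
            data   MEDIUMTEXT
        ) ENGINE = MyISAM;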

    Read the article

  • Performance tune my sql query removing OR statements

    - by SmartestVEGA
    I need to performance-tune the following SQL query by removing the OR conditions. Please help...

        SELECT a.id, a.fileType, a.uploadTime, a.filename, a.userId, a.CID,
               ui.username, company.name companyName, a.screenName
        FROM TM_transactionLog a, TM_userInfo ui, TM_company company, TM_airlineCompany ac
        WHERE ( a.CID = 3049 )
           OR ( a.CID = company.ID AND ac.SERVICECID = 3049 AND company.SPECIFICCID = ac.ID )
           OR ( a.USERID = ui.ID AND ui.CID = 3049 );
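    The standard rewrite is one UNION branch per OR arm, each joining only the tables its predicate needs. A sketch follows (my illustration – the original statement cross joins all four tables, so the per-branch semantics of the username/companyName columns are ambiguous; the NULL placeholders are my assumption):

        SELECT a.id, a.fileType, a.uploadTime, a.filename, a.userId, a.CID,
               NULL AS username, NULL AS companyName, a.screenName
        FROM TM_transactionLog a
        WHERE a.CID = 3049
        UNION
        SELECT a.id, a.fileType, a.uploadTime, a.filename, a.userId, a.CID,
               NULL, company.name, a.screenName
        FROM TM_transactionLog a
        JOIN TM_company company   ON a.CID = company.ID
        JOIN TM_airlineCompany ac ON company.SPECIFICCID = ac.ID
        WHERE ac.SERVICECID = 3049
        UNION
        SELECT a.id, a.fileType, a.uploadTime, a.filename, a.userId, a.CID,
               ui.username, NULL, a.screenName
        FROM TM_transactionLog a
        JOIN TM_userInfo ui ON a.USERID = ui.ID
        WHERE ui.CID = 3049;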

    Read the article

  • mod_rewrite to redirect URL with query string

    - by meeble
    I've searched all over Stack Overflow, but none of the answers seem to work for this situation. I have a lot of working mod_rewrite rules already in my httpd.conf file. I just recently found that Google had indexed one of my non-rewritten URLs with a query string in it:

        http://domain.com/?state=arizona

    I would like to use mod_rewrite to do a 301 redirect to this URL:

        http://domain.com/arizona

    The issue is that later on in my rewrite rules, that second URL is rewritten to pass query variables on to WordPress. It ends up being rewritten to:

        http://domain.com/index.php?state=arizona

    which is the proper functionality. Everything I have tried so far has either not worked at all or put me in an endless rewrite loop. This is what I have right now, which is getting stuck in a loop:

        RewriteCond %{QUERY_STRING} state=arizona [NC]
        RewriteRule .* http://domain.com/arizona [R=301,L]

        # older rewrite rule that passes the query string based on the URL:
        RewriteRule ^([A-Za-z-]+)$ index.php?state=$1 [L]

    This gives me an endless rewrite loop and takes me to this URL:

        http://domain.com/arizona?state=arizona

    I then tried this:

        RewriteRule .* http://domain.com/arizona? [R=301,L]

    which got rid of the query string in the URL, but still creates a loop.
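    One way out of the loop (my suggestion, not from the question): test THE_REQUEST instead of QUERY_STRING. THE_REQUEST holds the original request line from the client and never changes during internal rewrites, so the later index.php?state=... rewrite can no longer re-trigger the redirect:

        RewriteCond %{THE_REQUEST} \?state=arizona [NC]
        RewriteRule ^/?$ http://domain.com/arizona? [R=301,L]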

    Read the article

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving by running two queries, and I am wondering if there is a way to do it as a single, more efficient query. Consider two tables, Transactions and Transaction_Entries, defined as follows:

    Transactions
      - id
      - reference_number (varchar)

    Transaction_Entries
      - id
      - account_id
      - transaction_id (references Transactions table)

    Notes: there are multiple transaction entries per transaction. Some transactions are related and will have the same reference_number string. To get all transaction entries for account X, I would do:

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON (E.transaction_id = T.id)
        WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of account id. First I make a list of all the unique reference numbers found in the previous result set. Then, for each one, I query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

        UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet)  // in Java
        foreach R in UniqueReferenceNumbers                                    // in Java
            SELECT * FROM Transaction_Entries
            WHERE transaction_id IN (SELECT id FROM Transactions WHERE reference_number = R)

    Any suggestions on how I can put this into a single efficient query?
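    A sketch of folding both steps into one statement (schema as described above; the account id X is a placeholder): the inner query collects the reference numbers touched by account X, and the outer query returns every entry of every transaction sharing one of those reference numbers.

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON E.transaction_id = T.id
        WHERE T.reference_number IN (
            SELECT T2.reference_number
            FROM Transaction_Entries E2
            JOIN Transactions T2 ON E2.transaction_id = T2.id
            WHERE E2.account_id = X
        );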

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 – Part2

    - by pinaldave
    The best part of having a blog is that the SQL community helps keep it running with new ideas. Earlier I wrote about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative, a very popular article on that subject. There I had used variables for the "number of rows" and "number of pages". A blog reader sent me an email saying that in their organization these values are stored in a table, and asking whether the new syntax can read the data from a table. Absolutely YES! Here is the quick script:

        USE AdventureWorks2008R2
        GO
        CREATE TABLE PagingSetting (RowsPerPage INT, PageNumber INT)
        INSERT INTO PagingSetting (RowsPerPage, PageNumber)
        VALUES (10, 5)
        GO
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET (SELECT RowsPerPage * PageNumber FROM PagingSetting) ROWS
        FETCH NEXT (SELECT RowsPerPage FROM PagingSetting) ROWS ONLY
        GO

    This is really an easy trick. I also wrote a blog post comparing the performance: SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Paging

    Read the article

  • SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – SQL and 10-11 May SharePoint

    - by pinaldave
    There were lots of requests, through the email address specified in the article SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning, for more details about the blog post. Here is the complete brochure for the course. Two different courses are offered by Solid Quality Mentors:

    1) Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning – Pinal Dave
       Date: May 12-14, 2010
       Price: Rs. 14,000/person for 3 days
       Discount Code: 'SQLAuthority.com'
       Effective Price: Rs. 11,000/person for 3 days

    2) SharePoint 2010 – Joy Rathnayake
       Date: May 10-11, 2010
       Price: Rs. 11,000/person for 2 days
       Discount Code: 'SQLAuthority.com'
       Effective Price: Rs. 8,000/person for 2 days

    Download the complete PDF brochure. Additionally, there is a special program, SolidQ India Insider; I will provide the details for it very soon. Please do send me an email if you need any additional details. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • Performance and Optimization Isn’t Evil

    - by Reed
    Donald Knuth is a fairly amazing guy. I consider him one of the most influential contributors to computer science of all time. Unfortunately, most of the time I hear his name, I cringe. This is because it's typically somebody quoting a small portion of one of his famous statements on optimization: "premature optimization is the root of all evil."

    I mention that this is only a portion of the entire quote, and, as such, I feel that Knuth is being quoted out of context. Optimization is important. It is a critical part of every software development effort, and should never be ignored. A developer who ignores optimization is not a professional. Every developer should understand optimization – know what to optimize, when to optimize it, and how to think about code in a way that is intelligent and productive from day one.

    I want to start by discussing my own, personal motivation here. I recently wrote about a performance issue I ran across, and was slammed by multiple comments and emails that effectively boiled down to: "You're an idiot. Premature optimization is the root of all evil. This doesn't matter." It didn't matter that I discovered this while measuring in a profiler, and that it was a portion of my code base that can take "many hours to complete." Even so, multiple people instantly jumped to "it's premature – it doesn't matter."

    This is a common thread I see. For example, StackOverflow has many pages of posts with answers that boil down to (mis)quoting Knuth. In fact, just about any question relating to a performance related issue gets this quote thrown at it immediately – whether it deserves it or not. That being said, I did receive some positive comments and emails as well. Many people want to understand how to optimize their code, approaches to take, tools and techniques they can use, and any other advice they can discover.

    First, let's get back to Knuth – I mentioned before that Knuth is being quoted out of context. Let's start by looking at the entire quote from his 1974 paper Structured Programming with go to Statements:

    "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified."

    Ironically, if you read Knuth's original paper, this statement was made in the middle of a discussion of how Knuth himself had changed how he approaches optimization. It was never a statement saying "don't optimize", but rather, "optimizing intelligently provides huge advantages." His approach had three benefits: "a) it doesn't take long" ... "b) the payoff is real", c) you can "be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged."

    Looking at Knuth's premise here, and reading that section of his paper, really leads to a few observations:

      - Optimization is important: "he will be wise to look carefully at the critical code"
      - Normally, 3% of your code – three lines out of every 100 you write – are "critical code" and will require some optimization: "we should not pass up our opportunities in that critical 3%"
      - Optimization, if done well, should not be time consuming: "it doesn't take long"
      - Optimization, if done correctly, provides real benefits: "the payoff is real"

    None of this is new information.
    People who care about optimization have been discussing this for years – for example, Rico Mariani's Designing For Performance (a fantastic article) discusses many of the same issues very intelligently. That being said, many developers seem unable or unwilling to consider optimization. Many others don't seem to know where to start. As such, I'm going to spend some time writing about optimization – what it is, how we should think about it, and what we can do to improve our own code.

    Read the article

  • SQL SERVER – GUID vs INT – Your Opinion

    - by pinaldave
    I think the title makes clear what I am going to write about in this post. This is an age-old problem, and I want to compile a list of the advantages and disadvantages of using GUID and INT as a Primary Key or Clustered Index or both (the usual case). Let me start the list by suggesting one advantage and one disadvantage for each.

    INT

    Advantage: numeric values (and specifically integers) are better for performance when used in joins, indexes and conditions, and they are easier for application users to understand if displayed.

    Disadvantage: if your table is large, it is quite possible it will run out of values – beyond some numeric value there will be no additional identity to use.

    GUID

    Advantage: unique across servers.

    Disadvantage: string values are not as optimal as integer values for performance when used in joins, indexes and conditions, and more storage space is required than for INT.

    Please note that I am looking to create a list of generic comparisons. There can be special cases where the stated information is incorrect; feel free to comment on those. Please leave your opinion and advice in the comment section. I will compile a final list and update this blog after a week. By listing your name in the post, I will also give due credit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Data Storage, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
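    For concreteness, a sketch of the two designs being compared (my illustration – the table and column names are invented):

        CREATE TABLE OrdersInt (
            OrderId   INT IDENTITY(1,1) PRIMARY KEY,  -- 4 bytes, monotonically increasing
            OrderedOn DATETIME NOT NULL
        );

        CREATE TABLE OrdersGuid (
            OrderId   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,  -- 16 bytes, random
            OrderedOn DATETIME NOT NULL
        );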

    Read the article
