Search Results

Search found 48586 results on 1944 pages for 'page performance'.


  • SQL SERVER – Checklist for Analyzing Slow-Running Queries

    - by pinaldave
    I am currently working on upgrading my class Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning with additional details and more interesting examples. While working on the slide deck I realized that I need one solid slide that presents a checklist for analyzing slow-running queries. A quick search on my saved [...]
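
    As an illustration of one item that usually appears on such a checklist (this is my own sketch, not taken from the slide deck), here is a minimal T-SQL query that pulls the top CPU consumers out of the plan cache via the sys.dm_exec_query_stats DMV:

        -- Top 10 statements by average CPU time (total_worker_time is in microseconds)
        SELECT TOP (10)
               qs.total_worker_time / qs.execution_count AS avg_cpu_time,
               qs.execution_count,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                             WHEN -1 THEN DATALENGTH(st.text)
                             ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY avg_cpu_time DESC;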

    Read the article

  • Easy Listening = CRM On Demand Podcasts

    - by Anne
    OK, here's my NEW favorite resource for CRM On Demand info -- podcasts! Specifically, the CRM On Demand Podcast site -- signed, sealed, and delivered with humor and know-how. Yes, I admit, I know the cast of characters. But let's face it, sometimes dealing with software is just soooo dry! Not so when discussed by the two main commentators, Louis Peters and Robert Davidson, whom someone once referred to as CRM On Demand's "Click and Clack." (Thought that was too good not to pass along!) Anyhow, another huge plus about the site is the option to listen OR to read. Out walking my dog or doing the dishes? Just turn up the podcast. Listening to music or watching TV? I'll read Louis's entertaining write-ups to glean great info about CRM On Demand in a very short period of time.

    So that you get a better understanding of why I like this site so much, here's a sampling of what's discussed:

    - Five Things about Books of Business: As Louis Peters put it in his entry, when you see "Five Things" in the title, "you'll know you're going to get some concrete advice that you can put to work right away." Well, Louis and Robert do just that, pointing you in the right direction when using Books of Business to segment data.
    - Moving to Indexed Fields - A Rough Guide (only an article, not a podcast): I've read all about performance and even helped develop material around it. But nowhere have I heard indexed custom fields referred to as "super heroes." Louis and Robert use imaginative language to describe the process for moving your data to indexed fields for optimal performance.
    - Data Access Q&A from the Forums: I think that everyone would admit that data access and visibility is the most difficult topic to understand in CRM On Demand. Following up on their previous podcast on the same topic, Louis and Robert answer a few key questions from the many postings on the Oracle CRM On Demand forums. And I bet that the scenarios match many companies' business requirements...maybe even yours!
    - We Need to Talk About Adoption: Another expert, Tim Koehler, joins Louis to talk about how to drive user adoption: aligning product usage with business results, communicating why and how to use the product, getting feedback on usability, and so on.

    Hope I've made my point -- turn to these podcasts to hear knowledgeable folks discuss CRM On Demand tips and tricks in entertaining ways. One podcast is even called "SaaS Talk"!

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I have written about Transaction Logs and recovery on my blog, and simplifying the basics is always a challenge. In the real world we always see checks and queues for a process – say railway reservations, banks, customer support and so on – where a line is formed to facilitate everyone. The shorter the queue, the higher the efficiency of the system (that is, the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and applies the standards in a way that facilitates higher concurrency. In this post, let us talk about concurrency and the various aspects of it that one needs to know inside SQL Server. Let us learn the concepts as one-liners:

    - Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time.
    - The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
    - Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing it. Concurrency is also affected when multiple processes attempt to change the same data simultaneously.
    - There are two approaches to managing concurrent data access: the optimistic concurrency model and the pessimistic concurrency model.

    Concurrency Models

    Pessimistic concurrency is the default behavior: acquire locks to block access to data that another process is using. It assumes that enough data modification operations are in the system that any given read operation is likely to be affected by a data modification made by another user (that is, it assumes conflicts will occur). Conflicts are avoided by acquiring a lock on data being read, so no other process can modify that data, and by acquiring locks on data being modified, so no other process can access that data for either reading or modifying. In short: readers block writers, and writers block both readers and writers.

    Optimistic concurrency assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying. The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs. Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to it. Processes modifying the data are unaffected by processes reading it, because the readers access a saved version of the data rows. Readers do not block writers and writers do not block readers, but writers can and will block writers.

    Transaction Processing

    A transaction is the basic unit of work in SQL Server. A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: the start is marked with a BEGIN TRAN and the end by a COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties.

    ACID Properties

    Transaction processing must guarantee the consistency and recoverability of SQL Server databases, ensuring all transactions are performed as a single unit of work regardless of hardware or system failure: A – Atomicity, C – Consistency, I – Isolation, D – Durability.

    - Atomicity: each transaction is treated as all or nothing – it either commits or aborts.
    - Consistency: ensures that a transaction won't allow the system to arrive at an incorrect logical state – the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: after a transaction commits, its effects persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on the data.

    Transaction Dependencies

    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors, known as dependency problems or consistency problems:

    - Lost updates: occur when two processes read the same data, both manipulate it, and then both try to update the original data to their new value. The second process might overwrite the first update completely.
    - Dirty reads: occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable reads: a read is non-repeatable if a process might get different values when reading the same data twice within the same transaction. This can happen when another process changes the data between the two reads.
    - Phantoms: occur when membership in a set changes – that is, when two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels

    SQL Server supports five isolation levels that control the behavior of read operations.

    Read Uncommitted: all the behaviors above except lost updates are possible. It is implemented by allowing read operations to take no locks, so they are not blocked by conflicting locks acquired by other processes, and a process can read data that another process has modified but not yet committed. When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice; this can also happen if you have read a row before it is updated and an update then moves the row to a higher page number than your scan has reached. Performing an allocation order scan under read uncommitted can also cause you to miss a row completely – this can happen when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed: comes in two varieties, optimistic and pessimistic (the default). Both ensure that a read never sees data that another application hasn't committed. Under the pessimistic variety, if another transaction is updating data and holds exclusive locks on it, your transaction has to wait for those locks to be released; your transaction must also put share locks on the data it visits, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading, but it prevents them from updating. Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait: SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.

    Repeatable Read: a pessimistic isolation level. It ensures that if a transaction revisits data or a query is reissued, the data doesn't change: issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no such changes can be made. It does, however, allow phantom rows to appear. Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot: snapshot isolation (SI) is an optimistic isolation level. It allows processes to read older versions of committed data if the current version is locked. The difference between snapshot and read committed (snapshot) has to do with how old the older versions have to be. Note that it is possible to have two transactions executing simultaneously under SI that give a result that is not possible in any serial execution.

    Serializable: the strongest of the pessimistic isolation levels. It adds to repeatable read by ensuring that if a query is reissued, rows have not been added in the interim; that is, phantoms do not appear. Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read – all shared locks in a transaction must be held until the transaction completes. In addition, the serializable isolation level requires locking not only data that has been read but also data that doesn't exist: for example, if a SELECT returned no rows, you want it to still return no rows when the query is reissued. SQL Server implements this with a special kind of lock called a key-range lock. Key-range locks require an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock. The level gets its name from the fact that running multiple serializable transactions at the same time is equivalent to running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics of locking, blocking and deadlocks, because they are the fundamental blocks that make concurrency possible. Now, if you are with me, let us continue learning about SQL Server Locking Basics.
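
    To make the pessimistic/optimistic distinction concrete, here is a minimal T-SQL sketch, not part of Vinod's post, that demonstrates a dirty read under READ UNCOMMITTED and shows the database options that enable the row-versioning levels (the dbo.Accounts table and MyShop database are hypothetical):

        -- Session 1: change a row inside an open (uncommitted) transaction
        BEGIN TRAN;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

        -- Session 2: a dirty read -- sees the uncommitted balance
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

        -- Session 1: roll back; session 2 has read a value that never logically existed
        ROLLBACK TRAN;

        -- The optimistic (row-versioning) levels must be enabled per database:
        ALTER DATABASE MyShop SET ALLOW_SNAPSHOT_ISOLATION ON;  -- snapshot isolation
        ALTER DATABASE MyShop SET READ_COMMITTED_SNAPSHOT ON;   -- read committed (snapshot)
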
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

    Read the article

  • Country selection, when country is not listed

    - by David Balažic
    While this might not 100% match the intent of this site, it was the closest match among the Stack Exchange sites. So, if a web site (the "entrance" page) offers a choice (a list) of countries with the text "Choose your country", but the user's country is not listed, what should he do? One example is http://www.samsung.com/countryselection.do Addition: I ask this from the user's position. I encounter a web site and it gives me the above page. What should I do? Another issue: what is "my" country? My current location? My permanent residence? The country of my citizenship? Something else?

    Read the article

  • Certificate Revocation checking affecting system performance [migrated]

    - by Colm Clarke
    I have a .NET 3.5 desktop application that had been showing periodic slowdowns in functionality whenever the test machine it was on was out of the office. I managed to replicate the error on a machine in the office without an internet connection, but it was only when I used ANTS performance profiler that I got a clearer picture of what was going on. In ANTS I saw a "Waiting for synchronization" taking up to 16 seconds that corresponded to the delay I could see in the application when NHibernate tried to load the System.Data.SqlServerCE.dll assembly. If I tried the action again immediately it would work with no delay, but if I left it for 5 minutes it would be slow to load again the next time I tried it.

    From my research so far it appears to be because the SqlServerCE dll is signed, so the system is trying to fetch the certificate revocation lists and timing out. Disabling the "Automatically detect settings" setting in the Internet Options LAN settings makes the problem go away, as does disabling "Check for publisher's certificate revocation". But the admins where this application will be deployed are not going to be happy with the idea of disabling certificate checking on a per-machine or per-user basis, so I really need to get the application-level disabling of the CRL check working. There is a well-documented bug in .NET 2.0 which describes this behaviour and offers a possible fix with a config file element:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <runtime>
            <generatePublisherEvidence enabled="false"/>
          </runtime>
        </configuration>

    This is NOT working for me, however, even though I am using .NET 3.5. The SqlServerCE dll is being loaded dynamically by NHibernate, and I wonder if the fact that it's dynamic could somehow be why the setting isn't working, but I don't know how I could check that. Can anyone offer suggestions as to why the config setting might not work? Or is there another way I could disable the check at the application level, perhaps a CAS policy setting that I can use to set an exception for the application when it's installed? Or is there something I can change in the application to raise the trust level or something like that?

    I have also tried, to no avail, ServicePointManager.CheckCertificateRevocationList = false; and the registry settings suggested at http://rusanu.com/2009/07/24/fix-slow-application-startup-due-to-code-sign-validation/ -- unfortunately they didn't help either. The dlls that appear to be the cause of the hold-up are native SQL Server CE dlls, and looking at the stack traces in ProcMon, mscorwks.dll doesn't appear to be involved, even though the checks on crypto and cert registry keys are being done under the .NET application. It's definitely still something to do with publisher certificate checking, because unticking "Check for publisher's certificate revocation" still works, but something odd is going on.

    Read the article

  • Playing with aspx page cycle using JustMock

    In this post, I will cover test code that mocks the various elements needed to complete an HTTP page request and asserts the expected page cycle steps. To begin, I have a simple enumeration that holds my predefined page steps:

        public enum PageStep
        {
            PreInit,
            Load,
            PreRender,
            UnLoad
        }

    Once that is done, I first...

    Read the article

  • How to print a web page that contains flash

    - by Richard
    I am using the Chromium browser to display the following web page: http://www.primaryworksheets.co.uk/multiws/multi23.html I want to print off this maths worksheet for my son, but all I ever get out of my printer is a blank page. The web page appears to be produced using Flash. I have been to the software centre and re-installed the Flash plugin, but that did not help. I don't seem to have problems printing anything else. Firefox isn't any better. Can anyone tell me what else I might try? I'm using 11.04. Thanks, Richard

    Read the article

  • .htaccess Redirect 301 in Wordpress – From Post to Page

    - by elocman
    By default, WordPress posts are added to the RSS feed. For my website, I want to include WordPress pages in the RSS feed as well. I know that some plugins could help me; instead, I am trying to use a 301 redirect in the .htaccess file. My question is: will this approach work fine for Google and other search engines? Here's what I did:

    1. Published a new page, and then a new post with the same title, description, keywords and content (though I know that with a 301 redirect Google won't "read" the post but will switch to the page).
    2. Added the line Redirect 301 etc. to my .htaccess file.

    Now my post is listed in the RSS feed, and when you click on it you're redirected to the page.

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn't just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows: the Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows, while the Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it's created a Hash it's seen before; if not, it pushes the row out. The Sort method is quicker, but it has to wait until it's gathered all the data before it can do the sort, and therefore blocks the data flow.

    But that was my last post. This one's a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month's T-SQL Tuesday. I've frequently explained that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin, because you have less control over it and you can't apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators, coming in blocking and non-blocking varieties.

    The example I like to use to explain this is a pile of playing cards. If I'm handed a pile of cards and asked to count how many cards there are in each suit, it's going to help if the cards are already ordered. Suppose I'm playing a game of Bridge: I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I've got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don't have any more Hearts in my hand.

    Alternatively, if the pile of cards is not sorted, I won't know how many Hearts I have until I've looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I've finished making all those piles, I can count how many there are in each. Because I don't know any of the final numbers until I've seen all the cards, this is blocking. This performs the aggregate function using a Hash Match.

    Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct flow – wasn't blocking. But this one is. They're essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values has been seen before, but this one needs more than the mere existence of a new set of values – it needs to consider how many of them there are.

    A lot depends here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you'll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this.

    In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I'm doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straightforward stuff, and information that most people are fully aware of. I'm sure you've all read my good friend Paul White (@sql_kiwi)'s post on how the Query Optimizer chooses which type of aggregate function to apply.

    But let's take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it's described as blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I've just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but the SSIS Aggregate component is given as an example of a blocking transformation. Is that always the case? After all, there are plenty of SSIS performance tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component.

    And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things.) A query that will produce a million rows in order was in order. Let me rephrase: I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely and just insert the value 1. That is a shortcut I wouldn't be expecting from SSIS, but certainly the Stream behaviour would be nice.

    Unfortunately, it's not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function and not being released until all million rows have been seen. It's not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs; it's a physical operation. When you write T-SQL and ask for an aggregation to be done, that's a logical operation; the physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you're telling the system that you want a generic Aggregation that will have to work with whatever data is passed in.

    I'm not saying that it wouldn't be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I'll make one of those and publish it on my blog. I've done it before with a Script Component, but as Script Components are single-use, I was able to handle the data knowing everything about my data flow already.

    As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations it really helps to have a feel for what's going on behind the scenes. Whether an operation is blocking or not is extremely relevant to performance, and it's not always obvious from the surface. In a future post, I'll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot's Script Components and Custom Components as examples. When I get that sorted, I'll make a Stream Aggregate component available for download.
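
    If you want to see the two physical aggregate operators side by side yourself, here is a minimal T-SQL sketch (dbo.OrderLines is a hypothetical table) that uses the documented ORDER GROUP and HASH GROUP query hints to force each flavour; compare the resulting plans and note where a Sort appears:

        -- Stream Aggregate: requires input ordered by the GROUP BY columns,
        -- so the optimizer may add a Sort if no suitable index exists
        SELECT ProductId, COUNT(*) AS LineCount
        FROM dbo.OrderLines
        GROUP BY ProductId
        OPTION (ORDER GROUP);

        -- Hash Match (Aggregate): no ordering needed, but blocking --
        -- no rows are returned until every input row has been consumed
        SELECT ProductId, COUNT(*) AS LineCount
        FROM dbo.OrderLines
        GROUP BY ProductId
        OPTION (HASH GROUP);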

    Read the article

  • Adding Async=true to the page- no side effects noticed.

    - by Michael Freidgeim
    Recently I needed to implement PageAsyncTask in a .NET 4 web forms application. According to http://msdn.microsoft.com/en-us/library/system.web.ui.pageasynctask.aspx: "A PageAsyncTask object must be registered to the page through the RegisterAsyncTask method. The page itself does not have to be processed asynchronously to execute asynchronous tasks. You can set the Async attribute to either true (as shown in the following code example) or false on the page directive and the asynchronous tasks will still be processed asynchronously: <%@ Page Async="true" %> When the Async attribute is set to false, the thread that executes the page will be blocked until all asynchronous tasks are complete."

    I was worried about any side effects if I set Async=true on the existing page. The only documented restrictions that I found are that @Async is not compatible with the @AspCompat and Transaction attributes (from the @ Page directive MSDN article). In other words, asynchronous pages do not work when the AspCompat attribute is set to true or the Transaction attribute is set to a value other than Disabled in the @ Page directive.

    From our tests we concluded that adding Async=true to the page is quite safe, even if you don't always call async tasks from the page.

    Read the article

  • Google webmaster tools: parameters that only apply on one page

    - by Imagine digital
    I'm trying to get my e-commerce website on Google and am still figuring out how it all works. Now, I have seen this feature named URL parameters, allowing me to set different parameters that affect page content for indexing (one can also flag parameters that do not affect the page, but for me that does not apply). The question I have is whether and how I should add parameters that only exist on some pages of my site.

    Example: the homepage of my site is www.mysite.nl -- no parameters at all. But when a user clicks the navigation bar, it links to www.mysite.nl/itemList.php?category=&....subCategory=.... The parameters category and subCategory define whether there is content on my itemList page and what content that is; it gets matching products out of my database based on those two variables.

    The question: how do I make sure that I apply the Google URL parameters function decently for my website?

    Read the article

  • Extreme Performance and Scale Delivered by SOA on Oracle Exalogic

    - by J Swaroop
    Demands to incorporate internet-scale applications, data, and social media traffic with existing IT infrastructure require extreme availability, reliability, and scalability. In this session on industrial-strength SOA, learn how Oracle Exalogic and Oracle Exadata engineered systems address these requirements. Topics covered:
    (1) how SOA and BPM benefit from "hardware and software engineered for each other"
    (2) how Oracle Exadata provides the data tier with unparalleled scalability and performance for SOA and BPM running on Oracle Exalogic
    (3) customer case studies
    (4) best practices and topology guidelines
    (5) information on tools that help operate, manage, provision, and deploy, to help reduce overall TCO
    Extreme engineering at its best! Session details: 10/2/12 (Tuesday) 11:45 AM - Moscone South - 308

    Read the article

  • SQLRally and SQLRally - Session material

    - by Hugo Kornelis
    I had a great week last week. First at SQLRally Nordic , in Stockholm, where I presented a session on how improvements to the OVER clause can help you simplify queries in SQL Server 2012 enormously. And then I continued straight on into SQLRally Amsterdam , where I delivered a session on the performance implications of using user-defined functions in T-SQL. I understand that both events will make my slides and demo code downloadable from their website, but this may take a while. So those who do not...(read more)
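
    As a taste of what those OVER clause enhancements look like, here is a minimal T-SQL sketch, not taken from the session material (dbo.Sales is a hypothetical table), showing a running total with the ROWS clause that SQL Server 2012 introduced:

        -- Running total per customer: before 2012 this needed a self-join,
        -- correlated subquery or cursor; now it is one window function
        SELECT CustomerId, OrderDate, Amount,
               SUM(Amount) OVER (PARTITION BY CustomerId
                                 ORDER BY OrderDate
                                 ROWS BETWEEN UNBOUNDED PRECEDING
                                          AND CURRENT ROW) AS RunningTotal
        FROM dbo.Sales;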

    Read the article

  • Do the "Contact us" and "Privacy policy" pages affect SEO?

    - by Gkhan14
    Just like the title says, what are the effects of having a "Contact us" and a "Privacy policy" on your site? I've read that it could build up your trust with Google, is this true? I've also read that some people said that you should add a noindex tag to your "Privacy policy" page, would this be a good idea? I say this because many websites have similar privacy policies, and I don't want any duplicate content issues. (For example, many people could be using the same WordPress privacy policy generator). I'm wondering the same things for the "Contact us" page as well.

    Read the article

  • Canonical url for a home page and trailing slashes

    - by serg
    My home page could potentially be linked as:

    http://example.com
    http://example.com/
    http://example.com/?ref=1
    http://example.com/index.html
    http://example.com/index.html?ref=2

    (The same page is served for all of those URLs.) I am thinking about defining a canonical URL to make sure Google doesn't consider those URLs to be different pages:

    <link rel="canonical" href="/" /> (relative)
    <link rel="canonical" href="http://example.com/" /> (trailing slash)
    <link rel="canonical" href="http://example.com" /> (no trailing slash)

    Which one should be used? I would just slap / in, but messing with canonical seems like scary business, so I wanted to double-check first. Is it a good idea at all to define a canonical URL for a home page?

    Read the article

  • Trade offs of linking versus skinning geometry

    - by Jeff
    What are the trade-offs inherent in linking geometry to a node versus using skinned geometry? Specifically:

    - What capabilities do you gain or lose with each method?
    - What are the performance impacts of doing one over the other?
    - In what specific situations would you want to do one over the other?

    In addition, do the answers to these questions tend to be engine-specific? If so, how much?

    Read the article

  • Interesting links week #6

    - by erwin21
    Below is a list of interesting links that I found this week:

    Frontend:
    - Understanding CSS Selectors

    Javascript:
    - Breaking the Web with hash-bangs
    - HTML5 Peeks, Pokes and Pointers

    Development:
    - 10 Points to Take Care While Building Links for SEO
    - View State decoder
    - ASP.NET MVC Performance Tips

    Other:
    - Things to Remember Before Launching a Website
    - Tips and Tricks On How To Become a Presentation Ninja
    - 10 Ways to Simplify Your Workday

    Interested in more interesting links? Follow me on twitter: http://twitter.com/erwingriekspoor

    Read the article

  • Timestep schemes for physics simulations

    - by ktodisco
    The operations used for stepping a physics simulation are most commonly:

    1. Integrate velocity and position
    2. Collision detection and resolution
    3. Contact resolution (in advanced cases)

    A while ago I came across this paper from Stanford that proposed an alternative scheme, which is as follows:

    1. Collision detection and resolution
    2. Integrate velocity
    3. Contact resolution
    4. Integrate position

    It's intriguing because it allows for robust solutions to the stacking problem. So it got me wondering: what, if any, alternative schemes are available, either simple or complex? What are their benefits, drawbacks, and performance considerations?

    Read the article

  • Have You Heard About Project Lucy?

    - by KKline
    Lucy, You Got Some 'Splainin to Do! Quest Software's latest community initiative, the Windows Azure-based Project Lucy, has debuted! Project Lucy is part infrastructure analytics, part social media experiment, and part performance data warehouse. The best things about Project Lucy include: It's free -- just like our SQLServerPedia website, Project Lucy is free to anyone who wants to upload a trace file. It's 100% web-based -- you don't have to download or maintain anything, and updates roll out seamlessly,...(read more)

    Read the article

  • Crawling an ajax based page with both a hash fragment and a meta tag

    - by Christofian
    According to Google's documentation on crawling ajax-based web pages, if a URL contains a hash fragment -- something at the end of a URL that looks like #helloworld -- and there is an ! after the #, as in #!helloworld, Google will then request the URL url?_escaped_fragment_=helloworld. I currently have an ajax-based webpage that I want Google to be able to crawl. Sometimes the page uses hash fragments, and for those situations I set up the server so it will return an HTML snapshot of that page using _escaped_fragment_. However, the webpage often does not load with a hash fragment, and when that happens it still loads content using ajax. I couldn't find a good solution to enable ajax crawling for pages that sometimes have a hash fragment and sometimes don't. How can I tell Google to use _escaped_fragment_ when there is a hash fragment, and to use something else to get an HTML snapshot of the page when there isn't one?

    Read the article

  • Setting the Default Wiki Page in a SharePoint Wiki Library

    - by Damon Armstrong
    I’ve seen a number of blog posts about setting the default homepage in a wiki library, and most of them offer ways of accomplishing this task through PowerShell or through SharePoint designer.  Although I have become an ever increasing fan of PowerShell, I still prefer to stay away from it unless I’m trying to do something fairly complicated or I need a script that I can run over and over again.  If all you need to do is set the default homepage in a wiki library, there is an easier way! First, navigate to the wiki page you want to use as the default homepage.  Then click the Page tab in the ribbon.  In the Page Actions group there is a button called Make Homepage.  Click it.  A confirmation displays informing you that you are about to change the homepage.  Click OK and you will have a new homepage for your wiki library.  No PowerShell required.

    Read the article

  • Website speed issues

    - by Jose David Garcia Llanos
    I am developing a website, however I have noticed speed issues and I am not sure whether they are due to the location of the server. I am not a guru when it comes to performance or speed issues, but according to a website speed test it seems that it takes quite a long time to connect to the website. Speed Test Results. Can someone suggest something or give me some tips? The website address is http://www.n1bar.com

    Read the article

  • If incentive pay is considered harmful, what are the other options? [closed]

    - by Ricardo Cardona Ramirez
    Possible Duplicate: What kind of innovative non-cash financial benefits do I offer to my developers to retain them along with a competitive salary?

    I recently read about incentive payments and their consequences. In our company we have a bonus based on each developer's performance, but it has brought many problems, such as those described in the article. If incentive payments are damaging, what alternatives do we have?

    Read the article

  • Is there any way to test how the site will perform under load

    - by Pankaj Upadhyay
    I have made an ASP.NET MVC website and hosted it with a shared hosting provider. Since my website is built around a fairly generic idea, it might have a number of concurrent users at some point in the future. So, I was thinking of a way to test my website's performance under load -- that is, how the site will perform when 100 or 1000 users are online at the same time and surfing the website. This will also help me understand whether my LINQ queries are well written or not.

    Read the article
