Search Results

Search found 12762 results on 511 pages for 'inverted index'.

Page 64/511 | < Previous Page | 60 61 62 63 64 65 66 67 68 69 70 71  | Next Page >

  • Apache trailing slash added to files problem

    - by Francisc
    Hello! I am having a problem with Apache. What it does is this: take an /index.php file containing a tag whose src is set to the relative path myimg.jpg, both in the root of my server. So, www.mysite.com would show the image, as would www.mysite.com/index.php. However, if I access www.mysite.com/index.php/ (with a trailing slash) it does the odd thing of executing the index.php code as if it were inside an index.php folder (e.g. /index.php/index.php), thus not showing the image anymore. This is a simple example that's easy to solve with absolute addressing etc., but the problem I am getting from this is a security one that's not so easily fixed. So, how can I get Apache to give a 403 or 404 when files are accessed "as folders"? Thank you.
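
    The "file treated as a folder" behaviour comes from Apache's PATH_INFO handling. A minimal sketch of one common fix, assuming the host allows the directive in a per-directory .htaccess; with it in place, trailing-path requests such as /index.php/anything return 404 instead of executing index.php:

        # .htaccess (or the relevant <Directory> block)
        # Refuse trailing pathname information after an existing file,
        # so /index.php/whatever is answered with 404 Not Found.
        AcceptPathInfo Off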

    Read the article

  • How to print the index in a 'for-loop' being executed on a remote host through SSH?

    - by YShin
    I want to ssh into a remote host and then execute a for loop that goes through a sequence of numbers to control a number of different nodes. ssh user@host /bin/bash << EOF for i in {1..10} do echo $i done EOF If I do this, the output is just 10 blank lines, instead of printing out the numbers from 1 through 10. If I execute the same code on my local machine, I get the desired output, which is ten lines, each printing a number from 1 through 10. How would one achieve the intended functionality, that is, access the index in a for loop that is being executed over SSH?
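
    The usual cause is that the unquoted heredoc lets the local shell expand $i before the script ever reaches the remote host, so the remote loop echoes an empty variable. A minimal sketch of the common fix, assuming the remote login shell is bash: quote the heredoc delimiter so the body is passed through verbatim.

        # Quoting 'EOF' suppresses local expansion of $i;
        # the remote bash then expands it inside the loop.
        ssh user@host /bin/bash << 'EOF'
        for i in {1..10}
        do
            echo $i
        done
        EOF

    Escaping the variable as \$i inside an unquoted heredoc has the same effect. (When pasted into a script, the closing EOF must begin at the start of a line.)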

    Read the article

  • How do I get a document index so I can delete with Lucene?

    - by acidzombie24
    Basically I am doing this: I think I'll set the document id as the thread id on my site (even if some types of thread won't be searched). So I can search by thread id, but I am clueless about how to delete. I found pages that say to use the document index, and that I need to optimize or close before changes take effect, but I don't know how to get the document index. How do I? Also, I've seen one that said to use IndexWriter to delete, but I couldn't figure out how to do it with that either.
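
    For what it's worth, deletion in Lucene does not need the internal document number at all; documents can be deleted by a Term on a field you control. A minimal Java sketch (the Lucene.NET calls are the same names in PascalCase), assuming the thread id was indexed as an untokenized field; the "threadId" field name and helper are only illustrative:

        import java.io.IOException;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.Term;

        class ThreadIndex {
            // Delete every document whose "threadId" field equals the given id.
            static void deleteThread(IndexWriter writer, int threadId) throws IOException {
                writer.deleteDocuments(new Term("threadId", Integer.toString(threadId)));
                writer.commit();   // deletions become visible once committed (or on close)
            }
        }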

    Read the article

  • mod_rewrite hide subdirectory in return url part2

    - by user64790
    Hi, I am having an issue trying to get my mod_rewrite configuration correct. I have a site: 0.0.0.0/oldname/directories/index.php. I would like to rename "oldname" to "newname", resulting in 0.0.0.0/newname/directories/index.php etc. So, when a user navigates to 0.0.0.0, my site will automatically send them to 0.0.0.0/oldname/index.php. I'm not planning on moving my content; marketing have asked me to rename the site folder, so I would like to mask a request for 0.0.0.0/oldname/index.php as 0.0.0.0/newname/index.php. Also, if a user navigates from index.php to a link of, say, /oldname/project1/index.php, the final URL returned to the browser should be /newname/project1.php, without having to move or edit site links. I also understand my hyperlinks will still refer to /oldname, but this is acceptable. Any help would be highly appreciated. Regards
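
    A hedged sketch of one way to do the masking with mod_rewrite, assuming it can live in the document root's .htaccess: redirect browser requests for /oldname/... to the /newname/... URL, and silently serve /newname/... from the existing /oldname/ folder on disk. The THE_REQUEST condition keeps the two rules from looping, because it only matches what the browser originally asked for.

        RewriteEngine On

        # 1. Browser asked for /oldname/... -> send it to the new public URL.
        RewriteCond %{THE_REQUEST} \s/oldname/ [NC]
        RewriteRule ^oldname/(.*)$ /newname/$1 [R=301,L]

        # 2. Internally map /newname/... back to the real /oldname/ folder.
        RewriteRule ^newname/(.*)$ /oldname/$1 [L]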

    Read the article

  • XSLT: Transform XML files tree

    - by Myniva
    I have the following file structure (XML files 'index.xml' in nested folders): index.xml foo/index.xml foo/sub/index.xml foo/.../index.xml bar/.../index.xml Now I have to transform each of these XML files with a given XSL stylesheet. The result should be the same folder structure (overwriting would be OK). What would be your approach to achieving this? My system: OS X 10.6, Saxon XSLT processor
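
    One shell-level approach is to walk the tree with find and run Saxon once per file. A minimal sketch, assuming a Saxon 9 command-line jar; the jar path and stylesheet name are placeholders. Writing to a temporary file first means the original is only overwritten when the transform succeeds:

        #!/bin/bash
        STYLESHEET=style.xsl        # assumed stylesheet name
        SAXON_JAR=saxon9he.jar      # assumed path to the Saxon jar

        find . -name 'index.xml' | while read -r f; do
            java -jar "$SAXON_JAR" -s:"$f" -xsl:"$STYLESHEET" -o:"$f.tmp" &&
                mv "$f.tmp" "$f"    # replace the original only on success
        done

    Saxon can also transform a whole directory in one run when -s: and -o: point at directories, which may avoid the loop entirely.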

    Read the article

  • How can I keep track of the index path of a button in a tableview cell?

    - by Jake
    I have a table view where each cell has a button accessory view. The table is managed by a fetched results controller and is frequently reordered. I want to be able to press a button and obtain the index path of the pressed button's table view cell. I've been trying to get this working for days by storing the row of the button in its tag, but when the table gets reordered the row becomes incorrect, and I keep failing at reordering the tags correctly when a change is made. Any new ideas on how to keep track of the button's cell's index path? Thanks so much for any help.
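
    One common alternative to storing rows in tags is to resolve the index path at the moment the button is tapped, using the button's position inside the table view, so reordering can never leave a stale value behind. A minimal sketch in Swift (the same UIKit calls exist in Objective-C as convertPoint:toView: and indexPathForRowAtPoint:); tableView is assumed to be a property of the hosting view controller:

        @objc func accessoryButtonTapped(_ sender: UIButton) {
            // Convert the button's origin into the table view's coordinate space
            // and ask the table view which row lives at that point.
            let point = sender.convert(CGPoint.zero, to: tableView)
            guard let indexPath = tableView.indexPathForRow(at: point) else { return }
            // ... use indexPath (still correct even after reordering) ...
        }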

    Read the article

  • What is the best algorithm to use for this problem?

    - by slim
    The equilibrium index of a sequence is an index such that the sum of the elements at lower indexes is equal to the sum of the elements at higher indexes. For example, in a sequence A: A[0]=-7 A[1]=1 A[2]=5 A[3]=2 A[4]=-4 A[5]=3 A[6]=0 3 is an equilibrium index, because: A[0]+A[1]+A[2]=A[4]+A[5]+A[6] 6 is also an equilibrium index, because: A[0]+A[1]+A[2]+A[3]+A[4]+A[5]=0 (the sum of zero elements is zero) 7 is not an equilibrium index, because it is not a valid index of sequence A. If you still have doubts, this is a precise definition: the integer k is an equilibrium index of a sequence A of n elements if and only if 0 <= k < n and A[0]+A[1]+...+A[k-1] = A[k+1]+...+A[n-1]. Assume the sum of zero elements equals zero. Write a function int equi(int[] A); that, given a sequence, returns its equilibrium index (any of them) or -1 if no equilibrium index exists. Assume that the sequence may be very long.
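
    A linear scan is enough here: compute the total once, then walk the array keeping the sum of everything to the left and deriving the right-hand sum by subtraction. A minimal C sketch (the length is passed explicitly, since the posted signature leaves it implicit); long long guards against overflow on very long sequences:

        int equi(const int A[], int n)
        {
            long long total = 0, left = 0;
            for (int i = 0; i < n; i++)
                total += A[i];

            for (int i = 0; i < n; i++) {
                long long right = total - left - A[i];
                if (left == right)
                    return i;      /* first equilibrium index found */
                left += A[i];
            }
            return -1;             /* no equilibrium index exists */
        }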

    Read the article

  • Multiple mod rewrites in .htaccess

    - by Bob
    I want the following rules but I don't seem to get the right setup. <domain>/training-courses/ (with or without the slash at the end) should go to: <domain>/?index.php?page=training-courses and for each extra variable after this I want it to behave like this: <domain>/training-courses/success/another-value/and-yet-another/ to <domain>/?index.php?page=training-courses&val1=success&val2=another-value&val3=and-yet-another-value If it's not possible to have the option of unlimited leading variables, I'd like to have at least 2 variables after the page variable. Is this possible, and how do I get this sorted out? I have this so far: RewriteEngine On RewriteRule ^test/([^/]*)/$ /test/index.php?pagina=$1&val1=$2 RewriteRule ^test/([^/]*)$ /test/index.php?pagina=$1&val1=$2 RewriteRule ^test/([^/]*)/([^/]*)/$ /test/index.php?pagina=$1&val1=$2 RewriteRule ^test/([^/]*)/([^/]*)$ /test/index.php?pagina=$1&val1=$2 RewriteRule ^test/([^/]*)/([^/]*)/([^/]*)/$ /test/index.php?pagina=$1&val1=$2&val2=$3 RewriteRule ^test/([^/]*)/([^/]*)/([^/]*)$ /test/index.php?pagina=$1&val1=$2&val2=$3
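
    One way to avoid a separate rule per depth is to capture the page name plus everything after it in a single rule and let index.php split the remainder. A hedged sketch (the params query-string name is only illustrative), assuming requests for real files and directories should still be served directly:

        RewriteEngine On

        # Not an existing file or directory -> hand it to the front controller.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # First segment becomes the page, the rest is passed as one string.
        RewriteRule ^([^/]+)/?(.*)$ /index.php?page=$1&params=$2 [L,QSA]

    In PHP, something like explode('/', $_GET['params']) then yields val1, val2, and so on for any depth.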

    Read the article

  • Solving problems recursively in C

    - by Harry86
    Our professor gave us the following assignment: A "correct" series is one in which the sum of its members equals the index of its first member. The program is supposed to find the length of the LONGEST "correct" series within a series of n numbers. For example: if the input series is arr[4]={1, 1, 0, 0}, the output (longest "correct" series) would be 3. arr[0]=1 and 0!=1, therefore the longest series starting here is 0. arr[1]=1 and 1=1, but the following members also sum up to 1, as shown below: 1=arr[1]+arr[2]+arr[3] = 1+ 0 + 0, therefore the longest series here is 3. The output in this example is 3. That's what I have got so far: int solve(int arr[], int index, int length,int sum_so_far) { int maxwith,maxwithout; if(index==length) return 0; maxwith = 1+ solve(arr,index+1,length,sum_so_far+arr[index]); maxwithout = solve(arr,index+1,length,arr[index+1]); if(sum_so_far+arr[index]==index) if(maxwith>maxwithout) return maxwith; return maxwithout; return 0; } int longestIndex(int arr[], int index,int length) { return solve(arr,0,length,0); } What am I doing wrong here? Thanks a lot for your time... Harry
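
    For comparison, here is a minimal sketch of one recursive way to state the problem (it does not keep the posted function signatures): for every possible starting index, extend the series one element at a time and remember the longest prefix whose sum equals that starting index.

        /* longest "correct" series that starts at index 'first' */
        static int longestAt(const int arr[], int n, int first, int i, long long sum)
        {
            if (i == n)
                return 0;
            sum += arr[i];
            int here = (sum == first) ? (i - first + 1) : 0;  /* correct up to i? */
            int rest = longestAt(arr, n, first, i + 1, sum);  /* try extending    */
            return rest > here ? rest : here;
        }

        /* longest "correct" series anywhere in arr[0..n-1] */
        int longestCorrect(const int arr[], int n)
        {
            int best = 0;
            for (int first = 0; first < n; first++) {
                int len = longestAt(arr, n, first, first, 0);
                if (len > best)
                    best = len;
            }
            return best;
        }

    With arr[4]={1, 1, 0, 0} this returns 3, matching the example.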

    Read the article

  • Does Oracle 11g automatically index fields frequently used for full table scans?

    - by gustafc
    I have an app using an Oracle 11g database. I have a fairly large table (~50k rows) which I query thus: SELECT omg, ponies FROM table WHERE x = 4 Field x was not indexed, I discovered. This query happens a lot, but the thing is that the performance wasn't too bad. Adding an index on x did make the queries approximately twice as fast, which is far less than I expected. On, say, MySQL, it would've made the query ten times faster, at the very least. I'm suspecting Oracle adds some kind of automatic index when it detects that I query a non-indexed field often. Am I correct? I can find nothing even implying this in the docs.

    Read the article

  • htaccess change DirectoryIndex priority to php and not html

    - by Jayapal Chandran
    On a production server there are both index.html and index.php. By default index.html is getting loaded. I want index.php to be the default script to load, and if index.php is not present then index.html can load. It is shared hosting, so we do not have access to the httpd.conf file, so I thought of creating an .htaccess file which would enforce the above condition. What is the directive to include in the .htaccess file to do so?
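
    A minimal sketch of the usual answer, assuming the host's AllowOverride settings permit the Indexes group in .htaccess: DirectoryIndex tries the listed files in order, so index.php wins whenever it exists and index.html is the fallback.

        # .htaccess in the site root
        DirectoryIndex index.php index.html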

    Read the article

  • No Change for Index of DropDownList in a Custom Control!!!

    - by mahdiahmadirad
    Hi, I have created a custom control which is a DropDownList with specified items. I designed AutoPostback and SelectedCategoryId as properties and SelectedIndexChanged as an event for my custom control. Here is my ASCX code-behind: private int _selectedCategoryId; private bool _autoPostback = false; public event EventHandler SelectedIndexChanged; public void BindData() { //Some Code... } protected void Page_Load(object sender, EventArgs e) { BindData(); DropDownList1.AutoPostBack = this._autoPostback; } public int SelectedCategoryId { get { return int.Parse(this.DropDownList1.SelectedItem.Value); } set { this._selectedCategoryId = value; } } public string AutoPostback { get { return this.DropDownList1.AutoPostBack.ToString(); } set { this._autoPostback = Convert.ToBoolean(value); } } protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e) { if (SelectedIndexChanged != null) SelectedIndexChanged(this, EventArgs.Empty); } I want to use an UpdatePanel to update textbox fields according to the drop-down list's selected index. This is my code in the ASPX page: <asp:Panel ID="PanelCategory" runat="server"> <p> Select Product Category:&nbsp; <myCtrl:CategoryDDL ID="CategoryDDL1" AutoPostback="true" OnSelectedIndexChanged="CategoryIndexChanged" SelectedCategoryId="0" runat="server" /> </p> <hr /> </asp:Panel> <asp:UpdatePanel ID="UpdatePanelEdit" runat="server"> <ContentTemplate> <%--Some TextBoxes and Other Controls--%> </ContentTemplate> <Triggers> <asp:PostBackTrigger ControlID="CategoryDDL1" /> </Triggers> </asp:UpdatePanel> But the selected index of CategoryDDL1 is always 0 (the default). This means only a zero value is passed to the event that updates the textbox data. What is wrong with my code? Why is the selected index not changing? Help?
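
    One thing worth checking, shown in a hedged sketch below: the ASCX code-behind calls BindData() on every Page_Load, so on each postback the list is rebuilt and the posted selection is thrown away, which is exactly the "always index 0" symptom. Binding only on the initial request and letting view state restore the items is the usual cure.

        protected void Page_Load(object sender, EventArgs e)
        {
            DropDownList1.AutoPostBack = this._autoPostback;

            // Rebinding on every request wipes the posted selection;
            // bind only on the first load and let view state keep the items.
            if (!IsPostBack)
            {
                BindData();
            }
        }

    Separately, an AsyncPostBackTrigger (rather than a PostBackTrigger) may be what is wanted for a partial update of the UpdatePanel, but that is a different concern from the selection being lost.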

    Read the article

  • What's the difference between these SQL conditions?

    - by wesley luan
    Select * from Example where 1 = Case when :index = 0 then Case when DateEx Between :pDat1 and :pDate2 then 1 end else Case When :index = 1 or :index = 2 then Case When DateEx >= :pDat1 then 1 end end end And Select * from Example where 1 = Case when :index = 0 then Case when DateEx Between :pDat1 and :pDat2 then 1 end else 1 end and 1 = Case When :index = 1 or :index = 2 then Case When DateEx >= :pDat1 then 1 end end

    Read the article

  • Grails Spring Security defaultTargetUrl going to the wrong path

    - by fsi
    Grails 2.4 with Spring Security 2 3RC. I have this in my Config.groovy: grails.plugin.springsecurity.controllerAnnotations.staticRules = [ '/': ['permitAll'], '/index': ['permitAll'], '/index.gsp': ['permitAll'], '/**/js/**': ['permitAll'], '/**/css/**': ['permitAll'], '/**/images/**': ['permitAll'], '/**/favicon.ico': ['permitAll'] ] grails.plugin.springsecurity.successHandler.defaultTargetUrl = "/home/index" But this keeps redirecting me to assets/favicon.ico. My HomeController is like this: @Secured(['ROLE_ADMIN', 'ROLE_USER']) def index() { if (SpringSecurityUtils.ifAllGranted('ROLE_ADMIN')) { redirect controller: 'admin', action: 'index' return } } And I modified this in my UrlMappings: "/"(controller: 'home', action:'index') Why does it keep sending me to the wrong path? Update: using another computer, it redirects me to /asset/grails_logo.png
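
    A hedged guess based on the symptom: the browser's request for a static asset (favicon.ico, grails_logo.png) is the last secured request Spring Security saved, so that is what it "returns" to after login. Permitting the asset-pipeline path anonymously, and optionally forcing the configured target, usually stops it. A sketch for Config.groovy, assuming the standard /assets/** URL of the asset pipeline:

        grails.plugin.springsecurity.controllerAnnotations.staticRules = [
            '/':               ['permitAll'],
            '/index':          ['permitAll'],
            '/index.gsp':      ['permitAll'],
            '/assets/**':      ['permitAll'],   // favicon.ico, grails_logo.png, ...
            '/**/js/**':       ['permitAll'],
            '/**/css/**':      ['permitAll'],
            '/**/images/**':   ['permitAll'],
            '/**/favicon.ico': ['permitAll']
        ]
        // Ignore the saved request and always go to defaultTargetUrl after login.
        grails.plugin.springsecurity.successHandler.alwaysUseDefault = true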

    Read the article

  • Something wrong with redirects on my Joomla 1.5.18 site

    - by fuzzy lollipop
    On my Joomla 1.5.18 site I enabled login, and when I click login, the page I get sent to is NOT styled with CSS. If I log in, it redirects to the home page, which is not styled anymore either. It looks like it is recursively appending stuff to the URL incorrectly: http://www.myjoomlasite.org/index.php/index.php/login If I click on the home page or login links it keeps putting more and more index.php entries in the URL, and sometimes on the end. The following is what I get when I try to go to a JEvents menu item: http://www.myjoomlasite.org/index.php/index.php/index.php/index.php/upcomingevents/month.calendar/2010/06/09/index.php Does anyone have any idea why this is happening? I don't know what to search for on Google, apparently, and none of the Joomla! books I have address this.

    Read the article

  • Zend Framework additional Get params with NGINX

    - by Johni
    I configured my NGINX for Zend in the following way (PHP 5.3 with fpm): server { root /home/page/public/; index index.php index.html index.htm; server_name localhost; location / { try_files $uri $uri/ /index.php; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~ /\.ht { deny all; } } Now I want to process additional GET params like: http://web.site/index?par=1 With my local dev system (Apache) it works fine, but not under NGINX, which didn't deliver the GET params. Any suggestions? Edit: Now I use the following config, which seems to work, but I'm not happy with it since everybody suggests "use try_files whenever possible". location / { if (!-e $request_filename) { rewrite /(.*)$ /index.php?q=$1 last; break; } }
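
    A commonly suggested variant that keeps try_files is to pass the original query string to the front controller explicitly, so nothing is lost on the internal fallback. A minimal sketch of just that location block, under the assumption that the rest of the server block stays as posted:

        location / {
            # $args carries the original query string (?par=1) through
            # the fallback to the front controller.
            try_files $uri $uri/ /index.php?$args;
        }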

    Read the article

  • Is it possible to specify the name of the Index property to use for lists in a Fluent NHibernate convention?

    - by Teevus
    When mapping a HasMany or HasManyToMany in Fluent NHibernate, you can specify the column name to use for the list as a parameter to the AsList() method as follows: HasMany(c => c.Customers) .AsList(c => c.Column("PositionIndex")); I would prefer to be able to set this using a Fluent NHibernate convention (either a pre-existing one, or a custom one), especially since the default name appears to be "Index", which is a reserved word in MSSQL. I've tried using a custom convention implementing IHasManyConvention, but the instance parameter does not seem to contain the information about whether it's a list, a bag, or a set, and also does not contain the column details for the index column. public void Apply(IOneToManyCollectionInstance instance) { } Any ideas?

    Read the article

  • User activity vs. System activity on the Index Usage Statistics report

    - by Zachary G Jensen
    I recently decided to crawl over the indexes on one of our most heavily used databases to see which were suboptimal. I generated the built-in Index Usage Statistics report from SSMS, and it's showing me a great deal of information that I'm unsure how to understand. I found an article at Carpe Datum about the report, but it doesn't tell me much more than I could assume from the column titles. In particular, the report differentiates between User activity and system activity, and I'm unsure what qualifies as each type of activity. I assume that any query that uses a given index increases the '# of user X' columns. But what increases the system columns? building statistics? Is there anything that depends on the user or role(s) of a user that's running the query?
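
    The report is essentially a view over sys.dm_db_index_usage_stats, which keeps separate user_* and system_* counters: the user_* columns count seeks, scans, lookups and updates performed by your own queries, while the system_* columns count internal work such as statistics maintenance and other background operations, regardless of which login or role issued the original statement. A hedged T-SQL sketch for looking at the raw numbers in the current database:

        SELECT OBJECT_NAME(s.[object_id]) AS table_name,
               i.name                     AS index_name,
               s.user_seeks,  s.user_scans,  s.user_lookups,  s.user_updates,
               s.system_seeks, s.system_scans, s.system_lookups, s.system_updates
        FROM sys.dm_db_index_usage_stats AS s
        JOIN sys.indexes AS i
          ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
        WHERE s.database_id = DB_ID();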

    Read the article

  • Allocating extra memory for a container class

    - by sil3nt
    Hey there, I'm writing a template container class and for the past few hours have been trying to allocate new memory for extra data that comes into the container (...hit a brick wall..:| ) template <typename T> void Container<T>::insert(T item, int index){ if ( index < 0){ cout<<"Invalid location to insert " << index << endl; return; } if (index < sizeC){ //copying original array so that when an item is //placed in the middleeverything else is shifted forward T *arryCpy = 0; int tmpSize = 0; tmpSize = size(); arryCpy = new T[tmpSize]; int i = 0, j = 0; for ( i = 0; i < tmpSize; i++){ for ( j = index; j < tmpSize; j++){ arryCpy[i] = elements[j]; } } //overwriting and placing item and location index elements[index] = item; //copying back everything else after the location at index int k = 0, l = 0; for ( k =(index+1), l=0; k < sizeC || l < (sizeC-index); k++,l++){ elements[k] = arryCpy[l]; } delete[] arryCpy; arryCpy = 0; } //seeing if the location is more than the current capacity //and hence allocating more memory if (index+1 > capacityC){ int new_capacity = 0; int current_size = size(); new_capacity = ((index+1)-capacityC)+capacityC; //variable for new capacity T *tmparry2 = 0; tmparry2 = new T[new_capacity]; int n = 0; for (n = 0; n < current_size;n++){ tmparry2[n] = elements[n]; } delete[] elements; elements = 0; //copying back what we had before elements = new T[new_capacity]; int m = 0; for (m = 0; m < current_size; m++){ elements[m] = tmparry2[m]; } //placing item elements[index] = item; } else{ elements[index] = item; } //increasing the current count sizeC++; my testing condition is Container cnt4(3); and as soon as i hit the fourth element (when I use for egsomething.insert("random",3);) it crashes and the above doesnt work. where have I gone wrong?

    Read the article

  • Throwing out of range exception in C++

    - by Shinka
    This code works: int at(int index) { if(index < 1 || index >= size) throw 0; return x[index]; } Yet this doesn't: int at(int index) { if(index < 1 || index >= size) throw std::out_of_range; return x[index]; } I get the error "expected primary expression before ';'". Now... it surprises me, because I know std::out_of_range exists and I have #include <stdexcept>
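
    The difference is that std::out_of_range is a type, and throw needs an expression, i.e. an object; constructing the exception (it takes a message string) is enough. A minimal self-contained sketch; the surrounding struct and the lower bound of 0 are illustrative, not the original code:

        #include <stdexcept>
        #include <vector>

        struct Container {
            std::vector<int> x;

            int at(int index) const {
                if (index < 0 || index >= static_cast<int>(x.size()))
                    throw std::out_of_range("index out of range");  // throw an object, not a type
                return x[index];
            }
        };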

    Read the article

  • Existing function to slice pandas object by axis number

    - by Zero
    Pandas has the following indexers: Series: s.loc[indexer]; DataFrame: df.loc[row_indexer, column_indexer]; Panel: p.loc[item_indexer, major_indexer, minor_indexer]. I would like to be able to index dynamically by axis, for example: df = pd.DataFrame(data=0, index=['row1', 'row2', 'row3'], columns=['col1', 'col2', 'col3']) df.index(['row1', 'row3'], axis=0) # index by rows df.index(['col1', 'col2'], axis=1) # index by columns Is there a built-in function that does this?
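
    Two built-ins take the labels and an axis number directly: DataFrame.filter(items=..., axis=...) and DataFrame.reindex(labels, axis=...). A short sketch (passing axis= to reindex assumes a reasonably recent pandas release):

        import pandas as pd

        df = pd.DataFrame(0, index=['row1', 'row2', 'row3'],
                          columns=['col1', 'col2', 'col3'])

        # Label-based selection with an explicit axis argument:
        rows = df.filter(items=['row1', 'row3'], axis=0)   # pick rows by label
        cols = df.filter(items=['col1', 'col2'], axis=1)   # pick columns by label

        # reindex accepts axis too (missing labels come back as NaN rather than raising):
        rows_again = df.reindex(['row1', 'row3'], axis=0)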

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  It’s presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article
