Search Results

Search found 2129 results on 86 pages for 'flag'.


  • How to trigger Mouse-Over on iPhone?

    - by Andrew
    This might seem like a really dumb question, but I am writing an application where mouse-over, mouse-click, and mouse-hover each need different events bound to them. On Internet Explorer, Firefox, and Safari it all works as expected; however, on my iPhone the actions will not trigger. My question is: is there any specific way I can have the mouse-over fire when I hold my finger down? An example where this doesn't work is right on this website: when you hover over a comment it is supposed to display the +1 or flag icon. I am using jQuery.
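
    As a sketch of one common workaround (not from the thread; the .comment selector and hover class are hypothetical placeholders, and jQuery 1.7+ is assumed for .on()), Mobile Safari does fire touch events, which can be mapped onto the hover logic:

        // Map touch events onto the hover behaviour.
        $(document).on('touchstart', '.comment', function () {
            $(this).addClass('hover');      // show the +1 / flag icons
        });
        $(document).on('touchend touchcancel', '.comment', function () {
            $(this).removeClass('hover');   // hide them again when the finger lifts
        });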

    Read the article

  • iPhone Application and programming Updates

    - by Cheryl
    Hi. When a user logs into my iPhone app, I store their userID and relevant information in [NSUserDefaults standardUserDefaults] so they are not required to log in each time they access my application. When my application is live in the App Store and I make updates to the code, will this information be lost? Will they then be required to log in? If so, how do I go about preserving this information when I make updates? And while I can't imagine this happening, should a user choose to delete my application after they have installed it, how would I know that they have deleted it? I am using push notifications and would like to flag the user as not being active so that I don't keep pushing out notifications to them. Thanks very much, Cheryl
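
    For what it's worth (a sketch, not from the thread): NSUserDefaults lives in the app's sandbox, so it survives App Store updates and is only removed when the user deletes the app. Storing the value looks like this (userID is assumed to be defined elsewhere):

        // Values written to NSUserDefaults persist across app updates;
        // they are deleted only when the app itself is deleted.
        NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
        [defaults setObject:userID forKey:@"userID"];
        [defaults synchronize];   // force an immediate write to disk

    On the second question, Apple's push notification feedback service exists for exactly this case: it reports device tokens whose deliveries have been failing (typically because the app was uninstalled), so the provider can stop sending notifications to them.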

    Read the article

  • Getting lpfnAcceptEx (AcceptEx) to block in native C++

    - by Alikar
    Hello, I've been trying to get the lpfnAcceptEx function in Win32 to block on accept. If this is not possible, I was wondering if there is a flag I could accept or some other function that I could wait on. Right now the program is simply continually creating accepted sockets with no connections behind them. Perhaps I am misunderstanding how this is meant to work. Is there another function I need to wait on? I am following the example laid out here: http://msdn.microsoft.com/en-us/library/ms737524(VS.85).aspx Thanks, Alikar
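
    AcceptEx is inherently asynchronous, but as a hedged sketch (assuming the lpfnAcceptEx pointer was already fetched via WSAIoctl, and listenSock/acceptSock are valid, bound sockets), it can be made to behave like a blocking accept by waiting on the event in its OVERLAPPED structure:

        /* Block until AcceptEx completes by waiting on the OVERLAPPED event. */
        char buf[2 * (sizeof(SOCKADDR_IN) + 16)];
        DWORD bytes = 0;
        OVERLAPPED ov = {0};
        ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

        BOOL ok = lpfnAcceptEx(listenSock, acceptSock, buf,
                               0,                          /* accept only, receive no data */
                               sizeof(SOCKADDR_IN) + 16,   /* local address buffer size  */
                               sizeof(SOCKADDR_IN) + 16,   /* remote address buffer size */
                               &bytes, &ov);
        if (!ok && WSAGetLastError() == ERROR_IO_PENDING) {
            WaitForSingleObject(ov.hEvent, INFINITE);      /* blocks until a client connects */
            GetOverlappedResult((HANDLE)listenSock, &ov, &bytes, FALSE);
        }
        CloseHandle(ov.hEvent);

    Without some wait of this kind (an event, GetQueuedCompletionStatus on an I/O completion port, etc.), a loop around AcceptEx will keep posting accepts that have not completed, which matches the "sockets with no connections behind them" symptom.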

    Read the article

  • [AS 3.0] How to use the string.match method to find multiple occurrences of the same word in a string

    - by Steven
    In ActionScript and Adobe Flex, I'm using a pattern and regexp (with the global flag) with the string.match method, and it works how I'd like except when the match returns multiple occurrences of the same word in the text. In that case, all the matches for that word point only to the index of the first occurrence of that word. For example, if the text is "cat dog cat cat cow" and the pattern is a search for cat, the match method returns an array of three occurrences of "cat"; however, they all point to the index of the first occurrence of cat when I use indexOf in a loop over the array. I'm assuming this is just how the string.match method is (although please let me know if I'm doing something wrong or missing something!). I want to find the specific indices of every occurrence of a match, even if it is of a word that was already previously matched. I'm wondering if that is just how the string.match method is, and if so, if anyone has any idea what the best way to do this would be. Thanks.
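
    One approach, as a sketch (not from the thread): indexOf always finds the first occurrence, but RegExp.exec with the g flag tracks lastIndex internally and reports each match's own position:

        var text:String = "cat dog cat cat cow";
        var re:RegExp = /cat/g;              // the g flag makes exec() resume at lastIndex
        var m:Object = re.exec(text);
        while (m != null) {
            trace("'" + m[0] + "' at index " + m.index);   // 0, then 8, then 12
            m = re.exec(text);
        }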

    Read the article

  • grailsApplication not getting injected in a service, Grails 2.1.0

    - by vijay tyagi
    I have a service in which I am accessing a few configuration properties from grailsApplication. I am injecting it like this:

        class MyWebService {
            def grailsApplication
            WebService webService = new WebService()

            def getProxy(url, flag) {
                return webService.getClient(url)
            }

            def getResponse() {
                def proxy = getProxy(grailsApplication.config.grails.wsdlURL, true)
                def response = proxy.getItem(ItemType)
                return response
            }
        }

    When I call the getProxy() method, I see this in the Tomcat logs:

        No signature of method: org.example.MyWebService.getProxy() is applicable for argument types:
        (groovy.util.ConfigObject, java.lang.Boolean) values: [[:], true]
        Possible solutions: getProxy(), getProxy(java.lang.String, boolean), setProxy(java.lang.Object)

    which means grailsApplication is not getting injected into the service. Is there any alternate way to access the configuration object? According to burtbeckwith's post, ConfigurationHolder has been deprecated; I can't think of anything else. Interestingly, the very same service works fine in my local IDE (GGTS 3.1.0), which means grailsApplication is getting injected locally, but when I create a WAR to deploy to a standalone Tomcat, it stops getting injected.
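
    As a sketch of one alternative (assuming Grails 2.0+, where grails.util.Holders replaced the deprecated ConfigurationHolder), the configuration can be reached statically instead of via injection:

        import grails.util.Holders

        // Static access to the merged configuration; works even for objects
        // that Spring never wired up (e.g. anything created with `new`).
        def wsdlUrl = Holders.config.grails.wsdlURL

    That said, if injection works locally but not in the WAR, it is worth checking that the service is in grails-app/services and is obtained from the Spring context rather than constructed with `new`, since manually constructed instances are never injected.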

    Read the article

  • Stop MSVC++ debug errors from blocking the current process?

    - by Mike Arthur
    Any failed ASSERT statement on Windows causes the debug message below to appear and freezes the application's execution. I realise this is expected behaviour, but it is running periodically on a headless machine, so instead of the unit tests failing they wait on user input indefinitely. Is there a registry key or compiler flag I can use to prevent this message box from requesting user input while still allowing the test to fail under ASSERT? Basically, I want to do this without modifying any code, just by changing compiler or Windows options. Thanks!
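
    I don't know of a registry key that suppresses the CRT assertion dialog, but if a one-time change at test-harness startup is acceptable, the CRT can be told to report assertions as text instead of a message box (a sketch; the functions live in crtdbg.h and only have an effect in debug builds):

        #include <crtdbg.h>
        #include <stdlib.h>

        int main()
        {
            _set_error_mode(_OUT_TO_STDERR);                     // runtime errors go to stderr
            _CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_FILE);   // assertions become text reports...
            _CrtSetReportFile(_CRT_ASSERT, _CRTDBG_FILE_STDERR); // ...written to stderr, no dialog
            // ... run the unit tests here ...
            return 0;
        }

    (Building with NDEBUG defined would also remove the dialog, but it removes assert() entirely, which defeats the "still allow the test to fail" requirement.)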

    Read the article

  • user buffer after doing 'write' to file opened with O_DIRECT

    - by user1868481
    I'm using the O_DIRECT flag to write to the disk directly from a user buffer. But as far as I understand, Linux doesn't guarantee that after this call the data has been written; it just writes directly from the user buffer to the physical device using DMA or something else. Therefore, I don't understand whether I can write to the user buffer after the call to the write function. I'm sure example code will help to explain my question:

        char *user_buff = malloc(...);  /* assume it is aligned as needed */
        fd = open(..., O_DIRECT);
        write(fd, ...);
        memset(user_buff, 0, ...);

    Is the last line (memset) legal? Is it valid to write to a user buffer that may be being used by DMA to transfer data to the device?
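
    A sketch of the complete pattern (assuming a 512-byte alignment requirement; the real value depends on the device and filesystem): a plain blocking write(2), O_DIRECT included, does not return until the data has been transferred out of the user buffer, so reusing the buffer after write() returns is safe. Durability on the physical medium is a separate question (fsync or O_SYNC); overlap with DMA only arises with true asynchronous I/O such as io_submit.

        #define _GNU_SOURCE          /* for O_DIRECT */
        #include <fcntl.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            void *buf;
            posix_memalign(&buf, 512, 4096);  /* buffer, length, offset must be aligned */
            memset(buf, 'x', 4096);

            int fd = open("data.bin", O_WRONLY | O_CREAT | O_DIRECT, 0644);
            write(fd, buf, 4096);  /* blocks until the buffer has been handed to the device */
            memset(buf, 0, 4096);  /* safe: write() has already returned */

            close(fd);
            free(buf);
            return 0;
        }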

    Read the article

  • Building ffmpeg with an executable output

    - by Kevin Galligan
    I generally don't like to ask such "you figure it out for me" questions, but I suspect this one will be really simple for a C++ guru. I want to build ffmpeg for Android, and I'd like it to output an executable rather than a set of libraries. We've been using the Guardian Project's build: https://github.com/guardianproject/android-ffmpeg It does produce what we want, but I've found tweaking it for different architectures to be, at best, unpleasant. I've gotten this version to build: https://github.com/appunite/AndroidFFmpeg It does a nice job of slicing and dicing different architectures, but produces a JNI version. There is a long story as to why I want the exe, but I'll skip it for now. Is there a flag that needs to be flipped? Some path or other setting? I am at this point fully baffled. Thanks in advance.
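
    For orientation only (a hedged, untested sketch; the cross prefix and sysroot path are placeholders): ffmpeg's own configure builds the ffmpeg command-line executable by default, so the usual recipe is a static cross-compile rather than the JNI library targets the wrapper projects produce:

        # Illustrative configure line for a static ffmpeg binary with an NDK toolchain.
        ./configure \
            --enable-cross-compile \
            --target-os=linux \
            --arch=arm \
            --cross-prefix=arm-linux-androideabi- \
            --sysroot="$NDK_SYSROOT" \
            --disable-shared --enable-static \
            --disable-doc
        make
        # The ffmpeg executable appears at the top of the build tree; wrapper
        # projects that ship only .so files typically discard or suppress it.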

    Read the article

  • How to hang up (disconnect, terminate, ...) incoming calls?

    - by Cesar Valiente
    "How do you hang up incoming calls (in Android of course)?" First, I know this question has been asked and answered several times, and the response is always "you can't". But if we look in the market we get a few applications (all private software, no access to the source code... :-( ) that do this action, such as CallFilter, Panda firewall and others... So... does somebody know how these apps do the hang up action, (or terminate, or disconnect or whatever you call it..)? And other question, if the first don't get a response.. does somebody know how send an incoming call to the voice mail? Of course, all questions are about how to do it programmatically. So with the voicemail question I know there's a flag in contacts that is used for that, but like I said, I'd like to know the programmatical way. Thanks all!

    Read the article

  • mod_rewrite per-dir redirection returning a 400

    - by Eaterrust
    I moved my images directory to a different folder, and now I want to redirect all image requests from that folder to the new one. I do not have access to the main configuration file, so I'm doing this in a .htaccess. I tried this, and it works:

        RewriteCond %{REQUEST_URI} old_dir/.+.(jpg|png|gif)$
        RewriteRule old_dir/(.+[^/]+..+)$ $1 [L,PT]

    But since the files have permanently moved, I want to do a proper redirect, so I added the [R] flag, like this:

        RewriteCond %{REQUEST_URI} old_dir/.+.(jpg|png|gif)$
        RewriteRule old_dir/(.+[^/]+..+)$ $1 [L,PT,R]

    But the server gets confused and returns a 400, so I looked at the log file, and this is what happens:

        strip per-dir prefix: C:/wamp/www/natrazyle/old_dir/images/banner.jpg -> old_dir/images/banner.jpg
        applying pattern 'old_dir/(.+[^/]+..+)$' to uri 'old_dir/images/banner.jpg'
        rewrite 'old_dir/images/banner.jpg' -> 'images/banner.jpg'
        add per-dir prefix: images/banner.jpg -> C:/wamp/www/natrazyle/images/banner.jpg
        explicitly forcing redirect with http://localhost/C:/wamp/www/natrazyle/images/banner.jpg

    As you can see, the full local path gets added after localhost. I know I'm doing something wrong, I just can't figure it out myself. Any help would be greatly appreciated...
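
    For reference, a sketch of the usual fix (assuming the site lives under /natrazyle/, as the log suggests): in a per-directory context, a relative substitution gets the directory's filesystem path re-prefixed before the forced redirect, which is exactly the malformed URL in the log. The redirect therefore needs a root-relative URL, or a RewriteBase:

        # Root-relative substitution, so mod_rewrite emits a proper Location header.
        RewriteCond %{REQUEST_URI} old_dir/.+\.(jpg|png|gif)$
        RewriteRule old_dir/(.+)$ /natrazyle/$1 [L,R=301]

        # Equivalently, declare the URL base and keep a relative substitution:
        # RewriteBase /natrazyle
        # RewriteRule old_dir/(.+)$ $1 [L,R=301]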

    Read the article

  • How can I parse free text (Twitter tweets) against a large database of values?

    - by user136416
    Hi there. Suppose I have a database containing 500,000 records, each representing, say, an animal. What would be the best approach for parsing 140-character tweets to identify matching records by animal name? For instance, in this string... "I went down to the woods today and couldn't believe my eyes: I saw a bear having a picnic with a squirrel." ...I would like to flag up the words "bear" and "squirrel", as they appear in my database. This strikes me as a problem that has probably been solved many times, but from where I'm sitting it looks prohibitively intensive: iterating over every DB record checking for a match in the string is surely a crazy way to do it. Can anyone with a comp sci degree put me out of my misery? I'm working in C#, if that makes any difference. Cheers!
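
    A sketch of the standard inversion (names and the two-entry loader are hypothetical stand-ins for the real 500,000-row query): instead of scanning the records per tweet, load the names once into a hash set and probe it with each word of the tweet, which is O(words) per tweet:

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        class AnimalMatcher
        {
            static readonly HashSet<string> Names =
                new HashSet<string>(LoadAnimalNames(), StringComparer.OrdinalIgnoreCase);

            static IEnumerable<string> LoadAnimalNames()
            {
                // hypothetical loader; in practice: SELECT name FROM animals
                yield return "bear";
                yield return "squirrel";
            }

            static IEnumerable<string> Matches(string tweet)
            {
                foreach (Match m in Regex.Matches(tweet, @"[a-z]+", RegexOptions.IgnoreCase))
                    if (Names.Contains(m.Value))   // O(1) lookup per word
                        yield return m.Value;
            }

            static void Main()
            {
                foreach (var hit in Matches("I saw a bear having a picnic with a squirrel."))
                    Console.WriteLine(hit);        // bear, squirrel
            }
        }

    Multi-word names ("polar bear") need a little more care, e.g. also probing two-word windows, or an Aho-Corasick matcher for the general case.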

    Read the article

  • Optimizing Solr for Sorting

    - by devinfoley
    I'm using Solr for a realtime search index. My dataset is about 60M documents. Instead of sorting by relevance, I need to sort by time. Currently I'm using the sort flag in the query to sort by time. This works fine for specific searches, but when a search returns a large number of results, Solr has to take all of the resulting documents and sort them by time before returning. This is slow, and there has to be a better way. What is the better way?

    Read the article

  • Template or function arguments as implementation details in doxygen?

    - by Vincent
    In doxygen, is there any common way to specify that some C++ template parameters or function parameters are implementation details and should not be specified by the user? For example, a template parameter used as a recursion-level counter in a metaprogramming technique, or a SFINAE parameter in a function. For example:

        /// \brief Do something
        /// \tparam MyFlag A flag...
        /// \tparam Limit Recursion limit
        /// \tparam Current Recursion level counter. SHOULD NOT BE EXPLICITLY SPECIFIED !!!
        template<bool MyFlag, unsigned int Limit, unsigned int Current = 0>
        void myFunction();

    Is there any standard doxygen option equivalent to "SHOULD NOT BE EXPLICITLY SPECIFIED !!!"?
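
    There is no dedicated tag for this that I know of, but one common idiom (a sketch; DOXYGEN_ONLY is a hypothetical symbol) is to hide the implementation parameter behind a preprocessor symbol that only the documentation build defines, via PREDEFINED in the Doxyfile (with preprocessing enabled):

        // In the Doxyfile: PREDEFINED = DOXYGEN_ONLY

        /// \brief Do something
        /// \tparam MyFlag A flag...
        /// \tparam Limit Recursion limit
        #ifdef DOXYGEN_ONLY
        template<bool MyFlag, unsigned int Limit>                            // what users see
        #else
        template<bool MyFlag, unsigned int Limit, unsigned int Current = 0>  // real signature
        #endif
        void myFunction();

    \cond / \endcond can hide whole declarations from the output, but hiding a single template parameter generally needs the preprocessor trick above.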

    Read the article

  • How to perform dynamic formatting with perl during write?

    - by Bee
    I have a format which is defined like below:

        format STDOUT =
        ------------------------------------
        |Field1     | Field2      | Field3 |
        ------------------------------------
        |@<<<<<<<<<<| @<<<<<<<<<<<| @<<<<< |~~
        shift(@list1), shift(@list2), shift(@list3)
        ------------------------------------
        .

        write STDOUT;

    So the questions are as below:

    1. Is it possible to make the list of values printed dynamic? For example, if list 1 contains 12 elements and $flag1 is defined, print only elements 0..10 instead of all 12. I tried doing this by passing the flag as a parameter to the sub which generates the report. However, the last defined format always seems to take precedence: when the final write happens, it applies that format no matter what the condition is.
    2. Is it possible to also add or hide fields using the same process? For example, if $flag2 is defined, add an additional field Field4 to the list?
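
    A sketch of one approach (format names and the extra column are illustrative): Perl selects the format at write() time through the $~ variable, so a different layout can be chosen per call instead of relying on the last definition, and the lists can be truncated before writing:

        # Choose a format per call via $~ instead of the last-defined one.
        format THREE_COL =
        |@<<<<<<<<<<| @<<<<<<<<<<<| @<<<<< |~~
        shift(@list1), shift(@list2), shift(@list3)
        .

        format FOUR_COL =
        |@<<<<<<<<<<| @<<<<<<<<<<<| @<<<<< | @<<<<<< |~~
        shift(@list1), shift(@list2), shift(@list3), shift(@list4)
        .

        @list1 = @list1[0 .. 10] if defined $flag1;     # print only elements 0..10
        $~ = defined $flag2 ? 'FOUR_COL' : 'THREE_COL'; # pick the layout for this write
        write;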

    Read the article

  • Calling finish() After Starting a New Activity

    - by stormin986
    The first Activity that loads in my application is an initialization activity, and once complete it loads a new Activity. I want to ensure that if the user presses Back they go straight to the launcher, not the initialization screen. (Side note: is this even the best approach, or would this be better done with some kind of Intent flag?) Is it correct to call finish() after calling startActivity() on the new activity?

        onCreate() {
            ...
            startActivity(new Intent(this, NextActivity.class));
            finish();
            ...
        }

    I'm still taking in the whole message-queue way of doing things in Android, and my assumption is that calling startActivity() and then finish() from my first Activity's onCreate() will post each respective message to the message queue, but finish execution of onCreate() before moving on to starting the next Activity and finishing my first one. Is this a correct understanding?
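
    That understanding is essentially right; as a sketch, finish() after startActivity() simply marks this activity for removal from the back stack, and the rest of onCreate() still runs before either takes effect:

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // ... one-time initialization ...
            startActivity(new Intent(this, NextActivity.class));
            finish();   // this activity never appears on the back stack
        }

        // Declarative alternative in AndroidManifest.xml (the "flag" the question
        // alludes to is usually done this way rather than with an Intent flag):
        // <activity android:name=".InitActivity" android:noHistory="true" ... />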

    Read the article

  • Running the script a second time, the messages are not retrieved from the mail server

    - by Max Li
    I read the mail from my Gmail account with the following code:

        import poplib
        pop_conn = poplib.POP3_SSL('pop.gmail.com')
        pop_conn.user('user')          # result: '+OK send PASS'
        pop_conn.pass_('password')     # result: '+OK Welcome.'
        print pop_conn.list()[1]
        pop_conn.quit()

    It shows me 1 message, as expected. However, if I run this script a second time, I get 0 messages as the result. On the server the message is still there and unread. How can I get all the messages when running the script a second time? To me it behaves like an email client that doesn't download the same mail twice. Is there some flag to force the program to download everything again? I use Python 2.7.x on Ubuntu 12.10.
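
    This is Gmail-specific behaviour rather than a poplib flag: Gmail's POP3 server hands each message to a given account out only once. Google's documented workaround is "recent mode", which re-serves the last 30 days of mail regardless of previous downloads; a sketch:

        import poplib

        pop_conn = poplib.POP3_SSL('pop.gmail.com')
        pop_conn.user('recent:user@gmail.com')   # the "recent:" prefix enables recent mode
        pop_conn.pass_('password')
        print pop_conn.list()[1]                 # messages from the last 30 days reappear
        pop_conn.quit()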

    Read the article

  • Comparing large strings in JavaScript with a hash

    - by user4815162342
    I have a form with a textarea that can contain large amounts of content (say, articles for a blog) edited using one of a number of third party rich text editors. I'm trying to implement something like an autosave feature, which should submit the content through ajax if it's changed. However, I have to work around the fact that some of the editors I have as options don't support an "isdirty" flag, or an "onchange" event which I can use to see if the content has changed since the last save. So, as a workaround, what I'd like to do is keep a copy of the content in a variable (let's call it lastSaveContent), as of the last save, and compare it with the current text when the "autosave" function fires (on a timer) to see if it's different. However, I'm worried about how much memory that could take up with very large documents. Would it be more efficient to store some sort of hash in the lastSaveContent variable, instead of the entire string, and then compare the hash values? If so, can you recommend a good javascript library/jquery plugin that implements an appropriate hash for this requirement?
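
    Two notes, as a sketch rather than a definitive answer: a plain string comparison is cheaper than it looks (it bails out at the first differing character), but if keeping only a digest is preferred, a tiny hand-rolled hash avoids any plugin dependency. getEditorContent() below is a hypothetical accessor for the editor's text; collisions are theoretically possible with a 32-bit hash, so this trades a sliver of certainty for memory:

        // 32-bit FNV-1a: identical strings always produce identical values.
        function fnv1a(str) {
            var hash = 0x811c9dc5;
            for (var i = 0; i < str.length; i++) {
                hash ^= str.charCodeAt(i);
                // multiply by the FNV prime (16777619) in 32-bit space, via shifts
                hash += (hash << 1) + (hash << 4) + (hash << 7) + (hash << 8) + (hash << 24);
                hash >>>= 0;
            }
            return hash;
        }

        var lastSaveHash = fnv1a(getEditorContent());
        setInterval(function () {
            var current = getEditorContent();
            if (fnv1a(current) !== lastSaveHash) {
                // ... $.post() the content to the autosave endpoint ...
                lastSaveHash = fnv1a(current);
            }
        }, 30000);   // autosave check every 30 seconds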

    Read the article

  • C: Why does gcc allow char array initialization with string literal larger than array?

    - by Ashwin
        int main() {
            char a[7] = "Network";
            return 0;
        }

    A string literal in C is terminated internally with a nul character. So the above code should give a compilation error, since the actual length of the string literal "Network" is 8 and it cannot fit in a char[7] array. However, gcc (even with -Wall) on Ubuntu compiles this code without any error or warning. Why does gcc allow this and not flag it as a compilation error? gcc only gives a warning (still no error!) when the char array is smaller than the string literal. For example, it warns on:

        char a[6] = "Network";

    [Related] Visual C++ 2012 gives a compilation error for char a[7]:

        1>d:\main.cpp(3): error C2117: 'a' : array bounds overflow
        1>          d:\main.cpp(3) : see declaration of 'a'
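
    For what it's worth (a sketch; the clause reference is from memory): this is not a gcc bug. C explicitly permits the terminating nul to be dropped when the literal exactly fills the array (C99 6.7.8p14), so char a[7] = "Network" is valid C, just not a nul-terminated string afterwards. C++ has no such rule, which is why Visual C++ rejects it in a .cpp file.

        #include <stdio.h>

        int main(void) {
            char a[7] = "Network";      /* legal C: 7 characters stored, no '\0' */
            printf("%zu\n", sizeof a);  /* prints 7 */
            printf("%.7s\n", a);        /* precision bounds the read; plain %s would overrun */
            return 0;
        }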

    Read the article

  • Which one is a faster/better SQL practice?

    - by artsince
    Suppose I have a 2-column table (id, flag) and id is sequential. I expect this table to contain a lot of records. I want to periodically select the first row not flagged and update it. Some of the records along the way may have already been flagged, so I want to skip them. Does it make more sense to store the last id I flagged and use it in my select statement, like:

        select * from mytable where id > my_last_id order by id asc limit 1

    or simply to get the first unflagged row, like:

        select * from mytable where flagged = 'F' order by id asc limit 1

    Thank you!
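
    A sketch of the usual answer (MySQL-style syntax, matching the question): with a composite index, the second query becomes a single index seek, with no client-side bookkeeping and no risk of missing rows that were flagged out of order:

        -- One index seek to the first unflagged row, regardless of table size.
        CREATE INDEX idx_mytable_flagged_id ON mytable (flagged, id);

        SELECT *
        FROM mytable
        WHERE flagged = 'F'
        ORDER BY id ASC
        LIMIT 1;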

    Read the article

  • Makefile won't copy .o to obj/ and target to bin/ folders

    - by about blank
    I'm trying to write a Makefile which will copy its target and objects to bin/ and obj/ directories, respectively. Yet when I try to run it I get the following error:

        nasm -f elf64 -g -F stabs main.asm -l spacelander.lst
        ld -o spacelander obj/main.o
        ld: cannot find obj/main.o: No such file or directory
        make: *** [spacelander] Error 1

    Why is this happening?

    Update: I noticed when I posted that the original error was due to whitespace mistakes. After taking care of those, I now get the new error shown above in place of the old one. What is this?

    Update 2: -d flag output is linked below the Makefile source.

    Source:

        ASM := nasm
        ARGS := -f
        FMT := elf64
        OPT := -g -F stabs
        SRC := main.asm
        OBJDIR := obj
        TARGETDIR := bin
        OBJ := $(addprefix $(OBJDIR)/,$(patsubst %.asm, %.o, $(wildcard *.asm)))
        TARGET := spacelander

        .PHONY: all clean

        all: $(OBJDIR) $(TARGET)

        $(OBJDIR):
                mkdir $(OBJDIR)

        $(OBJDIR)/%.o: $(SRC)
                $(ASM) $(ARGS) $(FMT) $(OPT) $(SRC) -l $(TARGET).lst

        $(TARGET): $(OBJ)
                ld -o $(TARGET) $(OBJ)

        clean:
                @rm -f $(TARGET) $(wildcard *.o)
                @rm -rf $(OBJDIR)

    make -d output (too long for the post body): http://pastebin.com/3bctGJxs
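
    A sketch of the likely fix (rule names follow the Makefile above; recipe lines must be indented with tabs): the pattern rule never actually writes into obj/, because nasm with no -o option emits ./main.o, so ld cannot find obj/main.o. Emitting to $@ and linking into bin/ addresses both directories:

        $(OBJDIR)/%.o: %.asm | $(OBJDIR)
                $(ASM) $(ARGS) $(FMT) $(OPT) $< -o $@ -l $(TARGET).lst

        $(TARGETDIR)/$(TARGET): $(OBJ) | $(TARGETDIR)
                ld -o $@ $(OBJ)

        $(TARGETDIR):
                mkdir $(TARGETDIR)

        all: $(TARGETDIR)/$(TARGET)

    The order-only prerequisites (after the |) make sure the directories exist without forcing rebuilds whenever a directory timestamp changes.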

    Read the article

  • Matplotlib Contour Clabel Location

    - by jotimaz
    I would like to control the location of matplotlib clabels on a contour plot, but without utilizing the manual=True flag in clabel. For example, I would like to specify an x-coordinate, and have labels created at the points that pass through this line. I see that you can get the location of the individual labels using get_position(), but I am stuck at that. Any help would be greatly appreciated. Thanks! EDIT: The above image is an example plot that I would like to apply this method to. The default label positions are inconvenient -- the flat areas between Day 2 and Day 4 would be more visually appealing.
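
    One avenue worth checking, as a sketch (the contour data here is synthetic, and label_x stands in for the desired x-coordinate): clabel's manual parameter also accepts an iterable of (x, y) positions, placing each label at the contour point nearest the given coordinate, with no interactive clicking required:

        import numpy as np
        import matplotlib.pyplot as plt

        x = np.linspace(0, 6, 100)
        y = np.linspace(0, 6, 100)
        X, Y = np.meshgrid(x, y)
        Z = np.sin(X) * np.cos(Y)

        cs = plt.contour(X, Y, Z)
        label_x = 3.0   # put all labels along the line x = 3
        positions = [(label_x, yy) for yy in np.linspace(0.5, 5.5, len(cs.levels))]
        plt.clabel(cs, manual=positions)   # labels snap to the nearest contour point
        plt.show()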

    Read the article

  • Good Design for Initialization of Static Array

    - by jplot
    I have a question regarding good design in C++. I have a class A, and all objects of this class use an integer array of constant values (they should share the same array, as their values are constant). The array needs to be computed (just once) before any object of A is created. I thought about having another class B which contains the integer array as a static member, an init() method which would fill this array according to some formula, and a static boolean flag initialized (if this variable is true then the init() method does nothing), but I'm not sure this is the best way to solve my design issue. So my question is: what would be a good design/way to accomplish this? Thanks in advance.
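
    A sketch of a common alternative (C++11 shown; the squaring formula is a stand-in for the real computation): a function-local static is initialized exactly once, on first use, which makes the explicit init() method and the boolean flag unnecessary, and is thread-safe under C++11 rules:

        #include <array>

        class A {
        public:
            int lookup(int i) const { return table()[i]; }

        private:
            static const std::array<int, 256>& table() {
                static const std::array<int, 256> t = compute();  // runs once, lazily
                return t;
            }

            static std::array<int, 256> compute() {
                std::array<int, 256> t{};
                for (int i = 0; i < 256; ++i)
                    t[i] = i * i;   // placeholder for the real formula
                return t;
            }
        };

    The first object (or any caller) that touches table() triggers the computation; every later use just reads the shared array.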

    Read the article

  • Is there a tool to discover if the same class exists in multiple jars in the classpath?

    - by David Citron
    If you have two jars in your classpath that contain different versions of the same class, the classpath order becomes critical. I am looking for a tool that can detect and flag such potential conflicts in a given classpath or set of folders. Certainly a script that starts:

        classes=`mktemp`
        for i in `find . -name "*.jar"`
        do
            echo "File: $i" >> $classes
            jar tf $i >> $classes
            ...
        done

    with some clever sort/uniq/diff/grep/awk later on has potential, but I was wondering if anyone knows of any existing solutions.
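
    Completing the script idea as a sketch (one existing tool in this space is JBoss Tattletale, which reports duplicate classes across jars): pair every class with its jar, sort, and report any class that appears in more than one jar:

        # List every class with its jar, then report classes seen in multiple jars.
        for j in $(find . -name '*.jar'); do
            jar tf "$j" | grep '\.class$' | sed "s|$| $j|"
        done | sort | awk '
            $1 == prev { print "duplicate: " $1 " in " prevjar " and " $2 }
            { prev = $1; prevjar = $2 }'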

    Read the article

  • Ant command line arguments

    - by js7354
    The program works fine when run with Eclipse run configurations, but when run with Ant it is unable to parse an int from args[0], which I do not understand. Full code is available here: https://gist.github.com/4108950/e984a581d5e9de889eaf0c8faf0e57752e825a97 I believe it has something to do with the Ant target:

        <target name="run" description="run the project">
            <java dir="${build.dir}" classname="BinarySearchTree" fork="yes">
                <arg value="6 in.txt"/>
            </java>
        </target>

    The arg value will be changed via the -D flag, as in ant -Dargs="6 testData1.txt" run. Any help would be much appreciated; it is very frustrating.
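
    A sketch of the likely cause and fix: <arg value="6 in.txt"/> passes the whole string as one argument, so args[0] is "6 in.txt" and the int parse fails. Ant's <arg line=.../> splits on whitespace, and a property makes the -D override work:

        <target name="run" description="run the project">
            <java dir="${build.dir}" classname="BinarySearchTree" fork="yes">
                <!-- line= splits on whitespace: args[0]="6", args[1]="testData1.txt" -->
                <arg line="${args}"/>
            </java>
        </target>

        <!-- invoked as: ant -Dargs="6 testData1.txt" run -->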

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example use a common partitioning scheme. The partition function uses 41 equal-size partitions:

        CREATE PARTITION FUNCTION PFT (integer)
        AS RANGE RIGHT
        FOR VALUES
        (
            125000, 250000, 375000, 500000, 625000,
            750000, 875000, 1000000, 1125000, 1250000,
            1375000, 1500000, 1625000, 1750000, 1875000,
            2000000, 2125000, 2250000, 2375000, 2500000,
            2625000, 2750000, 2875000, 3000000, 3125000,
            3250000, 3375000, 3500000, 3625000, 3750000,
            3875000, 4000000, 4125000, 4250000, 4375000,
            4500000, 4625000, 4750000, 4875000, 5000000
        );
        GO
        CREATE PARTITION SCHEME PST
        AS PARTITION PFT
        ALL TO ([PRIMARY]);

    The two tables are:

        CREATE TABLE dbo.T1
        (
            TID integer NOT NULL IDENTITY(0,1),
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID)
        );

        CREATE TABLE dbo.T2
        (
            TID integer NOT NULL,
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID)
        );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

        INSERT dbo.T1 WITH (TABLOCKX) (Column1)
        SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1
        FROM dbo.Numbers AS N
        WHERE n BETWEEN 1 AND 5000000;

    In case you don't already have an auxiliary table of numbers lying around, here's a script to create one with 10 million rows:

        CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

        WITH
            L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
            L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
            L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
            L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
            L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
            Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
        INSERT dbo.Numbers WITH (TABLOCKX)
        SELECT TOP (10000000) n
        FROM Nums
        ORDER BY n
        OPTION (MAXDOP 1);

    Table T1 contains data like this:

    Next we load data into table T2. The relationship between the two tables is that table 2 contains 'n' rows for each row in table 1, where 'n' is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

        INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1)
        SELECT T.TID, N.n
        FROM dbo.T1 AS T
        JOIN dbo.Numbers AS N
            ON N.n >= 1
            AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

        SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*)
        FROM dbo.T1 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2:

        SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*)
        FROM dbo.T2 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is also empty). Ok, that's the test data done.
    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join, and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image). With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

    For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function, so rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE
            $PARTITION.PFT(T1.TID) = 1
            AND $PARTITION.PFT(T2.TID) = 1
        OPTION (MAXDOP 1);

    The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
    Forcing a Merge Join

    Let's force the optimizer to use a merge join on the test query using a hint:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN);

    This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN, QUERYTRACEON 8649);

    This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge).

    A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

        -- Pretend IOs are 50x cost temporarily
        DBCC SETIOWEIGHT(50);

        -- Co-located hash join
        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (RECOMPILE);

        -- Reset IO costing
        DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    The estimated execution plan for the collocated join shows a Constant Scan containing one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange... and so on until all partitions have been processed.

    It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition. The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single-partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know:

    - Merge join does not require a memory grant; and
    - Merge join was the optimizer's preferred join option for a single-partition join

    Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            -- Partition numbers
            SELECT p.partition_number
            FROM sys.partitions AS p
            WHERE
                p.[object_id] = OBJECT_ID(N'T1', N'U')
                AND p.index_id = 1
        ) AS P
        CROSS APPLY
        (
            -- Count per collocated join
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

    The cardinality estimates in the estimated plan aren't all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole: summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory.

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
    We can work around that by writing the partition numbers to a temporary table (or table variable):

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        CREATE TABLE #P (partition_number integer PRIMARY KEY);

        INSERT #P (partition_number)
        SELECT p.partition_number
        FROM sys.partitions AS p
        WHERE
            p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1;

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

        DROP TABLE #P;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to. In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time). Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

    Parallel Collocated Merge Join

    We can produce the desired parallel plan using trace flag 8649 again:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    One difference between the actual execution plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

    Performance

    The important thing is the performance of this parallel collocated merge join: just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

    - Collocated parallel merge join: 1350ms
    - Parallel hash join: 2600ms
    - Collocated serial merge join: 3500ms
    - Serial merge join: 5000ms
    - Parallel merge join: 8400ms
    - Collocated parallel hash join: 25,300ms (hash spill per partition)

    The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
    This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

    Parallel Collocated Merge Join with Demand Partitioning

    This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            VALUES
                (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
                (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
                (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
        ) AS P (partition_number)
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    Comparing the actual execution plan with the parallel collocated hash join plan, the manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

    Final Words

    It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won't Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated: down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

    From a thread's point of view...

    If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
    The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

    Related Reading

    Understanding and Using Parallelism in SQL Server
    Parallel Execution Plans Suck

    © 2013 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi

    Read the article
