Search Results

Search found 14169 results on 567 pages for 'parallel programming'.

Page 8/567 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Pair programming business logic with a non-IT person

    - by user1598390
    Have you had any experience in which a non-IT person works with a programmer during the coding process? It's like pair programming, but one person is a non-IT person who knows a lot about the business, maybe a process engineer with a math background who knows how things are calculated and can understand non-idiomatic, procedural code. I've found that some procedural, domain-specific languages like PL/SQL are quite understandable by non-IT engineers. These people end up being co-authors of the code and guarantee the correctness of formulas, factors, etc. I've found this kind of pair programming quite productive; these engineer-users feel they are also "owners" and "authors" of the code, and it helps minimize misunderstanding in the communication process. They even help design the test cases. Is this practice common? Does it have a name? Have you had similar experiences?

    Read the article

  • Studies of Pair Programming on Translation Projects

    - by gmletzkojr
    I am looking for information (i.e., studies, metrics, etc.) about pair programming when translating a project from an "older" language to a "newer" language. In this particular case, translating means line-for-line translation wherever possible, modifying the design only when absolutely necessary, not when the modification would merely improve performance. I have done pair programming in new development, and I am well aware of the pros and cons of pairing in that environment. However, I haven't been able to find any information for this particular case. Any help is appreciated.

    Read the article

  • Is functional programming a superset of object-oriented programming?

    - by Jimmy Hoffa
    The more functional programming I do, the more I feel like it adds an extra layer of abstraction, like an onion's layer: all-encompassing of the previous layers. I don't know if this is true, so going off the OOP principles I've worked with for years, can anyone explain how functional programming does or doesn't accurately provide each of them: encapsulation, abstraction, inheritance, polymorphism? I think we can all say, yes, it has encapsulation via tuples, but do tuples technically count as a feature of functional programming itself, or are they just a utility of the language? I know Haskell can meet the "interfaces" requirement, but again I'm not certain whether its method is inherent to functional programming. I'm guessing that since functors have a mathematical basis, you could say those are a definite built-in expectation of functional programming, perhaps? Please detail how you think functional programming does or does not fulfill the four principles of OOP.
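
    A minimal sketch of the encapsulation point (hypothetical example, not from the thread): in Haskell, encapsulation usually comes from module export lists rather than tuples. The module below exports the Counter type and its operations but hides the constructor, so client code cannot touch the internal Int.

        module Counter (Counter, new, increment, value) where

        -- The constructor MkCounter is deliberately not exported, so the
        -- representation stays hidden from client code.
        newtype Counter = MkCounter Int

        new :: Counter
        new = MkCounter 0

        increment :: Counter -> Counter
        increment (MkCounter n) = MkCounter (n + 1)

        value :: Counter -> Int
        value (MkCounter n) = n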

    Read the article

  • Should programming languages be strict or loose?

    - by Ralph
    In Python and JavaScript, semicolons are optional. In PHP, quotes around array keys are optional ($_GET[key] vs $_GET['key']), although if you omit them PHP will first look for a constant by that name. PHP also allows two different styles for blocks (colon-delimited or brace-delimited). I'm creating a programming language now, and I'm trying to decide how strict I should make it. There are a lot of cases where extra characters aren't really necessary and the code can be interpreted unambiguously due to precedence, but I'm wondering whether I should enforce them anyway to encourage good programming habits. What do you think?

    Read the article

  • Programming Interview Question [duplicate]

    - by user136494
    This question already has an answer here: How to prepare yourself for programming interview questions? [duplicate] (6 answers) I have an upcoming interview in a couple of days and had a question for you guys. I've heard that programming interviews have whiteboard problems, where you solve a simple problem on a whiteboard. My questions to you are: How many whiteboard problems do you have to solve? Is there more than one? What are examples of whiteboard problems? Is FizzBuzz one of them? Where can I find practice problems? Does anyone know of any good web sites?
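
    For reference, FizzBuzz is indeed a classic whiteboard warm-up. A sketch of one common formulation (hypothetical, not from the thread): print the numbers 1 to 100, but "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.

        # FizzBuzz: the point is less the algorithm than showing you can
        # write a clean loop and conditionals under interview pressure.
        for i in range(1, 101):
            if i % 15 == 0:
                print("FizzBuzz")
            elif i % 3 == 0:
                print("Fizz")
            elif i % 5 == 0:
                print("Buzz")
            else:
                print(i)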

    Read the article

  • How do you learn a new programming language?

    - by Naveen
    I am a C++ developer with some good experience in it. When I try to learn a new language (I have tried Java, C#, Python, and Perl so far), I usually pick up a book and try to read it. But the problem is that these books typically start with very basic programming concepts such as loops, operators, etc., and it starts to get boring very soon. I also feel I would gain only theoretical knowledge without any practical experience of writing the code. So my question is: how do you tackle these situations? Do you just skip a chapter if it explains something basic? Also, do you have a standard set of programs that you try to write in every new programming language you learn?

    Read the article

  • Learning Asynchronous programming

    - by xenoterracide
    Asynchronous, non-blocking, event-driven programming seems to be all the rage. I have a basic conceptual understanding of what this all means. However, what I'm not sure about is when and where my code can benefit from being asynchronous, or how to make blocking IO non-blocking. I'm sure that I could simply use a library to do this, but I'm more interested in the deeper concepts and the various ways to implement it myself. Are there any comprehensive/definitive books or other resources on this subject (like GoF for design patterns, K&R for C, or tldp for things like bash)? (Note: I'm not sure if this is functionally an identical question to my question on learning event-driven programming.)
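
    A minimal sketch of the underlying idea (hypothetical code, not from any particular book): an event loop built on Python's standard-library selectors module, with sockets switched to non-blocking mode so one thread can serve many connections. Readiness notification replaces blocking reads.

        import selectors
        import socket

        sel = selectors.DefaultSelector()

        def accept(server_sock):
            conn, _addr = server_sock.accept()
            conn.setblocking(False)  # per-connection IO becomes non-blocking
            sel.register(conn, selectors.EVENT_READ, echo)

        def echo(conn):
            data = conn.recv(1024)   # safe: select() said this is readable
            if data:
                conn.sendall(data)   # a real server would buffer writes too
            else:
                sel.unregister(conn)
                conn.close()

        server = socket.socket()
        server.bind(("localhost", 9000))
        server.listen()
        server.setblocking(False)
        sel.register(server, selectors.EVENT_READ, accept)

        while True:  # the event loop: wait for readiness, then dispatch
            for key, _mask in sel.select():
                key.data(key.fileobj)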

    Read the article

  • Mythbusters: Programming/hacking myths [closed]

    - by stephen776
    Hey guys, I am a big fan of the Discovery show Mythbusters, as I'm sure some of you are as well. I have always wanted them to do an episode on programming/hacking. They get a lot of their show ideas from fans, so I thought we could compile a list of possible myths to bust. Let's hear your ideas! (Sorry if this is not appropriate; close if necessary.) Edit: I am not necessarily looking for subjective "this is what I want to see" answers. I am talking more along the lines of interesting computer/programming/hacking stories that would appeal to a general audience. I do not expect them to do a show on "What's faster, i++ or i + 1".

    Read the article

  • Programming for the iPhone

    - by Bobby Alexander
    What's the best way to get started on iPhone development if you are an experienced C++ or C# programmer? Most books assume you know either nothing or everything. What are the steps to achieve this? For example: first learn Objective-C (let's say), next learn Cocoa... I am interested in books/resources. I read Getting Started with iPhone Development from O'Reilly (the Missing Manuals book), but that just provided an overview of the programming and concentrated more on getting your app into the App Store. I need resources that will help me start coding. Other questions: How much Objective-C do you need to know? How do you go about learning the Cocoa framework? Can I start directly on Cocoa Touch, or do I need to know the Mac Cocoa framework first? Input from someone who was in the same situation (knows C++/C# but has no clue about Mac programming, Objective-C, or Cocoa) would help greatly.

    Read the article

  • Pair programming and unit testing

    - by TheSilverBullet
    My team follows the Scrum development cycle. We have received feedback that our unit-test coverage is not very good. A team member is suggesting adding an external testing team to assist the core team, but I feel this will backfire badly. Instead, I am thinking of suggesting a pair programming approach. I have a feeling that this should help make the code more "test-worthy", and soon the team could move to test-driven development. What potential problems might arise out of pair programming?

    Read the article

  • Pronunciation in programming?

    - by Xepoch
    How do you correctly or erroneously pronounce programming terms? Any that you find need strict correction, or that have a history in early CS culture?

    Programming:
    char = "tchar", not "care"?
    ! = bang, not exclamation?
    # = pound, not hash? (exception: #! = shebang)
    * = splat, not star?
    regex = "rej ex", not "regg ex"?
    sql = "s q l", not "sequel" (already answered, just an example)

    Unixen:
    | = pipe, not vertical bar?
    bin = bin as in pin, not as in binary?
    lib = lib as in library, not as in liberate?
    etc = "ett see", not "e t c" (as in /etc, and not "&c")

    Annoyance:
    / = slash, not backslash
    LaTeX = "laytek", not "lay teks"

    Read the article

  • Pair programming remotely with Visual Studio?

    - by shamp00
    What tools exist to facilitate pair programming with Visual Studio when the programmers are not in the same physical location? At the moment we are thinking voice (Skype?) plus remote desktop (VNC? TeamViewer?), but it would be good to know of other suggestions and experiences. Also, is there anything more integrated with Visual Studio? A bit more background: we are two experienced developers who have collaborated well for a long time on a large, mature project (ASP.NET, Windows Forms and SQL Server), but we are not usually working on the same part of the code base at the same time. We intend to spend some weeks doing substantial refactoring, and it would be ideal if we could do this work with a pair programming approach.

    Read the article

  • Harmful temptations in programming

    - by gaearon
    Just curious: what kinds of temptations in programming turned out to be really harmful in your projects? Like when you really feel the urge to do something, and you believe it's going to benefit the project, or else you just trick yourself into believing it will, and after a week you realize you haven't solved any real problems but instead created new ones or, in the best case, pleased your inner beast with no visible impact. Personally, I find it very hard not to refactor bad code. I work with a lot of bad legacy code, and it takes some deep breaths not to touch it when I have no tests to prove my refactoring doesn't break anything. Another demon for me is user interface: I can literally spend hours changing UI layout just because I enjoy doing it. Sometimes I tell myself I'm working on usability, but the truth is I just love moving buttons around. What are your programming demons, and how do you avoid them?

    Read the article

  • Assembly as a First Programming Language?

    - by Anto
    How good an idea do you think it would be to teach people Assembly (some variant) as a first programming language? It would take a lot more effort than learning, for instance, Java or Python, but one would have a good understanding of the machine more or less from "programming day one" (compared to many higher-level languages, at least). What do you think? Is it a realistic idea, at least for those who are ready to make the extra effort? Advantages and disadvantages? Note: I'm no teacher, just curious.

    Read the article

  • Programming curricula

    - by davidk01
    There are a lot of schools that teach Java and C++, but whenever I see the syllabus for one of these classes it's almost always some cut-and-dried OO material with possibly a boring end-of-class project. With all the little gadgets out there, and emulators for those gadgets, why aren't more schools repurposing those classes so that students work their way up to building Android or MeeGo applications? That way students get to experience first-hand what it takes to engineer and build a piece of software, instead of doing finger exercises with syntax. Practically every self-taught programmer I know started programming because they wanted to make their gadgets do things for them. They didn't learn a programming language with an abstract notion of using it on some far-distant project, so I don't understand why schools don't emulate this style of teaching.

    Read the article

  • Reflective practice in programming using keystroke playback

    - by Graham
    I'm thinking of applying Reflective Practice to improving my programming skills. To that end, I want to be able to watch myself writing code. In general, what is a good method for applying Reflective Practice to the craft of programming? In particular, if it's a good idea, is there an editor that records keystrokes and then plays them back at a later time, possibly running the keys together without delays, or replaying at a 2x/4x/8x accelerated rate? Screencasting with RecordMyDesktop is an option, but it has the downsides of waiting for encoding and ending up with a big video file instead of a list of keystrokes.

    Read the article

  • Does the D programming language have a future?

    - by user32756
    I have stumbled over D several times and really asked myself why it isn't more popular. D is a systems programming language. Its focus is on combining the power and high performance of C and C++ with the programmer productivity of modern languages like Ruby and Python. Special attention is given to the needs of quality assurance, documentation, management, portability and reliability. Do you think it has a future? I really would like to try it, but somehow the thought that I'm the only person on earth programming in D discourages me.

    Read the article

  • Programming my first C++ program

    - by Jason H.
    I have a basic understanding of programming, and I am currently learning C++. I'm in the beginning phases of building my own CLI program for Ubuntu. However, I have hit a few snags and was wondering if I could get some clarification. The program I am working on is called "sat" and will be available via the command line only. I have the main.cpp. However, my real question is more of a "best practices" one about program organization. When my program "sat" is invoked, I want it to take additional arguments. Here is an example: sat task subtask. I'm not sure whether the task should live in its own task.cpp file for better organization, or whether it should be a function in main.cpp. If the task should be in its own file, how do you accept arguments in the main.cpp file and reference the other file? Any thoughts on which method is preferred, and reference material to back up the reasoning?
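
    A minimal sketch of the split-file layout (hypothetical names, assuming the "sat task subtask" usage above): declare the task entry point in a header, define it in task.cpp, and let main.cpp parse argv and delegate. Compile the units together, e.g. g++ main.cpp task.cpp -o sat.

        // task.h - declaration shared between translation units
        #ifndef TASK_H
        #define TASK_H
        #include <string>
        int run_task(const std::string& task, const std::string& subtask);
        #endif

        // task.cpp - the definition lives in its own file
        #include "task.h"
        #include <iostream>
        int run_task(const std::string& task, const std::string& subtask) {
            std::cout << "running " << task << " / " << subtask << '\n';
            return 0;  // 0 = success, following the usual CLI convention
        }

        // main.cpp - parses the command line and delegates
        #include "task.h"
        #include <iostream>
        int main(int argc, char* argv[]) {
            if (argc < 3) {
                std::cerr << "usage: sat <task> <subtask>\n";
                return 1;
            }
            return run_task(argv[1], argv[2]);  // argv[0] is the program name
        }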

    Read the article

  • What are the differences between programming languages? [closed]

    - by Omega
    Once upon a time, I heard from someone that the only difference between programming languages is the syntax. I wanted to deny it, to say that there are other fundamental aspects that truly set a language apart from others besides syntax. But I couldn't... So, can you? Whenever I search Google for something like "differences between programming languages", the results tend to be debates between two specific languages (I'd like something more general); however, some of the aspects that people seemed to debate the most were: object orientation; method/operator overloading (I actually see this as rather related to syntax); and garbage collection (while it seems like a good difference, for some reason it doesn't seem that "fundamental"). What important aspects other than syntax can you think of?

    Read the article

  • Good book about advanced programming techniques [closed]

    - by Luca
    I am looking for a book covering advanced programming techniques, one that covers different practical scenarios and describes the challenges along with their solutions. For example: what are the best ways to implement a credit-card purchasing module for a web application, or how do you manage responsiveness for the front end of the web application itself (dealing with caching, optimizing plugins, etc.)? On the web there are tons of tutorials about these topics, but I am looking for a book where such cases are collected together and treated by real professionals. If the book provided some code samples, that would be a plus (especially C# .NET), but I am more interested in the approach/solution than in the code details. I could not find any of these cases in general books about programming, so I hope someone can point me in the right direction. EDIT: I have 4 years' experience as a web developer, mainly with Microsoft technologies (C#, ASP.NET, SQL Server) and client-side technologies (jQuery, HTML/CSS).

    Read the article

  • How do I get out of the habit of procedural programming and into object-oriented programming?

    - by Shadi Almosri
    Hiya all, I'm hoping to get some tips to help me break out of what I consider, after all these years, a bad habit: procedural programming. Every time I attempt to do a project in OOP, I eventually end up reverting to procedural code. I guess I'm not completely convinced by OOP (even though I think I've heard everything good about it!). So any good practical examples would help: user authentication/management, data parsing, and CMS/blogging/e-commerce systems are the kinds of common programming tasks I carry out often, yet I haven't been able to get my head around how to do them in OOP rather than procedurally, especially as the systems I build tend to work, and work well. One downfall I can see in my development is that I reuse my code often, and it often needs rewrites and improvement, but I sometimes consider this a natural evolution of my software development. Yet I want to change! To my fellow programmers: help! :) Any tips on how I can break out of this nasty habit?

    Read the article

  • How to run multiple shell scripts in parallel

    - by tom smith
    I've got a few test scripts, each of which runs a test PHP app. Each script runs forever. So cat.sh, dog.sh, and foo.sh each run a PHP script, and each shell script runs its PHP app in a loop, so it runs forever, sleeping after each run. I'm trying to figure out how to run the scripts in parallel, and at the same time see the output of the PHP apps in the stdout/terminal window. I thought simply doing something like foo.sh > &2, dog.sh > &2, cat.sh > &2 in a shell script would be sufficient, but it's not working: foo.sh runs foo.php once, and it runs correctly; dog.sh runs dog.php in a never-ending loop, as expected; but cat.sh, which runs cat.php in a never-ending loop, never runs! It appears the shell script never gets to cat.sh. If I run cat.sh by itself in a separate window/terminal, it runs as expected. Thoughts/comments?
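
    A minimal sketch of the usual fix (using the asker's script names): each command in a script runs in the foreground and blocks until it exits, which is why the never-ending dog.sh stopped cat.sh from ever starting. Sending each job to the background with & lets all three run concurrently while still writing to the terminal.

        #!/bin/bash
        # Launch each looping script as a background job; their stdout and
        # stderr still go to the current terminal.
        ./foo.sh &
        ./dog.sh &
        ./cat.sh &

        # Block until all background jobs exit (these loop forever, so this
        # keeps the wrapper alive; Ctrl-C or 'kill %1 %2 %3' stops them).
        wait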

    Read the article

  • Understanding and Controlling Parallel Query Processing in SQL Server

    Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them.
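
    As a hedged illustration of the "controlling" side (the white paper's own examples are not reproduced here): the degree of parallelism can be capped per query with a MAXDOP hint, or instance-wide through sp_configure. The table name below is hypothetical.

        -- Cap a single query at 4 parallel workers
        SELECT COUNT_BIG(*)
        FROM dbo.FactSales      -- hypothetical table
        OPTION (MAXDOP 4);

        -- Set an instance-wide default degree of parallelism
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 4;
        RECONFIGURE;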

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

        CREATE PARTITION FUNCTION PFT (integer)
        AS RANGE RIGHT
        FOR VALUES
        (
            125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000,
            1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000,
            2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000,
            3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000,
            4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000
        );
        GO
        CREATE PARTITION SCHEME PST
        AS PARTITION PFT
        ALL TO ([PRIMARY]);

    The two tables are:

        CREATE TABLE dbo.T1
        (
            TID integer NOT NULL IDENTITY(0,1),
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T1
                PRIMARY KEY CLUSTERED (TID)
                ON PST (TID)
        );

        CREATE TABLE dbo.T2
        (
            TID integer NOT NULL,
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T2
                PRIMARY KEY CLUSTERED (TID, Column1)
                ON PST (TID)
        );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

        INSERT dbo.T1 WITH (TABLOCKX) (Column1)
        SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1
        FROM dbo.Numbers AS N
        WHERE n BETWEEN 1 AND 5000000;

    In case you don't already have an auxiliary table of numbers lying around, here's a script to create one with 10 million rows:

        CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

        WITH
            L0 AS (SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS (SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
            L2 AS (SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
            L3 AS (SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
            L4 AS (SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
            L5 AS (SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
            Nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
        INSERT dbo.Numbers WITH (TABLOCKX)
        SELECT TOP (10000000) n
        FROM Nums
        ORDER BY n
        OPTION (MAXDOP 1);

    Next we load data into table T2. The relationship between the two tables is that table T2 contains 'n' rows for each row in table T1, where 'n' is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

        INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1)
        SELECT T.TID, N.n
        FROM dbo.T1 AS T
        JOIN dbo.Numbers AS N
            ON N.n >= 1
            AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T1 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows each (40 * 125k = 5m rows); the rightmost partition remains empty. The next query shows the distribution for table T2:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T2 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is again empty). Ok, that's the test data done.

    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables T1 and T2 on the TID column:

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads. With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Running the query without the actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

    For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function, so rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = 1
        AND $PARTITION.PFT(T2.TID) = 1
        OPTION (MAXDOP 1);

    The optimizer chooses a (one-to-many) merge join instead of a hash join, and the single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps enough to be competitive with the parallel hash join chosen by the optimizer. This raises a question: if the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?

    Forcing a Merge Join

    Let's force the optimizer to use a merge join on the test query using a hint:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN);

    The resulting plan performs the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN, QUERYTRACEON 8649);

    This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice, though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms: slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs, and the cardinality estimates and thread distribution are also still very good.

    A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro). CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail.

    The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.

    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

        -- Pretend IOs are 50x cost temporarily
        DBCC SETIOWEIGHT(50);

        -- Co-located hash join
        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (RECOMPILE);

        -- Reset IO costing
        DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    In the estimated execution plan for the collocated join, the Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange... and so on until all partitions have been processed.

    It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition. The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single-partition scans of table T1 is slightly low: the properties of the Clustered Index Scan of T1 show an estimate of 121,951 rows.

    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know two things: merge join does not require a memory grant, and merge join was the optimizer's preferred join option for a single-partition join. Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            -- Partition numbers
            SELECT p.partition_number
            FROM sys.partitions AS p
            WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1
        ) AS P
        CROSS APPLY
        (
            -- Count per collocated join
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

    The cardinality estimates in the resulting plan aren't all that good, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

    Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole: summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant; the parallel hash join plan used 16 threads and reserved 569 MB of memory.

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.

    We can work around that by writing the partition numbers to a temporary table (or table variable):

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        CREATE TABLE #P (partition_number integer PRIMARY KEY);

        INSERT #P (partition_number)
        SELECT p.partition_number
        FROM sys.partitions AS p
        WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
        AND p.index_id = 1;

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

        DROP TABLE #P;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to. In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time), but the parallel plan it found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

    Parallel Collocated Merge Join

    We can produce the desired parallel plan using trace flag 8649 again:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

    Performance

    The important thing is the performance of this parallel collocated merge join: just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate), from quickest to slowest:

    1. Collocated parallel merge join: 1350ms
    2. Parallel hash join: 2600ms
    3. Collocated serial merge join: 3500ms
    4. Serial merge join: 5000ms
    5. Parallel merge join: 8400ms
    6. Collocated parallel hash join: 25,300ms (hash spill per partition)

    The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).

    This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

    Parallel Collocated Merge Join with Demand Partitioning

    This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            VALUES
                (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
                (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
                (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
        ) AS P (partition_number)
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

    Final Words

    It is a shame that the current query optimizer does not consider a collocated merge join (the Connect item was closed as Won't Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated: down from 569MB to 1.2MB.

    The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

    From a thread's point of view...

    If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).

    The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34), and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

    Related Reading

    Understanding and Using Parallelism in SQL Server
    Parallel Execution Plans Suck

    © 2013 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi

    Read the article
