Search Results

Search found 11759 results on 471 pages for 'isolation level'.

  • xslt apply-templates in second level

    - by m00sila
    I cannot wrap <panel> tags around the second-level individual items, as shown in the expected output below. With the XSLT I have written, all the 1.x items end up in a single node. Please help me.

    Source XML:

      <root>
        <step id="1">
          <content>
            <text> 1.0 Sample first level step text </text>
          </content>
          <content/>
          <step>
            <content>
              <text> 1.1 Sample second level step text </text>
            </content>
          </step>
          <step>
            <content>
              <text> 1.2 Sample second level step text </text>
            </content>
          </step>
          <step>
            <content>
              <text> 1.3 Sample second level step text </text>
            </content>
          </step>
        </step>
      </root>

    Expected output:

      <panel>
        <panel> 1.0 Sample first level step text </panel>
        <panel> 1.1 Sample second level step text </panel>
        <panel> 1.2 Sample second level step text </panel>
        <panel> 1.3 Sample second level step text </panel>
      </panel>

    My XSLT:

      <xsl:template match="/">
        <panel>
          <xsl:apply-templates/>
        </panel>
      </xsl:template>

      <xsl:template match="root/step">
        <panel>
          <panel>
            <xsl:apply-templates select="content/text/node()"/>
          </panel>
          <panel>
            <xsl:apply-templates select="step/content/text/node()"/>
          </panel>
        </panel>
      </xsl:template>
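
    A possible fix (an untested editor's sketch, not from the original thread): every wanted string lives in a content element that has a text child, so you can flatten them all in document order with a single apply-templates instead of special-casing each nesting level:

      <xsl:template match="/root">
        <panel>
          <xsl:apply-templates select=".//content[text]"/>
        </panel>
      </xsl:template>

      <xsl:template match="content">
        <panel>
          <xsl:value-of select="text"/>
        </panel>
      </xsl:template>

    The [text] predicate skips the empty <content/> element, so only the four populated steps produce panels.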

    Read the article

  • Configurable ruby logger setup: Logger.new().level = variable

    - by Daniel
    Hi, I want to make the logging level of a Ruby application configurable:

      require 'logger'
      config = { :level => 'Logger::WARN' }
      log = Logger.new STDOUT
      log.level = Kernel.const_get config[:level]

    Well, irb wasn't happy with that and threw "NameError: wrong constant name Logger::WARN" in my face. Ugh! I was insulted. I could solve this with a case/when, or just do log.level = 1, but there must be a more elegant way! Does anyone have any ideas? -daniel
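
    A hedged fix (editor's sketch, not from the thread): Kernel.const_get rejects namespaced names like 'Logger::WARN'; resolving a bare severity name against the Logger module itself sidesteps the NameError:

      require 'logger'

      # Store only the severity name and look it up inside Logger's namespace.
      config = { :level => 'WARN' }

      log = Logger.new(STDOUT)
      log.level = Logger.const_get(config[:level])  # Logger::WARN == 2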

    Read the article

  • How to use second level cache for lazy loaded collections in Hibernate?

    - by Chandru
    Let's say I have two entities, Employee and Skill, and every employee has a set of skills. When I lazily load the skills through different Employee instances, the cache is not used for the skills. Consider the following data set:

      Employee - 1 : Java, PHP
      Employee - 2 : Java, PHP

    When I load Employee - 2 after Employee - 1, I do not want Hibernate to hit the database for the skills; it should instead use the Skill instances already available in the cache. Is this possible? If so, how?

    Hibernate configuration:

      <session-factory>
        <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="hibernate.connection.password">pass</property>
        <property name="hibernate.connection.url">jdbc:mysql://localhost/cache</property>
        <property name="hibernate.connection.username">root</property>
        <property name="hibernate.dialect">org.hibernate.dialect.MySQLInnoDBDialect</property>
        <property name="hibernate.cache.use_second_level_cache">true</property>
        <property name="hibernate.cache.use_query_cache">true</property>
        <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.EhCacheProvider</property>
        <property name="hibernate.hbm2ddl.auto">update</property>
        <property name="hibernate.show_sql">true</property>
        <mapping class="org.cache.models.Employee" />
        <mapping class="org.cache.models.Skill" />
      </session-factory>

    The entities, with imports, getters, and setters removed:

      @Entity
      @Table(name = "employee")
      @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
      public class Employee {
          @Id
          @GeneratedValue(strategy = GenerationType.IDENTITY)
          private int id;
          private String name;

          public Employee() { }

          @ManyToMany
          @JoinTable(name = "employee_skills",
                     joinColumns = @JoinColumn(name = "employee_id"),
                     inverseJoinColumns = @JoinColumn(name = "skill_id"))
          @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
          private List<Skill> skills;
      }

      @Entity
      @Table(name = "skill")
      @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
      public class Skill {
          @Id
          @GeneratedValue(strategy = GenerationType.IDENTITY)
          private int id;
          private String name;
      }

    SQL issued when loading the second employee and his skills:

      Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_
                 from employee employee0_ where employee0_.id=?
      Hibernate: select skills0_.employee_id as employee1_1_, skills0_.skill_id as skill2_1_,
                 skill1_.id as id1_0_, skill1_.name as name1_0_
                 from employee_skills skills0_
                 left outer join skill skill1_ on skills0_.skill_id=skill1_.id
                 where skills0_.employee_id=?

    I specifically want to avoid the second query; the first one is unavoidable anyway.
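
    A hedged editor's note (not from the thread): the collection cache stores only skill ids per owning employee, so the first load of each employee's collection has to touch the database; what the second-level cache can serve is the Skill entities themselves. One way to see what is actually cached is Hibernate's statistics API, roughly like this (Hibernate 3-era API; a getSkills() getter is assumed, since getters were elided above):

      import org.hibernate.Session;
      import org.hibernate.SessionFactory;
      import org.hibernate.cfg.AnnotationConfiguration;

      public class CacheCheck {
          public static void main(String[] args) {
              SessionFactory sf = new AnnotationConfiguration().configure().buildSessionFactory();
              sf.getStatistics().setStatisticsEnabled(true);

              // Warm the caches with employee 1 and his skills.
              Session s1 = sf.openSession();
              Employee e1 = (Employee) s1.get(Employee.class, 1);
              e1.getSkills().size();   // initialize the lazy collection
              s1.close();

              // Load employee 2; his collection query still runs once,
              // but the Skill entities should resolve from the L2 cache.
              Session s2 = sf.openSession();
              Employee e2 = (Employee) s2.get(Employee.class, 2);
              e2.getSkills().size();
              s2.close();

              System.out.println("L2 hits:   " + sf.getStatistics().getSecondLevelCacheHitCount());
              System.out.println("L2 misses: " + sf.getStatistics().getSecondLevelCacheMissCount());
          }
      }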

    Read the article

  • How do you get average of sums in SQL (multi-level aggregation)?

    - by paxdiablo
    I have a simplified table xx as follows:

      rdate date
      rtime time
      rid   integer
      rsub  integer
      rval  integer
      primary key on (rdate, rtime, rid, rsub)

    and I want to get the average (across all times) of the sums (across all ids) of the values. By way of a sample table, I have (with consecutive identical values blanked out for readability):

      rdate       rtime     rid  rsub  rval
      -------------------------------------
      2010-01-01  00.00.00  1    1     10
                                 2     20
                            2    1     30
                                 2     40
                  01.00.00  1    1     50
                                 2     60
                            2    1     70
                                 2     80
                  02.00.00  1    1     90
                                 2     100
      2010-01-02  00.00.00  1    1     999

    I can get the sums I want with:

      select rdate, rtime, rid, sum(rval) as rsum
      from xx
      where rdate = '2010-01-01'
      group by rdate, rtime, rid

    which gives me:

      rdate       rtime     rid  rsum
      -------------------------------
      2010-01-01  00.00.00  1    30   (10+20)
                            2    70   (30+40)
                  01.00.00  1    110  (50+60)
                            2    150  (70+80)
                  02.00.00  1    190  (90+100)

    as expected. Now what I want is the query that will also average those sums across the time dimension, giving me:

      rdate       rtime     ravgsum
      -----------------------------
      2010-01-01  00.00.00  50   ((30+70)/2)
                  01.00.00  130  ((110+150)/2)
                  02.00.00  190  ((190)/1)

    I'm using DB2 for z/OS, but I'd prefer standard SQL if possible.
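
    A hedged sketch (standard SQL; untested on z/OS): nest the first query as a derived table and average its sums per time slot:

      -- Inner query produces one rsum per (rdate, rtime, rid);
      -- the outer query averages those sums within each (rdate, rtime).
      select rdate, rtime, avg(rsum) as ravgsum
      from (
          select rdate, rtime, rid, sum(rval) as rsum
          from xx
          where rdate = '2010-01-01'
          group by rdate, rtime, rid
      ) as sums
      group by rdate, rtime

    Note that avg() over an integer column does integer division in DB2, which happens to be exact for the sample data; cast rsum to decimal if fractional averages matter.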

    Read the article

  • MDX: How To Aggregate Hierarchy Level Members With Same Name

    - by Dave Frautnick
    Greetings, I am new to MDX and am having trouble understanding how to perform an aggregation on a hierarchy level whose members have the same names. This query is particular to Microsoft Analysis Services 2000 cubes. I have a hierarchy dimension with levels defined as follows:

      [Segment].[Flow].[Segment Week]

    Within the [Segment Week] level, I have the following members:

      [Week- 1]
      [Week- 2]
      [Week- 3]
      ...
      [Week- 1]
      [Week- 2]
      [Week- 3]

    The members have the same names but are aligned with different [Flow] members in the parent level: the first occurrence of [Week- 1] falls under [Flow].[A], while the second occurrence of [Week- 1] falls under [Flow].[B]. What I am trying to do is aggregate all the members within the [Segment Week] level that have the same name. In SQL terms, I want to GROUP BY the member names within the [Segment Week] level. I am unsure how to do this. Thank you. Dave
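
    A hedged sketch (AS2000-era MDX; the cube name [MyCube] and measure [Measures].[Value] are invented): one way to sum same-named weeks across flows is to filter the level's members by name inside a calculated member:

      WITH MEMBER [Measures].[Week 1 All Flows] AS
        'Sum(Filter([Segment].[Segment Week].Members,
                    [Segment].[Segment Week].CurrentMember.Name = "Week- 1"),
             [Measures].[Value])'
      SELECT { [Measures].[Week 1 All Flows] } ON COLUMNS
      FROM [MyCube]

    Repeating this per distinct week name (or generating the WITH clause client-side) approximates a GROUP BY on member names.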

    Read the article

  • What next-generation low-level language is the best bet for migrating the code base?

    - by e-satis
    Let's say you have a company running a lot of C/C++, and you want to start planning a migration to new technologies so you don't end up like the COBOL companies of 15 years ago. For now, C/C++ runs more than fine and there are plenty of devs on the market for it. But you want to start thinking about it now, because given the huge running code base and the sensitivity of the data, you feel it could take 5-10 years to move to the next step without overloading the budget and the dev teams. You have heard about D, which is starting to be quite mature, and Go, which promises to be quite popular. What would be your choice, and why?

    Read the article

  • Static "LoD" hack opinions

    - by David Lively
    I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point is to generate small triangles close to the camera and larger ones far away. Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster) to render a single "fan" or flat wedge/frustum-shaped planar mesh that is tessellated into small triangles at the near (small) end of the frustum and larger ones at the far end (overhead-view diagram omitted; pardon the gap in the middle, I drew one side and mirrored it). The triangle sizes are chosen so that all are approximately the same size when projected. That mesh would then be transformed to track the camera, so that the Z axis (the center vertical in the diagram) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of each mesh vertex to match the height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain. Obviously this doesn't address terrain with overhangs, etc., but that could be handled to a certain extent by including a second mesh that defines a sort of "ceiling" via a different texture. The other LoD schemes I've seen aren't particularly difficult to implement and are, in some cases, a lot more flexible, but this seemed like a decent quick-and-dirty way to handle height-map-based terrain without getting into geometry manipulation. Has anyone tried this? Opinions?
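
    A hedged sketch of the displacement step (XNA-era HLSL; every name here is assumed, not from the post): the wedge mesh tracks the camera in XZ, and the vertex shader samples the height texture to set each vertex's Y, which needs vertex texture fetch (vs_3_0):

      float4x4 WorldViewProjection;  // wedge already rotated to the view direction in XZ
      float2   TerrainOrigin;        // world-space corner of the height field
      float    TexelsPerUnit;        // height-map texels per world unit
      float    HeightScale;

      texture HeightMap;
      sampler HeightSampler = sampler_state { Texture = <HeightMap>; };

      float4 DisplaceVS(float3 pos : POSITION0) : POSITION0
      {
          // The vertex's world XZ picks the height-map sample.
          float2 uv = (pos.xz - TerrainOrigin) * TexelsPerUnit;
          float  h  = tex2Dlod(HeightSampler, float4(uv, 0, 0)).r;  // vertex texture fetch
          pos.y = h * HeightScale;
          return mul(float4(pos, 1), WorldViewProjection);
      }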

    Read the article

  • Representing heightmaps, on disk and when drawing

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure about some of the implementation. I have done file-IO maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think height maps might be a good approach, but I don't know how to represent them effectively.

    1. For a height map, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item1 is the key reference, item2 is the elevation), or another approach that I have not thought of yet? (See the loader sketch after this question.)

    2. When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make slopes noticeable at about a 60-degree viewing angle while not really affecting gravity-driven physics (assuming some effect while moving up/downhill)?

    3. I am thinking of maybe going to procedural generation at some point, but am wondering whether it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or whether designing with thin walls between open spaces is better. This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls.

    EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation the player has to negotiate. Elevation will have effects on "combat", vision, and movement.
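
    A hedged loader sketch for part 1 (Python purely for brevity; the "key:elevation" cell layout is an invented convention, not from the post):

      # Parse a CSV height map where each cell is "key:elevation",
      # e.g. a row like "1:0.0,1:2.5,3:5.0".
      def load_height_map(path):
          grid = []
          with open(path) as f:
              for line in f:
                  row = []
                  for cell in line.strip().split(","):
                      key, elevation = cell.split(":")
                      row.append((int(key), float(elevation)))
                  grid.append(row)
          return grid  # grid[v][u] -> (tile key, elevation h[u,v])

    The same data in XML buys schema validation at the cost of file size; CSV keeps hand-editing and diffing trivial.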

    Read the article

  • Java Applet Tower Defence Game needs tweaking

    - by Ephiras
    Hello :) I have made a tower defence game for my computer science class as one of my major projects, but have encountered some rather fatal roadblocks:

    1. Creating a menu screen (class Menu) that can set the total number of enemies, the max number of towers, the starting money, and the map. I tried creating a constructor in my Main class that sets all the values to whatever the Menu class passes in. I want the menu screen to close after a difficulty has been selected and the main class to begin.

    2. Instead of having to write entire arrays by hand, I would like a small segment of code that runs through a picture and sets up an array based on each pixel's color: 1 if it's yellow, 2 if blue, 3 if purple, and everything else 0. This way I can have multiple levels just dragged into a level folder and have the program read through them; users could even create their own. (A sketch follows below.)

    You can download all the classes and code if you'd like here. Sorry about having to redirect you, but I wasn't sure how to efficiently add a code spoiler. Help is greatly appreciated.
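
    A hedged sketch for the second roadblock (the exact color constants are assumptions; match them to whatever the map images actually use):

      import java.awt.image.BufferedImage;
      import java.io.File;
      import java.io.IOException;
      import javax.imageio.ImageIO;

      public class LevelLoader {
          // Convert a map image into tile codes:
          // 1 = yellow, 2 = blue, 3 = purple, 0 = everything else.
          public static int[][] load(File imageFile) throws IOException {
              BufferedImage img = ImageIO.read(imageFile);
              int[][] level = new int[img.getHeight()][img.getWidth()];
              for (int y = 0; y < img.getHeight(); y++) {
                  for (int x = 0; x < img.getWidth(); x++) {
                      int rgb = img.getRGB(x, y) & 0xFFFFFF;  // strip alpha
                      switch (rgb) {
                          case 0xFFFF00: level[y][x] = 1; break;  // yellow
                          case 0x0000FF: level[y][x] = 2; break;  // blue
                          case 0x800080: level[y][x] = 3; break;  // purple
                          default:       level[y][x] = 0;
                      }
                  }
              }
              return level;
          }
      }

    Looping over new File("levels").listFiles() then gives the drag-into-a-folder behavior.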

    Read the article

  • LOD in modern games

    - by Firas Assaad
    I'm currently working on my master's thesis about LOD and mesh simplification, and I've been reading many academic papers and articles about the subject. However, I can't find enough information about how LOD is being used in modern games. I know many games use some sort of dynamic LOD for terrain, but what about elsewhere? Level of Detail for 3D Graphics for example points out that discrete LOD (where artists prepare several models in advance) is widely used because of the performance overhead of continuous LOD. That book was published in 2002 however, and I'm wondering if things are different now. There has been some research in performing dynamic LOD using the geometry shader (this paper for example, with its implementation in ShaderX6), would that be used in a modern game? To summarize, my question is about the state of LOD in modern video games, what algorithms are used and why? In particular, is view dependent continuous simplification used or does the runtime overhead make using discrete models with proper blending and impostors a more attractive solution? If discrete models are used, is an algorithm used (e.g. vertex clustering) to generate them offline, do artists manually create the models, or perhaps a combination of both methods is used?

    Read the article

  • Which version management methodology should be used for dependent system nodes?

    - by actiononmail
    This is my first question, so please say if it is too vague. My question relates to high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, with a Master Node (MN) and subordinate nodes (SN). All nodes are connected via Ethernet and run Linux along with other proprietary applications. I have to build a recovery-framework design so that any software entity, whether Linux itself, a ramdisk, or an application, can be rolled back to a previous good version if something bad happens. I am thinking of maintaining a state version matrix on the MN, where each state (1, 2, ..., n) represents good kernel, ramdisk, and application versions for each SN. One SN's version can depend on another SN's version (diagram omitted here).

    So I am in a dilemma whether to use the package-management methodology of Debian-based distributions (like Ubuntu) or a Git-repository methodology in order to roll back either one SN or all dependent SNs to previous good versions. The method should also make upgrading SNs along with the MN easy. Some of the features I am trying to achieve:

    1. Upgrading even a single software entity is achievable without hindering the others.
    2. Dependency checks are done before applying a rollback or upgrade on each SN.
    3. A user prompt is given if a dependency check fails; if the user still goes ahead with the rollback, all dependent SNs are notified to roll back their own releases (if required).
    4. Binaries are distributed to the SNs in advance so that recovery is faster, rather than fetching from the MN every time.
    5. Release patches from developers, for bug fixes or feature enhancements, can be applied to a running system.
    6. Each version can be easily tracked and distinguished.

    Thanks
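
    A hedged sketch of the state-version-matrix idea (data layout invented for illustration): each recorded state maps every SN to its good versions plus cross-node requirements, so a rollback can be vetoed or cascaded before anything is touched:

      # Feature 2: check dependencies before rolling back to a target state.
      STATES = {
          1: {"SN1": {"kernel": "2.6.32", "ramdisk": "r1", "app": "1.0", "needs": {}},
              "SN2": {"kernel": "2.6.32", "ramdisk": "r1", "app": "1.0", "needs": {"SN1": "1.0"}}},
          2: {"SN1": {"kernel": "2.6.35", "ramdisk": "r2", "app": "1.1", "needs": {}},
              "SN2": {"kernel": "2.6.35", "ramdisk": "r2", "app": "1.2", "needs": {"SN1": "1.1"}}},
      }

      def broken_dependencies(target):
          """List (node, dep_node, wanted_version) tuples the target state violates."""
          broken = []
          for node, cfg in STATES[target].items():
              for dep_node, wanted in cfg["needs"].items():
                  if STATES[target][dep_node]["app"] != wanted:
                      broken.append((node, dep_node, wanted))
          return broken  # non-empty -> prompt the user (feature 3)

    Either packaging approach (dpkg/apt or Git) can sit underneath this; the matrix only decides what to roll back, not how.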

    Read the article

  • A TDD Journey: 4-Tests as Documentation; False Positive Results; Component Isolation

    In Test-Driven Development (TDD), the writing of a unit test is done more to design and to document than to verify. By writing a unit test you close a number of feedback loops, and verifying the functionality of the code is just a minor one. Everything you need to know about your class under test is embodied in a simple list of the names of the tests. Michael Sorens continues his introduction to TDD, which is more of a journey in six parts, by discussing tests as documentation, false positive results, and component isolation.

    Read the article

  • Apache2 URL Rewrite - Second-Level-Domain to the end of URL

    - by Acryl
    I have a site "example.com" and also many other domains like "example1.com", "example2.de", etc. I want every one of those second-level domains rewritten in the following way: opening example1.com takes you to example.com/domainredirect=example1.com, and opening example2.de takes you to example.com/domainredirect=example2.de. So the original second-level domain should be appended after "example.com/domainredirect=". Thanks in advance
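
    A hedged sketch (untested; assumes mod_rewrite is enabled and all the extra domains point at the same Apache instance): redirect any Host header that is not example.com itself:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$ [NC]
      RewriteRule ^ http://example.com/domainredirect=%{HTTP_HOST} [R=301,L]

    Swap [R=301] for [P] (with mod_proxy) if the rewrite should stay invisible to the visitor.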

    Read the article

  • can I put files in hidden volume /home at the root level of macintosh HD

    - by mjr
    I am trying to reproduce the file structure of my VPS locally on my Mac, so that it's easier to test websites in a local development environment. To do this I would need a /home folder at the root level of the hard drive. Using Panic Transmit I can see that there is already a volume called home at the root level. Can I store other files and folders in there to set up my local web server? Sorry if this is a dumb question, folks.
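
    A hedged answer sketch (macOS-version-dependent; back up /etc/auto_master first): /home is reserved by the autofs automounter, which is why it looks empty and refuses writes. The commonly cited workaround is to disable that map and remount:

      # /etc/auto_master maps /home to auto_home; comment that line out
      sudo sed -i.bak 's|^/home|#/home|' /etc/auto_master
      sudo automount -vc          # reload the automounter maps
      sudo mkdir -p /home/mysite  # /home is now writable

    A less invasive alternative is to leave /home alone and point the local vhosts at ~/Sites instead.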

    Read the article

  • Remote Desktop from Linux to Computer that Requires Network Level Authentication

    - by Kyle Brandt
    Is there a way to use rdesktop or another Linux client to connect to a server that requires Network Level Authentication? Under Windows Server 2008 R2 -- Control Panel -- System and Security -- System -- Allow Remote Access, there is an option that says "Allow connections only from computers running Remote Desktop with Network Level Authentication". With this enabled I cannot connect from Linux. I can connect from XP, but you need SP3, and I had to edit a couple of things in the registry for it to work.
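
    A hedged pointer (an editor's note that postdates the question): rdesktop never gained CredSSP support, but FreeRDP implements it, so something along these lines should satisfy an NLA-only server (modern xfreerdp option syntax; host and user are placeholders):

      xfreerdp /v:server.example.com /u:DOMAIN\\user /sec:nla

    If it still fails, temporarily selecting the server's "allow connections from computers running any version of Remote Desktop" option confirms whether NLA is the blocker.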

    Read the article

  • What are your best senior level Linux interview questions

    - by Mike
    Every now and then people on this site ask for sysadmin interview questions, and mostly the answers are junior- to mid-level questions. I'm wondering what your best senior-level Linux admin interview questions are. Two of mine are: 1) How do you stop a fork bomb if you are already logged into the system? 2) You delete a log file that Apache is using and have not restarted Apache yet; how can you recover that log file?
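
    A hedged sketch of one classic answer to question 2 (PID and fd number are placeholders): while Apache keeps the deleted file open, its contents remain reachable through /proc:

      # Find deleted-but-still-open files (link count 0) held by Apache
      lsof +L1 | grep httpd
      # Copy the still-open data back out via the process's fd table
      cp /proc/<apache_pid>/fd/<fd_number> /var/log/httpd/access_log.recovered

    For question 1, one commonly cited approach is to freeze the bomb first (e.g. killall -STOP -u baduser) and only then kill the stopped processes, so they cannot respawn faster than they die.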

    Read the article

  • Unable to start my Linux (CentOS) machine in run level 5

    - by k38
    Suddenly my machine stopped working under run level 5. It seems to be a problem with the X server: it says "in the last 90 seconds the X server has restarted 6 times and is unable to start" and then just gives a blank screen. So I changed the run level to 3 and, using the startx command, I am managing to work for now. Can anyone help me with this?
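
    A hedged first step (standard Xorg tooling; paths may differ by CentOS release): the X server's own log usually names the failing driver or config line, and a freshly generated config isolates whether /etc/X11/xorg.conf is the culprit:

      # Look for errors (EE) and warnings (WW) from the failed starts
      grep -E '\((EE|WW)\)' /var/log/Xorg.0.log
      # Generate a skeleton config and test X against it (as root, X not running)
      Xorg -configure
      X -config /root/xorg.conf.new

    If the test server comes up, replacing /etc/X11/xorg.conf with the generated file (after backing the old one up) often restores run level 5.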

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.

    When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example, the sort operator that executes in parallel divides the memory across all tasks, assuming an even distribution of rows. Common memory-allocating queries are those that perform Sorts and Hash Match operations like Hash Join, Hash Aggregation, or Hash Union.

    In reality, how often are column values evenly distributed? Think of an example: are the employees working for your company distributed evenly across all Zip codes, or mainly concentrated around headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products the hot-selling items?

    One of my customers tested the below example on a 24-core server with various MAXDOP settings, and here are the results:

      MAXDOP 1:  CPU time = 1185 ms, elapsed time = 1188 ms
      MAXDOP 4:  CPU time = 1981 ms, elapsed time = 1568 ms
      MAXDOP 8:  CPU time = 1918 ms, elapsed time = 1619 ms
      MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
      MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
      MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
      MAXDOP 0:  CPU time = 2809 ms, elapsed time = 2721 ms (all 24 cores)

    (In the same test with evenly distributed data, the elapsed time of the parallel query was always lower than the serial query's.)

    Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001:

      update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
      update Employees set Zip = 2001 where EmployeeID % 50 != 1
      go
      update statistics Employees with fullscan
      go

    Let's create the temporary table #FireDrill with all possible Zip codes:

      drop table #FireDrill
      go
      create table #FireDrill (Zip int primary key)
      insert into #FireDrill select distinct Zip from Employees
      update statistics #FireDrill with fullscan
      go

    Let's execute the query serially with MAXDOP 1:

      --Example provided by www.sqlworkshops.com
      --Query with uneven Zip code distribution, serially with MAXDOP 1
      set statistics time on
      go
      declare @EmployeeID int, @EmployeeName varchar(48), @zip int
      select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
            order by e.Zip option (maxdop 1)
      go

    The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

      --Example provided by www.sqlworkshops.com
      --Query with uneven Zip code distribution, in parallel with MAXDOP 0
      set statistics time on
      go
      declare @EmployeeID int, @EmployeeName varchar(48), @zip int
      select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
            order by e.Zip option (maxdop 0)
      go

    The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows is the same in the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead. The Sort properties show the rows are unevenly distributed over the 4 threads, and Sort Warnings appear in SQL Server Profiler.

    Intermediate summary: the reason for the higher duration with the parallel plan was a sort spill, caused by the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes:

      update Employees set Zip = EmployeeID / 400 + 1
      go
      update statistics Employees with fullscan
      go

    Let's execute the query serially with MAXDOP 1:

      --Example provided by www.sqlworkshops.com
      --Query with even Zip code distribution, serially with MAXDOP 1
      set statistics time on
      go
      declare @EmployeeID int, @EmployeeName varchar(48), @zip int
      select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
            order by e.Zip option (maxdop 1)
      go

    The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

      --Example provided by www.sqlworkshops.com
      --Query with even Zip code distribution, in parallel with MAXDOP 0
      set statistics time on
      go
      declare @EmployeeID int, @EmployeeName varchar(48), @zip int
      select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
            order by e.Zip option (maxdop 0)
      go

    The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The Sort properties show the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.

    Intermediate summary: when employees were distributed unevenly, concentrated in one Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel nor the serial sort spilled to tempdb. This shows that uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.

    Some of you might conclude from the above execution times that a parallel query is not faster even when there is no spill. Below you can see that when we join a limited set of Zip codes, the parallel query is fastest, since it can use Bitmap Filtering.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001:

      update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
      update Employees set Zip = 2001 where EmployeeID % 50 != 1
      go
      update statistics Employees with fullscan
      go

    Let's create the temporary table #FireDrill with a limited set of Zip codes:

      drop table #FireDrill
      go
      create table #FireDrill (Zip int primary key)
      insert into #FireDrill select distinct Zip
            from Employees where Zip between 1800 and 2001
      update statistics #FireDrill with fullscan
      go

    Let's execute the query serially with MAXDOP 1:

      --Example provided by www.sqlworkshops.com
      --Query with uneven Zip code distribution, limited Zip codes, serially with MAXDOP 1
      set statistics time on
      go
      declare @EmployeeID int, @EmployeeName varchar(48), @zip int
      select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
            order by e.Zip option (maxdop 1)
      go

    The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

      --Example provided by www.sqlworkshops.com
      --Query with uneven Zip code distribution, limited Zip codes, in parallel with MAXDOP 0
      set statistics time on
      go
      declare @EmployeeID int, @EmployeeName varchar(48), @zip int
      select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
            order by e.Zip option (maxdop 0)
      go

    The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. Sort Warnings appear in SQL Server Profiler. The estimated number of rows is again the same in the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead.

    Intermediate summary: the reason for the higher duration with the parallel plan, even with a limited set of Zip codes, was a sort spill, again due to the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes:

      update Employees set Zip = EmployeeID / 400 + 1
      go
      update statistics Employees with fullscan
      go

    Let's execute the query serially with MAXDOP 1:

      --Example provided by www.sqlworkshops.com
      --Query with even Zip code distribution, limited Zip codes, serially with MAXDOP 1
      set statistics time on
      go
      declare @EmployeeID int, @EmployeeName varchar(48), @zip int
      select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
            order by e.Zip option (maxdop 1)
      go

    The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

      --Example provided by www.sqlworkshops.com
      --Query with even Zip code distribution, limited Zip codes, in parallel with MAXDOP 0
      set statistics time on
      go
      declare @EmployeeID int, @EmployeeName varchar(48), @zip int
      select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
            order by e.Zip option (maxdop 0)
      go

    The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Here you see the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join.

    Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

    Summary: one has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time, and reduced work thanks to Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom, during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan
    [email protected]
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article
