Search Results

Search found 3596 results on 144 pages for 'division by zero'.

Page 26/144

  • DeviceIoControl returning false

    - by Anand
    In my C# code, DeviceIoControl is returning false even though the handle is correct: DeviceIoControl(deviceHandle, IOCTL_STORAGE_GET_DEVICE_NUMBER, IntPtr.Zero, 0, OutBuffPtr,//&psdn, OutBuffSize, ref dwBytesReturned, IntPtr.Zero);
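
    A likely first diagnostic step (assuming a standard P/Invoke declaration): mark the import with SetLastError = true and call Marshal.GetLastWin32Error() immediately after the failing call; the resulting Win32 error code usually narrows down whether the handle, the buffer size, or the control code is at fault.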

    Read the article

  • Why is the Objective-C Boolean data type defined as a signed char?

    - by EddieCatflap
    Something that has piqued my interest is Objective-C's BOOL type definition. Why is it defined as a signed char (which could cause unexpected behaviour if a value wider than 1 byte is assigned to it) rather than as an int, as C does (much less margin for error: a zero value is false, a non-zero value is true)? The only reason I can think of is the Objective-C designers micro-optimising storage, because a char uses less memory than an int. Can someone please enlighten me?
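
    To make the pitfall concrete, here is a minimal C++ sketch mirroring the historical typedef signed char BOOL. Converting an out-of-range int to signed char is implementation-defined before C++20, but on common ABIs it simply keeps the low byte:

        #include <cstdio>

        typedef signed char BOOL;   // mirrors Objective-C's historical definition
        #define YES ((BOOL)1)
        #define NO  ((BOOL)0)

        int main() {
            // Any value whose low byte is zero truncates to NO on typical ABIs.
            BOOL flag = (BOOL)256;          // 256 = 0x100 -> low byte is 0x00
            printf("flag = %d\n", flag);    // prints 0
            if (flag)
                printf("taken\n");          // never reached
            else
                printf("not taken\n");      // a "true" value behaved as false

            // With int (the plain C convention) the same value stays truthy.
            int cflag = 256;
            printf("cflag is %s\n", cflag ? "true" : "false");  // true
            return 0;
        }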

    Read the article

  • Collision Detection problem (intersection with plane)

    - by Demi
    I'm doing a scene using OpenGL (a house). I want to do some collision detection, mainly with the walls in the house. I have tried the following code: // a plane is represented with a normal and a position in space Vector planeNor(0,0,1); Vector position(0,0,-10); Plane p(planeNor,position); Vector vel(0,0,-1); double lamda; // this is the intersection point Vector pNormal; // the normal of the intersection // this method is from NeHe's Lesson 30 coll= p.TestIntersionPlane(vel,Z,lamda,pNormal); glPushMatrix(); glBegin(GL_QUADS); if(coll) glColor3f(1,0,0); else glColor3f(1,1,1); glVertex3d(0,0,-10); glVertex3d(3,0,-10); glVertex3d(3,3,-10); glVertex3d(0,3,-10); glEnd(); glPopMatrix(); NeHe's method: #define EPSILON 1.0e-8 #define ZERO EPSILON bool Plane::TestIntersionPlane(const Vector3 & position,const Vector3 & direction, double& lamda, Vector3 & pNormal) { double DotProduct=direction.scalarProduct(normal); // Dot Product Between Plane Normal And Ray Direction double l2; // Determine If Ray Parallel To Plane if ((DotProduct<ZERO)&&(DotProduct>-ZERO)) return false; l2=(normal.scalarProduct(position))/DotProduct; // Find Distance To Collision Point if (l2<-ZERO) // Test If Collision Behind Start return false; pNormal= normal; lamda=l2; return true; } Z is initially (0,0,0) and every time I move the camera towards the plane, I reduce its z component by 0.1 (i.e. Z.z-=0.1 ). I know that the problem is with the vel vector, but I can't figure out what the right value should be. Can anyone please help me?
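
    For reference, a self-contained sketch of the same ray-plane test with the argument order the NeHe-style method expects (position first, then direction); one plausible reading of the snippet above is that vel and Z are being passed in swapped positions. Vec3 and intersectPlane are illustrative names, not the poster's classes:

        #include <cstdio>
        #include <cmath>

        struct Vec3 { double x, y, z; };

        double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

        // Plane given by normal n and a point p0 on it; ray given by origin o
        // and direction d. Returns true and the distance t along d on a hit.
        bool intersectPlane(const Vec3& n, const Vec3& p0,
                            const Vec3& o, const Vec3& d, double& t) {
            const double EPS = 1.0e-8;
            double denom = dot(n, d);
            if (std::fabs(denom) < EPS) return false;   // ray parallel to plane
            t = dot(n, sub(p0, o)) / denom;             // signed distance along d
            return t >= -EPS;                           // reject hits behind origin
        }

        int main() {
            Vec3 n{0,0,1}, p0{0,0,-10};       // the wall at z = -10
            Vec3 camera{0,0,0}, vel{0,0,-1};  // moving toward the wall
            double t;
            if (intersectPlane(n, p0, camera, vel, t))
                printf("hit in %.2f units\n", t);       // prints 10.00
            return 0;
        }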

    Read the article

  • How to set a DataGrid's Visibility to Collapsed in code-behind

    - by prince23
    Hi, I have a DataGrid. I am checking whether Companyrows.Count is zero; if the count is zero, I want to make the data grid invisible. List<Employee> Companyrows = new List<Employee>(); if (Companyrows.Count == 0) { dgrdRowDetail.Visibility = "Collapsed"; // getting error: cannot convert type 'string' to 'System.Windows.Visibility' } else { dgrdRowDetail.ItemsSource = Companyrows; } Any help on how to solve this issue would be great. Thank you.
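
    Note that Visibility is an enum in WPF/Silverlight rather than a string, so the assignment would be dgrdRowDetail.Visibility = Visibility.Collapsed; (with System.Windows in scope), which is exactly what the quoted compiler error is pointing at.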

    Read the article

  • empty base class optimization

    - by FredOverflow
    Two quotes from the C++ standard, §1.8: "An object is a region of storage." and "Base class subobjects may have zero size." I don't think a region of storage can be of size zero. That would mean that some base class subobjects aren't actually objects. Opinions?
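
    A short C++ sketch makes the tension concrete: a complete object of an empty class still has nonzero size (so distinct objects get distinct addresses), but as a base class subobject it may occupy zero bytes, which is exactly the allowance the second quote carves out. Sizes shown are what common compilers produce:

        #include <cstdio>

        struct Empty {};                 // no non-static data members
        struct Derived : Empty {         // Empty becomes a base class subobject
            int payload;
        };

        int main() {
            printf("sizeof(Empty)   = %zu\n", sizeof(Empty));    // typically 1
            // With the empty base optimization the base contributes no storage:
            printf("sizeof(Derived) = %zu\n", sizeof(Derived));  // typically sizeof(int)
            return 0;
        }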

    Read the article

  • ffmpeg: create a video from images

    - by vailen
    Is it possible to use ffmpeg to create a video from a sequence of images whose numbering does not start from zero? For example, I have some images [test_100.jpg, test_101.jpg, test_102.jpg, ..., test_200.jpg], and I want to convert them to a video. I tried the following command, but it didn't work (it seems the number should start from zero): ffmpeg -i test_%d.jpg -vcodec mpeg4 test.avi Any advice?
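
    For what it's worth, newer ffmpeg builds accept a start offset for image sequences via the image2 demuxer's -start_number option (e.g. ffmpeg -start_number 100 -i test_%d.jpg -vcodec mpeg4 test.avi); on builds without it, renaming or symlinking the files to a zero-based sequence is the usual workaround.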

    Read the article

  • Why are closed contours guaranteed here?

    - by user198729
    Quoted from here: BW = edge(I,'zerocross',thresh,h) specifies the zero-cross method, using the filter h. thresh is the sensitivity threshold; if the argument is empty ([]), edge chooses the sensitivity threshold automatically. If you specify a threshold of 0, the output image has closed contours, because it includes all the zero crossings in the input image. I don't understand it; can someone elaborate?
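
    The intuition: the filtered image is treated as a (roughly) continuous function, and its zero crossings are the boundaries between its positive and negative regions. The boundary of a region is always a closed curve (or one closed off by the image border), so keeping every zero crossing, as a threshold of 0 does, necessarily yields closed contours; a positive threshold discards weak crossings and can therefore break those curves.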

    Read the article

  • How can I create macro definitions for the commented lines in the code?

    - by yaprak
    #include <stdio.h> //Here use a macro definition that assigns a value to SIZE (for example 5) int main() { int i; int array[SIZE]; int sum=0; for(i=0; i<SIZE; i++) { //Here use a macro definition named as CALCSUM to make the //following addition operation for the array printf("Enter a[%d] = ",i); scanf("%d", &array[i]); sum+=array[i]; //Here use a macro definition named as VERBOSE to print //what program does to the screen printf("The user entered %d\n", array[i]); // // //If the macro definition CALCSUM is not used, the program //should assign 0 to the i-th element of the array array[i]=0; //Here, again use VERBOSE to print what program does to the screen printf("a[%d] is assigned to zero\n", i); // // } //If CALCSUM is defined, print the summation of the array elements to the screen printf("Summation of the array is %d\n",sum); // //If CALCSUM is not defined, but VERBOSE mode is used, print the following printf("All the elements in the array are assigned to zero\n"); // printf("Program terminated\n"); return 0; } When CALCSUM is defined, the program will sum up the values of each element in the given array. If CALCSUM is not defined, each array element will be assigned to zero. Also, when VERBOSE mode is defined, the program will activate the print statements indicated. [root@linux55]# gcc code.c [root@linux55]# ./a.out Program terminated [root@linux55]# gcc code.c -D CALCSUM [root@linux55]# ./a.out Enter a[0] = 3 Enter a[1] = 0 Enter a[2] = 2 Enter a[3] = 5 Enter a[4] = 9 Summation of the array is 19 Program terminated [root@linux55]# gcc code.c -D CALCSUM -D VERBOSE [root@linux55]# ./a.out Enter a[0] = 2 The user entered 2 Enter a[1] = 10 The user entered 10 Enter a[2] = 3 The user entered 3 Enter a[3] = 8 The user entered 8 Enter a[4] = 1 The user entered 1 Summation of the array is 24 Program terminated [root@linux55]# gcc code.c -D VERBOSE [root@linux55]# ./a.out a[0] is assigned to 0 a[1] is assigned to 0 a[2] is assigned to 0 a[3] is assigned to 0 a[4] is assigned to 0 All the elements in the array is assigned to zero Program terminated
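
    One plausible way to fill in the commented placeholders, as a sketch matching the -D driven sample runs above:

        #include <stdio.h>

        #define SIZE 5    /* assigns a value to SIZE */

        int main() {
            int i;
            int array[SIZE];
            int sum = 0;

            for (i = 0; i < SIZE; i++) {
        #ifdef CALCSUM
                printf("Enter a[%d] = ", i);
                scanf("%d", &array[i]);
                sum += array[i];
        #ifdef VERBOSE
                printf("The user entered %d\n", array[i]);
        #endif
        #else
                array[i] = 0;   /* CALCSUM not defined: zero the i-th element */
        #ifdef VERBOSE
                printf("a[%d] is assigned to zero\n", i);
        #endif
        #endif
            }

        #ifdef CALCSUM
            printf("Summation of the array is %d\n", sum);
        #elif defined(VERBOSE)
            printf("All the elements in the array are assigned to zero\n");
        #endif
            printf("Program terminated\n");
            return 0;
        }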

    Read the article

  • CONVERT(int, (datepart(month, @search)), (datepart(day, @search)), DateAdd(year, Years.Year - (datepart(year, @search)))

    - by MyHeadHurts
    In the query, the top part gets all the years that the stored procedure will run for. That works fine, but at first I just wanted to run the queries for yesterday's date across all the years; now I realize I want the user to select a date, passed in a parameter @search. Booked <= CONVERT(int,DateAdd(year, Years.Year - Year(getdate()), DateAdd(day, DateDiff(day, 2, getdate()), 1))) This should be easy because normally it would just be Booked <= CONVERT(int,@search), but the problem is I want to do something like Booked <= CONVERT(int, (datepart(month, @search)), (datepart(day, @search)), DateAdd(year, Years.Year - (datepart(year, @search))) Would something like that work? I don't need to worry about subtracting days, but I still need to worry about the years. WITH Years AS ( SELECT DATEPART(year, GETDATE()) [Year] UNION ALL SELECT [Year]-1 FROM Years WHERE [Year]>@YearToGet ), q_00 as ( select DIVISION , DYYYY , sum(PARTY) as asofPAX , sum(APRICE) as asofSales from dbo.B101BookingsDetails INNER JOIN Years ON B101BookingsDetails.DYYYY = Years.Year where Booked <= CONVERT(int,DateAdd(year, Years.Year - Year(getdate()), DateAdd(day, DateDiff(day, 2, getdate()), 1))) and DYYYY = Years.Year group by DIVISION, DYYYY, years.year having DYYYY = years.year ),

    Read the article

  • Quick, beginner MASM register question - DX:AX

    - by Francisco P.
    Hello, I am currently studying for an exam I'll have on x86 assembly. I didn't have much luck googling for ":", too common of a punctuation mark :/ IDIV - Signed Integer Division Usage: IDIV src Modifies flags: (AF,CF,OF,PF,SF,ZF undefined) Signed binary division of accumulator by source. If source is a byte value, AX is divided by "src" and the quotient is stored in AL and the remainder in AH. If source is a word value, DX:AX is divided by "src", and the quotient is stored in AX and the remainder in DX. Taken from "Intel Opcodes and Mnemonics" What does DX:AX mean? Thanks a lot for your time :)
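
    DX:AX denotes a single 32-bit value split across two 16-bit registers: DX holds the high word and AX the low word, a convention dating from before 32-bit registers existed. A tiny sketch of the composition:

        #include <cstdint>
        #include <cstdio>

        int main() {
            // DX:AX as a 32-bit dividend: DX = high 16 bits, AX = low 16 bits.
            uint16_t dx = 0x0001, ax = 0x0002;
            uint32_t dividend = ((uint32_t)dx << 16) | ax;   // 0x00010002 = 65538
            uint16_t src = 10;
            printf("DX:AX = %u, quotient = %u (-> AX), remainder = %u (-> DX)\n",
                   dividend, dividend / src, dividend % src);
            return 0;
        }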

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • Recognizing when to use the mod operator

    - by Will
    I have a quick question about the mod operator. I know what it does; it calculates the remainder of a division. My question is, how can I identify a situation where I would need to use the mod operator? I know I can use the mod operator to see whether a number is even or odd and prime or composite, but that's about it. I don't often think in terms of remainders. I'm sure the mod operator is useful and I would like to learn to take advantage of it. I just have problems identifying where the mod operator is applicable. In various programming situations, it is difficult for me to see a problem and realize "hey! the remainder of division would work here!" Any tips or strategies? Thanks
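
    A few recurring situations where mod is the natural tool, collected in a quick C++ sketch: anything cyclic (wrap-around indexing, clock arithmetic), digit extraction, and "every k-th iteration" logic:

        #include <cstdio>

        int main() {
            // Wrap-around indexing: cycle through a fixed-size buffer.
            const int N = 4;
            for (int i = 0; i < 10; i++)
                printf("%d ", i % N);               // 0 1 2 3 0 1 2 3 0 1
            printf("\n");

            // Digit extraction / base conversion.
            int n = 1234;
            printf("last digit: %d\n", n % 10);     // 4

            // Clock arithmetic: 23:00 plus 5 hours.
            printf("hour: %d\n", (23 + 5) % 24);    // 4

            // Do something every k-th iteration.
            for (int i = 1; i <= 10; i++)
                if (i % 3 == 0) printf("%d is a multiple of 3\n", i);
            return 0;
        }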

    Read the article

  • Changing Displayed Image on a Webpage Using JavaScript

    - by Gavin
    Hi all, I'm having some trouble changing an image being displayed on a page by way of a dropdown menu selection. This works fine: getting the dropdown menu to give me an alert when I make a selection. Even though the menu is embedded within a division, paragraph, and label tag, it still works. "availableImages" is the name of my menu. function imageSelect() { var index = document.getElementById("myForm").availableImages.selectedIndex; var value = document.getElementById("myForm").availableImages.options[index].value; alert("test " + value); // alert box pops up upon list item selection } However, within the same division, I am having an issue with changing the source of my image tag... Any ideas? I will provide more code if it will be helpful. Thanks! Gavin

    Read the article

  • My vertex shader doesn't affect texture coords or diffuse info but works for position

    - by tina nyaa
    I am new to 3D and DirectX - in the past I have only used abstractions for 2D drawing. Over the past month I've been studying really hard and I'm trying to modify and adapt some of the shaders as part of my personal 'study project'. Below I have a shader, modified from one of the Microsoft samples. I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? // // Skinned Mesh Effect file // Copyright (c) 2000-2002 Microsoft Corporation. All rights reserved. // float4 lhtDir = {0.0f, 0.0f, -1.0f, 1.0f}; //light Direction float4 lightDiffuse = {0.6f, 0.6f, 0.6f, 1.0f}; // Light Diffuse float4 MaterialAmbient : MATERIALAMBIENT = {0.1f, 0.1f, 0.1f, 1.0f}; float4 MaterialDiffuse : MATERIALDIFFUSE = {0.8f, 0.8f, 0.8f, 1.0f}; // Matrix Pallette static const int MAX_MATRICES = 100; float4x3 mWorldMatrixArray[MAX_MATRICES] : WORLDMATRIXARRAY; float4x4 mViewProj : VIEWPROJECTION; /////////////////////////////////////////////////////// struct VS_INPUT { float4 Pos : POSITION; float4 BlendWeights : BLENDWEIGHT; float4 BlendIndices : BLENDINDICES; float3 Normal : NORMAL; float3 Tex0 : TEXCOORD0; }; struct VS_OUTPUT { float4 Pos : POSITION; float4 Diffuse : COLOR; float2 Tex0 : TEXCOORD0; }; float3 Diffuse(float3 Normal) { float CosTheta; // N.L Clamped CosTheta = max(0.0f, dot(Normal, lhtDir.xyz)); // propogate scalar result to vector return (CosTheta); } VS_OUTPUT VShade(VS_INPUT i, uniform int NumBones) { VS_OUTPUT o; float3 Pos = 0.0f; float3 Normal = 0.0f; float LastWeight = 0.0f; // Compensate for lack of UBYTE4 on Geforce3 int4 IndexVector = D3DCOLORtoUBYTE4(i.BlendIndices); // cast the vectors to arrays for use in the for loop below float BlendWeightsArray[4] = (float[4])i.BlendWeights; int IndexArray[4] = (int[4])IndexVector; // calculate the pos/normal using the "normal" weights // and accumulate the weights to calculate the last weight for (int iBone = 0; iBone < NumBones-1; iBone++) { LastWeight = LastWeight + BlendWeightsArray[iBone]; Pos += mul(i.Pos, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; Normal += mul(i.Normal, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; } LastWeight = 1.0f - LastWeight; // Now that we have the calculated weight, add in the final influence Pos += (mul(i.Pos, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); Normal += (mul(i.Normal, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); // transform position from world space into view and then projection space //o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Diffuse.x = 0.0f; o.Diffuse.y = 0.0f; o.Diffuse.z = 0.0f; o.Diffuse.w = 0.0f; o.Tex0 = float2(0,0); return o; } technique t0 { pass p0 { VertexShader = compile vs_3_0 VShade(4); } } I am currently using the SlimDX .NET wrapper around DirectX, but the API is extremely similar: public void Draw() { var device = vertexBuffer.Device; device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0); device.SetRenderState(RenderState.Lighting, true); device.SetRenderState(RenderState.DitherEnable, true); device.SetRenderState(RenderState.ZEnable, true); device.SetRenderState(RenderState.CullMode, Cull.Counterclockwise); device.SetRenderState(RenderState.NormalizeNormals, true); device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Anisotropic); 
device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Anisotropic); device.SetTransform(TransformState.World, Matrix.Identity * Matrix.Translation(0, -50, 0)); device.SetTransform(TransformState.View, Matrix.LookAtLH(new Vector3(-200, 0, 0), Vector3.Zero, Vector3.UnitY)); device.SetTransform(TransformState.Projection, Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); var material = new Material(); material.Ambient = material.Diffuse = material.Emissive = material.Specular = new Color4(Color.White); material.Power = 1f; device.SetStreamSource(0, vertexBuffer, 0, vertexSize); device.VertexDeclaration = vertexDeclaration; device.Indices = indexBuffer; device.Material = material; device.SetTexture(0, texture); var param = effect.GetParameter(null, "mWorldMatrixArray"); var boneWorldTransforms = bones.OrderedBones.OrderBy(x => x.Id).Select(x => x.CombinedTransformation).ToArray(); effect.SetValue(param, boneWorldTransforms); effect.SetValue(effect.GetParameter(null, "mViewProj"), Matrix.Identity);// Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); effect.SetValue(effect.GetParameter(null, "MaterialDiffuse"), material.Diffuse); effect.SetValue(effect.GetParameter(null, "MaterialAmbient"), material.Ambient); effect.Technique = effect.GetTechnique(0); var passes = effect.Begin(FX.DoNotSaveState); for (var i = 0; i < passes; i++) { effect.BeginPass(i); device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, skin.Vertices.Length, 0, skin.Indicies.Length / 3); effect.EndPass(); } effect.End(); } Again, I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? Also, whatever I set in the bone transformation matrices doesn't seem to have an effect on my model. If I set every bone transformation to a zero matrix, the model still shows up as if nothing had happened, but changing the Pos field in shader output makes the model disappear. I don't understand why I'm getting this kind of behaviour. Thank you!

    Read the article

  • Hibernate createSQLQuery: how do I cache it?

    - by cometta
    If I use a Criteria statement, I am able to use the cache easily, but when I use a custom createSQLQuery like the one below, how do I cache it? Query query = getHibernateTemplate().getSessionFactory().getCurrentSession().createSQLQuery( " select company_keysupplier.ddiv as ddiv, company_division.name as DIVISION, company_keysupplier.ddep as ddep,company_department.name as DEPARTMENT"+ " from company_keysupplier "+ " left join company_division on "+ " (company_keysupplier.ddiv= company_division.division_code and company_keysupplier.survey_num=company_division.survey_num) "+ " left join company_department on "+ " (company_keysupplier.ddep= company_department.department_code and company_keysupplier.survey_num=company_department.survey_num) "+ " where company_keysupplier.sdiv = :sdiv and company_keysupplier.sdep = :sdep and company_keysupplier.survey_num= :surveyNum " + " order by company_division.name, company_keysupplier.ddep " ) .addScalar("ddiv") .addScalar("ddep") .addScalar("DIVISION") .addScalar("DEPARTMENT") .setResultTransformer(Transformers.aliasToBean(IssKeysupplier.class)); query.setString("sdiv", sdiv); query.setString("sdep", sdepartment); query.setBigInteger("surveyNum", survey_num); //i tried query.setCachable(true) fail...... List result= (List<IssKeysupplier>)query.list(); return result;
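
    Worth checking first: the Hibernate Query API spells the method setCacheable, so query.setCachable(true), as quoted, would not compile. Assuming that is just a typo in the post, caching a native SQL query also requires the query cache to be switched on (hibernate.cache.use_query_cache=true plus a configured second-level cache provider) before setCacheable(true) has any effect.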

    Read the article

  • Help ---- SQL Script

    - by Vinoj Nambiar
    Store No   Store Name   Region   Division   Q10 (response)   Q21 (response)
    2345       ABC          North    Test       1                5
    2345                    North    Test       6                3
    2345       ABC          North    Test       4                6

    1st calculation:
    1) Engaged (%) = responses greater than 4.5:
       3 (total responses greater than 4.5) / 6 (total count) * 100 = 50%
    2) Not engaged (%) = responses less than 2:
       1 (total responses less than 2) / 6 (total count) * 100 = 16.66%

    I should be able to get a table like this:

    Store No   Store Name   Region   Division   Engaged (%)   Disengaged (%)
    2345       ABC          North    Test       50            16.66

    Read the article

  • Strategies for generating Zend Cache Keys

    - by emeraldjava
    At the moment I'm manually generating a cache key based on the method name and parameters, then following the normal cache pattern. This is all done in the controller, and I'm calling a model class that extends Zend_Db_Table_Abstract. public function indexAction() { $cache = Zend_Registry::get('cache'); $individualleaguekey = sprintf("getIndividualLeague_%d_%s",$leagueid,$division->code); if(!$leaguetable = $cache->load($individualleaguekey)) { $table = new Model_DbTable_Raceresult(); $leaguetable = $table->getIndividualLeague($leagueid,$division,$races); $cache->save($leaguetable, $individualleaguekey); } $this->view->leaguetable = $leaguetable; .... I want to avoid duplicating the parameters between the cache-key creation and the model method, so I'm thinking of moving the caching logic out of my controller class and into the model class packaged in './model/DbTable', but this seems incorrect since the DB model should only handle SQL operations. Any suggestions on how I can implement a clean, patterned solution?

    Read the article

  • LINQ to SQL: return group-by results from a method

    - by petebob796
    How do I create a method to return the results of a group by in LINQ to SQL, such as the one below: internal SomeTypeIDontKnow GetDivisionsList(string year) { var divisions = from p in db.cm_SDPs where p.acad_period == year group p by new { p.Division } into g select new { g.Key.Division }; return divisions; } I can't seem to define a correct return type. I think it is because it's an anonymous type, but I haven't got my head around it yet. Do I need to convert it to a list or something? The results would just be used for filling a combo box.

    Read the article

  • Approximate Number of CPU Cycles for Various Operations

    - by colordot
    I am trying to find a reference for approximately how many CPU cycles various operations require. I don't need exact numbers (as this is going to vary between CPUs) but I'd like something relatively credible that gives ballpark figures that I could cite in discussion with friends. As an example, we all know that floating point division takes more CPU cycles than, say, doing a bitshift. I'd guess that the division is around 100 cycles, whereas a shift is 1, but I'm looking for something to cite to back that up. Can anyone recommend such a resource?
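
    Agner Fog's instruction tables (agner.org/optimize) are the usual citable reference for per-instruction latencies and throughputs across CPU generations. For a quick local ballpark, here is a crude, not cycle-accurate C++ timing sketch; the volatile divisor stops the compiler from replacing the divide with multiply-and-shift tricks:

        #include <chrono>
        #include <cstdint>
        #include <cstdio>

        int main() {
            const int64_t N = 100000000;
            volatile uint64_t divisor = 7;  // volatile: forces a real divide instruction
            volatile uint64_t sink = 0;     // volatile: keeps the loops from being elided

            auto t0 = std::chrono::steady_clock::now();
            for (int64_t i = 1; i <= N; i++) sink = sink + (uint64_t)i / divisor;
            auto t1 = std::chrono::steady_clock::now();
            for (int64_t i = 1; i <= N; i++) sink = sink + ((uint64_t)i >> 3);
            auto t2 = std::chrono::steady_clock::now();

            long long div_ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
            long long shr_ms = std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count();
            printf("divide: %lld ms, shift: %lld ms\n", div_ms, shr_ms);
            return 0;
        }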

    Read the article

  • Why does the code add 7 if the number is not >= 0?

    - by Hugo Dozois
    I've got this program in MIPS assembly, which comes from C code that computes the simple average of the eight arguments of the function. average8: addu $4,$4,$5 addu $4,$4,$6 addu $4,$4,$7 lw $2,16($sp) #nop addu $4,$4,$2 lw $2,20($sp) #nop addu $4,$4,$2 lw $2,24($sp) #nop addu $4,$4,$2 lw $2,28($sp) #nop addu $2,$4,$2 bgez $2,$L2 addu $2,$2,7 $L2: sra $2,$2,3 j $31 When the number is positive, we directly divide by 8 (shift by 3 bits), but when the number is negative, we first addu 7 and then do the division. My question is: why do we add 7 to $2 when $2 is not >= 0? EDIT: Here is the C code: int average8(int x1, int x2, int x3, int x4, int x5, int x6, int x7, int x8) { return (x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8) / 8; } note: the possible loss in the division since we are using ints instead of floats or doubles is not important in this case.
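
    The bias exists because C integer division truncates toward zero while an arithmetic right shift rounds toward negative infinity; adding 2^3 - 1 = 7 to negative values before the shift reconciles the two. A small demonstration (right-shifting a negative signed value is technically implementation-defined, though arithmetic shift is what every mainstream compiler does):

        #include <cstdio>

        int main() {
            // x / 8 truncates toward zero; x >> 3 rounds toward -infinity.
            // (x + 7) >> 3 matches x / 8 for negative x, which is what the
            // bgez/addu pair in the MIPS output implements.
            for (int x = -10; x <= 10; x += 5) {
                int biased = (x < 0) ? x + 7 : x;
                printf("%3d: x/8 = %2d, x>>3 = %2d, biased>>3 = %2d\n",
                       x, x / 8, x >> 3, biased >> 3);
            }
            return 0;
        }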

    Read the article

  • Does a C/C++ compiler optimize constant divisions by a power-of-two value into shifts?

    - by porgarmingduod
    Question says it all. Does anyone know if the following... size_t div(size_t value) { const size_t x = 64; return value / x; } ...is optimized into? size_t div(size_t value) { return value >> 6; } Do compilers do this? (My interest lies in GCC.) Are there situations where it does and others where it doesn't? I would really like to know, because every time I write a division that could be optimized like this I spend some mental energy wondering whether a precious nothing of a second is wasted doing a division where a shift would suffice.
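
    Short answer: GCC (like essentially every optimizing compiler) does perform this strength reduction, because for unsigned operands the two forms are exactly equivalent; for signed operands it still avoids a divide but must add the rounding bias discussed in the MIPS question above. A compile-only sketch you can inspect with g++ -O2 -S (or on godbolt.org):

        #include <cstddef>

        // Unsigned: value / 64 == value >> 6 for every input, so the
        // optimizer is free to emit a single shift (shr on x86).
        size_t div_unsigned(size_t value) {
            const size_t x = 64;
            return value / x;
        }

        // Signed: truncation toward zero differs from an arithmetic shift
        // on negatives, so the compiler emits shift-plus-bias (sar with a
        // small adjustment), still without a divide instruction.
        long div_signed(long value) {
            return value / 64;
        }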

    Read the article

  • Changing floating point behavior in Python to NumPy style

    - by Tristan
    Is there a way to make Python floating point numbers follow numpy's rules regarding +/- Inf and NaN? For instance, making 1.0/0.0 = Inf. >>> from numpy import * >>> ones(1)/0 array([ Inf]) >>> 1.0/0.0 Traceback (most recent call last): File "<stdin>", line 1, in <module> ZeroDivisionError: float division NumPy's divide function divide(1.0,0.0)=Inf; however, it is not clear if it can be used similarly to from __future__ import division.

    Read the article

  • SQLAuthority News – Monthly list of Puzzles and Solutions on SQLAuthority.com

    - by pinaldave
    This month has been a very interesting month for SQLAuthority.com: we had various puzzles in which everybody participated, and lots of interesting conversation which we have shared. Let us start with the latest puzzles and continue going down. A few answers were also posted on Facebook. SQL SERVER – Puzzle Involving NULL – Resolve – Error – Operand data type void type is invalid for sum operator This puzzle involves NULL and throws an error. The challenge is to resolve the error, and there are multiple ways to do so. Readers have contributed various methods, and a few of them have even explained why this error shows up. NULLs are a very important part of the database, and if one of the columns has a NULL, the result can be totally different from the one expected. SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers I modified a script provided by a friend to find the greatest of two numbers. My script has a small bug in it; however, lots of readers have suggested better scripts. Madhivanan has written a blog post on the subject over here. SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints This quiz is hosted on my friend Jacob's site. I have written many hints on how one can tune cubes. One can take part here and win exciting prizes. SQL SERVER – Solution – Generating Zero Without using Any Numbers in T-SQL Madhivanan asked a very interesting question on his blog about how to generate zero without using any numbers in T-SQL, and demonstrated various methods of doing so. I asked the same question on the blog and got many interesting answers, which I have shared. SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once I have to accept that this was the most difficult puzzle. In this puzzle I asked why the statistics of the tables are not getting updated even though the settings are correct. It tests various concepts: 1) indexes, 2) statistics, 3) database settings, etc. There are multiple ways of solving this puzzle. It was interesting: many took interest, but only a few got it right. SQL SERVER – Question to You – When to use Function and When to use Stored Procedure This is a rather straightforward question and not the typical puzzle. The answers from readers are great; however, there is still room for more detailed answers. SQL SERVER – Selecting Domain from Email Address I wrote on selecting domains from email addresses. Madhivanan made a puzzle out of this simple question and wrote a follow-up post over here, in which he shows various ways to find email addresses from a list of domains. Finally, this is not a puzzle but an amazing guest post by Feodor Georgiev, who has written on the subject of Job Interviewing the Right Way (and for the Right Reasons). An article which everyone should read. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article
