Search Results

Search found 34207 results on 1369 pages for 'query output'.


  • mysql query to dynamically convert row data to columns

    - by Anirudh Goel
    I am working on a pivot table query. The schema is as follows: Sno, Name, District. The same name may appear in many districts; take this sample data, for example:
    Sno  Name     District
    1    Mike     CA
    2    Mike     CA
    3    Proctor  JB
    4    Luke     MN
    5    Luke     MN
    6    Mike     CA
    7    Mike     LP
    8    Proctor  MN
    9    Proctor  JB
    10   Proctor  MN
    11   Luke     MN
    As you can see, there is a set of 4 distinct districts (CA, JB, MN, LP). I want to generate the pivot table by mapping each name against the districts:
    Name     CA  JB  MN  LP
    Mike      3   0   0   1
    Proctor   0   2   2   0
    Luke      0   0   3   0
    I wrote the following query for this:
      select name, sum(if(District="CA",1,0)) as "CA", sum(if(District="JB",1,0)) as "JB", sum(if(District="MN",1,0)) as "MN", sum(if(District="LP",1,0)) as "LP" from district_details group by name
    However, the list of districts may grow, in which case I will have to edit the query manually and add the new district to it. I want to know whether there is a query that can dynamically pick up the names of the distinct districts and run the query above. I know I can do it with a procedure that generates the script on the fly; is there any other method? I ask because the output of "select distinct(District) from district_details" returns a single column with a district name on each row, which I would like transposed into columns.
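    One way to do the on-the-fly generation the asker mentions, offered only as a sketch rather than a confirmed answer: build the column list with GROUP_CONCAT and run it as a prepared statement. The table and column names come from the question; everything else is illustrative.
      -- Sketch only: dynamic pivot for district_details(Sno, Name, District)
      SET SESSION group_concat_max_len = 8192;
      SELECT GROUP_CONCAT(DISTINCT
               CONCAT('SUM(IF(District=''', District, ''',1,0)) AS `', District, '`'))
        INTO @cols
        FROM district_details;
      SET @sql = CONCAT('SELECT Name, ', @cols, ' FROM district_details GROUP BY Name');
      PREPARE stmt FROM @sql;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;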

    Read the article

  • An abundance of LINQ queries and expressions using both the query and method syntax.

    - by nikolaosk
    In this post I will be writing LINQ queries against an array of strings and an array of integers. Moreover, I will be using LINQ to query a SQL Server database. I can use LINQ against arrays since arrays of strings/integers implement the IEnumerable interface. I thought it would be a good idea to use both the method syntax and the query syntax. There are other places on the net where you can find examples of LINQ queries, but I decided to create a big post using as many LINQ examples as possible. We...(read more)

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, first time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and I am not sure how to go about fixing it. The database itself has 44 tables and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The database is built via JMDB using the flat files from IMDB. The SQL query that I am about to show is also from that program (which likewise experiences very slow search times). I have tried to include as much information as I can, such as the explain plan:
    "QUERY PLAN"
    "HashAggregate (cost=46492.52..46493.50 rows=98 width=46)"
    "  Output: public.movies.title, public.movies.movieid, public.movies.year"
    "  -> Append (cost=39094.17..46491.79 rows=98 width=46)"
    "    -> HashAggregate (cost=39094.17..39094.87 rows=70 width=46)"
    "      Output: public.movies.title, public.movies.movieid, public.movies.year"
    "      -> Seq Scan on movies (cost=0.00..39093.65 rows=70 width=46)"
    "        Output: public.movies.title, public.movies.movieid, public.movies.year"
    "        Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))"
    "    -> Nested Loop (cost=0.00..7395.94 rows=28 width=46)"
    "      Output: public.movies.title, public.movies.movieid, public.movies.year"
    "      -> Seq Scan on akatitles (cost=0.00..7159.24 rows=28 width=4)"
    "        Output: akatitles.movieid, akatitles.language, akatitles.title"
    "        Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))"
    "      -> Index Scan using movies_pkey on movies (cost=0.00..8.44 rows=1 width=46)"
    "        Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid"
    "        Index Cond: (public.movies.movieid = akatitles.movieid)"
    The query itself is:
      SELECT * FROM ((SELECT DISTINCT title, movieid, year FROM movies WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}')) UNION (SELECT movies.title, movies.movieid, movies.year FROM movies INNER JOIN akatitles ON movies.movieid=akatitles.movieid WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))) AS union_tmp2;
    It returns 612 rows in 9078 ms. The database backup (plain text) is 1.61 GB. It's a really complex query and I am not fully cognizant of it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards, Anthoni
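    One possible direction, offered only as a sketch (it assumes PostgreSQL 9.1 or later with the pg_trgm extension available, which the post does not confirm): trigram GIN indexes let the ILIKE '%Babe%' predicates use an index instead of the sequential scans shown in the plan.
      -- Sketch: trigram indexes for the two ILIKE'd title columns
      CREATE EXTENSION IF NOT EXISTS pg_trgm;
      CREATE INDEX movies_title_trgm_idx    ON movies    USING gin (title gin_trgm_ops);
      CREATE INDEX akatitles_title_trgm_idx ON akatitles USING gin (title gin_trgm_ops);
      ANALYZE movies;
      ANALYZE akatitles;
      -- Then re-check the union query with EXPLAIN ANALYZE and compare against the plan above.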

    Read the article

  • Several Small, Specific, MySQL Query Cache Questions

    - by Robbie
    I've looked all over the web and through the questions asked here about MySQL caching, and most of them seem very non-specific about a couple of questions I have about performance and MySQL query caching. Specifically I want answers to these questions; assume for all of them that I have the query cache enabled and it is of type 2, or "DEMAND":
    1. Is the query cache per table, per database, or per server? Meaning, if I have the cache size set to X and have T tables and D databases, will I be caching TX, DX, or X amount of data?
    2. If I have table T1, which I regularly use the SQL_CACHE hint on for SELECT queries, and table T2, which I never do, will a SELECT query against T2 check the cache first before performing the query? (Note: I don't want to use SQL_NO_CACHE for all T2 queries.)
    3. Assume the same situation as in question 2. If I alter (INSERT, DELETE) table T2, will any processing be done on the cache?
    4. For the answers to 2 and 3, is this processing time negligible if T2 is constantly being altered and is the target of a majority of my SELECT queries?
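    A small illustration of the DEMAND setup being described (the column names here are placeholders, not from the question): with query_cache_type = 2, only statements carrying the hint are stored in the cache.
      SELECT SQL_CACHE name FROM T1 WHERE id = 42;   -- candidate for the query cache
      SELECT name FROM T2 WHERE id = 42;             -- no hint, so not stored
      SHOW STATUS LIKE 'Qcache%';                    -- hits, inserts, lowmem prunes, free memory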

    Read the article

  • How can I identify unknown query string fragments that are coming to my site?

    - by Jon
    In the Google Analytics content overview for a site that I work on, the home page is getting many pageviews with some unfamiliar query string fragments, for example: /?jkId=1234567890abcdef1234567890abcdef&jt=1&jadid=1234567890&js=1&jk=key words&jsid=12345&jmt=1 (potentially identifiable IDs have been changed). It clearly looks like some kind of ad tracking info, but no one who works on the site knows where it comes from, and I haven't been able to find any useful information from searching. Is there a listing of common query string keys available anywhere? Alternatively, does anyone happen to know where these keys (jkId, jt, jadid, js, jk, jsid and jmt) might come from?

    Read the article

  • Java XML Output - proper indenting for child items

    - by Dr1Ku
    Hello, I'd like to serialize some simple data model into xml, I've been using the standard java.org.w3c -related code (see below), the indentation is better than no "OutputKeys.INDENT", yet there is one little thing that remains - proper indentation for child elements. I know that this has been asked before on stackoverflow , yet that configuration did not work for me, this is the code I'm using : DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); DocumentBuilder builder = factory.newDocumentBuilder(); Document doc = builder.newDocument(); doc = addItemsToDocument(doc); // The addItemsToDocument method adds childElements to the document. TransformerFactory transformerFactory = TransformerFactory.newInstance(); transformerFactory.setAttribute("indent-number", new Integer(4)); // switching to setAttribute("indent-number", 4); doesn't help Transformer transformer = transformerFactory.newTransformer(); transformer.setOutputProperty(OutputKeys.METHOD, "xml"); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); DOMSource source = new DOMSource(doc); StreamResult result = new StreamResult(outFile); // outFile is a regular File outFile = new File("some/path/foo.xml"); transformer.transform(source, result); The output produced is : <?xml version="1.0" encoding="UTF-8" standalone="no"?> <stuffcontainer> <stuff description="something" duration="240" title="abc"> <otherstuff /> </stuff> </stuffcontainer> Whereas I would want it (for more clarity) like : <?xml version="1.0" encoding="UTF-8" standalone="no"?> <stuffcontainer> <stuff description="something" duration="240" title="abc"> <otherstuff /> </stuff> </stuffcontainer> I was just wondering if there is a way of doing this, make it properly indented for the child elements. Thank you in advance ! Happy Easter coding :-) !

    Read the article

  • insert array to mysql db function

    - by ganjan
    Hi. I have an array where the keys represent the columns in my database table. Now I want a function that builds a MySQL UPDATE query from it. Something like:
      $db['money'] = $money_input + $money_db;
      $db['location'] = $location;
      $query = 'UPDATE tbl_user SET ';
      for($x = 0; $x < count($db); $x++ ){
          $query .= $db something ".=." $db something
      }
      $query .= " WHERE username=".$username." ";
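    For reference, the finished statement the loop is meant to produce would look roughly like this (the values are purely illustrative, and quoting/escaping is left aside):
      UPDATE tbl_user SET money = 125, location = 'CA' WHERE username = 'bob';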

    Read the article

  • MySQL query optimization - distinct, order by and limit

    - by Manuel Darveau
    I am trying to optimize the following query: select distinct this_.id as y0_ from Rental this_ left outer join RentalRequest rentalrequ1_ on this_.id=rentalrequ1_.rental_id left outer join RentalSegment rentalsegm2_ on rentalrequ1_.id=rentalsegm2_.rentalRequest_id where this_.DTYPE='B' and this_.id<=1848978 and this_.billingStatus=1 and rentalsegm2_.endDate between 1273631699529 and 1274927699529 order by rentalsegm2_.id asc limit 0, 100; This query is done multiple time in a row for paginated processing of records (with a different limit each time). It returns the ids I need in the processing. My problem is that this query take more than 3 seconds. I have about 2 million rows in each of the three tables. Explain gives: +----+-------------+--------------+--------+-----------------------------------------------------+---------------+---------+--------------------------------------------+--------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+--------------+--------+-----------------------------------------------------+---------------+---------+--------------------------------------------+--------+----------------------------------------------+ | 1 | SIMPLE | rentalsegm2_ | range | index_endDate,fk_rentalRequest_id_BikeRentalSegment | index_endDate | 9 | NULL | 449904 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | rentalrequ1_ | eq_ref | PRIMARY,fk_rental_id_BikeRentalRequest | PRIMARY | 8 | solscsm_main.rentalsegm2_.rentalRequest_id | 1 | Using where | | 1 | SIMPLE | this_ | eq_ref | PRIMARY,index_billingStatus | PRIMARY | 8 | solscsm_main.rentalrequ1_.rental_id | 1 | Using where | +----+-------------+--------------+--------+-----------------------------------------------------+---------------+---------+--------------------------------------------+--------+----------------------------------------------+ I tried to remove the distinct and the query ran three times faster. explain without the query gives: +----+-------------+--------------+--------+-----------------------------------------------------+---------------+---------+--------------------------------------------+--------+-----------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+--------------+--------+-----------------------------------------------------+---------------+---------+--------------------------------------------+--------+-----------------------------+ | 1 | SIMPLE | rentalsegm2_ | range | index_endDate,fk_rentalRequest_id_BikeRentalSegment | index_endDate | 9 | NULL | 451972 | Using where; Using filesort | | 1 | SIMPLE | rentalrequ1_ | eq_ref | PRIMARY,fk_rental_id_BikeRentalRequest | PRIMARY | 8 | solscsm_main.rentalsegm2_.rentalRequest_id | 1 | Using where | | 1 | SIMPLE | this_ | eq_ref | PRIMARY,index_billingStatus | PRIMARY | 8 | solscsm_main.rentalrequ1_.rental_id | 1 | Using where | +----+-------------+--------------+--------+-----------------------------------------------------+---------------+---------+--------------------------------------------+--------+-----------------------------+ As you can see, the Using temporary is added when using distinct. I already have an index on all fields used in the where clause. Is there anything I can do to optimize this query? Thank you very much!

    Read the article

  • Lucene Query Syntax

    - by Don
    Hi, I'm trying to use Lucene to query a domain that has the following structure Student 1-------* Attendance *---------1 Course The data in the domain is summarised below Course.name Attendance.mandatory Student.name ------------------------------------------------- cooking N Bob art Y Bob If I execute the query "courseName:cooking AND mandatory:Y" it returns Bob, because Bob is attending the cooking course, and Bob is also attending a mandatory course. However, what I really want to query for is "students attending a mandatory cooking course", which in this case would return nobody. Is it possible to formulate this as a Lucene query? I'm actually using Compass, rather than Lucene directly, so I can use either CompassQueryBuilder or Lucene's query language. For the sake of completeness, the domain classes themselves are shown below. These classes are Grails domain classes, but I'm using the standard Compass annotations and Lucene query syntax. @Searchable class Student { @SearchableProperty(accessor = 'property') String name static hasMany = [attendances: Attendance] @SearchableId(accessor = 'property') Long id @SearchableComponent Set<Attendance> getAttendances() { return attendances } } @Searchable(root = false) class Attendance { static belongsTo = [student: Student, course: Course] @SearchableProperty(accessor = 'property') String mandatory = "Y" @SearchableId(accessor = 'property') Long id @SearchableComponent Course getCourse() { return course } } @Searchable(root = false) class Course { @SearchableProperty(accessor = 'property', name = "courseName") String name @SearchableId(accessor = 'property') Long id }

    Read the article

  • PHP: MySQL query duplicating update for no reason

    - by ThinkingInBits
    The code below is first the client code, then the class file. For some reason the 'deductTokens()' method is calling twice, thus charging an account double. I've been programming all night, so I may just need a second pair of eyes: if ($action == 'place_order') { if ($_REQUEST['unlimited'] == 200) { $license = 'extended'; } else { $license = 'standard'; } if ($photograph->isValidPhotographSize($photograph_id, $_REQUEST['size_radio'])) { $token_cost = $photograph->getTokenCost($_REQUEST['size_radio'], $_REQUEST['unlimited']); $order = new ImageOrder($_SESSION['user']['id'], $_REQUEST['size_radio'], $license, $token_cost); $order->saveOrder(); $order->deductTokens(); header('location: account.php'); } else { die("Please go back and select a valid photograph size"); } } ######CLASS CODE####### <?php include_once('database_classes.php'); class Order { protected $account_id; protected $cost; protected $license; public function __construct($account_id, $license, $cost) { $this->account_id = $account_id; $this->cost = $cost; $this->license = $license; } } class ImageOrder extends Order { protected $size; public function __construct($account_id, $size, $license, $cost) { $this->size = $size; parent::__construct($account_id, $license, $cost); } public function saveOrder() { //$db = Connect::connect(); //$account_id = $db->real_escape_string($this->account_id); //$size = $db->real_escape_string($this->size); //$license = $db->real_escape_string($this->license); //$cost = $db->real_escape_string($this->cost); } public function deductTokens() { $db = Connect::connect(); $account_id = $db->real_escape_string($this->account_id); $cost = $db->real_escape_string($this->cost); $query = "UPDATE accounts set tokens=tokens-$cost WHERE id=$account_id"; $result = $db->query($query); } } ?> When I die("$query"); directly after the query, it's printing the proper statement, and when I run that query within MySQL it works perfectly.

    Read the article

  • c# creating a database query METHOD

    - by Sinaesthetic
    I'm not sure if I'm deluded, but what I would like to do is create a method that returns the results of a query, so that I can reuse the connection code. As I understand it, a query returns an object, but how do I pass that object back? I want to send the query into the method as a string argument and have it return the results so that I can use them. Here's what I have, which was a stab in the dark; it obviously doesn't work. This example is me trying to populate a listbox with the results of a query; the sheet name is Employees and the field/column is name. The error I get is "Complex DataBinding accepts as a data source either an IList or an IListSource.". Any ideas?
      public Form1()
      {
          InitializeComponent();
          openFileDialog1.ShowDialog();
          openedFile = openFileDialog1.FileName;
          lbxEmployeeNames.DataSource = Query("Select [name] FROM [Employees$]");
      }
      public object Query(string sql)
      {
          System.Data.OleDb.OleDbConnection MyConnection;
          System.Data.OleDb.OleDbCommand myCommand = new System.Data.OleDb.OleDbCommand();
          string connectionPath;
          //build connection string
          connectionPath = "provider=Microsoft.Jet.OLEDB.4.0;Data Source='" + openedFile + "';Extended Properties=Excel 8.0;";
          MyConnection = new System.Data.OleDb.OleDbConnection(connectionPath);
          MyConnection.Open();
          myCommand.Connection = MyConnection;
          myCommand.CommandText = sql;
          return myCommand.ExecuteNonQuery();
      }

    Read the article

  • Simple UPDATE query with (sometime) long query times

    - by Eric
    I run a dedicated MySQL server (2 cores, 16GB RAM) serving 100-200 requests per second. It is getting sluggish during peak traffic and I have a hard time optimizing the server. So I'm looking for some ideas now that I have done lots of Innodb fine-tuning with the "TUNING PRIMER" The query that now generates most slow queries is the following (see result from mysqldumpslow): Count: 433 Time=3.40s (1470s) Lock=0.00s (0s) Rows=0.0 (0), UPDATE user_sessions SET tid='S' WHERE idsession='S' I am very surprised to have so many long queries for such a simple query with no locking. Fyi, the table is InnoDB and has 14000 rows. It contains all active sessions on the site with approx 10 UPDATE and SELECT hits per second. Here is its structure: CREATE TABLE `user_sessions` ( `personid` mediumint(9) NOT NULL DEFAULT '0', `ip` varchar(18) COLLATE utf8_unicode_ci NOT NULL, `idsession` varchar(32) COLLATE utf8_unicode_ci NOT NULL, `datum` date NOT NULL DEFAULT '0000-00-00', `tid` time NOT NULL DEFAULT '00:00:00', `status` tinyint(4) NOT NULL DEFAULT '0', KEY `personid` (`personid`), KEY `idsession` (`idsession`), KEY `datum` (`datum`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci Any ideas?
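    A few diagnostic statements that are often useful for this kind of single-row UPDATE, offered as a sketch rather than a diagnosis of this particular server:
      EXPLAIN SELECT tid FROM user_sessions WHERE idsession = 'S';   -- on MySQL 5.x EXPLAIN only covers SELECT, so probe the equivalent lookup
      SHOW ENGINE INNODB STATUS;                                     -- look for lock waits and long-running transactions
      SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';                    -- cumulative row-lock wait counters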

    Read the article

  • An issue with tessellation a model with DirectX11

    - by Paul Ske
    I took the hardware tessellation tutorial from Rastertek and implemended texturing instead of color. This is great, so I wanted to implemended the same techique to a model inside my game editor and I noticed it doesn't draw anything. I compared the detailed tessellation from DirectX SDK sample. Inside the shader file - if I replace the HullInputType with PixelInputType it draws. So, I think because when I compiled the shaders inside the program it compiles VertexShader, PixelShader, HullShader then DomainShader. Isn't it suppose to be VertexShader, HullSHader, DomainShader then PixelShader or does it really not matter? I am just curious why wouldn't the model even be drawn when HullInputType but renders fine with PixelInputType. Shader Code: [code] cbuffer ConstantBuffer { float4x4 WVP; float4x4 World; // the rotation matrix float3 lightvec; // the light's vector float4 lightcol; // the light's color float4 ambientcol; // the ambient light's color bool isSelected; } cbuffer cameraBuffer { float3 cameraDirection; float padding; } cbuffer TessellationBuffer { float tessellationAmount; float3 padding2; } struct ConstantOutputType { float edges[3] : SV_TessFactor; float inside : SV_InsideTessFactor; }; Texture2D Texture; Texture2D NormalTexture; SamplerState ss { MinLOD = 5.0f; MipLODBias = 0.0f; }; struct HullOutputType { float3 position : POSITION; float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct HullInputType { float4 position : POSITION; float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct VertexInputType { float4 position : POSITION; float2 texcoord : TEXCOORD; float3 normal : NORMAL; float3 tangent : TANGENT; uint uVertexID : SV_VERTEXID; }; struct PixelInputType { float4 position : SV_POSITION; float2 texcoord : TEXCOORD0; // texture coordinates float3 normal : NORMAL; float3 tangent : TANGENT; float4 color : COLOR; float3 viewDirection : TEXCOORD1; float4 depthBuffer : TEXTURE0; }; HullInputType VShader(VertexInputType input) { HullInputType output; output.position.w = 1.0f; output.position = mul(input.position,WVP); output.texcoord = input.texcoord; output.normal = input.normal; output.tangent = input.tangent; //output.normal = mul(normal,World); //output.tangent = mul(tangent,World); //output.color = output.color; //output.texcoord = texcoord; // set the texture coordinates, unmodified return output; } ConstantOutputType TexturePatchConstantFunction(InputPatch inputPatch,uint patchID : SV_PrimitiveID) { ConstantOutputType output; output.edges[0] = tessellationAmount; output.edges[1] = tessellationAmount; output.edges[2] = tessellationAmount; output.inside = tessellationAmount; return output; } [domain("tri")] [partitioning("integer")] [outputtopology("triangle_cw")] [outputcontrolpoints(3)] [patchconstantfunc("TexturePatchConstantFunction")] HullOutputType HShader(InputPatch patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID) { HullOutputType output; // Set the position for this control point as the output position. output.position = patch[pointId].position; // Set the input color as the output color. 
output.texcoord = patch[pointId].texcoord; output.normal = patch[pointId].normal; output.tangent = patch[pointId].tangent; return output; } [domain("tri")] PixelInputType DShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch patch) { float3 vertexPosition; float2 uvPosition; float4 worldposition; PixelInputType output; // Interpolate world space position with barycentric coordinates float3 vWorldPos = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; // Determine the position of the new vertex. vertexPosition = vWorldPos; // Calculate the position of the new vertex against the world, view, and projection matrices. output.position = mul(float4(vertexPosition, 1.0f),WVP); // Send the input color into the pixel shader. output.texcoord = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.normal = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.tangent = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; //output.depthBuffer = output.position; //output.depthBuffer.w = 1.0f; //worldposition = mul(output.position,WVP); //output.viewDirection = cameraDirection.xyz - worldposition.xyz; //output.viewDirection = normalize(output.viewDirection); return output; } [/code] Somethings are commented out but will be in place when fixed. I'm probably not connecting something correctly.

    Read the article

  • Customize Team Build 2010 – Part 13: Get control over the Build Output

    In the series the following parts have been published Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application In the part 8, I have explained how you can add informational messages, warnings or errors to the build output. If you want to integrate with other lines of text to the build output, you need to do more. This post will show you how you can add extra steps, additional information and hyperlinks to the build output. UPDATE 13-12-2010: Thanks to Jason Pricket, it is now also possible to not show every activity in the build log. This is really useful when you are doing for-loops in your template. To see how you can do that, check out Jason's blog: http://blogs.msdn.com/b/jpricket/archive/2010/12/09/tfs-2010-making-your-build-log-less-noisy.aspx Add an hyperlink to the end of the build output Lets start with a simple example of how you can adjust the build output. In this case we are going to add at the end of the build output an hyperlink where a user can click on to for example start the deployment to the test environment. In part 4 you can find information how you can create a custom activity To add information to the build output, you need the BuildDetail. This value is a variable in your xaml and is thus easily transferable to you custom activity. Besides the BuildDetail the user has also to specify the text and the url that has to be added to the end of the build output. The following code segment shows you how you can achieve this.     [BuildActivity(HostEnvironmentOption.All)]    public sealed class AddHyperlinkToBuildOutput : CodeActivity    {        [RequiredArgument]        public InArgument<IBuildDetail> BuildDetail { get; set; }         [RequiredArgument]        public InArgument<string> DisplayText { get; set; }         [RequiredArgument]        public InArgument<string> Url { get; set; }         protected override void Execute(CodeActivityContext context)        {            // Obtain the runtime value of the input arguments                        IBuildDetail buildDetail = context.GetValue(this.BuildDetail);            string displayText = context.GetValue(this.DisplayText);            string url = context.GetValue(this.Url);             // Add the hyperlink            buildDetail.Information.AddExternalLink(displayText, new Uri(url));            buildDetail.Information.Save();        }    } If you add this activity to somewhere in your build process template (within the scope Run on Agent), you will get the following build output Add an line of text to the build output The next challenge is to add this kind of output not only to the end of the build output but at the step that is currently executing. To be able to do this, you need the current node in the build output. The following code shows you how you can achieve this. 
First you need to get the current activity tracking, which you can get with the following line of code             IActivityTracking currentTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking(context); Then you can create a new node and set its type to Activity Tracking Node (so copy it from the current node) and do nice things with the node.             IBuildInformationNode childNode = currentTracking.Node.Children.CreateNode();            childNode.Type = currentTracking.Node.Type;            childNode.Fields.Add("DisplayText", "This text is displayed."); You can also add a build step to display progress             IBuildStep buildStep = childNode.Children.AddBuildStep("Custom Build Step", "This is my custom build step");            buildStep.FinishTime = DateTime.Now.AddSeconds(10);            buildStep.Status = BuildStepStatus.Succeeded; Or you can add an hyperlink to the node             childNode.Children.AddExternalLink("My link", new Uri(http://www.ewaldhofman.nl)); When you combine this together you get the following result in the build output   You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Feedback + Bad output

    - by user1770094
    So I've got an assignment I think I'm more or less done with, but there is something which is messing up the output badly somewhere down the line, or even the calculation, and I don't see where the problem is. The assignment is to make a game in which a certain ammount of players run up through a tunnel towards a spot,where they will stop and spin around it,and then their dizziness is supposed to make them randomly either progress towards goal or regress back towards start.And each time they get another spot closer to goal,they get another "marking",and it goes on like this until one of them reaches goal. The program includes three files: one main.cpp,one header file and another cpp file. The header file: #ifndef COMPETITOR_H #define COMPETITOR_H #include <string> using namespace std; class Competitor { public: void setName(); string getName(); void spin(); void move(); int checkScore(); void printResult(); private: string name; int direction; int markedSpots; }; #endif // COMPETITOR_H The second cpp file: #include <iostream> #include <string> #include <cstdlib> #include <ctime> #include "Competitor.h" using namespace std; void Competitor::setName() { cin>>name; } string Competitor::getName() { return name; } void Competitor::spin() { srand(time(NULL)); direction = rand()%1+0; } void Competitor::move() { if(direction == 1) { markedSpots++; } else if(direction == 0 && markedSpots != 0) { markedSpots--; } } int Competitor::checkScore() { return markedSpots; } void Competitor::printResult() { if(direction == 1) { cout<<" is heading towards goal and has currently "<<markedSpots<<" markings."; } else if(direction == 0) { cout<<"\n"<<getName()<<" is heading towards start and has currently "<<markedSpots<<" markings."; } } The main cpp file: #include <iostream> #include <string> #include <cstdlib> #include <ctime> #include "Competitor.h" using namespace std; void inputAndSetNames(Competitor comps[],int nrOfComps); void makeTwist(Competitor comps[],int nrOfComps); void makeMove(Competitor comps[],int nrOfComps); void showAll(Competitor comps[],int nrOfComps); int winner(Competitor comps[],int nrOfComps, int nrOfTwistPlaces); int main() { int nrOfTwistPlaces; int nrOfComps; int noWinner = -1; int laps = 0; cout<<"How many spinning places should there be? "; cin>>nrOfTwistPlaces; cout<<"How many competitors should there be? "; cin>>nrOfComps; Competitor * comps = new Competitor[nrOfComps]; inputAndSetNames(comps, nrOfComps); do { laps++; cout<<"\nSpin "<<laps<<":"; makeTwist(comps, nrOfComps); makeMove(comps, nrOfComps); showAll(comps, nrOfComps); }while(noWinner == -1); delete [] comps; return 0; } void inputAndSetNames(Competitor comps[],int nrOfComps) { cout<<"Type in the names of the "<<nrOfComps<<" competitors:\n"; for(int i=0;i<nrOfComps;i++) { comps[i].setName(); } cout<<"\n"; } void makeTwist(Competitor comps[],int nrOfComps) { for(int i=0;i<nrOfComps;i++) { comps[i].spin(); } } void makeMove(Competitor comps[],int nrOfComps) { for(int i=0;i<nrOfComps;i++) { comps[i].move(); } } void showAll(Competitor comps[],int nrOfComps) { for(int i=0;i<nrOfComps;i++) { comps[i].printResult(); } cout<<"\n\n"; system("pause"); } int winner(Competitor comps[],int nrOfComps, int nrOfTwistPlaces) { int end = 0; int score = 0; for(int i=0;i<nrOfComps;i++) { score = comps[i].checkScore(); if(score == nrOfTwistPlaces) { end = 1; } else end = -1; } return end; } I'd be grateful if you would point out other mistakes if you see any.Thanks in advance.

    Read the article

  • JPA : optimize EJB-QL query involving large many-to-many join table

    - by Fabien
    Hi all. I'm using Hibernate Entity Manager 3.4.0.GA with Spring 2.5.6 and MySql 5.1. I have a use case where an entity called Artifact has a reflexive many-to-many relation with itself, and the join table is quite large (1 million lines). As a result, the HQL query performed by one of the methods in my DAO takes a long time. Any advice on how to optimize this and still use HQL ? Or do I have no choice but to switch to a native SQL query that would perform a join between the table ARTIFACT and the join table ARTIFACT_DEPENDENCIES ? Here is the problematic query performed in the DAO : @SuppressWarnings("unchecked") public List<Artifact> findDependentArtifacts(Artifact artifact) { Query query = em.createQuery("select a from Artifact a where :artifact in elements(a.dependencies)"); query.setParameter("artifact", artifact); List<Artifact> list = query.getResultList(); return list; } And the code for the Artifact entity : package com.acme.dependencytool.persistence.model; import java.util.ArrayList; import java.util.List; import javax.persistence.CascadeType; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.FetchType; import javax.persistence.GeneratedValue; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.persistence.JoinTable; import javax.persistence.ManyToMany; import javax.persistence.Table; import javax.persistence.UniqueConstraint; @Entity @Table(name = "ARTIFACT", uniqueConstraints={@UniqueConstraint(columnNames={"GROUP_ID", "ARTIFACT_ID", "VERSION"})}) public class Artifact { @Id @GeneratedValue @Column(name = "ID") private Long id = null; @Column(name = "GROUP_ID", length = 255, nullable = false) private String groupId; @Column(name = "ARTIFACT_ID", length = 255, nullable = false) private String artifactId; @Column(name = "VERSION", length = 255, nullable = false) private String version; @ManyToMany(cascade=CascadeType.ALL, fetch=FetchType.EAGER) @JoinTable( name="ARTIFACT_DEPENDENCIES", joinColumns = @JoinColumn(name="ARTIFACT_ID", referencedColumnName="ID"), inverseJoinColumns = @JoinColumn(name="DEPENDENCY_ID", referencedColumnName="ID") ) private List<Artifact> dependencies = new ArrayList<Artifact>(); public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getGroupId() { return groupId; } public void setGroupId(String groupId) { this.groupId = groupId; } public String getArtifactId() { return artifactId; } public void setArtifactId(String artifactId) { this.artifactId = artifactId; } public String getVersion() { return version; } public void setVersion(String version) { this.version = version; } public List<Artifact> getDependencies() { return dependencies; } public void setDependencies(List<Artifact> dependencies) { this.dependencies = dependencies; } } Thanks in advance. EDIT 1 : The DDLs are generated automatically by Hibernate EntityMananger based on the JPA annotations in the Artifact entity. I have no explicit control on the automaticaly-generated join table, and the JPA annotations don't let me explicitly set an index on a column of a table that does not correspond to an actual Entity (in the JPA sense). So I guess the indexing of table ARTIFACT_DEPENDENCIES is left to the DB, MySQL in my case, which apparently uses a composite index based on both clumns but doesn't index the column that is most relevant in my query (DEPENDENCY_ID). 
mysql describe ARTIFACT_DEPENDENCIES; +---------------+------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +---------------+------------+------+-----+---------+-------+ | ARTIFACT_ID | bigint(20) | NO | MUL | NULL | | | DEPENDENCY_ID | bigint(20) | NO | MUL | NULL | | +---------------+------------+------+-----+---------+-------+ EDIT 2 : When turning on showSql in the Hibernate session, I see many occurences of the same type of SQL query, as below : select dependenci0_.ARTIFACT_ID as ARTIFACT1_1_, dependenci0_.DEPENDENCY_ID as DEPENDENCY2_1_, artifact1_.ID as ID1_0_, artifact1_.ARTIFACT_ID as ARTIFACT2_1_0_, artifact1_.GROUP_ID as GROUP3_1_0_, artifact1_.VERSION as VERSION1_0_ from ARTIFACT_DEPENDENCIES dependenci0_ left outer join ARTIFACT artifact1_ on dependenci0_.DEPENDENCY_ID=artifact1_.ID where dependenci0_.ARTIFACT_ID=? Here's what EXPLAIN in MySql says about this type of query : mysql explain select dependenci0_.ARTIFACT_ID as ARTIFACT1_1_, dependenci0_.DEPENDENCY_ID as DEPENDENCY2_1_, artifact1_.ID as ID1_0_, artifact1_.ARTIFACT_ID as ARTIFACT2_1_0_, artifact1_.GROUP_ID as GROUP3_1_0_, artifact1_.VERSION as VERSION1_0_ from ARTIFACT_DEPENDENCIES dependenci0_ left outer join ARTIFACT artifact1_ on dependenci0_.DEPENDENCY_ID=artifact1_.ID where dependenci0_.ARTIFACT_ID=1; +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+ | 1 | SIMPLE | dependenci0_ | ref | FKEA2DE763364D466 | FKEA2DE763364D466 | 8 | const | 159 | | | 1 | SIMPLE | artifact1_ | eq_ref | PRIMARY | PRIMARY | 8 | dependencytooldb.dependenci0_.DEPENDENCY_ID | 1 | | +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+ EDIT 3 : I tried setting the FetchType to LAZY in the JoinTable annotation, but I then get the following exception : Hibernate: select artifact0_.ID as ID1_, artifact0_.ARTIFACT_ID as ARTIFACT2_1_, artifact0_.GROUP_ID as GROUP3_1_, artifact0_.VERSION as VERSION1_ from ARTIFACT artifact0_ where artifact0_.GROUP_ID=? and artifact0_.ARTIFACT_ID=? 
51545 [btpool0-2] ERROR org.hibernate.LazyInitializationException - failed to lazily initialize a collection of role: com.acme.dependencytool.persistence.model.Artifact.dependencies, no session or session was closed org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.acme.dependencytool.persistence.model.Artifact.dependencies, no session or session was closed at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380) at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372) at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:119) at org.hibernate.collection.PersistentBag.size(PersistentBag.java:248) at com.acme.dependencytool.server.DependencyToolServiceImpl.createArtifactViewBean(DependencyToolServiceImpl.java:93) at com.acme.dependencytool.server.DependencyToolServiceImpl.createArtifactViewBean(DependencyToolServiceImpl.java:109) at com.acme.dependencytool.server.DependencyToolServiceImpl.search(DependencyToolServiceImpl.java:48) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:527) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:166) at com.google.gwt.user.server.rpc.RemoteServiceServlet.doPost(RemoteServiceServlet.java:86) at javax.servlet.http.HttpServlet.service(HttpServlet.java:637) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:647) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:205) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:395) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488)
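    For comparison, the native-SQL join the asker mentions as a fallback would look something like the sketch below; the table and column names are taken from the mappings above, and the parameter stands for the primary key of the artifact whose dependents are wanted.
      -- Sketch of the equivalent native query: artifacts that depend on a given artifact
      SELECT a.ID, a.GROUP_ID, a.ARTIFACT_ID, a.VERSION
      FROM ARTIFACT a
      JOIN ARTIFACT_DEPENDENCIES ad ON ad.ARTIFACT_ID = a.ID
      WHERE ad.DEPENDENCY_ID = ?;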

    Read the article

  • PHP MSSQL : How to display output when query return no row

    - by vamps
    I have a problem with my PHP-MSSQL query. I have a join table that needs to produce a result like this: Department | Group A (WORKHOUR A, OTHOUR A) | Group B (WORKHOUR B, OTHOUR B) | Total A+B (WORKHOUR, OTHOUR) — for example HR 10 15 25 0 35 15, IT 5 5 5 5, Admin 12 12 12 12. The query counts hours per employee for a given date range (an admin enters the dates and, once submitted, the query produces the result above). The problem is that the final output is a mess when there is no row to display: the columns shift to the right, e.g. when IT has only Group A and Admin has only Group B. My question is, how do I prevent this from happening? I've tried everything with while/if-else, but the result is still the same. How do I display "0" (echo "0";) when there is no row to output? This is my query:
      select DD.DPT_ID, DPT.DEPARTMENT_NAME, TU.EMP_GROUP, sum(DD.WORK_HOUR) AS WORK_HOUR, sum(DD.OT_HOUR) AS OT_HOUR
      FROM DEPARTMENT_DETAIL DD
      left join DEPARTMENT DPT ON (DD.DEPT_ID=DPT.DEPT_ID)
      LEFT JOIN TBL_USERS TU ON (TU.EMP_ID=DD.EMP_ID)
      WHERE DD_DATE>='2012-01-01' AND DD_DATE<='2012-01-31' AND TU.EMP_GROUP!=2
      GROUP BY DD.DEPT_ID, DPT.DEPARTMENT_NAME, TU.EMP_GROUP
      ORDER BY DPT.DEPARTMENT_NAME
    And this is one piece of logic I've tried, which doesn't return the result I want:
      while($row = mssql_fetch_array($displayResult)) {
          if ((!$row["WORK_HOUR"]) && (!$row["OT_HOUR"])) {
              echo "<td >"; echo "empty"; echo "&nbsp;</td>";
              echo "<td >"; echo "empty"; echo "&nbsp;</td>";
          } else {
              echo "<td>"; echo $row["WORK_HOUR"]; echo "&nbsp;</td>";
              echo "<td>"; echo $row["OT_HOUR"]; echo "&nbsp;</td>";
          }
      }
    Please help; I've been at this for two days. @__@
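    One common way to avoid the column shift is to pivot the two groups in SQL itself, so every department always comes back as one row with all four hour columns and missing values become 0. This is only a sketch against the tables named above; the group codes 1 and 3 are assumptions — substitute whatever EMP_GROUP values actually mean Group A and Group B.
      SELECT DPT.DEPARTMENT_NAME,
             SUM(CASE WHEN TU.EMP_GROUP = 1 THEN DD.WORK_HOUR ELSE 0 END) AS WORKHOUR_A,  -- 1 = assumed Group A code
             SUM(CASE WHEN TU.EMP_GROUP = 1 THEN DD.OT_HOUR   ELSE 0 END) AS OTHOUR_A,
             SUM(CASE WHEN TU.EMP_GROUP = 3 THEN DD.WORK_HOUR ELSE 0 END) AS WORKHOUR_B,  -- 3 = assumed Group B code
             SUM(CASE WHEN TU.EMP_GROUP = 3 THEN DD.OT_HOUR   ELSE 0 END) AS OTHOUR_B,
             ISNULL(SUM(DD.WORK_HOUR), 0) AS WORKHOUR_TOTAL,
             ISNULL(SUM(DD.OT_HOUR),   0) AS OTHOUR_TOTAL
      FROM DEPARTMENT DPT
      LEFT JOIN DEPARTMENT_DETAIL DD
             ON DD.DEPT_ID = DPT.DEPT_ID
            AND DD.DD_DATE >= '2012-01-01' AND DD.DD_DATE <= '2012-01-31'
      LEFT JOIN TBL_USERS TU
             ON TU.EMP_ID = DD.EMP_ID AND TU.EMP_GROUP != 2
      GROUP BY DPT.DEPARTMENT_NAME
      ORDER BY DPT.DEPARTMENT_NAME;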

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrated on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a wide-spread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular data stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbs. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a of number network packets (according to the packet size settings) – and will travel over the network. 
On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, allegorizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have converted to packages (depending on the network size package settings) and will have to reach the application server. There will also be waits, however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is captures the physical flow characteristics of the query between the client(Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics. Number of server roundtrips–a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by ‘GO’ command, then there will be three different roundtrips. TDS Packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS Packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips. TDS packets received from server –is the TDS packets sent by the server to the client during the query execution. Bytes sent from client – is the volume of the data set to our SQL Server, measured in bytes; i.e. how big of a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a name of procedure + parameters, and this will minimize the network pressure. Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server. Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client. Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client. 
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time) Here is an illustration of the Client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However a long ‘client processing time’ does not necessarily imply the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully of the resource utilization in the client-server environment. From experience I can say that so far I have seen only cases when the application developers and the Database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow-developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface:Output queue length, Redirector:Network errors/sec, TCPv4: Segments retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example J ) and get into a conversation about the network settings. Have them explain to you how the network cards are setup – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings. 
As further reading I would highly recommend the Wait Stats series on this blog, and I would also recommend having the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database: SELECT t.TABLE_NAME, ISNULL(pk_ccu.COLUMN_NAME,'') PK, ISNULL(fk_ccu.COLUMN_NAME,'') FK FROM INFORMATION_SCHEMA.TABLES t LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc ON pk_tc.TABLE_NAME = t.TABLE_NAME AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY' LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc ON fk_tc.TABLE_NAME = t.TABLE_NAME AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY' LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME Usually this runs in a couple seconds, but on one server running SQL Server 2000, it is taking over four minutes to run. I ran it with the execution plan enabled, and the results are huge, but this part caught my eye (it won't let me post an image): http://img35.imageshack.us/i/plank.png/ I then updated the statistics on all of the tables that were mentioned in the execution plan: update statistics sysobjects update statistics syscolumns update statistics systypes update statistics master..spt_values update statistics sysreferences But that didn't help. The index tuning wizard doesn't help either, because it doesn't let me select system tables. There is nothing else running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?

    Read the article

  • Cannot delete .Trash-503 directory, returns a $RECYCLE.BIN.trashinfo: Input/output error

    - by Parto
    I cannot delete .Trash-503 folder via GUI or terminal, it returns a $RECYCLE.BIN.trashinfo: Input/output error Not even sudo rm -r or even a simple ls works in that trash directory. Check terminal output below: subroot@subroot:~$ cd /media/xxxxx/ subroot@subroot:/media/xxxxx$ rm .Trash-503/ rm: cannot remove `.Trash-503/': Is a directory subroot@subroot:/media/xxxxx$ rm -r .Trash-503/ rm: cannot remove `.Trash-503/info/$RECYCLE.BIN.trashinfo': Input/output error rm: cannot remove `.Trash-503/info/found.000.trashinfo': Input/output error rm: cannot remove `.Trash-503/info': Directory not empty subroot@subroot:/media/xxxxx$ sudo rm -r .Trash-503/ [sudo] password for subroot: rm: cannot remove `.Trash-503/info/$RECYCLE.BIN.trashinfo': Input/output error rm: cannot remove `.Trash-503/info/found.000.trashinfo': Input/output error subroot@subroot:/media/xxxxx$ cd .Trash-503/ subroot@subroot:/media/xxxxx/.Trash-503$ ls info subroot@subroot:/media/xxxxx/.Trash-503$ cd info/ subroot@subroot:/media/xxxxx/.Trash-503/info$ ls ls: cannot access $RECYCLE.BIN.trashinfo: Input/output error ls: cannot access found.000.trashinfo: Input/output error found.000.trashinfo $RECYCLE.BIN.trashinfo subroot@subroot:/media/xxxxx/.Trash-503/info$ What's going on here and how can I delete this folder? EDIT I tried checking and repairing the partition using gparted only to get this error message: ERROR: Filesystem check failed! ERROR: 264 clusters are referenced multiple times. NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE! The usage of the /f parameter is very IMPORTANT! No modification was and will be made to NTFS by this software until it gets repaired. I don't have windows installed, how can I run chkdsk /f from ubuntu?

    Read the article

  • Ubuntu 12.04 LTS , Dell d410 output to External Monitor/TV with docking

    - by Gck
    I am not able to see the video output on my external monitor/TV after installing 12.04 LTS. I had the same issue with the 11.x versions also. This issue does not happen on 10.04 LTS. This is my setup: Dell Latitude D410 with Intel Mobile 915GM/910GML express controller, using the Dell docking station, connecting to my 42-inch TV with a DVI (from docking) to HDMI cable (on TV). If I close the lid, sometimes I am able to see the video output on the TV, but the resolution makes the fonts so small I cannot even read them, and when I click the Displays option to change the resolution, the output goes off completely and is shown on the laptop screen instead. One time I got the output on both screens, but I am not able to replicate that scenario even after trying the Fn+F8 key option on my laptop. Earlier, when I ran version 11.x of Ubuntu, I was able to get the output on both screens (basically mirrored output) with a large font resolution on my TV using the monitor test tool as mentioned here: http://www.bigfatostrich.com/2011/05/solved-ubuntu-11-04-broken-external-display/ The problem with this approach is that I have to do it every time I restart the machine, and moreover the video output is present on both screens (mirrored), which is of no use. As I stated before, this does not happen with 10.04 LTS. With 10.04 LTS my laptop screen is off automatically and the output is seen on the TV/external display. Can anyone share any options that I can try? How can we achieve the behavior of 10.04 LTS with respect to the external display on 12.04 LTS? Thank you very much for your help in advance.

    Read the article

  • The overlooked OUTPUT clause

    - by steveh99999
    I often find myself applying ad-hoc data updates to production systems – usually running scripts written by other people. One of my favourite features of SQL syntax is the OUTPUT clause – I find this is rarely used, and I often wonder if this is due to a lack of awareness of this feature. The OUTPUT clause was added to SQL Server in the SQL 2005 release – so it has been around for quite a while now, yet I often see scripts like this…
SELECT somevalue FROM sometable WHERE keyval = XXX
UPDATE sometable SET somevalue = newvalue WHERE keyval = XXX
-- now check the update has worked…
SELECT somevalue FROM sometable WHERE keyval = XXX
This can be rewritten to achieve the same end result using the OUTPUT clause:
UPDATE sometable
SET somevalue = newvalue
OUTPUT deleted.somevalue AS 'old value', inserted.somevalue AS 'new value'
WHERE keyval = XXX
The UPDATE statement with the OUTPUT clause also requires less IO – i.e. I've replaced three SQL statements with one, using only a third of the IO. If you are not aware of the power of the OUTPUT clause, I recommend you look it up in Books Online. And finally, here's an example of the output produced using the Northwind database…
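As a small, hedged extension of the example above (sometable, somevalue and keyval are the same placeholder names, so adapt them to your schema), the OUTPUT clause can also write the before-and-after values INTO a table variable, which is handy when you want to keep the evidence of an ad-hoc change for later review:

-- Sketch only: placeholder names carried over from the example above
DECLARE @Audit TABLE (KeyVal int, OldValue int, NewValue int);

UPDATE sometable
SET somevalue = newvalue
OUTPUT inserted.keyval, deleted.somevalue, inserted.somevalue
       INTO @Audit (KeyVal, OldValue, NewValue)
WHERE keyval = XXX;

-- The captured rows survive after the UPDATE completes:
SELECT KeyVal, OldValue, NewValue FROM @Audit;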

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.   When a memory allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example the sort operator that executes in parallel divides the memory across all tasks assuming even distribution of rows. Common memory allocating queries are those that perform Sort or do Hash Match operations like Hash Join, Hash Aggregation or Hash Union.   In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all the Zip codes, or mainly concentrated in the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products the hot selling items?   One of my customers tested the below example on a 24-core server with various MAXDOP settings and here are the results:
MAXDOP 1: CPU time = 1185 ms, elapsed time = 1188 ms
MAXDOP 4: CPU time = 1981 ms, elapsed time = 1568 ms
MAXDOP 8: CPU time = 1918 ms, elapsed time = 1619 ms
MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
MAXDOP 0: CPU time = 2809 ms, elapsed time = 2721 ms – all 24 cores
In the above test, the elapsed time of the parallel query was always higher than that of the serial query. Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].   Well, you get the point; let's see an example.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.   Let's update the Employees table with 49 out of 50 employees located in Zip code 2001. update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1 update Employees set Zip = 2001 where EmployeeID % 50 != 1 go update statistics Employees with fullscan go   Let's create the temporary table #FireDrill with all possible Zip codes. drop table #FireDrill go create table #FireDrill (Zip int primary key) insert into #FireDrill select distinct Zip from Employees update statistics #FireDrill with fullscan go  Let's execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --First serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 1) go The query took 1011 ms to complete.   The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624.  No Sort Warnings in SQL Server Profiler.  Now let's execute the query in parallel with MAXDOP 0.
--Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 0) go The query took 1912 ms to complete.  The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624.  The estimated number of rows for the serial and parallel plans is the same. The parallel plan has slightly more memory granted due to additional overhead. The sort properties show the rows are unevenly distributed over the 4 threads.   Sort Warnings in SQL Server Profiler.   Intermediate Summary: The reason for the higher duration with the parallel plan was the sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let's update the Employees table and distribute employees evenly across all Zip codes.   update Employees set Zip = EmployeeID / 400 + 1 go update statistics Employees with fullscan go  Let's execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 1) go   The query took 751 ms to complete.  The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707.  No Sort Warnings in SQL Server Profiler.   Now let's execute the query in parallel with MAXDOP 0. --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 0) go The query took 661 ms to complete.  The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707.  The sort properties show the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.    Intermediate Summary: When employees were distributed unevenly, concentrated in one Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.     Some of you might conclude from the above execution times that the parallel query is not much faster even when there is no spill. Below you can see that when we are joining a limited number of Zip codes, the parallel query will be faster since it can use Bitmap Filtering.   Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.
update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1 update Employees set Zip = 2001 where EmployeeID % 50 != 1 go update statistics Employees with fullscan go  Let's create the temporary table #FireDrill with a limited set of Zip codes. drop table #FireDrill go create table #FireDrill (Zip int primary key) insert into #FireDrill select distinct Zip from Employees where Zip between 1800 and 2001 update statistics #FireDrill with fullscan go  Let's execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 1) go The query took 989 ms to complete.  The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.  Now let's execute the query in parallel with MAXDOP 0. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 0) go The query took 1799 ms to complete.  The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594.  Sort Warnings in SQL Server Profiler.    The estimated number of rows for the serial and parallel plans is the same. The parallel plan has slightly more memory granted due to additional overhead.  Intermediate Summary: The reason for the higher duration with the parallel plan, even with a limited number of Zip codes, was the sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.   Now let's update the Employees table and distribute employees evenly across all Zip codes. update Employees set Zip = EmployeeID / 400 + 1 go update statistics Employees with fullscan go Let's execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 1) go The query took 250 ms to complete.  The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8.  No Sort Warnings in SQL Server Profiler.  Now let's execute the query in parallel with MAXDOP 0.  --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 0) go The query took 85 ms to complete.  The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707.  No Sort Warnings in SQL Server Profiler.
Here you see that the parallel query is much faster than the serial query, since SQL Server uses Bitmap Filtering to eliminate rows before the hash join.   Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.   Summary: One has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel.   I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.   Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.   Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
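If uneven distribution keeps causing parallel sort spills in your own workload, one hedged mitigation (not taken from the article above, so test it on a non-production system first) is to cap the degree of parallelism, either per query with the MAXDOP hint used throughout the examples, or instance-wide via sp_configure:

-- Instance-wide cap on parallelism; 0 means SQL Server may use all available schedulers.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Per-query alternative, as in the examples above:
-- select ... option (maxdop 1)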

    Read the article

  • MySQL Query Select using sub-select takes too long

    - by True Soft
    I noticed something strange while executing a select from 2 tables: SELECT * FROM table_1 WHERE id IN ( SELECT id_element FROM table_2 WHERE column_2=3103); This query took approximately 242 seconds. But when I executed the subquery SELECT id_element FROM table_2 WHERE column_2=3103 on its own, it took less than 0.002s (and returned 2 rows). Then, when I did SELECT * FROM table_1 WHERE id IN (/* prev.result */) it was the same: 0.002s. I was wondering why MySQL runs the first query like that, taking much more time than the last 2 queries run separately? Is this an optimal way of selecting something based on the results of a sub-query? Other details: table_1 has approx. 9000 rows, and table_2 has 90000 rows. After I added an index on column_2 from table_2, the first query took 0.15s.
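A hedged note, not part of the original question: older MySQL versions (before the semi-join optimizations introduced in 5.6) often execute IN (SELECT ...) as a correlated subquery that is re-evaluated for every row of the outer table, which matches the behaviour described above. A common workaround is to rewrite the query as a join; here is a sketch using the table and column names from the question:

-- Equivalent join form of the IN subquery
SELECT DISTINCT t1.*
FROM table_1 t1
INNER JOIN table_2 t2 ON t2.id_element = t1.id
WHERE t2.column_2 = 3103;

The DISTINCT guards against duplicate rows in case several table_2 rows reference the same table_1 id, mirroring the de-duplicating behaviour of IN.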

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >