Search Results

Search found 18695 results on 748 pages for 'query manipulation'.

Page 9/748 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • MySQL query killing my server

    - by Webnet
    Looking at this query, there has to be something bogging it down that I'm not noticing. I ran it for 7 minutes and it only updated 2 rows.

        // set product count for makes
        $tru->query->run(array(
            'name' => 'get-make-list',
            'sql' => 'SELECT id, name FROM vehicle_make',
            'connection' => 'core'
        ));
        while ($tempMake = $tru->query->getArray('get-make-list')) {
            $tru->query->run(array(
                'name' => 'update-product-count',
                'sql' => 'UPDATE vehicle_make SET product_count = (
                              SELECT COUNT(product_id)
                              FROM taxonomy_master
                              WHERE v_id IN (
                                  SELECT id FROM vehicle_catalog WHERE make_id = '.$tempMake['id'].'
                              )
                          ) WHERE id = '.$tempMake['id'],
                'connection' => 'core'
            ));
        }

    I'm sure this query can be optimized to perform better, but I can't think of how to do it. Table sizes: vehicle_make = 45 rows, taxonomy_master = 11,223 rows, vehicle_catalog = 5,108 rows. All tables have appropriate indexes.
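
    One way to avoid the 45 per-make round trips is to fold the loop into a single multi-table UPDATE. The sketch below is an assumption built only from the table and column names given in the question (vehicle_make, vehicle_catalog.make_id, taxonomy_master.v_id) and is untested against the real schema:

        -- Count products per make once, then apply the counts in one statement.
        UPDATE vehicle_make vm
        LEFT JOIN (
            SELECT vc.make_id, COUNT(tm.product_id) AS cnt
            FROM vehicle_catalog vc
            JOIN taxonomy_master tm ON tm.v_id = vc.id
            GROUP BY vc.make_id
        ) x ON x.make_id = vm.id
        SET vm.product_count = COALESCE(x.cnt, 0);

    The LEFT JOIN keeps makes with no products, zeroing their count instead of skipping them, and the JOIN replaces the nested IN subqueries the per-row version repeats 45 times.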

    Read the article

  • PDO update query with conditional?

    - by dmontain
    I have a PDO MySQL query that updates 3 fields:

        $update = $mypdo->prepare("UPDATE tablename
                                   SET field1=:field1, field2=:field2, field3=:field3
                                   WHERE key=:key");

    But I want field3 to be updated only when $update3 = true; (meaning that the update of field3 is controlled by a conditional statement). Is this possible to accomplish with a single query? I could do it with 2 queries, where I update field1 and field2, then check the boolean and update field3 if needed in a separate query:

        // run this query to update only fields 1 and 2
        $update_part1 = $mypdo->prepare("UPDATE tablename
                                         SET field1=:field1, field2=:field2
                                         WHERE key=:key");

        // if field3 should be updated, run a separate query to update it separately
        if ($update3) {
            $update_part2 = $mypdo->prepare("UPDATE tablename
                                             SET field3=:field3
                                             WHERE key=:key");
        }

    But hopefully there is a way to accomplish this in 1 query?
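
    One single-statement option, sketched under the assumption that this is MySQL and that the PHP boolean is bound as an extra 0/1 parameter (here called :update3, a name not in the original question): let IF() keep the old value when the flag is off.

        -- :update3 is bound to 1 or 0 from the PHP boolean (an assumption).
        UPDATE tablename
        SET field1 = :field1,
            field2 = :field2,
            field3 = IF(:update3 = 1, :field3, field3)
        WHERE `key` = :key;

    Each named placeholder is used exactly once, so this also stays compatible with PDO's native (non-emulated) prepares.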

    Read the article

  • make reference to an empty query in flex

    - by Adam
    A bit of a dumb question, I'm sure. I'm trying to allow the user to set an item as the default. I've got a function that runs a query to first find the current default item, then runs a second query that unsets that current default, and finally a third query that sets the newly selected item as the default. This seems to work fine when a default item has previously been selected, but when I try to set the default item initially I get the good old "Cannot access a property or method of a null object reference." error. This is because the first query returns no items, I'm sure. So I need an if statement that skips the second query and goes straight to the third when the first query returns nothing. The only problem is I can't make a reference to a null object. How do I go about writing this statement? Thanks

    Read the article

  • Update query with conditional?

    - by dmontain
    I'm not sure if this is possible. If not, let me know. I have a PDO MySQL query that updates 3 fields:

        $update = $mypdo->prepare("UPDATE tablename
                                   SET field1=:field1, field2=:field2, field3=:field3
                                   WHERE key=:key");

    But I want field3 to be updated only when $update3 = true; (meaning that the update of field3 is controlled by a conditional statement). Is this possible to accomplish with a single query? I could do it with 2 queries, where I update field1 and field2, then check the boolean and update field3 if needed in a separate query:

        // run this query to update only fields 1 and 2
        $update_part1 = $mypdo->prepare("UPDATE tablename
                                         SET field1=:field1, field2=:field2
                                         WHERE key=:key");

        // if field3 should be updated, run a separate query to update it separately
        if ($update3) {
            $update_part2 = $mypdo->prepare("UPDATE tablename
                                             SET field3=:field3
                                             WHERE key=:key");
        }

    But hopefully there is a way to accomplish this in 1 query?

    Read the article

  • Query executing problem

    - by srini-r85
    Hi, I tried to execute the following query in a PHP script:

        $db_selected = mysql_select_db("lumiinc1_sndemo1", $con);
        if ($db_selected) {
            echo "database connected";
        } else {
            die("Can't use db: " . mysql_error());
        }

        $sql = "INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `id`)
                SELECT `name`, `street`, `latitude`, `longitude`, `lid`
                FROM `location`
                WHERE NOT EXISTS (
                    SELECT * FROM `markers` WHERE `location`.`lid` = `markers`.`id`
                )";
        $result = mysql_query($sql);
        if ($result) {
            echo "Query executed OK";
        } else {
            die("Invalid query: " . mysql_error());
        }

    The script does not show any error, and the query executes, but I don't get the expected result. When I run the same query in phpMyAdmin, I do get the expected result. I don't know the cause of this problem. Can anyone spot it? Thanks
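
    A hedged first diagnostic, using only the tables named in the question: run the SELECT half of the INSERT on its own from the same PHP connection. If it returns zero rows there but rows in phpMyAdmin, the script and phpMyAdmin are likely pointed at different databases, or the markers rows already exist on the script's side.

        -- The SELECT half alone: rows that *would* be inserted.
        SELECT l.`name`, l.`street`, l.`latitude`, l.`longitude`, l.`lid`
        FROM `location` l
        WHERE NOT EXISTS (
            SELECT 1 FROM `markers` m WHERE m.`id` = l.`lid`
        );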

    Read the article

  • Rails 3 query in multiple date ranges

    - by NeoRiddle
    Suppose we have some date ranges, for example:

        ranges = [
          (12.months.ago)..(8.months.ago),
          (7.months.ago)..(6.months.ago),
          (5.months.ago)..(4.months.ago),
          (3.months.ago)..(2.months.ago),
          (1.month.ago)..(15.days.ago)
        ]

    and a Post model with a :created_at attribute. I want to find posts whose created_at value falls in any of these ranges, so the goal is to create a query like:

        SELECT * FROM posts
        WHERE created_at BETWEEN '2011-04-06' AND '2011-08-06'
           OR created_at BETWEEN '2011-09-06' AND '2011-10-06'
           OR created_at BETWEEN '2011-11-06' AND '2011-12-06'
           OR created_at BETWEEN '2012-01-06' AND '2012-02-06'
           OR created_at BETWEEN '2012-02-06' AND '2012-03-23';

    If you have only one range, like this:

        range = (12.months.ago)..(8.months.ago)

    we can do this query:

        Post.where(:created_at => range)

    and the query will be:

        SELECT * FROM posts WHERE created_at BETWEEN '2011-04-06' AND '2011-08-06';

    Is there a way to make this query using a notation like Post.where(:created_at => range)? And what is the correct way to build this query? Thank you

    Read the article

  • mysql query to dynamically convert row data to columns

    - by Anirudh Goel
    I am working on a pivot table query. The schema is as follows:

        Sno, Name, District

    The same name may appear in many districts; take the sample data, for example:

        1  Mike    CA
        2  Mike    CA
        3  Proctor JB
        4  Luke    MN
        5  Luke    MN
        6  Mike    CA
        7  Mike    LP
        8  Proctor MN
        9  Proctor JB
        10 Proctor MN
        11 Luke    MN

    As you see, I have a set of 4 distinct districts (CA, JB, MN, LP). Now I want to generate the pivot table for it by mapping the names against the districts:

        Name    CA JB MN LP
        Mike    3  0  0  1
        Proctor 0  2  2  0
        Luke    0  0  3  0

    I wrote the following query for this:

        SELECT name,
               SUM(IF(District = 'CA', 1, 0)) AS 'CA',
               SUM(IF(District = 'JB', 1, 0)) AS 'JB',
               SUM(IF(District = 'MN', 1, 0)) AS 'MN',
               SUM(IF(District = 'LP', 1, 0)) AS 'LP'
        FROM district_details
        GROUP BY name

    However, there is a possibility that the districts may increase, in which case I would have to manually edit the query and add the new district to it. I want to know if there is a query which can dynamically take the names of the distinct districts and run the above query. I know I can do it with a procedure, generating the script on the fly; is there any other method? I ask because the output of "select distinct(districts) from district_details" returns a single column with a district name on each row, which I would like transposed into columns.
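
    A hedged sketch of the dynamic version, using only the question's table and column names: build the pivot column list with GROUP_CONCAT, then run the assembled text as a prepared statement. SUM over a boolean comparison replaces the SUM(IF(...)) form; the statement is still generated on the fly, just inside MySQL rather than in a stored procedure.

        -- Raise the limit in case there are many districts.
        SET SESSION group_concat_max_len = 8192;

        -- Build one "SUM(District = 'X') AS `X`" term per distinct district.
        SELECT GROUP_CONCAT(DISTINCT
                 CONCAT('SUM(District = ''', District, ''') AS `', District, '`'))
        INTO @cols
        FROM district_details;

        SET @sql = CONCAT('SELECT Name, ', @cols,
                          ' FROM district_details GROUP BY Name');
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;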

    Read the article

  • An abundance of LINQ queries and expressions using both the query and method syntax.

    - by nikolaosk
    In this post I will be writing LINQ queries against an array of strings and an array of integers. Moreover, I will be using LINQ to query a SQL Server database. I can use LINQ against arrays since arrays of strings/integers implement the IEnumerable interface. I thought it would be a good idea to use both the method syntax and the query syntax. There are other places on the net where you can find examples of LINQ queries, but I decided to create a big post using as many LINQ examples as possible. We...(read more)

    Read the article

  • Several Small, Specific, MySQL Query Cache Questions

    - by Robbie
    I've looked all over the web and through the questions asked here about MySQL caching, and most seem very non-specific about a couple of questions that I have regarding performance and MySQL query caching. Specifically, I want answers to these questions; assume for all of them that I have the query cache enabled and it is of type 2, or "DEMAND":

    1. Is the query cache per table, per database, or per server? Meaning, if I have the cache size set to X and have T tables and D databases, will I be caching TX, DX, or X amount of data?

    2. If I have table T1, which I regularly use the SQL_CACHE hint on for SELECT queries, and table T2, which I never do, will a SELECT query against T2 check the cache before executing? (Note: I don't want to use SQL_NO_CACHE for all T2 queries.)

    3. Assume the same situation as in question 2. If I alter (INSERT, DELETE) table T2, will any processing be done on the cache?

    4. For answers to 2 and 3, is this processing time negligible if T2 is constantly being altered and is the target of a majority of my SELECT queries?
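
    For concreteness, a small sketch of the DEMAND setup the questions assume; the behavior notes are from the MySQL query cache documentation as I recall it, so verify against your server version:

        -- With query_cache_type = 2 ("DEMAND"), only statements that opt in are cached.
        SELECT SQL_CACHE name FROM T1 WHERE id = 42;  -- eligible: stored in / served from the cache
        SELECT name FROM T2 WHERE id = 42;            -- not hinted: the cache is bypassed, no SQL_NO_CACHE needed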

    Read the article

  • How can I identify unknown query string fragments that are coming to my site?

    - by Jon
    In the Google Analytics content overview for a site that I work on, the home page is getting many pageviews with some unfamiliar query string fragments. Example: /?jkId=1234567890abcdef1234567890abcdef&jt=1&jadid=1234567890&js=1&jk=key words&jsid=12345&jmt=1 (potentially identifiable IDs have been changed) It clearly looks like some kind of ad tracking info, but no one who works on the site knows where it comes from, and I haven't been able to find any useful information by searching. Is there some listing of common query string keys available anywhere? Alternatively, does anyone happen to know where these keys (jkId, jt, jadid, js, jk, jsid and jmt) might come from?

    Read the article

  • insert array to mysql db function

    - by ganjan
    Hi. I have an array where the keys represent the columns in my database. Now I want a function that builds a MySQL UPDATE query from it. Something like:

        $db['money']    = $money_input + $money_db;
        $db['location'] = $location;

        // build one "`column` = 'value'" pair per array entry, then join them
        $sets = array();
        foreach ($db as $column => $value) {
            $sets[] = "`" . $column . "` = '" . mysql_real_escape_string($value) . "'";
        }
        $query = "UPDATE tbl_user SET " . implode(', ', $sets)
               . " WHERE username = '" . mysql_real_escape_string($username) . "'";

    Read the article

  • MySQL query optimization - distinct, order by and limit

    - by Manuel Darveau
    I am trying to optimize the following query:

        select distinct this_.id as y0_
        from Rental this_
        left outer join RentalRequest rentalrequ1_ on this_.id=rentalrequ1_.rental_id
        left outer join RentalSegment rentalsegm2_ on rentalrequ1_.id=rentalsegm2_.rentalRequest_id
        where this_.DTYPE='B'
          and this_.id<=1848978
          and this_.billingStatus=1
          and rentalsegm2_.endDate between 1273631699529 and 1274927699529
        order by rentalsegm2_.id asc
        limit 0, 100;

    This query is run multiple times in a row for paginated processing of records (with a different limit each time). It returns the ids I need for the processing. My problem is that this query takes more than 3 seconds. I have about 2 million rows in each of the three tables. Explain gives:

        +----+-------------+--------------+--------+------------------------------------------------------+---------------+---------+---------------------------------------------+--------+-----------------------------------------------+
        | id | select_type | table        | type   | possible_keys                                        | key           | key_len | ref                                         | rows   | Extra                                         |
        +----+-------------+--------------+--------+------------------------------------------------------+---------------+---------+---------------------------------------------+--------+-----------------------------------------------+
        |  1 | SIMPLE      | rentalsegm2_ | range  | index_endDate,fk_rentalRequest_id_BikeRentalSegment  | index_endDate | 9       | NULL                                        | 449904 | Using where; Using temporary; Using filesort  |
        |  1 | SIMPLE      | rentalrequ1_ | eq_ref | PRIMARY,fk_rental_id_BikeRentalRequest               | PRIMARY       | 8       | solscsm_main.rentalsegm2_.rentalRequest_id  | 1      | Using where                                   |
        |  1 | SIMPLE      | this_        | eq_ref | PRIMARY,index_billingStatus                          | PRIMARY       | 8       | solscsm_main.rentalrequ1_.rental_id         | 1      | Using where                                   |
        +----+-------------+--------------+--------+------------------------------------------------------+---------------+---------+---------------------------------------------+--------+-----------------------------------------------+

    I tried removing the distinct and the query ran three times faster. Explain without the distinct gives:

        +----+-------------+--------------+--------+------------------------------------------------------+---------------+---------+---------------------------------------------+--------+------------------------------+
        | id | select_type | table        | type   | possible_keys                                        | key           | key_len | ref                                         | rows   | Extra                        |
        +----+-------------+--------------+--------+------------------------------------------------------+---------------+---------+---------------------------------------------+--------+------------------------------+
        |  1 | SIMPLE      | rentalsegm2_ | range  | index_endDate,fk_rentalRequest_id_BikeRentalSegment  | index_endDate | 9       | NULL                                        | 451972 | Using where; Using filesort  |
        |  1 | SIMPLE      | rentalrequ1_ | eq_ref | PRIMARY,fk_rental_id_BikeRentalRequest               | PRIMARY       | 8       | solscsm_main.rentalsegm2_.rentalRequest_id  | 1      | Using where                  |
        |  1 | SIMPLE      | this_        | eq_ref | PRIMARY,index_billingStatus                          | PRIMARY       | 8       | solscsm_main.rentalrequ1_.rental_id         | 1      | Using where                  |
        +----+-------------+--------------+--------+------------------------------------------------------+---------------+---------+---------------------------------------------+--------+------------------------------+

    As you can see, "Using temporary" is added when using distinct. I already have an index on all fields used in the where clause. Is there anything I can do to optimize this query? Thank you very much!
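
    One hedged observation from the query itself: the WHERE condition on rentalsegm2_.endDate discards exactly the NULL-extended rows the LEFT JOINs would add, so the joins can be written as INNER JOINs with the same result, which gives the optimizer more join-order freedom. A sketch of that rewrite, semantics otherwise unchanged:

        SELECT DISTINCT this_.id AS y0_
        FROM RentalSegment rentalsegm2_
        INNER JOIN RentalRequest rentalrequ1_
                ON rentalrequ1_.id = rentalsegm2_.rentalRequest_id
        INNER JOIN Rental this_
                ON this_.id = rentalrequ1_.rental_id
        WHERE this_.DTYPE = 'B'
          AND this_.id <= 1848978
          AND this_.billingStatus = 1
          AND rentalsegm2_.endDate BETWEEN 1273631699529 AND 1274927699529
        ORDER BY rentalsegm2_.id ASC
        LIMIT 0, 100;

    Whether this helps depends on the optimizer's chosen plan, so compare EXPLAIN output before and after.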

    Read the article

  • Lucene Query Syntax

    - by Don
    Hi, I'm trying to use Lucene to query a domain that has the following structure:

        Student 1-------* Attendance *---------1 Course

    The data in the domain is summarised below:

        Course.name  Attendance.mandatory  Student.name
        -------------------------------------------------
        cooking      N                     Bob
        art          Y                     Bob

    If I execute the query "courseName:cooking AND mandatory:Y" it returns Bob, because Bob is attending the cooking course, and Bob is also attending a mandatory course. However, what I really want to query for is "students attending a mandatory cooking course", which in this case would return nobody. Is it possible to formulate this as a Lucene query? I'm actually using Compass, rather than Lucene directly, so I can use either CompassQueryBuilder or Lucene's query language. For the sake of completeness, the domain classes themselves are shown below. These classes are Grails domain classes, but I'm using the standard Compass annotations and Lucene query syntax.

        @Searchable
        class Student {
            @SearchableProperty(accessor = 'property')
            String name

            static hasMany = [attendances: Attendance]

            @SearchableId(accessor = 'property')
            Long id

            @SearchableComponent
            Set<Attendance> getAttendances() { return attendances }
        }

        @Searchable(root = false)
        class Attendance {
            static belongsTo = [student: Student, course: Course]

            @SearchableProperty(accessor = 'property')
            String mandatory = "Y"

            @SearchableId(accessor = 'property')
            Long id

            @SearchableComponent
            Course getCourse() { return course }
        }

        @Searchable(root = false)
        class Course {
            @SearchableProperty(accessor = 'property', name = "courseName")
            String name

            @SearchableId(accessor = 'property')
            Long id
        }

    Read the article

  • PHP: MySQL query duplicating update for no reason

    - by ThinkingInBits
    The code below is first the client code, then the class file. For some reason the deductTokens() method is being called twice, thus charging an account double. I've been programming all night, so I may just need a second pair of eyes:

        if ($action == 'place_order') {
            if ($_REQUEST['unlimited'] == 200) {
                $license = 'extended';
            } else {
                $license = 'standard';
            }
            if ($photograph->isValidPhotographSize($photograph_id, $_REQUEST['size_radio'])) {
                $token_cost = $photograph->getTokenCost($_REQUEST['size_radio'], $_REQUEST['unlimited']);
                $order = new ImageOrder($_SESSION['user']['id'], $_REQUEST['size_radio'], $license, $token_cost);
                $order->saveOrder();
                $order->deductTokens();
                header('location: account.php');
            } else {
                die("Please go back and select a valid photograph size");
            }
        }

        ###### CLASS CODE ######

        <?php
        include_once('database_classes.php');

        class Order {
            protected $account_id;
            protected $cost;
            protected $license;

            public function __construct($account_id, $license, $cost) {
                $this->account_id = $account_id;
                $this->cost = $cost;
                $this->license = $license;
            }
        }

        class ImageOrder extends Order {
            protected $size;

            public function __construct($account_id, $size, $license, $cost) {
                $this->size = $size;
                parent::__construct($account_id, $license, $cost);
            }

            public function saveOrder() {
                //$db = Connect::connect();
                //$account_id = $db->real_escape_string($this->account_id);
                //$size = $db->real_escape_string($this->size);
                //$license = $db->real_escape_string($this->license);
                //$cost = $db->real_escape_string($this->cost);
            }

            public function deductTokens() {
                $db = Connect::connect();
                $account_id = $db->real_escape_string($this->account_id);
                $cost = $db->real_escape_string($this->cost);
                $query = "UPDATE accounts SET tokens=tokens-$cost WHERE id=$account_id";
                $result = $db->query($query);
            }
        }
        ?>

    When I put die("$query"); directly after the query, it prints the proper statement, and when I run that query within MySQL it works perfectly.

    Read the article

  • Stopping the manipulation of variables used for data collection?

    - by Ruinous
    I am working on a project in Java, and I am hoping to collect statistics from the client. A possible problem that I fear will occur is the manipulation of the variables used for collection, which would lead to illegitimate statistics. Is there any way to prevent the manipulation of these variables, or is manipulation always possible? For example: I want to log the actions made per hour by the client. The variable acting as a counter for the number of actions performed is manipulated, and a much larger amount is added to the counter. This data is then uploaded to the server (of course using a multi-tier architecture to prevent even more possible problems) and considered 'legit.' Is there any way to prevent this?

    Read the article

  • c# creating a database query METHOD

    - by Sinaesthetic
    I'm not sure if I'm deluded, but what I would like to do is create a method that returns the results of a query, so that I can reuse the connection code. As I understand it, a query returns an object, but how do I pass that object back? I want to send the query into the method as a string argument and have it return the results so that I can use them. Here's what I have, which was a stab in the dark; it obviously doesn't work. This example is me trying to populate a listbox with the results of a query; the sheet name is Employees and the field/column is name. The error I get is "Complex DataBinding accepts as a data source either an IList or an IListSource.". Any ideas?

        public Form1()
        {
            InitializeComponent();
            openFileDialog1.ShowDialog();
            openedFile = openFileDialog1.FileName;
            lbxEmployeeNames.DataSource = Query("Select [name] FROM [Employees$]");
        }

        public object Query(string sql)
        {
            System.Data.OleDb.OleDbConnection MyConnection;
            System.Data.OleDb.OleDbCommand myCommand = new System.Data.OleDb.OleDbCommand();
            string connectionPath;

            // build connection string
            connectionPath = "provider=Microsoft.Jet.OLEDB.4.0;Data Source='" + openedFile + "';Extended Properties=Excel 8.0;";
            MyConnection = new System.Data.OleDb.OleDbConnection(connectionPath);
            MyConnection.Open();
            myCommand.Connection = MyConnection;
            myCommand.CommandText = sql;
            return myCommand.ExecuteNonQuery();
        }

    Read the article

  • Simple UPDATE query with (sometimes) long query times

    - by Eric
    I run a dedicated MySQL server (2 cores, 16 GB RAM) serving 100-200 requests per second. It is getting sluggish during peak traffic and I am having a hard time optimizing the server, so I'm looking for ideas now that I have done lots of InnoDB fine-tuning with the "TUNING PRIMER". The query that now generates the most slow-query entries is the following (result from mysqldumpslow):

        Count: 433  Time=3.40s (1470s)  Lock=0.00s (0s)  Rows=0.0 (0),
        UPDATE user_sessions SET tid='S' WHERE idsession='S'

    I am very surprised to see so many long queries for such a simple query with no locking. FYI, the table is InnoDB and has 14000 rows. It contains all active sessions on the site, with approximately 10 UPDATE and SELECT hits per second. Here is its structure:

        CREATE TABLE `user_sessions` (
          `personid` mediumint(9) NOT NULL DEFAULT '0',
          `ip` varchar(18) COLLATE utf8_unicode_ci NOT NULL,
          `idsession` varchar(32) COLLATE utf8_unicode_ci NOT NULL,
          `datum` date NOT NULL DEFAULT '0000-00-00',
          `tid` time NOT NULL DEFAULT '00:00:00',
          `status` tinyint(4) NOT NULL DEFAULT '0',
          KEY `personid` (`personid`),
          KEY `idsession` (`idsession`),
          KEY `datum` (`datum`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    Any ideas?
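
    One detail worth noting from the CREATE TABLE: the table has no PRIMARY KEY, so InnoDB clusters rows on a hidden internal key. A hedged idea, valid only under the assumption that idsession is unique per row, is to make it the clustered key so each UPDATE locates its row directly and the now-redundant secondary index goes away:

        -- Assumes idsession values are unique; verify with a duplicate check first.
        ALTER TABLE user_sessions
            ADD PRIMARY KEY (idsession),
            DROP INDEX idsession;

    If updates still queue up at peak after that, the waits are more likely row-lock or log-flush contention than lookup cost.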

    Read the article

  • MS SQL Query Sum of subquery

    - by San
    Hello, I need help with the output of the following query:

        SELECT ARG_CONSUMER,
               CAST(ARG_TOTALAMT AS float) / 100 AS 'Total',
               (SELECT SUM(CAST(DAMT AS float)) / 100
                FROM DEBT
                WHERE DDATE >= ARG.ARG_ORIGDATE
                  AND DDATE <= ARG.ARG_LASTPAYDATE
                  AND DTYPE IN ('CSH', 'CNTP', 'DDR', 'NBP')
                  AND DCONSUMER = ARG.ARG_CONSUMER) AS 'Paid'
        FROM ARGMASTER ARG
        WHERE ARG_STATUS = '1'

    The current output is a list of all records. But what I want to achieve here is: the count of ARG consumers, the total of ARG_TOTALAMT, the total of the subquery's 'Paid' column, and the difference between the Paid and Total amounts. I am able to achieve the first two, i.e. the count of consumers and the total of ARG_TOTALAMT, but I am confused about how to sum the subquery, i.e.:

        SUM((SELECT SUM(CAST(DAMT AS float)) / 100
             FROM DEBT
             WHERE DDATE >= ARG.ARG_ORIGDATE
               AND DDATE <= ARG.ARG_LASTPAYDATE
               AND DTYPE IN ('CSH', 'CNTP', 'DDR', 'NBP')
               AND DCONSUMER = ARG.ARG_CONSUMER)) AS 'Paid'

    Please advise.
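
    A hedged sketch built from the question's own query: wrap the row-level SELECT in a derived table, then aggregate once over it. Whether the count should be DISTINCT, and which direction the difference should run, are guesses flagged in the comments.

        SELECT COUNT(DISTINCT t.ARG_CONSUMER)  AS Consumers,   -- use COUNT(*) if a row count is wanted
               SUM(t.Total)                    AS TotalAmt,
               SUM(t.Paid)                     AS PaidAmt,
               SUM(t.Total) - SUM(t.Paid)      AS Difference   -- swap the operands if the other sign is intended
        FROM (
            SELECT ARG_CONSUMER,
                   CAST(ARG_TOTALAMT AS float) / 100 AS Total,
                   (SELECT SUM(CAST(DAMT AS float)) / 100
                    FROM DEBT
                    WHERE DDATE >= ARG.ARG_ORIGDATE
                      AND DDATE <= ARG.ARG_LASTPAYDATE
                      AND DTYPE IN ('CSH', 'CNTP', 'DDR', 'NBP')
                      AND DCONSUMER = ARG.ARG_CONSUMER) AS Paid
            FROM ARGMASTER ARG
            WHERE ARG_STATUS = '1'
        ) t;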

    Read the article

  • JPA : optimize EJB-QL query involving large many-to-many join table

    - by Fabien
    Hi all. I'm using Hibernate Entity Manager 3.4.0.GA with Spring 2.5.6 and MySQL 5.1. I have a use case where an entity called Artifact has a reflexive many-to-many relation with itself, and the join table is quite large (1 million lines). As a result, the HQL query performed by one of the methods in my DAO takes a long time. Any advice on how to optimize this and still use HQL? Or do I have no choice but to switch to a native SQL query that would perform a join between the table ARTIFACT and the join table ARTIFACT_DEPENDENCIES? Here is the problematic query performed in the DAO:

        @SuppressWarnings("unchecked")
        public List<Artifact> findDependentArtifacts(Artifact artifact) {
            Query query = em.createQuery("select a from Artifact a where :artifact in elements(a.dependencies)");
            query.setParameter("artifact", artifact);
            List<Artifact> list = query.getResultList();
            return list;
        }

    And the code for the Artifact entity:

        package com.acme.dependencytool.persistence.model;

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.CascadeType;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.JoinTable;
        import javax.persistence.ManyToMany;
        import javax.persistence.Table;
        import javax.persistence.UniqueConstraint;

        @Entity
        @Table(name = "ARTIFACT", uniqueConstraints = {@UniqueConstraint(columnNames = {"GROUP_ID", "ARTIFACT_ID", "VERSION"})})
        public class Artifact {

            @Id
            @GeneratedValue
            @Column(name = "ID")
            private Long id = null;

            @Column(name = "GROUP_ID", length = 255, nullable = false)
            private String groupId;

            @Column(name = "ARTIFACT_ID", length = 255, nullable = false)
            private String artifactId;

            @Column(name = "VERSION", length = 255, nullable = false)
            private String version;

            @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
            @JoinTable(
                name = "ARTIFACT_DEPENDENCIES",
                joinColumns = @JoinColumn(name = "ARTIFACT_ID", referencedColumnName = "ID"),
                inverseJoinColumns = @JoinColumn(name = "DEPENDENCY_ID", referencedColumnName = "ID")
            )
            private List<Artifact> dependencies = new ArrayList<Artifact>();

            public Long getId() { return id; }
            public void setId(Long id) { this.id = id; }
            public String getGroupId() { return groupId; }
            public void setGroupId(String groupId) { this.groupId = groupId; }
            public String getArtifactId() { return artifactId; }
            public void setArtifactId(String artifactId) { this.artifactId = artifactId; }
            public String getVersion() { return version; }
            public void setVersion(String version) { this.version = version; }
            public List<Artifact> getDependencies() { return dependencies; }
            public void setDependencies(List<Artifact> dependencies) { this.dependencies = dependencies; }
        }

    Thanks in advance.

    EDIT 1: The DDL is generated automatically by Hibernate EntityManager based on the JPA annotations in the Artifact entity. I have no explicit control over the automatically-generated join table, and the JPA annotations don't let me explicitly set an index on a column of a table that does not correspond to an actual Entity (in the JPA sense). So I guess the indexing of table ARTIFACT_DEPENDENCIES is left to the DB, MySQL in my case, which apparently uses a composite index based on both columns but doesn't index the column that is most relevant in my query (DEPENDENCY_ID).
        mysql> describe ARTIFACT_DEPENDENCIES;
        +---------------+------------+------+-----+---------+-------+
        | Field         | Type       | Null | Key | Default | Extra |
        +---------------+------------+------+-----+---------+-------+
        | ARTIFACT_ID   | bigint(20) | NO   | MUL | NULL    |       |
        | DEPENDENCY_ID | bigint(20) | NO   | MUL | NULL    |       |
        +---------------+------------+------+-----+---------+-------+

    EDIT 2: When turning on showSql in the Hibernate session, I see many occurrences of the same type of SQL query, as below:

        select dependenci0_.ARTIFACT_ID as ARTIFACT1_1_, dependenci0_.DEPENDENCY_ID as DEPENDENCY2_1_,
               artifact1_.ID as ID1_0_, artifact1_.ARTIFACT_ID as ARTIFACT2_1_0_,
               artifact1_.GROUP_ID as GROUP3_1_0_, artifact1_.VERSION as VERSION1_0_
        from ARTIFACT_DEPENDENCIES dependenci0_
        left outer join ARTIFACT artifact1_ on dependenci0_.DEPENDENCY_ID=artifact1_.ID
        where dependenci0_.ARTIFACT_ID=?

    Here's what EXPLAIN in MySQL says about this type of query:

        mysql> explain select dependenci0_.ARTIFACT_ID as ARTIFACT1_1_, dependenci0_.DEPENDENCY_ID as DEPENDENCY2_1_,
                              artifact1_.ID as ID1_0_, artifact1_.ARTIFACT_ID as ARTIFACT2_1_0_,
                              artifact1_.GROUP_ID as GROUP3_1_0_, artifact1_.VERSION as VERSION1_0_
                       from ARTIFACT_DEPENDENCIES dependenci0_
                       left outer join ARTIFACT artifact1_ on dependenci0_.DEPENDENCY_ID=artifact1_.ID
                       where dependenci0_.ARTIFACT_ID=1;
        +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+
        | id | select_type | table        | type   | possible_keys     | key               | key_len | ref                                         | rows | Extra |
        +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+
        |  1 | SIMPLE      | dependenci0_ | ref    | FKEA2DE763364D466 | FKEA2DE763364D466 | 8       | const                                       |  159 |       |
        |  1 | SIMPLE      | artifact1_   | eq_ref | PRIMARY           | PRIMARY           | 8       | dependencytooldb.dependenci0_.DEPENDENCY_ID |    1 |       |
        +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+

    EDIT 3: I tried setting the FetchType to LAZY in the JoinTable annotation, but I then get the following exception:

        Hibernate: select artifact0_.ID as ID1_, artifact0_.ARTIFACT_ID as ARTIFACT2_1_, artifact0_.GROUP_ID as GROUP3_1_, artifact0_.VERSION as VERSION1_ from ARTIFACT artifact0_ where artifact0_.GROUP_ID=? and artifact0_.ARTIFACT_ID=?
        51545 [btpool0-2] ERROR org.hibernate.LazyInitializationException - failed to lazily initialize a collection of role: com.acme.dependencytool.persistence.model.Artifact.dependencies, no session or session was closed
        org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.acme.dependencytool.persistence.model.Artifact.dependencies, no session or session was closed
            at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380)
            at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372)
            at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:119)
            at org.hibernate.collection.PersistentBag.size(PersistentBag.java:248)
            at com.acme.dependencytool.server.DependencyToolServiceImpl.createArtifactViewBean(DependencyToolServiceImpl.java:93)
            at com.acme.dependencytool.server.DependencyToolServiceImpl.createArtifactViewBean(DependencyToolServiceImpl.java:109)
            at com.acme.dependencytool.server.DependencyToolServiceImpl.search(DependencyToolServiceImpl.java:48)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:527)
            at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:166)
            at com.google.gwt.user.server.rpc.RemoteServiceServlet.doPost(RemoteServiceServlet.java:86)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.Server.handle(Server.java:324)
            at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505)
            at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843)
            at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:647)
            at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:205)
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:395)
            at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488)
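
    Since the question already contemplates a native query, here is a hedged sketch of both halves: the SQL that findDependentArtifacts maps to, and the index that EDIT 1 suggests is missing. Both assume the ARTIFACT/ARTIFACT_DEPENDENCIES schema exactly as generated above, and the index name is made up.

        -- Artifacts whose dependencies include a given artifact (? = that artifact's ID).
        SELECT a.*
        FROM ARTIFACT a
        JOIN ARTIFACT_DEPENDENCIES ad ON ad.ARTIFACT_ID = a.ID
        WHERE ad.DEPENDENCY_ID = ?;

        -- If DEPENDENCY_ID really has no usable index of its own (hypothetical name):
        CREATE INDEX IDX_ARTIFACT_DEP_DEPENDENCY ON ARTIFACT_DEPENDENCIES (DEPENDENCY_ID);

    The index can be created directly in MySQL even though JPA annotations offer no way to declare it on the generated join table.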

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a 'global' application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server, and since I had been thinking about writing it for quite some time, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular Data Stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let's dive into an example: let's say that we have a web server hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10 Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: "My data is coming very slow." Now, let's move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server – and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this did not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let's say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) and will travel over the network.
    On the other side, our SQL Server network card will receive the packets and pass them to the network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let's say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.

    Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements separated by the 'GO' command, there will be three different roundtrips.

    TDS packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. This is the number of packets sent from the client; if the request is large, it may need more buffers, and eventually might even need more server roundtrips.

    TDS packets received from server – the number of TDS packets sent by the server to the client during the query execution.

    Bytes sent from client – the volume of data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as the name of the procedure plus parameters, and this will minimize the network pressure.

    Bytes received from server – the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.

    Client processing time – the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client.

    Wait time on server replies – the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
    Total execution time – the sum of client processing time and wait time on server replies (the SQL Server internal processing time). Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large 'wait time on server replies' means the server took a long time to produce the very first row. This is typical of queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short 'wait time on server replies' means that the query was able to return the first row fast. A long 'client processing time' does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client; it can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party's world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronized way of utilizing resources between the client, server and client. Here is another example: think about a similar setup as above, but add another server to the game. Let's say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each, our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don't mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of 'the big picture'. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it:

    Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: 'why?'.

    Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec and so on.

    Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they 'teamed', what are the settings – full duplex and so on.

    Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to 'the big picture' and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
    As further reading I would still highly recommend the Wait Stats series on this blog; I would also recommend having the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database:

        SELECT t.TABLE_NAME,
               ISNULL(pk_ccu.COLUMN_NAME, '') PK,
               ISNULL(fk_ccu.COLUMN_NAME, '') FK
        FROM INFORMATION_SCHEMA.TABLES t
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc
               ON pk_tc.TABLE_NAME = t.TABLE_NAME
              AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu
               ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc
               ON fk_tc.TABLE_NAME = t.TABLE_NAME
              AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu
               ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME

    Usually this runs in a couple of seconds, but on one server running SQL Server 2000 it takes over four minutes. I ran it with the execution plan enabled, and the results are huge, but this part caught my eye (it won't let me post an image): http://img35.imageshack.us/i/plank.png/ I then updated the statistics on all of the tables that were mentioned in the execution plan:

        update statistics sysobjects
        update statistics syscolumns
        update statistics systypes
        update statistics master..spt_values
        update statistics sysreferences

    But that didn't help. The Index Tuning Wizard doesn't help either, because it doesn't let me select system tables. There is nothing else running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?
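
    If the INFORMATION_SCHEMA views themselves turn out to be the slow part on that one server, a hedged workaround for SQL Server 2000 is to read the underlying system tables directly. The sketch below covers only the foreign-key half of the original query and assumes the standard SQL 2000 catalog layout; timing it against the view-based query should at least show whether the views are the bottleneck.

        -- Foreign-key columns per table, straight from the SQL 2000 system tables.
        SELECT OBJECT_NAME(fk.fkeyid)       AS TABLE_NAME,
               COL_NAME(fk.fkeyid, fk.fkey) AS FK_COLUMN,
               OBJECT_NAME(fk.rkeyid)       AS REFERENCED_TABLE
        FROM sysforeignkeys fk
        ORDER BY TABLE_NAME;

    Primary keys can be pulled similarly from sysobjects (xtype = 'PK') joined through sysindexes, though that join is fiddlier.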

    Read the article
