Search Results

Search found 21942 results on 878 pages for 'named query'.

  • mysql - check if data exists across multiple tables

    - by Dd Daym
    I am currently running this query inside MySQL to check if the specified values exist within the tables associated with them. SELECT COUNT(artist.artist_id), COUNT(album.album_id), COUNT(tracks.track_id) FROM artist, album, tracks WHERE artist.artist_id = 320295 OR album.album_id = 1234 OR tracks.track_id = 809 The result I get from running this query is all 1, meaning that all the conditions after the WHERE clause are true. To further check the query's reliability, I changed the tracks.track_id = 809 to 802, which I know does not match. However, the results displayed are still all 1, meaning that they were all successfully matched even when I purposefully inserted a value which would not have matched. How do I get it to show 1 for a match and 0 for no matches within the same query? EDIT: I have inserted an image of the query running
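
    Because the original query cross joins the three tables and filters with OR, each COUNT() reflects rows of the combined result rather than per-table existence. A minimal sketch of one way to get a true 1-or-0 flag per table, assuming the same table and column names as above: wrap each lookup in EXISTS, which returns 1 only when a matching row is actually found.

    SELECT
        EXISTS (SELECT 1 FROM artist WHERE artist_id = 320295) AS artist_found,
        EXISTS (SELECT 1 FROM album  WHERE album_id  = 1234)   AS album_found,
        EXISTS (SELECT 1 FROM tracks WHERE track_id  = 809)    AS track_found;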

    Read the article

  • MySQL - Counting rows in preparation for greatest-n-per-group not working?

    - by John M
    Referring to SO and other sites has given me examples of how to use MySQL to create a 'greatest-n-per-group' query. My variant on this would be to have a query that returns the first 3 rows of each category. As the basis for this I need to sort my data into a usable sequence, and that is where my problems start. Running just the sequence query with row numbering shows that changes in category are mostly ignored. I should have 35 categories returning rows but only 5 do so. My query: set @c:=0; set @a:=0; SELECT IF(@c = tdg, @a:=@a+1, @a:=1) AS rownum, (@c:=tdg) , julian_day, sequence, complete, year, tdg FROM tsd WHERE complete = 0 order by tdg, year, julian_day, sequence Do I have a syntax mistake with this query?
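
    A likely cause rather than a syntax error: MySQL can evaluate the user-variable expressions before the outer ORDER BY is applied, so the rownum logic does not see the rows in sorted order and most category changes are missed. A minimal sketch of the usual workaround, assuming the same tsd table: fix the sort order in a derived table first, then compute the row numbers against it.

    SET @c := NULL, @a := 0;
    SELECT IF(@c = t.tdg, @a := @a + 1, @a := 1) AS rownum,
           (@c := t.tdg) AS tdg,
           t.julian_day, t.sequence, t.complete, t.year
    FROM (
        SELECT tdg, julian_day, sequence, complete, year
        FROM tsd
        WHERE complete = 0
        ORDER BY tdg, year, julian_day, sequence
    ) AS t;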

    Read the article

  • Access 2007 VBA & SQL - Update a Subform pointed at a dynamically created query

    - by Lucretius
    Abstract: I'm using VB to recreate a query each time a user selects one of 3 options from a drop down menu, which appends the WHERE clause If they've selected anything from the combo boxes. I then am attempting to get the information displayed on the form to refresh thereby filtering what is displayed in the table based on user input. 1) Dynamically created query using VB. Private Sub BuildQuery() ' This sub routine will redefine the subQryAllJobsQuery based on input from ' the user on the Management tab. Dim strQryName As String Dim strSql As String ' Main SQL SELECT statement Dim strWhere As String ' Optional WHERE clause Dim qryDef As DAO.QueryDef Dim dbs As DAO.Database strQryName = "qryAllOpenJobs" strSql = "SELECT * FROM tblOpenJobs" Set dbs = CurrentDb ' In case the query already exists we should deleted it ' so that we can rebuild it. The ObjectExists() function ' calls a public function in GlobalVariables module. If ObjectExists("Query", strQryName) Then DoCmd.DeleteObject acQuery, strQryName End If ' Check to see if anything was selected from the Shift ' Drop down menu. If so, begin the where clause. If Not IsNull(Me.cboShift.Value) Then strWhere = "WHERE tblOpenJobs.[Shift] = '" & Me.cboShift.Value & "'" End If ' Check to see if anything was selected from the Department ' drop down menu. If so, append or begin the where clause. If Not IsNull(Me.cboDepartment.Value) Then If IsNull(strWhere) Then strWhere = strWhere & " AND tblOpenJobs.[Department] = '" & Me.cboDepartment.Value & "'" Else strWhere = "WHERE tblOpenJobs.[Department] = '" & Me.cboDepartment.Value & "'" End If End If ' Check to see if anything was selected from the Date ' field. If so, append or begin the Where clause. If Not IsNull(Me.txtDate.Value) Then If Not IsNull(strWhere) Then strWhere = strWhere & " AND tblOpenJobs.[Date] = '" & Me.txtDate.Value & "'" Else strWhere = "WHERE tblOpenJobs.[Date] = '" & Me.txtDate.Value & "'" End If End If ' Concatenate the Select and the Where clause together ' unless all three parameters are null, in which case return ' just the plain select statement. If IsNull(Me.cboShift.Value) And IsNull(Me.cboDepartment.Value) And IsNull(Me.txtDate.Value) Then Set qryDef = dbs.CreateQueryDef(strQryName, strSql) Else strSql = strSql & " " & strWhere Set qryDef = dbs.CreateQueryDef(strQryName, strSql) End If End Sub 2) Main Form where the user selects items from combo boxes. picture of the main form and sub form http://i48.tinypic.com/25pjw2a.png 3) Subform pointed at the query created in step 1. Chain of events: 1) User selects item from drop down list on the main form. 2) Old query is deleted, new query is generated (same name). 3) Subform pointed at query does not update, but if you open the query by itself the correct results are displayed. Name of the Query: qryAllOpenJobs name of the subform: subQryAllOpenJobs Also, the Row Source of subQryAllOpenJobs = qryAllOpenJobs Name of the main form: frmManagement

    Read the article

  • C++ MySQL++ Delete query statement brain killer question

    - by shauny
    Hello all, I'm relatively new to the MySQL++ connector in C++, and have an really annoying issue with it already! I've managed to get stored procedures working, however i'm having issues with the delete statements. I've looked high and low and have found no documentation with examples. First I thought maybe the code needs to free the query/connection results after calling the stored procedure, but of course MySQL++ doesn't have a free_result method... or does it? Anyways, here's what I've got: #include <iostream> #include <stdio.h> #include <queue> #include <deque> #include <sys/stat.h> #include <mysql++/mysql++.h> #include <boost/thread/thread.hpp> #include "RepositoryQueue.h" using namespace boost; using namespace mysqlpp; class RepositoryChecker { private: bool _isRunning; Connection _con; public: RepositoryChecker() { try { this->_con = Connection(false); this->_con.set_option(new MultiStatementsOption(true)); this->_con.set_option(new ReconnectOption(true)); this->_con.connect("**", "***", "***", "***"); this->ChangeRunningState(true); } catch(const Exception& e) { this->ChangeRunningState(false); } } /** * Thread method which runs and creates the repositories */ void CheckRepositoryQueues() { //while(this->IsRunning()) //{ std::queue<RepositoryQueue> queues = this->GetQueue(); if(queues.size() > 0) { while(!queues.empty()) { RepositoryQueue &q = queues.front(); char cmd[256]; sprintf(cmd, "svnadmin create /home/svn/%s/%s/%s", q.GetPublicStatus().c_str(), q.GetUsername().c_str(), q.GetRepositoryName().c_str()); if(this->DeleteQueuedRepository(q.GetQueueId())) { printf("query deleted?\n"); } printf("Repository created!\n"); queues.pop(); } } boost::this_thread::sleep(boost::posix_time::milliseconds(500)); //} } protected: /** * Gets the latest queue of repositories from the database * and returns them inside a cool queue defined with the * RepositoryQueue class. */ std::queue<RepositoryQueue> GetQueue() { std::queue<RepositoryQueue> queues; Query query = this->_con.query("CALL sp_GetRepositoryQueue();"); StoreQueryResult result = query.store(); RepositoryQueue rQ; if(result.num_rows() > 0) { for(unsigned int i = 0;i < result.num_rows(); ++i) { rQ = RepositoryQueue((unsigned int)result[i][0], (unsigned int)result[i][1], (String)result[i][2], (String)result[i][3], (String)result[i][4], (bool)result[i][5]); queues.push(rQ); } } return queues; } /** * Allows the thread to be shut off. */ void ChangeRunningState(bool isRunning) { this->_isRunning = isRunning; } /** * Returns the running value of the active thread. */ bool IsRunning() { return this->_isRunning; } /** * Deletes the repository from the mysql queue table. This is * only called once it has been created. */ bool DeleteQueuedRepository(unsigned int id) { char cmd[256]; sprintf(cmd, "DELETE FROM RepositoryQueue WHERE Id = %d LIMIT 1;", id); Query query = this->_con.query(cmd); return (query.exec()); } }; I've removed all the other methods as they're not needed... Basically it's the DeleteQueuedRepository method which isn't working, the GetQueue works fine. PS: This is on a Linux OS (Ubuntu server) Many thanks, Shaun

    Read the article

  • Problem with PHP/AS3 - Display PHP query results back to flash via AS3

    - by Dale J
    Hi, I've made a query in PHP, and I'm trying to send the results back into Flash via AS3, but it's throwing up this error Error: Error #2101: The String passed to URLVariables.decode() must be a URL-encoded query string containing name/value pairs. at Error$/throwError() at flash.net::URLVariables/decode() at flash.net::URLVariables() at flash.net::URLLoader/onComplete() Here is the relevant part of the PHP and AS3 code, including the query. The flash variable rssAdd is passed over to the PHP, which uses it in the PHP query accordingly. $url = $_POST['rssAdd']; $query= SELECT title FROM Feed WHERE category = (SELECT category FROM Feed WHERE url =$url) AND url!=$url; $result = mysql_query($query); echo $query; Here is the AS3 code I've done so far. function recommendation(){ var request:URLRequest = new URLRequest("url"); request.method = URLRequestMethod.POST var recVars:URLVariables = new URLVariables(); recVars.rssAdd=rssAdd; request.data = recVars var loader:URLLoader = new URLLoader(request); loader.addEventListener(Event.COMPLETE, onComplete); loader.dataFormat = URLLoaderDataFormat.TEXT; loader.load(request); function onComplete(event:Event):void{ recommend.text = event.target.data; } } Any help would be most appreciated, thanks.

    Read the article

  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance... -- [Data] has a predictable structure and a simple clustered index of the primary key: ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] ) -- My query, joins on itself looking for a certain kind of "overlapping" records SELECT DISTINCT [Data].ID AS [ID] FROM dbo.[Data] AS [Data] JOIN dbo.[Data] AS [Compared] ON [Data].[A] = [Compared].[A] AND [Data].[B] = [Compared].[B] AND [Data].[C] = [Compared].[C] AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E]) AND [Data].[F] <> [Compared].[F] WHERE 1=1 AND [Data].[A] = @A AND @CS <= [Data].[C] AND [Data].[C] < @CE -- Between a range [Data] has about a quarter-million records so far, 10% to 50% of the data satisfies the where clause depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans, but the clustered index is only of the ID, which isn't part of the conditions of the query, only the output. ?? If I add this hand-crafted [IDX_A_B_C_D_E_F] index which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% & 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing, but why so slow?? CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data] ([A], [B], [C], [D], [E], [F]) INCLUDE ([ID], [X], [Y], [Z]); The Database Engine Tuning Advisor suggests a similar index with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the where clause makes no difference in performance. I need these and other indexes for other queries. I'm sure I could hint that the query should refer to the Clustered Index, since that's currently winning - but we all know it is not as optimized as it could be, and without a proper index, I can expect the performance will get much worse with additional data. What gives?
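
    One thing that may be worth testing (a sketch under assumptions, not a definitive fix, and the index name is made up): key the index on the columns the WHERE clause actually filters on, equality on [A] first and the range column [C] second, and carry the remaining join columns as included columns so both sides of the self-join are covered. The clustered key [ID] travels with every nonclustered index automatically, so it does not need to be listed.

    CREATE NONCLUSTERED INDEX [IDX_A_C_covering]
    ON [dbo].[Data] ([A], [C])
    INCLUDE ([B], [D], [E], [F]);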

    Read the article

  • SQL Where Clause Against View

    - by Adam Carr
    I have a view (actually, it's a table-valued function, but the observed behavior is the same in both) that inner joins and left outer joins several other tables. When I query this view with a where clause similar to SELECT * FROM [v_MyView] WHERE [Name] like '%Doe, John%' ... the query is very slow, but if I do the following... SELECT * FROM [v_MyView] WHERE [ID] in ( SELECT [ID] FROM [v_MyView] WHERE [Name] like '%Doe, John%' ) it is MUCH faster. The first query takes at least 2 minutes to return, if not longer, whereas the second query will return in less than 5 seconds. Any suggestions on how I can improve this? If I run the whole command as one SQL statement (without the use of a view) it is very fast as well. I believe this happens because a view has to behave like a table: if a view has OUTER JOINs, GROUP BYs or TOP ##, the results could differ depending on whether the where clause is applied before or after the view is executed. My question is why wouldn't SQL optimize my first query to something as efficient as my second query?

    Read the article

  • Strange intermittent SQL connection error, fixes on reboot, comes back after 3-5 days (ASP.NET)

    - by Ryan
    For some reason every 3-5 days our web app loses the ability to open a connection to the db with the following error, the strange thing is that all we have to do is reboot the container (it is a VPS) and it is restored to normal functionality. Then a few days later or so it happens again. Has anyone ever had such a problem? I have noticed a lot of ANONYMOUS LOGONs in the security log in the middle of the night from our AD server which is strange, and also some from an IP in Amsterdam. I am not sure how to tell what exactly they mean or if it is related or not. Server Error in '/ntsb' Application. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) Source Error: Line 11: Line 12: Line 13: dbConnection.Open() Line 14: Line 15: Source File: C:\Inetpub\wwwroot\includes\connection.ascx Line: 13 Stack Trace: [SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. 
(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)] System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +248 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +245 System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject) +475 System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) +260 System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) +2445449 System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +2445144 System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +354 System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +703 System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +54 System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +2414696 System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +92 System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +1657 System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +84 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +1645687 System.Data.SqlClient.SqlConnection.Open() +258 ASP.includes_connection_ascx.getConnection() in C:\Inetpub\wwwroot\includes\connection.ascx:13 ASP.default_aspx.Page_Load(Object sender, EventArgs e) in C:\Inetpub\wwwroot\Default.aspx:16 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42 System.Web.UI.Control.OnLoad(EventArgs e) +132 System.Web.UI.Control.LoadRecursive() +66 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428 Version Information: Microsoft .NET Framework Version:2.0.50727.3053; ASP.NET Version:2.0.50727.3053

    Read the article

  • My SQL query is only returning the children; I need the parent as well

    - by sico87
    My SQL query is only returning the children of the parent; I need it to return the parent as well. public function getNav($cat,$subcat){ //gets all sub categories for a specific category if(!$this->checkValue($cat)) return false; //checks data $query = false; if($cat=='NULL'){ $sql = "SELECT itemID, title, parent, url, description, image FROM p_cat WHERE deleted = 0 AND parent is NULL ORDER BY position;"; $query = $this->db->query($sql) or die($this->db->error); }else{ //die($cat); $sql = "SET @parent = (SELECT c.itemID FROM p_cat c WHERE url = '".$this->sql($cat)."' AND deleted = 0); SELECT c1.itemID, c1.title, c1.parent, c1.url, c1.description, c1.image, (SELECT c2.url FROM p_cat c2 WHERE c2.itemID = c1.parent LIMIT 1) as parentUrl FROM p_cat c1 WHERE c1.deleted = 0 AND c1.parent = @parent ORDER BY c1.position;"; $query = $this->db->multi_query($sql) or die($this->db->error); $this->db->store_result(); $this->db->next_result(); $query = $this->db->store_result(); } return $query; }
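
    A minimal sketch of one way to bring the parent row back alongside its children, assuming the same p_cat table (the 'some-category' url is a placeholder): match either the row's own itemID or its parent column against the looked-up id.

    SET @parent = (SELECT itemID FROM p_cat WHERE url = 'some-category' AND deleted = 0);
    SELECT c1.itemID, c1.title, c1.parent, c1.url, c1.description, c1.image
    FROM p_cat AS c1
    WHERE c1.deleted = 0
      AND (c1.itemID = @parent OR c1.parent = @parent)
    ORDER BY c1.position;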

    Read the article

  • Searching For A Record After A LINQ query

    - by Justin
    I'm confused as to why this is happening. I'm new to LINQ so I'm clearly missing something here, that is probably pretty easy. I've looked up help on the topic, but I don't really know what to ask so I haven't found any answers that really address my question. This doesn't work. It throws an EntityCommandExecutionException when the FirstOrDefault method is executed. var query = from band in context.BandsEntitySet where band.ID == 12345 select band; string venueName = "Willis Park"; foreach (var item in query) { var venue = context.VenueEntitySet.FirstOrDefault(r => r.Venue.Equals(venueName)); } This works. var query = from band in context.BandsEntitySet where band.ID == 12345 select band; var bandList = query.ToList(); string venueName = "Willis Park"; foreach (var item in bandList) { var venue = context.VenueEntitySet.FirstOrDefault(r => r.Venue.Equals(venueName)); } My question is simple: Why is the exception being thrown? And why does creating a list from the query allow me to use the FirstOrDefault method? Exception Message: A first chance exception of type 'System.Data.EntityCommandExecutionException' occurred in System.Data.Entity.dll I guess I am wrong in my assumption that query is a list? Then what is it exactly? I'm confused because this doesn't throw an exception: foreach (var item in query) { var area = item.VenueArea; } I'd appreciate any help on this issue. thanks, Justin

    Read the article

  • How to query the SPView object

    - by Hugo Migneron
    I have a SPView object that contains a lot of SPListItem objects (there are many fields in the view). I am only interested in one of these fields. Let's call it specialField. Given that view and specialField, I want to know if a value is contained in specialField. Here is a way of doing what I want to do: String specialField = "Special Field"; String specialValue = "value"; SPList list = SPContext.Current.Site.RootWeb.Lists["My List"]; SPView view = list.Views["My View"]; //This is the view I want to query SPQuery query = new SPQuery(); query.Query = view.Query; SPListItemCollection items = list.GetItems(query); foreach(SPListItem item in items) { var value = item[specialField]; if((value != null) && (value.ToString() == specialValue)) { //My value is found. This is what I was looking for. //break out of the loop or return } } //My value is not found. However, iterating through each ListItem hardly seems optimal, especially as there might be hundreds of items. This query will be executed often, so I am looking for an efficient way to do this. EDIT I will not always be working with the same view, so my solution cannot be hardcoded (it has to be generic enough that the list, view and specialField can be changed).

    Read the article

  • How can I pipe two Perl CORE::system commands in a cross-platform way?

    - by Pedro Silva
    I'm writing a System::Wrapper module to abstract away from CORE::system and the qx operator. I have a serial method that attempts to connect command1's output to command2's input. I've made some progress using named pipes, but POSIX::mkfifo is not cross-platform. Here's part of what I have so far (the run method at the bottom basically calls system): package main; my $obj1 = System::Wrapper->new( interpreter => 'perl', arguments => [-pe => q{''}], input => ['input.txt'], description => 'Concatenate input.txt to STDOUT', ); my $obj2 = System::Wrapper->new( interpreter => 'perl', arguments => [-pe => q{'$_ = reverse $_}'}], description => 'Reverse lines of input input', output => { '>' => 'output' }, ); $obj1->serial( $obj2 ); package System::Wrapper; #... sub serial { my ($self, @commands) = @_; eval { require POSIX; POSIX->import(); require threads; }; my $tmp_dir = File::Spec->tmpdir(); my $last = $self; my @threads; push @commands, $self; for my $command (@commands) { croak sprintf "%s::serial: type of args to serial must be '%s', not '%s'", ref $self, ref $self, ref $command || $command unless ref $command eq ref $self; my $named_pipe = File::Spec->catfile( $tmp_dir, int \$command ); POSIX::mkfifo( $named_pipe, 0777 ) or croak sprintf "%s::serial: couldn't create named pipe %s: %s", ref $self, $named_pipe, $!; $last->output( { '>' => $named_pipe } ); $command->input( $named_pipe ); push @threads, threads->new( sub{ $last->run } ); $last = $command; } $_->join for @threads; } #... My specific questions: Is there an alternative to POSIX::mkfifo that is cross-platform? Win32 named pipes don't work, as you can't open those as regular files, neither do sockets, for the same reasons. 2. The above doesn't quite work; the two threads get spawned correctly, but nothing flows across the pipe. I suppose that might have something to do with pipe deadlocking or output buffering. What throws me off is that when I run those two commands in the actual shell, everything works as expected. Point 2 is solved; a -p fifo file test was not testing the correct file.

    Read the article

  • Slow T-SQL query, convert to LINQ to Object

    - by yimbot
    I have a T-SQL query which populates a DataSet from an MSSQL database. string qry = "SELECT * FROM EvnLog AS E WHERE TimeDate = (SELECT MAX(TimeDate) From EvnLog WHERE Code = E.Code) AND (Event = 8) AND (TimeDate BETWEEN '" + Start + "' AND '" + Finish + "')" The database is quite large and, being the type of nested query it is, the Data Adapter can take a number of minutes to fill the DataSet. I have extended the DataAdapter's timeout value to 480 seconds to combat it, but the client still complains about slow performance and occasional timeouts. To combat this, I was considering executing a simpler query (i.e. just taking the date range) and then populating a Generic List which I could then execute a Linq query against. The simple query executes very quickly, which is great. However, I cannot seem to build a Linq query which generates the same results as the T-SQL query above. Is this the best solution to this problem? Can anyone provide tips on rewriting the above T-SQL into Linq? I have also considered using a DataView, but cannot seem to get the results from that either.
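
    Before moving the filtering into LINQ, it may be worth checking how far the T-SQL itself can be pushed; a minimal sketch, assuming SQL Server and the same EvnLog columns (the index name is made up): pass the dates as parameters instead of concatenating strings, and give the correlated MAX(TimeDate) lookup an index it can seek on.

    -- Same filtering logic as the original, with @Start/@Finish as parameters.
    SELECT E.*
    FROM EvnLog AS E
    WHERE E.TimeDate = (SELECT MAX(TimeDate) FROM EvnLog WHERE Code = E.Code)
      AND E.Event = 8
      AND E.TimeDate BETWEEN @Start AND @Finish;

    -- Lets the per-Code MAX(TimeDate) subquery become an index seek.
    CREATE NONCLUSTERED INDEX IX_EvnLog_Code_TimeDate ON EvnLog (Code, TimeDate);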

    Read the article

  • How To perform a SQL Query to DataTable Operation That Can Be Cancelled

    - by David W
    I tried to make the title as specific as possible. Basically what I have running inside a backgroundworker thread now is some code that looks like: SqlConnection conn = new SqlConnection(connstring); SqlCommand cmd = new SqlCommand(query, conn); conn.Open(); SqlDataAdapter sda = new SqlDataAdapter(cmd); sda.Fill(Results); conn.Close(); sda.Dispose(); Where query is a string representing a large, time-consuming query, and conn is the connection object. My problem now is I need a stop button. I've come to realize killing the backgroundworker would be worthless because I still want to keep what results are left over after the query is canceled. Plus it wouldn't be able to check the canceled state until after the query. What I've come up with so far: I've been trying to conceptualize how to handle this efficiently without taking too big of a performance hit. My idea was to use a SqlDataReader to read the data from the query piece at a time so that I had a "loop" to check a flag I could set from the GUI via a button. The problem is, as far as I know, I can't use the Load() method of a datatable and still be able to cancel the sqlcommand. If I'm wrong, please let me know because that would make cancelling slightly easier. In light of what I discovered I came to the realization I may only be able to cancel the sqlcommand mid-query if I did something like the below (pseudo-code): while(reader.Read()) { //check flag status //if it is set to 'kill' fire off the kill thread //otherwise populate the datatable with what was read } However, it would seem to me this would be highly inefficient and possibly costly. Is this the only way to kill a sqlcommand in progress that absolutely needs to be in a datatable? Any help would be appreciated!

    Read the article

  • Run multiple MySQL queries based on a series of ifs

    - by OldWest
    I am just getting started on this complex query I need to write and was hoping for any suggestions or feedback regarding table structure and the actual query itself. I've already created my tables and populated test data, and am now just trying to sort out how and what is possible within MySQL. Here is an outline of the problem: End result: Listing of rates based on specific queried criteria (see below): Age: [ 27 ] Spouse Age: [ 25 ] Num of Children: [ 3 ] Zip Code: [ 97128 ] The problem I am running into is each company that provides rates has a unique way of dealing with the rate. And I am looking for the best approach for multiple queries based on the company (one query with results for each company more or less all combined into one result set). Here are some facts: - Each company deals with zip code ranges which assist in the query result. - Each company has a different method of calculating the rate based on the Applicant, Spouse, Num of Children: Example, a) Company A determines rate by: Applicant + Spouse + Child(ren) = rate (age is pertinent to the applicant within a range). b) Company B determines the rate by total number of applicants like: 1, 2, 3, 4, 5, 6+ = rate (and age is ignored). First off, what would I call this type of query? Multiple nested query? And should I intertwine PHP within it to determine the If()s ... I apologize if this thread lacks sufficient data, so please tell me anything you would like to see.

    Read the article

  • EF Query with conditional include that uses Joins

    - by makerofthings7
    This is a follow up to another user's question. I have 5 tables CompanyDetail CompanyContacts FK to CompanyDetail CompanyContactsSecurity FK to CompanyContact UserDetail UserGroupMembership FK to UserDetail How do I return all companies and include the contacts in the same query? I would like to include companies that contain zero contacts. Companies have a 1 to many association to Contacts, however not every user is permitted to see every Contact. My goal is to get a list of every Company regardless of the count of Contacts, but include contact data. Right now I have this working query: var userGroupsQueryable = _entities.UserGroupMembership .Where(ug => ug.UserID == UserID) .Select(a => a.GroupMembership); var contactsGroupsQueryable = _entities.CompanyContactsSecurity;//.Where(c => c.CompanyID == companyID); /// OLD Query that shows permitted contacts /// ... I want to "use this query inside "listOfCompany" /// //var permittedContacts= from c in userGroupsQueryable //join p in contactsGroupsQueryable on c equals p.GroupID //select p; However this is inefficient when I need to get all contacts for all companies, since I use a For..Each loop and query each company individually and update my viewmodel. Question: How do I shoehorn the permittedContacts variable above and insert that into this query: var listOfCompany = from company in _entities.CompanyDetail.Include("CompanyContacts").Include("CompanyContactsSecurity") where company.CompanyContacts.Any( // Insert Query here.... // b => b.CompanyContactsSecurity.Join(/*inner*/,/*OuterKey*/,/*innerKey*/,/*ResultSelector*/) ) select company; My attempt at doing this resulted in: var listOfCompany = from company in _entities.CompanyDetail.Include("CompanyContacts").Include("CompanyContactsSecurity") where company.CompanyContacts.Any( // This is concept only... doesn't work... from grps in userGroupsQueryable join p in company.CompanyContactsSecurity on grps equals p.GroupID select p ) select company;

    Read the article

  • Getting Wrong Answer in range maximum query [on hold]

    - by user3186829
    I've just learnt range minimum and maximum queries using segment trees.But when I implemented it on my own I'm getting wrong answer.Logically I don't find any mistake in my code but if any one can point it out then I would be really thankful. Code in C++: #include<bits/stdc++.h> using namespace std; #define LL long long #define mp make_pair #define pb push_back #define gc getchar_unlocked #define pc putchar_unlocked #define LD long double #define MAXN 19999999 #define max(a,b) ((a)>(b)?(a):(b)) LL P[MAXN+15]; LL ST[2*MAXN+25]; long N,M,i,A,B,K; void build(long id,long L,long R) { long M=(L+R)>>1L; long LCT=id<<1L; long RCT=LCT+1L; if(L==R) { ST[id]=P[L]; return; } build(LCT,L,M); build(RCT,M+1,R); } //Range Update of segment tree void updateST(long id,long L,long R,long Q1,long Q2,long val) { long M=(L+R)>>1L; long LCT=id<<1L; long RCT=LCT+1L; if(L>Q2||R<Q1) { return; } if(L==Q1&&R==Q2) { ST[id]+=val; return; } if(Q2<=M) { updateST(LCT,L,M,Q1,Q2,val); } else if(Q1>M) { updateST(RCT,M+1,R,Q1,Q2,val); } else { updateST(LCT,L,M,Q1,M,val); updateST(RCT,M+1,R,M+1,Q2,val); } } //Query for finding maximum element in a given range[Q1,Q2] and 1<=Q1,Q2<=N LL query2(long id,long L,long R,long Q1,long Q2) { long M=(L+R)>>1; long LCT=id<<1; long RCT=LCT+1; if(L>Q2||R<Q1) { return 0; } if(L==Q1&&R==Q2) { return ST[id]; } if(Q2<=M) { return query2(LCT,L,M,Q1,Q2); } else if(Q1>M) { return query2(RCT,M+1,R,Q1,Q2); } else { LL G=query2(LCT,L,M,Q1,M); LL H=query2(RCT,M+1,R,M+1,Q2); LL RES=max(G,H); return RES; } } int main() { scanf("%ld %ld",&N,&M); build(1,1,N); for(i=0;i<M;i++) { scanf("%ld %ld %ld",&A,&B,&K); updateST(1,1,N,A,B,K); } //Finding maximum element in range[1,N]] cout<<query2(1,1,N,1,N); return 0; }

    Read the article

  • variables used in inner queries

    - by wcpro
    I'm trying to build a query that has something like this: select id, (select top 1 create_date from table2 where table1id = t1.id and status = 'success') [last_success_date], (select count(*) from table2 where table1id = t1.id and create_date > [last_success_date]) [failures_since_success] from table1 t1 As you can see, the [last_success_date] is not within the scope of the second query, and I was wondering how I could access that value in other queries without having to rerun it.
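
    A minimal sketch of one way to do this in SQL Server, assuming the same tables: compute the value once with CROSS APPLY (adding an ORDER BY so the TOP 1 is deterministic) and then reuse its alias wherever it is needed. OUTER APPLY would keep table1 rows that have no success row at all.

    SELECT t1.id,
           ls.last_success_date,
           (SELECT COUNT(*)
            FROM table2
            WHERE table1id = t1.id
              AND create_date > ls.last_success_date) AS failures_since_success
    FROM table1 AS t1
    CROSS APPLY (
        SELECT TOP (1) create_date AS last_success_date
        FROM table2
        WHERE table1id = t1.id
          AND status = 'success'
        ORDER BY create_date DESC
    ) AS ls;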

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • bind9 dlz/mysql at ubuntu segfault libmysqlclient.so

    - by Theos
    I have a big problem. I installed the bind9 nameserver on three different computers: two Ubuntu 10.04.4 LTS, and one Ubuntu 11.10. I compiled versions 9.7.0, 9.7.3 and 9.9.0 with this method:

    ./configure --prefix=/usr --sysconfdir=/etc/bind --localstatedir=/var \
      --mandir=/usr/share/man --infodir=/usr/share/info \
      --enable-threads --enable-largefile --with-libtool --enable-shared --enable-static \
      --with-openssl=/usr --with-gssapi=/usr --with-gnu-ld \
      --with-dlz-mysql=yes --with-dlz-bdb=no \
      --with-dlz-filesystem=yes --with-geoip=/usr
    make
    make install

    After setting up dlz/mysql, the BIND server works perfectly for 5-30 minutes, then I get a segfault. I temporarily worked around the problem with a simple process watchdog that restarts named whenever it stops, but this is not a good long-term solution. My log output is:

    messages:
    Apr 13 19:33:51 dnsvm kernel: [ 8.088696] eth0: link up
    Apr 13 19:33:58 WATCHDOG: named not running. Restarting
    Apr 13 19:35:08 dnsvm kernel: [ 87.082572] named[1027]: segfault at 88 ip b71c4291 sp b5adfe30 error 4 in libmysqlclient.so.16.0.0[b714e000+1aa000]
    Apr 13 19:35:08 WATCHDOG: named not running. Restarting
    Apr 13 19:35:08 dnsvm kernel: [ 87.457510] named[1423]: segfault at 68 ip b71d6122 sp b52f0a40 error 4 in libmysqlclient.so.16.0.0[b7160000+1aa000]
    Apr 13 19:35:09 WATCHDOG: named not running. Restarting
    Apr 13 19:41:56 dnsvm kernel: [ 494.838206] named[1448]: segfault at 88 ip b731c291 sp b5436e30 error 4 in libmysqlclient.so.16.0.0[b72a6000+1aa000]
    Apr 13 19:41:57 WATCHDOG: named not running. Restarting
    Apr 13 19:57:26 dnsvm kernel: [ 1424.023409] named[2976]: segfault at 88 ip b72d1291 sp b6beee30 error 4 in libmysqlclient.so.16.0.0[b725b000+1aa000]
    Apr 13 19:57:26 WATCHDOG: named not running. Restarting
    Apr 13 20:11:56 dnsvm kernel: [ 2294.324663] named[6441]: segfault at 88 ip b7357291 sp b6473e30 error 4 in libmysqlclient.so.16.0.0[b72e1000+1aa000]
    Apr 13 20:11:57 WATCHDOG: named not running. Restarting

    syslog: http://pastebin.com/hjUyt8gN

    The first server is a native, normal x64 server (u1004lts), the second is virtualised (u11.10), and the third is also virtualised (10.04lts). These servers only provide DNS backed by a MySQL database, but the problem occurs on every server and with every BIND version.

    named.conf: http://pastebin.com/zwm1yP7V

    Can anybody help me, or does anyone have a good idea?
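    For reference, here is a minimal sketch of the kind of process watchdog described above, assuming a cron-driven check and that the self-compiled named can be restarted via an init script; the restart command, paths, and logger tag are illustrative assumptions, not details taken from the original setup:

    #!/bin/sh
    # Minimal watchdog sketch: restart named if it has died.
    # Assumes an init script exists for the self-compiled BIND;
    # adjust the restart command to match the actual installation.
    if ! pidof named > /dev/null 2>&1; then
        logger -t WATCHDOG "named not running. Restarting"
        /etc/init.d/bind9 start
    fi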

    Read the article

  • Using named_scopes on the join model of a has_many :through

    - by uberllama
    Hi folks. I've been beating my head against the wall on something that on the surface should be very simple. Lets say I have the following simplified models: user.rb has_many :memberships has_many :groups, :through => :memberships membership.rb belongs_to :group belongs_to :user STATUS_CODES = {:admin => 1, :member => 2, :invited => 3} named_scope :active, :conditions => {:status => [[STATUS_CODES[:admin], STATUS_CODES[:member]]} group.rb has_many :memberships has_many :users, :through => :memberships Simple, right? So what I want to do is get a collection of all the groups a user is active in, using the existing named scope on the join model. Something along the lines of User.find(1).groups.active. Obviously this doesn't work. But as it stands, I need to do something like User.find(1).membrships.active.all(:include => :group) which returns a collection of memberships plus groups. I don't want that. I know I can add another has_many on the User model with conditions that duplicate the :active named_scope on the Membership model, but that's gross. has_many :active_groups, :through => :memberships, :source => :group, :conditions => ... So my question: is there a way of using intermediary named scopes when traversing directly between models? Many thanks.
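    One possible direction, sketched under the assumption that Rails 2.3 is in use and that the scope name (active_membership) is purely illustrative: define the scope on the association target (Group) and let its conditions reference the join table, which the :through association already brings into the query.

    # group.rb -- sketch only; relies on being called through an association
    # that joins memberships (e.g. user.groups)
    named_scope :active_membership,
      :conditions => ["memberships.status IN (?)",
                      [Membership::STATUS_CODES[:admin], Membership::STATUS_CODES[:member]]]

    # usage: the memberships join comes from the has_many :through association
    User.find(1).groups.active_membership

    The trade-off is that the scope only makes sense when reached through an association that joins memberships; calling Group.active_membership on its own would fail because the memberships table is not part of that query.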

    Read the article

  • Rails named_scope across multiple tables

    - by wakiki
    I'm trying to tidy up my code by using named_scopes in Rails 2.3.x, but I'm struggling with the has_many :through associations. I'm wondering if I'm putting the scopes in the wrong place... Here's some pseudo code below. The problem is that the :accepted named scope is replicated twice... I could of course call :accepted something different, but these are the statuses on the table and it seems wrong to call them something different. Can anyone shed light on whether I'm doing the following correctly or not? I know Rails 3 is out, but it's still in beta and this is a big project, so I can't use it in production yet.

    class Person < ActiveRecord::Base
      has_many :connections
      has_many :contacts, :through => :connections
      named_scope :accepted, :conditions => ["connections.status = ?", Connection::ACCEPTED]
      # the :accepted named_scope is duplicated
      named_scope :accepted, :conditions => ["memberships.status = ?", Membership::ACCEPTED]
    end

    class Group < ActiveRecord::Base
      has_many :memberships
      has_many :members, :through => :memberships
    end

    class Connection < ActiveRecord::Base
      belongs_to :person
      belongs_to :contact, :class_name => "Person", :foreign_key => "contact_id"
    end

    class Membership < ActiveRecord::Base
      belongs_to :person
      belongs_to :group
    end

    I'm trying to run something like person.contacts.accepted and group.members.accepted, which are two different things. Shouldn't the named_scopes be in the Membership and Connection classes? One solution is to just call the two different named scopes something different in the Person class, or even to create separate associations (i.e. has_many :accepted_members and has_many :accepted_contacts), but it seems hackish, and in reality I have many more states than just accepted (i.e. banned members, ignored connections, pending, requested, etc.).
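    As a sketch of the alternative the question hints at (assuming the ACCEPTED constants referenced above are defined on the join models, and that no new status values are introduced), the :accepted scopes can live on the join models themselves and the other side of the join can be reached from there:

    # connection.rb
    named_scope :accepted, :conditions => {:status => ACCEPTED}

    # membership.rb
    named_scope :accepted, :conditions => {:status => ACCEPTED}

    # usage
    person.connections.accepted.map(&:contact)   # accepted contacts for a person
    group.memberships.accepted.map(&:person)     # accepted members of a group

    The downside is that map returns a plain Array rather than an association proxy, so further scopes can no longer be chained onto the result; whether that matters depends on how the collections are used.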

    Read the article

  • Complicated .NET factory design

    - by Tom W
    Hello SO; I'm planning to ask a fairly elaborate question that is also something of a musing here, so bear with me... I'm trying to design a factory implementation for a simulation application. The simulation will consist of different sorts of entities, i.e. it is not a homogeneous simulation in any respect. As a result, there will be numerous very different concrete implementations, and only the very general properties will be abstracted at the top level.

    What I'd like to be able to do is create new simulation entities by calling a method on the model with a series of named arguments representing the parameters of the entity, and have the model infer what type of object is being described by the inbound parameters (from the names of the parameters and potentially the sequence they occur in) and call a factory method on the appropriate derived class. For example, if I pass the model a pair of parameters (Param1=5000, Param2="Bacon"), I would like it to infer that the names Param1 and Param2 'belong' to the class "Blob1" and call a shared function "getBlob1" with named parameters Param1:=5000, Param2:="Bacon"; whereas if I pass the model (Param1=5000, Param3=50), it would call a similar factory method for Blob2, because Param1 and Param3 in that order 'belong' to Blob2.

    I foresee several issues to resolve:
    1) Whether or not I can reflect on the available types with string parameter names, and how to do this if it's possible.
    2) Whether or not there's a neat way of doing the appropriate constructor inference from the combinatoric properties of the argument list, or whether I'm going to have to bodge something to do it 'by hand'.
    3) If possible, I'd like the model class to be able to accept parameters as parameters rather than as some collection of keys and values, which would require the model to expose a large number of parametrised methods at runtime without me having to code them explicitly - presumably one for every factory method available in the relevant namespace.

    What I'm really asking is how you'd go about implementing such a system, rather than whether or not it's fundamentally possible. I don't have the foresight or experience with .NET reflection to be able to figure out a way by myself. Hopefully this will prove an informative discussion.
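    Since the question explicitly asks how one might implement this, here is a minimal C# sketch of the reflection side (issue 1) together with a crude name-matching rule (issue 2). The Blob1/Blob2 types and the GetBlob* naming follow the question; everything else - matching solely on the set of parameter names, passing arguments as a dictionary - is an assumption rather than a recommendation:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;

    // Hypothetical entity types, mirroring the Blob1/Blob2 idea from the question.
    class Blob1
    {
        public static Blob1 GetBlob1(int param1, string param2) { return new Blob1(); }
    }

    class Blob2
    {
        public static Blob2 GetBlob2(int param1, int param3) { return new Blob2(); }
    }

    static class EntityFactory
    {
        // Finds the unique public static factory method whose parameter names match
        // the supplied argument names, then invokes it with those values.
        public static object Create(IDictionary<string, object> args)
        {
            var matches =
                (from type in Assembly.GetExecutingAssembly().GetTypes()
                 from method in type.GetMethods(BindingFlags.Public | BindingFlags.Static)
                 let names = method.GetParameters().Select(p => p.Name).ToArray()
                 where names.Length == args.Count && names.All(args.ContainsKey)
                 select new { method, names }).ToList();

            if (matches.Count != 1)
                throw new ArgumentException("No unique factory method matches the supplied parameter names.");

            // Reorder the supplied values into the positional order the factory expects.
            object[] positional = matches[0].names.Select(n => args[n]).ToArray();
            return matches[0].method.Invoke(null, positional);
        }
    }

    class Program
    {
        static void Main()
        {
            // (Param1=5000, Param2="Bacon") should resolve to Blob1.GetBlob1.
            // Case-insensitive keys so "Param1" matches the declared name "param1".
            var args = new Dictionary<string, object>(StringComparer.OrdinalIgnoreCase)
            {
                { "Param1", 5000 }, { "Param2", "Bacon" }
            };
            Console.WriteLine(EntityFactory.Create(args).GetType().Name);   // Blob1
        }
    }

    A real implementation would still need to decide how to handle ambiguity, parameter ordering, and type conversion, which this sketch deliberately ignores; it is only meant to show that the reflection part is straightforward.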

    Read the article

  • Working with Timelines with LINQ to Twitter

    - by Joe Mayo
    When first working with the Twitter API, I thought that using SinceID would be an effective way to page through timelines. In practice it doesn't work well for various reasons. To explain why, Twitter published an excellent document that is a must-read for anyone working with timelines: Twitter Documentation: Working with Timelines

    This post shows how to implement the recommended strategies in that document by using LINQ to Twitter. You should read the document in its entirety before moving on, because my explanation will start at the bottom and work back up to the top in relation to the Twitter document. What follows is an explanation of SinceID, MaxID, and how they come together to help you efficiently work with Twitter timelines.

    The Role of SinceID

    Specifying SinceID says to Twitter, "Don't return tweets earlier than this". What you want to do is store this value after every timeline query set so that it can be reused on the next set of queries. The next section will explain what I mean by query set, but a quick explanation is that it's a loop that gets all new tweets. The SinceID is a backstop to avoid retrieving tweets that you already have. Here's some initialization code that includes a variable named sinceID that will be used to populate the SinceID property in subsequent queries:

    // last tweet processed on previous query set
    ulong sinceID = 210024053698867204;

    ulong maxID;
    const int Count = 10;
    var statusList = new List<Status>();

    Here, I've hard-coded the sinceID variable, but this is where you would initialize sinceID from whatever storage you choose (i.e. a database). The first time you ever run this code, you won't have a value from a previous query set. Initially setting it to 0 might sound like a good idea, but what if you're querying a timeline with lots of tweets? Because of the number of tweets and rate limits, your query set might take a very long time to run. A caveat might be that Twitter won't return an entire timeline back to Tweet #0, but rather only go back a certain period of time, the limits of which are documented for individual Twitter timeline API resources. So, initializing SinceID to too low a number can result in a lot of initial tweets, yet there is a limit to how far you can go back. What you're trying to accomplish in your application should guide you in how to initially set SinceID. I have more to say about SinceID later in this post.

    The other variables initialized above include the declarations for maxID, Count, and statusList. The statusList variable is a holder for all the timeline tweets collected during this query set. You can set Count to any value you want as the largest number of tweets to retrieve, as defined by individual Twitter timeline API resources. To effectively page results, you'll use the maxID variable to set the MaxID property in queries, which I'll discuss next.

    Initializing MaxID

    On your first query of a query set, MaxID will be whatever the most recent tweet is that you get back. Further, you don't know what MaxID is until after the initial query. The technique used in this post is to do an initial query and then use the results to figure out what the next MaxID will be. Here's the code for the initial query:

    var userStatusResponse =
        (from tweet in twitterCtx.Status
         where tweet.Type == StatusType.User &&
               tweet.ScreenName == "JoeMayo" &&
               tweet.SinceID == sinceID &&
               tweet.Count == Count
         select tweet)
        .ToList();

    statusList.AddRange(userStatusResponse);

    // first tweet processed on current query
    maxID = userStatusResponse.Min(
        status => ulong.Parse(status.StatusID)) - 1;

    The query above sets both SinceID and Count properties. As explained earlier, Count is the largest number of tweets to return, but the number can be less. A couple of reasons why the number of tweets returned could be less than Count include the fact that the user, specified by ScreenName, might not have tweeted Count times yet, or might not have tweeted at least Count times within the maximum number of tweets that can be returned by the Twitter timeline API resource. Another reason could be that there aren't Count tweets between now and the tweet ID specified by sinceID. Setting SinceID constrains the results to only those tweets that occurred after the specified tweet ID, assigned via the sinceID variable in the query above.

    The statusList is an accumulator of all tweets received during this query set. To simplify the code, I left out some logic to check whether there were no tweets returned. If the query above doesn't return any tweets, you'll receive an exception when trying to perform operations on an empty list. Yeah, I cheated again.

    Besides querying initial tweets, what's important about this code is the final line that sets maxID. It retrieves the lowest numbered status ID in the results. Since the lowest numbered status ID is for a tweet we already have, the code decrements the result by one to keep from asking for that tweet again. Remember, SinceID is not inclusive, but MaxID is. The maxID variable is now set to the highest possible tweet ID that can be returned in the next query. The next section explains how to use MaxID to help get the remaining tweets in the query set.

    Retrieving Remaining Tweets

    Earlier in this post, I defined a term that I called a query set. Essentially, this is a group of requests to Twitter that you perform to get all new tweets. A single query might not be enough to get all new tweets, so you'll have to start at the top of the list that Twitter returns and keep making requests until you have all new tweets. The previous section showed the first query of the query set. The code below is a loop that completes the query set:

    do
    {
        // now add sinceID and maxID
        userStatusResponse =
            (from tweet in twitterCtx.Status
             where tweet.Type == StatusType.User &&
                   tweet.ScreenName == "JoeMayo" &&
                   tweet.Count == Count &&
                   tweet.SinceID == sinceID &&
                   tweet.MaxID == maxID
             select tweet)
            .ToList();

        if (userStatusResponse.Count > 0)
        {
            // first tweet processed on current query
            maxID = userStatusResponse.Min(
                status => ulong.Parse(status.StatusID)) - 1;

            statusList.AddRange(userStatusResponse);
        }
    }
    while (userStatusResponse.Count != 0 && statusList.Count < 30);

    Here we have another query, but this time it includes the MaxID property. The SinceID property prevents reading tweets that we've already read, and Count specifies the largest number of tweets to return. Earlier, I mentioned how it was important to check how many tweets were returned, because failing to do so will result in an exception when subsequent code runs on an empty list. The code above protects against this problem by only working with the results if Twitter actually returns tweets. Reasons why there wouldn't be results include: the first query might have retrieved all the new tweets, leaving nothing more to get, or there might not have been any new tweets between the SinceID and MaxID settings of the most recent query. The code for loading the returned tweets into statusList and getting the maxID is the same as previously explained. The important point here is that MaxID is being reset, not SinceID. As explained in the Twitter documentation, paging occurs from the newest tweets to oldest, so setting MaxID lets us move from the most recent tweets down to the oldest as specified by SinceID.

    The two loop conditions cause the loop to continue as long as tweets are being read and a maximum number of tweets has not been reached. Logically, you want to stop reading when you've read all the tweets, and that's indicated by the fact that the most recent query did not return results. I put in the check to stop after 30 tweets are reached to keep the demo from running too long – in the console the response scrolls past the available buffer and I wanted you to be able to see the complete output. Yet, there's another point to be made about constraining the number of items you return at one time. The Twitter API has rate limits, and making too many queries per minute will result in an error from Twitter that LINQ to Twitter raises as an exception. To use the API properly, you'll have to ensure you don't exceed this threshold. Looking at statusList.Count as done above is rather primitive, but you can implement your own logic to properly manage your rate limit. Yeah, I cheated again.

    Summary

    Now you know how to use LINQ to Twitter to work with Twitter timelines. After reading this post, you have a better idea of the role of SinceID - the oldest tweet already received. You also know that MaxID is the largest tweet ID to retrieve in a query. Together, these settings allow you to page through results via one or more queries. You also understand what factors affect the number of tweets returned and considerations for potential error handling logic. The full example of the code for this post is included in the downloadable source code for LINQ to Twitter.

    @JoeMayo

    Read the article
