Search Results

Search found 27396 results on 1096 pages for 'mysql query'.


  • Safe to KILL a MySQL process REPLACEing records in a large MyISAM table?

    - by threecheeseopera
    I have a REPLACE query running for a few days now on a few MyISAM tables, the largest having 20+ million records. I need it to stop. It is, basically: REPLACE INTO really_large_table (a,b,c,d) SELECT e,f,g,h FROM big_table INNER JOIN huge_table ON big_table.x LIKE CONCAT('%', huge_table.y, '%'); I need to KILL it, and I am worried that I may corrupt really_large_table. Because the sub-query itself takes a significant amount of time, the REPLACEing probably occurs (relatively) infrequently; if this is true, does this make it less likely for the data to become corrupted? For the curious, here is the SO question asked about the query I am trying to kill. (A sketch of how such a statement can be located and stopped follows this entry.)

    Read the article
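
    A minimal sketch of how a runaway statement like this is usually located and stopped, assuming a second connection with the PROCESS privilege and the right to kill the other session. KILL QUERY stops only the running statement while keeping the connection alive; KILL CONNECTION drops the whole session. MyISAM has no transaction rollback, so rows already replaced stay in place, and CHECK TABLE afterwards is a cheap way to confirm the table is still healthy. The thread id 12345 is a placeholder.

        -- Find the thread id of the long-running REPLACE.
        SHOW FULL PROCESSLIST;

        -- Stop just the statement, keeping the client connection open ...
        KILL QUERY 12345;

        -- ... or drop the whole connection instead.
        -- KILL CONNECTION 12345;

        -- Sanity-check the target table afterwards.
        CHECK TABLE really_large_table;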

  • Query to retrieve records in alphabetical order, except for n predefined items which must be on top

    - by Ashraf Bashir
    I need to retrieve all records ordered alphabetically, except for a predefined list of values whose records should appear first, in a given predefined order; all other records should then be sorted alphabetically on the same column. For instance, assume we have a table called Names and the predefined list is ('Mathew', 'Ashraf', 'Jack'), i.e. the names whose records should be listed first, in that order, with everything else following alphabetically. Which query could produce this custom order? P.S. I'm using MySQL. Here's my attempt, based on a request in the comments: (SELECT * FROM Names WHERE Name IN ('Mathew', 'Ashraf', 'Jack')) UNION (SELECT * FROM Names WHERE Name NOT IN ('Mathew', 'Ashraf', 'Jack') ORDER BY Name ASC); the first part of the result wasn't ordered as required. (A FIELD()-based approach is sketched after this entry.)

    Read the article
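
    One common way to express this kind of custom ordering in MySQL, shown here as a minimal sketch against the Names table from the question, is to sort on FIELD(): it returns the 1-based position of the value in the list and 0 when the value is not listed, so the listed names can be pulled to the top and kept in list order while everything else falls back to alphabetical sorting.

        SELECT *
        FROM Names
        ORDER BY
            FIELD(Name, 'Mathew', 'Ashraf', 'Jack') = 0,   -- 0 (false) for listed names, so they sort first
            FIELD(Name, 'Mathew', 'Ashraf', 'Jack'),       -- keeps Mathew, Ashraf, Jack in that order
            Name ASC;                                      -- alphabetical order for everything else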

  • How can I query only __key__ on a Google App Engine PolyModel child?

    - by Gabriel
    So the situation is: I want to optimize my code a bit for counting records. I have a parent Model class Base, a PolyModel class Entry, and a child class of Entry called Article. How would I query Article keys so I can reduce the query load but still get the Article count? My first thought was to use: q = db.GqlQuery("SELECT __key__ from Article where base = :1", i_base) but it turns out GqlQuery doesn't like that, because articles are actually stored in a table called Entry. Would it be possible to query on the class attribute? Something like: q = db.GqlQuery("select __key__ from Entry where base = :1 and :2 in class", i_base, 'Article') Neither of these works. It turns out the answer is even easier, but I am going to finish this question because I looked everywhere for this: q = db.GqlQuery("select __key__ from Entry where base = :1 and class = :2", i_base, 'Article')

    Read the article

  • Strange SQL query result from MySQL and from PHP mysqli_query

    - by qinHaiXiang
    This is the query as echoed from the PHP web page: SELECT DISTINCT FT.file_type_name AS type,FT.file_type_en AS tp,FT.file_type_id AS fti, MATCH(keywords) AGAINST ('words <2' IN BOOLEAN MODE ) AS score FROM movie AS M,file_type AS FT WHERE MATCH (keywords) AGAINST ('words <2' IN BOOLEAN MODE ) AND M.type_cn = FT.file_type_id HAVING score >=1 ORDER BY FT.file_type_order; Running this query in the MySQL tool HeidiSQL I get only two rows, with scores 1.66666 and 2. If I remove the HAVING clause I get three rows, one of which has a score less than 1. But the same query run through PHP's mysqli_query() returns all three records, and the score that was less than 1 becomes 1. What is the problem? Any tips would be appreciated. Thank you very much! (A way to narrow the problem down is sketched after this entry.)

    Read the article
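
    A hedged way to narrow this down, keeping the question's table and column names as given: move the score filter out of HAVING and into an outer query, so there is no ambiguity about when the filter is applied, and run the identical statement from both HeidiSQL and mysqli_query(). If the two clients still disagree, the difference is almost certainly in how the client fetches or casts the floating-point score rather than in the query itself.

        SELECT t.type, t.tp, t.fti, t.score
        FROM (
            SELECT DISTINCT FT.file_type_name  AS type,
                            FT.file_type_en    AS tp,
                            FT.file_type_id    AS fti,
                            FT.file_type_order AS type_order,
                            MATCH(keywords) AGAINST ('words <2' IN BOOLEAN MODE) AS score
            FROM movie AS M
            INNER JOIN file_type AS FT ON M.type_cn = FT.file_type_id
            WHERE MATCH(keywords) AGAINST ('words <2' IN BOOLEAN MODE)
        ) AS t
        WHERE t.score >= 1        -- explicit filter instead of HAVING
        ORDER BY t.type_order;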

  • Can I join between two MySQL tables stored on separate machines?

    - by CuriousCoder
    I have a relatively light query that needs information from a local MySQL table along with another MySQL table which is stored on a physically separate machine (on the same network). I'm keen to avoid setting up replication just to facilitate this light query, which only needs to be executed once a day. Is there any way I can join with a table on a remote machine in one query, or run a SELECT INTO a local table? Note: I'm using C# & .NET 4. (One MySQL-side option, the FEDERATED storage engine, is sketched after this entry.)

    Read the article
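
    One MySQL-side option is the FEDERATED storage engine: a local table definition that proxies a table living on another server, which can then be joined like any local table. The sketch below uses hypothetical table, column and connection details, and assumes the FEDERATED engine is enabled on the local server (it is disabled by default in many builds); performance is acceptable for light, infrequent queries, but every row access goes over the network.

        -- Local proxy for a table that physically lives on the remote machine.
        CREATE TABLE remote_orders (
            id          INT NOT NULL,
            customer_id INT NOT NULL,
            total       DECIMAL(10,2),
            PRIMARY KEY (id)
        ) ENGINE=FEDERATED
          CONNECTION='mysql://report_user:secret@remote-host:3306/sales/orders';

        -- A plain join against a genuinely local table then works as usual.
        SELECT c.name, SUM(r.total) AS daily_total
        FROM customers AS c
        JOIN remote_orders AS r ON r.customer_id = c.id
        GROUP BY c.name;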

  • Access is refusing to run a query with a linked table?

    - by Mahmoud
    Hey all, I have 3 tables, as follows:

    cash_credit:
    Bank_Name      in_date      Com_Id   Amount
    America Bank   15/05/2010   1        200
    HSBC           17/05/2010   3        500

    Cheque_credit:
    Bank_Name      Cheque_Number    in_date      Com_Id   Amount
    America Bank   74835435-5435    15/05/2010   2        600
    HSBC           41415454-2851    17/05/2010   5        100

    Companies:
    com_id   Com_Name
    1        Ebay
    2        Google
    3        Facebook
    4        Amazon

    The Companies table is a linked table. When I tried to create a query as follows: SELECT cash_credit.Amount, Companies.Com_Name, cheque_credit.Amount FROM cheque_credit INNER JOIN (cash_credit INNER JOIN Companies ON cash_credit.com_id = Companies.com_id) ON cheque_credit.com_id = Companies.com_id; I get an error saying that my INNER JOIN is incorrect. The query was created using the Access 2007 query designer, and the error is "Type mismatch in expression". Then I thought it might be the INNER JOIN, so I tried a LEFT JOIN, and I get an error that this cannot be used: "JOIN expression is not supported". I am confused about where the problem is that is causing all this. (A hedged workaround is sketched after this entry.)

    Read the article
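
    A "Type mismatch in expression" on a join against a linked table usually means the com_id column is exposed with a different data type on the linked side (for example text) than in the local tables (number). The sketch below is a hedged workaround under that assumption: move the join conditions into the WHERE clause and convert explicitly with CLng (or CStr, going the other way). Also note that joining both credit tables to Companies in a single query pairs every cash row with every cheque row for the same company, so two separate queries, or a UNION, may be closer to what is actually wanted.

        SELECT cc.Amount AS CashAmount, comp.Com_Name, ch.Amount AS ChequeAmount
        FROM cash_credit AS cc, Cheque_credit AS ch, Companies AS comp
        WHERE cc.com_id = CLng(comp.com_id)    -- CLng assumes the linked com_id arrives as text
          AND ch.com_id = CLng(comp.com_id);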

  • MySQL: selecting totals as three fields from the same table in one query?

    - by coderama
    I have a table with various orders in it:

    ID | Date       | etc...
    1  | 2013-01-01 | etc
    2  | 2013-02-01 | etc
    3  | 2013-03-01 | etc
    4  | 2013-04-01 | etc
    5  | 2013-05-01 | etc
    6  | 2013-06-01 | etc
    7  | 2013-06-01 | etc
    8  | 2013-03-01 | etc
    9  | 2013-04-01 | etc
    10 | 2013-05-01 | etc

    I want a query that ends with the result:

    overallTotal | totalThisMonth | totalLastMonth
    10           | 2              | 1

    But I want to do this in one query! I am trying to find a way to use subqueries to do this. So far I have: SELECT * FROM (SELECT COUNT(*) AS overallTotal FROM ORDERS) How can I combine this with other subqueries so I get all the totals in one query? (One approach using conditional aggregation is sketched after this entry.)

    Read the article
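
    A minimal sketch of one way to get all three totals in a single statement, using conditional aggregation rather than separate subqueries; it assumes Date is a DATE (or DATETIME) column and that "this month" and "last month" mean calendar months relative to the current date.

        SELECT
            COUNT(*) AS overallTotal,
            SUM(CASE WHEN `Date` >= DATE_FORMAT(CURDATE(), '%Y-%m-01')
                     THEN 1 ELSE 0 END) AS totalThisMonth,
            SUM(CASE WHEN `Date` >= DATE_FORMAT(CURDATE() - INTERVAL 1 MONTH, '%Y-%m-01')
                      AND `Date` <  DATE_FORMAT(CURDATE(), '%Y-%m-01')
                     THEN 1 ELSE 0 END) AS totalLastMonth
        FROM ORDERS;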

  • PHP, MySQL - can you distinguish between rows matched and rows affected?

    - by Renesis
    I am trying to write a PHP-MySQL database processor that is somewhat intelligent. When this processor decides it needs to make an update, I want to report whether it was really successful or not. I thought I could use mysql_affected_rows... // Example: // After running query "UPDATE mytable SET name='Test' WHERE ID=1" $result = mysql_affected_rows(); if ($result >= 1) { /* Success */ } If, for example, there was no row with ID=1, then $result would be 0. However, it turns out that PHP's mysql_affected_rows is the actual affected rows, and may still be 0 if the row exists but name was already "Test". (The PHP docs even say this is the case.) If I run this on the command line, I get the following meta information about the query: Query OK, 0 rows affected (0.01 sec) Rows matched: 1 Changed: 0 Warnings: 0 Is there any way for me to get that "Rows matched" value in PHP instead of the affected rows?

    Read the article

  • Want to exclude particular rows from a SELECT JOIN query... See description

    - by OM The Eternity
    I have a SELECT LEFT JOIN query which displays the rows with the latest changedone (it's a time), where "field" and "trackid" should not be equal between the joined rows, and "operation" should be 'UPDATE'. Below is the query I am talking about: SELECT j1.* FROM jos_audittrail j1 LEFT OUTER JOIN jos_audittrail j2 ON ( j1.trackid != j2.trackid AND j1.field != j2.field AND j1.changedone < j2.changedone ) WHERE j1.operation = 'UPDATE' AND j2.id IS NULL Now I don't want rows with two particular values in the "field" column, namely 'LastvisitDate' and 'hits'. But if I append the condition " AND j1.field != 'lastvistDate' AND j1.field != 'hits' " to the above query, I do not get any results at all. The table structure of jos_audittrail is: id, trackid, operation, oldvalue, newvalue, table_name, live, changedone (a time). I hope I have given the details properly; if something is still missing I will try to provide it in a better way. Please help me exclude the rows with those two values of "field". (A hedged sketch of where the filter would normally go is shown after this entry.)

    Read the article
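
    A hedged sketch of where such a filter would normally sit, using the question's own table and aliases: the exclusion belongs in the WHERE clause on j1, expressed here with NOT IN. If adding it really does return zero rows, the usual suspect is that the stored values do not match the literals exactly (the question itself spells the value both 'LastvisitDate' and 'lastvistDate'), so it is worth checking the distinct values of field first.

        -- Check exactly how the value is stored before filtering on it.
        SELECT DISTINCT field FROM jos_audittrail WHERE operation = 'UPDATE';

        SELECT j1.*
        FROM jos_audittrail j1
        LEFT OUTER JOIN jos_audittrail j2
               ON j1.trackid    != j2.trackid
              AND j1.field      != j2.field
              AND j1.changedone  < j2.changedone
        WHERE j1.operation = 'UPDATE'
          AND j2.id IS NULL
          AND j1.field NOT IN ('LastvisitDate', 'hits');   -- spelling must match the stored values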

  • How to run a progress-bar through an insert query?

    - by Gold
    I have this insert query: try { Cmd.CommandText = @"INSERT INTO BarcodTbl SELECT * FROM [Text;DATABASE=" + PathI + @"\].[Tmp.txt];"; Cmd.ExecuteNonQuery(); Cmd.Dispose(); } catch (Exception ex) { MessageBox.Show(ex.Message); } I have two questions: How can I run a progress-bar from the beginning to the end of the insert? If there is an error, I get the error exception and the action stops - the query stops and BarcodTbl is empty. How can I see the error and allow the query to continue filling the table?

    Read the article

  • How do I select the most recent entry in MySQL?

    - by ggfan
    I want to select the most recent entry from a table and see if that entry is exactly the same as the one the user is trying to enter. How do I write a query for "select * from the most recent entry of 'posting'"? $query="Select * FROM //confused here (SELECT * FROM posting ORDER BY date_added DESC) WHERE user_id='{$_SESSION['user_id']}' AND title='$title' AND price='$price' AND city='$city' AND state='$state' AND detail='$detail' "; $data = mysqli_query($dbc, $query); $row = mysqli_fetch_array($data); if(mysqli_num_rows($data)>0) { echo "You already posted this ad. Most likely caused by refreshing too many times."; echo "<br>"; $linkposting_id=$row['posting_id']; echo "See the <a href='ad.php?posting_id=$linkposting_id'>Ad</a>"; } else { ...insert into the dbc } (A SQL-side sketch of the "most recent entry" part is shown after this entry.)

    Read the article
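
    A minimal SQL-side sketch of the "most recent entry of posting" part, assuming date_added is the ordering column and posting_id is an auto-increment primary key used as a tiebreaker: ORDER BY ... DESC with LIMIT 1 picks the latest row, and the comparison against the submitted values can then be done either in the outer WHERE (as below, returning a row only when the latest posting matches) or in the application code. The ? placeholders stand in for the user-supplied values, which should be bound as prepared-statement parameters rather than interpolated into the query string.

        SELECT p.*
        FROM posting AS p
        WHERE p.posting_id = (
                SELECT posting_id
                FROM posting
                WHERE user_id = ?
                ORDER BY date_added DESC, posting_id DESC
                LIMIT 1
              )
          AND p.title = ? AND p.price = ? AND p.city = ?
          AND p.state = ? AND p.detail = ?;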

  • SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – SQL and 10-11 May SharePoint

    - by pinaldave
    There were lots of requests, sent to the email address specified in the article SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning, for more details about the classes. Here is the complete brochure for the courses. Two different courses are offered by Solid Quality Mentors: 1) Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning – Pinal Dave Date: May 12-14, 2010 Price: Rs. 14,000/person for 3 days Discount Code: 'SQLAuthority.com' Effective Price: Rs. 11,000/person for 3 days 2) SharePoint 2010 – Joy Rathnayake Date: May 10-11, 2010 Price: Rs. 11,000/person for 3 days Discount Code: 'SQLAuthority.com' Effective Price: Rs. 8,000/person for 3 days Download the complete PDF brochure. Additionally, there is a special program, SolidQ India Insider; I will provide the details for it very soon. Please do send me an email if you need any additional details. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • Big Data – Buzz Words: Importance of Relational Database in Big Data World – Day 9 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what HDFS is. In this article we will take a quick look at the importance of the relational database in the Big Data world.

    A Big Question? Here are a few questions I have often received since the beginning of the Big Data series: Does the relational database have no place in the story of Big Data? Is the relational database no longer relevant as Big Data evolves? Is the relational database not capable of handling Big Data? Is it true that one no longer has to learn about relational data if Big Data is the final destination? Every single time I hear that someone wants to learn about Big Data and is no longer interested in learning about relational databases, I find it a bit far-fetched. I am not here to give the ambiguous answer of "it depends". I am personally very clear that anyone who aspires to become a Big Data scientist or Big Data expert should learn about relational databases.

    NoSQL Movement The NoSQL movement of recent times happened because of two important advantages of NoSQL databases: performance and flexible schema. In my personal experience I have seen both of these advantages when using a NoSQL database. There are instances where I have found a relational database too restrictive, because my data is unstructured or uses datatypes my relational database does not support, and there are cases where a NoSQL solution performs much better than a relational database. I must say that I am a big fan of NoSQL solutions these days, but I have also seen occasions and situations where a relational database is still the perfect fit, even though the database is growing rapidly and has all the symptoms of Big Data.

    Situations Where the Relational Database Outperforms Ad-hoc reporting is one of the most common scenarios where NoSQL does not have an optimal solution. For example, reporting queries often need to aggregate on columns which are not indexed and which are chosen only while the report is being built; in this kind of scenario NoSQL databases (document stores, distributed key-value stores) often do not perform well. For ad-hoc reporting I have often found it much easier to work with relational databases. SQL is the most popular computer language of all time. I have been using it for over 10 years and I feel that I will be using it for a long time to come. There are plenty of tools, connectors and general awareness of the SQL language in the industry. Pretty much every programming language has drivers for SQL, and most developers learned this language during their school or college years. In many cases, writing a query in SQL is much easier than writing queries in the languages NoSQL stores support. I believe this is the current situation, but in the future it can reverse if NoSQL query languages become equally popular. ACID (Atomicity, Consistency, Isolation, Durability) – not all NoSQL solutions offer ACID compliance. There are always situations (for example banking transactions, eCommerce shopping carts, etc.) where, without ACID, operations can be invalid and database integrity can be at risk. Even when the data volume indeed qualifies as Big Data, there are always operations in the application which absolutely need an ACID-compliant, mature language.

    The Mixed Bag I have often heard the argument that all the big social media sites have nowadays moved away from relational databases. Actually this is not entirely true. While researching Big Data and relational databases, I found that many of the popular social media sites use Big Data solutions along with relational databases. Many use relational databases to deliver results to the end user at run time, and many still use a relational database as their major backbone. Here are a few examples: Facebook uses MySQL to display the timeline. (Reference Link) Twitter uses MySQL. (Reference Link) Tumblr uses sharded MySQL. (Reference Link) Wikipedia uses MySQL for data storage. (Reference Link) There are many more prominent organizations running large-scale applications that use a relational database along with various Big Data frameworks to satisfy their various business needs.

    Summary I believe that the RDBMS is like vanilla ice cream. Everybody loves it and everybody has it. NoSQL and other solutions are like chocolate ice cream or custom ice cream – there is a huge base which loves them and wants them, but not every ice cream maker can make them just right for everyone's taste. No matter how fancy an ice cream store is, there is always plain vanilla ice cream available there. In the same way, there are always cases and situations in the Big Data story where the traditional relational database is part of the whole picture. In real-world scenarios there will always be a need for relational database concepts and their ideology. It is extremely important to accept the relational database as one of the key components of Big Data instead of treating it as a substandard technology.

    Ray of Hope – NewSQL In this module we discussed that there are places where we need ACID compliance from our Big Data application and NoSQL will not support that out of the box. A new term has been coined for applications/tools which support most of the properties of the traditional RDBMS while also supporting Big Data infrastructure – NewSQL.

    Tomorrow In tomorrow's blog post we will discuss NewSQL. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using FIRST_VALUE and LAST_VALUE

    - by pinaldave
    Last week we asked a puzzle: SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY. The puzzle got very interesting participation, and the details of the winner are listed here. In this puzzle we received two very important pieces of feedback: the puzzle cleared up the concepts of FIRST_VALUE and LAST_VALUE for the participants, but as it was based on SQL Server 2012, many could not participate because they had not yet installed SQL Server 2012. I really appreciate the users' feedback and decided to come up with something that is fun and helps in learning a new feature of SQL Server 2012. Please read yesterday's blog post SQL SERVER – Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012 before continuing with this puzzle, as it is based on yesterday's post. Yesterday I ran the following query, which uses the functions FIRST_VALUE and LAST_VALUE: USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue, LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query will give us the following result. Puzzle: Now use a T-SQL self join, where the table is joined to itself, and get the same result without using the FIRST_VALUE or LAST_VALUE functions. (One possible shape of such an answer is sketched after this entry.) Hint: Introduction to JOINs – Basic of JOINs Self Join A new analytic functions in SQL Server Denali CTP3 – LEAD() and LAG() Rules: Leave a comment with your detailed answer by the Nov 21 blog post. Open world-wide (where Amazon ships books). If you blog about the puzzle's solution and you win, you win an additional surprise gift as well. Prizes: a print copy of my new book SQL Server Interview Questions Amazon|Flipkart. If you already have this book, you can opt for any of my other books: SQL Wait Stats [Amazon|Flipkart|Kindle] and SQL Programming [Amazon|Flipkart|Kindle]. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
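
    A heavily hedged sketch of one shape such an answer could take. It assumes the intended FstValue is the first SalesOrderDetailID across the whole filtered result (which is what FIRST_VALUE with that ORDER BY and no PARTITION BY returns) and that LstValue is meant to be the overall last one; it joins the table back to an aggregate of itself instead of using the analytic functions. The actual puzzle answer may differ, so treat this only as an illustration of the self-join idea.

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               agg.FstValue, agg.LstValue
        FROM Sales.SalesOrderDetail AS s
        CROSS JOIN (
            SELECT MIN(SalesOrderDetailID) AS FstValue,   -- stands in for FIRST_VALUE(...)
                   MAX(SalesOrderDetailID) AS LstValue    -- stands in for LAST_VALUE(...)
            FROM Sales.SalesOrderDetail
            WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ) AS agg
        WHERE s.SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO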

  • How To Backup Of MySQL Database Using PhpMyAdmin

    - by Jyoti
    It is very important to back up your MySQL database; you will probably realize it only when it is too late. A lot of web applications use MySQL for storing their content - blogs, and many other things. When you have all your content as html files on your web server [...]

    Read the article

  • Harmonizing Character Encoding Between Imported Data and MySQL

    MySQL's Latin-1 default encoding combined with MySQL 4.1.12's (or greater) UTF8 encoding allows the maximum number of character codes; however, incoming data with a different character encoding can still present problems. Rob Gravelle shows you how to avoid problems before a lot of work is required to undo the damage.

    Read the article

  • SQL SERVER – How to Set Variable and Use Variable in SQLCMD Mode

    - by Pinal Dave
    Here is a question I received the other day on the SQLAuthority Facebook page. Social media is a wonderful thing and I love the active conversation between blog readers and myself - I think social media adds a lot of human factor to any conversation. Here is the question - "I am using sqlcmd in SSMS - I am not sure how to declare a variable and pass it. For example, I have a database and it has a table; how can I make the table name dynamic and pass a different value every time?" Fantastic question, and here is its very simple answer. First of all, enable sqlcmd mode in SQL Server Management Studio as described in the following image. Now, in the query editor, type the following SQL: :SETVAR DatabaseName "AdventureWorks2012" :SETVAR SchemaName "Person" :SETVAR TableName "EmailAddress" USE $(DatabaseName); SELECT * FROM $(SchemaName).$(TableName); Note that I have set the database, schema and table as sqlcmd variables and I am executing the query using those parameters. Well, that was it; sqlcmd is very simple to master and it also aids in doing various tasks easily. If you have any other sqlcmd tips, please leave a comment and I will publish it with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: sqlcmd

    Read the article

  • MySQL Clustering in a Sandbox

    MySQL's unique architecture allows for plugin storage engines. There is the MyISAM storage engine, the ARCHIVE storage engine and the InnoDB storage engine; so it makes sense that MySQL's clustering solution involves a storage engine as well, namely the NDB (Network DataBase) storage engine. (A one-line example of placing a table in the cluster via this engine is sketched after this entry.)

    Read the article
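
    A minimal sketch, with a hypothetical table name, of what using that engine looks like from the SQL side: the only visible difference from an ordinary table is the ENGINE clause, which places the data on the cluster's data nodes instead of in local MyISAM or InnoDB files. This assumes a running NDB Cluster that the local mysqld is connected to.

        CREATE TABLE cluster_demo (
            id      INT NOT NULL PRIMARY KEY,
            payload VARCHAR(255)
        ) ENGINE=NDBCLUSTER;    -- NDB is accepted as an alias for NDBCLUSTER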

  • Getting Query Parameters in Javascript

    - by PhubarBaz
    I find myself needing to get query parameters that are passed into a web app on the URL quite often. At first I wrote a function that creates an associative array (aka object) with all of the parameters as keys and returns it. But then I was looking at the revealing module pattern, a nice javascript design pattern designed to hide private functions, and came up with a way to do this without even calling a function. What I came up with was this nice little object that automatically initializes itself into the same associative array that the function call did previously. // Creates associative array (object) of query params var QueryParameters = (function() {     var result = {};     if (window.location.search)     {         // split up the query string and store in an associative array         var params = window.location.search.slice(1).split("&");         for (var i = 0; i < params.length; i++)         {             var tmp = params[i].split("=");             result[tmp[0]] = unescape(tmp[1]);         }     }     return result; }()); Now all you have to do to get the query parameters is just reference them from the QueryParameters object. There is no need to create a new object or call any function to initialize it. var debug = (QueryParameters.debug === "true"); or if (QueryParameters["debug"]) doSomeDebugging(); or loop through all of the parameters. for (var param in QueryParameters) var value = QueryParameters[param]; Hope you find this object useful.

    Read the article

  • What proportion of DBA skills are platform-independent?

    - by Arkaaito
    Problem: we've grown to the point where we need a real DBA. Real DBAs are hard to find. Possible solution: I know someone with extensive experience (currently working on MSSQL) who is looking for a new DBA job. He might be willing to consider working for a MySQL shop (I haven't asked). Snag: I don't know to what extent MSSQL DBA skills map to MySQL DBA skills. I'm a developer, so I know enough about MySQL to develop apps which use it (including the basic performance-tuning, index selection, etc.), do schema design, and perform simple utility tasks (backup with mysqldump, scripting, etc.). I don't know anything about MSSQL. Nor do I actually know much about the boundaries of the DBA role. Can anyone with more experience - perhaps as a DBA - weigh in on this? Are there enough similarities between MSSQL and MySQL that it's worth asking a MSSQL DBA if he'd be interested in applying for a MySQL position?

    Read the article

  • SQL SERVER – Plan Cache and Data Cache in Memory

    - by pinaldave
    I get the following question almost all the time when I go for consultations or training, and I often end up providing the scripts to my clients and attendees. Instead of writing a new blog post each time, in this single blog post I am going to cover both scripts and link to the original blog posts where each of them was explained. Plan Cache in Memory: USE AdventureWorks GO SELECT [text], cp.size_in_bytes, plan_handle FROM sys.dm_exec_cached_plans AS cp CROSS APPLY sys.dm_exec_sql_text(plan_handle) WHERE cp.cacheobjtype = N'Compiled Plan' ORDER BY cp.size_in_bytes DESC GO Further explanation of this script is over here: SQL SERVER – Plan Cache – Retrieve and Remove – A Simple Script Data Cache in Memory: USE AdventureWorks GO SELECT COUNT(*) AS cached_pages_count, name AS BaseTableName, IndexName, IndexTypeDesc FROM sys.dm_os_buffer_descriptors AS bd INNER JOIN ( SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc FROM ( SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID FROM sys.allocation_units AS au INNER JOIN sys.partitions AS p ON au.container_id = p.hobt_id AND (au.TYPE = 1 OR au.TYPE = 3) UNION ALL SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID FROM sys.allocation_units AS au INNER JOIN sys.partitions AS p ON au.container_id = p.partition_id AND au.TYPE = 2 ) AS s_obj LEFT JOIN sys.indexes i ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID ) AS obj ON bd.allocation_unit_id = obj.allocation_unit_id WHERE database_id = DB_ID() GROUP BY name, index_id, IndexName, IndexTypeDesc ORDER BY cached_pages_count DESC; GO Further explanation of this script is over here: SQL SERVER – Get Query Plan Along with Query Text and Execution Count Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Memory

    Read the article

  • SQL SERVER – Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video

    - by pinaldave
    SQL Server Management Studio returns results in Grid View, Text View, or to a file. When we copy results from Grid View to Excel, there is a common complaint that the column headers displayed in the resultset are not copied to Excel. I often spend time performance tuning databases and I run many DMVs in SSMS to get a quick view of the server. In my case I almost always need the column headers when I copy my data to Excel or any other place. SQL Server Management Studio has two different ways to do this. Method 1: Ad-hoc. When the result is rendered you can right-click on the resultset and click on Copy Header. This will copy the headers along with the resultset. Additionally, you can use the shortcut key CTRL+SHIFT+C for copying column headers along with the resultset. Method 2: Option setting at the SSMS level. This is an SSMS-level setting and I keep this option always selected, as I often need the column headers when I select the resultset. Go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> check the box "Include column header when copying or saving the results." Both of the methods are discussed in the following SQL in Sixty Seconds video. Here is the code used in the video. Related Tips in SQL in Sixty Seconds: Copy Column Headers in Query Analyzers in Result Set Getting Columns Headers without Result Data – SET FMTONLY ON If we like your idea we promise to share with you educational material. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • SQL SERVER – Get File Statistics Using fn_virtualfilestats

    - by pinaldave
    Quite often when I am staring at my SSMS I wonder what is going on under the hood in my SQL Server. I often want to know which database is very busy and which database is a bit slow because of IO issues. Sometimes I think at the file level as well: I want to know which MDF or NDF is busiest and doing most of the work. The following query gets this information very quickly: SELECT DB_NAME(vfs.DbId) DatabaseName, mf.name, mf.physical_name, vfs.BytesRead, vfs.BytesWritten, vfs.IoStallMS, vfs.IoStallReadMS, vfs.IoStallWriteMS, vfs.NumberReads, vfs.NumberWrites, (Size*8)/1024 Size_MB FROM ::fn_virtualfilestats(NULL,NULL) vfs INNER JOIN sys.master_files mf ON mf.database_id = vfs.DbId AND mf.FILE_ID = vfs.FileId GO When you run the above query you get a lot of valuable information, such as the size of each file and how many reads and writes have been done against it. It also displays the read/write volumes in bytes, and if there has been any IO stall (delay) on reads or writes, you can see that as well. I keep this handy but had not shared it on the blog earlier. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, T SQL, Technology Tagged: Statistics

    Read the article
