Search Results

Search found 40567 results on 1623 pages for 'database performance'.


  • Access Creates new file every time I Compact & Repair

    - by NickSentowski
    It didn't always do this, but ever since I split my database and made the front-end an ACCDE file, any time I try to compact and repair either file, a new file called "Database 1" is generated and my original file size doesn't change. Is this normal? My ACCDB is roughly 20 MB, and my ACCDE is just over 1 MB after being used the first time. Before opening, the ACCDE was only 600 KB. (I have lots of forms and queries, and regularly store PDF attachments.)


  • Techniques for querying a set of object in-memory in a Java application

    - by Edd Grant
    Hi All, We have a system which performs a 'coarse search' by invoking an interface on another system which returns a set of Java objects. Once we have received the search results I need to be able to further filter the resulting Java objects based on certain criteria describing the state of their attributes (e.g. from the initial objects, return all objects where x.y > z && a.b == c).

    The criteria used to filter the set of objects each time are partially user-configurable; by this I mean that users will be able to select the values and ranges to match on, but the attributes they can pick from will be a fixed set. The data sets are likely to contain <= 10,000 objects for each search. The search will be executed manually by the application user base probably no more than 2,000 times a day (approx). It's probably worth mentioning that all the objects in the result set are known domain object classes which have Hibernate and JPA annotations describing their structure and relationships.

    Off the top of my head I can think of three ways of doing this:

    1. For each search, persist the initial result set objects in our database, then use Hibernate to re-query them using the finer-grained criteria.
    2. Use an in-memory database (such as HSQLDB?) to query and refine the initial result set.
    3. Write some custom code which iterates over the initial result set and pulls out the desired records.

    Option 1 seems to involve a lot of to-ing and fro-ing across a network to a physical database (Oracle 10g), which might result in a lot of network and disk activity. It would also require the results from each search to be isolated from other result sets to ensure that different searches don't interfere with each other.

    Option 2 seems like a good idea in principle as it would allow me to do the finer query in memory and would not require the persistence of result data which would only be discarded after the search was complete. Gut feeling is that this could be pretty performant too, but it might result in larger memory overheads (which is fine, as we can be pretty flexible on the amount of memory our JVM gets).

    Option 3 could be very performant, but it is something I would like to avoid, as any code we write would require such careful testing that the time taken to achieve something flexible and robust enough would probably be prohibitive.

    I don't have time to prototype all three ideas, so I am looking for comments people may have on the three options above, plus any further ideas I have not considered, to help me decide which might be most suitable. I'm currently leaning toward option 2 (in-memory database), so I would be keen to hear from people with experience of querying POJOs in memory too. Hopefully I have described the situation in enough detail, but don't hesitate to ask if any further information is required to better understand the scenario. Cheers, Edd


  • How can I optimize this MySQL transaction within Java code?

    - by worldpython
    Dear All, I am new to the MySQL database. I have a large table (ID, ...). I select by ID frequently from Java code, and that puts a heavy load on the query:

        select from tableName where ID = someID

    Notes:

    1. The database could hold 100,000 records.
    2. I can't cache the result.
    3. ID is a primary key.
    4. I am trying to optimize the time needed to return the result of the query.

    Any ideas for optimization? Thanks in advance.
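
    As a rough illustration only (not from the original post; tableName and the literal 12345 are placeholders), the usual first checks for a lookup like this are whether a primary key exists and whether the optimizer actually uses it:

        -- Confirm the lookup really is a single-row primary-key lookup:
        -- EXPLAIN should report type = const and key = PRIMARY.
        EXPLAIN SELECT * FROM tableName WHERE ID = 12345;

        -- If the table has no explicit primary key yet, adding one gives
        -- MySQL (InnoDB) a clustered index to look the row up by.
        ALTER TABLE tableName ADD PRIMARY KEY (ID);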


  • Inbreeding-immune database structure

    - by Nick Savage
    I have an application that requires a "simple" family tree. I would like to be able to perform queries that will give me data for an entire family given one id from a member in the family. I say simple because it does not need to take into account adoption or any other obscurities.

    The requirements for the application are as follows:

    - Any two people will not be able to breed if they're from the same genetic line
    - Needs to allow for the addition of new family lines (new people with no previous family)
    - Need to be able to pull siblings and parents separately through queries

    I'm having trouble coming up with the proper structure for the database. So far I've come up with two solutions, but they're not very reliable and will probably get out of hand quite quickly.

    Solution 1 involves placing a family_ids field on the people table and storing a list of unique family ids. Each time two people breed, the lists are checked against each other to make sure no ids match; if everything checks out, the two lists are merged and set as the child's family_ids field. Example:

        Father (family_ids: (null)) breeds with Mother (family_ids: (213, 519))
            -> Child (family_ids: (213, 519)) breeds with Random Person (family_ids: (813, 712, 122, 767))
            -> Grandchild (family_ids: (213, 519, 813, 712, 122, 767))

    And so on and so forth. The problem I see with this is the lists becoming unreasonably large as time goes on.

    Solution 2 uses CakePHP's associations to declare:

        public $belongsTo = array(
            'Father' => array(
                'className'  => 'User',
                'foreignKey' => 'father_id'
            ),
            'Mother' => array(
                'className'  => 'User',
                'foreignKey' => 'mother_id'
            )
        );

    Now setting recursive to 2 will fetch the results of the mother and father, along with their mother and father, and so on all the way down the line. The problem with this route is that the data is in nested arrays and I'm unsure of how to efficiently work through the code.

    If anyone would be able to steer me in the direction of the most efficient way to handle what I want to achieve, that would be tremendously helpful. Any and all help is greatly appreciated and I'll gladly answer any questions anyone has. Thanks a lot.
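
    As a rough sketch of one common alternative (not from the original post; all table and column names are made up), parent links plus an ancestor table make the shared-ancestry check a single join:

        -- Illustrative schema only: people carry nullable parent links,
        -- and person_ancestor stores every (person, ancestor) pair.
        -- Populating each person as their own ancestor also catches the
        -- direct parent/child case with the same query.
        CREATE TABLE people (
            id        INT PRIMARY KEY,
            father_id INT NULL,
            mother_id INT NULL
        );

        CREATE TABLE person_ancestor (
            person_id   INT NOT NULL,
            ancestor_id INT NOT NULL,
            PRIMARY KEY (person_id, ancestor_id)
        );

        -- "Inbreeding" check: the two prospective parents share a genetic line
        -- if this returns any rows.
        SELECT a1.ancestor_id
        FROM person_ancestor a1
        JOIN person_ancestor a2
          ON a1.ancestor_id = a2.ancestor_id
        WHERE a1.person_id = 101   -- prospective parent 1 (placeholder id)
          AND a2.person_id = 202;  -- prospective parent 2 (placeholder id)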


  • Help Desk Database Design

    - by user237244
    The company I work at has very specific and unique needs for a help desk system, so none of the open source systems will work for us. That being the case, I created a custom system using PHP and MySQL. It's far from perfect, but it's infinitely better than the last system they were using; trust me! It meets most of our needs quite nicely, but I have a question about the way I have the database set up. Here are the main tables:

    - ClosedTickets
    - ClosedTicketSolutions
    - Locations
    - OpenTickets
    - OpenTicketSolutions
    - Statuses
    - Technicians

    When a user submits a help request, it goes in the "OpenTickets" table. As the technicians work on the problem, they submit entries with a description of what they've done. These entries go in the "OpenTicketSolutions" table. When the problem has been resolved, the last technician to work on the problem closes the ticket and it gets moved to the "ClosedTickets" table. All of the solution entries get moved to the "ClosedTicketSolutions" table as well. The other tables (Locations, Statuses, and Technicians) exist as a means of normalization (each location, status, and technician has an ID which is referenced).

    The problem I'm having now is this: when I want to view a list of all the open tickets, the SQL statement is somewhat complicated because I have to left join the "Locations", "Statuses", and "Technicians" tables. Fields from various tables need to be searchable as well. Check out how complicated the SQL statement is to search closed tickets for tickets submitted by anybody with a first name containing "John":

        SELECT ClosedTickets.*,
               date_format(ClosedTickets.EntryDate, '%c/%e/%y %l:%i %p') AS Formatted_Date,
               date_format(ClosedDate, '%c/%e/%y %l:%i %p') AS Formatted_ClosedDate,
               Concat(Technicians.LastName, ', ', Technicians.FirstName) AS TechFullName,
               Locations.LocationName,
               date_format(ClosedTicketSolutions.EntryDate, '%c/%e/%y') AS Formatted_Solution_EntryDate,
               ClosedTicketSolutions.HoursSpent AS SolutionHoursSpent,
               ClosedTicketSolutions.Tech_ID AS SolutionTech_ID,
               ClosedTicketSolutions.EntryText
        FROM ClosedTickets
        LEFT JOIN Technicians ON ClosedTickets.Tech_ID = Technicians.Tech_ID
        LEFT JOIN Locations ON ClosedTickets.Location_ID = Locations.Location_ID
        LEFT JOIN ClosedTicketSolutions ON ClosedTickets.TicketNum = ClosedTicketSolutions.TicketNum
        WHERE (ClosedTickets.FirstName LIKE '%John%')
        ORDER BY ClosedDate DESC, ClosedTicketSolutions.EntryDate, ClosedTicketSolutions.Entry_ID

    One thing that I'm not able to do right now is search both open and closed tickets at the same time. I don't think a union would work in my case. So I'm wondering if I should store the open and closed tickets in the same table and just have a field indicating whether or not the ticket is closed. The only problem I can foresee is that we already have so many closed tickets (nearly 30,000) that the whole system might perform slowly. Would it be a bad idea to combine the open and closed tickets?
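
    As a rough sketch of the single-table idea being asked about (not from the original post; column names are illustrative), a closed flag plus an index keeps the open-ticket screens fast while letting one query search everything:

        -- Illustrative only: one Tickets table with a closed flag replaces the
        -- OpenTickets/ClosedTickets pair; an index on the flag keeps "open
        -- tickets" listings fast even with tens of thousands of closed rows.
        CREATE TABLE Tickets (
            TicketNum   INT AUTO_INCREMENT PRIMARY KEY,
            FirstName   VARCHAR(50),
            Location_ID INT,
            Tech_ID     INT,
            EntryDate   DATETIME,
            ClosedDate  DATETIME NULL,
            IsClosed    TINYINT(1) NOT NULL DEFAULT 0,
            INDEX idx_isclosed (IsClosed)
        );

        -- Searching open and closed tickets together becomes one query:
        SELECT * FROM Tickets WHERE FirstName LIKE '%John%';

        -- ...and the existing "open only" screens just add a filter:
        SELECT * FROM Tickets WHERE IsClosed = 0;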


  • SQLAuthority News – MS Access Database is the Way to Go – April 1st Humor

    - by pinaldave
    First of all, today is April 1 - April Fool's Day, so I have written this post for some light entertainment. My friend has just sent me an email about why a person should go for the Access database. For a short background, I used to be an MS Access user once (I will not call myself an MS Access DBA), and I must say I had a good time with the database at that time. As time passed by, I moved from MS Access to SQL Server.

    Well, as for my friend's email, his reasons for using MS Access really made me laugh. MS Access may have a few points where it totally makes sense to use it. However, in the email that I received, there was not a single reason which was valid. In fact, I thought it was an April 1st joke - just delivered a little earlier. Let us see some of the reasons from that email. Thanks to Mahesh Bhesania for sending this email to me.

    - MS Access comes with lots of free stuff, e.g. MS Excel
    - MS Access is the most preferred desktop database system
    - MS Access can import data from MS Excel and SQL Server
    - MS Access provides a real time database
    - MS Access has a free IDE-to-VB Script
    - MS Access fits well in your hard drive

    I actually think that the above points are either incorrect beliefs of some users, or someone just wrote them to get some laughs out of such inaccurate data. For the same reason I decided to browse the Internet and do some research on the MS Access database to verify my thoughts. While searching on this subject, I found the following two interesting statements on the site "Microsoft Access Database, Why Choose It?":

    - Other software manufacturers are more likely to provide interfaces to MS Access than any other desktop database system
    - Microsoft Access consulting rates are typically lower for Access consultants compared to Oracle or SQL Server consultants

    The second one may be the worst reason for you to switch to MS Access if you are already a SQL Server consultant. With this cartoon, have you ever felt like you were one of these chickens at some point in time? I guess that moment might have happened just before the minute we say "I guess we were on the same page?" Does this mean we are IN the same table, or ON the same table?! (I accept it's a bad joke!) It is All Fools' Day after all, so just laugh! If you have something funny but non-offensive to share, just leave your comment here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com), cartoon source unknown. Filed under: Software Development, SQL, SQL Authority, SQL Humor, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: MS ACCESS


  • Perfect Your MySQL Database Administrators Skills

    - by Antoinette O'Sullivan
    With its proven ease-of-use, performance, and scalability, MySQL has become the leading database choice for web-based applications, used by high-profile web properties including Google, Yahoo!, Facebook, YouTube, Wikipedia and thousands of mid-sized companies. Many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL Database will broaden your career path with growing job demand.

    Hone your skills as a MySQL Database Administrator by taking the MySQL for Database Administrators course, which teaches you how to secure privileges, set resource limitations and access controls, and covers backup and recovery basics. You also learn how to create and use stored procedures, triggers and views. You can take this 5-day course through three delivery methods:

    - Training-on-Demand: Take this course at your own pace and at a time that suits you through this high-quality streaming video delivery. You also get to schedule time in a classroom environment to perform the hands-on exercises.
    - Live-Virtual: Attend a live instructor-led event from your own desk. Hundreds of events are already on the calendar in many time zones.
    - In-Class: Travel to an education center to attend this class. A sample of events is shown below:

        Location                     Date               Delivery Language
        Budapest, Hungary            26 November 2012   Hungarian
        Prague, Czech Republic       19 November 2012   Czech
        Warsaw, Poland               10 December 2012   Polish
        Belfast, Northern Ireland    26 November 2012   English
        London, England              26 November 2012   English
        Rome, Italy                  19 November 2012   Italian
        Lisbon, Portugal             12 November 2012   European Portuguese
        Porto, Portugal              21 January 2013    European Portuguese
        Amsterdam, Netherlands       19 November 2012   Dutch
        Nieuwegein, Netherlands      8 April 2013       Dutch
        Barcelona, Spain             4 February 2013    Spanish
        Madrid, Spain                19 November 2012   Spanish
        Mechelen, Belgium            25 February 2013   English
        Windhof, Luxembourg          19 November 2012   English
        Johannesburg, South Africa   9 December 2012    English
        Cairo, Egypt                 20 October 2012    English
        Nairobi, Kenya               26 November 2012   English
        Petaling Jaya, Malaysia      29 October 2012    English
        Auckland, New Zealand        5 November 2012    English
        Wellington, New Zealand      23 October 2012    English
        Brisbane, Australia          19 November 2012   English
        Edmonton, Canada             7 January 2013     English
        Vancouver, Canada            7 January 2013     English
        Ottawa, Canada               22 October 2012    English
        Toronto, Canada              22 October 2012    English
        Montreal, Canada             22 October 2012    English
        Mexico City, Mexico          10 December 2012   Spanish
        Sao Paulo, Brazil            10 December 2012   Brazilian Portuguese

    For more information on this course or any aspect of the MySQL curriculum, visit http://oracle.com/education/mysql.


  • But what version is the database now?

    - by BuckWoody
    When you upgrade your system to SQL Server 2008 R2, you'll know that the instance is at that version by using the standard commands like SELECT @@VERSION or EXEC xp_msver. My system came back with this info when I typed those:

        Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)
        Apr  2 2010 15:53:02
        Copyright (c) Microsoft Corporation
        Developer Edition on Windows NT 6.0 <X86> (Build 6002: Service Pack 2) (Hypervisor)

        Index  Name                 Internal_Value  Character_Value
        1      ProductName          NULL            Microsoft SQL Server
        2      ProductVersion       655410          10.50.1600.1
        3      Language             1033            English (United States)
        4      Platform             NULL            NT INTEL X86
        5      Comments             NULL            SQL
        6      CompanyName          NULL            Microsoft Corporation
        7      FileDescription      NULL            SQL Server Windows NT
        8      FileVersion          NULL            2009.0100.1600.01 ((KJ_RTM).100402-1540 )
        9      InternalName         NULL            SQLSERVR
        10     LegalCopyright       NULL            Microsoft Corp. All rights reserved.
        11     LegalTrademarks      NULL            Microsoft SQL Server is a registered trademark of Microsoft Corporation.
        12     OriginalFilename     NULL            SQLSERVR.EXE
        13     PrivateBuild         NULL            NULL
        14     SpecialBuild         104857601       NULL
        15     WindowsVersion       393347078       6.0 (6002)
        16     ProcessorCount       1               1
        17     ProcessorActiveMask  1               1
        18     ProcessorType        586             PROCESSOR_INTEL_PENTIUM
        19     PhysicalMemory       2047            2047 (2146934784)
        20     Product ID           NULL            NULL

    But a database's properties are separate from the instance's. After an upgrade, you always want to make sure that the compatibility option (which has much to do with how NULLs and other objects are treated) is at what you expect. For the most part, as long as the application can handle it, I set my compatibility levels to the latest version. For SQL Server 2008, that was "10.0" or "10". You can do this with the ALTER DATABASE command, or you can just right-click the database and select "Properties" and then "Database Options" in SQL Server Management Studio. To check the database compatibility level, I use this query:

        SELECT name, cmptlevel FROM sys.sysdatabases

    When I did that this morning I saw that the databases (all of them) were at 10.0 - not 10.5 like the instance. That's expected - we didn't revise the database format up with the instance for this particular release. Didn't want to catch you by surprise on that. While your databases should be at the "proper" level for your situation, you can't rely on the compatibility level to indicate the instance level. More info on the ALTER DATABASE command in SQL Server 2008 R2 is here: http://technet.microsoft.com/en-us/library/bb510680(SQL.105).aspx
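
    As a rough illustration of the ALTER DATABASE route mentioned above (not from the original post; the database name is a placeholder), this sets a database to the SQL Server 2008 level and reads it back from the catalog:

        -- Minimal sketch: set the compatibility level to SQL Server 2008 (100)
        -- and then confirm it from sys.databases.
        ALTER DATABASE [AdventureWorks] SET COMPATIBILITY_LEVEL = 100;

        SELECT name, compatibility_level
        FROM sys.databases
        WHERE name = 'AdventureWorks';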


  • Do fluent interfaces significantly impact runtime performance of a .NET application?

    - by stakx
    I'm currently occupying myself with implementing a fluent interface for an existing technology, which would allow code similar to the following snippet:

        using (var directory = Open.Directory(@"path\to\some\directory"))
        {
            using (var file = Open.File("foobar.html").In(directory))
            {
                // ...
            }
        }

    In order to implement such constructs, classes are needed that accumulate arguments and pass them on to other objects. For example, to implement the Open.File(...).In(...) construct, you would need two classes:

        // handles 'Open.XXX':
        public static class OpenPhrase
        {
            // handles 'Open.File(XXX)':
            public static OpenFilePhrase File(string filename)
            {
                return new OpenFilePhrase(filename);
            }

            // handles 'Open.Directory(XXX)':
            public static DirectoryObject Directory(string path)
            {
                // ...
            }
        }

        // handles 'Open.File(XXX).XXX':
        public class OpenFilePhrase
        {
            internal OpenFilePhrase(string filename)
            {
                _filename = filename;
            }

            // handles 'Open.File(XXX).In(XXX)':
            public FileObject In(DirectoryObject directory)
            {
                // ...
            }

            private readonly string _filename;
        }

    That is, the more constituent parts a statement such as the initial examples has, the more objects need to be created to pass arguments on to subsequent objects in the chain until the actual statement can finally execute.

    Question: I am interested in some opinions: does a fluent interface which is implemented using the above technique significantly impact the runtime performance of an application that uses it? By runtime performance, I refer to both speed and memory usage. Bear in mind that a potentially large number of temporary, argument-saving objects would have to be created for only very brief timespans, which I assume may put a certain pressure on the garbage collector. If you think there is significant performance impact, do you know of a better way to implement fluent interfaces?


  • How do I track down sporadic ASP.NET performance problems in a production environment?

    - by Steve Wortham
    I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast. But occasionally it is just really, really slow. I mean like 5-10 seconds of load time kind of slow. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company. But the problems continue. I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. If I can then look back at the logs, I can see exactly what my website was doing at the time of extreme slowness. Is there a .NET logging system that'll allow me to make calls into it with code while simultaneously tracking performance? What would you recommend?


  • How do I improve my performance with this singly linked list struct within my program?

    - by Jesus
    Hey guys, I have a program that does operations on sets of strings. We have to implement functions such as addition and subtraction of two sets of strings. We are supposed to get it down to the point where performance is O(N+M), where N and M are the two sets of strings. Right now, I believe my performance is at O(N*M), since for each element of N, I go through every element of M. I'm particularly focused on getting the subtraction to the proper performance; if I can get that down to the proper performance, I believe I can carry that knowledge over to the rest of the things I have to implement.

    The '-' operator is supposed to work like this, for example:

        Declare set1 to be an empty set.
        Declare set2 to be a set with { a b c } elements.
        Declare set3 to be a set with { b c d } elements.
        set1 = set2 - set3

    And now set1 is supposed to equal { a }. So basically, remove from set2 any element that also appears in set3. For the addition implementation (overloaded '+' operator), I also do the sorting of the strings (since we have to). All the functions work right now, by the way. So I was wondering if anyone could:

    a) Confirm that currently I'm getting O(N*M) performance
    b) Give me some ideas/implementations on how to improve the performance to O(N+M)

    Note: I cannot add any member variables or functions to the class strSet or to the node structure. The implementation of the main program isn't very important, but I will post the code for my class definition and the implementation of the member functions:

    strSet2.h (implementation of my class and struct):

        // Class to implement sets of strings
        // Implements operators for union, intersection, subtraction,
        // etc. for sets of strings
        // V1.1 15 Feb 2011 Added guard (#ifndef), deleted using namespace RCH

        #ifndef _STRSET_
        #define _STRSET_

        #include <iostream>
        #include <vector>
        #include <string>

        // Deleted: using namespace std; 15 Feb 2011 RCH

        struct node {
            std::string s1;
            node * next;
        };

        class strSet {
        private:
            node * first;

        public:
            strSet();                       // Create empty set
            strSet(std::string s);          // Create singleton set
            strSet(const strSet &copy);     // Copy constructor
            ~strSet();                      // Destructor

            int SIZE() const;
            bool isMember(std::string s) const;

            strSet operator + (const strSet& rtSide);   // Union
            strSet operator - (const strSet& rtSide);   // Set subtraction
            strSet& operator = (const strSet& rtSide);  // Assignment
        };  // End of strSet class

        #endif // _STRSET_

    strSet2.cpp (implementation of the member functions):

        #include <iostream>
        #include <vector>
        #include <string>
        #include "strset2.h"

        using namespace std;

        strSet::strSet() {
            first = NULL;
        }

        strSet::strSet(string s) {
            node *temp;
            temp = new node;
            temp->s1 = s;
            temp->next = NULL;
            first = temp;
        }

        strSet::strSet(const strSet& copy) {
            if (copy.first == NULL) {
                first = NULL;
            } else {
                node *n = copy.first;
                node *prev = NULL;
                while (n) {
                    node *newNode = new node;
                    newNode->s1 = n->s1;
                    newNode->next = NULL;
                    if (prev) {
                        prev->next = newNode;
                    } else {
                        first = newNode;
                    }
                    prev = newNode;
                    n = n->next;
                }
            }
        }

        strSet::~strSet() {
            if (first != NULL) {
                while (first->next != NULL) {
                    node *nextNode = first->next;
                    first->next = nextNode->next;
                    delete nextNode;
                }
            }
        }

        int strSet::SIZE() const {
            int size = 0;
            node *temp = first;
            while (temp != NULL) {
                size++;
                temp = temp->next;
            }
            return size;
        }

        bool strSet::isMember(string s) const {
            node *temp = first;
            while (temp != NULL) {
                if (temp->s1 == s) {
                    return true;
                }
                temp = temp->next;
            }
            return false;
        }

        strSet strSet::operator + (const strSet& rtSide) {
            strSet newSet;
            newSet = *this;
            node *temp = rtSide.first;
            while (temp != NULL) {
                string newEle = temp->s1;
                if (!isMember(newEle)) {
                    if (newSet.first == NULL) {
                        node *newNode;
                        newNode = new node;
                        newNode->s1 = newEle;
                        newNode->next = NULL;
                        newSet.first = newNode;
                    } else if (newSet.SIZE() == 1) {
                        if (newEle < newSet.first->s1) {
                            node *tempNext = newSet.first;
                            node *newNode;
                            newNode = new node;
                            newNode->s1 = newEle;
                            newNode->next = tempNext;
                            newSet.first = newNode;
                        } else {
                            node *newNode;
                            newNode = new node;
                            newNode->s1 = newEle;
                            newNode->next = NULL;
                            newSet.first->next = newNode;
                        }
                    } else {
                        node *prev = NULL;
                        node *curr = newSet.first;
                        while (curr != NULL) {
                            if (newEle < curr->s1) {
                                if (prev == NULL) {
                                    node *newNode;
                                    newNode = new node;
                                    newNode->s1 = newEle;
                                    newNode->next = curr;
                                    newSet.first = newNode;
                                    break;
                                } else {
                                    node *newNode;
                                    newNode = new node;
                                    newNode->s1 = newEle;
                                    newNode->next = curr;
                                    prev->next = newNode;
                                    break;
                                }
                            }
                            if (curr->next == NULL) {
                                node *newNode;
                                newNode = new node;
                                newNode->s1 = newEle;
                                newNode->next = NULL;
                                curr->next = newNode;
                                break;
                            }
                            prev = curr;
                            curr = curr->next;
                        }
                    }
                }
                temp = temp->next;
            }
            return newSet;
        }

        strSet strSet::operator - (const strSet& rtSide) {
            strSet newSet;
            newSet = *this;
            node *temp = rtSide.first;
            while (temp != NULL) {
                string element = temp->s1;
                node *prev = NULL;
                node *curr = newSet.first;
                while (curr != NULL) {
                    if (element < curr->s1)
                        break;
                    if (curr->s1 == element) {
                        if (prev == NULL) {
                            node *duplicate = curr;
                            newSet.first = newSet.first->next;
                            delete duplicate;
                            break;
                        } else {
                            node *duplicate = curr;
                            prev->next = curr->next;
                            delete duplicate;
                            break;
                        }
                    }
                    prev = curr;
                    curr = curr->next;
                }
                temp = temp->next;
            }
            return newSet;
        }

        strSet& strSet::operator = (const strSet& rtSide) {
            if (this != &rtSide) {
                if (first != NULL) {
                    while (first->next != NULL) {
                        node *nextNode = first->next;
                        first->next = nextNode->next;
                        delete nextNode;
                    }
                }
                if (rtSide.first == NULL) {
                    first = NULL;
                } else {
                    node *n = rtSide.first;
                    node *prev = NULL;
                    while (n) {
                        node *newNode = new node;
                        newNode->s1 = n->s1;
                        newNode->next = NULL;
                        if (prev) {
                            prev->next = newNode;
                        } else {
                            first = newNode;
                        }
                        prev = newNode;
                        n = n->next;
                    }
                }
            }
            return *this;
        }


  • Advantage database throws an exception when attempting to delete a record with a LIKE clause in the statement

    - by ChrisR
    The code below shows that a record is deleted when the SQL statement is:

        select * from test where qty between 50 and 59

    but the SQL statement:

        select * from test where partno like 'PART/005%'

    throws the exception:

        Advantage.Data.Provider.AdsException: Error 5072: Action requires read-write access to the table

    How can you reliably delete a record with a where clause applied? Note: I'm using Advantage Database v9.10.1.9, VS2008, .NET Framework 3.5 and WinXP 32 bit.

        using System.IO;
        using Advantage.Data.Provider;
        using AdvantageClientEngine;
        using NUnit.Framework;

        namespace NetworkEidetics.Core.Tests.Dbf {
            [TestFixture]
            public class AdvantageDatabaseTests {
                private const string DefaultConnectionString = @"data source={0};ServerType=local;TableType=ADS_CDX;LockMode=COMPATIBLE;TrimTrailingSpaces=TRUE;ShowDeleted=FALSE";
                private const string TestFilesDirectory = "./TestFiles";

                [SetUp]
                public void Setup() {
                    const string createSql = @"CREATE TABLE [{0}] (ITEM_NO char(4), PARTNO char(20), QTY numeric(6,0), QUOTE numeric(12,4)) ";
                    const string insertSql = @"INSERT INTO [{0}] (ITEM_NO, PARTNO, QTY, QUOTE) VALUES('{1}', '{2}', {3}, {4})";
                    const string filename = "test.dbf";
                    var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory);

                    using (var connection = new AdsConnection(connectionString)) {
                        connection.Open();

                        using (var transaction = connection.BeginTransaction()) {
                            using (var command = connection.CreateCommand()) {
                                command.CommandText = string.Format(createSql, filename);
                                command.Transaction = transaction;
                                command.ExecuteNonQuery();
                            }
                            transaction.Commit();
                        }

                        using (var transaction = connection.BeginTransaction()) {
                            for (var i = 0; i < 1000; ++i) {
                                using (var command = connection.CreateCommand()) {
                                    var itemNo = string.Format("{0}", i);
                                    var partNumber = string.Format("PART/{0:d4}", i);
                                    var quantity = i;
                                    var quote = i * 10;
                                    command.CommandText = string.Format(insertSql, filename, itemNo, partNumber, quantity, quote);
                                    command.Transaction = transaction;
                                    command.ExecuteNonQuery();
                                }
                            }
                            transaction.Commit();
                        }

                        connection.Close();
                    }
                }

                [TearDown]
                public void TearDown() {
                    File.Delete("./TestFiles/test.dbf");
                }

                [Test]
                public void CanDeleteRecord() {
                    const string sqlStatement = @"select * from test";
                    Assert.AreEqual(1000, GetRecordCount(sqlStatement));
                    DeleteRecord(sqlStatement, 3);
                    Assert.AreEqual(999, GetRecordCount(sqlStatement));
                }

                [Test]
                public void CanDeleteRecordBetween() {
                    const string sqlStatement = @"select * from test where qty between 50 and 59";
                    Assert.AreEqual(10, GetRecordCount(sqlStatement));
                    DeleteRecord(sqlStatement, 3);
                    Assert.AreEqual(9, GetRecordCount(sqlStatement));
                }

                [Test]
                public void CanDeleteRecordWithLike() {
                    const string sqlStatement = @"select * from test where partno like 'PART/005%'";
                    Assert.AreEqual(10, GetRecordCount(sqlStatement));
                    DeleteRecord(sqlStatement, 3);
                    Assert.AreEqual(9, GetRecordCount(sqlStatement));
                }

                public int GetRecordCount(string sqlStatement) {
                    var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory);
                    using (var connection = new AdsConnection(connectionString)) {
                        connection.Open();
                        using (var command = connection.CreateCommand()) {
                            command.CommandText = sqlStatement;
                            var reader = command.ExecuteExtendedReader();
                            return reader.GetRecordCount(AdsExtendedReader.FilterOption.RespectFilters);
                        }
                    }
                }

                public void DeleteRecord(string sqlStatement, int rowIndex) {
                    var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory);
                    using (var connection = new AdsConnection(connectionString)) {
                        connection.Open();
                        using (var command = connection.CreateCommand()) {
                            command.CommandText = sqlStatement;
                            var reader = command.ExecuteExtendedReader();
                            reader.GotoBOF();
                            reader.Read();
                            if (rowIndex != 0) {
                                ACE.AdsSkip(reader.AdsActiveHandle, rowIndex);
                            }
                            reader.DeleteRecord();
                        }
                        connection.Close();
                    }
                }
            }
        }


  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people who have had the same experience as me, where they must give an (estimated) performance report for porting a program from sequential to parallel code on some designated multicore hardware, with very little time given. For instance, if a 10K LoC sequential program executes on an Intel i7-3770K (not vectorized) in 100 ms, how long would it take to run if one ported the code to a Tesla C2075 with NVIDIA CUDA, given that all kinds of parallelization optimization techniques were applied? (And you're only given 2-4 days to report the performance, and assume that you don't know the algorithm at all. Or perhaps it'd be safer to just assume that finishing the job in that situation is impossible.)

    Therefore, I'm wondering: what is most likely the fastest way to produce such a performance report? Is it safe to calculate solely from the hardware's capabilities, such as peak GFLOPS and memory bandwidth? Is there a mathematical way to calculate it? If there is, please demonstrate your method with the corresponding problem description, the algorithm, and the target hardware's specifications. Or perhaps a tool already exists to (roughly) estimate the effort of porting code? (Please don't answer: "killing yourself is the fastest way.")
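
    One back-of-the-envelope approach to the "calculate solely from the hardware's capabilities" question is the roofline model; this formula is general knowledge rather than something from the post, and it assumes you can estimate the code's arithmetic intensity I (useful FLOPs per byte of memory traffic):

        % Roofline estimate: attainable throughput is capped either by the
        % chip's peak compute rate or by how fast memory can feed it.
        P_{\text{attainable}} = \min\bigl(P_{\text{peak}},\; B_{\text{mem}} \times I\bigr)
        % P_peak : peak floating-point rate of the target (GFLOP/s)
        % B_mem  : peak memory bandwidth (GB/s)
        % I      : arithmetic intensity of the kernel (FLOP/byte)
        % Estimated runtime is then roughly (total FLOPs) / P_attainable,
        % which gives an optimistic bound rather than a prediction.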


  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine knows the Best

    - by pinaldave
    This is a follow-up post to my earlier article SQL SERVER – Convert IN to EXISTS – Performance Talk. After reading all the comments I have received, I felt that I could write more on the same subject to clear a few things up. First, let us run the following four queries; all of them return exactly the same result set.

        USE AdventureWorks
        GO
        -- use of =
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID = (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of IN
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID IN (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of EXISTS
        SELECT *
        FROM HumanResources.Employee E
        WHERE EXISTS (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of JOIN
        SELECT *
        FROM HumanResources.Employee E
        INNER JOIN HumanResources.EmployeeAddress EA ON E.EmployeeID = EA.EmployeeID
        GO

    Let us compare the execution plans of the queries listed above. It is quite clear from the execution plans that in the case of IN, EXISTS and JOIN, the SQL Server engine is smart enough to figure out that a Merge Join is the optimal plan for the query and executes it. However, in the case of the Equal (=) operator, SQL Server is forced to use a Nested Loop and test each result of the inner query against the outer query, which cuts performance. Please note that I do not mean to suggest here that Nested Loop is bad or that Merge Join is better. This can very well vary with your machine and the amount of resources available on your computer. When I see the Equal (=) operator used in a query like the one above, I usually recommend checking whether IN, EXISTS or JOIN can be used instead. As I said, this can vary a great deal between systems. What is your take on the above queries? I believe the SQL Server engine is usually pretty smart about figuring out the ideal execution plan and using it.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
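
    As a rough way to quantify the difference described above without reading the graphical plans (this snippet is not from the original post), the I/O and time statistics of any two variants can be compared directly; it assumes the same AdventureWorks sample database:

        -- Turn on per-query statistics, then run any two of the variants
        -- back to back and compare logical reads and CPU time.
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        SELECT * FROM HumanResources.Employee E
        WHERE EXISTS (SELECT EA.EmployeeID
                      FROM HumanResources.EmployeeAddress EA
                      WHERE EA.EmployeeID = E.EmployeeID);

        SELECT * FROM HumanResources.Employee E
        INNER JOIN HumanResources.EmployeeAddress EA
            ON E.EmployeeID = EA.EmployeeID;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;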


  • SQL SERVER – Server Side Paging in SQL Server 2011 – Part2

    - by pinaldave
    The best part of having a blog is that the SQL community helps keep it running with new ideas. Earlier I wrote about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative, a very popular article on that subject. There I had used variables for the "number of rows" and the "number of pages". A blog reader sent me an email saying that in their organization these values are stored in a table, and asking whether the new syntax can read the data from that table. Absolutely YES! Here is the quick script:

        USE AdventureWorks2008R2
        GO
        CREATE TABLE PagingSetting (RowsPerPage INT, PageNumber INT)
        INSERT INTO PagingSetting (RowsPerPage, PageNumber)
        VALUES (10, 5)
        GO
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET (SELECT RowsPerPage * PageNumber FROM PagingSetting) ROWS
        FETCH NEXT (SELECT RowsPerPage FROM PagingSetting) ROWS ONLY
        GO

    This is really an easy trick. I also wrote a blog post comparing the performance over here: SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Paging
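
    As a rough variation (not from the post above), the same OFFSET/FETCH paging can be driven by local variables instead of a settings table; the (PageNumber - 1) arithmetic below assumes pages are numbered from 1 and is an assumption of this sketch:

        -- Minimal sketch: OFFSET/FETCH paging driven by local variables.
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5;

        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET (@PageNumber - 1) * @RowsPerPage ROWS
        FETCH NEXT @RowsPerPage ROWS ONLY;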


  • SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – SQL and 10-11 May SharePoint

    - by pinaldave
    There were lots of requests, sent via the email address specified in the article SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning, for more details about that blog post. Here is the complete brochure for the course. Two different courses are offered by Solid Quality Mentors:

    1) Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning – Pinal Dave
       Date: May 12-14, 2010
       Price: Rs. 14,000/person for 3 days
       Discount Code: 'SQLAuthority.com'
       Effective Price: Rs. 11,000/person for 3 days

    2) SharePoint 2010 – Joy Rathnayake
       Date: May 10-11, 2010
       Price: Rs. 11,000/person for 3 days
       Discount Code: 'SQLAuthority.com'
       Effective Price: Rs. 8,000/person for 3 days

    Download the complete PDF brochure. Additionally, there is a special program, SolidQ India Insider; I will provide the details for it very soon. Please do send me an email if you need any additional details.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology


  • Oracle Linux Delivers Top CPU Benchmark Results on Sun Blades

    - by sergio.leunissen
    From the Performance and Best Practices blog: fresh SPEC CPU2006 results for Sun Blade X6275 M2 Server Modules running Oracle Linux 5.5. The highlights:

    - The dual-node Sun Blade X6275 M2 server module, equipped with two Intel Xeon X5670 2.93 GHz processors per node and running the Oracle Enterprise Linux 5.5 operating system, delivered the best SPECint_rate2006 and SPECfp_rate2006 benchmark results for all systems with the Intel Xeon processor 5000 sequence.
    - With a SPECint_rate2006 benchmark result of 679, the Sun Blade X6275 M2 server module, with two compute nodes per blade, delivers maximum performance for space-constrained environments.
    - Comparing Oracle's dual-node blade to HP's dual-node blade server on single-node performance, the Sun Blade X6275 M2 server module's SPECfp_rate2006 score of 241 outperforms the best published HP ProLiant BL2X220c G5 server score by 3.2x.
    - A single node of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered a 37% improvement in SPECint_rate2006 benchmark results and a 22% improvement in SPECfp_rate2006 benchmark results compared to the previous-generation Sun Blade X6275 server module.
    - Both nodes of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered a 59% improvement on the SPECint_rate2006 benchmark and a 40% improvement on the SPECfp_rate2006 benchmark compared to the previous-generation Sun Blade X6275 server module.


  • SQL SERVER – GUID vs INT – Your Opinion

    - by pinaldave
    I think the title makes it clear what I am going to write about in this post. This is an age-old problem, and I want to compile a list stating the advantages and disadvantages of using GUID and INT as a Primary Key or Clustered Index or both (the usual case). Let me start the list by suggesting one advantage and one disadvantage for each.

    INT

    - Advantage: Numeric values (and specifically integers) are better for performance when used in joins, indexes and conditions. Numeric values are also easier for application users to understand if they are displayed.
    - Disadvantage: If your table is large, it is quite possible that it will run out of values - after some numeric value there will be no additional identity to use.

    GUID

    - Advantage: Unique across the server.
    - Disadvantage: String values are not as optimal as integer values for performance when used in joins, indexes and conditions. More storage space is required than for INT.

    Please note that I am looking to create a list of all the generic comparisons. There can be special cases where the stated information is incorrect; feel free to comment on those. Please leave your opinion and advice in the comment section. I will compile a final list and update this blog after a week. If you list your name in the post, I will also give due credit.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Data Storage, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
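
    As a rough illustration of the two choices above (not from the original post; table and constraint names are made up), here is how the two primary key styles look side by side, with NEWSEQUENTIALID() shown as a commonly used compromise:

        -- Illustrative only: the two key styles discussed in the post.
        CREATE TABLE OrdersInt (
            OrderID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );

        CREATE TABLE OrdersGuid (
            OrderID UNIQUEIDENTIFIER NOT NULL
                CONSTRAINT DF_OrdersGuid_OrderID DEFAULT NEWSEQUENTIALID()
                PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );
        -- NEWSEQUENTIALID() generates ever-increasing GUIDs, which avoids the
        -- page-split cost of random NEWID() values while keeping the
        -- cross-server uniqueness listed as GUID's advantage.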


  • Migrating SQL Server Databases – The DBA’s Checklist (Part 1)

    - by Sadequl Hussain
    It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones. They move around as an organisation's data grows, applications are enhanced or new versions of the database software are released. If nothing else, servers become old and unreliable and databases eventually need to find a new home. Consider the following scenarios:

    1. A new database application is rolled out in a production server from the development or test environment
    2. A copy of the production database needs to be installed in a test server for troubleshooting purposes
    3. A copy of the development database is regularly refreshed in a test server during the system development life cycle
    4. A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration
    5. One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different versions of SQL Server
    6. A database has to be restored from a backup file provided by a third party application vendor
    7. A backup of the database is restored in the same or a different instance for disaster recovery
    8. A database needs to be migrated within the same instance:
       a. Files are moved from direct attached storage to a storage area network
       b. The same database is copied under a different name for another application

    Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins or linked servers are only a few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself.

    Unless it is an in-place upgrade, typically the database is taken from a source server and installed in a destination instance. Most of the time, a full backup file is used for the rollout. The backup file is either provided to the DBA, or the DBA takes the backup and restores it in the target server. Sometimes the database is detached from the source and the files are copied to and attached in the destination.

    Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post-installation steps we are going to discuss below. Some of these steps apply in almost every scenario described above, while some will depend on the type of objects contained within the database. Also, the principles hold regardless of the number of databases involved.

    Step 1: Make a copy of data and log files when attaching and detaching

    When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following:

        Server: Msg 602, Level 21, State 50, Line 1
        Could not find row in sysindexes for database ID 6, object ID 1, index ID 1.
        Run DBCC CHECKTABLE on sysindexes.
        Connection Broken

    If you try to back up the attached database and restore it in the source, it will still fail. Similarly, if you restore the database in a newer version, it cannot be backed up or detached and put back in an older version of SQL Server. Unlike the detach and attach method, though, you do not lose the backup file or the original database here.

    When detaching and attaching a database, it is important that you keep all the log files available along with the data files. It is possible to attach a database without a log file and SQL Server can be instructed to create a new log file; however, this does not work if the database was detached while the primary filegroup was read-only. You will need all the log files in such cases.

    Step 2: Change database compatibility level

    Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version, unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL Server, the database retains the older version's compatibility level. The only time you would want to keep a database at an older compatibility level is when the code within your database is no longer supported by SQL Server. For example, outer joins with the *= or =* operators were still possible in SQL 2000 (with a warning message), but not in SQL 2005 any more. If your stored procedures or triggers are using this form of join, you would want to keep the database at an older compatibility level. For a list of compatibility issues between older and newer versions of SQL Server databases, refer to the Books Online under the sp_dbcmptlevel topic.

    Application developers and architects can help you decide whether you should change the compatibility level or not. You can always change the compatibility mode from the newest to an older version if necessary. To change the compatibility level, you can either use the database's properties in SQL Server Management Studio or use the sp_dbcmptlevel stored procedure. Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database at an older compatibility level. The following figure shows the error message I received when trying to run the "Disk Usage by Top Tables" report against a database. This database was hosted on a SQL Server 2005 system and still had compatibility mode 80 (SQL 2000).

    Continues…
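
    As a rough illustration of Step 2 (not part of the original article; the database name is a placeholder), the sp_dbcmptlevel procedure named above can both report and change the level:

        -- Check the current compatibility level of a database (placeholder name).
        EXEC sp_dbcmptlevel 'MyMigratedDB';

        -- Raise it to the SQL Server 2005 level (90) after the move,
        -- as described in Step 2.
        EXEC sp_dbcmptlevel 'MyMigratedDB', 90;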


  • SQL SERVER – Cleaning Up SQL Server Indexes – Defragmentation, Fillfactor – Video

    - by pinaldave
    Storing data non-contiguously on disk is known as fragmentation. Before learning to eliminate fragmentation, you should have a clear understanding of the types of fragmentation. When records are stored non-contiguously inside the page, it is called internal fragmentation. When the physical storage of pages and extents on disk is not contiguous, it is called external fragmentation. We can get both types of fragmentation using the DMV sys.dm_db_index_physical_stats. Here is the generic advice for reducing fragmentation:

    - If avg_fragmentation_in_percent > 5% and < 30%, then use ALTER INDEX REORGANIZE: this statement is the replacement for DBCC INDEXDEFRAG and reorders the leaf-level pages of the index in a logical order. As this is an online operation, the index is available while the statement is running.
    - If avg_fragmentation_in_percent > 30%, then use ALTER INDEX REBUILD: this is the replacement for DBCC DBREINDEX and rebuilds the index online or offline. In such cases, we can also use the drop and re-create index method. (Ref: MSDN)

    Here is a quick video which covers many of the above-mentioned topics. While Vinod and I were planning the Indexing course, we had plenty of fun and learning. We often recorded a few of our statements and just left them aside; afterwards we thought they would be really funny to share. Here is a funny video shot by Vinod and myself on the same subject.

    Here is the link to the SQL Server Performance: Indexing Basics. Here is the additional reading material on the same subject:

    - SQL SERVER – Fragmentation – Detect Fragmentation and Eliminate Fragmentation
    - SQL SERVER – 2005 – Display Fragmentation Information of Data and Indexes of Database Table
    - SQL SERVER – De-fragmentation of Database at Operating System to Improve Performance

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video
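
    As a rough illustration (not the author's script), the thresholds quoted above can be turned into a report of suggested actions per index; everything apart from the DMV columns and the 5%/30% cutoffs is made up for the sketch:

        -- Minimal sketch: list indexes in the current database with the action
        -- suggested by the 5% / 30% thresholds above.
        SELECT OBJECT_NAME(ips.object_id)      AS table_name,
               i.name                          AS index_name,
               ips.avg_fragmentation_in_percent,
               CASE
                   WHEN ips.avg_fragmentation_in_percent > 30 THEN 'ALTER INDEX ... REBUILD'
                   WHEN ips.avg_fragmentation_in_percent > 5  THEN 'ALTER INDEX ... REORGANIZE'
                   ELSE 'no action'
               END AS suggested_action
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.index_id > 0;   -- skip heaps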


  • The enterprise vendor con - connecting SSDs using SATA 2 (3Gbit/s), thus limiting their performance

    - by tonyrogerson
    When comparing SSD against hard drive performance, it really makes me cross when folk think comparing an array of SSDs running on 3Gbit/s to hard drives running on 6Gbit/s is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with Solid State Drives, they compare four SSDs connected at 3Gbit/s against ten 10Krpm drives connected at 6Gbit/s [Tony slaps forehead while shouting DOH!].

    It is true that in the case of hard drives it probably doesn't make much difference whether the link is 3Gbit or 6Gbit, because SAS and SATA are both end-to-end protocols rather than a shared bus architecture like SCSI, so the hard drive doesn't share bandwidth and probably can't get near the 600MiB/s throughput that 6Gbit gives unless you are doing contiguous reads. In my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16 with a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB, for a 100% sequential read test) I only get 347MiB/s sustained throughput at an average latency of 2.87ms per IO, equating to 44.5K IOps. OK, if that was 3Gbit it would be less - around 280MiB/s. Oh, but wait a minute [...fingers tap desk].

    You'll struggle to find in the commodity space an SSD that doesn't have the SATA 3 (6Gbit) interface. SSDs are fast: not only low latency and high IOps, they also offer a very large sustained transfer rate. Consider the OCZ Agility 3. It so happens that in my masters dissertation I did the same test, but on a different box, and got 374MiB/s at an average latency of 2.67ms per IO, equating to 47.9K IOps - the cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops). But that same drive in a box connected with SATA 2 (3Gbit) would only yield around 280MiB/s, thus losing almost 100MiB/s of throughput and a ton of IOps too.

    So why the hell are "enterprise" vendors still only connecting SSDs at 3Gbit? Well, my conspiracy theory states that they have no interest in you moving to SSD because they'll lose so much money. The argument that they use SATA 2 doesn't wash; SATA 3 has been out for some time now and all the commodity stuff you buy uses it.

    Consider the cost, not in terms of price per GB but price per IOps: SSDs absolutely thrash hard drives on that. It was true that the opposite also held - that hard drives thrashed SSDs on price per GB - but is that true now? I'm not so sure. A 300GB 2.5" 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms), which equates to £1.09 per GB, compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops), which equates to £0.88 per GB. OK, I compared an "enterprise" hard drive with a "commodity" SSD, so things get a little more complicated here: most "enterprise" SSDs are SLC and most commodity ones are MLC. SLC gives more performance and wear life; I'll talk about that another day.

    For now though, don't get sucked in by vendor marketing. SATA 2 (3Gbit) just doesn't cut it; SSDs need 6Gbit to breathe, and even that they are pushing. Alas, SSDs are connected using SATA, so all the controllers I've seen thus far from HP and DELL only do SATA 2 - deliberate? Well, I'll let you decide on that one.


  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are use or unused, complete or missing some columns is irrelevant, this is simply the physical stats of all indexes; disabled indexes are ignored however. In it’s simplest form this dmv can be executed as:   The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, ‘SAMPLED’, ‘LIMITED’ or ‘DETAILED’. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets; DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and the NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: Running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in so be careful running this on big databases unless you have tried it on a test server first. Let’s look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID (‘AnyDatabaseName’). The first columns we get back are database_id and object_id. 
    These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    which gives us

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                       AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT *
            FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this dmv don't have any effect on other processes (i.e. you are simply viewing the results in the SSMS results area), any results that are not consistent with what was expected will simply be noticed by the person running the query; that is the method I have used for this blog.

    So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level – we'll skip these, as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

    - avg_fragmentation_in_percent – the amount that the index is logically fragmented. It will show NULL for heaps when the dmv is queried in SAMPLED mode.
    - fragment_count – the number of pieces that the index is broken into. It will show NULL for heaps when the dmv is queried in SAMPLED mode.
    - avg_fragment_size_in_pages – the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL for heaps when the dmv is queried in SAMPLED mode.
    - page_count – the total number of index or data pages in use.
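    As a quick illustration of how these four columns hang together, the following query (a sketch of my own – the join to sys.indexes and the ordering are not from the original post) lists them alongside each index name, most fragmented first:

        -- Sketch: show the fragmentation columns discussed above for every index
        -- in the current database, most fragmented first (DEFAULT mode, as earlier).
        SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , [ddips].[avg_fragmentation_in_percent]
             , [ddips].[fragment_count]
             , [ddips].[avg_fragment_size_in_pages]
             , [ddips].[page_count]
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                       AND [ddips].[object_id] = [i].[object_id]
        WHERE [ddips].[index_id] > 0   -- index_id 0 is the heap itself; skip it here
        ORDER BY [ddips].[avg_fragmentation_in_percent] DESC;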
    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages: an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3 = 9). This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index were bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do to retrieve the data to satisfy the query increases, and this would start to decrease performance.

    This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a correlation between four of these columns: column 1 (page_count) is the number of 8KB pages used by the index, column 2 (avg_page_space_used_in_percent) is how full each page is (how much of the 8KB has actual data written on it), column 3 (record_count) is how many records are stored in the index and column 4 (avg_record_size_in_bytes) is the average size of each record. This approximates to: ((Col1 * 8) * 1024 * (Col2 / 100)) / Col3 = Col4*.

    avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page, and therefore in each fragment, reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly what their names suggest – details of the smallest and largest records in the index – offered purely as a guide to help the DBA better understand the storage practices taking place.
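    To make that 'intelligent' defragmentation process a little more concrete, here is a minimal sketch of a threshold-based check built on this dmv. The 10% and 30% fragmentation thresholds and the 100-page floor are illustrative assumptions of mine, not recommendations from this post, and this is no substitute for the far more complete Ufford and Hallengren solutions mentioned above:

        -- Sketch only: suggest an action per index from fragmentation and size.
        -- The thresholds below are assumptions for illustration.
        SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , [ddips].[avg_fragmentation_in_percent]
             , [ddips].[page_count]
             , CASE
                   WHEN [ddips].[page_count] < 100 THEN 'too small to worry about'
                   WHEN [ddips].[avg_fragmentation_in_percent] < 10 THEN 'leave alone'
                   WHEN [ddips].[avg_fragmentation_in_percent] < 30 THEN 'ALTER INDEX ... REORGANIZE'
                   ELSE 'ALTER INDEX ... REBUILD'
               END AS [SuggestedAction]
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                       AND [ddips].[object_id] = [i].[object_id]
        WHERE [ddips].[index_id] > 0   -- index_id 0 is the heap; ALTER INDEX does not apply to it
        ORDER BY [ddips].[avg_fragmentation_in_percent] DESC;

    Where an index keeps fragmenting quickly, the rebuild step can also set a lower fill factor – for example ALTER INDEX [IX_SomeIndex] ON [Person].[Address] REBUILD WITH (FILLFACTOR = 90), where the index name and the value 90 are placeholders – which ties in with the avg_page_space_used_in_percent column above and is picked up again below.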
    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently the DBA should potentially consider:

    - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - the columns used in the index, which should be analysed so that new records are always added to the end of the index rather than needing to be inserted in the middle.

    * – it's approximate, as there are many factors associated with things like the type of data and other database settings that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk.

    Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • SQL SERVER – Performance Tuning Resolution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by MidnightDBAs. Taking resolutions is such an interesting subject. I think, just like records, these are broken way more often. I find this the funniest thing, as we all take resolutions every year but not every year can we manage to keep them. Well, does it mean we should not take resolutions? In fact, I support resolutions. Every year, I take a resolution that I will strive to reduce my body weight, and I usually manage to keep eating healthy till the end of January. When February begins, I start to lose focus on my goal, and as March starts, the "as usual" eating habits begin. Looking at the positive side, what would happen if every year I did not eat healthy in January? I think that might cause terrible consequences to my health in the long run. So keeping resolutions is a good practice, and following them to the extent one can is commendable.

    Let us come back to the world of SQL Server. What is my resolution for year 2011 for SQL Server? There are many; I am going to list three very important resolutions that I have taken this new year over here.

    To understand SQL Server Performance Tuning at a deeper level. I think I am already half way through. I have been very busy during any given month doing hands-on performance tuning for at least 12 days on average. That means I am doing this activity for almost 2 weeks a month. I believe that I have a good understanding of the subject. Note that the word that I have used is "good," and not "best." There are often cases when I am stumped, and I have no clue of what to do next. Then I usually go for my "trial and error" method – whichever method works, I make sure to keep a note on my blog. My goal is that I should never ever have to go for the trial and error method again to achieve the same solution; I should know the solution right away when I see the problem. I do understand that Performance Tuning can be a strange animal at times and one cannot guess the right step every time. However, aiming for a high goal never hurts and I am going to learn more and more in this focused area.

    Going further from basic BI understanding. I do fairly decently with BI concepts. I know the basics of SSIS, SSRS, SSAS, PowerPivot and SharePoint (and a few other things – MDS, StreamInsight, etc.). However, I still consider myself a beginner. I do not have hands-on experience like many other BI gurus around. I want to take my learning further in this direction. I do not want to be a BI expert as the first step, but the goal is to move ahead from the basic level towards an advanced level. I am going to start presenting in User Group sessions and other places on this subject. When I have to prepare a new subject for presentations, I think I force myself to learn more. I am committed to learning a bit more in this direction.

    Learning new features of SQL Server 2011 "Denali". This is the new thing from Microsoft for all the SQL geeks. I am eagerly waiting for the final product later this year and I am planning to learn it well. I think if I follow my above two goals, this goal will be automatically covered. I am eager and excited about this new offering from Microsoft.

    I guess these are my resolutions; maybe next year about the same time, I must revisit this post and see how successful I am in following my goals. On a lighter note, I am particularly a fan of the following cartoon strip (Courtesy: Calvin and Hobbes). I think when we cannot resolve our resolutions, we tend to act like Calvin.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Visual Studio 2010 on Macbook Air

    - by Kyle B.
    Does anyone here run Visual Studio 2010 (or the VS12 RC) on a MacBook Air? I have the current model with 4GB RAM, a 13" screen, and a 256GB SSD. Before I go through the effort of configuring this, I'd like to know if anyone from the community has done this and:

    - Was the performance acceptable? If it is, I plan to get a larger Cinema Display as a second monitor and do all my coding on this machine, ditching my desktop.
    - Did you use Boot Camp, Parallels, or VMware? I feel that to maximize performance Boot Camp would be necessary to make the most of the memory, but am not sure if this is completely necessary. I'd prefer to use a VM, but wasn't sure if this was practical and would value your input before buying a license.
    - Did you also run anything else on the Windows installation, such as SQL Server Express, IIS Express, etc.? Did performance lag after a certain point?

    Note: I would have asked this on superuser.com, but felt this applied more directly to the programming community.

    Read the article

< Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >