Search Results

Search found 10738 results on 430 pages for 'xevent a day'.


  • Day 1: iPhone Web Development

    - by BizTalk Visionary
    Preamble: In an attempt to keep an ageing mind alive I have decided to write an iPhone web app. The app will pull together a whole host of disciplines and should be a challenge. The App: Over the last month or so I have been playing around with Google Maps and InstaMapper (see InstaMapper.com), so I decided the app must build on this learning. Looking to create an Uber-where’s Tigger! Framework: After an exhaustive research period I’m going with jQTouch. iPhone Simulator: MobiOne. More to follow later……


  • Tip of the day: Don’t misuse the Link button control

    - by anas
    Misuse? Yes it is! I have seen a lot of developers who use the LinkButton to do redirection only! They handle its Click event just to write Response.Redirect("url"), like this: protected void LinkButton1_Click(object sender, EventArgs e) { Response.Redirect("~/ForgotPassword.aspx"); } OK, so to understand why this is not a good practice, let’s discuss the redirection steps involved when using the mentioned method: the user submits the page by clicking on the LinkButton control...(read more)


  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
    The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let’s have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned. SELECT * FROM sys.dm_db_index_usage_stats WHERE database_id = db_id('AdventureWorks2012') The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names. The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected a particular index. The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide you with DATETIME information about when the last user seek, scan, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except that instead of the various operations being generated from user requests, they are generated from system background requests.

    This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes have a downside as well: they slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure the indexes you have are being used efficiently. My AdventureWorks2012 database is only used for demonstrating or testing things, so I don’t have a lot of meaningful information here, but for a production system, if you see an index that is never getting any seeks, scans, or lookups but is constantly getting a ton of updates, it more than likely would be a good candidate for you to consider removing. You would not be getting much benefit from the index, yet it would be incurring a cost on your system due to constantly having to be updated for your write operations, not to mention the additional storage it is consuming. You should regularly analyze your indexes to ensure you keep your database systems as efficient and lean as possible.

    One thing to note is that these DMV statistics are reset every time SQL Server is restarted. Therefore it would not be a wise idea to make decisions about removing indexes right after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are only used by weekly/monthly reports that run for business users. And if you remove them, you may have some upset people at your desk on Monday morning.
    If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis, depending on how frequently you perform an operation that would reset these statistics. You can then analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188755.aspx Follow me on Twitter @PrimeTimeDBA
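    To make the analysis above concrete, here is a minimal sketch (mine, not from the original post) that joins the DMV back to sys.indexes to surface write-heavy indexes with no reads — possible removal candidates, with all of the caveats about restarts and reporting cycles still applying. Run it in the AdventureWorks2012 database so OBJECT_NAME resolves in the right context:

        -- Indexes with updates but zero seeks/scans/lookups since the last restart
        -- (counters reset when SQL Server restarts, so sample over a long period)
        SELECT OBJECT_NAME(s.[object_id]) AS TableName,
               i.[name] AS IndexName,
               s.user_updates,
               s.user_seeks + s.user_scans + s.user_lookups AS user_reads
        FROM sys.dm_db_index_usage_stats AS s
        JOIN sys.indexes AS i
            ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
        WHERE s.database_id = DB_ID('AdventureWorks2012')
          AND s.user_seeks + s.user_scans + s.user_lookups = 0
          AND s.user_updates > 0
        ORDER BY s.user_updates DESC;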


  • Big Data – Buzz Words: What is NoSQL – Day 5 of 21

    - by Pinal Dave
    In yesterday’s blog post we explored the basic architecture of Big Data. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – NoSQL.

    What is NoSQL? NoSQL stands for Not Relational SQL or Not Only SQL. Lots of people think that NoSQL means there is No SQL, which is not true – they both sound the same but the meaning is totally different. NoSQL does use SQL but it uses more than SQL to achieve its goal. As per Wikipedia’s NoSQL Database Definition – “A NoSQL database provides a mechanism for storage and retrieval of data that uses looser consistency models than traditional relational databases.”

    Why use NoSQL? A traditional relational database usually deals with predictable, structured data. But as the world has moved forward with unstructured data, we often see the limitations of the traditional relational database in dealing with it. For example, nowadays we have data in the format of SMS, wave files, photos and video. It is a bit difficult to manage such data using a traditional relational database. I often see people using a BLOB field to store such data. A BLOB can store the data, but when we have to retrieve or process it, the same BLOB is extremely slow at processing the unstructured data. A NoSQL database is the type of database that can handle the unstructured, unorganized and unpredictable data that our business needs. Along with the support for unstructured data, the other advantages of a NoSQL database are high performance and high availability.

    Eventual Consistency: Additionally, note that a NoSQL database may not provide 100% ACID (Atomicity, Consistency, Isolation, Durability) compliance. Though a NoSQL database does not support ACID, it provides eventual consistency. That means that over a long period of time all updates can be expected to propagate through the system and the data will be consistent.

    Taxonomy: Taxonomy is the practice of classification of things or concepts and its principles. The NoSQL taxonomy supports column stores, document stores, key-value stores, and graph databases. We will discuss the taxonomy in detail in later blog posts. Here are a few examples of each NoSQL category. Column: Hbase, Cassandra, Accumulo. Document: MongoDB, Couchbase, Raven. Key-value: Dynamo, Riak, Azure, Redis, Cache, GT.m. Graph: Neo4J, Allegro, Virtuoso, Bigdata. As of now there are over 150 NoSQL databases and you can read everything about them in this single link.

    Tomorrow: In tomorrow’s blog post we will discuss the buzz word – Hadoop. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • July, the 31 Days of SQL Server DMO’s – Day 1 (sys.dm_exec_requests)

    - by Tamarick Hill
    The first DMO that I would like to introduce you to is one of the most common and basic DMV’s out there. I use the term DMV because this DMO is actually a view as opposed to a function. This DMV is server-scoped and it returns information about all requests that are currently executing on your SQL Server instance. To illustrate what this DMV returns, let’s take a look at the results (see the query sketch below). As you can see, this DMV returns a wealth of information about requests occurring on your server. You are able to see the SPID, the start time of a request, the current status, and the command the SPID is executing. In addition to this, you see columns for sql_handle and plan_handle. These columns (when combined with other DMO’s we will discuss later) can return the actual SQL text that is being executed on your server as well as the actual execution plan that is cached and being used. This DMV also returns information about the various wait types that may be occurring for your SPID. The percent_complete column displays a percentage to completion for certain database actions such as DBCC CHECKDB, database restores, rollbacks, etc. In addition to these, you are also able to see the amount of reads, writes, and CPU that the SPID has consumed. You will find this DMV to be one of the primary DMV’s that you use when looking for information about what is occurring on your server.
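    The screenshot of the result set did not survive this excerpt, but the underlying query is the simplest one possible. A minimal sketch (the session_id filter is my addition, not the author's; SPIDs 50 and below are typically system sessions):

        -- All requests currently executing; filter out most system sessions
        SELECT session_id, start_time, status, command,
               sql_handle, plan_handle, wait_type, percent_complete,
               reads, writes, cpu_time
        FROM sys.dm_exec_requests
        WHERE session_id > 50;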


  • SQL SERVER – LOGBUFFER – Wait Type – Day 18 of 28

    - by pinaldave
    At first, I was not planning to write about this wait type. The reason was simple – I have faced it only once in my lifetime so far, perhaps because it is not one of the top 5 wait types. I am not sure whether it is a common wait type or not, but in the samples I had, it really looked rare to me.

    From Books Online: LOGBUFFER occurs when a task is waiting for space in the log buffer to store a log record. Consistently high values may indicate that the log devices cannot keep up with the amount of log being generated by the server.

    LOGBUFFER Explanation: The Books Online definition of LOGBUFFER seems to be very accurate. On the system where I faced this wait type, the log file (LDF) was put on the local disk, and the data files (MDF, NDF) were put on SAN drives. My client at the time was not familiar with how the files were supposed to be distributed. Once we moved the LDF to a faster drive, this wait type disappeared.

    Reducing LOGBUFFER wait: There are several suggestions for reducing this wait stat: Move the transaction log to a disk separate from the MDF and other files (make sure the drive where your LDF lives has no IO bottleneck issues). Avoid cursor-like coding methodology and frequent commit statements. Find the most active file based on IO stall time, as shown in the script written over here. You can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here. Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details; read about them over here.

    If you have noticed, my suggestions for reducing LOGBUFFER are very similar to those for WRITELOG. Although the procedures for reducing them are alike, I am not suggesting that LOGBUFFER and WRITELOG are the same wait type. From the definitions of the two, you will find their difference. However, they are both related to the log, and both of them can severely degrade performance.

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
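    To see whether a system is accumulating this wait at all, a minimal check (my query, not from the post) against the wait stats DMV:

        -- Cumulative log-related waits since the last restart (or stats clear)
        SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('LOGBUFFER', 'WRITELOG');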


  • SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28

    - by pinaldave
    Earlier we discussed the common solution to the issue with CXPACKET wait time. Today I am going to talk about a few other suggestions which can help reduce the CXPACKET wait. If you are going to suggest that I should focus on MAXDOP and COST THRESHOLD – I totally agree. I covered them in detail in yesterday’s blog post. Today we are going to discuss a few other ways CXPACKET can be reduced.

    Potential Reasons: If data is heavily skewed, there are chances that the query optimizer may misestimate the amount of data, leading it to assign fewer threads to the query. This can easily lead to an uneven workload across threads and may create CXPACKET wait. If, while retrieving the data, one of the threads faces an IO, memory or CPU bottleneck and has to wait to get those resources to execute its tasks, that may create CXPACKET wait as well. Data which is retrieved may sit on IO subsystems of different speeds (this is not common and hardly possible, but there are chances). Higher fragmentation in some areas of the table can lead to less data per page, which may also lead to CXPACKET wait. As I said, the reasons mentioned here are not the major causes of CXPACKET wait, but any of these scenarios can add to the probable wait time.

    Best Practices to Reduce CXPACKET wait: Refer to the earlier article regarding MAXDOP and Cost Threshold. De-fragmentation of indexes can help, as more data can be obtained per page (assuming close to 100 fill factor). If data is in multiple files which are on multiple physical drives of similar speed, the CXPACKET wait may reduce. Keep the statistics updated, as this will give a better estimate to the query optimizer when assigning threads and dividing the data among available threads (see the sketch below). Updating statistics can significantly improve the ability of the query optimizer to render a proper execution plan. This may affect the overall parallelism process in a positive way.

    Bad Practice: In one recent consultancy project, when I was called in I noticed that an ‘experienced’ DBA had seen high CXPACKET wait and, to reduce it, had increased the number of worker threads. In reality, increasing worker threads had led to many other issues. With a higher number of threads, more memory was used, leading to memory pressure. As there were more threads, the CPU scheduler faced more ‘context switching’, further degrading performance. When I explained all this to the ‘experienced’ DBA, he suggested that we should now reduce the number of threads. Not really! A lower number of threads may create heavy stalling for parallel queries. I suggest NOT touching the setting for the number of worker threads when dealing with CXPACKET wait.

    Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats over here is generic and it varies from system to system. You are recommended to test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
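    Of the best practices above, "keep the statistics updated" maps directly to a command. A minimal sketch (mine, not from the post; the table name is just an AdventureWorks example):

        -- Refresh statistics on a single table with a full scan
        UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN;

        -- Or refresh all statistics in the current database
        EXEC sp_updatestats;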


  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET has to be the most popular of all wait stats. I have commonly seen this wait stat as one of the top 5 wait stats in most systems with more than one CPU.

    Books Online: Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem.

    CXPACKET Explanation: When a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat. There is an organizer/coordinator thread (thread 0), which waits for all the threads to complete and gathers the results together to present on the client’s side. The organizer thread has to wait for all the threads to finish before it can move ahead. The wait by this organizer thread for slow threads to complete is called CXPACKET wait. Note that not all CXPACKET wait is bad. You might experience cases where it totally makes sense. There might also be cases when it is unavoidable. If you remove this particular wait type for a query, that query may run slower because the parallel operations are disabled for it.

    Reducing CXPACKET wait: We cannot discuss reducing the CXPACKET wait without talking about the server workload type.

    OLTP: On a pure OLTP system, where the transactions are smaller and queries are usually very quick rather than long-running, set the “Maximum Degree of Parallelism” to 1 (one). This makes sure that a query never goes parallel and does not incur extra engine overhead. EXEC sys.sp_configure N'max degree of parallelism', N'1' GO RECONFIGURE WITH OVERRIDE GO

    Data-warehousing / Reporting server: As queries will be running for a long time, it is advised to set the “Maximum Degree of Parallelism” to 0 (zero). This way most of the queries will utilize the parallel processors, and long running queries get a boost in performance from the multiple processors. EXEC sys.sp_configure N'max degree of parallelism', N'0' GO RECONFIGURE WITH OVERRIDE GO

    Mixed System (OLTP & OLAP): Here is the challenge. The right balance has to be found. I have taken a very simple approach. I set the “Maximum Degree of Parallelism” to 2, which means a query still uses parallelism but only on 2 CPUs. However, I keep the “Cost Threshold for Parallelism” very high. This way not all queries will qualify for parallelism; only queries with a higher cost will go parallel. I have found this to work best for a system that has OLTP queries and also has a reporting setup. Here, I am setting the ‘Cost Threshold for Parallelism’ to a value of 25 (which is just for illustration); you can choose any value, and you can find the right one only by experimenting with the system. In the following script, I am setting the ‘Max Degree of Parallelism’ to 2, meaning a query with a higher cost (here, more than 25) will qualify for parallelism and run on 2 CPUs. Regardless of the number of CPUs available, the query will select two CPUs to execute itself. EXEC sys.sp_configure N'cost threshold for parallelism', N'25' GO EXEC sys.sp_configure N'max degree of parallelism', N'2' GO RECONFIGURE WITH OVERRIDE GO

    Read all the posts in the Wait Types and Queue series. Additionally, a must-read comment by Jonathan Kehayias.
    Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats over here is generic and it varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology


  • TechDays 2010 Portugal - The Day After

    - by Ricardo Peres
    Well, TechDays 2010 Portugal is over, time for a balance. I really enjoyed being a speaker; although my presentation took a lot more time than it should have, it was gratifying to see so many people staying until the end. Lots of subjects were left behind, though. My presentation is available at my SkyDrive, here. Soon I will place the source code there, too. I would like to know if you've been there, and, if so, what you thought of my presentation! Feel free to send your thoughts, whatever they are. On the other hand, I saw some really interesting presentations, to name a few, from Nuno Antunes, Nuno Godinho, Filipe Prezado, Nuno Silva and my friend André Lage. I also had the chance to finally meet Caio Proiete and Pedro Perfeito. Perhaps we'll meet again at TechDays Remix, who knows.


  • OTN Database Developer Day in LA/OC

    - by shay.shmeltzer
    We are taking a little break from the Fusion OTN Developer Days, and instead we'll be taking part in several OTN Developer Days run by the database team. The aim is to show what Oracle has to offer to various developer groups. As you might guess, we are specifically going to be in the Java track, where we are running a lab that will let you experience Oracle JDeveloper (or OEPE) and show you how to build an application based on EJB/JSF with an Ajax UI. I'm going to be at the upcoming event on May 5th - if you are in the LA area and haven't experienced JDeveloper yet - come in and see what it is all about. Details here.


  • SQL SERVER – Guest Post – Glenn Berry – Wait Type – Day 26 of 28

    - by pinaldave
    Glenn Berry works as a Database Architect at NewsGator Technologies in Denver, CO. He is a SQL Server MVP, and has a whole collection of Microsoft certifications, including MCITP, MCDBA, MCSE, MCSD, MCAD, and MCTS. He is also an Adjunct Faculty member at University College – University of Denver, where he has been teaching since 2000. He is a wonderful blogger and often blogs here. I am a big fan of the Dynamic Management View (DMV) scripts of Glenn. His scripts are extremely popular, and the reality is that he inspired me to start this series with his famous DMV, which I mentioned in the very first wait stats blog post. (I had forgotten to request his permission to re-use the script, but when asked later he wholeheartedly approved it.) Here is his excellent blog post on the subject of wait stats:

    Analyzing cumulative wait stats in SQL Server 2005 and above has become a popular and effective technique for diagnosing performance issues and further focusing your troubleshooting and diagnostic efforts. Rather than just guessing about what resource(s) SQL Server is waiting on, you can actually find out by running a relatively simple DMV query. Once you know what resources SQL Server is spending the most time waiting on, you can run more specific queries that focus on that resource to get a better idea of what is causing the problem.

    I do want to throw out a few caveats about using wait stats as a diagnostic tool. First, they are most useful when your SQL Server instance is experiencing performance problems. If your instance is running well, with no indication of any resource pressure from other sources, then you should not worry that much about what the top wait types are. SQL Server will always be waiting on some resource, but many wait types are quite benign and can be safely ignored. In spite of this, I quite often see experienced DBAs obsessing over the top wait type, even when their SQL Server instance is running extremely well.

    Second, I often see DBAs jump to the wrong conclusion based on seeing a particular well-known wait type. A good example is CXPACKET waits. People typically jump to the conclusion that high CXPACKET waits means that they should immediately change their instance-level MAXDOP setting to 1. This is not always the best solution. You need to consider your workload type, and look carefully for any important “missing” indexes that might be causing the query optimizer to use a parallel plan to compensate for the missing index. In this case, correcting the index problem is usually a better solution than changing MAXDOP, since you are curing the disease rather than just treating the symptom.

    Finally, you should get in the habit of clearing out your cumulative wait stats with the DBCC SQLPERF(‘sys.dm_os_wait_stats’, CLEAR); command. This is especially important if you have made a configuration or index change, or if your workload has changed recently. Otherwise, your cumulative wait stats will be polluted with the old stats from weeks or months ago (since the last time SQL Server was started or the stats were cleared). If you make a change to your SQL Server instance, or add an index, you should clear out your wait stats, and then wait a while to see what your new top wait stats are.

    At any rate, enjoy Pinal Dave’s series on Wait Stats. This blog post has been written by Glenn Berry (Twitter | Blog). Read all the posts in the Wait Types and Queue series.
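    The “relatively simple DMV query” is not reproduced in this excerpt; here is a sketch in the same spirit (mine — the benign wait types filtered out are a small illustrative subset, not Glenn’s full list):

        -- Top cumulative waits since the last restart or DBCC SQLPERF clear
        SELECT TOP 10
               wait_type,
               wait_time_ms / 1000.0 AS wait_time_sec,
               100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'WAITFOR',
                                'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
        ORDER BY wait_time_ms DESC;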
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology


  • Watching Green Day and discovering Sitecore, priceless.

    - by jonel
    I’m feeling inspired and I’d like to share a technique we’ve implemented in Sitecore to address a URL mapping from our legacy site that we wanted to carry over to the new, beautiful Littelfuse.com. The challenge is to carry over all of our series URLs that have been published in our datasheets; we currently have a lot of series, and having to create a manual mapping for those could be really tedious. It has the format of http://www.littelfuse.com/series/series-name.html, for instance, http://www.littelfuse.com/series/flnr.html. It would have been easier if we had our information architecture defined like this, but that would have been too easy. I took a solution that is two-fold. First, I need to create a URL rewrite rule using the IIS URL Rewrite Module 2.0. Secondly, we need to implement a handler that will take care of the actual lookup of the actual series. It will be amazing once we’ve gone over the details. Let’s start with the URL rewrite. Create a new blank rule; you can name it anything you wish. The key parts to talk about here are the Pattern and Action groups. The Pattern is nothing but regex. Basically, I’m telling it to match the regex I have defined. In the Action group, I am telling it what to do, in this case, rewrite to the redirect.aspx webform. In this implementation, I will be using Rewrite instead of Redirect so the URL sticks in the browser. If you opt to use Redirect, then the URL bar will display the new URL our webform redirects to. Let me explain one small thing: the (\w+) in my Pattern group’s regex will actually translate to {R:1} in my Action group. This is where the magic begins. Now let’s see what our Redirect.aspx contains. Remember our {R:1} above, which becomes the query string variable s? This is basic .NET code. The only thing that you will probably ask about is the LFSearch class. It’s our own implementation for finding items by using a field search: we supply the field name, the value of the field, the template name of the item we are after, and true or false depending on whether we want an exact search or not. If eureka, then redirect to that item’s path (URL). If not, tell the user tough luck, here’s the 404 page as a consolation. Amazing, ain’t it?


  • Apress Deal of the Day - 10/Mar/2011 - Beginning Microsoft Word 2010

    - by TATWORTH
    Today's $10 deal at http://www.apress.com/info/dailydeal is Beginning Microsoft Word 2010. This has been on offer before; I bought a copy and have found it useful. Beginning Word 2010 is a visually stimulating introductory guide that teaches the complete Word newbie (as well as slightly experienced yet equally baffled users) what they need to know to write that thesis or proposal tonight.


  • AZGroups May 10 2010 Day of Net

    WOW. Another event behind us. What a speaker line-up this year, huh? Scott Guthrie, Scott Guthrie, Scott Hanselman, Jeffrey Palermo, Tim Heuer, Scott Guthrie. Why is ScottGu listed 3 times? Because he gave us 4 hours of content. Amazing that he's got so much energy, coding talent, stage presence, and community concern to still donate this much of his time. I can't say how grateful we are as a community that ScottGu agrees to come do our event. We also have to take a moment and...


  • July, the 31 Days of SQL Server DMO’s – Day 20 (sys.dm_tran_locks)

    - by Tamarick Hill
    The sys.dm_tran_locks DMV is used to return active lock resources on your server. Locking is a mechanism used by SQL Server to protect the integrity of data when you have multiple users that may potentially access the same data at the same time. Let’s run a query against this DMV so we can analyze the results. SELECT * FROM sys.dm_tran_locks As we can see, a lot of lock information is returned from this DMV. I will not go into detail about each of the columns returned, but I will touch on the ones that I feel are the most important. The first column in the output is the resource_type column, which tells you the type of lock a particular row represents. It could be a PAGE lock, RID, OBJECT, DATABASE, or several other lock types. The resource_database_id represents the ID of the database for a particular lock resource. The resource_lock_partition column represents the ID of a lock partition. When you have a table that is partitioned, locks can be escalated to the partition level before going to a table-level lock. The request_mode column gives us information about the type of lock that is being requested. In the results we see RangeS-S locks, which represent shared range locks, and IS locks, which represent intent shared locks. The request_status column displays whether the lock has been granted or whether the lock is waiting to be acquired. The request_session_id shows the session_id that is requesting the lock. This DMV is the best place to go when you need to identify the exact locks that are being held or pending for individual requests. You might need this information when you are troubleshooting severe blocking or deadlocking problems on your server. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms190345.aspx Follow me on Twitter @PrimeTimeDBA
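    As a quick illustration of the blocking troubleshooting mentioned above, a sketch (mine, not from the post) that lists only lock requests still waiting to be granted:

        -- Lock requests that have not been granted yet, i.e. potential blocking
        SELECT resource_type,
               resource_database_id,
               request_mode,
               request_status,
               request_session_id
        FROM sys.dm_tran_locks
        WHERE request_status = 'WAIT'
        ORDER BY request_session_id;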


  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday’s blog post we answered what Big Data is. Today we will understand why and how the evolution of Big Data happened. Though the answer is very simple, I would like to tell it in the form of a history lesson.

    Data in Flat Files: In earlier days data was stored in flat files, and there was no structure in a flat file. If any data had to be retrieved from a flat file it was a project in itself. There was no possibility of retrieving the data efficiently, and data integrity was just a term discussed without any modeling or structure around it. Databases residing in flat files had more issues than we would like to discuss in today’s world. It was more like a nightmare when there was any data processing involved in an application. Though applications developed at that time were also not that advanced, the need for data was always there, and there was always a need for proper data management.

    Edgar F Codd and 12 Rules: Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the relational database, and suddenly the chaotic world of the database seemed to find discipline in the rules. The relational database was a promised land for all the unstructured-database users. The relational database brought in relationships between data as well as improved performance of data retrieval. The database world immediately saw a major transformation, and every single vendor and database user suddenly started to adopt the relational database model.

    Relational Database Management Systems: Since Edgar F Codd proposed the 12 rules for the RDBMS, many different vendors started to build applications and tools to support relationships between data. This was indeed a learning curve for many developers who had never worked before with the modeling of a database. However, as time passed, pretty much everybody accepted the relational model and started to evolve products which perform at their best within the boundaries of the RDBMS concepts. This was the best era for databases and it gave the world extreme experts as well as some of the best products. The Entity Relationship model also evolved at the same time. In software engineering, an Entity–relationship model (ER model) is a data model for describing a database in an abstract way.

    Enormous Data Growth: Well, everything was going fine with the RDBMS in the database world. As there were no major challenges, the adoption of RDBMS applications and tools was pretty much universal. There was a race at times to make the developer’s life much easier with RDBMS management tools. Due to the extreme popularity and ease of use of these systems, pretty much all data was stored in RDBMS systems. New-age applications were built and social media took the world by storm. Every organization was feeling the pressure to provide the best experience for their users based on the data they had with them. While this was all going on, at the same time data was growing in pretty much every organization and application.

    Data Warehousing: The enormous data growth now presented a big challenge for organizations who wanted to build intelligent systems based on the data and provide near real-time, superior user experiences to their customers. Various organizations immediately started building data warehousing solutions where the data was stored and processed.
    The trend of business intelligence became an everyday need. Data was received from the transaction systems and processed overnight to build intelligent reports from it. Though this was a great solution, it had its own set of challenges. The relational database model and data warehousing concepts were all built with traditional relational database modeling in mind, and they still faced many challenges when unstructured data was present.

    Interesting Challenge: Every organization had expertise in managing structured data, but the world had already changed to unstructured data. There was intelligence in the videos, photos, SMS, text, social media messages and various other data sources. All of this now needed to be brought to a single platform to build a uniform system which does what businesses need. The way we do business has also changed. There was a time when users only got the features technology supported; however, now users ask for a feature and technology is built to support it. The need for real-time intelligence from fast-paced data flows is now becoming a necessity. Large amounts (Volume) of diverse (Variety) high-speed (Velocity) data are the defining properties of this data. Traditional database systems have limits in resolving the challenges this new kind of data presents. Hence the need for Big Data science. We need innovation in how we handle and manage data. We need creative ways to capture data and present it to users. Big Data is a reality!

    Tomorrow: In tomorrow’s blog post we will discuss the basics of Big Data architecture. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • Oracle Developer Day in January 2013: "The Oracle Database in Practice"

    - by britta wolf
    What's inside the different database editions? Use cases, tips and tricks to take home, including an outlook on new features... As part of the Oracle Developer Day, you will be brought up to date on the following topics, along with many tips and tricks: the differences between the editions and their secrets; an extensive base feature set even without extra-cost options; performance and scalability in the individual editions; cost and resource savings made easy; security in the database; increasing availability with simple means; handling large data volumes; and cloud technologies in the Oracle Database. The free event takes place on the following dates at the following locations: 23.01.2013: Oracle office Stuttgart; 30.01.2013: Oracle office Potsdam; 05.02.2013: Oracle office Düsseldorf. The agenda and registration contact can be found here.


  • Big Data – Beginning Big Data – Day 1 of 21

    - by Pinal Dave
    What is Big Data? I want to learn Big Data. I have no clue where and how to start learning about it. Does Big Data really mean the data is big? What are the tools and software I need to know to learn Big Data? I often receive the questions mentioned above. They are good questions, and honestly, when we search online it is hard to find authoritative and authentic answers. I have been working with Big Data and NoSQL for a while and I have decided that I will attempt to discuss this subject here in the blog. In the next 21 days we will understand what is so big about Big Data.

    Big Data – Big Thing! Big Data is becoming one of the most talked-about technology trends nowadays. The real challenge for big organizations is to get the maximum out of the data already available and to predict what kind of data to collect in the future. How to take the existing data and make it meaningful so that it provides accurate insight into past data is one of the key discussion points in many executive meetings in organizations. With the explosion of data, the challenge has gone to the next level and now Big Data is becoming a reality in many organizations.

    Big Data – A Rubik’s Cube: I like to compare Big Data with the Rubik’s cube. I believe they have many similarities. Just like a Rubik’s cube, it has many different solutions. Let us visualize a Rubik’s cube solving challenge where there are many experts participating. If you take five Rubik’s cubes, mix them up in the same way, and give them to five different experts to solve, it is quite possible that all five will solve their cube in a matter of seconds. But if you pay close attention, you will notice that even though the final outcome is the same, the route taken to solve the Rubik’s cube is not the same. Every expert will start at a different place and will try to solve it with different methods. Some will solve one color first and others will solve another color first. Even though they follow the same kind of algorithm to solve the puzzle, they will start and end at different places and their moves will differ on many occasions. It is nearly impossible for two experts to take the exact same route.

    Big Market and Multiple Solutions: Big Data is exactly like a Rubik’s cube – even though the goal of every organization and expert is the same, to get the maximum out of the data, the route and the starting point are different for each organization and expert. As organizations evaluate and architect Big Data solutions, they are also learning the ways and opportunities which are related to Big Data. There is no single solution to Big Data, and there is no single vendor which can claim to know all about Big Data. Honestly, Big Data is too big a concept and there are many players – different architectures, different vendors and different technologies.

    What is Next? In this 21-day series we will be exploring many essential topics related to Big Data. I do not claim that you will be a master of the subject after 21 days, but I do claim that I will be covering the following topics in easy-to-understand language: Architecture of Big Data; Big Data Management and Implementation; Different Technologies – Hadoop, MapReduce; Real World Conversations; Best Practices.

    Tomorrow: In tomorrow’s blog post we will try to answer one of the very essential questions – What is Big Data?
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • July, the 31 Days of SQL Server DMO’s – Day 18 (sys.dm_io_virtual_file_stats)

    - by Tamarick Hill
    The sys.dm_io_virtual_file_stats Dynamic Management Function is used to return IO statistics about each of the database files on your server. As input parameters, this function takes a database_id and a file_id. If you want to return IO statistic information for all files, you can simply pass in NULL values for both of these. Let’s have a look at this function and examine its results: SELECT db_name(database_id) DatabaseName, * FROM sys.dm_io_virtual_file_stats(NULL, NULL) The first column in the result set is the DatabaseName, which is just a column I created using the db_name() system function and the database_id column from this function. Next we have a file_id, which represents the ID of the file, whether it be a data file or a transaction log file. The ‘sample_ms’ column represents the total time in milliseconds that the instance has been up and running. Next we have the ‘num_of_reads’, ‘num_of_bytes_read’, and later ‘num_of_writes’ and ‘num_of_bytes_written’. These columns represent the number of reads or writes and the number of bytes read or written against a particular file. These columns are beneficial when determining how often a particular file is being accessed. The ‘io_stall_read_ms’ and ‘io_stall_write_ms’ columns each represent the total time in milliseconds that users have had to wait for reads or writes against a file, respectively. The ‘io_stall’ column is the sum of both read and write IO stalls. The ‘size_on_disk_bytes’ column represents the size of the respective file on your disk subsystem. Lastly, the ‘file_handle’ column is simply the Windows file handle. This Dynamic Management Function is useful when you need to analyze your database files for the purposes of segregating high-IO databases. This DMF gives you a good view of which of your database files are being accessed the most and which ones may be generating the largest IO stalls. These could be your best candidates for moving onto separate IO channels. For more information about this DMF, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms190326.aspx Follow me on Twitter @PrimeTimeDBA
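    Building on the io_stall columns described above, a common derived metric is the average stall per operation. A minimal sketch (mine, not from the post), guarding against division by zero:

        -- Average read/write latency per file, in milliseconds
        SELECT DB_NAME(database_id) AS DatabaseName,
               [file_id],
               1.0 * io_stall_read_ms / NULLIF(num_of_reads, 0) AS avg_read_stall_ms,
               1.0 * io_stall_write_ms / NULLIF(num_of_writes, 0) AS avg_write_stall_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL)
        ORDER BY avg_read_stall_ms DESC;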


  • July, the 31 Days of SQL Server DMO’s – Day 27 (sys.dm_db_file_space_usage)

    - by Tamarick Hill
    The sys.dm_db_file_space_usage DMV returns information about database file space usage. This DMV was enhanced in the 2012 version to include 3 additional columns. Let’s query this DMV against our AdventureWorks2012 database and view the results. SELECT * FROM sys.dm_db_file_space_usage The columns returned from this DMV are really self-explanatory, but I will give you a description, paraphrased from Books Online, below. The first three columns returned from this DMV represent the database, file, and filegroup for the current database context that executed the DMV query. The next column is the total_page_count, which represents the total number of pages in the file. The allocated_extent_page_count represents the total number of pages in all extents that have been allocated. The unallocated_extent_page_count represents the number of pages in the unallocated extents within the file. The version_store_reserved_page_count column represents the number of pages that are allocated to the version store. The user_object_reserved_page_count represents the number of pages allocated for user objects. The internal_object_reserved_page_count represents the number of pages allocated for internal objects. Lastly is the mixed_extent_page_count, which represents the total number of pages that are part of mixed extents. This is a great DMV for retrieving space usage information from your database files. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms174412.aspx Follow me on Twitter @PrimeTimeDBA
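    Since all of these counts are in 8 KB pages, a quick conversion to megabytes can be handy. A hedged sketch (my arithmetic, not from the post; the extent-count columns assume the SQL Server 2012 version of the DMV discussed above):

        -- Pages are 8 KB each, so pages * 8 / 1024 gives MB
        SELECT DB_NAME(database_id) AS DatabaseName,
               [file_id],
               total_page_count * 8 / 1024 AS total_size_mb,
               allocated_extent_page_count * 8 / 1024 AS allocated_mb,
               unallocated_extent_page_count * 8 / 1024 AS unallocated_mb
        FROM sys.dm_db_file_space_usage;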


  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type.

    From Books Online: Occurs when a task is waiting for I/Os to finish.

    ASYNC_IO_COMPLETION Explanation: Tasks are waiting for I/O to finish. If for any reason the application connected to SQL Server is processing the data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE and others can also create this wait type.

    Reducing ASYNC_IO_COMPLETION wait: When the issue is related to IO, one should check the following things associated with the IO subsystem:
    - Look at the application code and see if anything processes data slowly (an inefficient loop, etc.). It should be re-written to avoid this wait type.
    - Proper placement of the files is very important. We should check the file system for proper placement of the files – LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    - Check the file statistics and see if there are high IO read and IO write stalls: SQL SERVER – Get File Statistics Using fn_virtualfilestats (see the sketch below).
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development setup (test environment). As soon as we changed the HBA Queue Depth to a rather higher value, there was a sudden big improvement in performance.
    - It is very likely that there are no proper indexes on the system and yet there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce lots of CPU, Memory and IO (considering the covering index has fewer columns than the clustered table, and so on; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes; Drop Unused Indexes.

    Checking Memory-Related Perfmon Counters:
    - SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2 is not good)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
    - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds is not good)
    - Memory: Available Mbytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)

    Checking Disk-Related Perfmon Counters:
    - Average Disk sec/Read (consistently higher value than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher value than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher value than benchmark is not good)

    Read all the posts in the Wait Types and Queue series.
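    The fn_virtualfilestats check mentioned in the list above can be run directly; a minimal sketch (mine — the full script referenced in the post is not reproduced here):

        -- IO stalls per database file; NULL, NULL covers all databases and files
        SELECT DB_NAME(DbId) AS DatabaseName,
               FileId,
               NumberReads,
               NumberWrites,
               IoStallReadMS,
               IoStallWriteMS
        FROM fn_virtualfilestats(NULL, NULL);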
    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology


  • July, the 31 Days of SQL Server DMO’s – Day 21 (sys.dm_db_partition_stats)

    - by Tamarick Hill
    The sys.dm_db_partition_stats DMV returns page count and row count information for each table or index within your database. Let’s have a quick look at this DMV so we can review some of the results. **NOTE: I am going to create an ‘ObjectName’ column in our result set so that we can more easily identify tables. SELECT object_name(object_id) ObjectName, * FROM sys.dm_db_partition_stats As stated above, the first column in our result set is an object name based on the object_id column of this result set. The partition_id column refers to the partition_id of the index in question. Each index will have at least 1 unique partition_id and will have more if the object has been partitioned. The index_id column relates back to the sys.indexes table and uniquely identifies an index on a given object. A value of 0 (zero) in this column indicates the object is a HEAP and a value of 1 (one) signifies the clustered index. Next is the partition_number, which signifies the number of the partition for a particular object_id. Since none of the tables in my result set have been partitioned, they all display 1 for the partition_number. Next we have the in_row_data_page_count, which tells us the number of data pages used to store in-row data for a given index. The in_row_used_page_count is the number of pages used to store and manage the in-row data. If we look at the first row in the result set, we see we have 700 for this column and 680 for the previous one. This means that just managing the data (not storing it) requires 20 pages. The next column, in_row_reserved_page_count, is how many pages have been reserved, regardless of whether they are being used or not. The next 2 columns are used for storing LOB (Large Object) data, which could be text, image, varchar(max), or varbinary(max) columns. The two row_overflow columns represent pages used for data that exceeds the 8,060-byte row size limit for in-row data pages. The next columns, used_page_count and reserved_page_count, represent the sums of the in_row, lob, and row_overflow columns discussed earlier. Lastly, the row_count column displays the number of rows that are in a particular index. This DMV is a very powerful resource for identifying page and row count information. By knowing the page counts for indexes within your database, you are able to easily calculate the size of those indexes (see the sketch below). For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms187737.aspx Follow me on Twitter @PrimeTimeDBA
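    As a worked example of the size calculation mentioned above, a sketch (mine, not from the post) using the standard 8 KB-per-page arithmetic:

        -- Approximate size of each index: used pages * 8 KB, summed across partitions
        SELECT OBJECT_NAME(ps.[object_id]) AS ObjectName,
               ps.index_id,
               SUM(ps.used_page_count) * 8 / 1024 AS index_size_mb,
               SUM(ps.row_count) AS total_rows
        FROM sys.dm_db_partition_stats AS ps
        GROUP BY ps.[object_id], ps.index_id
        ORDER BY index_size_mb DESC;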


  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type.

    From Books Online: Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits.

    IO_COMPLETION Explanation: Tasks are waiting for I/O to finish. This is a good indication that the IO subsystem needs to be looked at.

    Reducing IO_COMPLETION wait: When the issue concerns IO, one should look at the following things related to the IO subsystem:
    - Proper placement of the files is very important. We should check the file system for proper placement of files – LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    - Check the file statistics and see if there are high IO read and IO write stalls: SQL SERVER – Get File Statistics Using fn_virtualfilestats.
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development setup (test environment), and as soon as we changed the HBA Queue Depth to a rather higher value, there was a sudden big improvement in performance.
    - It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce lots of CPU, Memory and IO (considering the covering index has fewer columns than the clustered table, and so on; it depends upon the situation). You can refer to the two articles I wrote about how to optimize indexes: Create Missing Indexes; Drop Unused Indexes.

    Checking Memory-Related Perfmon Counters:
    - SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2 is not good)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
    - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds is not good)
    - Memory: Available Mbytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)

    Checking Disk-Related Perfmon Counters:
    - Average Disk sec/Read (consistently higher value than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher value than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher value than benchmark is not good)

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology

