Search Results

Search found 2905 results on 117 pages for 'dave lee'.


  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database The last place I like to visit is always a hospital. With the monsoon season starting, intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously I hate it). So when I visit my doctor, it is always interesting in the way he quizzes me. The routine question of – “How many days have you had this?”, “Is there any pattern?”, “Did you drench in rain?”, “Do you have any other symptom?” and so on. The idea here is that the doctor wants to find any anomaly or a pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out. SQL Server has its way to find if the server data / files are in consistent state using the DBCC commands. Back to SQL Server In real life, Database consistency check is one of the critical operations a DBA generally doesn’t give much priority. Many readers of my blogs have asked many times, how do we know if the database is consistent? How do I read output of DBCC CHECKDB and find if everything is right or not? My common answer to all of them is – look at the bottom of checkdb (or checktable) output and look for below line. CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DatabaseName’. Above is a “good sign” because we are seeing zero allocation and zero consistency error. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown as below: CHECKDB found 0 allocation errors and 2 consistency errors in database ‘DatabaseName’. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName). If we see non-zero error then most of the time (not always) we get repair options depending on the level of corruption. There is risk involved with above option (repair_allow_data_loss), that is – we would lose the data. Sometimes the option would be repair_rebuild which is little safer. Though these options are available, it is important to find the root cause to the problem. In standard report, there is a report which can show the history of checkdb executed for the selected database. Since this is a database level report, we need to right click on database, click Reports, click Standard Reports and then choose “Database Consistency History” report. The information in this report is picked from default trace. If default trace is disabled or there is no checkdb run or information is not there in default trace (because it’s rolled over), we would get report like below. As we can see report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled. To demonstrate, I have caused corruption in one of the database and did below steps. Run CheckDB so that errors are reported. Fix the corruption by losing the data using repair option Run CheckDB again to check if corruption is cleared. After that I have launched the report and below is what we would see. If you are lazy like me and don’t want to run the report manually for each database then below query would be handy to provide same report for all database. This query is runs behind the scenes by the report. All I have done is remove the filter for database name (at the last – highlighted). 
    DECLARE @curr_tracefilename VARCHAR(500);
    DECLARE @base_tracefilename VARCHAR(500);
    DECLARE @indx INT;

    SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

    SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36, PATINDEX('%executed%', TEXTData) - 36) AS command,
           LoginName,
           StartTime,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%found%', TEXTData) + 6, PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%repaired%', TEXTData) + 9, PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) repaired,
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%time:%', TEXTData) + 6, PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':'
           + SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%hours%', TEXTData) + 6, PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':'
           + SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%minutes%', TEXTData) + 8, PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
    FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
    WHERE EventClass = 22
    AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
    -- AND DatabaseName = @DatabaseName;

    Don’t get worried about the logic above. All it is doing is reading the trace files, parsing the entry below and extracting the information for the underlined words. DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001. Hopefully from now on you will run CHECKDB and understand its importance. As responsible DBAs I am sure you are already doing it, but let me know how often you actually run it in your production environment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
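    For readers who want to try this on their own server, here is a minimal sketch of running the consistency check itself; the database name SQLAuthority is just a placeholder, and NO_INFOMSGS simply suppresses the informational rows so only allocation and consistency errors (if any) are returned.

    -- Run a full consistency check and report only errors.
    -- Replace SQLAuthority with your own database name.
    DBCC CHECKDB (N'SQLAuthority') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    -- The summary line of a full run (without NO_INFOMSGS) is the one to watch:
    -- "CHECKDB found 0 allocation errors and 0 consistency errors in database 'SQLAuthority'."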

    Read the article

  • SQLAuthority News – #TechEdIn – TechEd India 2012 Memories and Photos

    - by pinaldave
    TechEd India 2012 was held in Bangalore last March 21 to 23, 2012. Just like every year, this event is bigger, grander and inspiring. Pinal Dave at TechEd India 2012 Family Event Every single year, TechEd is a special affair for my entire family.  Four months before the start of TechEd, I usually start to build the mental image of the event. I start to think  about various things. For the most part, what excites me most is presenting a session and meeting friends. Seriously, I start thinking about presenting my session 4 months earlier than the event!  I work on my presentation day and night. I want to make sure that what I present is accurate and that I have experienced it firsthand. My wife and my daughter also contribute to my efforts. For us, TechEd is a family event, and the two of them feel equally responsible as well. They give up their family time so I can bring out the best content for the Community. Pinal, Shaivi and Nupur at TechEd India 2012 Guinea Pigs (My Experiment Victims) I do not rehearse my session, ever. However, I test my demo almost every single day till the last moment that I have to present it already. I sometimes go over the demo more than 2-3 times a day even though the event is more than a month away. I have two “guinea pigs”: 1) Nupur Dave and 2) Vinod Kumar. When I am at home, I present my demos to my wife Nupur. At times I feel that people often backup their demo, but in my case, I have backup demo presenters. In the office during lunch time, I present the demos to Vinod. I am sure he can walk my demos easily with eyes closed. Pinal and Vinod at TechEd India 2012 My Sessions I’ve been determined to present my sessions in a real and practical manner. I prefer to present the subject that I myself would be eager to attend to and sit through if I were an audience. Just keeping that principle in mind, I have created two sessions this year. SQL Server Misconception and Resolution Pinal and Vinod at TechEd India 2012 We believe all kinds of stuff – that the earth is flat, or that the forbidden fruit is apple, or that the big bang theory explains the origin of the universe, and so many other things. Just like these, we have plenty of misconceptions in SQL Server as well. I have had this dream of co-presenting a session with Vinod Kumar for the past 3 years. I have been asking him every year if we could present a session together, but we never got it to work out, until this year came. Fortunately, we got a chance to stand on the same stage and present a single subject.  I believe that Vinod Kumar and I have an excellent synergy when we are working together. We know each other’s strengths and weakness. We know when the other person will speak and when he will keep quiet. The reason behind this synergy is that we have worked on 2 Video Learning Courses (SQL Server Indexes and SQL Server Questions and Answers) and authored 1 book (SQL Server Questions and Answers) together. Crowd Outside Session Hall This session was inspired from the “Laurel and Hardy” show so we performed a role-playing of those famous characters. We had an excellent time at the stage and, for sure, the audience had a wonderful time, too. We had an extremely large audience for this session and had a great time interacting with them. Speed Up! – Parallel Processes and Unparalleled Performance Pinal Dave at TechEd India 2012 I wanted to approach this session at level 400 and I was very determined to do so. 
The biggest challenge I had was that this was a total of 60 minutes of session and the audience profile was very generic. I had to present at level 100 as well at 400. I worked hard to tune up these demos. I wanted to make sure that my messages would land perfectly to the minds of the attendees, and when they walk out of the session, they could use the knowledge I shared on their servers. After the session, I felt an extreme satisfaction as I received lots of positive feedback at the event. At one point, so many people rushed towards me that I was a bit scared that the stage might break and someone would get injured. Fortunately, nothing like that happened and I was able to shake hands with everybody. Pinal Dave at TechEd India 2012 Crowd rushing to Pinal at TechEd India 2012 Networking This is one of the primary reasons many of us visit the annual TechEd event. I had a fantastic time meeting SQL Server enthusiasts. Well, it was a terrific time meeting old friends, user group members, MVPs and SQL Enthusiasts. I have taken many photographs with lots of people, but I have received a very few back. If you are reading this blog and have a photo of us at the event, would you please send it to me so I could keep it in my memory lane? SQL Track Speaker: Jacob and Pinal at TechEd India 2012 SQL Community: Pinal, Tejas, Nakul, Jacob, Balmukund, Manas, Sudeepta, Sahal at TechEd India 2012 Star Speakers: Amit and Balmukund at TechEd India 2012 TechED Rockstars: Nakul, Tejas and Pinal at TechEd India 2012 I guess TechEd is a mix of family affair and culture for me! Hamara TechEd (Our TechEd) Please tell me which photo you like the most! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the Cloud in the Big Data story. In this article we will understand the role of operational databases in supporting the Big Data story. Even though we keep on talking about Big Data architecture, it is extremely crucial to understand that a Big Data system can’t exist in isolation. Many needs of the business can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem. Real World Example Think about it this way: you are using Facebook and you have just updated the information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes among their friends, your change of relationship will also show up in the same list. Now here is the question – do you think the Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship change to your family members is also because of the technology used in Big Data? Actually, the answer is that Facebook uses MySQL to do various updates in the timeline as well as for the various events we do on its homepage. It is really difficult to part from operational databases in any real-world business. Now we will see a few examples of operational databases. Relational Databases (This blog post) NoSQL Databases (This blog post) Key-Value Pair Databases (Tomorrow’s post) Document Databases (Tomorrow’s post) Columnar Databases (The Day After’s post) Graph Databases (The Day After’s post) Spatial Databases (The Day After’s post) Relational Databases We have earlier discussed the RDBMS role in the Big Data story in detail, so we will not cover it extensively over here. Relational databases are pretty much everywhere in most of the businesses which have been around for many years. The importance and existence of relational databases are always going to be there as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open source and widely accepted database, I suggest you try MySQL, as it has been very popular in the last few years. I also suggest you try out PostgreSQL as well. Besides many other essential qualities, PostgreSQL has very interesting licensing policies. PostgreSQL licenses allow modifications and distribution of the application in open or closed (source) form. One can make any modifications and keep them private, as well as contribute them back to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future. Nonrelational Databases (NoSQL) We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL databases. There are plenty of NoSQL databases out in the market and selecting the right one is always very challenging. Here are a few of the properties which are very essential to consider when selecting the right NoSQL database for operational purposes.
Data and Query Model Persistence of Data and Design Eventual Consistency Scalability Though above all of the properties are interesting to have in any NoSQL database but the one which most attracts to me is Eventual Consistency. Eventual Consistency RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as a key mechanism for ensuring the data consistency, whereas NonRelational DBMS uses BASE for the same purpose. Base stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing which expects unexpected often. In large distributed system, there are always various nodes joining and various nodes being removed as they are often using commodity servers. This happens either intentionally or accidentally. Even though one or more nodes are down, it is expected that entire system still functions normally. Applications should be able to do various updates as well as retrieval of the data successfully without any issue. Additionally, this also means that system is expected to return the same updated data anytime from all the functioning nodes. Irrespective of when any node is joining the system, if it is marked to hold some data it should contain the same updated data eventually. As per Wikipedia - Eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words -  Informally, if no additional updates are made to a given data item, all reads to that item will eventually return the same value. Tomorrow In tomorrow’s blog post we will discuss about various other Operational Databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
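    Since the excerpt contrasts ACID with BASE, here is a minimal T-SQL sketch of the "A" (atomicity) that a relational engine guarantees and that an eventually consistent store deliberately relaxes; the dbo.Account table and its columns are hypothetical names used only for illustration.

    -- Both updates commit together or not at all; another session never
    -- sees one row updated without the other under the default isolation level.
    BEGIN TRANSACTION;
        UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountID = 1;
        UPDATE dbo.Account SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;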

    Read the article

  • Big Data – Buzz Words: What is Hadoop – Day 6 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what NoSQL is. In this article we will take a quick look at one of the four most important buzz words which go around Big Data – Hadoop. What is Hadoop? Apache Hadoop is an open-source, free, Java-based software framework that offers a powerful distributed platform to store and manage Big Data. It is licensed under an Apache V2 license. It runs applications on large clusters of commodity hardware and it processes thousands of terabytes of data on thousands of nodes. Hadoop is inspired by Google’s MapReduce and Google File System (GFS) papers. The major advantage of the Hadoop framework is that it provides reliability and high availability. What are the core components of Hadoop? There are two major components of the Hadoop framework, and each of them performs an important task for it. Hadoop MapReduce is the method to split a larger data problem into smaller chunks and distribute them to many different commodity servers. Each server has its own set of resources and processes its chunk locally. Once the commodity servers have processed the data, they send it back collectively to the main server. This is effectively how we process large data effectively and efficiently. (We will understand this in tomorrow’s blog post). Hadoop Distributed File System (HDFS) is a virtual file system. There is a big difference between HDFS and any other file system. When we move a file onto HDFS, it is automatically split into many small pieces. These small chunks of the file are replicated and stored on other servers (usually 3) for fault tolerance and high availability. (We will understand this in the day after tomorrow’s blog post). Besides the above two core components, the Hadoop project also contains the following modules as well. Hadoop Common: Common utilities for the other Hadoop modules Hadoop YARN: A framework for job scheduling and cluster resource management There are a few other projects (like Pig, Hive) related to Hadoop as well, which we will gradually explore in later blog posts. A Multi-node Hadoop Cluster Architecture Now let us quickly see the architecture of a multi-node Hadoop cluster. A small Hadoop cluster includes a single master node and multiple worker or slave nodes. As discussed earlier, the entire cluster contains two layers. One of the layers is the MapReduce layer and the other is the HDFS layer. Each of these layers has its own relevant components. The master node consists of a JobTracker, TaskTracker, NameNode and DataNode. A slave or worker node consists of a DataNode and TaskTracker. It is also possible that a slave or worker node is only a data node or only a compute node. As a matter of fact, that is a key feature of Hadoop. In this introductory blog post we will stop here while describing the architecture of Hadoop. In a future blog post of this series we will explore the various components of the Hadoop architecture in detail. Why Use Hadoop? There are many advantages of using Hadoop. Let me quickly list them over here: Robust and Scalable – We can add new nodes as needed as well as modify them. Affordable and Cost Effective – We do not need any special hardware for running Hadoop. We can just use commodity servers. Adaptive and Flexible – Hadoop is built keeping in mind that it will handle structured and unstructured data. Highly Available and Fault Tolerant – When a node fails, the Hadoop framework automatically fails over to another node. Why is Hadoop Named Hadoop?
In year 2005 Hadoop was created by Doug Cutting and Mike Cafarella while working at Yahoo. Doug Cutting named Hadoop after his son’s toy elephant. Tomorrow In tomorrow’s blog post we will discuss Buzz Word – MapReduce. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Number-Crunching with SQL Server – Exceed the Functionality of Excel

    - by Pinal Dave
    Imagine this. Your users have developed an Excel spreadsheet that extracts data from your SQL Server database, manipulates that data through the use of Excel formulas and, possibly, some VBA code which is then used to calculate P&L, hedging requirements or even risk numbers. Management comes to you and tells you that they need to get rid of the spreadsheet and that the results of the spreadsheet calculations need to be persisted on the database. SQL Server has a very small set of functions for analyzing data. Excel has hundreds of functions for analyzing data, with many of them focused on specific financial and statistical calculations. Is it even remotely possible that you can use SQL Server to replace the complex calculations being done in a spreadsheet? Westclintech has developed a library of functions that match or exceed the functionality of Excel’s functions and contains many functions that are not available in EXCEL. Their XLeratorDB library of functions contains over 700 functions that can be incorporated into T-SQL statements. XLeratorDB takes advantage of the SQL CLR architecture introduced in SQL Server 2005. SQL CLR permits managed code to be compiled into the database and run alongside built-in SQL Server functions like COUNT or SUM. The Westclintech developers have taken advantage of this architecture to bring robust analytical functions to the database. In our hypothetical spreadsheet, let’s assume that our users are using the YIELD function and that the data are extracted from a table in our database called BONDS. Here’s what the spreadsheet might look like. We go to column G and see that it contains the following formula. Obviously, SQL Server does not offer a native YIELD function. However, with XLeratorDB we can replicate this calculation in SQL Server with the following statement: SELECT *, wct.YIELD(CAST(GETDATE() AS date),Maturity,Rate,Price,100,Frequency,Basis) AS YIELD FROM BONDS This produces the following result. This illustrates one of the best features about XLeratorDB; it is so easy to use. Since I knew that the spreadsheet was using the YIELD function I could use the same function with the same calling structure to do the calculation in SQL Server. I didn’t need to know anything at all about the mechanics of calculating the yield on a bond. It was pretty close to cut and paste. In fact, that’s one way to construct the SQL. Just copy the function call from the cell in the spreadsheet and paste it into SMS and change the cell references to column names. I built the SQL for this query by starting with this. SELECT * ,YIELD(TODAY(),B2,C2,D2,100,E2,F2) FROM BONDS I then changed the cell references to column names. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) ,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Finally, I replicated the TODAY() function using GETDATE() and added the schema name to the function name. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) --,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) ,wct.YIELD(GETDATE(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Then I am able to execute the statement returning the results seen above. The XLeratorDB libraries are heavy on financial, statistical, and mathematical functions. Where there is an analog to an Excel function, the XLeratorDB function uses the same naming conventions and calling structure as the Excel function, but there are also hundreds of additional functions for SQL Server that are not found in Excel. 
    You can find the functions by opening Object Explorer in SQL Server Management Studio (SSMS) and expanding the Programmability folder under the database where the functions have been installed. The Functions folder expands to show four sub-folders: Table-valued Functions, Scalar-valued Functions, Aggregate Functions, and System Functions. You can expand any of the first three folders to see the XLeratorDB functions. Since the wct.YIELD function is a scalar function, we will open the Scalar-valued Functions folder, scroll down to the wct.YIELD function and click the plus sign (+) to display the input parameters. The functions are also IntelliSense-enabled, with the input parameters displayed directly in the query tab. The Westclintech website contains documentation for all the functions, including examples that can be copied directly into a query window and executed. There are also more than one hundred articles on the site which go into more detail about how some of the functions work and demonstrate some of the extensive business processes that can be done in SQL Server using XLeratorDB functions and some T-SQL. XLeratorDB is organized into libraries: finance, statistics, math, strings, engineering, and financial options. There is also a windowing library for SQL Server 2005, 2008, and 2012 which provides functions for calculating things like running and moving averages (which were introduced in SQL Server 2012), FIFO inventory calculations, financial ratios and more, without having to use triangular joins. To get started you can download the XLeratorDB 15-day free trial from the Westclintech web site. It is a fully-functioning, unrestricted version of the software. If you need more than 15 days to evaluate the software, you can simply download another 15-day free trial. XLeratorDB is an easy and cost-effective way to start adding sophisticated data analysis to your SQL Server database without having to know anything more than T-SQL. Get XLeratorDB Today and Now! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Excel
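    Since the stated requirement was to persist the spreadsheet results in the database, one way to finish the job is sketched below; the wct.YIELD call and the BONDS columns are taken from the article itself, while the dbo.BondYields target table is simply an assumed name for illustration.

    -- Persist today's computed yields; dbo.BondYields is a hypothetical target table
    -- created on the fly by SELECT ... INTO.
    SELECT *,
           wct.YIELD(CAST(GETDATE() AS date), Maturity, Rate, Price, 100, Frequency, Basis) AS YIELD
    INTO   dbo.BondYields
    FROM   BONDS;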

    Read the article

  • SQLAuthority News – 7th Anniversary of Blog – A Personal Note

    - by Pinal Dave
    Special Day Today is a very special day – seven years ago I blogged for the very first time.  Seven years ago, I didn’t know what I was doing, I didn’t know how to blog, or even what a blog was or what to write.  I was working as a DBA, and I was trying to solve a problem – at my job, there were a few issues I had to fix again and again and again.  There were days when I was rewriting the same solution over and over, and there were times when I would get very frustrated because I could not write the same elegant solution that I had written before.  I came up with a solution to this problem – posting these solutions online, where I could access them whenever I needed them.  At that point, I had no idea what a blog was, or even how the internet worked, I had no idea that a blog would be visible to others.  Can you believe it? Google it on Yahoo! After a few posts on this “blog,” there was a surprise for me – an e-mail saying that someone had left me a comment.  I was surprised, because I didn’t even know you could comment on a blog!  I logged on and read my comment.  It said: “I like your script,but there is a small bug.  If you could fix it, it will run on multiple other versions of SQL Server.”  I was like, “wow, someone figured out how to find my blog, and they figured out how to fix my script!”  I found the bug, I fixed the script, and a wrote a thank you note to the guy.  My first question for him was: how did you figure it out – not the script, but how to find my blog?  He said he found it from Yahoo Search (this was in the time before Google, believe it or not). From that day, my life changed.  I wrote a few more posts, I got a few more comments, and I started to watch my traffic.  People were reading, commenting, and giving feedback.  At the end of the day, people enjoyed what I was writing.  This was a fantastic feeling!  I never thought I would be writing for others.  Even today, I don’t feel like I am writing for others, but that I am simply posting what I am learning every day.  From that very first day, I decided that I would not change my intent or my blog’s purpose. 72 Million Views – 2600 Posts – 57000 comments – 10 books – 9 courses Today, this blog is my habit, my addiction, my baby.  Every day I try to learn something new, and that lesson gets posted on the blog.  Lately there have been days where I am traveling for a full 24 hours, but even on those days I try to learn something new, and later when I have free time, I will still post it to the blog.  Because of this habit, this blog has over 72 millions views, I have written more than 2600 posts, and there are 57,000 comments and counting.  I have also written 10 books, 9 courses, and learned so many things.  This blog has given me back so much more than I ever put it into it.  It gave me an education, a reason to learn something new every day, and a way to connect to people.  I like to think of it as a learning chain, a relay where we all pass knowledge from one to another. Never Ending Journey When I started the blog, I thought I would write for a few days and stop, but now after seven years I haven’t stopped and I have no intention of stopping!  However, change happens, and for this blog it will start today.  This blog started as a single resource for SQL Server, but now it has grown beyond, to Sharepoint, Personal Development, Developer Training, MySQL, Big Data, and lots of other things.  Truly speaking, this blog is more than just SQL Server, and that was always my intention.  
I named it “SQL Authority,” not “SQL Server Authority”!  Loudly and clearly, I would like to announce that I am going to go back to my roots and start writing more about SQL, more about big data, and more about the other technology like relational databases, MySQL, Oracle, and others.  My goal is not to become a comprehensive resource for every technology, my goal is to learn something new every day – and now it can be so much more than just SQL Server.  I will learn it, and post it here for you. I have written a very long post on this anniversary, but here is the summary: Thank You.  You all have been wonderful.  Seven years is a long journey, and it makes me emotional.  I have been “with” this blog before I met my wife, before we had our daughter.  This blog is like a fourth member of the family.  Keep reading, keep commenting, keep supporting.  Thank you all. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: About Me, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • SQL SERVER – What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1

    - by Pinal Dave
    This is the first part of the series Incremental Statistics. Here is the index of the complete series. What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1 Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2 DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3 Statistics are considered one of the most important aspects of SQL Server Performance Tuning. You might have often heard the phrase, with related to performance tuning. “Update Statistics before you take any other steps to tune performance”. Honestly, I have said above statement many times and many times, I have personally updated statistics before I start to do any performance tuning exercise. You may agree or disagree to the point, but there is no denial that Statistics play an extremely vital role in the performance tuning. SQL Server 2014 has a new feature called Incremental Statistics. I have been playing with this feature for quite a while and I find that very interesting. After spending some time with this feature, I decided to write about this subject over here. New in SQL Server 2014 – Incremental Statistics Well, it seems like lots of people wants to start using SQL Server 2014′s new feature of Incremetnal Statistics. However, let us understand what actually this feature does and how it can help. I will try to simplify this feature first before I start working on the demo code. Code for all versions of SQL Server Here is the code which you can execute on all versions of SQL Server and it will update the statistics of your table. The keyword which you should pay attention is WITH FULLSCAN. It will scan the entire table and build brand new statistics for you which your SQL Server Performance Tuning engine can use for better estimation of your execution plan. UPDATE STATISTICS TableName(StatisticsName) WITH FULLSCAN Who should learn about this? Why? If you are using partitions in your database, you should consider about implementing this feature. Otherwise, this feature is pretty much not applicable to you. Well, if you are using single partition and your table data is in a single place, you still have to update your statistics the same way you have been doing. If you are using multiple partitions, this may be a very useful feature for you. In most cases, users have multiple partitions because they have lots of data in their table. Each partition will have data which belongs to itself. Now it is very common that each partition are populated separately in SQL Server. Real World Example For example, if your table contains data which is related to sales, you will have plenty of entries in your table. It will be a good idea to divide the partition into multiple filegroups for example, you can divide this table into 3 semesters or 4 quarters or even 12 months. Let us assume that we have divided our table into 12 different partitions. Now for the month of January, our first partition will be populated and for the month of February our second partition will be populated. Now assume, that you have plenty of the data in your first and second partition. Now the month of March has just started and your third partition has started to populate. Due to some reason, if you want to update your statistics, what will you do? In SQL Server 2012 and earlier version You will just use the code of WITH FULLSCAN and update the entire table. 
    That means even though only the third partition has new data, you will still update the entire table. This will be a VERY resource-intensive process, as you will be updating the statistics of partitions 1 and 2 where the data has not changed at all. In SQL Server 2014 You will just update the statistics of partition 3. There is special syntax where you can now specify which partition you want to update. The impact of this is that it smartly merges the new data with the old statistics and updates the entire statistics object without doing a FULLSCAN of your entire table. This has a huge impact on performance. Remember that the new feature in SQL Server 2014 does not change anything besides the capability to update a single partition. However, there is one feature which is indeed attractive. Previously, when 20% of the table data changed, a statistics update was triggered. Now the same threshold is applicable to a single partition. That means if a partition sees a 20% data change, it will also trigger a partition-level statistics update which, when merged into your final statistics, will give you better performance. In summary If you are not using partitions, this feature is not applicable to you. If you are using partitions, this feature can be very helpful to you. Tomorrow: We will see working code of SQL Server 2014 Incremental Statistics. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics
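    To make the idea concrete, here is a minimal sketch of the SQL Server 2014 syntax involved; the dbo.Sales table, its OrderDate column and the statistics name are hypothetical, the table is assumed to be partitioned, and partition 3 stands in for the "March" partition from the example above.

    -- Create a statistics object that tracks each partition separately.
    CREATE STATISTICS Stat_Sales_OrderDate
        ON dbo.Sales (OrderDate)
        WITH INCREMENTAL = ON;

    -- Later, refresh only the partition that actually changed (partition 3),
    -- instead of rescanning the whole table WITH FULLSCAN.
    UPDATE STATISTICS dbo.Sales (Stat_Sales_OrderDate)
        WITH RESAMPLE ON PARTITIONS (3);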

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #053 – Final Post in Series

    - by Pinal Dave
    It has been a fantastic journey to write memory lane series for an entire year. This series gave me the opportunity to go back and see what I have contributed to this blog throughout the last 7 years. This was indeed fantastic series as this provided me the opportunity to witness how technology has grown throughout the year and how I have progressed in my career while writing this blog post. This series was indeed fantastic experience readers as many joined during the last few years and were not sure what they have missed in recent years. Let us continue with the final episode of the Memory Lane Series. Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Get Current User – Get Logged In User Here is the straight script which list logged in SQL Server users. Disable All Triggers on a Database – Disable All Triggers on All Servers Question : How to disable all the triggers for a database? Additionally, how to disable all the triggers for all servers? For answer execute the script in the blog post. Importance of Master Database for SQL Server Startup I have received following questions many times. I will list all the questions here and answer them together. What is the purpose of Master database? Should our backup Master database? Which database is must have database for SQL Server for startup? Which are the default system database created when SQL Server 2005 is installed for the first time? What happens if Master database is corrupted? Answers to all of the questions are very much related. 2008 DECLARE Multiple Variables in One Statement SQL Server is a great product and it has many features which are very unique to SQL Server. Regarding feature of SQL Server where multiple variable can be declared in one statement, it is absolutely possible to do. 2009 How to Enable Index – How to Disable Index – Incorrect syntax near ‘ENABLE’ Many times I have seen that the index is disabled when there is a large update operation on the table. Bulk insert of very large file updates in any table using SSIS is usually preceded by disabling the index and followed by enabling the index. I have seen many developers running the following query to disable the index. 2010 List of all the Views from Database Many emails I received suggesting that they have hundreds of the view and now have no clue what is going on and how many of them have indexes and how many does not have an index. Some even asked me if there is any way they can get a list of the views with the property of Index along with it. Here is the quick script which does exactly the same. You can also include many other columns from the same view. Minimum Maximum Memory – Server Memory Options I was recently reading about SQL Server Memory Options over here. While reading this one line really caught my attention is minimum value allowed for maximum memory options. The default setting for min server memory is 0, and the default setting for max server memory is 2147483647. The minimum amount of memory you can specify for max server memory is 16 megabytes (MB). 2011 Fundamentals of Columnstore Index There are two kinds of storage in a database. Row Store and Column Store. 
Row store does exactly as the name suggests – stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query searching all the data in an entire row whether the data are relevant or not, column store queries need only to search a much lesser number of the columns. How to Ignore Columnstore Index Usage in Query In summary the question in simple words “How can we ignore using the column store index in selective queries?” Very interesting question – you can use I can understand there may be the cases when the column store index is not ideal and needs to be ignored the same. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the column store index. The SQL Server Engine will use any other index which is best after ignoring the column store index. 2012 Storing Variable Values in Temporary Array or Temporary List SQL Server does not support arrays or a dynamic length storage mechanism like list. Absolutely there are some clever workarounds and few extra-ordinary solutions but everybody can;t come up with such solution. Additionally, sometime the requirements are very simple that doing extraordinary coding is not required. Here is the simple case. Move Database Files MDF and LDF to Another Location It is not common to keep the Database on the same location where OS is installed. Usually Database files are in SAN, Separate Disk Array or on SSDs. This is done usually for performance reason and manageability perspective. Now the challenges comes up when database which was installed at not preferred default location and needs to move to a different location. Here is the quick tutorial how you can do it. UNION ALL and ORDER BY – How to Order Table Separately While Using UNION ALL If your requirement is such that you want your top and bottom query of the UNION resultset independently sorted but in the same result set you can add an additional static column and order by that column. Let us re-create the same scenario. Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video http://www.youtube.com/watch?v=FVWIA-ACMNo Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
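    As a quick illustration of the query hint mentioned in the 2011 entries above, here is a minimal sketch; dbo.FactSales is a hypothetical table that is assumed to carry a nonclustered columnstore index.

    -- Ask the optimizer to skip the columnstore index for this query only.
    SELECT ProductID, SUM(SalesAmount) AS TotalSales
    FROM dbo.FactSales
    GROUP BY ProductID
    OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX);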

    Read the article

  • Professional Development – Difference Between Bio, CV and Resume

    - by Pinal Dave
    Applying for work can be very stressful – you want to put your best foot forward, and it can be very hard to sell yourself to a potential employer while highlighting your best characteristics and answering questions.  On top of that, some jobs require different application materials – a biography (or bio), a curriculum vitae (or CV), or a resume.  These things seem so interchangeable, so what is the difference? Let’s start with the one most of us have heard of – the resume.  A resume is a summary of your job and education history.  If you have ever applied for a job, you will have used a resume.  The ability to write a good resume that highlights your best characteristics and emphasizes your qualifications for a specific job is a skill that will take you a long way in the world.  For such an essential skill, unfortunately it is one that many people struggle with. RESUME So let’s discuss what makes a great resume.  First, make sure that your name and contact information are at the top, in large print (slightly larger font than the rest of the text, size 14 or 16 if the rest is size 12, for example).  You need to make sure that if you catch the recruiter’s attention and they know how to get a hold of you. As for qualifications, be quick and to the point.  Make your job title and the company the headline, and include your skills, accomplishments, and qualifications as bullet points.  Use good action verbs, like “finished,” “arranged,” “solved,” and “completed.”  Include hard numbers – don’t just say you “changed the filing system,” say that you “revolutionized the storage of over 250 files in less than five days.”  Doesn’t that sentence sound much more powerful? Curriculum Vitae (CV) Now let’s talk about curriculum vitae, or “CVs”.  A CV is more like an expanded resume.  The same rules are still true: put your name front and center, keep your contact info up to date, and summarize your skills with bullet points.  However, CVs are often required in more technical fields – like science, engineering, and computer science.  This means that you need to really highlight your education and technical skills. Difference between Resume and CV Resumes are expected to be one or two pages long – CVs can be as many pages as necessary.  If you are one of those people lucky enough to feel limited by the size constraint of resumes, a CV is for you!  On a CV you can expand on your projects, highlight really exciting accomplishments, and include more educational experience – including GPA and test scores from the GRE or MCAT (as applicable).  You can also include awards, associations, teaching and research experience, and certifications.  A CV is a place to really expand on all your experience and how great you will be in this particular position. Biography (Bio) Chances are, you already know what a bio is, and you have even read a few of them.  Think about the one or two paragraphs that every author includes in the back flap of a book.  Think about the sentences under a blogger’s photo on every “About Me” page.  That is a bio.  It is a way to quickly highlight your life experiences.  It is essentially the way you would introduce yourself at a party. Where a bio is required for a job, chances are they won’t want to know about where you were born and how many pets you have, though.  This is a way to summarize your entire job history in quick-to-read format – and sometimes during a job hunt, being able to get to the point and grab the recruiter’s interest is the best way to get your foot in the door.  
Think of a bio as your entire resume put into words. Most bios have a standard format.  In paragraph one, talk about your most recent position and accomplishments there, specifically how they relate to the job you are applying for.  If you have teaching or research experience, training experience, certifications, or management experience, talk about them in paragraph two.  Paragraph three and four are for highlighting publications, education, certifications, associations, etc.  To wrap up your bio, provide your contact info and availability (dates and times). Where to use What? For most positions, you will know exactly what kind of application to use, because the job announcement will state what materials are needed – resume, CV, bio, cover letter, skill set, etc.  If there is any confusion, choose whatever the industry standard is (CV for technical fields, resume for everything else) or choose which of your documents is the strongest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, Professional Development, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what MapReduce is. In this article we will take a quick look at one of the four most important buzz words which go around Big Data – HDFS. What is HDFS? HDFS stands for Hadoop Distributed File System and it is the primary storage system used by Hadoop. It provides high performance access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware. In commodity hardware deployments, server failures are very common. For that reason HDFS is built to have high fault tolerance. The data transfer rate between compute nodes in HDFS is very high, which leads to reduced risk of failure. HDFS splits big data into smaller pieces and distributes them across different nodes. It also copies each smaller piece multiple times onto different nodes. Hence, when any node holding the data crashes, the system is automatically able to use the data from a different node and continue the process. This is the key feature of the HDFS system. Architecture of HDFS HDFS has a master/slave architecture. An HDFS cluster always consists of a single NameNode. This single NameNode is a master server and it manages the file system as well as regulates access to various files. In addition to the NameNode, there are multiple DataNodes. There is always one DataNode for each data server. In HDFS a big file is split into one or more blocks and those blocks are stored in a set of DataNodes. The primary task of the NameNode is to open, close or rename files and directories and regulate access to the file system, whereas the primary task of the DataNode is to read from and write to the file system. The DataNode is also responsible for the creation, deletion or replication of data based on instructions from the NameNode. In reality, the NameNode and DataNode are pieces of software, written in Java, designed to run on commodity machines. Visual Representation of HDFS Architecture Let us understand how HDFS works with the help of the diagram. The Client App or HDFS Client connects to the NameNode as well as the DataNodes. Client App access to the DataNodes is regulated by the NameNode, which allows the Client App to connect to the appropriate DataNode directly. A big data file is divided into multiple data blocks (let us assume those data chunks are A, B, C and D). The Client App will later on write the data blocks directly to a DataNode. The Client App does not have to write directly to all the nodes. It just has to write to any one of the nodes, and the NameNode will decide on which other DataNodes it will have to replicate the data. In our example the Client App directly writes to DataNode 1 and DataNode 3. However, data chunks are automatically replicated to other nodes. All the information about which data block is placed on which DataNode is written back to the NameNode. High Availability During Disaster Now, as multiple DataNodes hold the same data blocks, in the case of any DataNode facing a disaster the entire process will continue, as another DataNode will assume the role of serving the specific data blocks that were on the failed node. This system provides very high tolerance to disaster and provides high availability. If you notice, there is only a single NameNode in our architecture. If that node fails, our entire Hadoop application will stop performing, as it is the single node where we store all the metadata. As this node is very critical, it is usually replicated on another cluster as well as on another data rack.
Though, that replicated node is not operational in architecture, it has all the necessary data to perform the task of the NameNode in the case of the NameNode fails. The entire Hadoop architecture is built to function smoothly even there are node failures or hardware malfunction. It is built on the simple concept that data is so big it is impossible to have come up with a single piece of the hardware which can manage it properly. We need lots of commodity (cheap) hardware to manage our big data and hardware failure is part of the commodity servers. To reduce the impact of hardware failure Hadoop architecture is built to overcome the limitation of the non-functioning hardware. Tomorrow In tomorrow’s blog post we will discuss the importance of the relational database in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world works, and that is why we have the Notes from the Field series, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a DELETED database. Well, you may think I am just throwing words around, but in reality this kind of problem makes a DBA’s life interesting, and in this blog post we have an amazing story from Brian Kelley on the same subject. In this episode of the Notes from the Field series database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words. Sometimes, one of the hardest questions to answer is, “What changed?” A similar question is, “Did anything change other than what we expected to change?” The First Place to Check – Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off. But the Database Has Been Deleted! Pinal cited another issue, and that’s the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it’s a trace, like any other database trace. And the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_: Therefore, you can read the trace files like any other. Tip: Copy the files to a working directory. Otherwise, you may occasionally receive a file in use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as any other database. Testing with a Deleted Database: Here’s a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can’t do a standard Schema Changes History report. CREATE DATABASE DeleteMe; GO USE DeleteMe; GO CREATE SCHEMA Test AUTHORIZATION dbo; GO CREATE TABLE Test.Foo (FooID INT); GO USE MASTER; GO DROP DATABASE DeleteMe; GO This sets up the perfect situation where we can’t retrieve the information using the Schema Changes History report but where it’s still available. Finding the Information: I’ve sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise, I’m just looking at the trace files using SQL Profiler. As you can see, the information is definitely there: Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when.
You can even determine who dropped the database (loginame is captured). The key is to get the default trace files in a timely manner in order to extract the information. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
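    If you would rather query the trace files than scroll through Profiler, here is a minimal sketch in the spirit of the report query shown earlier in these results; it reads the currently active default trace file, and the event class numbers (46 Object:Created, 47 Object:Deleted, 164 Object:Altered) are the usual values for those events but are worth verifying against your own build.

    -- Read schema-change events straight from the default trace,
    -- including rows for databases that have since been dropped.
    DECLARE @tracefile VARCHAR(500);
    SELECT @tracefile = path FROM sys.traces WHERE is_default = 1;

    SELECT StartTime, LoginName, DatabaseName, ObjectName, ObjectType, EventClass
    FROM sys.fn_trace_gettable(@tracefile, DEFAULT)
    WHERE EventClass IN (46, 47, 164)   -- Object:Created / Object:Deleted / Object:Altered
    ORDER BY StartTime DESC;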

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me! Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of pop email addresses and even an exchange address to my client's server... times have changed. What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have much more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept, but yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond and then the jumk email stop, but that hasn't happened yet anymore than finding high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their car... but that's another whole topic and I digress. So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money? Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter huh.... so I took a look at their site, and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble and here's what I found: I installed it, and had to do a couple things: 1) SPAMfighter stuffed the SPAMfighter folder into my client's exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter Clients settings for Outlook, I told it to put all spam there. 2) It didn't seem to be doing anything. There's a ribbon button that you can select "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates 3) I sent email to support, and wrote a post on the forum (not to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first up, SPAMfighter knocked down everything that came through when Outlook opend... two thumbs up! 
    I disabled my 'garbage collection' rule in Outlook, and told Outlook not to use the junk folder, thinking it was interfering. 4) Day 2 seemed to go about like Day 1... but I hung in there. 5) Day 3 is now a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!! I'm a new paying customer. After watching SPAMfighter work this morning, I purchased a 1-year license, and I can now sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes, this is simply another way of using the delete key, but you know what? ... it feels good :) Here's a screenshot of the stats after just about 48 hours of being onboard: Note that all the ones blocked by me were during Days 1 and 2... I've blocked none today, and everything is blocked. Stay in the 'Light!

    Read the article

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen so many examples where users failed to restore their database because they made a mistake while taking the backup and were not aware of it. The tail log backup was such an issue in earlier versions of SQL Server, but in the latest version the Microsoft team has cleared up the confusion with additional information on the backup and restore screens themselves. Now that this additional information is there, a few more people are confused because they have no clue what it means; previously they did not see it as an issue, and now they are discovering the tail log as a new learning. Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, Backing Up and Recovering the Tail End of a Transaction Log. Many times when restoring a database over an existing database, SQL Server will warn you about needing to take a tail end of the log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself, "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log. Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn't be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log. In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can't become more corrupt than that. I attempt to bring the database back online to change the state to RECOVERY PENDING and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR; as its name says, it does not try to truncate the log. This is a great demo, but how could I back up the tail end of the log if the failure destroyed my entire SQL Server instance and all I had was the LDF file? During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log. 
    If I am performing proper backups, then my most recent full, differential and log backup files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example. I encourage each of you to view my blog post and watch the video demonstration on how to perform these tasks. I really hope that none of you ever have to perform this in production; however, it is a really good idea to know how to do this just in case. It really isn't a matter of "IF" you will have to perform a restore of a production system but more of a "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss and not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
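    A minimal T-SQL sketch of the tail-log backup Tim describes; the database name and backup path below are placeholders, not values from the article.
    -- Back up the tail of the log even though the data file (MDF) is gone or the database is in RECOVERY PENDING.
    -- NO_TRUNCATE behaves like COPY_ONLY plus CONTINUE_AFTER_ERROR, so the backup can succeed despite the damage.
    BACKUP LOG [ProductionDB]
        TO DISK = N'D:\Backup\ProductionDB_tail.trn'
        WITH NO_TRUNCATE, INIT;
    -- During the restore, this file is applied last:
    -- RESTORE LOG [ProductionDB] FROM DISK = N'D:\Backup\ProductionDB_tail.trn' WITH RECOVERY;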

    Read the article

  • Jquery Fancybox not working after postback

    - by Lee Englestone
    I have a Fancybox (or, more accurately, a number of Fancyboxes) on an asp.net page. My Fancybox (jquery plugin) works fine until a postback occurs on the page, then it refuses to work. Any thoughts? Anyone experienced similar behaviour? UPDATE : Some code.. I have a databound repeater with a fancybox on each repeating item. They are instantiated by (outside the repeater) $(document).ready(function() { $("a.watchvideo").fancybox({ 'overlayShow': false, 'frameWidth' : 480, 'frameHeight' : 400 }); }); The anchor tag is repeated.. href="#watchvideo_<%# Eval("VideoId") %>" As is a div with id="watchvideo_<%# Eval("VideoId") %>" As is a script element that instantiates the flash movies. Yes, the VideoIds are being output to the page. UPDATE : It's not a problem with the flash.. It is not a problem with the flash as I've tried it without the flash, it won't even pop a window with a simple message in it. UPDATE : I wonder if it is the updatepanel. http://stackoverflow.com/questions/301473/rebinding-events-in-jquery-after-ajax-update-updatepanel -- lee
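    If the UpdatePanel is indeed the culprit, a common fix is to rebind the plugin after every partial postback. The sketch below assumes a ScriptManager is on the page; it is illustrative, not code from the question.
    function bindFancybox() {
        $("a.watchvideo").fancybox({
            'overlayShow' : false,
            'frameWidth'  : 480,
            'frameHeight' : 400
        });
    }
    // Bind on the initial page load...
    $(document).ready(bindFancybox);
    // ...and again after every asynchronous (UpdatePanel) postback.
    Sys.WebForms.PageRequestManager.getInstance().add_endRequest(bindFancybox);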

    Read the article

  • In Reporting Services, how to filter a drop down parameter list based on another selected parameter?

    - by Lee Englestone
    Question In a Reporting Services report, how do I filter a second drop down list of cars to only show cars whose ManufacturerId is equal to the selected Manufacturer (from the first drop down list)? Report Datasets I have 2 datasets. Dataset 1. A list of Manufacturers, from a stored procedure Report_Manufacturers_P. Dataset 2. A list of Cars, including a column called ManufacturerId, from a stored procedure Report_Cars_P. Report Parameters On the report I have 2 parameters. Parameter 1. ManufacturerId, set from a drop down list of Manufacturers (Dataset 1). Parameter 2. CarId, set from a drop down list of Cars (Dataset 2). I've tried creating another sproc called Report_Manufacturer_Cars_P that takes the ManufacturerId as an integer and returns a list of cars made by that manufacturer. Any ideas? Selecting a Manufacturer doesn't seem to kick off anything that filters the Car list. Thanks in advance, -- Lee
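    A hedged sketch of the usual cascading-parameter setup: give the Cars dataset a parameter, then point the CarId parameter's available values at that dataset so it refreshes whenever a Manufacturer is selected. The table and column names below are assumptions; the question only names the stored procedures.
    -- Hypothetical table/column names for illustration.
    CREATE PROCEDURE Report_Manufacturer_Cars_P
        @ManufacturerId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT CarId, CarName
        FROM dbo.Car
        WHERE ManufacturerId = @ManufacturerId
        ORDER BY CarName;
    END
    In the report, map the dataset's @ManufacturerId query parameter to the ManufacturerId report parameter (Dataset Properties, Parameters page) and use this dataset for CarId's available values; SSRS then re-runs the query and filters the car list each time the first parameter changes.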

    Read the article

  • Troubles Iterating Over A HashMap with JSF, MyFaces & Facelets

    - by Lee Theobald
    Hi all, I'm having some trouble looping over a HashMap to print out its values to the screen. Could someone double check my code to see what I'm doing wrong? I can't seem to find anything wrong, but there must be something. In a servlet, I am adding the following to the request: Map<String, String> facetValues = new HashMap<String, String>(); // Filling the map req.setAttribute(facetField.getName(), facetValues); In one case "facetField.getName()" evaluates to "discipline". So in my page I have the following: <ui:repeat value="${requestScope.discipline}" var="item"> <li>Item: <c:out value="${item}"/>, Key: <c:out value="${item.key}"/>, Value: <c:out value="${item.item}"/></li> </ui:repeat> The loop is run once but all the outputs are blank?!? I would have at least expected something in item if it's gone over the loop once. Checking the debug popup for Facelets, discipline is there and on the loop. Printing it to the screen results in something that looks like a map to me (I've shortened the output): {300=0, 1600=0, 200=0, ... , 2200=0} I've also tried with a c:forEach but I'm getting the same results. So does anyone have any ideas where I'm going wrong? Thanks for any input, Lee
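    For what it's worth, ui:repeat in that generation of JSF/Facelets iterates only Lists and arrays, not Maps, which would explain the single blank iteration. A hedged sketch of two workarounds follows; only the attribute name "discipline" comes from the question, the rest is illustrative.
    <!-- Option 1: c:forEach can iterate a Map; each item is a Map.Entry, so use .key and .value. -->
    <c:forEach items="${requestScope.discipline}" var="entry">
        <li>Key: <c:out value="${entry.key}"/>, Value: <c:out value="${entry.value}"/></li>
    </c:forEach>
    <!-- Option 2: expose the entries as a List in the servlet so ui:repeat can handle them:
         req.setAttribute("discipline", new ArrayList<Map.Entry<String, String>>(facetValues.entrySet())); -->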

    Read the article

  • MSBuild file for deployment process

    - by Lee Englestone
    I could do with some pointers, code examples or references that may help me do the following in an MSBuild file to help speed up the deployment process. This scenario involves getting a developer's 'local' version onto a 'development' server: 1) Increment the developer's local Web Application assembly version number. 2) Publish the developer's local Web Application files somewhere. 3) .rar the published files or folder into the format v[IncrementedAssemblyNumber].rar. 4) Copy the .rar to somewhere. 5) Backup (.rar) the existing live website folder (located elsewhere) in the format Pre_v[IncrementedAssemblyNumber].rar. 6) Move the backed up .rar to a /Backup folder. 7) Overwrite the development web files with the published local web files. Should be simple for all those MSBuild gurus out there. Like I said, answers or 'good and applicable' links would be much appreciated. Also, I'm thinking of getting one of the MSBuild books. From what I can tell there are 2, possibly 3 contenders. I am not using TFS. Can anyone recommend a book for beginning MSBuild? Ideally from people that have read more than one book on the subject. Cheers, -- Lee
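    A rough MSBuild sketch of the packaging and backup steps, assuming WinRAR's command-line rar.exe and made-up paths; incrementing the assembly version is usually delegated to something like the MSBuild Community Tasks Version/AssemblyInfo tasks, which are not shown here.
    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Deploy">
      <PropertyGroup>
        <Version>1.0.0.1</Version> <!-- assume a versioning task sets this -->
        <PublishDir>C:\Publish\MyWebApp</PublishDir>
        <DropDir>\\devserver\Drops</DropDir>
        <LiveDir>\\devserver\wwwroot\MyWebApp</LiveDir>
        <BackupDir>\\devserver\Backup</BackupDir>
        <Rar>"C:\Program Files\WinRAR\rar.exe"</Rar>
      </PropertyGroup>
      <Target Name="Deploy">
        <!-- Package the published files as v[version].rar in the drop folder -->
        <Exec Command='$(Rar) a "$(DropDir)\v$(Version).rar" "$(PublishDir)\*"' />
        <!-- Back up the existing site as Pre_v[version].rar before overwriting it -->
        <Exec Command='$(Rar) a "$(BackupDir)\Pre_v$(Version).rar" "$(LiveDir)\*"' />
        <!-- Overwrite the development web files with the freshly published ones (ItemGroup inside a Target needs MSBuild 3.5+) -->
        <ItemGroup>
          <PublishedFiles Include="$(PublishDir)\**\*.*" />
        </ItemGroup>
        <Copy SourceFiles="@(PublishedFiles)" DestinationFiles="@(PublishedFiles->'$(LiveDir)\%(RecursiveDir)%(Filename)%(Extension)')" />
      </Target>
    </Project>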

    Read the article

  • Distribute budget over ranked components in SQL

    - by Lee
    Assume I have a budget of $10 (any integer) and I want to distribute it over records which have a rank field and varying needs. Example (rank / requirement / fulfilled?): 1 / $3 / Y, 2 / $4 / Y, 3 / $2 / Y, 4 / $3 / N. Ranks 1 to 3 should be fulfilled because they are within budget, whereas the one ranked 4 should not. I want an SQL query to solve that. Below is my initial script: CREATE TABLE budget ( id VARCHAR (32), budget INTEGER, PRIMARY KEY (id)); CREATE TABLE component ( id VARCHAR (32), rank INTEGER, req INTEGER, satisfied BOOLEAN, PRIMARY KEY (id)); INSERT INTO budget (id,budget) VALUES ('1',10); INSERT INTO component (id,rank,req) VALUES ('1',1,3); INSERT INTO component (id,rank,req) VALUES ('2',2,4); INSERT INTO component (id,rank,req) VALUES ('3',3,2); INSERT INTO component (id,rank,req) VALUES ('4',4,3); Thanks in advance for your help. Lee
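    One possible answer, sketched against the script above (and assuming a single row in the budget table): compute a running total of req in rank order and flag rows whose running total stays within the budget. On engines with window functions, SUM(req) OVER (ORDER BY rank) could replace the correlated subquery.
    SELECT c.id, c.rank, c.req,
           CASE WHEN (SELECT SUM(c2.req) FROM component c2 WHERE c2.rank <= c.rank) <= b.budget
                THEN 'Y' ELSE 'N' END AS fulfilled
    FROM component c
    CROSS JOIN budget b
    ORDER BY c.rank;
    -- With the sample data the running totals are 3, 7, 9, 12, so ranks 1-3 are 'Y' and rank 4 is 'N'.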

    Read the article

  • Really Need a Facebook App to post to Facebook Page Wall?

    - by Lee Englestone
    Hi, I'm using the Facebook Graph API (C# & ASP.NET) to try to dynamically post to a Facebook page I created. Looking at the code samples floating around.. they suggest creating a Facebook App first (which I have done).. However.. I have 3 different pages I want to post to.. Do I need to create an app for each? (I want to post different things to the 3 pages, not the same posts to each.) I just want messages & links to appear as Wall posts. I'm not bothered about having an 'app' with a 'canvas' that is placed in an IFrame. Question : So do I still need to write one or more Facebook Apps to post to my 3 different Facebook pages?? Where I am so far.. I can pass in my app's credentials and get back the access_token. But my posts don't appear to be going anywhere. I'd rather drop the 'Facebook application / canvas' approach if possible, if I can post directly to a wall (for the above reasons). Oh, and before you ask, I don't want to post to my app's Wall page, I want to post to my other pages (unless I have to post to my app's Wall page first?). I'm sure loads of people have the same questions.. Thanks in advance. -- Lee
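    For context, in the Graph API of that era a single app was generally enough for several pages: you requested the manage_pages permission, read a page access token for each page from /me/accounts, and posted to {page_id}/feed with that page token rather than the app token. A minimal, hedged C# sketch (the page id, token and message are placeholders, not values from the question):
    using System;
    using System.Collections.Specialized;
    using System.Net;
    using System.Text;

    class PageWallPoster
    {
        static void Main()
        {
            const string pageId = "YOUR_PAGE_ID";                // placeholder
            const string pageAccessToken = "PAGE_ACCESS_TOKEN";  // page token from /me/accounts, one per page

            using (var client = new WebClient())
            {
                var data = new NameValueCollection
                {
                    { "access_token", pageAccessToken },
                    { "message", "Hello from the Graph API" },
                    { "link", "http://example.com" }
                };

                // POST to the page's feed; the response contains the new post's id as JSON.
                byte[] response = client.UploadValues("https://graph.facebook.com/" + pageId + "/feed", "POST", data);
                Console.WriteLine(Encoding.UTF8.GetString(response));
            }
        }
    }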

    Read the article

  • How to design app to be modular / support plugins

    - by Lee
    I'm currently in the process of refactoring my webplayer so that we'll be more easily able to run it on our other internet radio stations. Much of the setup between these players will be very similar; however, some will need to have different UI plugins / other plugins. Currently in the webplayer I do something like this in its init(): _this.ui = new UI(); _this.ui.playlist = new Playlist(); _this.ui.channelDropdown = new ChannelDropdown(); _this.ui.timecode = new Timecode(); etc etc This works fine, but it locks me into requiring those objects at run time. What I'd like to do is be able to add those based on the station's needs. Basically my question is, do I need to add some kind of "addPlugin()" functionality here? And if I do that, do I need to constantly check from my WebPlayer object whether that plugin exists before it attempts to use it? Like... if (_hasPlugin('playlist')) this.plugins.playlist.add(track); I apologize if some of this might not be clear... really trying to get my head wrapped around all of this. I feel I'm closer but I'm still stuck. Any advice on how I should proceed with this would be greatly appreciated. Thanks in advance, Lee
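    One common shape for this is a small plugin registry, sketched here under the assumption that each plugin exposes an optional init(player) hook; Playlist and track come from the question, everything else is illustrative.
    function WebPlayer() {
        this.plugins = {};
    }
    // Register a plugin by name and give it a chance to wire itself up.
    WebPlayer.prototype.addPlugin = function (name, plugin) {
        this.plugins[name] = plugin;
        if (typeof plugin.init === "function") {
            plugin.init(this);
        }
    };
    WebPlayer.prototype.hasPlugin = function (name) {
        return this.plugins.hasOwnProperty(name);
    };
    // Usage: only stations that need a playlist register one.
    var player = new WebPlayer();
    player.addPlugin("playlist", new Playlist());
    if (player.hasPlugin("playlist")) {
        player.plugins.playlist.add(track);
    }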

    Read the article

  • unmet dependencies in Ubuntu 12.04

    - by lee.O
    I tried today to install a dvb-card on my Ubuntu 12.04 (Linux blauhai-linux 3.2.0-25-generic #40-Ubuntu SMP Wed May 23 20:30:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux ). The installation failed with an error. After that, i tried to install python (it was already installed but i got this error): linux:~$ sudo apt-get install git Reading package lists... Done Building dependency tree Reading state information... Done git is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: python-glade2:i386 : Depends: python:i386 (< 2.5) but it is not going to be installed Depends: python-support:i386 (= 0.3.4) but it is not installable Depends: python:i386 (= 2.4) but it is not going to be installed Depends: libglade2-0:i386 (= 1:2.5.1) but it is not going to be installed Depends: python-gtk2:i386 (= 2.8.6-8) but it is not going to be installed python-numeric:i386 : Depends: python:i386 (< 2.5) but it is not going to be installed Depends: python:i386 (= 2.3) but it is not going to be installed Depends: python-central:i386 (= 0.5.7) but it is not installable E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). well, i can read and tried the proposed command, but then i get this: linux:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libopenal1:i386 libsdl-ttf2.0-0:i386 libkrb5-3:i386 libgconf-2-4:i386 libsm-dev libatk1.0-0:i386 libk5crypto3:i386 libstdc++5:i386 libqt4-declarative:i386 libxcomposite1:i386 libice-dev libgail18:i386 libldap-2.4-2:i386 libao-common libv4l-0:i386 liblcms1:i386 libqt4-qt3support:i386 libroken18-heimdal:i386 libunistring0:i386 libcupsimage2:i386 libgphoto2-port0:i386 libidn11:i386 libnss3:i386 libcaca0:i386 gtk2-engines:i386 libgudev-1.0-0:i386 libjpeg-turbo8:i386 libpthread-stubs0 libcairo-gobject2:i386 libavc1394-0:i386 libjpeg8:i386 libotr2 libaio1:i386 libsane:i386 odbcinst1debian2 odbcinst1debian2:i386 libqt4-test:i386 libqt4-script:i386 libqt4-designer:i386 libsdl-mixer1.2:i386 libqt4-network:i386 libqt4-dbus:i386 libcap2:i386 libproxy1:i386 ibus-gtk:i386 libdbus-glib-1-2:i386 libtdb1:i386 libasn1-8-heimdal:i386 libspeex1:i386 libxslt1.1:i386 libgomp1:i386 libcapi20-3:i386 libibus-1.0-0:i386 libcairo2:i386 libgnutls26:i386 libopenal-data odbcinst libgssapi3-heimdal:i386 libcanberra0:i386 libtasn1-3:i386 libfreetype6:i386 x11proto-kb-dev gtk2-engines-murrine:i386 libwavpack1:i386 libqt4-opengl:i386 libsoup-gnome2.4-1:i386 libv4lconvert0:i386 gstreamer0.10-plugins-good:i386 libc6-i386 lib32gcc1 libqt4-xmlpatterns:i386 librsvg2-common:i386 libdatrie1:i386 xtrans-dev libavahi-common-data:i386 libiec61883-0:i386 lib32asound2 libgdk-pixbuf2.0-0:i386 libsdl-image1.2:i386 libp11-kit0:i386 x11proto-input-dev libwind0-heimdal:i386 libpixman-1-0:i386 libsdl1.2debian:i386 libxaw7:i386 libgdbm3:i386 libcups2:i386 libcurl3:i386 libqtcore4:i386 libxinerama1:i386 libesd0:i386 libmikmod2:i386 libkrb5support0:i386 libxft2:i386 libxt-dev libcroco3:i386 libpulse-mainloop-glib0:i386 libice6:i386 libaa1:i386 libieee1284-3:i386 libgcrypt11:i386 libthai0:i386 libao4:i386 libkeyutils1:i386 libxmu6:i386 libcanberra-gtk0:i386 libvorbisfile3:i386 libqt4-sql:i386 esound-common libxpm4:i386 libqt4-svg:i386 libusb-0.1-4:i386 libgail-common:i386 libxrender1:i386 libhcrypto4-heimdal:i386 
libraw1394-11:i386 libnspr4:i386 libshout3:i386 libdv4:i386 libhx509-5-heimdal:i386 libxau-dev libqt4-xml:i386 gstreamer0.10-x:i386 libgettextpo0:i386 libxss1:i386 libgd2-xpm:i386 libheimbase1-heimdal:i386 libtiff4:i386 libsdl-net1.2:i386 libjasper1:i386 libgnome-keyring0:i386 libxtst6:i386 gtk2-engines-pixbuf:i386 libqtgui4:i386 libtag1c2a:i386 librsvg2-2:i386 libavahi-client3:i386 libssl0.9.8:i386 libmpg123-0:i386 libmad0:i386 libsasl2-2:i386 xorg-sgml-doctools libgsoap1 gtk2-engines-oxygen:i386 libfontconfig1:i386 xaw3dg:i386 libpango1.0-0:i386 libsm6:i386 libx11-dev libheimntlm0-heimdal:i386 libpulsedsp:i386 lib32stdc++6 libx11-doc libqt4-sql-mysql:i386 libxcb-render0:i386 libodbc1:i386 libexif12:i386 libqt4-scripttools:i386 librtmp0:i386 libgssapi-krb5-2:i386 libxi6:i386 libqtwebkit4:i386 libxcb1-dev libxp6:i386 libaudio2:i386 libxcursor1:i386 libxcb-shm0:i386 libxt6:i386 libxv1:i386 libsasl2-modules:i386 libavahi-common3:i386 libxrandr2:i386 x11proto-core-dev libsqlite3-0:i386 libmng1:i386 libgtk2.0-0:i386 libxdmcp-dev libpthread-stubs0-dev libltdl7:i386 libkrb5-26-heimdal:i386 libssl1.0.0:i386 glib-networking:i386 libgpg-error0:i386 libsoup2.4-1:i386 libgphoto2-2:i386 libtag1-vanilla:i386 libaudiofile1:i386 libglade2-0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: default-jre default-jre-headless icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx icedtea-netx-common libglade2-0:i386 libpython3.2 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib python3 python3-minimal python3-uno python3.2 python3.2-minimal Suggested packages: icedtea-plugin sun-java6-fonts fonts-ipafont-gothic fonts-ipafont-mincho ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts python3-doc python3-tk python3.2-doc binfmt-support The following packages will be REMOVED: activity-log-manager-control-center aisleriot alacarte apparmor apport apport-gtk apt-xapian-index aptdaemon apturl apturl-common bluez bluez-alsa bluez-alsa:i386 bluez-gstreamer checkbox checkbox-qt command-not-found compiz compiz-gnome compiz-plugins-main-default compizconfig-backend-gconf deja-dup duplicity eog evolution-data-server firefox firefox-globalmenu firefox-gnome-support foomatic-db-compressed-ppds gconf-editor gconf2 gdb gedit gir1.2-mutter-3.0 gir1.2-peas-1.0 gir1.2-rb-3.0 gir1.2-totem-1.0 gir1.2-ubuntuoneui-3.0 gksu gnome-applets gnome-applets-data gnome-bluetooth gnome-contacts gnome-control-center gnome-media gnome-menus gnome-orca gnome-panel gnome-panel-data gnome-session-fallback gnome-shell gnome-sudoku gnome-terminal gnome-terminal-data gnome-themes-standard gnome-tweak-tool gnome-user-share gstreamer0.10-gconf gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hplip hplip-data ia32-libs ia32-libs-multiarch:i386 ibus ibus-pinyin ibus-table indicator-datetime indicator-power jockey-common jockey-gtk landscape-client-ui-install language-selector-common language-selector-gnome launchpad-integration libcanberra-gtk-module libcanberra-gtk-module:i386 libcanberra-gtk3-module libcompizconfig0 libfolks-eds25 libgksu2-0 libgnome-media-profiles-3.0-0 libgnome2-0 libgnome2-common libgnomevfs2-0 libgnomevfs2-common libgweather-3-0 libgweather-common libgwibber-gtk2 libgwibber2 libmetacity-private0 libmutter0 libpeas-1.0-0 libpurple-bin libpython2.7 libreoffice-gnome librhythmbox-core5 libsyncdaemon-1.0-1 libtotem0 libubuntuoneui-3.0-1 light-themes lsb-release metacity metacity-common mutter-common nautilus-dropbox 
nautilus-share network-manager-gnome nvidia-common nvidia-settings nvidia-settings-updates onboard oneconf openjdk-7-jdk openjdk-7-jre openprinting-ppds pidgin pidgin-libnotify pidgin-otr printer-driver-foo2zjs printer-driver-ptouch printer-driver-pxljr printer-driver-sag-gdi printer-driver-splix python python-appindicator python-apport python-apt python-apt-common python-aptdaemon python-aptdaemon.gtk3widgets python-aptdaemon.pkcompat python-brlapi python-cairo python-central python-chardet python-configglue python-crypto python-cups python-cupshelpers python-dateutil python-dbus python-debian python-debtagshw python-defer python-dirspec python-egenix-mxdatetime python-egenix-mxtools python-gconf python-gdbm python-gi python-gi-cairo python-glade2:i386 python-gmenu python-gnomekeyring python-gnupginterface python-gobject python-gobject-2 python-gpgme python-gst0.10 python-gtk2 python-httplib2 python-ibus python-imaging python-keyring python-launchpadlib python-lazr.restfulclient python-lazr.uri python-libproxy python-libxml2 python-louis python-mako python-markupsafe python-minimal python-notify python-numeric:i386 python-oauth python-openssl python-packagekit python-pam python-pexpect python-piston-mini-client python-pkg-resources python-problem-report python-protobuf python-pyatspi2 python-pycurl python-pyinotify python-renderpm python-reportlab python-reportlab-accel python-serial python-simplejson python-smbc python-software-properties python-speechd python-twisted-bin python-twisted-core python-twisted-names python-twisted-web python-ubuntu-sso-client python-ubuntuone-client python-ubuntuone-control-panel python-ubuntuone-storageprotocol python-uno python-virtkey python-wadllib python-xapian python-xdg python-xkit python-zeitgeist python-zope.interface python2.7 python2.7-minimal rhythmbox rhythmbox-mozilla rhythmbox-plugin-cdrecorder rhythmbox-plugin-magnatune rhythmbox-plugin-zeitgeist rhythmbox-plugins rhythmbox-ubuntuone screen-resolution-extra sessioninstaller skype software-center software-center-aptdaemon-plugins software-properties-common software-properties-gtk system-config-printer-common system-config-printer-gnome system-config-printer-udev texlive-extra-utils totem totem-mozilla totem-plugins ubuntu-artwork ubuntu-desktop ubuntu-minimal ubuntu-sso-client ubuntu-sso-client-gtk ubuntu-standard ubuntu-system-service ubuntuone-client ubuntuone-client-gnome ubuntuone-control-panel ubuntuone-couch ubuntuone-installer ufw unattended-upgrades unity unity-2d unity-common unity-lens-applications unity-lens-video unity-scope-musicstores unity-scope-video-remote update-manager update-manager-core update-notifier update-notifier-common usb-creator-common usb-creator-gtk virtualbox virtualbox-dkms virtualbox-qt xdiagnose xul-ext-ubufox zeitgeist zeitgeist-core zeitgeist-datahub The following NEW packages will be installed: default-jre default-jre-headless icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx icedtea-netx-common libglade2-0:i386 libpython3.2 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib python3 python3-minimal python3-uno python3.2 python3.2-minimal WARNING: The following essential packages will be removed. This should NOT be done unless you know exactly what you are doing! python-minimal python2.7-minimal (due to python-minimal) 0 upgraded, 16 newly installed, 273 to remove and 0 not upgraded. 2 not fully installed or removed. Need to get 39.1 MB of archives. After this operation, 324 MB disk space will be freed. 
    You are about to do something potentially harmful. To continue type in the phrase 'Yes, do as I say!' ?] That's not good, is it?! Should I run this command, or should I run another command to fix this problem? It would be great if somebody could help me. :) Thanks in advance. Best regards

    Read the article

  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I'll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation where data is accidentally deleted. You can download ApexSQL Recover here, install, and play along. With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we'll address some sub-optimal scenarios, where you can still successfully recover data. If you have only a full database backup This is the least optimal SQL Server disaster recovery strategy, as it doesn't ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database. If you have a full database backup followed by differential database backups Let's say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday, and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday. If you have a full database backup and a chain of transaction log backups This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then, restore transaction log backups, one by one, in the order they were created, starting with the first one created after the restored differential database backup. Now, the table will be in the state before the records were deleted. You have to identify the deleted records, script them and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk. How to easily recover deleted records? The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups. 
    To understand how ApexSQL Recover works, I'll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are not shown as existing anymore, but ApexSQL Recover can read them and create an undo script for them. How long will deleted records stay in the MDF file? It depends on many factors; as time passes, it becomes more and more likely that the records will be overwritten. The more transactions occur after the deletion, the greater the chance that the records will be overwritten and permanently lost. Therefore, it's recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup. First, I'll delete some records from the Person.EmailAddress table in the AdventureWorks database. I can delete these records in SQL Server Management Studio, or execute a script such as DELETE FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80 Then, I'll start ApexSQL Recover and select From DELETE operation in the Recovery tab. In the Select the database to recover step, first select the SQL Server instance. If it's not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list. In the next step, you're prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option. The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically. Click Add to add transaction log backups. It would be best if you have a full transaction log chain, as explained above. The next step for this option is to specify the time range. Selecting a small time range around the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script the records deleted on purpose, and you don't want that. If needed, you can check the generated script and manually remove such records. After that, for all data source options, the next step is to select the tables. Be careful here: if you deleted some data from other tables on purpose, and don't want to recover it, don't select all tables, as ApexSQL Recover will create the INSERT script for them too. The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I'll select the first one. The recovery process is completed and 11 records are found and scripted, as expected. 
    To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT INTO statements look like: INSERT INTO Person.EmailAddress( BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate) VALUES( 70, 70, N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS, 'd62c5b4e-c91f-403f-b630-7b7e0fda70ce', '20030109 00:00:00.000' ); To execute the script, click Execute in the menu. If you want to check whether the records are really back, execute SELECT * FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80 As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without the database backup that contains the deleted data and the relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records from a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover SQL database lost data and repair a SQL Server database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • South Florida Code Camp 2010 – VI – 2010-02-27

    - by Dave Noderer
    Catching up after our sixth code camp here in the Ft Lauderdale, FL area. Website at: http://www.fladotnet.com/codecamp. For the 5th time, DeVry University hosted the event, which makes everything else really easy! Statistics from 2010 South Florida Code Camp: 848 registered (we use Microsoft Group Events) ~ 600 attended (516 took name badges) 64 speakers (including speaker idol) 72 sessions 12 parallel tracks Food 400 waters 600 sodas 900 cups of coffee (it was cold!) 200 pounds of ice 200 pizzas 10 large salad trays 900 mouse pads Photos on Facebook Dave Noderer: http://www.facebook.com/home.php#!/album.php?aid=190812&id=693530361 Joe Healy: http://www.facebook.com/devfish?ref=mf#!/album.php?aid=202787&id=720054950 Will Strohl: http://www.facebook.com/home.php#!/album.php?aid=2045553&id=1046966128&ref=mf Veronica Gonzalez: http://www.facebook.com/home.php#!/album.php?aid=150954&id=672439484 Florida Speaker Idol One of the sessions at code camp was the South Florida Regional speaker idol competition. After user group level competitions there are five competitors. I acted as MC and score keeper while Ed Hill, Bob O'Connell, John Dunagan and Shervin Shakibi were judges. This statewide competition is being run by Roy Lawsen in Lakeland, and the winner, Jeff Truman from Naples, will move on to the state finals to be held at the Orlando Code Camp on 3/27/2010: http://www.orlandocodecamp.com/. Each speaker has 10 minutes. The participants were: Alex Koval Jeff Truman Jared Nielsen Chris Catto Venkat Narayanasamy They all did a great job and I'm working with each to make sure they don't stop there and start speaking at meetings. Thanks to everyone involved! Volunteers As always, events like this don't happen without a lot of help! The key people were: Ed Hill, Bob O'Connell – DeVry For the months leading up to the event, Ed collects all of the swag, books, etc and stores them. He holds meetings with various DeVry departments to coordinate the day, and he works with the students in the days before code camp to stuff bags, print signs, arrange tables and visit BJ's for our supplies (I go and pay but have a small car!). And of course the day of the event he is there at 5:30 am!! We took two SUVs to BJ's, and I was really worried that the 36 cases of water were going to break his rear axle! He also helps with the students and works very hard before and after the event. Rainer Haberman – Speakers and Volunteer of the Year Rainer has helped over the past couple of years but this time he took full control of arranging the tracks. I did some preliminary work soliciting speakers but he took over all communications after that. We have tried various organizations around speakers, a chair per track, a central team, but having someone paying attention to the details is definitely the way to go! This was the first year I did not have to jump in at the last minute and re-arrange everything. There were lots of kudos from the speakers too, saying they felt it was more organized than they had experienced in the past at any code camp. Thanks Rainer! Ray Alamonte – Book Swap We saw the idea of a book swap from the Alabama Code Camp and thought we would give it a try. Ray jumped in and took control. The idea was to get people to bring their old technical books to swap or for others to buy. You got a ticket for each book you brought that you could then turn in to buy another book. If you did not have a ticket you could buy a book for $1. Net proceeds were $153 which I rounded up and donated to the Red Cross. 
    There is plenty going on in Haiti and Chile! I don't think we really got a count of how many books came in. In many cases the books barely hit the table before being picked up again. At the end we were left with a dozen books which we donated to the DeVry library. A great success that we will definitely do again! Jace Weiss / Ratchelen Hut – Coffee and Snacks Wow, this was an eye opener. In past years a few of us would struggle to give some attention to coffee, snacks, etc. But it was always tenuous and we always ended up running out of coffee. In the past we have tried buying Dunkin Donuts coffee, renting urns, borrowing urns, etc. This year I actually purchased 2 – 100 cup Westbend commercial brewers plus a couple of small urns (30 and 60 cup, which we used for decaf). We got them both started early (although I forgot to push the on button on one!) and primed them with 10 boxes of Joe from Dunkin. Then Jace and Rachelen took over.. once a batch was brewed they would refill the boxes, keep the area clean and at one point were filling cups. We never ran out of coffee and served a few hundred more than last year. We did look, but next year I'll get a large insulated (like Gatorade) dispensing container. It all went very smoothly and having help focused on that one area was a big win. Thanks Jace and Rachelen! Ken & Shirley Golding / Roberta Barbosa – Registration Ken & Shirley showed up and took over registration. This year we printed small name tags for everyone registered, which was great because it is much easier to remember someone's name when they are labeled! In any case it went the smoothest it has ever gone. All three were actively pulling people through the registration, answering questions, directing them to bags and information very quickly. I did not see that there was too big a line at any time. Thanks!! Scott Katarincic / Vishal Shukla – Website For the 3rd?? year in a row, Scott was in charge of the website, starting in August or September when I start on code camp. He handles all the requests, makes changes to the site and handles admin. I think two years ago he wrote all the backend administration, and he tunes it and the website a bit, but things are pretty stable. The only thing I do is put up the sponsors. It is a big pressure off of me!! Thanks Scott! Vishal jumped into the web end this year and created a new Silverlight agenda page to replace the old ajax page. We will continue to enhance this but it is definitely a good step forward! Thanks! Alex Funkhouser – T-shirts/Mouse pads/tables/sponsors Alex helps in many areas. He helps me bring in sponsors and handles all the logistics for t-shirts, sponsor tables and, this year, the mouse pads. He is also a key person in helping promote the event, not to mention the after after party which I did not attend and don't want to know much about! Students There were a number of student volunteers, but I don't have all of their names. But thanks to them, they stuffed bags, patrolled pizza and helped with moving things around. Sponsors We had a bunch of great sponsors which allowed us to feed people and give away a lot of great swag. Our major sponsors, DeVry, Microsoft (both DPE and UGSS), Infragistics, Telerik, SQL Share (End to End, SQL Saturdays), and Interclick, are very much appreciated. 
    The other sponsors were Applied Innovations (who also supply code camp hosting), Ultimate Software (a great local SW company), Linxter (reliable cloud messaging we are lucky to have here!), Mediascend (a media startup), SoftwareFX (another local SW company we are happy to have back participating in CC), CozyRoc (if you do SSIS, check them out), Arrow Design (local DNN and Silverlight experts), Boxes and Arrows (a local SW consulting company) and Robert Half. One thing we did this year besides a t-shirt was a mouse pad. I like it because it will be around for a long time on many desks. After much investigation and years of using mouse pads I've determined that the 1/8" fabric top is the best, and that is what we got! So now I get a break for a few months before starting again!

    Read the article

  • 11 Types of Developers

    - by Lee Brandt
    Jack Dawson Jack Dawson is the homeless drifter in Titanic. At one point in the movie he says, “I figure life’s a gift, and I don’t intend on wasting it.” He is happy to wander wherever life takes him. He works himself from place to place, making just enough money to make it to his next adventure. The “Jack Dawson” developer clings to any new technology as the ‘next big thing’, and will find ways to shoe-horn it into places where it is not a fit. He is very appealing to the other developers because they want to try the newest techniques and tools too. He will only stay until the new technology either bores him or becomes problematic. Jack will also be hard to find once the technology has been implemented, because he will be on to the next shiny thing. However, having a Jack Dawson on your team can be beneficial. Jack can be a great ally when attempting to convince a stodgy, corporate entity to upgrade. Jack usually has an encyclopedic recall of all the new features of the technology upgrade and is more than happy to interject them into any conversation. Tom Smykowski Tom is the neurotic employee in Office Space, and is deathly afraid of being fired. He will do only what is necessary to keep the status quo. He believes as long as nothing changes, his job is safe. He will scoff at anything new and be the naysayer during any change initiative. Tom can be useful in offsetting Jack Dawson. Jack will constantly be pushing for change and Tom will constantly be fighting it. When you see that Jack is getting kind of bored with a new technology and Tom has finally stopped wetting himself at the mere mention of it, then it is probably the sweet spot to begin implementing that new technology (provided it is the right tool for the job). Ray Consella Ray is the guy who built the Field of Dreams. He took a risk. Sometimes he screwed it up, but he knew he didn’t want to end up regretting not attempting it. He constantly doubted himself, but he knew he had to keep going. Granted, he was doing what the voices in his head were telling him to do, but my point is he was driven to do something that most people considered crazy. Even when his friends, his wife, and even he himself said he was crazy, somewhere inside himself he knew it was the right thing to do. These are the innovators. These are the Bill Gates and Steve Jobs of the world. They take risks, they fail, they learn and they get better. Obviously, this kind of person thrives in start-ups and smaller companies, but that is due to their natural aversion to bureaucracy. They want to see their ideas put into motion quickly, and withdrawn quickly if they don’t work. Short feedback cycles are essential to Ray. He wants to know if his idea is working or not. He wants to modify or reverse his idea if it is not working or makes things worse. These are the agilistas. May I always be one.

    Read the article

  • Big Data – Is Big Data Relevant to me? – Big Data Questionnaires – Guest Post by Vinod Kumar

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. I think the series from Pinal is a good one for anyone planning to start a Big Data journey from the basics. In my daily customer interactions this buzz of “Big Data” always comes up, and I generally react by saying – “Sir, do you really have a ‘Big Data’ problem or do you have a big Data problem?” Generally, there is a silence in the air when I ask this question. Data is everywhere in organizations – be it big data, small data, all data, and for a few it is bad data, which is the same as no data :). Wow, don’t discount me as someone who opposes “Big Data”; I am as much a big supporter as I am a critic of the abuse of this term. In this post, I want to let my mind flow so that you can also think about these concepts in the same direction. In any case, this is not an exhaustive dump of what is in my mind – but you will surely get the drift of how I question Big Data terms from customers!!! Is Big Data Relevant to me? Many of my customers talk to me like a blank whiteboard, with no idea of “why Big Data”. They want to jump on the technology bandwagon and they want to decipher insights from their unexplored data, a.k.a. unstructured data, together with structured data. So what are the industry scenarios that come to mind? Here are some of them: Financials Fraud detection: Banks and Credit cards are monitoring your spending habits on a real-time basis. Customer Segmentation: applies in every industry from Banking to Retail to Aviation to Utility and others that deal with end customers who consume their products and services. Customer Sentiment Analysis: Responding to negative brand perception on social media or amplifying the positive perception. Sales and Marketing Campaign: Understand the impact and get closer to customer delight. Call Center Analysis: attempting to take unstructured voice recordings and analyze them for content and sentiment. Medical Reduce Re-admissions: How to build proactive follow-up engagements with patients. Patient Monitoring: How to track Inpatient, Out-Patient, Emergency Visits, Intensive Care Units etc. Preventive Care: Disease identification and risk stratification is a very crucial business function for the medical field. Claims fraud detection: There is no precise dollar figure one can put here, but this is a big thing for the medical field. Retail Customer Sentiment Analysis, Customer Care Centers, Campaign Management. Supply Chain Analysis: All sensor and RFID data can be tracked for warehouse space optimization. Location based marketing: Based on where a check-in happens, retail stores can optimize their marketing. Telecom Price optimization and Plans, Finding Customer churn, Customer loyalty programs Call Detail Record (CDR) Analysis, Network optimizations, User Location analysis Customer Behavior Analysis Insurance Fraud Detection & Analysis, Pricing based on customer Sentiment Analysis, Loyalty Management Agents Analysis, Customer Value Management This list can go on to other areas like Utility, Manufacturing, Travel, ITES etc. So as you can see, there are obviously interesting use cases for each of these industry verticals. This is just a representative list. Where to start? 
    A lot of times I try to quiz customers on a number of dimensions before starting a Big Data conversation. Are you getting the data you need the way you want it and in a timely manner? Can you get in and analyze the data you need? How quickly does IT respond to your BI requests? How easily can you get at the data that you need to run your business/department/project? How are you currently measuring your business? Can you get the data you need to react WITHIN THE QUARTER to impact behaviors to meet your numbers, or is it always “rear-view mirror”? How are you measuring: The Brand Customer Sentiment Your Competition Your Pricing Your performance Supply Chain Efficiencies Predictive product / service positioning What are your key challenges in driving collaboration across your global business? What are the challenges in innovation? What challenges are you facing in getting more information out of your data? Note: Garbage in is garbage out. This holds good for all reporting / analytics requirements. Big Data POCs? A number of customers get into the realm of setting up a small team to work on Big Data – well, it is a great start from an understanding point of view, but I tend to ask a number of other questions to such customers. Some of these common questions are: To what degree is your advanced analytics (natural language processing, sentiment analysis, predictive analytics and classification) paired with your Big Data efforts? Do you have dedicated resources exploring the possibilities of advanced analytics in Big Data for your business line? Do you plan to employ machine learning technology while doing Advanced Analytics? How is Social Media being monitored in your organization? What is your ability to scale in terms of storage and processing power? Do you have a system in place to sort incoming data in near real time by potential value, data quality, and use frequency? Do you use event-driven architecture to manage incoming data? Do you have specialized data services that can accommodate different formats, security, and the management requirements of multiple data sources? Is your organization currently using or considering in-memory analytics? To what degree are you able to correlate data from your Big Data infrastructure with that from your enterprise data warehouse? Have you extended the role of Data Stewards to include ownership of big data components? Do you prioritize data quality based on the source system (that is, Facebook/Twitter data has lower quality thresholds than radio frequency identification (RFID) for a tracking system)? Do your retention policies consider the different legal responsibilities for storing Big Data for a specific amount of time? Do Data Scientists work in close collaboration with Data Stewards to ensure data quality? How is access to attributes of Big Data being given out in the organization? Are roles related to Big Data (Advanced Analyst, Data Scientist) clearly defined? How involved is risk management in the Big Data governance process? Is there a set of documented policies regarding Big Data governance? Is there an enforcement mechanism or approach to ensure that policies are followed? Who is the key sponsor for your Big Data governance program? (The CIO is best) Do you have defined policies surrounding the use of social media data for potential employees and customers, as well as the use of customer Geo-location data? How accessible are complex analytic routines to your user base? 
    What is the level of involvement with outside vendors and third parties in regard to the planning and execution of Big Data projects? What programming technologies are utilized by your data warehouse/BI staff when working with Big Data? These are some of the important questions I ask each customer who is actively evaluating Big Data trends for their organization. These questions give you a sense of direction on where to start, what to use, how to secure, how to analyze and more. Sign off Any Big Data analysis is incomplete without a compelling story. The best way to understand this is to watch the Hans Rosling – Gapminder videos (2:17 to 6:06) about third world myths. Don’t get overwhelmed by the Big Data buzzword; where your data ultimately leads you is what is important. In this blog post, we did not particularly look at any Big Data technologies. This is a set of questions one needs to keep in mind when embarking on a Big Data journey. I did write some of the basics in my blog: Big Data – Big Hype yet Big Opportunity. Do let me know if these questions make sense. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >