Search Results

Search found 36788 results on 1472 pages for 'sql 2008'.


  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday's blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more, get familiar with how it works, and see which areas of NuoDB you should learn first. As we have already installed NuoDB, we will quickly start with two of its important areas: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works; in the next blog post we will learn how the Explorer section works.
    Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the following screen. On this screen you can see a big Start QuickStart button. Click on the button and it will bring you to the following screen, which shows very important information about Domain and Database Settings. It is a common habit not to read what is written on the screen and to keep clicking Continue; because we are familiar with most wizards, we can easily miss a very important message. Please note the following Domain Settings and Database Settings before clicking on Create Database. Domain Settings – User: quickstart, Password: quickstart. Database Settings – User: dba, Password: goalie, Database: test, Schema: HOCKEY.
    Once you click on the Create Database button it will immediately start creating the sample database. First it will start a Storage Manager, and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data. On successful creation of the sample database it will show the following screen. Now we can explore the NuoDB Admin or NuoDB Explorer.
    If you click on Admin, it will first show the following login screen. Enter "domain" for the username and "bird" for the password. Alternatively you can enter "quickstart" twice, for both username and password; that works as well. Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this administrative section you can do any of the following tasks: create a view of the entire domain; add and remove databases; start and stop NuoDB Transaction Engines and Storage Managers; and monitor transactions across all the NuoDB databases.
    On the right side of the Admin section we can see various pieces of information about a particular NuoDB domain. You can quickly view alerts, find out how many host machines are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the "1 host" link you will be able to see the various processes, CPU usage and other information. In the Processes section you can see that there are two different types of processes: the first (with the floppy drive icon) represents a running Storage Manager process and the second a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.
    I hope today's tutorial gives you enough confidence to try out NuoDB and check out the various administrative activities in the database. I am personally impressed with their dashboards for the various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
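    Once the QuickStart database is up, a quick sanity check from any SQL client is a nice way to confirm that everything is wired correctly. The snippet below is only a minimal sketch: it assumes the HOCKEY sample schema created by QuickStart and the dba/goalie credentials noted above, and the exact table names and LIMIT syntax may vary by NuoDB version.

        -- Connect to the "test" database as dba/goalie, then:
        SELECT COUNT(*) AS player_count
        FROM HOCKEY.PLAYERS;          -- confirms the sample schema was populated

        SELECT *
        FROM HOCKEY.SCORING
        LIMIT 10;                     -- spot-check a few rows (LIMIT support assumed)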

  • SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB

    - by Pinal Dave
    Let us take a look at the application you own at your business. If you pay attention to the underlying database for that application you will be amazed. Every successful business these days processes far more data than it used to. The number of transactions and the amount of data are growing at an exponential rate; every single day there is more data to process than before. Big data is no longer just a concept; it is turning into reality. If you look around there are many different big data solutions, and it can be quite a difficult task to figure out where to begin. Personally, I have been experimenting with a lot of different solutions which allow my database to scale immediately, without much hassle, while maintaining optimal database performance. There certainly are solutions out there, but for many of them I would even have to learn their specific language, and there is a lot of new exploration to do. Honestly, what I prefer is a product which works with the language I know (SQL) and follows all the RDBMS concepts I am familiar with (ACID etc.). NuoDB is one such solution. It is an operational NewSQL database built on a patented emergent architecture with full support for SQL and ACID guarantees. In this blog post, I will explore how one can download and install the NuoDB database.
    Step 1: Follow me and go to the NuoDB download page. Simply fill out the form, accept the online license agreement, and you will be taken directly to a page where you can select whichever platform you prefer to install NuoDB on. In my example below, I select the Windows 64-bit platform, as it is one of the most popular NuoDB platforms. (You can also run NuoDB on Amazon Web Services, but I prefer to install it on my local machine for the purposes of this blog.)
    Step 2: Once you have downloaded the NuoDB installer, double-click on it to install it on the Windows platform. Here is the enlarged icon of the installer.
    Step 3: Follow the installation wizard; it is pretty straightforward and easy to do. I have selected all the options to install, as the overall installation is very simple and it does not take up much space. I have installed it on my C drive but you can select your preferred drive. It is quite possible that if you do not have 64-bit Java, it will throw the following error. If you face this error, I suggest you download 64-bit Java from here. Make sure that you download 64-bit Java from the following link: http://java.com/en/download/manual.jsp If you already have 64-bit Java installed, you can continue with the installation as described in the following image. Otherwise, install Java and start again from Step 1. As in my case I already had 64-bit Java installed – and you won't believe me when I say that the entire installation of NuoDB only took me around 90 seconds. Click on Finish to exit the installation.
    Step 4: Once the installation is successful, NuoDB will automatically open the following two tabs – Console and DevCenter – in your preferred browser. On the Console tab you can explore various components of the NuoDB solution, e.g. QuickStart, Admin, Explorer, Storefront and Samples. We will see the various components and their usage in future blog posts.
    If you follow the steps in this post, which I followed to install NuoDB, you will agree that the installation of NuoDB is extremely smooth; it was indeed a pleasure to install a database product with such ease. If you have installed other database products in the past, you will surely agree with me.
So download NuoDB and install it today, and in tomorrow’s blog post I will take the installation to the next level. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

  • SQL SERVER – What are Actions in SSAS and How to Make a Reporting Action

    - by Pinal Dave
    Actions are used for customized browsing and drilling of data for the end user. An action is an event that a user can raise while accessing the cube data. Actions are used in cube browsers like Excel and are triggered when a user in a client tool clicks on a particular member, level, dimension, cell, or maybe the cube itself. For example, a user might be able to see a Reporting Services report, open a web page or drill through to detailed information related to the cube data. Analysis Services supports three types of actions: Report, Drill-through and Standard actions. In this blog post, I will explain the Reporting action. The objective of this action is to return a report with details of the products where the sales amount is greater than 10,000 in the cube browser analysis. You need to create a basic cube first with the facts and dimensions you want in the analysis. Following are the steps to create a reporting action.
    1.) Go to SQL Server Data Tools and open the Analysis Services project. Navigate to Actions and click on New Reporting Action.
    2.) Specify the name of the action and choose the target type as attribute members, since we have to create the action on the members of an attribute.
    3.) Specify the target object of your report action. The target object is the dimension or attribute on which you want the report to appear. In our case it is the product name.
    4.) Next you have to define the condition under which you want the report link to appear. This is an optional feature. In this example we specify a condition which checks whether the sales amount is greater than 10,000, so that the link appears only for those products where the defined condition is met.
    5.) Next you have to specify the server name on which the report is present, the report path and the report format in which you want the report to appear.
    6.) Additionally you can specify the parameters. As with the conditional expression, the parameters should be valid MDX expressions. The parameter names should be the same as the ones defined in the report.
    7.) Deploy your solution after you are done specifying parameters and go to the cube browser.
    8.) Click on the Analyze in Excel button; this will open your cube in Excel.
    9.) Build an analysis which shows product names and their sales amounts.
    10.) Right-click on a product where the sales amount is greater than 10,000 and you will see the reporting action link. Click on it and you will be taken to your Reporting Services report.
    11.) Clicking on the link takes you to the URL of the report. I created this report using the Report Project Wizard in SQL Server Data Tools.
    So, this is how we can launch reports from a cube browser. Similarly you can open web pages, run applications and perform a number of other tasks. Koenig Solutions offers SSAS training which covers all of Analysis Services, including Reporting actions, in great detail. In my next blog post I will talk about drill-through actions. Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSAS
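    The SSRS report launched by this action needs a dataset of its own. The post does not show that query, so purely as an illustration here is a minimal T-SQL sketch of the kind of detail query such a report might run against a relational source; the object names (dbo.FactSales, dbo.DimProduct) and the @ProductName parameter are hypothetical, and the report could just as well query the cube with MDX instead.

        -- Hypothetical dataset for the launched report: detail rows for one product
        SELECT p.ProductName,
               f.OrderDate,
               f.SalesAmount
        FROM   dbo.FactSales AS f
               INNER JOIN dbo.DimProduct AS p ON p.ProductKey = f.ProductKey
        WHERE  p.ProductName = @ProductName   -- value passed in from the cube action parameter
        ORDER  BY f.OrderDate;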

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have recently been exploring the Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB, where I wanted to look at the Elasticity and Scalability concepts. The concepts are very interesting and intriguing as well. I have discussed these concepts with my friend Joyti M and together we have come up with this interesting read. The goal of this article is to answer the following simple questions:
    - What is Elasticity?
    - What is Scalability?
    - How do ACID properties vary from NoSQL concepts?
    - What are the prevailing problems in current database system architectures?
    - Why is NuoDB an innovative and welcome change in the database paradigm?
    Elasticity: In its original sense the word is used in many different ways, and honestly elastic does a decent job of holding things together over the years as a person grows and contracts. Within the tech world, and specifically in relation to software systems (databases, application servers), it has come to mean the ability to stretch resources on demand without reaching the breaking point. What are resources in this context? Resources are the usual suspects – RAM/CPU/IO/Bandwidth in the form of a container (a process or a bunch of processes combined as modules). When it is about increasing resources, the simplest idea which comes to mind is the addition of another container, and another container means adding a brand new physical node. When it is about adding a new node, two questions come to mind: 1) Can we add another node to our software system? 2) If yes, does adding a new node cause downtime for the system? Let us assume we have added a new node and see what the new needs of the system are:
    - Balancing incoming requests across multiple nodes
    - Synchronization of shared state across multiple nodes
    - Identification of a "down" state and resolution action to bring it back to an "up" state
    Adding a new node has its advantages as well. Here are a few of the positive points:
    - Throughput can increase nearly linearly as nodes are added across the system
    - Response times of the application will improve as the interactions between layers are improved
    Now, let us put the above concepts in the perspective of a database. When we mention the term "running out of resources" or "the application is bound to resources", the resources can be CPU, memory or bandwidth. The regular approach to "gain scalability" in the database is to look around for bottlenecks and increase the bottlenecked resource. When memory is the bottleneck we look at the data buffers, locks, query plans or indexes. After a point even this is not enough, as there needs to be an efficient way of managing such a large workload on a "single machine" across memory-bound and CPU-bound work (the right kind of scheduling). We next move on to either read/write separation of the workload or functionality-based sharding, so that we still have control over the individual pieces. But this requires a lot of planning, changes in client systems in terms of knowing where to go to update or read, and intelligence in reporting applications to "aggregate the data". What we ideally need is an intelligent layer which allows us to do these things without getting into managing, monitoring and distributing the workload ourselves.
    Scalability: In the context of databases and applications, scalability means three main things:
    - Ability to handle normal loads without pressure, e.g. X users at Y utilization of resources (CPU, memory, bandwidth) on Z kind of hardware (a 4-processor, 32 GB machine with 15,000 RPM SATA drives and a 1 GHz network switch) with T throughput
    - Ability to scale up to an expected peak load, which is greater than the normal load, with acceptable response times
    - Ability to provide acceptable response times across the system, e.g. a response time of S milliseconds (or an agreed-upon unit of measure) 90% of the time
    The Issue – the Need for Scale: In normal cases one can plan load testing for normal, peak and stress scenarios to ensure that specific hardware meets the needs. With help from hardware and software partners and best practices, bottlenecks can be identified and the requisite resources added to the system. Unfortunately this vertical scaling is expensive and difficult to achieve, and most operations teams need the ability to scale horizontally. This helps in getting better throughput, as there are physical limits to adding resources (memory, CPU, bandwidth and storage) indefinitely. Today we have different options to achieve scalability:
    Read & Write Separation: The idea here is to direct actual writes to one store and configure slaves that receive the latest data with acceptable delays. The slaves can then be used for balancing out reads. We can also explore functional separation or sharding: we can separate data operations by a specific identifier (e.g. region, year, month) and consolidate the data for reporting purposes. The major disadvantage of functional separation shows up when the schema or the workload pattern changes. As the requirement grows, one still needs to deal with the need for scale in manual ways by providing an abstraction in the middle-tier code.
    Using NoSQL solutions: The idea is to flatten out the structures in general, to keep all values which are retrieved together in the same store, and to provide a flexible schema. The issue with these stores is that they mostly compromise on consistency (no ACID guarantees) and one has to use a non-SQL dialect to work with them. The other major issue is education around NoSQL solutions. Would one really want to make these compromises on the ability to connect and retrieve data in a simple SQL manner and learn other skill sets? Or, for that matter, give up ACID guarantees and start dealing with consistency issues?
    Hybrid Deployment – Mac, Linux, Cloud, and Windows: One of the challenges we see today between on-premise and cloud infrastructure is a difference in abilities. Take for example SQL Azure – it is wonderful in its concepts of throttling resources (as it is a shared deployment) and its ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you – but a compromise around the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today's world it is imperative that databases are available across operating systems – which are a commodity and used by developers of all hues.
    An Ideal Database Ability List:
    - A system which allows linear scaling of the system (an increase in throughput with reasonable response times) with the addition of resources
    - A system which does not compromise on ACID guarantees or require developers to learn new paradigms
    - A system which does not force a new way of interacting with the database by requiring a non-SQL dialect
    - A system which does not force-fit its mechanisms for providing availability across its various modules.
Well NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
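    To make the "functional separation by an identifier" idea above concrete, here is a minimal T-SQL sketch of one common manual pattern: splitting a large table by year and stitching the pieces back together for reporting. All object names are hypothetical, and the point of the article is precisely that an elastically scalable database should spare you this kind of manual work.

        -- Hypothetical yearly shards plus a reporting view that unions them
        CREATE TABLE dbo.Orders2011 (OrderID INT PRIMARY KEY, OrderDate DATE, Amount DECIMAL(18,2));
        CREATE TABLE dbo.Orders2012 (OrderID INT PRIMARY KEY, OrderDate DATE, Amount DECIMAL(18,2));
        GO
        CREATE VIEW dbo.AllOrders AS
            SELECT OrderID, OrderDate, Amount FROM dbo.Orders2011
            UNION ALL
            SELECT OrderID, OrderDate, Amount FROM dbo.Orders2012;
        GO
        -- Reporting queries have to aggregate across the shards through the view
        SELECT YEAR(OrderDate) AS OrderYear, SUM(Amount) AS TotalSales
        FROM dbo.AllOrders
        GROUP BY YEAR(OrderDate);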

  • SQLAuthority News – 1600 Blog Post Articles – A Milestone

    - by pinaldave
    Writing a milestone blog post is always a very interesting moment for me, and this one is even more exciting because it is my 1600th blog post. Every time I write a milestone post such as this, I feel the same excitement I had when I was writing my very first blog post. Today I want to share a few statistics about the blog.
    Statistics: I am frequently asked about my blog stats, so I have already published my blog stats as measured by WordPress.com. Currently, I have more than 22 million views on this blog from various sources. There are 6200+ feed subscribers in Google Reader alone; I don't think I have to count all the other subscribers. My LinkedIn has 1250+ connections, while my Twitter has 2150+. I feel well connected with the community, and I am very thankful to you, my readers. Today I also want to say thank you to the experts who have helped me improve. I have maintained a list of all the articles I have written. If you go to my first articles, you will notice that they were a little different from the articles I am writing today. The reason for this is simple: there are two kinds of people helping me write better – readers and experts.
    To my Readers: You read the articles and gave me feedback about what was right or wrong, what you liked or disliked. Quite often you were helpful in writing guest posts, and I also recognize that you were a bit brutal in criticizing some articles, making me re-write them. Because of you, I was able to write better blog posts.
    To Experts: You read the articles and helped me improve. I get inspiration from you and have learned a lot from you. Just like everybody, I am a guy who is trying to learn. There were times when I had a vague understanding of some subjects, and you did not hesitate to help me.
    Number of Posts: Many ask me if the number of posts is important to me. My answer is YES. Actually, it's not just about the number of posts; it is about my blog, my routine, my learning experience and my journey. During the last four years, I have decided that I will learn one thing a day. This blog has helped me accomplish this goal because here I have been able to keep my notes and bookmarks. Whatever I learn or experience, I blog about and share with the community. For me, the blog post number is more than just a number: it's a summary of my experiences and memories. Once again, thanks for reading and supporting my blog! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Milestone, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

  • SQL SERVER – What is a Technology Evangelist?

    - by pinaldave
    When you hear that someone is an "evangelist", the first thing that might pop into your mind is the Christian church. In fact, the term did come from Christianity, and basically means someone who spreads the news about their faith. In the technology world, the same definition holds. Technology evangelists are individuals who, professionally or in their spare time, spread the news about the latest new products. Sounds like a salesperson, right? No, the two are absolutely different. Salespeople also keep up to date with a large number of people, and like to convince others to buy their product – and some will go to any lengths to sell! An evangelist, on the other hand, is brutally honest about the product, even if sometimes it means not making a sale. An evangelist is out there to tell the TRUTH; a salesperson needs to make sales. An evangelist offers a solution independent of the technology used – a salesperson offers a particular technology.
    With this definition in mind, you can probably think of a few technology evangelists you already know. Maybe it's a relative or a neighbor, someone who loves keeping up with the latest trends and is always willing to tell you about them if you ask even the simplest question. In fact, they probably are evangelists and don't even know it. For a long time, the work of technology evangelism was in the hands of the community and community technology leaders. Luckily, various organizations have now understood the importance of the community and of helping the community reach its goals. This has led them to create the role of "Technology Evangelist".
    Let me talk about one of the most famous evangelists of SQL Server technology. A Technology Evangelist belongs only to the technology, above any country, race, location or anything else; they are dedicated to the technology. Vinod Kumar is such a man, who has given a lot to the community. For years he was a Technology Evangelist for Microsoft, and maintained a blog that was dedicated to spreading his enthusiasm for his favorite products. He is one of the most respected evangelists in the field, and has done a lot of work to define the job for other professionals. Vinod's career has since progressed to the Microsoft Technology Center (read his post), but he continues to be a strong presence in the evangelism community. I have a lot of respect for Vinod. He has done a lot for the community and for technology evangelism. Everybody dreams of serving the community the way he does, and he is a great role model for evangelists everywhere.
    On his blog, Vinod created one of the best descriptions of a Technology Evangelist. It defined the position and also made the distinction between evangelist and salesperson extremely clear. I will include the highlights of that list here, because no one can say it better than Vinod:
    - Bundle of energy – Passion is their middle name
    - Wonderful story tellers
    - Empathy, Trust, Loyalty, Openness, Accessibility and Warmth
    - Technology Enthusiast – Doers
    - Love people, people and more people – Community oriented
    - Unique Style and Leadership qualities !!!
    - Self-Confident, Self-Motivated but a student
    (To read the full list, see: Evangelism Beyond Borders with Evangelists.) His blog is a must-read for anyone interested in technology evangelism as a career or simply as a hobby. His advice about how to gain an audience and become a trusted advisor is the best in the business. I think there is an evangelist in everyone. I, too, consider myself a technology evangelist. Regular readers of this blog will recognize that I am dedicated to bringing information to the masses, and that I pride myself on being brutally honest and giving every product fair consideration. I think there is no better way to put it than this: "Once an Evangelist – Always an Evangelist!" Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, MVP, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Evangelist

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years Niraj enjoys working on – IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received Microsoft MVP award for ASP.NET, Connected Systems and most recently on Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia’s largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows it’s the database that usually becomes a bottleneck. It’s hard to scale a relational DB and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred as transactional database and reporting database. Though there are tools / techniques which can allow you to create snapshot of your transactional database for reporting purpose, sometimes they don’t quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, effective schema (for an Information worker to self-service herself), historical data, better performance (flat data, no joins) etc. This is where a need for data warehouse or an OLAP system arises. A Key point to remember is a data warehouse is mostly a relational database. It’s built on top of same concepts like Tables, Rows, Columns, Primary keys, Foreign Keys, etc. Before we talk about how data warehouses are typically structured let’s understand key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) OLTP system should be capable of tracking its changes as all these changes should go back to data warehouse for historical recording. For e.g. if an OLTP transaction moves a customer from silver to gold category, OLTP system needs to ensure that this change is tracked and send to data warehouse for reporting purpose. A report in context could be how many customers divided by geographies moved from sliver to gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out of box features provided by some databases e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements. b) After we make the OLTP system capable of tracking its changes we need to provision a batch process that can run periodically and takes these changes from OLTP system and dump them into data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, we have jobs that run periodically to move these changes to warehouse. The question though remains is how warehouse will record these changes? This structural change in data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). 
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in relational database and data warehouses prefer having their own primary key, often known as surrogate key. As I mentioned a data warehouse is just a relational database but industry often attributes a specific schema style to data warehouses. These styles are Star Schema or Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to normalized one), which is easy to understand / use, easy to query and easy to slice / dice. Star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD as sales that number would go in a sales fact table and could have foreign keys attached to it that refers to the sales agent responsible for sale and to time table which contains the dates between which that sale was made. These agent and time tables are called dimensions which provide context to the numbers stored in fact tables. This schema structure of fact being at center surrounded by dimensions is called Star schema. A similar structure with difference of dimension tables being normalized is called a Snowflake schema. This relational structure of facts and dimensions serves as an input for another analysis structure called Cube. Though physically Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it’s a multidimensional structure where dimensions define the sides of cube and facts define the content. Facts are often called as Measures inside a cube. Dimensions often tend to form a hierarchy. E.g. Product may be broken into categories and categories in turn to individual items. Category and Items are often referred as Levels and their constituents as Members with their overall structure called as Hierarchy. Measures are rolled up as per dimensional hierarchy. These rolled up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with but don’t worry it will sink in as you start working with Cubes and others. Let’s see few other terms that we would run into while talking about data warehouses. ODS or an Operational Data Store is a frequently misused term. There would be few users in your organization that want to report on most current data and can’t afford to miss a single transaction for their report. Then there is another set of users that typically don’t care how current the data is. Mostly senior level executives who are interesting in trending, mining, forecasting, strategizing, etc. don’t care for that one specific transaction. This is where an ODS can come in handy. ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS would be short lived, i.e. for few months and ODS would sync with OLTP system every few minutes. Data warehouse can periodically sync with ODS either daily or weekly depending on business drivers. Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouse. Data warehouses that try to span over an enterprise are normally too big to scope, build, manage, track, etc. 
Hence they are often scaled down to something called Data mart that supports a specific segment of business like sales, marketing, or support. Data marts too, are often designed using star schema model discussed earlier. Industry is divided when it comes to use of data marts. Some experts prefer having data marts along with a central data warehouse. Data warehouse here acts as information staging and distribution hub with spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
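    For readers who prefer to see the star schema idea in code, here is a minimal T-SQL sketch of the sales example from the post: one fact table with foreign keys to an agent dimension and a date dimension, plus the kind of rolled-up aggregate a cube would precompute. The table and column names are illustrative only, not taken from any particular product sample.

        -- Dimensions provide context; the fact table holds the numbers you slice and dice
        CREATE TABLE dbo.DimAgent (AgentKey INT PRIMARY KEY, AgentName VARCHAR(100) NOT NULL);
        CREATE TABLE dbo.DimDate  (DateKey INT PRIMARY KEY, CalendarDate DATE NOT NULL, CalendarYear INT NOT NULL);
        CREATE TABLE dbo.FactSales (
            SalesKey    INT IDENTITY(1,1) PRIMARY KEY,
            AgentKey    INT NOT NULL REFERENCES dbo.DimAgent (AgentKey),
            DateKey     INT NOT NULL REFERENCES dbo.DimDate (DateKey),
            SalesAmount DECIMAL(18,2) NOT NULL        -- the measure, e.g. the 10,000 USD sale
        );
        GO
        -- A typical roll-up: sales by agent by year (what a cube would expose as an aggregate)
        SELECT a.AgentName, d.CalendarYear, SUM(f.SalesAmount) AS TotalSales
        FROM dbo.FactSales AS f
        INNER JOIN dbo.DimAgent AS a ON a.AgentKey = f.AgentKey
        INNER JOIN dbo.DimDate  AS d ON d.DateKey  = f.DateKey
        GROUP BY a.AgentName, d.CalendarYear;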

  • SQL SERVER – Top 10 “Ease of Use” Features of expressor Studio

    - by pinaldave
    expressor Studio is a new data integration platform that is being marketed as the easiest-to-use tool of its kind. But "easy to use" can be a relative term – an expert can find a very complex system easy, while a beginner might be stumped. A recent article online discussed exactly what makes expressor Studio so easy to use, and here is my view on the subject.
    Simple Installation: There is one pop-up for one .exe file, and nothing more. You can't get much simpler than this. It also follows the familiar Windows design, so there should be no surprises.
    No 3rd party software dependency: Have you ever tried to download software, only to be slowed down by the need to download a compatible system to run the program, and another to read the user manual, and so on? expressor Studio was designed specifically to avoid this problem.
    Microsoft Office-like Ribbon Bar and Menus: As mentioned before, everything follows the familiar Windows design, from the pop-up windows to the toolbars and menus. There should be no learning curve for using this program, or even for simply navigating around a new system.
    General Development Design Interface: The software has been designed to be simple and straightforward. Projects can be arranged in a simple "tree" design that is fully collapsible and can easily be added to or "trimmed" with the click of a button. It is meant to be logical and easy to follow.
    Integrated Contextual Help: This is a fancy way of saying that you can practically yell "help!" if you do get stuck on something. Solving a problem is as simple as highlighting and hitting F1 for contextual help.
    Visual Indicators and Messages: Wouldn't it be nice to know exactly where something has gone wrong before trying to complete a project? expressor Studio has a built-in system to catch mistakes, highlight them in a bright color, flash a warning message, and even disable functions before you can continue – and possibly lose hours of work.
    Property Inputs and Selectors: Every operator has a list of requirements that need to be filled in. But don't worry; you won't have to make things up to fill in the boxes. Each one has a drop-down menu with options to choose from – but not so many as to be confusing.
    Connection Wizards: Configuring connections can be the hardest part of a project. But not with the expressor Studio connection wizard. A familiar, Windows-style menu walks you through connections so quickly you'll forget what trouble it used to be.
    Templates: With large, complex projects, a majority of your time is often spent simply setting up the files and inputting data. expressor Studio allows you to create one file and then save it as a template, saving you hours of boring data input.
    Extension Manager: Let's say that you need a little more functionality or some new features in your program. A lot of software requires you to download complex plug-ins that need to be decompressed and installed. expressor Studio, however, has extended its system with an Extension Manager, which allows quick and easy installation of the functionality you need, without the need to download and decompress. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

  • SQL SERVER – Sharing your ETL Resources Across Applications with Ease

    - by pinaldave
    Frequently an organization will find that the same resources are used in multiple ETL applications – for example, the same database, general-purpose processing logic, or file system locations. Creating an easy way to reuse these resources across multiple applications would increase efficiency and reduce errors. Moreover, not every ETL developer has the same skill set, and it is likely that one developer will be more adept at writing code while another is more comfortable configuring database connections. Real productivity gains come when these developers are able to work independently while still making their work available to others assigned to the same project. These are the benefits of a centralized version control system.
    Of course, most version control systems could be used to store and serve files, but the real need is to store and serve entire ETL applications so that each developer's ongoing work can immediately benefit from another developer's completed work. In other words, the version control system needs to be tightly integrated with the tools used to develop the ETL application. The following screenshot shows such a tool: a desktop ETL tool that tightly integrates with a central version control system.
    Developers can check out or commit entire projects or just a single artifact. Each artifact may be managed independently, so if you need to go back to an earlier version of one artifact, changes you may have made to other artifacts are not lost. By being tightly integrated into the graphical environment used to create and edit the project artifacts, it is extremely easy and straightforward to move your files to and from the version control system, and there is no dependency on another vendor's version control system. The built-in version control system is optimized for managing the artifacts of ETL applications. It is equally important that the version control system supports all of the actions one typically performs, such as rollbacks, locking and unlocking of files, and the ability to resolve conflicts. Note that this particular ETL tool also has the capability to switch back and forth between multiple version control systems. It also needs to be easy to determine the status of an artifact – not just that it has been committed or modified, but when and by whom. Generally you must query the version control system for this information, but having it displayed within the development environment is more desirable.
    Whose ETL tool works in this fashion? Last month I mentioned the data integration solution offered by expressor software. The version control features I described in this post are all available in their just-released expressor 3.1 Standard Edition, through the integration of their expressor Studio development environment with a centralized metadata repository and version control system. You can download their Studio application, which is free, or evaluate the full Standard Edition on your own hardware. It may be worth your time. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQL SERVER – Introduction to Big Data – Guest Post

    - by pinaldave
    Big Data – such a big word – everybody talks about it nowadays. It is the word in the database world. In one such conversation I asked my friend Jasjeet Singh the question – what is Big Data? He instantly came up with a very effective write-up. Jasjeet works as a Technical Manager with Koenig Solutions. He leads the SQL domain and holds rich IT industry experience. Talking about Koenig, it is a 19-year-old IT training company that offers several certification choices. Some of its courses include SharePoint training, Project Management certifications, Microsoft trainings, Business Intelligence programs, Web Design and Development courses, etc.
    Big Data, as the name suggests, is about data that is BIG in nature. The data is BIG in terms of size, and it is difficult to manage such enormous data with the relational database management systems that are quite popular these days. Big Data is not just about being large in size; it is also about the variety of the data, which differs in form or type. Some examples of Big Data are given below:
    - Scientific data related to weather and atmosphere, genetics, etc.
    - Data collected by various medical procedures, such as radiology, CT scans, MRI, etc.
    - Data related to the Global Positioning System
    - Pictures and videos
    - Radio frequency data
    - Data that may vary very rapidly, like stock exchange information
    Apart from the difficulties in managing and storing such data, it is difficult to query, analyze and visualize it. The characteristics of Big Data can be defined by four Vs:
    Volume: It simply means a large volume of data that may span petabytes, exabytes and so on. However, it also varies from organization to organization what volume of data they consider to be Big Data.
    Variety: As discussed above, Big Data is not limited to relational or structured data. It can also include unstructured data like pictures, videos, text, audio, etc.
    Velocity: Velocity means the speed at which data changes. The higher the velocity, the more efficient the system needs to be to capture and analyze the data. Missing any important point may lead to wrong analysis or may even result in loss.
    Veracity: It has recently been added as the fourth V, and generally means truthfulness or adherence to the truth. In terms of Big Data, it is more of a challenge than a characteristic. It is difficult to ascertain the truth from such an enormous amount of fast-changing data, and there is always a chance of imprecise and uncertain data. It is a challenging task to clean such data before it is analyzed.
    Big Data can be considered the next big thing in the IT sector in terms of innovation and development. If appropriate technologies are developed to analyze and use the information, it can be the driving force for almost all industrial segments, including Retail, Manufacturing, Service, Finance, Healthcare, etc. This will help them automate business decisions, increase productivity, and innovate and develop new products.
    Thanks, Jasjeet Singh, for an excellent write-up. Jasjeet Singh works as a Technical Manager with Koenig Solutions. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Big Data

  • SQL Contests – Solution – Identify the Database Celebrity

    - by Pinal Dave
    Last week we ran the contest Identify the Database Celebrity, and we received a fantastic response. Thank you to the kind folks at NuoDB, who offered two USD 100 Amazon Gift Cards to the winners of the contest. We also had an additional contest in which users had to download and install NuoDB and identify the sample database. You can read about the contest over here. Here are the answers to the questions we asked earlier in the contest.
    Part 1: Identify the Database Celebrity
    Personality 1 – Edgar Frank "Ted" Codd (August 19, 1923 – April 18, 2003) was an English computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He made other valuable contributions to computer science, but the relational model, a very influential general theory of data management, remains his most mentioned achievement. (Wiki)
    Personality 2 – James Nicholas "Jim" Gray (born January 12, 1944; lost at sea January 28, 2007; declared deceased May 16, 2012) was an American computer scientist who received the Turing Award in 1998 "for seminal contributions to database and transaction processing research and technical leadership in system implementation." (Wiki)
    Personality 3 – Jim Starkey (born January 6, 1949 in Illinois) is a database architect responsible for developing InterBase, the first relational database to support multi-versioning, the blob column type, event alerts, arrays and triggers. Starkey is the founder of several companies, including the web application development and database tool company Netfrastructure, and NuoDB. (Wiki)
    Part 2: Identify the NuoDB Sample Database Names
    In this part of the contest one had to download NuoDB and install the sample database Hockey. Hockey is a sample database and contains a few tables. Users had to install the sample database and report the names they found. Here is the valid answer: HOCKEY, PLAYERS, SCORING, TEAM.
    Winners: Within the next two days the winners will be asked to confirm their email address, and the NuoDB team will send them the Amazon Gift Cards directly. Once again, it was indeed fun to run this contest. I have received great feedback about it, and lots of people want me to run similar contests in the future. I promise to run similarly interesting contests in the near future. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
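    If you installed NuoDB for the second part of the contest, you could also pull the answer straight from the catalog. This is only a rough sketch – I am assuming here that NuoDB exposes a SYSTEM.TABLES view with SCHEMA and TABLENAME columns, which may differ between versions – but a query along these lines lists the tables of the Hockey sample:

        -- List the tables of the QuickStart sample (catalog names assumed; check your NuoDB version)
        SELECT tablename
        FROM   system.tables
        WHERE  schema = 'HOCKEY';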

  • SQL – Business Intelligence: Derive Data or Information?

    - by Pinal Dave
    We all know the value of information in our lives. Whether it's a personal decision or a business-initiated one, people need it. But the question is: who is to make the distinction between data and information? We all come across a whole lot of data daily that may or may not be significant. We filter what's required and forget about the rest. Information is filtered and distilled data, and filtering and distillation can also alter its actual meaning and natural state. Therefore, in this blog we discover some ways to ensure that we're using business intelligence derived from the right information for making critical management decisions.
    Four key questions managers must ask themselves before making a decision:
    1. Am I working with data or information?
    2. What is its context?
    3. How recent is it?
    4. How was it derived, or what is the source?
    The first question is probably the most important. You must know what you're dealing with here. If you see the use of adjectives and conclusions drawn, it's information, not raw data. Your very next concern must be whether it is disguised to present a particular viewpoint or perspective. It makes a lot of difference if you take a decision based on someone's propaganda to distort real facts. Therefore, the context and the intentions behind the distillation process must be clear to you.
    The next consideration is whether the data is recent enough to hold any value. Since it has a very short shelf life, you must ensure that its context and value have not been lost over time. The last and most important consideration is how it was derived in the first place. The observer effect is what calls the shots here. The source can change the context to a great extent if the collection methodology and purpose are not clear.
    Gathering intelligence for decision making requires users to be keen observers and not take the information provided at face value alone. These probing questions will allow you to make sure that you're working with clean and accurate data devoid of any influence or manipulation. Only then can you be sure of deriving true business intelligence for your organization. BI technology is also a great way to ensure the accuracy of reports. The SQL BI Platform provides advanced tools and techniques for all your BI needs and concerns. Koenig Solutions offers this course along with a host of other Business Intelligence and IT courses on all the latest technologies available in the market today. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • SQL SERVER – Resetting Identity Values for All Tables

    - by pinaldave
    Sometimes an email requesting help generates more questions than the motivation to answer them. Let us go over one such example. I have converted the complete email conversation to chat format for easy consumption. I almost got a headache after around 20 email exchanges. I am sure that if you read it, you will feel my pain.
    DBA: "I deleted all of the data from my database and now it contains the table structure only. However, when I tried to insert new data in my tables I noticed that my identity values start from the same number where they actually were before I deleted the data."
    Pinal: "How did you delete the data?"
    DBA: "Running DELETE in a loop."
    Pinal: "What was the need for that?"
    DBA: "It was my development server and I needed to repopulate the database."
    Pinal: "Oh, so why did you not use TRUNCATE, which would have reset the identity of your table to the original seed value when the data got deleted? This works only if you want your table to reset to the original value; if you want to set any other value, this may not work."
    DBA: (silence for 2 days)
    DBA: "I did not realize it. Meanwhile I generated every table's schema script, dropped the tables and re-created them."
    Pinal: "Oh no, that is an extremely long-winded and incorrect way to do it. Very bad solution."
    DBA: "I understand. Should I just take a backup of the database before I insert the data, and when I need to, restore the database from that original backup? This way I will have identities beginning with 1."
    Pinal: "This is going totally downhill. It is wrong on multiple levels. Did you even read my earlier email about TRUNCATE?"
    DBA: "Yeah. I found it in the spam folder."
    Pinal: (I decided to stay silent)
    DBA: (after 2 days) "Can you provide me a script to reseed the identity for all of my tables to the value 1, without asking further questions?"
    Pinal:
        USE DATABASE;   -- replace DATABASE with the name of your database
        EXEC sp_MSForEachTable '
            IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasIdentity'') = 1
                DBCC CHECKIDENT (''?'', RESEED, 1)'
        GO
    Our conversation ended here. If you jumped directly to this statement, I encourage you to read the conversation through once. There is a difference between reseeding the identity value to 1 and reseeding it to the original seed value – I will write another blog post on this subject in the future. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
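    To see the behavior the conversation hints at, here is a small self-contained demo (a throwaway sketch, not part of the original post) contrasting DELETE, which leaves the identity counter where it was, with TRUNCATE, which resets it to the original seed:

        -- Throwaway demo table
        CREATE TABLE dbo.DemoIdentity (ID INT IDENTITY(1,1), Val CHAR(1));
        INSERT INTO dbo.DemoIdentity (Val) VALUES ('a'), ('b'), ('c');   -- IDs 1, 2, 3

        DELETE FROM dbo.DemoIdentity;                                    -- counter keeps running
        INSERT INTO dbo.DemoIdentity (Val) VALUES ('d');
        SELECT ID FROM dbo.DemoIdentity;                                 -- returns 4

        TRUNCATE TABLE dbo.DemoIdentity;                                 -- counter reset to the seed
        INSERT INTO dbo.DemoIdentity (Val) VALUES ('e');
        SELECT ID FROM dbo.DemoIdentity;                                 -- returns 1

        DROP TABLE dbo.DemoIdentity;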

  • SQL SERVER – What is SSAS Tabular Data model and Why to use it – Part 2

    - by Pinal Dave
    In my last article, I talked about the basics of the tabular data model and why to use it, and then demonstrated step-by-step creation of a basic tabular model project. In this part I am going to throw some light on how to create measures and analyze them in Excel.
    If you look at the tabular project closely, you will notice that we have not defined any measure yet. So, in the first step we will define a measure. Open the solution and select the column you want to define as a measure, then click on the summation icon on the toolbar. You will see the aggregated result at the bottom of that column. You also have other choices, like average, min, max, count and distinct count.
    After creating the required measures, we need to analyze our data in Excel. To do this, click on the Excel icon in the upper left corner of the toolbar. This will open your analysis in Excel. Notice the pivot table field list here; I have highlighted the measures that we created in the earlier step. Now we can use these measures in our analysis.
    Next, we have to put the required fields in their respective places as column labels, row labels, values and report filter for the analysis. See the snapshot below for details; it shows region-wise sales on a yearly basis. You can even apply filters to the above analysis by placing a slicer field in the report filter. In our example, we will take the English product name as a filter. You can use the filter as depicted in the snapshot below. Optionally, you can also use the slicer to filter data more interactively. To improve our analysis further, we can insert pivot charts.
    That's all for this time; in my next post I am going to show in detail how to create hierarchies, perspectives, KPIs and many more features. Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSAS

  • SQL SERVER – A Brief Note on SET TEXTSIZE

    - by pinaldave
    Here is a small conversation I received. Though it is an old topic, I found it quite thought-provoking.
    Question: Is there any difference between the LEFT function and SET TEXTSIZE?
    I really like this small but interesting question. The question does not specify whether the difference is about usage or performance. Anyway, we will quickly take a look at how TEXTSIZE works. You can run the following script to see how LEFT and SET TEXTSIZE work.
        USE TempDB
        GO
        -- Create test table
        CREATE TABLE MyTable (ID INT, MyText VARCHAR(MAX))
        GO
        INSERT MyTable (ID, MyText)
        VALUES (1, REPLICATE('1234567890', 100))
        GO
        -- Select data
        SELECT ID, MyText FROM MyTable
        GO
        -- Using LEFT
        SELECT ID, LEFT(MyText, 10) MyText FROM MyTable
        GO
        -- Set TEXTSIZE
        SET TEXTSIZE 10;
        SELECT ID, MyText FROM MyTable;
        SET TEXTSIZE 2147483647
        GO
        -- Clean up
        DROP TABLE MyTable
        GO
    Now let us see the results we receive from both of the examples. If you are going to ask which one you should use – I really do not know; but I can tell you where I would use each of them. LEFT seems easier to use, but if you like to do the extra work related to SET TEXTSIZE, go for it. Here is how I would use SET TEXTSIZE: if I am selecting data in SSMS for testing or any other non-production work from a large table which has lots of varchar columns, I will consider using this statement to reduce the amount of data retrieved in the result set. In simple words, I will use it for testing purposes. On a production server, there should be a specific reason to use it.
    Here is my candid opinion – I do not think the two are directly comparable, even though both give the exact same result here. LEFT applies only to the column of the single SELECT statement where it is used, whereas SET TEXTSIZE applies to all the columns in that SELECT and in the follow-up SELECT statements, until SET TEXTSIZE is modified again in the session. Not really comparable!
    I hope this sample gives you an idea of how to use SET TEXTSIZE in your daily work. I would like to know your opinion about how and when you use this feature. Please leave a comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQLAuthority News – Advantages of Distance Learning

    - by Pinal Dave
    Distance education is extremely popular – almost overnight, it seems. Almost everyone has taken an online course, or knows someone who has, or is considering joining an online school. There are many advantages and disadvantages to attending an online school – but the same can be said of attending a physical school! Let’s take a look at the top reasons to use distance education. 1) Flexibility. Physical universities are usually willing to make some concessions to students – like night classes, study hours, and online networks. However, nothing is going to beat the flexibility of distance education. You can attend classes and take notes anytime, anywhere, wearing anything you’d like! 2) Affordability. We don’t need to get into hard numbers to understand how expensive a university can be. Students are taking on more and more debt just to get an education. Many of these fees pay for room, board, and facilities. Distance education cuts out all these costs, and makes attending school much more affordable for the average student. 3) Try before you buy. Did you know that the average college student changes his or her major 10 times before they graduate? You can imagine that this kind of indecision plays a huge part in WHEN you graduate – not being able to make up your mind can cost you big bucks if you have to stay in school for extra years! Distance education allows you to take different classes from a wide range of disciplines. Do you want to study forensic science or English literature? Now you don’t have to pay for classes you can’t afford just to find out. 4) Pace yourself. Some students struggle in a traditional classroom setting – classes can be taught too fast, too slow, or there are too many distractions. Distance education allows mature students to set the pace themselves. They can rewatch lectures they didn’t catch the first time, or go through classes quickly if they are already familiar with the material – cutting out the chance of burning out or getting bored. 5) Lifelong learning. Maybe you already have a degree, but would like to learn more about your field, or a related field, or maybe even about something completely unrelated – just because you are curious! Distance education allows you to learn whatever you want, whenever you want (and yes, wearing anything you’d like!). 6) Attend whatever college you want. Because of the popularity of distance education, physical campuses are getting in on the game by offering online courses – often just uploaded versions of classes already taught at their campus. Ever wanted to attend Harvard, but knew you couldn’t get in? Take a class online! Of course, you probably should not attempt to lie and say you have a Harvard degree, but Ivy League colleges are prestigious because they are the best in their field – take advantage of the best by taking an online course! I am a big believer in continuing education, whether it is online courses, returning to school, or even taking informal classes online. Distance education can be a great way to accomplish these goals and become a lifelong learner. My friends provide training through virtual classrooms for students who want to avoid travelling. A distance learning course allows IT aspirants to connect with trainers using the internet. I encourage everyone to check it out! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    - by pinaldave
    Here is the email which made me write this blog post. When I write a blog post, I keep in mind that if a developer is not familiar with the concept, he will attempt it on a development server. If for any reason you attempt it on any server other than your personal server, you should make sure you have complete confidence in your own expertise and understand the risk behind it. Well, let us read the email which I received. I have modified it a bit to remove information related to the organization and the individual. “I just read your blog post on Beginning DQS. I went ahead and followed every single screenshot and it worked fine. I was able to execute the DQS project successfully. However, the same blog post got me in trouble – a serious trouble. After the first successful deployment I went ahead and created a few of my own knowledge bases and projects. I played around a bit and then decided to get back to real work. Now, we had deployed DQS on the production server only, so my experiments were on the production server. When I got back to my work, I forgot to close all the windows. My manager found the window open and saw my test projects. He asked me to delete my experiments immediately and said words which I cannot write to you. Here is the problem. I am not able to delete the project which I created earlier. I am able to open it and play with it, but the delete option is disabled and grayed out (see attached image). Now I believe there is nothing wrong with this project as it was just a test project. Would you please write to my manager that it is not harmful to leave that project there as it is? It is also not using any resources. I think he will believe you.” As I said, this kind of email makes me uncomfortable. I do not want someone to execute anything on a production server. I often write notes and disclaimers on my posts when something is dangerous to execute on a production server. However, if someone is not an expert with SQL Server and attempts something new on a production server, I think the major issue lies with the person (admin) who gave the new developer permission on the production server. This has to be carefully avoided. Here was my response to the individual. “I cannot write to your manager anything as he has not asked me anything. Honestly, I believe he is correct in his behavior, as you should not have executed anything on the production server without prior approval and testing on the development server. Any R&D must be done on a local box or a development box. I suggest you request your manager to restrict access for users who do not need it. If he is a good manager, he might have already implemented this after the recent event. I also saw your screenshot. Here is the issue: while you were playing with the project, you might have closed it halfway, without completing it. That is why it is locked. You can open it and continue from the same place where you left the project. If you do not need the project any more, right click on it and click on Unlock. This will enable the DELETE option and now you can delete the project. Next time, be safe out there. It may be dangerous to have admin access to a production server when it is not needed.” I have not yet heard from him, but I believe he will take my words positively. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real world story of a friend who thought they had an intrusion or virus issue, whereas the issue was really in the code. I strongly suggest you read my earlier blog post Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 before continuing this blog post, as this is the second part of that post. Let me reproduce the simple scenario in T-SQL. Building Sample Data USE [TestDB] GO -- Creating Table Products CREATE TABLE [dbo].[Products]( [ProductID] [int] NOT NULL, [ProductDesc] [varchar](50) NOT NULL, CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ( [ProductID] ASC )) ON [PRIMARY] GO -- Creating Table ProductDetails CREATE TABLE [dbo].[ProductDetails]( [ProductDetailID] [int] NOT NULL, [ProductID] [int] NOT NULL, [Total] [int] NOT NULL, CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ( [ProductDetailID] ASC )) ON [PRIMARY] GO ALTER TABLE [dbo].[ProductDetails] WITH CHECK ADD CONSTRAINT [FK_ProductDetails_Products] FOREIGN KEY([ProductID]) REFERENCES [dbo].[Products] ([ProductID]) ON UPDATE CASCADE ON DELETE CASCADE GO -- Insert Data into Table USE TestDB GO INSERT INTO Products (ProductID, ProductDesc) SELECT 1, 'Bike' UNION ALL SELECT 2, 'Car' UNION ALL SELECT 3, 'Books' GO INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total]) SELECT 1, 1, 200 UNION ALL SELECT 2, 1, 100 UNION ALL SELECT 3, 1, 111 UNION ALL SELECT 4, 2, 200 UNION ALL SELECT 5, 3, 100 UNION ALL SELECT 6, 3, 100 UNION ALL SELECT 7, 3, 200 GO Select Data from Tables -- Selecting Data SELECT * FROM Products SELECT * FROM ProductDetails GO Delete Data from Products Table -- Deleting Data DELETE FROM Products WHERE ProductID = 1 GO Select Data from Tables Again -- Selecting Data SELECT * FROM Products SELECT * FROM ProductDetails GO Clean up Data -- Clean up DROP TABLE ProductDetails DROP TABLE Products GO My friend was confused because no delete was being fired on the ProductDetails table, yet a delete was happening there. The reason is that there is a foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified, whenever data from table A is deleted and it is referenced in another table using a foreign key, the referencing rows are deleted as well. Workaround 1: Design Changes – 3 Tables. Change the design to have more than two tables. Create one Product Master table with all the products. It should historically store the complete product list, and no product should ever be removed from it. Add another table called CurrentProduct which contains only the products that should be visible in the product catalogue. A third table should be called ProductHistory. There should be no use of the CASCADE keyword among them. Workaround 2: Design Changes – Column IsVisible. You can keep the same two tables: 1) Products and 2) ProductDetails. Add a column with the BIT datatype to the Products table and name it IsVisible (see the short sketch after this excerpt). Now change your application code to display the catalogue based on this column. There should be no need to delete anything. Workaround 3: Bad Advice (bad advice begins here). The reason I call this bad advice is that these suggestions are going to be bad for sure. You should make the necessary design changes and not use poor workarounds which can further damage the system and database integrity. Here are the examples: 1) Do not delete the data – well, this is not a real solution but it can buy time to implement design changes. 
    2) Do not use ON DELETE CASCADE – in this case, you will have entries in ProductDetails which have no corresponding ProductID, and later on there will be lots of confusion. 3) Duplicate the data – you can move all the data of the Products table into the ProductDetails table and repeat it on each row. Now remove the CASCADE code. This will let you delete the Products table rows without any issue. There are so many things wrong with this suggestion that I will not even start here. (Bad advice ends here.) Well, did I miss anything? Please help me with your suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
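    Here is a minimal, hedged sketch of Workaround 2 (the IsVisible soft-delete column), reusing the Products and ProductDetails tables from the sample above; the default constraint name is my own illustration, not from the original post.

    -- Add a visibility flag instead of physically deleting rows
    ALTER TABLE dbo.Products
        ADD IsVisible BIT NOT NULL CONSTRAINT DF_Products_IsVisible DEFAULT (1);
    GO
    -- "Delete" a product by hiding it; no row is removed, so no cascade fires
    UPDATE dbo.Products SET IsVisible = 0 WHERE ProductID = 1;
    GO
    -- The catalogue query shows only visible products
    SELECT p.ProductID, p.ProductDesc, pd.ProductDetailID, pd.Total
    FROM dbo.Products p
    INNER JOIN dbo.ProductDetails pd ON pd.ProductID = p.ProductID
    WHERE p.IsVisible = 1;
    GO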

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is absolutely correct, but the implementation is not as easy as it seems. Similarly, most people think the lookup component is just a component which looks up additional information, so they do not pay much attention to it and eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about how to configure the lookup component's cache mode well. In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion.  The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values.  Properly configured, it is reliable and reasonably fast. Among the many settings available on the lookup component, one of the most critical is the cache mode.  This selection will determine whether and how the distinct lookup values are cached during package execution.  It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages, and in some cases, incorrect results. Full Cache The full cache mode setting is the default cache mode selection in the SSIS lookup transformation.  Like the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location.  As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS. The most commonly used cache mode is the full cache setting, and for good reason.  The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well.  Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution. There are a few potential gotchas to be aware of when using full cache mode.  First, you can see some performance issues – memory pressure in particular – when using full cache mode against large sets of reference data.  If the table you use for the lookup is very large (either deep or wide, or perhaps both), there’s going to be a performance cost associated with retrieving and caching all of that data.  Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation.  This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation).  
There’s a relatively easy workaround in which you can use the UPPER() or LOWER() function in the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation.  Again, neither of these present a reason to avoid full cache mode, but should be used to determine whether full cache mode should be used in a given situation. Full cache mode is ideally useful when one or all of the following conditions exist: The size of the reference data set is small to moderately sized The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data Partial Cache When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow.  Initially, each distinct value will be retrieved individually from the specified source, and then cached.  To be clear, this is a row-by-row lookup for each distinct key value(s). This is a less frequently used cache setting because it addresses a narrower set of scenarios.  Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set.  If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)).  Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode. Using partial cache mode is ideally suited for the conditions below: The size of the data in the pipeline (more specifically, the number of distinct key column) is relatively small The size of the lookup data is too large to effectively store in cache The lookup source is well indexed to allow for fast retrieval of row-by-row values No Cache As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS.  As a result, every single row in the pipeline data set will require a query against the lookup source.  Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused.  In the real world, I don’t see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it’s critical to know your data before choosing this option.  Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline. I would recommend considering the no cache mode only when all of the below conditions are true: The reference data set is too large to reasonably be loaded into SSIS memory The pipeline data set is small and is not expected to grow There are expected to be very few or no duplicates of the key values(s) in the pipeline data set (i.e., there would be no benefit from caching these values) Conclusion The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow.  Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.  
Know how this selection impacts your ETL loads, and you’ll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS
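    As a hedged illustration of the UPPER()/LOWER() workaround mentioned above (the table and column names here are hypothetical, not from the original article): normalize the case in the lookup connection's SQL query, and apply the matching UPPER() to the pipeline column before the lookup runs, for example via a Derived Column transformation with the SSIS expression UPPER([CustomerCode]).

    -- Hypothetical lookup source query used on the lookup component's Connection page.
    -- Case is normalized on the reference side so the in-memory (case-sensitive)
    -- comparison matches the upper-cased pipeline column produced upstream.
    SELECT CustomerKey,
           UPPER(CustomerCode) AS CustomerCode
    FROM dbo.DimCustomer;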

    Read the article

  • Personal Technology – Laptop Screen Blank – No Post – No BIOS – No Boot

    - by Pinal Dave
    If your laptop screen is blank and there is no POST, BIOS or boot, you can follow the steps mentioned here, and there is a good chance it will work if there is no hardware failure inside. Step 1: Remove the power cord from the laptop. Step 2: Remove the battery from the laptop. Step 3: Hold the power button (keep it pressed) for almost 60 seconds. Step 4: Plug the power back into the laptop. Step 5: Start the computer and it should just start normally. Step 6: Now shut down. Step 7: Insert the battery back into the laptop. Step 8: Start the laptop again and it should work. Note 1: If your laptop does not work after inserting the battery back, remove the battery and repeat the above process. Do not insert the battery back, as it is malfunctioning. Note 2: If your screen is faulty or you have issues with your hardware (motherboard, screen or anything else), this method will not fix your computer. For those who care about how I came up with this non-SQL-related blog post, here is the very funny true story. If you are a married man, you will know what I am going to describe next. Maybe you have faced the same situation, or at least you will feel and understand my situation. My wife’s computer suddenly stopped working when she was searching for my daughter’s mathematics worksheets online. While the fatal accident happened with my wife’s computer (which was my loyal computer for over 4 years before she got it), I was working in my home office, fixing a high priority issue (a live orders database was corrupted) for one of the largest eCommerce websites. While I was working on the production server fixing the database corruption, my wife ran to my home office. Here is how the conversation went: Wife: This computer does not work. I: Restart it. Wife: It does not start. I: What did you do with it? Wife: Nothing, it just stopped working. I: Okey, I will look into it later, working on the very urgent issue. Wife: I was printing my daughter’s worksheet. I: Hm.. Okey. Wife: It was the mathematics worksheet, which you promised you would teach but never got around to, so I am doing it myself. I: Thanks. I appreciate it. I am very busy with this issue as million dollar transactions are not happening because the database got corrupted and … Wife: So what … umm… You mean to say that you care about this customer more than your daughter. You know she got an A+ in every other class, but in mathematics she got only an A. She missed that extra credit question. I: She is only 4, it is okay. Wife: She is 4.5 years old, not 4. So you are not going to fix this computer which does not start at all. I think next time our daughter will get even lower grades as her dad is busy fixing something. I: Alright, I give up, bring me that computer. Our daughter, who had been listening all along, finally decided to speak up. Daughter: Dad, it is a laptop, not a computer. I: Yes, sweety, get that laptop here and your dad is going to fix this small issue now and the million dollar issue later on. I decided to pay attention to my wife’s computer. She was right. No matter what I did, it would not boot up, it would not start – no BIOS, no POST screen. The computer started for a second but nothing came up on the screen. The hard drive light came on for a second and went off. Nothing happened. I removed every single USB drive from the laptop but it still would not start. It was indeed no fun for me. Finally, I remembered my days when I was not married and used to study at the University of Southern California, Los Angeles. 
    I remembered that I used to have a very old second- (or maybe third- or fourth-) hand computer with me. In polite words, I had a pre-owned computer, and it used to face very similar issues again and again. I had a small routine I used to follow to fix my old computer, and I decided to follow the same steps again with this computer. Step 1: Remove the power cord from the laptop. Step 2: Remove the battery from the laptop. Step 3: Hold the power button (keep it pressed) for almost 60 seconds. Step 4: Plug the power back into the laptop. Step 5: Start the computer and it should just start normally. Step 6: Now shut down. Step 7: Insert the battery back into the laptop. Step 8: Start the laptop again and it should work. Note 1: If your laptop does not work after inserting the battery back, remove the battery and repeat the above process. Do not insert the battery back, as it is malfunctioning. Note 2: If your screen is faulty or you have issues with your hardware (motherboard, screen or anything else), this method will not fix your computer. Once I followed the above process, her computer worked. I was very delighted that I could now go back to solving the problem where millions of transactions were waiting while I fixed the corrupted database, which was currently in emergency mode. Once I fixed the computer, I looked at my wife and asked. I: Well, now this laptop is back online, can I get a guarantee that she will get an A+ in mathematics in this week’s quiz? Wife: Sure, I promise. I: Fantastic. After saying that I started to look at my database corruption and my wife interrupted me again. Wife: Btw, I forgot to tell you. Our daughter got an A in mathematics last week, but she had another quiz today and she has already received an A+ there. I kept my promise. I looked at her and she started to walk out of the room; before I could say anything, my phone rang. The DBA from the eCommerce company had called me, as he was wondering why there was no activity from my side in the last 10 minutes. DBA: Hey bud, are you still connected? I see um… no activity in the last 10 minutes. I: Oh, well, I was just saving the world. I am back now. After two hours I had fixed the database corruption and everything was normal. I was outsmarted by my wife, but honestly I still respect and love her the same, as she is the one who spends countless hours with our daughter so that she does not miss me and I can continue writing blogs and keep on doing technology evangelism. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Humor, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation using this installation guide. My base cluster is installed and working properly, as is the clustered DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 installation attempts and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install worked. It will even install to VSQL now. While testing, my install routine was: (1) run setup.exe from patched media, selecting the appropriate options; (2) after the install fails, choose "Remove node from a SQL Server failover cluster" and remove the single failed node; (3) attempt to diagnose the problem, inspect event logs, etc.; (4) delete the computer account that was created for the SQL service from Active Directory; (5) delete the MSSQL10.MSSQLSERVER folder from the shared data drive. The error message I receive from the SQL Server installer is: The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F) Along with hundreds of the following errors in the Application event log: [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. System configuration notes: Windows Server 2008 Enterprise Edition x64; SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media; Dell PowerEdge servers; Fibre attached storage
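    Because the 'NT AUTHORITY\ANONYMOUS LOGON' failures usually point at Kerberos/SPN problems for the clustered network name, one quick hedged diagnostic (my own suggestion, not part of the original question) is to check which authentication scheme connections to the instance are actually using:

    -- If this shows NTLM instead of KERBEROS for remote connections,
    -- the SQL Server SPNs registered for the network name are likely missing or wrong.
    SELECT session_id, auth_scheme
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID;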

    Read the article

  • SQL SERVER – Vacation, Travel and Study – A New Concept

    - by pinaldave
    Quite often when developers go to training sessions, they either find them very boring because of the studying, or great because they treat them as a vacation. There should be a perfect balance between study and extra activities. Studying is Boring Studying is very hard. Nobody likes to study; very few people are going to list “studying” as one of their favorite hobbies.  Already my young daughter knows she doesn’t want to study, and I don’t want to either.  If you read my blog regularly you know that I am always saying that we need to be students for life.  However, all philosophy aside, if you are put in a room with an instructor to study for eight hours a day, you are going to feel bored, uncomfortable, and unhappy.  I was a trainer myself, and I understand that all-day study sessions are no fun – even for the trainer.  I always tried to be entertaining, but even eight hours of jokes and laughter is tiresome.  Eight hours at a comedy club would be boring after a little while – and if we can’t even enjoy fun stuff for eight hours straight, how can we expect to study for eight hours straight? Studying for Career or Certification Even those who have advanced degrees and went to college for years, or even decades, find studying hard.  There is a difference between studying for a career and studying for a certification.  At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting.  You can also choose your major and what you want to spend your time studying.  For certification you do not have this luxury.  You have to learn and memorize specific parts, and there is no option to change your major if you don’t like it.  Your option is to gain your certification, or fail.  Many people will find that last option unacceptable. Studying on Vacation We have established: eight hours of uninterrupted study is boring.  That is why I am so excited about what my very good friend is doing with Koenig Solutions.  His whole goal was to make classes that are intensive but not in a traditional format.  He adds in aspects of a vacation.  It is true that you will study and sit with instructors for six or eight hours a day, but in the mornings and evenings you can go out and see the sights in exotic locations.  He has chosen the locations for his training courses for their proximity to tourist attractions like the Himalayas, the Taj Mahal, and Goa, India’s most popular resort town.  Every location has access to great experiences like river rafting, safari tours, or meditation.  There are five locations to choose from: Delhi, Dehradun, Shimla (close to the Himalayas), Goa Beach, and Dubai.  After a day of classes and hours of sight-seeing, you will be more than ready to return to campus tired and ready to study.  This is the kind of study I can do! My friend’s point is that studying and fun can still go hand-in-hand.  How many times have we heard a professor say this?  But this time it is true.  There is great fun in learning in exotic locations.  If you want to travel in India and are interested in also taking the opportunity to learn something, let Koenig Solutions know. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – UNION ALL and ORDER BY – How to Order Table Separately While Using UNION ALL

    - by pinaldave
    I often see developers trying the following syntax while using ORDER BY. SELECT Columns FROM TABLE1 ORDER BY Columns UNION ALL SELECT Columns FROM TABLE2 ORDER BY Columns However, the above query will return the following error. Msg 156, Level 15, State 1, Line 5 Incorrect syntax near the keyword ‘ORDER’. It is not possible to use two different ORDER BY clauses in a UNION statement; UNION returns a single resultset, as per the Logical Query Processing Phases. However, if your requirement is that the top and bottom queries of the UNION resultset be independently sorted but returned in the same resultset, you can add an additional static column and order by that column. Let us re-create the same scenario. First create two tables and populate them with sample data. USE tempdb GO -- Create table CREATE TABLE t1 (ID INT, Col1 VARCHAR(100)); CREATE TABLE t2 (ID INT, Col1 VARCHAR(100)); GO -- Sample Data Build INSERT INTO t1 (ID, Col1) SELECT 1, 'Col1-t1' UNION ALL SELECT 2, 'Col2-t1' UNION ALL SELECT 3, 'Col3-t1'; INSERT INTO t2 (ID, Col1) SELECT 3, 'Col1-t2' UNION ALL SELECT 2, 'Col2-t2' UNION ALL SELECT 1, 'Col3-t2'; GO Now we SELECT the data from both the tables using UNION ALL. -- SELECT without ORDER BY SELECT ID, Col1 FROM t1 UNION ALL SELECT ID, Col1 FROM t2 GO We will get the data in the following order. However, our requirement is to get the data in the following order. If we need the data ordered by the ID column, we can ORDER the resultset by ID. -- SELECT with ORDER BY SELECT ID, Col1 FROM t1 UNION ALL SELECT ID, Col1 FROM t2 ORDER BY ID GO Now, to get the data independently sorted within UNION ALL, let us add an additional column OrderKey and use ORDER BY on that column. I think the description alone does not do it proper justice, so let us see the example here. -- SELECT with ORDER BY - with ORDER KEY SELECT ID, Col1, 'id1' OrderKey FROM t1 UNION ALL SELECT ID, Col1, 'id2' OrderKey FROM t2 ORDER BY OrderKey, ID GO The above query will give the desired result. Now do not forget to clean up the database by running the following script. -- Clean up DROP TABLE t1; DROP TABLE t2; GO Here is the complete script used in this example. USE tempdb GO -- Create table CREATE TABLE t1 (ID INT, Col1 VARCHAR(100)); CREATE TABLE t2 (ID INT, Col1 VARCHAR(100)); GO -- Sample Data Build INSERT INTO t1 (ID, Col1) SELECT 1, 'Col1-t1' UNION ALL SELECT 2, 'Col2-t1' UNION ALL SELECT 3, 'Col3-t1'; INSERT INTO t2 (ID, Col1) SELECT 3, 'Col1-t2' UNION ALL SELECT 2, 'Col2-t2' UNION ALL SELECT 1, 'Col3-t2'; GO -- SELECT without ORDER BY SELECT ID, Col1 FROM t1 UNION ALL SELECT ID, Col1 FROM t2 GO -- SELECT with ORDER BY SELECT ID, Col1 FROM t1 UNION ALL SELECT ID, Col1 FROM t2 ORDER BY ID GO -- SELECT with ORDER BY - with ORDER KEY SELECT ID, Col1, 'id1' OrderKey FROM t1 UNION ALL SELECT ID, Col1, 'id2' OrderKey FROM t2 ORDER BY OrderKey, ID GO -- Clean up DROP TABLE t1; DROP TABLE t2; GO I am sure there are many more ways to achieve this; what method would you use if you had to face a similar situation? Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
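    To answer the closing question with one hedged alternative (my own sketch, not from the post): wrap the UNION ALL in a derived table that carries a numeric sort key, so the key never appears in the final column list.

    -- Alternative: hide the ordering key inside a derived table
    SELECT x.ID, x.Col1
    FROM (
        SELECT ID, Col1, 1 AS SortKey FROM t1
        UNION ALL
        SELECT ID, Col1, 2 AS SortKey FROM t2
    ) AS x
    ORDER BY x.SortKey, x.ID;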

    Read the article

  • SQL SERVER – Asynchronous Update and Timestamp – Check if Row Values are Changed Since Last Retrieve

    - by pinaldave
    Here is the question received just this morning. “Pinal, Our application is much different than other applications you might have come across. In simple words, I would like to call it an Asynchronously Updated Application. We need your quick opinion about one of the situations we are facing. From the business side: We have a bidding system (similar to eBay, but not exactly) where multiple parties bid on one item; during the last few minutes of bidding, many parties try to bid at the same time with the same price. When they hit submit, we would like to check whether the original data which they retrieved has changed or not. If the original data which they retrieved is the same, we will accept their new proposed price. If the original data has changed, they will have to resubmit the data with a new price. From the technical side: We have a row which we retrieve in our application. Multiple users are retrieving the same row. Some of the users will update the value of the row and submit. However, only the very first user should be allowed to update the row, and all the remaining users will have to re-fetch the row and update it once again. We do not want to lock any record as that will create other problems. Do you have any solution for this kind of situation?” Fantastic question. I believe there is a good chance that we can use the timestamp datatype in this kind of application. Before we continue, let us see the following simple example. USE tempdb GO CREATE TABLE SampleTable (ID INT, Col1 VARCHAR(100), TimeStampCol TIMESTAMP) GO INSERT INTO SampleTable (ID, Col1) VALUES (1, 'FirstVal') GO SELECT ID, Col1, TimeStampCol FROM SampleTable st GO UPDATE SampleTable SET Col1 = 'NextValue' GO SELECT ID, Col1, TimeStampCol FROM SampleTable st GO DROP TABLE SampleTable GO Now let us see the resultset. Here is the simple explanation of the scenario. We created a table with a simple column of the TIMESTAMP datatype. When we inserted the very first value, the timestamp was generated. When we updated any value in that row, the timestamp was updated with a new value. Every single time we update any value in the row, it will generate a new timestamp value. Now let us apply this to the original question’s scenario. In that case, multiple users are retrieving the same row; everybody will now have the same timestamp with them. Before any user updates any value, they should once again retrieve the timestamp from the table and compare it with the timestamp they have with them. If both of the timestamps have the same value, the original row has not been updated and we can safely update the row with the new value. After the initial update, the row will contain a new timestamp. Any subsequent update to the same row should also go through the same process of checking the value of the timestamp held in memory. In this case, the timestamp from memory will be different from the timestamp in the row, which indicates that the row in the table has changed and the new update should not be allowed. I believe timestamp can be very useful in this kind of scenario. Is there any better alternative? Please leave a comment with your suggestion and I will post it on the blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
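    Here is a minimal, hedged sketch (my own illustration, reusing the SampleTable from the script above) of how the compare-and-update step can be collapsed into one statement, so the timestamp check and the write happen atomically:

    -- @OriginalTimestamp holds the rowversion value the user read earlier (hypothetical variable)
    DECLARE @OriginalTimestamp BINARY(8);
    SELECT @OriginalTimestamp = TimeStampCol FROM SampleTable WHERE ID = 1;

    -- ... another user may update the row in the meantime ...

    UPDATE SampleTable
    SET Col1 = 'MyNewBid'
    WHERE ID = 1
      AND TimeStampCol = @OriginalTimestamp;  -- succeeds only if the row is unchanged

    IF @@ROWCOUNT = 0
        PRINT 'Row was changed by someone else - re-fetch and try again.';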

    Read the article

  • Now Customers Can Actually Locate Your Resources with URL Rewriter 2.0 RTW

    - by The Official Microsoft IIS Site
    Today, Microsoft announced the final release of IIS URL Rewriter 2.0 RTW . Now the first reason might be obvious why you would want to rewrite a URL – when you are at a cocktail party with loud music and tasty appetizers and a potential customer asks you where they can get more info on your snazzy new idea. And you proudly blurt out next to their ear over the roar of the bass, “Just go to h-t-t-p colon slash slash w-w-w dot my new idea dot com slash items dot a-s-p-x question mark cat ID equals new...(read more)

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >