Search Results

Search found 62701 results on 2509 pages for 'sql function'.


  • MySQL – How to Write Loop in MySQL

    - by Pinal Dave
    Since I have written courses on MySQL, I often get emails with MySQL questions. Here is one I receive quite often: “How do I loop queries in MySQL?” Well, MySQL does not currently let you write loops in ad-hoc SQL; you have to write a stored procedure (routine) instead. Here is an example of how we can create a procedure in MySQL which will loop over a block of code. In this example I have used a SELECT 1 statement and looped over it; in reality you can put any code there and loop over it. This procedure accepts one parameter, which is the number of times the loop will iterate.

        delimiter //
        CREATE PROCEDURE doiterate(p1 INT)
        BEGIN
          label1: LOOP
            SET p1 = p1 - 1;
            IF p1 > 0 THEN
              SELECT 1;
              ITERATE label1;
            END IF;
            LEAVE label1;
          END LOOP label1;
        END//
        delimiter ;

        CALL doiterate(100);

    You can also use WHILE to loop; we will see that in future blog posts. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
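
    As a preview of the WHILE approach mentioned above, here is a minimal sketch; the procedure name dowhile is illustrative and not from the original post:

        delimiter //
        CREATE PROCEDURE dowhile(p1 INT)
        BEGIN
          WHILE p1 > 0 DO
            SELECT 1;
            SET p1 = p1 - 1;
          END WHILE;
        END//
        delimiter ;

        CALL dowhile(100);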

    Read the article

  • SQLAuthority News – Mark the Date: October 16, 2013 – Introducing NuoDB Blackbirds: THE Distributed Database

    - by Pinal Dave
    I am very excited to announce, first on this blog, the release of NuoDB Blackbirds (NuoDB Release 2.0). NuoDB is my favorite application to work with data nowadays. They are steadily gaining market share and bringing out new features with every release. I was very excited when I learned that NuoDB is releasing its flagship 2.0 release on October 16, 2013. Interestingly enough, I will be in the USA while this release happens and will be watching it live during my daytime. Even if I had to stay up the entire night just to watch this release, I would do it. Here are the details of the announcement: Introducing NuoDB Blackbirds: THE Distributed Database. Date: October 16, 2013. Time: 1:00 PM EDT. Location: Online. Registration Link. What is the best DBMS architecture to handle today’s and tomorrow’s evolving needs? The days of shared disk are over. The times are “a-changin” and IT infrastructure has to change with them. Join NuoDB live for the introduction of our latest major product release, NuoDB Blackbirds, and take a look at why the NuoDB distributed database architecture is the only answer for customers like Fathom Voice, a leading provider of Voice over IP (VoIP). NuoDB CEO Barry Morris welcomes Cameron Weeks, CEO of Fathom Voice, to discuss how his company is using the DBMS to break away from the pack and become the hottest player in VoIP. The webcast will include demonstrations of a single, logical database running in multiple geographies and a live Q&A. If for any reason you cannot watch it live, do not worry at all; just register at the Registration Link, and after the event you will get a link to watch it on-demand. You can watch the launch event at any time if you have registered. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: NuoDB

    Read the article

  • MySQL – Export the Resultset to CSV file

    - by Pinal Dave
    In SQL Server, you can use the bcp utility to export a result set to a CSV file. In MySQL, too, you can export data from a table or result set as a CSV file in several ways. Here are two methods. Method 1: Use MySQL Workbench. If you are using Workbench as a querying tool, you can make use of the Export option in its result window. Run the following code in Workbench:

        SELECT db_names FROM mysql_testing;

    The result will be shown in the result window. There is an option called “File”; click on it and it will prompt you with a window to save the result set (a screenshot in the original post shows how the File option can be used). Choose the directory and type out the name of the file. Method 2: Use the INTO OUTFILE clause. You can do the export using a query with the INTO OUTFILE clause, as shown below:

        SELECT db_names FROM mysql_testing
        INTO OUTFILE 'C:/testing.csv'
        FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"'
        LINES TERMINATED BY '\r\n';

    After executing the above code, you will find a file named testing.csv in the C: drive of the server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL Tagged: CSV
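
    One thing Method 2 does not give you is a header row. A hedged sketch, assuming your MySQL version accepts INTO OUTFILE on the final SELECT of a UNION, is to put the column name on top (the file name testing_with_header.csv is illustrative):

        -- UNION a literal header row above the data rows
        SELECT 'db_names'
        UNION ALL
        SELECT db_names FROM mysql_testing
        INTO OUTFILE 'C:/testing_with_header.csv'
        FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"'
        LINES TERMINATED BY '\r\n';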

    Read the article

  • Big Data – Basics of Big Data Architecture – Day 4 of 21

    - by Pinal Dave
    In yesterday’s blog post we understood how the evolution of Big Data happened. Today we will understand the basics of Big Data architecture. Big Data Cycle: Just like every other database-related application, a big data project has its own development cycle, though the three Vs (link) certainly play an important role in deciding the architecture of Big Data projects. Like every other project, a Big Data project goes through similar phases of capturing, transforming, integrating, and analyzing the data, and building actionable reporting on top of it. While the process looks almost the same, the nature of the data often makes the architecture totally different. Here are a few questions everyone should ask before going ahead with a Big Data architecture. Questions to Ask:
    - How big is your total database?
    - What is your reporting requirement in terms of time: real time, semi-real time, or at frequent intervals?
    - How important is data availability, and what is the plan for disaster recovery?
    - What are the plans for network and physical security of the data?
    - What platform will be the driving force behind the data, and what are the different service level agreements for the infrastructure?
    These are just basic questions; based on your application and business need, you should come up with your own custom list of questions to ask. As I mentioned earlier, these questions may look quite simple, but the answers will not be. When we talk about Big Data implementations, there are many other important aspects to consider when deciding on the architecture. Building Blocks of Big Data Architecture: It is absolutely impossible to discuss and nail down the most optimal architecture for any Big Data solution in a single blog post; however, we can discuss the basic building blocks of big data architecture. The image I built for this (see the original post) gives a good overview of how the various components of a Big Data architecture are associated with each other. In Big Data, many different data sources are part of the architecture, hence extraction, transformation, and integration are among the most essential layers of the architecture. Most of the data is stored in relational as well as non-relational data marts and data warehousing solutions. As per the business need, the data is processed and converted into proper reports and visualizations for end users. Just like the software, the hardware is almost the most important part of the Big Data architecture: in a big data architecture the hardware infrastructure is extremely important, and failover instances as well as redundant physical infrastructure are usually implemented. NoSQL in Data Management: NoSQL is a very famous buzzword, and it really means “not relational SQL” or “Not Only SQL”. This is because in a Big Data architecture the data can be in any format: unstructured, relational, or in any other format from any other data source. To bring all this data together, relational technology is not enough; hence new tools, architectures, and algorithms were invented which take care of all kinds of data. These are collectively called NoSQL. Tomorrow: Over the next four days we will answer the buzzword: Hadoop. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • MySQL – Beginning Temporary Tables in MySQL

    - by Pinal Dave
    MySQL supports temporary tables to store result sets temporarily for a given connection. Temporary tables are created with the TEMPORARY keyword along with the CREATE TABLE statement. Let us create a temporary table named TEMP:

        CREATE TEMPORARY TABLE TEMP (id INT);

    Now you can find out the column names using the DESC command:

        DESC TEMP;

    This table can be accessed only from the current connection; it can be used like a permanent table and is automatically dropped when the connection is closed. However, you cannot find temporary tables using the INFORMATION_SCHEMA.TABLES system view; it will only list permanent tables. MySQL usually stores the data of temporary tables in memory, processed by the MEMORY storage engine, but if the data size is too large MySQL automatically converts it to an on-disk table using the MyISAM engine. You can also create a permanent table with the same name as a temporary table in the same connection; however, the structure of the permanent table is visible only once the temporary table with the same name is dropped. Let us create a permanent table with the same name, TEMP, as below:

        CREATE TABLE TEMP (id INT, names VARCHAR(100));

    Now running the following command still gives you the structure of the temporary table TEMP created earlier:

        DESC TEMP;

    You can drop the temporary table using the DROP TEMPORARY TABLE command:

        DROP TEMPORARY TABLE TEMP;

    After you have dropped the temporary table, run the following command:

        DESC TEMP;

    Now you will see the structure of the permanent table named TEMP. In summary: if there is a temporary table in MySQL, it gets priority over a permanent table of the same name in the session. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
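
    Condensed into a single runnable sketch of the shadowing behavior described above, with the expected results as comments:

        CREATE TEMPORARY TABLE TEMP (id INT);
        CREATE TABLE TEMP (id INT, names VARCHAR(100));  -- permanent table, same name
        DESC TEMP;                   -- shows only (id): the temporary table wins
        DROP TEMPORARY TABLE TEMP;
        DESC TEMP;                   -- now shows (id, names): the permanent table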

    Read the article

  • MySQL – Introduction to CONCAT and CONCAT_WS functions

    - by Pinal Dave
    MySQL supports two concatenation functions: CONCAT and CONCAT_WS. The CONCAT function simply concatenates all the argument values as they are:

        SELECT CONCAT('Television','Mobile','Furniture');

    The above code returns the following: TelevisionMobileFurniture. If you want to concatenate the values with a comma, you either need to include the comma at the end of each value, or pass the comma as an argument along with the values:

        SELECT CONCAT('Television,','Mobile,','Furniture');
        SELECT CONCAT('Television',',','Mobile',',','Furniture');

    Both of the above return the following: Television,Mobile,Furniture. However, you can avoid the extra work by using the CONCAT_WS function, which stands for “concatenate with separator”. It is very similar to the CONCAT function, but accepts the separator as the first argument:

        SELECT CONCAT_WS(',','Television','Mobile','Furniture');

    The result is Television,Mobile,Furniture. If you want a pipe as the separator, you can use:

        SELECT CONCAT_WS('|','Television','Mobile','Furniture');

    The result is Television|Mobile|Furniture. So CONCAT_WS is very flexible in concatenating values along with a separator. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
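
    One related behavior the excerpt does not cover, shown here as a small sketch: CONCAT returns NULL if any argument is NULL, while CONCAT_WS simply skips NULL values:

        SELECT CONCAT('Television', NULL, 'Furniture');          -- NULL
        SELECT CONCAT_WS(',', 'Television', NULL, 'Furniture');  -- Television,Furniture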

    Read the article

  • Big Data – Beginning Big Data Series Next Month in 21 Parts

    - by Pinal Dave
    Big Data is the next big thing. There was a time when we talked about data in terms of MB and GB. However, the industry is changing and we are now moving to conversations where we discuss data in petabytes, exabytes, and zettabytes. It seems the world is now talking about the increased Volume of data. Put simply, we all think that Big Data is nothing but plenty of volume. In reality, Big Data is much more than just a huge volume of data: when talking about the data, we need to understand Variety and Velocity along with Volume. Though Big Data looks like a simple concept, it is an extremely complex subject when we attempt to start learning it. My Journey: I have recently presented on Big Data at quite a few organizations, and I received quite a few questions during this roadshow. I have collected all the questions I received and decided to post about them on the blog. In the month of October 2013, on every weekday, we will learn something new about Big Data. Every day I will share a concept or question, and in the same blog post we will learn its answer. Big Data: Plenty of Questions. I received quite a few questions during my road trip. Here are a few of them:
    - I want to learn Big Data; where should I start?
    - Do I need to know SQL to learn Big Data?
    - What is Hadoop?
    - There are so many organizations talking about Big Data, and everyone has a different approach; how do I start with Big Data?
    - Do I need to know Java to learn about Big Data?
    - What is the difference between the various NoSQL technologies?
    I will attempt to answer most of these questions during the month-long series next month. Big Data: Big Subject. Big Data is a very big subject and I in no way claim that I will cover every single big data concept in this series. However, I promise that I will indeed share lots of the basic concepts revolving around Big Data. We will start from the fundamentals of Big Data and continue learning from there. I will attempt to cover the concepts so simply that many of you may have wondered about them but were afraid to ask. Your Role! During this series next month, I need one favor from you: please keep posting any questions you have related to big data as blog post comments and on the Facebook page. I will monitor them closely and try to answer them during the series. Now make sure that you do not miss a single blog post in this series, as every blog post will be linked to the others. You can subscribe to my feed, like my Facebook page, or subscribe via email (by entering your email in the blog post). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, PostADay, SQL, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • MySQL – How to Find mysqld.exe with Command Prompt – Fix: ‘mysql’ is not recognized as an internal or external command, operable program or batch file

    - by Pinal Dave
    One of the most popular questions I get after people watch my MySQL courses on Pluralsight is from beginning users who are not able to find where they have installed MySQL Server. The error they receive when they type the mysql or mysqld command on their default command line is as follows: ‘mysql‘ is not recognized as an internal or external command, operable program or batch file. This error comes up if a user tries to execute the command on the default command prompt; the command should be executed from the directory where the executable exists. If you are using Windows Explorer, you can easily search your drive for mysqld.exe, find the location of the file, and execute the command there. However, if you want to find the location of mysqld.exe with the command prompt, you can follow the directions here.
    Step 1: Open a command prompt. Open a command prompt from Start >> Run >> cmd >> Enter.
    Step 2: Change directory. You need to change to the root directory, hence type the cd\ command at the prompt to change the default directory to C:\. Here we are assuming that you have installed MySQL on your C: drive; if you have installed it on another drive, change to that drive letter first.
    Step 3: Search the drive. Type the command dir mysqld.exe /s /p at the command prompt. It will search your directories and list the directory where mysqld.exe is located.
    Step 4: Change directory. Now change your command prompt location to the folder where mysqld.exe is located. In my case it is the folder C:\Program Files\MySQL\MySQL Server 5.6\bin, hence I will run the following command: cd C:\Program Files\MySQL\MySQL Server 5.6\bin.
    Step 5: Execute mysqld.exe. Now you can once again run mysqld.exe at your command prompt.
    You can use this method to search for pretty much any file with the help of the command prompt. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL

    Read the article

  • T-SQL Tuesday #006: Tiger/Line Spatial Data

    - by Mike C
    This month’s T-SQL Tuesday post is about LOB data (http://sqlblog.com/blogs/michael_coles/archive/2010/05/03/t-sql-tuesday-006-what-about-blob.aspx). For this one I decided to post a sample Tiger/Line SQL database I use all the time in live demos. For those who aren't familiar with it, Tiger/Line is a dataset published by the U.S. Census Bureau. Tiger/Line has a lot of nice geospatial data down to a very detailed level. It actually goes from the U.S. state level all the way down to...(read more)
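
    To give a flavor of querying this kind of dataset with SQL Server's spatial support, here is a purely illustrative sketch; the table and column names (dbo.TigerCounties, name, shape) are assumptions, not from the post:

        -- find the county polygon containing a point, and its area in square meters
        SELECT name, shape.STArea() AS area_sq_meters
        FROM dbo.TigerCounties
        WHERE shape.STIntersects(geography::Point(40.7128, -74.0060, 4326)) = 1;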

    Read the article

  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday’s blog post we answered what Big Data is. Today we will understand why and how the evolution of Big Data happened. Though the answer is very simple, I would like to tell it in the form of a history lesson. Data in Flat Files: In earlier days, data was stored in flat files, and there was no structure in them. If any data had to be retrieved from a flat file, it was a project in itself. There was no way to retrieve data efficiently, and data integrity was just a term discussed without any modeling or structure around it. Databases residing in flat files had more issues than we would like to discuss in today’s world; it was more like a nightmare when any data processing was involved in an application. Though applications developed at that time were not that advanced either, the need for data was always there, and so was the need for proper data management. Edgar F. Codd and the 12 Rules: Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the relational database, and suddenly the chaotic world of databases began to see discipline in those rules. The relational database was a promising land for all the unstructured-database users. It introduced relationships between data and improved the performance of data retrieval. The database world immediately saw a major transformation, and every vendor and database user suddenly started to adopt relational database models. Relational Database Management Systems: Since Edgar F. Codd proposed the 12 rules for the RDBMS, many different vendors started to build applications and tools to support relationships between data. This was indeed a learning curve for many developers who had never before worked with database modeling. However, as time passed, pretty much everybody accepted the relational model and started to evolve products which performed best within the boundaries of RDBMS concepts. This was the best era for databases, and it gave the world extreme experts as well as some of the best products. The Entity-Relationship model also evolved at the same time: in software engineering, an Entity-Relationship model (ER model) is a data model for describing a database in an abstract way. Enormous Data Growth: Well, everything was going fine with the RDBMS in the database world. As there were no major challenges, the adoption of RDBMS applications and tools was pretty much universal. There was at times a race to make the developer’s life much easier with RDBMS management tools. Due to the extreme popularity and ease of use of these systems, pretty much all data was stored in RDBMS systems. New-age applications were built and social media took the world by storm. Every organization felt pressure to provide the best experience for its users based on the data it had, and all the while data kept growing in pretty much every organization and application. Data Warehousing: The enormous data growth now presented a big challenge for organizations that wanted to build intelligent systems based on the data and provide near-real-time, superior user experiences to their customers. Various organizations immediately started building data warehousing solutions where the data was stored and processed. Business intelligence became an everyday need: data was received from transaction systems and processed overnight to build intelligent reports from it. Though this was a great solution, it had its own set of challenges. The relational database model and data warehousing concepts were all built with traditional relational database modeling in mind, and they still had many challenges where unstructured data was present. Interesting Challenge: Every organization had expertise in managing structured data, but the world had already changed to unstructured data. There was intelligence in videos, photos, SMS, text, social media messages, and various other data sources. All of this now needed to be brought to a single platform, to build a uniform system which does what businesses need. The way we do business has also changed: there was a time when users only got the features technology supported; now users ask for a feature and technology is built to support it. The need for real-time intelligence from fast-paced data flows is now becoming a necessity. Large amounts (Volume) of diverse (Variety), high-speed (Velocity) data: these are the defining properties of the data. The traditional database system has limits in resolving the challenges this new kind of data presents, hence the need for Big Data science. We need innovation in how we handle and manage data. We need creative ways to capture data and present it to users. Big Data is reality! Tomorrow: In tomorrow’s blog post we will discuss the basics of Big Data architecture. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQLAuthority Guest Post – Lessons from Life and Work by Srini Chandra (Author of 3 Lives, in search of bliss)

    - by pinaldave
    Work and life are confusing terms together. How can one consider work outside of life? Work should be part of life, or are we considering ourselves dead when we are at work? I have often seen developers and DBAs complaining, confused about their job, work, and life. Complaining is easy and everyone can do it; I have quite often heard the expression, “I do not have any other option.” I requested Srini Chandra (renowned author of the Amazon best seller 3 Lives, in search of bliss (Amazon | Flipkart)) to write a guest post on this subject which developers can read and appreciate. Let us see Srini’s thoughts in his own words. Each of us who works in the technology industry carries an especially heavy burden nowadays. For fate has placed in our hands an awesome power to shape our society and its consciousness. For that reason, we must pay more and more attention to issues of professionalism, social responsibility, and ethics. Equally importantly, the responsibility lies in our hands to ensure that we view our work and career as an opportunity to enlighten and lift ourselves up. Story: A Prisoner, 20 Years and a Wheel. Many years ago, I heard this story from a professor when I was a student at Carnegie Mellon. A man was sentenced to 20 years in prison. During his time in prison, he was asked to turn a wheel every day. So, every day he turned the wheel. At times, when he was tired or puzzled and stopped turning the wheel, he would be flogged with a whip. The man did not know anything about the wheel other than that it was placed outside his jail somewhere. He wondered if the wheel crushed corn or ground wheat or something similar. He wondered if turning the wheel was useful to anyone. At the end of his jail term, he rushed out to see what the wheel was doing. To his disappointment, he found that the wheel was not connected to anything. All these years, he had been toiling for nothing. He gave a loud, frustrated shout and dropped dead. How many of us are turning wheels wondering what they are connected to? How many of us have unstated, uncaring attitudes towards our careers? How many of us view work as drudgery, as no more than a way to earn the next paycheck? How many of us have wondered about the spiritually uplifting aspect of work? Can a workforce that views work as merely a chore be ethical? Can it produce truly life-enhancing technology? Can it make positive contributions to the quality of life of a society? I think not. Thanks to Pinal and you, his readers, for giving me this opportunity to share my thoughts in a series of guest posts. I’d like to present, over the next few weeks, a few ways in which we can tap into the liberating potential of work and make our lives better in the process. Now, please allow me to tell you another version of the story that the good professor shared with us in the classroom that day. Story: A Prisoner, 20 Years, a Wheel and the LIFE. A man was sentenced to 20 years in prison. During his time in prison, he was asked to turn a wheel every day. So, every day he turned the wheel. At first, his whole body and mind rebelled against his predicament, so his limbs grew weary and his mind became numb and confused. And then, his self-awareness began to grow. He began to wonder how he came to be in the prison in the first place. He looked around and saw all his fellow prisoners also turning the wheel. His wife, his parents, his friends and his children: they were all in the prison too, turning their own wheels! He began to wonder how this came about. As he wondered more and more, he began to focus less on his physical drudgery and boredom, and he began to clearly see his inner spirit, which guided him in ways that allowed him to see the world with a universal view. His inner spirit guided him towards the source of eternal wisdom and happiness. He began to see the source of happiness in everything around him: his prison-bound relationships, even his jailers, and his wheel. He became a source of light to those around him. His wheel jokes and humor infected them with joy and happiness. Finally, the day came for his release from jail. He walked calmly outside the jail and laughed aloud when he saw that the wheel was not connected to anything. He knelt down, kissed it, and thanked it for the wisdom it had taught him. Life is the prison. The wheel is your work. Both are sacred. Both have enormous powers to teach us wisdom and bring us happiness. Whether we allow them to do so is a choice we have to make. Over the next few weeks, I hope to share with you a few lessons that I have learnt at the wheel in my two decades of career (prison). Thank you for reading, and do let me know what you think. Reference: Srini Chandra (3 Lives, in search of bliss), Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, T SQL, Technology

    Read the article

  • Participating in 3 SQL events in 8 days

    - by AaronBertrand
    Well, 8 days not including lead and lag travel time. A quick summary of the three events, and the flights it took to get me to each: SQL Saturday #105 - Dublin, Ireland Flights on March 21st: Providence -> Philadelphia (236 miles) Philadelphia -> Dublin (3,260 miles) Time zone change: +4 when we got there, plus Daylight Saving Time kicked in, so +5 after the event. I spoke at this event, and manned the SQL Sentry booth with cohort Scott Fallen. This event was a fantastic SQL Saturday - very...(read more)

    Read the article

  • MS12-070 : Security Updates for all supported versions of SQL Server

    - by AaronBertrand
    This week there was a security release for all supported versions of SQL Server. Each version has 32-bit and 64-bit patches, and each version has GDR (General Distribution Release) and QFE (Quick-Fix Engineering) patches. GDR should be applied if you are at the base (RTM or SP) build for your version, while QFE should be applied if you have installed any cumulative updates after the RTM or SP build. (More details here.) SQL Server 2005: RTM, SP1, SP2, SP3 - not supported; SP4 - GDR = 9.00.5069,...(read more)

    Read the article

  • SQL Server 2012 : The Data Tools installer is now available

    - by AaronBertrand
    Last week when RC0 was released, the updated installer for "Juneau" (SQL Server Data Tools) was not available. Depending on how you tried to get it, you either ended up on a blank search page, or a page offering the CTP3 bits. Important note: the CTP3 Juneau bits are not compatible with SQL Server 2012 RC0. If you already have Visual Studio 2010 installed (meaning Standard/Pro/Premium/Ultimate), you will need to install Service Pack 1 before continuing. You can get to the installer simply by opening...(read more)

    Read the article

  • Bit-Twiddling in SQL

    - by Mike C
    Someone posted a question to the SQL Server forum the other day asking how to count runs of zero bits in an integer using SQL. Basically the poster wanted to know how to efficiently determine the longest contiguous string of zero-bits (known as a run of bits) in any given 32-bit integer. Here are a couple of examples to demonstrate the idea (Decimal = Binary = Zero Run): 999,999,999 decimal = 00111011100110101100100111111111 binary = 2 contiguous zero bits; 666,666,666 decimal = 00100111 10111100...(read more)
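
    The excerpt cuts off before any solution, but here is one hedged way to compute the longest zero-bit run in T-SQL (a sketch, not necessarily the approach the post takes): expand the 32 bit positions with a numbers table, then group consecutive equal bits with a gaps-and-islands trick:

        DECLARE @n INT = 999999999;  -- assumes a non-negative input for simplicity

        WITH bits AS (
            -- bit value at each of the 32 positions (0 = least significant)
            SELECT p.pos,
                   CONVERT(INT, (CONVERT(BIGINT, @n) / POWER(CONVERT(BIGINT, 2), p.pos)) % 2) AS bitval
            FROM (SELECT TOP (32) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS pos
                  FROM sys.all_objects) AS p
        ),
        runs AS (
            -- consecutive positions with the same bit value share a group id
            SELECT bitval,
                   pos - ROW_NUMBER() OVER (PARTITION BY bitval ORDER BY pos) AS grp
            FROM bits
        )
        SELECT TOP (1) COUNT(*) AS longest_zero_run
        FROM runs
        WHERE bitval = 0
        GROUP BY grp
        ORDER BY COUNT(*) DESC;  -- returns 2 for 999,999,999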

    Read the article

  • T-SQL Tuesday #005: Creating SSMS Custom Reports

    - by Mike C
    This is my contribution to the T-SQL Tuesday blog party, started by Adam Machanic and hosted this month by Aaron Nelson. Aaron announced this month's topic is "reporting" so I figured I'd throw a blog up on a reporting topic I've been interested in for a while -- namely creating custom reports in SSMS. Creating SSMS custom reports isn't difficult, but like most technical work it's very detailed with a lot of little steps involved. So this post is a little longer than usual and includes a lot of...(read more)

    Read the article

  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and myself are doing a double act: Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this new and exciting technology; on top of that, I'll be talking about SQL Server disk IO. Well, "disk" might not be relevant anymore, because I'll also be talking about SSDs and Fusion-io; basically I'll be talking about the underpinnings, making sure you understand and get it right, how to monitor, etc. If you've any specific problems or questions, just ping me an email at [email protected]. To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx
    18:15 SQL Server and Disk IO. Tony Rogerson, SQL Server MVP (Tony's Blog; Tony on Twitter). In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSDs have, and other options for throughput enhancement like Fusion-io. He will look at the effect fragmentation has and how to minimise its impact; he will look at the file structure of a database and talk about what benefits multiple files and filegroups bring. We will also touch on database mirroring and the effect that has on throughput, and how to get a feeling for the throughput you should expect.
    19:15 Break.
    19:45 Complex Event Processing (CEP). Allan Mitchell, SQL Server MVP (http://sqlis.com/sqlis). StreamInsight is Microsoft’s first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology. I will show how and why it is useful. I will get us used to some new terminology, but best of all I will show just how easy it is to start building your first CEP/ESP application.

    Read the article

  • SQL Server v.Next (Denali) : Deriving sets using SEQUENCE

    - by AaronBertrand
    One complaint about SEQUENCE is that there is no simple construct such as NEXT (@n) VALUES FOR so that you could get a range of SEQUENCE values as a set. In a previous post about SEQUENCE, I mentioned that to get a range of rows from a sequence, you should use the system stored procedure sys.sp_sequence_get_range. There are some issues with this stored procedure: the parameter names are not easy to memorize; it requires multiple conversions to and from SQL_VARIANT; and producing a set from the...(read more)
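
    As a rough illustration of the workaround described (a sketch, not from the post: dbo.MySeq is an assumed sequence, and only three of the procedure's parameters are shown):

        CREATE SEQUENCE dbo.MySeq AS INT START WITH 1 INCREMENT BY 1;

        DECLARE @first SQL_VARIANT;
        EXEC sys.sp_sequence_get_range
             @sequence_name     = N'dbo.MySeq',
             @range_size        = 10,
             @range_first_value = @first OUTPUT;

        -- the SQL_VARIANT conversion dance the post complains about:
        SELECT CONVERT(INT, @first) AS first_value_in_range;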

    Read the article

  • Error with SQL Server Setup 2012 on Windows 2012

    - by Jeff
    I am trying to install SQL Server on Windows 2012. I was able to finally get the wizard up and running after making some changes on the server, but now it fails no matter what I do with the following error: TITLE: SQL Server Setup failure. SQL Server Setup has encountered the following error: There is an error in XML document (108, 148).. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&EvtType=0x066FCAFD%25400x5539C151 LinkID: 20476 Product Name: Microsoft SQL Server Message Source setup.rll Message ID: 50000 EvtType: 0x066FCAFD%400x5539C151 What I've tried: Installing from commandline with /q Result from CL installation: Error result: -2147467259 Result facility code: 0 Result error code: 16389 Please review the summary.txt log for further details The Verbose CL installation reveals: Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Package ID sql_bids_loc_Cpu64_1033: NotInstalled Sco: File 'C:\SQL Install\1036_FRA_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Sco: File 'C:\SQL Install\1036_FRA_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Package ID sql_bids_loc_Cpu64_1036: NotInstalled Sco: File 'C:\SQL Install\1040_ITA_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Sco: File 'C:\SQL Install\1040_ITA_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Package ID sql_bids_loc_Cpu64_1040: NotInstalled Sco: File 'C:\SQL Install\1041_JPN_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Sco: File 'C:\SQL Install\1041_JPN_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Package ID sql_bids_loc_Cpu64_1041: NotInstalled Sco: File 'C:\SQL Install\1042_KOR_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Sco: File 'C:\SQL Install\1042_KOR_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Package ID sql_bids_loc_Cpu64_1042: NotInstalled Sco: File 'C:\SQL Install\1046_PTB_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Sco: File 'C:\SQL Install\1046_PTB_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Package ID sql_bids_loc_Cpu64_1046: NotInstalled Sco: File 'C:\SQL Install\1049_RUS_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Sco: File 'C:\SQL Install\1049_RUS_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Package ID sql_bids_loc_Cpu64_1049: NotInstalled Sco: File 'C:\SQL Install\2052_CHS_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Sco: File 'C:\SQL Install\2052_CHS_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Package ID sql_bids_loc_Cpu64_2052: NotInstalled Sco: File 'C:\SQL Install\3082_ESN_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Sco: File 'C:\SQL Install\3082_ESN_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Package ID sql_bids_loc_Cpu64_3082: NotInstalled Sco: File 'C:\SQL Install\1053_SVE_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Sco: File 'C:\SQL Install\1053_SVE_LP\x64\setup\x64\sql_bids_loc.msi' does not exist Package ID sql_bids_loc_Cpu64_1053: NotInstalled Sco: File 'C:\SQL Install\x64\setup\x64\sql_ssms.msi' does not exist Sco: File 'C:\SQL Install\x64\setup\x64\sql_ssms.msi' does not exist Package ID sql_ssms_Cpu64: NotInstalled Sco: File 'C:\SQL Install\1028_CHT_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\1028_CHT_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_1028: NotInstalled Sco: File 'C:\SQL Install\1031_DEU_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: 
File 'C:\SQL Install\1031_DEU_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_1031: NotInstalled Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_1033: NotInstalled Sco: File 'C:\SQL Install\1036_FRA_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\1036_FRA_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_1036: NotInstalled Sco: File 'C:\SQL Install\1040_ITA_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\1040_ITA_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_1040: NotInstalled Sco: File 'C:\SQL Install\1041_JPN_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\1041_JPN_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_1041: NotInstalled Sco: File 'C:\SQL Install\1042_KOR_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\1042_KOR_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_1042: NotInstalled Sco: File 'C:\SQL Install\1046_PTB_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\1046_PTB_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_1046: NotInstalled Sco: File 'C:\SQL Install\1049_RUS_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\1049_RUS_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_1049: NotInstalled Sco: File 'C:\SQL Install\2052_CHS_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\2052_CHS_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_2052: NotInstalled Sco: File 'C:\SQL Install\3082_ESN_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\3082_ESN_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_3082: NotInstalled Sco: File 'C:\SQL Install\1053_SVE_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Sco: File 'C:\SQL Install\1053_SVE_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist Package ID sql_ssms_loc_Cpu64_1053: NotInstalled Sco: File 'C:\SQL Install\x64\setup\sql_common_core_msi\x64\sql_common_core.msi' does not e Sco: File 'C:\SQL Install\x64\setup\sql_common_core_msi\x64\sql_common_core.msi' does not e Package ID sql_common_core_Cpu64: NotInstalled Sco: File 'C:\SQL Install\1028_CHT_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\1028_CHT_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_1028: NotInstalled Sco: File 'C:\SQL Install\1031_DEU_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\1031_DEU_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_1031: NotInstalled Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_1033: NotInstalled Sco: File 'C:\SQL Install\1036_FRA_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\1036_FRA_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_1036: NotInstalled Sco: File 'C:\SQL 
Install\1040_ITA_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\1040_ITA_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_1040: NotInstalled Sco: File 'C:\SQL Install\1041_JPN_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\1041_JPN_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_1041: NotInstalled Sco: File 'C:\SQL Install\1042_KOR_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\1042_KOR_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_1042: NotInstalled Sco: File 'C:\SQL Install\1046_PTB_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\1046_PTB_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_1046: NotInstalled Sco: File 'C:\SQL Install\1049_RUS_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\1049_RUS_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_1049: NotInstalled Sco: File 'C:\SQL Install\2052_CHS_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\2052_CHS_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_2052: NotInstalled Sco: File 'C:\SQL Install\3082_ESN_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\3082_ESN_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_3082: NotInstalled Sco: File 'C:\SQL Install\1053_SVE_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Sco: File 'C:\SQL Install\1053_SVE_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core Package ID sql_common_core_loc_Cpu64_1053: NotInstalled Sco: File 'C:\Users\Administrator\AppData\Local\Temp\2\SQL Server 2012\Setup\1033_ENU_LP\x6 lSupport.msi' does not exist Sco: File 'C:\Users\Administrator\AppData\Local\Temp\2\SQL Server 2012\Setup\1033_ENU_LP\x6 lSupport.msi' does not exist Package ID SqlSupport_Cpu64: NotInstalled Sco: File 'C:\SQL Install\redist\watson\x86\dw20shared.msi' does not exist Sco: File 'C:\SQL Install\redist\watson\x86\dw20shared.msi' does not exist Package ID WatsonX86_Cpu32: NotInstalled Package ID sqlncli_Cpu64: NotInstalled Package ID SqlLocalDB_Cpu64: NotInstalled Package ID SqlLocalDB_CTP3_Cpu64: NotInstalled Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x86\SSDTStub.msi' does not exist Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x86\SSDTStub.msi' does not exist Package ID SSDTStub_Cpu32: NotInstalled Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x86\SSDTDBSvcExternals.msi' does not exist Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x86\SSDTDBSvcExternals.msi' does not exist What does this mean?

    Read the article

  • Using SQL Developer to Debug your Anonymous PL/SQL Blocks

    - by JeffS
    Everyone knows that SQL Developer has a PL/SQL debugger. Check! Everyone also knows that it’s only set up for debugging standalone PL/SQL objects like functions, procedures, and packages, right? NO! SQL Developer can also debug your stored Java procedures AND your standalone PL/SQL blocks. These bits of PL/SQL which do not live in the database are also known as ‘anonymous blocks.’ Anonymous PL/SQL blocks can be submitted to interactive tools such as SQL*Plus and Enterprise Manager, or embedded in an Oracle precompiler or OCI program. At run time, the program sends these blocks to the Oracle database, where they are compiled and executed. Here’s an example of something you might want help debugging:

        Declare
          x number := 0;
        Begin
          Dbms_Output.Put(Sysdate || ' ' || Systimestamp);
          For Stuff In 1..100 Loop
            Dbms_Output.Put_Line('Stuff is equal to ' || Stuff || '.');
            x := Stuff;
          End Loop;
        End;
        /

    With the power of remote debugging and unshared worksheets, we are going to be able to debug this anonymous block! The trick: we need to create a dummy stored procedure and call it in our anonymous block. Then we create an unshared worksheet and execute the script from there while the SQL Developer session is listening for remote debug connections. We step through the dummy procedure, and this takes us OUT to the calling anonymous block. Then we can use watches, breakpoints, and all that fancy debugger stuff! First things first, create this dummy procedure:

        create or replace procedure do_nothing is
        begin
          null;
        end;

    Then mouse-right-click on your connection and select ‘Remote Debug.’ For an in-depth post on how to use the remote debugger, check out Barry’s excellent post on the subject. Open an unshared worksheet using Ctrl+Shift+N; this gives us a dedicated connection for our worksheet and any scripts or commands executed in it. Paste in the anonymous block you want to debug, and add a call to the dummy procedure above on the first line of your BEGIN block, like so:

        Begin
          do_nothing();
          ...

    Then we need to set up the machine for remote debug for the session we have listening; basically, we connect to SQL Developer. You can do that via an environment variable, or you can just add this line to your script:

        CALL DBMS_DEBUG_JDWP.CONNECT_TCP( 'localhost', '4000' );

    where ‘localhost’ is the machine where SQL Developer is running and ’4000′ is the port you started the debug listener on. OK, with that all set, now just RUN the script. Once the PL/SQL call is made, the debugger will be invoked and you’ll end up in the DO_NOTHING() object. Debugging an anonymous block from SQL Developer is possible! If you step out to the anonymous block, you’ll end up in the script that’s used to call the procedure, which is the script you want to debug (screenshot caption: The Anonymous Block is opened in a new SQL Dev page). You can now step through the block, using watches and breakpoints as expected. I’m guessing your scripts are going to be a bit more complicated than mine, but this serves as a decent example to get you started. Here’s a screenshot of a watch and breakpoint defined in the anonymous block being debugged (caption: Breakpoints, watches, and callstacks - oh my!). For giggles, I created a breakpoint with a pass count of 90 for the FOR loop to see if it works. And of course it does. You Might Also Enjoy: Using Pass Counts to Turbo Charge Your PL/SQL Breakpoints; SQL Developer Tip: Viewing REFCURSOR Output; The PL/SQL Debugger Strikes Back: Episode V; Debugging PL/SQL with SQL Developer: Episode IV; How to find dependent objects in your PL/SQL Programs using SQL Developer

    Read the article

  • Cumulative Update packages for SQL Server 2008 R2 RTM & SQL Server 2008 SP1

    - by ssqa.net
    Here is the news on the Cumulative Update releases for SQL Server 2008 R2 RTM and SQL Server 2008 Service Pack 1. First let us go through the SQL Server 2008 R2 RTM cumulative update: the release consists only of the hotfixes that were released in Cumulative Updates 5, 6, and 7 for SQL Server 2008 SP1. Cumulative Update 1 for SQL Server 2008 R2 RTM is only intended as a post-RTM rollup of Cumulative Updates 5-7 for SQL Server 2008 SP1 customers who plan to upgrade to SQL Server 2008 R2 and...(read more)

    Read the article

  • SQL Server Configuration timeouts - and a workaround [SSIS]

    - by jamiet
    Ever since I started writing SSIS packages back in 2004 I have opted to store configurations in .dtsConfig (i.e. XML) files rather than in a SQL Server table (aka SQL Server Configurations); however, recently I inherited some packages that used SQL Server Configurations and thus had to immerse myself in their murky little world. To all the people that have ever gone onto the SSIS forum and asked questions about the ambiguous behaviour of SQL Server Configurations, I now say this... I feel your pain! The biggest problem I have had was in dealing with the change to the order in which configurations get applied that came about in SSIS 2008. Those changes are detailed on MSDN at SSIS Package Configurations; however, the pertinent bits are: As the utility loads and runs the package, events occur in the following order:
    1. The dtexec utility loads the package.
    2. The utility applies the configurations that were specified in the package at design time, in the order specified in the package. (The one exception is Parent Package Variables configurations; the utility applies these configurations only once, later in the process.)
    3. The utility then applies any options that you specified on the command line.
    4. The utility then reloads the configurations that were specified in the package at design time, in the order specified in the package. (Again, the exception to this rule is Parent Package Variables configurations.) The utility uses any command-line options that were specified to reload the configurations; therefore, different values might be reloaded from a different location.
    5. The utility applies the Parent Package Variable configurations.
    6. The utility runs the package.
    To understand how these steps differ from SSIS 2005 I recommend reading Doug Laudenschlager’s blog post, Understand how SSIS package configurations are applied. The very nature of SQL Server Configurations means that the connection string for the database holding the configuration values needs to be supplied from the command line. Typically, then, the call to execute your package resembles this:

        dtexec /FILE Package.dtsx /SET "\Package.Connections[SSISConfigurations].Properties[ConnectionString]";"\"Data Source=SomeServer;Initial Catalog=SomeDB;Integrated Security=SSPI;\""

    The problem then is that, as per the steps above, the package will (1) attempt to apply all configurations using the connection string stored in the package for the "SSISConfigurations" connection manager, before then (2) applying the connection string from the command line and then (3) applying the same configurations all over again. In the packages that I inherited, that first attempt to apply the configurations would time out (not unexpected); I had 8 SQL Server Configurations in the package and thus the package was waiting for 2 minutes until all the configurations timed out (i.e. 15 seconds per configuration). In a package that only executes for ~8 seconds when it gets to do its actual work, a delay of 2 minutes was simply unacceptable. We had three options in how to deal with this: (1) get rid of the use of SQL Server Configurations and use .dtsConfig files instead; (2) edit the packages when they get deployed; (3) change the timeout on the "SSISConfigurations" connection manager. #1 was my preferred choice but, for reasons I explain below*, wasn't an option in this particular instance. #2 was discounted out of hand because it negates the point of using configurations in the first place. This left us with #3: change the timeout on the connection manager. This is done by going into the properties of the connection manager, opening the "All" tab, and changing the Connect Timeout property to some suitable value (in the screenshot below I chose 2 seconds). This change meant that the attempts to apply the SQL Server Configurations timed out in 16 seconds rather than two minutes; clearly this isn't an optimum solution, but it's certainly better than it was. So there you have it: if you are having problems with SQL Server Configuration timeouts within SSIS, try changing the timeout of the connection manager. Better still, don't bother using SQL Server Configurations in the first place. Even better, install RC0 of SQL Server 2012 to start leveraging SSIS parameters and leave the nasty old world of configurations behind you. @Jamiet * Basically, we are leveraging a SSIS execution/logging framework in which the client had invested a lot of resources, and SQL Server Configurations are an integral part of that.
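
    As an aside to the dtexec example above, a hedged sketch: the Connect Timeout keyword can also travel in the connection string supplied at the command line, so the value applied from step 3 onwards carries the shortened timeout too (the first pass still uses whatever is saved in the package, which is why the post changes the property on the connection manager itself):

        dtexec /FILE Package.dtsx /SET "\Package.Connections[SSISConfigurations].Properties[ConnectionString]";"\"Data Source=SomeServer;Initial Catalog=SomeDB;Integrated Security=SSPI;Connect Timeout=2;\""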

    Read the article

  • More information on the Patch Tuesday updates for SQL Server

    - by AaronBertrand
    Last week, Microsoft released a series of patches for all supported versions of SQL Server (from SQL Server 2005 SP3 all the way to SQL Server 2008 R2). The reason for the patch against SQL Server installations is largely a client-side issue with the XML viewer application, and for SQL Server specifically, the exploit is limited to potential information disclosure. A very easy way to avoid exposure to this exploit is simply to never open a file with the .disco extension (these files are likely already...(read more)

    Read the article
