Search Results

Search found 6276 results on 252 pages for 'join'.


  • Manipulating values from database table with php

    - by charliecodex23
    I currently have 5 tables in a MySQL database. Some of them share foreign keys and are interdependent. I am displaying classes according to their majors. Each class is taught during the fall, spring or all_year. In my database I have a table named semester which has id, year, and semester fields. The semester field in particular is a tinyint that has three values 0, 1, 2. This signifies fall, spring or all_year. When I display the query results, instead of having it show 0, 1 or 2, can I have it show Fall, Spring, etc.? Extra: How can I add space to the end of each loop so the data doesn't look cluttered? Key 0 Fall 1 Spring 2 All-year PHP <? try { $pdo = new PDO ("mysql:host=$hostname;dbname=$dbname","$username","$pw"); } catch (PDOException $e) { echo "Failed to get DB handle: " . $e->getMessage() . "\n"; exit; } $query = $pdo->prepare("SELECT course.name, course.code, course.description, course.hours, semester.semester, semester.year FROM course LEFT JOIN major_course_xref ON course.id = major_course_xref.course_id LEFT JOIN major ON major.id = major_course_xref.major_id LEFT JOIN course_semester_xref ON course.id = course_semester_xref.course_id LEFT JOIN semester ON course_semester_xref.semester_id = semester.id"); if ($query->execute()){ while ($row = $query->fetch(PDO::FETCH_ASSOC)){ print $row['name'] . "<br>"; print $row['code'] . "<br>"; print $row['description'] . "<br>"; print $row['hours'] . " hrs.<br>"; print $row['semester'] . "<br>"; print $row['year'] . "<br>"; } } else echo 'Could not fetch results.'; unset($pdo); unset($query); ?> Current Display Computer Programming I CPSC1400 Introduction to disciplined, object-oriented program development. 4 hrs. 0 2013 Desired Display Computer Programming I CPSC1400 Introduction to disciplined, object-oriented program development. 4 hrs. Fall 2013
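    A label like Fall/Spring/All-year can be produced either in SQL or in PHP. Below is a minimal sketch (not from the original thread) that maps the codes with a CASE expression, reusing the table and column names from the question; the 0/1/2 labels follow the key the asker posted.

      -- Hypothetical sketch: translate semester codes at query time.
      SELECT course.name, course.code, course.description, course.hours,
             CASE semester.semester
                  WHEN 0 THEN 'Fall'
                  WHEN 1 THEN 'Spring'
                  WHEN 2 THEN 'All-year'
                  ELSE 'Unknown'          -- fallback for unexpected codes
             END AS semester_label,
             semester.year
      FROM course
      LEFT JOIN course_semester_xref ON course.id = course_semester_xref.course_id
      LEFT JOIN semester ON course_semester_xref.semester_id = semester.id;

    For the spacing question, printing an extra "<br><br>" (or closing a wrapping <div> per record) at the end of each loop iteration keeps the records visually separated.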

    Read the article

  • Forcing the use of an index can improve performance?

    - by aF.
    Imagine that we have a query like this: select a.col1, b.col2 from t1 a inner join t2 b on a.col1 = b.col2 where a.col1 = 'abc' Neither col1 nor col2 has an index. If I add another restriction to the where clause, one that is always true but uses a column with an index: select a.col1, b.col2 from t1 a inner join t2 b on a.col1 = b.col2 where a.col1 = 'abc' and a.id >= 0 -- column always true and with index Might the query perform faster since it can use the index on the id column?
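    As a rough sketch of the usual alternative (table and column names assumed from the question): a redundant predicate on an indexed but unrelated column rarely helps, because every qualifying row still has to be matched on the unindexed columns; indexing the columns the query actually filters and joins on is the more reliable fix.

      -- Hypothetical sketch: index the filter/join columns instead of adding a redundant predicate.
      CREATE INDEX ix_t1_col1 ON t1 (col1);
      CREATE INDEX ix_t2_col2 ON t2 (col2);

      SELECT a.col1, b.col2
      FROM t1 a
      INNER JOIN t2 b ON a.col1 = b.col2
      WHERE a.col1 = 'abc';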

    Read the article

  • Postgres: Find table foreign keys (Faster alternative)

    - by Najera
    Is there a faster alternative to this? It takes almost 1 minute on our server. SELECT tc.constraint_name, tc.table_name, kcu.column_name, ccu.table_name AS foreign_table_name, ccu.column_name AS foreign_column_name FROM information_schema.table_constraints AS tc JOIN information_schema.key_column_usage AS kcu ON tc.constraint_name = kcu.constraint_name JOIN information_schema.constraint_column_usage AS ccu ON ccu.constraint_name = tc.constraint_name WHERE constraint_type = 'FOREIGN KEY' AND tc.table_name='mytable'; Maybe using pg_class metadata? Thanks.
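    One commonly used alternative (a sketch, not taken from the thread) is to read pg_constraint directly, since the information_schema views layer several catalog joins and permission checks on top of it. pg_get_constraintdef() returns the referencing and referenced column lists as text.

      -- Hypothetical sketch: list foreign keys of one table straight from the catalog.
      SELECT c.conname                    AS constraint_name,
             c.conrelid::regclass         AS table_name,
             c.confrelid::regclass        AS foreign_table_name,
             pg_get_constraintdef(c.oid)  AS definition
      FROM pg_constraint c
      WHERE c.contype = 'f'
        AND c.conrelid = 'mytable'::regclass;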

    Read the article

  • How to execute query with empty fields?

    - by Kliver Max
    I have a SQL query: SELECT documents.*, t_rights.rights, documents_list.docs FROM documents INNER JOIN t_rights on t_rights.num=documents.type_right INNER JOIN documents_list on documents_list.num=documents.document1 WHERE code_document=1 or code_document=1 When the documents.document1 field has a value, everything works fine. But if this field is empty I get an empty query result. Is it possible to make a query like this work with empty fields?
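    A sketch of the usual fix (assuming the table names from the question): an INNER JOIN drops any document whose document1 value has no match, so rows with an empty document1 disappear. Making that join a LEFT JOIN keeps them, with documents_list.docs returned as NULL.

      SELECT documents.*, t_rights.rights, documents_list.docs
      FROM documents
      INNER JOIN t_rights ON t_rights.num = documents.type_right
      LEFT JOIN documents_list ON documents_list.num = documents.document1
      WHERE code_document = 1;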

    Read the article

  • sql: how to use count for a particular column ,(or) to count a particular column by its field name

    - by Aravintha Bashyam
    In my query I need to count a particular column by its field value: SELECT C.INC_COUNT, MIN_X, MIN_Y, MAX_X, MAX_Y, B.STATE_ABBR, B.STATE_NAME, B.LATITUDE, B.LONGITUDE, A.STATE, GEO_ID, concat(A.LSAD_TRANS,' ' , A.NAME) DIST_NAME, A.LSAD, GeometryType(SHAPE) GEO_TYPE, AsText(SHAPE) GEOM from SHAPE_LAYERS A join SHAPE_LAYER_STATE_DESC B on ( A.state = B.state) left outer join INC_DIST_SUMMARY_ALL C on (C.SHAPE_GEO_ID = A.GEO_ID) Here I have to count by B.STATE_NAME and C.INC_COUNT. For example, if the field value is Nevada, I need the count of all Nevada rows along with C.INC_COUNT.
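    If the goal is one count per state, a GROUP BY over the same joins is the usual shape. The sketch below assumes the table and column names from the question and that INC_COUNT should be summed per state.

      SELECT B.STATE_NAME,
             COUNT(*)         AS state_row_count,
             SUM(C.INC_COUNT) AS total_inc_count
      FROM SHAPE_LAYERS A
      JOIN SHAPE_LAYER_STATE_DESC B ON A.state = B.state
      LEFT OUTER JOIN INC_DIST_SUMMARY_ALL C ON C.SHAPE_GEO_ID = A.GEO_ID
      GROUP BY B.STATE_NAME;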

    Read the article

  • SELECT Data from multiple tables?

    - by Kyle R
    I have 3 tables, each with the same 3 fields. I basically want to select information from each table. For example, for userid = 1 I want to select data from all 3 tables where userid = 1. I am currently using: SELECT r.*, p.*, l.* FROM random r LEFT JOIN pandom p ON r.userid = p.userid LEFT JOIN landom l ON l.userid = r.userid WHERE r.userid = '1' LIMIT 0, 30 But it doesn't seem to work.
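    Since the three tables share the same fields, a UNION ALL per table is often simpler than joining them side by side, and it also shows which table each row came from. A sketch using the names from the question (the trailing LIMIT applies to the combined result):

      SELECT 'random' AS source, r.* FROM random r WHERE r.userid = 1
      UNION ALL
      SELECT 'pandom' AS source, p.* FROM pandom p WHERE p.userid = 1
      UNION ALL
      SELECT 'landom' AS source, l.* FROM landom l WHERE l.userid = 1
      LIMIT 0, 30;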

    Read the article

  • MySql: make this query faster… is there a way ? PART TWO

    - by robert
    This is part two of the question: http://stackoverflow.com/questions/2913639/mysql-make-this-query-faster-theres-a-way This query still runs slowly: SELECT b.id, b.name, c.name FROM bookcorr as a JOIN books as b on b.id = a.books_id JOIN Library as c on c.id = a.library_id WHERE a.category_id = '2521' AND a.library_id = '4983' ORDER BY b.name ASC LIMIT 0,15 Any suggestions?
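    Beyond fixing the join syntax, the usual remedy for this shape of query is a composite index on the junction table covering the WHERE columns and the join key, so the matching bookcorr rows can be located without a full scan. A sketch with an assumed index name:

      CREATE INDEX ix_bookcorr_cat_lib_book ON bookcorr (category_id, library_id, books_id);

      SELECT b.id, b.name, c.name
      FROM bookcorr AS a
      JOIN books AS b ON b.id = a.books_id
      JOIN Library AS c ON c.id = a.library_id
      WHERE a.category_id = '2521' AND a.library_id = '4983'
      ORDER BY b.name ASC
      LIMIT 0, 15;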

    Read the article

  • What are JOINs in SQL (for)?

    - by sabwufer
    I have been using MySQL for 2 years now, yet I still don't know what you actually do with the JOIN statement. I have never really come across a situation where I was unable to solve a problem with the statements and syntax I already know (SELECT, INSERT, UPDATE, ordering, ...). What does JOIN do in MySQL? (Where) Do I need it? Should I generally avoid it?
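    A minimal illustration with two hypothetical tables: a JOIN returns columns from related rows of both tables in a single result set, instead of running one extra query per row.

      -- Orders together with the name of the customer who placed each one.
      SELECT o.id, o.order_date, c.name
      FROM orders o
      JOIN customers c ON c.id = o.customer_id
      WHERE o.total > 100;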

    Read the article

  • apache+mod_wsgi configuration for django project(s) on a quad core

    - by Stefano
    I've been experiment quite some time with a "typical" django setting upon nginx+apache2+mod_wsgi+memcached(+postgresql) (reading the doc and some questions on SO and SF, see comments) Since I'm still unsatisfied with the behavior (definitely because of some bad misconfiguration on my part) I would like to know what a good configuration would look like with these hypotesis: Quad-Core Xeon 2.8GHz 8 gigs memory several django projects (anything special related to this?) These are excerpts form my current confs: apache2 SetEnv VHOST null #WSGIPythonOptimize 2 <VirtualHost *:8082> ServerName subdomain.domain.com ServerAlias www.domain.com SetEnv VHOST subdomain.domain AddDefaultCharset UTF-8 ServerSignature Off LogFormat "%{X-Real-IP}i %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" custom ErrorLog /home/project1/var/logs/apache_error.log CustomLog /home/project1/var/logs/apache_access.log custom AllowEncodedSlashes On WSGIDaemonProcess subdomain.domain user=www-data group=www-data threads=25 WSGIScriptAlias / /home/project1/project/wsgi.py WSGIProcessGroup %{ENV:VHOST} </VirtualHost> wsgi.py import os import sys # setting all the right paths.... _realpath = os.path.realpath(os.path.dirname(__file__)) _public_html = os.path.normpath(os.path.join(_realpath, '../')) sys.path.append(_realpath) sys.path.append(os.path.normpath(os.path.join(_realpath, 'apps'))) sys.path.append(os.path.normpath(_public_html)) sys.path.append(os.path.normpath(os.path.join(_public_html, 'libs'))) sys.path.append(os.path.normpath(os.path.join(_public_html, 'django'))) os.environ['DJANGO_SETTINGS_MODULE'] = 'settings' import django.core.handlers.wsgi _application = django.core.handlers.wsgi.WSGIHandler() def application(environ, start_response): """ Launches django passing over some environment (domain name) settings """ application_group = environ['mod_wsgi.application_group'] """ wsgi application group is required. It's also used to generate the HOST.DOMAIN.TLD:PORT parameters to pass over """ assert application_group fields = application_group.replace('|', '').split(':') server_name = fields[0] os.environ['WSGI_APPLICATION_GROUP'] = application_group os.environ['WSGI_SERVER_NAME'] = server_name if len(fields) > 1 : os.environ['WSGI_PORT'] = fields[1] splitted = server_name.rsplit('.', 2) assert splitted >= 2 splited.reverse() if len(splitted) > 0 : os.environ['WSGI_TLD'] = splitted[0] if len(splitted) > 1 : os.environ['WSGI_DOMAIN'] = splitted[1] if len(splitted) > 2 : os.environ['WSGI_HOST'] = splitted[2] return _application(environ, start_response)` folder structure in case it matters (slightly shortened actually) /home/www-data/projectN/var/logs /project (contains manage.py, wsgi.py, settings.py) /project/apps (all the project ups are here) /django /libs Please forgive me in advance if I overlooked something obvious. My main question is about the apache2 wsgi settings. Are those fine? Is 25 threads an /ok/ number with a quad core for one only django project? Is it still ok with several django projects on different virtual hosts? Should I specify 'process'? Any other directive which I should add? Is there anything really bad in the wsgi.py file? I've been reading about potential issues with the standard wsgi.py file, should I switch to that? Or.. should this conf just be running fine, and I should look for issues somewhere else? So, what do I mean by "unsatisfied": well, I often get quite high CPU WAIT; but what is worse, is that relatively often apache2 gets stuck. 
It just does not answer anymore, and has to be restarted. I have setup a monit to take care of that, but it ain't a real solution. I have been wondering if it's an issue with the database access (postgresql) under heavy load, but even if it was, why would the apache2 processes get stuck? Beside these two issues, performance is overall great. I even tried New Relic and got very good average results.

    Read the article

  • Flow-Design Cheat Sheet – Part I, Notation

    - by Ralf Westphal
    You want to avoid the pitfalls of object oriented design? Then this is the right place to start. Use Flow-Oriented Analysis (FOA) and –Design (FOD or just FD for Flow-Design) to understand a problem domain and design a software solution. Flow-Orientation as described here is related to Flow-Based Programming, Event-Based Programming, Business Process Modelling, and even Event-Driven Architectures. But even though “thinking in flows” is not new, I found it helpful to deviate from those precursors for several reasons. Some aim at too big systems for the average programmer, some are concerned with only asynchronous processing, some are even not very much concerned with programming at all. What I was looking for was a design method to help in software projects of any size, be they large or tiny, involing synchronous or asynchronous processing, being local or distributed, running on the web or on the desktop or on a smartphone. That´s why I took ideas from all of the above sources and some additional and came up with Event-Based Components which later got repositioned and renamed to Flow-Design. In the meantime this has generated some discussion (in the German developer community) and several teams have started to work with Flow-Design. Also I´ve conducted quite some trainings using Flow-Orientation for design. The results are very promising. Developers find it much easier to design software using Flow-Orientation than OOAD-based object orientation. Since Flow-Orientation is moving fast and is not covered completely by a single source like a book, demand has increased for at least an overview of the current state of its notation. This page is trying to answer this demand by briefly introducing/describing every notational element as well as their translation into C# source code. Take this as a cheat sheet to put next to your whiteboard when designing software. However, please do not expect any explanation as to the reasons behind Flow-Design elements. Details on why Flow-Design at all and why in this specific way you´ll find in the literature covering the topic. Here´s a resource page on Flow-Design/Event-Based Components, if you´re able to read German. Notation Connected Functional Units The basic element of any FOD are functional units (FU): Think of FUs as some kind of software code block processing data. For the moment forget about classes, methods, “components”, assemblies or whatever. See a FU as an abstract piece of code. Software then consists of just collaborating FUs. I´m using circles/ellipses to draw FUs. But if you like, use rectangles. Whatever suites your whiteboard needs best.   The purpose of FUs is to process input and produce output. FUs are transformational. However, FUs are not called and do not call other FUs. There is no dependency between FUs. Data just flows into a FU (input) and out of it (output). From where and where to is of no concern to a FU.   This way FUs can be concatenated in arbitrary ways:   Each FU can accept input from many sources and produce output for many sinks:   Flows Connected FUs form a flow with a start and an end. Data is entering a flow at a source, and it´s leaving it through a sink. Think of sources and sinks as special FUs which conntect wires to the environment of a network of FUs.   Wiring Details Data is flowing into/out of FUs through wires. This is to allude to electrical engineering which since long has been working with composable parts. Wires are attached to FUs usings pins. 
They are the entry/exit points for the data flowing along the wires. Input-/output pins currently need not be drawn explicitly. This is to keep designing on a whiteboard simple and quick.   Data flowing is of some type, so wires have a type attached to them. And pins have names. If there is only one input pin and output pin on a FU, though, you don´t need to mention them. The default is Process for a single input pin, and Result for a single output pin. But you´re free to give even single pins different names.   There is a shortcut in use to address a certain pin on a destination FU:   The type of the wire is put in parantheses for two reasons. 1. This way a “no-type” wire can be easily denoted, 2. this is a natural way to describe tuples of data.   To describe how much data is flowing, a star can be put next to the wire type:   Nesting – Boards and Parts If more than 5 to 10 FUs need to be put in a flow a FD starts to become hard to understand. To keep diagrams clutter free they can be nested. You can turn any FU into a flow: This leads to Flow-Designs with different levels of abstraction. A in the above illustration is a high level functional unit, A.1 and A.2 are lower level functional units. One of the purposes of Flow-Design is to be able to describe systems on different levels of abstraction and thus make it easier to understand them. Humans use abstraction/decomposition to get a grip on complexity. Flow-Design strives to support this and make levels of abstraction first class citizens for programming. You can read the above illustration like this: Functional units A.1 and A.2 detail what A is supposed to do. The whole of A´s responsibility is decomposed into smaller responsibilities A.1 and A.2. FU A thus does not do anything itself anymore! All A is responsible for is actually accomplished by the collaboration between A.1 and A.2. Since A now is not doing anything anymore except containing A.1 and A.2 functional units are devided into two categories: boards and parts. Boards are just containing other functional units; their sole responsibility is to wire them up. A is a board. Boards thus depend on the functional units nested within them. This dependency is not of a functional nature, though. Boards are not dependent on services provided by nested functional units. They are just concerned with their interface to be able to plug them together. Parts are the workhorses of flows. They contain the real domain logic. They actually transform input into output. However, they do not depend on other functional units. Please note the usage of source and sink in boards. They correspond to input-pins and output-pins of the board.   Implicit Dependencies Nesting functional units leads to a dependency tree. Boards depend on nested functional units, they are the inner nodes of the tree. Parts are independent, they are the leafs: Even though dependencies are the bane of software development, Flow-Design does not usually draw these dependencies. They are implicitly created by visually nesting functional units. And they are harmless. Boards are so simple in their functionality, they are little affected by changes in functional units they are depending on. But functional units are implicitly dependent on more than nested functional units. They are also dependent on the data types of the wires attached to them: This is also natural and thus does not need to be made explicit. And it pertains mainly to parts being dependent. 
Since boards don´t do anything with regard to a problem domain, they don´t care much about data types. Their infrastructural purpose just needs types of input/output-pins to match.   Explicit Dependencies You could say, Flow-Orientation is about tackling complexity at its root cause: that´s dependencies. “Natural” dependencies are depicted naturally, i.e. implicitly. And whereever possible dependencies are not even created. Functional units don´t know their collaborators within a flow. This is core to Flow-Orientation. That makes for high composability of functional units. A part is as independent of other functional units as a motor is from the rest of the car. And a board is as dependend on nested functional units as a motor is on a spark plug or a crank shaft. With Flow-Design software development moves closer to how hardware is constructed. Implicit dependencies are not enough, though. Sometimes explicit dependencies make designs easier – as counterintuitive this might sound. So FD notation needs a ways to denote explicit dependencies: Data flows along wires. But data does not flow along dependency relations. Instead dependency relations represent service calls. Functional unit C is depending on/calling services on functional unit S. If you want to be more specific, name the services next to the dependency relation: Although you should try to stay clear of explicit dependencies, they are fundamentally ok. See them as a way to add another dimension to a flow. Usually the functionality of the independent FU (“Customer repository” above) is orthogonal to the domain of the flow it is referenced by. If you like emphasize this by using different shapes for dependent and independent FUs like above. Such dependencies can be used to link in resources like databases or shared in-memory state. FUs can not only produce output but also can have side effects. A common pattern for using such explizit dependencies is to hook a GUI into a flow as the source and/or the sink of data: Which can be shortened to: Treat FUs others depend on as boards (with a special non-FD API the dependent part is connected to), but do not embed them in a flow in the diagram they are depended upon.   Attributes of Functional Units Creation and usage of functional units can be modified with attributes. So far the following have shown to be helpful: Singleton: FUs are by default multitons. FUs in the same of different flows with the same name refer to the same functionality, but to different instances. Think of functional units as objects that get instanciated anew whereever they appear in a design. Sometimes though it´s helpful to reuse the same instance of a functional unit; this is always due to valuable state it holds. Signify this by annotating the FU with a “(S)”. Multiton: FUs on which others depend are singletons by default. This is, because they usually are introduced where shared state comes into play. If you want to change them to be a singletons mark them with a “(M)”. Configurable: Some parts need to be configured before the can do they work in a flow. Annotate them with a “(C)” to have them initialized before any data items to be processed by them arrive. Do not assume any order in which FUs are configured. How such configuration is happening is an implementation detail. Entry point: In each design there needs to be a single part where “it all starts”. That´s the entry point for all processing. It´s like Program.Main() in C# programs. Mark the entry point part with an “(E)”. 
Quite often this will be the GUI part. How the entry point is started is an implementation detail. Just consider it the first FU to start do its job.   Patterns / Standard Parts If more than a single wire is attached to an output-pin that´s called a split (or fork). The same data is flowing on all of the wires. Remember: Flow-Designs are synchronous by default. So a split does not mean data is processed in parallel afterwards. Processing still happens synchronously and thus one branch after another. Do not assume any specific order of the processing on the different branches after the split.   It is common to do a split and let only parts of the original data flow on through the branches. This effectively means a map is needed after a split. This map can be implicit or explicit.   Although FUs can have multiple input-pins it is preferrable in most cases to combine input data from different branches using an explicit join: The default output of a join is a tuple of its input values. The default behavior of a join is to output a value whenever a new input is received. However, to produce its first output a join needs an input for all its input-pins. Other join behaviors can be: reset all inputs after an output only produce output if data arrives on certain input-pins

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to Shrinking Database. I wrote about why Shrinking Database is not good. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server SQL SERVER – What the Business Says Is Not What the Business Wants I received many comments on Why Database Shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example. Create a test database Create two tables and populate with data Check the size of both the tables Size of database is very low Check the Fragmentation of one table Fragmentation will be very low Truncate another table Check the size of the table Check the fragmentation of the one table Fragmentation will be very low SHRINK Database Check the size of the table Check the fragmentation of the one table Fragmentation will be very HIGH REBUILD index on one table Check the size of the table Size of database is very HIGH Check the fragmentation of the one table Fragmentation will be very low Here is the script for the same. USE MASTER GO CREATE DATABASE ShrinkIsBed GO USE ShrinkIsBed GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Create FirstTable CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Create Clustered Index on ID CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ( [ID] ASC ) ON [PRIMARY] GO -- Create SecondTable CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Create Clustered Index on ID CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ( [ID] ASC ) ON [PRIMARY] GO -- Insert One Hundred Thousand Records INSERT INTO FirstTable (ID,FirstName,LastName,City) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Insert One Hundred Thousand Records INSERT INTO SecondTable (ID,FirstName,LastName,City) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and Fragmentation. 
USE MASTER GO CREATE DATABASE ShrinkIsBed GO USE ShrinkIsBed GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Create FirstTable CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Create Clustered Index on ID CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ( [ID] ASC ) ON [PRIMARY] GO -- Create SecondTable CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Create Clustered Index on ID CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ( [ID] ASC ) ON [PRIMARY] GO -- Insert One Hundred Thousand Records INSERT INTO FirstTable (ID,FirstName,LastName,City) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Insert One Hundred Thousand Records INSERT INTO SecondTable (ID,FirstName,LastName,City) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO You can clearly see that after TRUNCATE, the size of the database is not reduced and it is still the same as before TRUNCATE operation. After the Shrinking database operation, we were able to reduce the size of the database. If you notice the fragmentation, it is considerably high. The major problem with the Shrink operation is that it increases fragmentation of the database to very high value. Higher fragmentation reduces the performance of the database as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild index on the database. Let us rebuild the index and observe fragmentation and database size. -- Rebuild Index on FirstTable ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO You can notice that after rebuilding, Fragmentation reduces to a very low value (almost same to original value); however the database size increases way higher than the original. Before rebuilding, the size of the database was 5 MB, and after rebuilding, it is around 20 MB. Regular rebuilding the index is rebuild in the same user database where the index is placed. This usually increases the size of the database. Look at irony of the Shrinking database. 
One person shrinks the database to gain space (thinking it will help performance), which leads to increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds index, which leads to size of the database to increase way more than the original size of the database (before shrinking). Well, by Shrinking, one did not gain what he was looking for usually. Rebuild indexing is not the best suggestion as that will create database grow again. I have always remembered the excellent post from Paul Randal regarding Shrinking the database is bad. I suggest every one to read that for accuracy and interesting conversation. Let us run following script where we Shrink the database and REORGANIZE. -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO -- Shrink the Database DBCC SHRINKDATABASE (ShrinkIsBed); GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO -- Rebuild Index on FirstTable ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO You can see that REORGANIZE does not increase the size of the database or remove the fragmentation. Again, I no way suggest that REORGANIZE is the solution over here. This is purely observation using demo. Read the blog post of Paul Randal. Following script will clean up the database -- Clean up USE MASTER GO ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE GO DROP DATABASE ShrinkIsBed GO There are few valid cases of the Shrinking database as well, but that is not covered in this blog post. We will cover that area some other time in future. Additionally, one can rebuild index in the tempdb as well, and we will also talk about the same in future. Brent has written a good summary blog post as well. Are you Shrinking your database? Well, when are you going to stop Shrinking it? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL Monitor’s data repository

    - by Chris Lambrou
    As one of the developers of SQL Monitor, I often get requests passed on by our support people from customers who are looking to dip into SQL Monitor’s own data repository, in order to pull out bits of information that they’re interested in. Since there’s clearly interest out there in playing around directly with the data repository, I thought I’d write some blog posts to start to describe how it all works. The hardest part for me is knowing where to begin, since the schema of the data repository is pretty big. Hmmm… I guess it’s tricky for anyone to write anything but the most trivial of queries against the data repository without understanding the hierarchy of monitored objects, so perhaps my first post should start there. I always imagine that whenever a customer fires up SSMS and starts to explore their SQL Monitor data repository database, they become immediately bewildered by the schema – that was certainly my experience when I did so for the first time. The following query shows the number of different object types in the data repository schema: SELECT type_desc, COUNT(*) AS [count] FROM sys.objects GROUP BY type_desc ORDER BY type_desc;  type_desccount 1DEFAULT_CONSTRAINT63 2FOREIGN_KEY_CONSTRAINT181 3INTERNAL_TABLE3 4PRIMARY_KEY_CONSTRAINT190 5SERVICE_QUEUE3 6SQL_INLINE_TABLE_VALUED_FUNCTION381 7SQL_SCALAR_FUNCTION2 8SQL_STORED_PROCEDURE100 9SYSTEM_TABLE41 10UNIQUE_CONSTRAINT54 11USER_TABLE193 12VIEW124 With 193 tables, 124 views, 100 stored procedures and 381 table valued functions, that’s quite a hefty schema, and when you browse through it using SSMS, it can be a bit daunting at first. So, where to begin? Well, let’s narrow things down a bit and only look at the tables belonging to the data schema. That’s where all of the collected monitoring data is stored by SQL Monitor. The following query gives us the names of those tables: SELECT sch.name + '.' + obj.name AS [name] FROM sys.objects obj JOIN sys.schemas sch ON sch.schema_id = obj.schema_id WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data' ORDER BY sch.name, obj.name; This query still returns 110 tables. I won’t show them all here, but let’s have a look at the first few of them:  name 1data.Cluster_Keys 2data.Cluster_Machine_ClockSkew_UnstableSamples 3data.Cluster_Machine_Cluster_StableSamples 4data.Cluster_Machine_Keys 5data.Cluster_Machine_LogicalDisk_Capacity_StableSamples 6data.Cluster_Machine_LogicalDisk_Keys 7data.Cluster_Machine_LogicalDisk_Sightings 8data.Cluster_Machine_LogicalDisk_UnstableSamples 9data.Cluster_Machine_LogicalDisk_Volume_StableSamples 10data.Cluster_Machine_Memory_Capacity_StableSamples 11data.Cluster_Machine_Memory_UnstableSamples 12data.Cluster_Machine_Network_Capacity_StableSamples 13data.Cluster_Machine_Network_Keys 14data.Cluster_Machine_Network_Sightings 15data.Cluster_Machine_Network_UnstableSamples 16data.Cluster_Machine_OperatingSystem_StableSamples 17data.Cluster_Machine_Ping_UnstableSamples 18data.Cluster_Machine_Process_Instances 19data.Cluster_Machine_Process_Keys 20data.Cluster_Machine_Process_Owner_Instances 21data.Cluster_Machine_Process_Sightings 22data.Cluster_Machine_Process_UnstableSamples 23… There are two things I want to draw your attention to: The table names describe a hierarchy of the different types of object that are monitored by SQL Monitor (e.g. clusters, machines and disks). For each object type in the hierarchy, there are multiple tables, ending in the suffixes _Keys, _Sightings, _StableSamples and _UnstableSamples. 
Not every object type has a table for every suffix, but the _Keys suffix is especially important and a _Keys table does indeed exist for every object type. In fact, if we limit the query to return only those tables ending in _Keys, we reveal the full object hierarchy: SELECT sch.name + '.' + obj.name AS [name] FROM sys.objects obj JOIN sys.schemas sch ON sch.schema_id = obj.schema_id WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data' AND obj.name LIKE '%_Keys' ORDER BY sch.name, obj.name;  name 1data.Cluster_Keys 2data.Cluster_Machine_Keys 3data.Cluster_Machine_LogicalDisk_Keys 4data.Cluster_Machine_Network_Keys 5data.Cluster_Machine_Process_Keys 6data.Cluster_Machine_Services_Keys 7data.Cluster_ResourceGroup_Keys 8data.Cluster_ResourceGroup_Resource_Keys 9data.Cluster_SqlServer_Agent_Job_History_Keys 10data.Cluster_SqlServer_Agent_Job_Keys 11data.Cluster_SqlServer_Database_BackupType_Backup_Keys 12data.Cluster_SqlServer_Database_BackupType_Keys 13data.Cluster_SqlServer_Database_CustomMetric_Keys 14data.Cluster_SqlServer_Database_File_Keys 15data.Cluster_SqlServer_Database_Keys 16data.Cluster_SqlServer_Database_Table_Index_Keys 17data.Cluster_SqlServer_Database_Table_Keys 18data.Cluster_SqlServer_Error_Keys 19data.Cluster_SqlServer_Keys 20data.Cluster_SqlServer_Services_Keys 21data.Cluster_SqlServer_SqlProcess_Keys 22data.Cluster_SqlServer_TopQueries_Keys 23data.Cluster_SqlServer_Trace_Keys 24data.Group_Keys The full object type hierarchy looks like this: Cluster Machine LogicalDisk Network Process Services ResourceGroup Resource SqlServer Agent Job History Database BackupType Backup CustomMetric File Table Index Error Services SqlProcess TopQueries Trace Group Okay, but what about the individual objects themselves represented at each level in this hierarchy? Well that’s what the _Keys tables are for. This is probably best illustrated by way of a simple example – how can I query my own data repository to find the databases on my own PC for which monitoring data has been collected? Like this: SELECT clstr._Name AS cluster_name, srvr._Name AS instance_name, db._Name AS database_name FROM data.Cluster_SqlServer_Database_Keys db JOIN data.Cluster_SqlServer_Keys srvr ON db.ParentId = srvr.Id -- Note here how the parent of a Database is a Server JOIN data.Cluster_Keys clstr ON srvr.ParentId = clstr.Id -- Note here how the parent of a Server is a Cluster WHERE clstr._Name = 'dev-chrisl2' -- This is the hostname of my own PC ORDER BY clstr._Name, srvr._Name, db._Name;  cluster_nameinstance_namedatabase_name 1dev-chrisl2SqlMonitorData 2dev-chrisl2master 3dev-chrisl2model 4dev-chrisl2msdb 5dev-chrisl2mssqlsystemresource 6dev-chrisl2tempdb 7dev-chrisl2sql2005SqlMonitorData 8dev-chrisl2sql2005TestDatabase 9dev-chrisl2sql2005master 10dev-chrisl2sql2005model 11dev-chrisl2sql2005msdb 12dev-chrisl2sql2005mssqlsystemresource 13dev-chrisl2sql2005tempdb 14dev-chrisl2sql2008SqlMonitorData 15dev-chrisl2sql2008master 16dev-chrisl2sql2008model 17dev-chrisl2sql2008msdb 18dev-chrisl2sql2008mssqlsystemresource 19dev-chrisl2sql2008tempdb These results show that I have three SQL Server instances on my machine (a default instance, one named sql2005 and one named sql2008), and each instance has the usual set of system databases, along with a database named SqlMonitorData. Basically, this is where I test SQL Monitor on different versions of SQL Server, when I’m developing. There are a few important things we can learn from this query: Each _Keys table has a column named Id. This is the primary key. 
Each _Keys table has a column named ParentId. A foreign key relationship is defined between each _Keys table and its parent _Keys table in the hierarchy. There are two exceptions to this, Cluster_Keys and Group_Keys, because clusters and groups live at the root level of the object hierarchy. Each _Keys table has a column named _Name. This is used to uniquely identify objects in the table within the scope of the same shared parent object. Actually, that last item isn’t always true. In some cases, the _Name column is actually called something else. For example, the data.Cluster_Machine_Services_Keys table has a column named _ServiceName instead of _Name (sorry for the inconsistency). In other cases, a name isn’t sufficient to uniquely identify an object. For example, right now my PC has multiple processes running, all sharing the same name, Chrome (one for each tab open in my web-browser). In such cases, multiple columns are used to uniquely identify an object within the scope of the same shared parent object. Well, that’s it for now. I’ve given you enough information for you to explore the _Keys tables to see how objects are stored in your own data repositories. In a future post, I’ll try to explain how monitoring data is stored for each object, using the _StableSamples and _UnstableSamples tables. If you have any questions about this post, or suggestions for future posts, just submit them in the comments section below.
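    Extrapolating from the Id/ParentId/_Name convention described above, the same pattern walks one level further down the hierarchy; the sketch below (not from the original post) lists the monitored tables of a hypothetical database named 'AdventureWorks'.

      SELECT clstr._Name AS cluster_name,
             srvr._Name  AS instance_name,
             db._Name    AS database_name,
             tbl._Name   AS table_name
      FROM data.Cluster_SqlServer_Database_Table_Keys tbl
      JOIN data.Cluster_SqlServer_Database_Keys db    ON tbl.ParentId  = db.Id
      JOIN data.Cluster_SqlServer_Keys          srvr  ON db.ParentId   = srvr.Id
      JOIN data.Cluster_Keys                    clstr ON srvr.ParentId = clstr.Id
      WHERE db._Name = 'AdventureWorks'
      ORDER BY clstr._Name, srvr._Name, db._Name, tbl._Name;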

    Read the article

  • Working With Extended Events

    - by Fatherjack
    SQL Server 2012 has made working with Extended Events (XE) pretty simple when it comes to what sessions you have on your servers and what options you have selected and so forth but if you are like me then you still have some SQL Server instances that are 2008 or 2008 R2. For those servers there is no built-in way to view the Extended Event sessions in SSMS. I keep coming up against the same situations – Where are the xel log files? What events, actions or predicates are set for the events on the server? What sessions are there on the server already? I got tired of this being a perpetual question and wrote some TSQL to save as a snippet in SQL Prompt so that these details are permanently only a couple of clicks away. First, some history. If you just came here for the code skip down a few paragraphs and it’s all there. If you want a little time to reminisce about SQL Server then stick with me through the next paragraph or two. We are in a bit of a cross-over period currently, there are many versions of SQL Server but I would guess that SQL Server 2008, 2008 R2 and 2012 comprise the majority of installations. With each of these comes a set of management tools, of which SQL Server Management Studio (SSMS) is one. In 2008 and 2008 R2 Extended Events made their first appearance and there was no way to work with them in the SSMS interface. At some point the Extended Events guru Jonathan Kehayias (http://www.sqlskills.com/blogs/jonathan/) created the SQL Server 2008 Extended Events SSMS Addin which is really an excellent tool to ease XE session administration. This addin will install in SSMS 2008 or 2008R2 but not SSMS 2012. If you use a compatible version of SSMS then I wholly recommend downloading and using it to make your work with XE much easier. If you have SSMS 2012 installed, and there is no reason not to as it will let you work with all versions of SQL Server, then you cannot install this addin. If you are working with SQL Server 2012 then SSMS 2012 has built in functionality to manage XE sessions – this functionality does not apply for 2008 or 2008 R2 instances though. This means you are somewhat restricted and have to use TSQL to manage XE sessions on older versions of SQL Server. OK, those of you that skipped ahead for the code, you need to start from here: So, you are working with SSMS 2012 but have a SQL Server that is an earlier version that needs an XE session created or you think there is a session created but you aren’t sure, or you know it’s there but can’t remember if it is running and where the output is going. How do you find out? Well, none of the information is hidden as such but it is a bit of a wrangle to locate it and it isn’t a lot of code that is unlikely to remain in your memory. I have created two pieces of code. The first examines the SYS.Server_Event_… management views in combination with the SYS.DM_XE_… management views to give the name of all sessions that exist on the server, regardless of whether they are running or not and two pieces of TSQL code. One piece will alter the state of the session: if the session is running then the code will stop the session if executed and vice versa. The other piece of code will drop the selected session. If the session is running then the code will stop it first. Do not execute the DROP code unless you are sure you have the Create code to hand. It will be dropped from the server without a second chance to change your mind. 
/**************************************************************/ /***   To locate and describe event sessions on a server    ***/ /***                                                        ***/ /***   Generates TSQL to start/stop/drop sessions           ***/ /***                                                        ***/ /***        Jonathan Allen - @fatherjack                    ***/ /***                 June 2013                                ***/ /***                                                        ***/ /**************************************************************/ SELECT  [EES].[name] AS [Session Name - all sessions] ,         CASE WHEN [MXS].[name] IS NULL THEN ISNULL([MXS].[name], 'Stopped')              ELSE 'Running'         END AS SessionState ,         CASE WHEN [MXS].[name] IS NULL              THEN ISNULL([MXS].[name],                          'ALTER EVENT SESSION [' + [EES].[name]                          + '] ON SERVER STATE = START;')              ELSE 'ALTER EVENT SESSION [' + [EES].[name]                   + '] ON SERVER STATE = STOP;'         END AS ALTER_SessionState ,         CASE WHEN [MXS].[name] IS NULL              THEN ISNULL([MXS].[name],                          'DROP EVENT SESSION [' + [EES].[name]                          + '] ON SERVER; -- This WILL drop the session. It will no longer exist. Don't do it unless you are certain you can recreate it if you need it.')              ELSE 'ALTER EVENT SESSION [' + [EES].[name]                   + '] ON SERVER STATE = STOP; ' + CHAR(10)                   + '-- DROP EVENT SESSION [' + [EES].[name]                   + '] ON SERVER; -- This WILL stop and drop the session. It will no longer exist. Don't do it unless you are certain you can recreate it if you need it.'         END AS DROP_Session FROM    [sys].[server_event_sessions] AS EES         LEFT JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name] WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' ) ORDER BY SessionState GO I have excluded the system_health and AlwaysOn sessions as I don’t want to accidentally execute the drop script for these sessions that are created as part of the SQL Server installation. It is possible to recreate the sessions but that is a whole lot of aggravation I’d rather avoid. The second piece of code gathers details of running XE sessions only and provides information on the Events being collected, any predicates that are set on those events, the actions that are set to be collected, where the collected information is being logged and if that logging is to a file target, where that file is located. 
/**********************************************/ /***    Running Session summary                ***/ /***                                        ***/ /***    Details key values of XE sessions     ***/ /***    that are in a running state            ***/ /***                                        ***/ /***        Jonathan Allen - @fatherjack    ***/ /***        June 2013                        ***/ /***                                        ***/ /**********************************************/ SELECT  [EES].[name] AS [Session Name - running sessions] ,         [EESE].[name] AS [Event Name] ,         COALESCE([EESE].[predicate], 'unfiltered') AS [Event Predicate Filter(s)] ,         [EESA].[Action] AS [Event Action(s)] ,         [EEST].[Target] AS [Session Target(s)] ,         ISNULL([EESF].[value], 'No file target in use') AS [File_Target_UNC] -- select * FROM    [sys].[server_event_sessions] AS EES         INNER JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]         INNER JOIN [sys].[server_event_session_events] AS [EESE] ON [EES].[event_session_id] = [EESE].[event_session_id]         LEFT JOIN [sys].[server_event_session_fields] AS EESF ON ( [EES].[event_session_id] = [EESF].[event_session_id]                                                               AND [EESF].[name] = 'filename'                                                               )         CROSS APPLY ( SELECT    STUFF(( SELECT  ', ' + sest.name                                         FROM    [sys].[server_event_session_targets]                                                 AS SEST                                         WHERE   [EES].[event_session_id] = [SEST].[event_session_id]                                       FOR                                         XML PATH('')                                       ), 1, 2, '') AS [Target]                     ) AS EEST         CROSS APPLY ( SELECT    STUFF(( SELECT  ', ' + [sesa].NAME                                         FROM    [sys].[server_event_session_actions]                                                 AS sesa                                         WHERE   [sesa].[event_session_id] = [EES].[event_session_id]                                       FOR                                         XML PATH('')                                       ), 1, 2, '') AS [Action]                     ) AS EESA WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' ) /*Optional to exclude 'out-of-the-box' traces*/ I hope that these scripts are useful to you and I would be obliged if you would keep my name in the script comments. I have no problem with you using it in production or personal circumstances, however it has no warranty or guarantee. Don’t use it unless you understand it and are happy with what it is going to do. I am not ever responsible for the consequences of executing this script on your servers.

    Read the article

  • Signals and threads - good or bad design decision?

    - by Jens
    I have to write a program that performs highly computationally intensive calculations. The program might run for several days. The calculation can be separated easily in different threads without the need of shared data. I want a GUI or a web service that informs me of the current status. My current design uses BOOST::signals2 and BOOST::thread. It compiles and so far works as expected. If a thread finished one iteration and new data is available it calls a signal which is connected to a slot in the GUI class. My question(s): Is this combination of signals and threads a wise idea? I another forum somebody advised someone else not to "go down this road". Are there potential deadly pitfalls nearby that I failed to see? Is my expectation realistic that it will be "easy" to use my GUI class to provide a web interface or a QT, a VTK or a whatever window? Is there a more clever alternative (like other boost libs) that I overlooked? following code compiles with g++ -Wall -o main -lboost_thread-mt <filename>.cpp code follows: #include <boost/signals2.hpp> #include <boost/thread.hpp> #include <boost/bind.hpp> #include <iostream> #include <iterator> #include <string> using std::cout; using std::cerr; using std::string; /** * Called when a CalcThread finished a new bunch of data. */ boost::signals2::signal<void(string)> signal_new_data; /** * The whole data will be stored here. */ class DataCollector { typedef boost::mutex::scoped_lock scoped_lock; boost::mutex mutex; public: /** * Called by CalcThreads call the to store their data. */ void push(const string &s, const string &caller_name) { scoped_lock lock(mutex); _data.push_back(s); signal_new_data(caller_name); } /** * Output everything collected so far to std::out. */ void out() { typedef std::vector<string>::const_iterator iter; for (iter i = _data.begin(); i != _data.end(); ++i) cout << " " << *i << "\n"; } private: std::vector<string> _data; }; /** * Several of those can calculate stuff. * No data sharing needed. */ struct CalcThread { CalcThread(string name, DataCollector &datcol) : _name(name), _datcol(datcol) { } /** * Expensive algorithms will be implemented here. * @param num_results how many data sets are to be calculated by this thread. */ void operator()(int num_results) { for (int i = 1; i <= num_results; ++i) { std::stringstream s; s << "["; if (i == num_results) s << "LAST "; s << "DATA " << i << " from thread " << _name << "]"; _datcol.push(s.str(), _name); } } private: string _name; DataCollector &_datcol; }; /** * Maybe some VTK or QT or both will be used someday. */ class GuiClass { public: GuiClass(DataCollector &datcol) : _datcol(datcol) { } /** * If the GUI wants to present or at least count the data collected so far. * @param caller_name is the name of the thread whose data is new. */ void slot_data_changed(string caller_name) const { cout << "GuiClass knows: new data from " << caller_name << std::endl; } private: DataCollector & _datcol; }; int main() { DataCollector datcol; GuiClass mc(datcol); signal_new_data.connect(boost::bind(&GuiClass::slot_data_changed, &mc, _1)); CalcThread r1("A", datcol), r2("B", datcol), r3("C", datcol), r4("D", datcol), r5("E", datcol); boost::thread t1(r1, 3); boost::thread t2(r2, 1); boost::thread t3(r3, 2); boost::thread t4(r4, 2); boost::thread t5(r5, 3); t1.join(); t2.join(); t3.join(); t4.join(); t5.join(); datcol.out(); cout << "\nDone" << std::endl; return 0; }

    Read the article

  • T-SQL Improvements And Data Types in ms sql 2008

    - by Aamir Hasan
     Microsoft SQL Server 2008 is a new version released in the first half of 2008 introducing new properties and capabilities to SQL Server product family. All these new and enhanced capabilities can be defined as the classic words like secure, reliable, scalable and manageable. SQL Server 2008 is secure. It is reliable. SQL2008 is scalable and is more manageable when compared to previous releases. Now we will have a look at the features that are making MS SQL Server 2008 more secure, more reliable, more scalable, etc. in details.Microsoft SQL Server 2008 provides T-SQL enhancements that improve performance and reliability. Itzik discusses composable DML, the ability to declare and initialize variables in the same statement, compound assignment operators, and more reliable object dependency information. Table-Valued ParametersInserts into structures with 1-N cardinality problematicOne order -> N order line items"N" is variable and can be largeDon't want to force a new order for every 20 line itemsOne database round-trip / line item slows things downNo ARRAY data type in SQL ServerXML composition/decomposition used as an alternativeTable-valued parameters solve this problemTable-Valued ParametersSQL Server has table variablesDECLARE @t TABLE (id int);SQL Server 2008 adds strongly typed table variablesCREATE TYPE mytab AS TABLE (id int);DECLARE @t mytab;Parameters must use strongly typed table variables Table Variables are Input OnlyDeclare and initialize TABLE variable  DECLARE @t mytab;  INSERT @t VALUES (1), (2), (3);  EXEC myproc @t;Procedure must declare variable READONLY  CREATE PROCEDURE usetable (    @t mytab READONLY ...)  AS    INSERT INTO lineitems SELECT * FROM @t;    UPDATE @t SET... -- no!T-SQL Syntax EnhancementsSingle statement declare and initialize  DECLARE @iint = 4;Compound Assignment Operators  SET @i += 1;Row constructors  DECLARE @t TABLE (id int, name varchar(20));  INSERT INTO @t VALUES    (1, 'Fred'), (2, 'Jim'), (3, 'Sue');Grouping SetsGrouping Sets allow multiple GROUP BY clauses in a single SQL statementMultiple, arbitrary, sets of subtotalsSingle read pass for performanceNested subtotals provide ever better performanceGrouping Sets are an ANSI-standardCOMPUTE BY is deprecatedGROUPING SETS, ROLLUP, CUBESQL Server 2008 - ANSI-syntax ROLLUP and CUBEPre-2008 non-ANSI syntax is deprecatedWITH ROLLUP produces n+1 different groupings of datawhere n is the number of columns in GROUP BYWITH CUBE produces 2^n different groupingswhere n is the number of columns in GROUP BYGROUPING SETS provide a "halfway measure"Just the number of different groupings you needGrouping Sets are visible in query planGROUPING_ID and GROUPINGGrouping Sets can produce non-homogeneous setsGrouping set includes NULL values for group membersNeed to distinguish by grouping and NULL valuesGROUPING (column expression) returns 0 or 1Is this a group based on column expr. 
or NULL value?GROUPING_ID (a,b,c) is a bitmaskGROUPING_ID bits are set based on column expressions a, b, and cMERGE StatementMultiple set operations in a single SQL statementUses multiple sets as inputMERGE target USING source ON ...Operations can be INSERT, UPDATE, DELETEOperations based onWHEN MATCHEDWHEN NOT MATCHED [BY TARGET] WHEN NOT MATCHED [BY SOURCE]More on MERGEMERGE statement can reference a $action columnUsed when MERGE used with OUTPUT clauseMultiple WHEN clauses possible For MATCHED and NOT MATCHED BY SOURCEOnly one WHEN clause for NOT MATCHED BY TARGETMERGE can be used with any table sourceA MERGE statement causes triggers to be fired onceRows affected includes total rows affected by all clausesMERGE PerformanceMERGE statement is transactionalNo explicit transaction requiredOne Pass Through TablesAt most a full outer joinMatching rows = when matchedLeft-outer join rows = when not matched by targetRight-outer join rows = when not matched by sourceMERGE and DeterminismUPDATE using a JOIN is non-deterministicIf more than one row in source matches ON clause, either/any row can be used for the UPDATEMERGE is deterministicIf more than one row in source matches ON clause, its an errorKeeping Track of DependenciesNew dependency views replace sp_dependsViews are kept in sync as changes occursys.dm_sql_referenced_entitiesLists all named entities that an object referencesExample: which objects does this stored procedure use?sys.dm_sql_referencing_entities 
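    Since MERGE is only outlined above, here is a small hedged upsert sketch with hypothetical table names, showing the three WHEN clauses and the $action column in the OUTPUT clause:

      MERGE dbo.Customers AS target
      USING dbo.StagedCustomers AS source
          ON target.CustomerID = source.CustomerID
      WHEN MATCHED THEN
          UPDATE SET target.Name = source.Name
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (CustomerID, Name) VALUES (source.CustomerID, source.Name)
      WHEN NOT MATCHED BY SOURCE THEN
          DELETE
      OUTPUT $action, inserted.CustomerID, deleted.CustomerID;  -- MERGE must end with a semicolon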

    Read the article

  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
     OK, so that is quite a contradictory title, but unfortunately it is true that a common misconception is that the query with the highest percentage relative to the batch is the worst performing. Simply put, it is a lie, or more accurately, we don't understand what these figures mean. Consider the two simple queries below: SELECT * FROM Person.BusinessEntity JOIN Person.BusinessEntityAddress ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID go SELECT * FROM Sales.SalesOrderDetail JOIN Sales.SalesOrderHeader ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID After executing these and looking at the plans, I see this: a 13% / 87% split, but 13% / 87% of WHAT? CPU? Duration? Reads? Writes? Or some magical weighted algorithm? In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out, but what about reads (210 and 1935)? To save you doing the maths, though you are more than welcome to, that's a 90.2% / 9.8% split. Close, but no cigar. Let's try a different tack. Looking at the execution plan, the "Estimated Subtree Cost" of query 1 is 0.29449 and of query 2 it is 1.96596. Again, to save you the maths, that works out to 13.03% and 86.97%; round those and those are the figures we are after. But what is the worrying word there? "Estimated". So these are not "actual" execution costs, but what is the problem in comparing the estimated costs to decide which is "most costly"? Well, in the case of simple queries such as the above, probably not a lot. In more complicated queries, a fair bit. By modifying the second query to also show the total number of lines on each order SELECT *,COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID) FROM Sales.SalesOrderDetail JOIN Sales.SalesOrderHeader ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID the split in percentages is now 6% / 94%, and the Profiler metrics show even more of a discrepancy. Estimates can differ from actuals for a whole host of reasons; scalar UDFs are a particular bugbear of mine, and in fact the cost of a UDF call is entirely hidden inside the execution plan. It always estimates to 0 (well, a very small number). Take for instance the following UDF: Create Function dbo.udfSumSalesForCustomer(@CustomerId integer) returns money as begin Declare @Sum money Select @Sum= SUM(SalesOrderHeader.TotalDue) from Sales.SalesOrderHeader where CustomerID = @CustomerId return @Sum end If we have two statements, one that fires the UDF and another that doesn't: Select CustomerID from Sales.Customer order by CustomerID go Select CustomerID,dbo.udfSumSalesForCustomer(Customer.CustomerID) from Sales.Customer order by CustomerID the cost relative to the batch is a 50/50 split, but there has to be an actual cost to firing the UDF. Indeed, Profiler shows us: nowhere even remotely near 50/50! Moving forward to the window framing functionality in SQL Server 2012, the optimizer sees ROWS and RANGE (see here for their functional differences) as the same 'cost' too: SELECT SalesOrderDetailID,SalesOrderId, SUM(LineTotal) OVER(PARTITION BY salesorderid ORDER BY Salesorderdetailid RANGE unbounded preceding) from Sales.SalesOrderdetail go SELECT SalesOrderDetailID,SalesOrderId, SUM(LineTotal) OVER(PARTITION BY salesorderid ORDER BY Salesorderdetailid Rows unbounded preceding) from Sales.SalesOrderdetail By now it won't be a great surprise that the Profiler trace reads a *tiny* bit differently.
     So, the moral of the story: percentage relative to batch can give a rough 'finger in the air' measurement, but don't rely on it as fact.
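     A lightweight way to capture the actual figures the trace compares (a sketch, not part of the original post) is to switch on session statistics before running the two batches:
     -- Capture actual reads and CPU/elapsed time per statement,
     -- instead of trusting the estimated "cost relative to batch".
     SET STATISTICS IO ON;
     SET STATISTICS TIME ON;
     SELECT * FROM Person.BusinessEntity
     JOIN Person.BusinessEntityAddress
       ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID;
     SELECT * FROM Sales.SalesOrderDetail
     JOIN Sales.SalesOrderHeader
       ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID;
     SET STATISTICS TIME OFF;
     SET STATISTICS IO OFF;
     -- The "logical reads" and "CPU time" figures in the Messages tab are actuals,
     -- so they can be compared directly, unlike the plan's estimated percentages.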

    Read the article

  • SQL SERVER – Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
     SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) and a previous row (for LAG) in the same result set without the use of a self-join. It is difficult to explain this in words alone, so I will use a small example to explain these functions. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use it for experiments. Let us run the following query. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, LEAD(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID ) LeadValue, LAG(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID ) LagValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query will give us the following result. When we look at the resultset it is very clear that the LEAD function gives us the value from the row that follows and the LAG function gives us the value from the row that precedes. If we had to generate the same result without using these functions we would have to use a self-join; in a future blog post we will see the same. Let us explore these functions a bit more. They not only provide the previous or next row but can also access any row before or after using an offset. Let us run the following query, where the LEAD and LAG functions access the row with an offset of 2. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, LEAD(SalesOrderDetailID,2) OVER (ORDER BY SalesOrderDetailID ) LeadValue, LAG(SalesOrderDetailID,2) OVER (ORDER BY SalesOrderDetailID ) LagValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query will give us the following result. You can see the LEAD and LAG functions now use an interval of two rows when returning results. Because of that offset, the last two rows for the LEAD function and the first two rows for the LAG function will return a NULL value. You can easily replace this NULL value with any other default value by passing a third parameter to the LEAD and LAG functions. Let us run the following query. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, LEAD(SalesOrderDetailID,2,0) OVER (ORDER BY SalesOrderDetailID ) LeadValue, LAG(SalesOrderDetailID,2,0) OVER (ORDER BY SalesOrderDetailID ) LagValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query will give us the following result, where the NULLs are now replaced with the value 0. Just like any other analytic function, we can easily partition these functions as well. Let us see the use of PARTITION BY with them. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, LEAD(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ) LeadValue, LAG(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ) LagValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query will give us the following result, where the data is now partitioned by SalesOrderID and the LEAD and LAG functions return the appropriate result within each window. As there are now smaller partitions in my query, you will see a higher presence of NULLs.
     In a future blog post we will see how these functions compare to a SELF JOIN. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
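     As a hedged illustration of the self-join alternative the post refers to (a sketch of one pre-2012 approach, not taken from the original), the first query above could be rewritten with ROW_NUMBER() and two self-joins:
     -- One possible pre-2012 equivalent of LEAD/LAG with offset 1; a sketch, not the author's version.
     USE AdventureWorks
     GO
     WITH Numbered AS
     (
         SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
                ROW_NUMBER() OVER (ORDER BY s.SalesOrderDetailID) AS rn
         FROM Sales.SalesOrderDetail s
         WHERE s.SalesOrderID IN (43670, 43669, 43667, 43663)
     )
     SELECT cur.SalesOrderID, cur.SalesOrderDetailID, cur.OrderQty,
            nxt.SalesOrderDetailID AS LeadValue,   -- what LEAD() returns
            prv.SalesOrderDetailID AS LagValue     -- what LAG() returns
     FROM Numbered cur
     LEFT JOIN Numbered nxt ON nxt.rn = cur.rn + 1
     LEFT JOIN Numbered prv ON prv.rn = cur.rn - 1
     ORDER BY cur.SalesOrderID, cur.SalesOrderDetailID, cur.OrderQty;
     GO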

    Read the article

  • PASS Summit 2011 – Part IV

    - by Tara Kizer
     This is the final blog for my PASS Summit 2011 series.  Well okay, a mini-series, I guess. On the last day of the conference, I attended Keith Elmore's and Boris Baryshnikov's (both from Microsoft) “Introducing the Microsoft SQL Server Code Named “Denali” Performance Dashboard Reports”, Jeremiah Peschka’s (blog|twitter) “Rewrite your T-SQL for Great Good!”, and Kimberly Tripp’s (blog|twitter) “Isolated Disasters in VLDBs”. Keith and Boris talked about the lifecycle of a session, figuring out the running time and the waiting time.  They pointed out the transient nature of the reports.  You could be drilling into it to uncover a problem, but the session may have ended by the time you’ve drilled all of the way down.  Also, the reports are for troubleshooting live problems and not historical ones.  You can use Management Data Warehouse for historical troubleshooting.  The reports provide similar benefits to the Activity Monitor; however, Activity Monitor doesn’t provide context-sensitive drill-through. One thing I learned in Keith’s and Boris’ session was that the buffer cache hit ratio should really never be below 87% due to the read-ahead mechanism in SQL Server.  When a page is read, it will read the entire extent.  So for every page read, you get 7 more reads.  If you need any of those 7 extra pages, well they are already in cache.  I had a lot of fun in Jeremiah’s session about refactoring code plus I learned a lot.  His slides were visually presented in a fun way, which just made for a more upbeat presentation.  Jeremiah says that before you start refactoring, you should look at your system.  Investigate missing or too many indexes, out-of-date statistics, and other areas that could be leading to your code running slow.  He talked about code standards.  He suggested using common abbreviations for aliases instead of one-letter aliases.  I’m a big offender when it comes to one-letter aliases, but he makes a good point.  He said that join order does not matter to the optimizer, but it does matter to those who have to read your code.  Now let’s get into refactoring!
     - Eliminate useless things – useless/unneeded joins and columns.  If you don’t need it, get rid of it!
     - Instead of using DISTINCT/JOIN, replace with EXISTS
     - Simplify your conditions; use UNION or better yet UNION ALL instead of OR to avoid a scan and use indexes for each union query
     - Branching logic – instead of IF this, IF that, and on and on…use dynamic SQL (sp_executesql, please!) or use a parameterized query in the application
     - Correlated subqueries – YUCK! Replace with a join
     - Eliminate repeated patterns
     Last, but certainly not least, was Kimberly’s session.  Kimberly is my favorite speaker.  I attended her two-day pre-conference seminar at PASS Summit 2005 as well as a SQL Immersion Event last December.  Did I mention she’s my favorite speaker?  Okay, enough of that. Kimberly’s session was packed with demos.  I had seen some of it in the SQL Immersion Event, but it was very nice to get a refresher on these, especially since I’ve got a VLDB with some growing pains.  One key takeaway from her session is the idea to use a log shipping solution with a load delay, such as 6, 8, or 24 hours behind the primary.  In the case of say an accidentally dropped table in a VLDB, we could retrieve it from the secondary database rather than waiting an eternity for a restore to complete.  Kimberly let us know that in SQL Server 2012 (it finally has a name!), online rebuilds are supported even if there are LOB columns in your table.  
This will simplify custom code that intelligently figures out if an online rebuild is possible. There was actually one last time slot for sessions that day, but I had an airplane to catch and my kids to see!
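     As a hedged illustration of the DISTINCT/JOIN-to-EXISTS rewrite Jeremiah recommends (the table names are from AdventureWorks and the rewrite is a sketch, not taken from his slides):
     -- Before: DISTINCT is doing duplicate-removal work that the join caused.
     SELECT DISTINCT c.CustomerID
     FROM Sales.Customer c
     JOIN Sales.SalesOrderHeader soh ON soh.CustomerID = c.CustomerID;
     -- After: EXISTS only checks that at least one matching order exists,
     -- so no duplicates are produced and none need to be removed.
     SELECT c.CustomerID
     FROM Sales.Customer c
     WHERE EXISTS (SELECT 1
                   FROM Sales.SalesOrderHeader soh
                   WHERE soh.CustomerID = c.CustomerID);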

    Read the article

  • SQLAuthority News – Who I Am And How I Got Here – True Story as Blog Post

    - by pinaldave
     Here are a few of the sample questions I get every day: “Give me a shortcut to become a superstar.” “How do I become like you?” “Which book should I read so I know everything?” “Can you share your secret to being successful? I want to know it, but do not share it with others.” The generic answer I always give is to work hard and read good educational material or watch good online videos. One of the emails really caught my attention. It was from a friend and SQL Server expert, John Sansom (Blog | Twitter). He asked if I would like to share my story with the world about “Who I Am and How I Got Here”. I was very much intrigued by his suggestion. John is one guy I respect a lot. Every single topic he writes about, I read with dedication. I eagerly wait for his Weekly Summary of Best SQL Links. If you have not read them, you are missing out. Writing a guest post for him was like walking down memory lane. I remembered the time when I was beginning my career and was a bit overconfident and a bit naive. I had my share of mistakes when I started my career. As time passed by, I realized the truth. Well, we all make mistakes. Though I am proud that as soon as I knew my mistakes, I corrected them. I never acted on impulse or when I was angry. I think that alone has helped me analyze situations better and become a better human being. Over the course of time, I have lost my ego and it has been replaced by passion. I am much happier and more successful in my work. Quite often people ask me if I am always online and whether I have a family or not. Honestly, I am able to work hard because of my family. They support me and encourage me to enjoy what I do. They support everything I do, and personally, I do not miss a single occasion to join them in the daily fun. If there were a shortcut to success, I would want to know it. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn. There is not enough time to learn everything we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me on my journey to learn SQL Server – the more the merrier. I have written the story of my life as a guest post.  Read Here: A Journey to SQL Authority. Special thanks to John Sansom (Blog | Twitter) for giving me space to tell my story. Indeed, I am honored. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Generate a merge statement from table structure

    - by Nigel Rivett
     This code generates a merge statement joining on the natural key and checking all other columns to see if they have changed. The full version deals with type 2 processing and an audit trail but this version is useful. Just the insert or update part is handy too. Change the table at the top (spt_values in master in this version) and the join columns for the merge in @nk. The output generated is at the top and the code to run to generate it below. Output merge spt_values a using spt_values b on a.name = b.name and a.number = b.number and a.type = b.type when matched and (1=0 or (a.low <> b.low) or (a.low is null and b.low is not null) or (a.low is not null and b.low is null) or (a.high <> b.high) or (a.high is null and b.high is not null) or (a.high is not null and b.high is null) or (a.status <> b.status) or (a.status is null and b.status is not null) or (a.status is not null and b.status is null) ) then update set low = b.low , high = b.high , status = b.status when not matched by target then insert ( name , number , type , low , high , status ) values ( b.name , b.number , b.type , b.low , b.high , b.status ); Generator set nocount on declare @t varchar(128) = 'spt_values' declare @i int = 0 -- this is the natural key on the table used for the merge statement join declare @nk table (ColName varchar(128)) insert @nk select 'Number' insert @nk select 'Name' insert @nk select 'Type' declare @cols table (seq int, nkseq int, type int, colname varchar(128)) ;with cte as ( select ordinal_position, type = case when columnproperty(object_id(@t), COLUMN_NAME,'IsIdentity') = 1 then 3 when nk.ColName is not null then 1 else 0 end, COLUMN_NAME from information_schema.columns c left join @nk nk on c.column_name = nk.ColName where table_name = @t ) insert @cols (seq, nkseq, type, colname) select ordinal_position, row_number() over (partition by type order by ordinal_position) , type, COLUMN_NAME from cte declare @result table (i int, j int, k int, data varchar(500)) select @i = @i + 1 insert @result (i, data) select @i, 'merge ' + @t + ' a' select @i = @i + 1 insert @result (i, data) select @i, ' using cte b' select @i = @i + 1 insert @result (i, j, data) select @i, nkseq, ' ' + case when nkseq = 1 then 'on' else 'and' end + ' a.' + ColName + ' = b.' + ColName from @cols where type = 1 select @i = @i + 1 insert @result (i, data) select @i, ' when matched and (1=0' select @i = @i + 1 insert @result (i, j, k, data) select @i, seq, 1, ' or (a.' + ColName + ' <> b.' + ColName + ')' + ' or (a.' + ColName + ' is null and b.' + ColName + ' is not null)' + ' or (a.' + ColName + ' is not null and b.' + ColName + ' is null)' from @cols where type <> 1 select @i = @i + 1 insert @result (i, data) select @i, ' )' select @i = @i + 1 insert @result (i, data) select @i, ' then update set' select @i = @i + 1 insert @result (i, j, data) select @i, nkseq, ' ' + case when nkseq = 1 then ' ' else ', ' end + colname + ' = b.' 
     + colname from @cols where type = 0 select @i = @i + 1 insert @result (i, data) select @i, ' when not matched by target then insert' select @i = @i + 1 insert @result (i, data) select @i, ' (' select @i = @i + 1 insert @result (i, j, data) select @i, seq, ' ' + case when seq = 1 then ' ' else ', ' end + colname from @cols where type <> 3 select @i = @i + 1 insert @result (i, data) select @i, ' )' select @i = @i + 1 insert @result (i, data) select @i, ' values' select @i = @i + 1 insert @result (i, data) select @i, ' (' select @i = @i + 1 insert @result (i, j, data) select @i, seq, ' ' + case when seq = 1 then ' ' else ', ' end + 'b.' + colname from @cols where type <> 3 select @i = @i + 1 insert @result (i, data) select @i, ' );' select data from @result order by i,j,k,data

    Read the article

  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
     I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics.  We’ve been running sp_updatestats but apparently it wasn’t sampling enough of the table to get us what we needed.  I have a pretty limited window at night where I can hammer the disks while this runs.  The script below just calls UPDATE STATISTICS on all tables that “need” updating.  It defines need as any table whose statistics are older than the number of days you specify (30 by default).  It also has a throttle so it breaks out of the loop after a set amount of time (60 minutes).  That means it won’t start processing a new table after this time but it might take longer than this to finish what it’s doing.  It always processes the oldest statistics first so it will eventually get to all of them.  It defaults to sample 25% of the table.  I’m not sure that’s a good default but it works for now.  I’ve tested this in SQL Server 2005 and SQL Server 2008.  I liked the way Michelle parameterized her re-index script and I took the same approach.
     CREATE PROCEDURE dbo.UpdateStatistics (
         @timeLimit smallint = 60
         ,@debug bit = 0
         ,@executeSQL bit = 1
         ,@samplePercent tinyint = 25
         ,@printSQL bit = 1
         ,@minDays tinyint = 30
     )
     AS
     /*******************************************************************
      Copyright Bill Graziano 2010
     *******************************************************************/
     SET NOCOUNT ON;
     PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'
     IF OBJECT_ID('tempdb..#status') IS NOT NULL
         DROP TABLE #status;
     CREATE TABLE #status(
           databaseID INT
         , databaseName NVARCHAR(128)
         , objectID INT
         , page_count INT
         , schemaName NVARCHAR(128) NULL
         , objectName NVARCHAR(128) NULL
         , lastUpdateDate DATETIME
         , scanDate DATETIME
         CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED(databaseID, objectID)
     );
     DECLARE @SQL NVARCHAR(MAX);
     DECLARE @dbName nvarchar(128);
     DECLARE @databaseID INT;
     DECLARE @objectID INT;
     DECLARE @schemaName NVARCHAR(128);
     DECLARE @objectName NVARCHAR(128);
     DECLARE @lastUpdateDate DATETIME;
     DECLARE @startTime DATETIME;
     SELECT @startTime = GETDATE();
     DECLARE cDB CURSOR READ_ONLY FOR
         select [name] from master.sys.databases where database_id > 4
     OPEN cDB
     FETCH NEXT FROM cDB INTO @dbName
     WHILE (@@fetch_status <> -1)
     BEGIN
         IF (@@fetch_status <> -2)
         BEGIN
             SELECT @SQL = ' use ' + QUOTENAME(@dbName) + '
                 select DB_ID() as databaseID
                     , DB_NAME() as databaseName
                     , t.object_id
                     , sum(used_page_count) as page_count
                     , s.[name] as schemaName
                     , t.[name] AS objectName
                     , COALESCE(d.stats_date, ''1900-01-01'')
                     , GETDATE() as scanDate
                 from sys.dm_db_partition_stats ps
                 join sys.tables t on t.object_id = ps.object_id
                 join sys.schemas s on s.schema_id = t.schema_id
                 join ( SELECT object_id, MIN(stats_date) as stats_date
                        FROM ( select object_id, stats_date(object_id, stats_id) as stats_date
                               from sys.stats) as d
                        GROUP BY object_id ) as d ON d.object_id = t.object_id
                 where ps.row_count > 0
                 group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'') '
             SET ANSI_WARNINGS OFF;
             Insert #status
             EXEC ( @SQL);
             SET ANSI_WARNINGS ON;
         END
         FETCH NEXT FROM cDB INTO @dbName
     END
     CLOSE cDB
     DEALLOCATE cDB
     DECLARE cStats CURSOR READ_ONLY FOR
         SELECT databaseID
             , databaseName
             , objectID
             , schemaName
             , objectName
             , lastUpdateDate
         FROM #status
         WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays
         ORDER BY lastUpdateDate ASC, page_count desc, [objectName] ASC
     OPEN cStats
     FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
     WHILE (@@fetch_status <> -1)
     BEGIN
         IF (@@fetch_status <> -2)
         BEGIN
             IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit
             BEGIN
                 PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***';
                 GOTO __DONE;
             END
             SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dBName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@ObjectName)
                 + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;';
             IF @printSQL = 1
                 PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')'
             IF @executeSQL = 1
             BEGIN
                 EXEC (@SQL);
             END
         END
         FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
     END
     __DONE:
     CLOSE cStats
     DEALLOCATE cStats
     PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'
     GO
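     A quick usage sketch (the parameter values below are only examples, not the author's recommendations):
     -- Update statistics older than 14 days, sampling 50 percent,
     -- but stop starting new tables after 45 minutes.
     EXEC dbo.UpdateStatistics
          @timeLimit     = 45,
          @minDays       = 14,
          @samplePercent = 50,
          @printSQL      = 1,
          @executeSQL    = 1;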

    Read the article

  • Indexed view deadlocking

    - by Dave Ballantyne
     Deadlocks can be a really tricky thing to track down the root cause of.  There are lots of articles on the subject of tracking down deadlocks, but seldom do I find that in a production system the cause is as straightforward.  That being said, deadlocks are always caused by process A needing a resource that process B has locked while process B holds a resource that process A needs.  There may be a longer chain of processes involved, but that is the basic premise. Here is one such (much simplified) scenario whose cause was at first non-obvious: The system has two tables, Product and Stock.  The Product table holds the description and price of a product whilst Stock records the current stock level. USE tempdb GO CREATE TABLE Product ( ProductID INTEGER IDENTITY PRIMARY KEY, ProductName VARCHAR(255) NOT NULL, Price MONEY NOT NULL ) GO CREATE TABLE Stock ( ProductId INTEGER PRIMARY KEY, StockLevel INTEGER NOT NULL ) GO INSERT INTO Product SELECT TOP(1000) CAST(NEWID() AS VARCHAR(255)), ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER))%100 FROM sys.columns a CROSS JOIN sys.columns b GO INSERT INTO Stock SELECT ProductID,ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER))%100 FROM Product There is a single stored procedure of GetStock: Create Procedure GetStock as SELECT Product.ProductID,Product.ProductName FROM dbo.Product join dbo.Stock on Stock.ProductId = Product.ProductID where Stock.StockLevel <> 0 Analysis of the system showed that this procedure was causing a performance overhead and, as reads of this data were many times more frequent than writes, an indexed view was created to lower the overhead. CREATE VIEW vwActiveStock With schemabinding AS SELECT Product.ProductID,Product.ProductName FROM dbo.Product join dbo.Stock on Stock.ProductId = Product.ProductID where Stock.StockLevel <> 0 go CREATE UNIQUE CLUSTERED INDEX PKvwActiveStock on vwActiveStock(ProductID) This worked perfectly, performance was improved, the team was cheered to the rafters and there were beers all round.  Then, after a while, something else happened… The system updating the data changed.  The update pattern of both the Stock update and the Product update used to be: BEGIN TRAN UPDATE... COMMIT BEGIN TRAN UPDATE... COMMIT BEGIN TRAN UPDATE... COMMIT It changed to: BEGIN TRAN UPDATE... UPDATE... UPDATE... COMMIT Nothing that would raise an eyebrow in even the closest of code reviews.  But after this change we saw deadlocks occurring. You can reproduce this by opening two sessions. In session 1 begin transaction Update Product set ProductName ='Test' where ProductID = 998 Then in session 2 begin transaction Update Stock set Stocklevel = 5 where ProductID = 999 Update Stock set Stocklevel = 5 where ProductID = 998 Hop back to session 1 and… Update Product set ProductName ='Test' where ProductID = 999 Looking at the deadlock graphs we could see the contention was between two processes, one updating stock and the other updating product, but we knew that all the processes ever do to these tables is update them.  Period.  There are separate processes that handle the updates of stock and product, and never the twain shall meet; there is no reason why one should require data from the other.  Then it struck us: AH, the indexed view. Naturally, when you make an update to any table involved in an indexed view, the view has to be updated.  When this happens, the data in all the tables has to be read, and that explains our deadlocks.  The data from Stock is read when you update Product, and vice-versa. 
     The fix, once you understand the problem fully, is pretty simple: the apps did not guarantee the order in which data was updated.  Luckily it was a relatively simple fix to order the updates, and the deadlocks went away.  Note that there is still a *slight* risk of a deadlock occurring if both a stock update and a product update happen at *exactly* the same time.
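     A hedged sketch of what "ordering the updates" can look like in practice (not the author's actual fix, just one way to apply a consistent row order in both processes):
     -- Process that updates Product: rows always applied in ascending ProductID order.
     BEGIN TRAN;
         UPDATE Product SET ProductName = 'Test' WHERE ProductID = 998;
         UPDATE Product SET ProductName = 'Test' WHERE ProductID = 999;
     COMMIT;
     -- Process that updates Stock: the same ascending ProductID order.
     BEGIN TRAN;
         UPDATE Stock SET StockLevel = 5 WHERE ProductID = 998;
         UPDATE Stock SET StockLevel = 5 WHERE ProductID = 999;
     COMMIT;
     -- Because both sessions now touch the vwActiveStock rows in the same ProductID order,
     -- one session simply waits for the other instead of deadlocking.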

    Read the article

  • XNA Multiplayer Games and Networking

    - by JoshReuben
    ·        XNA communication must by default be lightweight – if you are syncing game state between players from the Game.Update method, you must minimize traffic. That game loop may be firing 60 times a second and player 5 needs to know if his tank has collided with any player 3 and the angle of that gun turret. There are no WCF ServiceContract / DataContract niceties here, but at the same time the XNA networking stack simplifies the details. The payload must be simplistic - just an ordered set of numbers that you would map to meaningful enum values upon deserialization.   Overview ·        XNA allows you to create and join multiplayer game sessions, to manage game state across clients, and to interact with the friends list ·        Dependency on Gamer Services - to receive notifications such as sign-in status changes and game invitations ·        two types of online multiplayer games: system link game sessions (LAN) and LIVE sessions (WAN). ·        Minimum dev requirements: 1 Xbox 360 console + Creators Club membership to test network code - run 1 instance of game on Xbox 360, and 1 on a Windows-based computer   Network Sessions ·        A network session is made up of players in a game + up to 8 arbitrary integer properties describing the session ·        create custom enums – (e.g. GameMode, SkillLevel) as keys in NetworkSessionProperties collection ·        Player state: lobby, in-play   Session Types ·        local session - for split-screen gaming - requires no network traffic. ·        system link session - connects multiple gaming machines over a local subnet. ·        Xbox LIVE multiplayer session - occurs on the Internet. Ranked or unranked   Session Updates ·        NetworkSession class Update method - must be called once per frame. ·        performs the following actions: o   Sends the network packets. o   Changes the session state. o   Raises the managed events for any significant state changes. o   Returns the incoming packet data. ·        synchronize the session à packet-received and state-change events à no threading issues   Session Config ·        Session host - gaming machine that creates the session. XNA handles host migration ·        NetworkSession properties: AllowJoinInProgress , AllowHostMigration ·        NetworkSession groups: AllGamers, LocalGamers, RemoteGamers   Subscribe to NetworkSession events ·        GamerJoined ·        GamerLeft ·        GameStarted ·        GameEnded – use to return to lobby ·        SessionEnded – use to return to title screen   Create a Session session = NetworkSession.Create(         NetworkSessionType.SystemLink,         maximumLocalPlayers,         maximumGamers,         privateGamerSlots,         sessionProperties );   Start a Session if (session.IsHost) {     if (session.IsEveryoneReady)     {        session.StartGame();        foreach (var gamer in SignedInGamer.SignedInGamers)        {             gamer.Presence.PresenceMode =                 GamerPresenceMode.InCombat;   Find a Network Session AvailableNetworkSessionCollection availableSessions = NetworkSession.Find(     NetworkSessionType.SystemLink,       maximumLocalPlayers,     networkSessionProperties); availableSessions.AllowJoinInProgress = true;   Join a Network Session NetworkSession session = NetworkSession.Join(     availableSessions[selectedSessionIndex]);   Sending Network Data var packetWriter = new PacketWriter(); foreach (LocalNetworkGamer gamer in session.LocalGamers) {     // Get the tank associated with this player.     
Tank myTank = gamer.Tag as Tank;     // Write the data.     packetWriter.Write(myTank.Position);     packetWriter.Write(myTank.TankRotation);     packetWriter.Write(myTank.TurretRotation);     packetWriter.Write(myTank.IsFiring);     packetWriter.Write(myTank.Health);       // Send it to everyone.     gamer.SendData(packetWriter, SendDataOptions.None);     }   Receiving Network Data foreach (LocalNetworkGamer gamer in session.LocalGamers) {     // Keep reading while packets are available.     while (gamer.IsDataAvailable)     {         NetworkGamer sender;          // Read a single packet.         gamer.ReceiveData(packetReader, out sender);          if (!sender.IsLocal)         {             // Get the tank associated with this packet.             Tank remoteTank = sender.Tag as Tank;              // Read the data and apply it to the tank.             remoteTank.Position = packetReader.ReadVector2();             …   End a Session if (session.AllGamers.Count == 1)         {             session.EndGame();             session.Update();         }   Performance •        Aim to minimize payload, reliable in order messages •        Send Data Options: o   Unreliable, out of order -(SendDataOptions.None) o   Unreliable, in order (SendDataOptions.InOrder) o   Reliable, out of order (SendDataOptions.Reliable) o   Reliable, in order (SendDataOptions.ReliableInOrder) o   Chat data (SendDataOptions.Chat) •        Simulate: NetworkSession.SimulatedLatency , NetworkSession.SimulatedPacketLoss •        Voice support – NetworkGamer properties: HasVoice ,IsTalking , IsMutedByLocalUser

    Read the article

  • Fetching Partition Information

    - by Mike Femenella
     For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition and the filegroup name on which each partition resided, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab the information required. The table I’m working on contains 8.8 billion rows. Finding the distinct partition keys from this table was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $partition function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn’t pretty, and it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table. Queries for distinct against 8.8 billion rows don’t go quickly. A few beers into a conversation with a friend and we ended up talking about work, which led to a conversation about the task described above. The solution was already built in SQL Server; it just needed to be pulled together. The first table I needed was sys.partition_range_values. This contains one row for each range boundary value for a partition function. In my case I have a partition function which uses dayid values. For example, July 4th would be represented as an int, 20130704. This table lists out all of the dayid values which were defined in the function. This eliminated the need to query my source table for distinct dayid values; everything I needed was already there for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values. I just created an “everything else” bucket in my SSIS package just in case I had any dayid values unaccounted for. Getting the number of rows for a partition is very easy. The sys.partitions table contains values for each partition; easy enough to achieve by querying for the object_id and an index_id value of 1 (the clustered index). The final piece of information was the filegroup name. There are two options available to get the filegroup name: sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups, but really it’s a matter of preference and data needs. In order to bridge between the sys.partitions table and either sys.data_spaces or sys.filegroups you need to get the container_id. This can be done by joining sys.allocation_units.container_id to sys.partitions.hobt_id. sys.allocation_units contains the field data_space_id, which then lets you join in either sys.data_spaces or sys.filegroups. The end result is the query below, which typically executes for me in under 1 second. I’ve included the joins to both sys.filegroups and sys.data_spaces, with one of them commented out. As I mentioned above, this shaves a good 10-15 minutes off of my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
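     The query itself did not survive in this excerpt; the following is a hedged reconstruction based purely on the catalog views and joins the post describes (the column list, the @TableName variable and the boundary alignment are assumptions, not the author's original query):
     -- Sketch: partition boundary values, row counts and filegroup names.
     DECLARE @TableName sysname = N'dbo.MyPartitionedTable';   -- hypothetical table name
     SELECT prv.value              AS PartitionBoundaryValue,  -- e.g. a dayid such as 20130704
            p.partition_number,
            p.rows                 AS RowsInPartition,
            fg.name                AS FileGroupName
     FROM sys.partitions p
     JOIN sys.allocation_units au  ON au.container_id = p.hobt_id
                                  AND au.type = 1               -- in-row data only
     JOIN sys.filegroups fg        ON fg.data_space_id = au.data_space_id
     -- JOIN sys.data_spaces ds    ON ds.data_space_id = au.data_space_id  -- alternative to sys.filegroups
     JOIN sys.indexes i            ON i.object_id = p.object_id AND i.index_id = p.index_id
     JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
     JOIN sys.partition_functions pf ON pf.function_id = ps.function_id
     LEFT JOIN sys.partition_range_values prv
            ON prv.function_id = pf.function_id
           AND prv.boundary_id = p.partition_number             -- rough alignment; depends on RANGE LEFT/RIGHT
     WHERE p.object_id = OBJECT_ID(@TableName)
       AND p.index_id  = 1;                                     -- clustered index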

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >