Search Results

Search found 19449 results on 778 pages for 'query builder'.


  • Does the query plan optimizer work well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQL Server 2005, I'm using table-valued functions as a convenient way to perform arbitrary aggregations on subsets of data from large tables (passing a date range or similar parameters). I use these inside larger queries as joined computations, and I'm wondering whether the query plan optimizer works well with them in every case, or whether I'm better off unnesting such computations in my larger queries. Does the query plan optimizer unnest table-valued functions when it makes sense? If it doesn't, what do you recommend to avoid the code duplication that manual unnesting would cause? If it does, how do you identify that from the execution plan? Code sample: create table dbo.customers ( [key] uniqueidentifier , constraint pk_dbo_customers primary key ([key]) ) go /* assume large amount of data */ create table dbo.point_of_sales ( [key] uniqueidentifier , customer_key uniqueidentifier , constraint pk_dbo_point_of_sales primary key ([key]) ) go create table dbo.product_ranges ( [key] uniqueidentifier , constraint pk_dbo_product_ranges primary key ([key]) ) go create table dbo.products ( [key] uniqueidentifier , product_range_key uniqueidentifier , release_date datetime , constraint pk_dbo_products primary key ([key]) , constraint fk_dbo_products_product_range_key foreign key (product_range_key) references dbo.product_ranges ([key]) ) go /* assume large amount of data */ create table dbo.sales_history ( [key] uniqueidentifier , product_key uniqueidentifier , point_of_sale_key uniqueidentifier , accounting_date datetime , amount money , quantity int , constraint pk_dbo_sales_history primary key ([key]) , constraint fk_dbo_sales_history_product_key foreign key (product_key) references dbo.products ([key]) , constraint fk_dbo_sales_history_point_of_sale_key foreign key (point_of_sale_key) references dbo.point_of_sales ([key]) ) go create function dbo.f_sales_history_..snip.._date_range ( @accountingdatelowerbound datetime, @accountingdateupperbound datetime ) returns table as return ( select pos.customer_key , sh.product_key , sum(sh.amount) amount , sum(sh.quantity) quantity from dbo.point_of_sales pos inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key] where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound group by pos.customer_key , sh.product_key ) go -- TODO: insert some data -- this is a table containing a selection of product ranges declare @selectedproductranges table([key] uniqueidentifier) -- this is a table containing a selection of customers declare @selectedcustomers table([key] uniqueidentifier) declare @low datetime , @up datetime -- TODO: set top query parameters select saleshistory.customer_key , saleshistory.product_key , saleshistory.amount , saleshistory.quantity from dbo.products p inner join @selectedproductranges productrangeselection on p.product_range_key = productrangeselection.[key] inner join @selectedcustomers customerselection on 1 = 1 inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory on saleshistory.product_key = p.[key] and saleshistory.customer_key = customerselection.[key] I hope the sample makes sense. Many thanks for your help!
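
    A quick way to check this yourself (a minimal sketch with an illustrative function name, not the elided one above): an inline table-valued function, meaning a single RETURNS TABLE ... AS RETURN (SELECT ...) as in the question, is expanded into the calling query's plan, so the actual plan shows the underlying tables rather than a table-valued function operator.

        -- Hypothetical inline TVF over the tables defined above; the name is illustrative.
        create function dbo.f_daily_totals
        (
            @accountingdatelowerbound datetime,
            @accountingdateupperbound datetime
        )
        returns table
        as return
        (
            select sh.product_key, sum(sh.amount) amount, sum(sh.quantity) quantity
            from dbo.sales_history sh
            where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound
            group by sh.product_key
        )
        go
        -- Run with the actual execution plan on (Ctrl+M in SSMS). If the function was
        -- inlined, the plan shows seeks/scans on dbo.sales_history instead of a separate
        -- table-valued function operator, which answers the "how do you identify it" part.
        select p.[key], t.amount
        from dbo.products p
        inner join dbo.f_daily_totals('20100101', '20100131') t
            on t.product_key = p.[key]
        go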

    Read the article

  • SQL SERVER – Detect Virtual Log Files (VLF) in LDF

    - by pinaldave
    In one of my recent training engagements, I was asked whether it is true that a large log file (LDF) contains multiple small log files. I found this question very interesting, as the answer is yes. Multiple small Virtual Log Files, commonly known as VLFs, together make up an LDF file. The VLFs are written sequentially, and as a result the LDF file is written sequentially as well. This leads to another discussion: in most cases one does not need more than one log file. In short, you can use the following DBCC command to find out how many Virtual Log Files, or VLFs, are present in your log file. DBCC LOGINFO You can compare the result of the above command to the output displayed in the following image. In the Status column, a value of 2 indicates an active VLF and a value of 0 indicates an inactive VLF. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
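
    A minimal script for counting VLFs (the column list matches the SQL Server 2005/2008 output; SQL Server 2012 and later add a RecoveryUnitId column at the front):

        DBCC LOGINFO;
        GO
        -- Capture the output to count total and active VLFs (Status = 2 means active, 0 means inactive).
        CREATE TABLE #vlf
        (
            FileId int, FileSize bigint, StartOffset bigint,
            FSeqNo int, [Status] int, Parity tinyint, CreateLSN numeric(25, 0)
        );
        INSERT INTO #vlf EXEC ('DBCC LOGINFO');
        SELECT COUNT(*) AS vlf_count,
               SUM(CASE WHEN [Status] = 2 THEN 1 ELSE 0 END) AS active_vlfs
        FROM #vlf;
        DROP TABLE #vlf;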

    Read the article

  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is. What is NewSQL? NewSQL stands for the new generation of scalable, high-performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database; it is a term for vendors who offer emerging data products with relational database properties (such as ACID transactions) along with high performance. Products from NewSQL vendors usually rely on in-memory data for speedy access and offer immediate scalability. The term NewSQL was coined by 451 Group analyst Matthew Aslett in this particular blog post. On the definition of NewSQL, Aslett writes: “NewSQL” is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as ‘ScalableSQL‘ to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term ‘NewSQL’ in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words, NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL languages. It combines the reliability of SQL with the speed and performance of NoSQL. Categories of NewSQL There are three major categories of NewSQL: New Architecture – In this framework each node owns a subset of the data, and queries are split into smaller queries that are sent to the nodes to process the data. E.g. NuoDB, Clustrix, VoltDB. MySQL Engines – Highly optimized storage engines for SQL with the interface of MySQL are examples of this category. E.g. InnoDB, Akiban. Transparent Sharding – These systems automatically split a database across multiple nodes. E.g. ScaleArc. Summary In simple words – NewSQL is a kind of database that follows relational database principles and provides scalability like NoSQL. Tomorrow In tomorrow’s blog post we will discuss the role of cloud computing in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Template Browser – A Very Important and Useful Feature of SSMS

    - by pinaldave
    Let me start today’s blog post with a direct question: how many of you have ever used Template Browser? Template Browser is a very important and useful feature of SQL Server Management Studio (SSMS). Every time I talk about SQL Server, someone asks why there is no step-by-step procedure included in SSMS for its features. Honestly, every time I get this question, the question I ask back is: how many of you have ever used Template Browser? Most of the time the answer is either no, or we have not heard of the feature. One person asked me back – have you ever written about it on your blog? I had not yet written about it. Frankly, there is not much to write about it; it is a pretty straightforward feature, like any other, and it is difficult to elaborate on. However, I will try to give a quick introduction to this feature. Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services. The template scripts contain parameters to help you customize the code. You can use the Replace Template Parameters dialog box to insert values into the script. Additionally, users can create new custom templates, organized in a folder structure. To open a template from Template Explorer, go to the View menu >> Template Explorer, or press CTRL+ALT+L. You will find a list of categories; click on any category and expand the folder structure. For our example, let us expand the Index folder. In this folder you will notice various T-SQL scripts. These scripts can be opened by double-clicking, or they can be dragged to the editor area and modified as needed. The sample template is then available in the query editor area with all the necessary parameter placeholders. You can replace the parameters by pressing CTRL+SHIFT+M, or by going to the Query menu >> Specify Values for Template Parameters. This shows the Specify Values for Template Parameters dialog box; accept each value or replace it with a new one. This will get your script ready to go. Check it one more time and change the script to fit your requirements. I personally use Template Explorer for two things. The first is obviously for templates, but the hidden and important one is for learning new features and T-SQL commands. There is so much to learn and so little time. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
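
    For illustration, a template script looks roughly like the sketch below (loosely modeled on the built-in index templates; the exact text varies by SSMS version). Parameters appear as <name, type, value> tokens, which the Specify Values for Template Parameters dialog (CTRL+SHIFT+M) replaces in place before the script can run:

        -- Create-index template sketch: replace the <parameter, type, default> tokens
        -- via Query menu >> Specify Values for Template Parameters, then execute.
        CREATE NONCLUSTERED INDEX <index_name, sysname, IX_sample>
        ON <schema_name, sysname, dbo>.<table_name, sysname, sample_table>
        (
            <column_name1, sysname, column1>
        );
        GO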

    Read the article

  • Did You Know? More online seminars!

    - by Kalen Delaney
    I am in Tucson again, having just recorded two more online workshops to be broadcast by SSWUG. We haven't set the dates yet, but we are thinking about offering a special package deal for the two of them. The topics really are related and I think they would work well together. They are both on aspects of Query Processing. The first was on how to interpret Query Plans and is an introduction to the topic. However, it only includes a discussion of how SQL Server actually processes your queries. For example,...(read more)

    Read the article

  • SQL SERVER – Retrieving Random Rows from Table Using NEWID()

    - by pinaldave
    I have previously written about how to get random rows from SQL Server: SQL SERVER – Generate A Single Random Number for Range of Rows of Any Table – Very interesting Question from Reader and SQL SERVER – Random Number Generator Script – SQL Query. However, I have not blogged about the following trick before. Let me share it here as well. You can retrieve random rows using the following methods. USE AdventureWorks2012 GO -- Method 1 SELECT TOP 100 * FROM Sales.SalesOrderDetail ORDER BY NEWID() GO -- Method 2 SELECT TOP 100 * FROM Sales.SalesOrderDetail ORDER BY CHECKSUM(NEWID()) GO You will notice that using NEWID() in the ORDER BY clause returns random rows in the result set. How many of you knew this trick? You can run the above script multiple times and it will return different random rows every single time. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
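
    A related note (a hedged sketch, assuming the same AdventureWorks2012 sample database): ORDER BY NEWID() has to generate a GUID for every row and sort the whole table, so on very large tables TABLESAMPLE can be a cheaper alternative, at the cost of page-level rather than row-level randomness.

        USE AdventureWorks2012
        GO
        -- Returns roughly 1 percent of the table's pages, so the row count varies run to run.
        SELECT *
        FROM Sales.SalesOrderDetail
        TABLESAMPLE (1 PERCENT);
        GO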

    Read the article

  • MySQL: ROLLBACK for multiple queries

    - by Raj
    Hi, I have more than three MySQL queries in a PHP script triggered by a scheduled task. If a query catches an error, the script throws an exception and rolls back that MySQL query. That works fine. However, if the first query works fine but the second one fails and throws an exception, it rolls back the second query but not the first. I am using begin_trans(), commit and rollback() for individual queries because sometimes I need to roll back one query and sometimes all queries. Is there any way to roll back either one query or all queries? Thanks in advance. UPDATE: I got it working. There was no problem with begin_trans(), commit and rollback(); the database connection config was different for one query than for the others – crazy code without any comments!!!
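
    For reference, the usual "all or nothing" pattern is a single transaction around every statement, with savepoints when only part of the work should be undone. A minimal sketch with hypothetical tables; the tables must use a transactional engine such as InnoDB:

        START TRANSACTION;
        UPDATE accounts SET balance = balance - 100 WHERE id = 1;   -- hypothetical tables/columns
        SAVEPOINT after_first_update;
        UPDATE accounts SET balance = balance + 100 WHERE id = 2;
        -- ROLLBACK TO SAVEPOINT after_first_update;  -- undoes only the second statement
        -- ROLLBACK;                                  -- undoes everything since START TRANSACTION
        COMMIT;                                       -- makes everything permanent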

    Read the article

  • Query notation for the Sitecore 'source' field in Template Builder

    - by M.R.
    I am trying to set the source field of a template using the query notation (or XPath - whichever works), but neither of them seems to work. My content tree is a multisite content tree: France --Page 1 ----Page1A -------Page1AA --Page 2 --Page 3 --METADATA ----Regions US --Page 1 ----Page1A -------Page1AA --Page 2 --Page 3 --METADATA ----Regions Each site has its own METADATA folder, and when adding a page under each of the main country nodes, I want the values to reflect whatever is in the METADATA of that site. I have two different fields for now - a droplink and a treelistex field. So I thought I could just get the parent item that is a country site, and get the metadata folder for that. When I put the following query in both fields, I get different results: query:./ancestor::*[@@templatename='CountryHome']/METADATA/Regions/* For the droplink field, I get only the first Region (one item). For the treelistex field, I get the entire content tree. I then tried to modify the query a little and took the 'query' notation out: ./ancestor::*[@@templatename='CountryHome']/METADATA/Regions/* If I go to the developer center/XPath builder and set the context node to any item underneath the main country site, it returns exactly what I need, but when I put this in the source, I get the entire content tree in both cases. Help!

    Read the article

  • Help with MySQL query: ordering a group of rows

    - by user156814
    I can explain it best by describing the query I have and what I need. I need to be able to get a group of items from the database, grouped by category, manufacturer, and year made. The groupings need to be sorted by the total number of items within the group. This part is done with the query below. Secondly, I need to be able to show an image of the most expensive item in the group, which is why I use MAX(items.current_price). I thought MAX() returned the ENTIRE row corresponding to the largest column value. I was wrong; MAX only gets the numeric value of the largest price, so the query doesn't work well for that. SELECT items.id, items.year, items.manufacturer, COUNT(items.id) AS total, MAX(items.current_price) AS price, items.gallery_url FROM ebay AS items WHERE items.primary_category_id = 213 AND items.year <> '' AND items.manufacturer <> '' AND items.bad_item <> 1 GROUP BY items.primary_category_id, items.manufacturer, items.year ORDER BY total DESC, price ASC LIMIT 10 If that doesn't explain it well, the results should be something like this: id 10548 year 1989 manufacturer bowman total 451 price 8500.00 (the price of the most expensive item in the group, not the price of item 10548) gallery_url http://ebay.xxxxx (the image of item 10548) A little help please. Thanks
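
    One common fix for this (a hedged sketch using the column names from the question): aggregate per group in a derived table, then join back to the detail rows on the group keys and the MAX price, so id and gallery_url come from the row that actually carries that price. Ties on price would return more than one row per group.

        SELECT i.id, grp.year, grp.manufacturer, grp.total, grp.price, i.gallery_url
        FROM (
            SELECT year, manufacturer,
                   COUNT(id)          AS total,
                   MAX(current_price) AS price
            FROM ebay
            WHERE primary_category_id = 213
              AND year <> '' AND manufacturer <> '' AND bad_item <> 1
            GROUP BY primary_category_id, manufacturer, year
        ) AS grp
        JOIN ebay AS i
          ON  i.primary_category_id = 213
          AND i.manufacturer  = grp.manufacturer
          AND i.year          = grp.year
          AND i.current_price = grp.price
          AND i.bad_item     <> 1
        ORDER BY grp.total DESC, grp.price ASC
        LIMIT 10;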

    Read the article

  • How to differentiate between the same field names of two tables in a SELECT query?

    - by developer
    I have more than two tables in my database and all of them contain the same field names - for example, table A, table B and table C each have field1, field2, field3, and so on. I have to write a SELECT query which gets almost all of the same fields from these 3 tables. I am using something like this: select a.field1, a.field2, a.field3, b.field1, b.field2, b.field3, c.field1, c.field2, c.field3 from table A as a, table B as b, table C as c where so and so. But when I print field1's value it gives me the last table's value. How can I get all the values from the three tables when they share the same field names? Do I have to write an individual query for every table, or is there any way of fetching them all in a single query?
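
    The usual answer is column aliases: one query, with each column given a distinct name so the client (PHP, for example) no longer overwrites one field1 with another. A minimal sketch, with illustrative join conditions:

        SELECT a.field1 AS a_field1, a.field2 AS a_field2, a.field3 AS a_field3,
               b.field1 AS b_field1, b.field2 AS b_field2, b.field3 AS b_field3,
               c.field1 AS c_field1, c.field2 AS c_field2, c.field3 AS c_field3
        FROM tableA AS a
        JOIN tableB AS b ON b.field1 = a.field1   -- illustrative join conditions;
        JOIN tableC AS c ON c.field1 = a.field1;  -- replace with your real keys and WHERE clause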

    Read the article

  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following: SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date, INTO [#Gadget] FROM task t SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client FROM [#Gadget] order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END , Date ASC DROP TABLE [#Gadget] (I have removed the complex subquery, as I don't think it's relevant other than to explain why this query has been done as a two-stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using subqueries as: SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) FROM ( SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date, FROM task t ) as sub order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END , Date ASC This would give the optimiser better information to work out what was going on and avoid any temporary tables. It should be faster. But it turns out it is a lot slower: 8 seconds vs under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries should always be faster than using temporary tables. Can anyone explain what could be going on?
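
    One way to investigate rather than guess (a hedged suggestion, not a definitive answer): materializing into #Gadget gives the second statement an exact row count and statistics on the temp table, while the single-statement version has to estimate the derived table's cardinality before applying TOP/ORDER BY, and the scalar UDF plus the complex subquery tend to dominate either way. Comparing both shapes with runtime statistics on usually shows where the time goes.

        SET STATISTICS IO ON;    -- logical reads per table
        SET STATISTICS TIME ON;  -- CPU and elapsed time
        GO
        -- Run the two-step (#Gadget) version here, then the single-statement version,
        -- and compare logical reads, CPU time and the actual plans (in particular the
        -- estimated vs. actual row counts on the derived table / temp table scan).
        GO
        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;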

    Read the article

  • How to remove/hide only the <div></div> tags, without the content?

    - by candies
    For example, I have: <div id ="test">[the content here]</div> The content within the div tags appears after I call the div's id using Ajax. This is the code: function dinamic(add) { var kode = add.value; if (!kode) return; xmlhttp2.open('get', '../template/get_id.php?kode='+kode, true); xmlhttp2.onreadystatechange = function() { if ((xmlhttp2.readyState == 4) && (xmlhttp2.status == 200)) { var add = document.getElementById("test"); add.innerHTML = xmlhttp2.responseText; } return false; } xmlhttp2.send(null); } So it appears as <div id="test">A</div>. I'd like to put the content of the div - A - into a MySQL query. $test = $_GET['test']; $query = "select * from example where category='$test'"; I've tried to use the variable $test from the div id to get the content, but the query returns nothing for that category. I tried again and put the div itself into the query: $query = "select * from example where category='<div id=\"test\">A</div>'"; Yes, that works. But when I run the query in Navicat, I get no results because there are spaces around the A, that is, where <div> and </div> were. How do I remove/hide only the div tags so that only the content appears? > $query = "select * from example where category='A'"; < Edit: If I echo the query in the Firefox browser, it shows "$query = "select * from example where category='[space]A[space]'";", and looking at the bug (I use Firebug) it shows "$query = "select * from example where category='<div id="test">A</div>'";". So my guess as to why I can't get a result when querying in Navicat is that there are spaces around the A ([space]A[space]). I just have no idea how to remove/hide the div tags; I only want to end up with "$query = "select * from example where category='A'";" Thanks.
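
    On the SQL side, the leftover whitespace can be trimmed before comparing (a hedged workaround; the cleaner fix is to send only the text, not the <div> markup, from the JavaScript/PHP side). The literal below stands in for the escaped $test value:

        SELECT *
        FROM example
        WHERE category = TRIM(' A ');   -- TRIM strips the spaces left once the tags are removed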

    Read the article

  • MySQL query trying to search by alias involving CASEs and aggregate functions UGH!

    - by dqhendricks
    I have two tables left joined. The query is grouped by the left table's ID column. The right table has a date column called close_date. The problem is, if there are any right table records that have not been closed (thus having a close_date of 0000-00-00), then I do not want any of the left table records to be shown, and if there are NO right table records with a close_date of 0000-00-00, I would like only the right table record with the MAX close date to be returned. So for simplicity sake, let's say the tables look like this: Table1 id 1 2 Table2 table1_id | close_date 1 | 0000-00-00 1 | 2010-01-01 2 | 2010-01-01 2 | 2010-01-02 I would like the query to only return this: Table1.id | Table2.close_date 2 | 2010-01-02 I tried to come up with an answer using aliased CASES and aggregate functions, but I could not search by the result, and I was attempting not to make a 3 mile long query to solve the problem. I looked through a few of the related posts on here, but none seem to meet the criteria of this particular case. Any pushes in the right direction would be greatly appreciated. Thanks!
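
    One pattern that matches the sample data above (a hedged sketch): exclude every left-table id that still has an open (0000-00-00) right-table row using NOT EXISTS, and take MAX(close_date) for the rest, which returns id 2 with 2010-01-02 for the tables shown.

        SELECT t1.id, MAX(t2.close_date) AS close_date
        FROM Table1 t1
        JOIN Table2 t2 ON t2.table1_id = t1.id
        WHERE NOT EXISTS (
            SELECT 1
            FROM Table2 x
            WHERE x.table1_id = t1.id
              AND x.close_date = '0000-00-00'   -- any still-open record disqualifies the id
        )
        GROUP BY t1.id;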

    Read the article

  • Slow MySQL query... only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs quicker than a second, and other times takes 1 to 10 minutes to run. Here's the entry from the slow query log: # Query_time: 543 Lock_time: 0 Rows_sent: 0 Rows_examined: 124948974 use statsdb; SELECT count(distinct Visits.visitorid) as 'uniques' FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime>=1275721200 and visittime<=1275807599 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9 AND Visits.visitorid NOT IN (SELECT Visits.visitorid FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime<1275721200 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9); It's basically counting unique visitors, and it's doing that by counting the visitors for today and then substracting those that have been here before. If you know of a better way to do this, let me know. I just don't understand why sometimes it can be so quick, and other times takes so long - even with the same exact query under the same server load. Here's the EXPLAIN on this query. As you can see it's using the indexes I've set up: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY Visits range visittime_visitorid,visitorid visittime_visitorid 4 NULL 82500 Using where; Using index 1 PRIMARY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where 2 DEPENDENT SUBQUERY Visits ref visittime_visitorid,visitorid visitorid 8 func 1 Using where 2 DEPENDENT SUBQUERY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time since 90% of the time the old query returned much quicker. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods. Could the quick behavior be due to the query being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time. Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing? In case you want server info: Accessing via PHP 4.4.4 MySQL 4.1.22 All tables are InnoDB We run optimize table on all tables weekly The sum of both the tables used in the query is 500 MB MySQL config: key_buffer = 350M max_allowed_packet = 16M thread_stack = 128K sort_buffer = 14M read_buffer = 1M bulk_insert_buffer_size = 400M set-variable = max_connections=150 query_cache_limit = 1048576 query_cache_size = 50777216 query_cache_type = 1 tmp_table_size = 203554432 table_cache = 120 thread_cache_size = 4 wait_timeout = 28800 skip-external-locking innodb_file_per_table innodb_buffer_pool_size = 3512M innodb_log_file_size=100M innodb_log_buffer_size=4M
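
    One direction worth testing (a hedged rewrite, assuming candidateid, visittime and omit are columns of Visits, as the unqualified names suggest): the EXPLAIN output shows a DEPENDENT SUBQUERY, which MySQL 4.1 re-evaluates per outer row; expressing the NOT IN as an anti-join lets the "previous visit" check run as an ordinary join instead.

        SELECT COUNT(DISTINCT v.visitorid) AS uniques
        FROM Visits v
        JOIN Visitors vr
          ON vr.visitorid = v.visitorid
        LEFT JOIN Visits prior
          ON  prior.visitorid = v.visitorid
          AND prior.candidateid IN (32)
          AND prior.visittime < 1275721200
          AND (prior.omit = 0 OR prior.omit >= 1275807599)
        WHERE v.candidateid IN (32)
          AND v.visittime >= 1275721200 AND v.visittime <= 1275807599
          AND (v.omit = 0 OR v.omit >= 1275807599)
          AND vr.segmentid = 9
          AND prior.visitorid IS NULL;   -- visitor has no qualifying earlier visit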

    Read the article

  • How to factor out common tags with the Nokogiri builder?

    - by plafoucriere
    Hi, I'd like to create several builders, with common tags, in order to have xml docs like : <xml version="1.0"?> <a_kind_of_root> <!-- This part is common --> <event_date>20100514</event_date> <event_id>123</event_id> <event_type>Conference</event_type> <!-- This part is specific to the builder --> <my_tag>some text</my_tag> </a_kind_of_root> </xml> <xml version="1.0"?> <another_kind_of_root> <!-- This part is common --> <event_date>20100514</event_date> <event_id>123</event_id> <event_type>Conference</event_type> <!-- This part is specific to the builder --> <my_other_tag>some integer</my_other_tag> </another_kind_of_root> </xml> I don't know how to put the common part inside a Nokogiri::XML::Builder Thanks

    Read the article

  • Five Query Optimizations in MySQL

    Query optimization is an often overlooked part of applications. Sean Hull encourages at least some attention to query optimization up front and helps you identify some of the more common optimizations you may run across.

    Read the article

  • How to build a Query Template Explorer

    Having introduced his cross-platform Query Template solution, Michael now gives us the technical details on how to integrate his .NET controls into applications both simple and complex. With screenshots and code samples, this has everything you need to build your own powerful SQL editor or Query Template explorer.

    Read the article

  • Search For a Query in RDL Files with PowerShell

    - by AllenMWhite
    In tracking down poorly performing queries for clients I often encounter the query text in a trace file I've captured, but don't know the source of the query. I've found that many of the poorest performing queries are those written into the reports the business users need to make their decisions. If I can't figure out where they came from, usually years after the queries were written, I can't fix them. First thing I did was find a great utility called RSScripter , which opens up a Windows dialog...(read more)

    Read the article

  • SQL Server Prefetch and Query Performance

    Prefetching can make a surprising difference to SQL Server query execution times where there is a high incidence of waiting for disk i/o operations, but the benefits come at a cost. Mostly, the Query Optimizer gets it right, but occasionally there are queries that would benefit from tuning.

    Read the article

  • SQL SERVER – 2012 – All Download Links in Single Page – SQL Server 2012

    - by pinaldave
    SQL Server 2012 RTM is just announced and recently I wrote about all the SQL Server 2012 Certification on single page. As a feedback, I received suggestions to have a single page where everything about SQL Server 2012 is listed. I will keep this page updated as new updates are announced. Microsoft SQL Server 2012 Evaluation Microsoft SQL Server 2012 enables a cloud-ready information platform that will help organizations unlock breakthrough insights across the organization. Microsoft SQL Server 2012 Express Microsoft SQL Server 2012 Express is a powerful and reliable free data management system that delivers a rich and reliable data store for lightweight Web Sites and desktop applications. Microsoft SQL Server 2012 Feature Pack The Microsoft SQL Server 2012 Feature Pack is a collection of stand-alone packages which provide additional value for Microsoft SQL Server 2012. Microsoft SQL Server 2012 Report Builder Report Builder provides a productive report-authoring environment for IT professionals and power users. It supports the full capabilities of SQL Server 2012 Reporting Services. Microsoft SQL Server 2012 Master Data Services Add-in For Microsoft Excel The Master Data Services Add-in for Excel gives multiple users the ability to update master data in a familiar tool without compromising the data’s integrity in Master Data Services. Microsoft SQL Server 2012 Performance Dashboard Reports The SQL Server 2012 Performance Dashboard Reports are Reporting Services report files designed to be used with the Custom Reports feature of SQL Server Management Studio. Microsoft SQL Server 2012 PowerPivot for Microsoft Excel® 2010 Microsoft PowerPivot for Microsoft Excel 2010 provides ground-breaking technology; fast manipulation of large data sets, streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft SharePoint. Microsoft SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Technologies 2010 The SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint 2010 technologies allows you to integrate your reporting environment with the collaborative SharePoint 2010 experience. Microsoft SQL Server 2012 Semantic Language Statistics The Semantic Language Statistics Database is a required component for the Statistical Semantic Search feature in Microsoft SQL Server 2012 Semantic Language Statistics. Microsoft ®SQL Server 2012 FileStream Driver – Windows Logo Certification Catalog file for Microsoft SQL Server 2012 FileStream Driver that is certified for WindowsServer 2008 R2. It meets Microsoft standards for compatibility and recommended practices with the Windows Server 2008 R2 operating systems. Microsoft SQL Server StreamInsight 2.0 Microsoft StreamInsight is Microsoft’s Complex Event Processing technology to help businesses create event-driven applications and derive better insights by correlating event streams from multiple sources with near-zero latency. Microsoft JDBC Driver 4.0 for SQL Server Download the Microsoft JDBC Driver 4.0 for SQL Server, a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in Java Platform, Enterprise Edition 5 and 6. Data Quality Services Performance Best Practices Guide This guide focuses on a set of best practices for optimizing performance of Data Quality Services (DQS). 
    Microsoft Drivers 3.0 for SQL Server for PHP The Microsoft Drivers 3.0 for SQL Server for PHP provide connectivity to Microsoft SQL Server from PHP applications. Product Documentation for Microsoft SQL Server 2012 for firewall and proxy restricted environments The Microsoft SQL Server 2012 setup installs only the Help Viewer…install any documentation. All of the SQL Server documentation is available online. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – 3 Challenges for DBA and Smart Solutions

    - by Pinal Dave
    Developer’s life is never easy. DBA’s life is even crazier. DBA’s Life When a developer wakes up in the morning, most of the time have no idea what different challenges they are going to face that day. Of course, most of the developers know the project and roadmap, which they are working on. However, developers have no clue what coding challenges which they are going face for that day. DBA’s life is even crazier. When DBA wakes up in the morning – they often thank that they were not disturbed during the night due to server issues. The very next thing they wish is that they do not want to challenge which they can’t solve for that day. The problems DBA face every single day are mostly unpredictable and they just have to solve them as they come during the day. Though the life of DBA is not always bad. There are always ways and methods how one can overcome various challenges. Let us see three of the challenges and how a DBA can use various tools to overcome them. Challenge #1 Synchronize Data Across Server A Very common challenge DBA receive is that they have to synchronize the data across the servers. If you try to manually write that up, it may take forever to accomplish the task. It is nearly impossible to do the same with the help of the T-SQL. However, thankfully there are tools like dbForge Studio which can save a day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL. Challenge #2 SQL Report Builder DBA’s are often asked to build reports on the go. It really annoys DBA’s, but hardly people care about it. No matter how busy a DBA is, they are just called upon to build reports on things on very short notice. I personally like to avoid any task which is given to me accidently and personally building report can be boring. I rather spend time with High Availability, disaster recovery, performance tuning rather than building report. I use SQL third party tool when I have to work with SQL Report. Others have extended reporting capabilities. The latter group of products includes the SQL report builder built-in todbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server. Challenge #3 Work with the OTHER Database The manager does not understand that MySQL is different from SQL Server and SQL Server is different from Oracle. For them everything is same. In my career hundreds of times I have faced a situation that I am given a database to manage or do some task when their regular DBA is on vacation or leave. When I try to explain I do not understand the underlying the technology, I have been usually told that my manager has trust on me and I can do anything. Honestly, I can’t but I hardly dare to argue. I fall back on the third party tool to manage database when it is not in my comfort zone. For example, I was once given MySQL performance tuning task (at that time I did not know MySQL so well). To simplify search for a problem query let us use MySQL Profiler in dbForge Studio for MySQL. It provides such commands as a Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing about the same: MySQL – Profiler : A Simple and Convenient Tool for Profiling SQL Queries. Well, that’s it! There were many different such occasions when I have been saved by the tool. May be some other day I will write part 2 of this blog post. 
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL Tagged: Devart, SQL Tool

    Read the article

  • SQL Server Reporting Services - website blank, builder works

    - by Keith
    We have a few reports in SQL Server Reporting Services. For some reason when we run the report from the website, it doesn't return any data. When I run the same report from the Report Builder, it returns data. I looked in the logs and the only errors I could find is: ReportingServicesService!library!8!6/15/2012-08:12:33:: i INFO: Current DB Version Unknown, Instance Version C.0.8.54. ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights., ;Info: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights. ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Exception caught while starting service. Error: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights. I'm not really sure why it would be a different version. It's all SQL Server 2008 R2 and I haven't made any changes to it since it's been running.

    Read the article

  • Wrong DNS query in Active directory network with NetBIOS enabled client

    - by koankoder
    The setup: Active Directory is enabled on the network (abcd.com). We have a single-character host name (1.abcd.com). One of the desktops is an old XP machine with NetBIOS enabled. The problem: whenever we query for any host name from the XP machine, only the first character is used for the DNS query (one.abcd.com queries for o.abcd.com, two.abcd.com queries for t.abcd.com). Even if we give an IP address, the application queries with the numeric prefix (10.x.x.x queries for 1.abcd.com). Since we already have 1.abcd.com, all queries and traffic end up at 1.abcd.com. After discussion with the network guys, it seems NetBIOS issues DNS queries using some prefix, etc., but none of them is actually sure what is happening. Are there any docs which can explain this behavior? Is this valid behavior in a NetBIOS environment?

    Read the article

  • SQL SERVER – 3 Online SQL Courses at Pluralsight and Free Learning Resources

    - by pinaldave
    Usain Bolt is an inspiration for all. He broke his own record multiple times because he wanted to do better! Read more about him on wikipedia. He is great and indeed fastest man on the planet. Usain Bolt – World’s Fastest Man “Can you teach me SQL Server Performance Tuning?” This is one of the most popular questions which I receive all the time. The answer is YES. I would love to do performance tuning training for anyone, anywhere.  It is my favorite thing to do, and it is my favorite thing to train others in.  If possible, I would love to do training 24 hours a day, 7 days a week, 365 days a year.  To me, it doesn’t feel like a job. Of course, as much as I would love to do performance tuning 24/7/365, obviously I am just one human being and can only be in one place t one time.  It is also very difficult to train more than one person at a time, and it is difficult to train two or more people at a time, especially when the two people are at different levels.  I am also limited by geography.  I live in India, and adjust to my own time zone.  Trying to teach a live course from India to someone whose time zone is 12 or more hours off of mine is very difficult.  If I am trying to teach at 2 am, I am sure I am not at my best! There was only one solution to scale – Online Trainings. I have built 3 different courses on SQL Server Performance Tuning with Pluralsight. Now I have no problem – I am 100% scalable and available 24/7 and 365. You can make me say the same things again and again till you find it right. I am in your mobile, PC as well as on XBOX. This is why I am such a big fan of online courses.  I have recorded many performance tuning classes and you can easily access them online, at your own time.  And don’t think that just because these aren’t live classes you won’t be able to get any feedback from me.  I encourage all my viewers to go ahead and ask me questions by e-mail, Twitter, Facebook, or whatever way you can get a hold of me. Here are details of three of my courses with Pluralsight. I suggest you go over the description of the course. As an author of the course, I have few FREE codes for watching the free courses. Please leave a comment with your valid email address, I will send a few of them to random winners. SQL Server Performance: Introduction to Query Tuning  SQL Server performance tuning is an art to master – for developers and DBAs alike. This course takes a systematic approach to planning, analyzing, debugging and troubleshooting common query-related performance problems. This includes an introduction to understanding execution plans inside SQL Server. In this almost four hour course we cover following important concepts. Introduction 10:22 Execution Plan Basics 45:59 Essential Indexing Techniques 20:19 Query Design for Performance 50:16 Performance Tuning Tools 01:15:14 Tips and Tricks 25:53 Checklist: Performance Tuning 07:13 The duration of each module is mentioned besides the name of the module. SQL Server Performance: Indexing Basics This course teaches you how to master the art of performance tuning SQL Server by better understanding indexes. In this almost two hour course we cover following important concepts. Introduction 02:03 Fundamentals of Indexing 22:21 Practical Indexing Implementation Techniques 37:25 Index Maintenance 16:33 Introduction to ColumnstoreIndex 08:06 Indexing Practical Performance Tips and Tricks 24:56 Checklist : Index and Performance 07:29 The duration of each module is mentioned besides the name of the module. 
SQL Server Questions and Answers This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. In this almost 2 hours and 15 minutes course we cover following important concepts. Introduction 00:54 Retrieving IDENTITY value using @@IDENTITY 08:38 Concepts Related to Identity Values 04:15 Difference between WHERE and HAVING 05:52 Order in WHERE clause 07:29 Concepts Around Temporary Tables and Table Variables 09:03 Are stored procedures pre-compiled? 05:09 UNIQUE INDEX and NULLs problem 06:40 DELETE VS TRUNCATE 06:07 Locks and Duration of Transactions 15:11 Nested Transaction and Rollback 09:16 Understanding Date/Time Datatypes 07:40 Differences between VARCHAR and NVARCHAR datatypes 06:38 Precedence of DENY and GRANT security permissions 05:29 Identify Blocking Process 06:37 NULLS usage with Dynamic SQL 08:03 Appendix Tips and Tricks with Tools 20:44 The duration of each module is mentioned besides the name of the module. SQL in Sixty Seconds You will have to login and to get subscribed to the courses to view them. Here are my free video learning resources SQL in Sixty Seconds. These are 60 second video which I have built on various subjects related to SQL Server. Do let me know what you think about them? Here are three of my latest videos: Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028 Copy Column Headers from Resultset – SQL in Sixty Seconds #027 Effect of Collation on Resultset – SQL in Sixty Seconds #026 You can watch and learn at your own pace.  Then you can easily ask me any questions you have.  E-mail is easiest, but for really tough questions I’m willing to talk on Skype, Gtalk, or even Facebook chat.  Please do watch and then talk with me, I am always available on the internet! Here is the video of the world’s fastest man.Usain St. Leo Bolt inspires us that we all do better than best. We can go the next level of our own record. We all can improve if we have a will and dedication.  Watch the video from 5:00 mark. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology, Video

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #035

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Row Overflow Data Explanation  In SQL Server 2005 one table row can contain more than one varchar(8000) fields. One more thing, the exclusions has exclusions also the limit of each individual column max width of 8000 bytes does not apply to varchar(max), nvarchar(max), varbinary(max), text, image or xml data type columns. Comparison Index Fragmentation, Index De-Fragmentation, Index Rebuild – SQL SERVER 2000 and SQL SERVER 2005 An old but like a gold article. Talks about lots of concepts related to Index and the difference from earlier version to the newer version. I strongly suggest that everyone should read this article just to understand how SQL Server has moved forward with the technology. Improvements in TempDB SQL Server 2005 had come up with quite a lots of improvements and this blog post describes them and explains the same. If you ask me what is my the most favorite article from early career. I must point out to this article as when I wrote this one I personally have learned a lot of new things. Recompile All The Stored Procedure on Specific TableI prefer to recompile all the stored procedure on the table, which has faced mass insert or update. sp_recompiles marks stored procedures to recompile when they execute next time. This blog post explains the same with the help of a script.  2008 SQLAuthority Download – SQL Server Cheatsheet You can download and print this cheat sheet and use it for your personal reference. If you have any suggestions, please let me know and I will see if I can update this SQL Server cheat sheet. Difference Between DBMS and RDBMS What is the difference between DBMS and RDBMS? DBMS – Data Base Management System RDBMS – Relational Data Base Management System or Relational DBMS High Availability – Hot Add Memory Hot Add CPU and Hot Add Memory are extremely interesting features of the SQL Server, however, personally I have not witness them heavily used. These features also have few restriction as well. I blogged about them in detail. 2009 Delete Duplicate Rows I have demonstrated in this blog post how one can identify and delete duplicate rows. Interesting Observation of Logon Trigger On All Servers – Solution The question I put forth in my previous article was – In single login why the trigger fires multiple times; it should be fired only once. I received numerous answers in thread as well as in my MVP private news group. Now, let us discuss the answer for the same. The answer is – It happens because multiple SQL Server services are running as well as intellisense is turned on. Blog post demonstrates how we can do the same with the help of SQL scripts. Management Studio New Features I have selected my favorite 5 features and blogged about it. IntelliSense for Query Editing Multi Server Query Query Editor Regions Object Explorer Enhancements Activity Monitors Maximum Number of Index per Table One of the questions I asked in my user group was – What is the maximum number of Index per table? I received lots of answers to this question but only two answers are correct. Let us now take a look at them in this blog post. 
2010 Default Statistics on Column – Automatic Statistics on Column The truth is, Statistics can be in a table even though there is no Index in it. If you have the auto- create and/or auto-update Statistics feature turned on for SQL Server database, Statistics will be automatically created on the Column based on a few conditions. Please read my previously posted article, SQL SERVER – When are Statistics Updated – What triggers Statistics to Update, for the specific conditions when Statistics is updated. 2011 T-SQL Scripts to Find Maximum between Two Numbers In this blog post there are two different scripts listed which demonstrates way to find the maximum number between two numbers. I need your help, which one of the script do you think is the most accurate way to find maximum number? Find Details for Statistics of Whole Database – DMV – T-SQL Script I was recently asked is there a single script which can provide all the necessary details about statistics for any database. This question made me write following script. I was initially planning to use sp_helpstats command but I remembered that this is marked to be deprecated in future. 2012 Introduction to Function SIGN SIGN Function is very fundamental function. It will return the value 1, -1 or 0. If your value is negative it will return you negative -1 and if it is positive it will return you positive +1. Let us start with a simple small example. Template Browser – A Very Important and Useful Feature of SSMS Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services as well. The template scripts contain parameters to help you customize the code. You can Replace Template Parameters dialog box to insert values into the script. An invalid floating point operation occurred If you run any of the above functions they will give you an error related to invalid floating point. Honestly there is no workaround except passing the function appropriate values. SQRT of a negative number will give you result in real numbers which is not supported at this point of time as well LOG of a negative number is not possible (because logarithm is the inverse function of an exponential function and the exponential function is NEVER negative). Validating Spatial Object with IsValidDetailed Function SQL Server 2012 has introduced the new function IsValidDetailed(). This function has made my life very easy. In simple words, this function will check if the spatial object passed is valid or not. If it is valid it will give information that it is valid. If the spatial object is not valid it will return the answer that it is not valid and the reason for the same. This makes it very easy to debug the issue and make the necessary correction. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
