Search Results

Search found 36788 results on 1472 pages for 'sql 2008'.


  • What can prevent a Server 2008 machine accessing its OWN UNC shares?

    - by Simon
    I need to set up a UNC share for my hosted dedicated server to access a share on itself; unfortunately TFS requires a UNC share. I am on a Windows Server 2008 Standard SP2 64-bit dedicated server behind a PIX 501 firewall hosted with GoDaddy. I just cannot get the server to access itself, and I get this error: "Windows cannot access \\SERVER\SHARE. Check the spelling of the name..." I've found numerous questions about this but no answer to my problem. What I have checked so far: Server 2008 Standard x64 SP2; workgroup, not domain; Windows Firewall is off; the Computer Browser service is on; I am trying to access \\MYMACHINE\TFS-BUILDS by typing it in or by double-clicking, and neither works; the machine has a single network card; the file sharing wizard says the share was created OK; the share shows up under Computer Management; permissions are set to Everyone with Full Control; there are no obvious errors in the event log; a reboot didn't fix it. Unfortunately I cannot try to access other shares into or out of this machine, because it is a hosted dedicated server and the only machine behind a hardware firewall. The only thing left I can think of is that the hardware firewall needs to be configured. I don't think that's it, because we have a 2003 Server machine behind a different hardware firewall and that one works fine. What on earth is left?!

    Read the article

  • SBS 2008 R2: Did something change with anonymous relays?

    - by gravyface
    I have noticed that prior documentation on setting up anonymous relays in SBS 2008 no longer works without some additional configuration. I used to be able to follow that documentation, which is basically: set up a new receive connector, add the IP address(es) that will be permitted to relay, check off "anonymous" under Permission Groups, and then run the Exchange shell script to grant permissions. Now what seems to be happening is that if the permitted IP address falls within the same address space as another, more restrictive receive connector (like the "Default SBS08" one), and possibly if it is ahead of the new receive connector alphabetically (I haven't tested that yet), the relay attempt fails with a "Client Was Not Authenticated" error. To get it to work, I had to modify the scope of the "Default SBS08" receive connector to exclude the one LAN IP that I wanted to allow relaying for. I can't recall ever having to do this for Exchange 2007 Standard or any other SBS 2008 servers I've set up over the last couple of years, and the wiki entry I added at the office doesn't mention it either. So my question is: has anyone else experienced this? Has there been a change with R2, or perhaps with an Exchange service pack?

    Read the article

  • Windows Server 2008 R2 creating a multi-year client certificate using the IIS certsrv page while deploying SSTP VPN

    - by Warren P
    I am trying to follow instructions on TechNet about deploying a standard (non-enterprise) SSTP-based VPN that were originally written for Server 2008, but I am using Server 2008 R2. I have gotten as far as the part where it asks you to request a Server Authentication certificate. I have deployed IIS and Active Directory Certificate Services, and chose a "Standalone", "Standard" (non-enterprise) Certificate Authority because I don't have an OID and don't think I should have to get one for a simple deployment of SSTP. The resulting certificates made by the Certification Authority "Issue" command only have a one-year period of validity, and I want a multi-year certificate. At no point in this process is there any way to input this information unless it's through the Attributes text input area on the Advanced Certificate Request page, which appears to be generated using an old ActiveX control, which means I can only do this using the workarounds in the article that I linked at the top, and only using Internet Explorer. Update: it may be that this question is pointless, since self-signed keys do not appear to work when I try them using Windows 8 as the VPN client. The problem is that certificates self-created by the technique shown here do not have any Certificate Revocation Server URLs, so you get the error "The revocation function was unable to check revocation" and the VPN connection fails.

    Read the article

  • Adding a 2008 server to a 2003 Domain with DNS devolution?

    - by mvdwege
    I'm running into a problem adding a 2008 server to our existing 2003 domain, and as I am not a Windows admin, I'm not understanding the problem here. Some reading around on TechNet seems to indicate that DNS devolution is the issue. Here's the setup: DNS for the entire company is hosted on a Unix server running BIND, including the service records for the Windows domain. Our top level is company.local, and functional domains are in subdomains, such as mgt.company.local (our management servers). Our Windows servers live mostly in office.company.local, but some of them live in .mgt.company.local and .customers.company.local. The 2003 servers all successfully authenticate against company.local as the Windows domain. Their position in the infrastructure is set via the primary DNS suffix under the network settings and the computer name dialog. Trying to do the same with a brand new 2008 install throws an error, though: "Changing the Primary Domain DNS name of this computer to office.company.local failed [...] The specified server cannot perform the requested operation". I tried googling, but the closest I came was the TechNet article on DNS devolution, and I can't make heads or tails of how to apply that to my case. Addendum 2012-10-23: the problem is not joining the domain; that works. The problem is that it joins with the wrong name, as .company.local instead of .office.company.local. So far everything works, but I'm rather afraid to run production like this, because sooner or later something is going to complain about the AD name not matching DNS.

    Read the article

  • SQL Server: error when connecting

    - by atricapilla
    The application I'm using tries to connect to a SQL Server named instance running on a dedicated database server. Here's the error I'm getting: "The TCP/IP connection to the host <instance_name>, port 1433 has failed. Error: Connection refused: connect." Is the firewall blocking my access, or is it something else? Should I dedicate a different port to this application?

    Read the article

  • SQL dynamic date but fixed time query

    - by Marko Lombardi
    I am trying to write a SQL query like the example below; however, I need it to always filter the DateEntered field between the current day's date at 8:00 AM and the current day's date at 4:00 PM. Not sure how to go about this. Can someone please help? SELECT OrderNumber , OrderRelease , HeatNumber , HeatSuffix , Operation , COUNT(Operation) AS [Pieces Out of Tolerance] FROM Alerts WHERE (Mill = 3) AND (DateEntered BETWEEN GetDate '08:00' AND GetDate '16:00') GROUP BY OrderNumber, OrderRelease, HeatNumber, HeatSuffix, Operation
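    One minimal way to anchor the range to today's date at fixed times is sketched below, assuming SQL Server 2008's DATE type is available; the table and column names are taken from the question.

        DECLARE @TodayStart DATETIME = DATEADD(HOUR, 8,  CAST(CAST(GETDATE() AS DATE) AS DATETIME));  -- today 08:00
        DECLARE @TodayEnd   DATETIME = DATEADD(HOUR, 16, CAST(CAST(GETDATE() AS DATE) AS DATETIME));  -- today 16:00

        SELECT OrderNumber, OrderRelease, HeatNumber, HeatSuffix, Operation,
               COUNT(Operation) AS [Pieces Out of Tolerance]
        FROM   Alerts
        WHERE  Mill = 3
          AND  DateEntered BETWEEN @TodayStart AND @TodayEnd
        GROUP BY OrderNumber, OrderRelease, HeatNumber, HeatSuffix, Operation;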

    Read the article

  • SQL Server - Test the result of a stored procedure

    - by Melursus
    In SQL Server, is it possible to test the result of a stored procedure to know whether it returns rows or nothing? Example: EXEC _sp_MySp 1, 2, 3 IF @@ROWCOUNT = 0 BEGIN PRINT('Empty') END ELSE BEGIN PRINT(@@ROWCOUNT) END But @@ROWCOUNT always returns 0, so is there perhaps another way of doing this?
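    One hedged workaround, a sketch that assumes the procedure returns a single result set whose column list you know (the single INT column here is illustrative), is to capture the rows with INSERT ... EXEC and test the captured table:

        DECLARE @Result TABLE (SomeColumn INT);   -- must match the shape of the proc's result set (assumed)

        INSERT INTO @Result
        EXEC _sp_MySp 1, 2, 3;

        IF NOT EXISTS (SELECT 1 FROM @Result)
            PRINT 'Empty';
        ELSE
            PRINT 'Rows returned';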

    Read the article

  • how to connect SQL Server cubes using dotnet(C#)

    - by prince23
    Hi. I am new to the concept of cubes in SQL Server. I need to connect to a cube, query it, get a result, and display that result in a grid view. Any help would be great: how to connect to a cube, articles on it, code, anything that can help me achieve the result. Thank you.

    Read the article

  • Query performs poorly unless a temp table is used

    - by Paul McLoughlin
    The following query takes about 1 minute to run, and has the following IO statistics: SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits FROM dbo.TRANS AS T JOIN dbo.TRANS AS T2 ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT JOIN TASK_REQUESTS AS T3 ON T3.CD=T.CD AND T3.RGN=T.RGN AND T3.TASK = 'UPDATE_MEM_BAL' GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT (4447 row(s) affected) Table 'TRANSACTIONS'. Scan count 5977, logical reads 7527408, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'TASK_REQUESTS'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 58157 ms, elapsed time = 61437 ms. If I instead introduce a temporary table then the query returns quickly and performs fewer logical reads: CREATE TABLE #MyTable(RGN VARCHAR(20) NOT NULL, CD VARCHAR(20) NOT NULL, PRIMARY KEY([RGN],[CD])); INSERT INTO #MyTable(RGN, CD) SELECT RGN, CD FROM TASK_REQUESTS WHERE TASK='UPDATE_MEM_BAL'; SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits FROM dbo.TRANS AS T JOIN dbo.TRANS AS T2 ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT JOIN #MyTable AS T3 ON T3.CD=T.CD AND T3.RGN=T.RGN GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT (4447 row(s) affected) Table 'Worktable'. Scan count 5974, logical reads 382339, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'TRANSACTIONS'. Scan count 4, logical reads 4547, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table '#MyTable________________________________________________________________000000000013'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 1420 ms, elapsed time = 1515 ms. The interesting thing for me is that the TASK_REQUESTS table is a small table (3 rows at present) and statistics are up to date on the table. Any idea why such different execution plans and execution times would be occurring? And ideally, how can I change things so that I don't need to use the temp table to get decent performance? The only real difference in the execution plans is that the temp table version introduces an index spool (eager spool) operation.

    Read the article

  • rewards the products qualify for

    - by Rod
    Products purchased: bana, bana, bana, stra, kiwi. Reward requirements table (related to a rewards table), as (reward id, product) pairs: (1, bana), (1, bana), (1, bana), (2, stra), (2, bana), (3, stra), (4, cart), (5, bana), (5, bana), (5, oliv). Can you help me with SQL to get the rewards that the purchased products qualify for? In the case above the reward ids would be: 1, 2, 3. If there is a better design that would make the solution easier, I welcome that as well. I'm using product names for the sake of easier explanation; I'll replace them with product ids later.
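    A possible single-query sketch (the table and column names are assumptions, since the question only describes them loosely): count how many of each product was purchased, count how many of each product each reward requires, and keep the rewards whose every requirement is covered.

        WITH PurchasedCounts AS (
            SELECT product, COUNT(*) AS qty
            FROM   purchased_products
            GROUP BY product
        ),
        RequiredCounts AS (
            SELECT reward_id, product, COUNT(*) AS qty
            FROM   reward_requirements
            GROUP BY reward_id, product
        )
        SELECT r.reward_id
        FROM   RequiredCounts AS r
        LEFT JOIN PurchasedCounts AS p ON p.product = r.product
        GROUP BY r.reward_id
        HAVING MIN(CASE WHEN p.qty >= r.qty THEN 1 ELSE 0 END) = 1;  -- every required product is covered in sufficient quantity

    With the sample data above this returns reward ids 1, 2, and 3.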

    Read the article

  • Security considerations on Importing Bulk Data by Using BULK INSERT or OPENROWSET(BULK...)

    - by Ice
    I do not fully understand the following article: http://msdn.microsoft.com/en-us/library/ms175915(SQL.90).aspx "In contrast, if a SQL Server user logs on by using Windows Authentication, the user can read only those files that can be accessed by the user account, regardless of the security profile of the SQL Server process." What if I define a SQL Agent job to perform this bulk insert; is it the owner of the job that provides the security context?

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources: Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started: US Census Bureau Website, http://www.census.gov/geo/www/tiger/ Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/ Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/ In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations to embed a map in a report. Preparing the spatial data First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region). Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post. Step 1: Using the Map Wizard to Create a Map of France You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores.
Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France: On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data. On the Choose a connection to a SQL Server spatial data source page, select New. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial Click OK and then click Next. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1 Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page, as shown below. You have the option to add a Bing Maps layer which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (from Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names and roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature. Use the + or - button to resize the map as needed. I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area (which is called the viewport). You can eliminate the color scale and distance scale boxes that appear in the map area later. Select the Embed map data in this report option for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map. Step 2: Add a Map Layer to an Existing Map The map data region allows you to add multiple layers. Each layer is associated with a different data set. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add in another layer for the store locations by following these steps: If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it. Click on the New Layer Wizard button in the Map layers window. And then we start over again with the process by choosing a spatial data source. Select SQL Server spatial query, and click Next. Select Add a new dataset with SQL Server spatial data, and click Next. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next. Add a query with spatial data (like the one I included in the downloadable project), and click Next. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly. Select Embed map data in this report, and click Next.
On the Choose map visualization page, select Basic Marker Map, and click Next. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as the sales volume of each store, but I'll save that exploration for another post on another day. Click Finish and then click Preview to see the rendered report. Et voilà...c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.
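    The store-location query that feeds the second layer is not reproduced in the post (it ships with the author's downloadable project), but a rough sketch against the AdventureWorks2008R2 schema might look like the following; the exact join path and column choices here are assumptions:

        SELECT s.Name AS StoreName,
               a.SpatialLocation                          -- geography column used by the marker layer
        FROM   Sales.Store AS s
        JOIN   Person.BusinessEntityAddress AS bea
               ON bea.BusinessEntityID = s.BusinessEntityID
        JOIN   Person.Address AS a
               ON a.AddressID = bea.AddressID
        JOIN   Person.StateProvince AS sp
               ON sp.StateProvinceID = a.StateProvinceID
        WHERE  sp.CountryRegionCode = 'FR';               -- only the French stores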

    Read the article

  • LINQ to SQL vs Entity Framework for an app with a future SQL Azure version

    - by Craig L
    I've got a vertical-market .NET Framework 1.1 C#/WinForms/SQL Server 2000 application. Currently it uses ADO.NET and Microsoft's SQLHelper for CRUD operations. I've successfully converted it to .NET Framework 4 C#/WinForms/SQL Server 2008. What I'd like to do is also offer my customers the ability to use SQL Azure as backend storage for their data instead of a local/LAN SQL Server. If I know SQL Azure is in my application's future, should I: A. switch to LINQ to SQL, B. switch to Entity Framework, or C. stick with ADO.NET and SQLHelper? Thanks!

    Read the article

  • Where are task scheduler event files stored on Windows Server 2008?

    - by MacGyver
    I tried setting up triggers in Task Scheduler on Windows Server 2008, but Microsoft needs to fix a bug that I documented on Stack Overflow, so I can't use triggers until they fix it. Basically, Task Scheduler doesn't trigger on an event that has a Result Code of 2147942402. http://stackoverflow.com/questions/10933033/windows-server-2008-task-scheduler-trigger-xml-syntax-for-email-notification-ta In the meantime, I'd like to write a task that runs .NET code that queries the log/event files programmatically every 15 minutes and sends success/failure emails based on the events that occurred for the given tasks in Task Scheduler. Here's where the task trigger XML (which I can't rely on) is stored: C:\Windows\System32\Tasks\ I cannot find where the events (or log files containing the event IDs) of each task are stored. Does anyone know where to find these log files? The log name is "Microsoft-Windows-TaskScheduler/Operational".

    Read the article

  • Database Firewall

    - by ???02
    [The body of this post was written in Japanese and most of it has been lost to character-encoding damage. The fragments that survive indicate it describes Oracle Database Firewall (released in 2011): an appliance that sits in front of the database, inspects incoming SQL statements against white lists and black lists, and passes or blocks each statement accordingly. It can also monitor traffic out-of-band via a SPAN port, is positioned as a complement to IDS/IPS and WAF (Web Application Firewall) products for stopping SQL injection, and supports Oracle Database, SQL Server, DB2, and Sybase as protected databases. The post was published by Oracle Direct.]

    Read the article

  • SQL Server 2008: Table Valued Parameters

    In SQL Server 2005 and earlier, it is not possible to pass a table variable as a parameter to a stored procedure. When multiple rows of data needed to be sent to SQL Server, developers either had to send one row at a time or come up with other workarounds to meet requirements. While a VB.NET developer recently informed me that there is a SqlBulkCopy object available in .NET to send multiple rows of data to SQL Server at once, that data still cannot be passed to a stored proc. Possibly the most anticipated T-SQL feature of SQL Server 2008 is the new table-valued parameter: the ability to easily pass a table to a stored procedure as a parameter, from T-SQL code or from an application.
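    A minimal sketch of the feature (the type, procedure, and target table names here are illustrative, not taken from the article):

        -- Define a table type once...
        CREATE TYPE dbo.OrderLineList AS TABLE
        (
            ProductID INT NOT NULL,
            Quantity  INT NOT NULL
        );
        GO

        -- ...then accept it as a READONLY parameter in a procedure.
        CREATE PROCEDURE dbo.InsertOrderLines
            @Lines dbo.OrderLineList READONLY
        AS
        BEGIN
            INSERT INTO dbo.OrderLines (ProductID, Quantity)   -- dbo.OrderLines is an assumed target table
            SELECT ProductID, Quantity
            FROM   @Lines;
        END
        GO

        -- Calling it from T-SQL: fill a variable of the table type and pass it.
        DECLARE @l dbo.OrderLineList;
        INSERT INTO @l VALUES (714, 3), (709, 1);
        EXEC dbo.InsertOrderLines @Lines = @l;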

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
    Recently I came across an article on DB2 about using UNION instead of OR. So I thought of carrying out some research on SQL Server into which scenarios UNION is optimal in and in which scenarios OR would be best. I will analyze this with a few scenarios using samples taken from the AdventureWorks database Sales.SalesOrderDetail table. Scenario 1: Selecting all columns So we are going to select all columns, and you have a non-clustered index on the ProductID column. --Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR ProductID =709 OR ProductID =998 OR ProductID =875 OR ProductID =976 OR ProductID =874 --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874 So Query 1 is using OR and the latter is using UNION. Let us analyze the execution plans for these queries. Query 1 Query 2 As expected, Query 1 will use a Clustered Index Scan, but Query 2 uses all sorts of things. In this case, since it is using multiple CPUs, you might have CXPACKET waits as well. Let's look at the profiler results for these two queries: CPU Reads Duration Row Counts OR 78 1252 389 3854 UNION 250 7495 660 3854 You can see from the above table that the UNION query is not performing as well as the OR query, though both are returning the same number of rows (3854). These results indicate that, for the above scenario, OR should be used. Scenario 2: Non-Clustered and Clustered Index Columns only --Query 1 : OR SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR ProductID =709 OR ProductID =998 OR ProductID =875 OR ProductID =976 OR ProductID =874 GO --Query 2 : UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874 GO So this time, we will be selecting only index columns, which means these queries will avoid a data page lookup. As in the previous case we will analyze the execution plans: Query 1 Query 2 Again, Query 2 is more complex than Query 1. Let us look at the profiler analysis: CPU Reads Duration Row Counts OR 0 24 208 3854 UNION 0 38 193 3854 In this analysis, there is only a slight difference between OR and UNION. Scenario 3: Selecting all columns for different fields Up to now, we were using only one column (ProductID) in the WHERE clause. What if we have two columns in the WHERE clause, and let us assume both are covered by non-clustered indexes?
--Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%' --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber  LIKE 'D0B8%' Query 1 Query 2: As we can see, the query plan for the second query has improved. Let us see the profiler results. CPU Reads Duration Row Counts OR 47 1278 443 1228 UNION 31 1334 400 1228 So in this case too, there is little difference between OR and UNION. Scenario 4: Selecting Clustered index columns for different fields Now let us go only with clustered indexes: --Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%' --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber  LIKE 'D0B8%' Query 1 Query 2 Now both execution plans are almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. Let us see the profiler results for these queries again. CPU Reads Duration Row Counts OR 0 319 366 1228 UNION 0 50 193 1228 Now see the differences: in this scenario, UNION has somewhat of an advantage over OR. Conclusion Using UNION or OR depends on the scenario you are faced with, so you need to do your own analysis before selecting the appropriate method. Also, the four scenarios above are not an exhaustive list; I selected them for broad illustration purposes only.

    Read the article

  • T-SQL query to check number of existences

    - by abatishchev
    I have roughly the following table structure: accounts: ID INT, owner_id INT, currency_id TINYINT, related to clients: ID INT, and currency_types: ID TINYINT, name NVARCHAR(25). I need to write a stored procedure to check for the existence of accounts in a specific currency and in all others, i.e. a client can have accounts in the specific currency, in some other currencies, or in both. I have already written this query: SELECT ISNULL(( SELECT 1 WHERE EXISTS ( SELECT 1 FROM [accounts] AS A, [currency_types] AS CT WHERE A.[owner_id] = @client -- sp param AND A.[currency_id] = CT.[ID] AND CT.[name] = N'Ruble' )), 0) AS [ruble], ISNULL(( SELECT 1 WHERE EXISTS ( SELECT A.[ID] FROM [accounts] AS A, [currency_types] AS CT WHERE A.[owner_id] = @client AND A.[currency_id] = CT.[ID] AND CT.[name] != N'Ruble' )), 0) AS [foreign] Is it possible to optimize it? I'm new to (T-)SQL, so thanks in advance!
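    One possible single-pass rewrite, a sketch only, using the same table and column names as the question and assuming both flags should be 0 when the client has no accounts at all:

        SELECT ISNULL(MAX(CASE WHEN CT.[name] =  N'Ruble' THEN 1 ELSE 0 END), 0) AS [ruble],
               ISNULL(MAX(CASE WHEN CT.[name] <> N'Ruble' THEN 1 ELSE 0 END), 0) AS [foreign]
        FROM   [accounts] AS A
        JOIN   [currency_types] AS CT ON CT.[ID] = A.[currency_id]
        WHERE  A.[owner_id] = @client;   -- one scan of the client's accounts instead of two EXISTS probes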

    Read the article

  • Compare date fields in SQL server

    - by huslayer
    Hi all, I have a flat file whose data I cleaned up using SSIS; the output looks like this (the columns are MEDICAL REC NO, ADMIT DATE, PATIENT NUMBER, PATIENT NAME, DATE OF DISCHARGE, DX Code, DRG #):
    123613 02/16/09 12413209 MORIBALDI ,GEMMA 02/19/09 428.20 988
    130897 01/23/09 12407193 TINLEY ,PATRICIA 01/23/09 535.10 392
    139367 02/27/09 36262509 THARPE ,GLORIA 03/05/09 562.10 392
    141954 02/25/09 72779499 SHUMATE ,VALERIA 02/25/09 112.84 370
    141954 03/07/09 36271732 SHUMATE ,VALERIA 03/10/09 493.92 203
    145299 01/21/09 12406294 BAUGH ,MARIA 01/21/09 366.17 117
    The report (final results) is attached as a screenshot of the final Excel report. What's happening is that if the same name or the same account number appears more than once, the patient has entered the hospital again and needs to be included in the report. What I need to do is eliminate any rows that are NOT duplicated (not everybody in this file has been admitted again) and compare the dates to get the ReAdmitdate and ReDischargedate. I dumped the data into a SQL table and am trying to compare the dates to figure out "ReAdmitdate" and "ReDischargedate". Any help is appreciated. Thanks.
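    A hedged sketch of one way to do this once the rows are in a table (the table and column names below are assumptions, since the question does not give them): number each patient's admissions in date order, then pair every admission with the next one.

        ;WITH Numbered AS (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY MedicalRecNo ORDER BY AdmitDate) AS visit_no,
                   COUNT(*)     OVER (PARTITION BY MedicalRecNo)                    AS visit_count
            FROM   dbo.Admissions                        -- assumed staging table loaded from the flat file
        )
        SELECT f.MedicalRecNo,
               f.PatientName,
               f.AdmitDate,
               f.DischargeDate,
               r.AdmitDate     AS ReAdmitdate,
               r.DischargeDate AS ReDischargedate
        FROM   Numbered AS f
        JOIN   Numbered AS r
               ON  r.MedicalRecNo = f.MedicalRecNo
               AND r.visit_no     = f.visit_no + 1       -- the next admission for the same record number
        WHERE  f.visit_count > 1;                        -- only patients admitted more than once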

    Read the article

  • Refreshing metadata on user functions t-SQL

    - by luckyluke
    I am doing some T-SQL programming and I have some views defined on my database. The data model is still changing these days, and I have some table functions defined. Sometimes I deliberately use select * from MYVIEW in such a table function to return all columns. If the view (or table) changes, the function crashes and I need to recompile it. I know this is in general a good thing, since it prevents a whole lot of errors, but still... Is there a way to write such functions so they don't blow up in my face every time I change something on the underlying table? Or maybe I am doing something completely wrong... Thanks for the help.
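    If the functions are going to keep breaking, one option (a sketch; the object names are placeholders) is to refresh their cached metadata after a schema change instead of manually recompiling each one:

        -- Refresh the column metadata stored for a non-schema-bound view or module
        EXEC sys.sp_refreshview      N'dbo.MYVIEW';             -- for views
        EXEC sys.sp_refreshsqlmodule N'dbo.MyTableFunction';    -- for functions, procedures, and triggers

    Listing the columns explicitly instead of using select *, or creating the objects WITH SCHEMABINDING so that breaking changes are blocked up front, are the other usual approaches.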

    Read the article

  • Custom Expression in Linq-to-Sql Designer

    - by csharpnoob
    According to Microsoft: http://msdn.microsoft.com/de-de/library/system.data.linq.mapping.columnattribute.expression.aspx it's possible to add an Expression to the LINQ-to-SQL mapping. But how do you configure or add it in the Visual Studio designer? The problem is that when I add it manually to XYZ.designer.cs, it is lost whenever the file is regenerated: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:2.0.50727.4927 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ This is what gets generated: [Column(Name="id", Storage="_id", DbType="Int")] public System.Nullable<int> id { ... But I need something like this: [Column(Name="id", Storage="_id", DbType="Int", Expression="Max(id)")] public System.Nullable<int> id { ... Thanks.

    Read the article

  • Changing MSSQL Database sorting

    - by plaisthos
    I have a request to change the collation of an MS SQL database: ALTER DATABASE solarwind95 collate SQL_Latin1_General_CP1_CI_AS but I get this strange error: Meldung 5075, Ebene 16, Status 1, Zeile 1 Das 'Spalte'-Objekt 'CustomPollerAssignment.PollerID' ist von 'Datenbanksortierung' abhängig. Die Datenbanksortierung kann nicht geändert werden, wenn ein schemagebundenes Objekt von ihr abhängig ist. Entfernen Sie die Anhängigkeiten der Datenbanksortierung, und wiederholen Sie den Vorgang. Sorry for the German error message; I do not know how to switch the language to English, but here is a translation: Message 5075, Level 16, State 1, Line 1 The 'column' object 'CustomPollerAssignment.PollerID' depends on 'database collation'. The database collation cannot be changed if a schema-bound object depends on it. Remove the database collation dependencies and retry the operation. I got a ton more errors like that.
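    A quick way to see which columns carry a collation (and therefore where the schema-bound dependencies mentioned in the error are likely to be) is sketched below; this only locates candidates, it is not a fix for the error itself:

        -- List user table columns and their collations
        SELECT t.name AS table_name,
               c.name AS column_name,
               c.collation_name
        FROM   sys.columns AS c
        JOIN   sys.tables  AS t ON t.object_id = c.object_id
        WHERE  c.collation_name IS NOT NULL
        ORDER BY t.name, c.name;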

    Read the article

  • query in sql server for retrieving rows

    - by Arash khangaldi
    I have a table that contains the following 4 columns: id, name, lastname, phone. I want to write a stored procedure that takes an id as a parameter, gets the name for that id, and then uses that name to get all the rows whose name equals the one found in the first step. Here is my query; I know it's wrong, but I'm new to SQL commands: ALTER PROCEDURE dbo.GetAllNames @id int AS select name as Name from Users where id = @id -- i don't know how to retrieve the names that are equal to Name select * from Users where name = Name Can you correct my query and help me? Thanks.
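    One way to write it (a sketch using the Users table from the question; the NVARCHAR length is an assumption) is to capture the name in a variable and reuse it:

        ALTER PROCEDURE dbo.GetAllNames
            @id INT
        AS
        BEGIN
            DECLARE @name NVARCHAR(100);     -- length assumed

            SELECT @name = [name]
            FROM   Users
            WHERE  id = @id;

            SELECT *
            FROM   Users
            WHERE  [name] = @name;           -- every row sharing that name
        END

    The same result can be had in a single statement by joining Users to itself on the name column and filtering one side on @id.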

    Read the article
