Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.


  • How to Create Views for All Tables with Oracle SQL Developer

    - by thatjeffsmith
    Got this question over the weekend via a friend and Oracle ACE Director, so I thought I would share the answer here. If you want to quickly generate DDL to create VIEWs for all the tables in your system, the easiest way to do that with SQL Developer is to create a data model. Wait, why would I want to do this? StackOverflow has a few things to say on this subject… So, start with importing a data dictionary.

    Step One: Open or Create a Model. In SQL Developer, go to View – Data Modeler – Browser. Then in the browser panel, expand your design and create a new Relational Model.

    Step Two: Import your Data Dictionary. This is a fancy way of saying, ‘suck objects out of the database into my model’. It opens a wizard to connect, select your schema(s), objects, etc. Once they’re in your model, you’re ready to cook with gas. I’m using HR (Human Resources) for this example, so you should end up with our favourite HR model.

    Step Three: Auto-generate the Views. Go to Tools – Data Modeler – Table to View Wizard. I don’t want all my tables included, and I want to change the naming standard, so I take the wizard’s option to change the default generated view names. By default the views will be created as ‘V_TABLE_NAME’; if you don’t like the ‘V_’ prefix you can enter your own, and you can also reference the object and model name with variables. I’m going to go with something a little more personal. The generated views show up as little green boxes in the diagram. Can’t find your views? They should be grouped together in your diagram – don’t forget to use the Navigator to easily find and navigate to those model diagram objects!

    Step Four: Generate the DDL. Use the Generate DDL button on the toolbar, and un-check everything but your views. If you used a prefix, take advantage of that to create a filter – you might have existing views in your model that you don’t want to include, right? Once you click ‘OK’ the DDL will be generated.
-- Generated by Oracle SQL Developer Data Modeler 4.0.0.825
-- at: 2013-11-04 10:26:39 EST
-- site: Oracle Database 11g
-- type: Oracle Database 11g

CREATE OR REPLACE VIEW HR.TJS_BLOG_COUNTRIES ( COUNTRY_ID, COUNTRY_NAME, REGION_ID ) AS
SELECT COUNTRY_ID, COUNTRY_NAME, REGION_ID
FROM HR.COUNTRIES;

CREATE OR REPLACE VIEW HR.TJS_BLOG_EMPLOYEES ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID ) AS
SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID
FROM HR.EMPLOYEES;

CREATE OR REPLACE VIEW HR.TJS_BLOG_JOBS ( JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY ) AS
SELECT JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY
FROM HR.JOBS;

CREATE OR REPLACE VIEW HR.TJS_BLOG_JOB_HISTORY ( EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID ) AS
SELECT EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID
FROM HR.JOB_HISTORY;

CREATE OR REPLACE VIEW HR.TJS_BLOG_LOCATIONS ( LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID ) AS
SELECT LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID
FROM HR.LOCATIONS;

CREATE OR REPLACE VIEW HR.TJS_BLOG_REGIONS ( REGION_ID, REGION_NAME ) AS
SELECT REGION_ID, REGION_NAME
FROM HR.REGIONS;

-- Oracle SQL Developer Data Modeler Summary Report:
--   CREATE VIEW  6
--   (all other object-type counts 0; ERRORS 0, WARNINGS 0)

    You can then choose to save this to a file or not. This has a few steps, but as the number of tables in your system increases, so does the amount of time this feature can save you!
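
    If you ever want the same end result without building a model at all, you can also generate the statements straight from the data dictionary. This is a minimal sketch of my own (not part of the Modeler feature), assuming you're connected as the schema owner and want the same prefix as above; unlike the wizard output, it uses SELECT * rather than spelling out the column lists:

SELECT 'CREATE OR REPLACE VIEW TJS_BLOG_' || table_name ||
       ' AS SELECT * FROM ' || table_name || ';' AS ddl
FROM   user_tables
ORDER  BY table_name;

    Spool that output to a file and run it, and you have your views.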

    Read the article

  • System Variables, Stored Procedures or Functions for Meta Data

    - by BuckWoody
    Whenever you want to know something about SQL Server’s configuration, whether that’s the Instance itself or a database, you have a few options. If you want to know “dynamic” data, such as how much memory or CPU is consumed or what a particular query is doing, you should be using the Dynamic Management Views (DMVs) that you can read about here: http://msdn.microsoft.com/en-us/library/ms188754.aspx  But if you’re looking for how much memory is installed on the server, the version of the Instance, the drive letters of the backups and so on, you have other choices. The first of these is system variables. You access these with a SELECT statement, and they are useful when you need a discrete value for use, say in another query or to put into a table. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms173823.aspx You also have a few stored procedures you can use. These often bring back a lot more data, pre-formatted for the screen. You access these with the EXECUTE syntax. It is a bit more difficult to take the data they return and get a single value or place the results in another table, but it is possible. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms187961.aspx Yet another option is to use a system function, which you access with a SELECT statement, and which also brings back a discrete value that you can use in a test or to place in another table. You can read about those here: http://msdn.microsoft.com/en-us/library/ms187812.aspx  By the way, many of these constructs simply query from tables in the master or msdb databases for the Instance, or the system tables in a user database. You can get much of the information there as well, and there are even system views in each database to show you the meta-data dealing with structure – more on that here: http://msdn.microsoft.com/en-us/library/ms186778.aspx  Some of these choices are the only way to get at a certain piece of data. But others overlap – you can use one or the other, and they both come back with the same data. So, like many Microsoft products, you have multiple ways to do the same thing. And that’s OK – just research what each is used for and how it’s intended to be used, and you’ll be able to select (pun intended) the right choice.
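
    To see the difference in shape, here’s a quick sketch of each approach pulling Instance metadata (nothing here beyond standard system objects):

-- System variable: a discrete value, easy to use in another query or table
SELECT @@VERSION AS InstanceVersion;

-- System function: also a discrete value
SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;

-- Stored procedure: richer result sets, pre-formatted for the screen
EXECUTE sp_helpdb;

-- System catalog views: the metadata many of these constructs query from
SELECT name, compatibility_level FROM sys.databases;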

    Read the article

  • Export all SSIS packages from msdb using Powershell

    - by jamiet
    Have you ever wanted to dump all the SSIS packages stored in msdb out to files? Of course you have, who wouldn’t? Right? Well, at least one person does, because this was the subject of a thread (save all ssis packages to file) on the SSIS forum earlier today. Some of you may have already figured out a way of doing this, but for those that haven’t, here is a nifty little script that will do it for you, and it uses our favourite jack-of-all tools … Powershell!! Imagine I have a package folder structure on my Integration Services server (i.e. in [msdb]) containing two packages called “20110111 Chaining Expression components” & “Package”; I want to export those two packages into a folder structure that mirrors the one in [msdb]. Here is the Powershell script that will do that:

Param($SQLInstance = "localhost")

##### Add all the SQL goodies (including Invoke-Sqlcmd) #####
add-pssnapin sqlserverprovidersnapin100 -ErrorAction SilentlyContinue
add-pssnapin sqlservercmdletsnapin100 -ErrorAction SilentlyContinue
cls

$Packages = Invoke-Sqlcmd -MaxCharLength 10000000 -ServerInstance $SQLInstance -Query "
WITH cte AS (
    SELECT cast(foldername as varchar(max)) as folderpath, folderid
    FROM msdb..sysssispackagefolders
    WHERE parentfolderid = '00000000-0000-0000-0000-000000000000'
    UNION ALL
    SELECT cast(c.folderpath + '\' + f.foldername as varchar(max)), f.folderid
    FROM msdb..sysssispackagefolders f
    INNER JOIN cte c ON c.folderid = f.parentfolderid
)
SELECT c.folderpath, p.name,
       CAST(CAST(packagedata AS VARBINARY(MAX)) AS VARCHAR(MAX)) as pkg
FROM cte c
INNER JOIN msdb..sysssispackages p ON c.folderid = p.folderid
WHERE c.folderpath NOT LIKE 'Data Collector%'"

Foreach ($pkg in $Packages)
{
    $pkgName = $Pkg.name
    $folderPath = $Pkg.folderpath
    $fullfolderPath = "c:\temp\$folderPath\"
    if(!(test-path -path $fullfolderPath))
    {
        mkdir $fullfolderPath | Out-Null
    }
    $pkg.pkg | Out-File -Force -encoding ascii -FilePath "$fullfolderPath\$pkgName.dtsx"
}

    To run it, simply change the “localhost” parameter to the server you want to connect to, either by editing the script or passing it in when the script is executed. It will create the folder structure in C:\Temp (which you can also easily change if you so wish – just edit the script accordingly), and that folder structure will be a mirror of the one in [msdb]. Hope this is useful! @Jamiet

    UPDATE: This post prompted Chad Miller to write a post describing his Powershell add-in that utilises a SSIS API to do exporting of packages. Go take a read here: http://sev17.com/2011/02/importing-and-exporting-ssis-packages-using-powershell/

    Read the article

  • How to deploy SQL Server 2005 Reporting Services on a network without a domain server?

    - by ti
    I have a small Windows network (~30 machines) and I need to deploy SQL Server 2005 Reporting Services. Because I use SQL Server Standard Edition and not Enterprise, I am forced to use Windows Authentication for the users. I am a Linux admin and have near-zero knowledge of Active Directory. As far as my shallow knowledge goes, I think I would need to invest in a domain server plus a mirrored backup of that domain server, I would need to change every computer to use this domain, and if the domain server went down, every computer would be unavailable. Is there an easier way to deploy Windows Authentication so that users can access Reporting Services from their computers without changing the infrastructure that much? Thanks!

    Read the article

  • SQL Server 2012 and SQLMail - will it still work?

    - by Kharlos Dominguez
    We are considering upgrading our SQL Server, which is currently running 2005. We use SQLMail heavily in the organization, both to send e-mails and to import incoming e-mails into a database. I've read in various places that SQLMail was deprecated and superseded by "Database Mail". I'm confused because this MS page: http://msdn.microsoft.com/en-us/library/bb402904.aspx seems to imply that it would still work. I understand the dangers of SQLMail, but we do not have the resources to rewrite the scripts right now and would prefer to do it later on. Does SQLMail still work in 2012, and if not, how easy is it to replace with Database Mail, both for reading and sending e-mails?
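
    For the sending side, the Database Mail replacement for SQLMail's xp_sendmail is msdb.dbo.sp_send_dbmail. A minimal sketch, assuming a Database Mail profile has already been configured (the profile name and addresses below are placeholders):

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DefaultProfile',   -- hypothetical profile name
    @recipients   = 'ops@example.com',
    @subject      = 'Nightly load complete',
    @body         = 'The nightly import finished successfully.';

    Note that the reading side is the harder half of the migration: Database Mail is send-only, so there is no direct equivalent of SQLMail's xp_readmail.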

    Read the article

  • ServerName no longer works as SQL Server 2008 Default instance name?

    - by TomK
    Folks, somehow in struggling to install something (Sage MIP), I have unsettled my system's SQL Server 2008 instance names. If I fired up SSMS and connected to "MySystemName" (while logged into MySystemName), I used to immediately connect and could view my databases. Now if I connect to "(local)" or "." I connect just fine... but "MySystemName" gives me a timeout error. If I log in with "." and run "SELECT @@SERVERNAME", it comes back with "MySystemName" just fine. I've double-checked that my network name is in the Security\Logins list, and it is, as a dbadmin. Any suggestions? I thought that having SQL Server 2005 Express might be a problem (the "battle of the default instances"), so I uninstalled that. No better.

    Read the article

  • Running 32 bit SQL Server 2005 on 64 bit Windows Server?

    - by TooFat
    If I have a 32-bit version of SQL Server 2005 running on a 64-bit Windows Server, does the maximum amount of memory available to the SQL Server process increase from 2 GB to 4 GB? Reading this blog entry by Mark Russinovich, in which he states that "All Microsoft server products and data intensive executables in Windows are marked with the large address space awareness flag" and "Because the address space on 64-bit Windows is much larger than 4GB, something I’ll describe shortly, Windows can give 32-bit processes the maximum 4GB that they can address and use the rest for the operating system’s virtual memory", leads me to believe that the answer is "yes", but I'm not totally confident.

    Read the article

  • SQL database testing: How to capture state of my database for rollback.

    - by Rising Star
    I have a SQL server (MS SQL 2005) in my development environment. I have a suite of unit tests for some .NET code that will connect to the database and perform some operations. If the code under test works correctly, then the database should be in the same (or a similar) state to how it was before the tests. However, I would like to be able to roll back the database to its state from before the tests ran. One way of doing this would be to programmatically use transactions to roll back each test operation, but this is difficult and cumbersome to program and could easily lead to errors in the test code. I would like to be able to run my tests confidently, knowing that if they destroy my tables, I can quickly restore them. What is a good way to save a snapshot of one of my databases with its tables so that I can easily restore the database to its state from before the test?
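
    If your edition supports it (database snapshots require Enterprise or Developer Edition in SQL Server 2005), a database snapshot is a good fit: create one before the test run and revert to it afterwards. A minimal sketch, where the database name, logical file name and snapshot path are all assumptions:

-- Before the tests: capture the current state
CREATE DATABASE TestDB_Snapshot ON
    ( NAME = TestDB,                                   -- logical name of the source data file
      FILENAME = 'C:\Snapshots\TestDB_Snapshot.ss' )   -- sparse file holding the pre-change pages
AS SNAPSHOT OF TestDB;

-- After the tests: revert the database to the snapshot...
USE master;
RESTORE DATABASE TestDB FROM DATABASE_SNAPSHOT = 'TestDB_Snapshot';

-- ...and clean up
DROP DATABASE TestDB_Snapshot;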

    Read the article

  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio Database project (VS 2010). When code is released to live we currently use the Visual Studio Database Project to build a script for all our schema changes. The problem we have been getting is having to alter or add to that script to add/fix data for the deployment. For example if we add a new non-nullable column to an existing table we need to populate that column with data during the insert. Other times we may want to create new records in transactional tables (e.g. assign specific users to a new security access). Do Visual Studio Database Projects have a way to store these scripts that only need to be run once and somehow include them in the build? Does it know which scripts need to be run (for example if we are inserting default data we don't want to do that again a second time)? OR Is there a better way to manage these scripts?
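
    Visual Studio Database Projects do have a hook for this: a post-deployment script (Script.PostDeployment.sql) that runs after every schema deployment. The project won't track which data scripts have already run, though, so the usual convention is to write each one idempotently, so that running it a second time is a no-op. A sketch, with hypothetical table and column names:

-- Backfill a newly added column only for rows that haven't been set
UPDATE dbo.Orders
SET    Region = 'UNKNOWN'
WHERE  Region IS NULL;

-- Seed a transactional row only if it doesn't already exist
IF NOT EXISTS (SELECT 1 FROM dbo.SecurityAccess
               WHERE UserName = 'jsmith' AND Feature = 'NewReport')
    INSERT INTO dbo.SecurityAccess (UserName, Feature)
    VALUES ('jsmith', 'NewReport');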

    Read the article

  • Collation errors in business

    - by Rob Farley
    At the PASS Summit last month, I did a set (Lightning Talk) about collation, and in particular, the difference between the “English” spoken by people from the US, Australia and the UK. One of the examples I gave was that in the US drivers might stop for gas, whereas in Australia, they just open the window a little. This is what’s known as a paraprosdokian, where you suddenly realise you misunderstood the first part of the sentence, based on what was said in the second. My current favourite is Emo Philips’ line “I like to play chess with old men in the park, but it can be hard to find thirty-two of them.” Essentially, this is a collation error, one that good comedians can get mileage from. Unfortunately, collation is at its worst when we have a computer comparing two things in different collations. They might look the same, and sound the same, but if one of the things is in SQL English, and the other one is in Windows English, the poor database server (with no sense of humour) will get suspicious of developers (who all have senses of humour, obviously), and declare a collation error, worried that it might not realise some nuance of the language. One example is the common scenario of a case-sensitive collation and a case-insensitive one. One may think that “Rob” and “rob” are the same, but the other might not. Clearly one of them is my name, and the other is a verb which means to steal (people called “Nick” have the same problem, of course), but I have no idea whether “Rob” and “rob” should be considered the same or not – it depends on the collation. I told a lie before – collation isn’t at its worst in the computer world, because the computer has the sense to complain about the collation issue. People don’t. People will say something, with their own understanding of what they mean. Other people will listen, and apply their own collation to it. I remember when someone was asking me about a situation which had annoyed me. They asked if I was ‘pissed’, and I said yes. I meant that I was annoyed, but they were asking if I’d been drinking. It took a moment for us to realise the misunderstanding. In business, the problem is escalated. A business user may explain something in a particular way, using terminology that they understand, but using words that mean something else to a technical person. I remember a situation with a checkbox on a form (back in VB6 days from memory). It was used to indicate that something was approved, and indicated whether a particular database field should store True or False – nothing more. However, the client understood it to mean that an entire workflow system would be implemented, with different users having permission to approve items, and more. The project manager I’d just taken over from clearly hadn’t appreciated that, and I faced a situation of explaining the misunderstanding to the client. Lots of fun... Collation errors aren’t just a database setting that you can ignore. You need to remember that Americans speak a different type of English to Aussies and Poms, and techies speak a different language to their clients.
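
    For the T-SQL flavour of the Rob/rob example, here’s a small sketch forcing the same comparison under two collations:

SELECT CASE WHEN 'Rob' = 'rob' COLLATE Latin1_General_CI_AS
            THEN 'same' ELSE 'different' END AS CaseInsensitive,
       CASE WHEN 'Rob' = 'rob' COLLATE Latin1_General_CS_AS
            THEN 'same' ELSE 'different' END AS CaseSensitive;

-- Returns: same, different. And comparing two columns that carry
-- different collations without an explicit COLLATE clause is what
-- produces the familiar "Cannot resolve the collation conflict" error.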

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Install Sql Server Developer Edition 32-bit (or Enterprise Edition) on Windows 7 Home Premium 64-bit

    - by ali62b
    Is there any workaround to successfully install SQL Server 2008 32-bit on Windows 7 Home Premium 64-bit? In case it matters: I first installed VS 2008 SP1 on my machine, and when I click on the install.exe file for installing SQL Server 2008 (Developer Edition), I get an error related to the .NET Framework version which is already installed on my PC. (I get the same error trying to install Enterprise Edition.)

    Read the article

  • How to upgrade to SQL 2008

    - by picflight
    What is the difference between SQL 2008 and SQL 2008 R2? Should I uninstall SQL 2005 and install SQL 2008 Web Edition, or should I upgrade SQL 2005 in place to SQL 2008 Web Edition? I will also need to make sure the logins are transferred over, as many of my web applications have a login on the SQL 2005 server.
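
    On the login question: the standard route is Microsoft's sp_help_revlogin script (KB 918992), which scripts out logins complete with their original SIDs and password hashes so they can be replayed on the new instance. A sketch, assuming you have created the procedure from the KB article on the SQL 2005 server:

-- On the old (SQL 2005) instance, after creating sp_help_revlogin per KB 918992:
EXEC sp_help_revlogin;

-- Copy the CREATE LOGIN statements it prints and run them on the
-- SQL 2008 instance. Because the SIDs are preserved, database users
-- restored with your databases keep mapping to the right logins.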

    Read the article

  • How do I get SQL Profiler to show statements with column names like 'password'?

    - by Kev
    I'm profiling a database just now and need to see the UPDATE and INSERT statements being executed on a particular table. However, because the table has a 'Password' column, SQL Profiler is being understandably cautious and replacing the TextData column with:

-- 'password' was found in the text of this event.
-- The text has been replaced with this comment for security reasons.

    How do I prevent it doing this? I need to see the SQL statement being executed.

    Read the article

  • Alias a linked Server in SQL server management studio?

    - by absentmindeduk
    Hoping someone can help – is there a way in SQL Server Management Studio 2008 R2 to alias a linked SQL server? I have a server, added by IP address, to which I do not have the login credentials – however, as the connection is already set up, I can log in OK. The issue is that this is a dev environment, prior to a live deployment, and the IP I have as a linked server needs to be 'accessible' by my stored procs under a different name, e.g. 'myserver', not 192.168.xxx.xxx... Any help much appreciated.
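
    One common workaround (a sketch, not tested against this exact setup) is to add a second linked server whose name is the alias you want, pointing its data source at the IP – the linked server name and the actual address don't have to match:

EXEC master.dbo.sp_addlinkedserver
    @server     = N'myserver',         -- the alias your stored procs will reference
    @srvproduct = N'',
    @provider   = N'SQLNCLI',          -- SQL Server Native Client
    @datasrc    = N'192.168.xxx.xxx';  -- the real server's IP, as elided in the question

-- Map a login for the alias (the credentials below are placeholders)
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname  = N'myserver',
    @useself     = N'False',
    @rmtuser     = N'linked_login',
    @rmtpassword = N'********';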

    Read the article

  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don’t store data. I’ve written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it’s the topic for T-SQL Tuesday this month. Hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him – it’s a great topic. In that last post I discussed the fact that we write queries against tables, but that the engine turns it into a plan against indexes. My point wasn’t simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes – never tables – and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us, but anyway – that’s not the point of this post.

    So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers – these are all things that define the table, but the data isn’t stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It’s important. Stop thinking about tables, and start thinking about indexes.

    So let’s think about tables as indexes. This applies even in a world created by someone else, who doesn’t have the best indexes in mind for you. I’m sure you don’t need me to explain the Covering Index bit – the fact that if you don’t have sufficient columns “included” in your index, your query plan will either have to do a Lookup, or else it’ll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven’t seen that before, drop me a line and I’ll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision.

    So – what I’m going to tell you is that a Lookup is a join. When I run

SELECT CustomerID
FROM Sales.SalesOrderHeader
WHERE SalesPersonID = 285;

    against the AdventureWorks2012 database, I get a plan containing a join. Don’t look in the query, it’s not there. But you should be able to see the join in the plan. It’s an Inner Join, implemented by a Nested Loop. It’s pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is – the QO wouldn’t call it that if it wasn’t really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn’t have a problem accepting it as a join if the query were slightly different, such as

SELECT sod.OrderQty
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod ON sod.SalesOrderID = soh.SalesOrderID
WHERE soh.SalesPersonID = 285;

    Amazingly similar, of course. This one is an explicit join; the first example was just as much a join, even though you didn’t actually ask for one. You need to consider this when you’re thinking about your queries. But it gets more interesting. Consider this query:

SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE SalesPersonID = 276
AND CustomerID = 29522;

    It doesn’t look like there’s a join here either, but look at the plan. That’s not some Lookup in action – that’s a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that’s indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just like when you seek in the phone book to Farley, the Farleys you have are ordered by FirstName, these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn’t explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable. Another example is the simple query

SELECT CustomerID
FROM Sales.SalesOrderHeader
WHERE SalesPersonID = 276;

    This one prefers a Hash Match to a standard lookup even! This isn’t just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn’t try to distinguish between joining two tables and joining two indexes. The Query Optimizer can see (using basic maths) that it’s worth doing these particular operations using these two less-than-ideal indexes (because of course, the best indexes would be on both columns – a composite such as (SalesPersonID, CustomerID) – and it would have the SalesOrderID column as part of it as the CIX key still). You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture about how you’d like your queries to run. If you start to think about what data you need, where it’s coming from, and how it’s going to be used, then you will almost certainly write better queries. …and yes, this would include when you’re dealing with regular joins across multiple tables, not just joins within single-table queries.
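
    To make that closing point concrete, here’s a sketch of the composite index described above (the index name is mine):

-- Both predicate columns in one non-clustered index; the clustered
-- key (SalesOrderID) is carried in the index automatically, so the
-- earlier queries collapse to a single seek with no join operator.
CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_SalesPerson_Customer
    ON Sales.SalesOrderHeader (SalesPersonID, CustomerID);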

    Read the article

  • .Net Windows Service Throws EventType clr20r3 system.data.sqlclient.sql error

    - by William Edmondson
    I have a .NET/C# 2.0 Windows service. The entry point is wrapped in a try/catch block, yet when I look at the server's application event log I see a number of "EventType clr20r3" errors that are causing the service to die unexpectedly. The catch block has a "catch (Exception ex)". Each SQL command is of the type "CommandType.StoredProcedure" and is executed with SqlDataReader's. These sproc calls function correctly 99% of the time and have all been thoroughly unit tested, profiled, and QA'd. I additionally wrapped these calls in try/catch blocks just to be sure and am still experiencing these unhandled exceptions. This happens only in our production environment and cannot be duplicated in our dev or staging environments (even under heavy load). Why would my error handling not catch this particular error? Is there any way to capture more detail as to the root cause of the problem? Here is an example of the event log: EventType clr20r3, P1 RDC.OrderProcessorService, P2 1.0.0.0, P3 4ae6a0d0, P4 system.data, P5 2.0.0.0, P6 4889deaf, P7 2490, P8 2c, P9 system.data.sqlclient.sql, P10 NIL. Additionally: The Order Processor service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service.

    Read the article

  • LINQ To SQL ignore unique constraint exception and continue

    - by Martin
    I have a single table in a database called Users:

Users
-----
ID (PK, Identity)
Username (Unique Index)

    I have set up a unique index on the Username column to prevent duplicates. I am then enumerating through a collection and creating a new user in the database for each item. What I want to do is just insert a new user and ignore the exception if the unique key constraint is violated (as it's clearly a duplicate record in that case). This is to avoid having to craft "where not exists" kinds of queries. First off, is this going to be any more efficient, or should my insert code be checking for duplicates instead? I'm drawn more to the database having that logic, as this prevents any other type of client from inserting duplicate data. My other issue is related to LINQ to SQL. I have the following code:

public class TestRepo
{
    DatabaseDataContext database = new DatabaseDataContext();

    public void Add(string username)
    {
        database.Users.InsertOnSubmit(new User() { Username = username });
    }

    public void Save()
    {
        database.SubmitChanges();
    }
}

    And then I iterate over a collection and insert new users, ignoring any exceptions:

TestRepo repo = new TestRepo();
foreach (var name in new string[] { "Tim", "Bob", "John" })
{
    try
    {
        repo.Add(name);
        repo.Save();
    }
    catch { }
}

    The first time this is run, great – I have three users in the table. If I remove the second one and run this code again, nothing is inserted. I expected the first insert to fail with the exception, the second to succeed (as I just removed that item from the DB) and the third to then fail. What seems to be happening is that once the SqlException is thrown (even though the loop continues to iterate), all of the subsequent inserts fail – even when there isn't a row in the table that would cause a unique violation. Can anyone explain this? P.S. The only workaround I could find was to instantiate the repo each time before the insert; then it worked exactly as expected – indicating that it's something to do with the LINQ to SQL DataContext. Thanks.

    Read the article

  • Complex SQL Query similar to a z order problem

    - by AaronLS
    I have a complex SQL problem in MS SQL Server, and in drawing on a piece of paper I realized that I could think of it as a single bar filled with rectangles, each rectangle having segments with different z-orders. In reality it has nothing to do with z-order or graphics at all, but more to do with some complex business rules that would be difficult to explain. However, if anyone has ideas on how to solve the below, that will give me my solution. I have the following data (ObjectID, PercentOfBar, ZOrder – where smaller is closer):

A, 100, 6
B, 50, 5
B, 50, 4
C, 30, 3
C, 70, 6

    The result of my query that I want is this, in any order (PercentOfBar, ZOrder):

50, 5
20, 4
30, 3

    Think of it like this: if I drew rectangle A, it would fill 100% of the bar and have a z-order of 6.

66666666666
AAAAAAAAAAA

    If I then laid out rectangle B, consisting of two segments, both segments would cover up rectangle A, resulting in the following rendering:

4444455555
BBBBBBBBBB

    As a rule of thumb, for a given rectangle, its segments should be laid out such that the highest z-order is to the right of the lower z-orders. Finally, rectangle C would cover up only portions of rectangle B with its 30% segment that has z-order 3, which would be on the left. You can hopefully see how this is represented in the output dataset I listed above:

3334455555
CCCBBBBBBB

    Now to make things more complicated, I actually have a 4th column, such that this grouping occurs for each key. Input (SomeKey, ObjectID, PercentOfBar, ZOrder – where smaller is closer):

X, A, 100, 6
X, B, 50, 5
X, B, 50, 4
X, C, 30, 3
X, C, 70, 6
Y, A, 100, 6
Z, B, 50, 2
Z, B, 50, 6
Z, C, 100, 5

    Output (SomeKey, PercentOfBar, ZOrder):

X, 50, 5
X, 20, 4
X, 30, 3
Y, 100, 6
Z, 50, 2
Z, 50, 5

    Notice that in the output, the PercentOfBar for each SomeKey adds up to 100%. This is one I know I'm going to be thinking about when I go to bed tonight. Just to be explicit and have a question: what would be a query that would produce the results described above?

    Read the article

  • Fastest way to remove non-numeric characters from a VARCHAR in SQL Server

    - by Dan Herbert
    I'm writing an import utility that is using phone numbers as a unique key within the import. I need to check that the phone number does not already exist in my DB. The problem is that phone numbers in the DB could have things like dashes and parentheses and possibly other characters. I wrote a function to remove these things; the problem is that it is slow, and with thousands of records in my DB and thousands of records to import at once, this process can be unacceptably slow. I've already made the phone number column an index. I tried using the script from this post: http://stackoverflow.com/questions/52315/t-sql-trim-nbsp-and-other-non-alphanumeric-characters but that didn't speed it up any. Is there a faster way to remove non-numeric characters? Something that can perform well when 10,000 to 100,000 records have to be compared. Whatever is done needs to perform fast. Update: Given what people responded with, I think I'm going to have to clean the fields before I run the import utility. To answer the question of what I'm writing the import utility in, it is a C# app. I'm comparing BIGINT to BIGINT now, with no need to alter DB data, and I'm still taking a performance hit with a very small set of data (about 2,000 records). Could comparing BIGINT to BIGINT be slowing things down? I've optimized the code side of my app as much as I can (removed regexes, removed unnecessary DB calls). Although I can't isolate SQL as the source of the problem anymore, I still feel like it is.
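
    For the cleanup pass itself, a common T-SQL approach is a PATINDEX/STUFF loop – a sketch (the function and table names are mine); once the column is stored clean, the import can compare values directly instead of cleaning on every comparison:

CREATE FUNCTION dbo.StripNonNumeric (@input VARCHAR(50))
RETURNS VARCHAR(50)
AS
BEGIN
    -- Remove offending characters one at a time until only digits remain
    WHILE PATINDEX('%[^0-9]%', @input) > 0
        SET @input = STUFF(@input, PATINDEX('%[^0-9]%', @input), 1, '');
    RETURN @input;
END;
GO

-- One-time cleanup so the indexed phone number column is digits-only
UPDATE dbo.Customers   -- hypothetical table
SET    PhoneNumber = dbo.StripNonNumeric(PhoneNumber);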

    Read the article

  • How to Convert using of SqlLit to Simple SQL command in C#

    - by Nasser Hajloo
    I want to get started with the DayPilot control. I do not use SQLite, and this control is documented based on SQLite. I want to use SQL Server instead of SQLite, so if you can, please do this for me. The main site with samples: http://www.daypilot.org/calendar-tutorial.html The database contains a single table with the following structure:

CREATE TABLE event (
    id VARCHAR(50),
    name VARCHAR(50),
    eventstart DATETIME,
    eventend DATETIME);

    Loading events:

private DataTable dbGetEvents(DateTime start, int days)
{
    SQLiteDataAdapter da = new SQLiteDataAdapter(
        "SELECT [id], [name], [eventstart], [eventend] FROM [event] WHERE NOT (([eventend] <= @start) OR ([eventstart] >= @end))",
        ConfigurationManager.ConnectionStrings["db"].ConnectionString);
    da.SelectCommand.Parameters.AddWithValue("start", start);
    da.SelectCommand.Parameters.AddWithValue("end", start.AddDays(days));
    DataTable dt = new DataTable();
    da.Fill(dt);
    return dt;
}

    Update:

private void dbUpdateEvent(string id, DateTime start, DateTime end)
{
    using (SQLiteConnection con = new SQLiteConnection(ConfigurationManager.ConnectionStrings["db"].ConnectionString))
    {
        con.Open();
        SQLiteCommand cmd = new SQLiteCommand("UPDATE [event] SET [eventstart] = @start, [eventend] = @end WHERE [id] = @id", con);
        cmd.Parameters.AddWithValue("id", id);
        cmd.Parameters.AddWithValue("start", start);
        cmd.Parameters.AddWithValue("end", end);
        cmd.ExecuteNonQuery();
    }
}

    Read the article

  • Linq to SQL not inserting data onto the DB

    - by Jesus Rodriguez
    Hello! I have some little/weird behaviour here, and I've been looking over the internet and SO and didn't find a response. I have to admit that this is my first time using databases; I know SQL but have never actually used it from an application. Anyway, I have a problem with my app inserting data, and I just created a very simple project for testing this. I have an example database in SQL Server:

Id - int (identity, primary key)
Name - nchar(10) (not null)

    The table is called "Person", simple as pie. I have this:

static void Main(string[] args)
{
    var db = new ExampleDBDataContext {Log = Console.Out};
    var jesus = new Person {Name = "Jesus"};
    db.Persons.InsertOnSubmit(jesus);
    db.SubmitChanges();

    var query = from person in db.Persons
                select person;

    foreach (var p in query)
    {
        Console.WriteLine(p.Name);
    }
}

    As you can see, nothing strange. It shows Jesus in the console. But if you look at the table data, there is no data, just empty. If I comment out the object creation and insertion, the foreach doesn't print a thing (normal – there is no data in the database). The weird thing is that I created a row in the database manually and the Id was 2 and not 1 (was LINQ really talking to the database but not creating the row?). Here is the log:

INSERT INTO [dbo].Person VALUES (@p0)
SELECT CONVERT(Int,SCOPE_IDENTITY()) AS [value]
-- @p0: Input NChar (Size = 10; Prec = 0; Scale = 0) [Jesus]
-- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926

SELECT [t0].[Id], [t0].[Name]
FROM [dbo].[Person] AS [t0]
-- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926

    I am really confused. All the blogs/books use this kind of snippet to insert an element into a database. Thank you for helping.

    Read the article

  • Oracle 10g express home page is not coming up

    - by Adnan Anwar
    Hello all: With my Oracle 10g Express Edition, I can log in via SQL*Plus, but I cannot log in to Oracle via SQL Developer and cannot view the home page at the link http://127.0.0.1:8080/apex. This was working fine until yesterday. I have checked via (Windows) netstat -ab, and no other app is using port 8080. The only thing I did today was change my SQL Server 2005 Developer Edition from Windows authentication to mixed-mode authentication. Can anyone let me know how to get the Oracle web page and SQL Developer to work? I will greatly appreciate that. Thanks! Adnan A.
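
    One thing worth checking from the SQL*Plus session that still works is the port the embedded Apex listener is using – DBMS_XDB is the standard package for this in 10g XE, though whether this is the cause here is only a guess:

-- Which port is the XML DB HTTP listener on? (0 means disabled)
SELECT DBMS_XDB.GETHTTPPORT FROM DUAL;

-- If it has been changed or disabled, point it back at 8080
EXEC DBMS_XDB.SETHTTPPORT(8080);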

    Read the article
