Search Results

Search found 33585 results on 1344 pages for 'sql execution plan'.

Page 165/1344 | < Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >

  • The EU Commission's Digital Agenda Plan

    Groklaw: "I can't help but think of Microsoft's recent bragging about not being fully interoperable with Google Docs. I think they're not yet on the interoperability train that is already leaving the station, and I hope they hop on board before it's too late."

    Read the article

  • How many tasks to plan beforehand [closed]

    - by no__seriously
    As for my daily routine: every morning when I come to work, I look at the items in my todo-list inbox (noted the previous day). For each task I think about which day I should start it on, and then group them accordingly. Once that's finished, I get started with my actual schedule for the day. Now, this pre-planning for each task (which could concern anything from user interface work to compiler programming) is mostly pretty sketchy. Serious thought about design and implementation comes when the task is about to be tackled. This approach works for me and I can't really complain. But I'm wondering: since I'm personally most productive during the morning, would it make sense to go into a deeper level of planning right away for each task? Or is that unproductive and more likely to confuse than clarify? I think the latter. How do you handle your task management for each task/project, and how far do you go with planning before even getting started on that item?

    Read the article

  • Should we use Visual Studio 2010 for all SQL Server Database Development?

    - by Luke
    Our company currently has seven dedicated SQL Server 2008 servers each running an average of 10 databases. All databases have many stored procedures and UDFs that commonly reference other databases both on the same server and also across linked servers. We currently use SSMS for all database related administration and development but we have recently purchased Visual Studio 2010 primarily for ongoing C# WinForms and ASP.NET development. I have used VS2010 to perform schema comparisons when rolling out changes from a development server into production and I'm finding it great for this task. We would like to consider using VS2010 for all database development going forward but as far as I understand, we would have to set up ALL databases as projects because of the dependencies on linked servers etc. My question is, do you have any experience using VS2010 for database development in a similar environment? Is it easy to use in tandem with SSMS or is it a one way street once VS2010 projects have been set up for all databases? Can you make any recommendations/impart any experience with a similar scenario? Thanks, Luke

    Read the article

  • How dynamic can I make my LINQ To SQL Statements?

    - by mcass20
    I have the need to construct a LINQ To SQL statement at runtime based on input from a user and I can't seem to figure out how to dynamically build the WHERE clause. I have no problem with the following: string Filters = "<value>FOO</value>"; Where("FormattedMessage.Contains(@0)",Filters) But what I really need is to make the entire WHERE clause dynamic. This way I can add multiple conditions at runtime like this (rough idea): foreach (Filter filter in filterlist) { whereclause = whereclause + "&& formattedmessage.contains(filter)"; }

    Read the article

  • Is there a better way to convert SQL datetime from hh:mm:ss to hhmmss?

    - by Johann J.
    I have to write an SQL view that returns the time part of a datetime column as a string in the format hhmmss (apparently SAP BW doesn't understand hh:mm:ss). This code is the SAP-recommended way to do it, but I think there must be a better, more elegant way:

        TIME = case len(convert(varchar(2), datepart(hh, timecolumn)))
                   when 1 then /* Hour Part of TIMES */
                       case convert(varchar(2), datepart(hh, timecolumn))
                           when '0' then '24' /* Map 00 to 24 ( TIMES ) */
                           else '0' + convert(varchar(1), datepart(hh, timecolumn))
                       end
                   else convert(varchar(2), datepart(hh, timecolumn))
               end
               + case len(convert(varchar(2), datepart(mi, timecolumn)))
                     when 1 then '0' + convert(varchar(1), datepart(mi, timecolumn))
                     else convert(varchar(2), datepart(mi, timecolumn))
                 end
               + case len(convert(varchar(2), datepart(ss, timecolumn)))
                     when 1 then '0' + convert(varchar(1), datepart(ss, timecolumn))
                     else convert(varchar(2), datepart(ss, timecolumn))
                 end

    This accomplishes the desired result: 21:10:45 is displayed as 211045. I'd love something more compact and easily readable, but so far I've come up with nothing that works.
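
    One more compact alternative, offered here as a sketch rather than something from the post, is to render the time with CONVERT style 108 (hh:mm:ss) and strip the colons. Note it does not reproduce the 00-to-24 hour mapping the SAP code performs; that would need an extra CASE.

        -- Sketch: style 108 formats the time part as hh:mm:ss; REPLACE drops the colons.
        -- Does not map hour 00 to 24 as the SAP-recommended code does.
        SELECT REPLACE(CONVERT(varchar(8), timecolumn, 108), ':', '') AS TimeString
        FROM dbo.SomeTable;  -- hypothetical table name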

    Read the article

  • In T-SQL how to display columns for given table name?

    - by salvationishere
    I am trying to list all of the columns from whichever AdventureWorks table I choose. What T-SQL statement or stored proc can I execute to see this list of all columns? I want my C# web app to pass one input parameter = table_name and then get a list of all the column_names as output. Right now I am trying to execute the sp_columns stored proc, which works, but I can't get just the one column with select and exec combined. Does anybody know how to do this?
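
    One common way to do this with a single parameter, offered as a sketch rather than the asker's final solution, is to query INFORMATION_SCHEMA.COLUMNS:

        -- Sketch: @table_name is the single input parameter passed from the C# web app.
        SELECT COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = @table_name
        ORDER BY ORDINAL_POSITION;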

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts).  If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance.  There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong.  Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan.  Today, I am going to look at a case where poor cardinality estimation is Microsoft’s fault, and not yours. SQL Server 2005 SELECT th.ProductID, th.TransactionID, th.TransactionDate FROM Production.TransactionHistory AS th WHERE th.ProductID = 1 AND th.TransactionDate BETWEEN '20030901' AND '20031231'; The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007): There is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified.  The cardinality estimate of 45 rows at the Index Seek is exactly correct.  The table is not very large, there are up-to-date statistics associated with the index, so this is as expected. The estimate for the Key Lookup is also exactly right.  Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row.  The plan shows that the Key Lookup is expected to be executed 45 times.  The estimate for the Inner Join output is also correct – 45 rows from the seek joining to one row each time, gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates.  Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here.  All good so far. SQL Server 2008 onward The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: The optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup.  This is a good optimization – it makes sense to filter rows out as early as possible.  Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date.  Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup – clearly not right!  This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total). 
Workarounds One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem of course): CREATE INDEX nc1 ON Production.TransactionHistory (ProductID) INCLUDE (TransactionDate); With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected: We could also force the use of the ultimate covering index (the clustered one): SELECT th.ProductID, th.TransactionID, th.TransactionDate FROM Production.TransactionHistory AS th WITH (INDEX(1)) WHERE th.ProductID = 1 AND th.TransactionDate BETWEEN '20030901' AND '20031231'; Summary Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal.  Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong.  It’s not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world) but it easily can be much worse, and there’s not much you can do about it.  Any decisions made by the optimizer after such a lookup could be based on very wrong information – which can only be bad news. If you think this situation should be improved, please vote for this Connect item. © 2012 Paul White – All Rights Reserved twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • How to reference a sql server with a slash (\) in its name?

    - by Bill Paetzke
    Givens: One SQL Server is named: DevServerA Another is named: DevServerB\2K5 Problem: From DevServerA, how can I write a query that references DevServerB\2K5? I tried a sample, dummy query (running it from DevServerA): SELECT TOP 1 * FROM DevServerB\2K5.master.sys.tables And I get the error: Msg 102, Level 15, State 1, Line 2 Incorrect syntax near '\.'. However, I know my syntax is almost correct, since the other way around works (running this query from DevServerB\2K5): SELECT TOP 1 * FROM DevServerA.master.sys.tables Please help me figure out how to reference DevServerB\2K5 from DevServerA. Thanks.
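
    For reference, the usual fix is to bracket-quote the linked server name so the backslash is not parsed as part of the four-part name syntax; a sketch, assuming DevServerB\2K5 is already configured as a linked server on DevServerA:

        -- Sketch: square brackets let the server part of the four-part name
        -- contain the instance's backslash.
        SELECT TOP 1 *
        FROM [DevServerB\2K5].master.sys.tables;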

    Read the article

  • Algorithms to trim leading zeroes from a SQL field?

    - by froadie
    I just came across the interesting problem of trying to trim the leading zeroes from a non-numeric field in SQL. (Since it can contain characters, it can't just be converted to a number and then back.) This is what we ended up using: SELECT REPLACE(LTRIM(REPLACE(fieldWithLeadingZeroes,'0',' ')),' ','0') It replaces the zeroes with spaces, left trims it, and then puts the zeroes back in. I thought this was a very clever and interesting way to do it, although not so readable if you've never come across it before. Are there any clearer ways to do this? Any more efficient ways to do this? Or any other ways to do this period? I was intrigued by this problem and would be interested to see any methods of getting around it.
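
    One alternative sometimes used, shown here as a sketch rather than something from the post, locates the first non-zero character with PATINDEX and takes the substring from there; appending a sentinel character handles the all-zeros case (which then yields an empty string):

        -- Sketch: PATINDEX finds the position of the first character that is not '0'.
        -- The appended '.' guarantees a match when the value is all zeros
        -- (the result for such a value is an empty string).
        SELECT SUBSTRING(fieldWithLeadingZeroes,
                         PATINDEX('%[^0]%', fieldWithLeadingZeroes + '.'),
                         LEN(fieldWithLeadingZeroes))
        FROM dbo.SomeTable;  -- hypothetical table name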

    Read the article

  • Is using Natural Join or Implicit column names not a good practice when writing SQL in a programming language?

    - by Jian Lin
    When we use Natural Join, we are joining the tables when both tables have the same column names. But what if we write it in PHP and the DBA then adds some more fields to both tables? The Natural Join can break. The same goes for Insert: if we do insert into gifts values (NULL, "chocolate", "choco.jpg", now()); then it will break the code as well as contaminate the table when the DBA adds some fields to the table (for example as column 2 or 3). So it is always best to spell out the column names when the SQL statements are written inside a programming language and stored in a file in a big project.
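
    A minimal sketch of the explicit-column form the post argues for; the column names are hypothetical, since the post only shows the values:

        -- Sketch: naming the columns keeps the INSERT valid even if the DBA
        -- later adds fields to the gifts table.
        INSERT INTO gifts (gift_id, gift_name, image_file, created_at)
        VALUES (NULL, 'chocolate', 'choco.jpg', NOW());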

    Read the article

  • Can I encrypt value in C# and use that with SQL Server 2005 symmetric encryption?

    - by Robert Byrne
    To be more specific, if I create a symmetric key with a specific KEY_SOURCE and ALGORITHM (as described here), is there any way that I can set up the same key and algorithm in C# so that I can encrypt data in code, but have that data decrypted by the symmetric key in SQL Server? From the research I've done so far, it seems that the IDENTITY_VALUE for the key is also baked into the cypher text, making things even more complex. I'm thinking about just trying all the various ways I can think of, i.e. hashing the KEY_SOURCE using different hash algorithms for a key and trying different ways of encrypting the plain text until I get something that works. Or is that just futile? Has anyone else done this, any pointers? UPDATE: Just to clarify, I want to use NHibernate on the client side, but there's a bunch of stored procedures on the database side that still perform decryption.

    Read the article

  • How to do a Postgresql subquery in select clause with join in from clause like SQL Server?

    - by Ricardo
    I am trying to write the following query on PostgreSQL: select name, author_id, count(1), (select count(1) from names as n2 where n2.id = n1.id and t2.author_id = t1.author_id ) from names as n1 group by name, author_id This would certainly work on Microsoft SQL Server but it does not work at all on PostgreSQL. I read its documentation a bit and it seems I could rewrite it as: select name, author_id, count(1), total from names as n1, (select count(1) as total from names as n2 where n2.id = n1.id and n2.author_id = t1.author_id ) as total group by name, author_id But that returns the following error on PostgreSQL: "subquery in FROM cannot refer to other relations of same query level". So I'm stuck. Does anyone know how I can achieve that? Thanks

    Read the article

  • What can I do about a SQL Server ghost FK constraint?

    - by rcook8601
    I'm having some trouble with a SQL Server 2005 database that seems like it's keeping a ghost constraint around. I've got a script that drops the constraint in question, does some work, and then re-adds the same constraint. Normally, it works fine. Now, however, it can't re-add the constraint because the database says that it already exists, even though the drop worked fine! Here are the queries I'm working with: alter table individual drop constraint INDIVIDUAL_EMP_FK ALTER TABLE INDIVIDUAL ADD CONSTRAINT INDIVIDUAL_EMP_FK FOREIGN KEY (EMPLOYEE_ID) REFERENCES EMPLOYEE After the constraint is dropped, I've made sure that the object really is gone by using the following queries: select object_id('INDIVIDUAL_EMP_FK') select * from sys.foreign_keys where name like 'individual%' Both return no results (or null), but when I try to add the query again, I get: The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "INDIVIDUAL_EMP_FK". Trying to drop it gets me a message that it doesn't exist. Any ideas?
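
    Worth noting: the quoted message is the one SQL Server raises when existing rows violate the foreign key being added, not when the constraint name already exists, so the "ghost" is probably orphaned data. A hedged check (the referenced column is assumed to be EMPLOYEE.EMPLOYEE_ID, which the post does not spell out):

        -- Sketch: INDIVIDUAL rows whose EMPLOYEE_ID has no matching EMPLOYEE row
        -- would block ALTER TABLE ... ADD CONSTRAINT with exactly this error.
        SELECT i.EMPLOYEE_ID
        FROM INDIVIDUAL AS i
        LEFT JOIN EMPLOYEE AS e ON e.EMPLOYEE_ID = i.EMPLOYEE_ID
        WHERE i.EMPLOYEE_ID IS NOT NULL
          AND e.EMPLOYEE_ID IS NULL;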

    Read the article

  • SQL: How can I update a value on a column only if that value is null?

    - by user321185
    Hey, I have an SQL question which may be basic to some but is confusing me. Here is an example of column names for a table 'Person': PersonalID, FirstName, LastName, Car, HairColour, FavDrink, FavFood. Let's say that I input the row: 121312, Rayna, Pieterson, BMW123d, Brown, NULL, NULL. Now I want to update the values for this person, but only if the new value is not null. Update: 121312, Rayna, Pieterson, NULL, Blonde, Fanta, NULL. The new row needs to be: 121312, Rayna, Pieterson, BMW123d, Blonde, Fanta, NULL. So I was thinking something along the lines of: Update Person(PersonalID, FirstName, LastName, Car, HairColour, FavDrink, FavFood) set Car = @Car (where @Car is not null), HairColour = @HairColour (where @HairColour...)... etc. My only concern is that I can't group all the conditions at the end of the query because it will require all the values to have the same condition. Can't I do something like Update HairColour if @HairColour is not null?
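
    A common pattern for this, sketched here rather than quoted from an answer, is to fall back to the existing value with COALESCE so a NULL parameter leaves the column untouched (only @Car and @HairColour are named in the post; the other parameter names below are assumed to follow the same convention):

        -- Sketch: each column keeps its current value whenever its parameter is NULL.
        UPDATE Person
        SET Car        = COALESCE(@Car, Car),
            HairColour = COALESCE(@HairColour, HairColour),
            FavDrink   = COALESCE(@FavDrink, FavDrink),
            FavFood    = COALESCE(@FavFood, FavFood)
        WHERE PersonalID = @PersonalID;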

    Read the article

  • SQL Server - Storing multiple decimal values in a column?

    - by cshah
    I know storing multiple values in a column is not a good idea. It violates first normal form, which states NO multi-valued attributes. Normalize, period... I am using SQL Server 2005. I have a table that requires storing a lower limit and an upper limit for a measurement; think of it as a minimum and maximum speed limit... the only problem is that for only 2% out of a hundred do I need the upper limit. I will only have data for the lower limit. I was thinking of storing both values in one column (sparse columns were introduced in 2008, so not for me). Is there a way...? Not sure about XML..
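
    For what it's worth, a plain nullable column is usually enough here; a sketch with hypothetical table and column names and an arbitrary precision:

        -- Sketch: UpperLimit is simply nullable; it stays NULL for the ~98% of rows
        -- that only have a lower limit.
        CREATE TABLE dbo.MeasurementLimit
        (
            MeasurementID int NOT NULL PRIMARY KEY,
            LowerLimit    decimal(9, 3) NOT NULL,
            UpperLimit    decimal(9, 3) NULL
        );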

    Read the article

  • What's the best way to capture output from SQL Management Studio and paste it into an Outlook email?

    - by Decker
    I'm constantly executing ad-hoc queries in SQL Management Studio and need to send the results to people via email. This happens several times a day so I'm looking for the best way to copy the results of the query from the results window into an Outlook email body so that it can be formatted in a reader friendly manner. I haven't come up with anything that works well for me. When it really matters, I end up going into Excel, executing the query from within there and then attaching the resulting spreadsheet. I'm looking for something that I can do without involving Excel if possible. Any ideas?

    Read the article

  • How can I update many rows with SQL in a single table?

    - by tmarouda
    Hi folks. I have a table and one of the columns holds web addresses like: 'http://...' or 'https://...'. The problem is that there are some invalid entries, like 'shttp://...' or '#http//...' (the first character is invalid) and I want to correct all of them. I use the following SQL statement: SELECT [...] FROM MyTable WHERE WebAddress LIKE '_http%' and I successfully get the problematic rows. But how am I going to change/correct all of them using an UPDATE statement? If you have some other solution please share it!
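
    A sketch of one way to do the correction; it only strips the single bad leading character, so entries that are also missing the colon, like '#http//', would still need a second pass:

        -- Sketch: STUFF removes the first character from every row the LIKE pattern matched.
        UPDATE MyTable
        SET WebAddress = STUFF(WebAddress, 1, 1, '')
        WHERE WebAddress LIKE '_http%';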

    Read the article

  • Tips for improving performance of DB that is above size 40 GB (Sql Server 2005) and growing monthly

    - by HotTester
    The current DB of our project has crossed 40 GB this month and on average it is growing by around 3 GB monthly. All the tables are properly normalized and proper indexing has been used. But still, as the size grows it is taking more time to run even basic queries like 'select count(1) from table'. So can you share some more pointers that will help on this front? The database is SQL Server 2005. Further, if we implement partitioning, wouldn't it create overhead? Thanks in advance.
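
    One small, concrete point on the slow row counts: the count can be read from partition metadata instead of scanning the table; a sketch (the table name is hypothetical, and the figure is approximate):

        -- Sketch: sums the row counts recorded for the heap or clustered index,
        -- avoiding a full scan; the value can be slightly out of date.
        SELECT SUM(row_count) AS approx_rows
        FROM sys.dm_db_partition_stats
        WHERE object_id = OBJECT_ID('dbo.MyTable')
          AND index_id IN (0, 1);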

    Read the article

  • LINQ to SQL for tables across databases. Or View?

    - by BritishDeveloper
    I have a Message table and a User table. Both are in separate databases. There is a userID in the Message table that is used to join to the User table to find things like userName. How can I create this in LINQ to SQL? I can't seem to do a cross database join. Should I create a View in the database and use that instead? Will that work? What will happen to CRUD against it? E.g. if I delete a message - surely it won't delete the user? I'd imagine it would throw an error. What to do? I can't move the tables into the same database!
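
    If both databases live on the same SQL Server instance, the view idea can be sketched with three-part names (the database, table, and column names below are hypothetical):

        -- Sketch: a view like this can be mapped in LINQ to SQL; deletes and updates
        -- would still be issued against the base Message table, not the view.
        CREATE VIEW dbo.MessageWithUser
        AS
        SELECT m.MessageID, m.UserID, m.Body, u.UserName
        FROM MessageDb.dbo.Message AS m
        JOIN UserDb.dbo.[User]     AS u ON u.UserID = m.UserID;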

    Read the article

  • How do I gain permissions to a Sql Compact Database?

    - by Quenton Jones
    I have an Sql Compact Database v3.5 that I'm bundling with my application. When the application is installed, the database is copied into the application's Program Files directory. Because of Vista and Win7's security settings, the installed application can't access the database file. It is merely a problem of having the database file reside in the Program Files. The solution I have thought of is to copy the file into Program Data, but does anyone have another solution? I am sure others have come across a similar problem. Thanks in advance for your input.

    Read the article

  • How to achieve "merge" of two data sets with Insert SQL statement (Oracle DBMS)?

    - by Roman Kagan
    Hi: What would be the insert SQL statement to merge data from two tables? For example, I have an events_source_1 table (columns: event_type_id, event_date), an events_source_2 table (same columns) and an events_target table (columns: event_type_id, past_event_date nullable, future_event_date nullable). Events_source_1 has past events, events_source_2 has future events, and the resultant events_target would contain past and future events in the same row for the same event_type_id. If there are no past events but there are future events, then past_event_date won't be set and only future_event_date will be, and the opposite is true too. Thanks a lot in advance for helping me resolve this problem.
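
    A sketch of one way to write it with a FULL OUTER JOIN, assuming each source has at most one row per event_type_id:

        -- Sketch: rows present in only one source keep NULL in the other date column.
        INSERT INTO events_target (event_type_id, past_event_date, future_event_date)
        SELECT COALESCE(p.event_type_id, f.event_type_id),
               p.event_date,
               f.event_date
        FROM events_source_1 p
        FULL OUTER JOIN events_source_2 f
             ON f.event_type_id = p.event_type_id;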

    Read the article

  • How do you escape double quotes inside a SQL fulltext 'contains' function?

    - by Richard Davies
    How do you escape a double quote character inside a MS SQL 'contains' function? SELECT decision FROM table WHERE CONTAINS(decision, '34" AND wide') Normally contains() expects double quotes to surround an exact phrase to match, but I want to search for an actual double quote character. I've tried escaping it with \, `, and even another double quote, but none of that has worked. P.S. I realize a simple example like this could also be done using the LIKE statement, but I need to use the fulltext search function. The query I provided here has been simplified from my actual query for example purposes.

    Read the article
