Search Results

Search found 36788 results on 1472 pages for 'sql 2008'.

  • T-SQL in SQL Azure

    - by kaleidoscope
    The following summarizes the Transact-SQL support provided by SQL Azure Database, as presented at PDC 2009.

      Supported Transact-SQL features:
      - Constants
      - Constraints
      - Cursors
      - Index management and rebuilding indexes
      - Local temporary tables
      - Reserved keywords
      - Stored procedures
      - Statistics management
      - Transactions
      - Triggers
      - Tables, joins, and table variables
      - Transact-SQL language elements such as Create/drop databases, Create/alter/drop tables, Create/alter/drop users and logins
      - User-defined functions
      - Views, including the sys.synonyms view

      Unsupported Transact-SQL features:
      - Common Language Runtime (CLR)
      - Database file placement
      - Database mirroring
      - Distributed queries
      - Distributed transactions
      - Filegroup management
      - Global temporary tables
      - Spatial data and indexes
      - SQL Server configuration options
      - SQL Server Service Broker
      - System tables
      - Trace flags

    Amit, S

    Read the article

  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL Servers in our development environment where we never take backups of the databases (TFS for the code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up the data drive on the SQL Server. Since these log files are never used for anything, I would like advice on how best to minimize their size, or even disable them if possible. I'm not completely sure why the log files grow so large even under simple recovery (I checked for long-running transactions with DBCC OPENTRAN but found none). I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. The autogrowth for the log files is set to 10%, restricted to 2 GB, so I guess that is why the checkpoint (70%) isn't reached here either. What would be the best strategy to keep the log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?
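
    A minimal T-SQL sketch of the approach usually suggested for this situation (shrink once, then cap growth). The database name SharePoint_Config and logical log file name SharePoint_Config_log are assumptions; check sys.database_files for the real names before running anything like this:

      -- Database and file names below are assumptions; adjust to the real ones.
      USE SharePoint_Config;

      -- Find the logical name and current size (MB) of each file
      SELECT name, size * 8 / 1024 AS size_mb, type_desc
      FROM sys.database_files;

      -- Force a checkpoint so inactive log records can be reused,
      -- then shrink the log file back to a small fixed size (e.g. 64 MB)
      CHECKPOINT;
      DBCC SHRINKFILE (SharePoint_Config_log, 64);

      -- Optionally cap future growth instead of the 10% / 2 GB setting
      ALTER DATABASE SharePoint_Config
      MODIFY FILE (NAME = SharePoint_Config_log, MAXSIZE = 512MB, FILEGROWTH = 64MB);

    Repeated shrinking is its own performance problem, so the usual advice is to shrink once and then size the log so that it stops growing.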

    Read the article

  • SQL Server Reporting authentication not working

    - by Keith
    I'm not exactly sure what went wrong, but our SQL Server Reporting Services authentication is no longer working correctly. When I try to load the site, it asks for a username and password, and mine doesn't work. I checked the service and it is using NT AUTHORITY\NetworkService to log on. Since it is using NetworkService, I read on Microsoft's site that I need to use these settings in the RSReportServer.config file:

      <AuthenticationTypes>
        <RSWindowsNegotiate />
      </AuthenticationTypes>
      <EnableAuthPersistence>true</EnableAuthPersistence>

    Which is what I have set. It still asks for the password. When I set the authentication to RSWindowsNTLM, it does log in, but every time I click on a link it asks for a password (the password doesn't seem to prevent anything from loading). Anyone know what is going on here? I'm not an expert in SQL Server, so I may be missing something.

    Read the article

  • How do I identify and fix the cause of transaction log growth on SIMPLE recovery model databases?

    - by Stuart B
    I recently upgraded our SQL Server 2008 installations to Service Pack 2. One of our databases is on the simple recovery model, but its transaction log is growing extremely fast. The path I'm currently investigating is that we have a transaction somewhere out there stuck in an active state. Here is why:

      select name, recovery_model_desc, log_reuse_wait_desc
      from sys.databases where name in ('SimpleDB')

      name      recovery_model_desc  log_reuse_wait_desc
      SimpleDB  SIMPLE               ACTIVE_TRANSACTION

    When I check my active transactions, I get the following. Note that I installed SP2 and restarted our server on 12/25 at around noonish.

      select transaction_id, name, transaction_begin_time, transaction_type
      from sys.dm_tran_active_transactions

      transaction_id  name                          transaction_begin_time   transaction_type
      233             worktable                     2010-12-25 12:44:29.283  2
      236             worktable                     2010-12-25 12:44:29.283  2
      238             worktable                     2010-12-25 12:44:29.283  2
      240             worktable                     2010-12-25 12:44:29.283  2
      243             worktable                     2010-12-25 12:44:29.283  2
      245             worktable                     2010-12-25 12:44:29.283  2
      62210           tran_sp_MScreate_peer_tables  2010-12-25 12:45:00.880  1
      55422856        user_transaction              2010-12-28 16:41:56.703  1
      55422889        SELECT                        2010-12-28 16:41:57.303  2
      470             LobStorageProviderSession     2010-12-25 12:44:30.510  2

    Note that according to the documentation, a transaction_type of 1 means read/write and 2 means read-only. So, my line of thinking is that the tran_sp_MScreate_peer_tables transaction is stuck for some reason and holding up transaction log truncation. Is this a plausible scenario? Correct me if my line of thinking is off, as I'm not a SQL Server expert. If this is correct, how do I erase that transaction so that my transaction log is truncated as usual?
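
    A hedged sketch of how a suspect transaction like this is usually mapped to its owning session with the standard 2008 DMVs; the KILL at the end is only appropriate once the session is confirmed to be orphaned:

      -- Map the suspect transaction to a session and its client details
      SELECT st.session_id, at.transaction_id, at.name, at.transaction_begin_time,
             es.login_name, es.host_name, es.program_name
      FROM sys.dm_tran_active_transactions AS at
      JOIN sys.dm_tran_session_transactions AS st
           ON st.transaction_id = at.transaction_id
      LEFT JOIN sys.dm_exec_sessions AS es
           ON es.session_id = st.session_id
      WHERE at.transaction_id = 62210;   -- tran_sp_MScreate_peer_tables

      -- Confirm from the database side which transaction blocks log truncation
      DBCC OPENTRAN ('SimpleDB');

      -- Last resort, once the owning session is confirmed to be orphaned:
      -- KILL <session_id>;

    The name tran_sp_MScreate_peer_tables looks replication-related, so it may also be worth checking whether peer-to-peer replication or a publication was left half-configured on that server.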

    Read the article

  • Sql Server Management Studio: Change Prefix or Suffix characters

    - by PhantomDrummer
    I have an instance of SSMS 2008 in which the option to edit data in a table doesn't work. If I right-click on any table in the Object Explorer and select 'Edit Top 200 Rows', I get an error dialog: 'Invalid prefix or suffix characters. (MS Visual Database Tools)'. The error seems to be associated specifically with SSMS, not with SQL Server (this SSMS instance gives the same error no matter which database I connect to, but I've verified I can connect to some of the same databases using SSMS on other machines without the error). However, our firewall prevents me from using SSMS on other machines for some crucial tasks, so I do need to fix the problem. Googling for the error suggests that I should change the prefix, suffix or escape character, but without any indication of how to make that change in SSMS. I'd also note that I'm not aware of having done any customization of SSMS since installing it, so I would be surprised at having to make such a change now. Does anyone have any idea what the error message means or what I can do about it? Or how I can change the prefix/suffix/escape characters, if that really is the problem?

    Read the article

  • Microsoft SQL Server Management Studio causing system freeze

    - by CRoshanLG
    I'm experiencing very slow response from SSMS, and it causes other applications to slow down. In particular, Skype crashes a few seconds after opening SSMS, showing an error called "Disk I/O Error". I regularly use a few applications (Sublime Text, MS Word, Firefox, Outlook, Skype and one or two other apps) simultaneously. The system works fine when SSMS is not in use, but as soon as SSMS is opened, all the apps start to freeze (SSMS also responds very slowly). This problem has been there for about a week now (I haven't installed any apps or made any changes to the system during that time).

      -- System Specifications --
      Processor: Core i3 (3.1 GHz)
      RAM: 4 GB
      OS: Windows 7 Professional (64-bit)
      Free space on C drive: ~100 GB
      MS SQL Server 2008 R2
      Microsoft SQL Server Management Studio version 10.50.1600.1

    I've tried to find a reason for this, but there is no helpful information on the web. There are some solutions suggested (in forums and in the Skype support pages) for Skype's "Disk I/O Error", all of which I tried, but they do not solve the problem. Has anyone faced the same scenario and (hopefully) knows a solution?

    Systems Log: I don't have much knowledge of interpreting the system log, but I think the Critical and Warning entries are not helpful. There are, however, lots of Error entries which might be useful. Under source Kernel-General there are a few similar errors saying "An I/O operation initiated by the Registry failed unrecoverably. The Registry could not flush hive (file): <some file>". Under source atapi there are also a few similar errors: "The driver detected a controller error on \Device\Ide\IdePort0." (all of these errors occurred on 'IdePort0'). In Application Error, there are several errors logged, and the following is the latest one. Both of the errors that occurred today are similar to this one. As they come from Ssms.exe, I guess they are relevant to the cause of the problem, but as I said above, I can't understand what they mean!

    Read the article

  • Prevent SQL injection from form-generated SQL - NO PreparedStmts

    - by Markos Fragkakis
    Hi all, I have a search table where the user will be able to filter results with a filter of the type:

      Field [Name],         Value [John],  Remove Rule
      Field [Surname],      Value [Blake], Remove Rule
      Field [Has Children], Value [Yes],   Remove Rule
      Add Rule

    So the user will be able to set an arbitrary set of filters, which will essentially result in a completely dynamic WHERE clause. In the future I will also have to implement more complicated logical expressions, like WHERE (name=John OR name=Nick) AND (surname=Blake OR surname=Bourne). Of the 10 or so fields the user may filter by, I don't know how many or which filters the user will set. So, I cannot use a prepared statement (which assumes that we at least know the fields in the WHERE clause). This is why prepared statements are unfortunately out of the question, and I have to do it with plain old generated SQL. What measures can I take to protect the application from SQL injection (regex-wise or any other way)?
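
    For what it's worth, a fully dynamic WHERE clause does not by itself rule out parameters: the predicate text can be assembled from a whitelist of known column names while every user-supplied value still travels as a parameter. A sketch of that pattern in T-SQL with sp_executesql (the table and column names are made up for illustration):

      -- Only column names taken from a hard-coded whitelist ever reach the SQL text;
      -- the values typed by the user are always passed as parameters.
      DECLARE @sql nvarchar(max);
      SET @sql = N'SELECT * FROM dbo.Person WHERE 1 = 1';

      -- e.g. the user added the rules Name = John and Surname = Blake
      SET @sql = @sql + N' AND Name = @p0 AND Surname = @p1';

      EXEC sys.sp_executesql
           @sql,
           N'@p0 nvarchar(100), @p1 nvarchar(100)',
           @p0 = N'John',
           @p1 = N'Blake';

    The same idea carries over to application code: build the SQL string from whitelisted field names only, and bind the values through whatever parameter mechanism the data access layer offers.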

    Read the article

  • T-SQL IsNumeric() and Linq-to-SQL

    - by cdonner
    I need to find the highest value from the database that satisfies a certain formatting convention. Specifically, I would like to find the highest value that looks like EU999999 ('9' being any digit).

      select max(col) from table

    will return something like 'EUZ...', for instance, which I want to exclude. The following query does the trick, but I can't produce it via LINQ-to-SQL; there seems to be no translation for the ISNUMERIC() function in SQL Server:

      select max(col)
      from table
      where col like 'EU%'
        and 1 = isnumeric(replace(col, 'EU', ''))

    Writing a database function, stored procedure, or anything else of that nature is far down the list of my preferred solutions, because this table is central to my app and I cannot easily replace the table object with something else. What's the next-best solution?
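
    One workaround often suggested when ISNUMERIC() has no LINQ-to-SQL translation is to express the rule as a LIKE digit pattern instead, which does translate (SqlMethods.Like on the C# side). A sketch of the T-SQL shape, assuming the values of interest are always 'EU' followed by exactly six digits:

      -- 'EU' followed by exactly six digits; [0-9] is a single-character range in LIKE
      SELECT MAX(col)
      FROM   [table]
      WHERE  col LIKE 'EU[0-9][0-9][0-9][0-9][0-9][0-9]';

    Anything like 'EUZ123' or 'EU12A45' fails the pattern, which is what the ISNUMERIC() test was approximating; if the digit count can vary, the pattern needs adjusting.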

    Read the article

  • SQL Server 2000 tables

    - by klork
    We currently have a SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into one table per user. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • How to find N Consecutive records in a table using SQL

    - by user320587
    Hi, I have the following table definition with sample data. In the table, Customer, Product and Date are key fields.

      Table One
      Customer  Product  Date        SALE
      X         A        01/01/2010  YES
      X         A        02/01/2010  YES
      X         A        03/01/2010  NO
      X         A        04/01/2010  NO
      X         A        05/01/2010  YES
      X         A        06/01/2010  NO
      X         A        07/01/2010  NO
      X         A        08/01/2010  NO
      X         A        09/01/2010  YES
      X         A        10/01/2010  YES
      X         A        11/01/2010  NO
      X         A        12/01/2010  YES

    In the above table, I need to find the runs of N or more consecutive records where there was no sale (SALE value was 'NO'). For example, if N is 2, the result set would return the following:

      Customer  Product  Date        SALE
      X         A        03/01/2010  NO
      X         A        04/01/2010  NO
      X         A        06/01/2010  NO
      X         A        07/01/2010  NO
      X         A        08/01/2010  NO

    Can someone help me with a SQL query to get the desired results? I am using SQL Server 2005. I started playing with ROW_NUMBER() and PARTITION BY clauses but had no luck. Thanks for any help.
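
    A sketch of the usual gaps-and-islands approach on SQL Server 2005: the difference of two ROW_NUMBER() sequences is constant within each consecutive run of the same SALE value, so runs can be grouped and measured. The table name dbo.Sales is an assumption:

      WITH NoRuns AS
      (
          -- grp is constant within each consecutive run of the same SALE value
          SELECT  Customer, Product, [Date], SALE,
                  ROW_NUMBER() OVER (PARTITION BY Customer, Product ORDER BY [Date])
                - ROW_NUMBER() OVER (PARTITION BY Customer, Product, SALE ORDER BY [Date]) AS grp
          FROM    dbo.Sales
      ),
      Runs AS
      (
          -- length of the run each row belongs to
          SELECT  Customer, Product, [Date], SALE,
                  COUNT(*) OVER (PARTITION BY Customer, Product, SALE, grp) AS run_len
          FROM    NoRuns
      )
      SELECT  Customer, Product, [Date], SALE
      FROM    Runs
      WHERE   SALE = 'NO'
        AND   run_len >= 2          -- N = 2
      ORDER BY Customer, Product, [Date];

    Changing run_len >= 2 to run_len >= N gives the result for any N.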

    Read the article

  • SQL Server join and wildcards

    - by Ernst
    I want to get the results of a left join between two tables, both of which have a column of the same name, the column on which I join. The following query is seen as valid by the import/export wizard in SQL Server, but it always gives an error. I have some more conditions, so the result size wouldn't be too much. We're using SQL Server 2000, IIRC, and since we use an externally developed program to interact with the database (except for some information we can't retrieve that way), we cannot simply change the column name.

      SELECT table1.*, table2.*
      FROM table1
      LEFT JOIN table2 ON table1.samename = table2.samename

    At least, I think the column name is the problem, or am I doing something else wrong?
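
    If the wizard is objecting to the duplicate column name that the two tables share (SELECT table1.*, table2.* produces two columns both called samename), the usual fix is to list the columns explicitly and alias at least one of the duplicates. A hedged sketch, where othercol1 and othercol2 stand in for whatever other columns exist:

      SELECT  t1.samename AS t1_samename,
              t1.othercol1,
              t2.samename AS t2_samename,
              t2.othercol2
      FROM    table1 AS t1
      LEFT JOIN table2 AS t2
              ON t1.samename = t2.samename;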

    Read the article

  • Override Linq-to-Sql Datetime.ToString() Default Convert Values

    - by snmcdonald
    Is it possible to override the default CONVERT style? I would like the default CONVERT function to always return ISO 8601 style 126. Steps to reproduce:

      DROP TABLE DATES;
      CREATE TABLE DATES (
          ID INT IDENTITY(1,1) PRIMARY KEY,
          MYDATE DATETIME DEFAULT(GETUTCDATE())
      );
      INSERT INTO DATES DEFAULT VALUES;
      INSERT INTO DATES DEFAULT VALUES;
      INSERT INTO DATES DEFAULT VALUES;
      INSERT INTO DATES DEFAULT VALUES;

      SELECT CONVERT(NVARCHAR,MYDATE) AS CONVERTED,
             CONVERT(NVARCHAR(4000),MYDATE,126) AS ISO,
             MYDATE
      FROM DATES
      WHERE MYDATE LIKE 'Feb%'

    Output:

      CONVERTED           ISO                      MYDATE
      Feb 8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040
      Feb 8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040
      Feb 8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040
      Feb 8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040

    LINQ-to-SQL calls CONVERT(NVARCHAR,@p) when I cast ToString(). However, I am displaying all my data in the ISO 8601 format. I would like to override the database default, if possible, to CONVERT(NVARCHAR,@p,126). I am using dynamic LINQ-to-SQL as demoed by ScottGu to process my data:

      PropertyInfo piField = typeof(T).GetProperty(rule.field);
      if (piField != null)
      {
          Type typeField = piField.PropertyType;
          if (typeField.IsGenericType
              && typeField.GetGenericTypeDefinition().Equals(typeof(Nullable<>)))
          {
              filter = filter
                  .Select(x => x)
                  .Where(string.Format("{0} != null", rule.field))
                  .Where(string.Format("{0}.Value.ToString().Contains(\"{1}\")", rule.field, rule.data));
          }
          else
          {
              filter = filter
                  .Select(x => x)
                  .Where(string.Format("{0} != null", rule.field))
                  .Where(string.Format("{0}.ToString().Contains(\"{1}\")", rule.field, rule.data));
          }
      }

    I was hoping my property would convert the expression from CONVERT(NVARCHAR,@p) to CONVERT(NVARCHAR,@p,126), however I get a NotSupportedException: "... has no supported translation to SQL."

      public string IsoDate
      {
          get
          {
              if (SUBMIT_DATE.HasValue)
              {
                  return SUBMIT_DATE.Value.ToString("o");
              }
              else
              {
                  return string.Empty;
              }
          }
      }
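
    The default style of CONVERT itself cannot be overridden, but one workaround sometimes used is to let the server do the formatting through a computed column and expose that column in the LINQ model, so the dynamic filter compares plain strings and never needs ToString(). A sketch against the sample DATES table; whether this fits the real schema is an assumption:

      -- Computed column holding the ISO 8601 rendering of MYDATE
      ALTER TABLE DATES
          ADD MYDATE_ISO AS CONVERT(NVARCHAR(33), MYDATE, 126);

      -- A dynamic filter can then be an ordinary string predicate
      SELECT ID, MYDATE, MYDATE_ISO
      FROM   DATES
      WHERE  MYDATE_ISO LIKE '2011-02%';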

    Read the article

  • LINQ-to-SQL and SQL Compact - database file sharing problem

    - by Eye of Hell
    Hello. I'm learning LINQ-to-SQL right now and I have written a simple application that defines SQL data:

      [Table( Name = "items" )]
      public class Item
      {
          [ Column( IsPrimaryKey = true, IsDbGenerated = true ) ]
          public int Id;

          [ Column ]
          public string Name;
      }

    I have launched two copies of the application connected to the same .sdf file and tested whether all database modifications in one application affect the other application. But a strange thing arises: if I use InsertOnSubmit() and DeleteOnSubmit() in one application, added/removed items are instantly visible in the other application via a LINQ select query. But if I try to modify the Name field in one application, the change is NOT visible in the other application until it reconnects to the database :(. The test code I use:

      var Items = from c in db.Items
                  where Id == c.Id
                  select c;
      foreach( var Item in Items )
      {
          Item.Name = "new name";
          break;
      }
      db.SubmitChanges();

    Can anyone suggest what I'm doing wrong, and why InsertOnSubmit()/DeleteOnSubmit() work but SubmitChanges() doesn't?

    Read the article

  • Handling auto-incrementing IDENTITY SQL Server fields with LINQ to SQL in C#

    - by Maxim Z.
    I'm building an ASP.NET MVC site that uses LINQ to SQL to connect to SQL Server, where I have a table that has an IDENTITY bigint primary key column that represents an ID. In one of my code methods, I need to create an object of that table to get its ID, which I will place into another object based on another table (FK-to-PK relationship). At what point is the IDENTITY column value generated, and how can I obtain it from my code? Is the right approach to:

      1. Create the object that has the IDENTITY column
      2. Do an InsertOnSubmit() and SubmitChanges() to submit the object to the database table
      3. Get the value of the ID property of the object

    Read the article

  • Store time of the day in SQL

    - by nute
    How would you store a time or time range in SQL? It won't be a datetime, because it will just be, let's say, 4:30 PM (not January 3rd, 4:30 PM). These would be weekly or daily meetings. The types of queries I need are of course for display, but later they will also include complex queries such as avoiding conflicts in the schedule. I'd rather pick the best data type for that now. I'm using MS SQL Server Express 2005. Thanks! Nathan
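
    SQL Server 2005 has no TIME type (that arrives in 2008), so the two common options are an integer count of minutes (or seconds) since midnight, or a DATETIME anchored to a dummy date. A sketch of the integer variant, which keeps overlap checks as plain arithmetic; all names here are made up:

      CREATE TABLE dbo.Meeting
      (
          MeetingId    INT IDENTITY(1,1) PRIMARY KEY,
          DayOfWeek    TINYINT  NOT NULL,   -- 1 = Monday ... 7 = Sunday
          StartMinutes SMALLINT NOT NULL,   -- minutes since midnight; 990 = 4:30 PM
          EndMinutes   SMALLINT NOT NULL
      );

      -- Conflict check for a proposed 16:30-17:30 slot on the same day
      DECLARE @day TINYINT, @start SMALLINT, @end SMALLINT;
      SELECT @day = 3, @start = 990, @end = 1050;

      SELECT MeetingId
      FROM   dbo.Meeting
      WHERE  DayOfWeek = @day
        AND  StartMinutes < @end
        AND  EndMinutes   > @start;   -- intervals overlap

    The dummy-date DATETIME variant works too and displays more naturally, but every comparison then has to be careful to ignore the date part.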

    Read the article

  • SELECT with a Replace()

    - by andyjohnson
    I have a table of names and addresses, which includes a postcode column. I want to strip the spaces from the postcodes and select any that match a particular pattern. I'm trying this (simplified a bit) in T-SQL on SQL Server 2005:

      SELECT Replace(Postcode, ' ', '') AS P
      FROM Contacts
      WHERE P LIKE 'NW101%'

    But I get the following error:

      Msg 207, Level 16, State 1, Line 3
      Invalid column name 'P'.

    If I remove the WHERE clause I get a list of postcodes without spaces, which is what I want to search. How should I approach this? What am I doing wrong?
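
    The error comes from the fact that a column alias defined in the SELECT list is not visible to the WHERE clause, which is evaluated first, so the expression either has to be repeated or pushed into a derived table. A sketch of both forms:

      -- Repeat the expression in the WHERE clause
      SELECT REPLACE(Postcode, ' ', '') AS P
      FROM   Contacts
      WHERE  REPLACE(Postcode, ' ', '') LIKE 'NW101%';

      -- Or wrap it in a derived table so the alias exists by the time it is filtered
      SELECT P
      FROM   (SELECT REPLACE(Postcode, ' ', '') AS P FROM Contacts) AS x
      WHERE  P LIKE 'NW101%';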

    Read the article

  • Convert SQL to LINQ to SQL

    - by Adam
    Hi, I have the following SQL query:

      with c as (
          select categoryId, parentId, name, 0 as [level]
          from task_Category b
          where b.parentId is null
          union all
          select b.categoryId, b.parentId, b.name, [level] + 1
          from task_Category b
          join c on b.parentId = c.categoryId
      )
      select name, [level], categoryId, parentId as item
      from c

    and I want to convert it to LINQ to SQL, yet my LINQ skills are not there yet. Could someone please help me convert this? It's the WITH and UNION statements that are making this a bit more complex for me. Any help appreciated.

    Read the article

  • Very Different Execution Times of SQL Query in C# and SQL Server Management Studio

    - by Paul
    I have a simple SQL query that, when run from C#, takes over 30 seconds and then times out every time, whereas when run in SQL Server Management Studio it completes instantly. In the latter case, a query execution plan reveals nothing troubling, and the execution time is spread nicely across a few simple operations. I've run EXEC sp_who2 while the query is running from C#, and it is listed as taking 29,000 milliseconds of CPU time and is not blocked by anything. I have no idea how to begin solving this. Does anyone have some insight? The query is:

      SELECT c.lngId, ...
      FROM tblCase c
      INNER JOIN tblCaseStatus s ON s.lngId = c.lngId
      INNER JOIN tblCaseStatusType t ON t.lngId = s.lngId
      INNER JOIN [Another Database]..tblCompany cm ON cm.lngId = cs.lngCompanyId
      WHERE t.lngId = 25
        AND c.IsDeleted = 0
        AND s.lngStatus = 1
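
    A common explanation for "instant in SSMS, 30 seconds from C#" is that the two sessions run with different SET options (SSMS defaults to SET ARITHABORT ON, ADO.NET does not), so they compile and cache separate plans and the application ends up with a badly sniffed one. A hedged sketch of how that is usually checked from T-SQL:

      -- Compare the SET options of the SSMS session and the application's session
      SELECT session_id, program_name, arithabort
      FROM   sys.dm_exec_sessions
      WHERE  program_name LIKE '%Management Studio%'
         OR  program_name LIKE '.Net SqlClient%';

      -- Quick experiment: reproduce the application's behaviour inside SSMS
      SET ARITHABORT OFF;
      -- ... run the query here and compare timings ...
      SET ARITHABORT ON;

    If that turns out to be the cause, appending OPTION (RECOMPILE) to the query sent from C#, or addressing the underlying parameter-sniffing/statistics issue, is the usual next step.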

    Read the article

  • Spatial data in the UK

    - by simonsabin
    I am just loving the fact that the Ordnance Survey has now released a huge amount of data that can be used freely. I've downloaded the Panorama (tm) data, http://www.ordnancesurvey.co.uk/oswebsite/products/land-form-panorama-contours/index.html , which is all the contours for the UK. I've loaded this into SQL Server using Safe Software's FME ( http://www.safe.com/ ), because the data is an AutoCAD DXF file and translating that to SQL Server spatial data is not easy. The FME workbench is not...(read more)

    Read the article

  • Some thoughts on the Virtualization Feedback in the SSWUG Newsletters

    - by Jonathan Kehayias
    Last Thursday, March 25, 2010, the topic of virtualization of SQL Server came up in the SSWUG Newsletter, with Steven Wynkoop asking whether people's perceptions and experiences have changed since the last time he covered virtualizing SQL Server. I unfortunately missed the last coverage of this topic, but it appears from the newsletter that there was a general consensus that a low-traffic solution could be fine, but if you had a heavy-hitting application, the net advice was to avoid a virtual environment...(read more)

    Read the article

  • Putting indexes in separate filegroup kills our queries

    - by womp
    Can anyone shed some light on this? On our dev boxes, our database resides entirely in the PRIMARY filegroup, and everything works fine. On one of our production servers, recently upgraded from 2005 to 2008, we noticed it was performing slower than it should. On this machine there are two filegroups, PRIMARY and INDEXES. Both filegroups contain one file per logical volume, one logical volume per CPU (and each logical volume is a RAID 10 of 4 physical disks). We isolated a few queries that were performing fast on the dev boxes and slow (up to 40x slower) on the production machine. It turned out these queries were using the non-clustered indexes that resided in the INDEXES filegroup. Tweaking some of the queries to only use clustered indexes that were in the PRIMARY filegroup dropped their times back to normal. As a final confirmation, we redeployed the same database on the same machine with everything in PRIMARY, and things went back to normal! Here's the statistics output of one of the queries, run identically on the machine with the different filegroup configurations (table names changed to protect the innocent):

      FAST (everything in PRIMARY filegroup):

      (3 row(s) affected)
      Table '0'. Scan count 2, logical reads 14, ...
      Table '1'. Scan count 0, logical reads 0, ...
      Table '1'. Scan count 0, logical reads 0, ...
      Table '2'. Scan count 2, logical reads 7, ...
      Table '3'. Scan count 2, logical reads 1012, ...
      Table '4'. Scan count 1, logical reads 3, ...

      SQL Server Execution Times:
        CPU time = 437 ms, elapsed time = 445 ms.

      SLOW (indexes split into their own filegroup):

      (3 row(s) affected)
      Table '0'. Scan count 209, logical reads 428, ...
      Table '1'. Scan count 0, logical reads 0, ...
      Table '2'. Scan count 1021, logical reads 9043, ...
      Table '3'. Scan count 209, logical reads 105754, ...
      Table '4'. Scan count 0, logical reads 0, ...
      Table '5'. Scan count 1, logical reads 695, ...
      **Table '#46DA8CA9'. Scan count 205, logical reads 205, ...**
      Table '6'. Scan count 6, logical reads 436, ...
      Table '7'. Scan count 1, logical reads 12, ...

      SQL Server Execution Times:
        CPU time = 17581 ms, elapsed time = 17595 ms.

    Notice the weird temp table and the extra tables involved in the slow query. It seems clear that having a second filegroup is making SQL Server batty when choosing an execution plan. What the heck is going on?
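
    One thing worth checking before blaming the second filegroup itself is whether the nonclustered indexes actually exist on that filegroup with the same definitions as on the dev boxes, since the extra worktable and lookups suggest the optimizer may be missing an index (or included columns) it expects. A hedged sketch of the check, plus the usual way to rebuild an index back onto PRIMARY without dropping it; the index and table names are placeholders:

      -- Which filegroup does each index live on?
      SELECT  o.name AS table_name, i.name AS index_name,
              i.type_desc, ds.name AS filegroup_name
      FROM    sys.indexes      AS i
      JOIN    sys.objects      AS o  ON o.object_id = i.object_id
      JOIN    sys.data_spaces  AS ds ON ds.data_space_id = i.data_space_id
      WHERE   o.is_ms_shipped = 0
      ORDER BY ds.name, o.name, i.name;

      -- Rebuild one index back onto PRIMARY without dropping it first
      CREATE NONCLUSTERED INDEX IX_Table3_SomeColumn
          ON dbo.Table3 (SomeColumn)
          WITH (DROP_EXISTING = ON)
          ON [PRIMARY];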

    Read the article
