Search Results

Search found 88309 results on 3533 pages for 'server 2005'.

Page 42/3533

  • Optimizing ROW_NUMBER() in SQL Server

    - by BlueRaja
    We have a number of machines which record data into a database at sporadic intervals. For each record, I'd like to obtain the time period between this recording and the previous recording. I can do this using ROW_NUMBER as follows:

        WITH TempTable AS (
            SELECT *, ROW_NUMBER() OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS Ordering
            FROM dbo.DataTable
        )
        SELECT [Current].*, Previous.Date_Time AS PreviousDateTime
        FROM TempTable AS [Current]
        INNER JOIN TempTable AS Previous
            ON [Current].Machine_ID = Previous.Machine_ID
            AND Previous.Ordering = [Current].Ordering + 1

    The problem is, it goes really slowly (several minutes on a table with about 10k entries). I tried creating separate indexes on Machine_ID and Date_Time, and a single combined index, but nothing helps. Is there any way to rewrite this query to go faster?
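
    One approach sometimes suggested for this kind of adjacent-row lookup (a sketch under assumed schema, not necessarily what the linked answer recommends): materialize the numbered rows into a temporary table and index it on (Machine_ID, Ordering), so the self-join can seek instead of re-evaluating the CTE twice.

        -- Sketch: number the rows once into a temp table, then index it so the
        -- self-join on (Machine_ID, Ordering) can seek rather than rescan the CTE.
        SELECT *, ROW_NUMBER() OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS Ordering
        INTO #Numbered
        FROM dbo.DataTable;

        CREATE CLUSTERED INDEX IX_Numbered ON #Numbered (Machine_ID, Ordering);

        SELECT cur.*, prev.Date_Time AS PreviousDateTime
        FROM #Numbered AS cur
        INNER JOIN #Numbered AS prev
            ON prev.Machine_ID = cur.Machine_ID
           AND prev.Ordering = cur.Ordering + 1;  -- same join condition as the original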

    Read the article

  • SQL Server - CAST AND DIVIDE

    - by rs
        DECLARE @table table (XYZ VARCHAR(8), id int)

        INSERT INTO @table
        SELECT '4000', 1 UNION ALL
        SELECT '3.123', 2 UNION ALL
        SELECT '7.0', 3 UNION ALL
        SELECT '80000', 4 UNION ALL
        SELECT NULL, 5

        SELECT CASE
                   WHEN PATINDEX('^[0-9]{1,5}[\.][0-9]{1,3}$', XYZ) = 0 THEN XYZ
                   WHEN PATINDEX('^[0-9]{1,8}$', XYZ) = 0 THEN CAST(XYZ AS decimal(18,3)) / 1000
                   ELSE NULL
               END
        FROM @table

    The part CAST(XYZ AS decimal(18,3))/1000 doesn't seem to divide the value; it gives me more zeros after the decimal point instead of dividing it. (I even enclosed it in brackets and tried, but got the same result.) Am I doing something wrong here?
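
    For what it's worth, the trailing zeros are usually a symptom of SQL Server's precision/scale rules for decimal division rather than a wrong quotient. A hedged sketch, assuming that is what is happening here: cast the result back to a fixed scale.

        -- decimal(18,3) / int widens the result scale, so 3.123 / 1000 comes back
        -- as 0.00312300000000...; re-casting trims it to the scale you actually want.
        SELECT CAST(CAST('3.123' AS decimal(18,3)) / 1000 AS decimal(18,6));  -- 0.003123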

    Read the article

  • Getting a NULL value from a variable in SQL Server

    - by Neo
    Strange situation: in a trigger I assign a column value to a variable, but something goes wrong when inserting into another table using that variable. For example:

        SELECT @srNO = A.SrNo FROM A WHERE id = 123;
        INSERT INTO B (SRNO) VALUES (@srNo)  -- here it gives null

    If I run the above SELECT in the query pane it works fine, but in the trigger it gives me null. Any suggestions?
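
    One common cause worth checking (a sketch, not a diagnosis): inside a trigger the rows you care about live in the inserted/deleted pseudo-tables, and a scalar SELECT into a variable returns NULL when its WHERE clause matches no row. A set-based insert that joins the pseudo-table avoids the variable entirely; the join column below is an assumption.

        -- Assumed schema: A.id matches the key exposed by the inserted pseudo-table.
        INSERT INTO B (SRNO)
        SELECT A.SrNo
        FROM inserted AS i
        INNER JOIN A ON A.id = i.id;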

    Read the article

  • SQL Server: How do I delimit this data?

    - by codingguy3000
        declare @mydata nvarchar(4000)
        set @mydata = '36|0, 77|5, 132|61'

    I have this data that I need to get into a table. So for Row1 columnA would be 36 and columnB would be 0. For Row2 columnA would be 77 and columnB would be 5, etc. What is the best way to do this? Thanks
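
    A rough sketch of one way to do this on SQL Server 2005 without a split function: walk the string with CHARINDEX. The @result table variable stands in for the real destination table.

        DECLARE @mydata nvarchar(4000), @pair nvarchar(50), @pos int;
        DECLARE @result table (columnA int, columnB int);
        SET @mydata = '36|0, 77|5, 132|61';

        WHILE LEN(@mydata) > 0
        BEGIN
            -- Peel off the next comma-separated pair.
            SET @pos = CHARINDEX(',', @mydata);
            IF @pos = 0
                BEGIN SET @pair = @mydata; SET @mydata = ''; END
            ELSE
                BEGIN
                    SET @pair = LEFT(@mydata, @pos - 1);
                    SET @mydata = LTRIM(SUBSTRING(@mydata, @pos + 1, LEN(@mydata)));
                END

            -- Split the pair on '|' into the two columns.
            INSERT INTO @result (columnA, columnB)
            SELECT CAST(LEFT(@pair, CHARINDEX('|', @pair) - 1) AS int),
                   CAST(SUBSTRING(@pair, CHARINDEX('|', @pair) + 1, LEN(@pair)) AS int);
        END

        SELECT * FROM @result;  -- rows (36,0), (77,5), (132,61)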

    Read the article

  • SQL Server 2008 to SQL Server 2005

    - by Sakhawat Ali
    I have an MDF and LDF file from SQL Server 2005. I attached them to SQL Server 2008 and made some changes to the data. Now, when I attach them back to SQL Server 2005 Express Edition, it gives a version error:

        The database 'E:\DB\JOBPERS.MDF' cannot be opened because it is version 655. This server supports version 612 and earlier. A downgrade path is not supported.
        Could not open new database 'E:\DB\JOBPERS.MDF'. CREATE DATABASE is aborted.
        An attempt to attach an auto-named database for file E:\DB\Jobpers.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.
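
    As the error says, database files cannot be downgraded, so the usual workaround is to move the data rather than the files: script the schema from the 2008 instance (SSMS's "Generate Scripts" wizard can target SQL Server 2005) and then copy the rows across. A hedged sketch of the copy step, assuming a linked server named SQL2008 pointing at the newer instance and purely illustrative table/column names:

        -- Run on the 2005 instance; pulls rows over a linked server from the 2008 copy.
        INSERT INTO Jobpers.dbo.SomeTable (Col1, Col2)
        SELECT Col1, Col2
        FROM SQL2008.Jobpers.dbo.SomeTable;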

    Read the article

  • CONTAINS performs MUCH slower with a variable vs. a constant string in MS SQL Server

    - by Greg R
    For some unknown reason, I'm running into a problem where passing a variable to a full-text search (in a stored procedure) performs many times slower than executing the same statement with a constant value. Any idea why, and how can that be avoided?

    This executes very fast:

        SELECT * FROM table WHERE CONTAINS (comments, '123')

    This executes very slowly and times out:

        DECLARE @SearchTerm nvarchar(30)
        SET @SearchTerm = '123'
        SET @SearchTerm = '"' + @SearchTerm + '"'
        SELECT * FROM table WHERE CONTAINS (comments, @SearchTerm)

    Does this make any sense?
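
    A hedged guess at the usual culprit: with a constant, the optimizer knows the full-text term at compile time, while with a variable it has to build a generic plan. One thing commonly tried (a sketch only) is forcing a statement-level recompile so the variable's actual value is seen:

        DECLARE @SearchTerm nvarchar(30);
        SET @SearchTerm = '"123"';
        SELECT *
        FROM [table]                      -- placeholder table name from the question
        WHERE CONTAINS(comments, @SearchTerm)
        OPTION (RECOMPILE);               -- compile with the current value of @SearchTerm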

    Read the article

  • SQL Server: How to iterate over function arguments

    - by atricapilla
    I'm trying to learn SQL, and for demonstration purposes I would like to create a loop which iterates over function arguments. E.g. I would like to iterate over the SERVERPROPERTY function's arguments (property names). I can do a single select like this:

        SELECT
            SERVERPROPERTY('ProductVersion') AS ProductVersion,
            SERVERPROPERTY('ProductLevel') AS ProductLevel,
            SERVERPROPERTY('Edition') AS Edition,
            SERVERPROPERTY('EngineEdition') AS EngineEdition;
        GO

    but how do I iterate over all of these property names? Thanks in advance.
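
    A small sketch of one way to avoid writing a SELECT per property, under the assumption that SERVERPROPERTY accepts an expression (such as a column) rather than only a string literal:

        -- Feed property names from a table variable into SERVERPROPERTY.
        DECLARE @props table (PropertyName sysname);
        INSERT INTO @props (PropertyName)
        SELECT 'ProductVersion' UNION ALL
        SELECT 'ProductLevel'   UNION ALL
        SELECT 'Edition'        UNION ALL
        SELECT 'EngineEdition';

        SELECT PropertyName, SERVERPROPERTY(PropertyName) AS PropertyValue
        FROM @props;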

    Read the article

  • SQL Server union selects built dynamically from list of words

    - by Adam Tuttle
    I need to count occurrences of a list of words across all records in a given table. If I only had one word, I could do this:

        select count(id) as NumRecs where essay like '%word%'

    But my list could be hundreds or thousands of words, and I don't want to create hundreds or thousands of SQL requests serially; that seems silly. I had a thought that I might be able to create a stored procedure that would accept a comma-delimited list of words, and for each word it would run the above query, and then union them all together and return one huge dataset. (Sounds reasonable, right? But I'm not sure where to start with that approach...) Short of some weird thing with union, I might try to do something with a temp table -- inserting a row for each word and record count, and then returning select * from that temp table. If it's possible with a union, how? And does one approach have advantages (performance or otherwise) over the other?
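
    For reference, a sketch of the temp-table variant described above (the word-splitting step is omitted, and the table name is an assumption; only essay and id come from the question):

        CREATE TABLE #words (word nvarchar(100));
        -- ... populate #words by splitting the comma-delimited input ...

        -- One pass over the data, one result row per word.
        SELECT w.word, COUNT(e.id) AS NumRecs
        FROM #words AS w
        LEFT JOIN dbo.Essays AS e
            ON e.essay LIKE '%' + w.word + '%'
        GROUP BY w.word;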

    Read the article

  • SQL Server Project

    - by Saif Omari
    My application database has no project and is not under source control. I plan to turn the database into a project and add it to TFS, but I have no idea how to script the stored procedures, triggers, views, and functions. Also, what is the best practice for building an update script for all of my stored procedures, triggers, views, and functions to apply to my customers' databases?
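
    Besides SSMS's "Generate Scripts" wizard or a Visual Studio database project, the object definitions can be read straight out of the catalog views; a minimal sketch:

        -- Definitions of procedures, triggers, views and functions in the current database.
        SELECT o.type_desc, o.name, m.definition
        FROM sys.sql_modules AS m
        INNER JOIN sys.objects AS o ON o.object_id = m.object_id
        WHERE o.type IN ('P', 'TR', 'V', 'FN', 'IF', 'TF')
        ORDER BY o.type_desc, o.name;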

    Read the article

  • Returning data from SQL Server reporting web service call

    - by user79339
    Hi, I am generating a report that contains a version number. The version number is stored in the DB and retrieved/incremented as part of the report generation. The only problem is, I am calling SSRS via a web service call, which returns the generated report as a byte array. Is there any way to get the version number out of this generated report? For example, to display a dialog that says "You generated Status Report, Version number 3". I tried passing in an output parameter and setting it inside the stored procedure. It's modified when I execute it in SQL Management Studio, but not in the reporting studio. Or at least I can't seem to bind to the modified, post-execution value (using the expression "=Parameters!ReportVersion.Value"). Of course, I could get/increment the version number from the database myself before calling the SSRS web service and pass it along as a parameter to the report, but that might cause concurrency problems. On the whole, it just seems neater for the stored procedure to access/generate a version number and return it to the reporting engine, which will generate the report with the version number and return the updated version number to the web service client. Any thoughts/ideas?

    Read the article

  • Storing SQL queries in a table in SQL Server

    - by Rohit
    We have multiple jobs in our system. These jobs are listed in a grid. We have 3 different user types (UserTypeID 1, 2, 3). The listing is different for each user type, and a user can filter the listing by selecting a view from a dropdown. ViewName in the table below is the view which needs to be displayed. To achieve this functionality, a fellow developer has created the following table structure and stored SQL fragments in the SQLExpression column. In my opinion the query should not be stored in the database. What are the pros and cons of this approach, and what are the available alternatives?

        JobListingViewID  ViewName    SQLExpression               UserTypeID
        3                 All Jobs    1 = 1                       3
        4                 Error Jobs  JobStatusID IN ( 2 )        1
        5                 Error Jobs  JobStatusID IN ( 2 )        2
        6                 Error Jobs  JobStatusID IN ( 2 )        3
        7                 Speech      JobStatusID IN ( 1, 3, 8 )  1
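
    A common alternative to storing SQL text, sketched below: keep the filter as data (a mapping of view to allowed job statuses, with an "all jobs" view simply mapping to every status), so the listing query stays a plain parameterized join. Names here are illustrative, not from the original schema.

        CREATE TABLE JobListingViewStatus (
            JobListingViewID int NOT NULL,
            JobStatusID      int NOT NULL
        );

        -- Listing query: no dynamic SQL, just a join on the view chosen in the dropdown.
        DECLARE @JobListingViewID int;
        SET @JobListingViewID = 4;

        SELECT j.*
        FROM Jobs AS j
        INNER JOIN JobListingViewStatus AS v
            ON v.JobStatusID = j.JobStatusID
        WHERE v.JobListingViewID = @JobListingViewID;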

    Read the article

  • Filtering columns in SQL Server replication - how?

    - by truthseeker
    Hi, I need to replicate some data from two tables in one database to another database. I used snapshot replication. The issue is that I would like to replicate only some selected columns; the others should keep their existing data untouched, as I don't want to lose it. The source of those columns is another system, so I need to replicate only the data in my columns. Does anybody know how to achieve this?
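
    For the record, snapshot and transactional publications support vertical (column) filtering on an article. A hedged sketch using the system procedure sp_articlecolumn follows; the publication and article names are placeholders, and the exact parameters should be checked against Books Online for your build:

        -- Drop a column from the article's vertical partition so it is not replicated.
        EXEC sp_articlecolumn
            @publication = N'MyPublication',
            @article     = N'MyTable',
            @column      = N'ColumnToExclude',
            @operation   = N'drop';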

    Read the article

  • Passing multiple parameters of same column to SQL Server select SP

    - by Bill
    I have a string value in the web.config, for example 2 GUIDs separated by a ",". I need to query the database dynamically (i.e. I have no idea how many values could be separated by commas in the web.config) and run a select statement on the table, passing these values and getting everything that is relevant. For example:

        select * from tablename where columnname = string1 string2 string3 etc etc

    Some strings may only contain 1 GUID; some may contain 10.
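
    One way this is often handled on SQL Server 2005 (a sketch; the GUIDs, table, and column names are placeholders): split the comma-delimited string via XML and join, which copes with any number of values.

        DECLARE @list nvarchar(max), @xml xml;
        SET @list = '00000000-0000-0000-0000-000000000001,00000000-0000-0000-0000-000000000002';

        -- Turn 'a,b,c' into <i>a</i><i>b</i><i>c</i> and shred it back into rows.
        SET @xml = CAST('<i>' + REPLACE(@list, ',', '</i><i>') + '</i>' AS xml);

        SELECT t.*
        FROM tablename AS t
        INNER JOIN (
            SELECT x.i.value('.', 'uniqueidentifier') AS columnname
            FROM @xml.nodes('/i') AS x(i)
        ) AS ids ON ids.columnname = t.columnname;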

    Read the article

  • SQL Server Reporting Services Bar Chart

    - by Avitus
    I am trying to create a horizontal bar chart that looks like a pill chart. I would like to take either a stacked bar chart or a 100% stacked bar chart and put rounded ends onto it. I would only be using one row within the chart. One idea I had was just putting rounded images on either end of the chart to accomplish this, but I'm not sure that this will give the smooth feel of a single pill. Does anyone have a good suggestion? I am not tied to the idea of using a chart object, but I would prefer to stay away from third-party components that need to be purchased.

    Read the article

  • SQL Server: String Manipulation, Unpivoting

    - by OMG Ponies
    I have a column called body, which contains body content for our CMS. The data looks like:

        ...{cloak:id=1.1.1}...{cloak}...{cloak:id=1.1.2}...{cloak}...{cloak:id=1.1.3}...{cloak}...

    A moderately tweaked-for-readability example:

        ## h5. A formal process for approving and testing all external network connections and changes to the firewall and router configurations? {toggle-cloak:id=1.1.1}{tree-plus-icon} *Compliance:* {color:red}{*}Partial{*}{color} (?)
        {cloak:id=1.1.1}
        || Date: | 2010-03-15 ||
        || Owner: | Brian ||
        || Researched by: | ||
        || Narrative: | Jira tickets are normally used to approve and track network changes\\ ||
        || Artifacts: | Jira.bccampus.ca\\ ||
        || Recommendation: | Need to update policy that no Jira = no change\\ ||
        || Proposed Remedy(ies): | ||
        || Approved Remedy(ies): | ||
        || Date: | ||
        || Reviewed by: | ||
        || Remarks/comments: | ||
        {cloak}
        ## h5. Current network diagrams with all connections to cardholder data, including any wireless networks? {toggle-cloak:id=1.1.2}{tree-plus-icon} *Compliance:* {color:red}{*}TBD{*}{color} (?)
        {cloak:id=1.1.2}

    I'd like to get the cloak values out in the following format:

        requirement_num
        -----------------
        1.1.1
        1.1.2
        1.1.3

    I'm looking at using UNIONs - does anyone have a better recommendation? Forgot to mention: I can't use regex, because CLR isn't enabled on the database. The numbers aren't sequential; the current record jumps from 1.1.6 to 1.2.1.
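
    Since CLR/regex is off the table, one plain T-SQL sketch is to walk each body with CHARINDEX and collect every {cloak:id=...} token. The pages table and its columns are assumed names, and only a single row is processed here for brevity.

        DECLARE @body nvarchar(max), @pos int, @next int;
        DECLARE @ids table (requirement_num nvarchar(20));

        SELECT @body = body FROM dbo.pages WHERE page_id = 1;  -- one row for illustration

        -- Searching for '{cloak:id=' (10 chars) deliberately skips
        -- '{toggle-cloak:id=...}' and the bare '{cloak}' closing tags.
        SET @pos = CHARINDEX('{cloak:id=', @body);
        WHILE @pos > 0
        BEGIN
            SET @next = CHARINDEX('}', @body, @pos);
            INSERT INTO @ids (requirement_num)
            VALUES (SUBSTRING(@body, @pos + 10, @next - @pos - 10));
            SET @pos = CHARINDEX('{cloak:id=', @body, @next);
        END

        SELECT DISTINCT requirement_num FROM @ids ORDER BY requirement_num;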

    Read the article

  • Load a 6 MB binary file in a SQL Server 2005 VARBINARY(MAX) column using ADO/VC++?

    - by Feroz Khan
    How do I load a 6 MB binary file (.bin) into a varbinary(max) column of a SQL Server 2005 database using ADO in a VC++ application? This is the code I am using to load the file (I previously used it to load a .bmp file):

        BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData)
        {
            // Open the file
            CFile fileImage;
            CFileStatus fileStatus;
            fileImage.Open(strFilePath, CFile::modeRead);
            fileImage.GetStatus(fileStatus);

            // Allocate memory for the data
            ULONG nBytes = (ULONG)fileStatus.m_size;
            HGLOBAL hGlobal = GlobalAlloc(GPTR, nBytes);
            LPVOID lpData = GlobalLock(hGlobal);

            // Read the file into the buffer
            fileImage.Read(lpData, nBytes);

            HRESULT hr;
            _variant_t varChunk;
            long lngOffset = 0;
            UCHAR chData;
            SAFEARRAY FAR *psa = NULL;
            SAFEARRAYBOUND rgsabound[1];

            try
            {
                // Create a safe array to store the bytes
                rgsabound[0].lLbound = 0;
                rgsabound[0].cElements = nBytes;
                psa = SafeArrayCreate(VT_UI1, 1, rgsabound);

                while (lngOffset < (long)nBytes)
                {
                    chData = ((UCHAR*)lpData)[lngOffset];
                    hr = SafeArrayPutElement(psa, &lngOffset, &chData);
                    if (hr != S_OK)
                    {
                        return false;
                    }
                    lngOffset++;
                }
                lngOffset = 0;

                // Assign the safe array to a variant
                varChunk.vt = VT_ARRAY | VT_UI1;
                varChunk.parray = psa;

                hr = pFileData->AppendChunk(varChunk);
                if (hr != S_OK)
                {
                    return false;
                }
            }
            catch (_com_error &e)
            {
                // Get info from _com_error
                _bstr_t bstrSource(e.Source());
                _bstr_t bstrDescription(e.Description());
                _bstr_t bstrErrorMessage(e.ErrorMessage());
                _bstr_t bstrErrorCode(e.Error());
                TRACE("Exception thrown for classes generated by #import");
                TRACE("\tCode = %08lx\n", (LPCSTR)bstrErrorCode);
                TRACE("\tCode Meaning = %s\n", (LPCSTR)bstrErrorMessage);
                TRACE("\tSource = %s\n", (LPCSTR)bstrSource);
                TRACE("\tDescription = %s\n", (LPCSTR)bstrDescription);
            }
            catch (...)
            {
                TRACE("*** Unhandled exception ***");
            }

            // Free memory
            GlobalUnlock(lpData);
            return true;
        }

    But when I read the same file back using the GetChunk function it gives me all 0s, even though the size I get back is the same as the one uploaded. Your help will be highly appreciated.

    Read the article

  • Execution plan issue requires reset on SQL Server 2005, how to determine cause?

    - by Tony Brandner
    We have a web application, running on top of SQL Server 2005, that delivers training to thousands of corporate students. Recently, we started seeing that a single specific query in the application went from 1 second to about 30 seconds in terms of execution time. The application started throwing timeouts in that area. Our first thought was that we may have incorrect indexes, so we reviewed the tables and indexes. However, similar queries elsewhere in the application also run quickly, and reviewing the indexes showed us that they were configured as expected. We were able to narrow it down to a single query, not a stored procedure. Running this query in SQL Studio also runs quickly. We tried running the application in a different server environment: a different web server with the same query, parameters and database. The query still ran slow. The query is a fairly large one related to determining a student's current list of training. It includes joins and left joins on a dozen tables and subqueries. A few of the tables are fairly large (hundreds of thousands of rows) and some of the other tables are small lookup tables. The query uses a grouping clause and a few where conditions. A few of the tables are quite active and their contents change often, but the volume of added rows doesn't seem extreme. These symptoms led us to consider the execution plan. First off, as soon as we reset the execution plan cache with the SQL command 'DBCC FREEPROCCACHE', the problem went away. Unfortunately, the problem started to reoccur within a few days, and it has continued to plague us for a while now. It's usually the same query, but we did appear to see the problem occur in another single query recently. It happens enough to be a nuisance. We're having a heck of a time trying to fix it, since we can't reproduce it in any environment other than production. I have downloaded the High Availability guide from Red Gate and I have read up more on execution plans. I hope to run the profiler on the live server, but I'm a bit concerned about the impact. I would like to ask: what is the best way to figure out what is triggering this problem? Has anyone else seen this same issue?
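
    Those symptoms (one statement, fast in Management Studio, fixed by flushing the plan cache) are commonly associated with a bad cached plan, e.g. parameter sniffing on the parameterized query the application sends; that is a hedged guess, not a diagnosis. A sketch of inspecting the cached plans through the DMVs, which is much lighter-weight than running Profiler against production:

        -- Longest-running cached statements, with their XML plans attached.
        SELECT TOP 20
            qs.execution_count,
            qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microsec,
            SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                      (CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2 + 1) AS statement_text,
            qp.query_plan
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
        ORDER BY avg_elapsed_microsec DESC;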

    Read the article

  • SQL 2K5 - Multiple databases vs. Multiple files

    - by Bob Palmer
    Hey all, quick question. Our current legacy system was built using multiple distinct databases (about ten of them). These are all part of the same discrete system, and a large number of SPs and a lot of functionality span multiple databases. There are also key relationships that span databases (for example, a header table may be in database A with history, etc. in database B). When deploying multiple copies of our app to the same server, therefore, we have to use multiple instances (because the database names are coded into so many sprocs). We're evaluating the idea of taking these ten databases (about 30 GB total, with individual sizes ranging from 100 MB to 10 GB) and merging them into a single database. Currently, we have our databases spread across multiple spindles for better IO. The question I have is whether or not there is any performance loss or benefit to having 10 different databases vs. 10 different database files? I.e. rather than having three databases (A, B, and C):

        Disk D: A.mdf (1 GB)
        Disk E: B.mdf (4 GB)
        Disk F: C.mdf (10 GB)
        Disk G: A_Log.ldf, B_Log.ldf, C_Log.ldf

    have one database (X):

        Disk D: X1.mdf (5 GB)
        Disk E: X2.mdf (5 GB)
        Disk F: X3.mdf (5 GB)
        Disk G: X1_log.ldf, X2_log.ldf, X3_log.ldf

    Thanks!
    -Bob
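
    For the single-database option, the multi-file layout sketched in the question maps naturally onto filegroups placed on the separate spindles; a rough illustration (paths and sizes are made up):

        -- One database, three data files on three disks, one log disk.
        CREATE DATABASE X
        ON PRIMARY
            (NAME = X1, FILENAME = 'D:\Data\X1.mdf', SIZE = 5GB),
        FILEGROUP FG2
            (NAME = X2, FILENAME = 'E:\Data\X2.ndf', SIZE = 5GB),
        FILEGROUP FG3
            (NAME = X3, FILENAME = 'F:\Data\X3.ndf', SIZE = 5GB)
        LOG ON
            (NAME = X_log, FILENAME = 'G:\Log\X_log.ldf', SIZE = 2GB);
        -- Tables and indexes can then be placed explicitly, e.g. CREATE TABLE ... ON FG2.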

    Read the article
