Search Results

Search found 40744 results on 1630 pages for 'sql interview questions a'.

Page 127/1630 | < Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different web hosts, and I'm looking for a way to periodically update one server using the other. Here's what I'm looking for:
    - As automated as possible - ideally, without any involvement on my part once it's set up.
    - Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
    - Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription.
    - One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did.
    Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody. UPDATE: I ended up designing a custom solution to get this done using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
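
    A minimal sketch of the scheduled backup step described above - the database name and backup path are hypothetical. BACKUP DATABASE runs synchronously, so the FTP step in the same script can simply follow it once the statement returns:

        -- Full backup of one database; the statement blocks until the backup completes.
        BACKUP DATABASE ExampleDB
            TO DISK = N'D:\Backups\ExampleDB.bak'
            WITH INIT, NAME = N'ExampleDB nightly full';
        -- Optional sanity check before transferring the file.
        RESTORE VERIFYONLY FROM DISK = N'D:\Backups\ExampleDB.bak';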

    Read the article

  • SSIS - How do I use a resultset as input in a SQL task and get data types right?

    - by thursdaysgeek
    I am trying to merge records from an Oracle database table to my local SQL table. I have a variable for the package that is an Object, called OWell. I have a data flow task that gets the Oracle data as a SQL statement (select well_id, well_name from OWell order by Well_ID), and then a conversion task to convert well_id from a DT_STR of length 15 to a DT_WSTR, and convert well_name from a DT_STR of length 15 to a DT_WSTR of length 50. That is then stored in the recordset OWell. The reason for the conversions is that the table I want to add records to has an identity field: SSIS shows well_id as a DT_WSTR of length 15, well_name as a DT_WSTR of length 50. I then have a SQL task that connects to the local database and attempts to add records that are not there yet. I've tried various things: using the OWell as a result set and referring to it in my SQL statement. Currently, I have the ResultSet set to None, and the following SQL statement:
        Insert into WELL (WELL_ID, WELL_NAME)
        Select OWELL_ID, OWELL_NAME from OWell
        where OWELL_ID not in (select WELL.WELL_ID from WELL)
    For Parameter Mapping, I have Parameter 0, called OWell_ID, from my variable User::OWell. Parameter 1, called OWell_Name, is from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a Result set. I am getting the following error:
        Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query "Insert into WELL (WELL_ID, WELL_NAME) Select OWELL..." failed with the following error: "An error occurred while extracting the result into a variable of type (DBTYPE_STR)". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
    I don't think it's a data type issue, but rather that I somehow am not using the resultset properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add records that are missing?
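
    For comparison, a minimal sketch of the set-based alternative sometimes used instead of driving an Execute SQL Task from an object variable: land the Oracle rows in a staging table (the name OWell_Staging here is hypothetical, loaded by the data flow), then insert the missing rows in one statement:

        INSERT INTO WELL (WELL_ID, WELL_NAME)
        SELECT s.WELL_ID, s.WELL_NAME
        FROM OWell_Staging AS s
        WHERE NOT EXISTS (SELECT 1 FROM WELL AS w WHERE w.WELL_ID = s.WELL_ID);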

    Read the article

  • SQL, MVC, Entity Framework

    - by Anthony
    Hi, I'm using the above technologies and have run into what I presume is a design issue I have made. I have an Artwork table in my DB and have been able to add art (I now think of these as Digital Products) to a shopping cart + CartLine table fine. The system I have that adds art to galleries and user accounts etc. works fine. Now the client wants to sell T-shirts, Mugs and Pens etc., 'HardwareProducts', so I have created a 'HardwareProducts' table. Now I have two different product types in two tables. I use GUIDs as the PKs in both the HardwareProducts table and the Artwork table. When a customer adds an item to their cart I store the GUID in the ProductID column in the CartItems table. The issue is the database will not know which table to reference when I bring the LineItem object up through my ORM to the front end. In OOP I can see how you would have a base class of Product, and then a DigitalProduct class and a HardwareProduct class derived from it, but how do you model this in SQL Server and the Entity Framework please, or is there another way?
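
    One common way to model that base class/derived class split in the database is table-per-type: a shared Product table owns the key that CartItems references, and each subtype table shares that key. A minimal sketch with hypothetical table and column names; Entity Framework can map a layout like this as table-per-type inheritance with Product as the base entity:

        CREATE TABLE Product (
            ProductID uniqueidentifier NOT NULL PRIMARY KEY,
            Name nvarchar(200) NOT NULL
        );
        CREATE TABLE DigitalProduct (   -- e.g. artwork
            ProductID uniqueidentifier NOT NULL PRIMARY KEY
                REFERENCES Product (ProductID),
            ImageUrl nvarchar(400) NULL
        );
        CREATE TABLE HardwareProduct (  -- e.g. T-shirts, mugs, pens
            ProductID uniqueidentifier NOT NULL PRIMARY KEY
                REFERENCES Product (ProductID),
            WeightGrams int NULL
        );
        -- CartItems.ProductID now has a single table to reference.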

    Read the article

  • TSQL Help (SQL Server 2005)

    - by Mick Walker
    I have been playing around with a quite complex SQL statement for a few days, and have gotten most of it working correctly. I am having trouble with one last part, and was wondering if anyone could shed some light on the issue, as I have no idea why it isn't working:
        INSERT INTO ExistingClientsAccounts_IMPORT
        SELECT DISTINCT cca.AccountID, cca.SKBranch, cca.SKAccount, cca.SKName, cca.SKBase,
                        cca.SyncStatus, cca.SKCCY, cca.ClientType, cca.GFCID, cca.GFPID,
                        cca.SyncInput, cca.SyncUpdate, cca.LastUpdatedBy, cca.Deleted,
                        cca.Branch_Account, cca.AccountTypeID
        FROM ClientsAccounts AS cca
        INNER JOIN (SELECT DISTINCT ClientAccount, SKAccount, SKDesc, SKBase, SKBranch,
                                    ClientType, SKStatus, GFCID, GFPID,
                                    Account_Open_Date, Account_Update
                    FROM ClientsAccounts_IMPORT) AS ccai
            ON cca.Branch_Account = ccai.ClientAccount
    Table definitions follow:
        CREATE TABLE [dbo].[ExistingClientsAccounts_IMPORT](
            [AccountID] [int] NOT NULL,
            [SKBranch] [varchar](2) NOT NULL,
            [SKAccount] [varchar](12) NOT NULL,
            [SKName] [varchar](255) NULL,
            [SKBase] [varchar](16) NULL,
            [SyncStatus] [varchar](50) NULL,
            [SKCCY] [varchar](5) NULL,
            [ClientType] [varchar](50) NULL,
            [GFCID] [varchar](10) NULL,
            [GFPID] [varchar](10) NULL,
            [SyncInput] [smalldatetime] NULL,
            [SyncUpdate] [smalldatetime] NULL,
            [LastUpdatedBy] [varchar](50) NOT NULL,
            [Deleted] [tinyint] NOT NULL,
            [Branch_Account] [varchar](16) NOT NULL,
            [AccountTypeID] [int] NOT NULL
        ) ON [PRIMARY]

        CREATE TABLE [dbo].[ClientsAccounts_IMPORT](
            [NEWClientIndex] [bigint] NOT NULL,
            [ClientGroup] [varchar](255) NOT NULL,
            [ClientAccount] [varchar](255) NOT NULL,
            [SKAccount] [varchar](255) NOT NULL,
            [SKDesc] [varchar](255) NOT NULL,
            [SKBase] [varchar](10) NULL,
            [SKBranch] [varchar](2) NOT NULL,
            [ClientType] [varchar](255) NOT NULL,
            [SKStatus] [varchar](255) NOT NULL,
            [GFCID] [varchar](255) NULL,
            [GFPID] [varchar](255) NULL,
            [Account_Open_Date] [smalldatetime] NULL,
            [Account_Update] [smalldatetime] NULL,
            [SKType] [varchar](255) NOT NULL
        ) ON [PRIMARY]
    The error message I get is:
        Msg 8152, Level 16, State 14, Line 1
        String or binary data would be truncated.
        The statement has been terminated.
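
    Since "String or binary data would be truncated" usually means a source value is longer than its target column, one way to narrow it down is to compare the longest values in the source table (ClientsAccounts, whose definition isn't shown) against the target definitions above. A sketch, with the column list shortened for brevity:

        SELECT MAX(LEN(SKName))         AS max_SKName,          -- target is varchar(255)
               MAX(LEN(SKBase))         AS max_SKBase,          -- target is varchar(16)
               MAX(LEN(SKCCY))          AS max_SKCCY,           -- target is varchar(5)
               MAX(LEN(Branch_Account)) AS max_Branch_Account   -- target is varchar(16)
        FROM ClientsAccounts;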

    Read the article

  • Optimizing TSQL code

    - by adopilot
    My job is to maintain one application which heavily uses SQL Server (MSSQL 2005). Until now, the middle-tier server has stored its TSQL code in XML and sent dynamic TSQL queries without using stored procs. As I am able to change those XML queries, I want to migrate most of my queries to stored procs. The question is the following: most of my queries have the same WHERE conditions against one table. Sample:
        Select ..... from ....
        where ....
          and (a.vrsta_id = @vrsta_id or @vrsta_id = 0)
          and (a.podvrsta_id = @podvrsta_id or @podvrsta_id = 0)
          and (a.podgrupa_2 = @podgrupa2_id or @podgrupa2_id = 0)
          and ( (a.id in (select art_id from osobina_veze
                          where podosobina_id in (select ado from dbo.fn_ado_param_int(@podosobina))
                          group by art_id
                          having count(art_id) = @podosobina_count))
                or ('0' = @podosobina) )
    They also have the same WHERE conditions on another table. How should I organize my code? What is the proper way? Should I make a table-valued function that I will use in all queries, or use #temp tables and simply inner join my query to them each time the proc executes? Or use a #temp table filled by a table-valued function? Or leave all queries with this large WHERE clause and hope the index is going to do its job? Or use a WITH statement?
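
    One of the options mentioned above, sketched with hypothetical names: wrap the shared filter in an inline table-valued function and join to it from each procedure. This is only an outline of the idea, not a recommendation:

        CREATE FUNCTION dbo.fn_FilteredArticles
            (@vrsta_id int, @podvrsta_id int, @podgrupa2_id int)
        RETURNS TABLE
        AS
        RETURN
            SELECT a.id
            FROM dbo.Articles AS a          -- hypothetical base table name
            WHERE (a.vrsta_id = @vrsta_id OR @vrsta_id = 0)
              AND (a.podvrsta_id = @podvrsta_id OR @podvrsta_id = 0)
              AND (a.podgrupa_2 = @podgrupa2_id OR @podgrupa2_id = 0);
        -- Each stored proc then joins its own query to dbo.fn_FilteredArticles(...).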

    Read the article

  • What is the scope of TRANSACTION in Sql server

    - by Shantanu Gupta
    I was creating a stored procedure and my colleague and I got stuck on a difference in how we write them. I am using SQL Server 2005. I was writing the stored procedure like this:
        BEGIN TRAN
        BEGIN TRY
            INSERT INTO Tags.tblTopic (Topic, TopicCode, Description)
            VALUES(@Topic, @TopicCode, @Description)
            INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId)
            VALUES(@SubjectId, @@IDENTITY)
            COMMIT TRAN
        END TRY
        BEGIN CATCH
            DECLARE @Error VARCHAR(1000)
            SET @Error = 'ERROR NO : ' + ERROR_NUMBER() + ', LINE NO : ' + ERROR_LINE() + ', ERROR MESSAGE : ' + ERROR_MESSAGE()
            PRINT @Error
            ROLLBACK TRAN
        END CATCH
    And my colleague was writing it like the one below:
        BEGIN TRY
            BEGIN TRAN
            INSERT INTO Tags.tblTopic (Topic, TopicCode, Description)
            VALUES(@Topic, @TopicCode, @Description)
            INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId)
            VALUES(@SubjectId, @@IDENTITY)
            COMMIT TRAN
        END TRY
        BEGIN CATCH
            DECLARE @Error VARCHAR(1000)
            SET @Error = 'ERROR NO : ' + ERROR_NUMBER() + ', LINE NO : ' + ERROR_LINE() + ', ERROR MESSAGE : ' + ERROR_MESSAGE()
            PRINT @Error
            ROLLBACK TRAN
        END CATCH
    Here the only difference you will find is the position of BEGIN TRAN. According to me, my colleague's version should not work when an exception occurs, i.e. the ROLLBACK should not get executed, because the TRAN doesn't have scope. But when I ran both pieces of code, both worked in the same way. I am confused about how the TRANSACTION works. Is it scope-free or what?
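
    A transaction belongs to the session/connection rather than to the BEGIN...END block it was started in, which is why both versions above behave the same. A small defensive sketch sometimes used in the CATCH block to make that explicit (same procedure body assumed):

        BEGIN CATCH
            -- @@TRANCOUNT is session-level, so it still reports the open
            -- transaction here even though BEGIN TRAN ran inside the TRY block.
            IF @@TRANCOUNT > 0
                ROLLBACK TRAN;
            PRINT 'ERROR NO : ' + CAST(ERROR_NUMBER() AS VARCHAR(10))
                + ', LINE NO : ' + CAST(ERROR_LINE() AS VARCHAR(10))
                + ', ERROR MESSAGE : ' + ERROR_MESSAGE();
        END CATCH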

    Read the article

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY. There is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance":
    - Drop the PK and clustered index
    - Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT
    - Recreate the PK, with a NON-CLUSTERED index
    - Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT
    I am not convinced that this is the right thing to do. I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and new indexing scheme and then copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj
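
    A rough sketch of the rebuild-and-rename approach mentioned above. All object and column names are hypothetical stand-ins, and on 478 million rows every step here is still a long, heavily logged operation:

        -- 1. New table with the desired indexing scheme already in place.
        CREATE TABLE dbo.BigTable_New (
            ID   int IDENTITY NOT NULL,
            ColA int NOT NULL,
            ColB int NOT NULL,
            CONSTRAINT PK_BigTable_New PRIMARY KEY NONCLUSTERED (ID),
            CONSTRAINT UQ_BigTable_New UNIQUE CLUSTERED (ColA, ColB)
        );
        -- 2. Copy the data across (ideally in batches).
        SET IDENTITY_INSERT dbo.BigTable_New ON;
        INSERT INTO dbo.BigTable_New (ID, ColA, ColB)
        SELECT ID, ColA, ColB FROM dbo.BigTable;
        SET IDENTITY_INSERT dbo.BigTable_New OFF;
        -- 3. Swap names during a maintenance window.
        EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
        EXEC sp_rename 'dbo.BigTable_New', 'BigTable';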

    Read the article

  • Something confusing about FormsOf (Sql server Full-Text searching)

    - by AspOnMyNet
    Hi, I'm using SQL Server 2008. 1) "A given <simple_term> within a <generation_term> will not match both nouns and verbs." If I understand the above text correctly, then the query
        SELECT * FROM someTable WHERE CONTAINS ( *, ' FORMSOF ( INFLECTIONAL, park ) ' )
    should search for either nouns or verbs derived from the root word "park", but not for both? Thus, out of two rows, one containing the noun "parks" and the other the verb "parking", the above query should return just one of the two rows? But as it turns out, the query returns both rows, so are my assumptions a bit off, or is the above quote wrong? 2) From MSDN: "If freetext_string is enclosed in double quotation marks, a phrase match is instead performed; stemming and thesaurus are not performed." According to the above quote, the following query shouldn't return rows containing the strings "surfing" (due to the query not performing stemming), "surf" (due to the query performing phrase matching and not individual word matching), or "surfing with suzy's sister" (due to both of the above), but it does. Thus, it appears that even when freetext_string is enclosed in double quotation marks, stemming is still performed, while phrase matching is not:
        SELECT * FROM someTable WHERE FREETEXT( *, ' "surf sister" ' )
    So is the above quote wrong, or...? thanx

    Read the article

  • SQL select row into a string variable without knowing columns

    - by Brandi
    Hello, I am new to writing SQL and would greatly appreciate help on this problem. :) I am trying to select an entire row into a string, preferably separated by a space or a comma. I would like to accomplish this in a generic way, without having to know specifics about the columns in the tables. What I would love to do is this:
        DECLARE @MyStringVar NVARCHAR(MAX) = ''
        @MyStringVar = SELECT * FROM MyTable WHERE ID = @ID AS STRING
    But what I ended up doing was this:
        DECLARE @MyStringVar NVARCHAR(MAX) = ''
        DECLARE @SpecificField1 INT
        DECLARE @SpecificField2 NVARCHAR(255)
        DECLARE @SpecificField3 NVARCHAR(1000)
        ...
        SELECT @SpecificField1 = Field1, @SpecificField2 = Field2, @SpecificField3 = Field3
        FROM MyTable WHERE ID = @ID
        SELECT @MyStringVar = @MyStringVar + CONVERT(nvarchar(10), @SpecificField1) + ' ' + @SpecificField2 + ' ' + @SpecificField3
    Yuck. :( I have seen some people post stuff about the COALESCE function, but again, I haven't seen anyone use it without specific column names. Also, I was thinking, perhaps there is a way to use the column names dynamically by getting them from:
        SELECT [COLUMN_NAME] FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'MyTable'
    It really doesn't seem like this should be so complicated. :( What I did works for now, but thanks ahead of time to anyone who can point me to a better solution. :) EDIT: Got it fixed, thanks to everyone who answered. :)
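
    A sketch of the dynamic approach hinted at above: build the concatenation expression from INFORMATION_SCHEMA.COLUMNS and run it with sp_executesql. Table, column and variable names are taken from the question; treat it as an outline rather than a hardened solution:

        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX), @result NVARCHAR(MAX);
        SELECT @cols = COALESCE(@cols + N' + '' '' + ', N'')
                     + N'ISNULL(CONVERT(NVARCHAR(MAX), ' + QUOTENAME(COLUMN_NAME) + N'), '''')'
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'MyTable'
        ORDER BY ORDINAL_POSITION;
        SET @sql = N'SELECT @result = ' + @cols + N' FROM MyTable WHERE ID = @ID';
        EXEC sp_executesql @sql, N'@ID INT, @result NVARCHAR(MAX) OUTPUT',
                           @ID = @ID, @result = @result OUTPUT;
        SELECT @result;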

    Read the article

  • Conversion failed when converting the varchar value to int

    - by onedaywhen
    Microsoft SQL Server 2008 (SP1), getting an unexpected 'Conversion failed' error. Not quite sure how to describe this problem, so below is a simple example. The CTE extracts the numeric portion of certain IDs, using a search condition to ensure a numeric portion actually exists. The CTE is then used to find the lowest unused sequence number (kind of):
        CREATE TABLE IDs (ID CHAR(3) NOT NULL UNIQUE);
        INSERT INTO IDs (ID) VALUES ('A01'), ('A02'), ('A04'), ('ERR');

        WITH ValidIDs (ID, seq) AS (
            SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER)
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1);
    The error is, 'Conversion failed when converting the varchar value 'RR' to data type int.' I can't understand why the value ID = 'ERR' should be considered for conversion, because the predicate ID LIKE 'A[0-9][0-9]' should have removed the invalid row from the resultset. When the base table is substituted with an equivalent CTE the problem goes away, i.e.
        WITH IDs (ID) AS (
            SELECT 'A01' UNION ALL SELECT 'A02' UNION ALL SELECT 'A04' UNION ALL SELECT 'ERR'
        ),
        ValidIDs (ID, seq) AS (
            SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER)
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1);
    Why would a base table cause this error? Is this a known issue? UPDATE @sgmoore: no, doing the filtering in one CTE and the casting in another CTE still results in the same error, e.g.
        WITH FilteredIDs (ID) AS (
            SELECT ID
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        ),
        ValidIDs (ID, seq) AS (
            SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER)
            FROM FilteredIDs
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1);
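
    Because SQL Server is free to evaluate the CAST before the LIKE filter, one common workaround is to make the conversion itself conditional, so rows like 'ERR' can never reach it. A minimal sketch based on the first query above:

        WITH ValidIDs (ID, seq) AS (
            SELECT ID,
                   CAST(CASE WHEN ID LIKE 'A[0-9][0-9]'
                             THEN RIGHT(ID, 2) END AS INTEGER)
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1);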

    Read the article

  • Java w/ SQL Server Express 2008 - Index out of range exception

    - by BS_C3
    Hi! I created a stored procedure in a SQL Server Express 2008 database and I'm getting the following error when calling the procedure from a Java method: Index 36 is out of range.
        com.microsoft.sqlserver.jdbc.SQLServerException: Index 36 is out of range.
            at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
            at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setterGetParam(SQLServerPreparedStatement.java:698)
            at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setValue(SQLServerPreparedStatement.java:707)
            at com.microsoft.sqlserver.jdbc.SQLServerCallableStatement.setString(SQLServerCallableStatement.java:1504)
            at fr.alti.ccm.middleware.Reporting.initReporting(Reporting.java:227)
            at fr.alti.ccm.middleware.Reporting.main(Reporting.java:396)
    I cannot figure out where it is coming from... _< Any help would be appreciated. Regards, BS_C3. Here's some source code:
        public ArrayList<ReportingTableMapping> initReporting(
                String division, String shop, String startDate, String endDate) {
            ArrayList<ReportingTableMapping> rTable = new ArrayList<ReportingTableMapping>();
            ManagerDB db = new ManagerDB();
            CallableStatement callStmt = null;
            ResultSet rs = null;
            try {
                callStmt = db.getConnexion().prepareCall("{call getInfoReporting(?,...,?)}");
                callStmt.setString("CODE_DIVISION", division);
                .
                .
                .
                callStmt.setString("cancelled", " ");
                rs = callStmt.executeQuery();
                while (rs.next()) {
                    ReportingTableMapping rtm = new ReportingTableMapping(
                        rs.getString("werks"),
                        ...
                    );
                    rTable.add(rtm);
                }
                rs.close();
                callStmt.close();
            } catch (Exception e) {
                System.out.println(e.getMessage());
                e.printStackTrace();
            } finally {
                if (rs != null) try { rs.close(); } catch (Exception e) { }
                if (callStmt != null) try { callStmt.close(); } catch (Exception e) { }
                if (db.getConnexion() != null) try { db.getConnexion().close(); } catch (Exception e) { }
            }
            return rTable;
        }
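
    Since the driver reports "Index 36 is out of range" while binding a parameter, one thing worth checking is whether the procedure actually declares that many parameters, with those exact names. A small T-SQL sketch for that check, offered purely as a diagnostic:

        SELECT p.parameter_id, p.name, TYPE_NAME(p.user_type_id) AS type_name
        FROM sys.parameters AS p
        WHERE p.object_id = OBJECT_ID('dbo.getInfoReporting')
        ORDER BY p.parameter_id;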

    Read the article

  • Modify SQL result set before returning from stored procedure

    - by m0sa
    I have a simple table in my SQL Server 2008 DB:
        Tasks_Table
        -id
        -task_complete
        -task_active
        -column_1
        -..
        -column_N
    The table stores instructions for uncompleted tasks that have to be executed by a service. I want to be able to scale my system in the future. Until now, only one service on one computer read from the table. I have a stored procedure that selects all uncompleted and inactive tasks. As the service begins to process tasks, it updates the task_active flag in all the returned rows. To enable scaling of the system I want to enable deployment of the service on more machines. Because I want to prevent a task being returned to more than one service, I have to update the stored procedure that returns uncompleted and inactive tasks. I figured that I have to lock the table (only one reader at a time - I know I have to use an appropriate ISOLATION LEVEL), and update the task_active flag in each row of the result set before returning the result set. So my question is: how do I modify the SELECT result set in the stored procedure before returning it?
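
    One pattern often used for this kind of work queue is to do the update and the select in a single statement with the OUTPUT clause, so claiming a task and returning it is atomic. A minimal sketch against the table described above, assuming the flags are 0/1 values:

        UPDATE TOP (10) Tasks_Table WITH (ROWLOCK, READPAST, UPDLOCK)
        SET    task_active = 1
        OUTPUT inserted.id, inserted.column_1   -- add whichever columns the service needs
        WHERE  task_complete = 0
          AND  task_active = 0;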

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this:
        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL' WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO
    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work. My Question (TL;DR): How can I shrink my transaction log file without disabling the mirror?
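
    For what it's worth, a sketch of the approach that is generally compatible with mirroring: take real log backups to a file (mirroring prevents switching to SIMPLE recovery or doing truncate-only backups, but ordinary log backups still truncate the inactive portion of the log), then shrink. The path and target size are placeholders:

        BACKUP LOG dbname TO DISK = N'D:\Backups\dbname_log.trn'
            WITH INIT, NAME = N'dbname log backup';
        GO
        -- Schedule/repeat the log backup; once the active portion of the log
        -- has moved, the file can be shrunk to a sensible size (in MB here).
        DBCC SHRINKFILE ('dbname_Log', 2048);
        GO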

    Read the article

  • In SQL can I return a table with a varying number of columns

    - by Matt
    I have a somewhat more complicated scenario, but I think it should be possible. I have a large SPROC whose result is a set of characteristics for a set of persons. So the table would look something like this:
        Property | Client1   Client2   Client3
        -------------------------------------
        Sex      | M         F         M
        Age      | 67        56        67
        Income   | Low       Mid       Low
    It's built using cursors, iterating over different datasets. The problem I am facing is that there is a varying number of clients and properties, so an equally valid result over different input sets might be:
        Property | Client1   Client2
        ----------------------------
        Sex      | M         F
        Age      | 67        56
        Weight   | 122       122
    The different number of properties is easy; those are just extra rows. My problem is that I need to declare a temporary table with a varying number of columns. There could be 2 clients or 100. Every client is guaranteed to have every property ultimately listed. What SQL structure would satisfy this, and how can I declare it and insert things into it? I can't just flip the columns and rows either, because there is a variable number of each.
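
    A minimal sketch of one way this is often handled: build the column list dynamically from the distinct clients and create the temp table with dynamic SQL. The source table and column names here are hypothetical stand-ins for whatever the SPROC's cursors read from:

        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);
        SELECT @cols = COALESCE(@cols + ', ', '') + QUOTENAME(ClientName) + ' VARCHAR(50)'
        FROM (SELECT DISTINCT ClientName FROM ClientProperties) AS c;   -- hypothetical source
        SET @sql = N'CREATE TABLE ##Results (Property VARCHAR(50), ' + @cols + N');';
        EXEC sp_executesql @sql;
        -- The cursor logic can then INSERT into ##Results (also via dynamic SQL),
        -- and the caller reads from the global temp table.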

    Read the article

  • Unable to add FromName to e-mail using cdosys in SQL Server 2008

    - by Alex Andronov
    I have a piece of cdosys code which runs correctly and generates e-mail, with my SQL Server 2008 server talking to a MS Exchange 2003 server. However, the from name is not appearing on the e-mails when they arrive. Is there a fault in the code, or is it not possible this way? Thanks in advance.
        usp_send_cdosysmail
            @from varchar(500),
            @to text,
            @bcc text,
            @subject varchar(1000),
            @body text,
            @smtpserver varchar(25),
            @bodytype varchar(10)
        as
        declare @imsg int
        declare @hr int
        declare @source varchar(255)
        declare @description varchar(500)
        declare @output varchar(8000)
        exec @hr = sp_oacreate 'cdo.message', @imsg out
        exec @hr = sp_oasetproperty @imsg, 'configuration.fields("http://schemas.microsoft.com/cdo/configuration/sendusing").value', '2'
        exec @hr = sp_oasetproperty @imsg, 'configuration.fields("http://schemas.microsoft.com/cdo/configuration/smtpserver").value', @smtpserver
        exec @hr = sp_oamethod @imsg, 'configuration.fields.update', null
        exec @hr = sp_oasetproperty @imsg, 'to', @to
        exec @hr = sp_oasetproperty @imsg, 'bcc', @bcc
        exec @hr = sp_oasetproperty @imsg, 'from', @from
        exec @hr = sp_oasetproperty @imsg, 'fromname', 'A From Name'
        exec @hr = sp_oasetproperty @imsg, 'subject', @subject
        -- if you are using html e-mail, use 'htmlbody' instead of 'textbody'.
        exec @hr = sp_oasetproperty @imsg, @bodytype, @body
        exec @hr = sp_oamethod @imsg, 'send', null
        -- sample error handling.
        if @hr <> 0
            select @hr
        begin
            exec @hr = sp_oageterrorinfo null, @source out, @description out
            if @hr = 0
            begin
                select @output = ' source: ' + @source
                print @output
                select @output = ' description: ' + @description
                print @output
            end
            else
            begin
                print ' sp_oageterrorinfo failed.'
                return
            end
        end
        exec @hr = sp_oadestroy @imsg
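
    As far as I can tell, CDO.Message exposes a From property rather than a separate FromName, so one thing sometimes tried instead is putting the display name directly into the From address in "Display Name <address>" form. A hedged sketch of that single line (the address is a placeholder):

        exec @hr = sp_oasetproperty @imsg, 'from', '"A From Name" <someone@example.com>'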

    Read the article

  • SQL Server 2008 - Full Text Query

    - by user208662
    Hello, I have two tables in a SQL Server 2008 database at my company. The first table represents the products that my company sells. The second table contains the product manufacturer's details. These tables are defined as follows:
        Product
        -------
        ID
        Name
        ManufacturerID
        Description

        Manufacturer
        ------------
        ID
        Name
    As you can imagine, I want to make this as easy as possible for our customers to query this data. However, I'm having problems writing a forgiving, yet powerful search query. For instance, I'm anticipating people to search based on phonetic spellings. Because of this, the data may not match the exact data in my database. In addition, I think some individuals will search by manufacturer's name first, but I want the matching product names to appear first. Based on these requirements, I'm currently working on the following query:
        select p.Name as 'ProductName', m.Name as 'Manufacturer', r.Rank as 'Rank'
        from Product p
        inner join Manufacturer m on p.ManufacturerID = m.ID
        inner join CONTAINSTABLE(Product, Name, @searchQuery) as r
    Oddly, this query is throwing an error. However, I have no idea why. Squiggles appear to the right of the last parenthesis in Management Studio. The tooltip says "An expression of non-boolean type specified in a context where a condition is expected". I understand what this statement means. However, I guess I do not know how ContainsTable works. What am I doing wrong? Thank you
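
    For reference, a minimal sketch of how CONTAINSTABLE is typically joined: it needs an ON clause matching its [KEY] column to the base table's full-text key column (assumed here to be Product.ID):

        select p.Name as 'ProductName', m.Name as 'Manufacturer', r.[RANK] as 'Rank'
        from Product p
        inner join Manufacturer m on p.ManufacturerID = m.ID
        inner join CONTAINSTABLE(Product, Name, @searchQuery) as r
            on r.[KEY] = p.ID
        order by r.[RANK] desc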

    Read the article

  • Fixing an SQL where statement that is ugly and confusing

    - by Mike Wills
    I am directly querying the back-end MS SQL Server for a software package. The key field (vehicle number) is defined as alpha, though we are entering numeric values in the field. There is only one exception to this: we place an "R" before the number when the vehicle is being retired (which means we sold it or the vehicle is junked). Assuming the users do this right, we should not run into a problem using this method. (Right or wrong isn't the issue here.) Fast forward to now. I am trying to query a subset of these vehicle numbers (800 - 899) for some special processing. By doing a range of '800' to '899' we also get 80, 81, etc. If I cast the vehicle number into an INT, I should be able to get the right range. Except that these "R" vehicles are kicking me in the butt now. I have tried
        where vehicleId not like 'R%' and cast(vehicleId as int) between 800 and 899
    however, I get a casting error on one of these "R" vehicles. What does work is
        where vehicleId not between '800' and '899' and cast(vehicleId as int) between 800 and 899
    but I feel there has to be a better and less confusing way. I have also tried other variations with HAVING and a sub-query, all producing a casting error.
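
    One sketch of an alternative that avoids casting entirely: assuming the numeric IDs are stored without padding, the 800-899 vehicles are exactly the three-character values that start with 8 and contain only digits, which a LIKE pattern can express directly. Offered as an idea to adapt rather than a drop-in fix:

        where vehicleId like '8[0-9][0-9]'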

    Read the article

  • Indexing table with duplicates MySQL/SQL Server with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns:
        ID  Store_ID  Feature_ID  Order_ID  Viewed_Date  Deal_ID  IsTrial
    The ID is auto-generated. Store_ID goes from 1 - 8. Feature_ID goes from 1 - let's say 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs to view the number of views in a certain period (or overall) where trial is 0, for a particular store id and for a particular feature. The query takes the form of:
        select count(viewed_date)
        from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
          and store_id = '2'
          and feature_id = '12'
          and Istrial = 0
    In SQL Server you can have a filtered index to use for IsTrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search period, I need better improvement than this. Right now I have more than 4 million rows. To search for a particular query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows. PS. I forgot to add the viewed_date filter in the query. Now I have done this.
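
    A sketch of the kind of composite index usually suggested for this query shape - equality columns first, the range column last, so the engine can seek on store/feature/trial and then range-scan the dates. The index name is arbitrary, and the statement is the same in MySQL and SQL Server:

        CREATE INDEX ix_views_store_feature_trial_date
            ON theTable (store_id, feature_id, istrial, viewed_date);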

    Read the article

  • SQL Server query

    - by carrot_programmer_3
    Hi, I have a SQL Server DB containing a registrations table that I need to plot on a graph over time. The issue is that I need to break this down by where the user registered from (e.g. website, WAP site, or a mobile application). The resulting output data should look like this:
        [date]       [num_reg_website]  [num_reg_wap_site]  [num_reg_mobileapp]
        1 FEB 2010   24                 35                  64
        2 FEB 2010   23                 85                  48
        3 FEB 2010   29                 37                  79
        etc...
    The source table is as follows:
        UUID (int), signupdate (datetime), requestsource (varchar(50))
    Some sample data in this table looks like this:
        1001, 2010-02-2:00:12:12, 'website'
        1002, 2010-02-2:00:10:17, 'app'
        1003, 2010-02-3:00:14:19, 'website'
        1004, 2010-02-4:00:16:18, 'wap'
        1005, 2010-02-4:00:18:16, 'website'
    Running the following query returns one data column, 'total registrations', for the website registrations, but I'm not sure how to do this for multiple columns, unfortunately:
        select CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME) as [signupdate],
               count(UUID) as 'total registrations'
        FROM [UserRegistrationRequests]
        WHERE requestsource = 'website'
        group by CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME)
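
    A minimal sketch of the usual way to get all three columns in one pass - conditional aggregation with CASE, reusing the date expression and the requestsource values from the question:

        select CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME) as [signupdate],
               sum(case when requestsource = 'website' then 1 else 0 end) as num_reg_website,
               sum(case when requestsource = 'wap'     then 1 else 0 end) as num_reg_wap_site,
               sum(case when requestsource = 'app'     then 1 else 0 end) as num_reg_mobileapp
        from [UserRegistrationRequests]
        group by CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME)
        order by 1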

    Read the article

  • What is preferred method for searching table data using stored procedure?

    - by Mourya
    I have a customer table with Cust_Id, Name, City, and the search is based upon any or all of the above three. Which one should I go for?
    Dynamic SQL:
        declare @str varchar(1000)
        set @str = 'Select [Sno],[Cust_Id],[Name],[City],[Country],[State] from Customer where 1 = 1'
        if (@Cust_Id != '')
            set @str = @str + ' and Cust_Id = ''' + @Cust_Id + ''''
        if (@Name != '')
            set @str = @str + ' and Name like ''' + @Name + '%'''
        if (@City != '')
            set @str = @str + ' and City like ''' + @City + '%'''
        exec (@str)
    Simple query:
        select [Sno],[Cust_Id],[Name],[City],[Country],[State]
        from Customer
        where (@Cust_Id = '' or Cust_Id = @Cust_Id)
          and (@Name = '' or Name like @Name + '%')
          and (@City = '' or City like @City + '%')
    Which one should I prefer (1 or 2), and what are the advantages? After going through everyone's suggestions, here is what I finally got:
        DECLARE @str NVARCHAR(1000)
        DECLARE @ParametersDefinition NVARCHAR(500)
        SET @ParametersDefinition = N'@InnerCust_Id varchar(10), @InnerName varchar(30), @InnerCity varchar(30)'
        SET @str = 'Select [Sno],[Cust_Id],[Name],[City],[Country],[State] from Customer where 1 = 1'
        IF(@Cust_Id != '')
            SET @str = @str + ' and Cust_Id = @InnerCust_Id'
        IF(@Name != '')
            SET @str = @str + ' and Name like @InnerName'
        IF(@City != '')
            SET @str = @str + ' and City like @InnerCity'
        -- Add the % symbol for searches based upon the LIKE keyword
        SELECT @Name = @Name + '%', @City = @City + '%'
        EXEC sp_executesql @str, @ParametersDefinition,
             @InnerCust_Id = @Cust_Id, @InnerName = @Name, @InnerCity = @City;
    References:
        http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/changing-exec-to-sp_executesql-doesn-t-p
        http://msdn.microsoft.com/en-us/library/ms175170.aspx

    Read the article

  • SQL Server: Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of:
    - Performance: Is there an advantage to using a smaller n when selecting, filtering and sorting on the data?
    - Memory, including on the application side (C++)?
    - Style/validation: How important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)?
    - Anything else?
    Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practices" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data where real maximum sizes will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.
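
    A small throwaway check that illustrates the storage point made above (a sketch only - the table and value are made up): DATALENGTH reports the same byte count for the same value regardless of the declared maximum.

        CREATE TABLE #width_test (narrow VARCHAR(10), wide VARCHAR(255));
        INSERT INTO #width_test VALUES ('strawberry', 'strawberry');
        SELECT DATALENGTH(narrow) AS narrow_bytes, DATALENGTH(wide) AS wide_bytes
        FROM #width_test;   -- both return 10
        DROP TABLE #width_test;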

    Read the article

  • Please explain this delete top 100 SQL syntax

    - by Patrick
    Basically I want to do this:
        delete top (100) from table order by id asc
    but MS SQL doesn't allow ORDER BY in this position. The common solution seems to be this:
        DELETE table WHERE id IN (SELECT TOP (100) id FROM table ORDER BY id asc)
    But I also found this method here:
        delete table from (select top (100) * from table order by id asc) table
    which has a much better estimated execution plan (74:26). Unfortunately I don't really understand the syntax; please can someone explain it to me? Always interested in any other methods to achieve the same result as well. EDIT: I'm still not getting it, I'm afraid. I want to be able to read the query as I read the first two, which are practically English. The above queries to me are:
    - delete the top 100 records from table, with the records ordered by id ascending
    - delete the top 100 records from table where id is any one of (this lot of ids)
    - delete table from (this lot of records) table
    I can't change the third one into a logical English sentence... I guess what I'm trying to get at is how does this turn into "delete from table (this lot of records)". The 'from' seems to be in an illogical position and the second mention of 'table' is logically superfluous (to me).
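
    For comparison, a sketch of an equivalent form that some people find easier to read aloud: put the TOP/ORDER BY in a CTE and delete through it - rows deleted from the CTE are deleted from the underlying table. [table] stands in for the real table name, since "table" itself is a reserved word:

        WITH oldest AS (
            SELECT TOP (100) *
            FROM [table]
            ORDER BY id ASC
        )
        DELETE FROM oldest;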

    Read the article

  • Date and Time Support in SQL Server 2008

    - by Aamir Hasan
    Using the New Date and Time Data Types
    1. The new date and time data types in SQL Server 2008 offer increased range and precision and are ANSI SQL compatible.
    2. Separate date and time data types minimize storage space requirements for applications that need only date or time information. Moreover, the variable precision of the new time data type increases storage savings in exchange for reduced accuracy.
    3. The new data types are mostly compatible with the original date and time data types and use the same Transact-SQL functions.
    4. The datetimeoffset data type allows you to handle date and time information in global applications that use data that originates from different time zones.
        SELECT c.name, p.*
        FROM politics p
        JOIN country c ON p.country = c.code
        WHERE YEAR(Independence) < 1753
        ORDER BY Independence
        GO
    8. Highlight the SELECT statement and click Execute to show the use of some of the date functions.
        SELECT c.name AS [Country Name],
               CONVERT(VARCHAR(12), p.Independence, 107) AS [Independence Date],
               DATEDIFF(YEAR, p.Independence, GETDATE()) AS [Years Independent (approx)],
               p.Government
        FROM politics p
        JOIN country c ON p.country = c.code
        WHERE YEAR(Independence) < 1753
        ORDER BY Independence
        GO
    10. Select the SET DATEFORMAT statement and click Execute to change the DATEFORMAT to day-month-year.
        SET DATEFORMAT dmy
        GO
    11. Select the DECLARE and SELECT statements and click Execute to show how the datetime and datetime2 data types interpret a date literal.
        SET DATEFORMAT dmy
        DECLARE @dt datetime = '2008-12-05'
        DECLARE @dt2 datetime2 = '2008-12-05'
        SELECT MONTH(@dt) AS [Month-Datetime], DAY(@dt) AS [Day-Datetime]
        SELECT MONTH(@dt2) AS [Month-Datetime2], DAY(@dt2) AS [Day-Datetime2]
        GO
    12. Highlight the DECLARE and SELECT statements and click Execute to use integer arithmetic on a datetime variable.
        DECLARE @dt datetime = '2008-12-05'
        SELECT @dt + 1
        GO
    13. Highlight the DECLARE and SELECT statements and click Execute to show how integer arithmetic is not allowed for datetime2 variables.
        DECLARE @dt2 datetime2 = '2008-12-05'
        SELECT @dt2 + 1
        GO
    14. Highlight the DECLARE and SELECT statements and click Execute to show how to use DATE functions to do simple arithmetic on datetime2 variables.
        DECLARE @dt2 datetime2(7) = '2008-12-05'
        SELECT DATEADD(d, 1, @dt2)
        GO
    15. Highlight the DECLARE and SELECT statements and click Execute to show how the GETDATE function can be used with both datetime and datetime2 data types.
        DECLARE @dt datetime = GETDATE();
        DECLARE @dt2 datetime2(7) = GETDATE();
        SELECT @dt AS [GetDate-DateTime], @dt2 AS [GetDate-DateTime2]
        GO
    16. Draw attention to the values returned for both columns and how they are equal.
    17. Highlight the DECLARE and SELECT statements and click Execute to show how the SYSDATETIME function can be used with both datetime and datetime2 data types.
        DECLARE @dt datetime = SYSDATETIME();
        DECLARE @dt2 datetime2(7) = SYSDATETIME();
        SELECT @dt AS [Sysdatetime-DateTime], @dt2 AS [Sysdatetime-DateTime2]
        GO
    18. Draw attention to the values returned for both columns and how they are different.
    Programming Global Applications with DateTimeOffset
    2. If you have not previously created the SQLTrainingKitDB database while completing another demo in this training kit, highlight the CREATE DATABASE statement and click Execute to do so now.
        CREATE DATABASE SQLTrainingKitDB
        GO
    3. Select the USE and CREATE TABLE statements and click Execute to create table datetest in the SQLTrainingKitDB database.
        USE SQLTrainingKitDB
        GO
        CREATE TABLE datetest (
            id integer IDENTITY PRIMARY KEY,
            datetimecol datetimeoffset,
            EnteredTZ varchar(40)
        );
    Reference: http://www.microsoft.com/downloads/details.aspx?FamilyID=E9C68E1B-1E0E-4299-B498-6AB3CA72A6D7&displaylang=en

    Read the article

  • SQL ADO.NET shortcut extensions (old school!)

    - by Jeff
    As much as I love me some ORM's (I've used LINQ to SQL quite a bit, and for the MSDN/TechNet Profile and Forums we're using NHibernate more and more), there are times when it's appropriate, and in some ways more simple, to just throw up so old school ADO.NET connections, commands, readers and such. It still feels like a pain though to new up all the stuff, make sure it's closed, blah blah blah. It's pretty much the least favorite task of writing data access code. To minimize the pain, I have a set of extension methods that I like to use that drastically reduce the code you have to write. Here they are... public static void Using(this SqlConnection connection, Action<SqlConnection> action) {     connection.Open();     action(connection);     connection.Close(); } public static SqlCommand Command(this SqlConnection connection, string sql){    var command = new SqlCommand(sql, connection);    return command;}public static SqlCommand AddParameter(this SqlCommand command, string parameterName, object value){    command.Parameters.AddWithValue(parameterName, value);    return command;}public static object ExecuteAndReturnIdentity(this SqlCommand command){    if (command.Connection == null)        throw new Exception("SqlCommand has no connection.");    command.ExecuteNonQuery();    command.Parameters.Clear();    command.CommandText = "SELECT @@IDENTITY";    var result = command.ExecuteScalar();    return result;}public static SqlDataReader ReadOne(this SqlDataReader reader, Action<SqlDataReader> action){    if (reader.Read())        action(reader);    reader.Close();    return reader;}public static SqlDataReader ReadAll(this SqlDataReader reader, Action<SqlDataReader> action){    while (reader.Read())        action(reader);    reader.Close();    return reader;} It has been awhile since I've really revisited these, so you will likely find opportunity for further optimization. The bottom line here is that you can chain together a bunch of these methods to make a much more concise database call, in terms of the code on your screen, anyway. Here are some examples: public Dictionary<string, string> Get(){    var dictionary = new Dictionary<string, string>();    _sqlHelper.GetConnection().Using(connection =>        connection.Command("SELECT Setting, [Value] FROM Settings")            .ExecuteReader()            .ReadAll(r => dictionary.Add(r.GetString(0), r.GetString(1))));    return dictionary;} or... public void ChangeName(User user, string newName){    _sqlHelper.GetConnection().Using(connection =>         connection.Command("UPDATE Users SET Name = @Name WHERE UserID = @UserID")            .AddParameter("@Name", newName)            .AddParameter("@UserID", user.UserID)            .ExecuteNonQuery());} The _sqlHelper.GetConnection() is just some other code that gets a connection object for you. You might have an even cleaner way to take that step out entirely. This looks more fluent, and the real magic sauce for me is the reader bits where you can put any kind of arbitrary method in there to iterate over the results.

    Read the article

  • Setting Timeouts: SQL Server 2008/IIS 7.5

    - by Julie
    We have recently migrated from a Win 2003/SQL Server 2000 system to Win 2008 64-bit R2, SQL Server 2008 R2. Our websites are in classic ASP, and this can't be changed to another scripting language at this time. On the old server, if I got stuck in some kind of endless loop, the page would throw an error. On the new server, I have a page with some sort of looping problem that, even though the SQL SP is called only once (and runs fine when run as a query on the server), pegs SQL Server and therefore locks all of our websites. I'll get my code figured out, no biggie. But I need to make sure the server times out when this happens. (The page I'm working on runs fine with certain instances of the query, and locks with others using a different query variable. I can't have something like that sneak up on me on a page I haven't touched for three years.) I can't figure out how an SP that runs once on the server, from an ASP page, is tying up SQL Server this way. It's obviously some sort of a timeout issue, but I can't figure out where/which timeout values to change. I actually have to remote desktop to the server and kill the process in SQL Server. I'm afraid I'm a generalist, and server management is not my thing, even though it's my responsibility, so I am almost certain to have questions about any answer that I receive. How can I track this down? What settings do I need to change?
    More info: it's not SQL Server. On our test site, I created an ASP file that just did an endless loop (do while 1=1) and had the same problem - the other websites wouldn't load - without SQL Server being involved. So I think the reason the process was hanging is that the page wasn't timing out as it should, and so the connection to SQL was never closed. Killing the process in SQL Server would reset the page somehow. For my intentional endless loop, I had to refresh the app pool to get rid of it. This points more to either IIS or the ASP settings. The ASP timeouts are set to whatever the defaults were when the server was first loaded. I still can't figure out why one file is locking up all websites, though. Again, that didn't happen on the old server.

    Read the article
