Search Results

Search found 36773 results on 1471 pages for 'sql statement syntax'.

  • SQL: Speed Improvement - Cluttered union query

    - by vol7ron
    SELECT *
    FROM (
        SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
        FROM current_tbl a
        INNER JOIN import_tbl b ON (a.user_id = b.user_id)
        UNION
        SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
        FROM current_tbl a
        INNER JOIN import_tbl b
            ON (lower(a.f_name) = lower(b.f_name) AND lower(a.l_name) = lower(b.l_name))
    ) foo
    UNION
    SELECT a.user_id, a.f_name, a.l_name, '', '', ''
    FROM current_tbl a
    WHERE a.user_id NOT IN (
        SELECT user_id FROM (
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (a.user_id = b.user_id)
            UNION
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b
                ON (lower(a.f_name) = lower(b.f_name) AND lower(a.l_name) = lower(b.l_name))
        ) bar
    )
    ORDER BY user_id

    Example of table population:

    current_tbl:
        user_id | f_name | l_name
        --------+--------+--------
        A1      | Adam   | Acorn
        A2      | Beth   | Berry
        A3      | Calv   | Chard

    import_tbl:
        user_id | f_name | l_name
        --------+--------+--------
        A1      | Adam   | Acorn
        A2      | Beth   | Butcher   <- last name different

    Expected output:
        user_id1 | f_name1 | l_name1 | user_id2 | f_name2 | l_name2
        ---------+---------+---------+----------+---------+---------
        A1       | Adam    | Acorn   | A1       | Adam    | Acorn
        A2       | Beth    | Berry   | A2       | Beth    | Butcher
        A3       | Calv    | Chard   |          |         |

    Doing it this way gets rid of conditions where the row would be:
        A2 | Beth | Berry | A2 | Beth | Butcher
    but it keeps the A3 row. I hope this makes sense and I haven't overly simplified it. This is a continuation of my other question. The succession of these improvements has dropped the query from ~32000 ms down to where it's at now, ~1200 ms - quite an improvement. I suspect I can optimize by using UNION ALL in the subquery, and of course the usual index optimizations, but I'm looking for the best SQL optimization. FYI, this particular case is for PostgreSQL.
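
    One possible rewrite, sketched here under the question's own table names (a sketch, not a tested answer): replace the repeated UNION subquery with two LEFT JOINs, so current_tbl is scanned once and unmatched rows fall out with blank columns. Note this collapses each current_tbl row to a single result row, preferring the user_id match, which matches the expected output above but may not preserve every duplicate-match case:

        SELECT a.user_id, a.f_name, a.l_name,
               COALESCE(b.user_id, c.user_id, '') AS user_id2,
               COALESCE(b.f_name,  c.f_name,  '') AS f_name2,
               COALESCE(b.l_name,  c.l_name,  '') AS l_name2
        FROM current_tbl a
        LEFT JOIN import_tbl b ON a.user_id = b.user_id
        LEFT JOIN import_tbl c ON lower(a.f_name) = lower(c.f_name)
                              AND lower(a.l_name) = lower(c.l_name)
        ORDER BY a.user_id;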

  • Why use INCLUDE in a SQL index

    - by StarLite
    I recently encountered an index in a database I maintain that was of the form:

        CREATE INDEX [IX_Foo] ON [Foo]
        (
            Id ASC
        )
        INCLUDE ( SubId )

    In this particular case, the performance problem I was encountering (a slow SELECT filtering on both Id and SubId) could be fixed by simply moving the SubId column into the index proper rather than leaving it as an included column. This got me thinking, however, that I don't understand the reasoning behind included columns at all, when generally they could simply be a part of the index itself. Even if I don't particularly care about the items being in the index itself, is there any downside to having a column in the index key rather than simply included?

    After some research, I am aware that there are a number of restrictions on what can go into an indexed column (maximum width of the index, and some column types that can't be indexed, like 'image'). In those cases I can see that you would be forced to include the column in the index page data. The only other thing I can think of is that if there are updates on SubId, the row will not need to be relocated if the column is merely included (though the value in the index would still need to be changed). Is there something else that I'm missing?

    I'm considering going through the other indexes in the database and shifting included columns into the index proper where possible. Would this be a mistake? I'm primarily interested in MS SQL Server, but information on other DB engines is welcome also.
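
    For comparison, a minimal sketch of the two variants (same names as the question's index): key columns participate in the B-tree ordering and can be seeked on, while INCLUDE columns are stored only at the leaf level, which keeps the non-leaf pages smaller and sidesteps the key-width and key-column-count limits:

        -- SubId as a key column: usable in seeks and range predicates
        CREATE INDEX IX_Foo_Key ON Foo (Id ASC, SubId ASC);

        -- SubId as an included column: covers SELECTs but cannot be seeked on
        CREATE INDEX IX_Foo_Incl ON Foo (Id ASC) INCLUDE (SubId);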

  • Creating LINQ to SQL Data Models' Data Contexts with ASP.NET MVC

    - by Maxim Z.
    I'm just getting started with ASP.NET MVC, mostly by reading ScottGu's tutorial. To create my database connections, I followed the steps he outlined, which were to create a LINQ-to-SQL .dbml model, add in the database tables through the Server Explorer, and finally to create a DataContext class. That last part is the part I'm stuck on. In this class, I'm trying to create methods that work with the exposed data. Following the example in the tutorial, I created this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        namespace MySite.Models
        {
            public partial class MyDataContext
            {
                public List<Post> GetPosts()
                {
                    return Posts.ToList();
                }

                public Post GetPostById(int id)
                {
                    return Posts.Single(p => p.ID == id);
                }
            }
        }

    As you can see, I'm trying to use my Post data table. However, it doesn't recognize the "Posts" part of my code. What am I doing wrong? I have a feeling that my problem is related to my not adding the data tables correctly, but I'm not sure. Thanks in advance.

  • When connecting SAP Business One to SQL Server 2005, what is the best practice for user authentication?

    - by Nick
    We have SAP Business One - Fourth Shift Edition running here at a small manufacturing company. The consulting company that has come in to do the installation/implementation uses the "sa" id/pass to initially connect to the database to get the list of companies. From then on, I have to assume that it's the sa id/pass that is being used to connect the client software to the database. Is this appropriate? I don't know where this data is being stored... as an ODBC connection? Directly in the registry somewhere? Is it secure?

    Would it be better to set the user's network ID in the database security and then use the "trusted connection" setting instead? Or do most people create a separate login in the database for each user and use that in the client settings? It seems like the easiest way would be to add the user's network login to the SQL Server security so they can use the "trusted connection"... but then wouldn't that allow ANY software to connect to the database from that machine?

    So anyway: what are the best practices for setting this up?
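
    As a point of reference, a minimal sketch of the trusted-connection route the question describes (all names hypothetical; this is generic SQL Server practice, not SAP-specific guidance): grant a Windows group a login and scoped database permissions rather than sharing sa:

        CREATE LOGIN [DOMAIN\SapB1Users] FROM WINDOWS;
        USE SBODemo;  -- the company database
        CREATE USER [DOMAIN\SapB1Users] FOR LOGIN [DOMAIN\SapB1Users];
        EXEC sp_addrolemember N'db_datareader', N'DOMAIN\SapB1Users';
        EXEC sp_addrolemember N'db_datawriter', N'DOMAIN\SapB1Users';

    A trusted connection only authenticates the user; what any software on that machine can then do is still bounded by the permissions granted to that login.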

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different webhosts, and I'm looking for a way to periodically update the one server using the other. Here's what I'm looking for:

        - As automated as possible - ideally, without any involvement on my part once it's set up.
        - Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
        - Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription.
        - One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did.

    Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine.

    Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody.

    UPDATE: I ended up designing a custom solution to get this done using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
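
    On the synchronous question: BACKUP DATABASE is a synchronous T-SQL statement - control does not return until the backup file is complete - so a scheduled script can safely transfer the file on the next line. A minimal sketch (paths hypothetical):

        BACKUP DATABASE MyDb
            TO DISK = N'D:\Backups\MyDb.bak'
            WITH INIT;
        -- the .bak is complete here; FTP it, then on the destination server:
        RESTORE DATABASE MyDb
            FROM DISK = N'D:\Incoming\MyDb.bak'
            WITH REPLACE;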

  • SSIS - How do I use a resultset as input in a SQL task and get data types right?

    - by thursdaysgeek
    I am trying to merge records from an Oracle database table into my local SQL table. I have a variable for the package that is an Object, called OWell. I have a data flow task that gets the Oracle data with a SQL statement (select well_id, well_name from OWell order by well_id), and then a conversion task to convert well_id from a DT_STR of length 15 to a DT_WSTR, and well_name from a DT_STR of length 15 to a DT_WSTR of length 50. That is then stored in the recordset OWell. The reason for the conversions is that the table I want to add records to has an identity field: SSIS shows well_id as a DT_WSTR of length 15, well_name as a DT_WSTR of length 50.

    I then have a SQL task that connects to the local database and attempts to add records that are not there yet. I've tried various things: using OWell as a result set and referring to it in my SQL statement. Currently, I have the ResultSet set to None, and the following SQL statement:

        Insert into WELL (WELL_ID, WELL_NAME)
        Select OWELL_ID, OWELL_NAME from OWell
        where OWELL_ID not in (select WELL.WELL_ID from WELL)

    For Parameter Mapping, I have Parameter 0, called OWell_ID, from my variable User::OWell, and Parameter 1, called OWell_Name, from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a Result Set. I am getting the following error:

        Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query "Insert into WELL (WELL_ID, WELL_NAME) Select OWELL..." failed with the following error: "An error occurred while extracting the result into a variable of type (DBTYPE_STR)". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

    I don't think it's a data type issue, but rather that I somehow am not using the result set properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add records that are missing?
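
    For what it's worth, an Execute SQL Task cannot treat an ADO Object variable as a table in its SQL text; the usual pattern is a Foreach Loop (ADO enumerator) over User::OWell that maps each row's two fields to string variables and runs a parameterized statement per row. A sketch of that per-row statement with the question's table names and OLE DB ? placeholders (the well id variable would be mapped to both the first and third parameter):

        INSERT INTO WELL (WELL_ID, WELL_NAME)
        SELECT ?, ?
        WHERE NOT EXISTS (SELECT 1 FROM WELL WHERE WELL_ID = ?);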

  • SQL, MVC, Entity Framework

    - by Anthony
    Hi, I'm using the above technologies and have run into what I presume is a design issue I have made. I have an Artwork table in my DB and have been able to add art (I now think of these as digital products) to a shopping cart + CartLine table fine. The system I have that adds art to galleries and user accounts etc. works fine. Now the client wants to sell T-shirts, mugs and pens etc., 'HardwareProducts', so I have created a 'HardwareProducts' table. Now I have two different product types in two tables.

    I use GUIDs as the PKs in both the HardwareProducts table and the Artwork table. When a customer adds an item to their cart, I store the GUID in the ProductID column in the CartItems table. The issue is the database will not know which table to reference when I bring the LineItem object up through my ORM to the front end. In OOP I can see how you would have a base class of Product, with a DigitalProduct class and a HardwareProduct class derived from it, but how do you model this in SQL Server and the Entity Framework, please? Or is there another way?
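
    One relational shape for that inheritance is table-per-type: a base Product table owns the shared key, each subtype table shares that key via a foreign key, and the cart references Product regardless of subtype (the Entity Framework can map this as TPT inheritance). A sketch with hypothetical column names:

        CREATE TABLE Product (
            ProductID   UNIQUEIDENTIFIER PRIMARY KEY,
            ProductType TINYINT NOT NULL   -- 1 = digital, 2 = hardware
        );

        CREATE TABLE Artwork (
            ProductID UNIQUEIDENTIFIER PRIMARY KEY
                REFERENCES Product (ProductID),
            Title NVARCHAR(200) NOT NULL   -- plus the existing artwork columns
        );

        CREATE TABLE HardwareProduct (
            ProductID UNIQUEIDENTIFIER PRIMARY KEY
                REFERENCES Product (ProductID),
            Sku NVARCHAR(50) NOT NULL      -- plus the hardware columns
        );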

  • What is the scope of TRANSACTION in SQL Server

    - by Shantanu Gupta
    I was creating a stored procedure and I got stuck on a difference between my writing style and my colleague's. I am using SQL Server 2005. I was writing the stored procedure like this:

        BEGIN TRAN
        BEGIN TRY
            INSERT INTO Tags.tblTopic (Topic, TopicCode, Description)
            VALUES(@Topic, @TopicCode, @Description)

            INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId)
            VALUES(@SubjectId, @@IDENTITY)

            COMMIT TRAN
        END TRY
        BEGIN CATCH
            DECLARE @Error VARCHAR(1000)
            SET @Error = 'ERROR NO : ' + ERROR_NUMBER() + ', LINE NO : ' + ERROR_LINE() + ', ERROR MESSAGE : ' + ERROR_MESSAGE()
            PRINT @Error
            ROLLBACK TRAN
        END CATCH

    And my colleague was writing it like the one below:

        BEGIN TRY
            BEGIN TRAN
            INSERT INTO Tags.tblTopic (Topic, TopicCode, Description)
            VALUES(@Topic, @TopicCode, @Description)

            INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId)
            VALUES(@SubjectId, @@IDENTITY)

            COMMIT TRAN
        END TRY
        BEGIN CATCH
            DECLARE @Error VARCHAR(1000)
            SET @Error = 'ERROR NO : ' + ERROR_NUMBER() + ', LINE NO : ' + ERROR_LINE() + ', ERROR MESSAGE : ' + ERROR_MESSAGE()
            PRINT @Error
            ROLLBACK TRAN
        END CATCH

    The only difference you will find is the position of BEGIN TRAN. According to me, my colleague's version should not work when an exception occurs, i.e. the ROLLBACK should not get executed, because the TRAN doesn't have scope. But when I tried to run both versions, both worked the same way. I am confused about how TRANSACTION works. Is it scope-free, or what?
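
    A small demonstration of why both behave the same, resting on the fact that a transaction belongs to the session, not to the lexical block that opened it:

        BEGIN TRY
            BEGIN TRAN
            SELECT 1 / 0;          -- force an error inside the TRY block
        END TRY
        BEGIN CATCH
            PRINT @@TRANCOUNT;     -- prints 1: the transaction is still open here
            ROLLBACK TRAN;         -- and can be rolled back from the CATCH block
        END CATCH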

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY. There is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance":

        1. Drop the PK and clustered index
        2. Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT
        3. Recreate the PK, with a NON-CLUSTERED index
        4. Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT

    I am not convinced that this is the right thing to do. I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and the new indexing scheme, copying the data over, dropping the old table and renaming the new one be a better approach?

    I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj
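
    A sketch of the copy-and-swap alternative mentioned above, with hypothetical table and column names (SQL Server 2000 syntax):

        CREATE TABLE dbo.BigTable_New (
            Id   INT IDENTITY NOT NULL,
            KeyA INT NOT NULL,
            KeyB INT NOT NULL,
            CONSTRAINT PK_BigTable_New PRIMARY KEY NONCLUSTERED (Id)
        )
        CREATE UNIQUE CLUSTERED INDEX IX_BigTable_New ON dbo.BigTable_New (KeyA, KeyB)

        SET IDENTITY_INSERT dbo.BigTable_New ON
        INSERT INTO dbo.BigTable_New (Id, KeyA, KeyB)
            SELECT Id, KeyA, KeyB FROM dbo.BigTable
        SET IDENTITY_INSERT dbo.BigTable_New OFF

        EXEC sp_rename 'dbo.BigTable', 'BigTable_Old'
        EXEC sp_rename 'dbo.BigTable_New', 'BigTable'

    Loading straight into the clustered index avoids rebuilding a 478-million-row heap in place, though it does need disk space for two copies of the data while the swap happens.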

  • Conversion failed when converting the varchar value to int

    - by onedaywhen
    Microsoft SQL Server 2008 (SP1), getting an unexpected 'Conversion failed' error. Not quite sure how to describe this problem, so below is a simple example. The CTE extracts the numeric portion of certain IDs using a search condition to ensure a numeric portion actually exists. The CTE is then used to find the lowest unused sequence number (kind of):

        CREATE TABLE IDs (ID CHAR(3) NOT NULL UNIQUE);
        INSERT INTO IDs (ID) VALUES ('A01'), ('A02'), ('A04'), ('ERR');

        WITH ValidIDs (ID, seq) AS (
            SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER)
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (
            SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1
        );

    The error is, 'Conversion failed when converting the varchar value 'RR' to data type int.' I can't understand why the value ID = 'ERR' should be being considered for conversion, because the predicate ID LIKE 'A[0-9][0-9]' should have removed the invalid row from the resultset. When the base table is substituted with an equivalent CTE the problem goes away, i.e.

        WITH IDs (ID) AS (
            SELECT 'A01' UNION ALL SELECT 'A02' UNION ALL SELECT 'A04' UNION ALL SELECT 'ERR'
        ),
        ValidIDs (ID, seq) AS (
            SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER)
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (
            SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1
        );

    Why would a base table cause this error? Is this a known issue?

    UPDATE @sgmoore: no, doing the filtering in one CTE and the casting in another CTE still results in the same error, e.g.

        WITH FilteredIDs (ID) AS (
            SELECT ID FROM IDs WHERE ID LIKE 'A[0-9][0-9]'
        ),
        ValidIDs (ID, seq) AS (
            SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER) FROM FilteredIDs
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (
            SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1
        );
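
    Background on this behaviour: SQL Server makes no guarantee that a WHERE predicate is evaluated before expressions in the SELECT list, so the optimizer is free to push the CAST down to rows the LIKE would later reject. The usual guard is to fold the condition into the expression with CASE, which evaluates its WHEN before the matching THEN:

        WITH ValidIDs (ID, seq) AS (
            SELECT ID,
                   CAST(CASE WHEN ID LIKE 'A[0-9][0-9]'
                             THEN RIGHT(ID, 2) END AS INTEGER)
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (
            SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1
        );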

  • Something confusing about FORMSOF (SQL Server full-text searching)

    - by AspOnMyNet
    Hi, I'm using SQL Server 2008.

    1) "A given <simple_term> within a <generation_term> will not match both nouns and verbs."

    If I understand the above text correctly, then the query

        SELECT * FROM someTable WHERE CONTAINS ( *, ' FORMSOF ( INFLECTIONAL, park ) ' )

    should search for either nouns or verbs derived from the root word "park", but not for both? Thus, out of two rows, one containing the noun parks and the other the verb parking, the above query should return just one of the two rows? But as it turns out, the query returns both rows, so are my assumptions a bit off, or is the above quote wrong?

    2) From MSDN: "If freetext_string is enclosed in double quotation marks, a phrase match is instead performed; stemming and thesaurus are not performed."

    According to the above quote, the following query shouldn't return rows containing the strings surfing (due to the query not performing stemming), surf (due to the query performing phrase matching and not individual word matching), or surfing with suzy's sister (due to the query not performing stemming and performing phrase matching rather than word matching), but it does. Thus, it appears that even when freetext_string is enclosed in double quotation marks, stemming is still performed, while phrase matching is not:

        SELECT * FROM someTable WHERE FREETEXT( *, ' "surf sister" ' )

    So is the above quote wrong, or...? Thanks

  • SQL select row into a string variable without knowing columns

    - by Brandi
    Hello, I am new to writing SQL and would greatly appreciate help on this problem. :) I am trying to select an entire row into a string, preferably separated by a space or a comma. I would like to accomplish this in a generic way, without having to know specifics about the columns in the tables. What I would love to do is this:

        DECLARE @MyStringVar NVARCHAR(MAX) = ''
        @MyStringVar = SELECT * FROM MyTable WHERE ID = @ID AS STRING

    But what I ended up doing was this:

        DECLARE @MyStringVar NVARCHAR(MAX) = ''
        DECLARE @SpecificField1 INT
        DECLARE @SpecificField2 NVARCHAR(255)
        DECLARE @SpecificField3 NVARCHAR(1000)
        ...
        SELECT @SpecificField1 = Field1, @SpecificField2 = Field2, @SpecificField3 = Field3
        FROM MyTable WHERE ID = @ID

        SELECT @MyStringVar = @MyStringVar + CONVERT(nvarchar(10), @SpecificField1) + ' ' + @SpecificField2 + ' ' + @SpecificField3

    Yuck. :( I have seen some people post stuff about the COALESCE function, but again, I haven't seen anyone use it without specific column names. Also, I was thinking, perhaps there is a way to use the column names dynamically by getting them with:

        SELECT [COLUMN_NAME] FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'MyTable'

    It really doesn't seem like this should be so complicated. :( What I did works for now, but thanks ahead of time to anyone who can point me to a better solution. :)

    EDIT: Got it fixed, thanks to everyone who answered. :)
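
    One generic approach, sketched with the question's own names (SQL Server 2005+): build the concatenation from column metadata, then run it with dynamic SQL. NULLs and non-string types are handled crudely here via ISNULL/CONVERT:

        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX), @row NVARCHAR(MAX);
        SELECT @cols = STUFF((
            SELECT ' + '' '' + ISNULL(CONVERT(NVARCHAR(MAX), ' + QUOTENAME(COLUMN_NAME) + '), '''')'
            FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_NAME = 'MyTable'
            ORDER BY ORDINAL_POSITION
            FOR XML PATH('')), 1, 9, '');
        SET @sql = N'SELECT @row = ' + @cols + N' FROM MyTable WHERE ID = @ID';
        EXEC sp_executesql @sql,
             N'@row NVARCHAR(MAX) OUTPUT, @ID INT',
             @row = @row OUTPUT, @ID = @ID;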

  • Modify SQL result set before returning from stored procedure

    - by m0sa
    I have a simple table in my SQL Server 2008 DB:

        Tasks_Table
        - id
        - task_complete
        - task_active
        - column_1
        - ..
        - column_N

    The table stores instructions for uncompleted tasks that have to be executed by a service. I want to be able to scale my system in the future. Until now, only one service on one computer read from the table. I have a stored procedure that selects all uncompleted and inactive tasks. As the service begins to process tasks, it updates the task_active flag in all the returned rows.

    To enable scaling of the system, I want to enable deployment of the service on more machines. Because I want to prevent a task being returned to more than one service, I have to update the stored procedure that returns uncompleted and inactive tasks. I figured that I have to lock the table (only one reader at a time - I know I have to use an appropriate ISOLATION LEVEL) and update the task_active flag in each row of the result set before returning the result set. So my question is: how do I modify the SELECT result set in the stored procedure before returning it?
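
    A minimal sketch of one standard pattern for this (SQL Server 2005+), using the question's column names: UPDATE ... OUTPUT claims the rows and returns them in a single atomic statement, and READPAST lets a second service instance skip rows the first one has locked rather than queue behind them:

        UPDATE TOP (10) t
        SET task_active = 1
        OUTPUT inserted.id, inserted.column_1   -- plus whatever columns the service needs
        FROM Tasks_Table AS t WITH (ROWLOCK, READPAST)
        WHERE t.task_complete = 0
          AND t.task_active = 0;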

  • Java w/ SQL Server Express 2008 - Index out of range exception

    - by BS_C3
    Hi! I created a stored procedure in a SQL Express 2008 DB, and I'm getting the following error when calling the procedure from a Java method: Index 36 is out of range.

        com.microsoft.sqlserver.jdbc.SQLServerException: Index 36 is out of range.
        at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setterGetParam(SQLServerPreparedStatement.java:698)
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setValue(SQLServerPreparedStatement.java:707)
        at com.microsoft.sqlserver.jdbc.SQLServerCallableStatement.setString(SQLServerCallableStatement.java:1504)
        at fr.alti.ccm.middleware.Reporting.initReporting(Reporting.java:227)
        at fr.alti.ccm.middleware.Reporting.main(Reporting.java:396)

    I cannot figure out where it is coming from... _< Any help would be appreciated. Regards, BS_C3. Here's some source code:

        public ArrayList<ReportingTableMapping> initReporting(
                String division, String shop, String startDate, String endDate) {
            ArrayList<ReportingTableMapping> rTable = new ArrayList<ReportingTableMapping>();
            ManagerDB db = new ManagerDB();
            CallableStatement callStmt = null;
            ResultSet rs = null;
            try {
                callStmt = db.getConnexion().prepareCall("{call getInfoReporting(?,...,?)}");
                callStmt.setString("CODE_DIVISION", division);
                .
                .
                .
                callStmt.setString("cancelled", " ");
                rs = callStmt.executeQuery();
                while (rs.next()) {
                    ReportingTableMapping rtm = new ReportingTableMapping(
                            rs.getString("werks"),
                            ...
                    );
                    rTable.add(rtm);
                }
                rs.close();
                callStmt.close();
            } catch (Exception e) {
                System.out.println(e.getMessage());
                e.printStackTrace();
            } finally {
                if (rs != null) try { rs.close(); } catch (Exception e) { }
                if (callStmt != null) try { callStmt.close(); } catch (Exception e) { }
                if (db.getConnexion() != null) try { db.getConnexion().close(); } catch (Exception e) { }
            }
            return rTable;
        }

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can.

    Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server.

    The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this:

        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL' WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work.

    My Question (TL;DR): How can I shrink my transaction log file without disabling the mirror?
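
    A note on why the snippet above fails: a mirrored database must run in the FULL recovery model, so the log can only be emptied by genuine log backups (and BACKUP LOG ... TO DISK = 'NULL' just writes a backup file literally named NULL; it is not a discard device). A minimal sketch that works with the mirror still running (path hypothetical):

        BACKUP LOG dbname TO DISK = N'D:\Backups\dbname_log.trn' WITH INIT;
        DBCC SHRINKFILE (N'dbname_Log', 2048);  -- target size, in MB

    Scheduling that log backup regularly is what actually stops the file from growing; the shrink is a one-time correction.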

  • Unable to add FromName to e-mail using cdosys in SQL Server 2008

    - by Alex Andronov
    I have a piece of CDOSYS code which runs correctly and generates e-mail, with my SQL Server 2008 server talking to an MS Exchange 2003 server. However, the from name is not appearing on the e-mails when they arrive. Is there a fault in the code, or is it not possible this way? Thanks in advance.

        create procedure usp_send_cdosysmail
            @from varchar(500),
            @to text,
            @bcc text,
            @subject varchar(1000),
            @body text,
            @smtpserver varchar(25),
            @bodytype varchar(10)
        as
        declare @imsg int
        declare @hr int
        declare @source varchar(255)
        declare @description varchar(500)
        declare @output varchar(8000)

        exec @hr = sp_oacreate 'cdo.message', @imsg out
        exec @hr = sp_oasetproperty @imsg, 'configuration.fields("http://schemas.microsoft.com/cdo/configuration/sendusing").value', '2'
        exec @hr = sp_oasetproperty @imsg, 'configuration.fields("http://schemas.microsoft.com/cdo/configuration/smtpserver").value', @smtpserver
        exec @hr = sp_oamethod @imsg, 'configuration.fields.update', null
        exec @hr = sp_oasetproperty @imsg, 'to', @to
        exec @hr = sp_oasetproperty @imsg, 'bcc', @bcc
        exec @hr = sp_oasetproperty @imsg, 'from', @from
        exec @hr = sp_oasetproperty @imsg, 'fromname', 'A From Name'
        exec @hr = sp_oasetproperty @imsg, 'subject', @subject
        -- if you are using html e-mail, use 'htmlbody' instead of 'textbody'.
        exec @hr = sp_oasetproperty @imsg, @bodytype, @body
        exec @hr = sp_oamethod @imsg, 'send', null

        -- sample error handling.
        if @hr <> 0
        begin
            select @hr
            exec @hr = sp_oageterrorinfo null, @source out, @description out
            if @hr = 0
            begin
                select @output = ' source: ' + @source
                print @output
                select @output = ' description: ' + @description
                print @output
            end
            else
            begin
                print ' sp_oageterrorinfo failed.'
                return
            end
        end
        exec @hr = sp_oadestroy @imsg
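
    For what it's worth, CDO.Message does not appear to expose a working 'fromname' property; the conventional fix is to embed the display name in the From address itself, so the 'fromname' line would be dropped and @from passed in this format (address hypothetical):

        exec @hr = sp_oasetproperty @imsg, 'from', '"A From Name" <alerts@example.com>'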

  • In SQL can I return a table with a varying number of columns

    - by Matt
    I have a somewhat complicated scenario, but I think it should be possible. I have a large sproc whose result is a set of characteristics for a set of persons, so the table would look something like this:

        Property | Client1 | Client2 | Client3
        ---------+---------+---------+--------
        Sex      | M       | F       | M
        Age      | 67      | 56      | 67
        Income   | Low     | Mid     | Low

    It's built using cursors, iterating over different datasets. The problem I am facing is that there is a varying number of clients and properties, so an equally valid result over a different input set might be:

        Property | Client1 | Client2
        ---------+---------+--------
        Sex      | M       | F
        Age      | 67      | 56
        Weight   | 122     | 122

    The varying number of properties is easy; those are just extra rows. My problem is that I need to declare a temporary table with a varying number of columns. There could be 2 clients or 100. Every client is guaranteed to have every property ultimately listed. What SQL structure would satisfy this, and how can I declare it and insert things into it? I can't just flip the columns and rows either, because there is a variable number of each.
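
    A common answer is to keep the data in a fixed three-column shape (Property, Client, Value) and build the variable column list with dynamic SQL plus PIVOT at the end. A sketch, assuming a hypothetical Results(Property, Client, Value) table that the cursor logic fills:

        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);
        SELECT @cols = STUFF((SELECT ',' + QUOTENAME(Client)
                              FROM (SELECT DISTINCT Client FROM Results) AS c
                              FOR XML PATH('')), 1, 1, '');
        SET @sql = N'SELECT Property, ' + @cols + N'
                     FROM Results
                     PIVOT (MAX(Value) FOR Client IN (' + @cols + N')) AS p;';
        EXEC sp_executesql @sql;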

  • mysql syntax error for timestamp

    - by eyecreate
    I have this piece of SQL that is being fed to MySQL:

        CREATE TABLE pn_history(
            member INT,
            action INT,
            with INT,
            timestamp DATETIME,
            details VARCHAR(256)
        )

    But it comes back with a syntax error:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'with INT, timestamp DATETIME, details VARCHAR(256) )' at line 4

    Why is this failing?
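
    The error points at the culprit: WITH is a reserved word in MySQL, so it cannot be used bare as a column name (timestamp is a keyword too, but a non-reserved one, so it is allowed). A sketch of the fix - quote it with backticks, or better, rename it:

        CREATE TABLE pn_history (
            member    INT,
            action    INT,
            `with`    INT,        -- or rename, e.g. with_member
            timestamp DATETIME,
            details   VARCHAR(256)
        );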

  • sql insert statement with a lot of same where clauses and one different where clause

    - by william
    I'm sorry if the title is not clear. Here's my problem: I created a new table which will show total, average and maximum values. I have to insert the results into that table. That table will have only 4 rows: No Appointment, Appointment Early, Appointment Late and Appointment Punctual. So I have something like:

        insert into newTable
        select 'No Appointment' as 'Col1', avg statement, total statement, max statement
        from orgTable
        where (general conditions)
          and (unique condition to check NO APPOINTMENT);

    I have to do the same thing for the other 3 rows, where only the unique condition is different, to check early, punctual or late. So the statement is super long. I want to reduce the size. How can I achieve that?
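
    One way to collapse the four statements into a single pass is to turn the four unique conditions into a CASE label and group on it. A sketch with hypothetical columns (assume appt_time, arrive_time and wait_minutes, with "no appointment" meaning appt_time IS NULL):

        INSERT INTO newTable (Col1, avg_val, total_val, max_val)
        SELECT CASE
                 WHEN appt_time IS NULL        THEN 'No Appointment'
                 WHEN arrive_time < appt_time  THEN 'Appointment Early'
                 WHEN arrive_time > appt_time  THEN 'Appointment Late'
                 ELSE 'Appointment Punctual'
               END,
               AVG(wait_minutes), SUM(wait_minutes), MAX(wait_minutes)
        FROM orgTable
        WHERE (general conditions)
        GROUP BY CASE
                 WHEN appt_time IS NULL        THEN 'No Appointment'
                 WHEN arrive_time < appt_time  THEN 'Appointment Early'
                 WHEN arrive_time > appt_time  THEN 'Appointment Late'
                 ELSE 'Appointment Punctual'
               END;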

  • Indexing table with duplicates MySQL/SQL Server with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns:

        ID  Store_ID  Feature_ID  Order_ID  Viewed_Date  Deal_ID  IsTrial

    The ID is auto-generated. Store_ID goes from 1 to 8. Feature_ID goes from 1 to, let's say, 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion.

    There are millions of rows in the table, and we have a reporting backend that needs to get the number of views in a certain period, or overall, where trial is 0, for a particular store id and a particular feature. The query takes the form of:

        select count(viewed_date)
        from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
        and store_id = '2'
        and feature_id = '12'
        and Istrial = 0

    In SQL Server you can have a filtered index to use for IsTrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need a better improvement than this. Right now I have more than 4 million rows. To answer a particular query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows.

    PS: I forgot to add the viewed_date filter in the query. Now I have done this.
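
    MySQL has no filtered indexes, but a composite index that puts the equality columns first and the range column last would let this query be answered from the index alone, rather than from 3.5 million row visits. A sketch:

        CREATE INDEX ix_views_lookup
            ON theTable (store_id, feature_id, istrial, viewed_date);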

  • SQL Server query

    - by carrot_programmer_3
    Hi, I have a SQL Server DB containing a registrations table that I need to plot on a graph over time. The issue is that I need to break this down by where the user registered from (e.g. website, WAP site, or a mobile application). The resulting output data should look like this:

        [date]      [num_reg_website] [num_reg_wap_site] [num_reg_mobileapp]
        1 FEB 2010  24                35                 64
        2 FEB 2010  23                85                 48
        3 FEB 2010  29                37                 79
        etc...

    The source table is as follows:

        UUID (int), signupdate (datetime), requestsource (varchar(50))

    Some sample data in this table looks like this:

        1001, 2010-02-02 00:12:12, 'website'
        1002, 2010-02-02 00:10:17, 'app'
        1003, 2010-02-03 00:14:19, 'website'
        1004, 2010-02-04 00:16:18, 'wap'
        1005, 2010-02-04 00:18:16, 'website'

    Running the following query returns one data column, 'total registrations', for the website registrations, but I'm not sure how to do this for multiple columns, unfortunately:

        select CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME) as [signupdate],
               count(UUID) as 'total registrations'
        FROM [UserRegistrationRequests]
        WHERE requestsource = 'website'
        group by CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME)
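
    A sketch of the usual answer - conditional aggregation, reusing the question's own day-truncation expression so each source becomes its own column in a single grouped pass:

        SELECT CAST(FLOOR(CAST(signupdate AS FLOAT)) AS DATETIME) AS [date],
               SUM(CASE WHEN requestsource = 'website' THEN 1 ELSE 0 END) AS num_reg_website,
               SUM(CASE WHEN requestsource = 'wap'     THEN 1 ELSE 0 END) AS num_reg_wap_site,
               SUM(CASE WHEN requestsource = 'app'     THEN 1 ELSE 0 END) AS num_reg_mobileapp
        FROM UserRegistrationRequests
        GROUP BY CAST(FLOOR(CAST(signupdate AS FLOAT)) AS DATETIME)
        ORDER BY 1;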

  • What is preferred method for searching table data using stored procedure?

    - by Mourya
    I have a customer table with Cust_Id, Name and City, and search is based upon any or all of the above three. Which one should I go for?

    Dynamic SQL:

        declare @str varchar(1000)
        set @str = 'Select [Sno],[Cust_Id],[Name],[City],[Country],[State] from Customer where 1 = 1'

        if (@Cust_Id != '')
            set @str = @str + ' and Cust_Id = ''' + @Cust_Id + ''''
        if (@Name != '')
            set @str = @str + ' and Name like ''' + @Name + '%'''
        if (@City != '')
            set @str = @str + ' and City like ''' + @City + '%'''

        exec (@str)

    Simple query:

        select [Sno],[Cust_Id],[Name],[City],[Country],[State]
        from Customer
        where (@Cust_Id = '' or Cust_Id = @Cust_Id)
          and (@Name = '' or Name like @Name + '%')
          and (@City = '' or City like @City + '%')

    Which one should I prefer (1 or 2), and what are the advantages?

    After going through everyone's suggestions, here is what I finally got:

        DECLARE @str NVARCHAR(1000)
        DECLARE @ParametersDefinition NVARCHAR(500)
        SET @ParametersDefinition = N'@InnerCust_Id varchar(10), @InnerName varchar(30), @InnerCity varchar(30)'
        SET @str = 'Select [Sno],[Cust_Id],[Name],[City],[Country],[State] from Customer where 1 = 1'

        IF(@Cust_Id != '')
            SET @str = @str + ' and Cust_Id = @InnerCust_Id'
        IF(@Name != '')
            SET @str = @str + ' and Name like @InnerName'
        IF(@City != '')
            SET @str = @str + ' and City like @InnerCity'

        -- Add the % symbol for searches based upon the LIKE keyword
        SELECT @Name = @Name + '%', @City = @City + '%'

        EXEC sp_executesql @str, @ParametersDefinition,
             @InnerCust_Id = @Cust_Id, @InnerName = @Name, @InnerCity = @City;

    References:
        http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/changing-exec-to-sp_executesql-doesn-t-p
        http://msdn.microsoft.com/en-us/library/ms175170.aspx

  • SQL Server: Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of:

        1. Performance: Is there an advantage to using a smaller n when selecting, filtering and sorting on the data?
        2. Memory, including on the application side (C++)?
        3. Style/validation: How important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)?
        4. Anything else?

    Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practices" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data where real maximum sizes will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

  • SQL ADO.NET shortcut extensions (old school!)

    - by Jeff
    As much as I love me some ORMs (I've used LINQ to SQL quite a bit, and for the MSDN/TechNet Profile and Forums we're using NHibernate more and more), there are times when it's appropriate, and in some ways more simple, to just throw up some old school ADO.NET connections, commands, readers and such. It still feels like a pain, though, to new up all the stuff and make sure it's closed, blah blah blah. It's pretty much the least favorite task of writing data access code. To minimize the pain, I have a set of extension methods that I like to use that drastically reduce the code you have to write. Here they are...

        public static void Using(this SqlConnection connection, Action<SqlConnection> action)
        {
            connection.Open();
            action(connection);
            connection.Close();
        }

        public static SqlCommand Command(this SqlConnection connection, string sql)
        {
            var command = new SqlCommand(sql, connection);
            return command;
        }

        public static SqlCommand AddParameter(this SqlCommand command, string parameterName, object value)
        {
            command.Parameters.AddWithValue(parameterName, value);
            return command;
        }

        public static object ExecuteAndReturnIdentity(this SqlCommand command)
        {
            if (command.Connection == null)
                throw new Exception("SqlCommand has no connection.");
            command.ExecuteNonQuery();
            command.Parameters.Clear();
            command.CommandText = "SELECT @@IDENTITY";
            var result = command.ExecuteScalar();
            return result;
        }

        public static SqlDataReader ReadOne(this SqlDataReader reader, Action<SqlDataReader> action)
        {
            if (reader.Read())
                action(reader);
            reader.Close();
            return reader;
        }

        public static SqlDataReader ReadAll(this SqlDataReader reader, Action<SqlDataReader> action)
        {
            while (reader.Read())
                action(reader);
            reader.Close();
            return reader;
        }

    It has been awhile since I've really revisited these, so you will likely find opportunity for further optimization. The bottom line here is that you can chain together a bunch of these methods to make a much more concise database call, in terms of the code on your screen, anyway. Here are some examples:

        public Dictionary<string, string> Get()
        {
            var dictionary = new Dictionary<string, string>();
            _sqlHelper.GetConnection().Using(connection =>
                connection.Command("SELECT Setting, [Value] FROM Settings")
                    .ExecuteReader()
                    .ReadAll(r => dictionary.Add(r.GetString(0), r.GetString(1))));
            return dictionary;
        }

    or...

        public void ChangeName(User user, string newName)
        {
            _sqlHelper.GetConnection().Using(connection =>
                connection.Command("UPDATE Users SET Name = @Name WHERE UserID = @UserID")
                    .AddParameter("@Name", newName)
                    .AddParameter("@UserID", user.UserID)
                    .ExecuteNonQuery());
        }

    The _sqlHelper.GetConnection() is just some other code that gets a connection object for you. You might have an even cleaner way to take that step out entirely. This looks more fluent, and the real magic sauce for me is the reader bits, where you can put any kind of arbitrary method in there to iterate over the results.
