Search Results

Search found 34476 results on 1380 pages for 'sql blog'.


  • Is there a command to test an SQL query without executing it? (MySQL or ANSI SQL)

    - by Petruza
    Is there anything like this: TEST DELETE FROM user WHERE somekey = 45; that is, something that would return any errors (for example that somekey doesn't exist, or a constraint violation), and report how many rows would be affected, but without executing the query? I know you can easily turn any query into a SELECT that writes or deletes nothing, but that rewriting can itself introduce errors and it's not very practical if you want to test and debug many queries.
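
    One workaround, as a minimal sketch (assuming MySQL with a transactional engine such as InnoDB; the table and column names are taken from the question): run the statement inside a transaction and roll it back, so errors and the affected-row count surface without anything being persisted.

        START TRANSACTION;
        -- The client reports "Query OK, N rows affected"; any error (unknown
        -- column, constraint violation, ...) is raised at this point.
        DELETE FROM user WHERE somekey = 45;
        -- Undo the change so nothing is actually applied to the data.
        ROLLBACK;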

    Read the article

  • using NEWSEQUENTIALID() with UPDATE Trigger

    - by Ram
    I am adding a new GUID/uniqueidentifier column to my table:

        ALTER TABLE table_name
        ADD VersionNumber UNIQUEIDENTIFIER UNIQUE NOT NULL DEFAULT NEWSEQUENTIALID()
        GO

    Whenever a record in the table is updated, I want this "VersionNumber" column to be updated as well, so I created a new trigger:

        CREATE TRIGGER [DBO].[TR_TABLE_NAME] ON [DBO].[TABLE_NAME]
        AFTER UPDATE
        AS
        BEGIN
            UPDATE TABLE_NAME
            SET VERSIONNUMBER = NEWSEQUENTIALID()
            FROM TABLE_NAME D
            JOIN INSERTED I ON D.ID = I.ID /* some ID which is used to join */
        END
        GO

    But I just realized that NEWSEQUENTIALID() can only be used with CREATE TABLE or ALTER TABLE. I got this error: "The newsequentialid() built-in function can only be used in a DEFAULT expression for a column of type 'uniqueidentifier' in a CREATE TABLE or ALTER TABLE statement. It cannot be combined with other operators to form a complex scalar expression." Is there a workaround for this? Edit 1: Changing NEWSEQUENTIALID() to NEWID() in the trigger solves this, but I am indexing this column and using NEWID() would be sub-optimal.
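
    One possible workaround, as a hedged sketch rather than the asker's code: build a "COMB"-style GUID from NEWID() plus the current time inside the trigger. Each row still gets a unique value, and because SQL Server compares uniqueidentifier values starting with their last six bytes, placing the timestamp there keeps new values roughly sequential for the index.

        CREATE TRIGGER [DBO].[TR_TABLE_NAME] ON [DBO].[TABLE_NAME]
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE D
            SET VersionNumber = CAST(CAST(NEWID()   AS BINARY(10))
                                   + CAST(GETDATE() AS BINARY(6)) AS UNIQUEIDENTIFIER)
            FROM TABLE_NAME D
            JOIN INSERTED I ON D.ID = I.ID;  -- same join key as in the question
        END
        GO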

    Read the article

  • SQL return error within PHP

    - by Luke
    I use GET to get the id of a result:

        $id = $_GET['id'];

    I then use the following code:

        <?
        $q = $database->friendlyDetails($id);
        while ($row = mysql_fetch_assoc($q)) {
            $hu     = $row['home_user'];
            $ht     = $row['home_team'];
            $hs     = $row['home_score'];
            $au     = $row['away_user'];
            $at     = $row['away_team'];
            $as     = $row['away_score'];
            $game   = $row['game'];
            $name   = $row['name'];
            $match  = $row['match_report1'];
            $compid = $row['compid'];
            $date   = $row['date_submitted'];
            $sub    = $row['user_submitted'];
        }
        ?>

    And friendlyDetails():

        function friendlyDetails($i) {
            $q = "SELECT * FROM ".TBL_SUB_RESULTS." INNER JOIN ".TBL_FRIENDLY." ON ".TBL_FRIENDLY.".id = ".TBL_SUB_RESULTS.".compid WHERE ".TBL_SUB_RESULTS.".id = '$i'";
            return mysql_query($q, $this->connection);
        }

    For some reason, the code will only return what is under id = 1. Can anyone see anything obvious I am doing wrong?
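
    A debugging sketch (the concrete table names below stand in for the TBL_SUB_RESULTS and TBL_FRIENDLY constants and are assumptions): echo the generated SQL and run it directly in MySQL with an id other than 1, to see whether the join itself only matches that row or whether the wrong $id is reaching the function.

        -- run in the mysql client with a known id, e.g. 2
        SELECT *
        FROM sub_results
        INNER JOIN friendly ON friendly.id = sub_results.compid
        WHERE sub_results.id = '2';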

    Read the article

  • Difficulties with an SQL query

    - by João Madureira Pires
    I have the following tables:

        TableA (id, tableB_id, tableC_id)
        TableB (id, expirationDate)
        TableC (id, expirationDate)

    I want to retrieve all the results from TableA ordered by tableB.expirationDate and tableC.expirationDate. Thanks.
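
    A minimal sketch, assuming tableB_id and tableC_id are foreign keys to TableB.id and TableC.id respectively:

        SELECT a.*
        FROM TableA a
        JOIN TableB b ON b.id = a.tableB_id
        JOIN TableC c ON c.id = a.tableC_id
        ORDER BY b.expirationDate, c.expirationDate;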

    Read the article

  • How can I write the following query to get the needed information?

    - by Night Walker
    Hello there. I have two tables: CompList, with the columns CompId, McID, station, slot, subslot and several others, and BookingTable, with the columns CompId, LineID, McID, station, slot, subslot. I want to get only the rows where CompList.CompId == BookingTable.CompId (only CompId values that are in both tables). In the result I need the columns CompId, McID, station, slot, subslot from CompList, and LineID, McID, station, slot, subslot from BookingTable. And how will I be able to distinguish between the columns that have the same name in both tables in the result? Thanks for help.
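
    A hedged sketch of one way to do this: an INNER JOIN returns only the CompId values present in both tables, and column aliases distinguish the same-named columns in the result set.

        SELECT c.CompId,
               c.McID    AS CompList_McID,    b.McID    AS Booking_McID,
               c.station AS CompList_station, b.station AS Booking_station,
               c.slot    AS CompList_slot,    b.slot    AS Booking_slot,
               c.subslot AS CompList_subslot, b.subslot AS Booking_subslot,
               b.LineID
        FROM CompList c
        INNER JOIN BookingTable b ON b.CompId = c.CompId;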

    Read the article

  • SQL: how to avoid duplicate inserts into a table

    - by user1624531
    How do I avoid duplicate inserts into a table? I use the query below to insert into the table:

        INSERT INTO RefundDetails (ID, StatusModified, RefundAmount, OrderNumber)
        SELECT O.Id, O.StatusModified, OI.RefundAmount, O.OrderNumber
        FROM Monsoon.dbo.[Order] AS O WITH (NOLOCK)
        JOIN Monsoon.dbo.OrderItem AS OI WITH (NOLOCK) ON O.Id = OI.OrderId
        WHERE O.Id IN (SELECT OrderID
                       FROM Mon2QB.dbo.monQB_OrderActivityView
                       WHERE ACTIVITYTYPE = 4
                         AND at BETWEEN '10/30/2012' AND '11/3/2012')
          AND (O.StatusModified < '11/3/2012')
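
    A hedged sketch of one approach: guard the insert with NOT EXISTS on the target table's key (assumed here to be RefundDetails.ID), so rows that are already present are skipped on re-runs.

        INSERT INTO RefundDetails (ID, StatusModified, RefundAmount, OrderNumber)
        SELECT O.Id, O.StatusModified, OI.RefundAmount, O.OrderNumber
        FROM Monsoon.dbo.[Order] AS O
        JOIN Monsoon.dbo.OrderItem AS OI ON O.Id = OI.OrderId
        WHERE O.Id IN (SELECT OrderID
                       FROM Mon2QB.dbo.monQB_OrderActivityView
                       WHERE ACTIVITYTYPE = 4
                         AND at BETWEEN '10/30/2012' AND '11/3/2012')
          AND O.StatusModified < '11/3/2012'
          -- skip any order whose refund row already exists
          AND NOT EXISTS (SELECT 1 FROM RefundDetails R WHERE R.ID = O.Id);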

    Read the article

  • LINQ to SQL left outer joins

    - by César
    Is this query equivalent to a LEFT OUTER JOIN?

        var rows = from a in query
                   join s in context.ViewSiteinAdvise on a.Id equals s.SiteInAdviseId
                   where a.Order == s.Order
                   select new {....};

    I tried this, but it did not give the result I need:

        from s in ViewSiteinAdvise
        join q in query on s.SiteInAdviseId equals q.Id into sa
        from a in sa.DefaultIfEmpty()
        where s.Order == a.Order
        select new {s, a}

    I need all columns from the view.

    Read the article

  • Oracle SQL: Query results from previous X ISO weeks (where X might be > 52)

    - by tommy-o-dell
    How could I adapt this query to show the previous 61 weeks (still excluding the current week)? My query currently shows me the total weekly sales for 2010 grouped by ISO week and ISO year (excluding the current week):

        select to_char(order_date, 'IYYY') as iso_year,
               to_char(order_date, 'IW')   as iso_week,
               sum(sale_amount)
        from orders
        where to_char(order_date, 'IW') <> to_char(SYSDATE, 'IW')
          and to_char(order_date, 'IYYY') = 2010
        group by to_char(order_date, 'IYYY'), to_char(order_date, 'IW')

    I realize I could probably just omit the "2010" requirement, order by desc and limit the results to a certain number of rows. But that just doesn't seem right! Much appreciate any help pointing me in the right direction!
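
    A hedged sketch of one adaptation: compare order_date against the start of the current ISO week (in Oracle, TRUNC(SYSDATE, 'IW') is the Monday of the current ISO week), so the window can span a year boundary and no IYYY = 2010 filter is needed.

        select to_char(order_date, 'IYYY') as iso_year,
               to_char(order_date, 'IW')   as iso_week,
               sum(sale_amount)            as weekly_sales
        from orders
        where order_date >= trunc(sysdate, 'IW') - (61 * 7)  -- previous 61 ISO weeks
          and order_date <  trunc(sysdate, 'IW')             -- excludes the current week
        group by to_char(order_date, 'IYYY'), to_char(order_date, 'IW')
        order by iso_year, iso_week;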

    Read the article

  • Is INT the correct datatype for ABS(CHECKSUM(NEWID()))?

    - by Chad Sellers
    I'm in the process of creating unique customer IDs to serve as alternative IDs for external use. I've added a new column "cust_uid" with datatype INT for my unique IDs, and when I do an INSERT into this new column:

        INSERT INTO Customers (cust_uid)
        SELECT ABS(CHECKSUM(NEWID()))

    I get an error: Could not create an acceptable cursor. OLE DB provider "SQLNCLI" for linked server "SHQ2IIS1" returned message "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done." I've checked all data types on both tables and the only thing that has changed is the new column in both tables. The update is being done on one big @$$ table... and for reasons above my pay grade, we would like to have new UIDs that are different from the ones we currently have, "so users don't know how many accounts we actually have." Is INT the correct datatype for ABS(CHECKSUM(NEWID()))?
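
    A quick check, as a hedged sketch: CHECKSUM() returns an int, so ABS(CHECKSUM(NEWID())) also fits in INT; the reported cursor message looks like a linked-server issue rather than a datatype one. One edge case worth noting: ABS() raises an arithmetic overflow for the single value -2147483648, so some people wrap the expression or use BIGINT.

        -- Confirm the base type of the expression (SQL Server).
        SELECT ABS(CHECKSUM(NEWID())) AS candidate_uid,
               SQL_VARIANT_PROPERTY(ABS(CHECKSUM(NEWID())), 'BaseType') AS base_type;  -- 'int'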

    Read the article

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id]        [bigint] IDENTITY(1,1) NOT NULL,
            [userid]    [int] NULL,
            [variable]  [varchar](8) NULL,
            [value]     [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    Where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this):

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                       GROUP BY variable)

    Which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is, how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance. NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) to using the NOLOCK directive in this case.
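
    A hedged alternative sketch (SQL Server 2005 or later assumed): rank each user's responses per variable with ROW_NUMBER() and keep only the newest row, which avoids the correlated MAX(id) subquery and typically pairs well with an index on (userid, variable, id) that includes value.

        SELECT id, variable, value
        FROM (
            SELECT id, variable, value,
                   ROW_NUMBER() OVER (PARTITION BY variable ORDER BY id DESC) AS rn
            FROM results WITH (NOLOCK)
            WHERE userid = 2111846
              AND variable IN ('internat', 'veteran', 'athlete')
        ) latest
        WHERE rn = 1;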

    Read the article

  • SQL Server Querying An XML Field

    - by Gavin Draper
    I have a table that contains some meta data in an XML field. For example:

        <Meta>
          <From>[email protected]</From>
          <To>
            <Address>[email protected]</Address>
            <Address>[email protected]</Address>
          </To>
          <Subject>ESubject Goes Here</Subject>
        </Meta>

    I want to then be able to query this field to return the following results:

        From               To                 Subject
        [email protected]    [email protected]    Subject Goes Here
        [email protected]    [email protected]    Subject Goes Here

    I've written the following query:

        SELECT MetaData.query('data(/Meta/From)')       AS [From],
               MetaData.query('data(/Meta/To/Address)') AS [To],
               MetaData.query('data(/Meta/Subject)')    AS [Subject]
        FROM Documents

    However this only returns one record for that XML field; it combines the two addresses into one result. Is it possible to split these onto separate records? The result I'm getting is:

        From               To                                    Subject
        [email protected]    [email protected] [email protected]    Subject Goes Here

    Thanks, Gav
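
    A hedged sketch of one way to do this in SQL Server: CROSS APPLY with the nodes() method emits one row per <Address> element, so each recipient becomes its own record (the varchar lengths are assumptions).

        SELECT MetaData.value('(/Meta/From/text())[1]',    'varchar(200)') AS [From],
               addr.a.value('(./text())[1]',               'varchar(200)') AS [To],
               MetaData.value('(/Meta/Subject/text())[1]', 'varchar(200)') AS [Subject]
        FROM Documents
        CROSS APPLY MetaData.nodes('/Meta/To/Address') AS addr(a);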

    Read the article

  • Query Execution Plan - When is the Where clause executed?

    - by Alex
    I have a query like this (created by LINQ):

        SELECT [t0].[Id], [t0].[CreationDate], [t0].[CreatorId]
        FROM [dbo].[DataFTS]('test', 100) AS [t0]
        WHERE [t0].[CreatorId] = 1
        ORDER BY [t0].[RANK]

    DataFTS is a full-text search table-valued function. The query execution plan looks like this:

        SELECT (0%)
          - Sort (23%)
            - Nested Loops (Inner Join) (1%)
              - Sort (Top N Sort) (25%)
                - Stream Aggregate (0%)
                  - Stream Aggregate (0%)
                    - Compute Scalar (0%)
                      - Table Valued Function (FullTextMatch) (13%)
              - Clustered Index Seek (38%)

    Does this mean that the WHERE clause ([CreatorId] = 1) is executed prior to the TVF (the full-text search) or after the full-text search? Thank you.

    Read the article

  • Set time part of datetime variable to 18:00

    - by maxt3r
    Hi. I need to set a datetime variable to two days from now, but its time part must be 18:00. For example, if I call getdate() now I'll get 2010-05-17 13:18:07.260; I need to set it to 2010-05-19 18:00:00.000. Does anybody have a good snippet for that, or any ideas how to do it right?
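
    A hedged sketch (SQL Server syntax, inferred from the getdate() call): strip the time portion with the classic DATEDIFF/DATEADD trick, add two days, then add 18 hours.

        DECLARE @target DATETIME;
        SET @target = DATEADD(HOUR, 18,                                 -- 18:00
                      DATEADD(DAY, 2,                                   -- two days from now
                      DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)));   -- today at midnight
        SELECT @target;  -- e.g. 2010-05-19 18:00:00.000 when run on 2010-05-17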

    Read the article

  • Help with optimising SQL query

    - by user566013
    Hi, I need some help with this problem. I am working on a web application and for the database I am using SQLite. Can someone help me with one query that must be optimized == fast =) I have table x:

        ID | ID_DISH | ID_INGREDIENT
        1  | 1       | 2
        2  | 1       | 3
        3  | 1       | 8
        4  | 1       | 12
        5  | 2       | 13
        6  | 2       | 5
        7  | 2       | 3
        8  | 3       | 5
        9  | 3       | 8
        10 | 3       | 2
        ....

    ID_DISH is the id of the different dishes and ID_INGREDIENT is an ingredient the dish is made of, so in my case the dish with id 1 is made with the ingredients with ids 2 and 3. The table has more than 15000 rows, and my question is: I need a query that will fetch the ids of dishes ordered by the count of ingredients (ASC) that I haven't added to my algorithm. Example: foo(2,4) will return rows in this order:

        ID_DISH | count(stillMissing)
        10      | 2
        1       | 3

    Dish with id 10 has ingredients with ids 2 and 4 and hasn't got 2 more, then is
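
    A hedged sketch of one possible query (SQLite), taking foo(2,4) to mean "the ingredients I already have are 2 and 4": count, per dish, how many of its ingredients are not in that list, and order by that count ascending.

        SELECT ID_DISH,
               SUM(CASE WHEN ID_INGREDIENT NOT IN (2, 4) THEN 1 ELSE 0 END) AS stillMissing
        FROM x
        GROUP BY ID_DISH
        ORDER BY stillMissing ASC;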

    Read the article

  • How to use multiple identity numbers in one table?

    - by vincer
    I have a web application that creates printable forms, and these forms have a unique number on them. The problem is I have 2 forms that separate numbers need to be created for, i.e.:

        Form1 - Numbered 2000000-2999999
        Form2 - Numbered 3000000-3999999

        dbo.test2 - is my form information table
        Tsel      - is my autoinc table for the 3000000 series numbers
        Tadv      - is my autoinc table for the 2000000 series numbers

    What I have done is create 2 tables with just an autoinc row (one for the 2000000 series numbers and one for the 3000000 series numbers). I then created a trigger to add a record to the corresponding table, read back the autoinc number, and add it to the table that stores the form information, including the just-created autoinc number for the right series of forms. Although it does work, I'm concerned that the numbers will get messed up under load. I'm not sure @@IDENTITY will always return the right value when many people are using the system. (I cannot have duplicates and I need to use the numbering scheme shown above.) See the code below.

        ** TRIGGER **

        CREATE TRIGGER MAKEANID2 ON dbo.test2
        AFTER INSERT
        AS
        SET NOCOUNT ON
        declare @someid int
        declare @someid2 int
        declare @startfrom int
        declare @test1 varchar(10)

        select @someid = @@IDENTITY
        select @test1 = (select name1 from test2 where sysid = @someid)

        if @test1 = 'select'
        begin
            insert into Tsel default values
            select @someid2 = @@IDENTITY
        end

        if @test1 = 'adv'
        begin
            insert into Tadv default values
            select @someid2 = @@IDENTITY
        end

        update test2 set name2 = (@someid2) where sysid = @someid
        SET NOCOUNT OFF
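
    One hedged suggestion, as a sketch: inside the trigger, SCOPE_IDENTITY() is safer than @@IDENTITY, because @@IDENTITY returns the last identity value generated anywhere in the session (including by other triggers), while SCOPE_IDENTITY() only sees the insert issued in the current scope.

        insert into Tsel default values
        select @someid2 = SCOPE_IDENTITY()   -- not affected by identities generated in other scopes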

    Read the article

  • SSIS (missing) Pre-Build and Post-Build

    - by Raj More
    For the warehouse work in progress, we have a single solution with multiple projects in it: an OLTP database project, a warehouse database project, and an SSIS ETL project. After the SSIS project is built, I want to move the binaries (XML, really) from the Bin folder to "C:\AutomatedTasks\ETL.Warehouse\" and "C:\AutomatedTasks\ETL". I cannot find the Post-Build events to do that for the SSIS project. Where are they? If they aren't available, how do I achieve this?

    Read the article

  • Reset SQL variable inside SELECT statement

    - by Jason McCreary
    I am trying to number some rows in a bridge table with a single UPDATE/SELECT statement using a counter variable @row. For example:

        UPDATE teamrank
        JOIN (SELECT @row := @row + 1 AS position, name FROM members)
        USING (teamID, memberID)
        SET rank = position

    Is something like this possible, or do I need to create a cursor? If it helps, I am using MySQL 5.
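
    A hedged sketch of how this can work in MySQL 5 (the teamrank column names are assumptions inferred from the USING clause): initialise the counter first, give the derived table an alias, and join it back on its key when applying the rank.

        SET @row := 0;

        UPDATE teamrank tr
        JOIN (SELECT memberID, (@row := @row + 1) AS position
              FROM members
              ORDER BY name) ranked
          ON ranked.memberID = tr.memberID
        SET tr.rank = ranked.position;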

    Read the article

  • Oracle SQL outer join query puzzle

    - by user1651446
    So I am dumb and I have this:

        select whatever
        from bank_accs b1, bank_accs b2, table3 t3
        where t3.bank_acc_id = t1.bank_acc_id
          and b2.bank_acc_number = b1.bank_acc_number
          and b2.currency_code(+) = t3.buy_currency
          and trunc(sysdate) between nvl(b2.start_date, trunc(sysdate))
                                 and nvl(b2.end_date, trunc(sysdate));

    My problem is with the date (actuality) check on b2. I need to return a row for each t3 x b1 combination (t3 = ~10 tables joined, of course), even if there are ONLY INVALID records (date-wise) in b2. How do I outer-join this bit properly? I can't use ANSI joins and must do it in a single flat query. Thanks.

    Read the article

  • Figuring out the resource a lock in SQL Server 2000 affects

    - by Michael Lang
    I am adding a simple web interface to show data from a commercial off-the-shelf (COTS) application. This COTS application issues locks on any record the user is actively looking at (whether they intend to edit and update it or not). I have found sp_lock and the Microsoft sp_lock2 scripts and can see the locks, so that's all well and good. However, I cannot figure out how I can tell if a specific record I am about to update has been affected by one of these locks. If I submit the update request and there is in fact a lock, the web interface will wait indefinitely until the user closes the window in the COTS application. How can I either:

        a) determine, before issuing an update, that the record has been locked, OR
        b) issue an update that will immediately return with a LOCKED status rather than waiting indefinitely on the COTS user to close their window on that record?
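
    A hedged sketch addressing option (b), assuming SQL Server 2000 and using illustrative table and column names: with a lock timeout of zero the UPDATE fails immediately with error 1222 ("Lock request time out period exceeded") instead of waiting, and the web interface can translate that error into a LOCKED status.

        SET LOCK_TIMEOUT 0;            -- fail immediately instead of blocking

        UPDATE dbo.SomeCotsTable       -- hypothetical table/column names
        SET SomeColumn = @NewValue
        WHERE RecordId = @RecordId;

        -- Error 1222 here means another session (the COTS user) holds a lock
        -- on the row; anything else means the update went through.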

    Read the article
