Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.

Page 319/1102 | < Previous Page | 315 316 317 318 319 320 321 322 323 324 325 326  | Next Page >

  • SQL query problem

    - by benjamin button
    Why does this query give me an error (ORA-01790: expression must have same datatype as corresponding expression)?

      SELECT TO_CHAR(logical_date,'MM') MONTH FROM logical_date WHERE logical_date_type='B'
      UNION
      SELECT TO_CHAR(logical_date,'MM')+1 MONTH FROM logical_date WHERE logical_date_type='B'

    When I run the two SELECTs separately, each gives the proper output.
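    A likely cause: TO_CHAR() returns a character value, while TO_CHAR(...)+1 is implicitly converted to a NUMBER, so the two UNION branches have different datatypes. A minimal sketch of one fix, assuming the table described above, is to force both branches to the same type:

      -- Sketch: make both UNION branches return NUMBER.
      SELECT TO_NUMBER(TO_CHAR(logical_date, 'MM'))     AS month_num
        FROM logical_date
       WHERE logical_date_type = 'B'
      UNION
      SELECT TO_NUMBER(TO_CHAR(logical_date, 'MM')) + 1 AS month_num
        FROM logical_date
       WHERE logical_date_type = 'B';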

    Read the article

  • SQL joins "going up" two tables

    - by blcArmadillo
    I'm trying to create a moderately complex query with joins:

      SELECT `history`.`id`, `parts`.`type_id`, `serialized_parts`.`serial`,
             `history_actions`.`action`, `history`.`date_added`
      FROM `history_actions`, `history`
      LEFT OUTER JOIN `parts` ON `parts`.`id` = `history`.`part_id`
      LEFT OUTER JOIN `serialized_parts` ON `serialized_parts`.`parts_id` = `history`.`part_id`
      WHERE `history_actions`.`id` = `history`.`action_id`
        AND `history`.`unit_id` = '1'
      ORDER BY `history`.`id` DESC

    I'd like to replace `parts`.`type_id` in the SELECT statement with `part_list`.`name`, where the relationship I need to enforce between the two tables is `part_list`.`id` = `parts`.`type_id`. Also I have to use joins because in some cases `history`.`part_id` may be NULL, which obviously isn't a valid part id. How would I modify the query to do this?
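    One way to pull in `part_list`.`name` (a sketch, assuming a `part_list` table with `id` and `name` columns): join it through `parts`, and make that join a LEFT OUTER JOIN as well so rows with a NULL `part_id` still survive.

      SELECT `history`.`id`, `part_list`.`name`, `serialized_parts`.`serial`,
             `history_actions`.`action`, `history`.`date_added`
      FROM `history`
      JOIN `history_actions` ON `history_actions`.`id` = `history`.`action_id`
      LEFT OUTER JOIN `parts` ON `parts`.`id` = `history`.`part_id`
      LEFT OUTER JOIN `part_list` ON `part_list`.`id` = `parts`.`type_id`
      LEFT OUTER JOIN `serialized_parts` ON `serialized_parts`.`parts_id` = `history`.`part_id`
      WHERE `history`.`unit_id` = '1'
      ORDER BY `history`.`id` DESC;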

    Read the article

  • Optimizing a Soundex Query for finding similar names

    - by xkingpin
    My application will offer a list of suggestions for English names that "sound like" a given typed name. The query needs to be optimized to return results as quickly as possible. Which option would return results fastest (or suggest your own if you have one)?

    A. Generate the Soundex hash and store it in the Names table, then do something like the following (this saves generating the Soundex hash for every row in my db per query, right?):

      select name from names where NameSoundex = Soundex('Ann')

    B. Use the Difference function (this must generate the Soundex for every name in the table?):

      select name from names where Difference(name, 'Ann') = 3

    C. Simple comparison:

      select name from names where Soundex(name) = Soundex('Ann')

    Option A seems to me like it would be the fastest to return results because it only generates the Soundex for one string and then compares against an indexed column, NameSoundex. Option B should give more results than option A because the name does not have to be an exact Soundex match, but it could be slower. Assuming my table could contain millions of rows, what would yield the best results?
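    A sketch of option A on SQL Server (object names are illustrative): because SOUNDEX is deterministic, its value can be kept in a persisted computed column and indexed, so each query only computes SOUNDEX once, for the search term, and then seeks on the index.

      -- One-off setup: computed column plus index on the Soundex value.
      ALTER TABLE names ADD NameSoundex AS SOUNDEX(name) PERSISTED;
      CREATE INDEX IX_names_NameSoundex ON names (NameSoundex);

      -- Query: SOUNDEX is evaluated once for 'Ann', then the index is seekable.
      SELECT name
      FROM names
      WHERE NameSoundex = SOUNDEX('Ann');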

    Read the article

  • How to implement filter system in SQL?

    - by sadvaw
    Right now I am planning to add a filter system to my site. Examples:

      (ID=apple,  COLOR=red,    TASTE=sweet,        ORIGIN=US)
      (ID=mango,  COLOR=yellow, TASTE=sweet,        ORIGIN=MEXICO)
      (ID=banana, COLOR=yellow, TASTE=bitter-sweet, ORIGIN=US)

    So now I am interested in doing the following:

      SELECT ID FROM thisTable WHERE COLOR='yellow' AND TASTE='SWEET'

    My problem is that I am doing this for multiple categories on my site, and the columns are NOT consistent (for example, if the table is for handphones, then the attributes will be BRAND, 3G-ENABLED, PRICE, COLOR, WAVELENGTH, etc.). How could I design a general schema that allows this? Right now I am planning on doing:

      table(ID, KEY, VALUE)

    This allows an arbitrary number of columns, but for the query I am using

      SELECT ID FROM table WHERE (KEY=X1 AND VALUE=V1) AND (KEY=X2 AND VALUE=V2), ...

    which returns an empty set. Can someone recommend a good solution to this? Note that the number of columns WILL change regularly.
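    The empty set is expected: a single row of table(ID, KEY, VALUE) has only one KEY, so KEY=X1 AND KEY=X2 can never both be true on the same row. A common workaround for this entity-attribute-value layout (a sketch using the example data, an illustrative table name, and MySQL-style backticks since KEY is usually a reserved word) is to OR the conditions and then keep only the IDs that matched all of them:

      SELECT ID
      FROM attributes                    -- illustrative name for table(ID, KEY, VALUE)
      WHERE (`KEY` = 'COLOR' AND VALUE = 'yellow')
         OR (`KEY` = 'TASTE' AND VALUE = 'sweet')
      GROUP BY ID
      HAVING COUNT(DISTINCT `KEY`) = 2;  -- must satisfy both conditions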

    Read the article

  • Counting consecutive items within MS SQL

    - by Greg
    Got a problem with a query I'm trying to write. I have a table that lists people that have been sent an email. There is a bit column named Active which is set to true if they have responded. I need to count the number of consecutive emails for which the person has been inactive, counting from either their first email or their last active email. For example, this basic table shows one person who has been sent 9 emails. They were active on two of the emails (3 and 5), so their inactive count would be 4, as we are counting from email number 6 onwards.

      PersonID (int) | EmailID (int) | Active (bit)
      ---------------+---------------+-------------
             1       |       1       |      0
             1       |       2       |      0
             1       |       3       |      1
             1       |       4       |      0
             1       |       5       |      1
             1       |       6       |      0
             1       |       7       |      0
             1       |       8       |      0
             1       |       9       |      0

    Any pointers or help would be great. Regards, Greg
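    One way to get that number (a sketch, assuming the table is called EmailLog and EmailID reflects the send order): for each person, count the emails that come after their last active one, falling back to counting everything if they have never been active.

      -- Emails sent after the person's last response (all of them if they never responded).
      SELECT e.PersonID,
             COUNT(*) AS ConsecutiveInactive
      FROM EmailLog AS e
      WHERE e.Active = 0
        AND e.EmailID > ISNULL((SELECT MAX(a.EmailID)
                                FROM EmailLog AS a
                                WHERE a.PersonID = e.PersonID
                                  AND a.Active = 1), 0)
      GROUP BY e.PersonID;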

    Read the article

  • SQL count NULL cells

    - by Giuseppe
    Dear all, I have the following problem. I have a table in a db with many columns. I can run different kinds of SELECT queries to show, for example, for each record that satisfies a condition:

      - all cells from columns with names ending in _t0
      - all cells from columns with names ending in _t1
      - ...

    To get the column lists used to form the queries, I use the information schema. Now, the problem: each query returns a record with a subset of the columns of the big table. This means that I can get a row of (all!) NULLs. How can I ask my query to reject such rows without having to type in the column names explicitly (i.e. by saying where col_1 is not null, col_2 is not null, ...)? Is it possible? Thanks in advance! Sep
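    Since the column lists already come from the information schema, one option (a sketch in SQL Server syntax; table and suffix names are illustrative, and it assumes the selected columns have compatible types) is to build the filter the same way: wrap the selected columns in COALESCE and reject rows where the result is NULL.

      -- Build "WHERE COALESCE(col_a, col_b, ...) IS NOT NULL" for the *_t0 columns.
      DECLARE @cols nvarchar(max), @sql nvarchar(max);

      SELECT @cols = STUFF((SELECT ', ' + QUOTENAME(COLUMN_NAME)
                            FROM INFORMATION_SCHEMA.COLUMNS
                            WHERE TABLE_SCHEMA = 'dbo'
                              AND TABLE_NAME   = 'BigTable'
                              AND COLUMN_NAME LIKE '%\_t0' ESCAPE '\'
                            FOR XML PATH('')), 1, 2, '');

      SET @sql = N'SELECT ' + @cols + N' FROM dbo.BigTable'
               + N' WHERE COALESCE(' + @cols + N') IS NOT NULL';

      EXEC sp_executesql @sql;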

    Read the article

  • Best way to handle SQL Server fulltext index updates

    - by tlianza
    Hi all, I have a fulltext index which doesn't need to be immediately up-to-date. I'd like to spare myself the I/O (when I do bulk updates, I see a ton of I/O related to the index) and do the index updates during low-usage times (nightly, perhaps even weekly). It seems there are two ways to go about this:

      1. Turn off change tracking (SET CHANGE_TRACKING OFF) and add a timestamp field to the indexed table, so that you can run
         alter fulltext index on <table> start INCREMENTAL population, or
      2. Enable change tracking, but set it to MANUAL, so that you can run
         alter fulltext index on <table> start UPDATE population when you need it updated.

    Is there a preferred method? I couldn't tell from this overview if there was a performance benefit one way or the other. Tom
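    For reference, a sketch of the second option (the table name is illustrative): with manual change tracking, SQL Server keeps recording which rows changed but leaves the index alone until a population is started, which can then be done from a nightly job.

      -- One-off: track changes, but don't propagate them automatically.
      ALTER FULLTEXT INDEX ON dbo.Documents SET CHANGE_TRACKING MANUAL;

      -- Nightly/weekly job step: push the accumulated changes into the index.
      ALTER FULLTEXT INDEX ON dbo.Documents START UPDATE POPULATION;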

    Read the article

  • Help with SQL query (list strings and count in same query)

    - by Mestika
    Hi everybody, I’m working on a small log system for a webpage, and I’m having some difficulties with a query that needs to do multiple things. I have tried some nested/subqueries but can’t seem to get it right. I have two tables:

      User = {userid: int, username}
      Registered = {userid: int, favoriteid: int}

    What I need is a query that lists all the userids and usernames. In addition, I also need to count the total number of favoriteids each user is registered with. A user who is not registered for any favorite must also be listed, but with the favorite count shown as zero. I hope that I have explained my request properly, but otherwise please write back so I can elaborate. By the way, the query I’ve tried looks like this:

      SELECT user.userid, user.username
      FROM user, registered
      WHERE user.userid = registered.userid
        (SELECT COUNT(favoriteid) FROM registered)

    However, it doesn’t do the trick, unfortunately. Kind regards, Mestika
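    A sketch of one way to get the zero counts as well (using the two tables above): left-join from User to Registered and count the matched favorite rows, so unmatched users count as 0.

      SELECT u.userid,
             u.username,
             COUNT(r.favoriteid) AS favorite_count   -- COUNT(column) ignores the NULLs from unmatched users
      FROM user AS u
      LEFT JOIN registered AS r ON r.userid = u.userid
      GROUP BY u.userid, u.username;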

    Read the article

  • Concerned with TOP in SQL

    - by ramyatk06
    Hi guys, I have a variable @count of datatype int, and I am setting values into this @count. I want to select the top @count rows from a table, but when I use SELECT TOP @count it shows an error:

      Delete from ItemDetails
      where GroupId in (Select Top @count Id from ItemDetails where GroupId=@Prm_GroupId)

    The error is "Incorrect syntax near '@count'". Can anybody help?
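    On SQL Server 2005 and later, TOP accepts a variable as long as it is wrapped in parentheses (a sketch of the statement above); on SQL Server 2000, SET ROWCOUNT @count before the statement is the usual workaround.

      -- TOP takes an expression when parenthesised; note that TOP without an
      -- ORDER BY returns an arbitrary set of rows.
      DELETE FROM ItemDetails
      WHERE GroupId IN (SELECT TOP (@count) Id
                        FROM ItemDetails
                        WHERE GroupId = @Prm_GroupId);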

    Read the article

  • Again a new query. I have been trying to solve this for an hour. Please help

    - by Dharmendra
    Query: List the film title and the leading actor for all of 'Julie Andrews' films. There are three tables:

      movie(id, title, yr, score, votes, director)
      actor(id, name)
      casting(movieid, actorid, ord)

    My attempt:

      select movie.title, actor.name as cont
      from movie
      join casting on (movie.id = casting.movieid)
      join actor on (casting.actorid = actor.id)
      where actor.name = 'Julie Andrews'

    Actually, I can't work out how to find the leading actor.
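    If ord marks the billing order (ord = 1 being the lead, which is how this schema is usually defined), one sketch is to pick the films Julie Andrews appears in and then join back to casting for the ord = 1 row:

      SELECT m.title,
             lead_actor.name AS leading_actor
      FROM movie AS m
      JOIN casting AS c_lead
        ON c_lead.movieid = m.id
       AND c_lead.ord = 1                       -- the lead role
      JOIN actor AS lead_actor
        ON lead_actor.id = c_lead.actorid
      WHERE m.id IN (SELECT c.movieid           -- films Julie Andrews appears in
                     FROM casting AS c
                     JOIN actor AS a ON a.id = c.actorid
                     WHERE a.name = 'Julie Andrews');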

    Read the article

  • SQL query to get the grandfather's name by doing a self join

    - by mahesh
    Hello all, can anyone please give me the query to get a child's name and his grandfather's name? For example, if I have a table Relationships with childid and fatherid columns, how will I get the grandfather? I can easily get the father's name by using a join, but for the grandfather I need to join two times, so can anyone help me with this? D.Mahesh
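    A sketch of the double self-join (the People(id, name) lookup table is an assumption, since only the Relationships table is described):

      SELECT child.name       AS child_name,
             grandfather.name AS grandfather_name
      FROM Relationships AS r1                      -- child -> father
      JOIN Relationships AS r2                      -- father -> grandfather
        ON r2.childid = r1.fatherid
      JOIN People AS child       ON child.id = r1.childid
      JOIN People AS grandfather ON grandfather.id = r2.fatherid;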

    Read the article

  • Child sProc cannot reference a Local temp table created in parent sProc

    - by John Galt
    On our production SQL2000 instance, we have a database with hundreds of stored procedures, many of which use a technique of creating a #TEMP table "early" on in the code; various inner stored procedures then get EXECUTEd by this parent sProc. In SQL2000, the inner or "child" sProcs have no problem INSERTing into #TEMP or SELECTing data from #TEMP. In short, I assume they can all refer to this #TEMP because they use the same connection. In testing with SQL2008, I find two manifestations of different behavior. First, at design time, the new "intellisense" feature in Management Studio complains, when EDITing the child sProc, that #TEMP is an "invalid object name". But worse is that at execution time, the invoked parent sProc fails inside the nested child sProc. Someone suggested that the solution is to change to ##TEMP, which is apparently a global temporary table that can be referenced from different connections. That seems too drastic a proposal, both because of the amount of work to chase down all the problem spots and because of possible/probable nasty effects when these sProcs are invoked from web applications (i.e. multiuser issues). Is this indeed a change in behavior in SQL2005 or SQL2008 regarding #TEMP (local temp tables)? We skipped 2005, but I'd like to learn more precisely why this is occurring before I go off and try to hack out the needed fixes. Thanks.
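    For reference, a minimal sketch of the pattern being discussed (illustrative names). Per the documented scoping rules, a local temp table created in a parent procedure remains visible to procedures it executes, so the IntelliSense warning is cosmetic (the child is parsed on its own); whether the runtime failure has the same cause is a separate question.

      CREATE PROCEDURE dbo.ChildProc
      AS
      BEGIN
          -- References the caller's #TEMP; IntelliSense flags it, but it resolves at run time.
          INSERT INTO #TEMP (id) VALUES (2);
      END
      GO

      CREATE PROCEDURE dbo.ParentProc
      AS
      BEGIN
          CREATE TABLE #TEMP (id int);
          INSERT INTO #TEMP (id) VALUES (1);
          EXEC dbo.ChildProc;          -- the child can see and modify #TEMP
          SELECT id FROM #TEMP;        -- returns 1 and 2
      END
      GO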

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level, the problem is as below. The following query executes quickly (1 second):

      SELECT SA.*
      FROM cg.SEARCHSERVER_ACTYS AS SA
      JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
        ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

      SELECT SA.*
      FROM cg.SEARCHSERVER_ACTYS AS SA
      JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
        ON T1.[Key] = SA.UNIQUE_ID
      WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows: (1) the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join), and (2) the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows. As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either (a) by parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN), or (b) by forcing a HASH JOIN to be used. In both cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.
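    For reference, a sketch of workaround (b) applied to the query above (the parameterised form would instead pass the date as a parameter and append OPTION (OPTIMIZE FOR UNKNOWN)):

      -- Force a hash join so the low row estimate can't push the plan into nested loops.
      SELECT SA.*
      FROM cg.SEARCHSERVER_ACTYS AS SA
      JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
        ON T1.[Key] = SA.UNIQUE_ID
      WHERE SA.CHG_DATE > '19 Feb 2010'
      OPTION (HASH JOIN);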

    Read the article

  • SQL Server Full Text Search Leading Wildcard

    - by aherrick
    After taking a look at this SO question and doing my own research, it appears that you cannot have a leading wildcard while using full-text search. So, in the simplest example, if I have TABLE1 with one column containing the values below:

      COLUMN1
      -------
      coin
      coinage
      undercoin

    then

      select COLUMN1 from TABLE1 where COLUMN1 LIKE '%coin%'

    would get me the results I want. How can I get the exact same results with full-text search enabled on the column? The following two queries return the exact same data, which is not exactly what I want:

      SELECT COLUMN1 FROM TABLE1 WHERE CONTAINS(COLUMN1, '"coin*"')
      SELECT COLUMN1 FROM TABLE1 WHERE CONTAINS(COLUMN1, '"*coin*"')

    Read the article

  • Efficient way to update SQL 'relationship' table

    - by AmbroseChapel
    Say I have three properly normalised tables. One of people, one of qualifications and one mapping people to qualifications:

      People:
        id | Name
        ---------
        1  | Alice
        2  | Bob

      Degrees:
        id | Name
        ---------
        1  | PhD
        2  | MA

      People-to-degrees:
        person_id | degree_id
        ---------------------
        1         | 2          # Alice has an MA
        2         | 1          # Bob has a PhD

    So then I have to update this mapping via my web interface. (I made a mistake. Bob has a BA, not a PhD, and Alice just got her B Eng.) There are four possible states of these one-to-many relationship mappings:

      1. was true before, should now be false
      2. was false before, should now be true
      3. was true before, should remain true
      4. was false before, should remain false

    What I don't want to do is read the values from four checkboxes, then hit the database four times to say "Did Bob have a BA before? Well he does now.", "Did Bob have a PhD before? Because he doesn't any more", and so on. How do other people address this issue? I'm curious to see if someone else arrives at the same solution I did.
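    One common way to avoid the per-checkbox round trips (a sketch in SQL Server 2008+ syntax; the table name and parameters are illustrative): send the full set of checked degree ids for the person being edited and reconcile them with a single MERGE, which inserts the newly-true rows and deletes the newly-false ones in one trip.

      DECLARE @person_id int = 1;                     -- the person being edited (a parameter in practice)
      DECLARE @checked TABLE (degree_id int PRIMARY KEY);  -- the degree ids ticked in the UI

      MERGE people_to_degrees AS target
      USING @checked AS source
         ON target.person_id = @person_id
        AND target.degree_id = source.degree_id
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (person_id, degree_id) VALUES (@person_id, source.degree_id)
      WHEN NOT MATCHED BY SOURCE AND target.person_id = @person_id THEN
          DELETE;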

    Read the article

  • Multiple IN statements for WHERE. Would this return good data?

    - by TheDudeAbides
    SELECT ['VISA CK - 021810$'].[ACCT NBR #1],
           ['VISA CK - 021810$'].[ALT CUST NM #1],
           ['VISA CK - 021810$'].[LAST USED]
    FROM ['VISA CK - 021810$']
    WHERE ['VISA CK - 021810$'].[ALT CUST NM #1] IN
          (SELECT ['VISA CK - 021810$'].[ALT CUST NM #1]
           FROM ['VISA CK - 021810$']
           GROUP BY ['VISA CK - 021810$'].[ALT CUST NM #1]
           HAVING COUNT(['VISA CK - 021810$'].[ALT CUST NM #1]) > 1)
      AND ['VISA CK - 021810$'].[ACCT NBR #1] IN
          (SELECT ['VISA CK - 021810$'].[ACCT NBR #1]
           FROM ['VISA CK - 021810$']
           GROUP BY ['VISA CK - 021810$'].[ACCT NBR #1]
           HAVING COUNT(['VISA CK - 021810$'].[ACCT NBR #1]) > 1)

    Read the article

  • How to record different authentication types (username / password vs token based) in audit log

    - by RM
    I have two types of users for my system: normal human users with a username/password, and delegation-authorized accounts through OAuth (i.e. using a token identifier). The information that is stored for each is quite different, and they are managed by different subsystems. They do, however, interact with the same tables/data within the system, so I need to maintain the audit trail regardless of whether a human user or a token-based user modified the data. My solution at the moment is to have a table called something like AuditableIdentity, and then have the two types inheriting off that table (either in the single table, or as two separate tables with a 1-to-1 PK with AuditableIdentity). All operations would use the common AuditableIdentity PK for CreatedBy, ModifiedBy etc. columns. There isn't any FK constraint on the audit columns, so any text can go in there, but I want an easy way to determine whether it was a human or the system that made the change, and joining to the one AuditableIdentity table seems like a clean way to do that? Is there a best practice for this scenario? Is this an appropriate way of approaching the problem, or would you not bother with the common table and just rely on joins (to the two separate, unrelated user/token tables) later to determine which user type matches which audit records?
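    A sketch of the supertype/subtype variant described (illustrative DDL): the audited tables point their CreatedBy/ModifiedBy columns at AuditableIdentity, and the type column makes "human or system?" answerable without touching the subtype tables.

      CREATE TABLE AuditableIdentity (
          identity_id   int IDENTITY(1,1) PRIMARY KEY,
          identity_type char(1) NOT NULL               -- 'U' = human user, 'T' = OAuth token
              CHECK (identity_type IN ('U', 'T'))
      );

      CREATE TABLE HumanUser (
          identity_id int PRIMARY KEY REFERENCES AuditableIdentity (identity_id),
          username    nvarchar(100) NOT NULL UNIQUE
          -- password hash, etc.
      );

      CREATE TABLE TokenAccount (
          identity_id int PRIMARY KEY REFERENCES AuditableIdentity (identity_id),
          token_key   nvarchar(200) NOT NULL UNIQUE
          -- OAuth metadata, etc.
      );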

    Read the article

  • SQL Paging/Sorting - Efficient method with dynamic sort?

    - by dmose
    I'm trying to implement this style of paging: http://www.4guysfromrolla.com/webtech/042606-1.shtml

      CREATE PROCEDURE [dbo].[usp_PageResults_NAI]
      (
          @startRowIndex int,
          @maximumRows int
      )
      AS

      DECLARE @first_id int, @startRow int

      -- A check can be added to make sure @startRowIndex isn't > count(1)
      -- from employees before doing any actual work unless it is guaranteed
      -- the caller won't do that

      -- Get the first employeeID for our page of records
      SET ROWCOUNT @startRowIndex
      SELECT @first_id = employeeID FROM employees ORDER BY employeeid

      -- Now, set the row count to MaximumRows and get
      -- all records >= @first_id
      SET ROWCOUNT @maximumRows

      SELECT e.*, d.name as DepartmentName
      FROM employees e
      INNER JOIN Departments D ON e.DepartmentID = d.DepartmentID
      WHERE employeeid >= @first_id
      ORDER BY e.EmployeeID

      SET ROWCOUNT 0
      GO

    This method works great. However, is it possible to use this method and have dynamic field sorting? If we change the final SELECT to

      SELECT e.*, d.name as DepartmentName
      FROM employees e
      INNER JOIN Departments D ON e.DepartmentID = d.DepartmentID
      WHERE employeeid >= @first_id
      ORDER BY e.FirstName DESC

    it breaks the sorting... Is there any way to combine this method of paging with the ability to sort on different fields?
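    On SQL Server 2005 and later, one way to combine paging with a caller-chosen sort column (a sketch; @sortColumn and the other variables are assumed to be procedure parameters) is ROW_NUMBER() with CASE expressions in its ORDER BY, which removes the need for the @first_id trick:

      -- Page @maximumRows rows starting at @startRowIndex, ordered by the chosen column.
      ;WITH numbered AS
      (
          SELECT e.*,
                 d.name AS DepartmentName,
                 ROW_NUMBER() OVER (ORDER BY
                     CASE WHEN @sortColumn = 'FirstName'  THEN e.FirstName  END DESC,
                     CASE WHEN @sortColumn = 'EmployeeID' THEN e.EmployeeID END ASC) AS rn
          FROM employees AS e
          INNER JOIN Departments AS d ON d.DepartmentID = e.DepartmentID
      )
      SELECT *
      FROM numbered
      WHERE rn BETWEEN @startRowIndex AND @startRowIndex + @maximumRows - 1
      ORDER BY rn;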

    Read the article

  • How to rollback a database deployment without losing new data?

    - by devlife
    My company uses virtual machines for our web/app servers. This allows for very easy rollbacks of a deployment if something goes wrong. However, if an app server deployment also requires a database deployment and we have to roll back, I'm kind of at a loss. How can you roll back database schema changes without losing data? The only thing that I can think of is to write a script that will drop/revert tables/columns back to their original state. Is this really the best way?

    Read the article

  • What does MSSQL execution plan show?

    - by tim
    There is the following code:

      declare @XmlData xml = '<Locations>
        <Location rid="1"/>
      </Locations>'

      declare @LocationList table (RID char(32));

      insert into @LocationList(RID)
      select Location.RID.value('@rid','CHAR(32)')
      from @XmlData.nodes('/Locations/Location') Location(RID)

      insert into @LocationList(RID)
      select A2RID from tblCdbA2

    Table tblCdbA2 has 172810 rows. I executed the batch in SSMS with "Include Actual Execution Plan" enabled and with Profiler running. The plan shows that the first query's cost is 88% relative to the batch and the second is 12%, but Profiler says that the durations of the first and second query are 17 ms and 210 ms respectively (overall time 229 ms), which is nothing like 12% and 88%. What is going on? Is there a way to determine from the execution plan which is the slowest part of the query?
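    The cost percentages shown in a plan, even an "actual" plan, are the optimiser's estimated costs rather than measured times, so they can disagree badly with what Profiler reports. One way to measure per statement directly (a minimal sketch) is to wrap the batch with SET STATISTICS TIME:

      SET STATISTICS TIME ON;   -- CPU and elapsed time are reported per statement in the Messages tab

      -- ... run the two INSERT statements here ...

      SET STATISTICS TIME OFF;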

    Read the article

  • SQL Server full-text index: CONTAINS returns empty

    - by max
    Hi all: I've got an issue with a full-text index; can anybody help me with this? I set up the full-text index like so:

      CREATE FULLTEXT INDEX ON dbo.Companies   -- my table name
      (
          CompanyName                          -- column of my table
          Language 0X0
      )
      KEY INDEX IX_Companies_CompanyAlias
      ON QuestionsDB
      WITH CHANGE_TRACKING AUTO
      GO

    Then I use CONTAINS to find the matching rows:

      SELECT CompanyId, CompanyName
      FROM dbo.Companies
      WHERE CONTAINS(CompanyName, 'Micro')

    Everything runs fine, it just returns an empty result set. And I am sure there is a company with CompanyName "Microsoft" in the Companies table. Much appreciated if anybody can do me a favor on this.
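    CONTAINS matches whole words by default, so 'Micro' only finds rows containing the word "Micro" itself, not "Microsoft". If the intent is to match words starting with Micro, a prefix term should help (a sketch of the query above):

      -- Prefix term: the * must sit inside the double quotes.
      SELECT CompanyId, CompanyName
      FROM dbo.Companies
      WHERE CONTAINS(CompanyName, '"Micro*"');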

    Read the article

  • Restore partitioned database into multiple filegroups

    - by Renju
    Does anyone have a query to restore a partitioned db that has multiple filegroups? In the restore options in SSMS I need to manually edit the path of every filegroup in the "Restore As" column, which is a bit tedious as the database has more than 150 filegroups. For example:

      USE master
      GO
      -- First determine the number and names of the files in the backup.
      RESTORE FILELISTONLY FROM MyNwind_1

      -- Restore the files for MyNwind.
      RESTORE DATABASE MyNwind
      FROM MyNwind_1
      WITH NORECOVERY,
           MOVE 'MyNwind_data_1' TO 'D:\MyData\MyNwind_data_1.mdf',
           MOVE 'MyNwind_data_2' TO 'D:\MyData\MyNwind_data_2.ndf'
      GO

      -- Apply the first transaction log backup.
      RESTORE LOG MyNwind FROM MyNwind_log1 WITH NORECOVERY
      GO

      -- Apply the last transaction log backup.
      RESTORE LOG MyNwind FROM MyNwind_log2 WITH RECOVERY
      GO

    Here I need to specify a MOVE clause for each file, which is a tedious task when there are more than 100 filegroups:

      MOVE 'MyNwind_data_1' TO 'D:\MyData\MyNwind_data_1.mdf',
      MOVE 'MyNwind_data_2' TO 'D:\MyData\MyNwind_data_2.ndf'

    I need to move the files into a path I provide as a parameter. Please help. Regards, Renju http://blog.renjucool.com
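    One approach sometimes used (a sketch with illustrative names): copy the LogicalName and Type columns from the RESTORE FILELISTONLY output into a work table and generate the MOVE clauses with string concatenation instead of typing them. (Capturing the output directly via INSERT ... EXEC needs a table whose columns exactly match the result set of your SQL Server version, so only the two needed columns are assumed here.)

      CREATE TABLE #filelist (LogicalName nvarchar(128), Type char(1));
      -- ... populate #filelist from the RESTORE FILELISTONLY output ...

      DECLARE @target nvarchar(260), @moves nvarchar(max), @sql nvarchar(max);
      SET @target = N'D:\MyData\';    -- the path parameter

      SELECT @moves = COALESCE(@moves + N',' + CHAR(13), N'')
                    + N'     MOVE N''' + LogicalName + N''' TO N''' + @target + LogicalName
                    + CASE WHEN Type = 'L' THEN N'.ldf''' ELSE N'.ndf''' END
      FROM #filelist;

      SET @sql = N'RESTORE DATABASE MyNwind FROM MyNwind_1 WITH NORECOVERY,' + CHAR(13) + @moves;

      PRINT @sql;     -- review the generated statement, then EXEC (@sql)

      DROP TABLE #filelist;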

    Read the article

  • SQL SELECT Statement

    - by mouthpiec
    I have a table with the following columns: id, teamA_id, teamB_id. Will it be possible to write a SELECT statement that gives both teamA_id and teamB_id in the same column?

    EDIT: Consider this example. From

      id, teamA_id, teamB_id
      1,  21,       45
      2,  34,       67

    I need

      Teams
      21
      45
      34
      67
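    A sketch of one way to do this (the table name matches is an assumption): stack the two columns with UNION ALL, or plain UNION if duplicate ids should collapse into one row.

      SELECT teamA_id AS Teams FROM matches
      UNION ALL
      SELECT teamB_id          FROM matches;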

    Read the article

< Previous Page | 315 316 317 318 319 320 321 322 323 324 325 326  | Next Page >