Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.


  • Prioritize SQL WHERE clause

    - by JaTochNietDan
    Basically I want to do this: SELECT * FROM `table` WHERE x = 'hello' OR x = 'bye' LIMIT 1; I want it to return one row, but to prioritize matches on the first condition. So if a row exists where column x's value is "hello", it should not return the result for the 'bye' value. If no "hello" value exists, though, it should return the result for the 'bye' value. I can't figure out a way to do it even though it seems fairly trivial. Any ideas?
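
    One way to do this in MySQL (an editor's sketch, not from the original thread) is to keep the OR but rank the 'hello' matches first with ORDER BY, since a boolean comparison sorts as 0 or 1:

        SELECT *
        FROM `table`
        WHERE x = 'hello' OR x = 'bye'
        ORDER BY (x = 'hello') DESC  -- rows matching 'hello' sort first
        LIMIT 1;

    ORDER BY FIELD(x, 'hello', 'bye') would express the same priority list more explicitly.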


  • Oracle/SQL table attribute question

    - by UnceRawr
    I have been given a context schema for a database I need to build for my coursework, but some of the attributes (column names) have letters in brackets at the end of them, such as:

        x_Number (Adm)
        x_Number (A)
        x_Number (P)

    The problem is I have no idea what these letters mean! Can anyone help?


  • SQL Trigger Need to set x from a value

    - by Eric
    I'm stuck on the type of trigger needed for this constraint. I will have a price and a commission; the price determines the commission percentage (< 100 → 4%, < 200 → 5%, etc.). My idea: the database contains a separate table, PC, that holds four price values (101, 201, 401, 601), each with its own matching commission %. When I create a property listing I want to calculate the commission earned based on the price entered. On insert, I need to compare :NEW.ASKING_PRICE against the prices in PC; once :NEW.ASKING_PRICE is less than a price row, I set the commission to that row's value.

        CREATE OR REPLACE TRIGGER findCommission
        BEFORE INSERT OR UPDATE ON HASLISTING
        FOR EACH ROW
        BEGIN
          IF :NEW.ASKING_PRICE < 100001 THEN
            :NEW.COMMISSION := 6.0;
          ELSIF :NEW.ASKING_PRICE < 250001 THEN
            :NEW.COMMISSION := 5.5;
          ELSIF :NEW.ASKING_PRICE < 1000001 THEN
            :NEW.COMMISSION := 5.0;
          ELSE
            :NEW.COMMISSION := 4.0;
          END IF;
        END;
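
    A lookup-based variant (an editor's sketch; the PRICE_LIMIT and COMMISSION column names for the PC table are assumptions based on the description above) keeps the thresholds in data rather than in trigger code:

        CREATE OR REPLACE TRIGGER findCommission
        BEFORE INSERT OR UPDATE ON HASLISTING
        FOR EACH ROW
        BEGIN
          -- take the commission of the smallest threshold the price falls under;
          -- assumes PC always contains a top threshold so a row is found
          SELECT COMMISSION
            INTO :NEW.COMMISSION
            FROM (SELECT COMMISSION
                    FROM PC
                   WHERE :NEW.ASKING_PRICE < PRICE_LIMIT
                   ORDER BY PRICE_LIMIT)
           WHERE ROWNUM = 1;
        END;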


  • Complex SQL queries (DELETE)?

    - by Joe
    Hello all, I'm working with three tables; for simplicity's sake let's call them A, B, and C. Tables A and B each have a column called id, as well as one other column, Aattribute and Battribute respectively. Table C also has an id column, plus two other columns which hold values of A.id and B.id. Now, in my code I have easy access to values for both Aattribute and Battribute, and I want to delete the matching row in C, so effectively I want to do something like this: DELETE FROM C WHERE aid=(SELECT id FROM A WHERE Aattribute='myvalue') AND bid=(SELECT id FROM B WHERE Battribute='myothervalue') But this obviously doesn't work. Is there any way to make a single complex query, or do I have to run three queries, where I first get the value of A.id using a SELECT with 'myvalue', then the same for B.id, and then use those in the final query? Edit: it's not letting me comment, so in response to the first comment: I tried the above query and it did not work; I figured it just wasn't syntactically correct. Using MS Access, for what it's worth.
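
    One hedged rewrite (an editor's sketch): Access is often happier with IN predicates than with scalar = subqueries in a DELETE, so this form may succeed where the original fails:

        DELETE FROM C
        WHERE C.aid IN (SELECT A.id FROM A WHERE A.Aattribute = 'myvalue')
          AND C.bid IN (SELECT B.id FROM B WHERE B.Battribute = 'myothervalue');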


  • DB Strategy for inserting into a high-read table (SQL Server)

    - by Tom
    Looking for strategies for a very large table with data maintained for reporting and historical purposes, where a very small subset of that data is used in daily operations.

    Background: We have Visitor and Visit tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our Visitor and Visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and with keeping our indexes clean, small, and not fragmented. PS: We do not currently have the budget or expertise for a data warehouse.

    The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance, a Visitor_Archive and a Visitor_Small table), where the larger Visitor and Visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table exists for managing leads: sending a lead an email, getting a lead's phone number, listing my leads, etc. The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync. Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?
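
    On the filtered-replication question: transactional and merge replication do support static row filters (e.g. WHERE FooID = x), so that route exists. An alternative worth considering (an editor's sketch; the column names, including the IsLead flag, are assumptions) is an indexed view, which SQL Server keeps in sync with the base table on every insert and update, with no replication agent involved:

        -- a view over just the "hot" subset of visitors
        CREATE VIEW dbo.Visitor_Small
        WITH SCHEMABINDING
        AS
        SELECT VisitorID, Email, Phone, AssignedTo
        FROM dbo.Visitor
        WHERE IsLead = 1;
        GO
        -- materialize the view; SQL Server maintains it automatically from here on
        CREATE UNIQUE CLUSTERED INDEX IX_Visitor_Small ON dbo.Visitor_Small (VisitorID);

    Lead-management queries then read from the small materialized view instead of scanning the full table.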


  • SQL Command Not Properly Ended (Nested Aggregation with Group-by)

    - by snowind
    I keep getting this error when I try to execute this query, and I can't figure out what is wrong. I'm using Oracle and JDBC. Here's the query: SELECT Temp.flight_number, Temp.avgprice FROM (SELECT P.flight_number, AVG (P.amount) AS avgprice FROM purchase P GROUP BY P.flight_number) AS Temp WHERE Temp.avgprice = (SELECT MAX (Temp.avgprice) FROM Temp) I'm trying to get the maximum of the average ticket prices that customers have booked, grouped by flight_number.
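
    Two things likely trip Oracle up here (an editor's note, not from the original thread): Oracle does not accept AS before a table alias (AS Temp), and an inline view named Temp is not visible to other subqueries. A WITH clause solves both; a sketch:

        WITH Temp AS (
          SELECT P.flight_number, AVG(P.amount) AS avgprice
          FROM purchase P
          GROUP BY P.flight_number
        )
        SELECT Temp.flight_number, Temp.avgprice
        FROM Temp
        WHERE Temp.avgprice = (SELECT MAX(avgprice) FROM Temp);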


  • Multiple references in SQL

    - by AGarofoli
    Hi! I'm working on a db, but I'm kinda new to this, so I've bumped into a problem today. I've got some tables: OFFICE, ROOM, EMPLOYEE and DOCUMENT. DOCUMENT must specify the sender, which can be a single employee, an entire room or an entire office, so it must be able to reference the primary key of any of those tables. Should I make a "parallel" table to handle it (for example, I've made one to handle documents with multiple recipients), or is there another way? Thank you
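
    One common alternative (an editor's sketch; all column names here are assumptions): give DOCUMENT three nullable foreign keys, one per sender type, and a CHECK constraint that exactly one of them is set:

        CREATE TABLE DOCUMENT (
            id                 INT PRIMARY KEY,
            title              VARCHAR(200),
            sender_employee_id INT REFERENCES EMPLOYEE(id),
            sender_room_id     INT REFERENCES ROOM(id),
            sender_office_id   INT REFERENCES OFFICE(id),
            -- exactly one kind of sender must be present
            CONSTRAINT one_sender CHECK (
                (CASE WHEN sender_employee_id IS NOT NULL THEN 1 ELSE 0 END
               + CASE WHEN sender_room_id     IS NOT NULL THEN 1 ELSE 0 END
               + CASE WHEN sender_office_id   IS NOT NULL THEN 1 ELSE 0 END) = 1
            )
        );

    The "parallel" table idea (a supertype SENDER table that EMPLOYEE, ROOM and OFFICE all reference) also works, and scales better if more sender types are expected later.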


  • [SQL][MS Access] Query to find duplicate items in 2 columns

    - by Rico
    I have this table:

        Antecedent | Consequent
        I1         | I2
        I1         | I1,I2,I3
        I1         | I4,I1,I3,I4
        I1,I2      | I1
        I1,I2      | I1,I4
        I1,I2      | I1,I3
        I1,I4      | I3,I2
        I1,I2,I3   | I1,I4
        I1,I3,I4   | I4

    As you can see, it's pretty messed up. Is there any way I can remove a row if an item in its Consequent also exists in its Antecedent? For example:

    INPUT:

        Antecedent | Consequent
        I1         | I2
        I1         | I1,I2,I3    <---- DELETE, since I1 exists in antecedent
        I1         | I4,I1,I3,I4 <---- DELETE, since I1 exists in antecedent
        I1,I2      | I1          <---- DELETE, since I1 exists in antecedent
        I1,I2      | I1,I4       <---- DELETE, since I1 exists in antecedent
        I1,I2      | I1,I3       <---- DELETE, since I1 exists in antecedent
        I1,I4      | I3,I2       <---- DELETE, since I2 exists in antecedent
        I1,I2,I3   | I1,I4
        I1,I3,I4   | I4          <---- DELETE, since I4 exists in antecedent

    OUTPUT:

        Antecedent | Consequent
        I1         | I2
        I1,I2,I3   | I1,I4

    Is there any way I can do that with a query?
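
    Comma-separated lists can't be split reliably in Access SQL alone; the usual route (an editor's sketch; every table and column name here is hypothetical) is to first normalize each list into one-item-per-row tables, then delete the rows whose item sets intersect:

        DELETE FROM Rules
        WHERE Rules.RowId IN (
            SELECT c.RowId
            FROM ConsequentItems AS c
            INNER JOIN AntecedentItems AS a
              ON a.RowId = c.RowId AND a.Item = c.Item
        );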


  • How can I add "-" in a column

    - by jasmeet
    My query is showing, in row 2000, the data for 2000-2001, and in row 2001 the data for 2001-2002. How can I change the column so that it displays:

        column 1    column 2
        2000-2001   5
        2001-2002   3
        2002-2003   9
        2003-2004   12

    and so on...
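
    One way (an editor's sketch; assumes MySQL and a numeric year column, with hypothetical names) is to build the label from the year and the year plus one:

        SELECT CONCAT(yr, '-', yr + 1) AS period,  -- e.g. 2000 -> '2000-2001'
               total
        FROM mytable
        ORDER BY yr;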


  • How to filter in SQL

    - by user3634746
    Good day. I have a database containing time-in and time-out records, and I want to filter all of the employees' time-in and time-out entries. Here is my sample db, using PHP and MySQL:

        PersonalId | LogCount | LogDate             | LogType | LogKind
        2          | 1        | 2014-04-09 12:42:24 | 0       | 0
        2          | 1        | 2014-04-10 12:43:53 | 1       | 0
        2          | 1        | 2014-04-11 02:17:39 | 0       | 0
        2          | 2        | 2014-04-09 12:42:48 | 1       | 0
        3          | 2        | 2014-04-10 12:44:14 | 0       | 0
        3          | 2        | 2014-04-11 02:48:54 | 1       | 0
        3          | 3        | 2014-04-09 12:43:23 | 0       | 0
        3          | 3        | 2014-04-09 12:43:23 | 1       | 0

    A LogType of 0 is a login and a LogType of 1 is a logout. This will be the format:

        emp id | IN             | OUT             | HOURS
        2      | 6/2/2014 8:15  | 6/2/2014 17:00  | 7.25
        2      | 6/2/2014 8:15  | 6/2/2014 17:00  | 7.25

    Thanks for your help.
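
    A starting point (an editor's sketch; assumes the table is named logs and that each employee has at most one login and one logout per day):

        SELECT PersonalId,
               MIN(CASE WHEN LogType = 0 THEN LogDate END) AS time_in,
               MAX(CASE WHEN LogType = 1 THEN LogDate END) AS time_out,
               TIMESTAMPDIFF(MINUTE,
                             MIN(CASE WHEN LogType = 0 THEN LogDate END),
                             MAX(CASE WHEN LogType = 1 THEN LogDate END)) / 60.0 AS hours
        FROM logs
        GROUP BY PersonalId, DATE(LogDate);

    Multiple shifts per day, or sessions spanning midnight as in the sample data, would need a more careful pairing of each login with the next logout.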


  • SQL Server : copy data from one table to another

    - by Gladdy
    I want to update Table2's names with the names from Table1, matching on ID. I have around 100 rows in each table. Here are my sample tables and data:

        Table1            Table2
        ID | Name         ID | Name
        ---------         ---------
        1  | abc          1  | xyz
        2  | bcd          2  | OOS

    Expected result:

        Table2
        ID | Name
        ---------
        1  | abc
        2  | bcd

    How can I do this?
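
    The usual T-SQL form (an editor's sketch) is UPDATE ... FROM with a join:

        UPDATE t2
        SET t2.Name = t1.Name
        FROM Table2 AS t2
        INNER JOIN Table1 AS t1
            ON t1.ID = t2.ID;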


  • Optimising a query for Top 5% of users

    - by Nai
    On my website there exists a group of 'power users' who are fantastic and add lots of content to my site. However, their prolific activity has led to their profile pages slowing down a lot. For 95% of the other users, the SPROC that returns the data is very quick; it's only for this group of power users that the very same SPROC is slow. How does one go about optimising the query for this group of users? You can assume that the right indexes have already been constructed. EDIT: OK, I think I have been a bit too vague. To rephrase the question: how can I optimise my site to enhance the performance for these 5% of users? Given that this SPROC is the same one in use for every user and is already well optimised, I am guessing the next steps are to explore caching possibilities at the data and application layers?


  • What are JOINs in SQL (for)?

    - by sabwufer
    I have been using MySQL for 2 years now, yet I still don't know what you actually do with the JOIN statement. I really haven't come across any situation where I was unable to solve a problem with the statements and syntax I already know (SELECT, INSERT, UPDATE, ordering, ...). What does JOIN do in MySQL? (Where) do I need it? Should I generally avoid it?
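
    A minimal illustration (an editor's sketch; the customers and orders tables are hypothetical): JOIN combines rows from related tables in a single query, where otherwise you would loop in application code issuing one query per row:

        -- each order together with its customer's name, in one round trip
        SELECT o.order_id, o.amount, c.name
        FROM orders AS o
        INNER JOIN customers AS c
            ON c.customer_id = o.customer_id;

    Far from being something to avoid, this is the core operation of a relational database.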


  • SQL University: Database testing and refactoring tools and examples

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover:

        SQLU part 1 - What and why of database testing
        SQLU part 2 - What and why of database refactoring
        SQLU part 3 - Database testing and refactoring tools and examples

    This is the third and last part of the series, and in it we'll take a look at tools we can test and refactor with, plus an example of both.

    Tools of the trade

    First, a few thoughts about how to go about testing a database. I'm firmly against any testing tools that go into the database itself or need an extra database. Unit tests for the database and for the applications using the database should all be in one place, using the same technology. By using database-specific frameworks we fragment our tests into many places and increase test-system complexity. Let's take a look at some testing tools.

    1. NUnit, xUnit, MbUnit
    All three are .NET testing frameworks meant to unit test .NET applications, but we can test databases with them just fine. I use NUnit because I've always used it for work and personal projects. One day this might change, so the thing to remember is to be flexible if something better comes along. All three are quite similar and you should be able to switch between them without much of a problem.

    2. TSQLUnit
    As much as this framework is helpful for the non-C#-savvy folks, I don't like it, for the reason I stated above: it lives in the database and thus fragments the testing infrastructure. Also, it appears that it's no longer being actively developed.

    3. DbFit
    I haven't had the pleasure of trying this tool just yet, but it's on my to-do list. From what I've read and heard, Gojko Adzic (@gojkoadzic on Twitter) has done a remarkable job with it.

    4. Red Gate SQL Refactor and Apex SQL Refactor
    Neither of these refactoring tools is free; however, if you have hardcore refactoring planned they are worthwhile looking into. I've only used Red Gate's Refactor and was quite impressed with it.

    5. Reverting the database state
    I've talked before about ways to revert a database to its pre-test state after unit testing. This still holds and I haven't changed my mind. Also make sure to read the comments, as they are quite informative. I especially like the idea of setting up and tearing down the schema for each test group with NHibernate.

    Testing and refactoring example

    We'll take a look at simple schema and data tests for a view, and at refactoring the SELECT * in that view. We'll use a single table, PhoneNumbers, with ID and Phone columns. Then we'll refactor the Phone column into 3 columns: Prefix, Number and Suffix. Lastly we'll remove the original Phone column. Then we'll check how the view behaves with tests in NUnit. The comments in the code explain the problem, so be sure to read them. I'm assuming you know NUnit and C#.
    T-SQL code:

        USE tempdb
        GO

        CREATE TABLE PhoneNumbers
        (
            ID INT IDENTITY(1,1),
            Phone VARCHAR(20)
        )
        GO

        INSERT INTO PhoneNumbers(Phone)
        SELECT '111 222333 444' UNION ALL
        SELECT '555 666777 888'
        GO

        -- notice we don't have WITH SCHEMABINDING
        CREATE VIEW vPhoneNumbers
        AS
            SELECT * FROM PhoneNumbers
        GO

        -- Let's take a look at what the view returns.
        -- If we add new columns and rows, both tests will fail.
        SELECT *
        FROM vPhoneNumbers
        GO
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will SUCCEED

        -- refactor to split the Phone column into 3 parts
        ALTER TABLE PhoneNumbers ADD Prefix VARCHAR(3)
        ALTER TABLE PhoneNumbers ADD Number VARCHAR(6)
        ALTER TABLE PhoneNumbers ADD Suffix VARCHAR(3)
        GO

        -- update the new columns
        UPDATE PhoneNumbers
        SET Prefix = LEFT(Phone, 3),
            Number = SUBSTRING(Phone, 5, 6),
            Suffix = RIGHT(Phone, 3)
        GO

        -- remove the old column
        ALTER TABLE PhoneNumbers DROP COLUMN Phone
        GO

        -- This returns unexpected results!
        -- It returns 2 columns, ID and Phone, even though
        -- we don't have a Phone column anymore.
        -- Notice that the data is from the Prefix column.
        -- This is a danger of SELECT *.
        SELECT *
        FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will FAIL

        -- for a fix we have to call sp_refreshview
        -- to refresh the view definition
        EXEC sp_refreshview 'vPhoneNumbers'

        -- after the refresh the view returns 4 columns;
        -- this breaks the input/output behavior of the database,
        -- which refactoring MUST NOT do
        SELECT *
        FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will FAIL
        -- DoesViewReturnCorrectData test will FAIL

        -- to fix the input/output behavior change problem
        -- we have to concat the 3 columns into one named Phone
        ALTER VIEW vPhoneNumbers
        AS
        SELECT ID, Prefix + ' ' + Number + ' ' + Suffix AS Phone
        FROM PhoneNumbers
        GO

        -- now it works as expected
        SELECT *
        FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will SUCCEED

        -- clean up
        DROP VIEW vPhoneNumbers
        DROP TABLE PhoneNumbers

    C# test code:

        [Test]
        public void DoesViewReturnCorrectColumns()
        {
            // conn is a valid SqlConnection to the server's tempdb.
            // Note the SET FMTONLY ON, with which we return only schema and no data.
            using (SqlCommand cmd = new SqlCommand("SET FMTONLY ON; SELECT * FROM vPhoneNumbers", conn))
            {
                DataTable dt = new DataTable();
                dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));

                // test the returned schema: number of columns, column names and data types
                Assert.AreEqual(dt.Columns.Count, 2);
                Assert.AreEqual(dt.Columns[0].Caption, "ID");
                Assert.AreEqual(dt.Columns[0].DataType, typeof(int));
                Assert.AreEqual(dt.Columns[1].Caption, "Phone");
                Assert.AreEqual(dt.Columns[1].DataType, typeof(string));
            }
        }

        [Test]
        public void DoesViewReturnCorrectData()
        {
            // conn is a valid SqlConnection to the server's tempdb
            using (SqlCommand cmd = new SqlCommand("SELECT * FROM vPhoneNumbers", conn))
            {
                DataTable dt = new DataTable();
                dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));

                // test the returned data: number of rows and their values
                Assert.AreEqual(dt.Rows.Count, 2);
                Assert.AreEqual(dt.Rows[0]["ID"], 1);
                Assert.AreEqual(dt.Rows[0]["Phone"], "111 222333 444");
                Assert.AreEqual(dt.Rows[1]["ID"], 2);
                Assert.AreEqual(dt.Rows[1]["Phone"], "555 666777 888");
            }
        }

    With this simple example we've seen how even a very simple schema can cause a lot of problems in the whole application/database system if it doesn't have tests. Imagine what would happen if some outside process depended on that view.
It would get wrong data and propagate it silently throughout the system. And that is not good. So have tests at least for the crucial parts of your systems. And with that we conclude the Database Testing and Refactoring week at SQL University. Hope you learned something new and enjoy the learning weeks to come. Have fun!


  • How to archive data from a table to a local or remote database in SQL 2005 and SQL 2008

    - by simonsabin
    Often you have the need to archive data from a table. This leads to a number of challenges:

    1. How can you do it without impacting users?
    2. How can I make it transactionally consistent, i.e. the data I put in the archive is the data I remove from the main table?
    3. How can I get it to perform well?

    Point 1 is very much tied to point 3. If it doesn't perform well, then the delete of data is going to cause lots of locks and thus potentially blocking. For points 1 and 3, refer to my previous posts DELETE-TOP-x-rows-avoiding-a-table-scan and UPDATE-and-DELETE-TOP-and-ORDER-BY---Part2. In essence you need to be removing small chunks of data from your table, and you want to do that while avoiding a table scan.

    So that deals with the delete approach, but archiving is about inserting that data somewhere else. Well, in SQL 2008 they introduced a new feature, INSERT over DML (Data Manipulation Language, i.e. SQL statements that change data), or composable DML: the ability to nest DML statements within themselves, so you can pass the results of an INSERT to an UPDATE to a MERGE. I've mentioned this before here: SQL-Server-2008---MERGE-and-optimistic-concurrency. This feature is currently limited to consuming the results of a DML statement in an INSERT statement. There are many restrictions, which you can find here: http://msdn.microsoft.com/en-us/library/ms177564.aspx (look for the section "Inserting Data Returned From an OUTPUT Clause Into a Table").

    Even with the restrictions, what we can do is consume the OUTPUT from a DELETE and INSERT the results into a table in another database. Note that where BOL refers to not being able to use a remote table, remote means a table on another SQL instance. To show this working, use this SQL to set up two databases, foo and fooArchive:

        create database foo
        go
        -- create the source table fred in database foo
        select * into foo..fred from sys.objects
        go
        create database fooArchive
        go
        if object_id('fredarchive', DB_ID('fooArchive')) is null
        begin
            select getdate() ArchiveDate, * into fooArchive..FredArchive from sys.objects where 1=2
        end
        go

    And then we can use this simple statement to archive the data:

        insert into fooArchive..FredArchive
        select getdate(), d.*
        from (delete top (1)
              from foo..Fred
              output deleted.*) d
        go

    In this statement the delete can be any delete statement you wish, so if you are deleting by ids or a range of values then you can do that. Refer to the DELETE-TOP-x-rows-avoiding-a-table-scan post to ensure that your delete is going to perform. The last thing you want is to perform 100 deletes, each with 5000 records, and for each of those deletes to do a table scan.

    For a solution that works for SQL 2005, or if you want to archive to a different server, you can use linked servers or SSIS. This example shows how to do it with linked servers ([ONARC-LAP03] is the source server):

        begin transaction

        insert into fooArchive..FredArchive
        select getdate(), d.*
        from openquery([ONARC-LAP03], 'delete top (1)
                                       from foo..Fred
                                       output deleted.*') d

        commit transaction

    And to prove the transactions work, try the following; you should get the same number of records before and after.
        select (select count(1) from foo..Fred) fred,
               (select count(1) from fooArchive..FredArchive) fredarchive

        begin transaction

        insert into fooArchive..FredArchive
        select getdate(), d.*
        from openquery([ONARC-LAP03], 'delete top (1)
                                       from foo..Fred
                                       output deleted.*') d

        rollback transaction

        select (select count(1) from foo..Fred) fred,
               (select count(1) from fooArchive..FredArchive) fredarchive

    The transactions are very important with this solution. Look what happens when you don't have transactions and an error occurs:

        select (select count(1) from foo..Fred) fred,
               (select count(1) from fooArchive..FredArchive) fredarchive

        insert into fooArchive..FredArchive
        select getdate(), d.*
        from openquery([ONARC-LAP03], 'delete top (1)
                                       from foo..Fred
                                       output deleted.*
                                       raiserror (''Oh doo doo'',15,15)') d

        select (select count(1) from foo..Fred) fred,
               (select count(1) from fooArchive..FredArchive) fredarchive

    Before running this, think what the result would be. I got it wrong. What seems to happen is that the remote query is executed as a transaction, and the error causes that to roll back. However, the results have already been sent to the client, and so get inserted into the archive table.

