Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.


  • Insert into select and update in single query

    - by Ossi
    I have 4 tables: tempTBL, linksTBL, categoryTBL and extraTBL. On tempTBL I have the columns: ID, name, url, cat, isinserted. On linksTBL I have: ID, name, alias. On categoryTBL I have: cl_id, link_id, cat_id. On extraTBL I have: id, link_id, value. How do I write a single query that selects from tempTBL all items where isinserted = 0, inserts them into linksTBL, and for each record inserted picks up the ID (which is the primary key) and inserts that ID into categoryTBL with cat_id = 88? After that, insert into extraTBL the ID for link_id and the url for value. I know this is confusing, but I'll post it anyhow... This is what I have so far:

    INSERT IGNORE INTO linksTBL (link_id,link_name,alias) VALUES(NULL,'tex2','hello'); # generate ID by inserting NULL
    INSERT INTO categoryTBL (link_id,cat_id) VALUES(LAST_INSERT_ID(),'88'); # use ID in second table

    I would like to add somewhere that it only selects items where isinserted = 0 and inserts those records, and once they are inserted, sets isinserted to 1 so that the next time it runs it will not add them again.
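
    One way to approach this, as a sketch rather than a single statement (MySQL cannot insert into several tables in one query); mapping tempTBL.name onto both name and alias, and matching the new rows back by name, are assumptions:

    -- 1. Copy the pending rows into linksTBL
    INSERT INTO linksTBL (name, alias)
    SELECT name, name FROM tempTBL WHERE isinserted = 0;

    -- 2. Link the new rows to category 88, matching back by name (assumes names are unique)
    INSERT INTO categoryTBL (link_id, cat_id)
    SELECT l.ID, 88
    FROM linksTBL l
    JOIN tempTBL t ON t.name = l.name
    WHERE t.isinserted = 0;

    -- 3. Store the url as the extra value
    INSERT INTO extraTBL (link_id, value)
    SELECT l.ID, t.url
    FROM linksTBL l
    JOIN tempTBL t ON t.name = l.name
    WHERE t.isinserted = 0;

    -- 4. Mark the rows as processed so the next run skips them
    UPDATE tempTBL SET isinserted = 1 WHERE isinserted = 0;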

    Read the article

  • Improve SQL query performance

    - by Anax
    I have three tables where I store actual person data (person), teams (team) and entries (athlete); in each team there might be two or more athletes. I'm trying to create a query to produce the most frequent pairs, meaning people who play in teams of two. I came up with the following query:

    SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
    FROM person p1, athlete a1, person p2, athlete a2
    WHERE p1.id = a1.person_id AND p2.id = a2.person_id
      AND a1.team_id = a2.team_id
      AND a1.team_id IN (
        SELECT id FROM team, athlete
        WHERE team.id = athlete.team_id
        GROUP BY team.id
        HAVING COUNT(*) = 2 )
    GROUP BY p1.id
    ORDER BY freq DESC

    Obviously this is a resource-consuming query. Is there a way to improve it?
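
    One possible rewrite (a sketch, assuming MySQL and the columns used above): materialise the two-athlete teams once, use explicit joins, and make sure athlete(team_id) and athlete(person_id) are indexed.

    SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
    FROM (SELECT team_id
          FROM athlete
          GROUP BY team_id
          HAVING COUNT(*) = 2) pairs
    JOIN athlete a1 ON a1.team_id = pairs.team_id
    JOIN athlete a2 ON a2.team_id = pairs.team_id
                   AND a1.person_id < a2.person_id   -- count each pair once
    JOIN person p1 ON p1.id = a1.person_id
    JOIN person p2 ON p2.id = a2.person_id
    GROUP BY p1.id, p2.id
    ORDER BY freq DESC;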

    Read the article

  • sqlobject: No connection has been defined for this thread or process

    - by Claudiu
    I'm using sqlobject in Python. I connect to the database with:

    conn = connectionForURI(connStr)
    conn.makeConnection()

    This succeeds, and I can do queries on the connection:

    g_conn = conn.getConnection()
    cur = g_conn.cursor()
    cur.execute(query)
    res = cur.fetchall()

    This works as intended. However, I also defined some classes, e.g.:

    class User(SQLObject):
        class sqlmeta:
            table = "gui_user"
        username = StringCol(length=16, alternateID=True)
        password = StringCol(length=16)
        balance = FloatCol(default=0)

    When I try to do a query using the class:

    User.selectBy(username="foo")

    I get an exception:

    ...
    File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\main.py", line 1371, in selectBy
        conn = connection or cls._connection
    File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\dbconnection.py", line 837, in __get__
        return self.getConnection()
    File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\dbconnection.py", line 850, in getConnection
        "No connection has been defined for this thread "
    AttributeError: No connection has been defined for this thread or process

    How do I define a connection for a thread? I just realized I can pass a connection keyword argument and give it conn to make it work, but how do I get it to work without doing that?

    Read the article

  • Performance optimization for mssql: decrease stored procedures execution time or unload the server?

    - by tim
    Hello everybody! We have a web service which provides search over hotels. There is a problem with performance: a single request to the service takes around 5000 ms. Almost all of the time is spent in the database executing stored procedures. During the request our server (mssql2008) consumes ~90% of the processor time. When 2 requests are made in parallel the average time grows to around 7000 ms, and as the number of requests increases, the average response time increases as well. We have 20-30 requests per minute. Which kind of optimization is best in this case, keeping in mind that the goal is to provide a stable response time for the service: 1) try to decrease the stored procedures' execution time, or 2) try to find a way to unload the server? It would be interesting to hear from people who deal with booking sites. Thanks!

    Read the article

  • what does select @@identity do?

    - by every_answer_gets_a_point
    I am connecting to a MySQL database through Excel using ODBC. What does this line do?

    Set rs = oConn.Execute("SELECT @@identity", , adCmdText)

    I am having trouble updating the database:

    With rs
        .AddNew ' create a new record
        ' add values to each field in the record
        .Fields("datapath") = dpath
        .Fields("analysistime") = atime
        .Fields("reporttime") = rtime
        .Fields("lastcalib") = lcalib
        .Fields("analystname") = aname
        .Fields("reportname") = rname
        .Fields("batchstate") = "bstate"
        .Fields("instrument") = "NA"
        .Update ' stores the new record
    End With

    It is ONLY updating .Fields("instrument") = "NA"; for all other fields it is putting NULL values.
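
    For reference, @@identity in MySQL is a synonym for LAST_INSERT_ID(): it returns the auto-generated id of the last row inserted on the current connection. A minimal sketch (the table and column names here are hypothetical):

    INSERT INTO batch_results (datapath, analystname) VALUES ('c:\\data\\run1', 'NA');
    SELECT @@identity;   -- id of the row just inserted on this connection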

    Read the article

  • How to limit results by SUM

    - by superspace
    I have a table of events called event. For the purpose of this question it only has one field called date. The following query returns the number of events happening on each date for the next 14 days:

    SELECT DATE_FORMAT( ev.date, '%Y-%m-%d' ) as short_date, count(*) as date_count
    FROM event ev
    WHERE ev.date >= NOW()
    GROUP BY short_date
    ORDER BY ev.start_date ASC
    LIMIT 14

    The result could be as follows:

    +------------+------------+
    | short_date | date_count |
    +------------+------------+
    | 2010-03-14 |          1 |
    | 2010-03-15 |          2 |
    | 2010-03-16 |          9 |
    | 2010-03-17 |          8 |
    | 2010-03-18 |         11 |
    | 2010-03-19 |         14 |
    | 2010-03-20 |         13 |
    | 2010-03-21 |          7 |
    | 2010-03-22 |          2 |
    | 2010-03-23 |          3 |
    | 2010-03-24 |          3 |
    | 2010-03-25 |          6 |
    | 2010-03-26 |         23 |
    | 2010-03-27 |         14 |
    +------------+------------+
    14 rows in set (0.06 sec)

    Let's say I want to display these events by date. At the same time I only want to display a maximum of 10 at a time. How would I do this? Somehow I need to limit this result by the SUM of the date_count field, but I do not know how. Has anybody run into this problem before? Any help would be appreciated. Thanks
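
    One way to cap the running total (a sketch, assuming MySQL; the cumulative count is kept in a user variable, and relying on its evaluation order inside WHERE is an implementation detail rather than guaranteed behaviour):

    SELECT d.short_date, d.date_count
    FROM (
        SELECT DATE_FORMAT(ev.date, '%Y-%m-%d') AS short_date,
               COUNT(*) AS date_count
        FROM event ev
        WHERE ev.date >= NOW()
        GROUP BY short_date
        ORDER BY short_date
        LIMIT 14
    ) d
    JOIN (SELECT @running := 0) init
    WHERE (@running := @running + d.date_count) <= 10;   -- keep dates only while the cumulative count stays within 10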

    Read the article

  • comparing rows on a mysql table

    - by user311324
    OK, here's the deal: I have one table with a bunch of client information. Each client makes up to one purchase a year, which is represented by an individual row. There's a column for the year and there's a column that contains a unique identifier for each client. What I need to do is construct a query that takes last year and this year and shows me which clients made a purchase last year but did not make a purchase this year. I also need to build a query that shows me which clients made no purchase last year or the year before last, but did make a purchase this year.
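
    A sketch of both queries (the table name purchases and its columns client_id and year stand in for the real names, and the literal years are only examples):

    -- Clients who bought last year but not this year
    SELECT DISTINCT last.client_id
    FROM purchases last
    LEFT JOIN purchases cur
           ON cur.client_id = last.client_id AND cur.year = 2010
    WHERE last.year = 2009
      AND cur.client_id IS NULL;    -- no matching purchase this year

    -- Clients who bought this year but not in the previous two years
    SELECT DISTINCT cur.client_id
    FROM purchases cur
    LEFT JOIN purchases prev
           ON prev.client_id = cur.client_id AND prev.year IN (2008, 2009)
    WHERE cur.year = 2010
      AND prev.client_id IS NULL;   -- no purchase in either previous year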

    Read the article

  • Do partitions allow multiple bulk loads?

    - by ck
    I have a database that contains data for many "clients". Currently, we insert tens of thousands of rows into multiple tables every so often using .Net SqlBulkCopy which causes the entire tables to be locked and inaccessible for the duration of the transaction. As most of our business processes rely upon accessing data for only one client at a time, we would like to be able to load data for one client, while updating data for another client. To make things more fun, all PKs, FKs and clustered indexes are on GUID columns (I am looking at changing this). I'm looking at adding the ClientID into all tables, then partitioning on this. Would this give me the functionality I require?
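
    For what it's worth, the partitioned layout might look something like this (a sketch for SQL Server 2008; the table, column names and boundary values are made up, and the GUID-only key from the question is replaced by a ClientID-leading clustered key):

    CREATE PARTITION FUNCTION pfClient (int)
        AS RANGE LEFT FOR VALUES (100, 200, 300);   -- one boundary per group of clients

    CREATE PARTITION SCHEME psClient
        AS PARTITION pfClient ALL TO ([PRIMARY]);

    CREATE TABLE dbo.ClientData
    (
        ClientID int              NOT NULL,
        RowID    uniqueidentifier NOT NULL,
        Payload  nvarchar(200)    NULL,
        CONSTRAINT PK_ClientData PRIMARY KEY CLUSTERED (ClientID, RowID)
    ) ON psClient (ClientID);

    Bulk loads that touch only one partition then contend far less with readers of other partitions, particularly if each load goes into a staging table that is switched in afterwards with ALTER TABLE ... SWITCH.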

    Read the article

  • SSRS run SQL/DataSet conditionally

    - by MikeTWebb
    Hello... I have an SSRS report that contains several subreports. The user can select/deselect which subreports they want to produce using several Boolean parameters. If a subreport is deselected it is hidden via the Visibility property, so it is not rendered. However, the DataSet associated with the deselected subreport still executes, causing the execution time to be longer than expected. Is there any way to tell a dataset on a subreport or Tablix not to execute based on a parameter selection? Thanks
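
    One common workaround, sketched below (not an SSRS switch; I am not aware of a per-dataset "don't run" setting), is to pass the Boolean parameter into the dataset query so that it returns nothing, and does very little work, when the subreport is deselected. The table and parameter names here are hypothetical:

    SELECT d.OrderID, d.Amount
    FROM dbo.OrderDetail d
    WHERE @IncludeDetail = 1;    -- Boolean report parameter; when 0, no rows come back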

    Read the article

  • Join with table and sub query in oracle

    - by Amandeep
    I don't understand what is wrong with this query; it gives me a compile-time error: "command not ended properly". The inner query returns 4 records. Can anybody help me out?

    select WGN3EVENTPARTICIPANT.EVENTUID
    from (Select WGN_V_ADDRESS_1.ADDRESSUID1 as add1, WGN_V_ADDRESS_1.ADDRESSUID2 as add2
          from WGN3USER
          inner join WGN_V_ADDRESS_1 on WGN_V_ADDRESS_1.USERID=wgn3user.USERID
          where WGN3USER.USERNAME='FIRMWIDE\khuraj' ) as ta ,WGN3EVENTPARTICIPANT
    where (ta.ADDRESSUID1=WGN3EVENTPARTICIPANT.ADDRESSUID1)
      AND (ta.ADDRESSUID2=WGN3EVENTPARTICIPANT.ADDRESSUID2)

    I am running it in Oracle. Thanks, Amandeep
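
    A likely culprit, sketched below: Oracle does not accept AS before a table or inline-view alias (only before column aliases), so "as ta" triggers the ORA-00933 "SQL command not properly ended" error. Dropping it, and optionally switching to an ANSI join, should compile:

    SELECT ep.EVENTUID
    FROM WGN3EVENTPARTICIPANT ep
    JOIN (SELECT a.ADDRESSUID1 AS add1, a.ADDRESSUID2 AS add2
          FROM WGN3USER u
          INNER JOIN WGN_V_ADDRESS_1 a ON a.USERID = u.USERID
          WHERE u.USERNAME = 'FIRMWIDE\khuraj') ta     -- no AS before the alias
      ON ta.add1 = ep.ADDRESSUID1
     AND ta.add2 = ep.ADDRESSUID2;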

    Read the article

  • How to handle Foreign Keys with Entity Framework

    - by Jack Marchetti
    I have two entities: Groups and Pools. A Group can create many Pools, so I set up my Pool table to have a GroupID foreign key. My code:

    using (entity _db = new entity())
    {
        Pool p = new Pool();
        p.Name = "test";
        p.Group.ID = "5";
        _db.AddToPool(p);
    }

    This doesn't work; I get a null reference exception on p.Group. How do I go about creating a new "Pool" and associating a GroupID?

    Read the article

  • How to repair "order by" after union of 2 selects from 1 tables

    - by 4e4el
    I have a dropDownList on my form, where I need the union of the values from 2 columns of table [ost]. The type of these columns is Currency. I have the Russian version of Access, so the default currency is "rur", but I need "uah". I need to change the format and keep the "order by". I use this query:

    (SELECT distinct FORMAT([Sum1] ,'# ##0.00" uah.";-# ##0.00" uah."') FROM ost)
    Union
    (SELECT distinct FORMAT([Sum2],'# ##0.00" uah.";-# ##0.00" uah."') FROM ost)
    ORDER BY 1
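
    FORMAT() turns the values into text, so ORDER BY 1 sorts them as strings. A sketch of one way around that (Access SQL): carry the numeric value as an extra first column, sort on it, and format only the display column.

    SELECT [Sum1] AS SortVal,
           FORMAT([Sum1], '# ##0.00" uah.";-# ##0.00" uah."') AS Display
    FROM ost
    UNION
    SELECT [Sum2],
           FORMAT([Sum2], '# ##0.00" uah.";-# ##0.00" uah."') AS Display
    FROM ost
    ORDER BY 1;

    In the combo box the SortVal column can then be hidden by giving it a width of 0 in the ColumnWidths property.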

    Read the article

  • SQL query construction - separate data in a column into two columns

    - by Tommy
    I have a column that contains links. The problem is that the titles of the links are in the same column, so it looks like this: linktitle|-|linkurl I want link title and linkurl in separate columns. I've created a new column for the urls, so I'm looking for a way to extract them and update the linkurl column with them. Is there any clever way to construct a query that does this?
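
    A sketch of one way to do the split, assuming MySQL and a hypothetical table named links whose combined value lives in a column named link, with the new empty column named linkurl:

    UPDATE links
    SET linkurl = SUBSTRING_INDEX(link, '|-|', -1),   -- everything after the separator
        link    = SUBSTRING_INDEX(link, '|-|', 1);    -- keep only the title

    MySQL evaluates the SET assignments left to right, so the url is captured before the title column is overwritten; in other databases the equivalent string functions (e.g. CHARINDEX/SUBSTRING) would be needed instead.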

    Read the article

  • how to select distinct rows for a column

    - by Satoru.Logic
    Hi, all. I have a table x that's like the one below:

    id | name | observed_value
     1 | a    | 100
     2 | b    | 200
     3 | b    | 300
     4 | a    | 150
     5 | c    | 300

    I want to make a query so that in the result set I have exactly one record for each name:

    (1, a, 100)
    (2, b, 200)
    (5, c, 300)

    If there are multiple records corresponding to a name, say 'a' in the table above, I just pick one of them. In my current implementation, I make a query like this:

    select x.*
    from x, (select distinct name, min(observed_value) as minimum_val
             from x group by name) x1
    where x.name = x1.name
      and x.observed_value = x1.observed_value;

    But I think there may be a better way; please tell me if you know one. Thanks in advance.
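
    A slightly tighter version of the same group-wise minimum (a sketch, assuming MySQL; the DISTINCT is unnecessary because of the GROUP BY, and joining on the aggregate's alias avoids referring to a column the derived table doesn't expose):

    SELECT x.*
    FROM x
    JOIN (SELECT name, MIN(observed_value) AS min_val
          FROM x
          GROUP BY name) m
      ON m.name = x.name
     AND m.min_val = x.observed_value;

    If two rows for the same name share the minimum value, both still come back; a further GROUP BY x.name (or picking MIN(id)) would force exactly one row per name.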

    Read the article

  • Notification 7 Days Before Payment PHP No Error Message

    - by user1858672
    So I have a PHP script running on cron daily. It is supposed to check the date and send an email if it is 7 days before any payment, as a reminder. I haven't used PHP in a long time, so sorry if the way I did it is ridiculous or something. Thanks a lot. I would say what the problem is, but I don't get any error message or anything.

    <?php
    mysqli_connect("xxxxxx", "xxxxxxx", "xxxxxxxxx") or die(mysqli_error());
    mysqli_select_db("xxxxxxxxx") or die(mysqli_error());

    //Retrieve original payment dates and enter them into an array
    $result = mysqli_query("SELECT DateCreated FROM UserTable");
    $payment_dates = mysqli_fetch_array($result);

    //Puts original payment date into month-day format
    $md_payment_dates = substr($payment_dates, 5);

    //Gets todays date in m-d format and adds 7 days to it
    $due_date = mktime(0, 0, 0, date("m"), date("d")+7);

    //Checks if any payment days match today's date. If there are none the script will stop.
    if (count(preg_grep($due_date, $md_payment_dates)) > 0) {
        //Retrieves usernames of users that have an invoice due.
        $users_to_pay = mysqli_fetch_array("SELECT Username IN UserTable WHERE DateCreated = $date");

        //Notifies you via email
        $to = "[email protected]";
        $subject = "7 Day Payment Reminder";
        $message = "Hi, <br /> The following owe a payment in 7 days : " + $users_to_pay ".<br/> Their payments are due on " + $md_payment_dates " of this year.";
        mail($to, $subject, $message);
        exit();
    }else{
        exit();
    }
    ?>
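
    Whatever is going wrong on the PHP side, the date matching itself can be pushed into the query, which removes most of the string handling. A sketch, assuming MySQL and the UserTable/DateCreated names from the script (the February 29 anniversary case is ignored here):

    SELECT Username
    FROM UserTable
    WHERE DATE_FORMAT(DateCreated, '%m-%d') =
          DATE_FORMAT(DATE_ADD(CURDATE(), INTERVAL 7 DAY), '%m-%d');
    -- returns the users whose payment anniversary is exactly 7 days away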

    Read the article

  • PHP, MySQL - would results-array shuffle be quicker than "select... order by rand()"?

    - by sombe
    I've been reading a lot about the disadvantages of using "order by rand", so I don't need an update on that. I was thinking: since I only need a limited number of rows retrieved from the db to be randomized, maybe I should do:

    $r = $db->query("select * from table limit 500");
    for($i=0;$i<500;$i++) $arr[$i]=mysqli_fetch_assoc($r);
    shuffle($arr);

    (I know this only randomizes the first 500 rows; so be it.) Would that be faster than:

    $r = $db->query("select * from table order by rand() limit 500");

    Let me just mention that the db tables are packed with more than 10,000 rows. "Why don't you do it yourself?!" - well, I have, but I'm looking for your experienced opinion. Thanks!

    Read the article

  • is Payment table needed when you have an invoice table like this?

    - by EBAGHAKI
    This is my invoice table:

    Invoice table:
        invoice_id
        creation_date
        due_date
        payment_date
        status enum('not paid','paid','expired')
        user_id
        total_price

    I wonder if it is useful to have a payment table in order to record user payments for invoices. The payment table could be like this:

    Payment table:
        payment_id
        payment_date
        invoice_id
        price_paid
        status enum('successful', 'not successful')
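
    If the separate table is kept, its definition might look roughly like this (a sketch, assuming MySQL and an invoice table named invoice; one invoice can then hold several partial payments, which the single payment_date/status columns on the invoice cannot express):

    CREATE TABLE payment (
        payment_id   INT AUTO_INCREMENT PRIMARY KEY,
        invoice_id   INT NOT NULL,
        payment_date DATE,
        price_paid   DECIMAL(10,2),
        status       ENUM('successful', 'not successful'),
        FOREIGN KEY (invoice_id) REFERENCES invoice (invoice_id)
    );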

    Read the article

  • C#: ExecuteNonQuery() returns -1 when executing a stored procedure

    - by user1122359
    I'm trying to execute a stored procedure from Visual Studio. It is given below.

    CREATE PROCEDURE [dbo].[addStudent]
        @stuName varchar(50), @address varchar(100), @tel varchar(15),
        @etel varchar(15), @nic varchar (10), @dob date
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @currentID INT
        DECLARE @existPerson INT
        SET @existPerson = (SELECT p_ID FROM Student WHERE s_NIC = @nic);
        IF @existPerson = null
        BEGIN
            INSERT INTO Person (p_Name, p_RegDate, p_Address, p_Tel, p_EmergeNo, p_Valid, p_Userlevel)
            VALUES (@stuName, GETDATE(), @address, @tel, @etel, 0, 'Student' );
            SET @currentID = (SELECT MAX( p_ID) FROM Person);
            INSERT INTO Student (p_ID, s_Barcode, s_DOB, s_NIC)
            VALUES (@currentID , NULL, @dob, @nic);
            return 0;
        END
        ELSE
            return -1;
    END

    I'm doing so using the code below.

    SqlConnection con = new SqlConnection();
    Connect conn = new Connect();
    con = conn.getConnected();
    con.Open();
    cmd = new SqlCommand("addStudent", con);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@stuName", SqlDbType.VarChar).Value = nameTxt.Text.ToString();
    cmd.Parameters.Add("@address", SqlDbType.VarChar).Value = addressTxt.Text.ToString();
    cmd.Parameters.Add("@tel", SqlDbType.VarChar).Value = telTxt.Text.ToString();
    cmd.Parameters.Add("@etel", SqlDbType.VarChar).Value = emerTxt.Text.ToString();
    cmd.Parameters.Add("@nic", SqlDbType.VarChar).Value = nicTxt.Text.ToString();
    cmd.Parameters.Add("@dob", SqlDbType.DateTime).Value = dobTime.Value.ToString("MM-dd-yyyy");
    int n = cmd.ExecuteNonQuery();
    MessageBox.Show(n.ToString());

    But it returns -1. I tried this stored procedure by entering the same values I captured while debugging, and it was successful. What can the possible error be? Thanks a lot!
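
    Two things are worth separating here. First, ExecuteNonQuery reports the number of rows affected, and with SET NOCOUNT ON it returns -1 regardless of the procedure's RETURN value; to read that value you would add a parameter with ParameterDirection.ReturnValue. Second, the comparison with null probably keeps the insert branch from ever running: under the default ANSI_NULLS setting, "@existPerson = null" is never true. A sketch of the fix inside the procedure:

    SET @existPerson = (SELECT p_ID FROM Student WHERE s_NIC = @nic);
    IF @existPerson IS NULL       -- not "= null"
    BEGIN
        -- ... the two INSERTs from the procedure ...
        RETURN 0;
    END
    ELSE
        RETURN -1;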

    Read the article

  • Redundancy in doing sum()

    - by Abhi
    table1 - id, time_stamp, value. This table consists of 10 ids. Each id has a value for each hour in a day, so for 1 day there would be 240 records in this table.

    table2 - id. Table2 consists of a dynamically changing subset of the ids present in table1.

    At a particular instant, the intention is to get sum(value) from table1, considering only ids that are in table2, grouping by each hour in that day, giving the summarized values a rank, and repeating this each day. The query is at this stage:

    select time_stamp, sum(value),
           rank() over (partition by trunc(time_stamp) order by sum(value) desc) rn
    from table1
    where exists (select t2.id from table2 t2 where id=t2.id)
      and time_stamp >= to_date('05/04/2010 00','dd/mm/yyyy hh24')
      and time_stamp <= to_date('25/04/2010 23','dd/mm/yyyy hh24')
    group by time_stamp
    order by time_stamp asc

    If the query is correct, can this be made more efficient, considering that table1 will actually consist of thousands of ids instead of 10? EDIT: I am using sum(value) twice in the query and have not found a workaround such that the sum() is done only once. Please help with this too.
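
    A sketch of the same query with the SUM written only once (assuming Oracle, as the trunc/to_date/rank() syntax suggests): aggregate in an inline view, then rank the aggregated rows.

    SELECT time_stamp,
           total_value,
           RANK() OVER (PARTITION BY TRUNC(time_stamp)
                        ORDER BY total_value DESC) AS rn
    FROM (
        SELECT t1.time_stamp, SUM(t1.value) AS total_value
        FROM table1 t1
        WHERE EXISTS (SELECT 1 FROM table2 t2 WHERE t2.id = t1.id)
          AND t1.time_stamp >= TO_DATE('05/04/2010 00', 'dd/mm/yyyy hh24')
          AND t1.time_stamp <= TO_DATE('25/04/2010 23', 'dd/mm/yyyy hh24')
        GROUP BY t1.time_stamp
    )
    ORDER BY time_stamp;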

    Read the article
