Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.

  • How to store MySQL query results in another table?

    - by Taz
    How can I store the results of the following query in another table? Assume an appropriate table has already been created.

        SELECT labels.label, shortabstracts.ShortAbstract, images.LinkToImage, types.Type
        FROM ner.images, ner.labels, ner.shortabstracts, ner.types
        WHERE labels.Resource = images.Resource
          AND labels.Resource = shortabstracts.Resource
          AND labels.Resource = types.Resource;
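
    A minimal sketch of the usual approach, INSERT ... SELECT; the target table name and its column list are assumptions here, not part of the question:

        -- ner.query_results is a hypothetical target table with matching columns
        INSERT INTO ner.query_results (label, ShortAbstract, LinkToImage, Type)
        SELECT labels.label, shortabstracts.ShortAbstract, images.LinkToImage, types.Type
        FROM ner.images, ner.labels, ner.shortabstracts, ner.types
        WHERE labels.Resource = images.Resource
          AND labels.Resource = shortabstracts.Resource
          AND labels.Resource = types.Resource;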

  • How to save HTML to a database field

    - by ooo
    I have a web page with a tiny editor where my users can write content, and I save the resulting HTML into my database. I am having issues saving this HTML: for example, if a name contains an apostrophe ('), or if there are HTML characters such as < or ", my code seems to blow up on the INSERT. Are there any best practices for taking arbitrary HTML and persisting it fully to a DB field without worrying about specific characters?

  • Removal of table primary key in MySQL

    - by marionmaiden
    Hello, I've removed the primary key of one table in my MySQL database, but now, when I use MySQL Administrator and try to edit some data in this table, it doesn't allow me to. The Edit button that appears at the bottom of the table stays visible, but is disabled.
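
    Many GUI tools only allow in-grid edits on tables that have a primary key, so re-adding one is the usual fix. A hedged sketch, where the table and column names are placeholders:

        -- `mytable` and `id` are hypothetical; use a column that uniquely identifies rows
        ALTER TABLE mytable ADD PRIMARY KEY (id);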

  • How to write my own global lock / unlock functions for PostgreSQL

    - by rafalmag
    I have a PostgreSQL (plperlu) function, getTravelTime(integer, timestamp), which tries to select data for a specified ID and timestamp. If there is no data, or if the data is old, it downloads it from an external server (download time ~300ms). Multiple processes use this database and this function. An error occurs when two processes both fail to find the data, both download it, and both try to insert it into the travel_time table (the id and timestamp pair has to be unique). I thought about locks. Locking the whole table would block all processes and allow only one to proceed, but I need to lock only on id and timestamp. pg_advisory_lock seems to lock only in the "current session", and my processes use their own sessions. I tried to write my own lock/unlock functions. Am I doing this right? I use active waiting; how can I avoid it? Maybe there is a way to use pg_advisory_lock() as a global lock? My code:

        CREATE TABLE travel_time_locks (
            id_key integer NOT NULL,
            time_key timestamp without time zone NOT NULL,
            UNIQUE (id_key, time_key)
        );

        ------------
        -- Function: mylock(integer, timestamp)
        DROP FUNCTION IF EXISTS mylock(integer, timestamp) CASCADE;
        -- Usage: SELECT mylock(1, '2010-03-28T19:45');
        -- Tries to take a global lock, similar to pg_advisory_lock(key, key)
        CREATE OR REPLACE FUNCTION mylock(id_input integer, time_input timestamp)
          RETURNS void AS
        $BODY$
        DECLARE
            rows int;
        BEGIN
            LOOP
                BEGIN
                    -- active waiting here !!!! :(
                    INSERT INTO travel_time_locks (id_key, time_key)
                    VALUES (id_input, time_input);
                EXCEPTION WHEN unique_violation THEN
                    CONTINUE;
                END;
                EXIT;
            END LOOP;
        END;
        $BODY$
          LANGUAGE 'plpgsql' VOLATILE COST 1;

        ------------
        -- Function: myunlock(integer, timestamp)
        DROP FUNCTION IF EXISTS myunlock(integer, timestamp) CASCADE;
        -- Usage: SELECT myunlock(1, '2010-03-28T19:45');
        -- Tries to release the global lock, similar to pg_advisory_unlock(key, key)
        CREATE OR REPLACE FUNCTION myunlock(id_input integer, time_input timestamp)
          RETURNS integer AS
        $BODY$
        BEGIN
            DELETE FROM ONLY travel_time_locks
            WHERE id_key = id_input AND time_key = time_input;
            RETURN 1;
        END;
        $BODY$
          LANGUAGE 'plpgsql' VOLATILE COST 1;
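
    For what it's worth, pg_advisory_lock() locks are held by the calling session but are visible to, and block, every other session in the same database, so they can act as the global lock wanted here. A hedged sketch using the two-int form; the epoch-based key derivation is an assumption (and overflows a 32-bit int after 2038):

        -- take a cluster-wide advisory lock keyed on (id, timestamp)
        SELECT pg_advisory_lock(id_input, extract(epoch FROM time_input)::int);
        -- ... re-check for the row, download and INSERT only if still missing ...
        SELECT pg_advisory_unlock(id_input, extract(epoch FROM time_input)::int);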

  • Kohana quotes in query

    - by peter
    Hey, I want to format a date in MySQL using DATE_FORMAT(tblnews.datead, '%M %e, %Y, %l:%i%p'). I can't seem to get the quotes right, so I keep getting errors. How would you put this in a query?
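
    A minimal sketch of the expression in a plain query, using the table and column from the question; in a query builder the whole expression typically has to be passed through unescaped (in Kohana 3 that would be DB::expr(), though which Kohana version applies here is an assumption):

        SELECT DATE_FORMAT(tblnews.datead, '%M %e, %Y, %l:%i%p') AS formatted_date
        FROM tblnews;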

  • SQL boolean truth test: zero OR NULL

    - by AK
    Is there a way to test for both 0 and NULL with one equality operator? I realize I could do this:

        WHERE field = 0 OR field IS NULL

    But my life would be a hundred times easier if this worked:

        WHERE field IN (0, NULL)

    (By the way, why doesn't that work?) I've also read about converting NULL to 0 in the SELECT statement (with COALESCE), but the framework I'm using would also make this unpleasant. I realize this is oddly specific, but is there any way to test for 0 and NULL with one WHERE predicate?
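
    IN (0, NULL) expands to field = 0 OR field = NULL, and any comparison with NULL yields NULL rather than true, so rows where field is NULL never match. A hedged sketch of a single-predicate alternative (note it generally defeats an index on field):

        WHERE COALESCE(field, 0) = 0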

  • Strange use of the index in MySQL

    - by user309067
    EXPLAIN output for my query:

        explain SELECT feed_objects.* FROM feed_objects
        WHERE (feed_objects.feed_id IN (165,160,159,158,157,153,152,151,150,149,148,147,129,128,127,126,125,124,122,121,120,119,118,117,116,115,114,113,111,110));

        +----+-------------+--------------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table        | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+--------------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | feed_objects | ALL  | by_feed_id    | NULL | NULL    | NULL |  188 | Using where |
        +----+-------------+--------------+------+---------------+------+---------+------+------+-------------+

    The index 'by_feed_id' is not used. But when I list fewer values in the WHERE clause, everything works as expected:

        explain SELECT feed_objects.* FROM feed_objects
        WHERE (feed_objects.feed_id IN (165,160,159,158,157,153,152,151,150,149,148,147,129,128,127,125,124));

        +----+-------------+--------------+-------+---------------+------------+---------+------+------+-------------+
        | id | select_type | table        | type  | possible_keys | key        | key_len | ref  | rows | Extra       |
        +----+-------------+--------------+-------+---------------+------------+---------+------+------+-------------+
        |  1 | SIMPLE      | feed_objects | range | by_feed_id    | by_feed_id | 9       | NULL |   18 | Using where |
        +----+-------------+--------------+-------+---------------+------------+---------+------+------+-------------+

    Here the index 'by_feed_id' is used. What is the problem?
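
    This is usually the optimizer's cost model at work: when the IN list covers a large enough share of the table (only 188 rows in total here), MySQL judges one full scan cheaper than thirty separate range probes plus row lookups on by_feed_id. If the index really is faster for this query, a hedged sketch of overriding the choice:

        SELECT feed_objects.*
        FROM feed_objects FORCE INDEX (by_feed_id)
        WHERE feed_objects.feed_id IN (165, 160, 159 /* ... */, 110);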

  • Service Broker not working after database restore

    - by roryok
    I have a working Service Broker set up on a server. We're in the process of moving to a new server, but I can't seem to get Service Broker set up on the new box. I have done the obvious (to me) things, like enabling the broker on the DB and dropping and re-adding the routes, services, contracts, queues and even message types, and setting ALTER QUEUE with STATUS ON. SELECT * FROM sys.service_queues gives me a list of the queues, including my own two, which show as activation_enabled, receive_enabled, etc. Needless to say, the queues aren't working: when I drop messages into them, nothing goes in and nothing comes out. Any ideas? I'm sure there's something really obvious I've missed...
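
    A restored or attached database keeps its old Service Broker GUID and comes up with the broker disabled, which silently swallows sent messages. A hedged sketch of the usual checks and fix (the database name is a placeholder):

        -- check whether the broker is actually enabled
        SELECT name, is_broker_enabled, service_broker_guid FROM sys.databases;

        -- give the restored database a fresh broker identity
        ALTER DATABASE MyDatabase SET NEW_BROKER WITH ROLLBACK IMMEDIATE;

    SET NEW_BROKER ends any existing conversations, so ENABLE_BROKER may be preferable when conversations have to survive the move.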

  • Filling a LOV in Oracle Apex based on data in another text box

    - by Martin Pugh
    I am fairly new to Oracle Apex and have a problem. Our application currently has a method of entering data, with several text boxes and optional Lists of Values. I would like to have an LOV based on information in another text box, like so:

        select APPOINTMENT_ID PATIENT_ID
        from APPOINTMENT
        where PATIENT_ID = :P9_PAT_NUM

    where P9_PAT_NUM is a patient number in a text box. However, this apparently only works if the text box has already been submitted; otherwise the item is treated as null. Is there any way to get this working with an LOV, or perhaps some other method?
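
    In Apex 4.0 and later, the usual answer is a cascading LOV: set P9_PAT_NUM as the LOV's parent item so the list refreshes whenever that field changes, without a page submit. As a hedged fallback sketch, the LOV query itself can also tolerate the not-yet-submitted case:

        select APPOINTMENT_ID PATIENT_ID
        from APPOINTMENT
        where PATIENT_ID = :P9_PAT_NUM
           or :P9_PAT_NUM is null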

  • Insert not working

    - by user1642318
    I've searched everywhere and tried all suggestions, but still no luck when running the following code. Note that some code is commented out; that's just me trying different things.

        SqlConnection connection = new SqlConnection("Data Source=URB900-PC\SQLEXPRESS;Initial Catalog=usersSQL;Integrated Security=True");

        string password = PasswordTextBox.Text;
        string email = EmailTextBox.Text;
        string firstname = FirstNameTextBox.Text;
        string lastname = SurnameTextBox.Text;

        //command.Parameters.AddWithValue("@UserName", username);
        //command.Parameters.AddWithValue("@Password", password);
        //command.Parameters.AddWithValue("@Email", email);
        //command.Parameters.AddWithValue("@FirstName", firstname);
        //command.Parameters.AddWithValue("@LastName", lastname);

        // note: the parameters are added to `command`, but the INSERT below executes on `command2`
        command.Parameters.Add("@UserName", SqlDbType.VarChar);
        command.Parameters["@UserName"].Value = username;
        command.Parameters.Add("@Password", SqlDbType.VarChar);
        command.Parameters["@Password"].Value = password;
        command.Parameters.Add("@Email", SqlDbType.VarChar);
        command.Parameters["@Email"].Value = email;
        command.Parameters.Add("@FirstName", SqlDbType.VarChar);
        command.Parameters["@FirstName"].Value = firstname;
        command.Parameters.Add("@LasttName", SqlDbType.VarChar);   // note: "@LasttName" does not match @LastName in the SQL
        command.Parameters["@LasttName"].Value = lastname;

        SqlCommand command2 = new SqlCommand("INSERT INTO users (UserName, Password, UserEmail, FirstName, LastName)" +
            "values (@UserName, @Password, @Email, @FirstName, @LastName)", connection);

        connection.Open();
        command2.ExecuteNonQuery();
        //command2.ExecuteScalar();
        connection.Close();

    When I run this, fill in the textboxes and hit the button, I get: Must declare the scalar variable "@UserName". Any help would be greatly appreciated. Thanks.

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query, and it will be called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not, but if normalization becomes a problem in the future I may have to rewrite 40% of the software, and that could take me a long time. Does normalization really hurt in this case? Should I denormalize now, while I have the time?

  • No operations allowed after statement closed issue

    - by Washu
    I have the following methods in my singleton to execute the JDBC connection:

        public void openDB() throws ClassNotFoundException, IllegalAccessException, InstantiationException, SQLException {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            String url = "jdbc:mysql://localhost/mbpe_peru"; // mydb
            conn = DriverManager.getConnection(url, "root", "admin");
            st = conn.createStatement();
        }

        public void sendQuery(String query) throws SQLException {
            st.executeUpdate(query);
        }

        public void closeDB() throws SQLException {
            st.close();
            conn.close();
        }

    And I'm having a problem in a method where I have to call this twice:

        private void jButton1ActionPerformed(ActionEvent evt) {
            Main.getInstance().openDB();
            Main.getInstance().sendQuery("call insertEntry('" + EntryID() + "','" + SupplierID() + "');");
            Main.getInstance().closeDB();

            Main.getInstance().openDB();
            for (int i = 0; i < dataBox.length; i++) {
                Main.getInstance().sendQuery("call insertCount('" + EntryID() + "','" + SupplierID() + "','" + BoxID() + "');");
                Main.getInstance().closeDB(); // note: closing here leaves the statement closed for the next iteration
            }
        }

    I have already tried keeping the connection open, sending the two queries, and closing afterwards, and it didn't work. The only way it worked was to not use the methods and to declare the commands for the connection with different variables for the connection and the statement. I thought that if I closed the Connection and the Statement I could use the variables once again, since it is a method, but I'm not able to. Is there any way to solve this using my methods for the JDBC connection?

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

        UPDATE Indexer.Pages SET LastError=NULL where LastError is not null;

    Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it; however, the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work, since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0.

    UPDATE: Here's the schema:

        CREATE TABLE indexer.pages
        (
          id uuid NOT NULL,
          url character varying(1024) NOT NULL,
          firstcrawled timestamp with time zone NOT NULL,
          lastcrawled timestamp with time zone NOT NULL,
          recipeid uuid,
          html text NOT NULL,
          lasterror character varying(1024),
          missingings smallint,
          CONSTRAINT pages_pkey PRIMARY KEY (id),
          CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
        );

    I also have two indexes:

        CREATE INDEX idx_indexer_pages_missingings
          ON indexer.pages USING btree (missingings)
          WHERE missingings > 0;

    and

        CREATE INDEX idx_indexer_pages_null
          ON indexer.pages USING btree (recipeid)
          WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
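
    Since every update in Postgres writes a new row version (though unchanged TOASTed html is shared, not copied), the classic mitigations are to batch the update, vacuuming between batches so dead space gets reused, and to index only the rows still left to clear. A hedged sketch under those assumptions; the index name and batch size are arbitrary:

        -- hypothetical partial index covering only the rows that still need clearing
        CREATE INDEX idx_pages_lasterror ON indexer.pages (id) WHERE lasterror IS NOT NULL;

        -- repeat until it reports 0 rows updated, with VACUUM runs in between
        UPDATE indexer.pages
        SET lasterror = NULL
        WHERE id IN (
            SELECT id FROM indexer.pages
            WHERE lasterror IS NOT NULL
            LIMIT 10000
        );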

  • Join select from multiple row values?

    - by user1869132
    Two tables:

    1) product

        --------------------
        id | Name | price
        --------------------
        1  | p1   | 10
        2  | p2   | 20
        3  | p3   | 30

    2) product_attributes

        ---------------------------------------------------
        id | product_id | attribute_name | attribute_value
        ---------------------------------------------------
        1  | 1          | size           | 10
        2  | 1          | colour         | red
        3  | 2          | size           | 20

    I need to join these two tables, and in the WHERE clause I need to match attribute values from both rows. Is it possible to get the result based on two rows' values? Here, if size = 10 and colour = red, the output should be:

        1 | p1 | 10

    It would be greatly helpful to get a query for this.
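
    A hedged sketch of the usual pattern, joining product_attributes once per required attribute:

        SELECT p.id, p.Name, p.price
        FROM product p
        JOIN product_attributes size_attr
          ON size_attr.product_id = p.id
         AND size_attr.attribute_name = 'size'
         AND size_attr.attribute_value = '10'
        JOIN product_attributes colour_attr
          ON colour_attr.product_id = p.id
         AND colour_attr.attribute_name = 'colour'
         AND colour_attr.attribute_value = 'red';

    An equivalent GROUP BY ... HAVING COUNT(*) = 2 over the matching (name, value) pairs scales better when the number of required attributes varies.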

  • Is there a library / tool to query MySQL data files (MyISAM / InnoDB) without the server? (the SQLite way)

    - by MGW
    Oftentimes I want to query my MySQL data directly, without a server running or without having access to the server (but with read/write rights to the files). Is there a tool, or maybe even a library, to query MySQL data files the way SQLite does? I'm specifically looking for InnoDB and MyISAM support. Performance is not a factor. I don't have any knowledge of MySQL internals, but I presume it should be possible and not too hard to pull the specific code out. Thank you for any suggestions!

  • How to combine related versions in GROUP BY

    - by randeepsp
    My query:

        select count(a), b, c
        from APPLE
        join MANGO on (APPLE.link = MANGO.link)
        join ORANGE on (APPLE.link = ORANGE.link)
        where id = 'camel'
        group by b, c;

    The column b gives values like:

        1.0
        1.0,R
        1.0,B
        2.0
        2.0,B
        2.0,R
        3.0,C
        3.0,R

    Is there a way to modify the above query so that 1.0, 1.0,R and 1.0,B are all merged as 1.0, 2.0 and 2.0,B are merged as 2.0, and the same for 3.0 and 4.0?
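
    Assuming MySQL, a hedged sketch that groups on the version prefix before the comma:

        select count(a), substring_index(b, ',', 1) as version, c
        from APPLE
        join MANGO on (APPLE.link = MANGO.link)
        join ORANGE on (APPLE.link = ORANGE.link)
        where id = 'camel'
        group by substring_index(b, ',', 1), c;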

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round to the next-lower 10-minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of minutes.

        WITH test_data AS (
          SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
        )
        -- #end of test-data
        SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
        FROM test_data

    And here is the result:

        01.01.2010 10:00:00    01.01.2010 10:00:00
        01.01.2010 10:05:00    01.01.2010 10:00:00
        01.01.2010 10:09:59    01.01.2010 10:00:00
        01.01.2010 10:10:00    01.01.2010 10:10:00
        01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way?

    EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

        DECLARE
          t TIMESTAMP := SYSTIMESTAMP;
        BEGIN
          FOR i IN (
            WITH test_data AS (
              SELECT SYSDATE + ROWNUM / 5000 d
              FROM dual
              CONNECT BY ROWNUM <= 500000
            )
            SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
            FROM test_data
          ) LOOP
            NULL;
          END LOOP;
          dbms_output.put_line( SYSTIMESTAMP - t );
        END;

    This approach took 03.24 s.
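
    A hedged alternative sketch that stays in date arithmetic and avoids the date-to-string round trip, using the fact that 10 minutes is 1/144 of a day:

        SELECT d, TRUNC(d) + FLOOR((d - TRUNC(d)) * 144) / 144
        FROM test_data;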

  • How to do this query?

    - by Damiano
    Hello everybody! I have a MySQL table with these columns:

        ID (auto-increment)
        ID_BOOK (int)
        PRICE (double)
        DATA (date)

    I know two ID_BOOK values, for example 1 and 2. Query: I have to extract all the PRICE values (of ID_BOOK = 1 and ID_BOOK = 2) where DATA is the same. Table example:

        1  1  10.00  2010-05-16
        2  1  11.00  2010-05-15
        3  1  12.00  2010-05-14
        4  2  18.00  2010-05-16
        5  2  11.50  2010-05-15

    Result example:

        1  1  10.00  2010-05-16
        4  2  18.00  2010-05-16
        2  1  11.00  2010-05-15
        5  2  11.50  2010-05-15

    ID_BOOK = 2 doesn't have 2010-05-14, so I skip it. Thank you so much!
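
    A hedged sketch using a self-join so that only dates present for both books survive (`book_prices` is a hypothetical table name, as the question doesn't give one):

        SELECT t.ID, t.ID_BOOK, t.PRICE, t.DATA
        FROM book_prices t
        JOIN book_prices other
          ON other.DATA = t.DATA
         AND other.ID_BOOK <> t.ID_BOOK
        WHERE t.ID_BOOK IN (1, 2)
          AND other.ID_BOOK IN (1, 2)
        ORDER BY t.DATA DESC, t.ID_BOOK;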

  • SQL INSERT query not inserting values into my DB

    - by Aiden Ryan
    Hello, I'm trying to insert registration data into a database, but my PHP code isn't inserting the values into the DB, although I'm not getting any errors either. Can someone help me? This is the code I'm currently using:

        $connect = mysql_connect("localhost", "myusername", "mypassword");
        mysql_select_db("application");
        $queryreg = mysql_query('INSERT INTO users("username","password","email","date") VALUES("$username","$password","$email","$date")');
        die ("You Have Been Registered.");

    I just need to add the username, password, email and date into the fields I have specified, but it won't work. Please, someone help!
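
    Two things stand out: single-quoted PHP strings don't interpolate variables, so the literal text $username is sent to MySQL; and with default settings MySQL treats double-quoted "username" as a string literal, which is a syntax error inside a column list (and with no error checking, the failure is silent). A hedged sketch of the corrected statement, still assuming the legacy mysql_* API and values escaped beforehand:

        INSERT INTO users (username, password, email, `date`)
        VALUES ('$username', '$password', '$email', '$date');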
