Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.


  • MySQL: inserting data and auto-increment

    - by every_answer_gets_a_point
    I am converting from Access to MySQL. I have a table in Access where one of the columns is an AutoNumber. When I transfer the data into the MySQL database (where I also have a column that is AUTO_INCREMENT), should I be transferring the AutoNumber data into the AUTO_INCREMENT column, or will it auto-increment itself? How do I ensure that, if I do not transfer the AutoNumber data from Access, the column auto-increments properly?
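
    A minimal sketch of both options, assuming a hypothetical target table items(id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100)):

        -- Option 1: keep the original Access AutoNumber values. MySQL accepts
        -- explicit values for an AUTO_INCREMENT column and moves its internal
        -- counter past the largest value inserted.
        INSERT INTO items (id, name) VALUES (17, 'imported row');

        -- Option 2: omit the column and let MySQL assign the next value itself.
        INSERT INTO items (name) VALUES ('new row');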

    Read the article

  • Which MySQL query is faster?

    - by Camran
    I have a classified_id variable which matches one record in a MySQL table. I am currently fetching the information about that one record like this:

        SELECT * FROM table WHERE table.classified_id = $classified_id

    I wonder if there is a faster approach, for example like this:

        SELECT 1 FROM table WHERE table.classified_id = $classified_id

    Won't the last one only select one record, which is exactly what I need, so that it doesn't have to scan the entire table but instead stops searching after one record is found? Or am I dreaming this? Thanks
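
    SELECT 1 only changes what is returned for each matching row, not how many rows are examined; a sketch of the usual fixes, assuming a hypothetical table name classifieds:

        -- An index on the filter column is what avoids the full scan.
        CREATE INDEX idx_classified_id ON classifieds (classified_id);

        -- If at most one matching row is expected, LIMIT 1 also lets MySQL stop
        -- after the first match, even without a unique index.
        SELECT * FROM classifieds WHERE classified_id = 123 LIMIT 1;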

    Read the article

  • Do I need to drop index on temp table?

    - by Phil
    Hi, Fairly simple question, but I don't see it anywhere else on SO: Do indexes (indices?) on a temporary table get automatically deleted with the temporary table? I'd imagine they do but I don't really know how to check to make sure. Thanks, Phil
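
    For SQL Server this is easy to check for yourself; a quick sketch with a local temp table named #t:

        CREATE TABLE #t (id INT);
        CREATE INDEX ix_t_id ON #t (id);

        -- The index lives in tempdb alongside the temp table...
        SELECT name FROM tempdb.sys.indexes WHERE object_id = OBJECT_ID('tempdb..#t');

        -- ...and is removed with it; no separate DROP INDEX is needed.
        DROP TABLE #t;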

    Read the article

  • Get smallest date for each element in access query

    - by skerit
    So I have a table containing different elements and dates. It basically looks like this:

        actieElement   beginDatum
        1              1/01/2010
        1              1/01/2010
        1              10/01/2010
        2              1/02/2010
        2              3/02/2010

    What I now need is the smallest date for every actieElement. I've found a solution using a simple GROUP BY statement, but that way the query loses its scope and you can't change anything anymore. Without the GROUP BY statement I get multiple dates for every actieElement because certain dates are the same. I thought of something like this, but it also does not work, as it would give the subquery more than one record:

        SELECT s1.actieElement, s1.begindatum
        FROM tblActieElementLink AS s1
        WHERE (((s1.actieElement) = (SELECT TOP 1 (s2.actieElement)
                                     FROM tblActieElementLink s2
                                     WHERE s1.actieElement = s2.actieElement
                                     ORDER BY s2.begindatum ASC)));
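
    A sketch of a correlated-subquery variant that avoids GROUP BY in the outer query (table and column names taken from the question); whether Access still treats the result as editable depends on the rest of the query:

        SELECT s1.actieElement, s1.begindatum
        FROM tblActieElementLink AS s1
        WHERE s1.begindatum = (SELECT MIN(s2.begindatum)
                               FROM tblActieElementLink AS s2
                               WHERE s2.actieElement = s1.actieElement);
        -- Note: if the minimum date occurs twice for the same actieElement,
        -- both rows are returned.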

    Read the article

  • Specify sorting order for a GROUP BY query to retrieve oldest or newest record for each group

    - by Beau Simensen
    I need to get the most recent record for each device from an upgrade request log table. A device is unique based on a combination of its hardware ID and its MAC address. I have been attempting to do this with GROUP BY, but I am not convinced this is safe, since it looks like it may simply be returning the "top record" (whatever SQLite or MySQL thinks that is). I had hoped that this "top record" could be hinted at by way of ORDER BY, but that does not seem to have any impact, as both of the following queries return the same records for each device, just in opposite order:

        SELECT extHwId, mac, created FROM upgradeRequest GROUP BY extHwId, mac ORDER BY created DESC

        SELECT extHwId, mac, created FROM upgradeRequest GROUP BY extHwId, mac ORDER BY created ASC

    Is there another way to accomplish this? I've seen several somewhat related posts that have all involved subselects. If possible, I would like to do this without subselects, as I would like to learn how to do it that way.
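
    ORDER BY runs after GROUP BY, so it cannot influence which row each group keeps. A common sketch without a subselect is a self-join that keeps only the rows for which no newer row exists for the same device (names taken from the question; ties on created would keep both rows):

        SELECT u.extHwId, u.mac, u.created
        FROM upgradeRequest AS u
        LEFT JOIN upgradeRequest AS newer
               ON newer.extHwId = u.extHwId
              AND newer.mac = u.mac
              AND newer.created > u.created
        WHERE newer.extHwId IS NULL;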

    Read the article

  • INSERT 0..n records into table 'A' based on content of table 'B' in MySQL 5

    - by Robert Gowland
    Using MySQL 5, I have a task where I need to update one table based on the contents of another table. For example, I need to add 'A1' to table 'A' if table 'B' contains 'B1'. I need to add 'A2a' and 'A2b' to table 'A' if table 'B' contains 'B2', etc. In our case, the value in table 'B' we're interested in is an enum. Right now I have a stored procedure containing a series of statements like:

        INSERT INTO A SELECT 'A1' FROM B WHERE B.Value = 'B1';
        -- Repeat for 'B2' -> 'A2a'; 'B2' -> 'A2b'; 'B3' -> 'A3', etc...

    Is there a nicer, more DRY way of accomplishing this? Edit: There may be values in table 'B' that have no equivalent value for table 'A'.
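
    One sketch is to move the B-to-A mapping into data rather than code: a small (hypothetical) mapping table drives a single INSERT, and B values with no mapping row simply contribute nothing:

        CREATE TABLE ab_map (b_value VARCHAR(10), a_value VARCHAR(10));
        INSERT INTO ab_map VALUES ('B1', 'A1'), ('B2', 'A2a'), ('B2', 'A2b'), ('B3', 'A3');

        INSERT INTO A
        SELECT m.a_value
        FROM B
        JOIN ab_map AS m ON m.b_value = B.Value;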

    Read the article

  • While Loop in TSQL with Sum totals

    - by RPS
    I have the following TSQL statement. I am trying to figure out how I can keep getting the results (100 rows at a time), store them in a variable (as I will have to add the totals after each select) and continue to select in a while loop until no more records are found, and then return the variable totals to the calling function.

        SELECT [OrderUser].OrderUserId, ISNULL(SUM(total.FileSize), 0), ISNULL(SUM(total.CompressedFileSize), 0)
        FROM (
            SELECT DISTINCT TOP(100) ProductSize.OrderUserId, ProductSize.FileInfoId,
                   CAST(ProductSize.FileSize AS BIGINT) AS FileSize,
                   CAST(ProductSize.CompressedFileSize AS BIGINT) AS CompressedFileSize
            FROM ProductSize WITH (NOLOCK)
            INNER JOIN [Version] ON ProductSize.VersionId = [Version].VersionId
        ) AS total
        RIGHT OUTER JOIN [OrderUser] WITH (NOLOCK) ON total.OrderUserId = [OrderUser].OrderUserId
        WHERE NOT ([OrderUser].isCustomer = 1 AND [OrderUser].isEndOrderUser = 0 OR [OrderUser].isLocation = 1)
          AND [OrderUser].OrderUserId = 1
        GROUP BY [OrderUser].OrderUserId
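
    A sketch of one batching pattern in T-SQL (SQL Server 2008+ syntax for DECLARE with an initial value). It pages on FileInfoId, which is assumed here to be a unique, increasing key, and it leaves out the joins and OrderUser filters from the question for brevity:

        DECLARE @totalFileSize   BIGINT = 0,
                @totalCompressed BIGINT = 0,
                @lastId          INT    = 0,
                @batchSize BIGINT, @batchCompressed BIGINT, @batchMaxId INT;

        WHILE 1 = 1
        BEGIN
            -- Pull the next 100 rows past the last key we have seen and total them.
            SELECT @batchSize       = SUM(CAST(b.FileSize AS BIGINT)),
                   @batchCompressed = SUM(CAST(b.CompressedFileSize AS BIGINT)),
                   @batchMaxId      = MAX(b.FileInfoId)
            FROM (SELECT TOP (100) ps.FileInfoId, ps.FileSize, ps.CompressedFileSize
                  FROM ProductSize AS ps
                  WHERE ps.OrderUserId = 1 AND ps.FileInfoId > @lastId
                  ORDER BY ps.FileInfoId) AS b;

            IF @batchMaxId IS NULL BREAK;   -- no more rows

            SET @totalFileSize   = @totalFileSize   + ISNULL(@batchSize, 0);
            SET @totalCompressed = @totalCompressed + ISNULL(@batchCompressed, 0);
            SET @lastId          = @batchMaxId;
        END

        SELECT @totalFileSize AS TotalFileSize, @totalCompressed AS TotalCompressedFileSize;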

    Read the article

  • PreparedStatement.setString() method without quotes

    - by Slavko
    I'm trying to use a PreparedStatement with code similar to this:

        SELECT * FROM ? WHERE name = ?

    Obviously, what happens when I use setString() to set the table and the name field is this:

        SELECT * FROM 'my_table' WHERE name = 'whatever'

    and the query doesn't work. Is there a way to set the String without quotes, so the line looks like this:

        SELECT * FROM my_table WHERE name = 'whatever'

    or should I just give up and use a regular Statement instead (the arguments come from another part of the system; neither of them is entered by a user)?

    Read the article

  • Copy new records from datatable and identify changes in old records

    - by Betite
    Assume there are two tables: Remote_table and My_table. Remote_table has 6 columns (the first four marked as the key):

        PROJECT  JOB_TYPE  MONTH  YEAR  HOURS  IS_DELETED
        134393   70        1      2013  30     0
        134393   70        2      2013  50     0
        134393   70        3      2013  80     0
        134393   70        10     2012  10     0
        134393   70        11     2012  0      0
        134393   70        12     2012  15     0

    My_table is a copy of Remote_table. I tried to copy only the new records from Remote_table with this query:

        SELECT * FROM [remote_DB].[LudanProjectManager].[dbo].Remote_table
        EXCEPT
        SELECT * FROM My_table

    It works OK, but I get a duplicate primary key exception when changes have been made to the HOURS column in Remote_table. Can anyone think of a way to copy only the new records from Remote_table and, if changes have been made to old records, to identify them and update My_table to match?
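
    A sketch using MERGE (SQL Server 2008 and later), assuming the key really is the combination PROJECT, JOB_TYPE, MONTH, YEAR:

        MERGE My_table AS t
        USING [remote_DB].[LudanProjectManager].[dbo].Remote_table AS s
           ON  t.PROJECT  = s.PROJECT
           AND t.JOB_TYPE = s.JOB_TYPE
           AND t.[MONTH]  = s.[MONTH]
           AND t.[YEAR]   = s.[YEAR]
        WHEN MATCHED AND (t.HOURS <> s.HOURS OR t.IS_DELETED <> s.IS_DELETED) THEN
            UPDATE SET t.HOURS = s.HOURS, t.IS_DELETED = s.IS_DELETED
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (PROJECT, JOB_TYPE, [MONTH], [YEAR], HOURS, IS_DELETED)
            VALUES (s.PROJECT, s.JOB_TYPE, s.[MONTH], s.[YEAR], s.HOURS, s.IS_DELETED);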

    Read the article

  • Help me choose between XML and SQLite on Android

    - by Ngetha
    I have an Android app that periodically, say once a week, downloads content from a server in XML. The content is used by the app; different Activities use different parts of the content. My question is a design one: should I save the data in SQLite or just keep it as an XML file, and which one would be faster to read? The app can only use one content piece at a time, which means subsequent XML content downloads replace the old one.

    Read the article

  • Alternatives to sign() in sqlite for custom order by

    - by Pentium10
    I have a string column which contains some numeric values, but a lot of them are 0, empty string or NULL. The rest are numbers of varying size, all positive. I am trying to create a custom ORDER BY over two fields: first the rows should be ordered by whether the number is non-zero (which is what sign() would give me), and then sorted by name. So something like this would work:

        select * from table order by sign(referenceid) desc, name asc;

    But SQLite lacks the sign() -1/0/1 function, and I am on Android, so I can't create user-defined functions. What other options do I have to get this sort done?
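
    A sketch of a sign()-free equivalent using CASE (the table name mytable is hypothetical); in SQLite, CAST of an empty string to REAL yields 0, so empty strings group with the zeros, while the IS NULL test catches the NULLs:

        SELECT *
        FROM mytable
        ORDER BY CASE
                     WHEN referenceid IS NULL OR CAST(referenceid AS REAL) = 0 THEN 0
                     ELSE 1
                 END DESC,
                 name ASC;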

    Read the article

  • A SELECT statement for MySQL

    - by Hossein
    I have this table:

        id, bookmarkID, tagID

    I want to fetch the top N bookmarkIDs for a given list of tags. Does anyone know a very fast solution for this? The table is quite large (12 million records). I am using MySQL.
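
    A sketch of the usual tag-intersection query, ranking bookmarks by how many of the requested tags they carry (the table name bookmark_tags is hypothetical; a composite index on (tagID, bookmarkID) is what keeps it fast at this size):

        SELECT bookmarkID, COUNT(*) AS matched_tags
        FROM bookmark_tags
        WHERE tagID IN (1, 2, 3)      -- the given list of tags
        GROUP BY bookmarkID
        ORDER BY matched_tags DESC
        LIMIT 10;                     -- top N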

    Read the article

  • how do I integrate the aspnet_users table (asp.net membership) into my existing database

    - by ooo
    I have a database that already has a users table with these columns:

        userID - int
        loginName - string
        First - string
        Last - string

    I just installed the ASP.NET membership tables. Right now all of my tables are joined to my users table, foreign-keyed to the userID field. How do I integrate the aspnet_Users table into my schema? Here are the ideas I thought of:

    1. Add a membership_id field to my users table and, on new inserts, include that new field in my users table. This seems like the cleanest way, as I don't need to break any existing relationships.
    2. Break all existing relationships and move all of the fields in my users table into the aspnet_Users table. This seems like a pain, but ultimately will lead to the simplest, normalized solution.

    Any thoughts?
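
    A sketch of option 1, assuming SQL Server and the standard aspnet_Users table created by the membership installer, whose key is the uniqueidentifier column UserId:

        ALTER TABLE users
            ADD membership_id UNIQUEIDENTIFIER NULL;

        ALTER TABLE users
            ADD CONSTRAINT FK_users_aspnet_Users
            FOREIGN KEY (membership_id) REFERENCES dbo.aspnet_Users (UserId);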

    Read the article

  • How can I add an extra SELECT in this query?

    - by BulgedSnowy
    I've got three related tables:

        images: id | filename | filesize | ...
        nodes:  image_id | tag_id
        tags:   id | name

    And I'm using this query to search for images containing x tags:

        SELECT images.*
        FROM images
        INNER JOIN nodes ON images.id = nodes.image_id
        WHERE tag_id IN (SELECT tags.id FROM tags WHERE tags.tag IN ("tag1","tag2"))
        GROUP BY images.id
        HAVING COUNT(*) = 2

    The problem is that I also need to retrieve all the tags contained by each retrieved image, and I need this in the same query. This is the current query which retrieves all the tags contained by an image:

        SELECT tag
        FROM nodes
        JOIN tags ON nodes.tag_id = tags.id
        WHERE image_id = images.id AND nodes.private = images.private
        ORDER BY tag

    How can I combine these two so I only have one query?
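
    A sketch of one way to fold the tag list into the first query using MySQL's GROUP_CONCAT, with a second join to nodes/tags that supplies the full tag list for each matching image (names follow the question; it relies on the same relaxed GROUP BY behaviour as the original query):

        SELECT images.*,
               GROUP_CONCAT(DISTINCT all_tags.tag ORDER BY all_tags.tag) AS tag_list
        FROM images
        INNER JOIN nodes             ON nodes.image_id = images.id
        INNER JOIN tags              ON tags.id = nodes.tag_id
                                    AND tags.tag IN ('tag1', 'tag2')
        INNER JOIN nodes AS n2       ON n2.image_id = images.id
        INNER JOIN tags  AS all_tags ON all_tags.id = n2.tag_id
        GROUP BY images.id
        HAVING COUNT(DISTINCT tags.tag) = 2;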

    Read the article

  • I want to do a SQL update loop statement using do-while in PHP

    - by Jean
    Hello, I want to loop the update statement, but it only loops once. Here is the code I am using:

        do {
            mysql_select_db($database_ll, $ll);
            $query_query = "update table set ex='$71[1]' where field='val'";
            $query = mysql_query($query_query, $ll) or die(mysql_error());
            $row_domain_all = mysql_fetch_assoc($query);
        } while ($row_query = mysql_fetch_assoc($query));

    Thanks, Jean

    Read the article

  • History tables pros, cons and gotchas - using triggers, sproc or at application level.

    - by Nathan W
    I am currently playing around with the idea of having history tables for some of the tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column to store what action was performed, e.g. Update, Delete or Insert. So far I can think of three different places where you can do the history-table work:

    1. Triggers on the main table for update, insert and delete. (Database)
    2. Stored procedures. (Database)
    3. Application layer. (Application)

    My main question is: what are the pros, cons and gotchas of doing the work in each of these layers? One advantage I can think of for the trigger approach is that integrity is always maintained no matter what program is implemented on top of the database.
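
    For reference, a minimal sketch of the trigger approach in T-SQL, with a hypothetical main table customers and a history table customers_history carrying the extra ModifiedDate and Action columns:

        CREATE TRIGGER trg_customers_history
        ON customers
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Rows in the inserted pseudo-table cover INSERTs and the new side of UPDATEs.
            INSERT INTO customers_history (CustomerId, Name, ModifiedDate, Action)
            SELECT i.CustomerId, i.Name, GETDATE(),
                   CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'Update' ELSE 'Insert' END
            FROM inserted AS i;

            -- Rows only in the deleted pseudo-table are DELETEs.
            INSERT INTO customers_history (CustomerId, Name, ModifiedDate, Action)
            SELECT d.CustomerId, d.Name, GETDATE(), 'Delete'
            FROM deleted AS d
            WHERE NOT EXISTS (SELECT 1 FROM inserted);
        END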

    Read the article

  • Getting the last element of a Postgres array, declaratively

    - by Wojciech Kaczmarek
    How do I obtain the last element of an array in Postgres? I need to do it declaratively, as I want to use it as an ORDER BY criterion. I wouldn't want to create a special PL/pgSQL function for it; the fewer changes to the database the better in this case. In fact, what I want to do is to sort by the last word of a specific column containing multiple words. Changing the model is not an option here. In other words, I want to push Ruby's sort_by {|x| x.split[-1]} down to the database level. I can split a value into an array of words with Postgres' string_to_array or regexp_split_to_array functions, but then how do I get the last element?
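
    A sketch using array_upper to index the last element, so everything stays inside the ORDER BY (the table docs and column title are hypothetical):

        SELECT *
        FROM docs
        ORDER BY (string_to_array(title, ' '))[array_upper(string_to_array(title, ' '), 1)];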

    Read the article

  • drop-down combo box-like functionality for queries.

    - by Frank Developer
    Is it possible to provide the following type of functionality with the Informix client tools? As the user types the first two characters of a name, the drop-down list is empty. At the third character, the list fills with just the names beginning with those three characters. At the fourth character, MS Access completes the first matching name (assuming the combo's AutoExpand is on). Once enough characters are typed to identify the customer, the user tabs to the next field. The time taken to load the combo between keystrokes is minimal. This occurs once only for each entry, unless the user backspaces through the first three characters again. If your list still contains too many records, you can reduce them by another order of magnitude by changing the value of the constant conSuburbMin from 3 to 4.
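
    The combo behaviour itself is a client-side feature, but the query behind each refill is just a prefix search; a sketch of what would be issued after the third keystroke, with a hypothetical customer table and name column (Informix MATCHES shown alongside the portable LIKE form):

        -- Informix wildcard syntax:
        SELECT name FROM customer WHERE name MATCHES 'Smi*' ORDER BY name;

        -- Portable equivalent:
        SELECT name FROM customer WHERE name LIKE 'Smi%' ORDER BY name;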

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.

    THE PROBLEM: Access is unable to perform a query that complex, it seems, as it crashes any time I run the query.

    ALTERNATIVES: I have thought about a few alternatives and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new:

    1. Export each table into CSVs, import them into SQLite and then write a query there to do what Access fails to do (merge 64 tables).
    2. Export each table into CSVs and write a script to read each one and merge the CSVs into a single CSV.
    3. Somehow connect to the MS Access DB (API) and write a script to pull data from each table and merge them into a CSV file.

    QUESTION: What do you recommend?
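
    If alternative 1 is chosen, the merge itself is usually just a UNION ALL once the imported tables share a column layout; a sketch in SQLite with hypothetical table names:

        CREATE TABLE merged AS
            SELECT * FROM contacts_01
            UNION ALL
            SELECT * FROM contacts_02
            UNION ALL
            SELECT * FROM contacts_03;   -- ...and so on for the remaining tables

        -- The sqlite3 shell can then write the result out as CSV:
        -- .mode csv
        -- .output merged.csv
        -- SELECT * FROM merged;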

    Read the article

  • How do I select a random record efficiently in MySQL?

    - by user198729
        mysql> EXPLAIN SELECT * FROM urls ORDER BY RAND() LIMIT 1;
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                           |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        |  1 | SIMPLE      | urls  | ALL  | NULL          | NULL | NULL    | NULL | 62228 | Using temporary; Using filesort |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+

    The above doesn't qualify as efficient; how should I do it properly?
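
    ORDER BY RAND() has to sort every row; a common sketch that avoids the full sort is to pick a random id first (this assumes a reasonably dense AUTO_INCREMENT id column; gaps skew the distribution slightly):

        SELECT u.*
        FROM urls AS u
        JOIN (SELECT CEIL(RAND() * (SELECT MAX(id) FROM urls)) AS rand_id) AS r
          ON u.id >= r.rand_id
        ORDER BY u.id
        LIMIT 1;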

    Read the article

  • SimpleDB as Denormalized DB

    - by Max
    In an environment where you have a relational database which handles all business transactions, is it a good idea to utilise SimpleDB for all data queries, to get faster and more lightweight search? The master data storage would be a relational DB which is "replicated"/"transformed" into SimpleDB to provide very fast read-only queries, since no joins or complicated subselects are needed.

    Read the article

  • Transaction within IF THEN ELSE doesn't commit

    - by boris callens
    In my T-SQL script I have an IF/ELSE structure that checks whether a column already exists. If not, it creates the column and updates it.

        IF NOT EXISTS (SELECT 1
                       FROM INFORMATION_SCHEMA.COLUMNS
                       WHERE TABLE_NAME = 'tableName'
                         AND COLUMN_NAME = 'columnName')
        BEGIN
            BEGIN TRANSACTION
            ALTER TABLE tableName ADD columnName int NULL
            COMMIT

            BEGIN TRANSACTION
            update tableName set columnName = [something] from [subquery]
            COMMIT
        END

    This doesn't work because the column doesn't exist after the commit. Why doesn't the COMMIT commit?
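
    The failure usually happens at compile time rather than at commit time: the whole batch is parsed before anything runs, and the UPDATE references a column that does not exist yet. A common sketch is to run the dependent statement as dynamic SQL so it is only compiled when executed (the UPDATE text below is just a placeholder):

        IF NOT EXISTS (SELECT 1
                       FROM INFORMATION_SCHEMA.COLUMNS
                       WHERE TABLE_NAME = 'tableName'
                         AND COLUMN_NAME = 'columnName')
        BEGIN
            ALTER TABLE tableName ADD columnName int NULL;

            EXEC sp_executesql
                N'UPDATE tableName SET columnName = 0 WHERE columnName IS NULL';
        END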

    Read the article
