Search Results

Search found 63875 results on 2555 pages for 'mysql error 1045'.

  • MySQL - display rows of names and addresses grouped by name, where name occurs more than once

    - by Stoob
    I have two tables, "name" and "address". I would like to list the last_name and joined address.street_address for every last_name that occurs more than once in table "name". The two tables are joined on the column "name_id". The desired output would appear like so:

        213 | smith | 123 bluebird
        14  | smith | 456 first ave
        718 | smith | 12 san antonia st.
        244 | jones | 78 third ave # 45
        98  | jones | 18177 toronto place

    Note that if the last_name "abernathy" appears only once in table "name", then "abernathy" should not be included in the result. This is what I came up with so far:

        SELECT name.name_id, name.last_name, address.street_address, count(*)
        FROM `name`
        JOIN `address` ON name.name_id = address.name_id
        GROUP BY `last_name`
        HAVING count(*) > 1

    However, this produces only one row per last name. I'd like all the last names listed. I know I am missing something simple. Any help is appreciated, thanks!
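
    A sketch of one possible fix, using a derived table of the duplicated last names so every matching row survives (table and column names taken from the question):

        SELECT n.name_id, n.last_name, a.street_address
        FROM `name` n
        JOIN `address` a ON a.name_id = n.name_id
        JOIN (
            -- last names that appear more than once
            SELECT last_name
            FROM `name`
            GROUP BY last_name
            HAVING COUNT(*) > 1
        ) dup ON dup.last_name = n.last_name
        ORDER BY n.last_name;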

  • MySQL - concat all fields in a table

    - by hafizan
    Is there a way to concat all the fields in a table with one SQL statement, automatically? The reason is that before a user updates or deletes a record, the record will be pushed to another table for future reference.
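
    A minimal sketch, assuming a hypothetical orders table: MySQL has no "all columns" shorthand for CONCAT, so each column must be listed; for the stated archiving goal, copying the whole row is simpler than concatenating it.

        -- Concatenate every column of one row into a single string:
        SELECT CONCAT_WS('|', id, customer, amount, created_at) AS full_record
        FROM orders
        WHERE id = 42;

        -- Simpler for archiving before an update/delete: copy the row as-is.
        INSERT INTO orders_archive
        SELECT * FROM orders WHERE id = 42;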

  • Problem importing mysql triggers generated from mysqldump

    - by OM The Eternity
    I am using phpMyAdmin to run mysqldump, but as per my requirement I have to create a new database which is a clone of the previous one. When I export the main DB, the dump contains all the trigger definitions with the old DB name mentioned in them. When I import the dump into the new DB, the triggers get imported as well, but the trigger_schema is not changed to match the new DB. What can be done to resolve this problem?
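
    A hedged sketch of one workaround (the trigger name, table, and body here are hypothetical): a trigger belongs to the schema it was created in, so dropping and recreating it while connected to the new database sets trigger_schema correctly.

        USE newdb;
        DROP TRIGGER IF EXISTS before_t_update;
        CREATE TRIGGER before_t_update BEFORE UPDATE ON t
        FOR EACH ROW SET NEW.updated_at = NOW();  -- hypothetical trigger body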

  • MySql left join on several regs

    - by egidiocs
    Hi there! I have this table1:

        idproduct (PK) | date_to_go
        1              | 2010-01-18
        2              | 2010-02-01
        3              | 2010-02-21
        4              | 2010-02-03

    and this other table2 that tracks date_to_go updates:

        id | idproduct (FK) | prev_date_to_go | date_to_go | update_date
        1  | 1              | 2010-01-01      | 2010-01-05 | 2009-12-01
        2  | 1              | 2010-01-05      | 2010-01-10 | 2009-12-20
        3  | 1              | 2010-01-10      | 2010-01-18 | 2009-12-20
        4  | 3              | 2010-01-20      | 2010-02-03 | 2010-01-05

    So, in this example, for table1.idproduct #1, 2010-01-18 is the current date_to_go and 2010-01-01 (table2.prev_date_to_go of the first record) is the original date_to_go. Using this query:

        SELECT v.idproduct, v.date_to_go, p.prev_date_to_go AS original_date_to_go
        FROM table1 v
        LEFT JOIN table2 p ON p.idproduct = v.idproduct
        GROUP BY v.idproduct
        ORDER BY v.idproduct

    can I assume that original_date_to_go will be the first related record of table2?

        idproduct | date_to_go | original_date_to_go
        1         | 2010-01-18 | 2010-01-01
        2         | 2010-02-01 | NULL
        3         | 2010-02-21 | 2010-01-20
        4         | 2010-02-03 | NULL
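
    With a plain GROUP BY, MySQL gives no guarantee about which joined row's value you get. A sketch of a deterministic alternative that pins the join to the earliest history row per product (using the lowest id as "first"):

        SELECT v.idproduct, v.date_to_go, p.prev_date_to_go AS original_date_to_go
        FROM table1 v
        LEFT JOIN table2 p
               ON p.idproduct = v.idproduct
              AND p.id = (SELECT MIN(p2.id)          -- first update record for this product
                          FROM table2 p2
                          WHERE p2.idproduct = v.idproduct)
        ORDER BY v.idproduct;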

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of it goes to one particular column, which is a text column holding the text of PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup:

    1) We are using InnoDB on every table, including the users table etc. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage/performance perspective)?

    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows storing the text version of these PDF files. However, we will retrieve fewer pages, so instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and that will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; however, if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory. Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k rows, but it will save memory and improve performance. Is this a good approach to solve the problem, and do you have any ideas how to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,
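
    A minimal sketch of the proposed chunking schema (all names hypothetical), with one row per 5-page slice so highlighting only ever pulls small chunks:

        CREATE TABLE pdf_chunks (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            document_id INT UNSIGNED NOT NULL,
            chunk_no    SMALLINT UNSIGNED NOT NULL,  -- 5-page slice index within the document
            body        MEDIUMTEXT NOT NULL,
            UNIQUE KEY uq_doc_chunk (document_id, chunk_no)
        );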

  • What is an index in MySQL?

    - by Eric
    http://i.imgur.com/JdsUK.jpg I created a table like the picture above. What are the "Indexes"? Primary key? Unique? It works well without setting indexes. What do they do? Why do I need them? Also, I set all string fields to TEXT because I didn't know how many characters I would need. Is this a good idea? I don't see any difference. Thanks!
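
    A short sketch of what an index buys you (table and column hypothetical): without one, MySQL scans every row to answer the WHERE; with one, it can seek straight to the matching entries.

        -- After this, lookups by email no longer scan the whole table:
        CREATE INDEX idx_users_email ON users (email);

        SELECT * FROM users WHERE email = 'someone@example.com';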

  • mysql dynamic cursor

    - by machaa
    Here is the procedure I wrote. Cursors c1 & c2: c2 is inside c1. I tried declaring c2 below c1 (outside the loop), but then it does NOT pick up the updated value of I :( Any suggestions to make it work would be helpful, thanks.

        create table t1(i int);
        create table t2(i int, j int);
        insert into t1(i) values(1), (2), (3), (4), (5);
        insert into t2(i, j) values(1, 6), (2, 7), (3, 8), (4, 9), (5, 10);

        delimiter $
        CREATE PROCEDURE p1()
        BEGIN
            DECLARE I INT;
            DECLARE J INT;
            DECLARE done INT DEFAULT 0;
            DECLARE c1 CURSOR FOR SELECT i FROM t1;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
            OPEN c1;
            REPEAT
                FETCH c1 INTO I;
                IF NOT done THEN
                    select I;
                    DECLARE c2 CURSOR FOR SELECT j FROM t2 WHERE i = I;
                    OPEN c2;
                    REPEAT
                        FETCH c2 into J;
                        IF NOT done THEN
                            SELECT J;
                        END IF;
                    UNTIL done END REPEAT;
                    CLOSE c2;
                    set done = 0;
                END IF;
            UNTIL done END REPEAT;
            CLOSE c1;
        END$
        delimiter ;
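
    MySQL only allows DECLARE at the start of a BEGIN...END block (which is why the inner DECLARE CURSOR is rejected), and cursors cannot be dynamic; but blocks nest, so a hedged working sketch declares c2 in an inner block that is re-entered, and the cursor re-opened, on every outer fetch. The loop variables are renamed v_i/v_j because an unqualified name that matches a routine variable refers to the variable, and variable names are case-insensitive, so WHERE i = I would compare the variable with itself.

        delimiter $
        CREATE PROCEDURE p2()
        BEGIN
            DECLARE v_i INT;
            DECLARE done INT DEFAULT 0;
            DECLARE c1 CURSOR FOR SELECT i FROM t1;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
            OPEN c1;
            outer_loop: LOOP
                FETCH c1 INTO v_i;
                IF done THEN LEAVE outer_loop; END IF;
                SELECT v_i;
                BEGIN  -- inner block: DECLAREs are legal here and see the current v_i
                    DECLARE v_j INT;
                    DECLARE done2 INT DEFAULT 0;
                    DECLARE c2 CURSOR FOR SELECT j FROM t2 WHERE i = v_i;
                    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done2 = 1;
                    OPEN c2;
                    inner_loop: LOOP
                        FETCH c2 INTO v_j;
                        IF done2 THEN LEAVE inner_loop; END IF;
                        SELECT v_j;
                    END LOOP;
                    CLOSE c2;
                END;
            END LOOP;
            CLOSE c1;
        END$
        delimiter ;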

  • Multiple many-to-many JOINs in a single mysql query without Cartesian Product

    - by VWD
    At the moment I can get the results I need with two separate SELECT statements:

        SELECT COUNT(rl.refBiblioID)
        FROM biblioList bl
        LEFT JOIN refList rl ON bl.biblioID = rl.biblioID
        GROUP BY bl.biblioID

        SELECT GROUP_CONCAT( CONCAT_WS( ':', al.lastName, al.firstName ) ORDER BY al.authorID )
        FROM biblioList bl
        LEFT JOIN biblio_author ba ON ba.biblioID = bl.biblioID
        JOIN authorList al ON al.authorID = ba.authorID
        GROUP BY bl.biblioID

    Combining them like this, however:

        SELECT GROUP_CONCAT( CONCAT_WS( ':', al.lastName, al.firstName ) ORDER BY al.authorID ),
               COUNT(rl.refBiblioID)
        FROM biblioList bl
        LEFT JOIN biblio_author ba ON ba.biblioID = bl.biblioID
        JOIN authorList al ON al.authorID = ba.authorID
        LEFT JOIN refList rl ON bl.biblioID = rl.biblioID
        GROUP BY bl.biblioID

    causes the author result column to have duplicate names. How can I get the desired results from one SELECT statement without using DISTINCT? With subqueries?
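
    A sketch of one subquery-based combination: aggregate each many-to-many side in its own derived table first, so the joins never multiply rows and the GROUP_CONCAT sees each author only once.

        SELECT bl.biblioID, a.authors, r.refCount
        FROM biblioList bl
        LEFT JOIN (
            SELECT ba.biblioID,
                   GROUP_CONCAT(CONCAT_WS(':', al.lastName, al.firstName)
                                ORDER BY al.authorID) AS authors
            FROM biblio_author ba
            JOIN authorList al ON al.authorID = ba.authorID
            GROUP BY ba.biblioID
        ) a ON a.biblioID = bl.biblioID
        LEFT JOIN (
            SELECT biblioID, COUNT(refBiblioID) AS refCount
            FROM refList
            GROUP BY biblioID
        ) r ON r.biblioID = bl.biblioID;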

  • mysql order-by original "where order"

    - by Benjamin Dobnikar
    I have this order-by problem I cannot crack. I select from my table like this:

        SELECT * FROM `sidemodules`
        WHERE name = 'module1' OR name = 'module2' OR name = 'module3'

    which returns the modules I want. But the modules lie in the table in, say, this order:

        module3
        module1
        module2

    and they are returned to me in this order. How can I get them to display in the order AS IN THE WHERE CLAUSE (1, 2, 3)? Big thanks!
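
    A sketch using MySQL's FIELD() function, which returns the 1-based position of its first argument within the remaining arguments, so it can impose exactly the order you list:

        SELECT *
        FROM `sidemodules`
        WHERE name IN ('module1', 'module2', 'module3')
        ORDER BY FIELD(name, 'module1', 'module2', 'module3');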

  • Use of HAVING in MySQL

    - by KBrian
    I have a table from which I need to select all persons whose first name is not unique, and that set should be selected only if, among the persons sharing a first name, all have different last names. Example:

        FirstN | LastN
        Bill   | Clinton
        Bill   | Cosby
        Bill   | Maher
        Elvis  | Presley
        Elvis  | Presley
        Largo  | Winch

    I want to obtain:

        FirstN | LastN
        Bill   | Clinton

    or:

        FirstN | LastN
        Bill   | Clinton
        Bill   | Cosby
        Bill   | Maher

    I tried this, but it does not return what I want:

        SELECT * FROM Ids
        GROUP BY FirstN, LastN
        HAVING (COUNT(FirstN) > 1 AND COUNT(LastN) = 1)

    [Edited my post after Aleandre P. Lavasseur remark]
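
    A sketch of the "all of them" variant of the desired output: keep a first name only if it repeats and every occurrence has a distinct last name.

        SELECT i.*
        FROM Ids i
        JOIN (
            SELECT FirstN
            FROM Ids
            GROUP BY FirstN
            HAVING COUNT(*) > 1                        -- first name repeats
               AND COUNT(DISTINCT LastN) = COUNT(*)    -- and all last names differ
        ) dup ON dup.FirstN = i.FirstN;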

  • Get list of duplicate rows in MySql

    - by user347033
    Hi, I have a table like this:

        ID | nachname | vorname
        1  | john     | doe
        2  | john     | doe
        3  | jim      | doe
        4  | Michael  | Knight

    I need a query that will return all the fields (select *) from the records that have the same nachname and vorname (in this case, records 1 and 2). Can anyone help me with this? Thanks
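
    A sketch against a hypothetical table name, people (the question doesn't name it), using a derived table of the duplicated (nachname, vorname) pairs:

        SELECT t.*
        FROM people t
        JOIN (
            SELECT nachname, vorname
            FROM people
            GROUP BY nachname, vorname
            HAVING COUNT(*) > 1
        ) d ON d.nachname = t.nachname AND d.vorname = t.vorname;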

  • Encoding issue with form and HTML Purifier / MySQL

    - by Andrew Heath
    Driving me nuts... The page with the form is encoded as Unicode (UTF-8) via:

        <meta http-equiv="content-type" content="text/html; charset=utf-8">

    The entry column in the database is text, utf8_unicode_ci. Copying text from a Word document with " in it, like this: “1922.” is an insta-fail and ends up in the database as â??1922.â?? (typing new data into the form, including ", works fine... it's cut and pasting from Word...). The PHP steps behind the scenes are:

        1. grab value from POST
        2. run through HTML Purifier default settings
        3. run through mysql_real_escape_string
        4. insert query into dbase

    Help?
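
    A hedged guess at a common culprit in this setup: the MySQL connection itself is still talking latin1, so the multi-byte UTF-8 curly quotes get mangled on the way in. Declaring the connection charset right after connecting often fixes it:

        -- Issue once per connection, before any INSERT/SELECT
        -- (mysql_set_charset('utf8') does the equivalent from old-style PHP):
        SET NAMES 'utf8';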

  • mysql - joining three tables with HAVING

    - by Qiao
    I have a table:

        id | name | type

    where "type" is 1 or 2. I need to join this table with two other tables: rows with type = 1 should be joined with the first table, and type = 2 with the second. Something like:

        SELECT * FROM tbl
        INNER JOIN tbl_1 ON tbl.name = tbl_1.name HAVING tbl.type = 1
        INNER JOIN tbl_2 ON tbl.name = tbl_2.name HAVING tbl.type = 2

    But it does not work. How can it be implemented?
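
    A sketch that moves each type test into the join's ON clause (HAVING is a syntax error in that position, since it only filters grouped results): with LEFT JOINs, a row of tbl picks up columns from tbl_1 when type = 1 and from tbl_2 when type = 2.

        SELECT t.*, t1.*, t2.*
        FROM tbl t
        LEFT JOIN tbl_1 t1 ON t1.name = t.name AND t.type = 1
        LEFT JOIN tbl_2 t2 ON t2.name = t.name AND t.type = 2;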

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance. I have a table in which each row corresponds to a high school basketball game. Each game has a home team id and an away team id (FKs to another "team" table), a home score, an away score, and a date. I am writing a query that matches attendance with this season's basketball games. My sample output will be:

        (#_students_missed_class, day_of_game, home_team, away_team,
         home_team_wins_this_season, away_team_wins_this_season)

    I now want to add how each team did the previous season to my analysis. I have their previous season stored in the game table, so I should be able to accomplish that with a subselect. So in my main SELECT statement I add the subselect:

        SELECT COUNT(*)
        FROM game_table
        WHERE game_table.date BETWEEN 'start of previous season' AND 'end of previous season'
          AND ( (game_table.home_team = team_table.id AND game_table.home_score > game_table.away_score)
             OR (game_table.away_team = team_table.id AND game_table.away_score > game_table.home_score) )

    In this case team_table.id refers to the id of the home_team, so I now have all their wins calculated from the previous year. This method of calculation should be neither time nor resource intensive, yet the EXPLAIN shows ALL in the Type field, I am not using a key, and the query times out. I'm not sure how I can write a more efficient query with a subselect. It seems preposterously inefficient to have to write 4 of these queries (for home wins, home losses, away wins, away losses). I am sure this could be more lucid. I'll absolutely add color tomorrow if anyone has questions
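
    A sketch of one join-based alternative: compute each team's previous-season wins once in a derived table, then join it in by team id instead of running a correlated subquery per row (column names follow the question; the season bounds are placeholders):

        SELECT g.*, hw.wins AS home_team_prev_wins
        FROM game_table g
        LEFT JOIN (
            SELECT team_id, COUNT(*) AS wins
            FROM (
                SELECT IF(home_score > away_score, home_team, away_team) AS team_id
                FROM game_table
                WHERE date BETWEEN '2009-11-01' AND '2010-04-01'  -- previous season, placeholder dates
                  AND home_score <> away_score                    -- exclude ties from win counts
            ) winners
            GROUP BY team_id
        ) hw ON hw.team_id = g.home_team;

    The same derived table can be joined a second time on g.away_team for the away side, and analogous ones built for losses, so the aggregation runs once rather than per row.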

  • auto_increment in MySQL - can I omit it?

    - by kees-kist
    I've noticed that phpMyAdmin creates the following SQL for table creation:

        CREATE TABLE something ( ... ) auto_increment=1;

    When I write a database creation script I don't use the auto_increment bit. From reading related questions here I understand that it determines the starting value for auto_increment columns. But is it good practice to set it to 1 explicitly, or should I just leave it out of the SQL so that the default is used?
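
    A tiny sketch of what the clause actually controls (hypothetical table): it seeds the auto-increment counter, so for a fresh table, omitting it and writing AUTO_INCREMENT=1 are equivalent.

        -- ids in this table start at 100 instead of the default 1
        CREATE TABLE demo (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
        ) AUTO_INCREMENT = 100;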

  • De-normalization alternative to specific MYSQL problem?

    - by Booker
    I am facing quite a specific optimization problem. I currently have 4 normalized tables of data. Every second, possibly thousands of users will pull down up-to-date info from these tables using AJAX. The thing is that I can predict relatively easily which subset of data they need: the most recent 100 or so entries in those 4 normalized tables. I have been researching de-normalization, but feel that perhaps there is an easier solution. I was thinking that I could run one SQL query every second to condense the needed info, store it in a temporary cached table, and then have all of the user queries draw from this. This would allow the complex join of 4 tables to be run only once, and from there the users just need to do a simple lookup on the cached table. I really don't know if this is feasible. Comments on this or any other suggestions would be much appreciated. Thanks!
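
    A hedged sketch of the cached-table idea using MySQL's event scheduler (all table names and the join are hypothetical stand-ins for the real schema; the scheduler must be enabled with SET GLOBAL event_scheduler = ON):

        CREATE EVENT refresh_recent_cache
        ON SCHEDULE EVERY 1 SECOND
        DO
            REPLACE INTO recent_cache        -- recent_cache needs a primary key so re-runs overwrite
            SELECT t1.id, t1.payload, t2.extra, t3.more, t4.stuff
            FROM t1
            JOIN t2 ON t2.t1_id = t1.id
            JOIN t3 ON t3.t1_id = t1.id
            JOIN t4 ON t4.t1_id = t1.id
            ORDER BY t1.id DESC
            LIMIT 100;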

  • mysql union query

    - by Sergio
    The table that contains information about members has a structure like:

        id | fname | pic   | status
        1  | john  | a.jpg | 1
        2  | mike  | b.jpg | 1
        3  | any   | c.jpg | 1
        4  | jacky | d.jpg | 1

    The table for the list of friends looks like:

        myid | date       | user
        1    | 01-01-2011 | 4
        2    | 04-01-2011 | 3

    I want to make a query whose result prints users from the "friendlist" table with the photos and names from the "members" table for both myid (those who add) and user (those who are added). The result table in this example would look like:

        myid | myidname | myidpic | user | username | userpic | status
        1    | john     | a.jpg   | 4    | jacky    | d.jpg   | 1
        2    | mike     | b.jpg   | 3    | any      | c.jpg   | 1
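
    A sketch that reads the members table twice under different aliases, once per side of the friendship, instead of a UNION (table names taken from the question's prose):

        SELECT f.myid,
               m1.fname AS myidname, m1.pic AS myidpic,
               f.user,
               m2.fname AS username, m2.pic AS userpic,
               m2.status
        FROM friendlist f
        JOIN members m1 ON m1.id = f.myid   -- the member who adds
        JOIN members m2 ON m2.id = f.user;  -- the member who is added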

  • show data from MySQL after barcode split and character match

    - by klox
    i need some code for the next step..this my first step: <script> $("#mod").change(function() { var barcode; barCode=$("#mod").val(); var data=barCode.split(" "); $("#mod").val(data[0]); $("#seri").val(data[1]); var str=data[0]; var matches=str.matches(/EE|[EJU]).*(D)/i); }); </script> after matches..i want the result can connect to data base then show data from table inside <div id="value">...how to do that?

  • mysql 2 primary keys on one table

    - by Bharanikumar
    CREATE TABLE Orders (
        ID SMALLINT UNSIGNED NOT NULL,
        ModelID SMALLINT UNSIGNED NOT NULL,
        Descrip VARCHAR(40),
        PRIMARY KEY (ID, ModelID)
    );

    Basically, may I know: can we create two primary keys on one table? Is that correct? Because, as per the SQL rules, we can create N unique keys on one table but only one primary key. So how is my system allowing me to create multiple primary keys? Please advise what the general rule is.
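
    What the statement actually creates is a single composite primary key spanning two columns, not two primary keys. A quick sketch of how that behaves:

        -- Both rows are accepted: the (ID, ModelID) pairs differ, even though ID repeats.
        INSERT INTO Orders (ID, ModelID, Descrip) VALUES (1, 10, 'first');
        INSERT INTO Orders (ID, ModelID, Descrip) VALUES (1, 11, 'second');

        -- This fails with a duplicate-key error: the pair (1, 10) already exists.
        INSERT INTO Orders (ID, ModelID, Descrip) VALUES (1, 10, 'dup');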

  • PHP Serialize Function - Adding serialized data to mysql and then fetch and display

    - by Abhilash Shukla
    I want to know whether the PHP serialize function is 100% secure, and also, if we store serialized data in a database and want to do something after fetching it, whether that is a good approach. For example: I have a website with different user privileges, and I want to store the permission settings for a particular privilege in my database (this data I want to store via PHP's serialize function); when a user logs in, I want to fetch this data and set the privileges for that user. I am able to do this; what I want to know is whether it is the best way, or whether something more efficient can be done. Also, I was going through the PHP manual and found this code. Can anybody explain a bit what's happening in it, especially why base64_encode is used?

        <?php
        function mySerialize( $obj ) {
            return base64_encode(gzcompress(serialize($obj)));
        }
        function myUnserialize( $txt ) {
            return unserialize(gzuncompress(base64_decode($txt)));
        }
        ?>

    Also, if somebody can provide their own code showing how to do this in the most efficient manner. Thanks.

  • What does this MySQL statement do?

    - by user198729
        INSERT IGNORE INTO `PREFIX_tab_lang` (`id_tab`, `id_lang`, `name`)
        (SELECT `id_tab`, id_lang,
                (SELECT tl.`name`
                 FROM `PREFIX_tab_lang` tl
                 WHERE tl.`id_lang` = (SELECT c.`value`
                                       FROM `PREFIX_configuration` c
                                       WHERE c.`name` = 'PS_LANG_DEFAULT'
                                       LIMIT 1)
                   AND tl.`id_tab` = `PREFIX_tab`.`id_tab`)
         FROM `PREFIX_lang` CROSS JOIN `PREFIX_tab`);

    It's from an open-source project, and no documentation is available. In particular, what does CROSS JOIN mean? I've only used JOIN / LEFT JOIN.
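
    CROSS JOIN produces the Cartesian product: every row of the left table paired with every row of the right, so here every language is paired with every tab. A tiny self-contained illustration with hypothetical tables:

        -- colors has rows: red, blue; sizes has rows: S, M
        -- The result is all four pairs: (red,S), (red,M), (blue,S), (blue,M).
        SELECT c.color, s.size
        FROM colors c
        CROSS JOIN sizes s;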

  • MySQL query killing my server

    - by Webnet
    Looking at this query, there's got to be something bogging it down that I'm not noticing. I ran it for 7 minutes and it only updated 2 rows.

        //set product count for makes
        $tru->query->run(array(
            'name' => 'get-make-list',
            'sql' => 'SELECT id, name FROM vehicle_make',
            'connection' => 'core'
        ));
        while($tempMake = $tru->query->getArray('get-make-list')) {
            $tru->query->run(array(
                'name' => 'update-product-count',
                'sql' => 'UPDATE vehicle_make SET product_count = (
                              SELECT COUNT(product_id)
                              FROM taxonomy_master
                              WHERE v_id IN (
                                  SELECT id FROM vehicle_catalog
                                  WHERE make_id = '.$tempMake['id'].'
                              )
                          ) WHERE id = '.$tempMake['id'],
                'connection' => 'core'
            ));
        }

    I'm sure this query can be optimized to perform better, but I can't think of how to do it.

        vehicle_make    = 45 rows
        taxonomy_master = 11,223 rows
        vehicle_catalog = 5,108 rows

    All tables have appropriate indexes.
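
    A sketch collapsing the PHP loop into a single UPDATE with a correlated subquery, so the counting happens server-side in one statement (the IN is rewritten as a join; table and column names taken from the question):

        UPDATE vehicle_make vm
        SET vm.product_count = (
            SELECT COUNT(tm.product_id)
            FROM taxonomy_master tm
            JOIN vehicle_catalog vc ON vc.id = tm.v_id
            WHERE vc.make_id = vm.id
        );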
