Search Results

Search found 20838 results on 834 pages for 'mysql num rows'.

Page 30/834 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • Unable to install MySQL completely on Debian 5.0

    - by austin powers
    Hi, it's been a couple of days that I've been trying to install MySQL on my VPS, which runs Debian 5.0 with 256 MB of RAM. I've also installed Webmin. Here are the symptoms: after installing MySQL using either Webmin or apt-get, I try to connect to MySQL to change the root password, but every time I hit this error:

        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    So I started to investigate, and I found there is no root user inside the mysql database. When I run:

        UPDATE user SET password=PASSWORD('newpassword') WHERE user="root";

    it says 0 rows affected. I have reinstalled MySQL several times, but the same problem still exists. Please help me install mysql-server as well as mysql-client correctly. Regards.
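    A minimal recovery sketch, assuming the root account really is missing from the grant tables. Starting the server with mysqld --skip-grant-tables is a standard recovery mode (not something from the question), and the password here is a placeholder:

        -- with the server started via: mysqld --skip-grant-tables
        USE mysql;
        SELECT user, host FROM user;          -- confirm root is really absent
        INSERT INTO user (host, user, password)
            VALUES ('localhost', 'root', PASSWORD('newpassword'));
        FLUSH PRIVILEGES;                     -- reload grants so GRANT works again
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
            IDENTIFIED BY 'newpassword' WITH GRANT OPTION;

    After restarting mysqld normally, mysql -u root -p should accept the new password.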

    Read the article

  • problem with MySQL installation : template configuration file cannot be found

    - by user35389
    Trying to install MySQL onto a Windows XP machine. While going through the installation steps (in the "MySQL Server Instance Config Wizard"), I get to a point where the window reads:

        MySQL Server Instance Configuration (bold header)
        Choose the configuration for the server instance.
        Ready to execute...
          o Prepare configuration
          o Write configuration file
          o Start service
          o Apply security settings   (this line is greyed out)
        Please press [Execute] to start the configuration.
        [ Back ] [ Execute ] [ Cancel ]

    So I press Execute, and a red X appears next to the second step, "Write configuration file". At the bottom, where it originally said "Please press [Execute] to start the configuration", it now says:

        The template configuration file cannot be found at
        C:\Program Files\MySQL\MySQL Server 5.0\bin\my-template.cnf

    I'm unsure what this means, but I cancelled the config wizard and looked in the directory that had been created (C:\Program Files\MySQL\MySQL Server 5.0). There are some configuration settings files, and there are 4 folders: bin, data, Docs, share.

    Read the article

  • MySQL Cluster Failover doesn't work

    - by Lukasz
    I have two servers. On the first server, 10.100.15.150:

        1. one mgm server
        2. one ndbd
        3. one mysql api

    On the second server, 10.100.15.160:

        1. one ndbd
        2. one mysql api

    When I start all parts of the cluster, it looks like this:

        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=21   @10.100.15.150 (mysql-5.1.56 ndb-7.1.17, Nodegroup: 0)
        id=22   @10.100.15.160 (mysql-5.1.56 ndb-7.1.17, Nodegroup: 0, Master)

        [ndb_mgmd(MGM)] 1 node(s)
        id=3    @10.100.15.150 (mysql-5.1.56 ndb-7.1.17)

        [mysqld(API)]   2 node(s)
        id=11   @10.100.15.150 (mysql-5.1.56 ndb-7.1.17)
        id=12   @10.100.15.160 (mysql-5.1.56 ndb-7.1.17)

    When I shut down the first machine, 10.100.15.150, the ndbd process on the second machine also shuts down, so I cannot use that data node and the cluster fails... How must I configure this cluster to get failover working? Thx

    Read the article

  • Installing isolated instance of MySQL on Windows using silent install with .msi

    - by Abram
    I'm trying to write an installer for an internal application we wrote. After it installs our application, it installs MySQL using the .msi installer in silent mode. I set the install dir and data dir to a directory within my application's install directory, such as:

        msiexec /i @@MYSQL_INSTALLER_FILE@@ /qn INSTALLDIR="@@INSTALL_DIR@@\MySQL\" DATADIR="@@INSTALL_DIR@@\MySQL\" USERNAME="@@DB_USER@@" PASSWORD="@@DB_PASS@@"

    (the @@variable@@'s are replaced by my installer routine using InstallJammer). Once installed, I use mysqld.exe to install a Windows service with a custom service name and defaults file, like so:

        mysqld.exe --install CustomMySQL --defaults-file="@@INSTALL_DIR@@\MySQL\my.ini"

    This works fine as long as there is not already another instance of MySQL installed. If there is, it silently fails to install MySQL. Running the msi installer manually (double-click) shows an error that a previous version is already installed, and it just aborts. Is there a way to automate installing MySQL as an isolated instance, regardless of whether another version/instance is already installed?

    Read the article

  • Why can't I install MySQL on my computer?

    - by Bea
    I have read a lot of tutorials, but I am still having problems. What I tried: I downloaded mysql-5.5.9-winx64. Everything I read says I can run Setup.exe, but there is no such file in the download. The other option I know of is adding \mysql-5.5.9-winx64\bin to the PATH variable and then trying to execute the mysql command. When I do that, the error I get is:

        ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (10061)

    I then downloaded mysql-5.5.9-winx64.msi, which is easier to install, but once I followed the instructions and it was installed, I got the same error when executing the mysql command. How can I use MySQL? EDIT: I've now removed everything I installed, and I want to start from scratch.

    Read the article

  • MySQL: how to extract multiple text files from my MySQL table

    - by Patrick
    Hi, I need to extract data from my MySQL database into multiple text files. I have a table with 4 columns: UserID, UserName, Tag, Score. I need to create a text file for each Tag, containing the UserID, the UserName and the Score (ordered by Score), i.e.:

        Tag1.txt
        234922 John 35
        234294 David 205
        392423 Patrick 21

        Tag2.txt
        234922 John 35
        234294 David 205
        392423 Patrick 21

    and so on... Edited: Sample: http://dl.dropbox.com/u/72686/expertsTable.png thanks
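    A minimal sketch of one approach, with an illustrative table name (expertsTable, after the linked screenshot): one SELECT ... INTO OUTFILE per distinct Tag value, run by hand or generated by a script:

        SELECT UserID, UserName, Score
        INTO OUTFILE '/tmp/Tag1.txt'
            FIELDS TERMINATED BY ' '
            LINES TERMINATED BY '\n'
        FROM expertsTable
        WHERE Tag = 'Tag1'
        ORDER BY Score;

    Note that INTO OUTFILE writes to the database server's own filesystem and refuses to overwrite an existing file.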

    Read the article

  • Query returns too few rows

    - by Tareq
    Setup:

        mysql> create table product_stock( product_id integer, qty integer, branch_id integer);
        Query OK, 0 rows affected (0.17 sec)
        mysql> create table product( product_id integer, product_name varchar(255));
        Query OK, 0 rows affected (0.11 sec)
        mysql> insert into product(product_id, product_name) values(1, 'Apsana White DX Pencil');
        Query OK, 1 row affected (0.05 sec)
        mysql> insert into product(product_id, product_name) values(2, 'Diamond Glass Marking Pencil');
        Query OK, 1 row affected (0.03 sec)
        mysql> insert into product(product_id, product_name) values(3, 'Apsana Black Pencil');
        Query OK, 1 row affected (0.03 sec)
        mysql> insert into product_stock(product_id, qty, branch_id) values(1, 100, 1);
        Query OK, 1 row affected (0.03 sec)
        mysql> insert into product_stock(product_id, qty, branch_id) values(1, 50, 2);
        Query OK, 1 row affected (0.03 sec)
        mysql> insert into product_stock(product_id, qty, branch_id) values(2, 80, 1);
        Query OK, 1 row affected (0.03 sec)

    My query:

        SELECT IFNULL(SUM(s.qty),0) AS stock, product_name
        FROM product_stock s
        RIGHT JOIN product p ON s.product_id=p.product_id
        WHERE branch_id=1
        GROUP BY product_name
        ORDER BY product_name;

    returns:

        +-------+-------------------------------+
        | stock | product_name                  |
        +-------+-------------------------------+
        |   100 | Apsana White DX Pencil        |
        |    80 | Diamond Glass Marking Pencil  |
        +-------+-------------------------------+
        2 rows in set (0.00 sec)

    But I want to have the following result:

        +-------+------------------------------+
        | stock | product_name                 |
        +-------+------------------------------+
        |     0 | Apsana Black Pencil          |
        |   100 | Apsana White DX Pencil       |
        |    80 | Diamond Glass Marking Pencil |
        +-------+------------------------------+

    To get this result, what MySQL query should I run?
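    A sketch of the usual fix (my suggestion, not from the question): the WHERE branch_id=1 clause discards the NULL-extended rows the outer join produces for products with no stock, turning it back into an inner join. Moving the branch condition into the join condition keeps the unmatched products:

        SELECT IFNULL(SUM(s.qty), 0) AS stock, p.product_name
        FROM product p
        LEFT JOIN product_stock s
            ON s.product_id = p.product_id AND s.branch_id = 1
        GROUP BY p.product_name
        ORDER BY p.product_name;

    With the sample data above, this returns all three products, including 'Apsana Black Pencil' with a stock of 0.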

    Read the article

  • UITableView rows load asynchronously

    - by Mohan
    On launch of the app, I want to show 10 rows, with "Load 15 more" at the bottom. On tapping "Load 15 more", the next 15 rows should be added to the bottom of the table, again followed by "Load 15 more", until no rows are left. Note: I am using PHP to serve the table data as an XML feed. If anyone has any tutorial/sample code, please send me the link.

    Read the article

  • combining two select statements to return one result

    - by DalivDali
    I need to combine the results of two select queries on two view tables, from which I am performing calculations. Perhaps there is an easier way to do this with an if...else query; any pointers? Essentially I need to divide everything by ar.time_ratio under the condition in query 1, and ignore it in query 2.

        SELECT gs.traffic_date, gs.domain_group,
            gs.clicks/ar.time_ratio as 'Scaled_clicks',
            gs.visitors/ar.time_ratio as 'scaled_visitors',
            gs.revenue/ar.time_ratio as 'scaled_revenue',
            (gs.revenue/gs.clicks)/ar.time_ratio as 'scaled_average_cpc',
            (gs.clicks)/(gs.visitors)/ar.time_ratio as 'scaled_ctr',
            gs.average_rpm/ar.time_ratio as 'scaled_rpm',
            (((gs.revenue)/(gs.visitors))/ar.time_ratio)*1000 as "Ecpm"
        FROM group_stats gs, v_active_ratio ar
        WHERE ar.group_id=gs.domain_group

    and

        SELECT gs.traffic_date, gs.domain_group,
            gs.clicks, gs.visitors, gs.revenue,
            (gs.revenue/gs.clicks) as 'average_cpc',
            (gs.clicks)/(gs.visitors) as 'average_ctr',
            gs.average_rpm,
            ((gs.revenue)/(gs.visitors))*1000 as "Ecpm"
        FROM group_stats gs, v_active_ratio ar
        WHERE NOT ar.group_id=gs.domain_group
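    A sketch of one way to stack the two result sets with UNION ALL (shortened column list for brevity; note the second query's cross join with WHERE NOT ar.group_id=gs.domain_group is rewritten here as NOT EXISTS, assuming the intent is "groups with no matching ratio row"):

        SELECT gs.traffic_date, gs.domain_group,
               gs.clicks   / ar.time_ratio AS clicks,
               gs.visitors / ar.time_ratio AS visitors,
               gs.revenue  / ar.time_ratio AS revenue
        FROM group_stats gs
        JOIN v_active_ratio ar ON ar.group_id = gs.domain_group
        UNION ALL
        SELECT gs.traffic_date, gs.domain_group,
               gs.clicks, gs.visitors, gs.revenue
        FROM group_stats gs
        WHERE NOT EXISTS (SELECT 1 FROM v_active_ratio ar
                          WHERE ar.group_id = gs.domain_group);

    Both branches must return the same number of columns in the same order; the remaining derived columns can be added to each branch in the same way.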

    Read the article

  • Strange: planner picks the plan with the lower cost, but a (very) long query runtime

    - by S38
    Facts: PGSQL 8.4.2 on Linux; I make use of table inheritance; each table contains 3 million rows; indexes on joining columns are set; table statistics (analyze, vacuum analyze) are up-to-date; the only table used is "node", with various partitioned sub-tables; recursive query (pg >= 8.4). Here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        --------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to choose hash joins (good) but no indexes (bad). Now, after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

        QUERY PLAN
        --------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    Incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.
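    A hedged workaround sketch (my reading, not a verified fix): PostgreSQL 8.4 has no statistics for the recursive worktable, so it guesses rows=10 and multiplies that through the union, which makes the hash join look cheap. Until the estimates can be fixed, the planner override can at least be scoped to this one query rather than the whole session:

        BEGIN;
        SET LOCAL enable_hashjoin = off;   -- reverts automatically at COMMIT/ROLLBACK
        WITH RECURSIVE rows AS (
            ...                            -- the recursive query from above, unchanged
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r;
        COMMIT;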

    Read the article

  • Problem calling a method inside the same class in PHP?

    - by Fero
    Hi all, will anyone please tell me how to run this class? I am getting this fatal error when giving 120 as input:

        Fatal error: Call to undefined function readnumber() in E:\Program Files\xampp\htdocs\numberToWords\numberToWords.php on line 20

        <?php
        class Test
        {
            function readnumber($num, $depth)
            {
                $num = (int)$num;
                $retval = "";
                if ($num < 0) // if it's any other negative, just flip it and call again
                    return "negative " + readnumber(-$num, 0);
                if ($num > 99) // 100 and above
                {
                    if ($num > 999) // 1000 and higher
                        $retval .= readnumber($num/1000, $depth+3);
                    $num %= 1000; // now we just need the last three digits
                    if ($num > 99) // as long as the first digit is not zero
                        $retval .= readnumber($num/100, 2)." hundred\n";
                    $retval .= readnumber($num%100, 1); // our last two digits
                }
                else // from 0 to 99
                {
                    $mod = floor($num / 10);
                    if ($mod == 0) // ones place
                    {
                        if ($num == 1) $retval .= "one";
                        else if ($num == 2) $retval .= "two";
                        else if ($num == 3) $retval .= "three";
                        else if ($num == 4) $retval .= "four";
                        else if ($num == 5) $retval .= "five";
                        else if ($num == 6) $retval .= "six";
                        else if ($num == 7) $retval .= "seven";
                        else if ($num == 8) $retval .= "eight";
                        else if ($num == 9) $retval .= "nine";
                    }
                    else if ($mod == 1) // if there's a one in the ten's place
                    {
                        if ($num == 10) $retval .= "ten";
                        else if ($num == 11) $retval .= "eleven";
                        else if ($num == 12) $retval .= "twelve";
                        else if ($num == 13) $retval .= "thirteen";
                        else if ($num == 14) $retval .= "fourteen";
                        else if ($num == 15) $retval .= "fifteen";
                        else if ($num == 16) $retval .= "sixteen";
                        else if ($num == 17) $retval .= "seventeen";
                        else if ($num == 18) $retval .= "eighteen";
                        else if ($num == 19) $retval .= "nineteen";
                    }
                    else // if there's a different number in the ten's place
                    {
                        if ($mod == 2) $retval .= "twenty ";
                        else if ($mod == 3) $retval .= "thirty ";
                        else if ($mod == 4) $retval .= "forty ";
                        else if ($mod == 5) $retval .= "fifty ";
                        else if ($mod == 6) $retval .= "sixty ";
                        else if ($mod == 7) $retval .= "seventy ";
                        else if ($mod == 8) $retval .= "eighty ";
                        else if ($mod == 9) $retval .= "ninety ";
                        if (($num % 10) != 0)
                        {
                            $retval = rtrim($retval); // get rid of space at end
                            $retval .= "-";
                        }
                        $retval .= readnumber($num % 10, 0);
                    }
                }
                if ($num != 0)
                {
                    if ($depth == 3) $retval .= " thousand\n";
                    else if ($depth == 6) $retval .= " million\n";
                    if ($depth == 9) $retval .= " billion\n";
                }
                return $retval;
            }
        }

        $objTest = new Test();
        $objTest->readnumber(120, 0);
        ?>
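    A likely diagnosis (mine, not from the thread): inside a class, the recursive calls must be written as $this->readnumber(...) rather than the bare readnumber(...), which PHP resolves as a global function and fails to find. The "negative" branch also uses + where PHP string concatenation needs the . operator, and the result of the final call is never echoed.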

    Read the article

  • SELECT INTO or Stored Procedure?

    - by Kerry
    Would this be better as a stored procedure, or should I leave it as is?

        INSERT INTO `user_permissions` (
            `user_id`, `object_id`, `type`, `view`, `add`,
            `edit`, `delete`, `admin`, `updated_by_user_id`
        )
        SELECT `user_id`, $object_id, '$type', 1, 1, 1, 1, 1, $user_id
        FROM `user_permissions`
        WHERE `object_id` = $object_id_2
            AND `type` = '$type_2'
            AND `admin` = 1

    You can think of this with different objects; let's say you have groups and subgroups. If someone creates a subgroup, this makes everyone who had access to the parent group also have access to the subgroup. I've never made a stored procedure before, but this looks like it might be time. This will probably be called very often. Should I create a procedure, or will the performance difference be insignificant?
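    For comparison, a minimal sketch of the same statement wrapped in a stored procedure (procedure and parameter names are illustrative). The main win is parameterization rather than raw speed, since a single INSERT ... SELECT is already one round trip either way:

        DELIMITER //
        CREATE PROCEDURE copy_admin_permissions(
            IN p_object_id INT, IN p_type VARCHAR(32),
            IN p_src_object_id INT, IN p_src_type VARCHAR(32),
            IN p_user_id INT)
        BEGIN
            INSERT INTO user_permissions
                (user_id, object_id, type, `view`, `add`, `edit`, `delete`, `admin`, updated_by_user_id)
            SELECT user_id, p_object_id, p_type, 1, 1, 1, 1, 1, p_user_id
            FROM user_permissions
            WHERE object_id = p_src_object_id AND type = p_src_type AND `admin` = 1;
        END //
        DELIMITER ;

        -- usage: CALL copy_admin_permissions(42, 'subgroup', 7, 'group', 99);

    Passing values as procedure parameters also avoids interpolating $object_id and friends directly into the SQL string, which is worth doing for injection safety regardless of performance.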

    Read the article

  • Zend_Table_Db and Zend_Paginator num rows

    - by Uffo
    I have the following query:

        $this->select()
             ->where("`name` LIKE ?", '%'.mysql_escape_string($name).'%')

    Now I have the Zend_Paginator code:

        $paginator = new Zend_Paginator(
            // $d is an instance of Zend_Db_Select
            new Zend_Paginator_Adapter_DbSelect($d)
        );
        $paginator->getAdapter()->setRowCount(200);
        $paginator->setItemCountPerPage(15)
                  ->setPageRange(10)
                  ->setCurrentPageNumber($pag);
        $this->view->data = $paginator;

    As you see, I'm passing the data to the view using $this->view->data = $paginator. Before I added $paginator->getAdapter()->setRowCount(200); I could determine whether the query had any results: if it did, I showed them to the user; if not, I showed a message (No results!). But at the moment I can't determine this, since count($paginator) no longer works because of $paginator->getAdapter()->setRowCount(200); and I'm using that call because it takes about 7 seconds for Zend_Paginator to count the page numbers. So how can I find out whether my query has any results?

    Read the article

  • Date range/query problem..

    - by Simon
    I'm hoping someone can help me out a bit with date ranges... I have a table with 3 fields: id, datestart, dateend. I need to query this to find out whether a pair of dates from a form is conflicting, i.e.:

        table entry 1: 2010-12-01, 2010-12-09
        from the form: 2010-12-08, 2010-12-15

        select id from date_table where '2010-12-02' between datestart and dateend;

    That returns the id that I want, but what I would like to do is take the date range from the form and run a similar query that uses both form dates (2010-12-08, 2010-12-15) and checks the table to ensure there are no conflicting date ranges. I'm sat scratching my head with the problem... TIA
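    A sketch of the standard overlap test (the ? placeholders are for a prepared statement): two ranges conflict exactly when each one starts on or before the day the other ends, so both form dates fit in a single query:

        -- bind order: the form's end date first, then its start date
        SELECT id
        FROM date_table
        WHERE datestart <= ?   -- existing range starts before the form range ends
          AND dateend   >= ?;  -- and ends after the form range starts

    Any row returned is a conflicting range; zero rows means the requested dates are free.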

    Read the article

  • MySQL multiple dependent subqueries, painfully slow

    - by matt80
    I have a working query that retrieves the data that I need, but unfortunately it is painfully slow (it runs over 3 minutes). I have indexes in place, but I think the problem is the multiple dependent subqueries. I've been trying to rewrite the query using joins, but I can't seem to get it to work. Any help would be greatly appreciated.

    The tables: basically, I have 2 tables. The first (prices) holds the prices of items in a store. Each row is the price of an item that day, and new rows are added every day with an updated price. The second table (watches_US) holds the item information (name, description, etc).

        CREATE TABLE `prices` (
            `prices_id` int(11) NOT NULL auto_increment,
            `prices_locale` enum('CA','DE','FR','JP','UK','US') NOT NULL default 'US',
            `prices_watches_ID` char(10) NOT NULL,
            `prices_date` datetime NOT NULL,
            `prices_am` varchar(10) default NULL,
            `prices_new` varchar(10) default NULL,
            `prices_used` varchar(10) default NULL,
            PRIMARY KEY (`prices_id`),
            KEY `prices_am` (`prices_am`),
            KEY `prices_locale` (`prices_locale`),
            KEY `prices_watches_ID` (`prices_watches_ID`),
            KEY `prices_date` (`prices_date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=61764 ;

        CREATE TABLE `watches_US` (
            `watches_ID` char(10) NOT NULL,
            `watches_date_added` datetime NOT NULL,
            `watches_last_update` datetime default NULL,
            `watches_title` varchar(255) default NULL,
            `watches_small_image_height` int(11) default NULL,
            `watches_small_image_width` int(11) default NULL,
            `watches_description` text,
            PRIMARY KEY (`watches_ID`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    The query retrieves the last 10 price changes over a period of 30 hours, ordered by the size of the price change. So I have subqueries to get the newest price, the oldest price within 30 hours, and then to calculate the price change. Here's the query:

        SELECT watches_US.*, prices.*, watches_US.watches_ID as current_ID,
            ( SELECT prices_am FROM prices
              WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
              ORDER BY prices_date DESC LIMIT 1
            ) as new_price,
            ( SELECT prices_date FROM prices
              WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
              ORDER BY prices_date DESC LIMIT 1
            ) as new_price_date,
            ( SELECT prices_am FROM prices
              WHERE ( prices_watches_ID = current_ID AND prices_locale = 'US')
                AND ( prices_date >= DATE_SUB(new_price_date,INTERVAL 30 HOUR) )
              ORDER BY prices_date ASC LIMIT 1
            ) as old_price,
            ( SELECT ROUND(((new_price - old_price)/old_price)*100,2) ) as percent_change,
            ( SELECT (new_price - old_price) ) as absolute_change
        FROM watches_US
        LEFT OUTER JOIN prices ON prices.prices_watches_ID = watches_US.watches_ID
        WHERE ( prices_locale = 'US' ) AND ( prices_am IS NOT NULL ) AND ( prices_am != '' )
        HAVING ( old_price IS NOT NULL ) AND ( old_price != 0 ) AND ( old_price != '' )
            AND ( absolute_change < 0 ) AND ( prices.prices_date = new_price_date )
        ORDER BY absolute_change ASC
        LIMIT 10

    How would I rewrite this to use joins instead, or otherwise optimize it so it doesn't take over 3 minutes to get a result? Any help would be greatly appreciated! Thank you kindly.
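    A sketch of a join-based shape (a starting point under assumptions, not a drop-in replacement; the derived-table aliases are mine): compute each watch's newest price date once with GROUP BY and join back for the new price, leaving a single correlated lookup for the oldest in-window price instead of five:

        SELECT w.*, pn.prices_am AS new_price, pn.prices_date AS new_price_date,
               po.prices_am AS old_price,
               (pn.prices_am - po.prices_am) AS absolute_change
        FROM watches_US w
        JOIN ( SELECT prices_watches_ID, MAX(prices_date) AS new_date
               FROM prices
               WHERE prices_locale = 'US' AND prices_am IS NOT NULL AND prices_am != ''
               GROUP BY prices_watches_ID ) n ON n.prices_watches_ID = w.watches_ID
        JOIN prices pn ON pn.prices_watches_ID = n.prices_watches_ID
                      AND pn.prices_locale = 'US' AND pn.prices_date = n.new_date
        JOIN prices po ON po.prices_watches_ID = n.prices_watches_ID
                      AND po.prices_locale = 'US'
                      AND po.prices_date = ( SELECT MIN(p2.prices_date) FROM prices p2
                                             WHERE p2.prices_watches_ID = n.prices_watches_ID
                                               AND p2.prices_locale = 'US'
                                               AND p2.prices_date >= n.new_date - INTERVAL 30 HOUR )
        WHERE po.prices_am != '' AND po.prices_am != 0
          AND pn.prices_am - po.prices_am < 0
        ORDER BY absolute_change
        LIMIT 10;

    A composite index on (prices_watches_ID, prices_locale, prices_date) would serve both the GROUP BY and the lookups; storing prices_am as a numeric type instead of varchar would also let the comparisons use indexes cleanly.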

    Read the article

  • mysql_connect VS mysql_pconnect

    - by rogeriopvl
    I have a question; I've searched the web, and the answers seem to vary. Is it better to use mysql_pconnect over mysql_connect when connecting to a database via PHP? I read that pconnect scales much better, but on the other hand, being a persistent connection... having 10,000 connections at the same time, all persistent, doesn't seem scalable to me. Thanks in advance.

    Read the article

  • MySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe MySQL needs further configuration is that New Relic has shown our SELECT queries taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted MySQL earlier after tweaking some settings, so the results here may not be 100% accurate):

        >>  MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492

        -------- Security Recommendations -------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows...

        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        join_buffer_size = 128M
        # for some nightly processes client sessions set the join buffer to 8 GB
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384)
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_file_per_table
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site, because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!

    Read the article

  • Practical mysql schema advice for eCommerce store - Products & Attributes

    - by Gravy
    I am currently planning my first eCommerce application (MySQL & Laravel Framework). I have various products, which all have different attributes. Describing products very simply: some will have a manufacturer, some will not; some will have a diameter, others a width, height and depth, and others a volume.

        Option 1: Create a master products table, and separate tables for specific product types (polymorphic relations). That way, I will not have any unnecessary null fields in the products table.
        Option 2: Create a products table with all possible fields, despite the fact that there will be a lot of null values.
        Option 3: Normalise so that each attribute type has its own table.
        Option 4: Create an attributes table, as well as an attribute_values table with the value stored as varchar regardless of the actual data type. The products table would have a many:many relationship with the attributes table.
        Option 5: Put attributes common to all or most products in the products table, and attach attributes specific to a particular category of product to the categories table.

    My thoughts are that I would like to allow easy product filtering and sorting by these attributes. I also want the frontend to be fast; I'm less concerned about the performance of inserting and updating product records. I'm a bit overwhelmed by the vast implementation options and cannot find a suitable answer on the best approach. Could somebody point me in the right direction? In an ideal world, I would like to offer the kind of functionality seen at http://www.glassesdirect.co.uk/products/ in my eCommerce store. As can be seen in the sidebar there, you can select an attribute of the glasses to filter them, e.g. male / female or plastic / metal / titanium etc... Alternatively, should I just dump the MySQL relational database idea and learn MongoDB?
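    A minimal sketch of option 4 (entity-attribute-value) with illustrative names, to make the shape concrete; a products table is assumed to exist:

        CREATE TABLE attributes (
            attribute_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name         VARCHAR(64) NOT NULL,        -- e.g. 'diameter', 'manufacturer'
            UNIQUE KEY uq_name (name)
        );

        CREATE TABLE product_attribute_values (
            product_id   INT UNSIGNED NOT NULL,
            attribute_id INT UNSIGNED NOT NULL,
            value        VARCHAR(255) NOT NULL,       -- stored as text regardless of real type
            PRIMARY KEY (product_id, attribute_id),
            KEY idx_attr_value (attribute_id, value)  -- supports sidebar filtering
        );

        -- "all male, metal glasses": one join per filtered attribute
        SELECT p.product_id
        FROM products p
        JOIN product_attribute_values g ON g.product_id = p.product_id
        JOIN attributes ga ON ga.attribute_id = g.attribute_id AND ga.name = 'gender'
        JOIN product_attribute_values m ON m.product_id = p.product_id
        JOIN attributes ma ON ma.attribute_id = m.attribute_id AND ma.name = 'material'
        WHERE g.value = 'male' AND m.value = 'metal';

    The trade-off: filtering costs one extra join per selected attribute, and type-aware sorting needs casts, which is why many stores keep common typed columns on the products table (option 5) and use EAV only for the long tail of attributes.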

    Read the article

  • MySQL – Export the Resultset to CSV file

    - by Pinal Dave
    In SQL Server, you can use the BCP command to export a result set to a CSV file. In MySQL too, you can export data from a table or result set as a CSV file in many ways. Here are two methods.

    Method 1: Make use of Workbench

    If you are using Workbench as a querying tool, you can make use of its Export option in the result window. Run the following code in Workbench:

        SELECT db_names FROM mysql_testing;

    The result will be shown in the result window. There is an option called "File". Click on it and it will prompt you with a window to save the result set (screen shot attached to show how the File option can be used). Choose the directory and type out the name of the file.

    Method 2: Make use of the OUTFILE command

    You can do the export using a query with the OUTFILE command, as shown below:

        SELECT db_names FROM mysql_testing
        INTO OUTFILE 'C:/testing.csv'
        FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"'
        LINES TERMINATED BY '\r\n';

    After the execution of the above code, you can find a file named testing.csv in the C drive of the server.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL Tagged: CSV

    Read the article

  • MySQL – Introduction to CONCAT and CONCAT_WS functions

    - by Pinal Dave
    MySQL supports two types of concatenation functions: CONCAT and CONCAT_WS.

    The CONCAT function simply concatenates all the argument values as they are:

        SELECT CONCAT('Television','Mobile','Furniture');

    The above code returns the following:

        TelevisionMobileFurniture

    If you want to concatenate them with a comma, either you need to specify the comma at the end of each value, or pass the comma as an argument along with the values:

        SELECT CONCAT('Television,','Mobile,','Furniture');
        SELECT CONCAT('Television',',','Mobile',',','Furniture');

    Both of the above return the following:

        Television,Mobile,Furniture

    However, you can avoid the extra work by using the CONCAT_WS function. It stands for Concatenate With Separator. It is very similar to the CONCAT function, but accepts the separator as the first argument:

        SELECT CONCAT_WS(',','Television','Mobile','Furniture');

    The result is:

        Television,Mobile,Furniture

    If you want a pipe as the separator, you can use:

        SELECT CONCAT_WS('|','Television','Mobile','Furniture');

    The result is:

        Television|Mobile|Furniture

    So CONCAT_WS is very flexible in concatenating values along with a separator.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL

    Read the article

  • PHP Can't connect to MySQL on the production server

    - by Jairo Santos
    I'm having problems with MySQL connections from a PHP script. The MySQL user is root, and I added grants to root@'%' so I can connect from anywhere. Let's assume my MySQL host is "bigboy.com.br". The funny part is that from my local machine, on my test server, the script can connect to the MySQL server normally. But on the dedicated server where MySQL is running, the same PHP script gives me an "Access denied for 'root'@'bigboy.com.br'" error.
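    A hedged diagnostic sketch (my suggestion, not from the question): when connecting from the server itself, MySQL can match a more specific account entry, such as an anonymous user for that hostname, before root@'%'. Listing the accounts usually reveals the culprit, and an explicit grant for the hostname the server resolves to works around it; the password below is a placeholder:

        -- run as root from a working connection
        SELECT user, host FROM mysql.user ORDER BY host, user;   -- look for ''@'bigboy.com.br' or similar
        -- explicit entry for the hostname the connection resolves to:
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'bigboy.com.br' IDENTIFIED BY 'the_root_password';
        FLUSH PRIVILEGES;

    Dropping the anonymous accounts, if present, is the other common fix, since they shadow wildcard-host users during authentication.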

    Read the article

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch, following this guide. Details below:

        mkdir /usr/local/mysqlc732/
        cd /usr/local/src/mysql-cluster-gpl-7.3.2
        cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON
        make
        make install

    Everything works fine until this step:

        # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n
        FLEXASYNCH - Starting normal mode
        Perform benchmark of insert, update and delete transactions
        1 number of concurrent threads
        80 number of parallel operation per thread
        100 transaction(s) per round
        2 iterations
        Load Factor is 80%
        25 attributes per table
        1 is the number of 32 bit words per attribute
        Tables are with logging
        Transactions are executed with hint provided
        No force send is used, adaptive algorithm used
        Key Errors are disallowed
        Temporary Resource Errors are allowed
        Insufficient Space Errors are disallowed
        Node Recovery Errors are allowed
        Overload Errors are allowed
        Timeout Errors are allowed
        Internal NDB Errors are allowed
        User logic reported Errors are allowed
        Application Errors are disallowed
        Using table name TAB0
        NDBT_ProgramExit: 1 - Failed

    ndb_cluster.log:

        WARNING -- Failed to allocate nodeid for API at 127.0.0.1.
        Returned error: 'No free node id found for mysqld(API).'

    I have also recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in debug mode?

        # /usr/local/mysqlc732/bin/flexAsynch -h
        FLEXASYNCH
        Perform benchmark of insert, update and delete transactions
        Arguments:
            -t               Number of threads to start, default 1
            -p               Number of parallel transactions per thread, default 32
            -o               Number of transactions per loop, default 500
            -l               Number of loops to run, default 1, 0=infinite
            -load_factor     Load factor in index in percent (40 -> 99)
            -a               Number of attributes, default 25
            -c               Number of operations per transaction
            -s               Size of each attribute, default 1 (PK is always of size 1, independent of this value)
            -simple          Use simple read to read from database
            -dirty           Use dirty read to read from database
            -write           Use writeTuple in insert and update
            -n               Use standard table names
            -no_table_create Don't create tables in db
            -temp            Create table(s) without logging
            -no_hint         Don't give hint on where to execute transaction coordinator
            -adaptive        Use adaptive send algorithm (default)
            -force           Force send when communicating
            -non_adaptive    Send at a 10 millisecond interval
            -local           1 = each thread its own node, 2 = round robin on node per parallel trans, 3 = random node per parallel trans
            -ndbrecord       Use NDB Record
            -r               Number of extra loops
            -insert          Only run inserts on standard table
            -read            Only run reads on standard table
            -update          Only run updates on standard table
            -delete          Only run deletes on standard table
            -create_table    Only run Create Table of standard table
            -drop_table      Only run Drop Table on standard table
            -warmup_time     Warmup time before measurement starts
            -execution_time  Execution time where measurement is done
            -cooldown_time   Cooldown time after measurement completed
            -table           Number of standard table, default 0

    Read the article

  • How can I centralise MySQL data between 3 or more geographically separate servers?

    - by Andy Castles
    To explain the background to the question: we have a home-grown PHP application (for running online language-learning courses) running on a Linux server and using MySQL on localhost for saving user data (e.g. results of tests taken, marks for submitted work, time spent on different pages in the courses, etc). As we have students in different geographic locations, we currently have 3 virtual servers hosted close to those locations (Spain, UK and Hong Kong), and users are added to the server closest to them (they access via different URLs, e.g. europe.domain.com, uk.domain.com and asia.domain.com).

    This works, but it is an administrative nightmare, as we have to remember which server a particular user is on, and users can only connect to one server. We would like to somehow centralise the information so that all users are visible on any of the servers and users could connect to any of the 3 servers. The question is what method we should use to implement this. It must be an issue that lots of people have encountered, but I haven't found anything conclusive after a fair bit of Googling around. The closest I have seen to solutions are: something like master-master replication, but I have read so many posts suggesting that this is not a good idea, as things like auto_increment fields can break; and circular replication, which sounded perfect, but to quote from O'Reilly's High Performance MySQL, "In general, rings are brittle and best avoided". We're not against rewriting code in the application to make it work with whatever solution is required, but I am not sure if replication is the correct thing to use. Thanks, Andy

    P.S. I should add that we experimented with writes to a central database combined with reads from a local database, but the response time between the different servers for writing was pretty bad. It's also important that written data is available immediately for reading, so if replication is too slow this could cause out-of-date data to be returned.

    Edit: I have been thinking about writing my own rudimentary replication script. It would involve giving each user a server ID to say which is his "home server"; e.g. users in Asia would be marked as having the Hong Kong server as their own. The replication scripts (PHP scripts set to run as cron jobs reasonably frequently, e.g. every 15 minutes or so) would run independently on each of the servers in the system. They would go through the database and distribute any information about users whose "home server" is the server the script is running on to all of the other databases in the system. They would also need to pull in any new information added to the other databases in the system for users whose "home server" flag is the server where the script is running. I would need to work out the details and build in logic to deal with conflicts, but I think it would be possible. However, I wanted to make sure there is not already a correct solution for this out there, as it seems like a problem many people must have come across.

    Read the article

  • Symlink error when installing MySQL via Homebrew

    - by Asad Syed
    Trying to install MySQL via Homebrew. The install seems to work fine, but I get an error:

        Error: The linking step did not complete successfully
        The formula built, but is not symlinked into /usr/local
        You can try again using `brew link mysql'

    Naturally, after this I ran:

        brew link mysql

    which spat out:

        Error: Could not symlink file: /usr/local/Cellar/mysql/5.5.20/include/typelib.h
        /usr/local/include is not writable. You should change its permissions.

    So I ran it with sudo and got a "cowardly refusing to brew link mysql".

    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >