Search Results

Search found 14017 results on 561 pages for 'mysql binlog'.

Page 113/561 | < Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >

  • Slow MySQL query....only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs in under a second, and other times takes 1 to 10 minutes. Here's the entry from the slow query log:

        # Query_time: 543  Lock_time: 0  Rows_sent: 0  Rows_examined: 124948974
        use statsdb;
        SELECT count(distinct Visits.visitorid) as 'uniques'
        FROM Visits, Visitors
        WHERE Visits.visitorid = Visitors.visitorid
          AND candidateid in (32)
          AND visittime >= 1275721200 AND visittime <= 1275807599
          AND (omit = 0 or omit >= 1275807599)
          AND Visitors.segmentid = 9
          AND Visits.visitorid NOT IN (
              SELECT Visits.visitorid
              FROM Visits, Visitors
              WHERE Visits.visitorid = Visitors.visitorid
                AND candidateid in (32)
                AND visittime < 1275721200
                AND (omit = 0 or omit >= 1275807599)
                AND Visitors.segmentid = 9);

    It's basically counting unique visitors, and it does that by counting the visitors for today and then subtracting those that have been here before. If you know of a better way to do this, let me know. I just don't understand why it can sometimes be so quick and other times take so long - even with the exact same query under the same server load.

    Here's the EXPLAIN on this query. As you can see, it's using the indexes I've set up:

        id  select_type         table     type    possible_keys                  key                  key_len  ref                       rows   Extra
        1   PRIMARY             Visits    range   visittime_visitorid,visitorid  visittime_visitorid  4        NULL                      82500  Using where; Using index
        1   PRIMARY             Visitors  eq_ref  PRIMARY,cand_visitor_omit      PRIMARY              8        statsdb.Visits.visitorid  1      Using where
        2   DEPENDENT SUBQUERY  Visits    ref     visittime_visitorid,visitorid  visitorid            8        func                      1      Using where
        2   DEPENDENT SUBQUERY  Visitors  eq_ref  PRIMARY,cand_visitor_omit      PRIMARY              8        statsdb.Visits.visitorid  1      Using where

    I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time, since 90% of the time the old query returned much quicker. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods.

    Could the quick behavior be due to the query being saved in the query cache? I tried running RESET QUERY CACHE and FLUSH TABLES between my benchmark tests and I was still getting quick results most of the time.

    Note: last night while running the query I got an error: "Unable to save result set". My initial research suggests that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing?

    In case you want server info:

        Accessing via PHP 4.4.4
        MySQL 4.1.22
        All tables are InnoDB
        We run OPTIMIZE TABLE on all tables weekly
        The sum of both the tables used in the query is 500 MB

    MySQL config:

        key_buffer = 350M
        max_allowed_packet = 16M
        thread_stack = 128K
        sort_buffer = 14M
        read_buffer = 1M
        bulk_insert_buffer_size = 400M
        set-variable = max_connections=150
        query_cache_limit = 1048576
        query_cache_size = 50777216
        query_cache_type = 1
        tmp_table_size = 203554432
        table_cache = 120
        thread_cache_size = 4
        wait_timeout = 28800
        skip-external-locking
        innodb_file_per_table
        innodb_buffer_pool_size = 3512M
        innodb_log_file_size=100M
        innodb_log_buffer_size=4M
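
    A sketch of one possible rewrite, since dependent NOT IN subqueries are often the unpredictable part on MySQL 4.1: the subquery is replaced with a LEFT JOIN anti-join against earlier visits. It assumes candidateid and omit live on Visitors (as the cand_visitor_omit index suggests) and that there is one Visitors row per visitorid; the literal timestamps are the ones from the question:

        SELECT COUNT(DISTINCT Visits.visitorid) AS uniques
        FROM Visits
        JOIN Visitors
          ON Visitors.visitorid = Visits.visitorid
        LEFT JOIN Visits AS PriorVisits
          ON PriorVisits.visitorid = Visits.visitorid
         AND PriorVisits.visittime < 1275721200
        WHERE Visitors.candidateid IN (32)
          AND Visits.visittime BETWEEN 1275721200 AND 1275807599
          AND (Visitors.omit = 0 OR Visitors.omit >= 1275807599)
          AND Visitors.segmentid = 9
          AND PriorVisits.visitorid IS NULL;

    Whether this beats the original depends on the data, but its plan avoids a dependent subquery re-executed per row, which tends to make run times more consistent.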

    Read the article

  • mySQL 1046 error when importing wordpress database

    - by j-man86
    I'm moving a locally developed WordPress site to a client's server, so I'm trying to export the local database and import it on the server. I exported the .sql file according to the instructions at http://codex.wordpress.org/Backing_Up_Your_Database but I keep getting this error when importing:

        DROP TABLE IF EXISTS `wp_commentmeta`;
        MySQL said: #1046 - No database selected

    Any help very much appreciated. Thanks!
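
    The error suggests the dump never selects a database before running its statements. A minimal sketch of one fix, assuming the target database is created first (the name wordpress_db below is a placeholder for the client's actual database):

        -- add at the very top of the dump file, or run before importing it:
        CREATE DATABASE IF NOT EXISTS wordpress_db;
        USE wordpress_db;

    If the import goes through phpMyAdmin, selecting the target database before running the import achieves the same thing.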

    Read the article

  • MySQL: What's the best to use, Unix TimeStamp Or DATETIME

    - by Axel
    Hello, probably many coders want to ask this question: what are the advantages of each of these MySQL time formats, and which one would you prefer to use in your apps? For me, I use Unix timestamps, maybe because I find it easy to convert and order records with them, and also because I have never tried DATETIME. But anyway, I'm ready to change my mind if anyone tells me I'm wrong. Thanks
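
    A minimal sketch comparing the two approaches (the table and column names are made up for illustration). DATETIME stores a calendar value directly and works with MySQL's date functions, while an integer Unix timestamp is compact and timezone-agnostic; FROM_UNIXTIME() and UNIX_TIMESTAMP() convert between them:

        CREATE TABLE events (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            created_ts INT UNSIGNED,   -- Unix timestamp: seconds since 1970-01-01 UTC
            created_dt DATETIME        -- DATETIME: human-readable, usable directly in date functions
        );

        SELECT FROM_UNIXTIME(created_ts) AS ts_as_datetime,
               UNIX_TIMESTAMP(created_dt) AS dt_as_timestamp
        FROM events;

    Note that MySQL's native TIMESTAMP column type is limited to the 1970-2038 range, whereas DATETIME covers years 1000-9999.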

    Read the article

  • Question regarding MySQL indices and their functionality

    - by user281434
    Hi, say I have an ordinary table in my db like so:

        ----------------------------
        | id | username | password |
        ----------------------------
        | 24 | blah     | blah     |
        ----------------------------

    A primary key is assigned to the id column. Now when I run a MySQL query like this:

        SELECT id FROM table WHERE username = 'blah' LIMIT 1

    Does that primary key index even help? If I am telling it to match usernames, then shouldn't the username column be indexed instead? Thanks for your time
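
    A sketch of the index this lookup would actually use (the table name users and the index names are placeholders for the question's real table):

        -- index the column used in the WHERE clause:
        ALTER TABLE users ADD INDEX idx_username (username);

        -- or a composite index so the query can be answered from the index alone:
        ALTER TABLE users ADD INDEX idx_username_id (username, id);

    With InnoDB, a secondary index already carries the primary key, so the first index alone lets "SELECT id ... WHERE username = ..." be served entirely from the index.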

    Read the article

  • APE engine Mysql push data to channel on insert

    - by Fotis
    Hello, I am working with APE Engine (http://www.ape-project.org) and up until now I had no actual problem. The problem is that I would like to use the MySQL module and push data to a channel each time a row is inserted into a table. I've tried to set up a server-side module and created an SQL query, but the data is fetched only when the server boots. How can I make this work?
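
    One hedged sketch, since the question reports the query running only at boot: instead of a one-off fetch, the server-side module could poll for rows newer than the last id it has already pushed. The table and column names below are placeholders, and @last_seen_id stands in for whatever the module remembers between polls:

        SET @last_seen_id = 0;   -- in practice, the id remembered from the previous poll

        SELECT id, payload
        FROM channel_events
        WHERE id > @last_seen_id
        ORDER BY id ASC;

    Each batch of returned rows would then be pushed to the channel and @last_seen_id advanced to the highest id seen.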

    Read the article

  • mysql insert data from multiple select queries

    - by daulex
    What I've got working, and what I need to improve on:

        INSERT form_data (id, data_id, email)
        SELECT fk_form_joiner_id AS data_id, value AS email
        FROM wp_contactform_submit_data
        WHERE form_key = 'your-email'

    This just gets the emails. That's great, but not enough, as I have a good few different values of form_key that I need to import into different columns. I'm aware that I can do it via PHP using foreach loops and updates, but this needs to be done purely in MySQL. So how do I do something like:

        insert form_data(id, data, email, name, surname, etc)
        Select [..], Select [..] ....

    Please help
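
    A sketch using conditional aggregation to pivot one row per submission. It assumes each submission is identified by fk_form_joiner_id, that form_data.id is auto-generated, and that form_key values such as 'your-name' and 'your-surname' exist; the keys and target columns would need to match the real form:

        INSERT INTO form_data (data_id, email, name, surname)
        SELECT fk_form_joiner_id,
               MAX(CASE WHEN form_key = 'your-email'   THEN value END) AS email,
               MAX(CASE WHEN form_key = 'your-name'    THEN value END) AS name,
               MAX(CASE WHEN form_key = 'your-surname' THEN value END) AS surname
        FROM wp_contactform_submit_data
        GROUP BY fk_form_joiner_id;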

    Read the article

  • MySQL Optimization 20 gig table

    - by user169743
    I have a 20 gig table that has a large number of inserts and updates daily. This table is also frequently searched. I'd like to know if the MySQL indexes can become fragmented and perhaps need to be rebuilt or something similar. I'm finding it difficult to figure out which to use: CHECK TABLE, REPAIR TABLE, or something else? Any guidance appreciated, I'm a db newb.
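
    A sketch of the usual maintenance statements (the table name is a placeholder). CHECK TABLE and REPAIR TABLE are mainly about corruption (REPAIR TABLE only applies to MyISAM); for routine index bloat the tools are ANALYZE TABLE and OPTIMIZE TABLE:

        ANALYZE TABLE big_table;    -- refresh the index statistics used by the optimizer
        OPTIMIZE TABLE big_table;   -- rebuild the table and indexes, reclaiming fragmented space

        -- for InnoDB, OPTIMIZE TABLE maps to a rebuild; an equivalent is:
        ALTER TABLE big_table ENGINE = InnoDB;

    On a 20 GB table a rebuild can take a long time and needs comparable free disk space, so it is usually scheduled in a maintenance window.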

    Read the article

  • Create Chart using PHP-MySQL

    - by Ajith
    I have a MySQL table, request_events, with three fields: request_eventsid, datetime, type. This table tracks all the activities of my website day-wise and also type-wise; type may be 1 or 2. I need to display an open-chart to show the progress, so I need to retrieve the ratio type2/type1, day-wise, as input. How can I get all these inputs for the last 30 days from this table? Please give me some idea... it has already killed my weekend. Please help me.
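
    A sketch of the day-wise ratio for the last 30 days, using the column names from the question (NULLIF guards against days with no type-1 rows):

        SELECT DATE(`datetime`) AS day,
               SUM(type = 2) / NULLIF(SUM(type = 1), 0) AS ratio_type2_to_type1
        FROM request_events
        WHERE `datetime` >= NOW() - INTERVAL 30 DAY
        GROUP BY DATE(`datetime`)
        ORDER BY day;

    The PHP side would then just loop over the rows and feed day/ratio pairs to the charting library.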

    Read the article

  • MySQL SUM Query daily values of a week problem

    - by davykiash
    Am trying to return the sum for each day of a week in MySQL, but it returns nothing despite there being values for the 3rd week of March 2010:

        SELECT SUM(expense_details_amount) AS total
        FROM expense_details
        WHERE YEAR(expense_details_date) = '2010'
          AND MONTH(expense_details_date) = '03'
          AND WEEK(expense_details_date) = '3'
        GROUP BY DAY(expense_details_date)

    How do I go about this?
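
    The likely culprit is that WEEK() returns the week of the year (0-53), not the week of the month, so for mid-March it is around 11 and WEEK(...) = '3' matches nothing. A sketch that filters on an explicit date range instead (the 15th-21st range below is an assumption for "3rd week of March 2010"):

        SELECT DAY(expense_details_date) AS day_of_month,
               SUM(expense_details_amount) AS total
        FROM expense_details
        WHERE expense_details_date >= '2010-03-15'
          AND expense_details_date <  '2010-03-22'
        GROUP BY DAY(expense_details_date);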

    Read the article

  • Best way to add a column in mysql query

    - by PHP-Prabhu
    Can anyone please let me know how to add a column dynamically when executing a MySQL query.

    Table: Table1

        --------------------------
        col1        col2        col3
        --------------------------
        Test        OK          Test3
        Test        OK          Test5
        Test        OK          Test6

    From the above example, I need to introduce "col2" as a new column with its value always being "OK".
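
    A minimal sketch: a constant expression in the select list adds the column to the result set only (it does not change the table definition):

        SELECT col1, 'OK' AS col2, col3
        FROM Table1;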

    Read the article

  • Saving auto increment in MySQL

    - by oshafran
    Hello, I am trying to sync between 2 tables: I have an active table with an auto_increment id, and I have an archive table with the same values. I would like the IDs to be unique across both tables - I mean, I would like to preserve the auto increment so that if I UNION both tables I still have uniqueness. How can I do that? Is there a possibility to save the auto increment counter when MySQL is off?
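
    A sketch of one common arrangement, with placeholder table and column names: archived rows keep the id they were assigned in the active table, and the active table's counter is never reset, so a UNION of the two stays unique. Note that older MySQL versions recompute an InnoDB table's auto_increment from MAX(id) at startup, so after archiving and deleting rows the counter can move backwards; bumping it explicitly is one workaround:

        -- move rows to the archive, carrying their existing ids:
        INSERT INTO archive_table (id, payload)
        SELECT id, payload FROM active_table
        WHERE created_at < '2010-01-01';      -- example cutoff; use your real archiving rule

        DELETE FROM active_table
        WHERE created_at < '2010-01-01';

        -- if needed after a restart, push the counter past both tables' maximum:
        ALTER TABLE active_table AUTO_INCREMENT = 100000;   -- value is a placeholder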

    Read the article

  • Deceptive MySQL Query

    - by jerebear
    So I don't consider myself a novice at MySQL, but this one has me stumped: I have a message board and I want to pull a list of all the most recent posts, grouped by the Thread ID. Here's the table:

        MB_Posts
        - ID
        - Thread_ID
        - Created_On (timestamp)
        - Creator_User (user_id)
        - Subject
        - Contents
        - Edited (timestamp)
        - Reported

    I've tried many different things to keep it simple, but I would like help from the community on this one. Just to kick this out there, this one does not work as expected:

        SELECT * FROM MB_Posts GROUP BY Thread_ID ORDER BY ID DESC
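
    A sketch of the usual greatest-row-per-group pattern, assuming the highest ID in a thread is its most recent post:

        SELECT p.*
        FROM MB_Posts p
        JOIN (
            SELECT Thread_ID, MAX(ID) AS max_id
            FROM MB_Posts
            GROUP BY Thread_ID
        ) latest
          ON latest.Thread_ID = p.Thread_ID
         AND latest.max_id = p.ID
        ORDER BY p.ID DESC;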

    Read the article

  • Extend precision of MySQL's double datatype?

    - by tim82
    I am trying to save the value "6.714285714285714" into a DOUBLE field. Unfortunately it does not fit at all and gets cut by one character, and storing larger numbers becomes even less precise. I've already searched the MySQL manual and it seems that DOUBLE is the most precise data type available. Does anyone know a practicable workaround? Sorry for my bad English, and thanks a lot!
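
    A sketch using DECIMAL, which stores exact decimal digits instead of a binary floating-point approximation (the precision and scale below are examples; DECIMAL allows up to 65 digits in total):

        CREATE TABLE measurements (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            value DECIMAL(20, 16)
        );

        INSERT INTO measurements (value) VALUES (6.714285714285714);

    The trade-off is that DECIMAL arithmetic is slower and the range is fixed by the declared precision, whereas DOUBLE trades exactness for range and speed.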

    Read the article

  • please help me construct this MYSQL Query (date / time)

    - by sebb
    Hi there, I would like to construct a query that fetches results that occurred between now and 15 minutes ago, but I'm getting a MySQL error when I try the following. Can you help me? Thanks

        SELECT * WHERE user_id = '000'
        AND date_time < now()
        AND date_time > DATE_SUB(now(), INTERVAL 15 MINUTE)
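
    The error is most likely the missing FROM clause. A sketch with the clause added (the table name events is a placeholder for the real table):

        SELECT *
        FROM events
        WHERE user_id = '000'
          AND date_time >  DATE_SUB(NOW(), INTERVAL 15 MINUTE)
          AND date_time <= NOW();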

    Read the article

  • mysql custom sorting first alpha then numeric using case when

    - by Nizzy
    How can you sort a query using ORDER BY CASE WHEN ... REGEXP, or some other alternative? I don't want to use UNION. Thank you.

        mysql> SELECT `floor_id`, `floor_number` FROM `floors`;
        +----------+--------------+
        | floor_id | floor_number |
        +----------+--------------+
        |        1 | 4            |
        |        2 | 7            |
        |        3 | G            |
        |        4 | 19           |
        |        5 | B            |
        |        6 | 3            |
        |        7 | A            |
        +----------+--------------+

    Expected result:

        +----------+--------------+
        | floor_id | floor_number |
        +----------+--------------+
        |        7 | A            |
        |        5 | B            |
        |        3 | G            |
        |        6 | 3            |
        |        1 | 4            |
        |        2 | 7            |
        |        4 | 19           |
        +----------+--------------+
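
    A sketch that sorts non-numeric floors first (alphabetically) and numeric floors after, by numeric value:

        SELECT floor_id, floor_number
        FROM floors
        ORDER BY
            CASE WHEN floor_number REGEXP '^[0-9]+$' THEN 1 ELSE 0 END,
            CASE WHEN floor_number REGEXP '^[0-9]+$' THEN CAST(floor_number AS UNSIGNED) END,
            floor_number;

    The first key separates letters from digits, the second orders the digit rows numerically (it is NULL for the letter rows), and the third breaks the remaining ties alphabetically.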

    Read the article

  • python popen and mysql import

    - by khelll
    I'm doing the following:

        from subprocess import PIPE
        from subprocess import Popen

        file = 'dump.sql.gz'
        p1 = Popen(["gzip", "-cd", file], stdout=PIPE)
        print "Importing temporary file %s" % file
        p2 = Popen(["mysql", "--default-character-set=utf8", "--user=root",
                    "--password=something", "--host=localhost", "--port=3306",
                    'my_db'], stdin=p1.stdout, stdout=PIPE, stderr=PIPE)
        err = p1.communicate()[1]
        if err: print err
        err = p2.communicate()[1]
        if err: print err

    But the db is not being populated. No errors are shown; I have also checked p1.stdout and it has the file contents. Any ideas?

    Read the article
