Search Results

Search found 14017 results on 561 pages for 'mysql binlog'.

Page 209/561 | < Previous Page | 205 206 207 208 209 210 211 212 213 214 215 216  | Next Page >

  • CakePHP 'Cake is NOT able to connect to the database'

    - by kand
    I've looked at this post about a similar issue: CakePHP: Can't access MySQL database. I've tried everything mentioned there, including:

      - changing my database.php so that the 'port' attribute for both $default and $test points to the location of my mysqld.sock file;
      - changing the 'port' attribute to the actual integer that represents the port in my my.cnf MySQL config;
      - changing the MySQL socket locations in php.ini to the location of my mysqld.sock file.

    I'm using Ubuntu 11.04, Apache 2.2.17, MySQL 5.1.54, and CakePHP 1.3.10. My installs of MySQL and Apache don't seem to match any conventions; all the config files are there, they are just in really weird places. I'm not sure why that is, and I've tried reinstalling both programs multiple times with the same results. At any rate, I can log into MySQL from the terminal and use it normally, and Apache is working because I can see the CakePHP default homepage. I just can't get it to change the message 'Cake is NOT able to connect to the database'.

    SOLVED: Figured it out. I had to change php.ini so that extension_dir pointed to the correct directory, and had to add the line extension=mysql.so.
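    For anyone hitting the same wall, here is a minimal sketch of the fix described above. The extension directory varies per install, so the path below is an assumption; use php-config (from the PHP dev package) to find the real one:

        # find where compiled PHP extensions actually live on this machine
        php-config --extension-dir

        # then, in php.ini (the path below is an example, not a known-good value):
        #   extension_dir = "/usr/lib/php5/20090626"
        #   extension = mysql.so

        # restart Apache so the change takes effect
        sudo service apache2 restart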

    Read the article

  • Dynamically change MYSQL query within a PHP file using jQuery .post?

    - by John
    Hi, I've been trying this for quite a while now and I need help. Basically, I have a PHP file that queries the database, and I want to change the query based on the logged-in user's name. On my site, a user logs in with Twitter OAuth and I can display their details (Twitter username etc.). The user has added information to a database, and what I would like to happen is this: when the user logs in with Twitter OAuth, jQuery takes the user's username and updates the MySQL query so it returns only the rows where user_name matches that particular user. At the moment the MySQL query is:

        SELECT * FROM markers WHERE user_name = 'dave'

    I've tried something like:

        SELECT * FROM markers WHERE user_name = '$user_name'

    with $user_name = $_POST['user_name']; elsewhere in the PHP file. In a separate file (the one the user is redirected to after logging in through Twitter) I have some jQuery like this ($profile_name has been defined earlier on that page):

        $(document).ready(function() {
            $.post('phpsqlinfo_resultb.php', {user_name: "<?PHP echo $profile_name?>"});
        });

    I know I'm clearly doing something wrong; I'm still learning. Is there a way to use jQuery to post the username to the PHP file, so that the MySQL query only returns the results for the user who is logged in? I've included the PHP file with the query below:

        <?php
        // create a new XML document
        $doc = new DomDocument('1.0');
        $root = $doc->createElement('markers');
        $root = $doc->appendChild($root);

        $table_id = 'marker';
        $user_name = $_POST['user_name'];

        // make a MySQL connection
        include("phpsqlinfo_addrow.php");
        $result = mysql_query("SELECT * FROM markers WHERE user_name = '$user_name'")
            or die(mysql_error());

        // emit one XML node per row
        header('Content-type: text/xml; charset=utf-8');
        while ($row = mysql_fetch_assoc($result)) {
            $occ = $doc->createElement($table_id);
            $occ = $root->appendChild($occ);
            $occ->setAttribute('lat', $row['lat']);
            $occ->setAttribute('lng', $row['lng']);
            $occ->setAttribute('type', $row['type']);
            $occ->setAttribute('user_name', utf8_encode($row['user_name']));
            $occ->setAttribute('name', utf8_encode($row['name']));
            $occ->setAttribute('tweet', utf8_encode($row['tweet']));
            $occ->setAttribute('image', utf8_encode($row['image']));
        }

        echo $doc->saveXML();
        ?>

    This is for a Google Maps mashup I'm trying to build. Many thanks if you can help me; if my question isn't clear enough, please say so and I'll clarify. I'm sure this is a simple fix, I'm just relatively inexperienced, and I've been at this for two days and am running out of time, unfortunately.
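    Before touching the jQuery, it can help to confirm the PHP endpoint does the right thing when POSTed to directly, independent of the OAuth layer; a quick sketch from the command line (the URL and username are taken from the question, adjust to your setup):

        # POST a username to the endpoint and inspect the XML that comes back
        curl -d 'user_name=dave' http://localhost/phpsqlinfo_resultb.php

    If this returns the filtered XML, the PHP side is fine and the problem is in how (or when) the $.post fires; if it returns everything or nothing, the problem is in the PHP file itself.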

    Read the article

  • nginx+mysql5 loadtesting configuration strangeness

    - by genseric
    I am trying to set up a new server running Debian 6 and make it work smoothly under load. I used a WordPress site as a test object and ran the configurations through http://blitz.io. When I increase MySQL's max_connections from 50 to 200, lots of timeouts start to occur; at 50 there are no timeouts and response times are pretty good. The nginx configuration is fine (I tuned it and see no errors), so I presume the problem lies in the other options of my.cnf. I have read about the options but still can't figure out what the max_connections problem is about. The server has 16 GB of RAM and a decent i7 CPU. Here is the current my.cnf:

        [client]
        port    = 3306
        socket  = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket  = /var/run/mysqld/mysqld.sock
        nice    = 0

        [mysqld]
        wait_timeout        = 60
        connect_timeout     = 10
        interactive_timeout = 120
        user      = mysql
        pid-file  = /var/run/mysqld/mysqld.pid
        socket    = /var/run/mysqld/mysqld.sock
        port      = 3306
        basedir   = /usr
        datadir   = /var/lib/mysql
        tmpdir    = /tmp
        language  = /usr/share/mysql/english
        skip-external-locking
        bind-address        = 127.0.0.1
        key_buffer          = 384M
        max_allowed_packet  = 16M
        thread_stack        = 192K
        thread_cache_size   = 20
        myisam-recover      = BACKUP
        max_connections     = 50
        table_cache         = 1024
        thread_concurrency  = 8
        query_cache_limit   = 2M
        query_cache_size    = 128M
        expire_logs_days    = 10
        max_binlog_size     = 100M

        [mysqldump]
        quick
        quote-names
        max_allowed_packet  = 16M

        [mysql]
        #no-auto-rehash  # faster start of mysql but no tab completion

        [isamchk]
        key_buffer = 16M

    Thanks in advance. I asked this question on SO but it was closed as off topic, so I believe this is a SF question.
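    One thing worth checking before raising max_connections again: each connection can allocate its own per-thread buffers, so 200 connections can mean far more memory and thread churn than 50. A quick diagnostic sketch (credentials are placeholders):

        # has the server ever actually needed more than 50 connections?
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"

        # how many threads are connected/running right now?
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_%';"

    If Max_used_connections never approaches 200, the timeouts under load are more likely contention (the query cache mutex on a 128M cache, or MyISAM table locks) than a shortage of connection slots.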

    Read the article

  • fatal error 'stdio.h' Python 2.7 on Mac OS X 10.7.5 [closed]

    - by DjangoRocks
    I have this weird issue on my Mac OS X 10.7.5:

        /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33:10: fatal error: 'stdio.h' file not found

    What caused the above error? It has been bugging me and I can't install mysql-python because I'm stuck at this step. I'm using Python 2.7.3. Things like Google App Engine (Python), plain Python scripts, and Tornado generally work on my Mac, but not mysql-python. I've installed MySQL using the dmg image and have copied the mysql folder to /usr/local/. How do I fix this?

    ====== UPDATE ======

    I ran the command and tried to install mysql-python by running sudo python setup.py install, but received the following:

        running install
        running bdist_egg
        running egg_info
        writing MySQL_python.egg-info/PKG-INFO
        writing top-level names to MySQL_python.egg-info/top_level.txt
        writing dependency_links to MySQL_python.egg-info/dependency_links.txt
        writing MySQL_python.egg-info/PKG-INFO
        writing top-level names to MySQL_python.egg-info/top_level.txt
        writing dependency_links to MySQL_python.egg-info/dependency_links.txt
        reading manifest file 'MySQL_python.egg-info/SOURCES.txt'
        reading manifest template 'MANIFEST.in'
        writing manifest file 'MySQL_python.egg-info/SOURCES.txt'
        installing library code to build/bdist.macosx-10.6-intel/egg
        running install_lib
        running build_py
        copying MySQLdb/release.py -> build/lib.macosx-10.6-intel-2.7/MySQLdb
        running build_ext
        gcc-4.2 not found, using clang instead
        building '_mysql' extension
        clang -fno-strict-aliasing -fno-common -dynamic -g -O2 -DNDEBUG -g -O3 -Dversion_info=(1,2,4,'rc',5) -D__version__=1.2.4c1 -I/usr/local/mysql/include -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.6-intel-2.7/_mysql.o -Os -g -fno-common -fno-strict-aliasing -arch x86_64
        In file included from _mysql.c:29:
        /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33:10: fatal error: 'stdio.h' file not found
        #include <stdio.h>
                 ^
        1 error generated.
        error: command 'clang' failed with exit status 1

    What other possible ways can I fix it? Thanks! Best Regards.
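    A 'stdio.h' file not found from clang almost always means the C toolchain's system headers are missing, rather than anything Python- or MySQL-specific. A sketch of the usual check and fix (on 10.7 the tools are installed through Xcode's UI; the xcode-select route only exists on newer releases):

        # are the system headers present at all?
        ls /usr/include/stdio.h

        # if not, install the Command Line Tools:
        #  - OS X 10.7 / Xcode 4.x: Xcode > Preferences > Downloads > Command Line Tools
        #  - OS X 10.9 and later:
        xcode-select --install

    Once /usr/include/stdio.h exists, re-run sudo python setup.py install.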

    Read the article

  • Can't login to phpMyAdmin on a WAMP server running Windows 2008

    - by Richard West
    I am setting up a new server. I have installed Apache 2.2.17, PHP 5.3.3, MySQL 5.1.53, and phpMyAdmin 3.3.8 on Windows 2008 (32-bit). Apache and PHP appear to be working fine: I created the standard test page with the following code and it renders as expected.

        <?php
        // index.php
        phpinfo();
        ?>

    I also see the mysql and mysqli sections in the above page, so it appears I have the proper extensions loaded for MySQL access. The problem centers around phpMyAdmin. I can reach the login screen at http://localhost/pma and log in as "root" with the password I set up. After a delay of 30 seconds or so, the page goes blank, and the URL is now http://localhost/pma/index.php?token= No error is ever displayed, but nothing usable is either. I have confirmed that MySQL is running by logging in from the command line. I have double-checked my configuration but am not having any luck getting this to work. I have also disabled the Windows firewall, but that did not change anything. I installed MySQL on the standard port 3306. Any advice would be greatly appreciated.

    Read the article

  • phpMyAdmin import/export: strange character encoding issues

    - by John Hunt
    Hello, I'm migrating a site to a new host, and there are a couple of databases involved. There's no SSH access, so I'm stuck with phpMyAdmin. The issue is that certain characters (namely whitespace) seem to be corrupt on the new site (the HTML is the same, and Apache doesn't seem to be messing with any encodings; I can see the strange characters differ when I run less on a table dump downloaded from each server). The issue isn't as bad if I import into the new database as UTF-8: the whitespace characters get one funny A-type symbol instead of two. I've tried various combinations of character encodings to no avail.

    Exporting from:
      - phpMyAdmin 2.6.2
      - MySQL 4.1.20
      - MySQL connection collation: utf8_general_ci
      - MySQL charset: UTF-8 Unicode (utf8)
      - Collation on tables and their fields: latin1_swedish_ci

    Importing to:
      - phpMyAdmin 2.11.9.2
      - MySQL client version: 5.0.45
      - MySQL charset: UTF-8 Unicode (utf8)
      - MySQL connection collation: utf8_general_ci

    The import SQL has this kind of thing in it:

        ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=192 ;

    I get the impression this is actually a bug or something with mysqldump, as nothing seems to work. Does anyone have any insight into this? Cheers, John.
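    This pattern (latin1 tables dumped over a utf8 connection, then re-encoded on import) is a classic source of doubled bytes. A sketch that usually avoids it is to pin the same character set on both sides of the transfer; with no shell access, the export and import screens in phpMyAdmin expose a similar file-character-set option:

        # export with the connection charset pinned to the tables' charset
        mysqldump --default-character-set=latin1 -u user -p olddb > dump.sql

        # import with the same pinned charset (db name and credentials are placeholders)
        mysql --default-character-set=latin1 -u user -p newdb < dump.sql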

    Read the article

  • mysqldump isn't able to export a specific database, phpMyAdmin crashes

    - by Devils Child
    I'm experiencing problems with one database on my server (note: all other databases work fine). When I try to export it with mysqldump I get this error:

        # mysqldump -u root -pXXXXXXXXX databasename > /root/databasename.sql
        mysqldump: Couldn't execute 'show table status like 'apps'': Lost connection to MySQL server during query (2013)

    phpMyAdmin also throws an error when selecting this database and immediately logs out. However, the web site which uses this database works fine, and I can execute SELECT statements on the table named "apps" from the MySQL shell. I tried restarting the MySQL daemon as well as REPAIR DATABASE and REPAIR TABLE, but the problem persists. I had this problem before; it somehow disappeared without me doing anything to resolve it. Now it is back and I'm simply unable to create a backup of this database.

    Used software: Debian 6.0.7 x64, MySQL 5.1.66-0.

        mysql> SHOW VARIABLES LIKE "%version%";
        +-------------------------+-------------------+
        | Variable_name           | Value             |
        +-------------------------+-------------------+
        | protocol_version        | 10                |
        | version                 | 5.1.66-0+squeeze1 |
        | version_comment         | (Debian)          |
        | version_compile_machine | x86_64            |
        | version_compile_os      | debian-linux-gnu  |
        +-------------------------+-------------------+
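    A 2013 error on SHOW TABLE STATUS, while plain SELECTs still work, often means the server itself is crashing while computing table statistics, typically from InnoDB corruption that REPAIR TABLE cannot touch. A diagnostic sketch (the log path is the Debian default and may differ):

        # did mysqld restart when the dump failed?
        tail -n 50 /var/log/mysql/error.log

        # reproduce the exact statement mysqldump runs
        mysql -u root -p databasename -e "SHOW TABLE STATUS LIKE 'apps'"

    If the log shows crashes, the usual route is a dump taken with innodb_force_recovery set in my.cnf (starting at 1 and raising it only as far as needed), then rebuilding the table from that dump.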

    Read the article

  • Update RDS db via mysqlbinlog: "you need (at least one of) the SUPER privilege(s)"

    - by timoxley
    We are moving a production site to EC2/RDS, following these instructions: http://geehwan.posterous.com/moving-a-production-mysql-database-to-amazon I have set up row-based binary logging on the production server and did a:

        mysqldump --single-transaction --master-data=2 -C -q -u root -p backup.sql

    then imported it into the RDS instance, with no dramas. Due to the size of the db and minimal downtime requirements, I've got to bring the EC2 db up to the latest data via the binlogs, and it won't let me:

        mysqlbinlog mysql-bin.000004 --start-position=360812488 | mysql -uroot -p -h

    and it says:

        ERROR 1227 (42000) at line 6: Access denied; you need (at least one of) the SUPER privilege(s) for this operation

    My guess, based on what is on line 6 of the binlog, is that it's the 'write to the BINLOG' statements in the SQL backup: because RDS doesn't support this, it can't run these statements, or something; I don't really know. Please help.
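    Your guess is right in spirit: row-based binlogs replay as BINLOG '...' statements, which require SUPER, and RDS does not grant SUPER. One workaround sketch, assuming you control the production master: switch it to statement-based logging for the catch-up window, so the remaining binlogs replay as ordinary SQL (statement-based logging can be unsafe for nondeterministic statements, and existing connections keep the old format until they reconnect):

        # on the production master: switch format and rotate to a fresh binlog
        mysql -u root -p -e "SET GLOBAL binlog_format = 'STATEMENT'; FLUSH LOGS;"

        # replay the statement-based logs against RDS (hostname is a placeholder)
        mysqlbinlog mysql-bin.000005 | mysql -h yourinstance.rds.amazonaws.com -u root -p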

    Read the article

  • How do you fix a MySQL “Incorrect key file” error when you can’t repair the table?

    - by Wayne M
    I'm trying to run a rather large query that is supposed to run nightly to populate a table. It fails with:

        Incorrect key file for table '/var/tmp/#sql_201e_0.MYI'; try to repair it

    but the storage engine I'm using (whatever the default is, I guess?) doesn't support repairing tables. How do I fix this so I can run the query? We are under pressure to get this table loaded for a client.
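    The #sql_*.MYI file is not one of your tables: it is an implicit on-disk temporary table MySQL builds in tmpdir while executing the query, so there is nothing to repair. This error usually means the filesystem holding tmpdir filled up mid-query. A sketch of the check and the usual fix:

        # is the tmpdir filesystem out of space (or inodes)?
        df -h /var/tmp
        df -i /var/tmp

        # if so, point MySQL at a disk with room to spare, in my.cnf:
        #   [mysqld]
        #   tmpdir = /srv/mysql-tmp   # placeholder path; create it, owned by mysql
        # then restart mysqld and re-run the query.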

    Read the article

  • Can 'Percona MySQL Data Recovery' be used to recover dropped tables if the datadir filesystem is mounted as /

    - by Tom Geee
    According to Percona, you should unmount the filesystem or make it read-only if:

      - you have filesystem corruption, OR
      - you have dropped tables in innodb_file_per_table format.

    If I have innodb_file_per_table enabled and accidentally drop a table while the datadir sits inside the / partition, can the data still be recovered? Obviously you can't work with an unmounted root filesystem. Our VPS host has a default filesystem layout which we cannot customize; I'm asking in case of any future scenario.

    Edit: would mounting the / filesystem through NFS onto another system as read-only be a workaround? TIA.
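    The point of the Percona advice is simply to stop new writes from recycling the disk blocks that held the dropped .ibd file; recovery remains plausible as long as writes stop quickly. A sketch of what that can look like when / cannot be unmounted (whether your VPS permits this is another matter):

        # remount the root filesystem read-only; safest from a rescue
        # environment or single-user mode, and it will disrupt running services
        mount -o remount,ro /

        # then image the partition to another machine for offline recovery
        # (device name and destination are placeholders)
        dd if=/dev/vda1 | ssh user@otherbox 'cat > /data/root.img'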

    Read the article

  • phpMyAdmin works with HTTPS only (not HTTP)

    - by 01010011
    Hi, I've been having a problem getting phpMyAdmin to work consistently on my XP desktop and laptop computers for months now. When I browse to localhost/phpmyadmin in Chrome on either machine, I kept getting:

        Error #1045 Access Denied for user at root@localhost (using password yes)

    Eventually, I realized that I had two versions of MySQL installed (XAMPP's and MySQL Server 5.1) on both machines. When I uninstalled MySQL Server 5.1 from the desktop, phpMyAdmin worked; when I uninstalled it from the laptop, it did not. I realized I could still get into the MySQL command-line client using my password and that my databases were intact, so I uninstalled and reinstalled XAMPP on the laptop, and phpMyAdmin worked after that.

    Now I have a new problem. phpMyAdmin's home page shows a message at the bottom:

        Your configuration file contains settings (root with no password) that correspond to the default MySQL privileged account. Your MySQL server is running with this default, is open to intrusion, and you really should fix this security hole by setting a password for user 'root'.

    So I located the following lines in config.inc.php:

        /* Authentication type and info */
        $cfg['Servers'][$i]['auth_type'] = 'config';
        $cfg['Servers'][$i]['user'] = 'root';
        $cfg['Servers'][$i]['password'] = '';
        $cfg['Servers'][$i]['AllowNoPassword'] = true;

    and changed the last two lines as follows:

        $cfg['Servers'][$i]['password'] = 'mypassword';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    As soon as I did that, trying to access phpMyAdmin gave the Error #1045 message again, but when I tried https://localhost/phpmyadmin/ I got a red page saying this site's certificate is not trusted, would you like to proceed anyway; and now it only works over https. I would really like to settle all my phpMyAdmin problems once and for all, so here are my questions:

    1. Why does my laptop only access phpMyAdmin via https?
    2. How do I change my password in my configuration file?

    Also, any other tips regarding phpMyAdmin are very welcome. Thanks in advance.
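    On question 2: config.inc.php does not set any passwords. With auth_type 'config' it only tells phpMyAdmin which credentials to send, so they must match what MySQL itself has stored for root. A sketch of getting the two in sync, run from the command-line client that still works (the password is obviously a placeholder):

        # set the real root password inside MySQL (5.x syntax)
        mysql -u root -e "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('mypassword');"

    and only then put the same value in $cfg['Servers'][$i]['password']. The Error #1045 you saw is exactly this mismatch: phpMyAdmin sending a password MySQL does not know yet.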

    Read the article

  • MySQL query very slow on Amazon RDS but really fast on my laptop?

    - by Luc
    I would love to know if anybody knows why this is happening. I've just migrated our website over to Amazon RDS, and our biggest query, which takes 0.2 seconds to execute on my MacBook, takes 1.3 seconds on the most expensive RDS instance. Obviously I've disabled the query cache (and tested this) on my local computer, and both databases are exactly the same: InnoDB, with the same indexes, etc. It's costing us a fortune ($2000 per month) for the fastest RDS instance and I'm losing faith quickly. Any ideas?
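    Before blaming RDS itself, it is worth ruling out the two usual suspects: a different execution plan and a buffer pool that is cold or sized differently from the laptop's. A sketch (host, credentials, and the query are placeholders):

        # compare the plan for the same query on both machines
        mysql -h yourinstance.rds.amazonaws.com -u user -p yourdb \
          -e "EXPLAIN SELECT ... \G"

        # compare the InnoDB setting that dominates read performance
        mysql -h yourinstance.rds.amazonaws.com -u user -p \
          -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

    Remember too that the laptop has local-disk latency and no network round-trips; for a single large query, EBS-backed storage can legitimately be slower until the working set is cached.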

    Read the article

  • Possible to make mysql server both master and slave?

    - by Amy Anuszewski
    I am getting ready to move a database from one server to another. To reduce downtime for the client, I am wondering whether I could turn on replication, give it time to replicate fully, and then just point the customer at the new server. The issue is that the server I'm moving to has existing, active databases for other customers, and the server I'm moving from also hosts other active customers who will not be moving at this time. Is this even possible? If so, how do I configure the server I am moving from and the one I am moving to?
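    Yes, this is possible: replication can coexist with unrelated databases on both ends if you filter it to the one schema being moved. A sketch of the moving parts (server names, schema, credentials, and log coordinates are placeholders; the coordinates come from SHOW MASTER STATUS or the --master-data comment in your dump):

        # old server, my.cnf: log changes for the schema being moved
        #   [mysqld]
        #   server-id    = 1
        #   log-bin      = mysql-bin
        #   binlog-do-db = customer_db

        # new server, my.cnf: apply only that schema
        #   [mysqld]
        #   server-id       = 2
        #   replicate-do-db = customer_db

        # new server: point it at the old one and start catching up
        mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='old.example.com',
          MASTER_USER='repl', MASTER_PASSWORD='secret',
          MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
          START SLAVE;"

    One caveat: binlog-do-db and replicate-do-db filter on the default database (the one selected with USE), so cross-database statements can be logged or skipped in surprising ways.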

    Read the article

  • Can I change a MySQL table back and forth between InnoDB and MyISAM without any problems?

    - by Daniel Magliola
    I have a site with a decently big database: 3 GB in size, with a couple of tables holding a dozen million records each. It's currently 100% MyISAM, and I have the feeling the server is going slower than it should because of too much locking, so I'd like to try InnoDB and see if that makes things better. However, I need to do this directly in production, because obviously without load the change doesn't make any difference. I'm a bit worried, though, because InnoDB actually has the potential to be slower, so the question is: if I convert all tables to InnoDB and it turns out I'm worse off than before, can I go back to MyISAM without losing anything? Can you think of any problems I might encounter? (For example, I know that InnoDB stores all data in one big file that only gets bigger; can this be a problem?) Thank you very much, Daniel
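    The conversion itself is just an ALTER TABLE and is reversible; both directions rebuild the table, so on dozen-million-row tables expect it to lock the table while it runs. A sketch (database and table names are placeholders):

        # forward...
        mysql -u root -p -e "ALTER TABLE mydb.big_table ENGINE=InnoDB;"

        # ...and back, if InnoDB turns out slower for this workload
        mysql -u root -p -e "ALTER TABLE mydb.big_table ENGINE=MyISAM;"

    Two caveats: MyISAM-only features block the forward conversion (in 5.x that mainly means FULLTEXT indexes), and the concern about the shared tablespace is valid; ibdata1 never shrinks, so enabling innodb_file_per_table before converting keeps each table in its own reclaimable file.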

    Read the article

  • What are some general tips to make InnoDB for MySQL perform at its highest?

    - by James Simpson
    I've been using MyISAM exclusively for several years now and know the ins and outs of optimizing it pretty well, but I've only recently started using InnoDB for some of my tables and don't know that much about it. What are some general tips to help improve the performance of these InnoDB tables? (They were converted from MyISAM, have anywhere from 100k to 2M rows, and most won't use transactions.)
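    The single biggest lever is almost always the buffer pool: unlike MyISAM, InnoDB caches data pages as well as indexes, so key-buffer-sized defaults are far too small. A starting-point sketch for my.cnf (the values are assumptions to be sized against your RAM and workload, not recommendations):

        #   [mysqld]
        #   innodb_buffer_pool_size        = 2G    # as much as the box can spare
        #   innodb_log_file_size           = 256M  # larger logs = fewer flush stalls
        #   innodb_flush_log_at_trx_commit = 2     # if ~1s of loss on crash is acceptable
        #   innodb_file_per_table          = 1     # keeps tablespaces reclaimable

        # watch what InnoDB is actually doing under load
        mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"

    Note that on 5.x, resizing the log files requires a clean shutdown and moving the old ib_logfile* out of the way before restarting.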

    Read the article

  • Is there a MySQL performance benchmark to measure the impact of utf8_unicode_ci versus utf8_general_ci?

    - by MiniQuark
    I read here and there that using the utf8_unicode_ci collation ensures better treatment of Unicode text (for example, it knows how to expand characters such as 'œ' into 'oe' for searching and ordering) compared to the default utf8_general_ci, which basically just strips diacritics. Unfortunately, both sources indicate that utf8_unicode_ci is slightly slower than utf8_general_ci. So my question is: what does "slightly slower" mean? Has anyone run benchmarks? Are we talking about a -0.01% performance impact, or rather something like -25%? Thanks for your help.
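    For raw comparison cost, MySQL's BENCHMARK() function allows a crude micro-benchmark; a sketch (it measures only in-memory comparisons, not index or sort behaviour, so treat it as isolating the per-comparison overhead rather than end-to-end query impact):

        mysql --default-character-set=utf8 -e "SELECT BENCHMARK(5000000,
          CONVERT('mueller' USING utf8) COLLATE utf8_unicode_ci = 'müller');"

        mysql --default-character-set=utf8 -e "SELECT BENCHMARK(5000000,
          CONVERT('mueller' USING utf8) COLLATE utf8_general_ci = 'müller');"

    The client prints the elapsed time for each run; the ratio of the two timings is the "slightly slower" you are asking about, for your build and hardware.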

    Read the article

  • mysql 5.1 - innodb - query_cache_size - 9,418,108 queries have been removed from the query cache due to lack of memory

    - by Tom C
    Currently running on a 16 GB system, Ubuntu 64-bit, with the InnoDB buffer pool set to 10 GB. tuning-primer shows the following:

        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 512 M
        Current query_cache_used = 501 M
        Current query_cache_limit = 4 M
        Current Query cache Memory fill ratio = 97.87 %
        Current query_cache_min_res_unit = 4 K
        However, 9418108 queries have been removed from the query cache due to lack of memory
        Perhaps you should raise query_cache_size

    That is over 9 million queries removed. System uptime is 8 days. Should I remove the query cache altogether? Our db is always under heavy I/O. TIA
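    Whether to keep the cache at all depends on its hit rate, which the server exposes directly; a quick sketch:

        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Qcache%';"
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Com_select';"

    Compare Qcache_hits against Com_select and watch Qcache_lowmem_prunes. On a write-heavy InnoDB workload, every write invalidates all cached results for the table, and the cache's single mutex can itself become a bottleneck, so if hits are low the usual advice is to shrink it sharply or disable it (query_cache_type = 0, query_cache_size = 0) rather than raise it to chase the prune count.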

    Read the article

  • How do I split a large MySQL backup file into multiple files?

    - by Brian T Hannan
    I have a 250 MB SQL backup file, but the limit on the new hosting is only 100 MB. Is there a program that lets you split an SQL file into multiple SQL files? It seems like people are answering the wrong question, so I will clarify: I ONLY have the 250 MB file; the new hosting offers only phpMyAdmin, and the database currently contains no data. I need to upload the 250 MB file to the new host, but there is a 100 MB upload limit on SQL backup files. I simply need to take one file that is too large and split it into multiple files, each containing only full, valid SQL statements (no statement can be split between two files).
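    Because mysqldump separates each table's section with a marker comment, the dump can be cut at those markers so that no statement is ever split across files. A sketch with GNU csplit (verify the marker text against your own dump first, and the header length below is an assumption; check your file):

        # cut the dump at each table boundary: produces xx00, xx01, ...
        csplit -z backup.sql '/^-- Table structure for table/' '{*}'

        # re-attach the dump's SET/charset header to every piece
        head -n 20 backup.sql > header.sql
        for f in xx[0-9][0-9]; do cat header.sql "$f" > "part_$f.sql"; done

    If any single table is still over 100 MB, its piece can be split further by line count, since mysqldump writes each INSERT statement on one line.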

    Read the article

  • How do I export the result of a MySQL query in phpMyAdmin 3.4.3?

    - by grape
    1) I've got a 30K-row table.
    2) When I run a long, 50-line query on that table, a GROUP BY reduces the result to 7K rows.
    3) I want to export those grouped 7K rows as a new table, or save them as a CSV.

    When I attempt to export, instead of the grouped 7K rows I get the old, pre-query 30K rows. What am I doing wrong, and what should I be doing? NOTE: I'm not a coder, so I'd really appreciate a solution that just uses the phpMyAdmin GUI. Thanks!
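    Two GUI-friendly routes, sketched below. In phpMyAdmin 3.4.x, after running the query there should be an Export link in the "Query results operations" box under the result set, which exports the result rather than the base table. Alternatively, materialize the grouped rows as a real table and export that instead (grouped_copy is a placeholder name and the inner SELECT stands in for your 50-line query; in phpMyAdmin's SQL tab, paste only the statement between the quotes):

        mysql -u user -p mydb -e "
          CREATE TABLE grouped_copy AS
          SELECT some_column, COUNT(*) AS row_count   -- stand-in for the real query
          FROM mytable
          GROUP BY some_column;"

    Once grouped_copy is exported, DROP TABLE grouped_copy cleans up.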

    Read the article

< Previous Page | 205 206 207 208 209 210 211 212 213 214 215 216  | Next Page >