Search Results

Search found 20838 results on 834 pages for 'mysql num rows'.


  • Does disabling the MySQL error log improve performance? How do I disable it?

    - by adnan
    Does disabling the MySQL error log improve performance? How do I disable it? This is my service status: server load 0.63 (8 CPUs), memory used 23.38% (957,600 of 4,096,000), swap used 0% (0 of 1). And this is a screenshot of the process manager: http://elnhrda.com/promgr.jpg This is my my.cnf:
      [mysqld]
      query_cache_size=64M
      skip-name-resolve
      #innodb_file_per_table=1
      query_cache_limit=2M
      read_buffer_size = 2M
      read_rnd_buffer_size = 16M
      sort_buffer_size = 8M
      join_buffer_size = 8M
      thread_cache_size = 8
      thread_concurrency = 8
      innodb_buffer_pool_size = 2G
    I am looking to do anything that will increase my website's speed. I have a VPS with 4 GB RAM running CentOS 6 x86_64. Please note: these statistics were taken just now, while no queries were executing and the site had no visitors.
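
    The error log itself only records startup messages and errors, so turning it off is unlikely to change performance; the general query log is the one that costs a write per statement. A minimal SQL sketch to check what is enabled and switch the heavier logs off at runtime (assumes MySQL 5.1 or later; log_error itself can only be changed in my.cnf followed by a restart):
      SHOW VARIABLES LIKE 'log_error';       -- where the error log is written (empty means stderr)
      SHOW VARIABLES LIKE 'general_log';     -- the per-statement log, off by default
      SHOW VARIABLES LIKE 'slow_query_log';
      SET GLOBAL general_log = 'OFF';        -- these two can be toggled without a restart
      SET GLOBAL slow_query_log = 'OFF';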

    Read the article

  • MySQL Windows vs. Linux: performance, caveats, pros and cons?

    - by gravyface
    Looking for (preferably) some hard data or at least some experienced anecdotal responses with regards to hosting a MySQL database (roughly 5k transactions a day, 60-70% more reads than writes, < 100k of data per transaction, i.e. no large binary objects like images, etc.) on Windows 2003/2008 vs. a Debian-based derivative (Ubuntu/Debian, etc.). This server will function only as a database server, with a separate Web server on another physical box; this server will require remote access for management (SSH for Linux, RDP for Windows). I suspect that the Linux kernel/OS will compete less with MySQL for resources than Windows Server will, but I can't be certain of this. There's also the security footprint: even with Windows 2008, I'm thinking that the Linux box can be locked down more easily than the Windows Server. Anyone have any experience with both configurations?

    Read the article

  • Is it faster to create indexes before or after data loading in MySQL?

    - by Josh Glover
    I have a data replication process that drops and recreates a few tables in a target database, then loads them up with data from a source database (running on another host, but that is immaterial to the question at hand). The target database does need primary keys and a few other indexes on its tables, but not during the data loading. I'm currently loading all of the data, then creating the indexes. However, index creation takes a pretty long time--30 minutes of my data loader's 5 and a half hour running time. My intuition tells me that creating the indexes at the end should be faster than creating them first, since the index would need to be rewritten with each insert. Can anyone tell me for sure which way is faster? FWIW, I'm running MySQL 5.1 with InnoDB tables.
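
    For what it's worth, the usual pattern matches that intuition: define only the primary key up front (for InnoDB it is the clustered index, so loading in primary-key order helps), bulk-load, then build the secondary indexes in one pass. A hedged sketch with placeholder table and column names:
      CREATE TABLE target_table (
          id INT NOT NULL,
          customer_id INT NOT NULL,
          created_at DATETIME NOT NULL,
          PRIMARY KEY (id)
      ) ENGINE = InnoDB;

      -- ... bulk INSERT / LOAD DATA INFILE here ...

      ALTER TABLE target_table
          ADD INDEX idx_customer (customer_id),
          ADD INDEX idx_created (created_at);
    How much this saves on MySQL 5.1 depends on whether the InnoDB plugin's fast index creation is available; with the built-in InnoDB, ALTER TABLE ... ADD INDEX still rebuilds the table, so benchmarking both orders on a subset of the data is worthwhile.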

    Read the article

  • How to efficiently dump a huge MySQL innodb database?

    - by Jagbir
    I have an Ubuntu 10.04 production MySQL database server where the total size of the databases is 260 GB, while the root partition where the DB is stored is itself only 300 GB, which essentially means around 96% of / is full and there's no space left for storing a dump/backup, etc. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently with minimum downtime. I'm thinking along these lines:
    1. Request to attach an extra drive to the server and take a dump on that drive.
    2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
    3. When migration is needed, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't entertain any write requests, and tell the app developers to update their config with the new IP address for the db.
    What are your suggestions to improve this, or is there any alternate, better approach for this task?
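
    Steps 2 and 3 are plain replication plus a read-only cutover; a sketch of the SQL involved, with placeholder host, credentials and binlog coordinates (the real coordinates come from a dump taken with --master-data, or from SHOW MASTER STATUS):
      -- on the new server, after restoring the dump:
      CHANGE MASTER TO
          MASTER_HOST = 'old-db.example.com',
          MASTER_USER = 'repl',
          MASTER_PASSWORD = 'secret',
          MASTER_LOG_FILE = 'mysql-bin.000123',
          MASTER_LOG_POS = 4;
      START SLAVE;

      -- at cutover time, on the old server:
      SET GLOBAL read_only = 1;   -- blocks writes from normal accounts (not from SUPER users)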

    Read the article

  • Windows VPS running Apache and MySQL, PHP scripts running slow... but CPU usage is 1-3%

    - by Roeland
    So every night I run some cron jobs. They take probably about 20 minutes to process all the records; I gather the script does something like 10,000 SQL queries. I figure this task was just that intense and needs time to complete, but I looked at CPU and memory usage, and it is super low. CPU usage is between 1-3% and once in a while bounces to 50ish for 2-3 seconds. This VPS is running Windows Server 2003 with Apache and MySQL. Does this sound right?

    Read the article

  • MySQL: how to convert many MyISAM tables to InnoDB in a production database?

    - by Continuation
    We have a production database that is made up entirely of MyISAM tables. We are considering converting them to InnoDB to gain better concurrency & reliability. Can I just alter the MyISAM tables to InnoDB without shutting down MySQL? What are the recommended procedures here? How long will such a conversion take? All the tables have a total size of about 700 MB. There are quite a large number of tables. Is there any way to apply ALTER TABLE to all the MyISAM tables at once instead of doing it one by one? Any pitfalls I need to be aware of? Thank you
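
    ALTER TABLE ... ENGINE=InnoDB works on a live server, but it copies the table and blocks writes to it for the duration, so with about 700 MB total the run should be short. Rather than typing each statement by hand, a common trick is to let information_schema generate them (a sketch; review the output before executing it, and leave the mysql system schema on MyISAM):
      SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;') AS stmt
      FROM information_schema.tables
      WHERE engine = 'MyISAM'
        AND table_schema NOT IN ('mysql', 'information_schema');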

    Read the article

  • After creating a MySQL user with all privileges, the user cannot create databases in phpMyAdmin and only sees the information_schema database

    - by GHarping
    This is a recurring problem for some reason... Using MySQL 5.5, I am simply trying to create a user that can connect to the database remotely, have access to all databases, and create databases. I have created a user using: create user 'dev'@'%' identified by 'abcdefg'; then granted all permissions using: GRANT ALL ON *.* to 'dev'@'192.168.%' IDENTIFIED BY 'abcdefg' WITH GRANT OPTION; and the result is that the user cannot create databases, and can only see the information_schema database for some reason. The phpMyAdmin Databases page shows "Create database: No Privileges" (with a Documentation link) and lists only information_schema (Total: 1). Does anyone know why this might be happening?
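
    One thing worth checking: 'dev'@'%' and 'dev'@'192.168.%' are two separate accounts in MySQL, so the GRANT may have landed on an account the client never actually matches at login. A sketch of how to see which account is in use and what it holds (statements valid on MySQL 5.5):
      SELECT USER(), CURRENT_USER();        -- CURRENT_USER() shows which account actually matched
      SHOW GRANTS FOR 'dev'@'%';
      SHOW GRANTS FOR 'dev'@'192.168.%';

      -- if the wildcard account is the one that matches, grant to it directly:
      GRANT ALL PRIVILEGES ON *.* TO 'dev'@'%' IDENTIFIED BY 'abcdefg' WITH GRANT OPTION;
      FLUSH PRIVILEGES;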

    Read the article

  • How can I speed up a MySQL restore from a dump file?

    - by Dave Forgac
    I am restoring a 30GB database from a mysqldump file to an empty database on a new server. When running the SQL from the dump file, the restore starts very quickly and then starts to get slower and slower. Individual inserts are now taking 15+ seconds. The tables are MyISAM. The server has no other active connections. SHOW PROCESSLIST; only shows the insert from the restore (and the show processlist itself). Does anyone have any ideas what could be causing the dramatic slowdown? Are there any MySQL variables that I can change to speed the restore while it is progressing?
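
    A few session and global settings are commonly relaxed for the duration of a large MyISAM restore; a hedged sketch (the key buffer size and dump path are placeholders, and sql_log_bin only matters if binary logging is enabled). A gradual slowdown like this is often a key buffer too small for the growing indexes:
      SET GLOBAL key_buffer_size = 536870912;   -- 512 MB, sized to spare RAM, used for MyISAM index blocks
      SET SESSION unique_checks = 0;
      SET SESSION foreign_key_checks = 0;
      SET SESSION sql_log_bin = 0;
      SOURCE /path/to/dump.sql;
      SET SESSION unique_checks = 1;
      SET SESSION foreign_key_checks = 1;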

    Read the article

  • Is there a postfix mysql virtual_maps append_at_myorigin workaround so I can pipe to external scripts?

    - by FilmJ
    I am using virtual domains, and I'd like to set up the server to alias to custom scripts. I manage all accounts using postfix mappings to MySQL. It seems that postfix automatically appends a virtual domain regardless of how the forwarded/aliased result comes back. So even though I have: "|/bin/command" postfix is reading it as: "|/bin/command"@mydomain.com Is there any workaround, or a setting I can fix? It would seem that append_at_myorigin=no would be ideal, but that's unsupported according to the documentation. Another option: maybe I can skip virtual aliases altogether and use the "/etc/postfix/aliases" table, assuming all emails go to the main domain. I'll try this, but if anyone has any other ideas on how to make it work with virtual domains, please let me know, as this would be very useful! Thanks.

    Read the article

  • What are the pros & cons of these MySQL engines for OLTP -- XtraDB, PBXT, or TokuDB?

    - by Continuation
    I'm working on a social website with an approximate read/write split of 90/10, and I'm trying to decide on a MySQL engine. The ones I'm interested in are: XtraDB, PBXT, and TokuDB. What are the pros and cons of each for my use case? A few specific questions: PBXT uses a log-based structure that avoids double-writes. It sounds very elegant, but the benchmarks I've seen don't show much advantage over XtraDB. Do you have any experience with PBXT/XtraDB you can share? TokuDB sounds VERY interesting, but all the benchmarks I've seen are about single-threaded bulk inserts, inserting 100M rows for example; that's not very relevant for OLTP. What about its performance with a large number of concurrent threads writing and reading at the same time? Has anyone tried that?

    Read the article

  • In MySQL I want to set lower_case_table_names=1 on existing databases to avoid case-sensitivity issues across multiple platforms

    - by sakhunzai
    In MySQL I want to set lower_case_table_names=1 on existing databases to avoid case-sensitivity issues across multiple platforms.
    A) What are the risks? (besides the SHOW TABLES issue)
    B) After setting lower_case_table_names=1, will I be in a position to query databases across multiple platforms consistently? select * from USERS == select * from users;
    C) How will triggers + stored procedures + functions + views + events be affected in this regard? I know lower_case_table_names is only for "TABLE" names, but what about triggers and other database objects? Will they remain case-insensitive? How about views?
    D) Do I need to rename all tables before/after this configuration setting, or will this do the miracle in one step (i.e. does lower_case_table_names=1 normalize table names)?
    E) What will be the exact steps with regard to mysqld / my.ini?
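
    On the rename question: the manual's recommended path is to rename any mixed-case tables to lower case first, and only then set lower_case_table_names=1 in my.cnf/my.ini and restart. The RENAME statements can be generated from information_schema (a sketch; 'mydb' is a placeholder schema name, and the output should be reviewed before it is executed):
      SELECT CONCAT('RENAME TABLE `', table_name, '` TO `', LOWER(table_name), '`;')
      FROM information_schema.tables
      WHERE table_schema = 'mydb'
        AND BINARY table_name <> LOWER(table_name);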

    Read the article

  • How to serve PHP files on an Apache server (localhost) running ColdFusion/MySQL?

    - by frequent
    I'm still learning my way around on my localhost server, which is running Apache 2.2, ColdFusion 8 and MySQL Server 5.5 (on Windows XP). I need to work on a site I inherited, which also ran some PHP scripts under the same setup. I have installed PHP5 on my localhost, but when I open a dummy page with: <?php phpinfo();?> I only get plain text returned, so I guess I haven't configured Apache correctly to also serve PHP (while defaulting to ColdFusion). Question: Where do I need to get started if I want PHP to work on my current setup, too? Is there something I need to add to the httpd.conf file? If possible I don't want to uninstall/reinstall everything, because it took forever to get everything to work (excluding PHP). Thanks for any pointers!

    Read the article

  • How To Back Up a MySQL Database Using phpMyAdmin

    - by Jyoti
    It is very important to back up your MySQL database; you will probably only realize this when it is too late. A lot of web applications use MySQL for storing their content: blogs and a lot of other things. When you have all your content as HTML files on your web server it is very easy to keep it safe from crashes: you just keep a copy on your own PC and upload it again after the web server is restored following the crash. All the content in the MySQL database must also be backed up. If you have spent a lot of time creating the content and it is only stored in the MySQL server, you will feel very bad if it gets lost forever. Backing it up once every month or so makes sure you never lose too much of your work in case of a server crash, and it will make you sleep better at night. It is easy and fast, so there is no reason not to do it.
    Step 1: Log into phpMyAdmin on your server.
    Step 2: Select the database that you would like to back up from the drop-down menu called Database.
    Step 3: A new page will be loaded in phpMyAdmin showing the selected database. In order to proceed with the backup, click on the Export tab.
    Step 4: The options that you should select, apart from the default ones, are "Save as file", which will save the file locally to your computer in .sql format, and "Add DROP TABLE", which adds DROP TABLE statements so that a table is replaced if it already exists when the backup is restored.
    Step 5: Click on the Go button to start the export/backup procedure for your database. A download window will pop up prompting for the exact place where you would like to save the file on your local computer. It is possible that the download starts automatically; this depends on your browser's settings.

    Read the article

  • Why is MySQL with InnoDB doing a table scan when the key exists and choosing to examine 70 times more rows?

    - by andysk
    Hello, I'm troubleshooting a query performance problem. Here's an expected query plan from explain:
      mysql> explain select * from table1 where tdcol between '2010-04-13:00:00' and '2010-04-14 03:16';
      +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
      | id | select_type | table  | type  | possible_keys | key   | key_len | ref  | rows    | Extra       |
      +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
      |  1 | SIMPLE      | table1 | range | tdcol         | tdcol | 8       | NULL | 5437848 | Using where |
      +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
      1 row in set (0.00 sec)
    That makes sense, since the index named tdcol (KEY tdcol (tdcol)) is used, and about 5M rows should be selected from this query. However, if I query for just one more minute of data, we get this query plan:
      mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17';
      +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
      | id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows      | Extra       |
      +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
      |  1 | SIMPLE      | table1 | ALL  | tdcol         | NULL | NULL    | NULL | 381601300 | Using where |
      +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
      1 row in set (0.00 sec)
    The optimizer believes that the scan will be better, but it's over 70x more rows to examine, so I have a hard time believing that the table scan is better. Also, the 'USE KEY tdcol' syntax does not change the query plan. Thanks in advance for any help, and I'm more than happy to provide more info/answer questions.
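
    Two things that are commonly tried before anything else here: refresh the statistics the optimizer bases its row estimate on, and note that USE INDEX is only a hint, while FORCE INDEX makes the optimizer treat a table scan as very expensive. A sketch:
      ANALYZE TABLE table1;

      EXPLAIN SELECT * FROM table1 FORCE INDEX (tdcol)
      WHERE tdcol BETWEEN '2010-04-13 00:00' AND '2010-04-14 03:17';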

    Read the article

  • Is a MySQL index useful on column 'state' when only doing bit operations on the column?

    - by Geert-Jan
    I have a lot of domain entities (stored in MySQL) which undergo lots of different operations. Each operation is executed from a different program. I need to keep (flow) state for these entities, which I implemented as a long field 'flowstate' used as a bitset. To query MySQL for entities which have undergone a certain operation I do something like: select * from entities where state >> 7 & 1 = 1 indicating that bit 7 (corresponding to operation 7) is set, i.e. operation 7 has run. (<-- simplified) Anyway, I really didn't pay attention to the performance implications of this setup in the beginning, and I think I'm in a bit of trouble since queries like the above run pretty slowly. What I'd like to know: Does a MySQL index on 'flowstate' help at all? After all, it's not a single value MySQL can quickly find using a binary search or whatever. If it doesn't, are there any other things I could do to speed things up? Are there special 'mask indices' for fields with use cases like the above? TIA, Geert-Jan
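
    For reference, a B-tree index on the packed column cannot be used here, because the WHERE clause applies an expression to the column, so the value still has to be computed for every row. One workaround, sketched with hypothetical names, is to materialize the hot flags into their own indexed columns:
      ALTER TABLE entities
          ADD COLUMN op7_done TINYINT NOT NULL DEFAULT 0,
          ADD INDEX idx_op7_done (op7_done);

      UPDATE entities SET op7_done = 1 WHERE state >> 7 & 1 = 1;

      -- this predicate can use idx_op7_done, unlike the bit expression on the packed column:
      SELECT * FROM entities WHERE op7_done = 1;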

    Read the article

  • How can I get back my privilege to create a new database in MySQL?

    - by Steven
    I cannot use MySQL normally. MySQL is on my local computer. Currently I have added skip-grant-tables in my.ini so I can use MySQL at all, but I have no privilege to create a new database. My problem is tough; although I asked related questions on SO, no answer has resolved it. I almost gave up, so I am lowering my expectations. I am developing a website, so I need to create databases and tables and operate on tables. You don't have to consider security. Is there a simple solution that can give me the privilege to create a new database? Maybe by adding some command in my.ini or something? You won't need to completely resolve my problem. Maybe after the development, I will upload the database and tables to another server (the current database server is my personal computer, running Windows XP) so I can uninstall and reinstall MySQL. The root of the problem is that I lack privileges.
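
    While skip-grant-tables is active, GRANT and CREATE USER statements are refused, but the grant tables themselves can be edited directly and then reloaded. A sketch for MySQL 5.x (column names as in the 5.x mysql.user table; adjust User/Host to the account actually used to log in):
      UPDATE mysql.user
      SET Create_priv = 'Y', Grant_priv = 'Y'
      WHERE User = 'root' AND Host = 'localhost';
      FLUSH PRIVILEGES;   -- reloads the grant tables and re-enables privilege checking
    After that, removing skip-grant-tables from my.ini and restarting restores normal authentication, at which point ordinary GRANT statements work again.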

    Read the article

  • How do I get PHP variables from this MySQL query?

    - by CT
    I am working on an Asset Database problem using PHP / MySQL. In this script I would like to search my assets by an asset id and have it return all related fields. First I query the database asset table and find the asset's type. Then depending on the type I run 1 of 3 queries.
      <?php
      //make database connect
      mysql_connect("localhost", "asset_db", "asset_db") or die(mysql_error());
      mysql_select_db("asset_db") or die(mysql_error());
      //get type of asset
      $type = mysql_query("
          SELECT asset.type FROM asset WHERE asset.id = 93120
      ") or die(mysql_error());
      switch ($type) {
          case "Server":
              //do some stuff that involves a mysql query
              mysql_query("
                  SELECT asset.id, asset.company, asset.location, asset.purchase_date,
                         asset.purchase_order, asset.value, asset.type, asset.notes,
                         server.manufacturer, server.model, server.serial_number, server.esc,
                         server.user, server.prev_user, server.warranty
                  FROM asset
                  LEFT JOIN server ON server.id = asset.id
                  WHERE asset.id = 93120
              ");
              break;
          case "Laptop":
              //do some stuff that involves a mysql query
              mysql_query("
                  SELECT asset.id, asset.company, asset.location, asset.purchase_date,
                         asset.purchase_order, asset.value, asset.type, asset.notes,
                         laptop.manufacturer, laptop.model, laptop.serial_number, laptop.esc,
                         laptop.user, laptop.prev_user, laptop.warranty
                  FROM asset
                  LEFT JOIN laptop ON laptop.id = asset.id
                  WHERE asset.id = 93120
              ");
              break;
          case "Desktop":
              //do some stuff that involves a mysql query
              mysql_query("
                  SELECT asset.id, asset.company, asset.location, asset.purchase_date,
                         asset.purchase_order, asset.value, asset.type, asset.notes,
                         desktop.manufacturer, desktop.model, desktop.serial_number, desktop.esc,
                         desktop.user, desktop.prev_user, desktop.warranty
                  FROM asset
                  LEFT JOIN desktop ON desktop.id = asset.id
                  WHERE asset.id = 93120
              ");
              break;
      }
      ?>
    So far I am able to get asset.type into $type. How would I go about getting the rest of the variables (laptop.model to $model, asset.notes to $notes and so on)? Thank you.

    Read the article

  • Is it a good idea to use MySQL and Neo4j together?

    - by Sanoj
    I will make an application with a lot of similar items (millions), and I would like to store them in a MySQL database, because I would like to do a lot of statistics and search on specific values for specific columns. But at the same time, I will store relations between all the items, which are related in many connected binary-tree-like structures (transitive closure), and relational databases are not good at that kind of structure, so I would like to store all relations in Neo4j, which has good performance for this kind of data. My plan is to have all data except the relations in the MySQL database and all relations, with item_id, stored in the Neo4j database. When I want to look up a tree, I first search Neo4j for all the item_ids in the tree, then I search the MySQL database for all the specified items in a query that would look like: SELECT * FROM items WHERE item_id = 45 OR item_id = 345435 OR item_id = 343 OR item_id = 78 OR item_id = 4522 OR item_id = 676 OR item_id = 443 OR item_id = 4255 OR item_id = 4345 Is this a good idea, or am I very wrong? I haven't used graph databases before. Are there any better approaches to my problem? How would the MySQL query perform in this case?
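
    On the last point, the long OR chain is usually written as an IN() list, which MySQL treats as a batch of point lookups when item_id is the primary key or indexed, so a list of even a few thousand ids per query is normally fine:
      SELECT *
      FROM items
      WHERE item_id IN (45, 345435, 343, 78, 4522, 676, 443, 4255, 4345);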

    Read the article

  • MySQL PHP | "SELECT FROM table" using "alphanumeric"-UUID. Speed vs. Indexed Integer / Indexed Char

    - by dropson
    At the moment, I select rows from 'table01' using: SELECT * FROM table01 WHERE UUID = 'whatever'; The UUID column is a unique index. I know this isn't the fastest way to select data from the database, but the UUID is the only row identifier that is available to the front-end. Since I have to select by UUID, and not ID, I need to know which of these two options I should go for if, say, the table consists of 100,000 rows. What speed differences would I be looking at, and would the index for the UUID grow too large and slow down the DB?
    Option 1: get the ID before doing the "big" select.
      1. $id = "SELECT ID FROM table01 WHERE UUID = '{alphanumeric character}'";
      2. $rows = SELECT * FROM table01 WHERE ID = $id;
    Option 2: keep it the way it is now, using the UUID.
      1. SELECT FROM table01 WHERE UUID '{alphanumeric character}';
    Side note: all new rows are created by checking if the system-generated unique id exists before trying to insert a new row, keeping the column always unique. The "example" table:
      CREATE TABLE Table01 (
          ID int NOT NULL PRIMARY KEY,
          UUID char(15),
          name varchar(100),
          url varchar(255),
          `date` datetime
      ) ENGINE = InnoDB;
      CREATE UNIQUE INDEX UUID ON Table01 (UUID);
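
    One way to settle this empirically is to compare the plans: with the UNIQUE index in place, the single-query version should already be a one-row index lookup, so the two-step variant mainly adds a round trip. A sketch (the UUID and ID values are placeholders):
      EXPLAIN SELECT * FROM Table01 WHERE UUID = 'abc123def456ghi';
      EXPLAIN SELECT * FROM Table01 WHERE ID = 42;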

    Read the article

  • [PHP] MySQL process list filled with "Sleep" entries leading to "Too many connections"?

    - by edorian
    Hi, I'd like to ask for your help on a longstanding issue with PHP/MySQL connections. Every time I execute a SHOW PROCESSLIST command it shows me about 400 idle (Status: Sleep) connections to the database server coming from our 5 web servers. That was never much of a problem (and I didn't find a quick solution) until traffic numbers recently increased; since then MySQL reports the "Too many connections" problem repeatedly, even though 350+ of those connections are in the "Sleep" state. Also, a server can't get a MySQL connection even if there are sleeping connections to that same server. All those connections vanish when an Apache server is restarted. The PHP code used to create the database connections uses the normal "mysql" module, the "mysqli" module, PEAR::DB and the Zend Framework DB adapter (different projects). NONE of the projects uses persistent connections. Raising the connection limit is possible but doesn't seem like a good solution, since it's 450 now and there are only 20-100 "real" connections at a time anyway. My question: why are there so many connections in the Sleep state and how can I prevent that? Thank you for your time; if there's anything unclear or missing please let me know.
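
    Two SQL-side checks that usually help narrow this down: see which hosts own the sleeping threads, and shorten how long the server keeps idle connections around (the default wait_timeout is 28800 seconds, i.e. 8 hours; 120 below is just an example value). The durable fix is still to close connections at the end of each PHP request.
      SHOW FULL PROCESSLIST;                -- the Host column shows which web server the sleepers come from
      SET GLOBAL wait_timeout = 120;        -- applies to connections opened after the change
      SET GLOBAL interactive_timeout = 120;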

    Read the article

  • InnoDB: Error: log file ./ib_logfile0 is of different size

    - by jack
    I just added the following lines in /etc/mysql/my.cnf after I converted one database to use the InnoDB engine:
      innodb_buffer_pool_size = 2560M
      innodb_log_file_size = 256M
      innodb_log_buffer_size = 8M
      innodb_flush_log_at_trx_commit = 2
      innodb_thread_concurrency = 16
      innodb_flush_method = O_DIRECT
    But it raised "ERROR 2013 (HY000) at line 2: Lost connection to MySQL server during query" when restarting mysqld, and the MySQL error log shows the following:
      InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
      InnoDB: than specified in the .cnf file 0 268435456 bytes!
      100118 20:52:52 [ERROR] Plugin 'InnoDB' init function returned error.
      100118 20:52:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      100118 20:52:52 [ERROR] Unknown/unsupported table type: InnoDB
      100118 20:52:52 [ERROR] Aborting
    So I commented out this line:
      # innodb_log_file_size = 256M
    And it restarted MySQL successfully. I wonder what the "5242880 bytes" log file size reported in the MySQL error is. It's the first database using the InnoDB engine on this server, so when and where was that log file created? In this case, how can I enable the innodb_log_file_size directive in my.cnf?
    EDIT: I tried to delete /var/lib/mysql/ib_logfile0 and restart mysqld but it still failed. It now shows the following in the error log:
      100118 21:27:11 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
      InnoDB: Setting log file ./ib_logfile0 size to 256 MB
      InnoDB: Database physically writes the file full: wait...
      InnoDB: Progress in MB: 100 200
      InnoDB: Error: log file ./ib_logfile1 is of different size 0 5242880 bytes
      InnoDB: than specified in the .cnf file 0 268435456 bytes!
    Resolution: It works now after deleting both ib_logfile0 and ib_logfile1 in /var/lib/mysql.
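
    For anyone hitting the same error: removing ib_logfile0/ib_logfile1 is only safe after a clean shutdown, otherwise committed changes that exist only in the old redo logs can be lost. A sketch of the one SQL step involved (the file moves themselves happen in the shell):
      SET GLOBAL innodb_fast_shutdown = 0;  -- makes the next shutdown flush everything out of the redo logs
      -- then stop mysqld, move ib_logfile0 and ib_logfile1 aside, and start it again with the new
      -- innodb_log_file_size so files of the right size are recreated automatically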

    Read the article

  • jdbc4 CommunicationsException

    - by letronje
    I have a machine running a Java app talking to a MySQL instance running on the same machine. The app uses the JDBC4 drivers from MySQL. I keep getting com.mysql.jdbc.exceptions.jdbc4.CommunicationsException at random times. Here is the whole message:
      Could not open JDBC Connection for transaction; nested exception is
      com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully
      received from the server was 25899 milliseconds ago. The last packet sent successfully
      to the server was 25899 milliseconds ago, which is longer than the server configured
      value of 'wait_timeout'. You should consider either expiring and/or testing connection
      validity before use in your application, increasing the server configured values for
      client timeouts, or using the Connector/J connection property 'autoReconnect=true' to
      avoid this problem.
    For MySQL, the value of the global 'wait_timeout' and 'interactive_timeout' is set to 3600 seconds and 'connect_timeout' is set to 60 secs. The wait timeout value is much higher than the 26 secs (25899 msecs) mentioned in the exception trace. I use DBCP for connection pooling and here is the Spring bean config for the data source:
      <bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
          <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
          <property name="url" value="jdbc:mysql://localhost:3306/db"/>
          <property name="username" value="xxx"/>
          <property name="password" value="xxx" />
          <property name="poolPreparedStatements" value="false" />
          <property name="maxActive" value="3" />
          <property name="maxIdle" value="3" />
      </bean>
    Any idea why this could be happening? Will using c3p0 solve the problem?

    Read the article

  • mysql_query missing during installation

    - by Arsenal
    Hi, I'm trying to install the pdo_mysql extension... I managed to install it successfully, but ever since I upgraded MySQL to 5.1.34 (using RPM packages) it seems to have gone down, so I tried to reinstall it. However it seems to fail on ./configure, as it gives a 'mysql_query not found' error:
      configure:3961: checking for mysql_query in -lmysqlclient
      configure:3991: gcc -o conftest -g -O2 -I/usr/local/include/php -Wl,-rpath,/usr/lib/mysql -L/usr/lib/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc conftest.c -lmysqlclient -rdynamic -L/usr/lib/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc >&5
      /usr/bin/ld: skipping incompatible /usr/lib/mysql/libmysqlclient.a when searching for -lmysqlclient
      /usr/bin/ld: skipping incompatible /usr/lib/mysql/libmysqlclient.a when searching for -lmysqlclient
      /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../libmysqlclient.so when searching for -lmysqlclient
      /usr/bin/ld: skipping incompatible /usr/lib/libmysqlclient.so when searching for -lmysqlclient
      /usr/bin/ld: cannot find -lmysqlclient
      collect2: ld returned 1 exit status
      configure:3997: $? = 1
      configure: failed program was:
      | /* confdefs.h. */ ...
    In that file there seems to be a mysql_query(); statement. I'm pretty sure mysql_query works, however, since all of my websites are running normally. However, the current setup is a mess (previous students kind of messed it up) and there are a whole lot of libmysqlclient files in /etc:
      libmysqlclient.so.10.0.0
      libmysqlclient.so.12.0.0
      libmysqlclient.so.14.0.0
      libmysqlclient.so.15.0.0
      libmysqlclient.so.16.0.0
      libmysqlclient_r.so.10.0.0
      libmysqlclient_r.so.12.0.0
      libmysqlclient_r.so.14.0.0
      libmysqlclient_r.so.15.0.0
      libmysqlclient_r.so.16.0.0
    And just as many symlinks. Does anyone know how to get this right? Many thanks! (Oh, and no, pecl install pdo_mysql doesn't get me any further.) I'm running CentOS 4 with PHP 5.2.9 compiled from source and MySQL 5.1.34

    Read the article
