Search Results

Search found 15376 results on 616 pages for 'mysql triggers'.

  • Deleting MySQL rows causes lock table error

    - by Dave L
    I have a couple million rows to delete, but they can't all be deleted at once without hitting this error: ERROR 1206 (HY000): The total number of locks exceeds the lock table size. So I wrote a script to delete 100,000 rows, 10,000 at a time. It ran once, but when I run it a second time I get the error on the first attempt to delete 10,000. The way I'm deleting the 10,000 rows is with a DELETE statement that matches all 2 million rows but uses a LIMIT clause so only 10,000 are affected. I've tried adding an "UNLOCK TABLES;" statement to the script before the first delete, but that doesn't help; I still get the lock table error on the first delete. Any ideas how I can do this? Is there a way I can tell it NOT to lock records? I can make sure nothing else is accessing the table.
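
    A minimal sketch of the batched-delete loop being described, assuming an InnoDB table (the table name and WHERE clause are placeholders). ERROR 1206 comes from InnoDB's lock table, which is allocated from the buffer pool, so if it keeps appearing even for small batches the usual remedy is a larger innodb_buffer_pool_size; the value below is only an illustration:

        -- Repeat until the statement reports "0 rows affected":
        DELETE FROM big_table WHERE expired = 1 LIMIT 10000;
        COMMIT;

        -- my.cnf (needs a server restart on MySQL older than 5.7.5):
        -- innodb_buffer_pool_size = 512M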

  • Why is MySQL table_cache full but never used

    - by Jeremy Clarke
    I have been using the tuning-primer.sh script to tune my my.cnf settings. I have most things working well, but the part about TABLE CACHE makes no sense:

    TABLE CACHE: Current table_cache value = 900 tables. You have a total of 0 tables. You have 900 open tables. Current table_cache hit rate is 1%, while 100% of your table cache is in use. You should probably increase your table_cache.

    When I do SHOW STATUS; I get the following table-related numbers: Open_tables = 900, Opened_tables = 0. It seems like something is going wrong. I have some extra memory I could use to increase the table_cache size, but my sense is that the 900 open tables aren't doing anything, and increasing the cache will just waste more memory. Why might this be happening? Are there other settings that could cause all my table_cache slots to be used even though there are no hits to them? I have 150 max connections and probably no more than 4 tables per join, FWIW.

    Here is the tuner script output for temp tables, which I've also been tuning: TEMP TABLES: Current max_heap_table_size = 90 M, current tmp_table_size = 90 M. Of 11032358 temp tables, 40% were created on disk. Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables. Note! BLOB and TEXT columns are not allowed in memory tables. If you are using these columns, raising these values might not impact your ratio of on-disk temp tables.
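
    A quick way to sanity-check what the cache is actually doing is to query the counters directly rather than trust the script's arithmetic (on MySQL 5.1.3+ the variable is table_open_cache; older versions call it table_cache):

        SHOW GLOBAL STATUS LIKE 'Open%tables';       -- Open_tables vs. Opened_tables
        SHOW GLOBAL VARIABLES LIKE '%table%cache%';  -- configured cache size
        -- If Opened_tables stays at 0 while Open_tables sits at the cache size,
        -- the script's "hit rate" figure is not meaningful on its own.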

  • Backing up MySQL DB with a mixture of InnoDB and MyISAM tables

    - by madphp
    I have a large database (almost 1 GB) with a mixture of InnoDB and MyISAM tables. Does anyone have any general tips for backing it up, or more specifically the options I should pass to mysqldump? I see that I should lock MyISAM tables and use a single transaction for InnoDB, but what if I have both? Also, what actually happens when I lock an entire (very big) table on a production database?
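
    A sketch of the two usual options, assuming the database is named mydb; with mixed engines there is no way to get a dump that is consistent for both without locking, so you either accept the read locks or take the dump from a replica:

        # Consistent snapshot for InnoDB tables only; MyISAM tables can still change mid-dump:
        mysqldump --single-transaction --routines --triggers mydb > mydb.sql

        # Consistent for both engines, but holds a global read lock for the whole dump:
        mysqldump --lock-all-tables --routines --triggers mydb > mydb.sql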

  • Doing a mysql dump causes swapping issues

    - by DFischer
    I do a mysqldump manually every night. I just noticed that after it finishes, the website is very slow to access. When I look at free -mh I notice that the server is now swapping, when it wasn't before the mysqldump. What am I to do in this case? Just restart the server every time I back up? That doesn't seem very effective. The raw database file is 1.1 GB after the dump.
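
    A hedged sketch of the usual mitigation: run the dump at the lowest CPU and I/O priority and compress the output so the dump pushes less data through memory and disk cache (the path and database name are placeholders; ionice only has an effect with the CFQ I/O scheduler):

        nice -n 19 ionice -c2 -n7 \
          mysqldump --single-transaction mydb | gzip > /backup/mydb-$(date +%F).sql.gz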

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a live machine, which is now acting as the master. I used the following procedure to create it: mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql. Then I imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position from the dump. Please note that I have around 8000 databases and I didn't stop the master while the dumps were running. The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (different location), so I would like to make sure that there is 100% data consistency between the servers. I've found this article where it says: "The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database." Thank you
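
    A sketch of the variant that gives a single consistent snapshot across all databases, assuming the tables are InnoDB (--single-transaction opens one transaction for the entire dump, and --master-data records the matching binlog coordinates). Host names and credentials below are placeholders:

        # On the master:
        mysqldump --single-transaction --master-data=2 --routines --all-databases > dump.sql

        # On the new slave, after importing dump.sql, use the coordinates recorded in the dump:
        CHANGE MASTER TO
          MASTER_HOST='master.example.com',
          MASTER_USER='repl',
          MASTER_PASSWORD='secret',
          MASTER_LOG_FILE='mysql-bin.000123',
          MASTER_LOG_POS=456789;
        START SLAVE;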

  • MySQL "Host" permissions

    - by Wayne M
    What exactly is the best way to configure this? I have a user account specified for a web app, but I also want to connect to the database via a GUI tool. The host is specified as % but the GUI tool repeatedly says access denied, although I am using the proper password. If I change this to localhost then I can connect via the command line, but not via the GUI. If I add two entries, then I can connect via the command line but not the GUI. Leaving only the % doesn't let me connect via the command line OR the GUI. I want to be able to connect both on the actual server (via the web app itself) AND via the GUI tool.
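
    A hedged sketch of the two-account setup that usually sorts this out: MySQL treats 'user'@'localhost' and 'user'@'%' as separate accounts, each with its own password, and the % entry never matches local socket logins, so both are normally needed (user, database and password below are placeholders):

        -- Local connections (web app and command line on the server itself):
        GRANT ALL PRIVILEGES ON webapp.* TO 'webuser'@'localhost' IDENTIFIED BY 'secret';
        -- Remote TCP connections (the GUI tool from your workstation):
        GRANT ALL PRIVILEGES ON webapp.* TO 'webuser'@'%' IDENTIFIED BY 'secret';
        FLUSH PRIVILEGES;

    If the GUI still gets "access denied", it is worth checking that it really connects over TCP to the server's address rather than tunnelling to localhost.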

  • How to create a MySQL database that can contain any character, including different languages

    - by Jakke
    I'm trying to create a database that has to contain articles in different languages. I'm using MariaDB as my server and I know bits of SQL. My knowledge doesn't really cover details like the differences between engines such as MyISAM and InnoDB, or character sets like utf8/16/32, latin5/7/etc. I do know that the character set matters; I guess what I'm looking for is an all-encompassing character set and an engine that best deals with this type of content. Also, is there an advantage in storing articles in multiple data rows (the equivalent of different pages) to make things a little faster, or would you store a whole article in a single data row? Or does that depend on the size of the articles? Sorry for my noobish question; I know the information is all out on the internet, but it would take me quite a long time to research and get a grip on everything. It would be cool if someone with experience could give me a little head start and point me in the right direction. This is for an intranet site; consider the content to be somewhat like a blog (and no, I don't want WordPress or something similar at this point). Not sure if it matters, but I tend to create and manipulate my tables with phpMyAdmin, I use Apache as the web server, and it all runs on Linux.
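
    A minimal sketch of the usual answer, assuming a reasonably recent MariaDB: use InnoDB with the utf8mb4 character set (the plain "utf8" in MySQL/MariaDB stores at most 3 bytes per character, so it cannot hold every Unicode character). Table and column names below are hypothetical; one row per article is normally fine, and MEDIUMTEXT holds up to 16 MB:

        CREATE DATABASE intranet CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

        CREATE TABLE articles (
            id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            language   CHAR(5)      NOT NULL,   -- e.g. 'en', 'nl', 'zh-CN'
            title      VARCHAR(255) NOT NULL,
            body       MEDIUMTEXT   NOT NULL,
            created_at DATETIME     NOT NULL
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;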

  • SASL (Postfix) authentication with MySQL and SHA1 pre-encrypted passwords

    - by webo
    I have a Rails app with the Devise authentication gem running user registration and login. I want to use the db table that Devise populates when a user registers as the table that Postfix uses to authenticate users. The table has all the fields that Postfix may want for SASL authentication, except that Devise encrypts the password using SHA1 before placing it in the database. How could I go about getting Postfix/SASL to check those passwords so that the user can be authenticated properly? Devise salts the password, so I'm not sure if that helps. Any suggestions? I'd likely want to do something similar with Dovecot or Courier; I'm not attached to one quite yet.
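
    The hashes can't be decrypted; the auth backend has to hash the submitted password the same way Devise did and compare. As an illustration only, here is roughly what a Dovecot SQL passdb pointed at that table could look like; the column names are assumptions about the Devise schema, and because Devise salts (and stretches) its digest, the pass scheme and query would have to be adapted to exactly how your Devise version builds encrypted_password:

        # dovecot-sql.conf.ext (sketch; host, credentials, scheme and columns are placeholders)
        driver = mysql
        connect = host=127.0.0.1 dbname=railsapp user=mailauth password=secret
        default_pass_scheme = SSHA
        password_query = SELECT email AS user, encrypted_password AS password \
          FROM users WHERE email = '%u'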

  • How do I mirror a MySQL database?

    - by user45745
    I'm running two load balanced servers for one website, and I'd like the databases to be synchronized. Queries may be run on either of the two servers because they are both production sites, so the replication can't just work one way. It doesn't have to be in real-time, just fairly accurate so people don't notice a difference when they get switched to a different server.
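
    A sketch of the common low-tech approach: make each server a master and a slave of the other (master-master replication) and offset the auto-increment sequences so inserts on the two boxes can never collide. The server IDs below are placeholders; the increment/offset settings are the standard my.cnf knobs for this:

        # my.cnf on server A
        server-id                = 1
        log-bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 1

        # my.cnf on server B
        server-id                = 2
        log-bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 2

        # Then on each server point replication at the other:
        #   CHANGE MASTER TO MASTER_HOST='...', ...; START SLAVE;

    Replication is asynchronous, which fits the "not real-time, just fairly accurate" requirement, but conflicting updates to the same row on both servers still have to be avoided at the application level.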

  • MySQL gzipped Export in PhpMyAdmin has wrong size in Mozilla

    - by Michal Gow
    That is really strange. I am using phpMyAdmin 2.11.9.6 on Linux hosting. When I export databases using "gzipped" compression in Mozilla, I get files which have the size of the uncompressed database, but they seem to download at an incredible speed (10 times quicker than my ISP allows). So in the end, for a database of 10 MB: I get a 10 MB "gzip" downloaded in milliseconds; it does indeed show a 10 MB size on the drive; and it is corrupted. Zip compression works just fine (I get a file of about 1 MB with the correct compressed database content). And the weirdest thing: this happens in Mozilla Firefox (13.0.1) only; Internet Explorer 9 downloads correct gzipped files... Any hint?

  • 10 Million records = wiped MySQL DB?

    - by Josh K
    So I was trying to load some test data and it appears to have killed my entire database. This is one case where it's great to have backups! They were all plain insert queries, probably about a 900 MB file. What could have gone wrong?

  • Extracting a line section of mysql backup using sed

    - by carpii
    I occasionally need to extract a single record from a mysqldump backup. To do this, I first extract the single table I want from the backup: sed -n -e '/CREATE TABLE.*usertext/,/CREATE TABLE/p' 20120930_backup.sql > table.sql. In table.sql, the records are batched using extended inserts (with maybe 100 records per insert before a new line starting with INSERT INTO), so they look like: INSERT INTO usertext VALUES (1, field2 etc), (2, field2 etc), ... INSERT INTO usertext VALUES (101, field2 etc), (102, field2 etc), ... I'm trying to extract record 239560 from this, using: sed -n -e '/(239560.*/,/)/p' table.sql > record.sql, i.e. start streaming when it finds 239560, and stop when it hits the closing bracket. But this isn't working as I hoped; it just results in the full insert batch being output. Please can someone give me some pointers as to where I'm going wrong? Would I be better off using awk for extracting segments of lines, and sed for extracting lines within a file?
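
    A hedged sketch of why this happens and one way around it: sed's /start/,/end/ range addresses select whole lines, so as soon as the line containing (239560 matches, the entire extended-INSERT line is printed. Splitting each row onto its own line first makes the record easy to pick out (GNU sed assumed, since it interprets \n in the replacement; 239560 is assumed to be the leading id column):

        # Put every "(...)" row on its own line, then grab the one whose id is 239560:
        sed 's/),(/),\n(/g' table.sql | grep '^(239560,' > record.sql

        # Roughly equivalent in GNU awk, using "),(" as the record separator:
        awk 'BEGIN { RS = "\\),\\(" } /^239560,/ { print; exit }' table.sql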

  • Backing up a 22 GB MySQL database daily

    - by unknown (yahoo)
    Right now I am able to do the backup using mysqldump, but I have to take down the web server AND it takes around 5 minutes to do the backup. If I don't take down the web server, it takes forever and never finishes, and the website becomes inaccessible during the backup. Is there a quicker/better way to back up my 22 GB and growing database? All the tables are MyISAM.
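
    Two approaches commonly suggested for a MyISAM-only database of this size, sketched with placeholder names and paths: copy the table files directly under a read lock (mysqlhotcopy, which ships with MySQL, does this and is much faster than re-serializing 22 GB through mysqldump), or replicate to a cheap slave and take the dump there so production is never blocked:

        # Raw file copy of the MyISAM tables (read lock held only while the files are copied):
        mysqlhotcopy --user=root --password=secret mydb /backups/$(date +%F)

        # Or run the slow dump on a replication slave instead of the master:
        mysqldump --all-databases --lock-all-tables | gzip > /backups/all-$(date +%F).sql.gz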

  • InnoDB memory usage in MySQL

    - by Tiddo
    I have a small VPS with only 256 MB of RAM, with a maximum burst up to 512 MB. When I configure my VPS without InnoDB, it only uses 130 MB of RAM, so that is no problem for me. But when I turn on InnoDB, the memory usage grows to about 300-400 MB. Is it possible to run InnoDB such that I won't exceed the 256 MB? Preferably I don't want to use more than 100 MB for InnoDB. I already came across some sites which said I could limit the memory usage, but if I limit it to only 100 MB, will the db run well enough (compared to, for example, the MyISAM storage engine)? If 100 MB is too little memory for InnoDB, can you recommend any other storage engine which supports transactions?
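
    A hedged sketch of a small-footprint my.cnf for this kind of box; the values are illustrative starting points aimed at roughly 100 MB of InnoDB usage, not tuned recommendations:

        [mysqld]
        innodb_buffer_pool_size         = 64M    # the main InnoDB cache and the big memory knob
        innodb_log_buffer_size          = 2M
        innodb_additional_mem_pool_size = 2M     # ignored/removed in newer MySQL versions
        # Settings outside InnoDB that also add up on a 256 MB VPS:
        max_connections                 = 30
        key_buffer_size                 = 8M
        tmp_table_size                  = 16M
        max_heap_table_size             = 16M

    Whether that is "enough" depends on the working set: if the hot data fits in the buffer pool it can perform fine; if not, InnoDB falls back to disk and a well-cached MyISAM setup may indeed feel faster.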

  • CentOS/MySQL Server Tweaking

    - by josephs8
    If I wanted a professional to tweak my server, where would I go about finding someone who does this? I have a web server that gets a lot of traffic, and I'm learning all about managing web servers, but before my traffic gets out of hand I want the server tuned up. I already have some high-load issues, so I wanted to see if they could help with that. Then I would continue my learning on my own. Thank you

  • MySQL Non Index Queries Analysis

    - by Markii
    I'm using the log_queries_not_using_indexes option, but it also logs queries that do use indexes and are just more advanced or use IFs. Is there a parser or a program out there that can analyze the log and give me literal output saying "table.column should be an index"? Thanks
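
    Two tools commonly pointed at that log, sketched below (the log path is a placeholder; neither literally prints "add this index", but both rank the worst queries so the missing indexes become obvious from their EXPLAIN output; pt-query-digest is part of Percona Toolkit):

        # Summarize the slow/no-index log, sorted by rows sent:
        mysqldumpslow -s r /var/log/mysql/mysql-slow.log | head -40

        # Aggregate and rank query fingerprints:
        pt-query-digest /var/log/mysql/mysql-slow.log > digest.txt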

  • CentOS not allowing remote MySQL connections

    - by nd8ad
    When I assign a user from a remote IP and try to connect to a database, it says it's failing to connect. It also fails to connect as root, so something is wrong. The bind IP is off and I have also tried disabling iptables; still no dice. Port 3306 is forwarded. I'm running CentOS 5.6, using phpMyAdmin, but I have also tried to assign the user via the command line and create a new database; still not working. I've been googling and troubleshooting for hours now, no dice.
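
    A hedged checklist of the usual culprits, written as commands (host, database, user and password are placeholders). Note that skip-networking, or bind-address pointing at 127.0.0.1 in /etc/my.cnf, will refuse remote TCP connections even when the grants are correct:

        # 1. Is mysqld listening on a routable address, not just 127.0.0.1?
        netstat -ltnp | grep 3306

        # 2. Does an account exist that matches the remote host (or the % wildcard)?
        mysql -u root -p -e "SELECT user, host FROM mysql.user;"
        mysql -u root -p -e "GRANT ALL ON mydb.* TO 'appuser'@'%' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"

        # 3. Is the firewall letting 3306 through?
        iptables -I INPUT -p tcp --dport 3306 -j ACCEPT

        # 4. Test from the remote machine:
        mysql -h db.example.com -u appuser -p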

  • Auto-restart mysql when it dies

    - by Los Frijoles
    I have a rackspace server that I have been renting to run my personal projects on. Since I am cheap, it has 256 MB of RAM and honestly can't handle a lot. Every once in a while, when there is a sharp uptick in traffic, the server decides to start killing processes, and it seems that mysqld is a popular one for it to kill. I try to visit my site and am greeted with the message that there was an error establishing the database connection. Inspection of the logs reveals that mysqld was killed due to lack of memory. Since I am still as poor as I was yesterday and don't want to upgrade my rackspace VM's RAM, is there a way I can tell it to automagically restart mysqld when it dies? I have a thought to use something like crontab, but alas, I don't know exactly what to do there either. I guess I am a product of the "Linux on your desktop" generation, since I can do most things on my desktop and laptop (which run Linux almost exclusively) but still lack a lot of server administration skills for Linux. The server runs CentOS 6.3.
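
    A minimal sketch of the crontab idea, assuming the CentOS 6 init script is named mysqld (a watchdog like monit, or adding a little swap so the OOM killer fires less often, are the more robust fixes):

        # /etc/cron.d/mysql-watchdog -- every minute, restart mysqld if it is not running
        * * * * * root pgrep -x mysqld > /dev/null || /sbin/service mysqld restart >> /var/log/mysql-watchdog.log 2>&1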

  • Move MySQL master

    - by Noodles
    I currently have a master db server (let's call it db1) and 6 slaves (slave1-6). I've set up a new server (db2) as a slave of db1 and it's in sync. I want to change all the slaves to use db2 instead of db1, but with minimal downtime/data loss. At the moment the only way I can think of doing it is: shut down our website (so data stops being written to db1), wait until all the slaves are up to date, flush logs on db1 and shut it down, reset master on db2, then change all slaves to point to db2 with log position = 0. Is this the right way to do it, or is there a way to do it without taking the site offline?
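
    A sketch of that switchover as concrete statements (host name and coordinates are placeholders). Something still has to stop writes briefly while the slaves catch up; making db1 read-only is usually enough, so the site can stay up in read-only mode rather than going fully offline:

        -- 1. On db1, stop writes:
        SET GLOBAL read_only = ON;

        -- 2. On db2 and slave1-6, wait until SHOW SLAVE STATUS\G reports
        --    Seconds_Behind_Master: 0 and matching master coordinates.

        -- 3. On db2, stop replicating from db1 and note its own binlog position:
        STOP SLAVE;
        RESET MASTER;              -- start db2's binlog fresh, as in the plan above
        SHOW MASTER STATUS;        -- File / Position for the slaves to use

        -- 4. On each slave, repoint at db2:
        STOP SLAVE;
        CHANGE MASTER TO MASTER_HOST='db2.example.com',
                         MASTER_LOG_FILE='mysql-bin.000001',
                         MASTER_LOG_POS=107;
        START SLAVE;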

  • Realtime website slowness (PHP/MYSQL)

    - by 3s2ng
    We have a website with real-time transactions. For the past few days we have experienced some slowness during certain periods of time; usually the slowness lasts for about 5 minutes. During the slowness we also noticed that sometimes we cannot connect to SSH and FTP, or it is very slow to connect to those services. We are currently trying to identify the issue. We have already set up a database monitoring tool, and we are about to sign up with pingdom.com. My questions are: if the website is too slow due to the database (table or row locks), will it affect the other services like SSH and FTP? And does ping correlate with the page load and the connection between my PC and the server? Thanks, Mark
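
    Lock waits inside MySQL by themselves should not slow down SSH or FTP; when everything on the box crawls at once, it usually means the whole server is starved (swapping, saturated disk, or CPU). A quick, hedged way to tell during the next slow window (paths and credentials are whatever your setup uses):

        vmstat 1 10                        # si/so columns above 0 mean the box is swapping
        iostat -x 1 10                     # %util near 100 means the disk is saturated
        mysqladmin -u root -p processlist  # long "Locked" states point back at the database

    External ping (as from Pingdom) measures network reachability and latency to the server, not page load time, so it complements rather than replaces the database monitoring.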
