Search Results

Search found 36111 results on 1445 pages for 'mysql update'.

Page 85/1445 | < Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >

  • Mysql process goes over 100% of CPU usage

    - by Temnovit
    Hello! I'm experiencing some problems with my LAMP server. Recently, everything became very slow, even though the visitor count on my websites didn't change much. When I run the top command, it says that the mysql process has taken 150-200% of CPU. How is that possible? I always thought 100% was the maximum. I'm running Ubuntu 9.04 server edition with 1.5 GB RAM. My my.cnf settings:

        key_buffer              = 64M
        max_allowed_packet      = 16M
        thread_stack            = 192K
        thread_cache_size       = 8
        myisam-recover          = BACKUP
        max_connections         = 200
        table_cache             = 512
        table_definition_cache  = 512
        thread_concurrency      = 2
        read_buffer_size        = 1M
        sort_buffer_size        = 4M
        join_buffer_size        = 1M
        query_cache_limit       = 1M   # the maximum size of individual query results
        query_cache_size        = 128M

    Here is the output of MySQLTuner: The top command: What could be the cause of this problem? Can I make changes to my my.cnf to prevent the server from hanging?
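
    On a multi-core machine, top reports CPU as a percentage of a single core, so a busy process can legitimately show 150-200% on a dual-core box. Before tuning my.cnf blind, it helps to see which queries are actually burning the CPU; a quick first step, using only standard MySQL commands (nothing specific to this server is assumed):

        -- run from the mysql client; shows every running query and its state
        SHOW FULL PROCESSLIST;

        -- check whether the 128M query cache is being churned: every write
        -- invalidates cached results for the table, which costs CPU
        SHOW GLOBAL STATUS LIKE 'Qcache%';

    Long-running statements, or a pile-up of queries in the "Locked" state, will usually point at the culprit faster than resizing buffers.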

    Read the article

  • MySQL server simple insert/update/delete queries are taking a long time to execute

    - by ElGabbu
    We have a VPS hosting server with a MySQL server running on it. We host several databases for clients' websites. Recently we have noticed that insert/update and delete queries are taking a long time to execute, sometimes as long as 30 seconds. I use the following command to see these queries being executed:

        watch -n1 mysqladmin proc stat

    We have still not been able to track down the root of this problem. I would appreciate it if anyone had any pointers as to what we can check or improve to resolve the issue. Thanks
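
    Slow writes alongside fast reads often mean table locks (MyISAM locks the whole table for every write) or an unindexed WHERE clause on the UPDATE/DELETE, and the slow query log is the quickest way to catch the offenders. A sketch, assuming MySQL 5.1 or later where these variables are dynamic:

        SET GLOBAL slow_query_log = 'ON';
        SET GLOBAL long_query_time = 1;   -- log anything slower than 1 second

        -- later, look for statements stuck waiting on locks:
        SHOW FULL PROCESSLIST;            -- a "Locked" state points at table-lock contention

    On 5.0 the equivalent settings (log-slow-queries, long_query_time) go in my.cnf and need a restart.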

    Read the article

  • Open table cache in MySQL

    - by vvanscherpenseel
    I have my open table cache set to 1800 and I have a total of 1112 tables. MySQL Tuning Primer reports that 100% of my table cache is used, yet my table cache hit rate is 5%. I understand that this happens because concurrent connections all open tables. I think I should raise the cache limit. I understand that the cache size is limited by the file descriptor limit of my operating system, but are there any other practical limitations I should be aware of? Searching Google or this very website yields mostly posts explaining the concurrent-connection factor, or indecisive answers. My question: can I safely increase the open table cache limit? Is there a maximum?
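
    Raising the cache is generally safe as long as the file descriptor budget holds: an open MyISAM table needs descriptors for both its data and index files (the index descriptor is shared; the data descriptor is per concurrent use), plus one per connection. A quick way to check the relevant numbers with standard commands:

        SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
        SHOW GLOBAL STATUS LIKE 'Open_tables';     -- tables currently held open
        SHOW GLOBAL STATUS LIKE 'Opened_tables';   -- cumulative; fast growth means the cache is thrashing

    and, on the OS side, ulimit -n for the mysqld user. If Opened_tables keeps climbing after a raise, the cache is still too small; if it flattens out, you've found the working-set size.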

    Read the article

  • postfix + mysql, user unknown

    - by stoned
    I have installed postfix with dovecot and postfix admin and all seemed well at the beginning. I can log in with Thunderbird and check the mailboxes (all empty now), and even TLS works. The problem comes when I try to send mail. This is the output of mail.log when I try to send mail from an address to the same address:

        Nov 23 16:41:55 mailforge postfix/local[6322]: 297792467C: to=, relay=local, delay=0.01, delays=0/0/0/0.01, dsn=5.1.1, status=bounced (unknown user: "test")
        Nov 23 16:41:55 mailforge postfix/qmgr[6293]: 297792467C: removed

    To me it looks as if postfix tries to look up the user "test", while in the mysql database users are named as [email protected]. Where should I change this behaviour?
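
    relay=local in that log line is the giveaway: postfix is handing the mail to its local(8) delivery agent, which only knows system users like "test", instead of the virtual/MySQL path that knows user@domain addresses. That usually means the domain is listed in mydestination. A minimal sketch of the relevant main.cf pieces, assuming the usual postfixadmin layout (the map file names are placeholders for whatever your setup uses):

        # /etc/postfix/main.cf
        mydestination = localhost               # the virtual domain must NOT appear here
        virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-domains.cf
        virtual_mailbox_maps    = mysql:/etc/postfix/mysql-virtual-mailboxes.cf
        virtual_alias_maps      = mysql:/etc/postfix/mysql-virtual-aliases.cf

    After editing, run postfix reload and re-test; the bounce should change from "unknown user" to a virtual delivery.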

    Read the article

  • Installing old version of mysql

    - by Peter
    I'm trying to troubleshoot a database import problem and want to duplicate the environment onto another server. This will require installing an older version of mysql, but the listed packages only show a recent version. I'm currently running Debian Wheezy 7.1, and the version that was installed is the packaged 5.5.31. What is the official way to install an older copy? I guess I could hunt around Google and hope to find some files of the same version to install from source, but this doesn't seem like a reliable method.
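
    Debian keeps every package version it ever shipped at snapshot.debian.org, which is the closest thing to an "official" way to get an old .deb. A sketch - the snapshot date and the exact package version string below are assumptions; browse snapshot.debian.org to find a snapshot that carries 5.5.31 and confirm the string with apt-cache policy:

        # add a snapshot mirror from around the time 5.5.31 was current
        echo 'deb http://snapshot.debian.org/archive/debian/20130701T000000Z/ wheezy main' \
            >> /etc/apt/sources.list
        apt-get -o Acquire::Check-Valid-Until=false update
        apt-get install mysql-server-5.5=5.5.31+dfsg-0+wheezy1

    Pinning with package=version is standard apt; the Check-Valid-Until override is needed because snapshot release files have long since expired.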

    Read the article

  • Configure phpMyAdmin to connect to another MySql server

    - by Spirit
    I have installed WAMP server on my laptop and, for the sake of simplicity, I want to configure phpMyAdmin to connect to a mysql server on another machine so that I can dump the database tables. If this is possible (and I believe it is), does anyone know where the phpMyAdmin settings file is located? The location of WAMP on my laptop is C:\wamp. I've looked in C:\wamp\apps\phpmyadmin3.5.1, but there are a lot of php scripts in there. Which one of these should I modify?
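
    The file is config.inc.php at the root of the phpMyAdmin folder (so here, C:\wamp\apps\phpmyadmin3.5.1\config.inc.php); the other scripts are the application itself. A minimal sketch of adding a second server entry - the host is a placeholder for your remote machine:

        <?php
        // in config.inc.php, after the existing server block
        $i++;
        $cfg['Servers'][$i]['host']      = '192.168.1.50';  // remote MySQL host
        $cfg['Servers'][$i]['auth_type'] = 'cookie';        // prompt for user/password at login

    With more than one server defined, phpMyAdmin shows a server choice on its login page. The remote MySQL account also has to allow connections from your laptop's address, or you'll get "access denied" regardless of this file.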

    Read the article

  • setting my mysql server - limiting domains that can connect

    - by Alex
    I am trying to set up a mysql server on my machine. I would like to limit the domains it accepts connections from. My understanding is that you can either have it listen on one IP or on all IPs; therefore, if I want to connect remotely I have to say all IPs. Then I would like to block all domains but the ones I know should actually be connecting. I believe this is done through Windows Firewall. However, how do I do this by domain instead of by IP?
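
    Windows Firewall only filters by address or subnet, but MySQL itself can restrict accounts by host name, which may be what's wanted here: grants are per user@host, and the host part can be a domain name (this requires working reverse DNS and skip_name_resolve left off). A sketch with placeholder names:

        CREATE USER 'appuser'@'app.example.com' IDENTIFIED BY 'secret';
        GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'appuser'@'app.example.com';

    Connections from any other host simply fail authentication, even though mysqld is listening on all interfaces. Wildcards like '%.example.com' also work for whole domains.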

    Read the article

  • MySQL max_user_connections

    - by Sheriffen
    We're releasing a site in a couple of weeks that was developed on a local machine, but now that we're testing on the dev server we get the MySQL error 'max_user_connections'. We have talked to the hosting company (the biggest in Sweden) and they say that we don't close our connections properly. But the thing is that we use the EXACT same code on another host, where it works. I also added echo "closed"; in the database_close function, so that now at the very bottom of every page there is "closed". To me this means that we do close the connection. Does anyone have any idea what could be wrong? We connect through the PHP PDO function and close it by setting it to 'null', all according to the manual.
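
    max_user_connections is a per-account limit set by the host, so the same code can run fine on one host and hit the ceiling on another. Two things worth checking with standard status queries (nothing host-specific assumed): whether the limit actually differs between the two hosts, and whether connections pile up while pages are in flight - closing at the bottom of the page still holds the connection for the whole request, and persistent PDO connections (PDO::ATTR_PERSISTENT) never close at all.

        SHOW VARIABLES LIKE 'max_user_connections';
        SHOW STATUS LIKE 'Threads_connected';   -- watch this while load-testing a page
        SHOW PROCESSLIST;                       -- lists every open connection for your account

    If Threads_connected spikes under load, opening the connection later, or closing it as soon as the queries are done rather than at page end, is the usual fix.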

    Read the article

  • MySql in Bash: Show only errors

    - by TRWTFCoder
    Let me first start off by saying I am not an experienced linux user. I am trying to debug a mysql script in linux; however, my issue is that most of the queries are successful, so I cannot see the error messages because they scroll off the screen. I am executing the queries from a large file using the \. command. I was wondering if there was a way to show ONLY the error messages when I execute the sql file. Right now it is showing both error messages and Query OK,.... I don't really care about the queries that are OK, just the errors. Thanks!
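
    One way that avoids \. entirely: run the file through the mysql client non-interactively. In batch mode the client prints nothing for successful statements and sends errors to stderr, so discarding stdout leaves only the errors on screen. A sketch (user/db/file names are placeholders):

        mysql -u myuser -p mydb < big_script.sql > /dev/null

        # to keep going past errors and collect them in a file for later:
        mysql --force -u myuser -p mydb < big_script.sql > /dev/null 2> errors.log

    --force tells the client to continue after an error instead of aborting at the first one.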

    Read the article

  • Linux filesystem suggestion for MySQL with a 100% SELECT workload

    - by gmemon
    I have a MySQL database that contains millions of rows per table, and there are 9 tables in total. The database is fully populated, and all I am doing is reads; there are no INSERTs or UPDATEs. Data is stored in MyISAM tables. Given this scenario, which Linux file system would work best? Currently I have XFS, but I read somewhere that XFS has horrible read performance. Is that true? Should I move the database to an ext3 file system? Thanks
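
    Whatever the filesystem, one mount option matters a lot for a pure-SELECT workload: by default every read updates the file's atime, turning reads into metadata writes. Mounting the data directory with noatime removes that. A hedged fstab sketch - device and mount point are placeholders:

        # /etc/fstab - noatime stops reads from triggering inode writes
        /dev/sdb1  /var/lib/mysql  xfs  noatime,nodiratime  0  2

    With that in place, for a read-only MyISAM set the filesystem choice itself tends to matter far less than how much of the 9 tables' data the OS page cache can hold.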

    Read the article

  • How to handle monetary values in PHP and MySql?

    - by Songo
    I've inherited a huge pile of legacy code written in PHP on top of a MySQL database. The thing I noticed is that the application uses doubles for storage and manipulation of data. Now, I have come across numerous posts mentioning how doubles are not suited for monetary operations because of rounding errors. However, I have yet to come across a complete solution for how monetary values should be handled in PHP code and stored in a MySQL database. Is there a best practice when it comes to handling money specifically in PHP? Things I'm looking for are:

    How should the data be stored in the database? Column type? Size?
    How should the data be handled in normal addition, subtraction, multiplication or division?
    When should I round the values? How much rounding is acceptable, if any?
    Is there a difference between handling large monetary values and low ones?

    Note: a VERY simplified sample of how I might encounter money values in everyday life:

        $a = $_POST['price_in_dollars'];   // -->(ex: 25.06) will be read as a string; should it be cast to double?
        $b = $_POST['discount_rate'];      // -->(ex: 0.35) value will always be less than 1
        $valueToBeStored = $a * $b;        // --> any hint here is welcomed
        $valueFromDatabase = $row['price'];           // --> price column in database could be double, decimal, ...etc.
        $priceToPrint = $valueFromDatabase * 0.25;    // again, cast needed or not?

    I hope you use this sample code as a means to bring out more use cases, and not take it literally of course.

    Bonus question: if I'm to use an ORM such as Doctrine or Propel, how different will it be to use money in my code?
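
    The widely used pattern is exact types end to end: DECIMAL in MySQL (exact fixed-point, unlike DOUBLE) and either integer cents or bcmath strings in PHP, rounding only once at the final display step. A sketch of the storage side - the precision chosen here is a common convention, not a requirement:

        -- DECIMAL(19,4): exact storage; 4 fractional digits leaves headroom
        -- for intermediate results like price * rate before final rounding
        CREATE TABLE order_lines (
            id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            price  DECIMAL(19,4) NOT NULL,
            rate   DECIMAL(8,4)  NOT NULL
        );

        -- arithmetic done in SQL on DECIMAL columns stays exact:
        SELECT price * rate AS discount_amount FROM order_lines;

    On the PHP side the POSTed values arrive as strings; keeping them as strings and using bcadd()/bcmul() (or multiplying integer cents) avoids ever converting to float. ORMs such as Doctrine map DECIMAL columns to PHP strings for exactly this reason.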

    Read the article

  • mysqldb interfaceError

    - by Johanna
    I have a very weird problem with mysqldb (the MySQL module for Python). I have a file with queries for inserting records into tables. If I call the functions from that file, it works just fine. But when I try to call one of the functions from another file, it throws me a _mysql_exceptions.InterfaceError: (0, ''). I really don't get what I'm doing wrong here. I call the function from buildDB.py:

        import create
        create.newFormat("HD", 0, 0, 0)

    The function newFormat(..) is in create.py (imported):

        from Database import Database
        db = Database()

        def newFormat(name, width=0, height=0, fps=0):
            format_query = "INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('"+name+"',"+str(width)+","+str(height)+","+str(fps)+");"
            db.execute(format_query)

    And the class Database is the following:

        import MySQLdb
        from MySQLdb.constants import FIELD_TYPE

        class Database():
            def __init__(self):
                server = "localhost"
                login = "seq"
                password = "seqmanager"
                database = "Sequence"
                my_conv = { FIELD_TYPE.LONG: int }
                self.conn = MySQLdb.connection(host=server, user=login, passwd=password, db=database, conv=my_conv)
                # self.cursor = self.conn.cursor()

            def close(self):
                self.conn.close()

            def execute(self, query):
                self.conn.query(query)

    (I put only relevant code.) Traceback:

        Z:\sequenceManager\mysql>python buildDB.py
        D:\ProgramFiles\Python26\lib\site-packages\MySQLdb\__init__.py:34: DeprecationWarning: the sets module is deprecated
          from sets import ImmutableSet
        INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('HD',0,0,0);
        Traceback (most recent call last):
          File "buildDB.py", line 182, in <module>
            create.newFormat("HD")
          File "Z:\sequenceManager\mysql\create.py", line 52, in newFormat
            db.execute(format_query)
          File "Z:\sequenceManager\mysql\Database.py", line 19, in execute
            self.conn.query(query)
        _mysql_exceptions.InterfaceError: (0, '')

    The warning has never been a problem before, so I don't think it's related.
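
    Worth noting: MySQLdb.connection (lowercase) is the low-level _mysql wrapper; the documented entry point is MySQLdb.connect(), which returns a higher-level connection with cursors, sensible default type converters, and parameter escaping. A sketch of the same insert through the standard API, with the credentials copied from the question:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="seq",
                               passwd="seqmanager", db="Sequence")
        cur = conn.cursor()
        # parameterized query: the driver handles quoting/escaping
        cur.execute("INSERT INTO Format (form_name, form_width, form_height, form_fps)"
                    " VALUES (%s, %s, %s, %s)", ("HD", 0, 0, 0))
        conn.commit()   # needed if the table is InnoDB
        cur.close()
        conn.close()

    Besides being the supported interface, this removes the hand-built SQL string, which is fragile and injection-prone.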

    Read the article

  • CENTOS 6 - How to install php-mysql when php-common @remi is present?

    - by Multitut
    I am having trouble adding mysql support to my php installation; the installation was made using a ready-to-use package that came with our VPS. This is my phpinfo() output: http://snake.quetzalcoatech.com/info.php I am trying to install php mysql using:

        yum install php-mysql

    and get this output:

        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirrors.serveraxis.net
         * extras: mirror.fdcservers.net
         * updates: bay.uchicago.edu
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-mysql.x86_64 0:5.3.3-14.el6_3 will be installed
        --> Processing Dependency: php-common = 5.3.3-14.el6_3 for package: php-mysql-5.3.3-14.el6_3.x86_64
        --> Finished Dependency Resolution
        Error: Package: php-mysql-5.3.3-14.el6_3.x86_64 (updates)
               Requires: php-common = 5.3.3-14.el6_3
               Installed: php-common-5.3.17-2.el6.remi.x86_64 (@remi)
                   php-common = 5.3.17-2.el6.remi
               Available: php-common-5.3.3-3.el6_2.8.x86_64 (base)
                   php-common = 5.3.3-3.el6_2.8
               Available: php-common-5.3.3-14.el6_3.x86_64 (updates)
                   php-common = 5.3.3-14.el6_3
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest

    I am a noob using Linux, so could you tell me which command I should use to install a compatible php-mysql module? Thank you so much!
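
    The error itself explains the conflict: php-common on the box comes from the remi repository (5.3.17-2.el6.remi), while yum is trying to take php-mysql from the stock updates repo (5.3.3), and the two must match versions. Installing php-mysql from the same remi repo keeps everything aligned:

        yum --enablerepo=remi install php-mysql

        # or bring the whole PHP stack up to matching remi versions:
        yum --enablerepo=remi update php\*

    --enablerepo works here because remi ships its own php-mysql built against its php-common.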

    Read the article

  • mysql connect error issue

    - by Alex
    I have a php page which updates a MySQL db. I don't understand why the following php code is saying "Could not update marker. No database selected". Strange! Can you please tell me why it's showing an error message? Thanks. PHP code:

        <?php
        // database settings
        $db_username = 'root';
        $db_password = '';
        $db_name = 'parkool';
        $db_host = 'localhost';

        //mysqli
        $mysqli = new mysqli($db_host, $db_username, $db_password, $db_name);
        if (mysqli_connect_errno()) {
            header('HTTP/1.1 500 Error: Could not connect to db!');
            exit();
        }

        ################ Save & delete markers #################
        if($_POST) //run only if there's post data
        {
            //make sure request is coming from Ajax
            $xhr = $_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest';
            if (!$xhr){
                header('HTTP/1.1 500 Error: Request must come from Ajax!');
                exit();
            }

            // get marker position and split it for database
            $mLatLang = explode(',', $_POST["latlang"]);
            $mLat = filter_var($mLatLang[0], FILTER_VALIDATE_FLOAT);
            $mLng = filter_var($mLatLang[1], FILTER_VALIDATE_FLOAT);
            $mName = filter_var($_POST["name"], FILTER_SANITIZE_STRING);
            $mAddress = filter_var($_POST["address"], FILTER_SANITIZE_STRING);
            $mId = filter_var($_POST["id"], FILTER_SANITIZE_STRING);

            /*$result = mysql_query("SELECT id FROM test.markers WHERE test.markers.lat=$mLat AND test.markers.lng=$mLng");
            if (!$result) {
                echo 'Could not run query: ' . mysql_error();
                exit;
            }
            $row = mysql_fetch_row($result);
            $id = $row[0];*/

            //$output = '<h1 class="marker-heading">'.$mId.'</h1><p>'.$mAddress.'</p>';
            //exit($output);

            //Update Marker
            if(isset($_POST["update"]) && $_POST["update"]==true)
            {
                $results = mysql_query("UPDATE parkings SET latitude = '$mLat', longitude = '$mLng' WHERE locId = '94' ");
                if (!$results) {
                    //header('HTTP/1.1 500 Error: Could not Update Markers! $mId');
                    echo "could not update marker." . mysql_error();
                    exit();
                }
                exit("Done!");
            }

            $output = '<h1 class="marker-heading">'.$mName.'</h1><p>'.$mAddress.'</p>';
            exit($output);
        }

        ############### Continue generating Map XML #################
        //Create a new DOMDocument object
        $dom = new DOMDocument("1.0");
        $node = $dom->createElement("markers"); //Create new element node
        $parnode = $dom->appendChild($node);    //make the node show up

        // Select all the rows in the markers table
        $results = $mysqli->query("SELECT * FROM parkings WHERE 1");
        if (!$results) {
            header('HTTP/1.1 500 Error: Could not get markers!');
            exit();
        }

        //set document header to text/xml
        header("Content-type: text/xml");

        // Iterate through the rows, adding XML nodes for each
        while($obj = $results->fetch_object())
        {
            $node = $dom->createElement("marker");
            $newnode = $parnode->appendChild($node);
            $newnode->setAttribute("name", $obj->name);
            $newnode->setAttribute("locId", $obj->locId);
            $newnode->setAttribute("address", $obj->address);
            $newnode->setAttribute("latitude", $obj->latitude);
            $newnode->setAttribute("longitude", $obj->longitude);
            //$newnode->setAttribute("type", $obj->type);
        }
        echo $dom->saveXML();
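
    The cause is visible in the code itself: the connection is opened with mysqli, but the UPDATE runs through the old mysql_query() function, which belongs to the separate mysql_* extension and has no connection (and hence no selected database) of its own. Running the update through the same $mysqli handle fixes the error; a sketch, using a prepared statement since $mLat/$mLng come from POST data (binding $locId = 94 to match the hard-coded value in the original):

        $locId = 94;  // hard-coded in the original query
        $stmt = $mysqli->prepare("UPDATE parkings SET latitude = ?, longitude = ? WHERE locId = ?");
        $stmt->bind_param('ddi', $mLat, $mLng, $locId);
        if (!$stmt->execute()) {
            echo "could not update marker: " . $mysqli->error;
            exit();
        }
        exit("Done!");

    The same applies to the commented-out mysql_query()/mysql_error() blocks: once on mysqli, use $mysqli->error rather than mysql_error() for diagnostics.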

    Read the article

  • Return pre-UPDATE column values in PostgreSQL without using triggers, functions or other "magic"

    - by Python Larry
    I have a related question, but this is another part of MY puzzle. I would like to get the OLD VALUE of a column from a row that was UPDATEd - WITHOUT using triggers (nor stored procedures, nor any other extra, non-SQL/-query entities). The query I have is like this:

        UPDATE my_table
        SET    processing_by = our_id_info  -- unique to this instance
        WHERE  trans_nbr IN (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(trans_nbr) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               )
        RETURNING row_id

    If I could do "FOR UPDATE ON my_table" at the end of the subquery, that'd be divine (and fix my other question/problem). But that won't work: you can't have that AND a "GROUP BY" (which is necessary for figuring out the COUNT of trans_nbr's). Then I could just take those trans_nbr's and do a query first to get the (soon-to-be-) former processing_by values. I've tried doing it like:

        UPDATE my_table
        SET    processing_by = our_id_info  -- unique to this instance
        FROM   my_table old_my_table
        JOIN   (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(trans_nbr) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               ) sub_my_table
               ON old_my_table.trans_nbr = sub_my_table.trans_nbr
        WHERE  my_table.trans_nbr = sub_my_table.trans_nbr
        AND    my_table.processing_by = old_my_table.processing_by
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by

    But that can't work; "old_my_table" is not visible outside of the join, so the RETURNING clause is blind to it. I've long since lost count of all the attempts I've made; I have been researching this for literally hours. If I could just find a bullet-proof way to lock the rows in my subquery - and ONLY those rows, and WHEN the subquery happens - all the concurrency issues I'm trying to avoid would disappear...

    UPDATE: [WIPES EGG OFF FACE] Okay, so I had a typo in the non-generic version of the above that I said "doesn't work"; it does... thanks to Erwin Brandstetter, below, who stated it would. I re-did it (after a night's sleep, refreshed eyes, and a banana for breakfast). Since it took me so long/hard to find this sort of solution, perhaps my embarrassment is worth it? At least this is on SO for posterity now. What I now have (and which works) is like this:

        UPDATE my_table
        SET    processing_by = our_id_info  -- unique to this instance
        FROM   my_table AS old_my_table
        WHERE  trans_nbr IN (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(*) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               )
        AND    my_table.row_id = old_my_table.row_id
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by AS old_processing_by

    The COUNT(*) is per a suggestion from Flimzy in a comment on my other (linked above) question. (I was more specific than necessary. [In this instance.])

    Read the article

  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc.). Then I import each row into an MS SQL table. Each DB row basically consists of a record_ID (primary, auto-incrementing) and about 50 varchar fields. After the insert is done, I run a validation function on the file that checks each field in each row against a bunch of criteria (trimmed length, isnumeric, range checks, etc.). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out). I have a 5,000 line test file. The upload, initial check, and import takes about 5-6 seconds. The detailed error check and insert into the Errors table takes about 5-8 seconds (this file has about 1200 errors in it). However, the "resets" part takes about 40-45 seconds for 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the pages take 50 seconds to return. My UPDATE stored proc uses some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL:

        UPDATE dbo.Records
        SET    dbo.Records.file_ID = CASE @field_name WHEN 'file_ID' THEN @field_value ELSE file_ID END,
               .
               .  (all 50 varchar field CASE statements here)
               .
        WHERE  dbo.Records.record_ID = @record_ID

    Is there any way I can help my performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs, and things are just slow (it's a quad-proc server with 8GB RAM)? Any suggestions appreciated.
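
    Grouping the calls into one transaction is both possible and usually the biggest single win: in autocommit mode each of the 750 UPDATEs forces its own log flush, and one enclosing transaction amortizes that to a single flush at commit. In T-SQL terms the batch amounts to the sketch below - the proc name is a placeholder for whatever the real reset proc is called; from VB.Net the same thing is done by running all the SqlCommands on one SqlTransaction:

        BEGIN TRANSACTION;
        EXEC dbo.ResetRecordField @record_ID = 101, @field_name = 'field07', @field_value = '';
        EXEC dbo.ResetRecordField @record_ID = 102, @field_name = 'field13', @field_value = 'cleaned';
        -- ... remaining resets ...
        COMMIT TRANSACTION;

    A second-order gain, when several fields of the same row need resetting, is batching them into one UPDATE per record_ID instead of one per field, since the 50-branch CASE is evaluated for all 50 columns on every call anyway.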

    Read the article

  • MySQL InnoDB Corruption after power outage, possible to recover?

    - by Tim Hackett
    Hey Guys, I recently started trying to get Redmine up and running after a power outage that seems to have corrupted our InnoDB database in MySQL. Redmine had an extensive set of documentation that I would like to get even if redmine isn't able to run. The service fails on startup. I have tried inserting innodb_force_recovery = 4 per the documentation from the url in the error log. (I also tried 1 through 6, as I have backed up all directories after the corruption.) I have verified through "mysqld-nt --print-defaults" that it is starting with the recovery option in the params. The machine is running Windows Server 2003 SP2, Xeon E5335 with 2GB RAM. MySQL is not mirrored to another machine, nor is the machine a mirror. I do not have any backups because the previous person did not set them up. Here is the error log:

        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        100308 14:50:01 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        100308 14:50:02 InnoDB: Error: page 7 log sequence number 0 935521175
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        [the same "log sequence number ... is in the future" error repeats for
         pages 2, 11, 5, 6 and 1577]
        InnoDB: Error: trying to access page number 4294965119 in space 0,
        InnoDB: space name .\ibdata1,
        InnoDB: which is outside the tablespace bounds.
        InnoDB: Byte offset 0, len 16384, i/o type 10.
        InnoDB: If you get this error at mysqld startup, please check that
        InnoDB: your my.cnf matches the ibdata files that you have in the
        InnoDB: MySQL server.
        100308 14:50:02 InnoDB: Assertion failure in thread 960 in file .\fil\fil0fil.c line 3959
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        100308 14:50:02 [ERROR] mysqld-nt: Got signal 11. Aborting!
        100308 14:50:02 [ERROR] Aborting
        100308 14:50:02 [Note] mysqld-nt: Shutdown complete
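
    For what it's worth, the standard salvage path once innodb_force_recovery gets mysqld to stay up is: dump everything, rebuild the tablespace, reload. A sketch for this Windows setup - the my.ini location and service handling are assumptions about the install:

        # my.ini, [mysqld] section, while salvaging
        innodb_force_recovery = 4

        mysqldump --all-databases --user=root -p > all_databases.sql

        REM stop the MySQL service, move ibdata1 and ib_logfile0/ib_logfile1 aside,
        REM remove innodb_force_recovery from my.ini, restart the service, then:
        mysql --user=root -p < all_databases.sql

    The catch in this log is the assertion during crash recovery: if mysqld dies before the dump can run at any recovery level, the remaining option is page-level extraction from the backed-up directory copies with external InnoDB recovery tools, which goes beyond what the server can do on its own.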

    Read the article

  • Update iPhone UIProgressView during NSURLConnection download.

    - by Scott Pendleton
    I am using this code:

        NSURLConnection *oConnection = [[NSURLConnection alloc] initWithRequest:oRequest delegate:self];

    to download a file, and I want to update a progress bar on a subview that I load. For that, I use this code:

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
            [oReceivedData appendData:data];
            float n = oReceivedData.length;
            float d = self.iTotalSize;
            NSNumber *oNum = [NSNumber numberWithFloat:n/d];
            self.oDPVC.oProgress.progress = [oNum floatValue];
        }

    The subview is oDPVC, and the progress bar on it is oProgress. Setting the progress property does not update the control. From what I have read on the web, lots of people want to do this, and yet there is not a complete, reliable sample. Also, there is much contradictory advice. Some people think that you don't need a separate thread. Some say you need a background thread for the progress update. Some say no, do other things in the background and update the progress on the main thread. I've tried all the advice, and none of it works for me. One more thing, maybe this is a clue. I use this code to load the subview during applicationDidFinishLaunching:

        self.oDPVC = [[DownloadProgressViewController alloc] initWithNibName:@"DownloadProgressViewController" bundle:nil];
        [window addSubview:self.oDPVC.view];

    In the XIB file (which I have examined both in Interface Builder and in a text editor) the progress bar is 280 pixels wide. But when the view opens, it has somehow been adjusted to maybe half that width. Also, the background image of the view is default.png. Instead of appearing right on top of the default image, it is shoved up about 10 pixels, leaving a white bar across the bottom of the screen. Maybe that's a separate issue, maybe not.

    Read the article

  • Mysql: Disk is full writing

    - by elma
    Hi there, I'm having some problems with my mysql server lately, so I've decided to check the error logs:

        [root@LSN-D1179 log]# tail -10 mysqld.log
        100325 19:30:03 [ERROR] /usr/libexec/mysqld: Table './lfe/actions' is marked as crashed and should be repaired
        100325 19:30:03 [ERROR] /usr/libexec/mysqld: Table './lfe/actions' is marked as crashed and should be repaired
        100325 19:30:18 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_task_logs.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:34:34 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_profile_portal_views.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:39:46 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_posts.TMD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:40:18 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_task_logs.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:44:34 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_profile_portal_views.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:49:46 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_posts.TMD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:50:18 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_task_logs.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:54:34 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_profile_portal_views.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs

    And here's my df -h output:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00
                              143G  6.2G  129G   5% /
        /dev/sda1              99M   12M   83M  13% /boot
        tmpfs                 490M     0  490M   0% /dev/shm

    As you can see, I have plenty of free space, so I can't figure out these "Disk is full" errors in mysqld.log. Does anyone know what I should do to fix this? Ugur
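
    Errcode 122 on Linux is EDQUOT, "Disk quota exceeded", which is why df shows plenty of space: it's the mysql user's disk quota that is full, not the filesystem. MySQL's own perror utility confirms the code, and the quota tools show the usage (assuming user quotas are enabled on the filesystem):

        $ perror 122
        OS error code 122:  Disk quota exceeded

        $ quota -v mysql    # usage and limits for the mysql user
        $ repquota -a       # per-user summary for all quota-enabled filesystems

    Raising or removing the quota for the mysql user (edquota mysql) should stop the errors; the crashed lfe/actions table will still need a REPAIR TABLE afterwards.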

    Read the article

  • Replace text with spaces in MySQL

    - by javipas
    I'm trying to do a global search-and-replace in my database, which has a lot of articles with a double carriage return because of this code: <p> </p> I'd like to replace this in my WordPress blog so that nothing appears instead, and so I can delete the CR. I've tried this on my database:

        UPDATE wp_posts SET post_content = REPLACE(post_content, '<p> </p>', '');

    but it didn't work. Why? Do I have to do something special to match the space between the <p> and the </p>? Mmm. Good points, both Jon Angliss and Wim. Jon, as you could have guessed, the database shows no entries with that text string, so there's something going on inside the post_content field. Wim, the famous &nbsp; was replaced previously, but there are still hundreds of posts that for some reason have something different between the p and the /p tags. I've done a search on one of the posts with this error:

        mysql> SELECT * FROM wp_posts WHERE post_title LIKE '%3DVisionLive%';

    And looking in the post_content field, this is a little piece of the post:

        Phil Eisler, responsable de la divisi?n 3D Vision.?</p>
        <p>?</p>
        <p>Este portal ser? por tanto

    No Spanish tilde (accent) is shown on the terminal, and instead of a space there's a question mark between the p and the /p tags. I've tried to replace <p>?</p>, but again, no results. There's some character (or several) there, but I don't know how to discover what it is. Maybe it's the character set of my terminal; I've accessed the database from phpmyadmin, and there a space character appears between the p and the /p. Weird.
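
    Since the character survives neither copy-paste nor the terminal, a reliable way to identify it is to look at the raw bytes with HEX(), then replace by byte value instead of by typed character. A sketch, assuming the posts are stored as UTF-8 (where a non-breaking space is the byte pair C2 A0 and shows up as ? in a non-UTF-8 terminal):

        -- inspect the bytes around one of the offending <p>...</p> pairs
        SELECT HEX(SUBSTRING(post_content, LOCATE('<p>', post_content), 12))
        FROM wp_posts
        WHERE post_title LIKE '%3DVisionLive%';

        -- if the mystery bytes turn out to be C2A0 (UTF-8 non-breaking space):
        UPDATE wp_posts
        SET post_content = REPLACE(post_content, CONVERT(UNHEX('C2A0') USING utf8), '');

    The same HEX() inspection will also reveal whether the accented characters were mangled on the way in; double-encoded UTF-8 is the usual suspect when they display as ?.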

    Read the article

  • How to use most of memory available on MySQL

    - by Zilvinas
    I've got a MySQL server which has both InnoDB and MyISAM tables. The InnoDB tablespace is quite small, under 4 GB. MyISAM is big, ~250 GB in total, of which ~50 GB is for indexes. Our server has 32 GB of RAM but it usually uses only ~8 GB. Our key_buffer_size is only 2 GB, but our key cache hit ratio is ~95%. I find that hard to believe. Here are our key statistics:

        | Key_blocks_not_flushed | 1868        |
        | Key_blocks_unused      | 109806      |
        | Key_blocks_used        | 1714736     |
        | Key_read_requests      | 19224818713 |
        | Key_reads              | 60742294    |
        | Key_write_requests     | 1607946768  |
        | Key_writes             | 64788819    |

    key_cache_block_size is at the default of 1024. We have ~50 GB of index data, and a 2 GB key cache is enough to get a 95% hit ratio. Is that possible? On the other side, the data set is 200 GB, and since MyISAM uses OS (CentOS) caching I would expect it to use a lot more memory to cache accessed MyISAM data. But at this stage I see that the key_buffer is completely used, and our 4 GB InnoDB buffer pool is also completely used; that adds up to 6 GB, which means data is cached using just ~2 GB? My questions are: how can I check where the free memory could be used? How can I check whether MyISAM hits the OS cache for data reads instead of disk?
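
    It is possible, and the statistics above actually show better than 95%: the miss rate of a MyISAM key cache is Key_reads / Key_read_requests, and 60742294 / 19224818713 is about 0.003, i.e. a ~99.7% hit rate. A 2 GB buffer can serve 50 GB of indexes whenever the hot part of the B-tree (the upper levels plus the frequently used leaf blocks) fits in memory. To check the numbers yourself, standard commands only:

        SHOW GLOBAL STATUS LIKE 'Key_read%';
        SELECT 1 - (60742294 / 19224818713) AS key_cache_hit_ratio;   -- ~0.9968

    For the data-file side, free -m on the host shows how much RAM the kernel is using as page cache (the "cached" column), which is where MyISAM .MYD reads land; if that figure stays small, the data working set actually being read is simply small too.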

    Read the article

  • mysql mass insert data

    - by user12145
    Edit: I realized that if I construct a large multi-row query in memory, the speed increases by almost an order of magnitude: "insert ignore into xxx (col1, col2) values ('a',1),('b',1),('c',1)..."

    Edit: since I have an index on the first column, the insert time creeps up as I insert more. Can I delay the index until the end?

    Original: I'm using the following to batch-insert 10 million rows into a mysql db (not all at once, since they don't all fit into memory), and it's too slow (taking many hours). Should I use load file to improve performance? I would have to create a second file to store all the 10 million rows, then load that into the db. Are there better ways?

        PreparedStatement st = con.prepareStatement("insert ignore into xxx (col1, col2) values (?, 1)");
        Iterator d = data.iterator();
        while (d.hasNext()) {
            st.clearParameters();
            st.setString(1, ((String) d.next()).toLowerCase());
            st.addBatch();
        }
        int[] updateCounts = st.executeBatch();
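
    On delaying the index: for a MyISAM table, ALTER TABLE ... DISABLE KEYS postpones non-unique index maintenance until ENABLE KEYS rebuilds the index in one pass, which is exactly the "index at the end" behaviour asked for. Combined with LOAD DATA INFILE (which also skips per-statement parsing), a sketch - table and file names are placeholders:

        ALTER TABLE xxx DISABLE KEYS;

        LOAD DATA LOCAL INFILE '/tmp/rows.txt'
        IGNORE INTO TABLE xxx
        (col1, col2);

        ALTER TABLE xxx ENABLE KEYS;

    Caveats: DISABLE KEYS only affects non-unique indexes and only on MyISAM; if the index in question is UNIQUE, or the table is InnoDB, it still has to be maintained row by row (for InnoDB the usual alternative is dropping the index before the load and re-adding it after).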

    Read the article

  • Logging hurts MySQL performance - but, why?

    - by jimbo
    I'm quite surprised that I can't see an answer to this anywhere on the site already, nor in the MySQL documentation (section 5.2 seems to have logging otherwise well covered!) If I enable binlogs, I see a small performance hit (subjectively), which is to be expected with a little extra IO -- but when I enable a general query log, I see an enormous performance hit (double the time to run queries, or worse), way in excess of what I see with binlogs. Of course I'm now logging every SELECT as well as every UPDATE/INSERT, but, other daemons record their every request (Apache, Exim) without grinding to a halt. Am I just seeing the effects of being close to a performance "tipping point" when it comes to IO, or is there something fundamentally difficult about logging queries that causes this to happen? I'd love to be able to log all queries to make development easier, but I can't justify the kind of hardware it feels like we'd need to get performance back up with general query logging on. I do, of course, log slow queries, and there's negligible improvement in general usage if I disable this. (All of this is on Ubuntu 10.04 LTS, MySQLd 5.1.49, but research suggests this is a fairly universal issue)
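
    One practical middle ground, since general_log became a dynamic variable in 5.1: turn full query logging on only while reproducing something, rather than paying the cost permanently. A sketch:

        SET GLOBAL log_output  = 'FILE';   -- or 'TABLE' to query the log via mysql.general_log
        SET GLOBAL general_log = 'ON';
        -- ... reproduce the behaviour under investigation ...
        SET GLOBAL general_log = 'OFF';

    As to why it hurts so much more than binlogging: the general log is written synchronously by the executing thread for every statement, SELECTs included, serialized behind a single log lock, whereas binlog writes are batched per transaction and skip reads entirely. That serialization, more than raw IO volume, is usually the tipping point.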

    Read the article
