Search Results

Search found 34274 results on 1371 pages for 'mysql table'.


  • Upgrading mysql from 4.1.22 to 5.0

    - by Arsenal
    Hi, I'm trying to upgrade our company's MySQL 4.1.22 installation to 5.0. I'm using sudo yum --enablerepo=centosplus upgrade mysql* but keep getting an error about files conflicting with the 4.1 version. Does that mean there really is no other way than uninstalling 4.1 and installing 5.0? I have read that the yum upgrade command should work, however... Thanks in advance!

    Read the article

  • MySQL InnoDB insertion is very slow

    - by dharmapurikar
    We use MySQL Server 5.1.43, 64-bit edition, with InnoDB as the storage engine. We have a SQL script that we execute every time we build the application. On an Ubuntu machine with MySQL Server and the InnoDB engine it takes about 55 seconds to complete. If I run the same script on OS X, it takes close to 3 minutes! Any ideas why OS X is so slow while executing this script?
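
    One thing that may be worth checking, offered as an assumption rather than a diagnosis: if the script runs many single-row INSERTs with autocommit on, every statement flushes the InnoDB log to disk, and the cost of that flush differs a lot between platforms and filesystems. A minimal sketch of two ways to test that theory:

        -- Relax per-commit log flushing while the build script runs
        -- (trades some durability for speed; assumes flushing is the bottleneck)
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;

        -- Or wrap the bulk of the script in a single transaction so there is one flush
        START TRANSACTION;
        -- ... the INSERT statements from the build script ...
        COMMIT;

    If neither changes the OS X timing, the difference probably lies elsewhere (disk, fsync semantics, or the MySQL build itself).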

    Read the article

  • MySQL index building performance

    - by Christian
    I tried to build an index over two columns of a 30,000,000-row table. I canceled the process after ~60 hours as it didn't seem to work. For some reason MySQL uses only 22 MB of RAM instead of using the RAM fully. Is index building an operation that needs no RAM, or is there some way to tell MySQL to use more RAM to be faster?
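
    A hedged pointer: for a MyISAM table (an assumption; InnoDB behaves differently in 5.x), ALTER TABLE ... ADD INDEX rebuilds the index by sorting, and the in-memory buffers it uses are controlled by server variables, not by how much RAM the machine has. A minimal sketch with made-up table and column names:

        -- Give the index build a much larger sort buffer and key cache
        SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;
        SET GLOBAL  key_buffer_size         = 512 * 1024 * 1024;

        -- Hypothetical two-column index on the 30M-row table
        ALTER TABLE big_table ADD INDEX idx_two_cols (col_a, col_b);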

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but if sorted by ASC it's extremely slow.

    This takes about 150 milliseconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC LIMIT 0, 50;

    The EXPLAIN is the same for both queries:

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to "USE INDEX (published)". If I take that out it's the same performance both ways. But the EXPLAIN shows the query is less efficient overall:

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table:

        CREATE TABLE `posts` (
          `id` int(20) NOT NULL AUTO_INCREMENT,
          `feed_id` int(11) NOT NULL,
          `post_url` varchar(255) NOT NULL,
          `title` varchar(255) NOT NULL,
          `content` blob,
          `author` varchar(255) DEFAULT NULL,
          `published` int(12) DEFAULT NULL,
          `updated` datetime NOT NULL,
          `created` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `post_url` (`post_url`,`feed_id`),
          KEY `feed_id` (`feed_id`),
          KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!
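
    A hedged reading of the EXPLAIN: with USE INDEX (published), MySQL walks the published index in the requested direction and filters each row against the feed_id IN (...) list. DESC is presumably fast because recently published rows happen to belong to those feeds, while ASC has to scan a large part of the index before it finds 50 matches. One common workaround (an assumption, not a confirmed fix for this table) is a composite index so the filter and the sort can share an index:

        -- Hypothetical composite index: filter column first, sort column second
        ALTER TABLE posts ADD INDEX feed_published (feed_id, published);

        SELECT posts.id
        FROM posts
        WHERE posts.feed_id IN (4953,622,1,1852,4952,76,623,624,10)
        ORDER BY posts.published ASC
        LIMIT 0, 50;

    A filesort may still show up in EXPLAIN because the IN list spans several index ranges, but it would only sort the few hundred matching rows rather than scanning the whole index.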

    Read the article

  • MySQL ninja tricks

    - by alexn
    Hi, what are your MySQL ninja tricks? What features are extra special? I'm starting with ORDER BY FIELD, which lets you sort in a particular order, like this:

        SELECT url FROM customer ORDER BY FIELD(customer.priority, 1, 2, 3, 0)

    Features like this are hard to find in the MySQL documentation. Bring it!

    Read the article

  • Optimizing MySQL query to avoid "Using filesort"

    - by usef_ksa
    I need your help to optimize the query to avoid "Using filesort". The job of the query is to select all the articles that belong to a specific tag. The query is:

        SELECT title FROM tag, article
        WHERE tag='Riyad' AND tag.article_id=article.id
        ORDER BY tag.article_id

    The table structures are the following:

    Tag table

        CREATE TABLE `tag` (
          `tag` VARCHAR( 30 ) NOT NULL ,
          `article_id` INT NOT NULL ,
          INDEX ( `tag` )
        ) ENGINE = MYISAM ;

    Article table

        CREATE TABLE `article` (
          `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
          `title` VARCHAR( 60 ) NOT NULL
        ) ENGINE = MYISAM

    Sample data

        INSERT INTO `article` VALUES (1, 'About Riyad');
        INSERT INTO `article` VALUES (2, 'About Newyork');
        INSERT INTO `article` VALUES (3, 'About Paris');
        INSERT INTO `article` VALUES (4, 'About London');
        INSERT INTO `tag` VALUES ('Riyad', 1);
        INSERT INTO `tag` VALUES ('Saudia', 1);
        INSERT INTO `tag` VALUES ('Newyork', 2);
        INSERT INTO `tag` VALUES ('USA', 2);
        INSERT INTO `tag` VALUES ('Paris', 3);
        INSERT INTO `tag` VALUES ('France', 3);
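
    One hedged suggestion, assuming the data stays roughly this shape: a composite index whose first column matches the equality filter and whose second column matches the ORDER BY lets MySQL read the rows already in order, so the filesort disappears. A sketch (the index name is made up):

        -- Hypothetical composite index; the existing single-column index on `tag`
        -- becomes redundant once this is in place
        ALTER TABLE tag ADD INDEX tag_article (`tag`, `article_id`);

        SELECT article.title
        FROM tag
        JOIN article ON tag.article_id = article.id
        WHERE tag.tag = 'Riyad'
        ORDER BY tag.article_id;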

    Read the article

  • MySQL query lag time / deadlock?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query?

    I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem() {
            $sql = "SELECT currentItemId from settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id) {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > '$currentItem'";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the script,
            // and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;

            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. Which makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is it true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
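
    A hedged note on the likely cause: the read in getCurrentItem() and the write in setCurrentItem() are two separate statements, so two scripts can both read the same currentItemId before either one writes, and no amount of waiting on the database fixes that race. One common pattern, sketched as an assumption about how the schema could be used (the 'claimed' status value and the id are hypothetical), is to claim each item with a single atomic UPDATE and only process it when exactly one row was affected:

        -- Atomically claim one item; only the first session to run this sees 1 affected row,
        -- every other session sees 0 and skips the item
        UPDATE items
        SET status = 'claimed'
        WHERE id = 123 AND status = 'pending';

    Alternatively, SELECT ... FOR UPDATE on the settings row inside a transaction serializes the scripts, at the cost of making them wait on each other. A text file would have exactly the same race unless it is locked.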

    Read the article

  • Convert MSSQL varbinary field to MySQL, keeping data intact

    - by Mike Sheridan
    I was given the daunting task of converting an ASP website to PHP and MSSQL to MySQL, and I ran into an issue that hopefully somebody can help with. I have a user table with a password field of datatype Varbinary(128); we are using pwdencrypt to encrypt the passwords. Is there a way to transfer that over to MySQL? Somehow I need to be able to keep the passwords intact... How can I go about that? Any pointers would be greatly appreciated!
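
    A hedged note: pwdencrypt produces a one-way salted hash, so the plaintext passwords cannot be recovered; the bytes can only be carried across as-is, and login verification has to keep using the MSSQL hashing scheme (or users must reset their passwords). A minimal sketch of the MySQL side, assuming the hashes are exported from MSSQL as hex strings (the table, column, and hex value here are made up):

        CREATE TABLE users (
          id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          password VARBINARY(128) NOT NULL
        );

        -- 0x... is the hex dump of the original MSSQL varbinary value
        INSERT INTO users (password) VALUES (0x0100ABCDEF0123456789);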

    Read the article

  • Finding column index using jQuery when table contains column-spanning cells

    - by Brant Bobby
    Using jQuery, how can I find the column index of an arbitrary table cell in the example table below, such that cells spanning multiple columns have multiple indexes?

    HTML

        <table>
          <tbody>
            <tr>
              <td>One</td>
              <td>Two</td>
              <td id="example1">Three</td>
              <td>Four</td>
              <td>Five</td>
              <td>Six</td>
            </tr>
            <tr>
              <td colspan="2">One</td>
              <td colspan="2">Two</td>
              <td colspan="2" id="example2">Three</td>
            </tr>
            <tr>
              <td>One</td>
              <td>Two</td>
              <td>Three</td>
              <td>Four</td>
              <td>Five</td>
              <td>Six</td>
            </tr>
          </tbody>
        </table>

    jQuery

        var cell = $("#example1");
        var example1ColIndex = cell.parent("tr").children().index(cell); // == 2. This is fine.

        cell = $("#example2");
        var example2ColumnIndex = cell.parent("tr").children().index(cell); // == 2. It should be 4 (or 5, but I only need the lowest).

    How can I do this?

    Read the article

  • MySQL query to view vertical data

    - by wenkhairu
    I have MySQL data that looks like this:

        +------+-------+-----+
        | Name | kode  | jum |
        +------+-------+-----+
        | aman | kode1 |   2 |
        | aman | kode2 |   1 |
        | jhon | kode1 |   4 |
        | amir | kode2 |   4 |
        +------+-------+-----+

    How can I make the table look like this one, using a MySQL query?

               kode1   kode2   count
        aman       2       1       3
        jhon       0       4       4
        amir       0       4       4
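
    One way to pivot rows into columns, sketched with conditional aggregation (the source table name `mytable` is an assumption; the question does not give it):

        SELECT
          Name,
          SUM(CASE WHEN kode = 'kode1' THEN jum ELSE 0 END) AS kode1,
          SUM(CASE WHEN kode = 'kode2' THEN jum ELSE 0 END) AS kode2,
          SUM(jum) AS `count`
        FROM mytable
        GROUP BY Name;

    Each new kode value needs its own SUM(CASE ...) column, so this only works when the set of kode values is known in advance.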

    Read the article

  • MySQL BinLog Statement Retrieval

    - by Jonathon
    I have seven 1 GB MySQL binlog files that I have to use to retrieve some "lost" information. I only need to get certain INSERT statements from the log (e.g. where the statement starts with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database and using --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to then parse with any other program.

    Is there a way to just retrieve certain SQL statements from the log? I don't need any of the ancillary information (timestamps, autoincrement #s, etc.). I just need a list of SQL statements that match a certain string. Ideally, I would like to have a text file that just lists those SQL statements, such as:

        INSERT INTO table SET field1='a';
        INSERT INTO table SET field1='tommy';
        INSERT INTO table SET field1='2';

    I could get that by running mysqlbinlog to a text file and then parsing the results based upon a string, but the text file is way too big. It just times out any script I run and even makes it impossible to open in a text editor. Thanks for your help in advance.

    Read the article

  • How to join a table in Symfony (Propel) and retrieve objects from both tables with one query

    - by Jean-Philippe
    Hi, I'm trying to find an easy way to fetch data from two joined MySQL tables using Propel (inside Symfony), but in one query. Let's say I do this simple thing:

        $comment = CommentPeer::RetrieveByPk(1);
        print $comment->getArticle()->getTitle(); // Assuming the Article table is joined to the Comment table

    Symfony will run two queries to get that done: the first one to get the Comment row, and the next one to get the Article row linked to the Comment one. Now, I am trying to find a way to do all that within one query. I've tried to join them using

        $c = new Criteria();
        $c->addJoin(CommentPeer::ARTICLE_ID, ArticlePeer::ID);
        $c->add(CommentPeer::ID, 1);
        $comment = CommentPeer::doSelectOne($c);

    But when I try to get the Article object using

        $comment->getArticle()

    it will still issue the query to get the Article row. I could easily clear all the selected columns and select only the columns I need, but that would not give me the Propel objects I'd like, just an array of the query's raw result. So how can I get populated Propel objects of two (or more) joined tables with only one query?

    Thanks, JP

    Read the article

  • MySQL nested set hierarchy with foreign table

    - by Björn
    Hi! I'm using a nested set in a MySQL table to describe a hierarchy of categories, and an additional table describing products.

    Category table:

        id
        name
        left
        right

    Products table:

        id
        categoryId
        name

    How can I retrieve the full path, containing all parent categories, of a product? I.e.:

        RootCategory > SubCategory 1 > SubCategory 2 > ... > SubCategory n > Product

    Say, for example, that I want to list all products from SubCategory 1 and its sub categories, and with each given Product I want the full tree path to that product - is this possible? This is as far as I've got - but the structure is not quite right...

        select parent.`name` as name, parent.`id` as id,
               group_concat(parent.`name` separator '/') as path
        from categories as node,
             categories as parent,
             (select inode.`id` as id, inode.`name` as name
              from categories as inode, categories as iparent
              where inode.`lft` between iparent.`lft` and iparent.`rgt`
                and iparent.`id`=4 /* The category from which to list products */
              order by inode.`lft`) as sub
        where node.`lft` between parent.`lft` and parent.`rgt`
          and node.`id`=sub.`id`
        group by sub.`id`
        order by node.`lft`
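
    For what it's worth, a minimal sketch of one way to get products together with their full category path in a single query, assuming the categories/products table names and the lft/rgt columns used in the query above (the ' > ' separator and the top category id 4 are placeholders):

        SELECT p.id, p.name,
               GROUP_CONCAT(parent.`name` ORDER BY parent.`lft` SEPARATOR ' > ') AS path
        FROM products AS p
        JOIN categories AS node   ON node.id = p.categoryId
        JOIN categories AS parent ON node.`lft` BETWEEN parent.`lft` AND parent.`rgt`
        JOIN categories AS root   ON root.id = 4
        WHERE node.`lft` BETWEEN root.`lft` AND root.`rgt`
        GROUP BY p.id, p.name
        ORDER BY MIN(node.`lft`);

    The parent self-join collects every ancestor of the product's category (including the category itself), and GROUP_CONCAT glues them into the path.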

    Read the article

  • Partitioning MySQL tables that have foreign keys?

    - by Industrial
    Hi! What would be an appropriate way to do this, since MySQL obviously doesn't allow it? Leaving either partitioning or the foreign keys out of the database design does not seem like a good idea to me. I guess there is a workaround for this?

    Update 03/24:

        http://opendba.blogspot.com/2008/10/mysql-partitioned-tables-with-trigger.html
        http://stackoverflow.com/questions/1537219/how-to-handle-foreign-key-while-partitioning

    Thanks!
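
    A hedged sketch of the usual compromise, since MySQL does not support foreign keys on partitioned tables: create the partitioned table without the FOREIGN KEY clause, keep the referencing column indexed, and enforce the relationship in the application or with triggers (the approach the links above describe). All names below are made up:

        -- Partitioned table with the foreign key on customer_id left out;
        -- referential integrity has to be enforced outside the schema
        CREATE TABLE orders (
          id          INT NOT NULL,
          customer_id INT NOT NULL,
          created     DATE NOT NULL,
          PRIMARY KEY (id, created),
          KEY (customer_id)
        ) ENGINE=InnoDB
        PARTITION BY RANGE (YEAR(created)) (
          PARTITION p2009 VALUES LESS THAN (2010),
          PARTITION p2010 VALUES LESS THAN (2011),
          PARTITION pmax  VALUES LESS THAN MAXVALUE
        );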

    Read the article

  • MySQL creates memory leak in Tomcat

    - by mabuzer
    I have set up a JDBCRealm for a web app inside Tomcat, and when I reload it I get this from Tomcat:

        SEVERE: A web application registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.

    I use Tomcat 6.0.24 with MySQL Connector 5.1.10.

    Read the article

  • loops and conditionals inside triggers

    - by Ying
    I have this piece of logic I would like to implement as a trigger, but I have no idea how to do it! I want to create a trigger that, when a row is deleted, checks to see if the value of one of its columns exists in another table, and if it does, also performs a delete on another table based on another column. So say we had a table Foo that has columns Bar and Baz. This is what I'd be doing if I did not use a trigger:

        function deleteFromFooTable(FooId) {
            SELECT (Bar, Baz) FROM FooTable WHERE id=FooId
            if not-empty(SELECT * FROM BazTable WHERE id=BazId)
                DELETE FROM BarTable WHERE id=BarId
            DELETE FROM FooTable WHERE id=FooId
        }

    I jumped some hoops in that pseudo code, but I hope you all get where I'm going. It seems what I would need is a way to do conditionals and to loop (in case of multiple-row deletes?) in the trigger statement. So far, I haven't been able to find anything. Is this not possible, or is this bad practice? Thanks!
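
    A minimal sketch of how this could look as a MySQL trigger, using the hypothetical tables from the pseudocode: IF ... THEN ... END IF is allowed inside a BEGIN ... END trigger body, and FOR EACH ROW already fires once per deleted row, which covers the multi-row delete case without an explicit loop.

        DELIMITER //
        CREATE TRIGGER foo_after_delete
        AFTER DELETE ON FooTable
        FOR EACH ROW
        BEGIN
          -- If the deleted row's Baz value still exists in BazTable,
          -- remove the matching BarTable row as well
          IF EXISTS (SELECT 1 FROM BazTable WHERE id = OLD.Baz) THEN
            DELETE FROM BarTable WHERE id = OLD.Bar;
          END IF;
        END//
        DELIMITER ;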

    Read the article

  • MySQL vs PostgreSQL for Web Applications

    - by cnu
    I am working on a web application using Python (Django) and would like to know whether MySQL or PostgreSQL would be better when deploying for production. In one podcast Joel said that he had some problems with MySQL and that the data wasn't consistent. I would like to know whether anyone has had any such problems. Also, when it comes to performance, which can be more easily tweaked?

    Read the article

  • Glib-Error in MySQL Query Browser

    - by sam
    Hi guys, I got this error when querying a MySQL database:

        Glib-Error **: gmem.c:173: failed to allocate 216000000 bytes, aborting..

    Does anybody have an explanation? I am using MySQL Query Browser. Thanks.
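
    A hedged guess: the message suggests MySQL Query Browser itself ran out of memory trying to hold a roughly 216 MB result set, not that the server failed. If that is the cause, fetching the rows in smaller slices (or narrowing the SELECT) avoids the allocation; the table name below is made up:

        -- Fetch the result in chunks instead of all at once
        SELECT * FROM some_table LIMIT 0, 10000;
        SELECT * FROM some_table LIMIT 10000, 10000;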

    Read the article
