Search Results

Search found 30341 results on 1214 pages for 'mysql gui tools'.


  • mysql query clarification

    - by JPro
    I have a query and I am wondering whether the result I am getting is the one I expect. The table structure goes like this:

    Table: results

        ID  TestCase  Set   Analyzed  Verdict  StartTime            Platform
        1   1010101   ros2  false     fail     18/04/2010 20:23:44  P1
        2   1010101   ros3  false     fail     19/04/2010 22:22:33  P1
        3   1232323   ros2  true      pass     19/04/2010 22:22:33  P1
        4   1232323   ros3  false     fail     29/04/2010 22:22:33  P2

    Table: testcases

        ID  TestCase  type
        1   1010101   NOSNOS
        2   1232323   N212NS

    Is there any way to display only the latest fails on each platform? In the above case the result should be:

        ID  TestCase  Set   Analyzed  Verdict  StartTime            Platform  type
        2   1010101   ros3  false     fail     19/04/2010 22:22:33  P1        NOSNOS
        4   1232323   ros3  false     fail     29/04/2010 22:22:33  P2        N212NS
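
    A minimal sketch of the usual groupwise-maximum approach, reading "latest fail" as the most recent failing row per test case (which matches the expected output above). Table and column names are taken from the question, the query is untested against the original data, and it assumes StartTime is a DATETIME column (dd/mm/yyyy strings would not sort correctly):

        SELECT r.ID, r.TestCase, r.`Set`, r.Analyzed, r.Verdict, r.StartTime, r.Platform, t.type
        FROM results r
        JOIN testcases t ON t.TestCase = r.TestCase
        WHERE r.Verdict = 'fail'
          AND r.StartTime = (
                -- keep only the most recent fail for this test case
                SELECT MAX(r2.StartTime)
                FROM results r2
                WHERE r2.TestCase = r.TestCase AND r2.Verdict = 'fail');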

    Read the article

  • Deferring frequent updates in MySQL

    - by cdecker
    I have frequent updates to a user table that simply sets the last seen time of a user, and I was wondering whether there is a simple way to defer them and group them into a single query after a short timeout (5 minutes or so). This would reduce queries on my user database quite a lot.
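
    A minimal sketch of one way to group the writes, assuming the application buffers (user id, last-seen) pairs for the timeout period and then flushes them in a single statement; users.id and users.last_seen are hypothetical names:

        -- Flush a batch of buffered "last seen" values in one round trip.
        -- ON DUPLICATE KEY UPDATE requires id to be a PRIMARY or UNIQUE key;
        -- ids that do not exist yet would be inserted, so only buffer known users.
        INSERT INTO users (id, last_seen)
        VALUES
          (101, '2012-01-01 20:10:00'),
          (102, '2012-01-01 20:11:30'),
          (103, '2012-01-01 20:12:45')
        ON DUPLICATE KEY UPDATE last_seen = VALUES(last_seen);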

    Read the article

  • MySQL table organization and optimization (Rails)

    - by aguynamedloren
    I've been learning Ruby on Rails over the past few months with no prior programming experience. Lately, I've been thinking about database optimization and table organization. I know there are great books on the subject, but I typically learn by example / as I go. Here's a hypothetical situation: let's say I am building a social network for a niche community with 250,000 members (users). The users have the ability to attend events. Let's say there are 50,000 past/present/future events. Much like Facebook events, a user can attend any number of events and an event can have any number of attendees.

    In the database, there would be a table for users and a table for events. Somehow I would have to create an association between the users and events. I could create an "events" column in the users table such that each user row would contain a hash of event IDs, or I could create an "attendees" column in the events table such that each event row would contain a hash of user IDs. Neither of these solutions seems ideal, however. On a user's profile page, I want to display the list of events they are associated with, which would require scanning the 50,000 event rows for the user ID of said user if I include an "attendees" column in the events table. Likewise, on an event page, I want to display a list of attendees for the event, which would require scanning the 250,000 user rows for the event ID of said event if I include an "events" column in the users table. Option 3 would be to create a third table that contains the attendee information for each and every event - but I don't see how this would solve any problems.

    Are these non-issues? Rails makes accessing all of this information easy, but I guess I'm worried about scale. It is entirely possible that I am under-estimating the speed and processing power of modern databases / servers / etc. How long would it take to scan 250,000 user rows for specific event IDs - 10ms? 100ms? 1,000ms? I guess that's not that bad. Am I just over-thinking this?
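
    The third table mentioned above is the standard join (junction) table, and it is exactly what avoids scanning either big table: both lookups become indexed lookups through a pair of foreign keys. A minimal sketch with hypothetical names (in Rails this maps to a has_many :through or has_and_belongs_to_many association); the foreign keys assume InnoDB:

        CREATE TABLE attendances (
          user_id  INT NOT NULL,
          event_id INT NOT NULL,
          PRIMARY KEY (user_id, event_id),   -- one row per (user, event); serves "events for a user"
          KEY idx_event (event_id),          -- serves "attendees of an event"
          FOREIGN KEY (user_id)  REFERENCES users (id),
          FOREIGN KEY (event_id) REFERENCES events (id)
        ) ENGINE=InnoDB;

        -- Events a given user attends:
        SELECT e.* FROM events e JOIN attendances a ON a.event_id = e.id WHERE a.user_id = 42;

        -- Attendees of a given event:
        SELECT u.* FROM users u JOIN attendances a ON a.user_id = u.id WHERE a.event_id = 7;

    Neither query scans the users or events table; both walk an index on the junction table and then fetch the matching rows by primary key.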

    Read the article

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:

    pagegroup (pagegroupid, name)
        has 3600 rows

    page (pageid, pagegroupid, data)
        references pagegroup; has 10000 rows; can have anything between 1-700 rows per pagegroup; the data column is of type mediumtext and contains 100k - 200k bytes of data per row

    userdata (userdataid, pageid, column1, column2 ... column9)
        references page; has about 300,000 rows; can have about 1-50 rows per page

    The above structure is pretty straightforward; the problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all columns that should be indexed. The time needed to run such a join (userdata INNER JOIN page INNER JOIN pagegroup) exceeds 3 minutes. This is terribly slow considering the fact that I am not selecting the data column at all. Example of the query that takes too long:

        SELECT userdata.column1, pagegroup.name
        FROM userdata
        INNER JOIN page USING( pageid )
        INNER JOIN pagegroup USING( pagegroupid )

    Please help by explaining why it takes so long and what I can do to make it faster.

    Edit #1 - EXPLAIN returns the following:

        id  select_type  table      type    possible_keys        key      key_len  ref                         rows    Extra
        1   SIMPLE       userdata   ALL     pageid                                                             372420
        1   SIMPLE       page       eq_ref  PRIMARY,pagegroupid  PRIMARY  4        topsecret.userdata.pageid   1
        1   SIMPLE       pagegroup  eq_ref  PRIMARY              PRIMARY  4        topsecret.page.pagegroupid  1

    Edit #2

        SELECT u.field2, p.pageid
        FROM userdata u
        INNER JOIN page p ON u.pageid = p.pageid;
        /* 0.07 sec execution, 6.05 sec fetch */

        id  select_type  table  type    possible_keys  key      key_len  ref                 rows    Extra
        1   SIMPLE       u      ALL     pageid                                               372420
        1   SIMPLE       p      eq_ref  PRIMARY        PRIMARY  4        topsecret.u.pageid  1       Using index

        SELECT p.pageid, g.pagegroupid
        FROM page p
        INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid;
        /* 9.37 sec execution, 60.0 sec fetch */

        id  select_type  table  type   possible_keys  key          key_len  ref                      rows  Extra
        1   SIMPLE       g      index  PRIMARY        PRIMARY      4                                 3646  Using index
        1   SIMPLE       p      ref    pagegroupid    pagegroupid  5        topsecret.g.pagegroupid  3     Using where

    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.
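
    A sketch of the split that the closing "moral" describes, assuming it is acceptable to move the wide column into its own table (page_content is a hypothetical name). The narrow page table then stays small on disk, so the page-to-pagegroup join no longer drags 100k - 200k bytes per row through the engine:

        -- New home for the wide column, keyed the same way as page.
        CREATE TABLE page_content (
          pageid INT NOT NULL PRIMARY KEY,
          data   MEDIUMTEXT
        ) ENGINE=MyISAM;

        -- One-time copy, then drop the wide column from the hot table.
        INSERT INTO page_content (pageid, data) SELECT pageid, data FROM page;
        ALTER TABLE page DROP COLUMN data;

        -- Fetch the content only when a single page is actually displayed:
        SELECT data FROM page_content WHERE pageid = 123;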

    Read the article

  • Regex in MySql - replace between Tags including Tags

    - by user998163
    I have a text like: [lang_de]Content in German[/lang_de][lang_en]Content in English[/lang_en]. I want to replace all text between [lang_en] and [/lang_en], including the opening and closing language tag. This is my current code:

        UPDATE wp_posts SET post_content = '" . addslashes(preg_replace("/[lang_en](.*)[\/lang_en]/", '', $row['post_content'])) . "' WHERE ID = " . $row['ID']

    With this regex, I do not get valid results. It also replaces text inside HTML tags and so on. What would be the correct regex?
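
    The posted pattern treats [lang_en] as a character class rather than a literal tag, which is why it matches far too much. If the replacement can stay inside the database, MySQL 8.0 and later ship REGEXP_REPLACE, which avoids round-tripping every row through PHP. A hedged sketch (table and column names come from the question, the pattern is untested, and by default `.` does not cross line breaks):

        UPDATE wp_posts
        SET post_content = REGEXP_REPLACE(post_content,
                                          '\\[lang_en\\].*?\\[/lang_en\\]',
                                          '')
        WHERE post_content LIKE '%[lang_en]%';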

    Read the article

  • using joins or multiple queries in php/mysql

    - by askkirati
    Here I need help with joins. I have two tables, say articles and users. While displaying articles I also need to display user info like the username, etc. So would it be better to just use a join between the articles and users tables to fetch the user info while displaying articles, like below?

        SELECT a.*, u.username, u.id FROM articles a JOIN users u ON u.id = a.user_id

    Or I can do this in PHP: first I get the articles with the SQL below,

        SELECT * FROM articles

    then after I get the articles array I loop through it and get the user info inside each loop, like below.

        SELECT username, id FROM users WHERE id='".$articles->user_id."';

    Which is better? Can I have an explanation of why, too? Thank you for any reply or views.

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but if sorted by ASC it's extremely slow. This takes about 150 milliseconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC LIMIT 0, 50;

    The EXPLAIN is the same for both queries.

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to "USE INDEX (published)". If I take that out it's the same performance both ways. But the EXPLAIN shows the query is less efficient overall.

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table.

        CREATE TABLE `posts` (
          `id` int(20) NOT NULL AUTO_INCREMENT,
          `feed_id` int(11) NOT NULL,
          `post_url` varchar(255) NOT NULL,
          `title` varchar(255) NOT NULL,
          `content` blob,
          `author` varchar(255) DEFAULT NULL,
          `published` int(12) DEFAULT NULL,
          `updated` datetime NOT NULL,
          `created` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `post_url` (`post_url`,`feed_id`),
          KEY `feed_id` (`feed_id`),
          KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!
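
    A commonly suggested fix for this query shape is a composite index covering both the filter and the sort column, so the USE INDEX hint can be dropped and the optimizer only has to touch rows from the listed feeds. A sketch, untested against the poster's data (the index name is hypothetical; a filesort over the matched rows may still appear, but it is over a few hundred rows rather than a walk of the whole published index):

        ALTER TABLE posts ADD INDEX feed_published (feed_id, published);

        SELECT posts.id
        FROM posts
        WHERE posts.feed_id IN (4953,622,1,1852,4952,76,623,624,10)
        ORDER BY posts.published ASC
        LIMIT 0, 50;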

    Read the article

  • Search Multiple Tables of a Mysql Database

    - by DogPooOnYourShoe
    I have the following code:

        $query = "select * from customer where Surname like \"%$trimmed%\" OR TitleName like \"%$trimmed%\" OR PostCode like \"%$trimmed%\" order by Surname";

    However, I have another table which I want to search with the same parameters (variables) as that. I know that something like "select * from customer, othertable" might not be possible. Is there a way to do it?
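
    If the two tables are searched independently and the results just need to come back as one list, a UNION is the usual answer. A sketch with a hypothetical second table and column, and a literal search term standing in for the PHP variable; both SELECTs in a UNION must return the same number of columns:

        SELECT Surname AS matched_name, 'customer' AS source
        FROM customer
        WHERE Surname LIKE '%term%' OR TitleName LIKE '%term%' OR PostCode LIKE '%term%'
        UNION ALL
        SELECT CompanyName AS matched_name, 'othertable' AS source
        FROM othertable
        WHERE CompanyName LIKE '%term%'
        ORDER BY matched_name;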

    Read the article

  • MYSQL query to return rows that are NOT in a set

    - by iglurat
    Hi, I have two tables: Contact (id, name) and Link (id, contact_id, source_id). I have the following query, which works and returns the contacts with a source_id of 8 in the Link table:

        SELECT name FROM `Contact` LEFT JOIN Link ON Link.contact_id = Contact.id WHERE Link.source_id=8;

    However, I am a little stumped on how to return a list of all the contacts which are NOT associated with a source_id of 8. A simple != will not work, as contacts without any links are not returned. Thanks.
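
    One way that keeps contacts with no links at all is to push the source_id test into the join condition and keep only the rows where nothing matched; an equivalent NOT EXISTS form is shown as well. A sketch using the table names from the question:

        -- Contacts with no link to source 8 (contacts with no links at all are included):
        SELECT c.name
        FROM Contact c
        LEFT JOIN Link l ON l.contact_id = c.id AND l.source_id = 8
        WHERE l.id IS NULL;

        -- Equivalent with NOT EXISTS:
        SELECT c.name
        FROM Contact c
        WHERE NOT EXISTS (SELECT 1 FROM Link l
                          WHERE l.contact_id = c.id AND l.source_id = 8);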

    Read the article

  • Exporting to CSV from MySQL via PHP in FireFox

    - by typoknig
    Hi all, I am pulling some info from a database with the following code:

        <input type="button" value="Export to Excel" onClick="window.navigate('breakfast_service.php?action=export')">

    Here is the code for that action.

        <?php
        if ($_GET['action'] == 'export') {
            // Get the registration data
            $user = 'root';
            $pass = 'billiards';
            $server = 'localhost';
            $link = mysql_connect($server, $user, $pass);
            if (!$link) {
                die('Could not connect to database!' . mysql_error());
            }
            mysql_select_db('breakfast', $link);
            $query = "SELECT * FROM registration";
            $result = mysql_query($query);
            mysql_close($link);

            // format into CSV
            $contents = "id, school_id, first_name, last_name, email, attending, created_on\n";
            $num = mysql_num_rows($result);
            for ($i = 0; $i < $num; $i++) {
                $row = mysql_fetch_array($result);
                $id = $row['id'];
                $school_id = $row['school_id'];
                $fname = $row['first_name'];
                $lname = $row['last_name'];
                $email = $row['email'];
                $attending = ($row['attending'] == 0) ? 'No' : 'Yes';
                $date = $row['created_on'];
                $contents = $contents . "$id, $school_id, $fname, $lname, $email, $attending, $date\n";
            }

            // return as excel file
            $filename = "export.csv";
            header('Content-type: application/ms-excel');
            header('Content-Disposition: attachment; filename='.$filename);
            echo $contents;
        }
        ?>

    This combination of code works fine in IE, but fails to create/download a file in Firefox or Chrome. Why?

    Read the article

  • mySQL - How to select a date interval

    - by fabriciols
    Hello, this is my table:

        -------------------------------------
        | user | item |     date_time       |
        |  10  |  01  | 10-10-10 20:10:05   |
        |  10  |  02  | 10-10-10 20:10:10   |
        |  10  |  03  | 10-10-10 20:10:10   |
        |  20  |  02  | 10-10-10 20:15:10   |
        |  20  |  02  | 10-10-10 20:20:10   |
        |  30  |  10  | 10-10-10 20:01:10   |
        |  30  |  20  | 10-10-10 20:01:20   |
        |  30  |  30  | 10-10-10 20:05:20   |
        -------------------------------------

    I want to write a query that returns the users that took multiple items within a 1-minute interval, like this result:

        -------------------------------------
        | user | item |     date_time       |
        |  10  |  01  | 10-10-10 20:10:05   |
        |  10  |  02  | 10-10-10 20:10:10   |
        |  10  |  03  | 10-10-10 20:10:10   |
        |  30  |  10  | 10-10-10 20:01:10   |
        |  30  |  20  | 10-10-10 20:01:20   |
        -------------------------------------

    How do I do this?
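
    A sketch of one self-join approach: keep a row when the same user has another row for a different item within 60 seconds of it. The table name takes is hypothetical (the question does not give one), and the sketch assumes date_time is a DATETIME column:

        SELECT DISTINCT t1.user, t1.item, t1.date_time
        FROM takes t1
        JOIN takes t2
          ON  t2.user = t1.user
          AND t2.item <> t1.item
          AND ABS(TIMESTAMPDIFF(SECOND, t1.date_time, t2.date_time)) <= 60
        ORDER BY t1.user, t1.date_time;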

    Read the article

  • MySQL - Fulltext Index Search Issue

    - by RC
    Hi all, two rows in my database have the following data:

        brand        | product                      | style
        ====================================================
        Doc Martens  | Doc Martens 1460 Boots       | NULL
        NewBalance   | New Balance WR1062 SG Width  | NULL

    Minimum word length is set to 3, and a FULLTEXT index is created across all three columns above. When I run an IN BOOLEAN MODE search for +doc against the index, I get the first row returned as a result. When I search for +new, I get no results. Can someone explain why? Thanks.

    Read the article

  • Mysql Replication

    - by ychian
    My current database design mainly uses MyISAM as the storage engine. I wonder if it's possible to split the tables so that some use MyISAM and some use InnoDB within the same database. The reason for switching some of the tables to InnoDB is that I need the row-level locking which InnoDB offers. I am not too sure whether this would have any effect on replication?

    Read the article

  • MySQL and Collation

    - by user294787
    I have a table with a column using the utf8_unicode_ci collation. This table stores Japanese data, and my problem is that with this collation I'm not able to store the same word written in both katakana and hiragana, because it's considered to be the same word. For example わたし and ワタシ, which mean "I, me". I know that I can switch to utf8_general_ci to resolve this problem, but is it possible to bypass this limitation? I mean, keep the utf8_unicode_ci collation and still allow those two words to be inserted? Is it possible to make this work using CONVERT or CAST operators? Thanks.
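
    Assuming the collision comes from a UNIQUE key: a unique index always compares values with the column's own collation, so CONVERT/CAST in the INSERT cannot bypass it; the usual route is to change that one column (or comparison) to a collation that distinguishes the kana, such as utf8_general_ci or utf8_bin. A hedged sketch with hypothetical table/column names; the second statement shows a per-query COLLATE override, which works for comparisons even if the column keeps its current collation:

        -- Make the column itself distinguish hiragana from katakana:
        ALTER TABLE words
          MODIFY word VARCHAR(64) CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- For ad-hoc comparisons the collation can also be overridden per query:
        SELECT * FROM words WHERE word = 'わたし' COLLATE utf8_general_ci;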

    Read the article

  • PHP: PDOStatement simple MySQL Select doesn't work.

    - by Alan
    Hi, I have the following PHP code doing a very simple select against a table.

        $statement = $db->prepare("SELECT * FROM account WHERE fbid = :fbid");
        $statement->bindParam(":fbid", $uid, PDO::PARAM_STR, 45);
        $out = $statement->execute();
        $row = $statement->fetch();

    $out is true (success) yet $row is null. If I modify the code as follows:

        $statement = $db->prepare("SELECT * FROM account WHERE fbid = $uid");
        $out = $statement->execute();
        $row = $statement->fetch();

    $row contains the record I'm expecting. I'm at a loss. I'm using PDO::prepare(), bindParam() etc. to protect against SQL injection (maybe I'm mistaken on that). Please help.

    Read the article

  • Store GZIP:ed text in mysql?

    - by Industrial
    Hi! Is it common for bigger applications and databases to GZIP text data before inserting it into the database? I guess that any full-text search on the actual text field will not work without unzipping it again? Thanks
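
    For what it's worth, MySQL has built-in COMPRESS()/UNCOMPRESS() functions (zlib-based rather than gzip files, and the target column must be a BLOB type), and the guess above is right: the compressed bytes are opaque to FULLTEXT indexes, so anything that needs searching has to stay uncompressed or be stored twice. A small sketch with hypothetical table and column names:

        -- body_gz should be a BLOB/MEDIUMBLOB column.
        INSERT INTO documents (id, body_gz) VALUES (1, COMPRESS('some long article text ...'));

        -- Reading it back:
        SELECT id, UNCOMPRESS(body_gz) AS body FROM documents WHERE id = 1;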

    Read the article

  • MySQL Volleyball Standings

    - by Torez
    I have a database table full of game by game results and want to know if I can calculate the following: GP (games played), Wins, Loses, Points (2 points for each win, 1 point for each loss). Here is my table structure:

        CREATE TABLE `results` (
          `id` int(10) unsigned NOT NULL auto_increment,
          `home_team_id` int(10) unsigned NOT NULL,
          `home_score` int(3) unsigned NOT NULL,
          `visit_team_id` int(10) unsigned NOT NULL,
          `visit_score` int(3) unsigned NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=7 ;

    And a few testing results:

        INSERT INTO `results` VALUES(1, 1, 21, 2, 25);
        INSERT INTO `results` VALUES(2, 3, 21, 4, 17);
        INSERT INTO `results` VALUES(3, 1, 25, 3, 9);
        INSERT INTO `results` VALUES(4, 2, 7, 4, 22);
        INSERT INTO `results` VALUES(5, 1, 19, 4, 20);
        INSERT INTO `results` VALUES(6, 2, 24, 3, 26);

    Here is what a final table would look like:

        +-------------------+----+------+-------+--------+
        | Team Name         | GP | Wins | Loses | Points |
        +-------------------+----+------+-------+--------+
        | Spikers           |  4 |    4 |     0 |      8 |
        | Leapers           |  4 |    2 |     2 |      6 |
        | Ground Control    |  4 |    1 |     3 |      5 |
        | Touch Guys        |  4 |    0 |     4 |      4 |
        +-------------------+----+------+-------+--------+
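
    A sketch of one way to do it: unpivot each game into two (team, won) rows with UNION ALL, then aggregate. Team names are assumed to live in a hypothetical teams(id, name) table, since the question only shows the results table:

        SELECT t.name                                     AS team_name,
               COUNT(*)                                   AS gp,
               SUM(g.won)                                 AS wins,
               SUM(1 - g.won)                             AS loses,
               SUM(CASE WHEN g.won = 1 THEN 2 ELSE 1 END) AS points
        FROM (
              SELECT home_team_id  AS team_id,
                     home_score  > visit_score AS won
              FROM results
              UNION ALL
              SELECT visit_team_id AS team_id,
                     visit_score > home_score  AS won
              FROM results
             ) AS g
        JOIN teams t ON t.id = g.team_id
        GROUP BY t.name
        ORDER BY points DESC;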

    Read the article

  • Many-to-Many Relationships in MySQL

    - by Kaji
    I've been reading up on foreign keys and joins recently, and have been pleasantly surprised that many of the basic concepts are things I'm already putting into practice. For example, with one project I'm currently working on, I'm organizing word lists, and have a table for the sets, like so:

        `words` Table
            `word_id`
            `headword`
            `category_id`

        `categories` Table
            `category_id`
            `category_name`

    Now, generally speaking this would be a one-to-many relationship, with several words being placed under a single category with the foreign key category_id. Let's assume for a moment, however, that a user chooses to add another category to a word, making it many-to-many. Is there a way to set up my words table to handle additional categories for words without creating extra columns like category_2, category_3, etc.?
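
    The usual answer is a third (join) table, so a word can carry any number of categories and category_id can be dropped from words. A minimal sketch in the spirit of the tables above; the join-table name is hypothetical:

        CREATE TABLE word_categories (
          word_id     INT NOT NULL,
          category_id INT NOT NULL,
          PRIMARY KEY (word_id, category_id),   -- one row per word/category pair
          KEY idx_category (category_id)        -- fast "all words in a category"
        );

        -- All categories for a word:
        SELECT c.category_name
        FROM categories c
        JOIN word_categories wc ON wc.category_id = c.category_id
        WHERE wc.word_id = 1;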

    Read the article

  • Optimizing MySql query to avoid using "Using filesort"

    - by usef_ksa
    I need your help to optimize the query to avoid "Using filesort". The job of the query is to select all the articles that belong to a specific tag. The query is:

        select title from tag, article
        where tag='Riyad' AND tag.article_id=article.id
        order by tag.article_id

    The table structures are the following:

    Tag table

        CREATE TABLE `tag` (
          `tag` VARCHAR( 30 ) NOT NULL ,
          `article_id` INT NOT NULL ,
          INDEX ( `tag` )
        ) ENGINE = MYISAM ;

    Article table

        CREATE TABLE `article` (
          `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
          `title` VARCHAR( 60 ) NOT NULL
        ) ENGINE = MYISAM ;

    Sample data

        INSERT INTO `article` VALUES (1, 'About Riyad');
        INSERT INTO `article` VALUES (2, 'About Newyork');
        INSERT INTO `article` VALUES (3, 'About Paris');
        INSERT INTO `article` VALUES (4, 'About London');
        INSERT INTO `tag` VALUES ('Riyad', 1);
        INSERT INTO `tag` VALUES ('Saudia', 1);
        INSERT INTO `tag` VALUES ('Newyork', 2);
        INSERT INTO `tag` VALUES ('USA', 2);
        INSERT INTO `tag` VALUES ('Paris', 3);
        INSERT INTO `tag` VALUES ('France', 3);
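
    The filesort shows up because the single-column index on tag cannot also deliver the rows in article_id order. One commonly suggested change is a composite index on (tag, article_id), so the rows for 'Riyad' come out of the index already sorted; a sketch, untested against the original tables (the old single-column index becomes redundant and can be dropped):

        ALTER TABLE tag ADD INDEX tag_article (tag, article_id);

        SELECT article.title
        FROM tag
        JOIN article ON article.id = tag.article_id
        WHERE tag.tag = 'Riyad'
        ORDER BY tag.article_id;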

    Read the article

  • MySql Query lag time / deadlock?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query? I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId from settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem'";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i)
        {
            // Check if $i has been processed by a different instance of the script,
            // and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;

            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. This makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
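
    This pattern is racy rather than lagging: between one script's getCurrentItem() and its setCurrentItem(), another script can read the same value and claim the same item. One common alternative is to claim items atomically in the items table itself, so the database arbitrates. A hedged sketch, assuming a status value and a worker_id column can be added (worker_id is hypothetical):

        -- Each worker atomically claims a batch; only unclaimed rows match.
        UPDATE items
        SET status = 'processing', worker_id = 3
        WHERE status = 'pending'
        LIMIT 10;

        -- The worker then processes only what it actually claimed:
        SELECT * FROM items WHERE status = 'processing' AND worker_id = 3;

        -- And marks its rows done afterwards:
        UPDATE items SET status = 'done' WHERE worker_id = 3 AND status = 'processing';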

    Read the article

  • mysql error in query for timestamp interval

    - by nik parsa
    I have a Voting_IP table as follows:

        +-----------+-------------+------+-----+---------+----------------+
        | Field     | Type        | Null | Key | Default | Extra          |
        +-----------+-------------+------+-----+---------+----------------+
        | ip_id     | int(11)     | NO   | PRI | NULL    | auto_increment |
        | mes_id_fk | int(11)     | YES  | MUL | NULL    |                |
        | ip_add    | varchar(40) | YES  |     | NULL    |                |
        | timestamp | datetime    | YES  |     | NULL    |                |
        +-----------+-------------+------+-----+---------+----------------+

    I want to run this query, which has errors, and I don't know how to correct it:

        $ip_sql=mysql_query("select ip_add from Voting_IP where mes_id_fk='$id' and ip_add='$ip' and timestamp > (DATE_ADD(now(), INTERVAL -1 HOUR);)");

    I also tried this one:

        $ip_sql=mysql_query("select ip_add from Voting_IP where (mes_id_fk='$id' and ip_add='$ip' and timestamp > (DATE_ADD(now(), INTERVAL -1 HOUR);))");
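
    The immediate problem is the stray ");" inside the SQL string, which ends the statement in the middle of the WHERE clause. A sketch of the corrected SQL with example literal values standing in for $id and $ip; DATE_SUB(NOW(), INTERVAL 1 HOUR) is the usual way to say "within the last hour", and `timestamp` is backtick-quoted since it is also a keyword:

        SELECT ip_add
        FROM Voting_IP
        WHERE mes_id_fk = 123
          AND ip_add = '10.0.0.1'
          AND `timestamp` > DATE_SUB(NOW(), INTERVAL 1 HOUR);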

    Read the article

  • MySql: Select all entries with count...

    - by Scarface
    Hey guys, quick question. I have a query where I want to count all the entries it finds and also select them all, so that when I use while($row=mysql_fetch_assoc($query)){ it will list all entries. The problem I am encountering is that, while all the entries are successfully counted and the right number is listed, only the latest entry is selected and listed when I echo $row['title']. If I delete , COUNT(*) as total then it selects all, but I was wondering if it is possible to use COUNT and SELECT * together. Does anyone know what I am doing wrong?

        SELECT *, COUNT(*) as total FROM new_messages WHERE username='$session->username
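
    COUNT(*) with no GROUP BY collapses the result to a single aggregated row, which is why only one title comes back. One way to get every row plus the total is to fetch the count in a subquery on each row; a sketch with an example username standing in for the PHP variable:

        SELECT m.*,
               (SELECT COUNT(*)
                FROM new_messages
                WHERE username = 'alice') AS total
        FROM new_messages m
        WHERE m.username = 'alice';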

    Read the article

  • UPDATE REGEX MYSQL

    - by Simon
    I have a table of contacts and a table of postcode data. I need to match the first part of the postcode and then join that with the postcode table... and then perform an update... I want to do something like this:

        UPDATE `contacts`
        LEFT JOIN `postcodes`
          ON PREG_GREP("/^[A-Z]{1,2}[0-9][0-9A-Z]{0,1}/", `contacts`.`postcode`) = `postcodes`.`postcode`
        SET `contacts`.`lat` = `postcode`.`lat`,
            `contacts`.`lng` = `postcode`.`lng`

    Is it possible? Or do I need to use an external script? Many thanks.
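
    PREG_GREP is a PHP function, so MySQL will not accept it, but the extraction can be done in SQL. On MySQL 8.0+ REGEXP_SUBSTR takes essentially the same pattern; on older versions, SUBSTRING_INDEX(postcode, ' ', 1) works if the outward code is always followed by a space. A hedged sketch of the 8.0 variant, untested against the poster's tables:

        UPDATE `contacts`
        JOIN `postcodes`
          ON `postcodes`.`postcode` = REGEXP_SUBSTR(`contacts`.`postcode`, '^[A-Z]{1,2}[0-9][0-9A-Z]?')
        SET `contacts`.`lat` = `postcodes`.`lat`,
            `contacts`.`lng` = `postcodes`.`lng`;

    An inner JOIN is used here rather than LEFT JOIN so that contacts whose postcode finds no match keep their existing lat/lng instead of being set to NULL.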

    Read the article

  • Setting up MySQL database

    - by mathew
    I have a single database and about 11 tables. While my web page is loading, information from these 11 tables is accessed at the same time. With my current setup, the database connection is opened and closed for each table: I pass the username and password to open the database for each table and close the connection after retrieving the information from that table. Is this the right way to do it? I feel that because of this the database is being opened and closed 11 times! Am I right? Is this the right way to do it? Thanks, Mathew

    Read the article

  • mysql complex key or + auto increment key (guid)

    - by darko
    Hi, I have a not very big db. I am using auto-increment primary keys and in my case there is no problem with that; a GUID is not necessary. I have a table containing these fields: from_destination, to_destination, shipper, quantity, where fields 1, 2, 3 need to be unique. I also have a second table that, for the fields 1, 2, 3, stores bought quantities per day (one to many): from_destination, to_destination, shipper, date, reserved_quantity.

    Case 1: is it better to make fields 1, 2, 3 a composite primary key in the first table and have the same fields in the second table be the foreign key?

        First table
            from_destination | to_destination | shipper   - composite primary key
            quantity

        Second table
            second_id - auto-increment primary key
            from_destination | to_destination | shipper   - foreign key
            date
            reserved_quantity

    Case 2: or just add an auto-increment field to the first table and make fields 1, 2, 3 unique? In the second table there would be one integer foreign key pointing to the first table, and one auto-increment key for the table.

        First table
            first_id - auto-increment primary key
            from_destination | to_destination | shipper   - unique
            quantity

        Second table
            second_id - auto-increment primary key
            first_id - foreign key
            date
            reserved_quantity

    If so, why do we need composite keys, when we can have one auto-increment (or GUID) field and make all the other fields that are candidates for the composite key unique? Regards
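
    A sketch of the second option, which is the more common layout: a surrogate key for joins plus a UNIQUE constraint that still enforces the three-column business rule. Table names (routes, reservations) and column types are hypothetical, and InnoDB is assumed for the foreign key:

        CREATE TABLE routes (
          route_id         INT AUTO_INCREMENT PRIMARY KEY,
          from_destination VARCHAR(50) NOT NULL,
          to_destination   VARCHAR(50) NOT NULL,
          shipper          VARCHAR(50) NOT NULL,
          quantity         INT NOT NULL,
          UNIQUE KEY uq_route (from_destination, to_destination, shipper)
        ) ENGINE=InnoDB;

        CREATE TABLE reservations (
          reservation_id    INT AUTO_INCREMENT PRIMARY KEY,
          route_id          INT NOT NULL,
          `date`            DATE NOT NULL,
          reserved_quantity INT NOT NULL,
          FOREIGN KEY (route_id) REFERENCES routes (route_id)
        ) ENGINE=InnoDB;

    The composite-key variant saves a join when the second table is queried by the three columns directly, at the cost of repeating three wider columns in every child row; the surrogate-key variant keeps the child rows narrow and the foreign key simple.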

    Read the article
