Search Results

Search found 20931 results on 838 pages for 'mysql insert'.

Page 179/838 | < Previous Page | 175 176 177 178 179 180 181 182 183 184 185 186  | Next Page >

  • Problem importing mysql triggers generated from mysqldump

    - by OM The Eternity
    I am using phpMyAdmin to run mysqldump, but I need to create a new database that is a clone of an existing one. The dump of the main DB contains all the trigger definitions with the original database name mentioned in them. When I import this dump into the new database, the triggers are imported as well, but their trigger_schema is not changed to match the new DB. What can be done to resolve this problem?
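
    One possible way to inspect and fix the situation, sketched here with placeholder names (old_db, new_db, audit_insert, and the trigger body are illustrative, not taken from the post): list the triggers that still live under the old schema, then recreate them under the new one.

        -- which triggers belong to which schema?
        SELECT TRIGGER_NAME, TRIGGER_SCHEMA, EVENT_OBJECT_TABLE
        FROM information_schema.TRIGGERS
        WHERE TRIGGER_SCHEMA IN ('old_db', 'new_db');

        -- recreate a trigger under the new schema (body is purely illustrative)
        USE new_db;
        CREATE TRIGGER audit_insert AFTER INSERT ON some_table
        FOR EACH ROW INSERT INTO audit_log (row_id) VALUES (NEW.id);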

    Read the article

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table with 100,000 rows, and it will soon double. The database is currently 5 GB, and most of that goes to one particular column: a text column holding the text of PDF files. We expect the database to reach 20-30 GB, maybe 50 GB, within a couple of months, and the system will be used heavily. I have a couple of questions about this setup.

    1) We use InnoDB for every table, including the users table. Is it better to use MyISAM for the table where we store the text version of the PDF files (from a memory usage / performance perspective)?

    2) We use Sphinx for searching, but the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. Those 10 rows may take up 50 MB of memory, which is quite a lot. So I am planning to split the PDF files into chunks of 5 pages in the database: the 100,000 rows would become around 3-4 million rows, and a couple of months later, instead of 300,000-350,000 rows, we would have about 10 million rows storing the text version of these PDF files. However, we would retrieve far fewer pages, so instead of retrieving 400 pages to send to Sphinx for highlighting, we could retrieve 5, which should have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; if we retrieve PDF files with fewer than 5 pages, the execution time drops to 0.06 seconds and uses less memory.

    Do you think this is a good trade-off? We would have millions of rows instead of 100k-200k, but it would save memory and improve performance. Is this a good approach to the problem, and do you have any other ideas for overcoming it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,
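
    A minimal sketch of the chunked layout being considered, with made-up table and column names (the post does not give a schema):

        CREATE TABLE pdf_chunk (
          id INT UNSIGNED NOT NULL AUTO_INCREMENT,
          document_id INT UNSIGNED NOT NULL,
          chunk_no SMALLINT UNSIGNED NOT NULL,  -- each chunk covers ~5 pages
          body MEDIUMTEXT NOT NULL,             -- text used only for indexing and highlighting
          PRIMARY KEY (id),
          UNIQUE KEY uq_doc_chunk (document_id, chunk_no)
        ) ENGINE=InnoDB;

    Highlighting then only needs the handful of chunks that actually matched, instead of the full document text.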

    Read the article

  • Use of HAVING in MySQL

    - by KBrian
    I have a table from which I need to select all persons whose first name is not unique, and that set should be selected only if, among the persons sharing that first name, all have different last names. Example:

        FirstN  LastN
        Bill    Clinton
        Bill    Cosby
        Bill    Maher
        Elvis   Presley
        Elvis   Presley
        Largo   Winch

    I want to obtain

        FirstN  LastN
        Bill    Clinton

    or

        FirstN  LastN
        Bill    Clinton
        Bill    Cosby
        Bill    Maher

    I tried this, but it does not return what I want:

        SELECT * FROM Ids
        GROUP BY FirstN, LastN
        HAVING (COUNT(FirstN) > 1 AND COUNT(LastN) = 1)

    [Edited my post after Aleandre P. Lavasseur remark]
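
    One way to express the requirement, sketched against the Ids table above: find the first names that appear more than once with all-distinct last names, then join back to fetch the full rows.

        SELECT i.FirstN, i.LastN
        FROM Ids i
        JOIN (
            SELECT FirstN
            FROM Ids
            GROUP BY FirstN
            HAVING COUNT(*) > 1                      -- first name is not unique
               AND COUNT(DISTINCT LastN) = COUNT(*)  -- and every last name differs
        ) dup ON dup.FirstN = i.FirstN;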

    Read the article

  • What is an index in MySQL?

    - by Eric
    http://i.imgur.com/JdsUK.jpg I created a table like the picture above. What are the "Indexes"? Primary key? Unique? It works fine without setting any indexes. What do they do, and why do I need them? Also, I set all string fields to TEXT because I didn't know how many characters I would need. Is this a good idea? I don't see any difference. Thanks!
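
    For illustration, a small table showing the three kinds of index the question mentions (the table and columns here are made up, not from the screenshot):

        CREATE TABLE member (
          id INT UNSIGNED NOT NULL AUTO_INCREMENT,
          email VARCHAR(255) NOT NULL,   -- a bounded VARCHAR, unlike TEXT, can be indexed without a prefix length
          city VARCHAR(100) NOT NULL,
          PRIMARY KEY (id),              -- identifies each row; one per table
          UNIQUE KEY uq_email (email),   -- no two rows may share an email
          KEY idx_city (city)            -- ordinary index: speeds up WHERE city = '...'
        ) ENGINE=InnoDB;

    Indexes do not change query results; they mainly change how fast lookups on the indexed columns run once the table grows.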

    Read the article

  • MySql too many connections

    - by MichaelMcCabe
    I hate to bring up a question that is widely asked on the web, but I can't seem to solve it. I started a project a while back, and after a month of testing I hit a "Too many connections" error. I looked into it and "solved" it by increasing max_connections. That worked for a while, but as more and more people started to use the site, it hit again. When I am the only user on the site and I run "SHOW PROCESSLIST", it comes up with about 50 connections that are still open (saying "Sleep" in the Command column). I don't know enough to speculate why these stay open, but in my code I triple-checked that every connection I open, I close. For example:

        public int getSiteIdFromName(String name, String company) throws DataAccessException, java.sql.SQLException {
            Connection conn = this.getSession().connection();
            Statement smt = conn.createStatement();
            ResultSet rs = null;
            String query = "SELECT id FROM site WHERE name='" + name + "' and company_id='" + company + "'";
            rs = smt.executeQuery(query);
            rs.next();
            int id = rs.getInt("id");
            rs.close();
            smt.close();
            conn.close();
            return id;
        }

    Every time I do something else on the site, another batch of connections is opened and not closed. Is there something wrong with my code? And if not, what could be the problem?
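
    A few server-side statements that may help narrow down who is holding the sleeping connections (the wait_timeout value shown is only an example, not a recommendation):

        SHOW PROCESSLIST;                          -- which hosts/users own the Sleep connections
        SHOW VARIABLES LIKE 'max_connections';
        SHOW VARIABLES LIKE 'wait_timeout';
        SHOW STATUS LIKE 'Threads_connected';
        -- as a stopgap, idle connections can be reaped sooner:
        SET GLOBAL wait_timeout = 300;             -- seconds; purely an example value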

    Read the article

  • MySql left join on several regs

    - by egidiocs
    Hi there! I have this table1:

        idproduct (PK) | date_to_go
        1              | 2010-01-18
        2              | 2010-02-01
        3              | 2010-02-21
        4              | 2010-02-03

    and this other table2 that tracks date_to_go updates:

        id | idproduct (FK) | prev_date_to_go | date_to_go | update_date
        1  | 1              | 2010-01-01      | 2010-01-05 | 2009-12-01
        2  | 1              | 2010-01-05      | 2010-01-10 | 2009-12-20
        3  | 1              | 2010-01-10      | 2010-01-18 | 2009-12-20
        4  | 3              | 2010-01-20      | 2010-02-03 | 2010-01-05

    So, in this example, for table1.idproduct #1, 2010-01-18 is the current date_to_go and 2010-01-01 (table2.prev_date_to_go, first record) is the original date_to_go. Using this query:

        select v.idproduct, v.date_to_go, p.prev_date_to_go original_date_to_go
        from table1 v
        left join produto_datas p on p.idproduto = v.idproduto
        group by (v.idproduto)
        order by v.idproduto

    can I assume that original_date_to_go will be the first related record of table2?

        idproduct | date_to_go | original_date_to_go
        1         | 2010-01-18 | 2010-01-01
        2         | 2010-02-01 | NULL
        3         | 2010-02-21 | 2010-01-20
        4         | 2010-02-03 | NULL
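
    In general, a non-aggregated column under GROUP BY is not guaranteed to come from any particular joined row. A sketch that pins the earliest table2 record explicitly, using the table and column names described above rather than the ones in the posted query:

        SELECT v.idproduct,
               v.date_to_go,
               p.prev_date_to_go AS original_date_to_go
        FROM table1 v
        LEFT JOIN table2 p
               ON p.idproduct = v.idproduct
              AND p.id = (SELECT MIN(p2.id)
                          FROM table2 p2
                          WHERE p2.idproduct = v.idproduct)
        ORDER BY v.idproduct;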

    Read the article

  • Get list of duplicate rows in MySql

    - by user347033
    Hi, I have a table like this:

        ID  nachname  vorname
        1   john      doe
        2   john      doe
        3   jim       doe
        4   Michael   Knight

    I need a query that will return all the fields (SELECT *) from the records that have the same nachname and vorname (in this case, records 1 and 2). Can anyone help me with this? Thanks
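
    A common pattern for this, sketched below (my_table is a placeholder, since the post does not name the table): group the duplicated name pairs, then join back to fetch the full rows.

        SELECT t.*
        FROM my_table t
        JOIN (
            SELECT nachname, vorname
            FROM my_table
            GROUP BY nachname, vorname
            HAVING COUNT(*) > 1
        ) dup ON dup.nachname = t.nachname
             AND dup.vorname = t.vorname;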

    Read the article

  • mysql - joining three tables with HAVING

    - by Qiao
    I have a table:

        id  name  type

    where type is 1 or 2. I need to join this table with two other tables: rows with type = 1 should be joined with the first table, and rows with type = 2 with the second. Something like:

        SELECT * FROM tbl
        INNER JOIN tbl_1 ON tbl.name = tbl_1.name HAVING tbl.type = 1
        INNER JOIN tbl_2 ON tbl.name = tbl_2.name HAVING tbl.type = 2

    But it does not work. How can it be implemented?
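
    One possible formulation, sketched with the table names from the question: fold the type test into each join condition and use LEFT JOINs, so each row only picks up columns from the table that matches its type (a UNION of two separate SELECTs is another option).

        SELECT t.id, t.name, t.type,
               t1.*,   -- populated only when t.type = 1
               t2.*    -- populated only when t.type = 2
        FROM tbl t
        LEFT JOIN tbl_1 t1 ON t1.name = t.name AND t.type = 1
        LEFT JOIN tbl_2 t2 ON t2.name = t.name AND t.type = 2;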

    Read the article

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance. I have a table in which each row corresponds to a high school basketball game. Each game has an away team id and a home team id (foreign keys to another "team" table), a home score, an away score, and a date. I am writing a query that matches attendance with this season's basketball games. My sample output will be (#_students_missed_class, day_of_game, home_team, away_team, home_team_wins_this_season, away_team_wins_this_season). I now want to add how each team did the previous season to my analysis. I have the previous season stored in the game table, so I should be able to get it with a subselect. So in my main SELECT statement I add the subselect:

        SELECT COUNT(*)
        FROM game_table
        WHERE game_table.date BETWEEN 'start of previous season' AND 'end of previous season'
          AND ( (game_table.home_team = team_table.id AND game_table.home_score > game_table.away_score)
             OR (game_table.away_team = team_table.id AND game_table.away_score > game_table.home_score) )

    In this case team_table.id refers to the id of the home team, so I now have all their wins calculated from the previous year. This method of calculation turns out to be anything but cheap in time and resources: EXPLAIN shows ALL in the type field, no key is used, and the query times out. I'm not sure how to write a more efficient query with a subselect, and it seems preposterously inefficient to have to write four of these queries (for home wins, home losses, away wins, away losses). I am sure this could be clearer; I'll absolutely add more detail tomorrow if anyone has questions.
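
    A join-plus-aggregate sketch that computes all four figures per team in one pass (column names follow the description above; the season bounds are placeholder dates):

        SELECT t.id AS team_id,
               SUM(g.home_team = t.id AND g.home_score > g.away_score) AS home_wins,
               SUM(g.home_team = t.id AND g.home_score < g.away_score) AS home_losses,
               SUM(g.away_team = t.id AND g.away_score > g.home_score) AS away_wins,
               SUM(g.away_team = t.id AND g.away_score < g.home_score) AS away_losses
        FROM team_table t
        JOIN game_table g ON t.id IN (g.home_team, g.away_team)
        WHERE g.date BETWEEN '2009-11-01' AND '2010-03-31'   -- previous-season bounds, placeholders
        GROUP BY t.id;

    The aggregated result can then be joined back to the main attendance query by team id instead of running a correlated subselect per output row.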

    Read the article

  • php, mySQL & AJAX: Unable to use sessions across the scripts in the same domain

    - by Devner
    Hi all, I have the following pages: page1.php, page2.php, and page3.php. The code in each of them is as below.

    CODE:

    page1.php

        <script type="text/javascript">
        $(function(){
            $('#imgID').upload({
                submit_to_url: "page2.php",
                file_name: 'myfile1',
                description : "Image",
                limit : 1,
                file_types : "*.jpg",
            })
        });
        </script>
        <body>
        <form action="page3.php" method="post" enctype="multipart/form-data" name="frm1" id="frm1">
        //Some other text fields
        <input type="submit" name="submit" id="submit" value="Submit" />
        </form>
        </body>

    page2.php

        <?php
        session_start();
        $a = $_SESSION['a'];
        $b = $_SESSION['b'];
        $c = $_SESSION['c'];
        $res = mysql_query("SELECT col FROM table WHERE col1 = $a AND col2 = $b AND col3 = $c LIMIT 1");
        $num_rows = mysql_num_rows($res);
        echo $num_rows; //echoes 0 when in fact it should have been 1, because the data in the session exists.
        //OK, let's proceed further
        //... Do some stuff...
        //Store some more values and create new session variables (and assume that page1.php is going to be able to use them)
        $_SESSION['d'] = 'd';
        $_SESSION['e'] = 'e';
        $_SESSION['f'] = 'f';
        if (move_uploaded_file($_FILES['file']['tmp_name'], $file)) {
            echo "success";
        } else {
            echo "error ".$_FILES['file']['error'];
        }
        ?>

    page3.php

        <?php
        session_start();
        if( isset($_POST['submit']) ) {
            //These sessions are non-existent although the AJAX request
            //to page2.php may have created them when called via AJAX from within page1.php
            echo $_SESSION['d'].$_SESSION['e'].$_SESSION['f'];
        }
        ?>

    As the code shows, I am posting some info via an AJAX call from page1.php to page2.php. page2.php is supposed to be able to use the session values from page1.php, i.e. $_SESSION['a'], $_SESSION['b'] and $_SESSION['c'], but it does not. Why? How can I fix this? page2.php creates some more session variables after some processing is done, and a response is sent back to page1.php. The submit button of the form on page1.php is hit and the page gets POSTed to page3.php. But when the session info that was created in page2.php is echoed, it's blank, signifying that the sessions from page2.php are not used. How can I fix this? I have looked over a lot of information and spent about 50 hours trying different things with my scripts before arriving at the above conclusions. My app is custom made using functions (not OOP) and does not use any PHP frameworks, and I am not about to use any, as my knowledge of OOP concepts is limited and many frameworks are object oriented. I came across race conditions, but the solutions provided don't help much. One more solution, using the DB to hold sessions and reading and writing them from the DB, is the last thing on my mind; I really want to avoid creating a table and writing and maintaining code for a task as simple as keeping sessions across pages on the same domain. So my request is: is there a way I can solve the above problem(s) via simple coding under the present conditions? Any help is appreciated. Thank you.

    Read the article

  • De-normalization alternative to specific MYSQL problem?

    - by Booker
    I am facing quite a specific optimization problem. I currently have 4 normalized tables of data. Every second, possibly thousands of users will pull down up-to-date info from these tables using AJAX. The thing is that I can predict relatively easily which subset of data they need: the most recent 100 or so entries in those 4 normalized tables. I have been researching de-normalization, but I feel that perhaps there is an easier solution. I was thinking that I could run one SQL query every second to condense the needed info, store it in a temporary cached table, and then have all of the user queries draw from that. This would allow the complex join of 4 tables to run only once; from there the users just need to do a simple lookup against the cached table. I really don't know if this is feasible. Comments on this or any other suggestions would be much appreciated. Thanks!
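
    A rough sketch of the cached summary table idea described above; every name here is a placeholder, and the real four-table join is not known from the post:

        -- created once; refreshed periodically (e.g. once per second by a cron job or scheduled event)
        CREATE TABLE recent_feed_cache (
          id INT UNSIGNED NOT NULL,
          payload TEXT NOT NULL,
          created_at DATETIME NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB;

        -- refresh step: swap in the latest ~100 rows produced by the expensive join
        TRUNCATE recent_feed_cache;
        INSERT INTO recent_feed_cache (id, payload, created_at)
        SELECT a.id, CONCAT(b.col1, ' ', c.col2), a.created_at
        FROM table_a a
        JOIN table_b b ON b.a_id = a.id
        JOIN table_c c ON c.a_id = a.id
        ORDER BY a.created_at DESC
        LIMIT 100;

        -- the per-user AJAX query becomes a cheap read
        SELECT * FROM recent_feed_cache ORDER BY created_at DESC;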

    Read the article

  • PHP MySQL - select unique values that are not being used in a different table

    - by apis17
    Updates: please see below. I have a table data:

        State   | d_country | d_postcode
        State1  | Country1  | 1111
        State2  | Country2  | 2222
        State3  | Country3  | 3333
        State4  | Country4  | 4444

    And another table user:

        Name    | u_country | u_postcode
        Name1   | Country3  | 3333
        Name2   | Country5  | 5555
        Name3   |           | 6666
        Name4   | Country6  | 6666
        Name5   | Country6  | 6666

    What SQL should I use to:

    1. Determine the number (count) of countries that are not listed in table data. For example, the u_postcode values not listed in d_postcode are 5555 and 6666, so it should return 2.
    2. List the names and countries that are not yet available in table data.

    Update: I want to use grouping to filter by postcode and keep Name3 and Name4 as different rows. For example:

        Name    | u_country | u_postcode
        Name2   | Country5  | 5555
        Name3   |           | 6666
        Name4   | Country6  | 6666

    Any possible idea?
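
    A sketch of both parts, assuming the match is made on postcode as the example implies (the table names user and data are taken from the post):

        -- 1) how many distinct unmatched postcodes
        SELECT COUNT(DISTINCT u.u_postcode) AS missing_count
        FROM user u
        LEFT JOIN data d ON d.d_postcode = u.u_postcode
        WHERE d.d_postcode IS NULL;

        -- 2) one row per distinct (country, postcode) pair not yet present in data
        SELECT MIN(u.Name) AS Name, u.u_country, u.u_postcode
        FROM user u
        LEFT JOIN data d ON d.d_postcode = u.u_postcode
        WHERE d.d_postcode IS NULL
        GROUP BY u.u_country, u.u_postcode;

    With the sample rows above, the second query keeps Name3 (blank country) and Name4 (Country6) as separate rows, matching the desired output.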

    Read the article

  • show data from mysql after barcode split and character match

    - by klox
    I need some code for the next step. This is my first step:

        <script>
        $("#mod").change(function() {
            var barCode = $("#mod").val();
            var data = barCode.split(" ");
            $("#mod").val(data[0]);
            $("#seri").val(data[1]);
            var str = data[0];
            var matches = str.match(/(EE|[EJU]).*(D)/i);
        });
        </script>

    After the match, I want the result to connect to the database and then show data from a table inside <div id="value">. How do I do that?

    Read the article

  • auto_increment in MySQL - can I omit it?

    - by kees-kist
    I've noticed that phpMyAdmin creates the following SQL for table creation:

        CREATE TABLE something ( ... ) auto_increment=1;

    When I write a database creation script, I don't use the auto_increment bit. From reading related questions here, I understand that it determines the starting value for auto_increment columns. But is it good practice to set it to 1, or should I just leave it out of the SQL so that the default is used?
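
    A short illustration of the two places AUTO_INCREMENT shows up; on a brand-new table, leaving out the table option has the same effect as writing =1 explicitly:

        CREATE TABLE something (
          id INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- column attribute: generates 1, 2, 3, ...
          name VARCHAR(50) NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB AUTO_INCREMENT=1;           -- table option: the next value to hand out

        -- the counter can also be changed later if needed
        ALTER TABLE something AUTO_INCREMENT = 1000;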

    Read the article

  • mysql union query

    - by Sergio
    The table that contains information about members has a structure like:

        id | fname | pic   | status
        1  | john  | a.jpg | 1
        2  | mike  | b.jpg | 1
        3  | any   | c.jpg | 1
        4  | jacky | d.jpg | 1

    The table for the list of friends looks like:

        myid | date       | user
        1    | 01-01-2011 | 4
        2    | 04-01-2011 | 3

    I want to make a query whose result prints the users from the friend-list table together with the photos and names from the members table for both sides: myid (those who added) and user (those who were added). For this example, that result would look like:

        myid | myidname | myidpic | user | username | userpic | status
        1    | john     | a.jpg   | 4    | jacky    | d.jpg   | 1
        2    | mike     | b.jpg   | 3    | any      | c.jpg   | 1
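
    A sketch that joins the members table twice, once for each side of the friendship (friendlist and members are assumed table names, since the post does not give them; column aliases follow the desired output):

        SELECT f.myid,
               m1.fname AS myidname,
               m1.pic   AS myidpic,
               f.user,
               m2.fname AS username,
               m2.pic   AS userpic,
               m2.status
        FROM friendlist f
        JOIN members m1 ON m1.id = f.myid
        JOIN members m2 ON m2.id = f.user;

    No UNION is needed here; a UNION would stack the two sides as separate rows rather than placing them side by side.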

    Read the article

  • mysql 2 primary keys on one table

    - by Bharanikumar
    CREATE TABLE Orders (
        ID SMALLINT UNSIGNED NOT NULL,
        ModelID SMALLINT UNSIGNED NOT NULL,
        Descrip VARCHAR(40),
        PRIMARY KEY (ID, ModelID)
    );

    Basically, may I know: can we create two primary keys on one table? Is that correct? Because as per the SQL rules, we can create any number of unique keys on one table, but only one primary key. Then how is my system allowing me to create multiple primary keys? Please advise. What is the general rule?
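
    For contrast, the statement above defines a single composite primary key spanning two columns, not two primary keys; a sketch of both layouts (the table names here are made up):

        -- one PRIMARY KEY made of two columns (what the question's statement creates)
        CREATE TABLE Orders_composite (
          ID SMALLINT UNSIGNED NOT NULL,
          ModelID SMALLINT UNSIGNED NOT NULL,
          Descrip VARCHAR(40),
          PRIMARY KEY (ID, ModelID)
        );

        -- one single-column PRIMARY KEY plus a separate UNIQUE key
        CREATE TABLE Orders_single (
          ID SMALLINT UNSIGNED NOT NULL,
          ModelID SMALLINT UNSIGNED NOT NULL,
          Descrip VARCHAR(40),
          PRIMARY KEY (ID),
          UNIQUE KEY uq_model (ModelID)
        );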

    Read the article

  • Tricky MySQL Query for messaging system in Rails - Please Help

    - by ole_berlin
    Hi, I'm writing a Facebook-style messaging system for a Rails app and I'm having trouble selecting the messages for the inbox (with will_paginate). The messages are organized in threads; in the inbox, the most recent message of a thread should appear with a link to its thread. The thread is organized via a parent_id 1-n relationship with itself. So far I'm using something like this:

        class Message < ActiveRecord::Base
          belongs_to :sender, :class_name => 'User', :foreign_key => "sender_id"
          belongs_to :recipient, :class_name => 'User', :foreign_key => "recipient_id"
          has_many :children, :class_name => "Message", :foreign_key => "parent_id"
          belongs_to :thread, :class_name => "Message", :foreign_key => "parent_id"
        end

        class MessagesController < ApplicationController
          def inbox
            @messages = current_user.received_messages.paginate :page => params[:page], :per_page => 10, :order => "created_at DESC"
          end
        end

    That gives me all the messages, but for one thread both the thread itself and its most recent message appear (instead of only the most recent message). I also cannot use a GROUP BY clause, because for the thread itself (the parent, so to speak) the parent_id is of course NULL. Does anyone have an idea of how to solve this in an elegant way? I have already thought about adding the parent_id to the parent itself and then grouping by parent_id, but I'm not sure if that works. Thanks
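
    At the SQL level, one sketch of "latest message per thread" treats a root message as its own thread via COALESCE; the table and column names (messages, recipient_id, created_at) are assumed from the model above, and 42 stands in for the current user's id:

        SELECT m.*
        FROM messages m
        JOIN (
            SELECT COALESCE(parent_id, id) AS thread_id,
                   MAX(created_at) AS last_created_at
            FROM messages
            WHERE recipient_id = 42
            GROUP BY COALESCE(parent_id, id)
        ) latest ON COALESCE(m.parent_id, m.id) = latest.thread_id
                AND m.created_at = latest.last_created_at
        WHERE m.recipient_id = 42
        ORDER BY m.created_at DESC;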

    Read the article

  • PHP Serialize Function - Adding serialized data to mysql and then fetch and display

    - by Abhilash Shukla
    I want to know whether the PHP serialize function is 100% secure, and also whether storing serialized data in a database and then fetching it to work with is a good approach. For example: I have a website with different user privileges, and I want to store the permission settings for a particular privilege level in my database (the data would be stored through PHP's serialize function); when a user logs in, I want to fetch this data and set the privileges for that user. I can get this working; what I want to know is whether it is the best way to do it, or whether something more efficient can be done. Also, I was going through the PHP manual and found this code. Can anybody explain a bit of what's happening in it (specifically, why is base64_encode used)?

        <?php
        function mySerialize( $obj ) {
            return base64_encode(gzcompress(serialize($obj)));
        }
        function myUnserialize( $txt ) {
            return unserialize(gzuncompress(base64_decode($txt)));
        }
        ?>

    Also, if somebody can provide their own code showing the most efficient way to do this, that would help. Thanks.

    Read the article

  • Caching Mysql database for better performance

    - by kobey
    Hi, I'm using the Amazon cloud and I have a performance issue, since the HDD is not located on my machine. My database is small (~500 MB) and I can afford to keep all of it in RAM. I do not want to cache only queries in RAM; I need all the tables there. How can I do it? Thanks, Koby. P.S. I'm using Ubuntu Server...
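
    For InnoDB tables, one common approach is to make the buffer pool larger than the data set, so that after warm-up every page stays cached in RAM; a sketch (the 1G figure is just an example sized for a ~500 MB database):

        -- check the current setting
        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

        -- then in my.cnf under [mysqld], followed by a server restart:
        --   innodb_buffer_pool_size = 1G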

    Read the article

  • MySQL query killing my server

    - by Webnet
    Looking at this query, there's got to be something bogging it down that I'm not noticing. I ran it for 7 minutes and it only updated 2 rows.

        //set product count for makes
        $tru->query->run(array(
            'name' => 'get-make-list',
            'sql' => 'SELECT id, name FROM vehicle_make',
            'connection' => 'core'
        ));
        while($tempMake = $tru->query->getArray('get-make-list')) {
            $tru->query->run(array(
                'name' => 'update-product-count',
                'sql' => 'UPDATE vehicle_make SET product_count = (
                        SELECT COUNT(product_id) FROM taxonomy_master WHERE v_id IN (
                            SELECT id FROM vehicle_catalog WHERE make_id = '.$tempMake['id'].'
                        )
                    ) WHERE id = '.$tempMake['id'],
                'connection' => 'core'
            ));
        }

    I'm sure this query can be optimized to perform better, but I can't think of how to do it.

        vehicle_make = 45 rows
        taxonomy_master = 11,223 rows
        vehicle_catalog = 5,108 rows

    All tables have appropriate indexes.
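
    A single-statement sketch that replaces the per-make loop, using the table and column names from the query above; whether it is actually faster still depends on the indexes on taxonomy_master.v_id and vehicle_catalog.make_id:

        UPDATE vehicle_make vm
        SET vm.product_count = (
            SELECT COUNT(tm.product_id)
            FROM taxonomy_master tm
            JOIN vehicle_catalog vc ON vc.id = tm.v_id
            WHERE vc.make_id = vm.id
        );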

    Read the article

  • MySQL Select statement Where table1.id != table2.id

    - by Michael
    I have a table of data which holds posts, and a separate table which holds deleted posts. When a post is deleted, its ID is added to the deleted table rather than the post entry being removed. What is a clean, efficient way of selecting all the posts from the posts table without selecting the ones whose ID is in the deleted table?
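
    Two equivalent sketches; posts, deleted_posts, and post_id are assumed names, since the post does not give the actual schema:

        -- anti-join form
        SELECT p.*
        FROM posts p
        LEFT JOIN deleted_posts d ON d.post_id = p.id
        WHERE d.post_id IS NULL;

        -- NOT EXISTS form
        SELECT p.*
        FROM posts p
        WHERE NOT EXISTS (SELECT 1 FROM deleted_posts d WHERE d.post_id = p.id);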

    Read the article

  • mySQL Query JOIN in same table

    - by jeerose
    The table structure goes something like this:

        Table: purchasers
        Columns: id | organization | city | state

        Table: events
        Columns: id | purchaser_id

    My query:

        SELECT purchasers.*, events.id AS event_id
        FROM purchasers
        INNER JOIN events ON events.purchaser_id = purchasers.id
        WHERE purchasers.id = '$id'

    What I would like to do is obviously to select entries by their id from the purchasers table and join from events. That's the easy part. I can also easily run another query to get the other purchasers with the same organization, city, and state (there are multiples), but I'd like to do it all in the same query. Is there a way I can do this? In short, grab purchasers by their ID, but then also select the other purchasers that have the same organization, city, and state. Thanks.
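
    A self-join sketch built on the query above: pin down the requested purchaser first, then pull every purchaser sharing its organization, city, and state (42 is a placeholder for '$id'; the result includes the original purchaser as well):

        SELECT p2.*, e.id AS event_id
        FROM purchasers p1
        JOIN purchasers p2
          ON p2.organization = p1.organization
         AND p2.city = p1.city
         AND p2.state = p1.state
        LEFT JOIN events e ON e.purchaser_id = p2.id
        WHERE p1.id = 42;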

    Read the article
