Search Results

Search found 34274 results on 1371 pages for 'mysql table'.

Page 344/1371 | < Previous Page | 340 341 342 343 344 345 346 347 348 349 350 351  | Next Page >

  • fetch some data from two tables

    - by user1753971
    I have a site like IMDb that provides movie information, and every user can rate any movie. I have two tables: 1. imdb (stores movie details): id, name, actors, vote; 2. ratings (stores users' rating details): id, rating_id (the same as id in the first table), rating_num, IP. What I currently do: whenever anyone rates a movie, I take the average of that movie's ratings from the ratings table (total of ratings / number of ratings) and store that value in the "vote" column of the first table - my requirements demanded it, which is why it's done this way. Now my problem: I want to fetch the top-rated movies, i.e. list the movies with the highest values in the vote column, with one more condition - a movie must have been rated by at least 10 users (using the ratings table for that). Thanks in advance
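
    A minimal sketch of such a query, assuming the table and column names above and that ratings.rating_id references imdb.id:

        SELECT i.id, i.name,
               AVG(r.rating_num) AS avg_rating,
               COUNT(*) AS num_ratings
        FROM imdb i
        JOIN ratings r ON r.rating_id = i.id
        GROUP BY i.id, i.name
        HAVING COUNT(*) >= 10
        ORDER BY avg_rating DESC;

    Ordering by the stored vote column would also work, but recomputing the average from the ratings table, as here, avoids trusting the denormalized value.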

    Read the article

  • SQL: joining multiple tables into one.

    - by Graveen
    I have 4 tables: r1, r2, r3 and r4. The table columns are the following: rId | rName. In the end, I want a single table - let's call it R. Obviously, R will have the following structure: rTableName | rId | rName. I'm looking for a solution, and the most natural one for me is to: 1. add a single column to every rX table; 2. fill this column with the name of the table I'm processing; 3. generate the SQL statements and concatenate them all. Although I see exactly how to perform steps 1 and 3 with batching, editing, etc. (I only have to perform it once and for all), I don't see how to do step 2: getting the table name itself into the SQL. Do you have an idea, or a different approach that could solve my problem? Note: in fact, there are 250+ rX tables; that's why I can't do this manually. Note 2: precisely, this is MySQL.
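
    Since this is MySQL, a hedged sketch: information_schema can generate the per-table statements, so step 2 never has to be typed by hand (this assumes every rX table shares the rId | rName structure):

        SELECT CONCAT('INSERT INTO R (rTableName, rId, rName) SELECT ''',
                      table_name, ''', rId, rName FROM `', table_name, '`;') AS stmt
        FROM information_schema.tables
        WHERE table_schema = DATABASE()
          AND table_name REGEXP '^r[0-9]+$';

    Running the rows this SELECT emits as a script performs the whole merge, and no column needs to be added to the source tables at all.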

    Read the article

  • Cannot run the next process when a variable holds no value

    - by bruine
    First, I want to compare 2 tables, tb_wrapper and tb_summary, to get the rows in tb_wrapper that don't exist in tb_summary, saving each match in $link. If I get such rows, I want to print $link; when I don't, I want to go to another process. Here's the code:

        $q2 = mysql_query("
            SELECT a.doc_url
            FROM tb_wrapper a
            LEFT JOIN tb_summary b ON a.doc_name = b.doc_summ
            WHERE b.doc_summ IS NULL");
        while ($row = mysql_fetch_array($q2)) {
            $link = $row['doc_url'];
            if (!$link) {
                include 'next_process.php';
            } else {
                print_r($link);
            }
        }

    It doesn't work when there are no unmatched rows, i.e. when $link never receives a value. Tables:

        CREATE TABLE tb1 (`id` int, `doc_name` varchar(100), `doc_url` varchar(50));
        CREATE TABLE tb2 (`id` int, `doc_summ` varchar(100));
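
    A minimal sketch of the intended control flow, using the names above: the empty-result case has to be tested outside the loop, because the loop body never runs when the query returns no rows.

        $q2 = mysql_query("SELECT a.doc_url
                           FROM tb_wrapper a
                           LEFT JOIN tb_summary b ON a.doc_name = b.doc_summ
                           WHERE b.doc_summ IS NULL");
        if (mysql_num_rows($q2) == 0) {
            include 'next_process.php';  // nothing unmatched: go to the next process
        } else {
            while ($row = mysql_fetch_array($q2)) {
                print_r($row['doc_url']);
            }
        }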

    Read the article

  • High CPU - What to do.

    - by Udi Kantzuker
    I have a high-CPU problem with MySQL: "top" (Linux) shows CPU peaks of 90%. I was trying to find the source of the problem, so I turned on the general log and the slow query log; the slow query log did not find anything. The DB contains a few small tables and one large table with almost 100k rows; the database engine is MyISAM. One strange thing I have noticed: on the large table, SELECT and INSERT are very fast, but UPDATE takes 0.2 - 0.5 seconds. I have already used OPTIMIZE and REPAIR with no improvement. The table is updated frequently - could this be the source of the high CPU%? What can I do to improve this?
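
    A few diagnostics worth running first (a sketch, not a fix) - frequent UPDATEs on MyISAM take full-table write locks, so lock contention is one plausible culprit to rule in or out:

        SHOW FULL PROCESSLIST;                           -- what are the busy threads doing?
        SHOW GLOBAL STATUS LIKE 'Table_locks_waited';    -- MyISAM lock contention counter
        SHOW GLOBAL STATUS LIKE 'Table_locks_immediate'; -- for comparison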

    Read the article

  • Changing time_zone does not always take effect

    - by LearneR
    I have two tables:

        table-1
        id | date-time
        1  | 2012-12-13 15:20:13

        table-2
        id | date-time
        1  | 2012-12-13 15:20:13

    Now I am selecting the records after setting MySQL's time_zone variable:

        -- Case 1
        SET time_zone = '+00:00';
        SELECT `date-time` FROM `table-1`;  -- 2012-12-13 09:50:13

        -- Case 2
        SET time_zone = '+00:00';
        SELECT `date-time` FROM `table-2`;  -- 2012-12-13 15:20:13 (not converted to the specified timezone)

    In Case 1 the date-time comes back converted, but not in Case 2. What could the issue be?
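
    A check worth running (a sketch): SET time_zone converts only TIMESTAMP columns on retrieval, while DATETIME values come back exactly as stored, so differing column types in the two tables would explain this behaviour:

        SHOW COLUMNS FROM `table-1` LIKE 'date-time';
        SHOW COLUMNS FROM `table-2` LIKE 'date-time';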

    Read the article

  • Copying a database into a new database including structure and data

    - by Jason
    In phpMyAdmin, under Operations, I can use "Copy database to:" and select: Structure and data; CREATE DATABASE before copying; Add AUTO_INCREMENT value. I need to be able to do that without using phpMyAdmin. I know how to create the database and user, and I have a source database as a shell to work from, so all I really need is the part that copies all the table structure and data (I know, the harder part). system() and exec() are not options for me, which rules out mysqldump (I think). How can I loop through each table and recreate its structure and data? Is it just looping through the results of SHOW TABLES and then, for each table, looping through DESCRIBE tablename? And then, is there an easy way to get the data copied?
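
    A hedged sketch of the pure-SQL route (the $src/$dst schema names are hypothetical, and the target database is assumed to exist): CREATE TABLE ... LIKE copies the full structure, indexes included, so DESCRIBE isn't needed.

        $res = mysql_query("SHOW TABLES FROM `$src`");
        while ($row = mysql_fetch_row($res)) {
            $t = $row[0];
            mysql_query("CREATE TABLE `$dst`.`$t` LIKE `$src`.`$t`");         // structure
            mysql_query("INSERT INTO `$dst`.`$t` SELECT * FROM `$src`.`$t`"); // data
        }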

    Read the article

  • Removing certain characters in all rows that match a regex?

    - by user001
    I'd like to change {foo, {bar}, foobar} to {foo, bar, foobar} in all rows that match '{.*{', i.e. remove all curly braces { and } except the outermost pair. So doing

        mysql -h $H -u $U -p$P $DB -B -e "SELECT id FROM t WHERE col REGEXP '{.*{'" > bad.txt

    selects all the rows that will need this substitution. How do I make this substitution very quickly? EDIT: Could I do it by stripping the braces,

        UPDATE table SET column = REPLACE(column, '{', '');

    and then restoring the outermost pair with

        UPDATE table SET column = REPLACE(column, '^', '{');
        UPDATE table SET column = REPLACE(column, '$', '}');
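
    A sketch of a pure-SQL alternative, assuming each matching value begins with '{' and ends with '}': REPLACE substitutes literal strings only (so '^' and '$' would not act as anchors); instead, strip every brace from the inner part and re-wrap it.

        UPDATE t
        SET col = CONCAT('{',
                         REPLACE(REPLACE(SUBSTRING(col, 2, CHAR_LENGTH(col) - 2),
                                         '{', ''),
                                 '}', ''),
                         '}')
        WHERE col REGEXP '{.*{';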

    Read the article

  • Help with SQL query

    - by user154301
    Hello, I have a list of DateTime values, and for each value I need to fetch something from the database. I would like to do this with one query. I know it's possible to pass a table (list) to a stored procedure, but I'm not sure how to write the query itself. Let's say I have the following table:

        CREATE TABLE Shows(
            ShowId [int] NOT NULL,
            StartTime DateTime NOT NULL,
            EndTime DateTime NOT NULL
        )

    and an array of dates:

        DECLARE @myDateArray MyCustomDateArrayType

    Now, if I were fetching a single item, I would write a query like this:

        SELECT * FROM Shows WHERE StartTime > @ArrayItem AND @ArrayItem < EndTime

    where @ArrayItem is an item from @myDateArray. But how do I formulate the query that fetches the information for all array items? Thanks!
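
    A sketch for SQL Server-style table-valued parameters, assuming @myDateArray exposes a single column named ArrayItem (a hypothetical name): a JOIN applies the single-item condition above once per array row, and DISTINCT collapses shows matched by several dates.

        SELECT DISTINCT s.*
        FROM Shows s
        JOIN @myDateArray d
          ON s.StartTime > d.ArrayItem AND d.ArrayItem < s.EndTime;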

    Read the article

  • delete all records except the ids I have in a python list

    - by jay_t
    Hi all, I want to delete all records in a MySQL db except the records whose ids are in a list. The length of that list can vary and could easily contain 2000+ ids. Currently I convert my list to a string so it fits into something like this:

        cursor.execute("""delete from table where id not in (%s)""", (list))

    This doesn't feel right, and I have no idea how long the list is allowed to be. What's the most efficient way of doing this from Python? Altering the structure of the table with an extra field to mark/unmark records for deletion would be great but is not an option. Having a dedicated table storing the ids would indeed be helpful - then this could just be done through a SQL query - but I would really like to avoid these options if possible. Thanks,
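
    A minimal sketch of the usual placeholder trick (assuming a MySQLdb-style cursor with the %s paramstyle): generate one placeholder per id so the driver handles quoting, then pass the list as the parameters.

        ids = [3, 7, 42]  # hypothetical id list
        placeholders = ", ".join(["%s"] * len(ids))
        sql = "DELETE FROM table WHERE id NOT IN (%s)" % placeholders
        cursor.execute(sql, ids)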

    Read the article

  • How can I use innerHTML without deleting the previous innerHTML?

    - by Mikelon85
    I want to convert an array to an HTML table, but innerHTML deletes what it wrote before. Here is the code:

        <html><div id="tablaP"></div>

        function cargar2() {
            document.getElementById('tablaP').innerHTML = "<table>";
            var h = 1;
            for (i = 0; i < miArray.length; i++) {
                document.getElementById('tablaP').innerHTML = '<tr>';
                for (j = 0; j < miArray[i].length; j++) {
                    document.getElementById("tablaP").innerHTML = '<td><td>';
                }
                document.getElementById('tablaP').innerHTML = '</tr>';
                h++;
            }
            document.getElementById('tablaP').innerHTML = '</table>';
        }
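
    A sketch of the usual fix: every innerHTML assignment replaces the element's contents and is re-parsed as complete markup (partial tags like '<tr>' are simply dropped), so build the whole table in a string and assign it once.

        function cargar2() {
            var html = '<table>';
            for (var i = 0; i < miArray.length; i++) {
                html += '<tr>';
                for (var j = 0; j < miArray[i].length; j++) {
                    html += '<td>' + miArray[i][j] + '</td>';
                }
                html += '</tr>';
            }
            html += '</table>';
            document.getElementById('tablaP').innerHTML = html;
        }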

    Read the article

  • Rename INDEX Column

    - by Lee
    Hey all, I have a database with around 40 tables and need to rename the ID column in every one. E.g. a USER table has a bunch of fields like user_id | user_username | user_password | etc. and I want to rename just the ID column, i.e. id | user_username | user_password | etc. But I keep getting MySQL errors on the ALTER TABLE command, e.g.:

        ALTER TABLE database RENAME COLUMN user_id TO id;

    plus many different variations. What's the best way to do this? Hope you can advise.
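
    A sketch of the syntax this MySQL likely expects (RENAME COLUMN only arrived in MySQL 8.0): CHANGE operates on the table, not the database, and the column definition must be restated to match the existing one - the INT NOT NULL here is an assumption.

        ALTER TABLE user CHANGE user_id id INT NOT NULL;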

    Read the article

  • differentiating results of sql right join

    - by Sourabh
    Hi, I have the SQL query below:

        SELECT `User`.`username`,
               Permalink.perma_link_id, Permalink.locale, Permalink.title,
               DATEDIFF(CURDATE(), Permalink.created) AS dtdiff,
               `TargetSegment`.segment_text, TargetSegment.source_segment_id,
               TargetSegment.perma_link_id, TargetSegment.created, TargetSegment.updated,
               DATEDIFF(CURDATE(), TargetSegment.updated) AS datediff
        FROM `users` AS `User`
        RIGHT JOIN perma_links AS `PermaLink`
            ON (`PermaLink`.`username` = `User`.`username`)
        RIGHT JOIN target_segments AS `TargetSegment`
            ON (`TargetSegment`.`username` = `User`.`username`)
        RIGHT JOIN source_segments AS `SourceSegment`
            ON (`SourceSegment`.`source_detail_id` = `PermaLink`.`source_detail_id`)
        LEFT JOIN source_details AS `SourceDetail`
            ON (`SourceSegment`.`source_detail_id` = `SourceDetail`.`id`)
        WHERE `TargetSegment`.`username` = "xxxx"
          AND `TargetSegment`.`segment_text` <> ""
          AND `Permalink`.`perma_link_id` = `TargetSegment`.`perma_link_id`
          AND `TargetSegment`.`source_segment_id` = `SourceSegment`.`id`
          AND `Permalink`.`source_detail_id` = `SourceDetail`.`id`
        ORDER BY `TargetSegment`.`updated` DESC
        LIMIT 0, 10

    This SQL fetches the correct results for me. I want to identify which table each row comes from - specifically, which result is due to the PermaLink table and which is from the TargetSegment table. Is this achievable?
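
    A hedged sketch of the usual technique: a joined row contains columns from every table at once, so "which table" is normally marked by selecting a literal label per branch, e.g. in a UNION-style query over the two tables of interest:

        SELECT 'PermaLink' AS src, perma_link_id, title AS txt
        FROM perma_links
        UNION ALL
        SELECT 'TargetSegment' AS src, perma_link_id, segment_text AS txt
        FROM target_segments;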

    Read the article

  • Archiving rows dynamically

    - by Serge
    I was wondering what would be the best solution for dynamically archiving rows. For instance, when a user marks a task as completed, that task needs to be archived yet still be accessible. What would be the best practice for achieving this? Should I just leave it all in the same table and leave completed tasks out of the queries? I'm afraid that over time the table will become huge (1,000,000 rows in a year or less). Or should I create another table, e.g. task_archive, and query that table whenever archived data is needed? I know similar questions have been asked before, but most of them were about archiving thousands of rows simultaneously; I just need to know the best method (and why) for archiving one row at a time, once it's been marked completed.
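
    A sketch of the single-table pattern for comparison (the task table and column names are illustrative): a nullable completion timestamp plus an index usually keeps a table of this size queryable without a separate archive.

        ALTER TABLE task ADD COLUMN completed_at DATETIME NULL;
        CREATE INDEX idx_task_completed ON task (completed_at);
        -- active tasks stay cheap to query:
        SELECT * FROM task WHERE completed_at IS NULL;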

    Read the article

  • Ruby on Rails: Best way to save search queries in a database

    - by Adam Templeton
    For a RoR app I'm helping develop, I need to save all search queries in a database so I can analyze them later. My plan right now is to create a Result model and table, and just save each search query's text in that table, along with the user's ID, the time, etc. However, the app has about 15,000 users, so I'm afraid the single-table approach won't be super efficient when it comes time to parse that data. (The database is set up via MySQL, if that factors in at all.) Am I just being paranoid? Is there a Ruby gem that handles this sort of thing, or a better approach I could take? Any input would be appreciated.
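
    A sketch of the single-table approach as a plain Rails migration (names are illustrative): with indexes on user_id and created_at, a table like this typically stays easy to analyze well past 15,000 users.

        class CreateSearchQueries < ActiveRecord::Migration
          def change
            create_table :search_queries do |t|
              t.string  :query
              t.integer :user_id
              t.timestamps
            end
            add_index :search_queries, :user_id
            add_index :search_queries, :created_at
          end
        end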

    Read the article

  • Why does InnoDB keep on growing for every update?

    - by Akash Kava
    I have a table that consists of heavy blobs, and I wanted to conduct some tests on it. I know deleted space is not reclaimed by InnoDB, so I decided to reuse existing records by updating their values instead of creating new records. But I noticed that whether I delete and insert a new entry or UPDATE an existing row, InnoDB keeps on growing. Say I have 100 rows, each storing 500KB of information, and my InnoDB size is 10MB; when I run an UPDATE on all rows (no insert, no delete), InnoDB grows by ~8MB for every run. All I am doing is storing exactly 500KB of data in each row, with a little modification, and the size of the blob is fixed. What can I do to prevent this? I know about OPTIMIZE TABLE, but I can't use it because in regular usage the table is going to be 60-100GB big, and running OPTIMIZE would just stall the entire server.
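
    Two things worth checking (a sketch): how much of the growth is free space inside the tablespace that InnoDB will reuse, and whether file-per-table tablespaces are enabled - with the shared ibdata file, freed space is never returned to the OS.

        SHOW VARIABLES LIKE 'innodb_file_per_table';
        SELECT table_name, data_length, data_free
        FROM information_schema.tables
        WHERE table_schema = DATABASE();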

    Read the article

  • Set AUTO_INCREMENT value programmatically

    - by Tim
    So this works:

        ALTER TABLE variation AUTO_INCREMENT = 10;

    But I want to do this:

        ALTER TABLE variation AUTO_INCREMENT = (SELECT MAX(id)+1 FROM old_db.variation);

    That doesn't work, and neither does:

        SELECT MAX(id)+1 INTO @old_auto_inc FROM old_db.variation;
        ALTER TABLE variation AUTO_INCREMENT = @old_auto_inc;

    So does anyone know how to do this? (I'm trying to ensure that AUTO_INCREMENT keys don't collide between an old and a new site, and I need to do this automatically, so I can just run a script when the new db goes live.)
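
    A sketch of the usual workaround: ALTER TABLE accepts only a literal there, but a prepared statement built from the variable gets around that restriction.

        SELECT MAX(id) + 1 INTO @next FROM old_db.variation;
        SET @sql = CONCAT('ALTER TABLE variation AUTO_INCREMENT = ', @next);
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;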

    Read the article

  • SQL most popular

    - by Brae
    I have a MySQL table with items in relation to their order:

        CREATE DATABASE IF NOT EXISTS `sqltest`;
        USE `sqltest`;
        DROP TABLE IF EXISTS `testdata`;
        CREATE TABLE `testdata` (
            `orderID` varchar(10) DEFAULT NULL,
            `itemID` varchar(10) DEFAULT NULL,
            `qtyOrdered` int(10) DEFAULT NULL,
            `sellingPrice` decimal(10,2) DEFAULT NULL
        );

        INSERT INTO `testdata` (`orderID`,`itemID`,`qtyOrdered`,`sellingPrice`) VALUES
            ('1','a',1,'7.00'),('1','b',2,'8.00'),('1','c',3,'3.00'),
            ('2','a',1,'7.00'),('2','c',4,'3.00');

    Intended result: A = (1+1) 2; B = 2; C = (2+4) 6 <- most popular. How do I add up all the quantities for each item and return the highest one? It should be fairly straightforward, but I'm new to SQL and I can't work this one out. The solution needs to be MySQL and/or PHP. I guess there needs to be some sort of temporary tally variable for each item ID, but that seems like it could get messy with too many items.
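
    A minimal sketch against the schema above - GROUP BY does the per-item tallying, so no temporary variables are needed:

        SELECT itemID, SUM(qtyOrdered) AS total_qty
        FROM testdata
        GROUP BY itemID
        ORDER BY total_qty DESC
        LIMIT 1;  -- drop the LIMIT to see every item's total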

    Read the article

  • Where to store users' visited pages?

    - by kofto4ka
    Hi there. I have a project where I have posts, for example. The task is this: I must show each user his last visits to posts. My solution: every time a user visits a topic that is new to him, I create a new record in the visits table. The visits table has the following structure: id, user_id, post_id, last_visit. Now my visits table has ~14,000,000 records, and it's still growing every day. Maybe my solution isn't optimal and there is another way to store users' visits? It's important to save every visit as a standalone record, because I also have a feature that selects and uses a user's visits, and I can't purge this table, because the data could be needed a month or a year later. How could I optimize this situation?
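
    A sketch of the usual upsert for maintaining such a table (assuming a unique key over the pair), which keeps it at one row per user/post and updates last_visit in place on repeat visits:

        ALTER TABLE visits ADD UNIQUE KEY uq_user_post (user_id, post_id);
        INSERT INTO visits (user_id, post_id, last_visit)
        VALUES (?, ?, NOW())
        ON DUPLICATE KEY UPDATE last_visit = NOW();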

    Read the article

  • Working with sets of rows in (My)SQL and comparing values

    - by Pep.
    Hello, I am trying to figure out the SQL for doing some relatively simple operations on sets of records in a table, but I am stuck. Consider a table with multiple rows per item, all identified by a common key. For example:

        serial | model | color
        XX1    | A     | blue
        XX2    | A     | blue
        XX3    | A     | green
        XX5    | B     | red
        XX6    | B     | blue
        XX1    | B     | blue

    What I would, for example, want to do is: 1. assuming that all model A rows must have the same color, find the rows which don't (for example, XX3 is green); 2. assuming that a given serial number can only point to a single model, find the rows where that does not hold (for example, XX1 points to both A and B). These are all logically simple things to do. To abstract it: I want to know how to group rows by a single key (or a combination of keys) and then compare the values of those records. Should I use a self-join on the same table? Should I use some sort of array or similar? Thanks for your help.
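
    A sketch of both checks with GROUP BY ... HAVING (assuming the table is named t): group on the key and count distinct values to flag the inconsistent groups - no self-join needed.

        -- 1. models that appear with more than one color:
        SELECT model FROM t GROUP BY model HAVING COUNT(DISTINCT color) > 1;
        -- 2. serials that point to more than one model:
        SELECT serial FROM t GROUP BY serial HAVING COUNT(DISTINCT model) > 1;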

    Read the article

  • CakePHP: Missing database table

    - by Justin
    I have a CakePHP application that is running fine locally. I uploaded it to a production server, and the first page that uses a database connection gives the "Missing Database Table" error. When I look at the controller dump, it's complaining about the first table. I've tried a variety of things to fix this problem, with no luck:
    - I've confirmed that at the command line I can log in with the MySQL credentials given in database.php
    - I've confirmed this table exists
    - I've tried using the MySQL root credentials (temporarily) to see if the problem lies with the permissions of the user; the same error appeared
    - My debug level is currently set to 3
    - I've deleted the entire contents of /app/tmp/cache
    - I've set 777 permissions on /app/tmp*
    - I've confirmed that I can run DESCRIBE commands at the MySQL command line when logged in with the MySQL credentials used by the application
    - I've verified that the CakePHP log file only contains the error I'm seeing in the browser window
    - I've tried all the suggestions I could find in similar postings on SO
    - I've Googled around and didn't find any other ideas
    I think I've eliminated the obvious problems, and my research isn't turning anything up. I feel like I'm missing something obvious. Any ideas?

    Read the article

  • How to return result set based on other rows

    - by understack
    I've 2 tables - packages and items. The items table contains all items belonging to the packages, along with location information, like this:

        Packages table:
        id | name  | type (enum{general,special})
        1  | name1 | general
        2  | name2 | special

        Items table:
        id | package_id | location
        1  | 1          | America
        2  | 1          | Africa
        3  | 1          | Europe
        4  | 2          | Europe

    Question: I want to find all 'special' packages belonging to a location; if no special package is found, the query should instead return the 'general' packages belonging to the same location. So: for 'Europe', package 2 should be returned, since it is a special package (package 1 also belongs to Europe, but is not wanted, since it's a general package); for 'America', package 1 should be returned, since there are no special packages there.
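
    One hedged sketch of such a fallback query, using NOT EXISTS to detect the no-special-package case (table and column names as above):

        SELECT DISTINCT p.*
        FROM packages p
        JOIN items i ON i.package_id = p.id
        WHERE i.location = 'Europe'
          AND (p.type = 'special'
               OR NOT EXISTS (SELECT 1
                              FROM packages p2
                              JOIN items i2 ON i2.package_id = p2.id
                              WHERE i2.location = 'Europe'
                                AND p2.type = 'special'));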

    Read the article

  • PHP from database and query

    - by Kyle R
    I have a table: id, affiliate. Each time somebody clicks a link, a new row is inserted, with id being the ID of the page and affiliate being the ID of the affiliate. For example:

        Page ID: 9  Affiliate ID: 1
        Page ID: 9  Affiliate ID: 2
        Page ID: 9  Affiliate ID: 3

    I only have 3 affiliates. I want to select this information grouped by affiliate, for a given page ID. I have tried this query:

        SELECT COUNT(*) FROM table WHERE id = '9' GROUP BY affiliate

    It works fine when I run it in phpMyAdmin. How do I get the info in PHP? I have tried:

        $q = mysql_query("SELECT COUNT(*) FROM table WHERE id = '" . $id . "' GROUP BY affiliate");
        $r = mysql_fetch_array($q);

    When trying to print the data onto the page, I am only getting one result. Do I need to use a foreach/while loop to get all 3? How would I go about doing this? Thank you!
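
    A minimal sketch: GROUP BY yields one row per affiliate and each mysql_fetch call returns a single row, so loop until it returns false - selecting the affiliate id as well shows which count belongs to whom.

        $q = mysql_query("SELECT affiliate, COUNT(*) AS clicks
                          FROM table
                          WHERE id = '" . (int)$id . "'
                          GROUP BY affiliate");
        while ($r = mysql_fetch_assoc($q)) {
            echo "Affiliate " . $r['affiliate'] . ": " . $r['clicks'] . " clicks<br>";
        }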

    Read the article

  • Selecting only the entries that have a distinct combination of values?

    - by Theodore E O'Neal
    I have a table, links1, with the column headers CardID and AbilityID, that looks like this:

        CardID | AbilityID
        1001   | 1
        1001   | 2
        1001   | 3
        1002   | 2
        1002   | 3
        1002   | 4
        1003   | 3
        1003   | 4
        1003   | 5

    What I want is to be able to return all the CardIDs that have two specific AbilityIDs. For example: if I choose 1 and 2, it returns 1001; if I choose 3 and 4, it returns 1002 and 1003. Is it possible to do this with only one table, or will I need to create an identical table and do an INNER JOIN on those?
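
    A sketch that needs no second table, assuming each (CardID, AbilityID) pair appears at most once: filter to the two chosen abilities, then keep only the cards that matched both.

        SELECT CardID
        FROM links1
        WHERE AbilityID IN (1, 2)   -- the two chosen AbilityIDs
        GROUP BY CardID
        HAVING COUNT(DISTINCT AbilityID) = 2;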

    Read the article

  • Home ADSL Modem Dropping Packets?

    - by Cody
    I know this is supposed to be a "pro" forum, but I'm hoping someone can help, since my ISP isn't doing much to try and fix things. My ISP has given me a DSL modem / router combo - an ADB / Pirelli P.DG A2100N - and I have a 4096 / 767 kbps connection. I use it purely as a modem and router, and have the wireless AP feature turned off. I run it to a Ubiquiti Networks ToughSwitch and use a Ubiquiti UAP as the wireless access point - although I've run tests directly wired to the router with nothing else connected and still see the same issues. I've been having issues where latency to google.com suddenly spikes from 8ms to 250+ ms if someone does anything on the internet. If I run a speedtest or something, I can see latencies above 3000ms. Regularly when downloading something, even with the speed throttled, I get random drops to 0kbps every few seconds. Online gaming is impossible because I notice the sudden lag-outs in the connection, and video streams or VoIP drop out as well - it's not at all consistent. I managed to find the password to my modem, and I don't think I see anything wrong with the settings - but I looked for the logs and found this:

        Jun 6 17:10:30 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:30 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:31 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: __ratelimit: 63 callbacks suppressed
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:22 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:23 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:24 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:24 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:24 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:24 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:24 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:25 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:25 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:25 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:29 user warn kernel: __ratelimit: 15 callbacks suppressed
        Jun 6 17:11:29 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:29 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:30 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:30 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:55:26 user warn kernel: bcmxtmcfg: OAM loopback response not received on VCC 1.1.3
        Jun 6 17:55:27 user warn kernel: bcmxtmcfg: OAM loopback response not received on VCC 1.1.4

    So, as I understand it, it appears the router is dropping packets? If that's the case, is there anything in the config that I can change? Or should I buy a new router, a new modem, or both?
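
    Those messages mean the router's connection-tracking table filled up, and packets for new connections are dropped until old entries expire. If the modem's firmware offers shell access, something like this is worth checking (a sketch; these are the standard Linux paths and may differ on this device):

        cat /proc/sys/net/netfilter/nf_conntrack_count  # entries currently in use
        cat /proc/sys/net/netfilter/nf_conntrack_max    # the limit being hit
        # raising the limit, if RAM allows:
        echo 16384 > /proc/sys/net/netfilter/nf_conntrack_max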

    Read the article

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios: a simple multi-AZ read/write deployment vs. a multi-AZ MySQL master (hot standby) with additional read-only slaves. The issue I'm trying to cost-optimize involves reserved instances vs. on-demand instances. Situation 1: purchase a reserved multi-AZ setup with an extra-large high-memory (17GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all the resources of the 17GB RAM all the time - and therefore especially not a hot standby - what savings could a better topology create, like situation 2 below? Situation 2: purchase a reserved multi-AZ setup using smaller master instances than above for the master-master hot standby, which receives the writes only. Then create and load-balance several read-only slaves off the master, and add/remove and/or scale the read slaves up/down based on demand. This might only cost $1000 plus the on-demand usage of the read slaves. My thinking is: with a variable read-intensive application load and a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology when I don't need them there. My hope is that situation 2 can yield cost savings from smaller reserved instances at the master-master level, allowing me to scale up/down and/or out at the read level according to demand as needed. Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice always welcome, of course. Thanks in advance, R

    Read the article
