Search Results

Search found 18661 results on 747 pages for 'linq to mysql'.


  • Webserver horribly slow, sometimes incredibly fast

    - by dhanke
    i am running a small community ( 6000+ Members ) on a non-virtual 64-bit ubuntu 11.04 system. I am not a Linux-pro, not even advanced, i just tried to setup a webserver, which does nothing special actually. Delivering some dynamic PHP and RoR websites is its task. So it might be that my configuration files do look horrible bad. Also, i might use the wrong vocabulary, so in doubt, please ask. Having a current all-time record of 520 registered users (board-accounts, no system-users) online at same time, average server-load is about 2.0 - 5.0. Meantime (~250 users) average server load value is at about 0.4 - 0.8, sometimes, on some expensive searches a bit higher. everything fine. From time to time however, the load increases up to 120 (120.0, not 12.0 ;) ). In this time, its hard to even connect via SSH, but when i reach the server, and use top/htop/iotop to see whats happening, i cannot identify any process causing high CPU load. iotop tells me about a current reading/writing speed of about approx. 70kb/s, which is quite equal to power-off i think. Memory-Usage is max. at ~ 12GB of 16GB, so swap remains empty. now the odd (at least for me:) waiting some minutes ( since i always get a bit into a panic when this happens, it feels like 5 minutes, but i suppose its more like 20-30 minutes) and the server is back to normal. everything continues as normal. another odd fact: when i run hdparm -tT /dev/sda, i get answer like: /dev/sda: Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec when i run the same command while the server is "frozen", the answer is like /dev/sda: <- takes about 5 minutes until this line appears Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec <- 5 more minutes Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec <- another 5 minutes so the values are the same, but the quoted time is completely wrong. using time command as prefix also tells me that ~ 15 minutes were used. I searched in dmesg, /var/log/[messages|syslog] - nothing found. /var/log/errors however tells me that: Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds. Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 /proc/sys/kernel/hung_task_timeout_secs" disables this message. multiple times. now that message does tell me that php5-fpm task was blocked or did block ? - but not if that is the cause or just one of the results of that "freeze". Anyone? to cut the long story short, i dont know where even to start analyzing. So if you can give me any advice by looking at following specs and configs, or ask me to provide more information, i`d be glad. Specs: 6 Core AMD Phenom(tm) II X6 1055T Processor * 16 Gigabyte Ram 2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3 via SoftwareRaid (i suppose) Services: (due to service --status-all, those with [ + ]) nginx Webserver 1.0.14 mySQL 5.1.63 Server Ruby on Rails 2.3.11 ( passenger-nginx-module ) php5-fpm 5.3.6-13ubuntu3.7 SSH ido2db Further services: default crontab + nightly backup. syslog-ng Website consists of 2 subdomains, forum. and www. where forum is a phpBB3.x PHP-Board, and www a Ruby on Rails 2.3.11 application (portal). Mini-Note: sometimes i notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same Database, but the portal is using it read-only. 
The Webserver is nginx, using phusion passenger module to communicate with the ruby-application. Also, for the forum it communicates with php5-fpm via socket: relevant nginx configuration parts ( with comments/questions starting by ; ) ; in case of freeze due to too high Filesystem activity, maybe adding a limit? #worker_rlimit_nofile 50000; user www-data; ; 6 cores, so i read 6 fits. maybe already wrong? worker_processes 6; pid /var/run/nginx.pid; events { worker_connections 1024; } http { passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11; passenger_ruby /usr/bin/ruby1.8; ; the forum once featured a chat, which was working w/o websockets. ; so it was a hell of pull requests (deactivated now, freeze still happening) keepalive_timeout 65; keepalive_requests 50; gzip on; server { listen 80; server_name www.domain.tld; root /var/www/domain/rails/public; passenger_enabled on; } server { listen 80; server_name forum.domain.tld; location / { root /var/www/domain/forum; index index.php; } ; satic stuff to be handled by nginx location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; root /var/www/domain/forum/; } ; now the php magic, note the "backend"-fcgi_pass location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; } location ~ /\.ht { deny all; } } ;the php5-fpm socket. i read that /dev/shm/ whould be the fastes place for this. bad idea in general? upstream backend { server unix:/dev/shm/phpfpm; } ... } php5-fpm settings (i changed this values due to php5-fpm error log messages higher and higher.. (freeze-problem was there before as well)* listen = /dev/shm/phpfpm user = www-data group = www-data pm = dynamic ; holy, 4000! well, shinking this value to earth-level gave me ; 100s of 502 bad gateway commands. this values were quite stable. ; since there are only max 520 users online i dont get it, why i would need ; as many children as configured here. due to keep-alive maybe? ; asking questions is easier for me since restarting server will make ; my community-members angry ;) pm.max_children = 4000 pm.start_servers = 100 pm.min_spare_servers = 50 pm.max_spare_servers = 150 pm.max_requests = 10 pm.status_path = /status ping.path = /ping ping.response = pong slowlog = log/$pool.log.slow ;should i use rlimit? ;rlimit_files = 1024 chdir = / mysql/my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = 127.0.0.1 key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP ; high number, but less gives some phpBB errors. max_connections = 450 table_cache = 512 ; i read twice the cpu cores, bad? 
thread_concurrency = 12 join_buffer_size = 2084K concurrent_insert = 3 query_cache_limit = 64M query_cache_size = 512M query_cache_type = 1 log_error = /var/log/mysql/error.log log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 expire_logs_days = 10 max_binlog_size = 100M low_priority_updates=1 [mysqldump] quick quote-names max_allowed_packet = 16M [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/ I used smartctl already, hdds seem to be fine. /proc/mdstatus quotes: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sda3[1] 1459264192 blocks [2/1] [_U] md1 : active raid1 sda1[0] 3911680 blocks [2/1] [U_] unused devices: ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 127727 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 127727 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I quote some questions in my configuration files, these are not (intentional) directly problem-related, but would be nice for me to know wether they are indeed questionable or done right. One additional Fact: my MYSQL-database is at 12GB size. i dont know if that does matter, but mytop sometimes shows me 4-5 seconds long insert queries, some are 20-30 seconds long. Its just a feeling that i am unable to prove (because i dont know how), but when i disable the database, the freeze seems not to happen. Example: i created a dummy rails application to see the development log. the app made some sql-queries, reads and inserts. the log quite often was like: DbTest Load (0.3ms) SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1 SQL (0.1ms) BEGIN DbTest Update (0.3ms) UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722 - now the log stands still for 5-60 seconds. SQL (49.1ms) COMMIT - SQL-Update time in the log does not include freeze time Rendering test/index Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test] Bad part is: this mini-freeze here only happens from time to time as well. note: meanwhile i cannot even upload files via scp. I currently feel like running form bad to worse and back by googling for my server-problem due to immense lack of knowledge regarding server configurations. It still makes me wonder, why those problems even appear, since 250 users a time is not such a high amount, right? So my questions: whats wrong and how to fix? ;) or: what information can i provide to make the situation more clear? can you point at some critical bad configuration-line which i should consider to catch up in the documentation? are there any tools i can run to see some possible bottlenecks? any further advice? (next to: "pay someone who knows what he does" - its a private project, server costs enough already. :)) Thanks for your time and help. Best Regards, Daniel P.S.: i renamed the configfiles to domain.tld since i dont want to have any % more load to the server until its fixed. might be a exaggeratedly thought.. P.P.S: if i asked a complete duplicate question, sorry. my search results seemed to be quite specific in their own way.
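
    A hedged suggestion for narrowing this down (not a diagnosis of this particular box): while a load spike is happening, capture what MySQL itself is doing, since the kernel message above shows php5-fpm workers blocked and the Rails log stalls on COMMIT. These are standard MySQL commands:

        SHOW FULL PROCESSLIST;                 -- every connection and the statement it is currently running
        SHOW ENGINE INNODB STATUS\G            -- pending I/O, lock waits, longest-running transactions
        SHOW GLOBAL STATUS LIKE 'Threads_%';   -- Threads_running spiking while Threads_connected stays flat points at stuck queries

    If the statements pile up on COMMIT or on writes, that points at the storage layer rather than PHP; the /proc/mdstat output quoted above shows both RAID1 arrays running degraded ([_U] and [U_]), which seems worth ruling out first.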


  • Juju stuck in "pending" state when using LXC

    - by Andre
    So I'm trying to get started with Juju, and tried to do this locally using LXC. I followed the instructions here: How do I configure juju for local usage? Unfortunately this doesn't seem to work for me. status shows the following: $ juju status machines: 0: agent-state: running dns-name: localhost instance-id: local instance-state: running services: mysql: charm: cs:precise/mysql-1 relations: db: - wordpress units: mysql/0: agent-state: pending machine: 0 public-address: null wordpress: charm: cs:precise/wordpress-0 exposed: true relations: db: - mysql units: wordpress/0: agent-state: pending machine: 0 open-ports: [] public-address: null 2012-05-10 14:09:38,155 INFO 'status' command finished successfully As you can see the agent-state is 'pending' and there is no public address where I'm able to access the newly created site. Am I missing something here? UPDATE: Tried destroying the environment an doing everything again (multiple times). This is the output for debug-log: ~$ juju debug-log 2012-05-11 08:50:23,790 INFO Enabling distributed debug log. 2012-05-11 08:50:23,806 INFO Tailing logs - Ctrl-C to stop. 2012-05-11 08:50:42,338 Machine:0: juju.agents.machine DEBUG: Units changed old:set([]) new:set(['mysql/0']) 2012-05-11 08:50:42,339 Machine:0: juju.agents.machine DEBUG: Starting service unit: mysql/0 ... 2012-05-11 08:50:42,459 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/mysql-1 to /home/andre/.juju/data/andre-local/charms 2012-05-11 08:50:42,620 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c54b6c> for mysql/0 in /home/andre/.juju/data/andre-local 2012-05-11 08:50:42,648 Machine:0: unit.deploy DEBUG: Starting service unit mysql/0... 2012-05-11 08:50:42,649 Machine:0: unit.deploy DEBUG: Creating master container... 2012-05-11 08:54:33,992 Machine:0: unit.deploy DEBUG: Created master container andre-local-0-template 2012-05-11 08:54:33,993 Machine:0: unit.deploy INFO: Creating container mysql-0... 2012-05-11 08:56:18,760 Machine:0: unit.deploy INFO: Container created for mysql/0 2012-05-11 08:56:19,466 Machine:0: unit.deploy DEBUG: Charm extracted into container 2012-05-11 08:56:19,569 Machine:0: unit.deploy DEBUG: Starting container... 2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started container for mysql/0 2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started service unit mysql/0 2012-05-11 08:56:23,012 Machine:0: juju.agents.machine DEBUG: Units changed old:set(['mysql/0']) new:set(['wordpress/0', 'mysql/0']) 2012-05-11 08:56:23,039 Machine:0: juju.agents.machine DEBUG: Starting service unit: wordpress/0 ... 2012-05-11 08:56:23,154 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/wordpress-0 to /home/andre/.juju/data/andre-local/charms 2012-05-11 08:56:23,396 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c519cc> for wordpress/0 in /home/andre/.juju/data/andre-local 2012-05-11 08:56:23,620 Machine:0: unit.deploy DEBUG: Starting service unit wordpress/0... 2012-05-11 08:56:23,621 Machine:0: unit.deploy INFO: Creating container wordpress-0... 2012-05-11 08:58:24,739 Machine:0: unit.deploy INFO: Container created for wordpress/0 2012-05-11 08:58:25,163 Machine:0: unit.deploy DEBUG: Charm extracted into container 2012-05-11 08:58:25,397 Machine:0: unit.deploy DEBUG: Starting container... 
2012-05-11 08:58:27,982 Machine:0: unit.deploy INFO: Started container for wordpress/0 2012-05-11 08:58:27,983 Machine:0: unit.deploy INFO: Started service unit wordpress/0 This is the result for the status command (with verbose flag): ~$ juju -v status 2012-05-11 08:51:53,464 DEBUG Initializing juju status runtime 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@662: Client environment:host.name=andre-ufo 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@669: Client environment:os.name=Linux 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-24-generic-pae 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@671: Client environment:os.version=#37-Ubuntu SMP Wed Apr 25 10:47:59 UTC 2012 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@679: Client environment:user.name=andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@687: Client environment:user.home=/home/andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@699: Client environment:user.dir=/home/andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=192.168.122.1:41779 sessionTimeout=10000 watcher=0xb7780620 sessionId=0 sessionPasswd=<null> context=0x9242ee8 flags=0 2012-05-11 08:51:53,627:4030(0xb6b90b40):ZOO_INFO@check_events@1585: initiated connection to server [192.168.122.1:41779] 2012-05-11 08:51:53,649:4030(0xb6b90b40):ZOO_INFO@check_events@1632: session establishment complete on server [192.168.122.1:41779], sessionId=0x1373ae057d90007, negotiated timeout=10000 2012-05-11 08:51:53,651 DEBUG Environment is initialized. machines: 0: agent-state: running dns-name: localhost instance-id: local instance-state: running services: mysql: charm: cs:precise/mysql-1 relations: db: - wordpress units: mysql/0: agent-state: pending machine: 0 public-address: null wordpress: charm: cs:precise/wordpress-0 relations: db: - mysql units: wordpress/0: agent-state: pending machine: 0 public-address: null


  • How do I break down MySQL query results into categories, each with a specific number of rows?

    - by Mel
    Hello. Problem: I want to list n games from each genre (order is not important). The following MySQL query resides inside a ColdFusion function. It is meant to list all games under a platform (for example, all PS3 games, all Xbox 360 games, etc.); the PlatformID is passed through the URL. I have 9 genres, and I would like to list 10 games from each genre.

        SELECT games.GameID AS GameID, games.GameReleaseDate AS rDate, titles.TitleName AS tName,
               titles.TitleShortDescription AS sDesc, genres.GenreName AS gName, platforms.PlatformID,
               platforms.PlatformName AS pName, platforms.PlatformAbbreviation AS pAbbr
        FROM (((games join titles on((games.TitleID = titles.TitleID)))
               join genres on((genres.GenreID = games.GenreID)))
               join platforms on((platforms.PlatformID = games.PlatformID)))
        WHERE (games.PlatformID = '#ARGUMENTS.PlatformID#')
        ORDER BY GenreName ASC, GameReleaseDate DESC

    Once the query results come back I group them in ColdFusion as follows:

        <cfoutput query="ListGames" group="gName">  (first loop, which lists genres)
            #ListGames.gName#
            <cfoutput>  (nested loop, which lists games)
                #ListGames.tName#
            </cfoutput>
        </cfoutput>

    The problem is that I only want 10 games from each genre to be listed. If I place a "limit" of 50 in the SQL, I will get roughly 50 games of the same genre (depending on how many games of that genre there are). The second issue is that I don't want the overhead of querying the database for all games when each person will only look at a few. What is the correct way to do this? Many thanks!
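
    A hedged sketch of capping the results at 10 games per genre inside the query itself. It assumes MySQL 8.0+ (window functions) and reuses the table and column names from the post:

        SELECT *
        FROM (
            SELECT games.GameID, games.GameReleaseDate, titles.TitleName,
                   genres.GenreName,
                   ROW_NUMBER() OVER (PARTITION BY genres.GenreID
                                      ORDER BY games.GameReleaseDate DESC) AS rn
            FROM games
            JOIN titles    ON titles.TitleID       = games.TitleID
            JOIN genres    ON genres.GenreID       = games.GenreID
            JOIN platforms ON platforms.PlatformID = games.PlatformID
            WHERE games.PlatformID = '#ARGUMENTS.PlatformID#'
        ) AS ranked
        WHERE rn <= 10
        ORDER BY GenreName ASC, GameReleaseDate DESC;

    On older MySQL versions the same effect can be emulated with user variables that reset a counter whenever the genre changes.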


  • How do I Handle Ties When Ranking Results in MySQL?

    - by Laxmidi
    Hi, how does one handle ties when ranking results in a MySQL query? I've simplified the table names and columns in this example, but it should illustrate my problem:

        SET @rank = 0;
        SELECT student_names.students, @rank := @rank + 1 AS rank, scores.grades
        FROM student_names
        LEFT JOIN scores ON student_names.students = scores.students
        ORDER BY scores.grades DESC

    So imagine the above query produces:

        Students   Rank   Grades
        Al         1      90
        Amy        2      90
        George     3      78
        Bob        4      73
        Mary       5      NULL
        William    6      NULL

    Even though Al and Amy have the same grade, one is ranked higher than the other. Amy got ripped off. How can I make it so that Amy and Al have the same ranking, so that they both have a rank of 1? Also, William and Mary didn't take the test. They bagged class and were smoking in the boys' room. They should be tied for last place. The correct ranking should be:

        Students   Rank   Grades
        Al         1      90
        Amy        1      90
        George     3      78
        Bob        4      73
        Mary       5      NULL
        William    5      NULL

    If anyone has any advice, please let me know. Thank you! -Laxmidi
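
    A hedged sketch, assuming MySQL 8.0 or later is available: the standard RANK() window function produces exactly this tie behaviour (equal grades share a rank, the next distinct grade skips ahead), and with a descending sort the NULL grades land together at the bottom:

        SELECT student_names.students,
               RANK() OVER (ORDER BY scores.grades DESC) AS `rank`,   -- RANK is reserved in 8.0, hence the backticks
               scores.grades
        FROM student_names
        LEFT JOIN scores ON student_names.students = scores.students
        ORDER BY scores.grades DESC;

    On 5.x the same ranking can be emulated with user variables that only advance the rank counter when the grade changes.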


  • Can I make this select follower/following script more organized? (PHP/MySQL)

    - by ggfan
    In this script, it gets the followers of a user and the people the user is following. Is there a better relationship/database structure to get a user's followers and who they are following? All I have right now is 2 columns to determine their relationship. I feel this is "too simple"? (MYSQL) USER | FRIEND avian gary cend gary gary avian mike gary (PHP) $followers = array(); $followings = array(); $view = $_SESSION['user']; //view is the person logged in $query = "SELECT * FROM friends WHERE user='$view'"; $result = $db->query($query); while($row=$result->fetch_array()) { $follower=$row['friend']; $followers[] = $follower; } print_r($followers); echo "<br/>"; $query2 = "SELECT * FROM friends WHERE friend='$view'"; $result2 = $db->query($query2); while($row2=$result2->fetch_array()) { $following=$row2['user']; $followings[] = $following; } print_r($followings); echo "<br/>"; $mutual = array_intersect($followers, $followings); print_r($mutual); **DISPLAY** Your mutual friends avian Your followers avian You are following avian cen mike (I know avian is in all 3 displays, but I want to keep it that way)
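
    For what it's worth, the two-column table is a fairly standard way to model follows. A hedged sketch of getting the mutual list in SQL instead of with array_intersect(), keeping the same column semantics the script already uses (WHERE user = $view lists followers via the friend column, WHERE friend = $view lists who $view follows via the user column):

        SELECT f1.friend AS mutual_friend
        FROM friends AS f1
        JOIN friends AS f2
              ON  f2.user   = f1.friend
              AND f2.friend = f1.user
        WHERE f1.user = 'gary';

    With the sample rows above and 'gary' logged in, this returns just avian, matching the mutual-friends display, and it avoids pulling both full lists into PHP only to intersect them.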


  • Can you notice what's wrong with my PHP or MYSQL code?

    - by Jenna
    I am trying to create a category menu with sub categories. I have the following MySQL table: -- -- Table structure for table `categories` -- CREATE TABLE IF NOT EXISTS `categories` ( `ID` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(1000) NOT NULL, `slug` varchar(1000) NOT NULL, `parent` int(11) NOT NULL, `type` varchar(255) NOT NULL, PRIMARY KEY (`ID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=66 ; -- -- Dumping data for table `categories` -- INSERT INTO `categories` (`ID`, `name`, `slug`, `parent`, `type`) VALUES (63, 'Party', '/category/party/', 0, ''), (62, 'Kitchen', '/category/kitchen/', 61, 'sub'), (59, 'Animals', '/category/animals/', 0, ''), (64, 'Pets', '/category/pets/', 59, 'sub'), (61, 'Rooms', '/category/rooms/', 0, ''), (65, 'Zoo Creatures', '/category/zoo-creatures/', 59, 'sub'); And the following PHP: <?php include("connect.php"); echo "<ul>"; $query = mysql_query("SELECT * FROM categories"); while ($row = mysql_fetch_assoc($query)) { $catId = $row['id']; $catName = $row['name']; $catSlug = $row['slug']; $parent = $row['parent']; $type = $row['type']; if ($type == "sub") { $select = mysql_query("SELECT name FROM categories WHERE ID = $parent"); while ($row = mysql_fetch_assoc($select)) { $parentName = $row['name']; } echo "<li>$parentName >> $catName</li>"; } else if ($type == "") { echo "<li>$catName</li>"; } } echo "</ul>"; ?> Now Here's the Problem, It displays this: * Party * Rooms >> Kitchen * Animals * Animals >> Pets * Rooms * Animals >> Zoo Creatures I want it to display this: * Party * Rooms >> Kitchen * Animals >> Pets >> Zoo Creatures Is there something wrong with my loop? I just can't figure it out.
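
    A hedged alternative sketch: instead of running a second query for every row just to look up the parent's name, a self-join can fetch the parent name in one pass and keep children grouped under their parents, using the categories table from the post:

        SELECT child.ID,
               child.name,
               parent.name AS parent_name
        FROM categories AS child
        LEFT JOIN categories AS parent ON parent.ID = child.parent
        ORDER BY COALESCE(parent.name, child.name),   -- group each family together
                 parent.name IS NOT NULL,             -- parent row first within its family
                 child.name;

    In the loop, rows whose parent_name is NULL print as top-level items and the rest print as "parent >> child", which gives the Party / Rooms >> Kitchen / Animals >> Pets ordering described above.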


  • How does this Switch statement know which case to execute? (PHP/MySQL)

    - by ggfan
    Here is a code form a PHP+MySQL book I am reading and I am having trouble understanding this code. This code is about checking a file uploaded into a database. (Please excuse any spelling errors if any, I was typing it in) Q1: How does it know which case to echo? In the whole code, there is no mention of each case. Q2: Why do they skip case 5?! Or does it not matter which numbers you use(so I can have case 1, case 18, case 2?) if($_FILES['userfile']['error']>0) { echo 'Problem: '; switch($_FILES['userfile']['error']) { case 1: echo 'File exceeded upload_max_filesize'; break; case 2: echo 'File exceeded max_file_size'; break; case 3: echo 'File only partially uploaded'; break; case 4: echo 'No file uploaded'; break; case 6: echo 'Cannot upload file: no temp directory specified'; break; case 7: echo 'Upload failed: Cannot write to disk'; break; } exit; }


  • Is there a way to effect user defined data types in MySQL?

    - by Dancrumb
    I have a database which stores (among other things), the following pieces of information: Hardware IDs BIGINTs Storage Capacities BIGINTs Hardware Names VARCHARs World Wide Port Names VARCHARs I'd like to be able to capture a more refined definition of these datatypes. For instance, the hardware IDs have no numerical significance, so I don't care how they are formatted when displayed. The Storage Capacities, however, are cardinal numbers and, at a user's request, I'd like to present them with thousands and decimal separators, e.g. 123,456.789. Thus, I'd like to refine BIGINT into, say ID_NUMBER and CARDINAL. The same with Hardware Names, which are simple text and WWPNs, which are hexstrings, e.g. 24:68:AC:E0. Thus, I'd like to refine VARCHAR into ENGLISH_WORD and HEXSTRING. The specific datatypes I made up are just for illustrative purposes. I'd like to keep all this information in one place and I'm wondering if anybody knows of a good way to hold this all in my MySQL table definitions. I could use the Comment field of the table definition, but that smells fishy to me. One approach would be to define the data structure elsewhere and use that definition to generate my CREATE TABLEs, but that would be a major rework of the code that I currently have, so I'm looking for alternatives. Any suggestions? The application language in use is Perl, if that helps.
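
    MySQL has no CREATE DOMAIN or user-defined type support, so one low-tech convention (a sketch of an assumption, not a built-in feature; the table and column names here are made up for illustration) is to carry the refined type name in the column COMMENT, which lives in the table definition itself and can be read back from information_schema:

        ALTER TABLE hardware
            MODIFY hardware_id      BIGINT      COMMENT 'ID_NUMBER',
            MODIFY storage_capacity BIGINT      COMMENT 'CARDINAL',
            MODIFY wwpn             VARCHAR(64) COMMENT 'HEXSTRING';

        SELECT COLUMN_NAME, DATA_TYPE, COLUMN_COMMENT
        FROM information_schema.COLUMNS
        WHERE TABLE_NAME = 'hardware';

    The Perl layer could then read COLUMN_COMMENT once and pick the display formatting (thousands separators, hex grouping, plain text) per refined type, without a separate schema-definition file.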


  • Top 10 collection completion - a monster in-query formula in MySQL?

    - by Andrew Heath
    I've got the following tables: User Basic Data (unique) [userid] [name] [etc] User Collection (one to one) [userid] [game] User Recorded Plays (many to many) [userid] [game] [scenario] [etc] Game Basic Data (unique) [game] [total_scenarios] I would like to output a table that shows the collection play completion percentage for the Top 10 users in descending order of %: Output Table [userid] [collection_completion] 3 95% 1 81% 24 68% etc etc In my mind, the calculation sequence for ONE USER is: grab user's total owned scenarios from User Collection joined with Game Basic Data and COUNT(gbd.total_scenarios) grab all recorded plays by COUNT(DISTINCT scenario) for that user Divide all recorded plays by total owned scenarios So that's 2 queries and a little PHP massage at the end. For a list of users sorted by completion percentage things get a little more complicated. I figure I could grab all users' collection totals in one query, and all users recorded plays in another, and then do the calcs and sort the final array in PHP, but it seems like overkill to potentially be doing all that for 1000+ users when I only ever want the Top 10. Is there a wicked monster query in MySQL that could do all that and LIMIT 10? Or is sticking with PHP handling the bulk of the work the way to go in this case?
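
    A hedged attempt at the "monster query", assuming table names roughly as described: user_collection(userid, game), user_recorded_plays(userid, game, scenario) and game_basic_data(game, total_scenarios). The trick is to aggregate ownership and plays separately so the join does not multiply the scenario totals:

        SELECT owned.userid,
               ROUND(100 * COALESCE(played.scenarios_played, 0) / owned.scenarios_owned, 1)
                   AS collection_completion
        FROM (
            SELECT c.userid, SUM(g.total_scenarios) AS scenarios_owned
            FROM user_collection c
            JOIN game_basic_data g ON g.game = c.game
            GROUP BY c.userid
        ) AS owned
        LEFT JOIN (
            SELECT userid, COUNT(DISTINCT game, scenario) AS scenarios_played
            FROM user_recorded_plays
            GROUP BY userid
        ) AS played ON played.userid = owned.userid
        ORDER BY collection_completion DESC
        LIMIT 10;

    That keeps the Top 10 selection in MySQL, so PHP only ever touches ten rows.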



  • MySQL: prevent displaying row ONE, which is referenced by another row TWO, while still showing unreferenced row THREE

    - by Jayapal Chandran
    I have a table like the following:

        id | name     | pid
        1  | sam      | NULL
        2  | sams ref | 1
        3  | pam      | NULL

    The first row gets inserted with pid as NULL. Then I insert a row which is related to the first row, and then I insert a row which is new and which may be referred to by another row in the future. Now I want only the third row to be displayed, and not the first and second rows, since the second row contains a reference to the first row. So if any row has a reference to another row, then neither of those rows should be displayed; only rows without any reference should be displayed. Besides, is this a good practice? Please advise on this.

    Edited: When I ran it on the server, the query always gave an empty result (here is what I have, and this one). When pid is NULL that row should appear, but as soon as another entry in the same table has that row's id as its pid, both rows should not appear. So if any row's id has been referred to, both rows should be hidden; here one row will only ever be referred to by one other row, not more than that. On my localhost I have MySQL version 5.0.1 or something like that, but when I installed XAMPP on another system it had 5.5, and the live server has 5.3. So on versions around 5.0 the query returns rows, but on higher versions it returns empty results. In this case, how should I write the query?
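
    A hedged sketch with NOT EXISTS, assuming the table is called mytable and that a row counts as "referenced" when some other row's pid points at its id. It keeps only rows that neither reference nor are referenced by anything, which matches the example (only pam survives), and it behaves the same way on 5.0, 5.1 and 5.5:

        SELECT t.*
        FROM mytable AS t
        WHERE t.pid IS NULL                    -- the row itself references nothing
          AND NOT EXISTS (                     -- and nothing references it
              SELECT 1 FROM mytable AS c WHERE c.pid = t.id
          );

    As for whether the design is good practice: a single self-referencing pid column is a perfectly common way to store one-to-one references, as long as the "referenced" semantics stay this simple.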


  • Starting with versioning mysql schemata without overkill. Good solutions?

    - by tharkun
    I've arrived at the point where I realise that I must start versioning my database schemata and changes. I consequently read the existing posts on SO about that topic but I'm not sure how to proceed. I'm basically a one man company and not long ago I didn't even use version control for my code. I'm on a windows environment, using Aptana (IDE) and SVN (with Tortoise). I work on PHP/mysql projects. What's a efficient and sufficient (no overkill) way to version my database schemata? I do have a freelancer or two in some projects but I don't expect a lot of branching and merging going on. So basically I would like to keep track of concurrent schemata to my code revisions. [edit] Momentary solution: for the moment I decided I will just make a schema dump plus one with the necessary initial data whenever I'm going to commit a tag (stable version). That seems to be just enough for me at the current stage.[/edit] [edit2]plus I'm now also using a third file called increments.sql where I put all the changes with dates, etc. to make it easy to trace the change history in one file. from time to time I integrate the changes into the two other files and empty the increments.sql[/edit]


  • Record Disappeared from Mysql Table, How Can I Find Out What Happened?

    - by Jascha
    I got the fire-alarm phone call, AIM messages and email today from a client stating "The site is down! WTF happened?!" Well, after a little digging, it turns out one of the records in a table had been wiped clean, but without removing the row itself. So I had the representation of the data, but a bunch of empty fields. (Needless to say, I need to write a catch for this into my code.) My real question is: where can I figure out what happened? I've got access to phpMyAdmin and that's about it. I found some access logs in the root directory of my server, but those only tell me the client was in the admin area I built, editing that record. I'd like to know specifically what they did that made all of the data go away (what query was run, etc.). Is that possible without real server admin access? Is there a neat little PHP-to-MySQL class that returns data like this? Thanks in advance. -Jascha
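
    MySQL cannot reconstruct what already happened unless the binary log or a query log was enabled at the time, but going forward the general query log can be turned on from phpMyAdmin's SQL tab (a hedged sketch; the MySQL account does need the SUPER privilege to change these globals, even though no shell access is required):

        SET GLOBAL log_output  = 'TABLE';   -- write the log to mysql.general_log instead of a file
        SET GLOBAL general_log = 'ON';

        -- later, see exactly which statements each client ran, newest first
        SELECT event_time, user_host, argument
        FROM mysql.general_log
        ORDER BY event_time DESC
        LIMIT 100;

    An application-side alternative is to have the admin area itself write every UPDATE it issues to an audit table, which also survives without any server privileges.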


  • Fastest way to modify a decimal-keyed table in MySQL?

    - by javanix
    I am dealing with a MySQL table here that is keyed in a somewhat unfortunate way. Instead of using an auto increment table as a key, it uses a column of decimals to preserve order (presumably so its not too difficult to insert new rows while preserving a primary key and order). Before I go through and redo this table to something more sane, I need to figure out how to rekey it without breaking everything. What I would like to do is something that takes a list of doubles (the current keys) and outputs a list of integers (which can be cast down to doubles for rekeying). For example, input {1.00, 2.00, 2.50, 2.60, 3.00} would give output {1, 2, 3, 4, 5). Since this is a database, I also need to be able to update the rows nicely: UPDATE table SET `key`='3.00' WHERE `key`='2.50'; Can anyone think of a speedy algorithm to do this? My current thought is to read all of the doubles into a vector, take the size of the vector, and output a new vector with values from 1 => doubleVector.size. This seems pretty slow, since you wouldn't want to read every value into the vector if, for instance, only the last n/100 elements needed to be modified. I think there is probably something I can do in place, since only values after the first non-integer double need to be modified, but I can't for the life of me figure anything out that would let me update in place as well. For instance, setting 2.60 to 3.00 the first time you see 2.50 in the original key list would result in an error, since the key value 3.00 is already used for the table.
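
    If the goal is just to renumber the existing keys in order, a hedged sketch that avoids reading everything into a vector is to let MySQL walk the rows in key order and hand out consecutive integers with a user variable. Writing into a new column sidesteps exactly the collision problem described (2.50 becoming 3 while 3.00 still exists); this assumes the column really is named `key` and that no other table references it:

        ALTER TABLE mytable ADD COLUMN new_key BIGINT;

        SET @n := 0;
        UPDATE mytable
        SET    new_key = (@n := @n + 1)
        ORDER  BY `key`;                      -- walk the rows in the old decimal order

        -- once new_key has been verified:
        -- ALTER TABLE mytable DROP PRIMARY KEY, DROP COLUMN `key`,
        --       CHANGE new_key `key` BIGINT NOT NULL, ADD PRIMARY KEY (`key`);

    This touches every row once, but that is unavoidable if every gap before the last integer-valued key has to close up.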


  • How to compare 2 lists and merge them in Python/MySQL?

    - by NJTechGuy
    I want to merge data. Following are my MySQL tables. I want to use Python to traverse though a list of both Lists (one with dupe = 'x' and other with null dupes). For instance : a b c d e f key dupe -------------------- 1 d c f k l 1 x 2 g h j 1 3 i h u u 2 4 u r t 2 x From the above sample table, the desired output is : a b c d e f key dupe -------------------- 2 g c h k j 1 3 i r h u u 2 What I have so far : import string, os, sys import MySQLdb from EncryptedFile import EncryptedFile enc = EncryptedFile( os.getenv("HOME") + '/.py-encrypted-file') user = enc.getValue("user") pw = enc.getValue("pw") db = MySQLdb.connect(host="127.0.0.1", user=user, passwd=pw,db=user) cursor = db.cursor() cursor2 = db.cursor() cursor.execute("select * from delThisTable where dupe is null") cursor2.execute("select * from delThisTable where dupe is not null") result = cursor.fetchall() result2 = cursor2.fetchall() for cursorFieldname in cursor.description: for cursorFieldname2 in cursor2.description: if cursorFieldname[0] == cursorFieldname2[0]: ### How do I compare the record with same key value and update the original row null field value with the non-null value from the duplicate? Please fill this void... cursor.close() cursor2.close() db.close() Thanks guys!
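
    Depending on how much has to stay in Python, the merge itself can also be expressed as one multi-table UPDATE plus a DELETE. A hedged sketch against the delThisTable layout in the post (columns a-f, key, dupe); adjust the column list to whichever fields can be NULL:

        -- copy any value the original row is missing from its dupe = 'x' twin
        UPDATE delThisTable AS orig
        JOIN   delThisTable AS dup
               ON dup.`key` = orig.`key` AND dup.dupe = 'x'
        SET    orig.c = IFNULL(orig.c, dup.c),
               orig.d = IFNULL(orig.d, dup.d),
               orig.e = IFNULL(orig.e, dup.e),
               orig.f = IFNULL(orig.f, dup.f)
        WHERE  orig.dupe IS NULL;

        -- then drop the duplicate rows
        DELETE FROM delThisTable WHERE dupe = 'x';

    The Python script is then only needed to open the connection and run the two statements, rather than comparing cursor descriptions row by row.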


  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL, one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server) and another some couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them being, under MySQL master/slave configurations, every now and then, some statements drop (!), meaning; some people logging on to the mirror URL don't see data that I know is on the main server and vice versa. Let's say this happens on a meaningful block of data once every month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate). The other most important (and annoying) recurring issue is that, when for some reason we do a major upload or update (or reboot) on one end and have to sever the link, then LOAD DATA FROM MASTER doesn't work and I have to manually dump on one end and upload on the other, quite a task nowadays moving some .5 TB worth of data. Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). I am just wondering what people out there do. The way it's structured, we run an automatic failover where if one server is not up, then the main URL just resolves to the other server.
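
    For the re-seeding problem specifically: LOAD DATA FROM MASTER only ever worked reliably for MyISAM tables and was deprecated, so the usual (if tedious) pattern is a dump taken with the binlog coordinates, restored on the mirror, followed by pointing the slave at those coordinates. A hedged sketch of the SQL side (hostnames, file names and positions are placeholders; the dump would be made with something like mysqldump --single-transaction --master-data=2 --all-databases):

        STOP SLAVE;
        CHANGE MASTER TO
            MASTER_HOST     = 'main.example.com',
            MASTER_USER     = 'repl',
            MASTER_PASSWORD = '...',
            MASTER_LOG_FILE = 'mysql-bin.000123',   -- taken from the CHANGE MASTER comment in the dump
            MASTER_LOG_POS  = 4;
        START SLAVE;
        SHOW SLAVE STATUS\G                          -- Seconds_Behind_Master should start falling

    Silent drift between master and slave is usually worth detecting rather than tolerating; periodic table checksums on both ends will show which blocks of data actually differ.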


  • Load an html table from a mysql database when an onChange event is triggered from a select tag

    - by Crew Peace
    So, here's the deal. I have an html table that I want to populate. Specificaly the first row is the one that is filled with elements from a mysql database. To be exact, the table is a questionnaire about mobile phones. The first row is the header where the cellphone names are loaded from the database. There is also a select tag that has company names as options in it. I need to trigger an onChange event on the select tag to reload the page and refill the first row with the new names of mobiles from the company that is currently selected in the dropdown list. This is what my select almost looks like: <select name="select" class="companies" onChange="reloadPageWithNewElements()"> <?php $sql = "SELECT cname FROM companies;"; $rs = mysql_query($sql); while($row = mysql_fetch_array($rs)) { echo "<option value=\"".$row['cname']."\">".$row['cname']."</option>\n "; } ?> </select> So... is there a way to refresh this page with onChange and pass the selected value to the same page again and assign it in a new php variable so i can do the query i need to fill my table? <?php //$mobileCompanies = $_GET["selectedValue"]; $sql = "SELECT mname FROM ".$mobileCompanies.";"; $rs = mysql_query($sql); while ($row = mysql_fetch_array($rs)) { echo "<td><div class=\"q1\">".$row['mname']."</div></td>"; } ?> something like this. (The reloadPageWithNewElements() and selectedValue are just an idea for now)


  • Is there a simple way to "roll your own forms" for mysql in php, for example in jquery?

    - by talkingnews
    I've been googling around for a really simple way of making what is, in effect, nothing more than an enhanced phpMySql. In a mysql database, I have: Name, address, phone, website etc, plus 2 or 3 custom fields. This data is pulled out to make a website. All I want is to be able to make a freeform form, a bit like Access, but for the web, and the only thing I want to do over and above normal field editing would be to have a list of when I contact them, what was said, and perhaps a reminder when the next action is due. I've looked at so many CRMs my mind is boggling, and they all do WAY more than I need. I don't have leads or accounts, all I have is the need to make sure than when I update the person's details, and for that data to be in the same DB as my site is generate from. I'm happy to learn if I can get pointed in the right direction, and I have a feeling that something like what I want might lie in the direction of jquery. It's just that there's so much good jquery stuff about, I can't see the wood for the trees! Thanks.


  • How to store data in mysql, to get the fastest performance?

    - by Oden
    Hey, I'm trying to decide which of the following two designs would give me the fastest performance for a user messaging module inside my site.

    The first one I thought about is a multi-table setup, which has a connection table and a main table. The connection table holds the connection between accounts and the messaging table. In this case a query to get some data about the author and the messages he has sent would look like the following:

        SELECT m.*, a.username
        FROM messages AS m
        LEFT JOIN connection_table ON (message_id = m.id)
        LEFT JOIN accounts AS a ON (account_id = a.id)
        WHERE m.id = '32341'

    Inserting into it is a little bit more "complicated". My other idea, and in my mind the better solution to this problem, is to store the data I would otherwise put in a connection table in the same table where I store the mail itself. It sounds like I would get lots of duplicated entries, but no, because I have a TEXT field that holds user ids like this: *24*32*249*. If I want to query them, I use MySQL's LIKE operator. Deleting is another problem, but for this I have one more field where I store who has deleted the post; sadly I don't know how to join on this. So what would you recommend? Are there other ways?
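
    A hedged sketch of the junction-table version (the first idea), since the delimited *24*32*249* column cannot use an index and makes per-recipient deletes awkward; the names here are illustrative, not taken from an existing schema:

        CREATE TABLE messages (
            id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            sender_id  INT UNSIGNED NOT NULL,
            body       TEXT NOT NULL,
            created_at DATETIME NOT NULL
        );

        CREATE TABLE message_recipients (
            message_id   INT UNSIGNED NOT NULL,
            recipient_id INT UNSIGNED NOT NULL,
            deleted_at   DATETIME NULL,          -- per-recipient delete instead of a "who deleted it" list
            PRIMARY KEY (message_id, recipient_id),
            KEY idx_recipient (recipient_id)
        );

        -- one indexed join fetches a user's inbox together with the sender's name
        SELECT m.*, a.username
        FROM message_recipients r
        JOIN messages m ON m.id = r.message_id
        JOIN accounts a ON a.id = m.sender_id
        WHERE r.recipient_id = 42 AND r.deleted_at IS NULL;

    The LIKE-on-a-text-field approach forces a full scan of the messages table for every mailbox view, so it tends to get slower as the table grows even though it looks simpler to write.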


  • Beginner Question: For extracting a large subset of a table from MySQL, how do indexing and the order of tables matter?

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDMBSs tblA has 4 columns: colA, colB, colC, mydata, A_id It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has 3 columns: colA, colB, B_id It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely: SELECT a.colA, a.colB, a.colC, a.mydata FROM tblA as a INNER JOIN tblB as b ON a.colA=b.colA a.colB=b.colB ; It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, ubuntu), and I just want to check my understanding of the following optimization steps. ** Suppose this is the only query I will ever run on these tables. So ignore the need to run other queries. Now my questions: 1) What indexes should I create to optimize this query? I think I just need a multiple index on (colA, colB) for both tables. I don't think I need separate indexes for colA and colB. Another stack overflow article (that I can't find) mentioned that when adding new indexes, it is slower when there are existing indexes, so that might be a reason to use the multiple index. 2) Is INNER JOIN correct? I just want results where a match is found. 3) Is it faster if I join (tblA to tblB) or the other way around, (tblB to tblA)? This previous answer says that the optimizer should take care of that. 4) Does the order of the part after ON matter? This previous answer say that the optimizer also takes care of the execution order.
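
    A hedged sketch of point 1: with only this query to serve, a composite index on (colA, colB) on each side is the natural choice (the one on the small tblB mostly helps the optimizer pick tblB as the driving table), and EXPLAIN confirms whether they are actually used. Note the AND that is missing between the two join conditions in the original ON clause:

        ALTER TABLE tblB ADD INDEX idx_b_colA_colB (colA, colB);
        ALTER TABLE tblA ADD INDEX idx_a_colA_colB (colA, colB);

        EXPLAIN
        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM   tblA AS a
        INNER JOIN tblB AS b
               ON a.colA = b.colA AND a.colB = b.colB;

    INNER JOIN is the right join type for "only rows with a match", and with those indexes in place the optimizer is free to choose the join order and ON-clause evaluation order itself, as the linked answers say.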


  • Need to exclude results in a MySQL query where two table fields are not of certain values

    - by DondeEstaMiCulo
    I don't know if I'm just burnt out and can't think, or what... But I can't seem to make this work right... (We're using MySQL 5.1...) I have two tables which have some transactional stuff stored in them. There will be many records per user_id in each table. Table1 and Table2 have a one-to-one relationship with each other. I want to pull records from both tables, but I want to exclude records which have certain values in both tables. I don't care if they both don't have these values, or if just one does, but both tables should not have both values. (Does this make any sense? lol) For example: SELECT t1.id, t1.type, t2.name FROM table1 t1 INNER JOIN table2 t2 ON table.xid = table2.id WHERE t1.user_id = 100 AND (t1.type != 'FOO' AND t2.name != 'BAR') So t1.type is type ENUM with about 10 different options, and t2.name is also type ENUM with 2 options. My expected results would look a little like: 1, FOO, YUM 2, BOO, BAR 3, BOO, YUM But instead, all I'm getting is: 3, BOO, YUM Because it's filtering out all records which has 'FOO' as the type, and 'BAR' as the name. I keep waiting for that D'oh! moment where it hits me and I feel like an idiot for not realizing what I'm doing wrong. But it hasn't come. And I still feel like an idiot, lol. I appreciate any light any of you can shed on this! Many thanks in advance for the help!
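
    If the intent is "drop a row only when type is FOO and name is BAR at the same time", the condition has to negate the pair as a whole rather than each column separately (by De Morgan, ANDing the two != tests excludes every row where either value matches). A hedged rewrite keeping the rest of the query as posted:

        SELECT t1.id, t1.type, t2.name
        FROM table1 t1
        INNER JOIN table2 t2 ON t1.xid = t2.id
        WHERE t1.user_id = 100
          AND NOT (t1.type = 'FOO' AND t2.name = 'BAR');

    With the sample data this keeps (FOO, YUM), (BOO, BAR) and (BOO, YUM) and only removes rows that are FOO and BAR together.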


  • How do you convert a parent-child (adjacency) table to a nested set using PHP and MySQL?

    - by mrbinky3000
    I've spent the last few hours trying to find the solution to this question online. I've found plenty of examples on how to convert from nested set to adjacency... but few that go the other way around. The examples I have found either don't work or use MySQL procedures. Unfortunately, I can't use procedures for this project. I need a pure PHP solution. I have a table that uses the adjacency model below: id parent_id category 1 0 ROOT_NODE 2 1 Books 3 1 CD's 4 1 Magazines 5 2 Books/Hardcover 6 2 Books/Large Format 7 4 Magazines/Vintage And I would like to convert it to a Nested Set table below: id left right category 1 1 14 Root Node 2 2 7 Books 3 3 4 Books/Hardcover 4 5 6 Books/Large Format 5 8 9 CD's 6 10 13 Magazines 7 11 12 Magazines/Vintage Here is an image of what I need: I have a function, based on the pseudo code from this forum post (http://www.sitepoint.com/forums/showthread.php?t=320444) but it doesn't work. I get multiple rows that have the same value for left. This should not happen. <?php /** -- -- Table structure for table `adjacent_table` -- CREATE TABLE IF NOT EXISTS `adjacent_table` ( `id` int(11) NOT NULL AUTO_INCREMENT, `father_id` int(11) DEFAULT NULL, `category` varchar(128) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ; -- -- Dumping data for table `adjacent_table` -- INSERT INTO `adjacent_table` (`id`, `father_id`, `category`) VALUES (1, 0, 'ROOT'), (2, 1, 'Books'), (3, 1, 'CD''s'), (4, 1, 'Magazines'), (5, 2, 'Hard Cover'), (6, 2, 'Large Format'), (7, 4, 'Vintage'); -- -- Table structure for table `nested_table` -- CREATE TABLE IF NOT EXISTS `nested_table` ( `id` int(11) NOT NULL AUTO_INCREMENT, `lft` int(11) DEFAULT NULL, `rgt` int(11) DEFAULT NULL, `category` varchar(128) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ; */ mysql_connect('localhost','USER','PASSWORD') or die(mysql_error()); mysql_select_db('DATABASE') or die(mysql_error()); adjacent_to_nested(0); /** * adjacent_to_nested * * Reads a "adjacent model" table and converts it to a "Nested Set" table. * @param integer $i_id Should be the id of the "root node" in the adjacent table; * @param integer $i_left Should only be used on recursive calls. Holds the current value for lft */ function adjacent_to_nested($i_id, $i_left = 0) { // the right value of this node is the left value + 1 $i_right = $i_left + 1; // get all children of this node $a_children = get_source_children($i_id); foreach ($a_children as $a) { // recursive execution of this function for each child of this node // $i_right is the current right value, which is incremented by the // import_from_dc_link_category method $i_right = adjacent_to_nested($a['id'], $i_right); // insert stuff into the our new "Nested Sets" table $s_query = " INSERT INTO `nested_table` (`id`, `lft`, `rgt`, `category`) VALUES( NULL, '".$i_left."', '".$i_right."', '".mysql_real_escape_string($a['category'])."' ) "; if (!mysql_query($s_query)) { echo "<pre>$s_query</pre>\n"; throw new Exception(mysql_error()); } echo "<p>$s_query</p>\n"; // get the newly created row id $i_new_nested_id = mysql_insert_id(); } return $i_right + 1; } /** * get_source_children * * Examines the "adjacent" table and finds all the immediate children of a node * @param integer $i_id The unique id for a node in the adjacent_table table * @return array Returns an array of results or an empty array if no results. 
*/ function get_source_children($i_id) { $a_return = array(); $s_query = "SELECT * FROM `adjacent_table` WHERE `father_id` = '".$i_id."'"; if (!$i_result = mysql_query($s_query)) { echo "<pre>$s_query</pre>\n"; throw new Exception(mysql_error()); } if (mysql_num_rows($i_result) > 0) { while($a = mysql_fetch_assoc($i_result)) { $a_return[] = $a; } } return $a_return; } ?> This is the output of the above script. INSERT INTO nested_table (id, lft, rgt, category) VALUES( NULL, '2', '5', 'Hard Cover' ) INSERT INTO nested_table (id, lft, rgt, category) VALUES( NULL, '2', '7', 'Large Format' ) INSERT INTO nested_table (id, lft, rgt, category) VALUES( NULL, '1', '8', 'Books' ) INSERT INTO nested_table (id, lft, rgt, category) VALUES( NULL, '1', '10', 'CD\'s' ) INSERT INTO nested_table (id, lft, rgt, category) VALUES( NULL, '10', '13', 'Vintage' ) INSERT INTO nested_table (id, lft, rgt, category) VALUES( NULL, '1', '14', 'Magazines' ) INSERT INTO nested_table (id, lft, rgt, category) VALUES( NULL, '0', '15', 'ROOT' ) As you can see, there are multiple rows sharing the lft value of "1" same goes for "2" In a nested-set, the values for left and right must be unique. Here is an example of how to manually number the left and right ID's in a nested set: UPDATE - PROBLEM SOLVED First off, I had mistakenly believed that the source table (the one in adjacent-lists format) needed to be altered to include a source node. This is not the case. Secondly, I found a cached page on BING (of all places) with a class that does the trick. I've altered it for PHP5 and converted the original author's mysql related bits to basic PHP. He was using some DB class. You can convert them to your own database abstraction class later if you want. Obviously, if your "source table" has other columns that you want to move to the nested set table, you will have to adjust the write method in the class below. Hopefully this will save someone else from the same problems in the future. <?php /** -- -- Table structure for table `adjacent_table` -- DROP TABLE IF EXISTS `adjacent_table`; CREATE TABLE IF NOT EXISTS `adjacent_table` ( `id` int(11) NOT NULL AUTO_INCREMENT, `father_id` int(11) DEFAULT NULL, `category` varchar(128) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ; -- -- Dumping data for table `adjacent_table` -- INSERT INTO `adjacent_table` (`id`, `father_id`, `category`) VALUES (1, 0, 'Books'), (2, 0, 'CD''s'), (3, 0, 'Magazines'), (4, 1, 'Hard Cover'), (5, 1, 'Large Format'), (6, 3, 'Vintage'); -- -- Table structure for table `nested_table` -- DROP TABLE IF EXISTS `nested_table`; CREATE TABLE IF NOT EXISTS `nested_table` ( `lft` int(11) NOT NULL DEFAULT '0', `rgt` int(11) DEFAULT NULL, `id` int(11) DEFAULT NULL, `category` varchar(128) DEFAULT NULL, PRIMARY KEY (`lft`), UNIQUE KEY `id` (`id`), UNIQUE KEY `rgt` (`rgt`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; */ /** * @class tree_transformer * @author Paul Houle, Matthew Toledo * @created 2008-11-04 * @url http://gen5.info/q/2008/11/04/nested-sets-php-verb-objects-and-noun-objects/ */ class tree_transformer { private $i_count; private $a_link; public function __construct($a_link) { if(!is_array($a_link)) throw new Exception("First parameter should be an array. 
Instead, it was type '".gettype($a_link)."'"); $this->i_count = 1; $this->a_link= $a_link; } public function traverse($i_id) { $i_lft = $this->i_count; $this->i_count++; $a_kid = $this->get_children($i_id); if ($a_kid) { foreach($a_kid as $a_child) { $this->traverse($a_child); } } $i_rgt=$this->i_count; $this->i_count++; $this->write($i_lft,$i_rgt,$i_id); } private function get_children($i_id) { return $this->a_link[$i_id]; } private function write($i_lft,$i_rgt,$i_id) { // fetch the source column $s_query = "SELECT * FROM `adjacent_table` WHERE `id` = '".$i_id."'"; if (!$i_result = mysql_query($s_query)) { echo "<pre>$s_query</pre>\n"; throw new Exception(mysql_error()); } $a_source = array(); if (mysql_num_rows($i_result)) { $a_source = mysql_fetch_assoc($i_result); } // root node? label it unless already labeled in source table if (1 == $i_lft && empty($a_source['category'])) { $a_source['category'] = 'ROOT'; } // insert into the new nested tree table // use mysql_real_escape_string because one value "CD's" has a single ' $s_query = " INSERT INTO `nested_table` (`id`,`lft`,`rgt`,`category`) VALUES ( '".$i_id."', '".$i_lft."', '".$i_rgt."', '".mysql_real_escape_string($a_source['category'])."' ) "; if (!$i_result = mysql_query($s_query)) { echo "<pre>$s_query</pre>\n"; throw new Exception(mysql_error()); } else { // success: provide feedback echo "<p>$s_query</p>\n"; } } } mysql_connect('localhost','USER','PASSWORD') or die(mysql_error()); mysql_select_db('DATABASE') or die(mysql_error()); // build a complete copy of the adjacency table in ram $s_query = "SELECT `id`,`father_id` FROM `adjacent_table`"; $i_result = mysql_query($s_query); $a_rows = array(); while ($a_rows[] = mysql_fetch_assoc($i_result)); $a_link = array(); foreach($a_rows as $a_row) { $i_father_id = $a_row['father_id']; $i_child_id = $a_row['id']; if (!array_key_exists($i_father_id,$a_link)) { $a_link[$i_father_id]=array(); } $a_link[$i_father_id][]=$i_child_id; } $o_tree_transformer = new tree_transformer($a_link); $o_tree_transformer->traverse(0); ?>


  • Ways to update a dependent table in the same MySQL transaction?

    - by codie
    I need to update two tables inside a single transaction. The individual queries look something like this: 1. INSERT INTO t1 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = val2; If the above query causes an insert then I need to run the following statement on the second table: 2. INSERT INTO t2 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = col2 + val2; otherwise, 3. UPDATE t2 SET col2 = col2 - old_val2 + val2 WHERE col1 = val1; -- old_val2 is the value of t1.col2 before it was updated Right now I run a SELECT on t1 first, to determine whether statement 1 will cause an insert or update on t1. Then I run statement 1 and either of 2 and 3 inside a transaction. What are the ways in which I can do all of these inside one transaction itself? The approach I was thinking of is the following: UPDATE t2, t1 set t2.col2 = t2.col2 - t1.col2 WHERE t1.col1 = t2.col2 and t1.col1 = val1; INSERT INTO t1 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = val2; INSERT INTO t2, t1 (t2.col1, t2.col2) VALUES (t1.col1, t1.col2) ON DUPLICATE KEY UPDATE t2.col2 = t2.col2 + t1.col2 WHERE t1.col1 = t2.col2 and t1.col1 = val1; Unfortunately, there's no multi-table INSERT... ON DUPLICATE KEY UPDATE in MySQL 5.0. What else could I do?
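
    Short of moving the branch into a stored procedure, one hedged pattern is to keep the branch in the application but let MySQL tell you which branch to take: for INSERT ... ON DUPLICATE KEY UPDATE the affected-row count is 1 when a new row was inserted and 2 when an existing row was updated (0 if the existing row already had those values), so everything still fits in a single transaction without the up-front SELECT:

        START TRANSACTION;

        INSERT INTO t1 (col1, col2) VALUES (?, ?)
        ON DUPLICATE KEY UPDATE col2 = VALUES(col2);
        -- check ROW_COUNT() (or the driver's affected-rows value):
        --   1 -> new t1 row:     INSERT INTO t2 ... ON DUPLICATE KEY UPDATE col2 = col2 + ?
        --   2 -> t1 was updated: UPDATE t2 SET col2 = col2 - old_val2 + ? WHERE col1 = ?

        COMMIT;

    The old_val2 needed in the update branch can be captured in the same round trip by selecting t1.col2 FOR UPDATE just before the insert, which also locks the row for the rest of the transaction.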


  • WordPress generating slow MySQL queries - is it an index problem?

    - by tash
    Hello Stack Overflow I've got very slow Mysql queries coming up from my wordpress site. It's making everything slow and I think this is eating up CPU usage. I've pasted the Explain results for the two most frequently problematic queries below. This is a typical result - although very occasionally teh queries do seem to be performed at a more normal speed. I have the usual wordpress indexes on the database tables. You will see that one of the queries is generated from wordpress core code, and not from anything specific - like the theme - for my site. I have a vague feeling that the database is not always using the indexes/is not using them properly... Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer - it is hugely appreciated Query: [wp-blog-header.php(14): wp()] SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 6 id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE wp_posts ref type_status_date type_status_date 63 const 427 Using where; Using filesort Query time: 34.2829 (ms) 9) Query: [wp-content/themes/LMHR/index.php(40): query_posts()] SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.ID NOT IN ( SELECT tr.object_id FROM wp_term_relationships AS tr INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id WHERE tt.taxonomy = 'category' AND tt.term_id IN ('217', '218', '223', '224') ) AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 6 id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY wp_posts ref type_status_date type_status_date 63 const 427 Using where; Using filesort 2 DEPENDENT SUBQUERY tr ref PRIMARY,term_taxonomy_id PRIMARY 8 func 1 Using index 2 DEPENDENT SUBQUERY tt eq_ref PRIMARY,term_id_taxonomy,taxonomy PRIMARY 8 antin1_lovemusic2010.tr.term_taxonomy_id 1 Using where Query time: 70.3900 (ms)
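
    One thing that sometimes helps on MySQL 5.0/5.1 (a hedged suggestion; whether it applies here is something the EXPLAIN after the change has to confirm) is rewriting the dependent NOT IN subquery from the theme's query_posts() call as a join against a derived table, so the exclusion list is built once instead of being re-checked per row:

        SELECT SQL_CALC_FOUND_ROWS p.*
        FROM wp_posts AS p
        LEFT JOIN (
            SELECT DISTINCT tr.object_id
            FROM wp_term_relationships AS tr
            JOIN wp_term_taxonomy AS tt ON tt.term_taxonomy_id = tr.term_taxonomy_id
            WHERE tt.taxonomy = 'category' AND tt.term_id IN (217, 218, 223, 224)
        ) AS excluded ON excluded.object_id = p.ID
        WHERE excluded.object_id IS NULL
          AND p.post_type = 'post'
          AND p.post_status IN ('publish', 'private')
        ORDER BY p.post_date DESC
        LIMIT 0, 6;

    The "Using filesort" on 427 rows in both EXPLAINs is small, so if the slowness is intermittent rather than constant, contention from elsewhere on the server is also worth ruling out.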


  • Why aren't my MySQL GROUP BY hours vs. half-hours queries displaying the same data?

    - by stogdilla
    I need to be able to display data that I have in 15 minute increments in different display types. I have two queries that are giving me trouble. One shows data by half an hour, the other shows data by hour. The only issue is that the data totals change between queries. It's not counting the data that happens between the time frames, only AT the time frames. Ex: There are 5 things that happen at 7:15am. 2 that happen at 7:30am and 4 that show at 7:00am. The 15 minute view displays all of the data. The half hour view displays the data from 7:00am and from 7:30am but ignores the 7:15am. The hour display only shows the 7:00am data Here are my queries: $query="SELECT * FROM data WHERE startDate='$startDate' and queue='$queue' GROUP BY HOUR(start),floor(minute(start)/30)"; and $query="SELECT * FROM data WHERE startDate='$startDate' and queue='$queue' GROUP BY HOUR(start) "; How can I pull out the data in groups like I have but get all the data included? Is the issue the way the data is stored in the mysql table? Currently I have a column with dates (2010-03-29) and a column with times (00:00) Do I need to convert these into something else?
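
    For what it's worth, GROUP BY on its own does not add up the rows inside each bucket: with SELECT *, MySQL returns just one arbitrary row per group, which is why the 7:15 data seems to vanish at the coarser granularities. A hedged sketch that aggregates explicitly, keeping the same bucketing expressions as the queries above (the literal date and queue values are placeholders):

        -- half-hour buckets
        SELECT HOUR(start)             AS hr,
               FLOOR(MINUTE(start)/30) AS half,
               COUNT(*)                AS events    -- or SUM(...) over whichever columns are being reported
        FROM data
        WHERE startDate = '2010-03-29' AND queue = 'example-queue'
        GROUP BY hr, half;

        -- hourly buckets
        SELECT HOUR(start) AS hr, COUNT(*) AS events
        FROM data
        WHERE startDate = '2010-03-29' AND queue = 'example-queue'
        GROUP BY hr;

    The storage format (separate date and time columns) is workable; the display difference comes purely from selecting raw columns instead of aggregates in the grouped queries.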

