Search Results

Search found 42428 results on 1698 pages for 'database query'.


  • Multiple/nested "select where" with Zend_Db_Select

    - by DJRayon
    Hi there, I need to create something like this:

        select name from table where active = 1 AND (name LIKE 'bla' OR description LIKE 'bla')

    The first part is easy:

        $sqlcmd = $db->select()
                     ->from("table", "name")
                     ->where("active = ?", 1)

    Now comes the tricky part: how can I nest? I know that I can just write

        ->orWhere("name LIKE ? OR description LIKE ?", "bla")

    but that's wrong, because I need to change all the parts dynamically. The query is built up every time the script runs: some parts get deleted, some altered. In this example I need to add those ORs because sometimes I need to search wider. "My Zend logic" tells me the correct way would be something like this:

        $sqlcmd = $db->select()
                     ->from("table", "name")
                     ->where("active = ?", 1)
                     ->where(array(
                         $db->select->where("name LIKE ?", "bla"),
                         $db->select->orWhere("description LIKE ?", "bla")
                     ))

    But that doesn't work (at least I don't remember it working). Please, can someone help me find an object-oriented way of nesting "where"s?

    Read the article

  • Using array instead of lots of db queries in PHP

    - by Tural Teyyuboglu
    My function looks like this. It works, but it does a lot of work (it calls itself recursively and runs lots of DB queries). There must be another way to do the same thing with an array and a single query, but I can't figure out how to modify this function to work that way. Please help.

        function genMenu($parent, $level, $menu, $utype) {
            global $db;
            $stmt = $db->prepare("select id, name FROM navigation WHERE parent = ? AND menu=? AND user_type=?") or die($db->error);
            $stmt->bind_param("iii", $parent, $menu, $utype) or die($stmt->error);
            $stmt->execute() or die($stmt->error);
            $stmt->store_result();
            /* bind variables to prepared statement */
            $stmt->bind_result($id, $name) or die($stmt->error);
            if ($level > 0 && $stmt->num_rows > 0) {
                echo "\n<ul>\n";
            }
            while ($stmt->fetch()) {
                echo "<li>";
                echo '<a href="?page=' . $id . '">' . $name . '</a>';
                // display this level's children
                genMenu($id, $level + 1, $menu, $utype);
                echo "</li>\n\n";
            }
            if ($level > 0 && $stmt->num_rows > 0) {
                echo "</ul>\n";
            }
            $stmt->close();
        }
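
    For the single-query version, the SQL itself would presumably just stop filtering on parent and fetch the whole menu at once; the nesting is then rebuilt in PHP by grouping rows on their parent value. A hedged sketch (the literal 1 and 2 stand in for the bound menu and user_type parameters):

        SELECT id, name, parent
        FROM navigation
        WHERE menu = 1
          AND user_type = 2
        ORDER BY parent, id;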

    Read the article

  • get data from gridview without querying database

    - by frank2009
    Hi there, I am new at this, so please bear with me. I have managed to get the following code to work: when I click on the "Select" link in a row of the GridView, the data is transferred to other labels/textboxes on the page. So far so good. The thing is that every time I click Select, it goes back to the database for the data and there is a delay of a few seconds. I was hoping that the data, since it is already visible in the grid rows, could simply be picked up and used in the other labels/textboxes without re-querying the database. Is this possible? Thanks in advance.

        Protected Sub GridView1_SelectedIndexChanged(ByVal sender As Object, ByVal e As System.EventArgs)
            Label1.Text = GridView2.SelectedRow.Cells(8).Text
            Label2.Text = GridView2.SelectedRow.Cells(9).Text
            TextBox1.Text = GridView2.SelectedRow.Cells(7).Text
        End Sub

    Read the article

  • How do I write this GROUP BY in mysql UNION query

    - by user1652368
    Trying to group the results of two queries together. When I run this query:

        SELECT pr_id, pr_sbtcode, pr_sdesc, od_quantity, od_amount
        FROM (
            SELECT `bgProducts`.`pr_id`, `bgProducts`.`pr_sbtcode`, `bgProducts`.`pr_sdesc`,
                   SUM(`od_quantity`) AS `od_quantity`, SUM(`od_amount`) AS `od_amount`,
                   MIN(UNIX_TIMESTAMP(`or_date`)) AS `or_date`
            FROM `bgOrderMain` JOIN `bgOrderData` JOIN `bgProducts`
            WHERE `bgOrderMain`.`or_id` = `bgOrderData`.`or_id`
              AND `od_pr` = `pr_id`
              AND UNIX_TIMESTAMP(`or_date`) >= '1262322000'
              AND UNIX_TIMESTAMP(`or_date`) <= '1346990399'
              AND (`pr_id` = '415' OR `pr_id` = '1088')
            GROUP BY `bgProducts`.`pr_id`
            UNION
            SELECT `bgProducts`.`pr_id`, `bgProducts`.`pr_sbtcode`, `bgProducts`.`pr_sdesc`,
                   SUM(`od_quantity`) AS `od_quantity`, SUM(`od_amount`) AS `od_amount`,
                   MIN(UNIX_TIMESTAMP(`or_date`)) AS `or_date`
            FROM `npOrderMain` JOIN `npOrderData` JOIN `bgProducts`
            WHERE `npOrderMain`.`or_id` = `npOrderData`.`or_id`
              AND `od_pr` = `pr_id`
              AND UNIX_TIMESTAMP(`or_date`) >= '1262322000'
              AND UNIX_TIMESTAMP(`or_date`) <= '1346990399'
              AND (`pr_id` = '415' OR `pr_id` = '1088')
            GROUP BY `bgProducts`.`pr_id`
        ) TEMPTABLE3;

    it produces this result:

        +-------+------------+------------+-------------+-----------+
        | pr_id | pr_sbtcode | pr_sdesc   | od_quantity | od_amount |
        +-------+------------+------------+-------------+-----------+
        |   415 | NP13       | Product 13 |           5 |       125 |
        |  1088 | NPAW       | Product AW |           4 |       100 |
        |   415 | NP13       | Product 13 |           5 |       125 |
        |  1088 | NPAW       | Product AW |           2 |        50 |
        +-------+------------+------------+-------------+-----------+

    What I want is a result that combines those into two rows:

        +-------+------------+------------+-------------+-----------+
        | pr_id | pr_sbtcode | pr_sdesc   | od_quantity | od_amount |
        +-------+------------+------------+-------------+-----------+
        |   415 | NP13       | Product 13 |          10 |       250 |
        |  1088 | NPAW       | Product AW |           6 |       150 |
        +-------+------------+------------+-------------+-----------+

    So I added GROUP BY pr_id to the end of the query:

        SELECT pr_id, pr_sbtcode, pr_sdesc, od_quantity, od_amount
        FROM (
            -- same two SELECT ... UNION SELECT ... statements as above
        ) TEMPTABLE3
        GROUP BY pr_id;

    But that just gives me this:

        +-------+------------+------------+-------------+-----------+
        | pr_id | pr_sbtcode | pr_sdesc   | od_quantity | od_amount |
        +-------+------------+------------+-------------+-----------+
        |   415 | NP13       | Product 13 |           5 |       125 |
        |  1088 | NPAW       | Product AW |           4 |       100 |
        +-------+------------+------------+-------------+-----------+

    What am I missing here?
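
    One way to get the combined totals, sketched here against the schema shown above: keep the per-source GROUP BY inside the subquery, use UNION ALL so that subtotals which happen to be identical are not collapsed, and aggregate again in the outer query instead of relying on a bare GROUP BY pr_id:

        -- The outer SELECT re-aggregates the per-source subtotals.
        SELECT pr_id, pr_sbtcode, pr_sdesc,
               SUM(od_quantity) AS od_quantity,
               SUM(od_amount)   AS od_amount
        FROM (
            SELECT bgProducts.pr_id, bgProducts.pr_sbtcode, bgProducts.pr_sdesc,
                   SUM(od_quantity) AS od_quantity, SUM(od_amount) AS od_amount
            FROM bgOrderMain
            JOIN bgOrderData ON bgOrderMain.or_id = bgOrderData.or_id
            JOIN bgProducts  ON od_pr = pr_id
            WHERE UNIX_TIMESTAMP(or_date) BETWEEN 1262322000 AND 1346990399
              AND pr_id IN ('415', '1088')
            GROUP BY bgProducts.pr_id
            UNION ALL
            SELECT bgProducts.pr_id, bgProducts.pr_sbtcode, bgProducts.pr_sdesc,
                   SUM(od_quantity) AS od_quantity, SUM(od_amount) AS od_amount
            FROM npOrderMain
            JOIN npOrderData ON npOrderMain.or_id = npOrderData.or_id
            JOIN bgProducts  ON od_pr = pr_id
            WHERE UNIX_TIMESTAMP(or_date) BETWEEN 1262322000 AND 1346990399
              AND pr_id IN ('415', '1088')
            GROUP BY bgProducts.pr_id
        ) AS combined
        GROUP BY pr_id, pr_sbtcode, pr_sdesc;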

    Read the article

  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between two linked servers. Basically I have to get the data from a multiple join into a table on my side. The current query is something like this:

        select a.*
        from db1.dbo.tbl1 a
        inner join db1.dbo.tbl2 on ...
        inner join db1.dbo.tbl3 on ...
        inner join db1.dbo.tbl4 on ...
        inner join db2.dbo.myside on ...

    db1 = linked server, db2 = my own database. After this, I use an INSERT INTO + SELECT to add the data to my table, which is located in db2 (usually a few hundred records; the import runs once a minute). My question is about performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge, with millions of records, and they are slowing down the import process. I was told that if I do the join on the "other" side (db1, the linked server), for example in a stored procedure, then even if the query looks the same it would run faster. Is that right? This is kind of hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use to make this run faster? Thanks
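
    One approach that is often suggested for this situation, sketched here with assumptions: the linked server is registered as db1, import_target and the id/col1/col2 columns are placeholders, and the real join conditions replace the illustrative ones. OPENQUERY() ships the remote-only join to the linked server so that only the pre-filtered rows cross the link, while the join against the local db2.dbo.myside table stays local:

        -- Hypothetical target table, columns and join keys, for illustration only.
        INSERT INTO db2.dbo.import_target (id, col1, col2)
        SELECT r.id, r.col1, r.col2
        FROM OPENQUERY(db1, '
                 SELECT a.id, a.col1, a.col2
                 FROM dbo.tbl1 a
                 INNER JOIN dbo.tbl2 b ON b.tbl1_id = a.id
                 INNER JOIN dbo.tbl3 c ON c.tbl1_id = a.id
                 INNER JOIN dbo.tbl4 d ON d.tbl1_id = a.id') AS r
        INNER JOIN db2.dbo.myside m
                ON m.id = r.id;   -- the join to the local table stays on this side

    A stored procedure that lives on db1 and returns the same pre-joined result set achieves the same effect, which is essentially what the advice about doing the join on the "other" side amounts to.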

    Read the article

  • Filemaker XSL 20sec Query Latency

    - by Ian Wetherbee
    I have an ASP frontend that loads data from a Filemaker database, using XSL to perform simple queries. The problem is that the first page load takes 20 seconds +/- 200 ms, then the next few page refreshes within a minute of the first request take <200 ms, and then the cycle starts over again. Each page load makes only 2 XSL queries, and they execute fast after the first page load, so what is causing the delay on the first page load? I have caching turned up with a 100% hit rate and the number of connections at 100. I've tried with XSL database sessions on and off, and with session times anywhere from 1 to 60 minutes, without any change. The XSL loads from ASP use a GET request and add a Basic Authorization header to authenticate each time. During fast page requests, the fmserver.exe and fmswpc.exe processes don't even flinch, but during a 20-second holdup I see fmserver jump to 30% CPU with a 3 MB I/O read a few seconds into the request, and occasionally fmswpc jumps to 60% CPU.

    Read the article

  • Require reasonably random results from an SQL SELECT query within a Joomla article (Cache enabled)

    - by Shrinivas
    Setup: Joomla website on a LAMP stack. I have a MySQL table containing some records; these are queried by a simple SELECT in the Joomla article, as pasted below. This specific Joomla website has caching turned on in Joomla's Global Configuration. I need to randomize the order in which I display the result set each time the page is loaded. Regular PHP/MySQL would offer me two approaches for this:

    1. Use 'order by RAND()' (or any of a number of methods) to make the SELECT query return reasonably random results.
    2. Once PHP gets the result of the SELECT into an array, shuffle the array to get a reasonably random order of array items.

    However, as this Joomla instance has caching turned ON in its Global Configuration, either of the above approaches fails. The first time I load the page the order is randomized, but further reloads do not change the order, as the page is delivered from cache. The instant the cache is disabled, both approaches (shuffle / order by rand) work perfectly. What am I missing? How do I override the global cache for this specific article? A very simple requirement, met by both PHP and MySQL reasonably well, is blocked by the Joomla cache that I cannot turn off. The PHP that returns results from the database:

        $db = JFactory::getDBO();
        $select = "SELECT id FROM jos_mytable;"; // order by RAND()
        $db->setQuery($select);
        echo $db->getQuery(); // Show me the query!
        $rows = $db->loadObjectList();
        // shuffle($rows);
        foreach ($rows as $row) {
            echo "$row->id";
        }

    Read the article

  • Complex sorting on MySQL database

    - by ChrisR
    I'm facing the following situation. We've got a CMS with an entity that has translations. These translations are stored in a separate table with a one-to-many relationship, for example newsarticles and newsarticle_translations. The set of available languages is determined dynamically by the same CMS. When entering a new news article, the editor is required to enter at least one translation; which one of the available languages he chooses is up to him. In the news article overview in our CMS we would like to show a column with the (translated) article title, but since none of the languages is mandatory (one of them is mandatory, but I don't know which one) I don't really know how to construct my MySQL query to select a title for each news article, regardless of the language that was entered. And to make it all a little harder, our manager asked for the possibility to also sort on title, so fetching the translations in a separate query is ruled out as far as I know. Does anyone have an idea how to solve this in the most efficient way? Here are my table schemas in case they help:

        > desc news;
        +-------------+-----------+------+-----+-------------------+----------------+
        | Field       | Type      | Null | Key | Default           | Extra          |
        +-------------+-----------+------+-----+-------------------+----------------+
        | id          | int(10)   | NO   | PRI | NULL              | auto_increment |
        | category_id | int(1)    | YES  |     | NULL              |                |
        | created     | timestamp | NO   |     | CURRENT_TIMESTAMP |                |
        | user_id     | int(10)   | YES  |     | NULL              |                |
        +-------------+-----------+------+-----+-------------------+----------------+

        > desc news_translations;
        +----------+------------------+------+-----+---------+----------------+
        | Field    | Type             | Null | Key | Default | Extra          |
        +----------+------------------+------+-----+---------+----------------+
        | id       | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | enabled  | tinyint(1)       | NO   |     | 0       |                |
        | news_id  | int(1) unsigned  | NO   |     | NULL    |                |
        | title    | varchar(255)     | NO   |     |         |                |
        | summary  | text             | YES  |     | NULL    |                |
        | body     | text             | NO   |     | NULL    |                |
        | language | varchar(2)       | NO   |     | NULL    |                |
        +----------+------------------+------+-----+---------+----------------+

    PS: I've thought about subqueries and COALESCE() solutions, but those seem like rather dirty tricks. I'm wondering whether there is something better that I'm not thinking of.
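
    A single-query sketch of one way to do this, assuming the schema above and using 'en' as a stand-in for whichever language the CMS prefers for display: for each article, join exactly one translation row (the preferred language if it exists, otherwise the first translation that was entered), which also keeps sorting on title possible:

        SELECT n.id, t.title, t.language
        FROM news n
        JOIN news_translations t
          ON t.id = (
                 SELECT t2.id
                 FROM news_translations t2
                 WHERE t2.news_id = n.id
                 ORDER BY (t2.language = 'en') DESC, t2.id   -- prefer 'en', else the oldest translation
                 LIMIT 1
             )
        ORDER BY t.title;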

    Read the article

  • How to make ActiveRecord work with legacy partitioned/sharded databases/tables?

    - by Utensil
    Thanks for your time first... After all the searching on Google, GitHub and here, I only got more confused by the big words (partition/shard/federate), so I figure I have to describe the specific problem I met and ask around. My company's databases deal with massive numbers of users and orders, so we split databases and tables in various ways, some of which are described below:

        way        | database and table name | sharded by (maybe it should be called "partitioned by"?)
        -----------+-------------------------+---------------------------------------------------------
        YZ.X       | db_YZ.tb_X              | last three digits of the order serial number
        YYYYMMDD.  | db_YYYYMMDD.tb          | date
        YYYYMM.DD  | db_YYYYMM.tb_DD         | date too

    The basic concept is that databases and tables are separated according to a field (not necessarily the primary key), and there are too many databases and too many tables, so writing (or magically generating) one database.yml config for each database and one model for each table isn't possible, or at least not the best solution. I looked into drnic's magic solutions, and DataFabric, and even the source code of ActiveRecord. Maybe I could use ERB to generate database.yml and do the database connection in an around filter, and maybe I could use named_scope to dynamically decide the table name for find, but update/create operations are bound to "self.class.quoted_table_name", so I couldn't easily get my problem solved that way. And even if I could generate one model for each table (the number would be up to 30 at most), this is just not DRY! What I need is a clean solution like the following DSL:

        class Order < ActiveRecord::Base
          shard_by :order_serialno do |key|
            [get_db_config_by(key), # because some or all of the databases might share the same machine
                                    # in a regular way, or can be configured by a hash of regexes,
                                    # and it can also be a const
             get_db_name_by(key),
             get_tb_name_by(key),
            ]
          end
        end

    Can anybody enlighten me? Any help would be greatly appreciated.

    Read the article

  • Notepad Tutorial: deleteDatabase() function

    - by FelixA
    Hello, I have a short question about the Notepad tutorial on the Android website. I wrote a simple function in the tutorial code to delete the whole database. It looks like this:

    DataHelper.java:

        public void deleteDatabase() {
            this.mDb.delete(DATABASE_NAME, null, null);
        }

    Notepadv1.java:

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            boolean result = super.onCreateOptionsMenu(menu);
            menu.add(0, DELETE_ID, 0, "Delete whole Database");
            return result;
        }

        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            switch (item.getItemId()) {
            case DELETE_ID:
                mDbHelper.deleteDatabase();
                return true;
            }
            return super.onOptionsItemSelected(item);
        }

    But when I run the app and try to delete the database, I get this error in LogCat:

        sqlite returned: error code = 1, msg = no such table: data

    Can you help me fix this problem? It seems that the deleteDatabase function cannot reach the database. Thank you very much. Felix

    Read the article

  • How to refresh database data only in SQL Server

    - by MaxGeek
    So I want to copy just the data from a production database (SQL Server 2005) down to my local machine (SQL Server 2005 and SQL Server 2008 Management Studio installed). The problem is that I'm running into foreign key constraints that cause the task/scripts to fail. I can get past these errors if I import certain tables first, but is there an easier way to do this all at once? I'm not a DBA, so I don't have access to a database backup. I've tried the SQL Import/Export Data Wizard and the Publishing Wizard, but they also fail with the same key-constraint errors.
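
    One commonly used workaround, sketched here as a suggestion rather than a verified recipe: disable every foreign key constraint on the local copy, run the Import/Export wizard, then re-enable and re-validate them. sp_MSforeachtable is an undocumented but widely available SQL Server system procedure:

        -- Run against the local (target) database before the import:
        EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

        -- ... run the Import/Export Data Wizard here ...

        -- Afterwards, re-enable and re-check every constraint:
        EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';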

    Read the article

  • trying to backup mysql database using php

    - by user225269
    I got this code from this site: http://www.php-mysql-tutorial.com/wikis/mysql-tutorials/using-php-to-backup-mysql-databases.aspx But I'm just a beginner, so I don't know what config.php and opendb.php are supposed to be. Do I have to create those two files in order for this code to work? If yes, how do I create them? That isn't explained on the site.

        <?php
        include 'config.php';
        include 'opendb.php';

        $tableName  = 'mypet';
        $backupFile = 'backup/mypet.sql';
        $query      = "SELECT * INTO OUTFILE '$backupFile' FROM $tableName";
        $result     = mysql_query($query);

        include 'closedb.php';
        ?>

    Can I just include these lines at the top of the code so that I don't have to include opendb.php anymore?

        $con = mysql_connect("localhost", "root", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("Hospital", $con);

    Read the article

  • Not having address/phone number in WHOIS database?

    - by HighCommander4
    When I sign up for an account with a domain name registrar like 10dollar.ca, it asks for my address and phone number. Will these show up when someone does a WHOIS lookup on my domain name? I noticed that when you do a WHOIS lookup on some websites (e.g. http://www.chrismanieri.ca), no address/phone number comes up. I want mine to be like that too (I don't want my address/phone number exposed to the public).

    Read the article

  • query keepalived

    - by tdimmig
    *Note: I have trouble deciding what should go on Server Fault and what should go on Super User; if some kindly admin decides this is in the wrong place, please move it for me - many thanks. I am implementing a basic HA system with keepalived. I only want to be notified of a failover in the case of hardware failure. I do, however, have the servers switch roles periodically. I have a track_script running on the backup that varies its return value between 0 and 1 on an interval (once a week, once a month, whatever). Upon returning 0, the priority is raised above that of the master; upon returning 1, the priority is lowered again. This way they trade places on the configured interval. The question: what can I do to tell the difference between a switch caused by my script and a switch caused by one of the servers dying? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has SNMP support and I may be able to use it to get some information that could tell me one way or another, but to be honest I've never used SNMP before and I don't know how to get the information I want with it (my Google-fu failed me).

    Read the article

  • JBoss database connection pool configuration

    - by Qben
    I am facing a connection pool issue in my clustered JBoss installation. From time to time one of my connection pools will hit the roof and I get a lot of these in my logfile:

        java.sql.SQLException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] );

    The odd thing is that I can see in the JMX console that ConnectionCount hits the roof, but at the same time InUseConnectionCount is often quite small. The problem resolves itself after a couple of minutes, but during the recovery phase my application will not work (for obvious reasons). The question is whether this indicates an error in the configured timeouts of the connections (I pretty much use the defaults), or whether my pool is simply too small to handle the peaks. Under normal operation I would say I use ~40% of the configured maximum number of connections. The reason I don't just increase the maximum number of connections is that if I had actually used up all the connections, I would expect InUseConnectionCount to hit the roof as well. Hence I suspect I have more issues than just a too-small pool size. Maybe InUseConnectionCount has already decreased by the time I check the JMX console and it actually does hit the roof? I tend to collect data every second minute. Any hints are more than welcome.

    Read the article

  • Backup all plesk MySQL Databases to individual files

    - by Michael
    Hi, because I'm new to shell scripting I need a hand. I currently back up all my databases to a single file, which makes restoring pretty hard. The second problem is that my MySQL password doesn't work because of a Plesk bug, so I have to get the password from "/etc/psa/.psa.shadow". Here is the code that I use to back up all my databases to a single file:

        mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` --all-databases | bzip2 -c > /root/21.10.2013.sql.bz2

    I found some scripts on the web that back up each database to an individual file, but I don't know how to make them work for my situation. Here is an example script:

        for db in $(mysql -e 'show databases' -s --skip-column-names); do
            mysqldump $db | gzip > "/backups/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz";
        done

    Can someone help me make the script above work for my situation? Requirements: back up each database to an individual file, using the Plesk password location.

    Read the article

  • Create DB in Sql Server based on Visio Data Model

    - by Yaakov Ellis
    I have created a database model in Visio Professional (2003). I know that the Enterprise version has the ability to create a database in SQL Server based on the model in Visio. I do not have the option to install Enterprise. Aside from going through the entire thing one table and relationship at a time and creating the whole database from scratch by hand, can anyone recommend a tool/utility/method for converting the Visio database model into a SQL script that can be used to create a new database in SQL Server?

    Read the article

  • History tables pros, cons and gotchas - using triggers, sproc or at application level.

    - by Nathan W
    I am currently playing around with the idea of having history tables for some of the tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column to store which action was performed, e.g. Update, Delete or Insert. So far I can think of three different places where the history-table work can be done:

    1. Triggers on the main table for update, insert and delete (database).
    2. Stored procedures (database).
    3. Application layer (application).

    My main question is: what are the pros, cons and gotchas of doing the work in each of these layers? One advantage I can think of for the trigger approach is that integrity is always maintained no matter what program is implemented on top of the database.
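
    To make the trigger option concrete, a minimal MySQL-flavoured sketch; the orders / orders_history tables and their columns are assumptions for illustration, not names taken from the question:

        -- Assumed main table:    orders(id, status, amount)
        -- Assumed history table: orders_history(id, status, amount, modified_date, action)
        CREATE TRIGGER orders_history_upd
        AFTER UPDATE ON orders
        FOR EACH ROW
            INSERT INTO orders_history (id, status, amount, modified_date, action)
            VALUES (OLD.id, OLD.status, OLD.amount, NOW(), 'UPDATE');

        CREATE TRIGGER orders_history_del
        AFTER DELETE ON orders
        FOR EACH ROW
            INSERT INTO orders_history (id, status, amount, modified_date, action)
            VALUES (OLD.id, OLD.status, OLD.amount, NOW(), 'DELETE');

    A matching AFTER INSERT trigger would record the NEW values with action 'INSERT'; the stored-procedure and application-layer variants run the same INSERT from their own code instead.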

    Read the article

  • JQuery Ajax Updating MySQL Database, But Not Running Success Function

    - by myrmidon16
    I am currently using the jQuery ajax function to call an external PHP file, in which I select and insert data in a database. Once this is done, I run a success function in JavaScript. What's weird is that the database updates successfully when the ajax call is made, but the success function does not run. Here is my code:

        <!DOCTYPE html>
        <head>
        <script type="text/javascript" src="jquery-1.6.4.js"></script>
        </head>
        <body>
        <div onclick="addtask();" style="width:400px; height:200px; background:#000000;"></div>
        <script>
        function addtask() {
            var tid = (Math.floor(Math.random() * 3)) + 1;
            var tsk = (Math.floor(Math.random() * 10)) + 1;
            if (tsk !== 1) {
                $.ajax({
                    type: "POST",
                    url: "taskcheck.php",
                    dataType: "json",
                    data: {taskid: tid},
                    success: function(task) { alert(task.name); }
                });
            }
        }
        </script>
        </body>
        </html>

    And the PHP file:

        session_start();
        $connect = mysql_connect('x', 'x', 'x') or die('Not Connecting');
        mysql_select_db('x') or die('No Database Selected');

        $task = $_REQUEST['taskid'];
        $uid  = $_SESSION['user_id'];

        $q = "SELECT task_id, taskname FROM tasks WHERE task_id=" . $task . " LIMIT 1";
        $gettask = mysql_fetch_assoc(mysql_query($q));

        $q = "INSERT INTO user_tasks (ut_id, user_id, task_id, taskstatus, taskactive) VALUES (null, " . $uid . ", '{$gettask['task_id']}', 0, 1)";
        $puttask = mysql_fetch_assoc(mysql_query($q));

        $json = array("name" => $gettask['taskname']);
        $output = json_encode($json);
        echo $output;

    Let me know if you have any questions or comments, thanks.

    Read the article

  • Warn user when new data is inserted on database

    - by João Menighin
    I don't know how to search for this, so I'm kind of lost (the two topics I saw here were closed). I have a news website and I want to notify the user when new data is inserted into the database. I want to do this like here on Stack Overflow, where we are notified without reloading the page, or like on Facebook, where you are notified about new messages/notifications without reloading. What is the best way to do that? Is it some kind of listener with a timeout that constantly checks the database? That doesn't sound efficient... Thanks in advance.
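
    If you do end up polling from the client (the usual fallback when push techniques such as WebSockets or long polling are not available), the server-side check itself can stay cheap. A hedged sketch, assuming a news(id, title, created) table where id is AUTO_INCREMENT:

        -- Return only rows the client has not seen yet; 1234 stands in for the
        -- last id the client reported.
        SELECT id, title, created
        FROM news
        WHERE id > 1234
        ORDER BY id
        LIMIT 20;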

    Read the article

  • Need help with formulating LINQ query

    - by eponymous23
    I'm building a word anagram program that uses a database which contains one simple table:

        Words
        ---------------------
        varchar(15) alphagram
        varchar(15) anagram
        (other fields omitted for brevity)

    An alphagram is the letters of a word arranged in alphabetical order. For example, the alphagram for OVERFLOW would be EFLOORVW. Every alphagram in my database has one or more anagrams. Here's a sample data dump of my table:

        Alphagram   Anagram
        EINORST     NORITES
        EINORST     OESTRIN
        EINORST     ORIENTS
        EINORST     STONIER
        ADEINRT     ANTIRED
        ADEINRT     DETRAIN
        ADEINRT     TRAINED

    I'm trying to build a LINQ query that would return a list of alphagrams along with their associated anagrams. Is this possible?

    Read the article

  • Relating categories with tags using SQL

    - by Pablo
    I want to be able to find the tags of items under a certain category. The following is an example of my database design:

        images
        +----------+-----+-------------+-----+
        | image_id | ... | category_id | ... |
        +----------+-----+-------------+-----+
        |        1 | ... |          11 | ... |
        |        2 | ... |          12 | ... |
        |        3 | ... |          11 | ... |
        |        4 | ... |          11 | ... |
        +----------+-----+-------------+-----+

        images_tags
        +----------+--------+
        | image_id | tag_id |
        +----------+--------+
        |        1 |     53 |
        |        3 |     54 |
        |        2 |     55 |
        |        1 |     56 |
        |        4 |     57 |
        +----------+--------+

    tags and categories each have their own table relating the id to an actual name (text). So my question is: how do I find out that the images with category_id = 11 have the tag_ids 53, 54, 56 and 57? In other words, how do I find the tags that the images in a certain category have?
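
    A sketch of the lookup itself, assuming the tags table has tag_id and name columns (the question only says it maps an id to a name):

        SELECT DISTINCT t.tag_id, t.name
        FROM images i
        INNER JOIN images_tags it ON it.image_id = i.image_id
        INNER JOIN tags t         ON t.tag_id    = it.tag_id
        WHERE i.category_id = 11;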

    Read the article

  • MySQL temp table issue

    - by AmyD
    Hi folks! I'm trying to use temporary tables to speed up my MySQL 4.1.22-standard database, and what seems like a simple operation is causing me all kinds of issues. My code is below:

        CREATE TEMPORARY TABLE nonDerivativeTransaction_temp
            (accession_number varchar(30), transactionDateValue date)) TYPE=HEAP;

        INSERT INTO nonDerivativeTransaction_temp VALUES(
            SELECT accession_number, transactionDateValue
            FROM nonDerivativeTransaction
            WHERE transactionDateValue = "2010-06-15");

        SELECT * FROM nonDerivativeTransaction_temp;

    The original table (nonDerivativeTransaction) has two fields: accession_number (varchar(30)) and transactionDateValue (date). Apparently I am getting an issue with the first two statements, but I can't seem to nail down what it is. Any help would be appreciated. Amy D.
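
    For reference, a corrected sketch of the same three statements, assuming the two-column schema described above: the CREATE has one closing parenthesis too many, and an INSERT that copies from another table uses INSERT ... SELECT without a VALUES( ) wrapper:

        CREATE TEMPORARY TABLE nonDerivativeTransaction_temp (
            accession_number     VARCHAR(30),
            transactionDateValue DATE
        ) TYPE=HEAP;   -- ENGINE=MEMORY is the equivalent on newer MySQL versions

        INSERT INTO nonDerivativeTransaction_temp
        SELECT accession_number, transactionDateValue
        FROM nonDerivativeTransaction
        WHERE transactionDateValue = '2010-06-15';

        SELECT * FROM nonDerivativeTransaction_temp;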

    Read the article
