Search Results

Search found 17036 results on 682 pages for 'mysql administrator'.

Page 325 of 682

  • DATE_FORMAT in DQL (Symfony2)

    - by schurtertom
    I would like to use some MySQL functions such as DATE_FORMAT in my QueryBuilder. I saw the post "SELECT DISTINCT YEAR in Doctrine" but did not totally understand how I should achieve it.

    Repository SubmissionManuscriptRepository:

        class SubmissionManuscriptRepository extends EntityRepository
        {
            public function findLayoutDoneSubmissions($fromDate, $endDate, $journals)
            {
                if (true === is_null($fromDate))
                    return null;
                $commQB = $this->createQueryBuilder('c')
                    ->join('c.submission_logs', 'k')
                    ->select("DATE_FORMAT(k.log_date,'%Y-%m-%d')")
                    ->addSelect('c.journal_id')
                    ->addSelect('COUNT(c.journal_id) AS numArticles');
                $commQB->where("k.hash_key = c.hash_key");
                $commQB->andWhere("k.log_date >= '$fromDate'");
                $commQB->andWhere("k.log_date <= '$endDate'");
                if ($journals != null && is_array($journals) && count($journals) > 0)
                    $commQB->andWhere("c.journal_id in (" . implode(",", $journals) . ")");
                $commQB->andWhere("k.new_status = '20'");
                $commQB->orderBy("k.log_date", "ASC");
                $commQB->groupBy("c.hash_key");
                $commQB->addGroupBy("c.journal_id");
                $commQB->addGroupBy("DATE_FORMAT(k.log_date,'%Y-%m-%d')");
                return $commQB->getQuery()->getResult();
            }
        }

    Entity SubmissionManuscript:

        /**
         * MDPI\SusyBundle\Entity\SubmissionManuscript
         *
         * @ORM\Entity(repositoryClass="MDPI\SusyBundle\Repository\SubmissionManuscriptRepository")
         * @ORM\Table(name="submission_manuscript")
         * @ORM\HasLifecycleCallbacks()
         */
        class SubmissionManuscript
        {
            // ...
            /**
             * @ORM\OneToMany(targetEntity="SubmissionManuscriptLog", mappedBy="submission_manuscript")
             */
            protected $submission_logs;
            // ...
        }

    Entity SubmissionManuscriptLog:

        /**
         * MDPI\SusyBundle\Entity\SubmissionManuscriptLog
         *
         * @ORM\Entity(repositoryClass="MDPI\SusyBundle\Repository\SubmissionManuscriptLogRepository")
         * @ORM\Table(name="submission_manuscript_log")
         * @ORM\HasLifecycleCallbacks()
         */
        class SubmissionManuscriptLog
        {
            // ...
            /**
             * @ORM\ManyToOne(targetEntity="SubmissionManuscript", inversedBy="submission_logs")
             * @ORM\JoinColumn(name="hash_key", referencedColumnName="hash_key")
             */
            protected $submission_manuscript;
            // ...
        }

    I would appreciate any help.

    EDIT 1: I have now been able to add the custom function DATE_FORMAT successfully. But if I try my GROUP BY, I get the following error:

        [Semantical Error] line 0, col 614 near '(k.log_date,'%Y-%m-%d')':
        Error: Cannot group by undefined identification variable.

    Does anyone know about this?
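
    This "Cannot group by undefined identification variable" error usually goes away if the expression is selected under a result alias and the GROUP BY references that alias rather than repeating the function call, since Doctrine's parser accepts result variables in GROUP BY. A sketch, assuming the DATE_FORMAT custom DQL function is already registered:

        $commQB = $this->createQueryBuilder('c')
            ->join('c.submission_logs', 'k')
            ->select("DATE_FORMAT(k.log_date, '%Y-%m-%d') AS log_day")
            ->addSelect('c.journal_id')
            ->addSelect('COUNT(c.journal_id) AS numArticles')
            // ... same where()/andWhere() calls as above ...
            ->groupBy('c.hash_key')
            ->addGroupBy('c.journal_id')
            ->addGroupBy('log_day');   // group by the alias, not the raw expression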

  • How to restrict text search to a certain subset of the database?

    - by Nikhil Garg
    I have a large central database of around 1 million heavy records. In my app, every user would have a subset of rows from the central table, and each subset would be very small (probably 100 records). When a particular user has logged in, I want to search on that data set only.

    Example: say I have a central database of all cars in the world, and user profiles for General Motors (GM), Ferrari, etc. When GM is logged in, I just want to search (a full-text search, not a SQL query) those cars which are manufactured by GM. GM may also launch or withdraw a model, in which case the central db would be updated, and so would the row set associated with GM. In the case of acquisitions, the db of certain profiles may change without a car being launched or removed, so the central db wouldn't change but the row sets would.

    What's the best way to implement such a design? These smaller row sets need to be dynamic, depending on user activity. We are on Rails 2.3.5 and use thinking_sphinx as the connector and Sphinx/MySQL for search and relational associations.

  • Is there any way to carry a value in PHP forward to a second page?

    - by Henry Aspden
    I have created a PHP site; previously it listed only products with defined values. I have now changed it to include an array of products, for example all products WHERE id = "spotlights", and this works great: it means I can add new products just to the database. But I still have to add the second page manually, e.g. going from the product div on the main page through to www.example.com/spotlight_1.php.

    Is there any way in PHP to carry data from my index.php (e.g. the ID) through to the next page, so that I can have a template product.php page and use a database pull to echo the product information required? So on index.php I click on the product with ID="1", and on the product.php page it loads the relevant data for product 1. I can write the SQL/MySQL calls myself; it's just the way to carry across a value from the previous page which I don't understand.

    P.S. All the IDs are stored in the database already as one- to three-digit values, e.g. 3, 93, or 254. Any advice, as always, is greatly appreciated.

    Regards, Henry
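
    The usual pattern is to put the id in the link's query string and read it back with $_GET on the template page; a minimal sketch (table and column names are illustrative):

        // index.php: link each product through to the template page
        echo '<a href="product.php?id=' . (int) $row['id'] . '">'
           . htmlspecialchars($row['name']) . '</a>';

        // product.php: read the id back and fetch the matching row
        $id = isset($_GET['id']) ? (int) $_GET['id'] : 0;   // the cast guards against injection
        $result  = mysql_query("SELECT * FROM products WHERE id = $id");
        $product = mysql_fetch_assoc($result);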

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes, I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But I can't seem to get a particular query working: each row contains NULL in cool_json when it should contain a JSON object built from the values of multiple JOINs. Here's the general idea:

        SELECT CONCAT(
            "{",
            "\"some_list\":[",  GROUP_CONCAT( DISTINCT t1.id ), "],",
            "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],",
            "}"
        ) cool_json
        FROM table_name tn
        INNER JOIN ( some_table st )
            ON st.some_id = tn.id
        LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 )
            ON st.id = at.some_id
            AND at.different_id = ao.different_id
            AND ao.different_id = t1.id
        LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 )
            ON st.id = at2.some_id
            AND at2.different_id = ao2.different_id
            AND ao2.different_id = t2.id
        GROUP BY tn.id
        ORDER BY tn.name

    Does anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing one LEFT JOIN and one GROUP_CONCAT, but with multiple JOINs / GROUP_CONCATs it breaks. When I move the GROUP_CONCATs out of the cool_json field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.
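
    Two things to check: CONCAT() returns NULL as soon as any argument is NULL, and GROUP_CONCAT() over a group where a LEFT JOIN matched nothing is NULL, so a single empty list nulls the entire string (the trailing comma before the closing brace is also invalid JSON). Wrapping each aggregate in IFNULL is the usual guard; a sketch of the select list:

        SELECT CONCAT(
            '{',
            '"some_list":[',  IFNULL(GROUP_CONCAT(DISTINCT t1.id), ''), '],',
            '"other_list":[', IFNULL(GROUP_CONCAT(DISTINCT t2.id), ''), ']',
            '}'
        ) AS cool_json
        -- ... same FROM / JOIN / GROUP BY clauses as above ...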

  • Complex SQL query, one-to-many relationship

    - by Ethan
    Hey SO, I have a query where I need to get:

    - a specific dog
    - all comments relating to that dog, and the user who posted each comment
    - all links to images of the dog, and the user who posted each link

    I've tried several things and can't figure out quite how to work it. Here's what I have (condensed so you don't have to wade through it all):

        SELECT d.dog_id, d.name,
               c.comment, c.date_added AS comment_date_added,
               u.username AS comment_username, u.user_id AS comment_user_id,
               l.link AS link, l.date_added AS link_date_added,
               u2.username AS link_username, u2.user_id AS link_user_id
        FROM dogs AS d
        LEFT JOIN comments AS c ON c.dog_id = d.dog_id
        LEFT JOIN users AS u ON c.user_id = u.user_id
        LEFT JOIN links AS l ON l.dog_id = d.dog_id
        LEFT JOIN users AS u2 ON l.user_id = u2.user_id
        WHERE d.dog_id = '1'

    It's sort of close to working, but it only returns the first comment and the first link, all as one big array with all the info I requested. There are multiple comments and links per dog, so I need it to give me all the comments and all the links. Ideally it would return an object with dog_id, name, comments (an array of the comments) and links (an array of the links); each comment would carry comment, date_added, username and user_id, and each link would carry link, date_added, username and user_id. It has to work even if there are no links or comments. I learned the basics of MySQL somewhat recently, but this is pretty far over my head. Any help would be wonderful. Thanks!
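
    Because comments and links are two independent one-to-many relations, a single joined statement multiplies their rows together (comments × links per dog), and SQL cannot return nested arrays anyway. A common approach is one query per relation, assembled in PHP; a sketch, using a hypothetical fetch_all() helper that returns all rows of a query:

        $rows = fetch_all("SELECT dog_id, name FROM dogs WHERE dog_id = 1");
        $dog  = $rows[0];

        $dog['comments'] = fetch_all(
            "SELECT c.comment, c.date_added, u.username, u.user_id
               FROM comments c JOIN users u ON u.user_id = c.user_id
              WHERE c.dog_id = 1");

        $dog['links'] = fetch_all(
            "SELECT l.link, l.date_added, u.username, u.user_id
               FROM links l JOIN users u ON u.user_id = l.user_id
              WHERE l.dog_id = 1");

        // $dog now has the nested shape described above; the arrays are simply
        // empty when there are no comments or links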

  • Optimal two variable linear regression SQL statement

    - by Dave Jarvis
    Problem

    I am looking to apply the y = mx + b equation (where m is SLOPE and b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are:

        SLOPE     =   0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL Code

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum(t.YEAR) * sum(t.YEAR * t.AMOUNT)) - (sum(t.AMOUNT) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data

    The data is visualized in a chart (not reproduced here).

    Questions

    1. How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values?
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!
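
    For question 1, one approach is to stage the inner SELECT once in a temporary table, compute the slope and intercept into user variables, and then apply y = mx + b per row; a sketch (the staging step reuses the inner SELECT verbatim):

        CREATE TEMPORARY TABLE t_year AS
          SELECT ... ;   -- the inner SELECT from above (AMOUNT, YEAR grouped by year)

        SELECT ((sum(YEAR) * sum(AMOUNT)) - (count(1) * sum(YEAR * AMOUNT))) /
               (power(sum(YEAR), 2) - count(1) * sum(power(YEAR, 2))),
               ((sum(YEAR) * sum(YEAR * AMOUNT)) - (sum(AMOUNT) * sum(power(YEAR, 2)))) /
               (power(sum(YEAR), 2) - count(1) * sum(power(YEAR, 2)))
          INTO @m, @b
          FROM t_year;

        -- y = mx + b evaluated against every staged row
        SELECT YEAR, AMOUNT, @m * YEAR + @b AS Y_FIT
          FROM t_year
         ORDER BY YEAR;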

  • Web page database query optimization

    - by morpheous
    I am putting together a web page which is quite 'expensive' in terms of database hits. I don't want to start optimizing at this stage; with a deadline to hit, I may end up not optimizing at all. Currently the page requires 18 (that's right, eighteen) hits to the db. I am already using joins, and some of the queries are UNIONed to minimize the trips to the db. My local dev machine can handle this (the page is not slow); however, I feel that if I release this into the wild, the number of queries will quickly overwhelm my database (MySQL).

    I could always use memcache or something similar, but I would much rather continue with the other dev work that needs to be completed before the deadline; at least retrieving the page works, and it's simply a matter of optimization now (if required). My question therefore is: is 18 db queries for a single page retrieval completely outrageous (i.e. should I put everything on hold and optimize the hell out of the retrieval logic), or shall I continue as normal, meet the deadline, release on schedule and see what happens?

    [Edit] Just to clarify, I have already done the 'obvious' things like using (single and composite) indexes for the fields used in the queries. What I haven't yet done is run a query analyzer to see if my indexes etc. are optimal.
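
    For that query-analyzer step, MySQL's EXPLAIN is enough to check the indexes: prefix any of the page's SELECTs with it and read the plan.

        EXPLAIN SELECT /* one of the page's queries */ ... ;
        -- per table in the plan: key = NULL means no index was used (full scan);
        -- a large `rows` estimate marks the queries worth optimizing first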

  • Memory leak in PHP script

    - by Jasper De Bruijn
    Hi, I have a PHP script that runs a MySQL query, then loops over the result, and in that loop also runs several queries:

        $sqlstr = "SELECT * FROM user_pred WHERE uprType != 2 AND uprTurn=$turn
                   ORDER BY uprUserTeamIdFK";
        $utmres = mysql_query($sqlstr)
            or trigger_error($termerror = __FILE__." - ".__LINE__.": ".mysql_error());
        while ($utmrow = mysql_fetch_array($utmres, MYSQL_ASSOC)) {
            // some stuff happens here
            // echo memory_get_usage() . " - 1241<br/>\n";
            $sqlstr = "UPDATE user_roundscores SET ursUpdDate=NOW(), ursScore=$score
                       WHERE ursUserTeamIdFK=$userteamid";
            if (!mysql_query($sqlstr)) {
                $err_crit++;
                $cLog->WriteLogFile("Failed to UPDATE user_roundscores record for user $userid - teamuserid: $userteamid\n");
                echo "Failed to UPDATE user_roundscores record for user $userid - teamuserid: $userteamid<br>\n";
                break;
            }
            unset($sqlstr);
            // echo memory_get_usage() . " - 1253<br/>\n";
            // some stuff happens here too
        }

    The UPDATE query never fails. For some reason, between the two calls to memory_get_usage(), some memory is added. Because the big loop runs about 500,000 or more times, in the end it really adds up to a lot of memory. Is there anything I'm missing here? Could it perhaps be that the memory is not actually added between the two calls, but at another point in the script?

    Edit: some extra info. Before the loop it's at about 5 MB, after the loop about 440 MB, and every UPDATE query adds about 250 bytes (the rest of the memory gets added at other places in the loop). The reason I didn't post more of the "other stuff" is that it's about 300 lines of code. I posted this part because it looks to be where the most memory is added.
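
    One known cause of exactly this symptom with the old mysql extension is the mysql.trace_mode INI setting: when it is on, every mysql_query() call allocates trace/warning bookkeeping that is not freed until the script ends, a few hundred bytes at a time. Worth ruling out before hunting through the 300 lines; a sketch:

        // at the top of the script: rule out trace-mode bookkeeping
        ini_set('mysql.trace_mode', '0');

        // and free SELECT result sets as soon as you are done with them
        // (UPDATE queries return a boolean, so there is nothing to free per row)
        mysql_free_result($utmres);   // after the loop finishes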

  • Designing a general database interface in PHP

    - by lamas
    I'm creating a small framework for my web projects in PHP so I don't have to do the basic work over and over again for every new website. It is not my goal to create a second CakePHP or CodeIgniter, and I'm also not planning to build my websites with any of the available frameworks, as I generally prefer to use things I've created myself. I have no problem designing the framework when it comes to parts like the core structure, request handling, and so on, but I'm getting stuck designing the database interface for my modules. I've already thought about using the MVC pattern, but decided it would be a bit of overkill.

    So the exact problem I'm facing is how my framework's modules (viewCustomers could be a module, for example) should interact with the database:

    - Is it a good idea to write SQL directly in PHP (mysql_query('SELECT firstname, lastname ...'))?
    - How could I abstract a query like SELECT firstname, lastname FROM customers WHERE id=X?
    - Would MySQL helper functions like $this->db->get(array('firstname', 'lastname'), array('id' => X)) be a good idea? I suppose not, because they actually make everything more complicated by requiring arrays to be created and passed around.
    - Is the Model pattern from MVC my only real option?
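
    A middle ground between raw mysql_query() calls and array-based helpers is a thin wrapper over PDO prepared statements: modules keep writing plain, readable SQL, but parameter binding and fetching become one call. A minimal sketch:

        class Db
        {
            private $pdo;

            public function __construct(PDO $pdo) { $this->pdo = $pdo; }

            // run any SQL with bound parameters, return all rows
            public function fetchAll($sql, array $params = array())
            {
                $stmt = $this->pdo->prepare($sql);
                $stmt->execute($params);
                return $stmt->fetchAll(PDO::FETCH_ASSOC);
            }
        }

        // a module keeps its SQL explicit and injection-safe:
        $rows = $db->fetchAll('SELECT firstname, lastname FROM customers WHERE id = ?',
                              array($customerId));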

  • User Friendly Video Review with Locking

    - by James Cori
    We have a jQuery/PHP/MySQL system that allows a user to log in and review videos built by a system for online viewing. When a user begins reviewing a video, the video is marked as such. But now we've cornered ourselves into the classic browser-based application problem: the user navigating away or closing the browser without completing the review. That video then enters a state of limbo, constantly "being reviewed" but never completed, and never re-entering the queue. Our options are:

    - Build a service (we already have others) to find review sessions that are outside a duration boundary and reset them back into the queue.
    - Reset review sessions outside a duration boundary when that user logs in. Essentially, if a user locks a video for review, it is unlocked the next time they log in.
    - A suggestion made to me: use the PHP/Apache session length and, on expiration, reset any pending review jobs. I don't even know where to look to implement this; this is one project on a shared server, so it shouldn't be an Apache config, and the reset mechanism would need the database credentials to be able to reset anything.
    - The worst solution, which everyone hates: preventing the user from navigating away with JavaScript, asking "Are you sure?!"

    This system is used by a few hired reviewers, so I'm not exactly dealing with the public here, but I can't prevent users from sharing logins for speedier review, which rules out the second option above because it would unlock a video being reviewed by someone else using the same login.
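
    The first option usually comes down to a single UPDATE run from cron: treat any review lock older than some cutoff as abandoned and put it back in the queue. A sketch, assuming a status column and a locked_at timestamp (names illustrative):

        -- run every few minutes from cron; anything locked > 30 minutes
        -- is treated as abandoned and returned to the queue
        UPDATE videos
           SET status      = 'queued',
               reviewer_id = NULL,
               locked_at   = NULL
         WHERE status = 'reviewing'
           AND locked_at < NOW() - INTERVAL 30 MINUTE;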

  • Optimal two variable linear regression calculation

    - by Dave Jarvis
    Problem

    I am looking to apply the y = mx + b equation (where m is SLOPE and b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are:

        SLOPE     =   0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL Code

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum(t.YEAR) * sum(t.YEAR * t.AMOUNT)) - (sum(t.AMOUNT) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data

    The data is visualized in a chart (not reproduced here).

    Question

    The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65).

    Related Sites

    - Least absolute deviations
    - Robust regression

    Thank you!
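
    One likely cause of the skew is in the inner query: it groups by Y.YEAR but selects the bare D.AMOUNT, so MySQL hands back an arbitrary daily value for each year rather than a yearly figure, and the line is fit to noise. Aggregating the amount explicitly is worth trying first; a sketch of the inner SELECT:

        SELECT AVG(D.AMOUNT) AS AMOUNT,   -- or SUM(...), whichever the model intends
               Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
         WHERE /* same predicates as above */
         GROUP BY Y.YEAR
         ORDER BY Y.YEAR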

  • PHP News Feed Database & Design

    - by pws5068
    I'm designing a news feed system using PHP/MySQL, similar to Facebook's. I have asked a similar question before, but now I've changed the design and I'm looking for feedback. Example news items:

    - User_A commented on User_B's new album: "Hey man, nice pictures!"
    - User_B added a new photo to [his/her] profile. [show photo thumbnail]

    Initially I implemented this using excessive columns for Obj1:Type1 | Obj2:Type2 | etc. Now the design uses a couple of special keywords and actor/receiver relationships. My database uses a table of messages joined on a table containing userid, actionid, receiverid, receiverObjectTypeID. Here's a condensed version of what it will look like once joined:

        News_ID | User_ID | Message                                 | Timestamp
        2643    | A       | %a commented on %o's new %r.            | SomeTimestamp
        2644    | B       | %a added a new %r to [his/her] profile. | SomeTimestamp

    where:

        %a = the User_ID of the person doing the action
        %r = the receiving object
        %o = the owner of the receiving object (e.g. the owner of the album; NULL if %r is a user)

    Questions:

    - Is this a smart (efficient/scalable) way to move forward?
    - How can I show messages like "User_B added 4 new photos to his profile."?
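
    Rendering is then a substitution pass over each row, and aggregate messages like "added 4 new photos" usually come from grouping consecutive events with the same actor and type before substitution. A sketch of the substitution step (the lookup helpers are hypothetical placeholders):

        // $row comes from the joined feed query
        $message = str_replace(
            array('%a', '%o', '%r'),
            array(
                username_for($row['userid']),                  // actor
                owner_for($row['receiverid']),                 // owner of the receiver
                receiver_label($row['receiverid'],
                               $row['receiverObjectTypeID'],
                               $row['item_count']),            // "a new photo" / "4 new photos"
            ),
            $row['message']
        );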

  • Database frontend for multiple db engines

    - by xeroxed_yeti
    Hey Stack Overflow. Yeah, it's spring, and a lot of things are happening to me. I'm also changing some software on my computer, because suddenly everything seemed boring after starting my laptop. I even changed my wallpaper! Besides that, I'm looking for a new database frontend, and after several Google queries I didn't find the right software. You have to know, my laptop and I are very, very special :) I'm looking for a database frontend with the following features:

    - can access PostgreSQL and MySQL databases
    - can handle schemata
    - offers a nice SQL query tool
    - supports import and export (something like tab-separated text files)
    - is free
    - looks awesome; every time a colleague comes to my office he should get the feeling: oh boy, this man really knows his job and should get more money!

    At the moment I use phpMyAdmin, phpPgAdmin, pgAdmin III, mysqladmin and DbVisualizer. Furthermore, I was a big fan of Aqua Data Studio until it became commercial. That tool offers a great variety of functionality that can simplify a programmer's life; however, now you have to buy a license, and I'm a scientist, so money for software is limited =) This is my first question here at Stack Overflow, so please be cheerful :)

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys,

    We have a MySQL database table for products. We are using a cache layer to reduce database load, but we think it's a good idea to minimize the actual data that needs to be stored in the cache layer, to speed the application up further. All the products in the database that are visible to visitors have a price attached to them. The prices are stored in a different table, called prices; there are multiple price categories, depending on which discount level each visitor (customer) qualifies for. From time to time there are campaigns, which means a special price for each product is available. The special prices are stored in a table called specials.

    Is it bad to make a table that binds these tables together? It would only hold the necessary information and would of course be cached:

        productId | hasPrice | hasSpecial
        ----------|----------|-----------
        1         | 1        | 0
        2         | 1        | 1

    By doing this, it would be super easy to know whether a specific product really has a price, without having to iterate through the complete prices or specials table each time a product is listed or presented. Are tables like this a common thing for web applications, or is it just bad design?
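
    What is described is a summary (lookup) table rather than a true TEMPORARY table, and it is a common pattern; it can be rebuilt in one statement from cron or refreshed whenever prices change. A sketch (names illustrative):

        CREATE TABLE product_flags (
            productId  INT UNSIGNED NOT NULL PRIMARY KEY,
            hasPrice   TINYINT(1)   NOT NULL,
            hasSpecial TINYINT(1)   NOT NULL
        );

        REPLACE INTO product_flags (productId, hasPrice, hasSpecial)
        SELECT p.id,
               EXISTS (SELECT 1 FROM prices   pr WHERE pr.product_id = p.id),
               EXISTS (SELECT 1 FROM specials sp WHERE sp.product_id = p.id)
          FROM products p;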

  • How to handle dates that repeat indefinitely

    - by Addsy
    I am implementing a fairly simple calendar on a website using PHP and MySQL. I want to be able to handle dates that repeat indefinitely and am not sure of the best way to do it. For a time-limited repeating event, it seems to make sense to just add each occurrence within the timeframe to my db table and group them with some form of recursion id. But when there is no limit to how often the event repeats, is it better to:

    a) put records in the db for a specific time frame (e.g. the next 2 years) and then periodically check and add new records as time goes by? The problem with this is that if someone looks 3 years ahead, the event won't show up.

    b) not actually have records for each occurrence, but instead, when my PHP code checks for events within a specified time period, calculate whether a repeated event falls within that period? The problem here is that there isn't a specific record for each occurrence, which I can see being a pain when I then want to associate other info (attendance etc.) with it. It also seems like it might be a bit slow.

    Has anyone tried either of these methods? If so, how did it work out? Or is there some other ingenious, crafty method I'm missing?
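
    A common middle path between (a) and (b) is to store the recurrence rule once, compute occurrences on the fly when displaying, and create a concrete row lazily only when an occurrence needs data attached (attendance etc.). A schema sketch (names and columns illustrative):

        CREATE TABLE events (
            id           INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            title        VARCHAR(255) NOT NULL,
            starts_at    DATETIME     NOT NULL,
            repeat_unit  ENUM('none','day','week','month','year') NOT NULL DEFAULT 'none',
            repeat_step  INT UNSIGNED NOT NULL DEFAULT 1,   -- every N units
            repeat_until DATE NULL                          -- NULL = repeats forever
        );

        -- created lazily, only when an occurrence gains attendance or other data
        CREATE TABLE event_occurrences (
            event_id  INT UNSIGNED NOT NULL,
            occurs_on DATE         NOT NULL,
            PRIMARY KEY (event_id, occurs_on)
        );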

  • How to get database table header information into a CSV file

    - by Rachel
    I am trying to connect to the database, get the current state of a table, and write that information to a CSV file. With the code below I am able to get the data into the CSV file, but not the header information from the database table. So my question is: how can I get the database table's header information into the CSV file?

        $config['database'] = 'sakila';
        $config['host']     = 'localhost';
        $config['username'] = 'root';
        $config['password'] = '';

        $d = new PDO('mysql:dbname='.$config['database'].';host='.$config['host'],
                     $config['username'], $config['password']);

        $query = "SELECT * FROM actor";
        $stmt = $d->prepare($query);

        // Execute the statement
        $stmt->execute();

        var_dump($stmt->fetch(PDO::FETCH_ASSOC));

        $data = fopen('file.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    By header information I mean the column names. For example, with this data:

        Vehicle | Build | Model
        car     | 2009  | Toyota
        jeep    | 2007  | Mahindra

    the header information would be: Vehicle, Build, Model. Any guidance would be highly appreciated.
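
    Since PDO::FETCH_ASSOC keys each row by column name, the header line is just array_keys() of the first row, written once before the data rows. Note also that the var_dump() call above consumes the first row, so it never reaches the loop. A sketch:

        $data  = fopen('file.csv', 'w');
        $first = true;

        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            if ($first) {
                fputcsv($data, array_keys($row));   // column names as the header line
                $first = false;
            }
            fputcsv($data, $row);                   // then the data row itself
        }
        fclose($data);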

  • Fastest way to convert a list of doubles to a unique list of integers?

    - by javanix
    I am dealing with a MySQL table that is keyed in a somewhat unfortunate way. Instead of using an auto-increment column as a key, it uses a column of decimals to preserve order (presumably so it's not too difficult to insert new rows while preserving a primary key and order). Before I go through and redo this table into something more sane, I need to figure out how to rekey it without breaking everything.

    What I would like is something that takes a list of doubles (the current keys) and outputs a list of integers (which can be cast back to doubles for rekeying). For example, input {1.00, 2.00, 2.50, 2.60, 3.00} would give output {1, 2, 3, 4, 5}. Since this is a database, I also need to be able to update the rows nicely:

        UPDATE table SET `key`='3.00' WHERE `key`='2.50';

    Can anyone think of a speedy algorithm to do this? My current thought is to read all of the doubles into a vector, take the size of the vector, and output a new vector with values from 1 to doubleVector.size. This seems pretty slow, since you wouldn't want to read every value into the vector if, for instance, only the last n/100 elements needed to be modified.

    I think there is probably something I can do in place, since only values after the first non-integer double need to be modified, but I can't for the life of me figure out anything that would let me update in place as well. For instance, setting 2.60 to 3.00 the first time you see 2.50 in the original key list would result in an error, since the key value 3.00 is already used in the table.
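
    MySQL can do the whole renumbering in place with a user variable, walking the rows in key order and assigning consecutive integers; shifting every key out of range first avoids exactly the 2.50-to-3.00 collision described. A sketch (assuming the key column stays numeric until a final ALTER):

        SET @i := 0;

        -- pass 1: move every key out of the target range so no new value
        -- collides with an old one under the unique/primary key
        UPDATE table_name SET `key` = `key` + 1000000;

        -- pass 2: assign 1..N in the original order
        UPDATE table_name
           SET `key` = (@i := @i + 1)
         ORDER BY `key`;

        -- optionally tighten the column type afterwards
        ALTER TABLE table_name MODIFY `key` INT UNSIGNED NOT NULL;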

  • Detect how many times the user has clicked the button

    - by Jerry
    Hello guys. I just want to know if there is a way to detect how many times a user has clicked a button, using jQuery. My main application has a button that adds input fields; the user can add as many input fields as they need. When they submit the form, the add page inserts the data into my database. My current idea is to create a hidden input field with its value set to zero; every time the user clicks the button, jQuery updates the hidden field's value, and the add page then knows how many times to loop. See the example below. I just want to know if there is a better practice for this. Thanks for the help.

    Main page:

        <form method='post' action='add.php'>
            <!-- omitted -->
            <input type="hidden" id="add" name="add" value="0"/>
            <input type="button" id="addMatch" value="Add a match"/>
            <!-- omitted -->
        </form>

    jQuery:

        $(document).ready(function () {
            var a = 0;
            $("#addMatch").live('click', function () {
                // the input field is appended as many times as the user wants
                $('#table').append("<input name='match" + a + "Name' />");
                a++;
                $('#add').attr('value', a); // pass the count to the hidden input field
                return false;
            });
        });

    Add page:

        $a = $_POST['add'];
        for ($k = 0; $k < $a; $k++) {
            // get each matchName input field
            $matchName = $_POST['match' . $k . 'Name'];
            // insert the match
            $updateQuery = mysql_query("INSERT INTO game (team) VALUES ('$matchName')", $connection);
            if (!$updateQuery) {
                die('MySQL error: ' . mysql_error());
            }
        }
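
    An alternative that avoids the counter entirely: give every appended field the same array-style name (e.g. match_name[]) in the jQuery append, then count whatever actually arrives in $_POST; this also stays correct if fields are ever removed client-side. The PHP side, sketched:

        // add.php: insert however many fields arrived; no hidden counter needed
        $names = isset($_POST['match_name']) ? (array) $_POST['match_name'] : array();
        foreach ($names as $name) {
            $name = mysql_real_escape_string($name, $connection);
            if (!mysql_query("INSERT INTO game (team) VALUES ('$name')", $connection)) {
                die('MySQL error: ' . mysql_error());
            }
        }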

  • Problem fetching data from several tables in one query

    - by Mac Taylor
    Hey guys, in an attempt to combine my queries into one query to the database, I now need to get the usernames of the first poster and the last poster of a topic in my forums. Here is my code as it stands:

        $result = $db->sql_query("SELECT t.*, p.*, u.*,
                   SUM(t.topic_approved='1') AS Amount_Of_Topics,
                   SUM(p.post_approved ='1') AS Amount_Of_Posts
              FROM bb3topics t, bb3posts p, bb3users u
          GROUP BY t.topic_last_post_id
          ORDER BY t.topic_last_post_id DESC LIMIT 10");
        while ($row = $db->sql_fetchrow($result)) {
            $Amount_Of_Topics = $row['Amount_Of_Topics'];
            $Amount_Of_Posts  = $row['Amount_Of_Posts'];
            $Amount_Of_Topic_Replies = $Amount_Of_Topic_Replies + $row['topic_replies'];
            $Amount_Of_Topic_Views   = $Amount_Of_Topic_Views + $row['topic_views'];
            $topic_id           = $row['topic_id'];
            $forum_id           = $row['forum_id'];
            $topic_last_post_id = $row['topic_last_post_id'];
            $topic_title        = $row['topic_title'];
            $topic_poster       = $row['topic_poster'];
            $topic_views        = $row['topic_views'];
            $topic_replies      = $row['topic_replies'];
            $topic_moved_id     = $row['topic_moved_id'];
            $topic_time         = $row['topic_time'];

            $result2 = $db->sql_query("SELECT topic_id, poster_id, post_time
                                         FROM bb3posts WHERE post_id = '$topic_last_post_id'");
            list($topic_id, $poster_id, $post_time) = $db->sql_fetchrow($result2);

            $result3 = $db->sql_query("SELECT username, user_id FROM bb3users
                                        WHERE user_id='$poster_id'");
            list($uname, $uid) = $db->sql_fetchrow($result3);
            $LastPoster = "$uname";

            $result4 = $db->sql_query("SELECT username, user_id FROM bb3users
                                        WHERE user_id='$topic_poster'");
            list($uname, $uid) = $db->sql_fetchrow($result4);
            $OrigPoster = "$uname";

    Now I need to query all of this together, not in separate queries. I tried using LEFT JOIN but it didn't work. Which MySQL join should I use?
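
    The three per-row lookups can fold into the main query by joining bb3users twice under different aliases, once for the topic starter and once (via the last post row) for the last poster. A sketch:

        SELECT t.*,
               op.username      AS orig_poster,
               lp_user.username AS last_poster,
               lp.post_time     AS last_post_time
          FROM bb3topics t
          JOIN bb3users  op          ON op.user_id = t.topic_poster        -- first poster
          LEFT JOIN bb3posts lp      ON lp.post_id = t.topic_last_post_id  -- last post row
          LEFT JOIN bb3users lp_user ON lp_user.user_id = lp.poster_id     -- last poster
         ORDER BY t.topic_last_post_id DESC
         LIMIT 10;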

  • SQL Inner Join: DB stuck

    - by SurfingCat
    I posted this question a few days ago, but I didn't explain exactly what I want, so I am asking it again, better formulated and with some added information.

    I have a MySQL DB with MyISAM tables. The two relevant tables are:

    - orders_products: orders_products_id, orders_id, product_id, product_name, product_price, product_model, final_price, ...
    - products: products_id, manufacturers_id, ...

    (For full information about the tables, see the screenshots of products and orders_products, not reproduced here.)

    Now what I want is this: get all orders that include products with manufacturers_id = 1, along with the name of each such product, grouped by order. What I have so far is:

        SELECT op.orders_id, p.products_id, op.products_name,
               op.products_price, op.products_quantity
        FROM orders_products op, products p
        INNER JOIN products ON op.products_id = p.products_id
        WHERE p.manufacturers_id = 1
        AND p.orders_id > 10000

    (orders_id > 10000 is just for testing, to get only a few order ids.) But this query takes a lot of time to execute, if it even works; the SQL server got stuck twice. Where is the mistake?
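
    The FROM clause lists products twice, once through the comma join and once through the INNER JOIN, so MySQL either rejects the duplicate table or builds an enormous cross product; orders_id also lives on orders_products, not products. A corrected sketch:

        SELECT op.orders_id,
               p.products_id,
               op.products_name,
               op.products_price,
               op.products_quantity
          FROM orders_products op
          INNER JOIN products p ON p.products_id = op.products_id
         WHERE p.manufacturers_id = 1
           AND op.orders_id > 10000        -- orders_id belongs to orders_products
         ORDER BY op.orders_id;

    An index on orders_products.products_id would also help the join, if one does not already exist.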

  • How to prevent PHP variables from being arrays?

    - by MJB
    I think that (the title) is the problem I am having. I set up a MySQL connection, I read an XML file, and then I insert those values into a table by looping through the elements. The problem is that instead of inserting only 1 record, I sometimes insert 2 or 3 or 4. It seems to depend on the previous values I have read. I think I am reinitializing the variables, but I guess I am missing something, hopefully something simple. Here is my code (I originally had about 20 columns, but I shortened the included version to make it easier to read):

        $ctr = 0;
        $sql = "insert into csd (id,type,nickname,hostname,username,password) " .
               "values (?,?,?,?,?,?)";
        $cur = $db->prepare($sql);

        for ($ctr = 0; $ctr < $expected_count; $ctr++) {
            list($lbl, $type, $nickname, $hostname, $username, $password) = "";
            $bind_vars = array();

            $lbl      = "csd_{$ctr}";
            $type     = $ref->itm->csds->$lbl->type;
            $nickname = $ref->itm->csds->$lbl->nickname;
            $hostname = $ref->itm->csds->$lbl->hostname;
            $username = $ref->itm->csds->$lbl->username;
            $password = $ref->itm->csds->$lbl->password;

            $bind_vars = array($id, $type, $nickname, $hostname, $username, $password);
            $res = $db->execute($cur, $bind_vars);

            # CountCSDs() is a separate function that only does SELECTs and
            # cannot be the problem; I include it because I want to count the
            # total rows.
            printf("%d CSDs on that ITEM now.\n", CountCSDs($id_to_sync));
        }

    P.S. I also tagged this SimpleXML because that is how I am reading the file, though that code is not included above. It looks like this:

        $Ref = simplexml_load_file($file);
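
    A likely culprit: SimpleXML property reads do not return strings. Each $ref->itm->csds->$lbl->type is a SimpleXMLElement that can stand for a whole list of matching nodes, which is how a "variable" ends up behaving like an array when bound. Casting at assignment pins each value to a single scalar; a sketch:

        $type     = (string) $ref->itm->csds->$lbl->type;
        $nickname = (string) $ref->itm->csds->$lbl->nickname;
        $hostname = (string) $ref->itm->csds->$lbl->hostname;
        $username = (string) $ref->itm->csds->$lbl->username;
        $password = (string) $ref->itm->csds->$lbl->password;

        // if a CSD can legitimately contain repeated nodes, pick one explicitly:
        $type = (string) $ref->itm->csds->$lbl->type[0];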

  • SQL Query to duplicate records based on If statement

    - by user328371
    Hi, I'm trying to write an SQL query that will duplicate records depending on a field in another table. I am running MySQL 5. (I know duplicating records suggests the database structure is bad, but I did not design the database and am not in a position to redo it all; it's a Shopp e-commerce database running on WordPress.)

    Each product with a particular attribute needs a link to the same few images, so the product will need a row per image in a table; the database doesn't actually contain the image, just its filename. (The images are clipart for a customer to select from.) Based on these records...

        SELECT * FROM `wp_shopp_spec` WHERE name='Can Be Personalised' AND content='Yes'

    ...I want to do something like this: for each record that matches that query, copy records 5134-5139 from wp_shopp_asset, but change the id so it's unique and set the cell in column 'parent' to the value of 'product' from the table wp_shopp_spec. This will mean 6 new records are created for each record matching the above query, all with the same value in 'parent' but with unique ids, and every other column copied from the originals (i.e. records 5134-5139).

    Hope that's clear enough; any help greatly appreciated.
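
    INSERT ... SELECT with a join between the matching spec rows and the six template asset rows produces all the copies in one statement; new ids come from AUTO_INCREMENT by omitting the id column, and parent is taken from each spec row. A sketch (the non-key columns shown are illustrative; list the real ones from the schema):

        INSERT INTO wp_shopp_asset (parent, context, name, value /* , ... other columns */)
        SELECT s.product,                        -- new parent from the matching spec row
               a.context, a.name, a.value /* , ... */
          FROM wp_shopp_spec s
          JOIN wp_shopp_asset a
            ON a.id BETWEEN 5134 AND 5139        -- the six template image rows
         WHERE s.name = 'Can Be Personalised'
           AND s.content = 'Yes';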

  • A good solution for displaying galleries with Lytebox and PHP

    - by Johann
    Hello, I have thought for a while about an issue with the loading of images on a website I programmed (for fun and the experience). The programming language is PHP with MySQL as the database; it also uses JavaScript, but not extensively. I recently realized that the engine I programmed, while it has its smart solutions, also carries a lot of flaws and redundant code. I have therefore decided to make a new one, now incorporating what I know but didn't when I started the previous project.

    The new system will have an option to add galleries to a site and upload images to them. I have used the JavaScript image viewer Lytebox before: the screen goes dark and an image appears, with "Previous" and "Next" buttons to view the other images. The problem is that I used groups with Lytebox and the images themselves, resized, as thumbs. This causes Lytebox to work only after all the images have loaded; if you click a link before that, the image is shown as if you right-clicked and chose "Show image". Information about these images is parsed from a database using a while statement with a counter that goes from 0 to sizeof().

    I'm thinking it probably isn't a good idea to use the full images as the thumbs, even if you restrict the upload size. Likewise, generating thumbs at upload also seems like a hassle. It would also be practical if the thumbs didn't show up before they were fully loaded. Has anyone got any good tips? Any help would be appreciated. Johann
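
    Generating a real thumbnail once at upload is usually less of a hassle than it looks, and it fixes both problems at once: pages stop loading full-size images, and a link clicked early opens a small file rather than a half-loaded original. A GD sketch for JPEGs (the width and quality values are illustrative):

        function make_thumb($srcPath, $dstPath, $thumbWidth = 150)
        {
            list($w, $h) = getimagesize($srcPath);
            $thumbHeight = (int) round($h * $thumbWidth / $w);   // keep aspect ratio

            $src = imagecreatefromjpeg($srcPath);
            $dst = imagecreatetruecolor($thumbWidth, $thumbHeight);
            imagecopyresampled($dst, $src, 0, 0, 0, 0,
                               $thumbWidth, $thumbHeight, $w, $h);

            imagejpeg($dst, $dstPath, 85);
            imagedestroy($src);
            imagedestroy($dst);
        }

        // at upload time:
        make_thumb($uploadedFile, $thumbDir . '/' . $fileName);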

  • One table is shared between several websites

    - by sami
    I have a static table that's shared by several websites. By static, I mean that the data is read but never updated by the websites. Currently all the websites are served from the same server, but that may change. I want to minimize the need to create and maintain this table for each of the websites, so I thought about turning it into an XML file stored in a shared library that all the websites can access.

    The problem is that I use an ORM and foreign key constraints to ensure the integrity of the ids used from that table. If I move that table out of the MySQL database into an XML file, will this affect the integrity of the ids coming from that table? My table looks like this:

        <table name="entry">
            <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
            <column name="title" type="VARCHAR" size="500" required="true" />
        </table>

    and I use it as a foreign key in other tables:

        <table name="refer">
            <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
            <column name="linkto" type="INTEGER"/>
            <foreign-key foreignTable="entry">
                <reference local="linkto" foreign="id" />
            </foreign-key>
        </table>

    So I'm wondering: if I take that table out of the database, is there a way to retain that referential integrity? And of course, are there any other efficient ways to do the same thing? I just don't want to have to repeat that table for several websites.
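
    Moving the table into an XML file gives up database-enforced integrity entirely; nothing will stop a refer row from pointing at a deleted id. If the sites stay on one MySQL server, an alternative that keeps integrity is a shared schema that every site's foreign keys point into, which MySQL allows across databases on the same server (InnoDB enforces it; the ORM mapping above would just need to schema-qualify the table). A sketch:

        -- the entry table lives once, in its own shared schema
        CREATE TABLE shared.entry (
            id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            title VARCHAR(500) NOT NULL
        ) ENGINE=InnoDB;

        -- each site's tables reference it across schemas
        CREATE TABLE site_a.refer (
            id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            linkto INT UNSIGNED NOT NULL,
            FOREIGN KEY (linkto) REFERENCES shared.entry (id)
        ) ENGINE=InnoDB;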

  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index: the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Running the queries on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week.

    My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: Is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table:

    1. Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created.
    2. Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table, occasionally dropping older entries. Copying only entries with a strictly greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).
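
    The dropped-entry problem in approach 2 comes from the boundary: rows sharing the current maximum timestamp can still arrive later. Deleting the boundary instant and re-copying it with >= keeps the step idempotent; a sketch (column names illustrative):

        -- incremental refresh, run periodically
        SET @last := (SELECT COALESCE(MAX(ts), '1970-01-01') FROM aux_table);

        DELETE FROM aux_table WHERE ts = @last;   -- re-copy the boundary instant

        INSERT INTO aux_table
        SELECT * FROM original_table
         WHERE ts >= @last;

        -- trim to the one-week window
        DELETE FROM aux_table WHERE ts < NOW() - INTERVAL 7 DAY;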
