Search Results

Search found 68715 results on 2749 pages for 'mysql data'.

  • Slow MySQL Query not using filesort

    - by Canadaka
    I have a query on my homepage that is getting slower and slower as my database table grows larger. The table is tweets_cache, with 572,327 rows. This is the query I'm currently using, which is slow (over 5 seconds):

        SELECT * FROM tweets_cache t
        WHERE t.province = '' AND t.mp = '0'
        ORDER BY t.published DESC
        LIMIT 50;

    If I take out either the WHERE or the ORDER BY, the query is super fast: 0.016 seconds. I have the following indexes on the tweets_cache table: PRIMARY, published, mp, category, province, author. So I'm not sure why it's not using the indexes, since mp, province and published all have indexes. Profiling the query shows that it's not using an index to sort and falls back to filesort, which is really slow:

        possible_keys = mp,province
        Extra = Using where; Using filesort

    I tried adding a new multi-column index on "province & mp". EXPLAIN shows the new index listed under both "possible_keys" and "key", but the query time is unchanged, still over 5 seconds. Here is a screenshot of the profiler info on the query: http://i355.photobucket.com/albums/r469/canadaka_bucket/slow_query_profile.png

    Something weird: I made a dump of my database to test on my local desktop so I don't screw up the live site. The same query on my local machine runs super fast, in milliseconds. So I copied all the MySQL startup variables from the server to my local machine to make sure there wasn't some setting that might be causing this. But even after that, the local query runs super fast while the one on the live server takes over 5 seconds. My database server is only using around 800MB of the 4GB it has available. Here are the related my.ini settings I'm using:

        default-storage-engine = MYISAM
        max_connections = 800
        skip-locking
        key_buffer = 512M
        max_allowed_packet = 1M
        table_cache = 512
        sort_buffer_size = 4M
        read_buffer_size = 4M
        read_rnd_buffer_size = 16M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 128M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 8
        # Disable Federated by default
        skip-federated
        key_buffer = 512M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M
        key_buffer = 512M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M
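
    In case it helps anyone with the same symptom: a filesort here usually means no single index covers both the WHERE equality columns and the ORDER BY column. A minimal sketch of a composite index MySQL could use for both filtering and sorting (the index name is just illustrative):

        -- Equality columns first, then the sort column, so the
        -- ORDER BY ... DESC can be satisfied by reading the index.
        ALTER TABLE tweets_cache
            ADD INDEX idx_province_mp_published (province, mp, published);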

  • Encoding MySQL text fields into UTF-8 text files - problems with special characters

    - by Matt Andrews
    I'm writing a PHP script to export MySQL database rows into a .txt file formatted for Adobe InDesign's internal markup. Exports work, but when I encounter special characters like é or umlauts, I get weird symbols (e.g. ChloÃ« Hanslip instead of Chloë Hanslip). Rather than running a search and replace for every possible weird character, I need a better method. I've checked that when the text hits the database, it's saved properly - in the database I see the special characters. My export code basically runs some regular expressions to put in the InDesign code tags, and I'm left with the weird symbols. If I just output the text to the browser (rather than prompting for a text file download), it displays properly. When I save the file I use this code:

        header("Content-disposition: attachment; filename=test.txt");
        header("Content-Type: text/plain; charset=utf-8");

    I've tried various combinations of utf8_encode() and iconv() to no avail. Can anybody point me in the right direction here?
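
    Since the same bytes display correctly in the browser, the file contents are probably fine and the consuming application is just guessing the wrong encoding. A hedged sketch of one common fix - prepending a UTF-8 byte-order mark so editors and InDesign detect UTF-8 instead of assuming Latin-1 ($output is a placeholder for the tagged text; if the data were also mangled in the database, the connection charset via mysql_set_charset('utf8', ...) would be the thing to check instead):

        header("Content-disposition: attachment; filename=test.txt");
        header("Content-Type: text/plain; charset=utf-8");
        echo "\xEF\xBB\xBF";  // UTF-8 BOM: lets the receiving app detect the encoding
        echo $output;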

  • Load Balancing of PHP/MYSQL script without big code changes

    - by DR.GEWA
    Sorry for my dummy question, but... I am making a script in PHP/MySQL (CodeIgniter) and I am extremely interested in knowing whether there is a way to add load balancing without big architectural changes to the script. I mean that, for example, now I will rent a medium dedicated server with 2GB RAM, 200GB disk and a good processor, and this will be enough for, let's say, half a year for the users who will come. But when they become more and more numerous - it's a social net, and at night the server can expect to have 500-1500 or 5000-8000 users online - I wonder if there is a way to simply add a second server with some config which will bear the next wave of pressure. And after that another one, and so on... ????

        <?
        if ($answer == YES) {
            how(??);
        } else {
            whatToDo(??);
        }
        ?>

    If there is no way, then maybe you could point me to the easiest load balancing solution... I will be extremely thankful if you can also tell me whether, for such purposes, I should move to PostgreSQL or Firebird. Which of them will be easier to handle in the future? I am getting something like 60 queries for all the data on the mysite.com/users/show/$userId page... maybe too much, but anyway... after some optimization it can be 20-30...
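
    For what it's worth, the usual first step that doesn't touch application code is a reverse proxy in front of several identical web servers, with sessions kept in the shared database or memcached rather than on local disk. A minimal sketch with nginx (the IPs and the upstream name are placeholders):

        # nginx.conf fragment: spread requests across two PHP app servers
        upstream app_pool {
            server 10.0.0.1;
            server 10.0.0.2;   # the "just add another server" step
        }
        server {
            listen 80;
            location / {
                proxy_pass http://app_pool;
            }
        }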

  • Distributed Lock Service over MySql/GigaSpaces/Netapp

    - by ripper234
    Disclaimer: I already asked this question, but without the deployment requirement. I got an answer that got 3 upvotes, and when I edited the question to include the deployment requirement, that answer became irrelevant. I'm resubmitting because SO considers the original question 'answered' even though I got no meaningful upvoted answer, so it no longer shows up on the 'unanswered questions' tab; I also opened a uservoice submission about this problem. Which distributed lock service would you use? Requirements are:

    - a mutual exclusion (lock) that can be seen from different processes/machines;
    - lock...release semantics;
    - automatic lock release after a certain timeout - if the lock holder dies, it will automatically be freed after X seconds;
    - Java implementation;
    - easy deployment - it must not require complicated deployment beyond either Netapp, MySql or GigaSpaces, and must play well with those products (especially GigaSpaces - this is why Terracotta was ruled out).

    Nice to have: a .Net implementation, and, if it's free, deadlock detection/mitigation. I'm not interested in answers like "it can be done over a database" or "it can be done over JavaSpaces" - I know. Relevant answers should only contain a ready, out-of-the-box, proven implementation.

  • conditional update records mysql query

    - by Shakti Singh
    Hi, is there a single MySQL query which can update customer DOBs? I want to update the DOB of those customers which have a DOB greater than the current date. Example: if a customer has a DOB in 2034, update it to 1934; if 2068, update it to 1968. There was a bug in my system: if you entered a date earlier than 1970, it was stored as 2070. The bug is solved now, but what about the customers which have a wrong DOB? I have to update their DOB. All customers are stored in the customer_entity table, and entity_id is the customer id. Details are as follows:

        mysql> desc customer_entity;
        +------------------+----------------------+------+-----+---------------------+----------------+
        | Field            | Type                 | Null | Key | Default             | Extra          |
        +------------------+----------------------+------+-----+---------------------+----------------+
        | entity_id        | int(10) unsigned     | NO   | PRI | NULL                | auto_increment |
        | entity_type_id   | smallint(8) unsigned | NO   | MUL | 0                   |                |
        | attribute_set_id | smallint(5) unsigned | NO   |     | 0                   |                |
        | website_id       | smallint(5) unsigned | YES  | MUL | NULL                |                |
        | email            | varchar(255)         | NO   | MUL |                     |                |
        | group_id         | smallint(3) unsigned | NO   |     | 0                   |                |
        | increment_id     | varchar(50)          | NO   |     |                     |                |
        | store_id         | smallint(5) unsigned | YES  | MUL | 0                   |                |
        | created_at       | datetime             | NO   |     | 0000-00-00 00:00:00 |                |
        | updated_at       | datetime             | NO   |     | 0000-00-00 00:00:00 |                |
        | is_active        | tinyint(1) unsigned  | NO   |     | 1                   |                |
        +------------------+----------------------+------+-----+---------------------+----------------+
        11 rows in set (0.00 sec)

    The DOB is stored in the customer_entity_datetime table; the value column contains the DOB. But this table also stores the values of all other attributes, such as fname, lname etc. The attribute with attribute_id 11 is the DOB attribute.

        mysql> desc customer_entity_datetime;
        +----------------+----------------------+------+-----+---------------------+----------------+
        | Field          | Type                 | Null | Key | Default             | Extra          |
        +----------------+----------------------+------+-----+---------------------+----------------+
        | value_id       | int(11)              | NO   | PRI | NULL                | auto_increment |
        | entity_type_id | smallint(8) unsigned | NO   | MUL | 0                   |                |
        | attribute_id   | smallint(5) unsigned | NO   | MUL | 0                   |                |
        | entity_id      | int(10) unsigned     | NO   | MUL | 0                   |                |
        | value          | datetime             | NO   |     | 0000-00-00 00:00:00 |                |
        +----------------+----------------------+------+-----+---------------------+----------------+
        5 rows in set (0.01 sec)

    Thanks.
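
    A hedged sketch of the single UPDATE this seems to call for - assuming attribute_id 11 really is DOB and the bug consistently shifted dates forward by exactly 100 years (2034 -> 1934, 2068 -> 1968):

        -- A birth date in the future must be the 2070-vs-1970 bug,
        -- so push it back one century.
        UPDATE customer_entity_datetime
        SET value = DATE_SUB(value, INTERVAL 100 YEAR)
        WHERE attribute_id = 11
          AND value > NOW();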

  • MySQL BinLog Statement Retrieval

    - by Jonathon
    I have seven 1G MySQL binlog files that I have to use to retrieve some "lost" information. I only need to get certain INSERT statements from the log (e.g. where the statement starts with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database, and using --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to then parse with any other program. Is there a way to retrieve only certain SQL statements from the log? I don't need any of the ancillary information (timestamps, autoincrement #s, etc.). I just need a list of SQL statements that match a certain string. Ideally, I would like a text file that just lists those statements, such as:

        INSERT INTO table SET field1='a';
        INSERT INTO table SET field1='tommy';
        INSERT INTO table SET field1='2';

    I could get that by running mysqlbinlog to a text file and then parsing the results based upon a string, but the text file is way too big. It just times out any script I run and even makes it impossible to open in a text editor. Thanks for your help in advance.
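
    A minimal sketch of the streaming approach - piping mysqlbinlog straight into grep so the multi-hundred-megabyte text file never has to exist (the binlog file names and the match string follow the example above):

        # Append only the matching statements from each of the seven logs.
        for f in mysql-bin.00000{1..7}; do
            mysqlbinlog --short-form "$f" | grep "^INSERT INTO table SET field1=" >> wanted_inserts.sql
        done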

  • MySQL Query Error

    - by Nano HE
    I am debugging my PHP forum. Before the error appeared, I modified the DB table name from cdb_sessions to imc_forum_sessions successfully. I tried my best to debug with NetBeans but can't find the reason. Could you please have a look at my post below? Thank you.

        // It will run to the else branch during debug
        if ($sid) {
            if ($discuz_uid) {
                $query = $db->query("SELECT s.sid, s.styleid, s.groupid='6' AS ipbanned,
                        s.pageviews AS spageviews, s.lastolupdate, s.seccode, $membertablefields
                    FROM {$tablepre}sessions s, {$tablepre}members m
                    WHERE m.uid=s.uid AND s.sid='$sid'
                        AND CONCAT_WS('.',s.ip1,s.ip2,s.ip3,s.ip4)='$onlineip'
                        AND m.uid='$discuz_uid' AND m.password='$discuz_pw'
                        AND m.secques='$discuz_secques'");
            } else {
                $query = $db->query("SELECT sid, uid AS sessionuid, groupid, groupid='6' AS ipbanned,
                        pageviews AS spageviews, styleid, lastolupdate, seccode
                    FROM {$tablepre}sessions
                    WHERE sid='$sid' AND CONCAT_WS('.',ip1,ip2,ip3,ip4)='$onlineip'");
            }
        }

    The MySQL data table, exported as below:

        CREATE TABLE IF NOT EXISTS `imc_forum_sessions` (
            `sid` char(6) NOT NULL DEFAULT '',
            `ip1` tinyint(3) unsigned NOT NULL DEFAULT '0',
            `ip2` tinyint(3) unsigned NOT NULL DEFAULT '0',
            `ip3` tinyint(3) unsigned NOT NULL DEFAULT '0',
            `ip4` tinyint(3) unsigned NOT NULL DEFAULT '0',
            `uid` mediumint(8) unsigned NOT NULL DEFAULT '0',
            `username` char(15) NOT NULL DEFAULT '',
            `groupid` smallint(6) unsigned NOT NULL DEFAULT '0',
            `styleid` smallint(6) unsigned NOT NULL DEFAULT '0',
            `invisible` tinyint(1) NOT NULL DEFAULT '0',
            `action` tinyint(1) unsigned NOT NULL DEFAULT '0',
            `lastactivity` int(10) unsigned NOT NULL DEFAULT '0',
            `lastolupdate` int(10) unsigned NOT NULL DEFAULT '0',
            `pageviews` smallint(6) unsigned NOT NULL DEFAULT '0',
            `seccode` mediumint(6) unsigned NOT NULL DEFAULT '0',
            `fid` smallint(6) unsigned NOT NULL DEFAULT '0',
            `tid` mediumint(8) unsigned NOT NULL DEFAULT '0',
            `bloguid` mediumint(8) unsigned NOT NULL DEFAULT '0',
            UNIQUE KEY `sid` (`sid`),
            KEY `uid` (`uid`),
            KEY `bloguid` (`bloguid`)
        ) ENGINE=MEMORY DEFAULT CHARSET=utf8 MAX_ROWS=5000;

        -- Dumping data for table `imc_forum_sessions`
        INSERT INTO `imc_forum_sessions` (`sid`, `ip1`, `ip2`, `ip3`, `ip4`, `uid`, `username`, `groupid`, `styleid`, `invisible`, `action`, `lastactivity`, `lastolupdate`, `pageviews`, `seccode`, `fid`, `tid`, `bloguid`)
        VALUES ('NYC4r7', 127, 0, 0, 1, 0, '', 6, 5, 0, 3, 1271372018, 0, 0, 939015, 51, 303, 0);

    And the error shown in IE:

        Time: 2010-4-16 7:12am
        Script: /forum/index.php
        SQL: SELECT sid, uid AS sessionuid, groupid, groupid='6' AS ipbanned,
             pageviews AS spageviews, styleid, lastolupdate, seccode
             FROM [Table]sessions
             WHERE sid='NYC4r7' AND CONCAT_WS('.',ip1,ip2,ip3,ip4)='127.0.0.1'
        Error: Table 'dbbbs.[Table]sessions' doesn't exist
        Errno.: 1146
        Similar error report has been dispatched to administrator before.
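
    One observation, hedged since it depends on the forum's config: Discuz!-style error pages mask the configured prefix as [Table], and the table can't be found even though imc_forum_sessions exists - which suggests $tablepre still holds the old cdb_ prefix. If so, the fix is a one-line config change:

        // config.inc.php (assumed location): match the renamed tables so
        // "{$tablepre}sessions" resolves to imc_forum_sessions.
        $tablepre = 'imc_forum_';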

  • MySQL nested set hierarchy with foreign table

    - by Björn
    Hi! I'm using a nested set in a MySQL table to describe a hierarchy of categories, and an additional table describing products. Category table: id, name, left, right. Products table: id, categoryId, name. How can I retrieve the full path, containing all parent categories, of a product? I.e.: RootCategory > SubCategory 1 > SubCategory 2 > ... > SubCategory n > Product. Say for example that I want to list all products from SubCategory1 and its sub categories, and with each given Product I want the full tree path to that product - is this possible? This is as far as I've got - but the structure is not quite right...

        select parent.`name` as name, parent.`id` as id,
               group_concat(parent.`name` separator '/') as path
        from categories as node,
             categories as parent,
             (select inode.`id` as id, inode.`name` as name
              from categories as inode, categories as iparent
              where inode.`lft` between iparent.`lft` and iparent.`rgt`
                and iparent.`id` = 4 /* The category from which to list products */
              order by inode.`lft`) as sub
        where node.`lft` between parent.`lft` and parent.`rgt`
          and node.`id` = sub.`id`
        group by sub.`id`
        order by node.`lft`
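
    A hedged sketch of the classic nested-set path query, adapted to the two tables above (using the lft/rgt column names from the attempt, with category 4 standing in for SubCategory1):

        -- For each product under category 4, concatenate every ancestor
        -- of its category, ordered from root to leaf.
        SELECT p.id, p.name AS product,
               GROUP_CONCAT(parent.name ORDER BY parent.lft SEPARATOR ' > ') AS path
        FROM products AS p
        JOIN categories AS node   ON node.id = p.categoryId
        JOIN categories AS parent ON node.lft BETWEEN parent.lft AND parent.rgt
        JOIN categories AS scope  ON scope.id = 4
        WHERE node.lft BETWEEN scope.lft AND scope.rgt
        GROUP BY p.id, p.name
        ORDER BY MIN(node.lft);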

  • highlighting search results in php/mysql

    - by fusion
    How do I highlight search results from a MySQL query using PHP? This is my code:

        $search_result = "";
        $search_result = $_GET["q"];
        $result = mysql_query('SELECT cQuotes, vAuthor, cArabic, vReference
                               FROM thquotes
                               WHERE cQuotes LIKE "%' . $search_result . '%"
                               ORDER BY idQuotes DESC', $conn)
                  or die('Error: ' . mysql_error());

        function h($s) {
            echo htmlspecialchars($s, ENT_QUOTES);
        }
        ?>
        <div class="center_div">
        <table>
        <caption>Search Results</caption>
        <?php while ($row = mysql_fetch_array($result)) { ?>
            <tr>
                <td style="text-align:right; font-size:15px;"><?php h($row['cArabic']) ?></td>
                <td style="font-size:16px;"><?php h($row['cQuotes']) ?></td>
                <td style="font-size:12px;"><?php h($row['vAuthor']) ?></td>
                <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']) ?></td>
            </tr>
        <?php } ?>
        </table>
        </div>
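
    A hedged sketch of the highlighting step itself - escape first, then wrap each case-insensitive match, so the added markup survives the escaping (the function name and the <strong> tag are just illustrative; the raw $_GET value should also be escaped before it ever reaches the query):

        function highlight($s, $term) {
            // Escape the text, then wrap every occurrence of the escaped term.
            $safe = htmlspecialchars($s, ENT_QUOTES);
            $quoted = preg_quote(htmlspecialchars($term, ENT_QUOTES), '/');
            return preg_replace('/(' . $quoted . ')/i', '<strong>$1</strong>', $safe);
        }

        // Usage inside the results loop:
        // echo highlight($row['cQuotes'], $_GET['q']);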

  • MySQL efficiency as it relates to the database/table size

    - by mlissner
    I'm building a system using Django, Sphinx and MySQL that's very quickly becoming quite large. The database currently has about 2000 rows, and I've written a program that's going to populate it with another 40,000 rows in a couple of days. Since the database is live right now, and since I've never had a database with this much information in it, I'm worried about some things:

    - Is adding all these rows going to seriously degrade the efficiency of my Django app? Will I need to go back through it and optimize all my database calls so they're doing things more cleverly? Or will this make the database slow all around, to the extent that I can't do anything about it at all?
    - If you scoff at my 40k rows, then, my next question is, at what point SHOULD I be concerned? I will likely be adding another couple hundred thousand soon, so I worry, and I fret.
    - How is Sphinx going to feel about all this? Is it going to freak out when it realizes it has to index all this data? Or will it be fine? Is this normal for it? If it is, at what point should I be concerned that it's too much data for Sphinx?

    Thanks for any thoughts.

  • Mysql query problem

    - by Sergio
    I have a problem with a (for me too complicated) MySQL query. Okay, here is what I need to do. First I need to check the messages that a specific user received:

        $mid = $_SESSION['user'];
        $stat1 = mysql_query("SELECT id, fromid, toid, subject FROM messages
                              WHERE toid = '".$mid."' AND subject != 'not readed'
                              GROUP BY fromid") or die(mysql_error());
        while ($h = mysql_fetch_array($stat1)) {
            $whosend = $h['fromid'];

    The second thing I need to do is check the status of the users (deleted or not) who sent the messages ("fromid") to my specific user ("toid"). This I must do from another table:

        $stat2 = mysql_query("SELECT id, status FROM members
                              WHERE id='".$whosend."' AND status ='1'") or die(mysql_error());
        while ($s = mysql_fetch_array($stat)) {

    Then my problems begin to show up. How can I get the number of users who sent messages to my specific user with status = 1? Not the number of messages, but the total number of users who sent them. Is there an easier way to do this query? I tried joining the tables, like:

        $stat = mysql_query("SELECT members.id, members.status, messages.toid,
                                 messages.fromid, messages.subject, messages.id
                             FROM members, messages
                             WHERE messages.toid='".$mid."' AND members.status ='7' ....

    But even in this query I need to have the ids of the users who sent messages before this query runs, so there will be another query before this join.
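
    A hedged sketch of collapsing the loop into one query - counting distinct senders directly in SQL ($mid is the logged-in user's id, and active members are assumed to have status = 1):

        -- One row back: how many distinct active users messaged $mid.
        SELECT COUNT(DISTINCT m.fromid) AS sender_count
        FROM messages m
        JOIN members u ON u.id = m.fromid
        WHERE m.toid = '$mid'
          AND u.status = '1';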

  • Why isn't this simple PHP/MySQL code working?

    - by Sammy
    I am very new to PHP/MySQL and this is causing me to lose hair. I am trying to build a multi-level site navigation. In this part of my script I am readying the sub and parent categories coming from a form for insertion into the database:

        // get child categories
        $catFields = $_POST['categories'];
        if (is_array($catFields)) {
            $categories = $categories;
            for ($i = 0; $i < count($catFields); $i++) {
                $categories = $categories . $catFields[$i];
            }
        }

        // get parent category
        $select = mysql_query("SELECT parent FROM categories WHERE id = $categories");
        while ($return = mysql_fetch_assoc($select)) {
            $parentId = $return['parent'];
        }

    The first part of my script works fine: it grabs all the categories that the user has chosen to assign to a post by checking the checkboxes in a form, and readies them for insertion into the database. But the second part does not work, and I can't understand why. I am trying to match a category with a parent that is stored in its own table, but it returns nothing even though the categories all have parents. Can anyone tell me why this is? P.S. The $categories variable contains the sub category id.
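
    One thing that stands out: concatenating the checked ids produces a single run of digits (e.g. "31215"), so WHERE id = $categories compares against a number that matches no row. A hedged sketch of the likely intended version, using commas and IN (assuming the ids are integers):

        // Join the selected category ids as "3,12,15" ...
        $categories = implode(',', array_map('intval', $catFields));

        // ... so the query can match any of them.
        $select = mysql_query("SELECT parent FROM categories WHERE id IN ($categories)");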

  • MySQL GIS and Spatial Extensions - how to map regions and query against them

    - by chibineku
    I am trying to make a smartphone app which will return a list of users within a certain proximity, say 100m. It's easy to get the coordinates of my BlackBerry and write them to a database, but in order to return a list of other users within 100m, I need to pull every other record from the database and compare the distance between the two points, checking to see if it's within range, before outputting that user's information. This is going to be time consuming if there are many users involved. So I would like to map areas (countries, cities, I'm not yet sure of the resolution I'll need) so that I can first target a smaller subset of all users. This will save on processing time. I have read the basics of GIS and spatial querying on the mysql website but to be honest the query is over my head and I hate copying and pasting code without understanding it. Plus it only checks for proximity - I want to first check if a coordinate falls within a certain area. Does anyone have any experience of such matters and feel like giving me some pointers? Resources such as any preexisting databases of points describing countries as polygons would be really helpful too. Many thanks to anyone who takes the time :)
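
    A hedged sketch of the usual two-stage pattern - a cheap, index-friendly bounding box first, then the exact distance check only on the rows that survive. The table and column names are invented, @lat/@lng are the caller's position, and 0.0009 degrees is roughly 100m of latitude (the longitude span really ought to be widened by 1/cos(latitude)):

        -- Stage 1: the BETWEENs can use plain indexes on lat and lng.
        -- Stage 2: spherical law of cosines, in metres, only inside the box.
        SELECT id, username
        FROM locations
        WHERE lat BETWEEN @lat - 0.0009 AND @lat + 0.0009
          AND lng BETWEEN @lng - 0.0009 AND @lng + 0.0009
          AND 6371000 * ACOS(
                  COS(RADIANS(@lat)) * COS(RADIANS(lat)) * COS(RADIANS(lng) - RADIANS(@lng))
                + SIN(RADIANS(@lat)) * SIN(RADIANS(lat))
              ) <= 100;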

  • MySQL Stored Procedures not working with SELECT (basic question)

    - by TMG
    Hello, I am using a platform (PerfectForms) that requires me to use stored procedures for most of my queries, and having never used stored procedures, I can't figure out what I'm doing wrong. The following statement executes without error:

        DELIMITER //
        DROP PROCEDURE IF EXISTS test_db.test_proc//
        CREATE PROCEDURE test_db.test_proc()
        SELECT 'foo';
        //
        DELIMITER ;

    But when I try to call it using:

        CALL test_proc();

    I get the following error:

        #1312 - PROCEDURE test_db.test_proc can't return a result set in the given context

    I am executing these statements from within phpMyAdmin 3.2.4, PHP version 5.2.12, and the MySQL server version is 5.0.89-community. When I write a stored procedure that returns a parameter and then select it, things work fine, e.g.:

        DELIMITER //
        DROP PROCEDURE IF EXISTS test_db.get_sum//
        CREATE PROCEDURE test_db.get_sum(out total int)
        BEGIN
            SELECT SUM(field1) INTO total FROM test_db.test_table;
        END //
        DELIMITER ;

    works fine, and when I call it:

        CALL get_sum(@t);
        SELECT @t;

    I get the sum, no problem. Ultimately, what I need to do is have a fancy SELECT statement wrapped up in a stored procedure, so I can call it and get back multiple rows of multiple fields. For now I'm just trying to get any select working. Any help is greatly appreciated.
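
    For anyone comparing clients: error #1312 usually means the connection wasn't opened with the CLIENT_MULTI_RESULTS flag, which a procedure needs in order to hand back a result set. A hedged sketch of a call that works from PHP with mysqli, which enables that flag by default (credentials are placeholders):

        <?php
        $db = new mysqli('localhost', 'user', 'pass', 'test_db');

        // CALL returns the result set plus a status packet, so read the
        // first result and then drain the rest.
        if ($res = $db->query('CALL test_proc()')) {
            while ($row = $res->fetch_row()) {
                print_r($row);
            }
            $res->free();
            while ($db->more_results() && $db->next_result()) {}
        }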

  • Load Spikes on an Apache MySQL Server with WordPress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9 on a dedicated server with enough processing power and memory. My primary web application is a Wordpress MU (2.9.2) installation. I have investigated and ruled out DOS attack, MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks every 3 minutes for the load, and restarts Apache. Restarting it helps, and the server comes back, till it happens again. There seems to be no set time frame, or visitor numbers on the site that can trigger this. Even a low number of concurrent visitors (20) can trigger it. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad. Apache is trying to serve something that is causing it to spawn more and more processes till it keels over. My question is: Given that I am convinced that this is a rewrite issue or something similar, how can I try and figure out what the issue is? What should I monitor? Apache logs are voluminous, and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate this as an issue and look for something else. Thanks! Vikram
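
    One low-effort way to see what the processes are doing at spike time is Apache's scoreboard via mod_status. A hedged sketch for Apache 2.2 (the path and access rules are illustrative; mod_status must be loaded):

        # httpd.conf fragment: per-request detail in the scoreboard
        ExtendedStatus On

        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1   # view with: curl http://localhost/server-status
        </Location>

    During a spike, the status page shows each busy worker with the exact URL it is serving; a rewrite loop tends to show up as many workers stuck on the same request.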

  • MySQL multidimensional arrays...

    - by jay
    What is the best way to store data that is dynamic in nature using MySQL? Let's say I have a table in which one item is "dynamic". For some entries I need to store one value, but for others it could be one hundred values. For example, let's say I have the following simple table:

        CREATE TABLE manager (
            name char(50),
            worker_1_name char(50),
            worker_2_name char(50),
            ...
            worker_N_name char(50)
        );

    Clearly, this is not an ideal way to set up a database. Because I have to accommodate the largest group that a manager could potentially have, I am wasting a lot of space in the database. What I would prefer is to have a table that I can use as a member of another table (like I would do in C++ through inheritance) that the "manager" table can use to handle the variable number of employees. It might look something like this:

        CREATE TABLE manager (
            name char(50),
            underlings WORKERS
        );

        CREATE TABLE WORKERS (
            name char(50)
        );

    I would like to be able to add a variable number of workers to each manager. Is this possible, or am I constrained to enumerating all the possible employees even though I will use the full complement only rarely?
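
    A hedged sketch of the standard relational answer to "a variable number of workers per manager" - give each worker a foreign key pointing at its manager instead of widening the manager row (InnoDB assumed so the FOREIGN KEY is enforced):

        CREATE TABLE manager (
            id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name CHAR(50)
        ) ENGINE=InnoDB;

        CREATE TABLE worker (
            id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            manager_id INT UNSIGNED NOT NULL,   -- one row per worker, any count per manager
            name       CHAR(50),
            FOREIGN KEY (manager_id) REFERENCES manager(id)
        ) ENGINE=InnoDB;

        -- All of one manager's workers:
        SELECT w.name FROM worker w WHERE w.manager_id = 1;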

  • Extract primary key from MySQL in PHP

    - by Parth
    I have created a PHP script and I am stuck on extracting the primary key. I have given the flow below; please help me with how I can modify it to get the primary key. I am using a MySQL DB, working with Joomla. My requirement is tracking activity like insert/update/delete on any table and storing it in another audit table using triggers, i.e. I am doing auditing. DB table structure: a few tables have neither a PK nor an auto increment key. The flow of my script is:

    - I fetch all tables from the DB.
    - I check whether the table has any trigger or not.
    - If yes, it moves on to check the next table, and so on.
    - If it doesn't find any trigger, it creates the triggers for the table, such that it first checks whether the table has a primary key or not (for inserting the id into the tracking audit table for every change made); if it has a primary key, it uses it in the creation of the trigger; if it doesn't find any PK, it proceeds with creating the trigger without inserting any id in the audit table.

    Now here is my problem: I need the PK every time so that I can record the id of the particular row in which the insert/update/delete is performed, so that I can later use this audit track table to replicate the changes in the production DB. As I mentioned earlier, some tables have no PK and no auto-incremented key - what should I do to get the particular id in which the change was done? Please guide me... GEEKS!!!
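
    A hedged sketch of how the script could look up each table's primary key columns before generating its trigger (the schema name is a placeholder):

        -- Lists the PK column(s) of one table; an empty result = no primary key.
        SELECT COLUMN_NAME
        FROM information_schema.KEY_COLUMN_USAGE
        WHERE TABLE_SCHEMA = 'your_db'
          AND TABLE_NAME = 'some_table'
          AND CONSTRAINT_NAME = 'PRIMARY'
        ORDER BY ORDINAL_POSITION;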

  • php mysql parallel array checkboxes

    - by gramware
    I have an array of checkboxes that I edit at once to set up a 'tinyint' field. The problem comes in when I uncheck a checkbox and post the values to MySQL. Since the form posts an array of checkboxes and another parallel array of values to edit, unchecking a checkbox results in its 0 value being ignored by $_POST, and hence the checkbox array will be shorter by the number of unchecked values in the form, while the array to be edited will have all the records in the form. Here is the submit code:

        while ($row = mysql_fetch_array($result)) {
            $checked = ($row[active] == 1) ? 'checked="checked"' : '';
            ...
            echo "<input type='hidden' name='TrID[]' value='$TrID'>";
            echo "<input type='checkbox' name='active1[]' value='$row[active]' '$checked'>";
            ...

    and the processing PHP script:

        $userid = ($_POST['TrID']);
        $checked = ($_POST['active']);
        $i = 0;
        foreach ($userid as $usid) {
            if ($checked[$i] == 1) {
                $check = 1;
            } else {
                $check = 0;
            }
            $qry1 = "UPDATE `epapers`.`clientelle` SET `active` = '$check'
                     WHERE `clientelle`.`user_id` = '$usid'";
            $result = mysql_query($qry1);
            $i++;
        }
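
    The usual fix for this parallel-array drift is to key the checkbox by record id instead of relying on position, since unchecked boxes simply never arrive in $_POST. A hedged sketch (names follow the code above):

        // Form: the record id becomes the array key.
        echo "<input type='hidden' name='TrID[]' value='$TrID'>";
        echo "<input type='checkbox' name='active[$TrID]' value='1' $checked>";

        // Processing: a missing key means the box was unchecked.
        foreach ($_POST['TrID'] as $usid) {
            $check = isset($_POST['active'][$usid]) ? 1 : 0;
            mysql_query("UPDATE `epapers`.`clientelle`
                         SET `active` = '$check'
                         WHERE `user_id` = '" . (int)$usid . "'");
        }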

  • question about MySQL database migration

    - by WilliamLou
    Hi there. I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I mean here involves changes to some database tables, for example: adding some new columns to several tables, adding some new tables, etc. Now, the only method I can think of is to use a PHP/Python script (the two languages I know) to connect the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, but the extra column will have default value 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database. Is there any tool, or a better method, than writing a script yourself? Here, I don't need to worry about multithreaded writing problems etc. - I mean the old database will be down (not open to public usage, only for the upgrade) for a while. Thanks!!
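
    A hedged sketch of the usual tool-based route - copy everything with mysqldump, then apply the structural changes on the new server so old rows pick up the defaults automatically (host, table and column names are placeholders, and the target database is assumed to exist):

        # 1. Copy the whole database across.
        mysqldump -h old-host -u root -p mydb | mysql -h new-host -u root -p mydb

        # 2. Alter the schema on the new server; existing rows get the default.
        mysql -h new-host -u root -p mydb \
          -e "ALTER TABLE table_a ADD COLUMN extra_col INT NOT NULL DEFAULT 0;"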

  • Rails Active Record Mysql find query HAVING clause

    - by meetraghu28
    Is there a way to use the HAVING clause in some other way, without using GROUP BY? I am using Rails, and the following is a sample scenario of the problem I am facing. In Rails you can use the Model.find(:all, :select, :conditions, :group) function to get data. In this query I can specify a HAVING clause in the :group param. But what if I don't have a GROUP BY clause but still want a HAVING clause in the result set? For example, take this query:

        select sum(x) as a, b, c from y where "some_conditions" group by b, c;

    This query has a sum() aggregation on one of the fields. Now, if there is nothing to aggregate, my result should be an empty set - but MySQL returns a NULL row. That problem can be solved by using:

        select sum(x) as a, b from y where "some_conditions" group by b having a NOT NULL;

    But what happens in case I don't have a GROUP BY clause? A query like the one below:

        select sum(x) as a, b from y where "some_conditions";

    So how do I specify that sum(x) should not be NULL? Any solution that returns an empty set in this case instead of a NULL row will help, and the solution should also be doable in Rails. We can use subqueries to get this working, with something like this:

        select * from ((select sum(x) as b FROM y where "some_condition") as subq) where subq.b is not null;

    But is there a better way to do this through SQL/Rails?
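
    Worth knowing as a hedged aside: MySQL accepts HAVING even without GROUP BY - the whole result is treated as one group - so the NULL row can be filtered without a subquery:

        -- Returns the sum when rows matched, and an empty set otherwise.
        SELECT SUM(x) AS a
        FROM y
        WHERE "some_conditions"
        HAVING a IS NOT NULL;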

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of that goes to one particular column, which is a text column for PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup:

    1-) We are using InnoDB on every table, including the users table etc. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage / performance perspective)?

    2-) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows to store the text version of these PDF files. However, we will retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and that will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; however, if we retrieve PDF files that have less than 5 pages, the execution time reduces to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k, but it will save memory and improve performance. Is this a good approach to solve the problem, and do you have any ideas how to overcome it otherwise? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,

  • Best way to handle Many-to-Many relationships in PHP MySQL

    - by Jayrox
    I am looking for the best way to handle a database of many-to-many relationships in PHP and MySQL. Right now I have 2 tables:

        Users (id, user_name, first_name, last_name)
        Connections (id_1, id_2)

    In the Users table, id is auto-incremented on add, and user_name is unique but can be changed. Unfortunately, I don't have control over the user_name and its ability to be changed, but I must account for it. The Connections table is, obviously, user1 and user2's ids. The Connections table needs to account for these possible relations:

        user1 --> user2   (user 1 friends with user 2, but not user 2 friends with user 1)
        user2 --> user1   (user 2 friends with user 1, but not user 1 friends with user 2)
        user1 <--> user2  (user 1 and user 2 mutually friends)
        user1 <-!-> user2 (user 1 and user 2 not friends)

    That part is not the problem. The problem I am having is keeping these relations unique when and if they change in batches. Possible solution 1: delete all of user 1's relations and re-add them with the updated list. I think this might be too slow for my needs. Solution 2? Has anyone else encountered this problem? How should I best handle it?

    Update - distinguishing relationships. I handle relationships like this:

        user1, user2
        user1, user3
        user2, user1

    In that example the following is true: user1 follows user2 and user3; user2 only follows user1 but doesn't follow user3; user3 doesn't follow either user1 or user2.
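
    A hedged sketch of one way to keep the pairs unique without the delete-everything step: make the pair itself the primary key, then batch-insert with INSERT IGNORE and delete only the follows that left the new list (user 1's new list is (2, 3) in this example):

        ALTER TABLE Connections ADD PRIMARY KEY (id_1, id_2);

        -- Adds are idempotent: existing pairs are silently skipped.
        INSERT IGNORE INTO Connections (id_1, id_2) VALUES (1, 2), (1, 3);

        -- Drop whatever user 1 no longer follows.
        DELETE FROM Connections
        WHERE id_1 = 1 AND id_2 NOT IN (2, 3);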

  • De-normalization alternative to specific MYSQL problem?

    - by Booker
    I am facing quite a specific optimization problem. I currently have 4 normalized tables of data. Every second, possibly thousands of users will pull down up-to-date info from these tables using AJAX. The thing is that I can predict relatively easily which subset of data they need... The most recent 100 or so entries in those 4 normalized tables. I have been researching de-normalization... but feel that perhaps there is an easier solution. I was thinking that I could somehow every second run one sql query to condense the needed info, store it in a temp cached table and then have all of the user queries just draw from this. This will allow the complex join of 4 tables to only be run once, and then from there the users just need to do a simple lookup from the cached table. I really don't know if this is feasible. Comments on this or any other suggestions would be much appreciated. Thanks!
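
    A hedged sketch of the once-a-second summary table (table and column names are invented for illustration): a cron job or MySQL event rebuilds a tiny cache table from the expensive join, and the AJAX queries only ever read the cache.

        -- Refresh script, run once per second.
        TRUNCATE recent_cache;
        INSERT INTO recent_cache (id, col_a, col_b, col_c)
        SELECT t1.id, t2.col_a, t3.col_b, t4.col_c
        FROM t1
        JOIN t2 ON t2.t1_id = t1.id
        JOIN t3 ON t3.t1_id = t1.id
        JOIN t4 ON t4.t1_id = t1.id
        ORDER BY t1.created_at DESC
        LIMIT 100;

        -- What every AJAX request runs:
        SELECT * FROM recent_cache;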

  • Having an issue with Nullable MySQL columns in SubSonic 3.0 templates

    - by omegawkd
    Looking at this routine in the Settings.ttinclude:

        string CheckNullable(Column col) {
            string result = "";
            if (col.IsNullable && col.SysType != "byte[]" && col.SysType != "string")
                result = "?";
            return result;
        }

    It determines whether the column is nullable, based on requirements, and returns either "" or "?" to the generated code. Now, I'm not too familiar with the ? nullable type operator, but from what I can see a cast is required. For instance, if I have a nullable integer MySQL column and I generate the code using the default template files, it produces a line similar to this:

        int? _User_ID;

    When trying to compile the project I get the error:

        Cannot implicitly convert type 'int?' to 'int'. An explicit conversion exists (are you missing a cast?)

    I checked the Settings files for the other database types and they all seem to have the same routine. So my question is: is this behaviour expected, or is this a bug? I need to solve it one way or the other before I can proceed. Thanks for your help.
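
    For context on the compiler error itself (plain C#, independent of SubSonic): a nullable value type never converts to its plain type implicitly, so code consuming the generated int? field needs an explicit bridge, e.g.:

        int? _User_ID = null;        // what the template generated
        int id = _User_ID ?? 0;      // fall back to a default when NULL
        // int id2 = (int)_User_ID;  // also compiles, but throws at runtime if null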

  • mysql/algorithm: Weighting an average to accentuate differences from the mean

    - by Sai Emrys
    This is for a new feature on http://cssfingerprint.com (see /about for general info). The feature looks up the sites you've visited in a database of site demographics, and tries to guess what your demographic stats are based on that. All my demographics are in 0..1 probability format, not ratios or absolute numbers or the like. Essentially, you have a large number of data points that each tend you towards their own demographics. However, just taking the average is poor, because it means that by adding in a lot of generic data, the number goes down. For example, suppose you've visited sites S0..S50. All except S0 are 48% female; S0 is 100% male. If I'm guessing your gender, I want to have a value close to 100%, not just the 49% that a straight average would give. Also, consider that most demographics (i.e. everything other than gender) do not have their average at 50%. For example, the average probability of having kids aged 0-17 is ~37%. The more a given site's demographics differ from this average (e.g. maybe it's a site for parents, or for child-free people), the more it should count in my guess of your status. What's the best way to calculate this? For extra credit: what's the best way to calculate this that is also cheap & easy to do in MySQL?
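
    A hedged sketch of one standard answer - combine in log-odds space (naive-Bayes style), centered on the demographic's base rate q, so sites near the average contribute almost nothing and outliers dominate:

        combined = sigmoid( logit(q) + SUM_i [ logit(p_i) - logit(q) ] )
        where logit(p) = ln(p / (1 - p)) and sigmoid(z) = 1 / (1 + e^(-z))

    Probabilities of exactly 0 or 1 need clamping (say, to 0.001/0.999) to avoid infinities. And since MySQL has LN() and EXP(), this stays cheap in SQL - here 0.37 stands in for the base rate, and visited_sites/p are placeholder names:

        SELECT 1 / (1 + EXP(-(
                 LN(0.37 / 0.63)
               + SUM(LN(p / (1 - p)) - LN(0.37 / 0.63))
               ))) AS combined
        FROM visited_sites;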
