Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • Top 10 collection completion - a monster in-query formula in MySQL?

    - by Andrew Heath
    I've got the following tables:

        User Basic Data (unique):           [userid] [name] [etc]
        User Collection (one-to-one):       [userid] [game]
        User Recorded Plays (many-to-many): [userid] [game] [scenario] [etc]
        Game Basic Data (unique):           [game] [total_scenarios]

    I would like to output a table that shows the collection play completion percentage for the top 10 users, in descending order of percentage:

        Output Table: [userid] [collection_completion]
        3    95%
        1    81%
        24   68%
        etc  etc

    In my mind, the calculation sequence for ONE user is: grab the user's total owned scenarios from User Collection joined with Game Basic Data via COUNT(gbd.total_scenarios); grab all recorded plays via COUNT(DISTINCT scenario) for that user; then divide recorded plays by total owned scenarios. So that's two queries and a little PHP massage at the end. For a list of users sorted by completion percentage, things get more complicated. I figure I could grab all users' collection totals in one query and all users' recorded plays in another, then do the calculations and sort the final array in PHP, but it seems like overkill to potentially do all that for 1000+ users when I only ever want the top 10. Is there a wicked monster query in MySQL that could do all that and LIMIT 10? Or is sticking with PHP for the bulk of the work the way to go in this case?
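    One possible shape for such a query, sketched with table and column names assumed from the description above. The trick is to aggregate owned scenarios and distinct plays in separate derived tables, so the play rows don't fan out and inflate the SUM:

        SELECT c.userid,
               -- percentage of owned scenarios that have at least one recorded play
               ROUND(100 * COALESCE(p.played, 0) / c.owned) AS collection_completion
        FROM (
            SELECT uc.userid, SUM(gbd.total_scenarios) AS owned
            FROM   user_collection uc
            JOIN   game_basic_data gbd ON gbd.game = uc.game
            GROUP BY uc.userid
        ) c
        LEFT JOIN (
            SELECT userid, COUNT(DISTINCT scenario) AS played
            FROM   user_recorded_plays
            GROUP BY userid
        ) p ON p.userid = c.userid
        ORDER BY collection_completion DESC
        LIMIT 10;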

  • Explaining verity index and document search limits

    - by Ahmad
    At present we have a CF8 Standard Edition server, which has some limitations around Verity indexing. According to Adobe, Verity Server has the following document search limits (the limits apply to all collections registered to Verity Server):

        - 10,000 documents for ColdFusion Developer Edition
        - 125,000 documents for ColdFusion Standard Edition
        - 250,000 documents for ColdFusion Enterprise Edition

    We have now reached the stage where the server-wide number of documents indexed exceeds 125k. However, the largest Verity collection consists of about 25k documents (and this is expected to grow), and only one collection is ever searched at a time. My understanding is that this means I can still search an entire collection with no restrictions. Is this correct? Or does it mean that only documents that were indexed across all collections prior to reaching the limit are actually searchable? We are considering moving to CF9 Standard as a solution and using Solr, which has no such restrictions. The coldfusionjedi blog highlights some differences between Verity and Solr. However, before we commit to an upgrade I am trying to gain a clearer understanding of this. Can someone give me a clear explanation of what this limit means and how it actually affects Verity searching and indexing?

  • RSS feed created with PHP only shows the title in the feed reader

    - by James Simpson
    I am using the following PHP code to generate the XML for an RSS feed, but it doesn't seem to be working correctly. No short description is displayed in the feed reader; all I see is the title of the article. Also, all of the articles say they were published at the same time. This is the first time I have tried to set up an RSS feed, so I'm sure I've made several stupid mistakes.

        $result = mysql_query("SELECT * FROM blog ORDER BY id DESC LIMIT 10");
        $date = date(DATE_RFC822);
        header('Content-type: text/xml');
        echo ("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
        echo ("<rss version=\"2.0\">\n");
        echo ("<channel>\n");
        echo ("<lastBuildDate>$date</lastBuildDate>\n");
        echo ("<pubDate>$date</pubDate>\n");
        echo ("<title>my website name</title>\n");
        echo ("<description><![CDATA[the description]]></description>\n");
        echo ("<link>http://my-domain.com</link>\n");
        echo ("<language>en</language>\n");
        $ch=100;
        while ($a = mysql_fetch_array($result)) {
            $headline = htmlentities(stripslashes($a['subject']));
            $posturl = $a[perm_link];
            $content = $a['post'];
            $date = date(DATE_RFC822, $a['posted']);
            echo ("<item>\n");
            echo ("<title>$headline</title>\n");
            echo ("<link>$posturl</link>\n");
            echo ("<description><![CDATA[$content]]></description>\n");
            echo ("<guid isPermaLink=\"true\">$posturl</guid>\n");
            echo ("<pubDate>$date2</pubDate>\n");
            echo ("</item>\n");
        }
        echo ("</channel>\n");
        echo ("</rss>\n");
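    Two details in that loop look like the likely culprits: the item's pubDate echoes $date2, a variable that is never set (so every item falls back on the channel date), and $a[perm_link] uses an unquoted array key. A corrected loop body might look like this (same variable names as above; it assumes the 'posted' column stores a Unix timestamp):

        while ($a = mysql_fetch_array($result)) {
            $headline = htmlentities(stripslashes($a['subject']));
            $posturl  = $a['perm_link'];                 // quote the array key
            $content  = $a['post'];
            $date     = date(DATE_RFC822, $a['posted']); // per-item date
            echo ("<item>\n");
            echo ("<title>$headline</title>\n");
            echo ("<link>$posturl</link>\n");
            echo ("<description><![CDATA[$content]]></description>\n");
            echo ("<guid isPermaLink=\"true\">$posturl</guid>\n");
            echo ("<pubDate>$date</pubDate>\n");         // was $date2, never set
            echo ("</item>\n");
        }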

  • JQGRID Master detail on different pages

    - by dennisg
    Hello, I need some help with the following. I have grid_test.php (which creates the jqGrid) and index_grid.php (which just displays it). The grid in grid_test.php has a custom button named 'Prikazi'. I want the user to select one row from that grid and, when the custom button 'Prikazi' is pressed, be redirected to another page showing a subgrid, passing the id of the selected row as a parameter. The subgrid is in detail_test.php, and there is also index_detail.php (for displaying detail_test.php with its jqGrid). These PHP files communicate by passing the parameter id_reda (or id), the id of the row selected in grid_test.php. I have tried many ways to achieve this but haven't succeeded. The subgrid file (detail_test.php) receives the parameter, but when I add it to the SQL statement in the subgrid file it shows this error:

        Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[42000]: Syntax error or access violation: 1064
        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right
        syntax to use near 'LIMIT 0, 0' at line 16' in C:\Zend\Apache2\htdocs\jqSuitePHP3811SourceRadna\php\jqGridPdo.php:62

    I really don't know what I am doing wrong. Maybe the passing of the parameter is wrong, maybe the subgrid can't build its colModel properly, or it has something to do with the SQL statements. My work was based on one of the masterdetail examples, but I wanted the master grid on one page and, when the user clicks the custom button, the detail grid on another page. You can see my example here: http://pljevlja.org/grid/index_test.php. And all my PHP files are here: http://pljevlja.org/grid/TXT.zip. Thanks in advance,

  • Bulk retrieval in HQL: could not execute native bulk manipulation query

    - by user179056
    Hello, we are getting a "could not execute native bulk manipulation query" error in HQL when we execute a query built like this:

        String query = "Select person.id from Person person where";
        String bindParam = "";
        List subLists = getChucnkedList(dd);
        for (int i = 0; i < subLists.size(); i++) {
            bindParam = bindParam + " person.id in (:param" + i + ")";
            if (i < subLists.size() - 1) {
                bindParam = bindParam + " OR ";
            }
        }
        query = query + bindParam;
        final Query query1 = session.createQuery(query.toString());
        for (int i = 0; i < subLists.size(); i++) {
            query1.setParameterList("param" + i, subLists.get(i));
        }
        List personIdsList = query1.list();

    Basically, to avoid the limit on the number of ids that can go into an IN clause (not more than 1000), we have created sublists of no more than 1000 ids each and bind each sublist as a parameter list. However, we still get the "could not execute native bulk manipulation query" error. How does one avoid the problem of the limited number of parameters allowed in an IN query when more than 1000 parameters are passed? Regards, Sameer

  • Optimize an SQL statement

    - by kovshenin
    Hey, I'm running WordPress; the database diagram can be found here: http://codex.wordpress.org/Database_Description — After applying tonnes of filters and some hooks to the core, I'm left with the following query:

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        JOIN wp_postmeta ppmeta_beds ON (ppmeta_beds.post_id = wp_posts.ID AND ppmeta_beds.meta_key = 'pp-general-beds' AND ppmeta_beds.meta_value >= 2)
        JOIN wp_postmeta ppmeta_baths ON (ppmeta_baths.post_id = wp_posts.ID AND ppmeta_baths.meta_key = 'pp-general-baths' AND ppmeta_baths.meta_value >= 3)
        JOIN wp_postmeta ppmeta_furnished ON (ppmeta_furnished.post_id = wp_posts.ID AND ppmeta_furnished.meta_key = 'pp-general-furnished' AND ppmeta_furnished.meta_value = 'yes')
        JOIN wp_postmeta ppmeta_pool ON (ppmeta_pool.post_id = wp_posts.ID AND ppmeta_pool.meta_key = 'pp-facilities-pool' AND ppmeta_pool.meta_value = 'yes')
        JOIN wp_postmeta ppmeta_pool_type ON (ppmeta_pool_type.post_id = wp_posts.ID AND ppmeta_pool_type.meta_key = 'pp-facilities-pool-type' AND ppmeta_pool_type.meta_value IN ('tennis', 'voleyball', 'basketball', 'fitness'))
        JOIN wp_postmeta ppmeta_sport ON (ppmeta_sport.post_id = wp_posts.ID AND ppmeta_sport.meta_key = 'pp-facilities-sport' AND ppmeta_sport.meta_value = 'yes')
        JOIN wp_postmeta ppmeta_sport_type ON (ppmeta_sport_type.post_id = wp_posts.ID AND ppmeta_sport_type.meta_key = 'pp-facilities-sport-type' AND ppmeta_sport_type.meta_value IN ('tennis', 'voleyball', 'basketball', 'fitness'))
        JOIN wp_postmeta ppmeta_parking ON (ppmeta_parking.post_id = wp_posts.ID AND ppmeta_parking.meta_key = 'pp-facilities-parking' AND ppmeta_parking.meta_value = 'yes')
        JOIN wp_postmeta ppmeta_parking_type ON (ppmeta_parking_type.post_id = wp_posts.ID AND ppmeta_parking_type.meta_key = 'pp-facilities-parking-type' AND ppmeta_parking_type.meta_value IN ('street', 'off-street', 'garage'))
        JOIN wp_postmeta ppmeta_garden ON (ppmeta_garden.post_id = wp_posts.ID AND ppmeta_garden.meta_key = 'pp-facilities-garden' AND ppmeta_garden.meta_value = 'yes')
        JOIN wp_postmeta ppmeta_garden_type ON (ppmeta_garden_type.post_id = wp_posts.ID AND ppmeta_garden_type.meta_key = 'pp-facilities-garden-type' AND ppmeta_garden_type.meta_value IN ('private', 'communal'))
        JOIN wp_postmeta ppmeta_type ON (ppmeta_type.post_id = wp_posts.ID AND ppmeta_type.meta_key = 'pp-general-type' AND ppmeta_type.meta_value IN ('villa', 'apartment', 'penthouse'))
        JOIN wp_postmeta ppmeta_status ON (ppmeta_status.post_id = wp_posts.ID AND ppmeta_status.meta_key = 'pp-general-status' AND ppmeta_status.meta_value IN ('off-plan', 'resale'))
        JOIN wp_postmeta ppmeta_location_type ON (ppmeta_location_type.post_id = wp_posts.ID AND ppmeta_location_type.meta_key = 'pp-location-type' AND ppmeta_location_type.meta_value IN ('beachfront', 'countryside', 'town-center', 'near-the-sea', 'hillside', 'private-resort'))
        JOIN wp_postmeta ppmeta_price_range ON (ppmeta_price_range.post_id = wp_posts.ID AND ppmeta_price_range.meta_key = 'pp-general-price' AND ppmeta_price_range.meta_value BETWEEN 10000 AND 50000)
        JOIN wp_postmeta ppmeta_area_range ON (ppmeta_area_range.post_id = wp_posts.ID AND ppmeta_area_range.meta_key = 'pp-general-area' AND ppmeta_area_range.meta_value BETWEEN 50 AND 150)
        WHERE 1=1
          AND (((wp_posts.post_title LIKE '%fdsfsad%') OR (wp_posts.post_content LIKE '%fdsfsad%')))
          AND wp_posts.post_type = 'property'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 10

    It's way too big. Could anybody please show me a way of optimizing all those joins into fewer statements? As you can see, they all use the same table under different aliases. I'm not an SQL guru, but I think there should be a way, because this is insane ;) Thanks! Update: here's what EXPLAIN returns: http://twitpic.com/1cd36p
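    A common rewrite for this kind of self-join pileup, sketched under the assumption that each post has at most one wp_postmeta row per meta_key: join wp_postmeta once, put each per-key test in an OR branch, and require that all of them matched with a HAVING count.

        SELECT wp_posts.*
        FROM wp_posts
        JOIN wp_postmeta m ON m.post_id = wp_posts.ID
        WHERE wp_posts.post_type = 'property'
          AND wp_posts.post_status IN ('publish', 'private')
          AND (wp_posts.post_title LIKE '%fdsfsad%' OR wp_posts.post_content LIKE '%fdsfsad%')
          AND (
               (m.meta_key = 'pp-general-beds'      AND m.meta_value >= 2)
            OR (m.meta_key = 'pp-general-baths'     AND m.meta_value >= 3)
            OR (m.meta_key = 'pp-general-furnished' AND m.meta_value = 'yes')
            -- ... one OR branch per remaining filter, mirroring the joins above ...
          )
        GROUP BY wp_posts.ID
        HAVING COUNT(DISTINCT m.meta_key) = 16   -- all 16 filters must have matched
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 10

    One caveat that applies to the original query as well: meta_value is a text column, so the >= and BETWEEN comparisons are string comparisons unless the values are CAST to numbers.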

  • How do I break down MySQL query results into categories, each with a specific number of rows?

    - by Mel
    Hello. Problem: I want to list n games from each genre (order not important). The following MySQL query resides inside a ColdFusion function. It is meant to list all games under a platform (for example, all PS3 games, all Xbox 360 games, etc.); the PlatformID variable is passed through the URL. I have 9 genres, and I would like to list 10 games from each genre.

        SELECT games.GameID AS GameID, games.GameReleaseDate AS rDate,
               titles.TitleName AS tName, titles.TitleShortDescription AS sDesc,
               genres.GenreName AS gName, platforms.PlatformID,
               platforms.PlatformName AS pName, platforms.PlatformAbbreviation AS pAbbr
        FROM (((games join titles on((games.TitleID = titles.TitleID)))
               join genres on((genres.GenreID = games.GenreID)))
               join platforms on((platforms.PlatformID = games.PlatformID)))
        WHERE (games.PlatformID = '#ARGUMENTS.PlatformID#')
        ORDER BY GenreName ASC, GameReleaseDate DESC

    Once the query results come back, I group them in ColdFusion as follows:

        <cfoutput query="ListGames" group="gName"> <!--- first loop, which lists genres --->
            #ListGames.gName#
            <cfoutput> <!--- nested loop, which lists games --->
                #ListGames.tName#
            </cfoutput>
        </cfoutput>

    The problem is that I only want 10 games from each genre to be listed. If I place a LIMIT of 50 in the SQL, I get ~50 games of the same genre (depending on how many games of that genre there are). The second issue is that I don't want the overhead of querying the database for all games when each person will only look at a few. What is the correct way to do this? Many thanks!
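    On MySQL 5.x (no window functions), one workable sketch is the user-variable ranking trick: sort the rows in an inner derived table, number them per genre, and keep rank <= 10. Column names follow the query above; on MySQL 8+, ROW_NUMBER() OVER (PARTITION BY GenreID ...) does the same thing more cleanly.

        SELECT GameID, rDate, tName, gName
        FROM (
            SELECT sorted.*,
                   @rn := IF(@genre = sorted.GenreID, @rn + 1, 1) AS rn,
                   @genre := sorted.GenreID AS g
            FROM (
                SELECT games.GameID, games.GameReleaseDate AS rDate,
                       titles.TitleName AS tName,
                       genres.GenreID, genres.GenreName AS gName
                FROM games
                JOIN titles ON titles.TitleID = games.TitleID
                JOIN genres ON genres.GenreID = games.GenreID
                WHERE games.PlatformID = '#ARGUMENTS.PlatformID#'
                ORDER BY genres.GenreName ASC, games.GameReleaseDate DESC
            ) sorted,
            (SELECT @rn := 0, @genre := NULL) vars
        ) ranked
        WHERE rn <= 10;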

  • How will Arel affect Rails' includes() capabilities?

    - by Tim Snowhite
    I've looked over the Arel sources and some of the ActiveRecord sources for Rails 3.0, but I can't seem to glean a good answer as to whether Arel will change our ability to use includes(), when constructing queries, for the better. There are instances where one might want to modify the conditions on an ActiveRecord :include query (in 2.3.5 and earlier) for the association records that would be returned. But as far as I know, this is not programmatically tenable for all :include queries: I know some AR find-with-includes do t#{n}.c#{m} renames for all the attributes, and one could conceivably add conditions to those queries to limit the joined sets' results; but others run (number of joins + 1) queries over the id sets iteratively, and I'm not sure how one might hack AR to edit those iterated queries. Will Arel allow us to construct ActiveRecord queries which specify the resulting associated model objects when using includes()? Example — User has_many :posts; Post has_many :comments:

        User.all(:include => :posts)
        # say I wanted the post objects to have their comment counts loaded
        # without adding a comment_count column to `posts`.
        # At the post level, one could do so by:
        posts_with_counts = Post.all(
          :select   => 'posts.*, count(comments.id) as comment_count',
          :joins    => 'left outer join comments on comments.post_id = posts.id',
          :group_by => 'posts.id') # i believe
        # But it seems impossible to do so while linking these post objects to each
        # user as well, without running User.all() and then zippering the objects into
        # some other collection (ugly)
        # OR running posts.group_by(&:user) (even uglier, with the n user queries)

  • Using NHibernate to select entities based on activity of child entities

    - by mannish
    I'm having a case of the Mondays... I need to select blog posts based on recent activity in each post's comments collection (a Post has a List<Comment> property and, likewise, a Comment has a Post property, establishing the relationship). I don't want to show the same post twice, and I only need a subset of the entities, not all of the posts. My first thought was to grab all posts that have comments, then order those based on the most recent comment. For this to work, I'm pretty sure I'd have to limit the comments for each Post to the first/newest Comment. Last, I'd simply take the top 5 (or whatever max-results number I want to pass into the method). My second thought was to grab all of the comments, ordered by CreatedOn, and filter so there's only one Comment per Post, then return those top (whatever) posts. This seems the same as the first option, just going through the back door. I've got an ugly two-query option working, with some LINQ on the side for filtering, but I know there's a more elegant way to do it using the NHibernate API. Hoping to see some good ideas here.

  • Design pattern to use instead of multiple inheritance

    - by mizipzor
    Coming from a C++ background, I'm used to multiple inheritance. I like the feeling of a shotgun squarely aimed at my foot. Nowadays I work more in C# and Java, where you can only inherit one base class but implement any number of interfaces (did I get the terminology right?). For example, let's consider two classes that implement a common interface but different (yet required) base classes:

        public class TypeA : CustomButtonUserControl, IMagician {
            public void DoMagic() {
                // ...
            }
        }
        public class TypeB : CustomTextUserControl, IMagician {
            public void DoMagic() {
                // ...
            }
        }

    Both classes are UserControls, so I can't substitute the base class. Both need to implement the DoMagic function. My problem now is that both implementations of the function are identical, and I hate copy-and-paste code. The (possible) solutions: I naturally want TypeA and TypeB to share a common base class, where I could write that identical function definition just once; however, due to the limit of just one base class, I can't find a place along the hierarchy where it fits. One could also try to implement a sort of composite pattern, putting the DoMagic function in a separate helper class, but the function needs (and modifies) quite a lot of internal variables/fields, and sending them all as (reference) parameters would just look bad. My gut tells me that the adapter pattern could have a place here, some class to convert between the two when necessary, but that also feels hacky. I tagged this language-agnostic since it applies to all languages that use this one-base-class-many-interfaces approach. Also, please point out if I seem to have misunderstood any of the patterns I named. In C++ I would just make a class with the private fields and that function implementation and put it in the inheritance list. What's the proper approach in C#/Java and the like?
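    A minimal composition sketch in C# (all names beyond those in the question are illustrative): the shared DoMagic logic lives in one helper, and instead of passing each field as a ref parameter, the controls expose the state the helper needs through a small interface.

        // The state that the shared DoMagic reads and mutates; extend as needed.
        public interface IMagicHost {
            string Incantation { get; set; }
        }

        // The single shared implementation.
        public class MagicHelper {
            public void DoMagic(IMagicHost host) {
                host.Incantation = "abracadabra";
            }
        }

        public class TypeA : CustomButtonUserControl, IMagician, IMagicHost {
            private readonly MagicHelper _magic = new MagicHelper();
            public string Incantation { get; set; }
            public void DoMagic() { _magic.DoMagic(this); } // forward to the helper
        }

        public class TypeB : CustomTextUserControl, IMagician, IMagicHost {
            private readonly MagicHelper _magic = new MagicHelper();
            public string Incantation { get; set; }
            public void DoMagic() { _magic.DoMagic(this); }
        }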

  • MySQL - How do I inner join while sorting the joined data?

    - by Gary
    I'm trying to write a report which joins a person, their work, and their hourly wage at the time of the work. I cannot seem to figure out the best way to join the person's cost when the cost date is less than the date of the work. Let's say a person cost $30 per hour at the start of the year, then got a raise on Feb 5 and another on Mar 1:

        01/01/2010  $30.00 (per hour)
        02/05/2010  $40.00
        03/01/2010  $45.00

    The person put in hours on several days which span the raises:

        01/05/2010  10 hours (should be at $30/hr)
        01/27/2010   5 hours (again at $30)
        02/10/2010  10 hours (at $40/hr)
        03/03/2010   5 hours (at $45/hr)

    I'm trying to write one SQL statement which pulls the hours, the cost per hour, and hours*cost. The cost should be the hourly rate last entered into the system, i.e. the cost row whose date is less than the work date, ordered by cost date descending, LIMIT 1.

        SELECT person.id, person.name, work.hours, person_costs.value,
               work.hours * person_costs.value AS value
        FROM person
        INNER JOIN work ON (person.id = work.person_id)
        INNER JOIN person_costs ON (person.id = person_costs.person_id
                                    AND person_costs.date < work.date)
        WHERE person.id = 1234
        ORDER BY work.date ASC

    The problem I'm having: person_costs isn't ordered by date in descending order, so the join pulls out "any" value (naturally sorted by record position) which matches the condition. How do I select the latest person_cost value that is older than the work date? Thanks!
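    A sketch of the usual fix for this latest-row-per-group join: pin the cost row with a correlated subquery that picks the newest cost date at or before each work date (same table and column names as above; use < instead of <= if a same-day raise shouldn't apply to that day's work).

        SELECT person.id, person.name, work.hours, person_costs.value,
               work.hours * person_costs.value AS cost
        FROM person
        INNER JOIN work ON work.person_id = person.id
        INNER JOIN person_costs
                ON person_costs.person_id = person.id
               AND person_costs.date = (SELECT MAX(pc.date)
                                        FROM person_costs pc
                                        WHERE pc.person_id = person.id
                                          AND pc.date <= work.date)
        WHERE person.id = 1234
        ORDER BY work.date ASC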

  • array of structs in C

    - by Hristo
    I'm trying to create an array of structs, and also a pointer to that array. I don't know how large the array is going to be, so it should be dynamic. My struct would look something like this:

        typedef struct _stats_t {
            int hours[24];
            int numPostsInHour;
            int days[7];
            int numPostsInDay;
            int weeks[20];
            int numPostsInWeek;
            int totNumLinesInPosts;
            int numPostsAnalyzed;
        } stats_t;

    ... and I need multiple of these structs, one for each file (unknown amount) that I will analyze. I'm not sure how to do this. I don't like the following approach because of the limit on the size of the array:

        #define MAX 10
        typedef struct _stats_t {
            int hours[24];
            int numPostsInHour;
            int days[7];
            int numPostsInDay;
            int weeks[20];
            int numPostsInWeek;
            int totNumLinesInPosts;
            int numPostsAnalyzed;
        } stats_t[MAX];

    So how would I create this array? Also, would a pointer to this array look something like this?

        stats_t stats[];
        stats_t *statsPtr = &stats[0];

    Thanks, Hristo
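    A minimal sketch of the dynamic version (names other than stats_t are mine): keep a heap-allocated array that realloc doubles on demand. The "pointer to the array" is then just the stats pointer itself.

        #include <stdlib.h>

        static stats_t *stats = NULL;   /* grows as files are analyzed */
        static size_t capacity = 0;
        static size_t count = 0;

        /* Returns a zeroed slot for the next file, or NULL on allocation failure. */
        stats_t *next_stats_slot(void)
        {
            if (count == capacity) {
                size_t new_cap = capacity ? capacity * 2 : 8;
                stats_t *tmp = realloc(stats, new_cap * sizeof *tmp);
                if (tmp == NULL)
                    return NULL;        /* old array is still intact */
                stats = tmp;
                capacity = new_cap;
            }
            stats_t *slot = &stats[count++];
            *slot = (stats_t){0};       /* zero every counter and bucket */
            return slot;
        }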

  • Why do I have to unserialize this serialized value twice? (wordpress/bbpress maybe_serialize)

    - by 55skidoo
    What am I doing wrong here? I'm serializing a value, storing it in a database (table bb_meta), retrieving it... OK so far... but then I have to unserialize it twice. Shouldn't I be able to just unserialize once? This seems to work, but I'm wondering what point about serialization I'm missing here.

        //check database to see if user has ever visited before.
        $querystring = $bbdb->prepare( "SELECT `meta_value` FROM `$bbdb->meta` WHERE `object_type` = %s AND `object_id` = %s AND `meta_key` = %s LIMIT 1", $bbtype, $bb_this_thread, $bbuser );
        $bb_last_visits = $bbdb->get_row($querystring, OBJECT);

        //if $bb_last_visits is empty, add time() as the metavalue using bb_update_meta
        if (empty($bb_last_visits)) {
            $first_visit = time();
            echo 'serialized first visit: ' . $bb_this_visit_time_serialized = serialize(array($bb_this_thread => $first_visit));
            bb_update_meta( $bb_this_thread, $bbuser, $bb_this_visit_time_serialized, $bbtype ); //add to database, bb_meta table
            echo '$bb_last_visits was empty. Setting first visit time as ' . $bb_this_visit_time_serialized . '<br>';
        } else {
            //else, test by unserializing the data for use.
            echo 'last visit time already set: ';
            echo $bb_last_visits->meta_value;
            echo '<br>';
            //fatal error - echo 'unserialized: ' . $bb_last_visits_unserialized = unserialize($bb_last_visits[0]->meta_value);
            echo '<br>';
            echo 'unserialize: ' . $unserialized_visits = unserialize($bb_last_visits->meta_value);
            echo '<br>';
            echo 'hmm, need to unserialize again??: ';
            echo $unserialized_unserialized_visits = unserialize($unserialized_visits);
            echo '<br>';
            echo 'hey look, it\'s an array value I can finally use now. phew: ' . $unserialized_unserialized_visits[$bb_this_thread];
        }
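    The double unserialize is most likely explained by the meta layer itself: bb_update_meta() runs the value through maybe_serialize(), so handing it an already-serialized string gets it serialized a second time. A sketch of the fix, assuming that behavior: store the raw array and let the meta API do the serializing exactly once.

        // Store the plain array; bb_update_meta()/maybe_serialize() handles it.
        $first_visit = time();
        bb_update_meta($bb_this_thread, $bbuser,
                       array($bb_this_thread => $first_visit), $bbtype);

        // Reading it back then needs a single unserialize:
        $visits = unserialize($bb_last_visits->meta_value);
        echo $visits[$bb_this_thread];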

  • How to write to a varchar(max) column using ODBC

    - by andyjohnson
    Summary: I'm trying to write a text string to a column of type varchar(max) using ODBC and SQL Server 2005. It fails if the length of the string is greater than 8000. Help! I have some C++ code that uses ODBC (SQL Native Client) to write a text string to a table. If I change the column from, say, varchar(100) to varchar(max) and try to write a string with length greater than 8000, the write fails with the following error:

        [Microsoft][ODBC SQL Server Driver]String data, right truncation

    So, can anyone advise me on whether this can be done, and how? Some example (not production) code that shows what I'm trying to do:

        SQLHENV hEnv = NULL;
        SQLRETURN iError = SQLAllocEnv(&hEnv);
        HDBC hDbc = NULL;
        SQLAllocConnect(hEnv, &hDbc);
        const char* pszConnStr = "Driver={SQL Server};Server=127.0.0.1;Database=MyTestDB";
        UCHAR szConnectOut[SQL_MAX_MESSAGE_LENGTH];
        SWORD iConnectOutLen = 0;
        iError = SQLDriverConnect(hDbc, NULL, (unsigned char*)pszConnStr, SQL_NTS,
                                  szConnectOut, (SQL_MAX_MESSAGE_LENGTH-1),
                                  &iConnectOutLen, SQL_DRIVER_COMPLETE);
        HSTMT hStmt = NULL;
        iError = SQLAllocStmt(hDbc, &hStmt);
        const char* pszSQL = "INSERT INTO MyTestTable (LongStr) VALUES (?)";
        iError = SQLPrepare(hStmt, (SQLCHAR*)pszSQL, SQL_NTS);
        char* pszBigString = AllocBigString(8001);
        iError = SQLSetParam(hStmt, 1, SQL_C_CHAR, SQL_VARCHAR, 0, 0,
                             (SQLPOINTER)pszBigString, NULL);
        iError = SQLExecute(hStmt); // Returns SQL_ERROR if pszBigString len > 8000

    The table MyTestTable contains a single column defined as varchar(max). The function AllocBigString (not shown) creates a string of arbitrary length. I understand that previous versions of SQL Server had an 8000-character limit on varchars, but why is this happening in SQL Server 2005? Thanks, Andy
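    Two things stand out, offered as likely causes rather than a confirmed fix: the error text names the legacy "ODBC SQL Server Driver" (Driver={SQL Server}), which predates varchar(max) and treats it as an 8000-byte varchar, and SQL_VARCHAR as the bound parameter type carries the same cap. A sketch of the adjustment — connect through SQL Native Client and bind the parameter as a long type:

        const char* pszConnStr =
            "Driver={SQL Native Client};Server=127.0.0.1;Database=MyTestDB;"
            "Trusted_Connection=yes;";

        // SQLBindParameter instead of the deprecated SQLSetParam, with
        // SQL_LONGVARCHAR so the driver doesn't clamp the length to 8000.
        SQLLEN cbValue = SQL_NTS;
        iError = SQLBindParameter(hStmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR,
                                  SQL_LONGVARCHAR, 0, 0,
                                  (SQLPOINTER)pszBigString, 0, &cbValue);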

  • iterate through fb:random

    - by fusion
    Does anyone know how I would fill fb:random with data from a MySQL database and iterate through it to pick a random quote? The fb:random part:

        $facebook->api_client->fbml_setRefHandle('quotes', '<fb:random>
            <fb:random-option>Quote 1</fb:random-option>
            <fb:random-option>Quote 2</fb:random-option>
            </fb:random>');

    The MySQL part:

        $rowcount = mysql_result($result, 0, 0);
        $rand = rand(0, $rowcount - 1);
        $result = mysql_query("SELECT cQuotes, vAuthor, cArabic, vReference FROM thquotes LIMIT $rand, 1", $conn)
            or die ('Error: ' . mysql_error());
        $row = mysql_fetch_array($result, MYSQL_ASSOC);
        if ( !$row ) {
            echo "Empty";
        } else {
            $fb_box  = "<p>" . h($row['cArabic']) . "</p>";
            $fb_box .= "<p>" . h($row['cQuotes']) . "</p>";
            $fb_box .= "<p>" . h($row['vAuthor']) . "</p>";
            $fb_box .= "<p>" . h($row['vReference']) . "</p>";
        }
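    A sketch of one way to wire the two together (reusing $conn and the h() helper from above): select the quotes and build the fb:random markup in a loop, one fb:random-option per row. fb:random then makes the random pick itself, so the LIMIT $rand, 1 dance isn't needed.

        $result = mysql_query("SELECT cQuotes FROM thquotes", $conn)
            or die('Error: ' . mysql_error());

        $fbml = "<fb:random>\n";
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            // one option per quote; fb:random chooses among them at render time
            $fbml .= '<fb:random-option>' . h($row['cQuotes']) . "</fb:random-option>\n";
        }
        $fbml .= '</fb:random>';

        $facebook->api_client->fbml_setRefHandle('quotes', $fbml);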

  • Slow MySQL Query not using filesort

    - by Canadaka
    I have a query on my homepage that is getting slower and slower as my database table grows larger.

        tablename = tweets_cache
        rows = 572,327

    This is the query I'm currently using, which is slow (over 5 seconds):

        SELECT * FROM tweets_cache t WHERE t.province='' AND t.mp='0' ORDER BY t.published DESC LIMIT 50;

    If I take out either the WHERE or the ORDER BY, then the query is super fast: 0.016 seconds. I have the following indexes on the tweets_cache table: PRIMARY, published, mp, category, province, author. So I'm not sure why it's not using the indexes, since mp, province and published all have indexes. Profiling the query shows that it's not using an index to sort and is using filesort, which is really slow:

        possible_keys = mp,province
        Extra = Using where; Using filesort

    I tried adding a new multi-column index on (province, mp). EXPLAIN shows the new index listed under both "possible_keys" and "key", but the query time is unchanged, still over 5 seconds. Here is a screenshot of the profiler info on the query: http://i355.photobucket.com/albums/r469/canadaka_bucket/slow_query_profile.png — Something weird: I made a dump of my database to test on my local desktop so I don't screw up the live site. The same query on my local machine runs super fast, in milliseconds. So I copied all the same MySQL startup variables from the server to my local machine to make sure there wasn't some setting that might be causing this, but even after that the local query runs super fast while the one on the live server takes over 5 seconds. My database server is only using around 800MB of the 4GB it has available. Here are the related my.ini settings I'm using:

        default-storage-engine = MYISAM
        max_connections = 800
        skip-locking
        key_buffer = 512M
        max_allowed_packet = 1M
        table_cache = 512
        sort_buffer_size = 4M
        read_buffer_size = 4M
        read_rnd_buffer_size = 16M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 128M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 8
        # Disable Federated by default
        skip-federated
        key_buffer = 512M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M
        key_buffer = 512M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M
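    For this access pattern, the standard fix (sketched below; the index name is mine) is a single composite index whose leading columns are the equality filters and whose last column is the sort key, so MySQL can satisfy both the WHERE and the ORDER BY from one index and skip the filesort entirely:

        ALTER TABLE tweets_cache
            ADD INDEX idx_province_mp_published (province, mp, published);

    A two-column (province, mp) index can't help with the sort, which is consistent with the query time staying at ~5 seconds after adding it: the server still has to filesort every matching row before applying LIMIT 50.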

  • Extract a pattern from the output of curl

    - by allentown
    I would like to use curl on the command line to grab a URL, pipe it to a pattern, and return a list of URLs that match that pattern. I am running into problems with greedy aspects of the pattern and cannot seem to get past them. Any help on this would be appreciated.

        curl http://www.reddit.com/r/pics/ | grep -ioE "http://imgur\.com/.+(jpg|jpeg|gif|png)"

    So: grab the data from the URL, which returns a mess of HTML, which may need some linebreaks replaced in somehow, unless the regex can return more than one match per line. The pattern is pretty simple — any string that:

        - starts with http://imgur.com/
        - has A-Z a-z 0-9 (maybe some others), currently 5 chars long; 8 should cover it forever if I wanted to limit that aspect of the pattern, which I don't
        - ends in a graphic file format extension (jpg, jpeg, gif, png)

    That's about it. At that URL, with default settings, I should generally get back a good set of images. I wouldn't object to using the RSS feed URL for the same page; it may actually be easier to parse. Thanks everyone!
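    A sketch of the usual fix: replace the greedy .+ with a character class, which cannot run past punctuation like a closing quote, escape the dot before the extension, and use grep -o so each match is printed on its own line even when the whole page arrives as one long line:

        curl -s http://www.reddit.com/r/pics/ \
            | grep -oiE "http://imgur\.com/[A-Za-z0-9]+\.(jpg|jpeg|gif|png)"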

  • Reading a CSV with file_get_contents in PHP

    - by JPro
    I am reading a 'kind of' CSV file, exploding it, and storing it in an array. The file I am reading has this structure:

        Id,Log,Res,File
        mydb('Test1','log1','Pass','lo1.txt').
        mydb('Test2','log2','Pass','lo2.txt').
        mydb('Test3','log3','Pass','lo3.txt').

    What I am trying to do: read the last record in my database and get its name, let's say 'Test1' in this case; then search through my file for the position of 'Test1', get the next lines in the file, extract the IDs, and add them to the database. I am getting the position of the desired string in the file, but I am not sure how to get the next lines too. Here's my code so far:

        <?php
        mysql_connect("localhost", "root", "") or die(mysql_error());
        mysql_select_db("testing") or die(mysql_error());
        $result = mysql_query("select ID from table_1 order by S_no DESC limit 1") or die(mysql_error());
        $row = mysql_fetch_array( $result );
        $a = $row['ID'];
        echo 'Present Top Row is '.$a.'<br>';
        $addresses = explode("\n", file_get_contents('\\\\fil1\\logs\\tes.pl'));
        foreach($addresses as $val) {
            $pos = strstr($val, $a);
            if ($pos === false) {
            } else {
                echo "The string <br> '$a' <br>was found in the string <br>'$val' <br>";
                echo " and exists at position <br>$pos<br>";
            }
        }
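    A sketch of one way to reach the following lines: iterate with the index so that array_slice() can pull the lines after the match. The 5-line window and the regex for the first quoted field are assumptions about the file's format:

        foreach ($addresses as $i => $val) {
            if (strpos($val, $a) !== false) {
                // take the next 5 lines after the matching one
                foreach (array_slice($addresses, $i + 1, 5) as $next_line) {
                    // pull the Id out of lines like mydb('Test2','log2',...
                    if (preg_match("/mydb\('([^']+)'/", $next_line, $m)) {
                        echo 'Found ID: ' . $m[1] . '<br>';
                        // ...INSERT $m[1] into the database here...
                    }
                }
                break;
            }
        }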

  • "image contains error", trying to create and display images using google app engine

    - by bert
    Hello all. The general idea is to create a galaxy-like map. I run into problems when I try to display a generated image. I used the Python Imaging Library to create the image and store it in the datastore. When I try to load the image, I get no error on the log console and no image in the browser. When I copy/paste the image link (including the datastore key) I get a black screen and the following message:

        The image “view-source:/localhost:8080/img?img_id=ag5kZXZ-c3BhY2VzaW0xMnINCxIHTWFpbk1hcBgeDA” cannot be displayed because it contains errors.

    The Firefox error console shows:

        Error: Image corrupt or truncated: /localhost:8080/img?img_id=ag5kZXZ-c3BhY2VzaW0xMnINCxIHTWFpbk1hcBgeDA

        import cgi
        import datetime
        import urllib
        import webapp2
        import jinja2
        import os
        import math
        import sys
        from google.appengine.ext import db
        from google.appengine.api import users
        from PIL import Image

        #SNIP

        #class to define the map entity
        class MainMap(db.Model):
            defaultmap = db.BlobProperty(default=None)

        #SNIP

        class Generator(webapp2.RequestHandler):
            def post(self):
                #SNIP
                test = Image.new("RGBA", (100, 100))
                dMap = MainMap()
                dMap.defaultmap = db.Blob(str(test))
                dMap.put()
                #SNIP
                result = db.GqlQuery("SELECT * FROM MainMap LIMIT 1").fetch(1)
                if result:
                    print"item found<br>" #debug info
                    if result[0].defaultmap:
                        print"defaultmap found<br>" #debug info
                        string = "<div><img src='/img?img_id=" + str(result[0].key()) + "' width='100' height='100'></img>"
                        print string
                    else:
                        print"nothing found<br>"
                else:
                    self.redirect('/?=error')
                self.redirect('/')

        class Image_load(webapp2.RequestHandler):
            def get(self):
                self.response.out.write("started Image load")
                defaultmap = db.get(self.request.get("img_id"))
                if defaultmap.defaultmap:
                    try:
                        self.response.headers['Content-Type'] = "image/png"
                        self.response.out.write(defaultmap.defaultmap)
                        self.response.out.write("Image found")
                    except:
                        print "Unexpected error:", sys.exc_info()[0]
                else:
                    self.response.out.write("No image")

        #SNIP

        app = webapp2.WSGIApplication([('/', MainPage),
                                       ('/generator', Generator),
                                       ('/img', Image_load)], debug=True)

    The browser shows the "item found" and "defaultmap found" strings and a broken image link; the exception handling does not catch any errors. Thanks for your help. Regards, Bert
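    Two likely corruption sources stand out, offered as a reading of the code above rather than a tested fix. First, db.Blob(str(test)) stores the repr of the PIL Image object, not PNG data; the image has to be serialized through a buffer. Second, Image_load writes "started Image load" before the PNG bytes and "Image found" after them, so even a valid PNG would arrive with text glued on. A sketch:

        from StringIO import StringIO  # Python 2, as used by this GAE runtime

        # In Generator.post: serialize the image to real PNG bytes.
        test = Image.new("RGBA", (100, 100))
        buf = StringIO()
        test.save(buf, format="PNG")
        dMap = MainMap()
        dMap.defaultmap = db.Blob(buf.getvalue())
        dMap.put()

        # In Image_load.get: send only the bytes, nothing else.
        class Image_load(webapp2.RequestHandler):
            def get(self):
                entity = db.get(self.request.get("img_id"))
                if entity and entity.defaultmap:
                    self.response.headers['Content-Type'] = "image/png"
                    self.response.out.write(entity.defaultmap)
                else:
                    self.response.set_status(404)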

  • emacs: x-popup-menu max size constraints?

    - by Cheeso
    I'm working on an intellisense or code-completion capability for C#. So far, so good; right now I have basic completion working. There are two ways to request completion: the first cycles through all the potential matches, and the second presents a popup menu of the matches. It works for types and also for local variables (screenshots omitted). I'm confronting two problems with x-popup-menu:

    1. The popup menu can expand to consume all available screen space when the number of choices is large. Literally, it can obscure the entire screen. The silly thing is, it's scrollable: first it expands to consume all available space, then it also becomes scrollable. Is there a way I can limit the maximum size of x-popup-menu?

    2. To specify the position of the popup menu, I pass in a position, and x-popup-menu uses that as the *middle*, not the left, of the top line of the menu. Why middle? Who knows. What this means is that if I specify (40 . 60) for the location of the menu, and the menu happens to be 100 pixels wide, the menu will extend beyond the left border of the emacs window (visible in the second screenshot). If I knew how wide the popup would be before specifying the position, I could compensate, but I don't. Is there a workaround? Is there a way to get x-popup-menu to take its position as the LEFT rather than the middle?

  • When *not* to use prepared statements?

    - by Ben Blank
    I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared-statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic. It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them… which I never will. Here's my setup:

        - The statements are all trivially simple. Most are in the form SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1.
        - The most complex statement in the lot is simply three such selects joined together with UNION ALLs.
        - Each page hit executes at most one statement and executes it only once.
        - I'm in a hosted environment and therefore leery of slamming their servers by doing any "stress tests" personally.

    Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use PDO::MYSQL_ATTR_DIRECT_QUERY to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it? EDIT: Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted" — lots of different perspectives. Ultimately, though, I have to give rick his due… without his answer I would have blissfully gone off and done the completely Wrong Thing even after following everyone's advice. :-) Emulated prepared statements it is!
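    For reference, a minimal sketch of the emulated-prepare setup the author settled on (connection details are placeholders): with PDO::ATTR_EMULATE_PREPARES enabled, PDO quotes and interpolates the parameters client-side, so a prepare+execute costs a single round-trip while keeping the parametrized, injection-resistant API.

        $pdo = new PDO('mysql:host=localhost;dbname=mydb', $user, $pass, array(
            PDO::ATTR_EMULATE_PREPARES => true,  // client-side emulation, one round-trip
            PDO::ATTR_ERRMODE          => PDO::ERRMODE_EXCEPTION,
        ));

        $stmt = $pdo->prepare('SELECT foo, bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1');
        $stmt->execute(array($quux));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);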

  • Connect xampp to MongoDB

    - by Jhonny D. Cano -Leftware-
    Hello. I have a XAMPP 1.7.3 instance and a MongoDB 1.2.4 server running on the same machine. I want to connect them, so I have basically been following this tutorial on php.net. It seems to connect, but the cursors are always empty; $cursor->valid() always returns false. I don't know what I am missing. Here is the code I am trying:

        <?php
        $m = new Mongo(); // connect
        try {
            $m->connect();
        } catch (MongoConnectionException $ex) {
            echo $ex;
        }
        echo "conecta...";
        $dbs = $m->listDBs();
        if ($dbs == NULL) {
            echo $m->lastError();
            return;
        }
        foreach($dbs as $db) {
            echo $db;
        }
        $db = $m->selectDB("CDO");
        echo "elige bd...";
        $col = $db->selectCollection("rep_consulta");
        echo "elige col...";
        $rangeQuery = array('id' => array('$gt' => 100));
        $col->insert(array('id' => 456745764, 'nombre' => 'cosa'));
        $cursor = $col->find()->limit(10);
        echo "buscando...";
        var_dump($cursor);
        var_dump($cursor->valid());
        if ($cursor == NULL)
            echo 'cursor null';
        while($cursor->hasNext()) {
            $item = $cursor->current();
            echo "en while...";
            echo $item["nombre"].'...';
        }
        ?>

    Doing this from the command line works perfectly:

        use CDO
        db.rep_consulta.find()   -- lots of data here
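    A sketch of the likely issue, assuming the legacy PHP Mongo driver of that era: cursors are lazy, so nothing is fetched (and valid() stays false) until iteration actually starts, and current() before the first advance returns NULL. Either foreach the cursor or pair hasNext() with getNext():

        $cursor = $col->find()->limit(10);

        // foreach triggers the query and advances the cursor for you:
        foreach ($cursor as $item) {
            echo $item['nombre'] . '...';
        }

        // or, keeping the while loop, advance explicitly:
        $cursor = $col->find()->limit(10);
        while ($cursor->hasNext()) {
            $item = $cursor->getNext(); // current() alone never advances
            echo $item['nombre'] . '...';
        }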

  • Pulling info from database and dynamically adding them into different divs

    - by Matt Nathanson
    This may be a bit of a PHP n00b question, but I've managed to get it working this far and now I'm a little stuck. I'm pulling 12 images with descriptions out of a database. I want to insert them into a client rotator that has 3 sets of 4. They will be contained in divs called clientrotate1, clientrotate2, and clientrotate3, respectively. Inside each of those divs I want to have 4 images with classes thumb1, thumb2, thumb3, and thumb4. I am successful in having the images and descriptions pull into clientrotate1; where I get stuck is how to dynamically add the 2nd and 3rd sets of data into clientrotate2 and clientrotate3. Here is my function:

        public function DisplayClientRotator() {
            $result = mysql_query('SELECT * FROM project ORDER BY id DESC LIMIT 12')
                or die("SELECT Error: " . mysql_error());
            $iterator = 1;
            while ($row = mysql_fetch_assoc($result)) {
                echo "<div id='clientrotate1'>";
                echo "<div class='thumb$iterator'>";
                echo "<img style='width: 172px; height: 100px;' src=" . $row['photo'] . " />";
                echo "<div class='transbox'>";
                echo "<div class='thumbtext'>";
                echo $row['campaign'];
                echo "</div>";
                echo "</div>";
                echo "</div>";
                echo "</div>";
                $iterator++;
            }
        }

    Any help or pointers in the right direction would be greatly appreciated! Thanks so much, Matt
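    A sketch of one way to derive both names from a single zero-based counter: open a new clientrotateN wrapper every 4 rows, and compute thumb1–thumb4 from the remainder.

        public function DisplayClientRotator() {
            $result = mysql_query('SELECT * FROM project ORDER BY id DESC LIMIT 12')
                or die('SELECT Error: ' . mysql_error());

            $i = 0;
            while ($row = mysql_fetch_assoc($result)) {
                if ($i % 4 == 0) {
                    if ($i > 0) echo "</div>";                          // close previous set
                    echo "<div id='clientrotate" . ($i / 4 + 1) . "'>"; // 1, 2, 3
                }
                echo "<div class='thumb" . ($i % 4 + 1) . "'>";          // thumb1..thumb4
                echo "<img style='width: 172px; height: 100px;' src='" . $row['photo'] . "' />";
                echo "<div class='transbox'><div class='thumbtext'>" . $row['campaign'] . "</div></div>";
                echo "</div>";
                $i++;
            }
            if ($i > 0) echo "</div>";                                   // close the last set
        }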

  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what the best way was to track video views so that I could display the most-viewed videos this week (videos from all dates). Now I need some help optimizing the query with which I get the videos from the database. The relevant tables are these:

        video (~239,371 rows):  VID (int), UID (int), title (varchar), status (enum), type (varchar), is_duplicate (enum), is_adult (enum), channel_id (tinyint)
        signup (~115,440 rows): UID (int), username (varchar)
        videos_views (~359,202 rows after 6 days of collecting data, so this table will grow rapidly): videos_id (int), views_date (date), num_of_views (int)

    The table video holds the videos, signup holds users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick but takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size:

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber,
               v.com_num, v.rate, v.THB, s.username,
               SUM(vvt.num_of_views) AS tmp_num
        FROM video v
        LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE v.status = 'Converted'
          AND v.type = 'public'
          AND v.is_duplicate = '0'
          AND v.is_adult = '0'
          AND v.channel_id <> 10
          AND vvt.views_date >= '2001-05-11'
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8

    (A screenshot of the EXPLAIN result accompanied the original post.) So, how can I optimize this?
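    One common restructuring, sketched and untested against this schema: aggregate the views table first in a derived table, so the GROUP BY runs over per-video counts instead of the full three-way join, then join the totals back to video and signup for the final filter and sort.

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber,
               v.com_num, v.rate, v.THB, s.username, t.tmp_num
        FROM (
            -- per-video totals for the window; an index on (views_date, videos_id)
            -- or (videos_id, views_date) keeps this cheap as the table grows
            SELECT videos_id, SUM(num_of_views) AS tmp_num
            FROM videos_views
            WHERE views_date >= '2001-05-11'
            GROUP BY videos_id
        ) t
        JOIN video v ON v.VID = t.videos_id
        LEFT JOIN signup s ON s.UID = v.UID
        WHERE v.status = 'Converted'
          AND v.type = 'public'
          AND v.is_duplicate = '0'
          AND v.is_adult = '0'
          AND v.channel_id <> 10
        ORDER BY t.tmp_num DESC
        LIMIT 8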
