Search Results

Search found 14874 results on 595 pages for 'mysql connector'.


  • Scalable way to store files on server (PHP)?

    - by Nathaniel Bennett
    I'm creating my first web application - a really simplistic online text editor. What I need to do is find the best way to store text-based files - a lot of them. These text files can be past 10,000 words in size (text words, not computer words); in essence I want the text documents to be limitless in size. I was thinking about storing the text files in my MySQL database, but thought there was a better way. Instead I'm planning on storing the text files in an XML-based format in a directory on my server. The rows in the database define the name of the XML-based text file and the user who created the text, along with basic metadata. An ID is generated using a V4 GUID generator, which gives the text an ID, and the text is stored in the "/store" directory on my server. The text definitions on my server contain this ID, and the Android app I'm developing gets the contents of the text file by retrieving the text definition and then downloading the text to the local device using the GUID in the text definition. I just think this is a botch job - how can I improve this system? There have been cases of GUIDs colliding. I don't want this to happen. A "slim" possibility isn't good enough - I need to make sure there is absolutely no chance of a GUID collision. I was planning on checking the database for texts that have the same ID before storing the text with a particular ID - however, I believe that with over 20,000 pieces of text in my database this would take a long time and produce unneeded stress on the server. How can I make the GUID safe? What happens when a GUID collides? The server backend is going to be written in PHP.
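
    One way to rule out collisions without a pre-insert lookup is to let MySQL enforce uniqueness and simply retry on the (vanishingly unlikely) duplicate. Below is a minimal PHP sketch of that idea, not the asker's code: the documents table, its UNIQUE guid column, and the $pdo/$userId/$name/$xmlContent variables are all invented for illustration.

        // Sketch: a UNIQUE index on guid makes the database the authority on
        // uniqueness, so no separate existence check (and no check-then-insert
        // race) is needed. Assumes PDO is set to throw exceptions on errors.
        $maxAttempts = 5;
        for ($i = 0; $i < $maxAttempts; $i++) {
            $guid = bin2hex(random_bytes(16));   // 128 random bits, like a v4 GUID
            try {
                $stmt = $pdo->prepare('INSERT INTO documents (guid, user_id, name) VALUES (?, ?, ?)');
                $stmt->execute([$guid, $userId, $name]);
                // The insert succeeded, so this GUID is guaranteed unique.
                file_put_contents("/store/$guid.xml", $xmlContent);
                break;
            } catch (PDOException $e) {
                if ($e->getCode() !== '23000') {  // SQLSTATE 23000 = duplicate key
                    throw $e;                     // anything else is a real error
                }
                // Duplicate GUID: fall through and try again with a new one.
            }
        }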

    Read the article

  • Applying iterative algorithm to a set of rows from database

    - by Corvin
    Hello, this question may seem too basic to some, but please bear with me, it's been a while since I did any serious database programming. I have an algorithm that I need to program in PHP/MySQL to work on a website. It performs some computations iteratively on an array of objects (it ranks the objects based on their properties). In each iteration the algorithm runs through the whole collection a couple of times, accessing various data from different parts of the collection. The algorithm needs several hundred iterations to complete. The array comes from a database. The straightforward solution that I see is to take the results of a database query, create an object for each row of the query, put the objects into an array, and pass the array to my algorithm. However, I'm concerned about the efficiency of such a solution when I have to work with an array of several thousand items, because what I do is essentially mirror the results of a query into memory. On the other hand, making a database query a couple of times on each iteration of the algorithm also seems wrong. So, my question is - what is the correct architectural solution for a problem like this? Is it OK to mirror the query results to memory? If not, what is the best way to work with query results in such an algorithm? Thanks!
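
    For scale, a few thousand rows of ranking data is usually well within what PHP can hold in memory, so the common pattern is: load once, iterate in memory, write back once. A rough sketch under that assumption - the items table, its columns, and the $pdo connection are made-up names, not from the question:

        // Load the whole working set once, up front.
        $rows = $pdo->query('SELECT id, score, weight FROM items')->fetchAll(PDO::FETCH_OBJ);

        // Index by id so each iteration can reach any object in O(1).
        $items = [];
        foreach ($rows as $row) {
            $items[$row->id] = $row;
        }

        // The several hundred iterations touch only memory, never the database.
        for ($i = 0; $i < 500; $i++) {
            foreach ($items as $item) {
                // ... recompute $item->score from other items' properties ...
            }
        }

        // Persist the final ranking in a single pass at the end.
        $update = $pdo->prepare('UPDATE items SET score = ? WHERE id = ?');
        foreach ($items as $item) {
            $update->execute([$item->score, $item->id]);
        }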

    Read the article

  • How to handle product ratings in a database

    - by Mel
    Hello, I would like to know the best approach to storing product ratings in a database. I have in mind the following two (simplified, and assuming a MySQL DB) scenarios:

    Scenario 1: Create two columns in the product table to store the number of votes and the sum of all votes, and use these columns to compute an average on the product display page:

        products(productID, productName, voteCount, voteSum)

    Pros: I only need to access one table, and thus execute one query, to display product data and ratings. Cons: Write operations are executed against a table whose original purpose is only to furnish product data.

    Scenario 2: Create an additional table to store ratings:

        products(productID, productName)
        ratings(productID, voteCount, voteSum)

    Pros: Ratings are isolated in a separate table, leaving the products table to furnish data on available products. Cons: I will have to execute two separate queries on product page requests (one for data and another for ratings).

    In terms of performance, which of these two approaches is better: allowing users to execute an occasional write query against a table that will handle hundreds of read requests, or executing two queries on every product page but isolating the write query in a separate table? I'm a novice at database development, and often find myself struggling with simple questions such as these. Many thanks,
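
    For reference, a hedged sketch of Scenario 2 with the rating kept as one row per product: if ratings.productID is the table's primary (or unique) key, the vote can be recorded atomically, and a join means the product page still needs only one query. The $pdo connection and the $productID/$vote variables are assumptions for illustration:

        // Record a vote: create the counter row on the first vote, then accumulate.
        $stmt = $pdo->prepare(
            'INSERT INTO ratings (productID, voteCount, voteSum) VALUES (?, 1, ?)
             ON DUPLICATE KEY UPDATE voteCount = voteCount + 1, voteSum = voteSum + VALUES(voteSum)'
        );
        $stmt->execute([$productID, $vote]);

        // Product page: a LEFT JOIN returns data and average rating in one query.
        $stmt = $pdo->prepare(
            'SELECT p.productName, r.voteSum / r.voteCount AS avgRating
             FROM products p LEFT JOIN ratings r USING (productID)
             WHERE p.productID = ?'
        );
        $stmt->execute([$productID]);
        $product = $stmt->fetch(PDO::FETCH_ASSOC);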

    Read the article

  • Merging rows with uniqueness constraints

    - by Flambino
    I've got a little time-tracking web app (implemented in Rails 3.2.8 & MySQL). The app has several users who add their time to specific tasks, on a given date. The system is set up so a user can only have 1 time entry (i.e. row) per task per date. I.e. if you add time twice on the same task and date, it'll add time to the existing row, rather than create a new one. Now I'm looking to merge 2 tasks. In the simplest terms, merging task ID 2 into task ID 1 would take this

        time | user_id | task_id | date
        -----+---------+---------+-----------
          10 |       1 |       1 | 2012-10-29
          15 |       2 |       1 | 2012-10-29
          10 |       1 |       2 | 2012-10-29
           5 |       3 |       2 | 2012-10-29

    and change it into this

        time | user_id | task_id | date
        -----+---------+---------+-----------
          20 |       1 |       1 | 2012-10-29   <-- time values merged (summed)
          15 |       2 |       1 | 2012-10-29   <-- no change
           5 |       3 |       1 | 2012-10-29   <-- task_id changed (no merging necessary)

    I.e. merge by summing the time values where the given user_id/date/task combo would conflict. I figure I can use a unique constraint to do an ON DUPLICATE KEY UPDATE ... if I do an insert for every task_id=2 entry. But that seems pretty inelegant. I've also tried to figure out a way to first update all the rows in task 1 with the summed-up times, but I can't quite figure that one out. Any ideas?
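
    One set-based possibility (a sketch, not necessarily the Rails-idiomatic answer): a single INSERT ... SELECT folds every task-2 row into task 1, relying on the existing user/task/date unique index to trigger the summing, and a DELETE then removes the leftovers. The table name time_entries is an assumption; substitute the real one:

        $pdo->beginTransaction();

        // Re-insert each task-2 row as a task-1 row; where a task-1 row already
        // exists for that user and date, the unique key fires and the times are summed.
        $pdo->exec(
            "INSERT INTO time_entries (time, user_id, task_id, date)
             SELECT time, user_id, 1, date FROM time_entries WHERE task_id = 2
             ON DUPLICATE KEY UPDATE time = time + VALUES(time)"
        );

        // Every task-2 row has now been folded into task 1, so drop the originals.
        $pdo->exec("DELETE FROM time_entries WHERE task_id = 2");

        $pdo->commit();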

    Read the article

  • Joining tables, if percentage is above certain value

    - by CluelessGerman
    My question is similar to this one: Compare rows and get percentage. However, it is a little different; I adapted my question to the other post. I have 2 tables. First table:

        user_id | post_id
              1 |       1
              1 |       2
              1 |       3
              2 |      12
              2 |      15

    And second table:

        post_id | rating
              1 |      1
              1 |      2
              1 |      3
              2 |      1
              2 |      5
              3 |   null
              3 |      1
              3 |      4
             12 |      4
             15 |      1

    So now I would like to count the ratings for each post in the second table. If a post has more than, let's say, 50% positive ratings, then I want to get its post_id, join it to the post_id in table one, and add 1 for that user_id. At the end it would return each user_id with its number of positive posts. The result for the table above would be:

        user_id | helpfulPosts
              1 |            2
              2 |            1

    The posts with post_id 1 and 3 have a positive rating, because more than 50% of their ratings are 1-3. The post with id = 2 is not positive, because the rating is exactly 50%. How would I achieve this? For clarification: it's a MySQL RDBMS, and a positive post is one where the number of ratings with values 1, 2 and 3 is more than half of the overall ratings. Basically the same thing as in the other thread I posted above.
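
    A sketch of one way to express this in MySQL (the table names user_posts and post_ratings are invented here; the exact HAVING condition depends on how NULL ratings and exact 50% ties should be counted - as written, NULLs count toward the total and ties are not positive):

        $sql = "
            SELECT up.user_id, COUNT(*) AS helpfulPosts
            FROM user_posts AS up
            JOIN (
                SELECT post_id
                FROM post_ratings
                GROUP BY post_id
                HAVING SUM(rating IN (1, 2, 3)) > COUNT(*) / 2
            ) AS positive ON positive.post_id = up.post_id
            GROUP BY up.user_id";

        foreach ($pdo->query($sql) as $row) {
            echo $row['user_id'] . ' has ' . $row['helpfulPosts'] . " helpful posts\n";
        }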

    Read the article

  • Not getting concept of null

    - by appu
    Hi guys, I'm beginning with MySQL and I am not able to grasp the concept of NULL. Check the screenshot (*declare_not_null*, link below). In it I specifically declared the 'name' field to be NOT NULL, yet when I run the 'desc test' command, the table description shows the default value for the name field to be NULL. Why is that so? From what I have read about NULL, it connotes missing information or information that is not applicable. So when I declare a field to be NOT NULL it implies (as per my understanding) that the user must enter a value for the name field, or else the DB engine should generate an error, i.e. the record will not be entered in the DB. However, when I run 'insert into test value();' the DB engine enters the record into the table. Check the screenshot (*empty_value*, link below). FLICKR LINKS *declare_not_null* http://www.flickr.com/photos/55097319@N03/5302758813/ *empty_values* Check the second screenshot on flickr. Q.2: What would be the SQL statement to drop a primary key from a table's field? If I use 'ALTER TABLE test drop key id;' it gives the following: ERROR: Incorrect table definition; there can be only one auto column and it must be defined as a key. Thanks for your help.
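
    Two hedged notes that may help frame this, sketched in PHP/PDO against the test table from the question (the exact error text and the INT column type are assumptions): in MySQL's non-strict SQL mode an empty insert fills a NOT NULL VARCHAR column with its implicit default '' instead of raising an error, and the primary key can only be dropped after AUTO_INCREMENT is removed from the column, because an auto-increment column must be covered by a key.

        // With strict mode on, the empty insert is rejected instead of silently
        // filled with the implicit default. (PDO must be set to throw exceptions.)
        $pdo->exec("SET SESSION sql_mode = 'STRICT_ALL_TABLES'");
        try {
            $pdo->exec('INSERT INTO test () VALUES ()');
        } catch (PDOException $e) {
            echo $e->getMessage();   // e.g. "Field 'name' doesn't have a default value"
        }

        // Q.2: remove AUTO_INCREMENT first, then the primary key can be dropped.
        $pdo->exec('ALTER TABLE test MODIFY id INT NOT NULL');
        $pdo->exec('ALTER TABLE test DROP PRIMARY KEY');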

    Read the article

  • PHP: SELECT ... FROM ... WHERE col IN (...) not working using array

    - by Sam Ram San
    I'm trying to pull some data from MySQL using an array that was fetched by a first query. Everything's fine all the way to the implode; after that, it's been a headache for me. Can someone help me?

        <?php
        $var = 'somedata';
        include("config/conect.php");
        $zip = "SELECT * FROM table WHERE firstcol LIKE '%$var%' ORDER BY seconcol";
        $results = $connetction->query($zip);
        while ($row = $results->fetch_array()) {
            $mycode[] = $row['zip'];
        }
        array_pop($mycode);
        $mycode = implode(', ', $mycode);
        //print_r ($mycode);
        echo '<br /><br /><br />';
        $usr = "SELECT * FROM reg_temp WHERE zip IN('" . join("','", $mycode) . "')";
        $results = $asies->query($usr);
        while ($row = $results->fetch_arra()) {
            $name = $row['name'];
            echo $name;
        }
        ?>
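
    For comparison, a hedged sketch of the same flow with the usual fixes applied: keep $mycode as an array instead of imploding it before the join, use one mysqli connection consistently, correct the fetch_array typo, and bind values through prepared statements. Table and column names are carried over from the question as-is; $connection is an assumed mysqli handle, and get_result() requires the mysqlnd driver.

        $zipCodes = [];
        $stmt = $connection->prepare(
            "SELECT zip FROM `table` WHERE firstcol LIKE CONCAT('%', ?, '%') ORDER BY seconcol"
        );
        $stmt->bind_param('s', $var);
        $stmt->execute();
        $result = $stmt->get_result();
        while ($row = $result->fetch_assoc()) {
            $zipCodes[] = $row['zip'];
        }

        if ($zipCodes) {
            // One ? placeholder per zip code, e.g. "?,?,?" for three values.
            $placeholders = implode(',', array_fill(0, count($zipCodes), '?'));
            $stmt = $connection->prepare("SELECT * FROM reg_temp WHERE zip IN ($placeholders)");
            $stmt->bind_param(str_repeat('s', count($zipCodes)), ...$zipCodes);
            $stmt->execute();
            foreach ($stmt->get_result() as $row) {
                echo $row['name'];
            }
        }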

    Read the article

  • how to get something to display only once in a while loop

    - by Matt Nathanson
    I've got a MySQL query running, and it checks to see if an iterator is equal to 1, then displays this div title...

        if ($this->dliterator == 1) {
            echo "<div class='clientsection' id='downloads'>Downloads</div><br/>";
        }

    The problem is that the dliterator may not necessarily start at 1 (it is directly related to a downloadid from the database). How can I get this to display only the first time through the loop?

        while ($row = mysql_fetch_assoc($result)) {
            if ($row['download'] != null) {
                if ($this->dliterator == 1) {
                    echo "<div class='clientsection' id='downloads'>Downloads</div><br/>";
                }
                if ($editDownload == 1) {
                    echo "<div class='clientlink' style='margin-top: 15px;'>";
                    echo "<input name='downloads[$this->dliterator][name]' type='text' id='download$this->dliterator' value='" . $row['download'] . "'/>";
                    echo "<input name='downloads[$this->dliterator][title]' type='text' id='downloadtitle$this->dliterator' value='" . $row['downloadtitle'] . "'/>";
                    echo "<img class='removelink' src='/images/deletelink.png' width='15' />";
                    echo "<input id='downloadid' name='downloads[$this->dliterator][id]' type='hidden' value='" . $row['downloadid'] . "' style='display: none'/>";
                    echo "<br/><img id='uploaddownload$uploaditerator' class='uploaddownload' src='../images/upload.png' width='80'/>";
                    echo "</div>";
                }
            }
            $this->dliterator++;
            $uploaditerator++;
        }
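
    A minimal sketch of the usual fix: tie the header to "first row that actually has a download" with a boolean flag, rather than to a particular iterator value. Apart from the $headerShown flag, the variable names come from the question.

        $headerShown = false;
        while ($row = mysql_fetch_assoc($result)) {
            if ($row['download'] != null) {
                if (!$headerShown) {
                    echo "<div class='clientsection' id='downloads'>Downloads</div><br/>";
                    $headerShown = true;   // never print the header again
                }
                // ... render the download row exactly as before ...
            }
            $this->dliterator++;
            $uploaditerator++;
        }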

    Read the article

  • Duplicate information from sql result

    - by puddleJumper
    I looked at about 18 other posts on here, and most people are asking how to delete the records, not just hide them. So my problem: I have a database with staff members who are associated with locations. Many of the staff members are associated with more than one location. What I want to do is display only the first location listed in the MySQL result and skip over the others. I have the SQL query linking the tables together, and it works, aside from showing the same information multiple times for staff members who are in several locations. This is the SQL statement I have currently:

        SELECT staff_tbl.staffID, staff_tbl.firstName, staff_tbl.middleInitial, staff_tbl.lastName,
               location_tbl.locationID, location_tbl.staffID,
               officelocations_tbl.locationID, officelocations_tbl.officeName,
               staff_title_tbl.title_ID, staff_title_tbl.staff_ID,
               titles_tbl.titleID, titles_tbl.titleName
        FROM staff_tbl
        INNER JOIN location_tbl ON location_tbl.staffID = staff_tbl.staffID
        INNER JOIN officelocations_tbl ON location_tbl.locationID = officelocations_tbl.locationID
        INNER JOIN staff_title_tbl ON staff_title_tbl.staff_ID = staff_tbl.staffID
        INNER JOIN titles_tbl ON staff_title_tbl.title_ID = titles_tbl.titleID

    and my PHP is

        <?php do { ?>
        <tr>
            <td><?php echo $row_rs_Staff_Info['firstName']; ?>&nbsp;<?php echo $row_rs_Staff_Info['lastName']; ?></td>
            <td><?php echo $row_rs_Staff_Info['titleName']; ?>&nbsp;</td>
            <td><?php echo $row_rs_Staff_Info['officeName']; ?>&nbsp;</td>
        </tr>
        <?php } while ($row_mysqlResult = mysql_fetch_assoc($rs_mysqlResult)); ?>

    What I would like to know is: is there a way, using PHP, to select only the first entry listed for each person, display it, and just skip over the others? I was thinking it could be done by adding the staffIDs to an array and, if a staffID is already in there, skipping the next one listed, but I wasn't quite sure how to write it that way. Any help would be great; thank you in advance.
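
    A sketch of that seen-IDs idea (note the original loop fetches into $row_mysqlResult but reads from $row_rs_Staff_Info; one variable is used consistently below, and an ORDER BY staff_tbl.staffID in the query would make "first location" deterministic). The deprecated mysql_* functions are kept only to match the question.

        $seenStaff = array();
        while ($row = mysql_fetch_assoc($rs_mysqlResult)) {
            if (isset($seenStaff[$row['staffID']])) {
                continue;   // this person's first location was already printed
            }
            $seenStaff[$row['staffID']] = true;
            echo '<tr>';
            echo '<td>' . $row['firstName'] . ' ' . $row['lastName'] . '</td>';
            echo '<td>' . $row['titleName'] . '</td>';
            echo '<td>' . $row['officeName'] . '</td>';
            echo '</tr>';
        }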

    Read the article

  • Query to bring count from comma separated value

    - by Mugil
    I have two tables, one for storing products and the other for storing the orders list.

        CREATE TABLE ProductsList(ProductId INT NOT NULL PRIMARY KEY, ProductName VARCHAR(50));

        INSERT INTO ProductsList(ProductId, ProductName)
        VALUES (1,'Product A'), (2,'Product B'), (3,'Product C'), (4,'Product D'), (5,'Product E'),
               (6,'Product F'), (7,'Product G'), (8,'Product H'), (9,'Product I'), (10,'Product J');

        CREATE TABLE OrderList(OrderId INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
                               EmailId VARCHAR(50), CSVProductIds VARCHAR(50));

        SELECT * FROM OrderList;

        INSERT INTO OrderList(EmailId, CSVProductIds)
        VALUES ('[email protected]', '2,4,1,5,7'),
               ('[email protected]', '5,7,4'),
               ('[email protected]', '2'),
               ('[email protected]', '8,9'),
               ('[email protected]', '4,5,9'),
               ('[email protected]', '1,2,3'),
               ('[email protected]', '9,10'),
               ('[email protected]', '1,5');

    The OrderList table stores the item IDs as a comma-separated value for every customer who places an order. Like this, I have more than 40k records in my DB table. Now I am assigned the task of creating a report in which I should display the items and the number of people who ordered each item, as shown below:

        ItemName   NoOfOrders
        Product A           4
        Product B           3
        Product C           1
        Product D           3
        Product E           4
        Product F           0
        Product G           2
        Product H           1
        Product I           2
        Product J           1

    I used the query below in my PHP to bring the orders one by one and store them in an array:

        SELECT COUNT(PL.EmailId) FROM OrderList PL
        WHERE CSVProductIds LIKE '2' OR CSVProductIds LIKE '%,2,%'
           OR CSVProductIds LIKE '%,2' OR CSVProductIds LIKE '2,%';

    1. Is it possible to get the same output using a single query? 2. Does using LIKE in a MySQL query slow down the DB when the table has more records, i.e. 40k rows?
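
    On question 1, a single-query sketch using FIND_IN_SET instead of the four LIKE patterns; the LEFT JOIN keeps products with zero orders in the report. On question 2: both LIKE '%...%' and FIND_IN_SET on a CSV column have to scan every row (no index can be used), so with 40k+ rows the usual long-term fix is to normalise the CSV into an order-items junction table. The $pdo connection is an assumption; table and column names are from the question.

        $sql = "
            SELECT p.ProductName AS ItemName,
                   COUNT(o.OrderId) AS NoOfOrders
            FROM ProductsList p
            LEFT JOIN OrderList o
                   ON FIND_IN_SET(p.ProductId, o.CSVProductIds) > 0
            GROUP BY p.ProductId, p.ProductName
            ORDER BY p.ProductId";

        foreach ($pdo->query($sql) as $row) {
            echo $row['ItemName'] . ': ' . $row['NoOfOrders'] . "\n";
        }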

    Read the article

  • Storing large json strings to database + hash

    - by Guy
    I need to store quite large JSON data strings in the database. I am using gzip to compress the string and therefore the BLOB MySQL data type to store it. However, only 5% of all the requests contain unique data, and only unique data ought to be stored in the database. My approach is as follows:

    1. array_multisort the data (the array [a, b, c] is virtually the same as [a, c, b]).
    2. json_encode the data (json_encode is faster than serialize; we need a string representation of the array for step 3).
    3. sha1 the data (slower than md5, but collisions are less likely).
    4. Check if the hash exists in the database:
       - yes – do not insert the data;
       - no – gzip the data and store it along with the hash.

    Is there anything about this (apart from storing JSON data in the database in the first place) that sounds fishy or should be done a different way? P.S. We are talking about a database with roughly 1kk unique records being created every month.
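
    A minimal sketch of that flow, assuming a hypothetical json_blobs table whose hash column has a UNIQUE index (the index also means the check-then-insert pair could be replaced by a single INSERT IGNORE if two requests ever race):

        array_multisort($data);            // canonical order, so [a, b, c] == [a, c, b]
        $json = json_encode($data);
        $hash = sha1($json);

        $stmt = $pdo->prepare('SELECT 1 FROM json_blobs WHERE hash = ?');
        $stmt->execute([$hash]);
        if (!$stmt->fetchColumn()) {
            $stmt = $pdo->prepare('INSERT INTO json_blobs (hash, payload) VALUES (?, ?)');
            $stmt->execute([$hash, gzencode($json)]);
        }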

    Read the article

  • undefined method `code' for nil:NilClass message with rails and a legacy database

    - by Jude Osborn
    I'm setting up a very simple rails 3 application to view data in a legacy MySQL database. The legacy database is mostly rails ORM compatible, except that foreign key fields are pluralized. For example, my "orders" table has a foreign key field to the "companies" table called "companies_id" (rather than "company_id"). So naturally I'm having to use the ":foreign_key" attribute of "belongs_to" to set the field name manually. I haven't used rails in a few years, but I'm pretty sure I'm doing everything right, yet I get the following error when trying to access "order.currency.code":

        undefined method `code' for nil:NilClass

    This is a very simple application so far. The only thing I've done is generate the application and a bunch of scaffolds for each of the legacy database tables. Then I've gone into some of the models to make adjustments to accommodate the above mentioned difference in database naming conventions, and added some fields to the views. That's it. No funny business. So my database tables look like this (relevant fields only):

        orders
        ------
        id
        description
        invoice_number
        currencies_id

        currencies
        ----------
        id
        code
        description

    My Order model looks like this:

        class Order < ActiveRecord::Base
          belongs_to :currency, :foreign_key => 'currencies_id'
        end

    My Currency model looks like this:

        class Currency < ActiveRecord::Base
          has_many :orders
        end

    The relevant view snippet looks like this:

        <% @orders.each do |order| %>
        <tr>
          <td><%= order.description %></td>
          <td><%= order.invoice_number %></td>
          <td><%= order.currency.code %></td>
        </tr>
        <% end %>

    I'm completely out of ideas. Any suggestions?

    Read the article

  • Rails: foreign_key that is set with a CASE WHEN mysql statement

    - by user305270
        belongs_to :removed_friend,
                   :class_name => 'Profile',
                   :foreign_key => "(CASE WHEN friendships.profile_id = #{self.id}
                                     THEN friendships.friend_id
                                     ELSE friendships.profile_id END)"

    The self.id is not working either :s - it should be the id of the profile that triggered the action. Anyway, the main problem is that this foreign_key is not working and I get a MySQL error. Please help. Thanks.

    Read the article

  • While loops within while loops and output php?

    - by NovacTownCode
    I have a while loop to show the replies for a post on my website. The value for parentID used in the query is $post['postID'], which comes from an array of details for the post being viewed. As seen below, it outputs the following (each subject is a link to view the full post):

        $q = $dbc->prepare("SELECT * FROM boardposts WHERE parentID = ?");
        $q->execute(array($post['postID']));
        while ($postReply = $q->fetch(PDO::FETCH_ASSOC)) {
            echo '<p><a href="http://www.example.com/boards?topic=' . $_GET['topic'] . '&amp;view=' . $postReply['postID'] . '">' . $postReply['subject'] . '</a>';
        }

    This currently outputs something along the lines of:

        Replies To This Message:
        subject 1
        subject 2
        subject 3
        subject 4

    Is there a way I can also include replies to the replies in the list, something along the lines of:

        Replies To This Message:
        subject 1
                subject 1 reply
                subject 1 reply
                        subject 1 reply reply
        subject 2
        subject 3
                subject 3 reply
                subject 3 reply
                        subject 3 reply reply
        subject 4
                subject 4 reply
        subject 5
        subject 6
                subject 6 reply
                        subject 4 reply reply

    I understand all the indenting can be done with CSS, but I am stuck on how to pull the data from the MySQL database in the correct order. I tried while loops within while loops, but that involved queries inside while loops, which is bad! Thanks for your input!
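
    One common approach, sketched here: fetch every reply for the thread in a single query, group the rows by parentID in PHP, and print the tree recursively - no query ever runs inside a loop. This assumes the table has some column (topicID below, an invented name) that identifies the whole thread; otherwise the grouping idea still works on whatever superset of rows can be fetched up front.

        $stmt = $dbc->prepare('SELECT postID, parentID, subject FROM boardposts WHERE topicID = ?');
        $stmt->execute(array($_GET['topic']));

        // Group replies by their parent so children can be looked up in O(1).
        $children = array();
        foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
            $children[$row['parentID']][] = $row;
        }

        function printReplies(array $children, $parentId, $depth = 0) {
            if (empty($children[$parentId])) {
                return;
            }
            foreach ($children[$parentId] as $reply) {
                echo '<p style="margin-left:' . ($depth * 30) . 'px">'
                   . htmlspecialchars($reply['subject']) . '</p>';
                printReplies($children, $reply['postID'], $depth + 1);   // recurse into this reply's replies
            }
        }

        printReplies($children, $post['postID']);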

    Read the article

  • Stuck on the logic of creating tags for posts (like SO tags)? (PHP)

    - by ggfan
    I am stuck on how to create tags for each post on my site. I am not sure how to add the tags into the database. Currently I have 3 tables:

        +------------+     +---------------+     +---------------+
        | Tags       |     | Posting       |     | PostingTags   |
        +------------+     +---------------+     +---------------+
        | + TagID    |     | + posting_id  |     | + posting_id  |
        | + TagName  |     | + title       |     | + tagid       |
        +------------+     +---------------+     +---------------+

    The Tags table is just the names of the tags (e.g. 1 PHP, 2 MySQL, 3 HTML). The Posting table holds the posts (e.g. 1 What is PHP?, 2 What is CSS?, 3 What is HTML?). The PostingTags table shows the relation between postings and tags. When a user submits a posting, I insert the data into the posting table. It automatically inserts the posting_id for each post (posting_id is a primary key).

        $title = mysqli_real_escape_string($dbc, trim($_POST['title']));
        $query4 = "INSERT INTO posting (title) VALUES ('$title')";
        mysqli_query($dbc, $query4);

    HOWEVER, how do I insert the tags for each post? When users fill out the form, there is a checkbox area for all the available tags and they check off whatever tags they want. (I am not doing user-typed tags just yet.) This shows each tag with a checkbox; when users check off tags, they get stored in an array called "postingtag[]":

        <label class="styled">Select Tags:</label>
        <?php
        $dbc = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);
        $query5 = "SELECT * FROM tags ORDER BY tagname";
        $data5 = mysqli_query($dbc, $query5);
        while ($row5 = mysqli_fetch_array($data5)) {
            echo '<li><input type="checkbox" name="postingtag[]" value="' . $row5['tagname'] . '">' . $row5['tagname'] . '</li>';
        }
        ?>

    My question is how do I insert the tags in the "postingtag" array into my postingtags table? Should I do this?

        $postingtag = $_POST["postingtag"];
        foreach ($postingtag as $value) {
            $query5 = "INSERT INTO postingtags (posting_id, tagID) VALUES (____, $value)";
            mysqli_query($dbc, $query5);
        }

    1. In this query, how do I get the posting_id value of the post? I am stuck on the logic here, so if someone can help me explain the next step, I would appreciate it! Is there an easier way to insert tags?
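
    A sketch of the missing step: right after the posting INSERT, mysqli_insert_id() returns the auto-increment posting_id of that row, which can then be reused for each tag row. One assumption baked in here: the checkbox value is the tag's ID rather than its name (otherwise the ID has to be looked up from the tags table first).

        $title = mysqli_real_escape_string($dbc, trim($_POST['title']));
        mysqli_query($dbc, "INSERT INTO posting (title) VALUES ('$title')");
        $posting_id = mysqli_insert_id($dbc);   // ID generated by the INSERT above

        foreach ($_POST['postingtag'] as $tagID) {
            $tagID = (int) $tagID;              // checkboxes carry tag IDs in this sketch
            mysqli_query($dbc, "INSERT INTO postingtags (posting_id, tagID) VALUES ($posting_id, $tagID)");
        }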

    Read the article

  • sql: Group by x,y,z; return grouped by x,y with lowest f(z)

    - by Sai Emrys
    This is for http://cssfingerprint.com I collect timing stats about how fast the different methods I use perform on different browsers, etc., so that I can optimize the scraping speed. Separately, I have a report about what each method returns for a handful of URLs with known-correct values, so that I can tell which methods are bogus on which browsers. (Each is different, alas.) The related tables look like this:

        CREATE TABLE `browser_tests` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `bogus` tinyint(1) DEFAULT NULL,
          `result` tinyint(1) DEFAULT NULL,
          `method` varchar(255) DEFAULT NULL,
          `url` varchar(255) DEFAULT NULL,
          `os` varchar(255) DEFAULT NULL,
          `browser` varchar(255) DEFAULT NULL,
          `version` varchar(255) DEFAULT NULL,
          `created_at` datetime DEFAULT NULL,
          `updated_at` datetime DEFAULT NULL,
          `user_agent` varchar(255) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=33784 DEFAULT CHARSET=latin1

        CREATE TABLE `method_timings` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `method` varchar(255) DEFAULT NULL,
          `batch_size` int(11) DEFAULT NULL,
          `timing` int(11) DEFAULT NULL,
          `os` varchar(255) DEFAULT NULL,
          `browser` varchar(255) DEFAULT NULL,
          `version` varchar(255) DEFAULT NULL,
          `user_agent` varchar(255) DEFAULT NULL,
          `created_at` datetime DEFAULT NULL,
          `updated_at` datetime DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=28849 DEFAULT CHARSET=latin1

    (user_agent is broken down pre-insert into browser, version, and os from a small list of recognized values using regex; I keep the original user-agent string just in case.) I have a query like this that tells me the average timing for every non-bogus browser / version / method tuple:

        select c, avg(bogus) as bog, timing, method, browser, version
        from browser_tests as b
        inner join (
          select count(*) as c, round(avg(timing)) as timing, method, browser, version
          from method_timings
          group by browser, version, method
          having c > 10
          order by browser, version, timing
        ) as t using (browser, version, method)
        group by browser, version, method
        having bog < 1
        order by browser, version, timing;

    Which returns something like:

        c    bog     tim  method             browser  version
        88   0.8333  184  reuse_insert       Chrome   4.0.249.89
        18   0.0000  238  mass_insert_width  Chrome   4.0.249.89
        70   0.0400  246  mass_insert        Chrome   4.0.249.89
        70   0.0400  327  mass_noinsert      Chrome   4.0.249.89
        88   0.0556  367  reuse_reinsert     Chrome   4.0.249.89
        88   0.0556  383  jquery             Chrome   4.0.249.89
        88   0.0556  863  full_reinsert      Chrome   4.0.249.89
        187  0.0000  105  jquery             Chrome   5.0.307.11
        187  0.8806  109  reuse_insert       Chrome   5.0.307.11
        123  0.0000  110  mass_insert_width  Chrome   5.0.307.11
        176  0.0000  231  mass_noinsert      Chrome   5.0.307.11
        176  0.0000  237  mass_insert        Chrome   5.0.307.11
        187  0.0000  314  reuse_reinsert     Chrome   5.0.307.11
        187  0.0000  372  full_reinsert      Chrome   5.0.307.11
        12   0.7500  82   reuse_insert       Chrome   5.0.335.0
        12   0.2500  102  jquery             Chrome   5.0.335.0
        [...]

    I want to modify this query to return only the browser/version/method with the lowest timing - i.e. something like:

        88   0.8333  184  reuse_insert       Chrome   4.0.249.89
        187  0.0000  105  jquery             Chrome   5.0.307.11
        12   0.7500  82   reuse_insert       Chrome   5.0.335.0
        [...]

    How can I do this, while still returning the method that goes with that lowest timing? I could filter it app-side, but I'd rather do this in mysql since it'd work better with my caching.
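
    A sketch of the standard greatest-n-per-group shape for this, written directly against method_timings and shown as a SQL string (the join against browser_tests for the bogus filter is left out for brevity and would wrap around the same idea; ties on the minimum timing will return more than one row per browser/version):

        $sql = "
            SELECT m.browser, m.version, m.method, m.timing
            FROM (
                SELECT browser, version, method, ROUND(AVG(timing)) AS timing
                FROM method_timings
                GROUP BY browser, version, method
                HAVING COUNT(*) > 10
            ) AS m
            JOIN (
                SELECT browser, version, MIN(timing) AS timing
                FROM (
                    SELECT browser, version, method, ROUND(AVG(timing)) AS timing
                    FROM method_timings
                    GROUP BY browser, version, method
                    HAVING COUNT(*) > 10
                ) AS a
                GROUP BY browser, version
            ) AS best USING (browser, version, timing)
            ORDER BY m.browser, m.version";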

    Read the article

  • Geolocation SQL query not finding exact location

    - by Iridium52
    I have been testing my geolocation query for some time now and I haven't found any issues with it until now. I am trying to search for all cities within a given radius, and often I'm searching for cities surrounding a city using that city's coords, but recently I tried searching around a city and found that the city itself was not returned. I have these cities as an excerpt in my database:

        city           latitude   longitude
        Saint-Mathieu  45.316708  -73.516253
        Saint-Édouard  45.233374  -73.516254
        Saint-Michel   45.233374  -73.566256
        Saint-Rémi     45.266708  -73.616257

    But when I run my query around the city of Saint-Rémi, with the following query...

        SELECT tblcity.city, tblcity.latitude, tblcity.longitude,
               truncate((degrees(acos(
                   sin(radians(tblcity.latitude)) * sin(radians(45.266708))
                 + cos(radians(tblcity.latitude)) * cos(radians(45.266708))
                 * cos(radians(tblcity.longitude - -73.616257))
               )) * 69.09 * 1.6), 1) as distance
        FROM tblcity
        HAVING distance < 10
        ORDER BY distance desc

    I get these results:

        city           latitude   longitude   distance
        Saint-Mathieu  45.316708  -73.516253  9.5
        Saint-Édouard  45.233374  -73.516254  8.6
        Saint-Michel   45.233374  -73.566256  5.3

    The town of Saint-Rémi is missing from the search. So I tried a modified query hoping to get a better result:

        SELECT tblcity.city, tblcity.latitude, tblcity.longitude,
               truncate((6371 * acos(
                   cos(radians(45.266708)) * cos(radians(tblcity.latitude))
                 * cos(radians(tblcity.longitude) - radians(-73.616257))
                 + sin(radians(45.266708)) * sin(radians(tblcity.latitude))
               )), 1) AS distance
        FROM tblcity
        HAVING distance < 10
        ORDER BY distance desc

    But I get the same result... However, if I modify Saint-Rémi's coords slightly by changing the last digit of the lat or long by 1, both queries will return Saint-Rémi. Also, if I center the query on any of the other cities above, the searched city is returned in the results. Can anyone shed some light on what may be causing my queries above to not display the searched city of Saint-Rémi? I have added a sample of the table (with extra fields removed) below. I'm using MySQL 5.0.45. Thanks in advance.

        CREATE TABLE `tblcity` (
          `IDCity` int(1) NOT NULL auto_increment,
          `City` varchar(155) NOT NULL default '',
          `Latitude` decimal(9,6) NOT NULL default '0.000000',
          `Longitude` decimal(9,6) NOT NULL default '0.000000',
          PRIMARY KEY (`IDCity`)
        ) ENGINE=MyISAM AUTO_INCREMENT=52743 DEFAULT CHARSET=latin1 AUTO_INCREMENT=52743;

        INSERT INTO `tblcity` (`city`, `latitude`, `longitude`) VALUES
        ('Saint-Mathieu', 45.316708, -73.516253),
        ('Saint-Édouard', 45.233374, -73.516254),
        ('Saint-Michel', 45.233374, -73.566256),
        ('Saint-Rémi', 45.266708, -73.616257);
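
    One hedged possibility to check (a guess suggested by the symptoms, not a confirmed diagnosis): when the search point coincides exactly with a stored row, floating-point rounding can push the ACOS() argument a hair above 1, and MySQL's ACOS() then returns NULL, so the NULL distance silently fails "distance < 10". Clamping the argument into [-1, 1] removes that failure mode; the $pdo connection is an assumption.

        $sql = "
            SELECT city, latitude, longitude,
                   TRUNCATE(6371 * ACOS(LEAST(1, GREATEST(-1,
                       COS(RADIANS(?)) * COS(RADIANS(latitude))
                     * COS(RADIANS(longitude) - RADIANS(?))
                     + SIN(RADIANS(?)) * SIN(RADIANS(latitude))
                   ))), 1) AS distance
            FROM tblcity
            HAVING distance < 10
            ORDER BY distance DESC";

        $stmt = $pdo->prepare($sql);
        $stmt->execute([45.266708, -73.616257, 45.266708]);   // lat, lng, lat of Saint-Rémi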

    Read the article

  • Data denormalization and C# objects DB serialization

    - by Robert Koritnik
    I'm using a DB table with various different entities. This means that I can't have an arbitrary number of fields in it to save all kinds of different entities. Instead I want to save just the most important fields (dates, reference IDs - a kind of foreign key to various other tables, the most important text fields, etc.) and an additional text field where I'd like to store more complete object data. The most obvious solution would be to use XML strings and store those. The second most obvious choice would be JSON, which is usually shorter and probably also faster to serialize/deserialize. But is it really? My objects also wouldn't need to be strictly serializable, because JsonSerializer is usually able to serialize anything - even anonymous objects, which may well be used here. What would be the most optimal solution to this problem?

    Additional info: my DB is highly normalised and I'm using Entity Framework, but for the purpose of having external super-fast fulltext search functionality I'm accepting a bit of DB denormalisation. Just for info, I'm using SphinxSE on top of MySQL. Sphinx would return row IDs that I would use to quickly query my index-optimised conglomerate table and get the most important data from it, much faster than querying multiple tables all over my DB. My table would have columns like:

        RowID      (auto increment)
        EntityID   (of the actual entity - but not directly related, because this would have to point to different tables)
        EntityType (so I would be able to get the actual entity if needed)
        DateAdded  (record timestamp when it's been added into this table)
        Title
        Metadata   (serialized data related to the particular entity type)

    This table would be indexed with the Sphinx indexer. When I search for data using this indexer, I would provide a series of EntityIDs and a limit date. The indexer would have to return a very limited, paged amount of RowIDs ordered by DateAdded (descending). I would then just join these RowIDs to my table and get the relevant results. So this won't actually be full-text search but a filtering search. Getting RowIDs would be very fast this way, and getting results back from the table would be much faster than doing EntityID and DateAdded comparisons across multiple tables, even though they would be properly indexed.

    Read the article

  • PHP: How to find connections between users so I can create a closed friend circle?

    - by CuSS
    Hi all, First of all, I'm not trying to create a social network, facebook is big enough! (comic) I've chosen this question as an example because it fits exactly what I'm trying to do. Imagine that I have in MySQL a users table and a user_connections table with 'friend requests'. It would be something like this:

    Users Table:

        userid  username
        1       John
        2       Amalia
        3       Stewie
        4       Stuart
        5       Ron
        6       Harry
        7       Joseph
        8       Tiago
        9       Anselmo
        10      Maria

    User Connections Table:

        userid_request  userid_accepted
        2               3
        7               2
        3               4
        7               8
        5               6
        4               5
        8               9
        4               7
        9               10
        6               1
        10              7
        1               2

    Now I want to find circles between friends, create a structure array, and put that circle in the database (none of the arrays can include the same friends that another already has). Return example:

        // First Circle of Friends
        Circleid => 1
        CircleStructure => Array(
            1 => 2,
            2 => 3,
            3 => 4,
            4 => 5,
            5 => 6,
            6 => 1,
        )

        // Second Circle of Friends
        Circleid => 2
        CircleStructure => Array(
            7 => 8,
            8 => 9,
            9 => 10,
            10 => 7,
        )

    I'm trying to think of an algorithm to do that, but I think it will take a lot of processing time because it would randomly search the database until it 'closes' a circle. PS: The minimum structure length of a circle is 3 connections and the limit is 100 (so the daemon doesn't search the entire database). EDIT: I've thought of something like this:

        function browse_user($userget = 'random', $users_history = array()) {
            $user = user::get($userget);
            $users_history[] = $user['userid'];
            $connections = user::connection::getByUser($user['userid']);
            foreach ($connections as $connection) {
                $userid = ($connection['userid_request'] != $user['userid'])
                    ? $connection['userid_request']
                    : $connection['userid_accepted'];
                // Start the circle array
                if (in_array($userid, $users_history)) return array($user['userid'] => $userid);
                $res = browse_user($userid, $users_history);
                if ($res !== false) {
                    // Continue the circle array
                    return $res + array($user['userid'] => $userid);
                }
            }
            return false;
        }

        while (true) {
            $res = browse_user(); // Yuppy, friend circle found!
            if ($res !== false) {
                user::circle::create($res);
            }
            // Start from scratch again!
        }

    The problem with this function is that it could search the entire database without finding the biggest circle, or the best match.

    Read the article
