Search Results

Search found 14874 results on 595 pages for 'mysql connector'.

Page 105 of 595

  • MySQL - optimising selection across two linked tables

    - by user293594
    I have two MySQL tables, states and trans: states (200,000 entries) looks like: id (INT) - also the primary key, energy (DOUBLE), [other stuff]; trans (14,000,000 entries) looks like: i (INT) - a foreign key referencing states.id, j (INT) - a foreign key referencing states.id, A (DOUBLE). I'd like to search for all entries in trans with trans.A > 30. (say), and then return the energy entries from the (unique) states referenced by each matching entry. So I do it with two intermediate tables: CREATE TABLE ij SELECT i,j FROM trans WHERE A>30.; CREATE TABLE temp SELECT DISTINCT i FROM ij UNION SELECT DISTINCT j FROM ij; SELECT energy FROM states,temp WHERE id=temp.i; This seems to work, but is there any way to do it without the intermediate tables? When I tried to create the temp table with a single command straight from trans: CREATE TABLE temp SELECT DISTINCT i FROM trans WHERE A>30. UNION SELECT DISTINCT j FROM trans WHERE A>30.; it took a lot longer (presumably because it had to search the large trans table twice). I'm new to MySQL and I can't seem to find an equivalent problem and answer out there on the interwebs. Many thanks, Christian
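
    A minimal sketch of one single-statement alternative, assuming the states/trans schema described above (30.0 is the example threshold from the question; on older MySQL versions the IN-subquery is not necessarily faster, so treat this as a starting point rather than a tuned answer):

      -- An index on the filter column keeps both scans of trans cheap
      ALTER TABLE trans ADD INDEX idx_A (A);

      -- One statement, no intermediate tables: gather the distinct state ids
      -- referenced by qualifying trans rows, then fetch their energies
      SELECT s.energy
      FROM states AS s
      WHERE s.id IN (
          SELECT i FROM trans WHERE A > 30.0
          UNION
          SELECT j FROM trans WHERE A > 30.0
      );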

    Read the article

  • MySQL INJECTION Solution...

    - by Val
    I have been bothered for so long by MySQL injections and was thinking of a way to eliminate this problem altogether. I have come up with something below; hope that many people will find this useful. The only drawback I can think of with this is the partial search: Jo = returns "John" by using the LIKE %% statement. Here is a php solution: <?php function safeQ(){ $search= array('delete','select');//and every keyword... $replace= array(base64_encode('delete'),base64_encode('select')); foreach($_REQUEST as $k=>$v){ str_replace($search, $replace, $v); } } foo(); function html($str){ $search= array(base64_encode('delete'),base64_encode('select')); $replace= array('delete','select');//and every keyword... str_replace($search, $replace, $str); } //example 1 ... ... $result = mysql_fetch_array($query); echo html($result[0]['field_name']); //example 2 $select = 'SELECT * FROM safeQ($_GET['query']) '; //example 3 $insert = 'INSERT INTO .... value(safeQ($_GET['query']))'; ?> I know, I know that you still could inject using 1=1 or any other type of injection... but I think this could solve half of your problem so that the right MySQL query is executed. So my question is: if anyone can find any drawbacks to this, please feel free to comment here. PLEASE GIVE AN ANSWER only if you think that this is a very useful solution and no major drawbacks are found OR you think it is a bad idea altogether...
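
    For reference, the usual way to take injection off the table is parameterization rather than keyword rewriting; a minimal sketch at the SQL level using server-side prepared statements and a hypothetical users table (the client libraries expose the same placeholder mechanism), which also keeps the LIKE %% partial search working:

      -- User input is bound as data, never concatenated into the statement text
      PREPARE find_user FROM 'SELECT * FROM users WHERE name LIKE CONCAT(''%'', ?, ''%'')';
      SET @needle = 'Jo';
      EXECUTE find_user USING @needle;
      DEALLOCATE PREPARE find_user;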

    Read the article

  • MySQL FULLTEXT not working

    - by Ross
    I'm attempting to add searching support for my PHP web app using MySQL's FULLTEXT indexes. I created a test table (using the MyISAM type, with a single text field a) and entered some sample data. Now if I'm right the following query should return both those rows: SELECT * FROM test WHERE MATCH(a) AGAINST('databases') However it returns none. I've done a bit of research and I'm doing everything right as far as I can tell - the table is a MyISAM table, the FULLTEXT indexes are set. I've tried running the query from the prompt and from phpMyAdmin, with no luck. Am I missing something crucial? UPDATE: Ok, while Cody's solution worked in my test case it doesn't seem to work on my actual table: CREATE TABLE IF NOT EXISTS `uploads` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` text NOT NULL, `size` int(11) NOT NULL, `type` text NOT NULL, `alias` text NOT NULL, `md5sum` text NOT NULL, `uploaded` datetime NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=6 ; And the data I'm using: INSERT INTO `uploads` (`id`, `name`, `size`, `type`, `alias`, `md5sum`, `uploaded`) VALUES (1, '04 Sickman.mp3', 5261182, 'audio/mp3', '1', 'df2eb6a360fbfa8e0c9893aadc2289de', '2009-07-14 16:08:02'), (2, '07 Dirt.mp3', 5056435, 'audio/mp3', '2', 'edcb873a75c94b5d0368681e4bd9ca41', '2009-07-14 16:08:08'), (3, 'header_bg2.png', 16765, 'image/png', '3', '5bc5cb5c45c7fa329dc881a8476a2af6', '2009-07-14 16:08:30'), (4, 'page_top_right2.png', 5299, 'image/png', '4', '53ea39f826b7c7aeba11060c0d8f4e81', '2009-07-14 16:08:37'), (5, 'todo.txt', 392, 'text/plain', '5', '7ee46db77d1b98b145c9a95444d8dc67', '2009-07-14 16:08:46'); The query I'm now running is: SELECT * FROM `uploads` WHERE MATCH(name) AGAINST ('header' IN BOOLEAN MODE) Which should return row 3, header_bg2.png. Instead I get another empty result set. My options for boolean searching are below: mysql> show variables like 'ft_%'; +--------------------------+----------------+ | Variable_name | Value | +--------------------------+----------------+ | ft_boolean_syntax | + -><()~*:""&| | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | +--------------------------+----------------+ 5 rows in set (0.02 sec) "header" is within the word length restrictions and I doubt it's a stop word (I'm not sure how to get the list). Any ideas?
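
    Two things worth checking against the schema above, offered as guesses rather than a definitive diagnosis: the posted CREATE TABLE declares no FULLTEXT index on name, and the default parser can treat 'header_bg2' as a single token (underscore counts as a word character), so the bare word 'header' would not match it. A sketch:

      -- Add the missing FULLTEXT index (MyISAM, so this is allowed directly)
      ALTER TABLE uploads ADD FULLTEXT INDEX ft_uploads_name (name);

      -- Prefix wildcard so a token like 'header_bg2' is matched as well
      SELECT * FROM uploads
      WHERE MATCH(name) AGAINST ('header*' IN BOOLEAN MODE);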

    Read the article

  • Using Partitions for a large MySQL table

    - by user293594
    An update on my attempts to implement a 505,000,000-row table on MySQL on my MacBook Pro: Following the advice given, I have partitioned my table, tr: i UNSIGNED INT NOT NULL, j UNSIGNED INT NOT NULL, A FLOAT(12,8) NOT NULL, nu BIGINT NOT NULL, KEY (nu), KEY (A), with a range on nu. nu ought to be a real number, but because I only have 6-d.p. accuracy and the maximum value of nu is 30000, I multiplied it by 10^8 and made it a BIGINT - I gather one can't use FLOAT or DOUBLE values to PARTITION a MySQL table. Anyway, I have 15 partitions (p0: nu<25,000,000,000, p1: nu<50,000,000,000, etc.). I was thinking that this should speed up a typical SELECT: SELECT * FROM tr WHERE nu>95000000000 AND nu<100000000000 AND A>1. to something of the order of the same query on a table consisting of only the data in the relevant partition (<30 secs). But it's taking 30 mins+ to return rows for queries within a partition, and double that if the query is for rows spanning two (contiguous) partitions. I realise I could just have 15 different tables, and query them separately, but is there a way to do this 'automatically' with partitions? Has anyone got any suggestions?
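
    A sketch of how this table might be range-partitioned, with a quick way to check that the optimizer is actually pruning down to one partition (boundaries abbreviated; table and column names are the ones from the question):

      ALTER TABLE tr
      PARTITION BY RANGE (nu) (
          PARTITION p0  VALUES LESS THAN (25000000000),
          PARTITION p1  VALUES LESS THAN (50000000000),
          -- ... p2 through p13 in 25,000,000,000 steps ...
          PARTITION p14 VALUES LESS THAN MAXVALUE
      );

      -- EXPLAIN PARTITIONS shows which partitions the query will touch;
      -- a properly pruned query should list only one or two of them
      EXPLAIN PARTITIONS
      SELECT * FROM tr
      WHERE nu > 95000000000 AND nu < 100000000000 AND A > 1.0;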

    Read the article

  • Asp.net renders string with wrong encoding, but PHP doesn't (MySQL)

    - by citronas
    I took over some old PHP application with MySQL as the database. Inside the database, there are tables including content with localized strings (therefore containing special chars). Currently there is a PHP application accessing that database. My job is to create an ASP.net (C# codebehind) application that accesses those strings as well. That works. As far as encoding goes, if I try to access these strings I do get a kind of encoding problem, like 'Ändern' and 'Prüfzeichen', but only in the ASP.net application. The PHP app sets utf-8 as charset and the strings are perfectly rendered. In the ASP.net application it's gibberish, regardless of the page encoding. In the MySQL database, the charset for the specified table 'translations' is set to 'latin1 - cp1252 West European' and the collation to 'latin1_swedish_ci'. I can't seem to figure out what PHP apparently does, and ASP.net does not. I traced the PHP code and could not find any sign of special encoding while getting a string from the database. The question is, how can I ensure correct encoding inside the ASP.net application without modifying the database, because big changes to the PHP code are not possible? Does anybody have a clue?
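
    One thing worth ruling out at the SQL level is a mismatch between the connection character set each client negotiates and the latin1 column data; a sketch of what to inspect from both applications (forcing latin1 here is an assumption to verify, e.g. against whatever charset option the .NET connection string uses):

      -- Compare what each client negotiates; the PHP app and the ASP.NET app
      -- likely differ in character_set_client / character_set_connection
      SHOW VARIABLES LIKE 'character_set%';
      SHOW FULL COLUMNS FROM translations;   -- confirms the column collation

      -- Forcing the session to latin1 mimics what the working PHP client sees
      SET NAMES latin1;
      SELECT * FROM translations LIMIT 5;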

    Read the article

  • Changing character encoding in MySQL, PHP scripts, HTML

    - by Sandman
    So, I have built on this system for quite some time, and it is currently outputting Latin1 (ISO-8859-1) to the web browser, and these are the components: MySQL - all data is stored with the Latin1 character set; PHP - all PHP text files are stored on disk with Latin1 encoding; HTML - the output has the http-equiv="content-type" content="text/html; charset=iso-8859-1" meta tag. So, I'm trying to understand how the encodings of the different parts come into play in my workflow. If I open a PHP script and change its encoding within the text editor to UTF-8 and save it back to disk and reload the web browser, the text is all messed up - unless the text comes from the DB. If I change the encoding of the DB to UTF-8 and keep the PHP files in latin1 I have to use utf8_decode() for the data to display correctly. And if I change the HTML code the browser will read it incorrectly. So yeah, I realise that if I want to "upgrade" to UTF8, I have to update all three parts of this setup for it to work correctly, but since it's a huge system with some 180k lines of PHP code and millions of posts in a lot of databases/tables, I don't want to start something like this without understanding everything correctly. What haven't I thought about? What could mess this up beyond fixing? What are the procedures for changing the encoding of an entire MySQL installation, and what's the easiest way to change the encoding of hundreds or thousands of PHP files on disk? The META tag is luckily added dynamically, so I'll change that in one place only :) Let me hear about your experiences with this.
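
    For the database side of such a migration, the usual building blocks look like the sketch below (the database and table names are placeholders; on a system this size you would rehearse it on a copy first):

      -- New objects default to UTF-8 from now on
      ALTER DATABASE mydb CHARACTER SET utf8 COLLATE utf8_general_ci;

      -- Convert an existing table's columns and re-encode the stored data
      ALTER TABLE posts CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

      -- Verify nothing is left behind on the old charset
      SELECT table_name, table_collation
      FROM information_schema.tables
      WHERE table_schema = 'mydb' AND table_collation NOT LIKE 'utf8%';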

    Read the article

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and to "Normalize" them. This means that for each value passed to the stored procedure, it checks whether the value is already in the table. If it is, then it stores the id of that row in a variable. If the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the IDs and inserts them into a table which is equivalent to the original de-normalized table, but this table is fully normalized and consists of mainly foreign keys. My problem with this design is that the stored procedure takes approximately 10ms or so to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance is to do with the way in which I'm doing the inserts. i.e. INSERT INTO TableA (first_value) VALUES (argument_from_sp) ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id); SET @TableAId = LAST_INSERT_ID(); The "ON DUPLICATE KEY UPDATE" is a bit of a hack, due to the fact that on a duplicate key I don't want to update anything but rather just return the id value of the row. If you miss this step though, the LAST_INSERT_ID() function returns the wrong value when you're trying to run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you
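
    Since per-row round trips dominate at 10 million records, one commonly suggested alternative is to de-duplicate in bulk rather than row by row; a sketch assuming the raw values are first bulk-loaded into a hypothetical staging table:

      -- 1. Bulk-load the raw values once (LOAD DATA / multi-row INSERT) into staging
      -- 2. Insert only the values that are not present yet, in one statement
      --    (relies on the same unique key on first_value that the ON DUPLICATE
      --     KEY trick already implies)
      INSERT IGNORE INTO TableA (first_value)
      SELECT DISTINCT raw_value FROM staging;

      -- 3. Resolve every raw value to its id with a single join instead of
      --    one LAST_INSERT_ID() round trip per record
      SELECT s.raw_value, a.id
      FROM staging AS s
      JOIN TableA AS a ON a.first_value = s.raw_value;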

    Read the article

  • Running mysql query using node blocks the whole process and then times out

    - by lobengula3rd
    I have a Node.js script that uses the mysql npm package (Felix's). I have a procedure stored in my DB which I call when the user selects an option to kind of create his own instance of the program. The user chooses for how long he wants that data to be initialized for him. This is supposed to be between 1 and 2 years. So if he chooses 1 year, this query will insert around 20,000 rows into 1 table. If I run this query on a local DB it takes around 30 seconds (I suppose that is reasonable because it's a big query which should be done only once in 1 or 2 years, so it's ok). For some reason my node script freezes as if it can't handle any more calls from other users. The even worse problem is that after about 2 minutes my client UI gets an error from the server. At this point not all the data that was supposed to enter the DB has been entered. After waiting about another minute all the data finally gets to the DB, and only then will it accept new requests. This is my connection: this.connection = mysql.createConnection({ host : '********rds.amazonaws.com', user : 'admin', password : '******', database : '*****' }); and this is my query function: this.createCourts = function (req, res, next){ connection.query('CALL filldates("' + req.body['startDate'] + '","' + req.body['endDate'] + '","' + req.body['numOfCourts'] + '","' + req.body['duration'] + '","' + req.body['sundayOpen'] + '","' + req.body['mondayOpen'] + '","' + req.body['tuesdayOpen'] + '","' + req.body['wednesdayOpen'] + '","' + req.body['thursdayOpen'] + '","' + req.body['fridayOpen'] + '","' + req.body['saturdayOpen'] + '","' + req.body['sundayClose'] + '","' + req.body['mondayClose'] + '","' + req.body['tuesdayClose'] + '","' + req.body['wednesdayClose'] + '","' + req.body['thursdayClose'] + '","' + req.body['fridayClose'] + '","' + req.body['saturdayClose'] + '");', function(err){ if (err){ console.log(err); } else return res.send(200); }); }; What am I missing here? As I understand it, connection.query should be async, so why is it actually blocking my node script? Thanks.

    Read the article

  • How to query MySQL for exact length and exact UTF-8 characters

    - by oskarae
    I have a table with a words dictionary in my language (Latvian). CREATE TABLE words ( value varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; And let's say it has 3 words inside: INSERT INTO words (value) VALUES ('teja'); INSERT INTO words (value) VALUES ('vejš'); INSERT INTO words (value) VALUES ('feja'); What I want to do is find all words that are exactly 4 characters long and where the second character is 'e' and the third character is 'j'. To me it feels like the correct query would be: SELECT * FROM words WHERE value LIKE '_ej_'; But the problem with this query is that it returns not 2 entries ('teja','vejš') but all three. As I understand it, this is because internally MySQL converts strings to some ASCII representation? Then there is the BINARY addition possible for LIKE: SELECT * FROM words WHERE value LIKE BINARY '_ej_'; But this also does not return 2 entries ('teja','vejš') but only one ('teja'). I believe this has something to do with UTF-8 using 2 bytes for non-ASCII chars? So the question: What MySQL query would return my exact two words ('teja','vejš')? Thank you in advance
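
    A small diagnostic sketch that makes the byte-vs-character distinction visible for the three sample rows, which is usually the first thing to check before adjusting the pattern or collation:

      -- LENGTH() counts bytes, CHAR_LENGTH() counts characters:
      -- 'vejš' is 4 characters long but 5 bytes in utf8, so anything that
      -- compares byte-by-byte (such as LIKE BINARY) treats it as 5 "characters"
      SELECT value,
             CHAR_LENGTH(value) AS chars,
             LENGTH(value)      AS bytes
      FROM words;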

    Read the article

  • Heavy Mysql operation & Time Constraints [closed]

    - by Rahul Jha
    There is a performance issue that I am stuck with in my application, which is based on PHP & MySQL. The application is for data migration, where data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate check, id generation), inserted into one central table and then into 5 different tables. There, an id is generated and that id has to be updated back to the central table. There are different sets of records and validation rules. The problem I am facing is that when I insert, say, a 4K-row file (containing 20 columns) it works fine: within 15 min it gets inserted everywhere. But when I insert the same records again, it takes one hour to insert (ideally it should get inserted by marking the earlier inserted data as duplicate). After going through the log file, what I noticed is that there is a MySQL select statement where I am checking the duplicates and getting the IDs which are duplicates. Then I am calling a function inside a for loop which basically inserts records into 5 tables and updates the id in the central table. This function call takes the major part of the whole process's time. P.S. The records have to be inserted record by record. Kindly suggest some solution. //This is the sample code $query=mysql_query("SELECT DISTINCT p1.ID FROM table1 p1, table2 p2, table3 a WHERE p2.datatype =0 AND (p1.datatype =1 || p1.datatype=2) AND p2.ID =0 AND p1.ID = a.ID AND p1.coulmn1 = p2.column1 AND p1.coulmn2 = p2.coulmn2 AND a.coulmn3 = p2.column3"); $num=mysql_num_rows($query); for($i=0;$i<$num;$i++) { $f=mysql_result($query,$i,"ID"); //calling function RecordInsert($f); }

    Read the article

  • MySQL db Audit Trail Trigger

    - by Natkeeran
    I need to track changes (audit trail) in certain tables in a MySql Db. I am trying to implement the solution suggested here. I have an AuditLog Table with the following columns: AuditLogID, TableName, RowPK, FieldName, OldValue, NewValue, TimeStamp. The mysql stored procedure is the following (this executes fine, and creates the procedure): The call to the procedure such as: CALL addLogTrigger('ProductTypes', 'ProductTypeID'); executes, but does not create any triggers (see the image). SHOW TRIGGERS returns empty set. Please let me know what could be the issue, or an alternate way to implement this. DROP PROCEDURE IF EXISTS addLogTrigger; DELIMITER $ CREATE PROCEDURE addLogTrigger(IN tableName VARCHAR(255), IN pkField VARCHAR(255)) BEGIN SELECT CONCAT( 'DELIMITER $\n', 'CREATE TRIGGER ', tableName, '_AU AFTER UPDATE ON ', tableName, ' FOR EACH ROW BEGIN ', GROUP_CONCAT( CONCAT( 'IF NOT( OLD.', column_name, ' <=> NEW.', column_name, ') THEN INSERT INTO AuditLog (', 'TableName, ', 'RowPK, ', 'FieldName, ', 'OldValue, ', 'NewValue' ') VALUES ( ''', table_name, ''', NEW.', pkField, ', ''', column_name, ''', OLD.', column_name, ', NEW.', column_name, '); END IF;' ) SEPARATOR ' ' ), ' END;$' ) FROM information_schema.columns WHERE table_schema = database() AND table_name = tableName; END$ DELIMITER ;
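
    For comparison, here is roughly what one generated trigger would look like if the statement the procedure builds were run by hand (table and PK from the CALL example, Name as a hypothetical audited column). The key point is that the procedure as written only SELECTs the DDL string, and MySQL does not allow CREATE TRIGGER to be executed as dynamic/prepared SQL inside a procedure, so the generated text has to be executed separately:

      DELIMITER $$
      CREATE TRIGGER ProductTypes_AU AFTER UPDATE ON ProductTypes
      FOR EACH ROW
      BEGIN
          -- One block like this per audited column; Name is a hypothetical column
          IF NOT (OLD.Name <=> NEW.Name) THEN
              INSERT INTO AuditLog (TableName, RowPK, FieldName, OldValue, NewValue)
              VALUES ('ProductTypes', NEW.ProductTypeID, 'Name', OLD.Name, NEW.Name);
          END IF;
      END$$
      DELIMITER ;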

    Read the article

  • MySQL Stored Procedures : Use a variable as the database name in a cursor declaration

    - by Justin
    I need to use a variable to indicate what database to query in the declaration of a cursor. Here is a short snippet of the code : CREATE PROCEDURE `update_cdrs_lnp_data`(IN dbName VARCHAR(25), OUT returnCode SMALLINT) cdr_records:BEGIN DECLARE cdr_record_cursor CURSOR FOR SELECT cdrs_id, called, calling FROM dbName.cdrs WHERE lrn_checked = 'N'; # Setup logging DECLARE EXIT HANDLER FOR SQLEXCEPTION BEGIN #call log_debug('Got exception in update_cdrs_lnp_data'); SET returnCode = -1; END; As you can see, I'm TRYING to use the variable dbName to indicate in which database the query should occur within. However, MySQL will NOT allow that. I also tried things such as : CREATE PROCEDURE `update_cdrs_lnp_data`(IN dbName VARCHAR(25), OUT returnCode SMALLINT) cdr_records:BEGIN DECLARE cdr_record_cursor CURSOR FOR SET @query = CONCAT("SELECT cdrs_id, called, calling FROM " ,dbName, ".cdrs WHERE lrn_checked = 'N' "); PREPARE STMT FROM @query; EXECUTE STMT; # Setup logging DECLARE EXIT HANDLER FOR SQLEXCEPTION BEGIN #call log_debug('Got exception in update_cdrs_lnp_data'); SET returnCode = -1; END; Of course this doesn't work either as MySQL only allows a standard SQL statement in the cursor declaration. Can anyone think of a way to use the same stored procedure in multiple databases by passing in the name of the db that should be affected?
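
    Since a cursor declaration cannot reference a prepared statement, one workaround that is sometimes suggested is to materialize the dynamic query into a temporary table first and declare the cursor over that fixed name; a rough sketch, untested against the original procedure:

      -- Build and run the dynamic part once, into a temp table in the current schema
      SET @sql = CONCAT(
          'CREATE TEMPORARY TABLE tmp_cdrs AS ',
          'SELECT cdrs_id, called, calling FROM `', dbName, '`.cdrs ',
          'WHERE lrn_checked = ''N''');
      PREPARE stmt FROM @sql;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;

      -- Then, in a nested BEGIN ... END block (so the DECLARE still comes first
      -- within its own block):
      --   DECLARE cdr_record_cursor CURSOR FOR
      --       SELECT cdrs_id, called, calling FROM tmp_cdrs;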

    Read the article

  • Best way to create a SPARQL endpoint for a RDBMS (MySQL database)

    - by Ankur
    I am doing (want to do) some experiments with Linked Open Datasets, particularly those put out by governments. I have an RDBMS (more specifically MySQL). I designed it with semantic web ideas in mind, i.e. I have information stored as objects, predicates and classes which define objects. In turn all objects are related to each other through statements of the form subject -- predicate -- object (where the subjects are from the objects table). I want to be able to query other RDF triple stores from my application and let other triple stores query my data. Is it possible to "set something up" so that this is possible? I have looked at Jena. Using Jena seems to mean I have to use it as the storage application rather than MySQL - the only problem with this is that I include a new concept called a category (which I don't think is part of the semantic web languages). I will use categories to help with displaying information (they don't have any other meaning) but using Jena seems to mean that I can't organise predicates under categories for more convenient viewing. I am using Java so a Java API is preferred. It's also possible I misunderstood the purpose of Jena, and maybe that can be of use, but I am not sure how. I am sure four days from now this question will seem rather silly, but at the moment I am somewhat confused about how to proceed.

    Read the article

  • Fastest way to become a MySQL expert?

    - by Kerry
    I have been using MySQL for years, mainly on smaller projects until the last year or so. I'm not sure if it's the nature of the language or my lack of real tutorials that gives me the feeling of being unsure if what I'm writing is the proper way for optimization purposes and scaling purposes. While self-taught in PHP I'm very sure of myself and the code I write, easily can compare it to others and so on. With MySQL, I'm not sure whether (and in what cases) an INNER JOIN or LEFT JOIN should be used, nor am I aware of the large amount of functionality that it has. While I've written code for databases that handled tens of millions of records, I don't know if it's optimum. I often find that a small tweak will make a query take less than 1/10 of the original time... but how do I know that my current query isn't also slow? I would like to become completely confident in this field in the ability to optimize databases and be scalable. Use is not a problem -- I use it on a daily basis in a number of different ways. So, the question is, what's the path? Reading a book? Website/tutorials? Recommendations?
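
    On the specific INNER vs LEFT JOIN point, the difference is small enough to show in two statements (users/orders are hypothetical tables used only for illustration):

      -- INNER JOIN: only users that have at least one matching order
      SELECT u.name, o.total
      FROM users u
      INNER JOIN orders o ON o.user_id = u.id;

      -- LEFT JOIN: every user, with NULLs in the order columns when none match
      SELECT u.name, o.total
      FROM users u
      LEFT JOIN orders o ON o.user_id = u.id;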

    Read the article

  • JAVA MySql multiple word search

    - by user1703849
    I have a database in MySQL that has a name column in it which contains several words (a description). I am connected to the database with Java through Eclipse. I have a search that returns results only if the name field contains one word. id: name: info: type: 1 balloon big red balloon big 2 house expensive beautiful luxury 3 chicken wings deep fried wings tasty These are just random words, but as an example my search can only see e.g. balloon and then show its info; if I type chicken wings, it does nothing. So is it possible somehow to search columns with multiple words? This is my search code below: import java.io.*; import java.sql.*; import java.util.*; class Search { public static void main(String[] args) { Scanner input = new Scanner(System.in); try { Connection con = DriverManager.getConnection( "jdbc:mysql://example/mydb", "user", "password"); Statement stmt = (Statement) con.createStatement(); System.out.print("enter search: "); String name = input.next(); String SQL = "SELECT * FROM menu where name LIKE '" + name + "'"; ResultSet rs = stmt.executeQuery(SQL); while (rs.next()) { System.out.println("Name: " +rs.getString("name")); System.out.println("Description: " + rs.getString("info") ); System.out.println("Price: " + rs.getString("Price")); } } catch (Exception e) { System.out.println("ERROR: " + e.getMessage()); } } }
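
    Two likely culprits here, offered as guesses: Scanner.next() only reads up to the first whitespace (so "chicken wings" arrives as just "chicken"; input.nextLine() would keep the whole phrase), and a bare LIKE without wildcards behaves like an exact match. On the SQL side a contains-style search might look like the sketch below; in the Java code the phrase would normally be bound through a PreparedStatement placeholder rather than string concatenation:

      -- Matches rows whose name merely contains the search phrase
      SELECT * FROM menu
      WHERE name LIKE CONCAT('%', 'chicken wings', '%');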

    Read the article

  • MySQL nested CASE error I need help with?

    - by AK
    What I am trying to do here is: IF the records in table todo as identified in $done have a value in the column recurinterval, THEN reset the date_scheduled column, ELSE just set the status_id column to 6 for those records. This is the error I get from mysql_error(): ... You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CASE recurinterval != 0 AND recurinterval IS NOT NULL THEN SET date_sche' at line 2 How can I make this statement work? UPDATE todo CASE recurinterval != 0 AND recurinterval IS NOT NULL THEN SET date_scheduled = CASE recurunit WHEN 'DAY' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY) WHEN 'WEEK' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK) WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH) WHEN 'YEAR' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR) END WHERE todo_id IN ($done) ELSE SET status_id = 6 WHERE todo_id IN ($done) END The following MySQL statement worked just fine before I revised it as above. UPDATE todo SET date_scheduled = CASE recurunit WHEN 'DAY' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY) WHEN 'WEEK' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK) WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH) WHEN 'YEAR' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR) END WHERE todo_id IN ($done) AND recurinterval != 0 AND recurinterval IS NOT NULL
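
    CASE in MySQL is an expression, not a control-flow statement, so it cannot switch between two different SET/WHERE clauses; both branches have to live inside the SET. A sketch that folds them into a single UPDATE (the id list is a placeholder where the PHP code interpolates $done):

      UPDATE todo
      SET date_scheduled = CASE
              WHEN recurinterval != 0 AND recurinterval IS NOT NULL THEN
                  CASE recurunit
                      WHEN 'DAY'   THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY)
                      WHEN 'WEEK'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK)
                      WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH)
                      WHEN 'YEAR'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR)
                  END
              ELSE date_scheduled
          END,
          status_id = CASE
              WHEN recurinterval = 0 OR recurinterval IS NULL THEN 6
              ELSE status_id
          END
      WHERE todo_id IN (1, 2, 3);   -- $done goes here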

    Read the article

  • mysql query using global variables

    - by Carlos
    I am trying to run a query to activate the user's account. I am not sure if I am having a problem with the query itself or if there's something else that I don't know about. Here is the code: if($_SESSION['lastid']&&$_SESSION['random']) { $check= mysql_query('SELECT * FROM members WHERE id= "$_SESSION[lastid]" AND random = " $_SESSION[random]"'); $checknum = mysql_num_rows($check); //$checknum = mysql_query($check) or die("Error: ". mysql_error(). " with query ". $check); if($checknum != 0) // run query to activate the account { $acti= mysql_query('UPDATE members SET activation = "1" WHERE id= "$_SESSION[lastid]"'); die('Your account has been activated. You may now log in!'); }else{ echo('Invalid id or activation code.') . ' lastid: ' .$_SESSION['lastid'] . ' random: ' .$_SESSION['random'] ; // die ('Invalid id or activation code.'); } }else{ die('Could not either find id or random number!'); } This is the warning I am getting from MySQL: Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in /hermes/bosweb26b/b2501/servername/folder/file.php on line 30 But when I echo the variables out, I get the same values that are stored in the database... Invalid id or activation code. lastid: 2 and random: 36308075. Could someone please give me a hint? Thank you.

    Read the article

  • How to use MySQL geospatial extensions with spherical geometries

    - by Joshua
    Hi Everyone, I would like to store thousands of latitude/longitude points in a MySQL db. I was successful at setting up the tables and adding the data using the geospatial extensions, where the column 'coord' is a Point(lat, lng). Problem: I want to quickly find the 'N' closest entries to latitude 'X' degrees and longitude 'Y' degrees. Since the Distance() function has not yet been implemented, I used the GLength() function to calculate the distance between (X,Y) and each of the entries, sorting by ascending distance, and limiting to 'N' results. The problem is that this is not calculating shortest distance with spherical geometry. Which means if Y = 179.9 degrees, the list of closest entries will only include longitudes starting at 179.9 and decreasing, even though closer entries exist with longitudes increasing from -179.9. How does one typically handle the discontinuity in longitude when working with spherical geometries in databases? There has to be an easy solution to this, but I must just be searching for the wrong thing because I have not found anything helpful. Should I just forget the GLength() function and create my own function for calculating angular separation? If I do this, will it still be fast and take advantage of the geospatial extensions? Thanks! josh UPDATE: This is exactly what I am describing above. However, it is only for SQL Server. Apparently SQL Server has Geometry and Geography datatypes. The geography type does exactly what I need. Is there something similar in MySQL?
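
    Without a geography type, the usual MySQL-side answer is to compute great-circle (haversine) distance directly, which handles the ±180° wrap automatically; a sketch assuming coord = Point(lat, lng) as described, a hypothetical table name points, and the query position in user variables:

      SET @lat = 10.0, @lng = 179.9;   -- query point X, Y in degrees

      SELECT id,
             2 * 6371 * ASIN(SQRT(
                 POW(SIN(RADIANS(X(coord) - @lat) / 2), 2) +
                 COS(RADIANS(@lat)) * COS(RADIANS(X(coord))) *
                 POW(SIN(RADIANS(Y(coord) - @lng) / 2), 2)
             )) AS dist_km            -- 6371 km = mean Earth radius
      FROM points
      ORDER BY dist_km
      LIMIT 10;                       -- N closest entries

    This scans every row rather than using the spatial index; the common compromise is to pre-filter with an MBRContains() bounding box and apply the haversine expression only to that subset.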

    Read the article

  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query: SELECT `posts`.* FROM `posts` INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id WHERE (((`posts_tags`.tag_id = 1))) ORDER BY posts.created_at DESC; The sizes of the tables are 38k and 31k rows, and MySQL uses "filesort", so it gets pretty slow. I tried to use different indexes, no luck. CREATE TABLE `posts` ( `id` int(11) NOT NULL auto_increment, `created_at` datetime default NULL, PRIMARY KEY (`id`), KEY `index_posts_on_created_at` (`created_at`), KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`) ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci CREATE TABLE `posts_tags` ( `id` int(11) NOT NULL auto_increment, `post_id` int(11) default NULL, `tag_id` int(11) default NULL, `created_at` datetime default NULL, `updated_at` datetime default NULL, PRIMARY KEY (`id`), KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`) ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8 +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+ | 1 | SIMPLE | posts_tags | index | index_post_id_and_tag_id | index_post_id_and_tag_id | 10 | NULL | 24159 | Using where; Using index; Using temporary; Using filesort | | 1 | SIMPLE | posts | eq_ref | PRIMARY | PRIMARY | 4 | .posts_tags.post_id | 1 | | +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+ 2 rows in set (0.00 sec) What kind of index do I need to define to avoid MySQL using filesort? Is it possible when the order field is not in the WHERE clause?
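
    Because the filter column (posts_tags.tag_id) and the sort column (posts.created_at) live in different tables, no single index can serve both; one workaround often suggested for this pattern is to denormalize the post date onto the join table so one composite index covers the filter and the sort. A sketch (the added column and index names are arbitrary):

      -- Copy the sort key onto the join table once, then keep it in sync on insert
      ALTER TABLE posts_tags ADD COLUMN post_created_at DATETIME;

      UPDATE posts_tags pt
      JOIN posts p ON p.id = pt.post_id
      SET pt.post_created_at = p.created_at;

      ALTER TABLE posts_tags ADD INDEX idx_tag_created (tag_id, post_created_at);

      -- The index now matches both the WHERE and the ORDER BY, so no filesort
      SELECT p.*
      FROM posts_tags pt
      JOIN posts p ON p.id = pt.post_id
      WHERE pt.tag_id = 1
      ORDER BY pt.post_created_at DESC;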

    Read the article

  • Fix DB duplicate entries (MySQL bug)

    - by Silence
    I'm using MySQL 4.1. Some tables have duplicate entries that go against the constraints. When I try to group rows, MySQL doesn't recognise the rows as being similar. Example: Table A has a column "Name" with the Unique property. The table contains one row with the name 'Hach?' and one row with the same name but a square at the end instead of the '?' (which I can't reproduce in this textfield). A "Group by" on these 2 rows returns 2 separate rows. This causes several problems, including the fact that I can't export and reimport the database. On reimporting, an error mentions that an Insert has failed because it violates a constraint. In theory I could try to import, wait for the first error, fix the import script and the original DB, and repeat. In practice, that would take forever. Is there a way to list all the anomalies or force the database to recheck constraints (and list all the values/rows that go against them)? I can supply the .MYD file if it can be helpful.
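
    A first diagnostic step that is often enough here is simply to look at the raw bytes, since a character that renders as a square is usually a control or mis-encoded byte; a sketch against the example table A and column Name from the question:

      -- HEX() makes the invisible trailing character visible, and the
      -- byte vs. character counts show whether it is a multi-byte artifact
      SELECT Name, HEX(Name), LENGTH(Name), CHAR_LENGTH(Name)
      FROM A
      ORDER BY Name;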

    Read the article

  • Why is this MySQL INSERT INTO running twice?

    - by stuboo
    I'm attempting to use the mysql insert statement below to add information to a database table. When I execute the script, however, the insert statement is run twice. Here's the URL mysite.com/save.php?Body=p220,c180 Thanks in advance. <?php //tipping fees application require('base.inc.php'); require('functions.inc.php'); // connect to the database & save this message there try { $dbh = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass); //$number = formatPhone($_REQUEST['From']); //if($number != 'xxx-xxx-xxxx'){die('SMS from unknown number');} // kill this if from anyone but mike $message = $_REQUEST['Body']; //$Sid = $_REQUEST['SmsSid']; $now = time(); echo $message; $message = explode(",",$message); echo '<pre>'; print_r($message); echo 'message count = '.count($message); echo '</pre>'; $i = 0; $j = count($message); while($i<$j){ $quantity =$message[$i]; $material = substr($quantity, 0, 1); $amount = substr($quantity, 1); switch ($material) { case 'p': $m = "paper"; break; case 'c': $m = "containers"; break; default: $m = "other"; } $count = $dbh->exec("INSERT INTO tippingtotals(sid,time,material,weight) VALUES('$i+$j','$now','$m','$amount')"); echo $count; echo '<br />'; $i++; } //close the database connection $dbh = null; } catch(PDOException $e) { echo $e->getMessage(); } ?>

    Read the article

  • Cassandra instead of MySQL for social networking app

    - by Christopher McCann
    I am in the middle of building a new app which will have very similar features to Facebook, and although obviously it won't ever have to deal with the likes of 400,000,000 users, it will still be used by a substantial user base and most of them will demand it run very, very quickly. I have extensive experience with MySQL, but a social app offers complexities which MySQL is not well suited to. I know Facebook, Twitter etc have moved towards Cassandra for a lot of their data, but I am not sure how far to go with it. For example, would you store such things as user data - usernames, passwords, addresses etc - in Cassandra? Would you store e-mails, comments, status updates etc in Cassandra? I have also read a lot that something like neo4j is much better for representing the friend relationships used by social apps, as it is a graph database. I am only just starting down the NoSQL route so any guidance is greatly appreciated. Would anyone be able to advise me on this? I hope I am not being too general!

    Read the article

  • MySQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello, Specifications: MySQL 4.1+. I have a certain situation that requires a certain result set from a MySQL query; let's see the current query first & then ask my question: SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline) FROM thread AS thread LEFT JOIN post AS post ON(thread.threadid = post.threadid) LEFT JOIN forum AS forum ON(thread.forumid = forum.forumid) WHERE post.postid != thread.firstpostid AND thread.open = 1 AND thread.visible = 1 AND thread.replycount >= 1 AND post.visible = 1 AND (forum.options & 1) AND (forum.options & 2) AND (forum.options & 4) AND forum.forumid IN(1,2,3) GROUP BY post.threadid ORDER BY tdateline DESC, pdateline ASC As you can see, mainly I need to select the dateline of threads from the 'thread' table, in addition to the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts, and I need only one result per thread, I've used the GROUP BY clause for that purpose. This query will return only one post's dateline with its related unique thread. My questions are: How can I limit the returned threads per forum? Suppose I need only 5 threads - as a maximum - to be returned for each forum declared in the WHERE clause 'forum.forumid IN(1,2,3)'; how can this be achieved? Are there any recommendations for optimizing this query (of course after solving the first point)? Notes: I prefer not to use sub-queries, but if it's the only solution available I'll accept it. Double queries are not recommended. I'm sure there's a smart solution for this situation. Advice appreciated in advance :)
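
    One subquery-based pattern for "at most N threads per forum" that works on MySQL 4.1+ is to keep a thread only if fewer than N newer threads exist in the same forum; a simplified sketch against the thread table alone (the post/forum joins and option checks from the original query would be layered back on top):

      SELECT t.*
      FROM thread AS t
      WHERE t.forumid IN (1, 2, 3)
        AND t.open = 1
        AND t.visible = 1
        AND (
              SELECT COUNT(*)
              FROM thread AS newer
              WHERE newer.forumid = t.forumid
                AND newer.dateline > t.dateline
            ) < 5                 -- at most 5 threads per forum
      ORDER BY t.forumid, t.dateline DESC;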

    Read the article

  • For each result in MySQL query, push to array (complicated)

    - by Dylan Taylor
    Okay, here's what I'm trying to do. I am running a MySQL query for the most recent posts. For each of the returned rows, I need to push the ID of the row to an array, then within that ID in the array, I need to add more data from the rows. A multi-dimensional array. Here's my code thus far. $query = "SELECT * FROM posts ORDER BY id DESC LIMIT 10"; $result = mysql_query($query); while($row = mysql_fetch_array($result)){ $id = $row["id"]; $post_title = $row["title"]; $post_text = $row["text"]; $post_tags = $row["tags"]; $post_category = $row["category"]; $post_date = $row["date"]; } As you can see I haven't done anything with arrays yet. Here's the ideal structure I'm looking for, just in case you're confused. The master array, I guess you could call it. We'll just call this array $posts. Within this array, I have one array for each row returned in my MySQL query. Within those arrays there is the $post_title, $post_text, etc. How do I do this? I'm so confused... an example would be really appreciated. -Dylan

    Read the article

  • Assigning Object to View, big MySQL resultset.

    - by A Finn
    Hello (sorry for my bad English). Is it bad practice to assign an object to the view and call its methods there? I use Smarty as my template engine. In my controller I could do it like this: 1# $this->view->assign("name", $this->model->getName); and in my view <p>{$name}</p> OR 2# $this->view->assign("Object", $this->model); and in my view <p>{$Report->getName()}</p> Well, my biggest problem is that I have to handle a big amount of data coming out of MySQL, and I thought that I could make a method that prints out the data while looping over mysql_fetch_row. At least I know that using HTML tags in the model is a bad thing to do. So I would assign the object to the view to get the result into the right position on the page. Reading a MySQL result into an array first may cause memory problems, am I right? So what is the solution for doing things MVC-style? And yes, I'm using a framework of my own.

    Read the article
