Search Results

Search found 58168 results on 2327 pages for 'mysql real escape string'.


  • c# winforms events restore textbox contents on escape

    - by aj3jr
    Using C# in Visual Studio 2008 Express. I have a textbox containing a path, and I append a "\" to it on the Leave event. If the user presses the Escape key I want the old contents to be restored, but when I type over all the text and press Escape I hear a thump and the old text isn't restored. Here's what I have so far:

        public string _path;
        public string _oldPath;

        this.txtPath.KeyPress += new System.Windows.Forms.KeyPressEventHandler(txtPath_CheckKeys);
        this.txtPath.Enter += new EventHandler(txtPath_Enter);
        this.txtPath.LostFocus += new EventHandler(txtPath_LostFocus);

        public void txtPath_CheckKeys(object sender, KeyPressEventArgs kpe)
        {
            if (kpe.KeyChar == (char)27)
            {
                _path = _oldPath;
            }
        }

        public void txtPath_Enter(object sender, EventArgs e)
        {
            //AppendSlash(sender, e);
            _oldPath = _path;
        }

        void txtPath_LostFocus(object sender, EventArgs e)
        {
            //throw new NotImplementedException();
            AppendSlash(sender, e);
        }

        public void AppendSlash(object sender, EventArgs e)
        {
            // add a slash to the end of the txtPath string on ANY change except a restore
            this.txtPath.Text += @"\";
        }

    Thanks in advance,

    Read the article

  • getting rid of filesort on WordPress MySQL query

    - by Hans
    An instance of WordPress that I manage goes down about once a day because this monster MySQL query takes far too long:

        SELECT SQL_CALC_FOUND_ROWS DISTINCT wp_posts.*
        FROM wp_posts
        LEFT JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id)
        LEFT JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id
        LEFT JOIN wp_ec3_schedule ec3_sch ON ec3_sch.post_id = id
        WHERE 1=1
          AND wp_posts.ID NOT IN (
                SELECT tr.object_id
                FROM wp_term_relationships AS tr
                INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
                WHERE tt.taxonomy = 'category' AND tt.term_id IN ('1050'))
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish')
          AND NOT EXISTS (
                SELECT *
                FROM wp_term_relationships
                JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id
                WHERE wp_term_relationships.object_id = wp_posts.ID
                  AND wp_term_taxonomy.taxonomy = 'category'
                  AND wp_term_taxonomy.term_id IN (533,3567))
          AND ec3_sch.post_id IS NULL
        GROUP BY wp_posts.ID
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 10;

    What do I have to do to get rid of the very slow filesort? I would think that the multicolumn type_status_date index would be fast enough. The EXPLAIN EXTENDED output is below.

        id | select_type        | table                 | type   | possible_keys                     | key              | key_len | ref                                                                             | rows | Extra
        1  | PRIMARY            | wp_posts              | ref    | type_status_date                  | type_status_date | 124     | const,const                                                                     | 7034 | Using where; Using temporary; Using filesort
        1  | PRIMARY            | wp_term_relationships | ref    | PRIMARY                           | PRIMARY          | 8       | bwog_wordpress_w.wp_posts.ID                                                    | 373  | Using index
        1  | PRIMARY            | wp_term_taxonomy      | eq_ref | PRIMARY                           | PRIMARY          | 8       | bwog_wordpress_w.wp_term_relationships.term_taxonomy_id                         | 1    | Using index
        1  | PRIMARY            | ec3_sch               | ref    | post_id_index                     | post_id_index    | 9       | bwog_wordpress_w.wp_posts.ID                                                    | 1    | Using where; Using index
        3  | DEPENDENT SUBQUERY | wp_term_taxonomy      | range  | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106     | NULL                                                                            | 2    | Using where
        3  | DEPENDENT SUBQUERY | wp_term_relationships | eq_ref | PRIMARY,term_taxonomy_id          | PRIMARY          | 16      | bwog_wordpress_w.wp_posts.ID,bwog_wordpress_w.wp_term_taxonomy.term_taxonomy_id | 1    | Using index
        2  | DEPENDENT SUBQUERY | tt                    | const  | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106     | const,const                                                                     | 1    |
        2  | DEPENDENT SUBQUERY | tr                    | eq_ref | PRIMARY,term_taxonomy_id          | PRIMARY          | 16      | func,const                                                                      | 1    | Using index

        8 rows in set, 2 warnings (0.05 sec)

    And the CREATE TABLE:

        CREATE TABLE `wp_posts` (
          `ID` bigint(20) unsigned NOT NULL auto_increment,
          `post_author` bigint(20) unsigned NOT NULL default '0',
          `post_date` datetime NOT NULL default '0000-00-00 00:00:00',
          `post_date_gmt` datetime NOT NULL default '0000-00-00 00:00:00',
          `post_content` longtext NOT NULL,
          `post_title` text NOT NULL,
          `post_excerpt` text NOT NULL,
          `post_status` varchar(20) NOT NULL default 'publish',
          `comment_status` varchar(20) NOT NULL default 'open',
          `ping_status` varchar(20) NOT NULL default 'open',
          `post_password` varchar(20) NOT NULL default '',
          `post_name` varchar(200) NOT NULL default '',
          `to_ping` text NOT NULL,
          `pinged` text NOT NULL,
          `post_modified` datetime NOT NULL default '0000-00-00 00:00:00',
          `post_modified_gmt` datetime NOT NULL default '0000-00-00 00:00:00',
          `post_content_filtered` text NOT NULL,
          `post_parent` bigint(20) unsigned NOT NULL default '0',
          `guid` varchar(255) NOT NULL default '',
          `menu_order` int(11) NOT NULL default '0',
          `post_type` varchar(20) NOT NULL default 'post',
          `post_mime_type` varchar(100) NOT NULL default '',
          `comment_count` bigint(20) NOT NULL default '0',
          `robotsmeta` varchar(64) default NULL,
          PRIMARY KEY (`ID`),
          KEY `post_name` (`post_name`),
          KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
          KEY `post_parent` (`post_parent`),
          KEY `post_date` (`post_date`),
          FULLTEXT KEY `post_related` (`post_title`,`post_content`)
        )
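
    A commonly suggested rewrite for queries of this shape (a sketch, untested against this schema) is to drop SQL_CALC_FOUND_ROWS and fold both category exclusions into one NOT EXISTS, so that no join multiplies the rows and the DISTINCT/GROUP BY pair, which is what forces the temporary table and filesort, can go:

        SELECT wp_posts.*
        FROM wp_posts
        LEFT JOIN wp_ec3_schedule ec3_sch ON ec3_sch.post_id = wp_posts.ID
        WHERE wp_posts.post_type = 'post'
          AND wp_posts.post_status = 'publish'
          AND ec3_sch.post_id IS NULL
          -- exclude posts in any unwanted category (1050, 533, 3567):
          AND NOT EXISTS (
                SELECT 1
                FROM wp_term_relationships tr
                JOIN wp_term_taxonomy tt ON tt.term_taxonomy_id = tr.term_taxonomy_id
                WHERE tr.object_id = wp_posts.ID
                  AND tt.taxonomy = 'category'
                  AND tt.term_id IN (1050, 533, 3567))
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 10;

    With the grouping gone, the (post_type, post_status, post_date) prefix of type_status_date can serve both the WHERE clause and the ORDER BY, and a separate SELECT COUNT(*) replaces FOUND_ROWS() if the total is still needed.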

    Read the article

  • Mysql: how to extract multiple text files from my mysql table

    - by Patrick
    Hi, I need to extract data from my MySQL database into multiple text files. I have a table with 4 columns: UserID, UserName, Tag, Score. I need to create a text file for each Tag, containing the UserID, the UserName and the Score (ordered by Score), i.e.

        Tag1.txt
        234922 John 35
        234294 David 205
        392423 Patrick 21

        Tag2.txt
        234922 John 35
        234294 David 205
        392423 Patrick 21

    and so on... Edited: Sample: http://dl.dropbox.com/u/72686/expertsTable.png Thanks
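
    INTO OUTFILE requires a literal file name, so one file per tag usually means building each statement dynamically with PREPARE. A sketch for a single tag (assuming the table is called expertsTable, as in the linked screenshot; the output path lands on the database server, not the client):

        SET @tag = 'Tag1';
        SET @sql = CONCAT("SELECT UserID, UserName, Score FROM expertsTable ",
                          "WHERE Tag = '", @tag, "' ORDER BY Score DESC ",
                          "INTO OUTFILE '/tmp/", @tag, ".txt' ",
                          "FIELDS TERMINATED BY ' ' LINES TERMINATED BY '\\n'");
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

    Wrapping this in a stored-procedure loop over SELECT DISTINCT Tag (or driving it from a script) produces the full set of files.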

    Read the article

  • String replacement problem.

    - by fastcodejava
    I want to provide some templates for a code generator I am developing. A typical pattern for a class is:

        public ${class_type} ${class_name} extends ${super_class} implements ${interfaces} {
            ${class_body}
        }

    The problem is when super_class or interfaces is blank: I replace "extends ${super_class}" with an empty string, but I get extra spaces. So a class with no super_class and no interfaces ends up like:

        public class Foo   {    // see the extra spaces before the {?
            ${class_body}
        }

    I know I can replace multiple spaces with a single one, but is there a better approach?

    Read the article

  • combining two select statements to return one result

    - by DalivDali
    I need to combine the results of two select queries on two view tables, from which I am performing calculations. Perhaps there is an easier way to do this with an if...else in the query - any pointers? Essentially I need to divide everything by ar.time_ratio under the condition in query 1, and ignore it in query 2.

        -- query 1
        SELECT gs.traffic_date, gs.domain_group,
               gs.clicks/ar.time_ratio as 'Scaled_clicks',
               gs.visitors/ar.time_ratio as 'scaled_visitors',
               gs.revenue/ar.time_ratio as 'scaled_revenue',
               (gs.revenue/gs.clicks)/ar.time_ratio as 'scaled_average_cpc',
               (gs.clicks)/(gs.visitors)/ar.time_ratio as 'scaled_ctr',
               gs.average_rpm/ar.time_ratio as 'scaled_rpm',
               (((gs.revenue)/(gs.visitors))/ar.time_ratio)*1000 as "Ecpm"
        FROM group_stats gs, v_active_ratio ar
        WHERE ar.group_id = gs.domain_group

        -- query 2
        SELECT gs.traffic_date, gs.domain_group, gs.clicks, gs.visitors, gs.revenue,
               (gs.revenue/gs.clicks) as 'average_cpc',
               (gs.clicks)/(gs.visitors) as 'average_ctr',
               gs.average_rpm,
               ((gs.revenue)/(gs.visitors))*1000 as "Ecpm"
        FROM group_stats gs, v_active_ratio ar
        WHERE not ar.group_id = gs.domain_group
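
    A LEFT JOIN plus COALESCE can express the if...else in a single statement: divide by time_ratio when a matching ratio row exists, otherwise divide by 1. A sketch with a subset of the columns:

        SELECT gs.traffic_date,
               gs.domain_group,
               gs.clicks   / COALESCE(ar.time_ratio, 1) AS scaled_clicks,
               gs.visitors / COALESCE(ar.time_ratio, 1) AS scaled_visitors,
               gs.revenue  / COALESCE(ar.time_ratio, 1) AS scaled_revenue,
               ((gs.revenue / gs.visitors) / COALESCE(ar.time_ratio, 1)) * 1000 AS ecpm
        FROM group_stats gs
        LEFT JOIN v_active_ratio ar ON ar.group_id = gs.domain_group;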

    Read the article

  • string categorization strategies

    - by Andrew Heath
    I'm the one-man dev team on a fledgling military history website. One aspect of the site is a catalog of ~1,200 individual battles, including the nations & formations (regiments, divisions, etc.) which took part. The formation information (as well as the other battle info) was manually imported from a series of books by a 10-man volunteer team. The formations were listed in groups with varying formatting and abbreviation patterns. At the time I set up the data collection forms I couldn't think of a good way to process that data... and elected to store it all as strings in the MySQL database and sort it out later. Well, "later" - as it tends to happen - has arrived. :-) Each battle has 2+ records in the database - one for each nation that participated. Each record has a formations text string listing the formations present as the volunteer chose to add them. Some real examples:

        39th Grenadier Rgmt, 26th Volksgrenadier Division
        2nd Luftwaffe Field Division, 246th Infantry Division
        247th Rifle Division, 255th Tank Brigade
        2nd Luftwaffe Field Division, SS Cavalry Division
        28th Tank Brigade, 158th Rifle Division, 135th Rifle Division, 81st Tank Brigade, 242nd Tank Brigade
        78th Infantry Division
        3rd Kure Special Naval Landing Force, Tulagi Seaplane Base personnel
        1st Battalion 505th Infantry Regiment

    The ultimate goal is for each individual force to have an ID, so that its participation can be traced throughout the battle database. Formation hierarchy, such as the final item above (1st Battalion (of the) 505th Infantry Regiment), also needs to be preserved. In that case, 1st Battalion and 505th Infantry Regiment would be split, but 1st Battalion would be flagged as belonging to the 505th. In database terms, I think I want to pull the formation field out of the current battle info table and create three new tables:

        FORMATION [id] [name]
        FORMATION_HIERARCHY [id] [parent] [child]
        FORMATION_BATTLE [f_id] [battle_id]

    It's simple to explain, but complicated to enact. What I'm looking for from the SO community is just some tips on how best to tackle this problem. Ideally there's some sort of method to solving this that I'm not aware of. However, as a last resort, I could always code a classification framework and call my volunteers back to sort through 2,500+ records...
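
    The three proposed tables are straightforward; the real work is clustering the inconsistent strings. A sketch of the DDL, plus a query that groups phonetically similar names for the volunteers to review (formation_staging is a hypothetical table holding the comma-split raw strings):

        CREATE TABLE formation (
            id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255) NOT NULL,
            UNIQUE KEY uq_name (name)
        );

        CREATE TABLE formation_hierarchy (
            parent_id INT UNSIGNED NOT NULL,  -- e.g. 505th Infantry Regiment
            child_id  INT UNSIGNED NOT NULL,  -- e.g. its 1st Battalion
            PRIMARY KEY (parent_id, child_id)
        );

        CREATE TABLE formation_battle (
            formation_id INT UNSIGNED NOT NULL,
            battle_id    INT UNSIGNED NOT NULL,
            PRIMARY KEY (formation_id, battle_id)
        );

        -- cluster candidate duplicates by phonetic signature for manual review:
        SELECT SOUNDEX(raw_name) AS sig,
               GROUP_CONCAT(DISTINCT raw_name SEPARATOR ' | ') AS variants
        FROM formation_staging
        GROUP BY sig
        HAVING COUNT(DISTINCT raw_name) > 1;

    SOUNDEX will not catch abbreviations like "Rgmt" vs "Regiment", so a small substitution table of known abbreviations applied before grouping would cut the manual pass considerably.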

    Read the article

  • PHP String tokenizer not working correctly

    - by asdadas
    I have no clue why strtok decided to break on me. Here is my code; I am tokenizing a string by the dollar symbol $:

        echo 'Tokenizing this by $: ', $aliases, PHP_EOL;
        if (strlen($aliases) > 0) { // aliases check
            $token = strtok($aliases, '$');
            while ($token != NULL) {
                echo 'Found a token: ', $token, PHP_EOL;
                if (!isGoodLookup($token)) {
                    echo 'ERROR: Invalid alias found.', PHP_EOL;
                    stop($db);
                }
                $goodAliasesList[] = $token;
                $token = strtok('$');
            }
            if ($token == NULL)
                echo 'Found null token, moving on', PHP_EOL;
        }

    And this is my output:

        Tokenizing this by $: getaways$aaa
        Found a token: getaways
        Found null token, moving on

    strtok is not supposed to do this!! Where is my aaa token!!

    Read the article

  • Almost Realtime Data and Web application

    - by Chris G.
    I have a computer that is recording 100 different data points into an OPC server. I've written a simple OPC client that can read all of this data. I have a front-end website on a different network that I would like to consume this data. I could easily set the OPC client to send the data to a SQL server and the website could read from it, but that would be a lot of writes. If I wanted the data to be updated every 10 seconds I'd be writing to the database every 10 seconds. (I could probably just serialize the 100 points to get 1 write / 10 seconds but that would also limit my ability to search the data later). This solution wouldn't scale very well. If I had 100 of these computers the situation would quickly grow out of hand. Obviously I am well out of my league here and I have no experience with working with a large amount of data like this. What are my options and what should I research?
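
    If the data does end up in SQL, a narrow time-series table written with one multi-row INSERT per polling cycle keeps every point individually searchable while still costing a single write per 10 seconds per machine. A sketch (table and column names are assumptions, not from the question):

        CREATE TABLE samples (
            machine_id SMALLINT UNSIGNED NOT NULL,
            point_id   SMALLINT UNSIGNED NOT NULL,
            sampled_at DATETIME NOT NULL,
            value      DOUBLE NOT NULL,
            PRIMARY KEY (machine_id, point_id, sampled_at)
        );

        -- one batched statement per 10-second cycle instead of 100 separate writes:
        INSERT INTO samples VALUES
            (1, 1, '2010-06-01 12:00:00', 42.5),
            (1, 2, '2010-06-01 12:00:00', 17.0),
            (1, 3, '2010-06-01 12:00:00', 99.9);

    At 100 machines this is still only 10 batched inserts per second, which a single MySQL instance handles comfortably; beyond that, a message queue in front of the database is the usual next step.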

    Read the article

  • SELECT INTO or Stored Procedure?

    - by Kerry
    Would this be better as a stored procedure, or should I leave it as is?

        INSERT INTO `user_permissions` (
            `user_id`, `object_id`, `type`, `view`, `add`,
            `edit`, `delete`, `admin`, `updated_by_user_id`
        )
        SELECT `user_id`, $object_id, '$type', 1, 1, 1, 1, 1, $user_id
        FROM `user_permissions`
        WHERE `object_id` = $object_id_2
          AND `type` = '$type_2'
          AND `admin` = 1

    You can think of this with different objects; let's say you have groups and subgroups. If someone creates a subgroup, this makes everyone who had access to the parent group also have access to the subgroup. I've never made a stored procedure before, but this looks like it might be time. This will probably be called very often. Should I create a procedure, or will the performance difference be insignificant?
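
    For reference, a sketch of what the procedure version might look like (procedure and parameter names invented here). The single INSERT ... SELECT runs the same plan either way, so the main win is passing the PHP values as parameters instead of interpolating them into the SQL string:

        DELIMITER //
        CREATE PROCEDURE copy_admin_permissions(
            IN p_object_id INT, IN p_type VARCHAR(32),
            IN p_src_object_id INT, IN p_src_type VARCHAR(32),
            IN p_user_id INT)
        BEGIN
            INSERT INTO `user_permissions`
                (`user_id`, `object_id`, `type`, `view`, `add`,
                 `edit`, `delete`, `admin`, `updated_by_user_id`)
            SELECT `user_id`, p_object_id, p_type, 1, 1, 1, 1, 1, p_user_id
            FROM `user_permissions`
            WHERE `object_id` = p_src_object_id
              AND `type` = p_src_type
              AND `admin` = 1;
        END //
        DELIMITER ;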

    Read the article

  • Date range/query problem..

    - by Simon
    I'm hoping someone can help me out a bit with date ranges... I have a table with 3 fields: id, datestart, dateend. I need to query this to find out if a pair of dates from a form conflicts with an existing range, e.g.

        table entry 1: 2010-12-01, 2010-12-09
        from the form: 2010-12-08, 2010-12-15

        select id from date_table where '2010-12-02' between datestart and dateend;

    That returns the id that I want, but what I would like to do is take the date range from the form and run a query, similar to the one I have, that takes both form dates (2010-12-08, 2010-12-15) and checks the db to ensure there are no conflicting date ranges in the table. I'm sitting here scratching my head over the problem... TIA
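
    Two date ranges overlap exactly when each one starts before the other ends, so both form dates can be checked in a single query against the stored ranges. A sketch using the dates from the question:

        -- any stored range that collides with the form range 2010-12-08 .. 2010-12-15:
        SELECT id
        FROM date_table
        WHERE datestart <= '2010-12-15'  -- form end date
          AND dateend   >= '2010-12-08'; -- form start date

    An empty result set means the form's range is free of conflicts.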

    Read the article

  • mysql_connect VS mysql_pconnect

    - by rogeriopvl
    I have a question here; I've searched the web and the answers seem to be divided. Is it better to use mysql_pconnect over mysql_connect when connecting to a database via PHP? I read that pconnect scales much better, but on the other hand, being a persistent connection... having 10,000 connections at the same time, all persistent, doesn't seem scalable to me. Thanks in advance.

    Read the article

  • mySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe MySQL needs further tuning is that New Relic has shown our SELECT queries taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted MySQL earlier after tweaking some settings, so the results here may not be 100% accurate):

        >> MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492
        -------- Security Recommendations -------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows...

        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        # for some nightly processes client sessions set the join buffer to 8 GB
        join_buffer_size = 128M
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_file_per_table
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
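
    The headline number to fix first is the 220.0G maximum: it is roughly the 35.5G of global buffers plus 1024 max_connections x 184.5M of per-thread buffers (join, sort, read), so lowering join_buffer_size and/or max_connections does far more for stability than any cache increase. Individual settings can also be trialled at runtime before being persisted to my.cnf; a sketch:

        -- if Opened_tables keeps climbing, the table cache is still churning:
        SHOW GLOBAL STATUS LIKE 'Opened_tables';
        SET GLOBAL table_open_cache = 16384;

        -- a smaller query cache usually prunes (and stalls) less than 512M:
        SET GLOBAL query_cache_size = 128 * 1024 * 1024;

        -- turn the slow log on to find the 20,000+ ms SELECTs New Relic sees:
        SET GLOBAL slow_query_log = 'ON';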

    Read the article

  • Practical mysql schema advice for eCommerce store - Products & Attributes

    - by Gravy
    I am currently planning my first eCommerce application (MySQL & Laravel Framework). I have various products, which all have different attributes. Describing products very simply: some will have a manufacturer, some will not; some will have a diameter, others a width, height and depth, and others a volume.

        Option 1: Create a master products table, and separate tables for specific product types (polymorphic relations). That way, I will not have any unnecessary null fields in the products table.
        Option 2: Create a products table with all possible fields, despite the fact that there will be a lot of null values.
        Option 3: Normalise so that each attribute type has its own table.
        Option 4: Create an attributes table, as well as an attribute_values table with the value stored as varchar regardless of the actual data type. The products table would have a many:many relationship with the attributes table.
        Option 5: Put attributes common to all or most products in the products table, and attach attributes specific to a particular category of product to the categories table.

    My thoughts are that I would like to allow easy product filtering and sorting by these attributes. I also want the frontend to be fast, and I am less concerned about the performance of inserting and updating product records. I'm a bit overwhelmed by the vast implementation options and cannot find a suitable answer on the best method of approach. Could somebody point me in the right direction? In an ideal world, I would like to offer the kind of functionality seen at http://www.glassesdirect.co.uk/products/ in my eCommerce store. As can be seen in the sidebar, you can select an attribute of the glasses to filter them, e.g. male / female or plastic / metal / titanium, etc. Alternatively, should I just dump the MySQL relational database idea and learn MongoDB?
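
    Option 4 is essentially the EAV pattern Magento itself uses, and it maps directly onto glassesdirect-style sidebar filtering. A minimal sketch of the two tables and the filter query they support (attribute ids 1 and 2 are hypothetical):

        CREATE TABLE attributes (
            id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(64) NOT NULL            -- 'frame material', 'gender', ...
        );

        CREATE TABLE attribute_values (
            product_id   INT UNSIGNED NOT NULL,
            attribute_id INT UNSIGNED NOT NULL,
            value        VARCHAR(255) NOT NULL,
            PRIMARY KEY (product_id, attribute_id),  -- assumes one value per attribute
            KEY idx_filter (attribute_id, value)     -- supports the sidebar lookups
        );

        -- products that are both metal (attribute 1) AND male (attribute 2):
        SELECT product_id
        FROM attribute_values
        WHERE (attribute_id = 1 AND value = 'metal')
           OR (attribute_id = 2 AND value = 'male')
        GROUP BY product_id
        HAVING COUNT(*) = 2;

    The trade-off is that everything is varchar, so range filters and numeric sorts need a CAST, which is the usual argument for Option 5's hybrid approach.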

    Read the article

  • MySQL – Export the Resultset to CSV file

    - by Pinal Dave
    In SQL Server, you can use the BCP command to export a result set to a CSV file. In MySQL too, you can export data from a table or result set as a CSV file in many ways. Here are two methods.

    Method 1: Use MySQL Workbench. If you are using Workbench as a querying tool, you can make use of its Export option in the result window. Run the following code in Workbench:

        SELECT db_names FROM mysql_testing;

    The result will be shown in the result window. There is an option called "File". Click on it and it will prompt you with a window to save the result set (a screenshot in the original post shows how the File option can be used). Choose the directory and type out the name of the file.

    Method 2: Use the INTO OUTFILE clause. You can do the export using a query with INTO OUTFILE as shown below:

        SELECT db_names FROM mysql_testing
        INTO OUTFILE 'C:/testing.csv'
        FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"'
        LINES TERMINATED BY '\r\n';

    After executing the above code, you will find a file named testing.csv in the C drive of the server. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • MySQL – Introduction to CONCAT and CONCAT_WS functions

    - by Pinal Dave
    MySQL supports two concatenation functions: CONCAT and CONCAT_WS. The CONCAT function simply concatenates all the argument values as they are:

        SELECT CONCAT('Television','Mobile','Furniture');

    The above code returns the following:

        TelevisionMobileFurniture

    If you want to concatenate them with a comma, you either need to include the comma at the end of each value or pass the comma as an argument along with the values:

        SELECT CONCAT('Television,','Mobile,','Furniture');
        SELECT CONCAT('Television',',','Mobile',',','Furniture');

    Both of the above return the following:

        Television,Mobile,Furniture

    However, you can avoid the extra work by using the CONCAT_WS function. It stands for "Concatenate With Separator". It is very similar to the CONCAT function, but accepts the separator as the first argument:

        SELECT CONCAT_WS(',','Television','Mobile','Furniture');

    The result is:

        Television,Mobile,Furniture

    If you want a pipe as the separator, you can use:

        SELECT CONCAT_WS('|','Television','Mobile','Furniture');

    The result is:

        Television|Mobile|Furniture

    So CONCAT_WS is very flexible for concatenating values along with a separator. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Finding multiple values in a string Jquery / Javascript

    - by user257503
    I have three strings of categories: "SharePoint,Azure,IT"; "BizTalk,Finance"; "SharePoint,Finance". I need a way to check whether a string contains, for example, both "SharePoint" and "IT", or both "BizTalk" and "Finance". The tests are themselves individual strings. How would I loop through all the category strings (1-3) and only return the ones which contain ALL terms of the source? I have tried the following:

        function doesExist(source, filterArray) {
            var substr = filterArray.split(" ");
            jQuery.each(substr, function() {
                var filterTest = this;
                if (source.indexOf(filterTest) != -1) {
                    alert("true");
                    return true;
                } else {
                    alert("false");
                    return false;
                }
            });
        }

    with little success... the code above checks one term at a time rather than both, so the results returned are incorrect. Any help would be great. Thanks, Chris. UPDATE: here is a link to a work-in-progress version: http://www.invisiblewebdesign.co.uk/temp/filter/#

    Read the article

  • PHP Can't connect to MySQL on the production server

    - by Jairo Santos
    I'm having problems with MySQL connections made from a PHP script. The MySQL user is root, and I added GRANTs to root@'%' so I can connect from anywhere. Let's assume my MySQL host is "bigboy.com.br". The funny part is: from my local machine, on my test server, the script connects to the MySQL server normally. But on the dedicated server where MySQL is running, the same PHP script gives me an "Access denied for 'root'@'bigboy.com.br'" error.
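
    MySQL matches accounts most-specific-host-first, so an anonymous ''@'bigboy.com.br' (or ''@'localhost') row in mysql.user can shadow 'root'@'%' for connections made on the server itself, even while remote connections work. It is worth inspecting the user table and either dropping the anonymous rows or creating the exact account being denied; a sketch ('secret' is a placeholder password):

        SELECT user, host FROM mysql.user ORDER BY host;

        -- either remove the shadowing anonymous account...
        DROP USER ''@'bigboy.com.br';

        -- ...or grant the specific user@host combination from the error message:
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'bigboy.com.br' IDENTIFIED BY 'secret';
        FLUSH PRIVILEGES;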

    Read the article

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch, following this guide; details as below:

        mkdir /usr/local/mysqlc732/
        cd /usr/local/src/mysql-cluster-gpl-7.3.2
        cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON
        make
        make install

    Everything works fine until this step:

        # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n
        FLEXASYNCH - Starting normal mode
        Perform benchmark of insert, update and delete transactions
        1 number of concurrent threads
        80 number of parallel operation per thread
        100 transaction(s) per round
        2 iterations
        Load Factor is 80%
        25 attributes per table
        1 is the number of 32 bit words per attribute
        Tables are with logging
        Transactions are executed with hint provided
        No force send is used, adaptive algorithm used
        Key Errors are disallowed
        Temporary Resource Errors are allowed
        Insufficient Space Errors are disallowed
        Node Recovery Errors are allowed
        Overload Errors are allowed
        Timeout Errors are allowed
        Internal NDB Errors are allowed
        User logic reported Errors are allowed
        Application Errors are disallowed
        Using table name TAB0
        NDBT_ProgramExit: 1 - Failed

    ndb_cluster.log:

        WARNING -- Failed to allocate nodeid for API at 127.0.0.1. Returned eror: 'No free node id found for mysqld(API).'

    I have also recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in debug mode?

        # /usr/local/mysqlc732/bin/flexAsynch -h
        FLEXASYNCH
        Perform benchmark of insert, update and delete transactions
        Arguments:
            -t Number of threads to start, default 1
            -p Number of parallel transactions per thread, default 32
            -o Number of transactions per loop, default 500
            -l Number of loops to run, default 1, 0=infinite
            -load_factor Number Load factor in index in percent (40 -> 99)
            -a Number of attributes, default 25
            -c Number of operations per transaction
            -s Size of each attribute, default 1 (PK is always of size 1, independent of this value)
            -simple Use simple read to read from database
            -dirty Use dirty read to read from database
            -write Use writeTuple in insert and update
            -n Use standard table names
            -no_table_create Don't create tables in db
            -temp Create table(s) without logging
            -no_hint Don't give hint on where to execute transaction coordinator
            -adaptive Use adaptive send algorithm (default)
            -force Force send when communicating
            -non_adaptive Send at a 10 millisecond interval
            -local 1 = each thread its own node, 2 = round robin on node per parallel trans, 3 = random node per parallel trans
            -ndbrecord Use NDB Record
            -r Number of extra loops
            -insert Only run inserts on standard table
            -read Only run reads on standard table
            -update Only run updates on standard table
            -delete Only run deletes on standard table
            -create_table Only run Create Table of standard table
            -drop_table Only run Drop Table on standard table
            -warmup_time Warmup Time before measurement starts
            -execution_time Execution Time where measurement is done
            -cooldown_time Cooldown time after measurement completed
            -table Number of standard table, default 0

    Read the article

  • How can I centralise MySQL data between 3 or more geographically separate servers?

    - by Andy Castles
    To explain the background to the question: we have a home-grown PHP application (for running online language-learning courses) running on a Linux server and using MySQL on localhost for saving user data (e.g. results of tests taken, marks of submitted work, time spent on different pages in the courses, etc). As we have students in different geographic locations, we currently have 3 virtual servers hosted close to those locations (Spain, UK and Hong Kong), and users are added to the server closest to them (they access via different URLs, e.g. europe.domain.com, uk.domain.com and asia.domain.com). This works, but it is an administrative nightmare, as we have to remember which server a particular user is on, and users can only connect to one server. We would like to somehow centralise the information so that all users are visible on any of the servers and users could connect to any of the 3 servers. The question is, what method should we use to implement this? It must be an issue that lots of people have encountered, but I haven't found anything conclusive after a fair bit of Googling around. The closest I have seen to solutions are: something like master-master replication, but I have read so many posts suggesting that this is not a good idea, as things like auto_increment fields can break; and circular replication, which sounded perfect, but to quote from O'Reilly's High Performance MySQL, "In general, rings are brittle and best avoided". We're not against rewriting code in the application to make it work with whatever solution is required, but I am not sure if replication is the correct thing to use. Thanks, Andy

    P.S. I should add that we experimented with writes to a central database and then using reads from a local database, but the response time between the different servers for writing was pretty bad, and it's also important that written data is available immediately for reading, so if replication is too slow this could cause out-of-date data to be returned.

    Edit: I have been thinking about writing my own rudimentary replication script. It would involve giving each user a server ID to say which is his "home server"; e.g. users in Asia would be marked as having the Hong Kong server as their own. The replication scripts (PHP scripts set to run as cron jobs reasonably frequently, e.g. every 15 minutes or so) would run independently on each of the servers in the system. They would go through the database and distribute any information about users whose "home server" is the server the script is running on to all of the other databases in the system. They would also need to pull in new information which has been added to any of the other databases in the system where the "home server" flag is the server where the script is running. I would need to work out the details and build in the logic to deal with conflicts, but I think it would be possible. However, I wanted to make sure that there is not a correct solution for this already out there, as it seems like it must be a problem that many people have already come across.
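
    If multi-master (or circular) replication is attempted anyway, the classic auto_increment collision is at least easy to defuse by interleaving the key space across the three servers. A sketch for server 1 of 3 (the other two get offsets 2 and 3; settable in my.cnf or at runtime):

        SET GLOBAL auto_increment_increment = 3;  -- total number of masters
        SET GLOBAL auto_increment_offset    = 1;  -- unique per server: 1, 2 or 3

    This removes duplicate-key breakage but not logical conflicts (two servers updating the same row), which the "home server" rule in the edit would still have to arbitrate.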

    Read the article

  • Symlink error when installing MySQL via Homebrew

    - by Asad Syed
    Trying to install MySQL via Homebrew. The install seems to work fine, but I get an error:

        Error: The linking step did not complete successfully
        The formula built, but is not symlinked into /usr/local
        You can try again using `brew link mysql'

    Naturally, after this I ran:

        brew link mysql

    Which spat out:

        Error: Could not symlink file: /usr/local/Cellar/mysql/5.5.20/include/typelib.h
        /usr/local/include is not writable. You should change its permissions.

    So I ran it with sudo and got a "cowardly refusing to brew link mysql".

    Read the article

  • Change the mysql.sock that the system PHP uses

    - by Mark P.
    In the past I had installed MySQL as part of XAMPP in /Applications/XAMPP/ and the PHP that the system used (e.g. from command line) used /Applications/XAMPP/xamppfiles/var/mysql/mysql.sock to connect with MySQL. Now I have installed MySQL 5 with macports and I want my system PHP to be able to use that. I think I must set PHP to use /opt/local/var/run/mysql5/mysqld.sock but where do I set this? Is there a php.ini for CLI PHP?

    Read the article

  • nginx can't see MySQL

    - by user135235
    I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web-serving experience than Apache.

        PHP 5.4.6 (PHP54w)
        CentOS 6.2
        Joomla 2.5.6
        PHP54w-fpm.i386 (FastCGI process manager)
        php -m shows: mysql & mysqli modules loaded

    nginx seems to have installed fine via yum, and it can process a PHP info file via FastCGI perfectly OK (http://37.128.190.241/php.php), but when I stop Apache, start nginx instead and visit my site I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available." I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli, but I get the same basic error, only this time "Database connection error (1): The MySQL adapter 'mysql' is not available" of course! Can anyone think what the problem might be, please? I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try and force the issue (despite php -m showing they were both successfully loaded anyway) - no difference. I have a pretty standard nginx default.conf:

        server {
            listen 80;
            server_name www.MYDOMAIN.com;
            server_name_in_redirect off;
            access_log /var/log/nginx/localhost.access_log main;
            error_log /var/log/nginx/localhost.error_log info;
            root /var/www/html/MYROOT_DIR;
            index index.php index.html index.htm default.html default.htm;

            # Support Clean (aka Search Engine Friendly) URLs
            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            # deny running scripts inside writable directories
            location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
                return 403;
                error_page 403 /403_error.html;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi.conf;
            }

            # caching of files
            location ~* \.(ico|pdf|flv)$ {
                expires 1y;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
                expires 14d;
            }
        }

    Snip of output from phpinfo under nginx:

        Server API: FPM/FastCGI
        Virtual Directory Support: disabled
        Configuration File (php.ini) Path: /etc
        Loaded Configuration File: /etc/php.ini
        Scan this dir for additional .ini files: /etc/php.d
        Additional .ini files parsed: /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/phar.ini, /etc/php.d/zip.ini

    Snip of output from phpinfo under Apache:

        Server API: Apache 2.0 Handler
        Virtual Directory Support: disabled
        Configuration File (php.ini) Path: /etc
        Loaded Configuration File: /etc/php.ini
        Scan this dir for additional .ini files: /etc/php.d
        Additional .ini files parsed: /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini, /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini

    It seems that with Apache, PHP loads substantially more additional .ini files, including the ones relating to MySQL (mysql.ini, mysqli.ini, pdo_mysql.ini), than with nginx. Any ideas how I get nginx to also pick up these additional .ini files? Thanks in advance, Steve

    Read the article

  • Not able to install mysql-server on Ubuntu

    - by makdotgnu
    I'm not able to install mysql-server on Ubuntu 10.04. I had MySQL installed, but it was giving this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    So I removed mysql-server completely via Synaptic, but after this I'm not able to reinstall it. When I try to reinstall it, Synaptic freezes. How do I remove every MySQL file and then install it again?

    Read the article
