Search Results

Search found 48190 results on 1928 pages for 'mysql slow query log'.


  • mod_rewrite to redirect URL with query string

    - by meeble
    I've searched all over Stack Overflow, but none of the answers seem to work for this situation. I already have a lot of working mod_rewrite rules in my httpd.conf file. I recently found that Google had indexed one of my non-rewritten URLs with a query string in it:

        http://domain.com/?state=arizona

    I would like to use mod_rewrite to do a 301 redirect to this URL:

        http://domain.com/arizona

    The issue is that later on in my rewrite rules, that second URL is rewritten to pass query variables on to WordPress. It ends up being rewritten to:

        http://domain.com/index.php?state=arizona

    which is the proper functionality. Everything I have tried so far has either not worked at all or put me in an endless rewrite loop. This is what I have right now, which gets stuck in a loop:

        RewriteCond %{QUERY_STRING} state=arizona [NC]
        RewriteRule .* http://domain.com/arizona [R=301,L]

        # older rewrite rule that passes the query string based on the URL:
        RewriteRule ^([A-Za-z-]+)$ index.php?state=$1 [L]

    This gives me an endless rewrite loop and takes me to this URL:

        http://domain.com/arizona?state=arizona

    I then tried this:

        RewriteRule .* http://domain.com/arizona? [R=301,L]

    which got rid of the query string in the URL, but still creates a loop.
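
    A minimal sketch of one way out of the loop (not from the original question; it reuses the domain.com example): key the redirect off %{THE_REQUEST} instead of %{QUERY_STRING}. THE_REQUEST holds the browser's original request line and is never modified by internal rewrites, so the internally generated index.php?state=... request cannot re-trigger the rule; the trailing "?" in the substitution discards the old query string:

        # Redirect only when the client itself asked for /?state=...
        RewriteCond %{THE_REQUEST} \?state=([A-Za-z-]+) [NC]
        RewriteRule ^/?$ http://domain.com/%1? [R=301,L]

        # The existing rule keeps mapping the clean URL onto WordPress.
        RewriteRule ^([A-Za-z-]+)$ index.php?state=$1 [L]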

    Read the article

  • Unable to log into samba server

    - by Paddington
    I am unable to log into a Samba server (running on Fedora Core 6): it prompts for a username and password when I try to connect to the mapped drives from my Windows 7 machine. I decided to reset the password using the command smbpasswd paddy, and when I list users with pdbedit -L -v I see that the password was updated at the time I made this change. However, I am still unable to log in. The log file in /var/log/samba/log.paddy shows:

        [2012/10/11 09:55:54.605923, 1] smbd/service.c:678(make_connection_snum)
          create_connection_server_info failed: NT_STATUS_ACCESS_DENIED
        [2012/10/11 09:55:54.606635, 1] smbd/service.c:678(make_connection_snum)
          create_connection_server_info failed: NT_STATUS_ACCESS_DENIED

    How can I resolve this so that I can log in?

    Read the article

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving with two queries. I am wondering if there is a way to do it in a single query that makes it more efficient. Consider two tables, Transactions and Transaction_Entries, defined below:

        Transactions
          - id
          - reference_number (varchar)

        Transaction_Entries
          - id
          - account_id
          - transaction_id (references Transactions table)

    Notes: there are multiple transaction entries per transaction, and some transactions are related and will have the same reference_number string. To get all transaction entries for account X, I would do:

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON (E.transaction_id = T.id)
        WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of the account id. First I make a list of all the unique reference numbers I found in the previous result set. Then for each one, I query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

        UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet) // in Java
        foreach R in UniqueReferenceNumbers                                   // in Java
            SELECT * FROM Transaction_Entries
            WHERE transaction_id IN (SELECT id FROM Transactions WHERE reference_number = R)

    Any suggestions how I can put this into a single efficient query?
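
    A single-pass version (an editor's sketch, not from the original post; 42 stands in for account X) joins Transactions to itself on reference_number, so every entry whose transaction shares a reference number with one of account X's transactions comes back in one query:

        SELECT DISTINCT e2.*, t2.reference_number
        FROM Transaction_Entries e1
        JOIN Transactions t1 ON t1.id = e1.transaction_id
        JOIN Transactions t2 ON t2.reference_number = t1.reference_number
        JOIN Transaction_Entries e2 ON e2.transaction_id = t2.id
        WHERE e1.account_id = 42;

    DISTINCT matters here: account X may hold several entries of the same transaction, and without it each one would duplicate the related rows.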

    Read the article

  • How to view multiple log files as one file in unix/linux

    - by user42679
    Hi, I was wondering if there is a convenient way in Linux/Unix to read multiple log files as one. More specifically, I would like to view a sequence of log files (app.log, app.log.1, app.log.2, etc.) as one big file, using normal Unix tools (vi, less, etc.). When EOF is reached, the tool would automatically move to the beginning of the next file. During my work I have to analyze UAT/prod logs to investigate and solve problems, and having to traverse many log files disrupts my work and causes delays. Any ideas?
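
    The simplest stand-in (a sketch; file names taken from the question) is to let cat splice the rotated files together oldest-first and page the combined stream, since rotation gives higher suffixes to older files:

        cat app.log.2 app.log.1 app.log | less

    less also accepts multiple files directly (less app.log.2 app.log.1 app.log) and moves between them with :n and :p, which keeps per-file line numbers intact.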

    Read the article

  • How can I remove old log entries from a log file and archive them somewhere else in Linux?

    - by Mike B
    CentOS 4.x. I apologize in advance if this is not the appropriate place to ask this question; it pertains to a Linux server / IT admin task. I've got a log file on an old CentOS 4.x server and I want to remove log entries older than a certain date and place them in a new file for archive. Here's an example of the log format:

        2012-06-07 22:32:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:03,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:04,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:10,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:12,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:15,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:40,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:58,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:02,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|

    Essentially, I'm looking for a one-liner that will:

        1. Find any events older than a provided YYYY-MM-DD and remove them from the primary log file.
        2. Take the deleted events from step 1 and put them in a new log file.
        3. (Optional) Compress the new archive log file holding the deleted events.

    I'm aware that there are log rotation tools that do this, but this should just be a one-time task, so I'd prefer not to set that up. Additional notes: if the date part is tricky or too resource intensive, an alternative would be to just keep the last X number of lines and move the rest. I was originally thinking of something like tail -n 10000 > newfile.txt, but that would mean moving the "good" logs to a new file and then doing a name swap... and then I'd still need to remove the "good" entries from the archive. This particular log file is pretty large (1 GB), so I'd prefer the task to be as resource and time efficient as possible. The extra pipes in the log concern me, and I'm not sure whether I'd need extra protection in the commands to keep them from causing problems.
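
    A single awk pass can do steps 1-3 (a sketch, not from the original question; the cutoff date is a placeholder): ISO dates compare correctly as plain strings, and only field 1 is ever examined, so the pipes in the message body are harmless:

        awk -v cut='2012-06-07' '$1 < cut { print > "archive.log"; next }
                                          { print > "current.log" }' app.log \
          && mv current.log app.log \
          && gzip archive.log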

    Read the article

  • How can I get unique values from a data table using DQL?

    - by piemesons
    I have a table with a column in which various values are stored, and I want to retrieve the unique values from that table using DQL:

        Doctrine_Query::create()
            ->select('rec.school')
            ->from('Records rec')
            ->where("rec.city = '$city'")
            ->execute();

    Now I want only the unique values. Can anybody tell me how to do that?

    Edit - table structure:

        CREATE TABLE IF NOT EXISTS `records` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `state` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
          `city` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
          `school` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=16334 ;

    This is the query I am using:

        Doctrine_Query::create()
            ->select('DISTINCT rec.city')
            ->from('Records rec')
            ->where("rec.state = '$state'")
            ->execute();

    Generating the SQL for this gives me:

        SELECT DISTINCT r.id AS r__id, r.city AS r__city FROM records r WHERE r.state = 'AR'

    Note the generated SQL: the DISTINCT lands on the id column, whereas I want it on the city column. Does anybody know how to fix this?

    Edit 2: id is unique because it is an auto-increment value, and I do have real duplicates in the city column (Delhi and Delhi), so when I fetch the data I get Delhi twice. I want the equivalent of:

        SELECT DISTINCT rec.city FROM records WHERE state = 'xyz';

    because that gives the proper output.

    Edit 3: can anybody help me figure out this query?
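
    A workaround often cited for Doctrine 1 (an editor's sketch, untested against this schema): since the hydrator insists on adding the id column, group on the column instead; GROUP BY collapses the duplicates the same way DISTINCT would:

        Doctrine_Query::create()
            ->select('rec.city')
            ->from('Records rec')
            ->where('rec.state = ?', $state)  // bound parameter instead of string interpolation
            ->groupBy('rec.city')
            ->execute();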

    Read the article

  • MySQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello. Specifications: MySQL 4.1+. I have a situation that requires a particular result set from a MySQL query; let's look at the current query first, then my question:

        SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline)
        FROM thread AS thread
        LEFT JOIN post AS post ON (thread.threadid = post.threadid)
        LEFT JOIN forum AS forum ON (thread.forumid = forum.forumid)
        WHERE post.postid != thread.firstpostid
          AND thread.open = 1
          AND thread.visible = 1
          AND thread.replycount >= 1
          AND post.visible = 1
          AND (forum.options & 1)
          AND (forum.options & 2)
          AND (forum.options & 4)
          AND forum.forumid IN (1,2,3)
        GROUP BY post.threadid
        ORDER BY tdateline DESC, pdateline ASC

    As you can see, I mainly need to select the dateline of threads from the 'thread' table, plus the dateline of the second post of each thread, under the conditions you see in the WHERE clause. Since each thread has many posts and I need only one result per thread, I've used GROUP BY for that purpose, so the query returns only one post's dateline with its related unique thread. My questions:

        1. How do I limit the returned threads per forum? Suppose I need at most 5 threads returned for each forum listed in 'forum.forumid IN (1,2,3)' - how can this be achieved?
        2. Are there any recommendations for optimizing this query (after solving the first point, of course)?

    Notes: I prefer not to use subqueries, but if that's the only solution available I'll accept it. Double queries are not recommended; I'm sure there's a smart solution for this situation. Advice appreciated in advance :)
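
    The standard self-join trick for a per-group limit (an editor's sketch, not from the original post): count, for each thread, how many threads in the same forum are at least as new; keeping only those with five or fewer gives the newest 5 per forum, and it runs on MySQL 4.1 because no subquery is involved:

        SELECT t.*
        FROM thread AS t
        JOIN thread AS newer
          ON newer.forumid = t.forumid
         AND newer.dateline >= t.dateline
        WHERE t.forumid IN (1,2,3)
        GROUP BY t.threadid
        HAVING COUNT(*) <= 5;

    The result can then be joined back to post to pick up the second-post datelines, with the original WHERE conditions layered on top.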

    Read the article

  • Help on MySQL table indexing when GROUP BY is used in a query

    - by Silver Light
    Thank you for your attention. There are two InnoDB tables:

        Table authors
          id INT
          nickname VARCHAR(50)
          status ENUM('active', 'blocked')
          about TEXT

        Table books
          author_id INT
          title VARCHAR(150)

    I'm running a query against these tables to get each author and a count of the books he has:

        SELECT a.*, COUNT(b.id) AS book_count
        FROM authors AS a, books AS b
        WHERE a.status != 'blocked'
          AND b.author_id = a.id
        GROUP BY a.id
        ORDER BY a.nickname

    This query is very slow (it takes about 6 seconds to execute). I have an index on books.author_id and it works perfectly, but I do not know how to create an index on the authors table so that this query could use it. Here is how the current EXPLAIN looks:

        id select_type table type possible_keys              key           key_len ref  rows Extra
        1  SIMPLE      a     ALL  PRIMARY,id_status_nickname NULL          NULL    NULL 3305 Using where; Using temporary; Using filesort
        1  SIMPLE      b     ref  key_author_id              key_author_id 5       a.id 2    Using where; Using index

    I've looked at the MySQL manual on optimizing queries with GROUP BY, but could not figure out how to apply it to my query. I'd appreciate any help and hints on this - what should the index structure be so that MySQL can use it?
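
    One commonly suggested rewrite (an editor's sketch, not from the original post): the "Using temporary; Using filesort" comes from grouping on a.id while ordering on a.nickname. Aggregating books once in a derived table removes the conflicting GROUP BY/ORDER BY pair:

        SELECT a.*, bc.book_count
        FROM authors AS a
        JOIN (SELECT author_id, COUNT(*) AS book_count
              FROM books
              GROUP BY author_id) AS bc ON bc.author_id = a.id
        WHERE a.status != 'blocked'
        ORDER BY a.nickname;

    The inner aggregate can be resolved entirely from the existing books.author_id index, and an index on authors(nickname) gives the outer ORDER BY something to walk instead of a filesort.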

    Read the article

  • MySQL server with website hosting with managed hosting

    - by BlackSheep
    I work for a small business, and I am the "IT Department". I also happen to be a summer intern, so after I leave the number of IT staff will be 0 for an indefinite period of time. I need a place where I can reliably put up the company's website as well as the mySQL+PHP backend. Good redundancy is a plus, as well as easy administration for my IT-challenged colleagues. Managed hosting would be good, so the PHP versions can update without my company having to hire an admin. EDIT: The company already has a mySQL+PHP server running locally which hosts the existing website. My assignment is to find a remote server where the latest versions of mySQL+PHP will be maintained and where there is a very small chance of unintended downtime. Can you recommend anything?

    Read the article

  • Official MySQL Cluster Training Available Near You!

    - by Antoinette O'Sullivan
    Oracle is the official provider of MySQL Training. To learn more about MySQL Cluster, you can register for MySQL Cluster training at a large selection of locations, and often you will find the course delivered in your local language. For example:

        Where                        When               Delivery Language
        Prague, Czech Republic       17 September 2012  Czech
        Warsaw, Poland               1 August 2012      Polish
        Wien, Austria                27 August 2012     German
        London, United Kingdom       18 July 2012       English
        Lisbon, Portugal             3 December 2012    European Portuguese
        Nice, France                 8 October 2012     French
        Barcelona, Spain             25 September 2012  Spanish
        Madrid, Spain                20 August 2012     Spanish
        Denver, United States        17 October 2012    English
        Chicago, United States       22 August 2012     English
        New York, United States      20 June 2012       English
        Petaling Jaya, Malaysia      18 July 2012       English
        Singapore                    21 August 2012     English
        Melbourne, Australia         13 June 2012       English
        Mexico City, Mexico          23 July 2012       Spanish

    To learn more or register your interest in another course, location, or date, go to Oracle University's official portal.

    Read the article

  • Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    - by Siddharth
    My MySQL installation was screwed up. I uninstalled and installed it multiple times, but I kept getting a hang or a "can't connect" error. I finally have it running now with sudo -u mysql mysqld. How do I get mysqld to run at startup, and why do I need to run it as the mysql user? Can I change it to run as root?

    Edit:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             230G  5.5G  213G   3% /
        none                  2.0G  268K  2.0G   1% /dev
        none                  2.0G  3.2M  2.0G   1% /dev/shm
        none                  2.0G   92K  2.0G   1% /var/run
        none                  2.0G     0  2.0G   0% /var/lock
        none                  2.0G     0  2.0G   0% /lib/init/rw
        /dev/sda5             230G   20G  199G   9% /home
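
    On the two sub-questions (a sketch; it assumes the Ubuntu/Debian MySQL package, which the /lib/init/rw mount suggests): mysqld deliberately drops root privileges and will refuse to run as root unless forced with --user=root, so keeping it under the mysql account is the intended setup. Startup should come from the packaged service rather than a manual launch:

        sudo update-rc.d mysql defaults   # register the service for boot
        sudo service mysql start          # start it the supported way

    If those report no such service, the init scripts were lost in the repeated reinstalls, and reinstalling mysql-server restores them.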

    Read the article

  • MySQL port forwarding

    - by Eduard Luca
    I am trying to help a colleague connect to my MySQL server. The situation is a bit special, and here's why (let's call him person A and me person B): person A has a PC on which he runs a virtual machine, and the VM is on the same network as the physical PC. Person A is also on a network shared with person B (a different network). I want the site that lives on A's VM to be able to connect to the MySQL server on B's PC. I thought port forwarding would be appropriate for this: from ip-of-person-A:3306 to ip-of-person-B:3306. That way the site would connect to the IP of the PC it's living on (not the VM), which would forward on to B's MySQL. I've seen several examples of port forwarding, but I don't think they're what I need - from what I've seen, they do kind of the opposite. Would something like this be achievable?
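
    One way to get exactly that forward (a sketch; host names are placeholders, and it assumes B's machine runs an SSH server): open an SSH local forward from A's physical PC, with -g so that other hosts - the VM included - may connect to it:

        # Run on person A's PC: connections to ip-of-person-A:3306
        # now come out at B's MySQL on port 3306.
        ssh -g -N -L 3306:127.0.0.1:3306 userB@ip-of-person-B

    Without SSH, any plain TCP relay or the OS firewall's port-forwarding feature achieves the same mapping: A's PC listens on 3306 and relays to B's 3306.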

    Read the article

  • Install latest Apache, PHP, MySQL versions not available via apt

    - by Tarun
    I need to install the latest versions of Apache, PHP and MySQL that are available on their websites, but the problem is that the apt repositories don't have these updated versions yet. Versions that I want: Apache 2.4.4, PHP 5.4.16 and MySQL 5.6.12. Versions available via apt: Apache 2.2.22, PHP 5.4.15, MySQL 5.5.31. Ubuntu doesn't support the above versions of these packages. Please, I want help! Thanks! P.S. Tasksel also doesn't have these versions of the packages yet.
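
    Two routes people usually take when the archive lags upstream (a sketch; the PPA names are cited from memory, so verify them before adding): a third-party PPA that tracks newer releases, or building from source. The PPA route looks like:

        sudo add-apt-repository ppa:ondrej/php5      # newer PHP 5.x builds
        sudo add-apt-repository ppa:ondrej/apache2   # newer Apache builds
        sudo apt-get update && sudo apt-get install apache2 php5

    For MySQL 5.6 there was no comparably standard PPA at the time, so the official .deb or tarball from dev.mysql.com is the usual fallback.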

    Read the article

  • Connect to MySQL on remote server from inside python script (DB API)

    - by Atul Kakrana
    Very recently I have started to write Python scripts that need to connect to a few databases on a MySQL server. The problem is that my script works fine from the office, but running it from home while on the office VPN generates a connection error. I also noticed that the SQuirreL MySQL client cannot connect from my home either, though it works fine on my office computer; I think both fail for the same reason. Do I need to create an SSH tunnel and forward the port? If yes, how do I do it? I have SSH access to the server MySQL is installed on. Please help me with this. AK
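
    A tunnel is indeed the usual fix when the VPN only passes SSH (a sketch; host and account names are placeholders): forward a local port to the server's MySQL port, then point the script - or SQuirreL - at the local end:

        ssh -N -f -L 3307:127.0.0.1:3306 you@mysql-server-host

    In the Python DB API connection, use host='127.0.0.1' and port=3307 instead of the remote host name; every byte then rides the SSH session you already know works.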

    Read the article

  • MySQL for Beginners course - first steps to lowering your Database TCOs

    - by Antoinette O'Sullivan
    Thinking about lowering your database TCO by using the MySQL Server? Don't miss the chance to get training from the source! With the newly released MySQL for Beginners class, learn how this powerful relational database management system can make your life easier and more fun! This course covers all the basics and will get you on your way with a solid foundation. This instructor-led, hands-on class covers the fundamentals of SQL and relational databases, using MySQL as a teaching tool. Send information about this course release to a friend who might be considering getting started on the world's most popular small-footprint database.

    Read the article

  • Connecting a remote MySQL database to a local MySQL database? [migrated]

    - by Shashank
    I want to write PHP code to be embedded in a Drupal 7 module. I want to call a procedure that can copy newly generated data in the local MySQL database to the remote MySQL database: when data is inserted into table 'A' of my local database, it should be copied to the specific table 'B' of the remote MySQL server's database. Table 'A' is on the local host; table 'B' is on the remote server. Insert data into 'A' - data is copied into 'B'. Is this possible? Thanks for the help.
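
    One MySQL-level way to do it with no PHP in the copy path (an editor's sketch; the column list and credentials are made up for illustration, and the FEDERATED engine must be enabled on the local server): expose the remote table through a FEDERATED table, then let an AFTER INSERT trigger on A do the copying:

        CREATE TABLE remote_b (
            id INT,
            payload VARCHAR(255)
        ) ENGINE=FEDERATED
          CONNECTION='mysql://repl_user:secret@remote-host:3306/remotedb/B';

        CREATE TRIGGER copy_to_b AFTER INSERT ON A
        FOR EACH ROW
            INSERT INTO remote_b (id, payload) VALUES (NEW.id, NEW.payload);

    The trade-off is that every insert into A now blocks on the remote server; real replication or a queued sync is safer if the link is flaky.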

    Read the article

  • Phantom activity on MySQL

    - by LoveMeSomeCode
    This is probably just my total lack of MySQL expertise, but is it typical to see lots of phantom activity on a MySQL instance via phpMyAdmin? I have a shared hosting plan through Lithium, and when I log in through the phpMyAdmin console and click on the 'Status' tab, it shows crazy high numbers for queries. Within an hour of activating my account I had 1 million queries. At first I thought this was them setting things up, but the number climbs constantly, averaging 170/second. I've got a support ticket in with Lithium, but I thought I'd ask here whether this is a MySQL/shared-host thing, because the same thing happened with a shared hosting plan through Joyent.
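
    A likely explanation (an editor's note, not from the original post): on shared hosting the Status tab reads the server-wide counters, so every customer's traffic on that MySQL instance is included. Comparing the two scopes makes the difference visible:

        SHOW GLOBAL STATUS LIKE 'Questions';   -- everyone on the shared instance
        SHOW SESSION STATUS LIKE 'Questions';  -- this connection only

    If the global number races while the session number stays put, the "phantom" activity is just the neighbours.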

    Read the article

  • How to Add Policy-based Audit Compliance to your existing MySQL applications

    - by Rob Young
    As a follow-up to an earlier blog on the subject, please join us today at 0900 US PT to learn how to easily add policy-based audit compliance to your existing MySQL applications. This brief, informative session will provide an overview of the new MySQL Enterprise Audit plugin and will include a simple, practical, step-by-step "how to" approach to getting up and running with the new functionality. You can learn more and secure your seat for the presentation here. Thanks for your continued support of MySQL!

    Read the article

  • How to run MySQL DROP and CREATE SYNONYM statements in a shell script

    - by bgrif
    I have added these commands to a script I am writing, and I am running into an issue with it not logging onto MySQL and running the commands. How can I fix this and make it run?

        #! /bin/bash
        Subject: Please stage the following TFL09143 Locator Bulletin to all TF90 staging environments:

        # This next section is to go to the mysql server and make changes. You can
        # drop and create synonyms, truncate a table and insert into a different one.
        # You will be able to verify the counts in the different locations.
        $ mysql --host=app03-bsi --u "" --p "" "TF90BPS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90BP.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90BP.TFBPS3.BTXSUPB && TRUNCATE TABLE TF90BP.TFBPS3.BTXSUPB SELECT * FROM TF90BP.TFBPS2.BTXSUPB; select count () from TF90BP.TF90.BTXADDR select count() from TF90BPS.TF90.BTXADDR; select count() from TF90BP.TF90.BTXSUPB; select count() from TF90BPS.TF90.BTXSUPB;"
        $ mysql --host=app03-bsi --u "" --p "" "TF90LMS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90LM.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90LM.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90LM.TFLMS2.BTXADDR; TRUNCATE TABLE TF90LM.TFLMS3.BTXSUPB; INSERT INTO TF90LM.TFLMS3.BTXSUPB SELECT * FROM TF90LM.TFLMS2.BTXSUPB; Verify select count() from TF90LM.TF90.BTXADDR; select count() from TF90LMS.TF90.BTXADDR; select count() from TF90LM.TF90.BTXSUPB; select count() from TF90LMS.TF90.BTXSUPB"
        $ mysql --host=app03-bsi --u "" --p "" "TF90NCS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90NC.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90NC.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90NC.TFNCS2.BTXADDR; TRUNCATE TABLE TF90NC.TFNCS3.BTXSUPB; INSERT INTO TF90NC.TFNCS3.BTXSUPB SELECT * FROM TF90NC.TFNCS2.BTXSUPB; Verify select count() from TF90NC.TF90.BTXADDR; select count() from TF90NCS.TF90.BTXADDR; select count() from TF90NC.TF90.BTXSUPB; select count() from TF90NCS.TF90.BTXSUPB"
        $ mysql --host=app03-bsi --u "" --p "" "TF90PVS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90PV.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90PV.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90PV.TFPVS2.BTXADDR; TRUNCATE TABLE TF90PV.TFPVS3.BTXSUPB; INSERT INTO TF90PV.TFPVS3.BTXSUPB SELECT * FROM TF90PV.TFPVS2.BTXSUPB; Verify select count() from TF90PV.TF90.BTXADDR; select count() from TF90PVS.TF90.BTXADDR; select count() from TF90PV.TF90.BTXSUPB; select count() from TF90PVS.TF90.BTXSUPB"

        TFL09143 Staging
        cd \ntsrv\common\To\IT-CERT-TEST\TFL09143   # change to mapped network drive
        cp -p TFL09143.pkg /d:/tf90/code_stg && /tf90bp/code_stg && /tf90lm/code_stg && /tf90pv/code_stg
        # Copies the package from the networked folder and then copies to the location(s) needed.

        InvalidInput="true"
        if [ $# -eq 0 ] ; then
            echo "This script sets up TF90 Staging"
            echo -n "Which production do you want to run? (RB/TaxLocator/Cyclic)"
            read ProductionDistro
        else
            ProductionDistro="$1"
        fi
        while [ "$InvalidInput" = "true" ]
        do
            if [ "$ProductionDistro" = "RB" -o "$ProductionDistro" = "TaxLocator" -o "$ProductionDistro" = "Cyclic" ] ; then
                InvalidInput="false"
                break
            else
                echo "You have entered an error"
                echo "You must type RB or TaxLocator or Cyclic"
                echo "you typed $ProductionDistro"
                echo "This script sets up TF90 Staging"
                read ProductionDistro
            fi
        done

        InvalidInput="true"
        if [ $# -eq 0 ] ; then
            echo "This script sets up RB TF90 Staging"
            echo -n "Which Element do you want to run? (TF90/TF90BP/TF90LM/TF90PV/ALL)"
            read ElementDistro
        else
            ElementDistro="$1"
        fi
        while [ "$InvalidInput" = "true" ]
        do
            if [ "$ElementDistro" = "TF90" -o "$ElementDistro" = "TF90BP" -o "$ElementDistro" = "TF90LM" -o "$ElementDistro" = "TF90PV" -o "$ElementDistro" = "ALL" ] ; then
                InvalidInput="false"
                break
            else
                echo "You have entered an error"
                echo "You must type TF90 or TF90BP or TF90LM or TF90PV"
                echo "you typed $ElementDistro"
                echo "This script sets up TF90 Staging"
                read ElementDistro
            fi
        done

        if [ "$ElementDistro" = "TF90" ] ; then
            cd /d/tf90/code_stg
            vim TFL09143.pkg
            export var=TF90_CONNECT_STRING=DSN=TF90NCS;export Description=TF90NCS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90NCS;
            export DATASET=DEFAULT
            pkgintall -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "$TF90BP" ] ; then
            cd /d/tf90bp/code_stg
            vim TFL09143.pkg
            export TF90_CONNECT_STRING=DSN=TF90BPS;export Description=TF90BPS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90BPS;
            start tfloader -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "$TF90LM" ] ; then
            cd /d/tf90lm/code_stg
            vim TFL09143.pkg
            export TF90_CONNECT_STRING=DSN=TF90LMS;export Description=TF90LMS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90LMS;
            start tfloader -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "TF90PV" ] ; then
            cd /d/tf90pv/code_stg
            vim TFL09143.pkg
            export TF90_CONNECT_STRING=DSN=TF90PVS;Description=TF90PVS;Trusted_Connection=Yes;WSID=APP03-BSI;DATABASE=TF90PVS;
            start tfloader -l -v ../TFL09143.pkg
        fi
        exit 0
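
    A sketch of the invocation fix (not a full rewrite of the script; $DBUSER/$DBPASS are placeholder variables and the SQL body is elided): inside a script there is no "$ " shell prompt, and the mysql client's long options are --user and --password (or the short -u/-p) - "--u" and "--p" are not valid flags. A corrected call looks like:

        mysql --host=app03-bsi --user="$DBUSER" --password="$DBPASS" TF90BPS -Bse "
            TRUNCATE TABLE ...;
            SELECT COUNT(*) FROM ...;
        "

    Note also that MySQL has no DROP SYNONYM / CREATE SYNONYM statement - that syntax comes from other database products - so those statements will still fail once the login works; CREATE VIEW is the closest MySQL equivalent.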

    Read the article

  • MySQL questions for beginners

    - by ankhseeker
    OK, I have a few questions regarding MySQL. I am currently running the command-line version of Ubuntu 12.04.4 LTS, and I am looking for a database that I can use. I am confused at this point because I am uninformed:

        1. Is MySQL just one database on the server, or can it contain several or many databases?
        2. What programs do I use to access it on the server - or is it VT-100-type access?
        3. I understand that MySQL comes with LAMP and/or Ubuntu. I think it is already installed but am not sure how to access it, though that is another question for later.
        4. Outside of the man pages and the Ubuntu manual, is there a site covering its setup and use?

    Thanks!
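
    A few first steps that answer most of this (a sketch; it assumes the LAMP stack was selected during installation - if not, sudo apt-get install mysql-server adds it): one MySQL server process hosts many databases, and on a command-line-only box you talk to it with the stock mysql client rather than a GUI:

        sudo service mysql status    # is the server installed and running?
        mysql -u root -p             # open the command-line client
        # then, at the mysql> prompt:
        #   SHOW DATABASES;
        #   CREATE DATABASE mydb;

    For documentation beyond the man pages, the MySQL Reference Manual at dev.mysql.com/doc and the Ubuntu Server Guide's MySQL chapter cover setup and use.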

    Read the article

  • Cannot Start MySQL Server on Fresh MAMP Install

    - by alexpelan
    I'm using Mac OS X 10.6.2 on my Macbook Pro. I can get the apache server to start, but not the mysql server, on both the default apache and default MAMP ports. When I try to go to my start page, I get the message "Error: Could not connect to MySQL server!" . Here's what's in my mysql error log: 00513 02:00:07 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended 100513 02:00:16 mysqld_safe Starting mysqld daemon with databases from /Applications/MAMP/db/mysql 100513 2:00:16 [Warning] The syntax '--log_slow_queries' is deprecated and will be removed in a future release. Please use '--slow_query_log'/'--slow_query_log_file' instead. 100513 2:00:16 [Warning] You have forced lower_case_table_names to 0 through a command-line option, even though your file system '/Applications/MAMP/db/mysql/' is case insensitive. This means that you can corrupt a MyISAM table by accessing it with different cases. You should consider changing lower_case_table_names to 1 or 2 100513 2:00:16 [Warning] One can only use the --user switch if running as root 100513 2:00:16 [Note] Plugin 'FEDERATED' is disabled. 100513 2:00:16 [Note] Plugin 'ndbcluster' is disabled. InnoDB: Error: log file /usr/local/mysql/data/ib_logfile0 is of different size 0 5242880 bytes InnoDB: than specified in the .cnf file 0 16777216 bytes! 100513 2:00:16 [ERROR] Plugin 'InnoDB' init function returned error. 100513 2:00:16 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 100513 2:00:16 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb' 100513 2:00:16 [ERROR] Aborting 100513 2:00:16 [Note] /Applications/MAMP/Library/libexec/mysqld: Shutdown complete 100513 02:00:16 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended A couple of things: 1) There are a bunch of different .cnf files that come with MAMP (my-huge, my-medium, etc.)...how can I tell which one is actually being used? 2) I deleted the ib_logfile0 and ib_logfile1 as recommended by another post on serverfault, and then ended up with more errors: 100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile0 did not exist: new to be created InnoDB: Setting log file /usr/local/mysql/data/ib_logfile0 size to 16 MB InnoDB: Database physically writes the file full: wait... 100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile1 did not exist: new to be created InnoDB: Setting log file /usr/local/mysql/data/ib_logfile1 size to 16 MB InnoDB: Database physically writes the file full: wait... InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100519 16:01:31 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 100519 16:01:31 InnoDB: Started; log sequence number 0 44556 100519 16:01:31 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb' 100519 16:01:31 [ERROR] Aborting And then I got this the next time I tried to run it: InnoDB: Unable to lock /usr/local/mysql/data/ibdata1, error: 35 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. Sorry that this is a lot of information, but I don't want to leave anything out. Thanks.

    Read the article

  • Severe mysqldump performance degradation using Centos Linux, 8GB PAE and MySQL 5.0.77

    - by Duncan Harris
    We use MySQL 5.0.77 on CentOS 5.5 on VMWare: Linux dev.ic.soschildrensvillages.org.uk 2.6.18-194.11.4.el5PAE #1 SMP Tue Sep 21 05:48:23 EDT 2010 i686 i686 i386 GNU/Linux We have recently upgraded from 4GB RAM to 8GB. When we did this the time of our mysqldump overnight backup jumped from under 10 minutes to over 2 hours. It also caused unresponsiveness on our plone based web site due to database load. The dump is using the optimized mysqldump format and is spooled directly through a socket to another server. Any ideas on what we could do to fix gratefully appreciated. Would a MySQL upgrade help? Anything we can do to MySQL config? Anything we can do to Linux config? Or do we have to add another server or go to 64-bit? We ran a previous (non-virtual) server on 6GB PAE and didn't notice a similar issue. This was on same MySQL version, but Centos 4.4. Server config file: [mysqld] port=3307 socket=/tmp/mysql_live.sock wait_timeout=31536000 interactive_timeout=31536000 datadir=/var/mysql/live/data user=mysql max_connections = 200 max_allowed_packet = 64M table_cache = 2048 binlog_cache_size = 128K max_heap_table_size = 32M sort_buffer_size = 2M join_buffer_size = 2M lower_case_table_names = 1 innodb_data_file_path = ibdata1:10M:autoextend innodb_buffer_pool_size=1G innodb_log_file_size=300M innodb_log_buffer_size=8M innodb_flush_log_at_trx_commit=1 innodb_file_per_table [mysqldump] # Do not buffer the whole result set in memory before writing it to # file. Required for dumping very large tables quick max_allowed_packet = 64M [mysqld_safe] # Increase the amount of open files allowed per process. Warning: Make # sure you have set the global system limit high enough! The high value # is required for a large number of opened tables open-files-limit = 8192 Server variables: mysql> show variables; +---------------------------------+------------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+------------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/local/mysql-5.0.77-linux-i686-glibc23/ | | binlog_cache_size | 131072 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_filesystem | binary | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/local/mysql-5.0.77-linux-i686-glibc23/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 10 | | datadir | /var/mysql/live/data/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | keep_files_on_create | OFF | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | ft_boolean_syntax | + -><()~*:""&| | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | YES | | have_bdb | NO | | have_blackhole_engine | YES | | have_compress | YES | | have_crypt | YES | | 
have_csv | YES | | have_dynamic_loading | YES | | have_example_engine | NO | | have_federated_engine | YES | | have_geometry | YES | | have_innodb | YES | | have_isam | NO | | have_merge_engine | YES | | have_ndbcluster | DISABLED | | have_openssl | DISABLED | | have_ssl | DISABLED | | have_query_cache | YES | | have_raid | NO | | have_rtree_keys | YES | | have_symlink | YES | | hostname | app.ic.soschildrensvillages.org.uk | | init_connect | | | init_file | | | init_slave | | | innodb_additional_mem_pool_size | 1048576 | | innodb_autoextend_increment | 8 | | innodb_buffer_pool_awe_mem_mb | 0 | | innodb_buffer_pool_size | 1073741824 | | innodb_checksums | ON | | innodb_commit_concurrency | 0 | | innodb_concurrency_tickets | 500 | | innodb_data_file_path | ibdata1:10M:autoextend | | innodb_data_home_dir | | | innodb_adaptive_hash_index | ON | | innodb_doublewrite | ON | | innodb_fast_shutdown | 1 | | innodb_file_io_threads | 4 | | innodb_file_per_table | ON | | innodb_flush_log_at_trx_commit | 1 | | innodb_flush_method | | | innodb_force_recovery | 0 | | innodb_lock_wait_timeout | 50 | | innodb_locks_unsafe_for_binlog | OFF | | innodb_log_arch_dir | | | innodb_log_archive | OFF | | innodb_log_buffer_size | 8388608 | | innodb_log_file_size | 314572800 | | innodb_log_files_in_group | 2 | | innodb_log_group_home_dir | ./ | | innodb_max_dirty_pages_pct | 90 | | innodb_max_purge_lag | 0 | | innodb_mirrored_log_groups | 1 | | innodb_open_files | 300 | | innodb_rollback_on_timeout | OFF | | innodb_support_xa | ON | | innodb_sync_spin_loops | 20 | | innodb_table_locks | ON | | innodb_thread_concurrency | 8 | | innodb_thread_sleep_delay | 10000 | | interactive_timeout | 31536000 | | join_buffer_size | 2097152 | | key_buffer_size | 8384512 | | key_cache_age_threshold | 300 | | key_cache_block_size | 1024 | | key_cache_division_limit | 100 | | language | /usr/local/mysql-5.0.77-linux-i686-glibc23/share/mysql/english/ | | large_files_support | ON | | large_page_size | 0 | | large_pages | OFF | | lc_time_names | en_US | | license | GPL | | local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_bin | OFF | | log_bin_trust_function_creators | OFF | | log_error | | | log_queries_not_using_indexes | OFF | | log_slave_updates | OFF | | log_slow_queries | OFF | | log_warnings | 1 | | long_query_time | 10 | | low_priority_updates | OFF | | lower_case_file_system | OFF | | lower_case_table_names | 1 | | max_allowed_packet | 67108864 | | max_binlog_cache_size | 4294963200 | | max_binlog_size | 1073741824 | | max_connect_errors | 10 | | max_connections | 200 | | max_delayed_threads | 20 | | max_error_count | 64 | | max_heap_table_size | 33554432 | | max_insert_delayed_threads | 20 | | max_join_size | 18446744073709551615 | | max_length_for_sort_data | 1024 | | max_prepared_stmt_count | 16382 | | max_relay_log_size | 0 | | max_seeks_for_key | 4294967295 | | max_sort_length | 1024 | | max_sp_recursion_depth | 0 | | max_tmp_tables | 32 | | max_user_connections | 0 | | max_write_lock_count | 4294967295 | | multi_range_count | 256 | | myisam_data_pointer_size | 6 | | myisam_max_sort_file_size | 2146435072 | | myisam_recover_options | OFF | | myisam_repair_threads | 1 | | myisam_sort_buffer_size | 8388608 | | myisam_stats_method | nulls_unequal | | ndb_autoincrement_prefetch_sz | 1 | | ndb_force_send | ON | | ndb_use_exact_count | ON | | ndb_use_transactions | ON | | ndb_cache_check_time | 0 | | ndb_connectstring | | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | 
net_write_timeout | 60 | | new | OFF | | old_passwords | OFF | | open_files_limit | 8192 | | optimizer_prune_level | 1 | | optimizer_search_depth | 62 | | pid_file | /var/mysql/live/mysqld.pid | | plugin_dir | | | port | 3307 | | preload_buffer_size | 32768 | | profiling | OFF | | profiling_history_size | 15 | | protocol_version | 10 | | query_alloc_block_size | 8192 | | query_cache_limit | 1048576 | | query_cache_min_res_unit | 4096 | | query_cache_size | 0 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | | query_prealloc_size | 8192 | | range_alloc_block_size | 4096 | | read_buffer_size | 131072 | | read_only | OFF | | read_rnd_buffer_size | 262144 | | relay_log | | | relay_log_index | | | relay_log_info_file | relay-log.info | | relay_log_purge | ON | | relay_log_space_limit | 0 | | rpl_recovery_rank | 0 | | secure_auth | OFF | | secure_file_priv | | | server_id | 0 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slave_compressed_protocol | OFF | | slave_load_tmpdir | /tmp/ | | slave_net_timeout | 3600 | | slave_skip_errors | OFF | | slave_transaction_retries | 10 | | slow_launch_time | 2 | | socket | /tmp/mysql_live.sock | | sort_buffer_size | 2097152 | | sql_big_selects | ON | | sql_mode | | | sql_notes | ON | | sql_warnings | OFF | | ssl_ca | | | ssl_capath | | | ssl_cert | | | ssl_cipher | | | ssl_key | | | storage_engine | MyISAM | | sync_binlog | 0 | | sync_frm | ON | | system_time_zone | GMT | | table_cache | 2048 | | table_lock_wait_timeout | 50 | | table_type | MyISAM | | thread_cache_size | 0 | | thread_stack | 196608 | | time_format | %H:%i:%s | | time_zone | SYSTEM | | timed_mutexes | OFF | | tmp_table_size | 33554432 | | tmpdir | /tmp/ | | transaction_alloc_block_size | 8192 | | transaction_prealloc_size | 4096 | | tx_isolation | REPEATABLE-READ | | updatable_views_with_limit | YES | | version | 5.0.77 | | version_comment | MySQL Community Server (GPL) | | version_compile_machine | i686 | | version_compile_os | pc-linux-gnu | | wait_timeout | 31536000 | +---------------------------------+------------------------------------------------------------------+ 237 rows in set (0.00 sec)
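
    Two things worth trying first (an editor's sketch, not from the original post; the trailing dots stand for the existing dump options): confirm whether the box is swapping during the dump - on a 32-bit PAE kernel the jump from 4 GB to 8 GB changes low-memory behaviour, and the 1 GB InnoDB buffer pool plus per-connection buffers may now collide with the page cache - and make the dump itself gentler on InnoDB:

        vmstat 5                                    # watch the si/so columns while the dump runs
        mysqldump --single-transaction --quick ...  # consistent InnoDB dump without long locks

    --single-transaction is the documented way to dump InnoDB tables without locking them, which would at least stop the plone site stalling even if the elapsed time stays long.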

    Read the article

  • Query for props list with or without values

    - by vitto
    Hi, I'm trying to make a SELECT across three relational tables like these:

        table_materials
          -> material_id
           - material_name

        table_props
          -> prop_id
           - prop_name

        table_materials_props
           - row_id
          -> material_id
          -> prop_id
           - prop_value

    On my page, I'd like to get a result like this one, but I have some problems with the query:

        material  prop A  prop B  prop C  prop D  prop E
        wood      350     NULL    NULL    84      16
        iron      NULL    17      NULL    NULL    201
        copper    548     285     99      NULL    NULL

    So the query should return something like:

        material  prop_name  prop_value
        wood      prop A     350
        wood      prop B     NULL
        wood      prop C     NULL
        wood      prop D     84
        wood      prop E     16
        // and go on with the other rows

    I thought of using something like:

        SELECT *
        FROM table_materials AS m
        INNER JOIN table_materials_props AS mp ON m.material_id = mp.material_id
        INNER JOIN table_materials_props AS p ON mp.prop_id = p.prop_id
        ORDER BY p.prop_name

    The problem is that the query doesn't return the NULL values, and I need the same prop order for all the materials regardless of whether the prop values are NULL or not. I hope this example is clear!
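
    The usual shape of the fix (an editor's sketch, not from the original post; note the second join in the question also aliases table_materials_props where table_props was presumably meant): an INNER JOIN can never produce the missing combinations, so build every material/prop pair with a CROSS JOIN and attach the values with a LEFT JOIN, which leaves absent pairs as NULL and keeps the prop order identical for every material:

        SELECT m.material_name, p.prop_name, mp.prop_value
        FROM table_materials AS m
        CROSS JOIN table_props AS p
        LEFT JOIN table_materials_props AS mp
               ON mp.material_id = m.material_id
              AND mp.prop_id = p.prop_id
        ORDER BY m.material_name, p.prop_name;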

    Read the article
