Search Results

Search found 14956 results on 599 pages for 'mysql dba'.

Page 63/599 | < Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >

  • mysql compatibility error - rake aborted error on rake db:create

    - by user1904563
    I'm a beginner in RoR, using Ruby 1.9.3 and Rails 3.2.1. This is what appears when I run the rake db:create command:

        > rake db:create
        rake aborted!
        Please install the mysql adapter: `gem install activerecord-mysql-adapter` (can't activate mysql (~> 2.8.1), already activated mysql-2.9.0-x86-mingw32. Make sure all dependencies are added to Gemfile.)

    I have installed the mysql gem and copied the libmysql.dll file to Ruby193/bin (phpMyAdmin and XAMPP also work fine). Please help me solve this issue.

    Read the article

  • Drupal, mysql server settings

    - by Patrick
    Hi, I have a problem configuring the database settings in Drupal. Here is some sample data:

        Database (MySQL): databaseName
        User: user
        Password: password
        Server: server.com
        Server choice: mysqldb2 (in phpMyAdmin I have this option and I can choose between mysqldb1 and mysqldb2 to access the MySQL server)

    The error message I get is: The mysql error was: Access denied for user: 'user@localhost' (Using password: YES). I've tried the following lines in settings.php but I always get the same error message:

        $db_url = 'mysql://user:password@localhost/databaseName';
        $db_url = 'mysql://user:password@localhost/databaseName/mysqldb2';

    The user and password work in phpMyAdmin so I'm sure they are correct. Thanks

    Read the article

  • Using MySQL as data source in Microsoft SQL Server Analysis Services

    - by coldilocks
    Hi, I have installed the latest .NET connector (http://www.mysql.com/downloads/connector/net/). I can add MySQL databases as data sources, and I can even browse the data from Business Intelligence Studio. The problem is that I CANNOT create a data source view; if I create one without tables, trying to add them after the fact gives me the same error. Specifically, it looks like the data source view wizard submits queries against the MySQL database using square brackets/braces, and the query bombs. I get an error message like:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[my_db].[cheatType]' at line 2

    So, in summary, has anyone been able to create a data source view using MySQL tables and, if so, can they please show me how this can be done? Thanks for any help!

    Read the article

  • mysql gem for snow leopard

    - by Will
    I had trouble with the gem at first but got it to work when I installed the 64-bit MySQL and reinstalled the gem with arch flags, so it works in Rails. The error I used to get was uninitialized constant MysqlCompat::MysqlRes, but that is now gone :) However, in Xcode, when I run a RubyCocoa project I still get the old error of uninitialized constant MysqlCompat::MysqlRes. Does anyone know why this may be? Is it because the gdb is 64-bit? How can it work in Rails but not in RubyCocoa? A little debugging shows that it fails to load mysql_api.bundle:

        /Library/Ruby/Gems/1.8/gems/mysql-2.8.1/lib/mysql_api.bundle: dlopen(/Library/Ruby/Gems/1.8/gems/mysql-2.8.1/lib/mysql_api.bundle, 9): no suitable image found. Did find: (LoadError)
        /Library/Ruby/Gems/1.8/gems/mysql-2.8.1/lib/mysql_api.bundle: mach-o, but wrong architecture - /Library/Ruby/Gems/1.8/gems/mysql-2.8.1/lib/mysql_api.bundle
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'

    Read the article

  • Integration of Photoshop and MySQL

    - by NewToProgrammingWorld
    I'm working to integrate Photoshop (CS4, local) with MySQL (Linux, remote), such that: a) image file information, data, and metadata can be exported from PS into MySQL, where they can be manipulated both manually and computationally; b) the MySQL fields can be referenced in Photoshop scripting for further automated image file manipulation; and c) the MySQL fields can be referenced for other purposes, such as dynamic propagation of files and associated file data to a website. I've spent many hours running searches for these concepts on StackOverflow, Google, and Adobe Support. I'm coming up empty, which leads me to believe that it's not natively possible and that I'll need some middle layer like PHP. Does anyone know if it's possible for Photoshop and MySQL to share data bi-directionally? If so, how? If not, what solution(s) might you recommend, in terms of specific tools/technologies I can research further? Thank you.

    Read the article

  • Problem with Sphinx resultset larger than 16 MB in MySQL

    - by gmemon
    Hello all, I am accessing a large indexed text dataset using SphinxSE via MySQL. The size of the result set is on the order of gigabytes. However, I have noticed that MySQL stops the query with the following error whenever the result set is larger than 16MB:

        1430 (HY000): There was a problem processing the query on the foreign data source. Data source error: bad searchd response length (length=16777523)

    length shows the size of the result set that MySQL rejected. I have tried the same query with Sphinx's standalone search program and it works fine. I have tried all possible variables in both MySQL and Sphinx, but nothing helps. I am using Sphinx 0.9.9-rc2 and MySQL 5.1.46. Thanks

    Read the article

  • MySQL storing negative and positive decimals

    - by Shishant
    Hello, I want to be able to store values like -11.99 and +11.99 in a MySQL DB. I am thinking of DECIMAL instead of VARCHAR, but reading the MySQL site I found out that its behavior is incompatible with older versions of MySQL:

        As a result of the change from string to numeric format for DECIMAL storage, DECIMAL columns no longer store a leading + or - character or leading 0 digits. Before MySQL 5.0.3, if you inserted +0003.1 into a DECIMAL(5,1) column, it was stored as +0003.1. As of MySQL 5.0.3, it is stored as 3.1. For negative numbers, a literal - character is no longer stored. Applications that rely on the older behavior must be modified to account for this change.

    So what should the data type be, if I have to give up VARCHAR and stay compatible with older versions too?
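    A small illustrative sketch (table and column names are made up): the change quoted above affects only the stored text representation, not the numeric value, so DECIMAL keeps the sign of negative numbers on both old and new servers; only the cosmetic leading + and zero padding differ.

        -- Illustrative: DECIMAL(4,2) covers -99.99 .. 99.99, enough for ±11.99.
        CREATE TABLE price_changes (
            id INT AUTO_INCREMENT PRIMARY KEY,
            delta DECIMAL(4,2) NOT NULL
        );

        INSERT INTO price_changes (delta) VALUES (-11.99), (+11.99);

        -- The minus sign survives as part of the numeric value on any version;
        -- only the literal '+' glyph and leading zeros are not stored.
        SELECT id, delta FROM price_changes ORDER BY delta;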

    Read the article

  • MySQL inconsistent table scan results

    - by user148207
    What's going on here?

        mysql> select count(*) from notes where date(updated_at) > date('2010-03-25');
        +----------+
        | count(*) |
        +----------+
        |        0 |
        +----------+
        1 row in set (0.59 sec)

        mysql> select count(*) from notes where message like '%***%' and date(updated_at) > date('2010-03-25');
        +----------+
        | count(*) |
        +----------+
        |       26 |
        +----------+
        1 row in set (1.30 sec)

        mysql> explain select count(*) from notes where date(updated_at) > date('2010-03-25');
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        |  1 | SIMPLE      | notes | ALL  | NULL          | NULL | NULL    | NULL | 588106 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        1 row in set (0.07 sec)

        mysql> explain select updated_at from notes where message like '%***%' and date(updated_at) > date('2010-03-25');
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        |  1 | SIMPLE      | notes | ALL  | NULL          | NULL | NULL    | NULL | 588106 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        1 row in set (0.09 sec)

    Read the article

  • Connecting to hosted MySQL server with Java

    - by Infiniti Fizz
    Hi, I've recently been trying to connect to a hosted MySQL server using Java but can't get it to work. I can connect to a local MySQL via localhost using:

        connect = DriverManager.getConnection("jdbc:mysql://localhost/lego?" + "user=******&password=*******");

    (replacing the asterisks with my username and password). I can connect to the hosted MySQL database fine with PHP using:

        mysql_connect('mysql.hosts.co.uk','******','**********');
        mysql_select_db('test');

    My problem is that I cannot connect via Java. I have an Exception which is caught if the connection doesn't work, and it is always printed. Any ideas why it isn't working? Am I doing something wrong? Thanks for your time, InfinitiFizz

    Read the article

  • How to Import a .bak file using MySQL?

    - by user682526
    I have no knowledge of MySQL, and I have received a .bak file from one of my customers who asked me to "import" this file using MySQL. I installed the free MySQL Community Server, which includes the MySQL Command Line Client. Now I don't know how to import this .bak file and can't read the data I need. I tried installing the MySQL EE free trial as well, but it doesn't give me any GUI I can use to import the .bak file either. I'm frustrated and can't get anywhere. Can you please help? Thanks!
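    A hedged note for anyone in the same spot: a .bak file is very often a Microsoft SQL Server backup rather than anything MySQL can read, so checking the file's origin with the customer is worthwhile. If it turns out to be a plain SQL dump that merely carries a .bak extension, the MySQL Command Line Client can replay it (the database name and path below are illustrative):

        -- Run inside the MySQL Command Line Client; names and paths are made up.
        CREATE DATABASE customer_db;
        USE customer_db;
        SOURCE C:/path/to/customer_file.bak;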

    Read the article

  • How to get number of rows deleted from mysql in shell script

    - by simonlord
    Hi all, I can't work out how to get the mysql client to return the number of rows deleted to the shell when running a delete. Does anyone know what option will enable this, or a way around it? Here's what I'm trying, but I get no output:

        #!/bin/bash
        deleted=`mysql mydb -e "delete from mytable where insertedtime < '2010-04-01 00:00:00'"|tail -n 1`

    I was expecting something like this as the output from mysql:

        deleted
        999999

    which is why I have the tail -n 1, so I only pick up the count and not the column name. When running the command by hand (mysql mydb -e "delete from mytable where insertedtime < '2010-04-01 00:00:00'") there is no output. When running the statement interactively in the mysql client I get the following:

        mysql> delete from mytable where insertedtime < '2010-04-01 00:00:00';
        Query OK, 0 rows affected (0.00 sec)

    I want to get the rows-affected count into my shell variable. Any help would be most appreciated.
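    One hedged possibility (a sketch, not tested against this exact setup): MySQL's ROW_COUNT() function reports the rows changed by the preceding statement in the same session, so issuing the DELETE and a SELECT together in a single mysql invocation puts the count on stdout:

        -- Both statements must run in the same session/invocation so that
        -- ROW_COUNT() still refers to the DELETE.
        DELETE FROM mytable WHERE insertedtime < '2010-04-01 00:00:00';
        SELECT ROW_COUNT();

    Passing the client's -N (--skip-column-names) option as well would suppress the header line, making the tail -n 1 unnecessary.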

    Read the article

  • Installing mySQL on mac for use with python

    - by Paul Patterson
    I am aware that there are umpteen similar questions here and on other forums, but none of them have been able to help me. I simply want to install MySQL on my Mac (running Snow Leopard 10.6.5) for use with Python. So far I have: 1) downloaded and installed mysql-5.5.8-osx10.6-x86_64.dmg (I also accidentally downloaded and installed mysql-5.1.54-osx10.6-x86_64.dmg); 2) downloaded and installed MySQL-python-1.2.3; 3) added the following to my .bash_profile: export PATH=$PATH:/usr/local/mysql/bin. But when I run import mySQLdb in terminal I am met with the following message:

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: No module named mySQLdb

    Can anyone help?

    Read the article

  • MySQL 5 in MySQL 4 compatible mode for one database?

    - by Horace Ho
    In a recent project I have to maintain some PHP code. I set up a development server and installed MySQL, Apache, PHP, etc. The program terminates with an error:

        Unknown column _ _ _ in 'on clause'
        Cannot select ....

    Google shows that it's a change in syntax around JOINs: parentheses are needed. As you may imagine, fixing all those SQL strings in the PHP will be the last resort. Is it possible to configure MySQL 5 to run in a MySQL 4 compatible mode? Or, even better, for only one database? Thanks! p.s. Since we are going to host the code on a new production server (MySQL 5 on a CentOS box), the chance of installing MySQL 4 on the new server is slim.
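    For context, a hedged sketch of the usual culprit (table and column names are illustrative): MySQL 5.0 lowered the precedence of the comma operator relative to JOIN, so tables joined with commas are no longer visible inside a later ON clause; parenthesizing the comma-joined list restores the old behaviour on both versions:

        -- Fails on MySQL 5 with "Unknown column 't1.id' in 'on clause'":
        --   SELECT * FROM t1, t2 JOIN t3 ON t3.t1_id = t1.id;

        -- Works on MySQL 4 and 5 alike; the parentheses make t1 and t2
        -- visible to the ON clause again.
        SELECT * FROM (t1, t2) JOIN t3 ON t3.t1_id = t1.id;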

    Read the article

  • MySQL Accept Any Password

    - by George
    Suppose that I have a test server with a large group of test accounts. The test accounts have unknown passwords which are hard-coded into the application's reports and are stored encrypted in the mysql.user table. Is there any option or hack which can be used to make MySQL accept any text as the "correct" password for an account? For example:

        UPDATE mysql.user SET Password="*" WHERE 1=1

    Note: the above line wouldn't work because it would literally set the password to "*" rather than treating it as a wildcard. However, I am looking for a way to create a MySQL account which would accept anything as a valid password. This machine is disconnected from the network and I have full access to the mysql database...
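    As far as I know, MySQL has no wildcard-password mechanism, so "accept anything" isn't directly achievable. The closest built-in workaround, offered as a hedged sketch for an isolated test box like this one, is to blank the stored hashes so the accounts require no password at all (clients must then connect without supplying one, which is not quite the same thing):

        -- Sketch only; the WHERE clause is illustrative. An empty Password
        -- field means the account authenticates with no password.
        UPDATE mysql.user SET Password = '' WHERE User LIKE 'test%';
        FLUSH PRIVILEGES;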

    Read the article

  • Best solution to import records from MySQL database to MS SQL (Hourly)

    - by xkingpin
    I need to import records stored in a MySQL database that I do not maintain into my SQL Server 2005 database (x64). We should import the records at a regular interval (probably hourly). What would be the best solution for performing the regular import?

    1. Windows service (referencing the MySql.Data dll)
    2. Windows client (could make it automated)
    3. SQL extended stored procedure (is it possible to reference the MySql.Data dll?)
    4. SSIS package - install the MySQL ODBC driver

    The problem with #4 is that I do not really want to support the ODBC driver on the SQL Server. For #3, I'm not sure if you can even reference the x86 MySql.Data dll in an x64 SQL Server process (or whether you can reference that dll within a SQL Server project at all).
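    A further hedged option (all names below are placeholders): if the ODBC driver were acceptable after all, a linked server would let a plain SQL Agent job do the hourly pull in pure T-SQL via OPENQUERY, with no custom service to maintain:

        -- Sketch: assumes a linked server MYSQL_SRC configured over the
        -- MySQL ODBC driver; table and column names are illustrative.
        INSERT INTO dbo.ImportedRecords (id, name, updated_at)
        SELECT id, name, updated_at
        FROM OPENQUERY(MYSQL_SRC, 'SELECT id, name, updated_at FROM source_table');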

    Read the article

  • How to completely remove MySQL from a windows 7 machine

    - by Jazibobs
    I'm new to MySQL and struggling to find a version and workbench that work stably on my 64-bit Windows 7 machine. I've decided to try to completely remove MySQL from my machine and restart the installation process from scratch. However, after uninstalling all software linked with MySQL through the conventional Control Panel route, some MySQL Windows services still remain on my machine. I can't see any obvious way to remove these, and they have since been causing me difficulties when trying to install different versions of MySQL. Could anyone please advise?

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV that can help a DBA keep databases performing well and systems online as the users need them.

    When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this DMV can be executed as:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of four values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME
        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()
        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you four result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set reveals some details worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first.

    Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory, and we can wrap them in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    and, joining to sys.indexes for the index name, that gives us:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                       AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the DMV. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
            BEGIN
                PRINT N'Invalid database';
            END
        ELSE IF @object_id IS NULL
            BEGIN
                PRINT N'Invalid object';
            END
        ELSE
            BEGIN
                SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
            END;
        GO

    In cases where the results of querying this DMV don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will simply be noticed when the results are not consistent with what was expected; in the case of this blog, that is the method I have used.

    So, now that we can relate the values in these columns to something we recognise in the database, let's see what the other values in the DMV are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the DMV and they are pretty self-explanatory. The final columns revealed by querying this view in DEFAULT mode are:

    - avg_fragmentation_in_percent: the amount that the index is logically fragmented. It will show NULL when the DMV is queried in SAMPLED mode.
    - fragment_count: the number of pieces that the index is broken into. It will show NULL when the DMV is queried in SAMPLED mode.
    - avg_fragment_size_in_pages: the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the DMV is queried in SAMPLED mode.
    - page_count: the total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages: an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3 = 9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index were bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks must do to retrieve the data to satisfy the query increases, and performance starts to fall. This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit, and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA investigating performance issues. It is possible that tables will have indexes that suffer rapid increases in fragmentation as part of normal daily business and need regular defragmentation work to keep them in good order; other indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track then. Querying the DMV with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index, and column 4 is the average size of each record. This approximates to: ((Col1 * 8) * 1024 * (Col2/100)) / Col3 = Col4*. avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the FILL_FACTOR parameter given when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records fit in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index, purely offered as a guide to the DBA to better understand the storage practices taking place.
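    To make that 'intelligent' process concrete, here is a hedged sketch in the spirit of Books Online example D (the 5% and 30% thresholds follow the commonly cited Books Online guidance of reorganising moderately fragmented indexes and rebuilding heavily fragmented ones; treat both numbers as starting points, not rules):

        -- Sketch: suggest a maintenance action per index based on
        -- avg_fragmentation_in_percent; the thresholds are illustrative.
        SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , [ddips].[avg_fragmentation_in_percent]
             , CASE
                   WHEN [ddips].[avg_fragmentation_in_percent] > 30 THEN 'ALTER INDEX ... REBUILD'
                   WHEN [ddips].[avg_fragmentation_in_percent] > 5 THEN 'ALTER INDEX ... REORGANIZE'
                   ELSE 'No action required'
               END AS [SuggestedAction]
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                       AND [ddips].[object_id] = [i].[object_id]
        WHERE [ddips].[avg_fragmentation_in_percent] > 5;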
    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

    - the FILL_FACTOR of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - the columns used in the index, which should be analysed to avoid new records needing to be inserted in the middle of the index, so that they are instead always added at the end.

    * - it's approximate, as there are many factors, associated with things like the type of data and other database settings, that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk.

    Disclaimer - Jonathan is a Friend of Red Gate and, as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • That's all about nuances

    - by user13334359
    When I sent a proposal for the session "Managing and Troubleshooting MySQL for Oracle DBAs" to the MySQL Connect conference org committee, it had no mention of Oracle in its name, but later I was asked to provide more details for former Oracle DBAs who want to use MySQL. I was quick to say "yes". So my original aim, to teach people to troubleshoot MySQL, changed to teaching how MySQL differs from Oracle in its troubleshooting aspects. Although both RDBMSs have much in common, they are definitely very different. So what I am going to speak about this time is the nuances of how MySQL stores data, how it manages locks, why its high availability solutions, MySQL Cluster and Replication, share names with Oracle's but work differently, and more. And, of course, I will tell how to troubleshoot it all.

    Read the article

  • Nginx, Apache, MySQL, Memcache with a 4G RAM server. How to optimize so memory is enough?

    - by TomSawyer
    I have one dedicated server with Nginx proxying for Apache, plus Memcache and MySQL, and 4G of RAM. These days the visitors to my site haven't increased, but my server always gets overloaded at certain times (9AM - 3PM). RAM in use climbs second by second until it is full; at that moment the server is overloaded, and I have to kill all the Apache and MySQL services and reboot to get free memory. Then it fills up again. It's a terrible cycle. Here is my RAM in use at the moment: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). And the process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). Here's my my.cnf config; can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    Read the article

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use when they want to run their applications/bug fixes against fresh data. That script looks like this:

        dbs=( analytics auth logs users )
        server=localhost
        conn="-h ${server} -u ${username} --password=${password}"

        # Stop the replication client so we don't encounter weird data.
        echo "STOP SLAVE" | mysql ${conn}

        # Bunch of bulk insert optimizations
        echo "SET autocommit=0" | mysql ${conn}
        echo "SET unique_checks=0" | mysql ${conn}
        echo "SET foreign_key_checks=0" | mysql ${conn}

        # Restore all databases and tables.
        for sourcedb in ${dbs[*]}
        do
            destdb=${prefix}${sourcedb}
            echo "Dropping database ${destdb}..."
            echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
            echo "CREATE DATABASE ${destdb}" | mysql ${conn}

            # First, all the tables.
            for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
                if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                    createTable=`echo "SHOW CREATE TABLE ${table}" | mysql -B -r $conn $sourcedb | tail -n +2 | cut -f 2-`
                    echo "Restoring ${destdb}/${table}..."
                    echo "$createTable ;" | mysql $conn $destdb
                    insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                    echo "$insertData" | mysql $conn $destdb
                fi
            done
        done

        echo "SET foreign_key_checks=1" | mysql ${conn}
        echo "SET unique_checks=1" | mysql ${conn}
        echo "COMMIT" | mysql ${conn}

        # Restart the replication client
        echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all InnoDB tables. Thanks!
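    One hedged simplification (a sketch, not from the original script; names are illustrative): CREATE TABLE ... LIKE copies a table definition, indexes included, in a single statement, which avoids round-tripping through SHOW CREATE TABLE. For InnoDB the bulk INSERT ... SELECT usually remains the dominant cost, so this tidies the script more than it speeds it up:

        -- Clone one table within the same server.
        CREATE TABLE destdb.mytable LIKE sourcedb.mytable;
        INSERT INTO destdb.mytable SELECT * FROM sourcedb.mytable;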

    Read the article

  • connecting to MySQL from Android with JDBC

    - by manuraphy
    Hi, I used the following code to connect to MySQL on localhost from Android. It only displays the text given in the catch section, and I don't know whether it's a connection problem or not.

        package com.test1;

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.TextView;

        public class Test1Activity extends Activity {
            /** Called when the activity is first created. */
            String str = "new";
            static ResultSet rs;
            static PreparedStatement st;
            static Connection con;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                final TextView tv = (TextView) findViewById(R.id.user);
                try {
                    Class.forName("com.mysql.jdbc.Driver");
                    con = DriverManager.getConnection("jdbc:mysql://10.0.2.2:8080/example", "root", "");
                    st = con.prepareStatement("select * from country where id=1");
                    rs = st.executeQuery();
                    while (rs.next()) {
                        str = rs.getString(2);
                    }
                    tv.setText(str);
                    setContentView(tv);
                } catch (Exception e) {
                    tv.setText(str);
                }
            }
        }

    When it executes, it displays "new" in the AVD, and the log shows:

        java.lang.management.ManagementFactory.getThreadMXBean, referenced from method com.mysql.jdbc.MysqlIO.appendDeadlockStatusInformation
        Could not find class 'javax.naming.StringRefAddr', referenced from method com.mysql.jdbc.ConnectionPropertiesImpl$ConnectionProperty.storeTo
        Could not find method javax.naming.Reference.get, referenced from method com.mysql.jdbc.ConnectionPropertiesImpl$ConnectionProperty.initializeFrom

    Can anyone suggest a solution? Thanks in advance.

    Read the article

  • MySQL Unique hash insertion

    - by Jesse
    So, imagine a MySQL table with a few simple columns, an auto increment, and a hash (VARCHAR, UNIQUE). Is it possible to give MySQL a query that will add a row and generate a unique hash without multiple queries? Currently, the only way I can think of to achieve this is with a while loop, which I worry would become more and more processor intensive the more entries were in the DB. Here's some pseudo-PHP, obviously untested, but it gets the general idea across:

        while(!query("INSERT INTO table (hash) VALUES (".generate_hash().");")){
            //found conflict, try again.
        }

    In the above example, the hash column would be UNIQUE, and so the query would fail. The problem is, say there are 500,000 entries in the DB and I'm working off a base36 hash generator with 4 characters. The likelihood of a conflict would be almost 1 in 3, and I definitely can't be running 160,000 queries; in fact, any more than 5 I would consider unacceptable. So, can I do this with pure SQL? I would need to generate a base62, 6-char string (like "j8Du7X", chars a-z, A-Z, and 0-9), and either update the last_insert_id with it or, even better, generate it during the insert. I can handle basic CRUD with MySQL, but even JOINs are a little outside of my MySQL comfort zone, so excuse my ignorance if this is cake. Any ideas? I'd prefer to use either pure MySQL or PHP & MySQL, but hell, if another language can get this done cleanly, I'd build a script and AJAX it too. Thanks!
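    A hedged sketch of one pure-MySQL angle (the table name is illustrative, and MD5 output is hex rather than base62, so the token is a little longer for the same entropy): INSERT IGNORE converts the duplicate-key failure into "0 rows affected", and ROW_COUNT() reports whether the row went in, so the application retries only on the rare collision:

        -- Let the UNIQUE index arbitrate; a collision yields 0 rows affected
        -- instead of an error.
        INSERT IGNORE INTO items (hash) VALUES (SUBSTRING(MD5(RAND()), 1, 6));

        -- 1 = inserted, 0 = collision (generate a new value and retry).
        SELECT ROW_COUNT();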

    Read the article

  • Retrieve blob field from mySQL database with MATLAB

    - by yuk
    I'm accessing a public MySQL database using JDBC and the MySQL Java connector. exonCount is an int(10); exonStarts and exonEnds are longblob fields.

        javaaddpath('mysql-connector-java-5.1.12-bin.jar')
        host = 'genome-mysql.cse.ucsc.edu';
        user = 'genome';
        password = '';
        dbName = 'hg18';
        jdbcString = sprintf('jdbc:mysql://%s/%s', host, dbName);
        jdbcDriver = 'com.mysql.jdbc.Driver';
        dbConn = database(dbName, user, password, jdbcDriver, jdbcString);
        gene.Symb = 'CDKN2B';
        % Check to make sure that we successfully connected
        if isconnection(dbConn)
            qry = sprintf('SELECT exonCount, exonStarts, exonEnds FROM refFlat WHERE geneName=''%s''', gene.Symb);
            result = get(fetch(exec(dbConn, qry)), 'Data');
            fprintf('Connection failed: %s\n', dbConn.Message);
        end

    Here is the result:

        result =
            [2]    [18x1 int8]    [18x1 int8]
            [2]    [18x1 int8]    [18x1 int8]

        result{1,2}'
        ans =
            50 49 57 57 50 57 48 49 44 50 49 57 57 56 54 55 51 44

    This is wrong: the length of the 2nd and 3rd columns should match the number in the 1st column. The 1st blob, for example, should be [21992901; 21998673]. How can I convert it?

    Update: just after submitting this question I thought the blob might be the character codes of a string, and it was confirmed:

        >> char(result{1,2}')
        ans =
        21992901,21998673,

    So now I need to convert all the blobs' character data into numeric vectors, ideally in a vectorized way, since the number of rows can be large.

    Read the article

  • PHP mySQL Error

    - by happyCoding25
    Hello, I'm new to PHP so I decided to follow this tutorial for a simple login screen. I got the code set up, but when I try to log in I get this error:

        Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in (a long file path to the script) on line 27

    The code I got from the tutorial is:

        <?php
        ob_start();
        $host="thehost"; // Host name
        $username="myusername"; // Mysql username
        $password="mypass"; // Mysql password
        $db_name="test"; // Database name
        $tbl_name="members"; // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        mysql_select_db("$db_name") or die("cannot select DB");

        // Define $myusername and $mypassword
        $myusername=$_POST['myusername'];
        $mypassword=$_POST['mypassword'];

        // To protect against MySQL injection (more detail about MySQL injection)
        $myusername = stripslashes($myusername);
        $mypassword = stripslashes($mypassword);
        $myusername = mysql_real_escape_string($myusername);
        $mypassword = mysql_real_escape_string($mypassword);

        $sql="SELECT * FROM $tbl_name WHERE username='$myusername' and password='$mypassword'";
        $result=mysql_query($sql);

        // mysql_num_rows counts the table rows
        $count=mysql_num_rows($result);

        // If result matched $myusername and $mypassword, table row must be 1 row
        if($count==1){
            // Register $myusername, $mypassword and redirect to file "login_success.php"
            session_register("myusername");
            session_register("mypassword");
            header("location:login_success.php");
        }
        else {
            echo "Wrong Username or Password";
        }
        ob_end_flush();
        ?>

    (Note: all of the MySQL database info is filled in on my version.) Also, the author gives a PHP 5 version and a normal PHP version; I have tried both and gotten the same error. If anyone knows why this is happening, help is really appreciated.

    Read the article

  • MySQL can't access root account or reset with mysqladmin

    - by glumptious
    So if I type mysql -u root I'm supposedly logged in; however, upon trying to create or access a database I get this lovely error:

        ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'test1'

    I haven't the foggiest idea why, after logging in as root, it's trying to access DBs as ''@'localhost', and it's driving me a bit crazy right now. Possibly related: when I try to set the root password I get the error

        mysqladmin: Can't turn off logging; error: 'Access denied; you need (at least one of) the SUPER privilege(s) for this operation'

    I've tried removing mysql-server by running apt-get purge mysql-server and then reinstalling, with no luck. This is running Ubuntu Server 12.10 64-bit, and MySQL is indeed running.

    --Edit-- I wonder if perhaps there is no root user. So I try to start MySQL with --skip-grant-tables and then create the root user, but then I'm given this:

        ERROR 1290 (HY000): The MySQL server is running with the --skip-grant-tables option so it cannot execute this statement

    Fun fun fun fun fun.
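    For context, a hedged guess at the usual cause (not confirmed in the question): an anonymous ''@'localhost' account is matching before root, which would explain the empty user name in the error. While the server is started with --skip-grant-tables, account-management statements are blocked (as the last error shows), but direct edits to the grant tables still work:

        -- Sketch, run while the server is started with --skip-grant-tables.
        -- Remove any anonymous accounts that shadow root:
        DELETE FROM mysql.user WHERE User = '';

        -- Give root a known password (mysql.user stores the hash):
        UPDATE mysql.user SET Password = PASSWORD('new_password') WHERE User = 'root';

        -- Reload the grant tables without restarting:
        FLUSH PRIVILEGES;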

    Read the article
