Search Results

Search found 96031 results on 3842 pages for 'mysql server'.

  • Uninstall MySQL completely on Windows 7

    - by cestmoimarin
    Greetings. I understand that this question has been asked twice already; however, the answer is not there. I've removed every MySQL registry entry I could find via regedit. I made the ProgramData folder visible and deleted the MySQL folder there. Windows 7 doesn't have a very good 'grep' equivalent that I could use; I tried using PowerShell to find any hidden files, but it wants digital signatures, which I do not know how to create. Besides a Windows restore, is there any other way I can force my old 'invisible' MySQL 5.1.40 installation to disappear? I want to try installing MySQL another way; I'm not sure how to use CMake to compile the code, though.
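
    A possible cleanup sketch, assuming the default service name MySQL and the standard install paths (adjust both if yours differ); note that PowerShell only demands signing for script files, while built-in cmdlets such as Get-ChildItem run fine from the command line:

        rem run from an elevated command prompt: remove any leftover Windows service
        sc stop MySQL
        sc delete MySQL

        rem delete the usual leftover directories (default paths; adjust as needed)
        rmdir /s /q "C:\Program Files\MySQL"
        rmdir /s /q "C:\ProgramData\MySQL"

        rem rough 'grep' equivalent: list remaining files or folders named mysql*, hidden ones included
        powershell -Command "Get-ChildItem C:\ -Recurse -Force -Filter 'mysql*' -ErrorAction SilentlyContinue | Select-Object -ExpandProperty FullName"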

  • Windows Server 2003 R2 SP2 can't access the internet

    - by Ishmael
    Our 64-bit Windows server can no longer access the internet, although it is a web server hosting pages that are accessible from the internet. Since this started, the server can no longer receive Windows updates or Symantec virus definitions. We have checked our firewalls, and nothing is blocked. Any insight into why this is happening would be appreciated. The server is also running IE6, which we have been trying to update.
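
    A few quick outbound checks (a sketch; the hostnames are only examples) can separate DNS failures from routing or proxy problems before digging further:

        rem does name resolution work at all?
        nslookup update.microsoft.com

        rem can the server reach the internet by raw IP?
        ping 8.8.8.8

        rem if it can, where does the path to a name die?
        tracert www.microsoft.com

        rem is outbound HTTP specifically blocked?
        telnet www.microsoft.com 80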

  • Exim 4 pipe select command/script from mysql db

    - by axel.klein
    Is there an option to run a MySQL lookup in the pipe driver of Exim? MYSQL_Q_SCRIPT=SELECT script FROM MYSQL_EMAILTABLE WHERE domain='${quote_mysql:$domain}' AND local_part='${quote_mysql:$local_part}' command = "${lookup mysql {MYSQL_Q_SCRIPT}{$value} I always get an error like this: "Expansion of "${lookup" from command "${lookup mysql {SELECT script FROM emails WHERE domain='${quote_mysql:$domain}' AND local_part='${quote_mysql:$local_part}'}{$value}}" in run_script transport failed: missing lookup type" The problem is that exactly the same query works fine in the appendfile driver, so I do not see the mistake.

  • MySQL open files limit

    - by Brian
    This question is similar to set open_files_limit, but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf: open-files-limit = 8192 This worked fine in my previous install (Ubuntu 8.04), but now in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried getting around it by adding a line like this in /etc/security/limits.conf: mysql hard nofile 8192 I also tried adding this to the pre-start script in mysql's upstart config (/etc/init/mysql.conf): ulimit -n 8192 Obviously neither of those things worked. So where is the hoop that has been added between Ubuntu 8.04 and 10.04 through which I must jump in order to actually increase the open files limit?
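
    A possible explanation and workaround, assuming the stock MySQL packaging on 10.04: mysqld is started by Upstart there, and Upstart jobs ignore /etc/security/limits.conf, but they do honour a limit stanza inside the job file, so adding one and restarting the job usually lets open_files_limit take effect.

        # /etc/init/mysql.conf -- add alongside the existing stanzas near the top of the file
        limit nofile 8192 8192

        # then:
        #   restart mysql
        #   mysql -e "SHOW VARIABLES LIKE 'open_files_limit';"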

  • MySQL installation question.

    - by srtriage
    I am far from a DBA and have a question. I recently installed MySQL. On my machine, C:\ is a 50GB partition of two mirrored 10k SAS drives; the remaining space on those drives is allocated to D:. I also have an SSD mounted as E:. When I installed MySQL, I installed it to E:\, assuming that is where the database files would be stored. I am now seeing C:\ProgramData\MySQL\MySQL Server 5.1\data\peq, peq being the name of my main database. Is my database being stored on C:\, and if so, how do I make it store the DB on the SSD?
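
    A sketch of the usual fix, assuming the default Windows service name and my.ini location for MySQL 5.1 (the target path below is only an example): the datadir option decides where databases live, independent of where the binaries were installed, so stop the service, copy the data directory to the SSD, repoint datadir, and start the service again.

        net stop MySQL

        robocopy "C:\ProgramData\MySQL\MySQL Server 5.1\data" "E:\MySQLData" /E /COPYALL

        rem edit C:\ProgramData\MySQL\MySQL Server 5.1\my.ini and change, under [mysqld]:
        rem   datadir="E:/MySQLData/"

        net start MySQL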

  • syslog-ng and nginx logs to MySQL

    - by Katafalkas
    So a couple of days ago I asked how to log PHP and nginx logs to a centralized MySQL database, and m0ntassar gave a perfect answer :) cheers! The problem I am facing now is that I cannot seem to get it working.
    syslog-ng version:
        # syslog-ng --version
        syslog-ng 3.2.5
    This is my nginx log format:
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
    syslog-ng source:
        source nginx { file( "/var/log/nginx/tg-test-3.access.log" follow_freq(1) flags(no-parse) ); };
    syslog-ng destination:
        destination d_sql {
            sql(type(mysql)
            host("127.0.0.1") username("syslog") password("superpasswd")
            database("syslog") table("nginx")
            columns("remote_addr","remote_user","time_local","request","status","body_bytes_sent","http_referer","http_user_agent","http_x_forwarded_for")
            values("$REMOTE_ADDR", "$REMOTE_USER", "$TIME_LOCAL", "$REQUEST", "$STATUS", "$BODY_BYTES_SENT", "$HTTP_REFERER", "$HTTP_USER_AGENT", "$HTTP_X_FORWARDED_FOR"));
        };
    MySQL table for testing purposes:
        CREATE TABLE `nginx` (
          `remote_addr` varchar(100) DEFAULT NULL,
          `remote_user` varchar(100) DEFAULT NULL,
          `time` varchar(100) DEFAULT NULL,
          `request` varchar(100) DEFAULT NULL,
          `status` varchar(100) DEFAULT NULL,
          `body_bytes_sent` varchar(100) DEFAULT NULL,
          `http_referer` varchar(100) DEFAULT NULL,
          `http_user_agent` varchar(100) DEFAULT NULL,
          `http_x_forwarded_for` varchar(100) DEFAULT NULL,
          `time_local` text,
          `datetime` text,
          `host` text,
          `program` text,
          `pid` text,
          `message` text
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    Now, the first thing that goes wrong is when I restart syslog-ng:
        # /etc/init.d/syslog-ng restart
        Stopping syslog-ng: [ OK ]
        Starting syslog-ng: WARNING: You are using the default values for columns(), indexes() or values(), please specify these explicitly as the default will be dropped in the future; [ OK ]
    I have tried creating a file destination and it all works fine, and then I have tried replacing my destination with:
        destination d_sql {
            sql(type(mysql)
            host("127.0.0.1") username("syslog") password("kosmodromas")
            database("syslog") table("nginx")
            columns("datetime", "host", "program", "pid", "message")
            values("$R_DATE", "$HOST", "$PROGRAM", "$PID", "$MSGONLY")
            indexes("datetime", "host", "program", "pid", "message"));
        };
    That did work and wrote entries to MySQL. The problem is that I want to write entries in exactly the same format as the nginx log format. I assume that I am missing something really simple, or that I need to do some parsing between source and destination. Any help will be much appreciated :)
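
    A sketch of the piece that seems to be missing (the column names below are illustrative, not required): because the file source uses flags(no-parse), the whole nginx line lands in the message, so syslog-ng 3.2's csv-parser has to split it into named values before the SQL destination can reference them.

        parser p_nginx {
            csv-parser(
                columns("NGINX.REMOTE_ADDR", "NGINX.DASH", "NGINX.REMOTE_USER",
                        "NGINX.TIME_LOCAL", "NGINX.REQUEST", "NGINX.STATUS",
                        "NGINX.BODY_BYTES_SENT", "NGINX.HTTP_REFERER",
                        "NGINX.HTTP_USER_AGENT", "NGINX.HTTP_X_FORWARDED_FOR")
                delimiters(" ")
                quote-pairs('""[]')
                flags(escape-none)
            );
        };

        log {
            source(nginx);
            parser(p_nginx);
            destination(d_sql);
        };

        # the destination's values() can then use the parsed macros, e.g.
        #   values("${NGINX.REMOTE_ADDR}", "${NGINX.REMOTE_USER}", "${NGINX.TIME_LOCAL}", ...)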

  • Binding MySQL to run from the public or private LAN IP address - which one is faster

    - by Lamin Barrow
    So we have two servers, both hosted with the same web host. We have bound MySQL to listen on the public IP address of the database server, and the web server connects to it over that public IP. Both servers are also on the same private network. Currently, the DB connect call in our PHP script takes about 3ms to connect to the MySQL database server. My question is: would MySQL traffic from the web server be faster if we bound MySQL to the private LAN address on the database server instead of the public IP? Or is it the same either way, so it won't make a difference? I have moved this question to Server Fault: http://serverfault.com/questions/438156/binding-mysql-to-run-from-the-public-or-private-lan-ip-address-which-one-is-fa
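
    A simple way to answer it empirically (a sketch; the addresses and credentials are placeholders): point mysqld at the private interface, or leave it on 0.0.0.0 and just connect to the private IP, then time a trivial query from the web server against both addresses.

        # /etc/mysql/my.cnf on the database server, [mysqld] section (example private address):
        #   bind-address = 10.0.0.2     # or 0.0.0.0 to listen on both interfaces

        # from the web server, compare connection latency to each address:
        time mysql -h 203.0.113.10 -u appuser -psecret -e 'SELECT 1;'
        time mysql -h 10.0.0.2     -u appuser -psecret -e 'SELECT 1;'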

  • How should I copy the "mysql" database to my new server using PHPMyAdmin

    - by undefined
    My new web hosting company has set up a MySQL server for me, and it already has the mysql and information_schema databases on it. I want to copy my existing database from another server (a) to the new one (b). I assume I need to overwrite the 'mysql' database on server (b) with the one from my existing server (a), or at least copy over the permissions. 1) What information does the mysql database hold? Users and permissions, I can see; does it also hold the login info for phpMyAdmin? I don't want to overwrite that, obviously. 2) Should I drop it on server (b) and import my original? 3) Should I just copy the users table? 4) Do I need to worry about information_schema? Should I copy that over too? Thanks
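
    One cautious approach (a sketch, with hostnames and the root password as placeholders): rather than overwriting server (b)'s mysql database wholesale, regenerate the accounts from SHOW GRANTS output and replay them, which leaves the host's own control-panel users untouched.

        # on a machine that can reach server (a): build one SHOW GRANTS statement per account
        mysql -h server-a -u root -pSECRET -N -e \
          "SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user" > show_grants.sql

        # run them to obtain the actual GRANT statements, and add the trailing semicolons
        mysql -h server-a -u root -pSECRET -N < show_grants.sql | sed 's/$/;/' > grants.sql

        # review grants.sql by hand, then apply it on server (b) and reload privileges
        mysql -h server-b -u root -pSECRET < grants.sql
        mysql -h server-b -u root -pSECRET -e "FLUSH PRIVILEGES;"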

  • Cannot connect to remote MySQL, cPanel?

    - by BerkErarslan
    I use CentOS and cPanel, and I need to use remote MySQL from PHP. I added the IP I am connecting from to "Remote Database Access Hosts" in cPanel. However, when I try to connect to the server with this code: <?php $link = mysql_connect("server_ip", "xxx", "xxx") or die(mysql_error()); $db = mysql_select_db("xxx", $link) or die (mysql_error()); print_r($db); I get an error like this: Warning: mysql_connect() [function.mysql-connect]: Can't connect to MySQL server on 'xxxx' (10060) in C:\AppServ\www\test.php on line 3 Can't connect to MySQL server on '94.138.204.234' (10060) I also tried connecting with "server_ip:3306", but it still doesn't work. How can I solve this problem?
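
    Error 10060 is a plain TCP connection timeout, so a sketch of the usual server-side checks (file locations are CentOS defaults, and <remote_ip> is a placeholder): make sure mysqld is actually listening on the public interface, that port 3306 is open in the firewall, and that the MySQL account is allowed from the remote host.

        # /etc/my.cnf on the cPanel server: under [mysqld] there must be no
        #   skip-networking
        # and any bind-address must not be 127.0.0.1

        # confirm mysqld listens on 3306 and open the port for the remote machine (iptables example)
        netstat -ntlp | grep 3306
        iptables -I INPUT -p tcp --dport 3306 -s <remote_ip> -j ACCEPT

        # confirm which hosts the account may connect from
        mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user = 'xxx';"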

  • Windows API BackupRead failing with error 50 on Windows Storage Server 2008 R2

    - by Jason F
    We have a backup application that uses the Windows API BackupRead. It works correctly on Windows Server 2003, 2008, 2008 R2. It does not work on Storage Server 2008 R2. It always fails with error 50 - The request is not supported. The documentation for BackupRead gives no indication that it will not work with Storage Server 2008 R2. Anyone else have any experience using this API on Storage Server 2008 R2?
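
    For comparison, a minimal repro sketch in C (the path is an example, error handling trimmed) that opens a file with backup semantics and streams it with BackupRead, printing GetLastError() after each step; running the same binary on a working server and on the Storage Server 2008 R2 box should show exactly which call returns error 50.

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Open with backup semantics, as a backup application would. */
            HANDLE h = CreateFileA("C:\\test\\sample.txt", GENERIC_READ, FILE_SHARE_READ,
                                   NULL, OPEN_EXISTING,
                                   FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_SEQUENTIAL_SCAN, NULL);
            if (h == INVALID_HANDLE_VALUE) {
                printf("CreateFile failed: %lu\n", GetLastError());
                return 1;
            }

            BYTE   buf[64 * 1024];
            DWORD  read = 0;
            LPVOID ctx  = NULL;
            BOOL   ok;

            /* Stream the file; a real backup would write each chunk to the backup medium. */
            while ((ok = BackupRead(h, buf, sizeof(buf), &read, FALSE, TRUE, &ctx)) && read > 0) {
                /* buf now holds the next part of the backup stream */
            }
            if (!ok)
                printf("BackupRead failed: %lu\n", GetLastError());   /* expect 50 on the failing box */
            else
                printf("BackupRead completed normally\n");

            /* Final call with bAbort = TRUE releases the context. */
            BackupRead(h, NULL, 0, &read, TRUE, TRUE, &ctx);
            CloseHandle(h);
            return 0;
        }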

  • Optimizing PHP<>MySQL performance

    - by BarsMonster
    I am trying to optimize my PHP<>MySQL performance with this test script: <? for($i=0;$i<100;$i++)// Iteration count $res.= var_dump(loadRow("select body_ru from articles where id>$i*50 limit 100")); print_r($res); ?> I have APC, and the articles table has an index on id. Also, all these queries hit the query cache, so MySQL performance on its own is great. But when I use ab -c 10 -t 10 to benchmark this script, I get: 100 iterations: ~100 req/sec (~10,000 MySQL queries per second); 5 iterations: ~200 req/sec; 1 iteration: ~380 req/sec; 0 iterations: ~580 req/sec. I've tried disabling persistent connections in PHP - it made things a bit slower. So, how can I make this run faster, given that MySQL is not the limiting factor here?

  • MySQL & tmpfs : performance

    - by Serty Oan
    I was wondering if, and how much, using tmpfs could improve MySQL performance, and how it should be done. My guess would be to do mount -t tmpfs -o size=256M /path/to/mysql/data/DatabaseName and then use the database normally, but maybe I'm wrong (I'm using MyISAM tables only). Would an hourly rsync between the tmpfs /path/to/mysql/data/DatabaseName and /path/to/mysql/data/DatabaseName_backup hurt performance? If so, how should I back up the tmpfs database? So, is this a good way to do things, is there a better way, or am I wasting my time?
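
    A sketch of that layout, assuming MyISAM only and that losing whatever was written since the last sync is acceptable (tmpfs contents disappear on reboot or unmount); paths and size are examples, and a strictly consistent copy would also need a read lock held for the duration of the rsync.

        # mount a tmpfs over the database directory and restore the latest copy into it
        mount -t tmpfs -o size=256M tmpfs /var/lib/mysql/DatabaseName
        cp -a /var/lib/mysql/DatabaseName_backup/. /var/lib/mysql/DatabaseName/
        chown -R mysql:mysql /var/lib/mysql/DatabaseName

        # hourly crontab entry: close and flush the MyISAM files, then copy them back to disk
        0 * * * * mysql -e 'FLUSH TABLES;' && rsync -a --delete /var/lib/mysql/DatabaseName/ /var/lib/mysql/DatabaseName_backup/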

  • BizTalk Server 2009 highly available configuration

    - by Matthew
    Hi. I am setting up a BizTalk 2009 Server production environment and I wonder if this tutorial: http://msdn.microsoft.com/en-us/library/cc558972(BTS.10).aspx is still fairly accurate for an installation on all the latest platform software (Hyper-V, Windows Server 2008, BizTalk Server 2009, SQL Server 2008)? Is there anything like this but more up to date? Any other advice about this process and links to articles, blogs & tutorials would be appreciated.

  • MySQL charset problem

    - by Newtonx
    I'm trying to import some data from one server to another, but when I do, I have problems with the charset. Words like Goiânia become Goiâni, and conceição becomes conceição. My application is set to use the latin1 charset. Server 1: MySQL charset: UTF-8 Unicode (utf8), table collation: latin1_swedish_ci. Server 2: MySQL charset: UTF-8 Unicode (utf8), table collation: latin1_swedish_ci. Command I used to export data from server 1: mysqldump -u root -p --default-character-set=iso-8859-1 database_name > db.sql Command used to restore to server 2: mysql -u root -p database_name < db.sql
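
    A sketch of a dump/restore cycle that avoids re-encoding in the client layer, assuming the tables really hold latin1 data (which their latin1_swedish_ci collation suggests); the key is to name the same character set explicitly on both the dump and the restore.

        # export and import with one explicit, identical character set (latin1 here as an example)
        mysqldump -u root -p --default-character-set=latin1 database_name > db.sql

        mysql -u root -p --default-character-set=latin1 database_name < db.sql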

  • Problem attaching MDF file in SQL Server 2008

    - by Fraz Sundal
    I have an MDF file from a SQL Server 2005 database, and now I want to attach it in SQL Server 2008 R2, but when I try to attach it, it gives me an error: Unable to open the physical file "D:\Fraz\Freelance\Database\DBmdf13aug\mbh_pk.mdf". Operating system error 5: "5(Access is denied.)". (Microsoft SQL Server, Error: 5120) What could the problem be, and how do I fix it? Is this a folder permission error, or is SQL Server 2008 missing something?
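
    Error 5120 with operating system error 5 is a file-system permissions problem rather than anything missing in SQL Server; one hedged fix, assuming a default instance (whose per-service account is NT SERVICE\MSSQLSERVER on SQL Server 2008 R2), is to grant that account full control on the folder holding the MDF and then retry the attach:

        rem run from an elevated command prompt on the SQL Server machine
        icacls "D:\Fraz\Freelance\Database\DBmdf13aug" /grant "NT SERVICE\MSSQLSERVER":(OI)(CI)F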

  • What's the port number of MySQL?

    - by user28233
    I'm trying to connect to MySQL from VB.NET, and I've already downloaded and installed the MySQL Connector/Net. But I don't know the port number or server address of MySQL, and they're needed in the connection string. Please help. Server=myServerAddress;Port=1234;Database=myDataBase;Uid=myUsername;Pwd=myPassword;
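
    For what it's worth, MySQL listens on port 3306 by default, and if the server runs on the same machine the address is simply localhost (or 127.0.0.1); with those defaults and placeholder credentials the string looks like:

        Server=localhost;Port=3306;Database=myDataBase;Uid=myUsername;Pwd=myPassword;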

  • MySQL Error: 1067

    - by Vicky
    When I try to start the MySQL Server service, it gives an error: "Could not start the MySQL server on the local computer. Error 1067: The process terminated unexpectedly." I need to fix the problem without having to uninstall the MySQL server. Is there a way to do it?
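
    Error 1067 only means the process exited during startup; the actual reason is written to MySQL's own error log, so a first step (a sketch, assuming a default 5.x data directory; the .err file is named after the machine) is to read that log, fix whatever it reports (a bad my.ini option, corrupted InnoDB log files, a wrong datadir), and start the service again:

        rem open the MySQL error log to see why mysqld exited (default path; adjust the version and location)
        notepad "C:\ProgramData\MySQL\MySQL Server 5.1\data\%COMPUTERNAME%.err"

        rem after correcting the reported problem:
        net start MySQL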

  • MySQL master-master not replicating

    - by frankil
    I'm setting up a master-master mysql replication on two servers (db1 and db2). I started with setting up db2 as a slave to db1 and that works fine. But when I set up db1 as a slave to db2 it isn't replicating. On the face of it everything looks fine but the data isn't replicating. There are no errors in either of the error logs. The slave status is updating the bin log position. I have used mysqlbinlog to examine both the binlog on the db2 and the relay log on db1 and all of the queries are going in there, but not being executed to db1. "show slave status" on both servers shows that both the slave io and sql threads are "Yes" and that the relay log position is updated by the sql thread. Also on both servers: >echo "show processlist" | mysql | grep "system user" 166819 system user NULL Connect 3655 Waiting for master to send event NULL 166820 system user NULL Connect 3507 Has read all relay log; waiting for the slave I/O thread to update it NULL Relevant config for db1: server-id = 1 log-slave-updates replicate-same-server-id = 0 auto_increment_increment = 4 auto_increment_offset = 1 master-host = db2 master-port = 3306 master-user = slaveuser master-password = *** skip-slave-start sync_binlog = 1 binlog-ignore-db=mysql Config for db2 server-id = 2 log-slave-updates replicate-same-server-id = 0 auto_increment_increment = 4 auto_increment_offset = 2 master-host = db1 master-port = 3306 master-user = slaveuser master-password = *** sync_binlog = 1 relay-log=mysql-relay-bin binlog-ignore-db=mysql What else can I look for to make sure db1 executes the queries from db2?
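
    One way to narrow it down (a sketch; the test database and table are placeholders, deliberately outside the binlog-ignore-db=mysql filter): write a marker row on db2, then check on db1 whether the row arrives and whether the SQL thread's executed position actually advances.

        # on db2: write a marker row
        mysql -e "CREATE TABLE IF NOT EXISTS test.repl_probe (id INT, ts DATETIME); INSERT INTO test.repl_probe VALUES (1, NOW());"

        # on db1: did it arrive, and is the SQL thread applying events or only reading them?
        mysql -e "SELECT * FROM test.repl_probe;"
        mysql -e "SHOW SLAVE STATUS\G" | egrep 'Master_Log_File|Read_Master_Log_Pos|Exec_Master_Log_Pos|Last_SQL_Error'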

  • Replacement for MySQL proxy

    - by tostinni
    We have a standard MySQL master/slave replication setup for our database. In order to use the slave server, we set up MySQL Proxy, but we have been strongly discouraged from using it, as it is still alpha and not very well supported. Our application is built with Drupal 7, which doesn't use its slave database very effectively (see my related question on Drupal Answers). What tools could we use to act like MySQL Proxy and send SELECT queries to the slave server in order to distribute the load?
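
    If the aim is only to push reads at the slave, Drupal 7 can do that itself without a proxy; a sketch of the settings.php entries (hosts and credentials are placeholders), with individual queries opting in via the 'target' option:

        // settings.php: declare the slave connection alongside the default (master) one
        $databases['default']['default'] = array(
          'driver'   => 'mysql',
          'database' => 'drupal',
          'username' => 'drupal',
          'password' => 'secret',
          'host'     => 'db-master.example.com',
        );
        $databases['default']['slave'][] = array(
          'driver'   => 'mysql',
          'database' => 'drupal',
          'username' => 'drupal_ro',
          'password' => 'secret',
          'host'     => 'db-slave.example.com',
        );

        // in module code, a read that is allowed to hit the slave:
        $result = db_query('SELECT nid FROM {node} WHERE status = 1', array(), array('target' => 'slave'));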

  • Long connection times from PHP to MySQL on EC2

    - by Erik Giberti
    I'm having an intermittent issue connecting to a database slave with InnoDB: intermittently, connections take longer than 2 seconds. These servers are hosted on Amazon's EC2. The app server is PHP 5.2/Apache running on Ubuntu. The DB slave is running Percona's XtraDB 5.1 on Ubuntu 9.10, using an EBS RAID array for the data storage. We already use skip-name-resolve and bind to address 0.0.0.0. This is a stub of the PHP code that's failing:
        $tmp = mysqli_init();
        $start_time = microtime(true);
        $tmp->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2);
        $tmp->real_connect($DB_SERVERS[$server]['server'], $DB_SERVERS[$server]['username'], $DB_SERVERS[$server]['password'], $DB_SERVERS[$server]['schema'], $DB_SERVERS[$server]['port']);
        if(mysqli_connect_errno()){
            $timer = microtime(true) - $start_time;
            mail($errors_to,'DB connection error',$timer);
        }
    There's more than 300MB available on the DB server for new connections, and the server is nowhere near the maximum allowed (60 of 1,200). Load on both servers is < 2 on 4-core m1.xlarge instances. Some highlights from the MySQL config:
        max_connections = 1200
        thread_stack = 512K
        thread_cache_size = 1024
        thread_concurrency = 16
        innodb-file-per-table
        innodb_additional_mem_pool_size = 16M
        innodb_buffer_pool_size = 13G
    Any help tracing the source of the slowdown is appreciated.
    [EDIT] I have been updating the sysctl values for the network, but they don't seem to be fixing the problem. I made the following adjustments on both the database and application servers:
        net.ipv4.tcp_window_scaling = 1
        net.ipv4.tcp_sack = 0
        net.ipv4.tcp_timestamps = 0
        net.ipv4.tcp_fin_timeout = 20
        net.ipv4.tcp_keepalive_time = 180
        net.ipv4.tcp_max_syn_backlog = 1280
        net.ipv4.tcp_synack_retries = 1
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 87380 16777216
    [EDIT] Per jaimieb's suggestion, I added some tracing and captured the following data using time. This server handles about 51 queries/second at this time of day. The connection error was raised once (at 13:06:36) during the 3-minute window outlined below. Since there was 1 failure and roughly 9,200 successful connections, I don't think this is going to produce anything meaningful in terms of reporting. Script:
        date >> /root/database_server.txt
        (time mysql -h database_Server -D schema_name -u appuser -p apppassword -e '') > /dev/null 2>> /root/database_server.txt
    Results:
        === Application Server 1 ===
        Mon Feb 22 13:05:01 EST 2010  real 0m0.008s  user 0m0.001s  sys 0m0.000s
        Mon Feb 22 13:06:01 EST 2010  real 0m0.007s  user 0m0.002s  sys 0m0.000s
        Mon Feb 22 13:07:01 EST 2010  real 0m0.008s  user 0m0.000s  sys 0m0.001s
        === Application Server 2 ===
        Mon Feb 22 13:05:01 EST 2010  real 0m0.009s  user 0m0.000s  sys 0m0.002s
        Mon Feb 22 13:06:01 EST 2010  real 0m0.009s  user 0m0.001s  sys 0m0.003s
        Mon Feb 22 13:07:01 EST 2010  real 0m0.008s  user 0m0.000s  sys 0m0.001s
        === Database Server ===
        Mon Feb 22 13:05:01 EST 2010  real 0m0.016s  user 0m0.000s  sys 0m0.010s
        Mon Feb 22 13:06:01 EST 2010  real 0m0.006s  user 0m0.010s  sys 0m0.000s
        Mon Feb 22 13:07:01 EST 2010  real 0m0.016s  user 0m0.000s  sys 0m0.010s
    [EDIT] Per a suggestion received on a LinkedIn question, I tried setting the back_log value higher. We had been running the default value (50) and increased it to 150. We also raised the kernel value /proc/sys/net/core/somaxconn (maximum socket connections) to 256, from the default 128, on both the application and database server. We did see some elevation in processor utilization as a result, but still received connection timeouts.

  • Can I install SQL Server 2008 R2 on a Windows Server 2008 R2 Standard machine in a workgroup then join the server to a domain?

    - by Zero Subnet
    I have a Windows Server 2008 Standard x64 machine on which I need to install SQL Server 2008 R2 Standard, and then ship it to a different site where it will be joined to an Active Directory domain. The server is currently in the default "WORKGROUP" workgroup, and I need to know if I can install SQL Server on it and then ship it to the other site, where it will be joined to the domain, without issues. What are the possible problems that could happen? Are there any workarounds?

  • Multiple file systems for MySQL

    - by RainDoctor
    Does MySQL support multiple file systems for a single database, with most of the tables being MyISAM? Context: we have a 1.5TB MySQL database that is growing at about 200GB per month. The storage is direct-attached, and its slots are almost full. I can add another DAS and grow the file system, but resizing volumes and file systems is getting messy. Is there a concept of "tablespace, datafile" (like in Oracle) in the MySQL world? Or how do you manage MySQL databases under these kinds of constraints?
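
    MyISAM has no tablespaces, but whole database directories (or individual tables) can live on another file system; a sketch with example paths, done while the server is stopped, plus the per-table alternative for new tables:

        # move one database directory onto the new array and symlink it back into the datadir
        /etc/init.d/mysql stop
        mv /var/lib/mysql/big_database /mnt/das2/mysql/big_database
        ln -s /mnt/das2/mysql/big_database /var/lib/mysql/big_database
        chown -R mysql:mysql /mnt/das2/mysql/big_database
        /etc/init.d/mysql start

        # new MyISAM tables can also be placed elsewhere at creation time:
        #   CREATE TABLE archive_2012 (...) ENGINE=MyISAM
        #     DATA DIRECTORY='/mnt/das2/mysql/archive' INDEX DIRECTORY='/mnt/das2/mysql/archive';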
