Search Results

Search found 18220 results on 729 pages for 'null hypothesis'.

  • Failed to connect from slave to master with error "error connecting to master (1045)"

    - by Victor Lin
    I am trying to set up replication from a slave to the master. CHANGE MASTER TO MASTER_HOST = 'master', MASTER_PORT = 3306, MASTER_USER = 'repl', MASTER_PASSWORD = 'xxx'; And I did grant privileges to the user on the master. I can connect with the mysql command from the slave machine to the master: mysql -h master -u repl -p mysql> show grants; GRANT RELOAD, SUPER, REPLICATION SLAVE, CREATE USER ON *.* TO 'repl'@'xxx' IDENTIFIED BY PASSWORD 'xxx' mysql> select 1; +---+ | 1 | +---+ | 1 | +---+ 1 row in set (0.04 sec) As you can see, the privileges are correct and the connection works fine, yet the replication connection to the master always fails.
    mysql> show slave status\G *************************** 1. row *************************** Slave_IO_State: Connecting to master Master_Host: master Master_User: repl Master_Port: 3306 Connect_Retry: 60 Master_Log_File: Read_Master_Log_Pos: 4 Relay_Log_File: slave-replay-bin.000002 Relay_Log_Pos: 4 Relay_Master_Log_File: Slave_IO_Running: Connecting Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 0 Relay_Log_Space: 107 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 1045 Last_IO_Error: error connecting to master 'repl@master:3306' - retry-time: 60 retries: 86400 Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 0 1 row in set (0.00 sec)
    Is this caused by the different MySQL server versions? The version of the master is 5.0.77, and the slave is 5.5.13. But all articles I could find tell me that it's okay to replicate from a newer slave to an older master. How can I solve this problem?
    -- Update -- I even tried upgrading the old MySQL, but still the problem is not solved.
    mysql> show slave status\G *************************** 1. row *************************** Slave_IO_State: Connecting to master Master_Host: master Master_User: repl Master_Port: 3306 Connect_Retry: 60 Master_Log_File: master-bin.000007 Read_Master_Log_Pos: 107 Relay_Log_File: slave-replay-bin.000001 Relay_Log_Pos: 4 Relay_Master_Log_File: master-bin.000007 Slave_IO_Running: Connecting Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 107 Relay_Log_Space: 107 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 1045 Last_IO_Error: error connecting to master 'repl@master' - retry-time: 60 retries: 86400 Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 0 1 row in set (0.00 sec)
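
    Since the mysql client authenticates but the replication IO thread gets 1045, the credentials stored by CHANGE MASTER are the usual suspect. One known gotcha on these versions: replication passwords longer than 32 characters fail with exactly this symptom. A hedged sketch to re-store the credentials from the slave (the password is a placeholder):

        # On the slave: re-issue CHANGE MASTER with an explicit password
        # of 32 characters or fewer, then re-check the IO thread.
        mysql -u root -p <<'SQL'
        STOP SLAVE;
        CHANGE MASTER TO
          MASTER_HOST = 'master',
          MASTER_PORT = 3306,
          MASTER_USER = 'repl',
          MASTER_PASSWORD = 'short_placeholder_password';
        START SLAVE;
        SHOW SLAVE STATUS\G
        SQL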

  • Bad network performance on KVM guest

    - by Devator
    I have a dedicated server connected to a 1000 Mbit port. However, the Debian guest is only getting a quarter to half of that speed. On the node itself (Linux node 2.6.32-279.9.1.el6.x86_64 #1 SMP Tue Sep 25 21:43:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux): wget http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin -O /dev/null --2012-11-11 23:10:11-- http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin Resolving www.bbned.nl... 62.177.144.181 Connecting to www.bbned.nl|62.177.144.181|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 1048576000 (1000M) [application/octet-stream] Saving to: '/dev/null' 100%[====================================>] 1,048,576,000 100M/s in 10s 2012-11-11 23:10:21 (100 MB/s) - '/dev/null' On the guest (Debian 6.0.5, x64: Linux debian 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux): wget http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin -O /dev/null --2012-11-11 23:10:41-- http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin Resolving www.bbned.nl... 62.177.144.181 Connecting to www.bbned.nl|62.177.144.181|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 1048576000 (1000M) [application/octet-stream] Saving to: '/dev/null' 100%[=================================================================================================================================================================================================>] 1,048,576,000 16.5M/s in 42s 2012-11-11 23:11:23 (23.8 MB/s) - '/dev/null' I use the virtio NIC. I tried some more NICs: e1000 and the Realtek 8139, but those yield even worse results. Does anyone have an idea how to improve these speeds?
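
    With virtio, the first things worth checking are whether the host is accelerating the NIC with vhost-net and whether segmentation offloads are misbehaving. A hedged diagnostic sketch (interface names are assumptions):

        # In the guest: confirm the NIC is really using the virtio driver
        ethtool -i eth0                 # should report driver: virtio_net

        # On the node: check the vhost-net accelerator is loaded and attached
        lsmod | grep vhost_net
        ps aux | grep '[v]host'         # expect a vhost kernel thread per guest NIC

        # Experiment: disable offloads in the guest and re-run the test
        ethtool -K eth0 gso off tso off gro off
        wget http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin -O /dev/null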

  • How Do I Install Latest Java Plugin on Ubuntu 8.04 LTS with Custom Firefox 3.6.3?

    - by Volomike
    I have Ubuntu 8.04 LTS and will need to remain on that for awhile as I am a programmer and cannot disturb what I'm doing in the middle of a project. As you know, it doesn't want to install the latest Firefox by default and keeps me in the stone age. So, I had to install my own Firefox. I did so and with relative ease I was able to install the Flash plugin using the normal 'ln -s' technique one usually does with /usr/lib/firefox/plugins. Now I need to do a Gotomypc.com task and it requires Java -- moan -- so I need to figure out how to install Java. I downloaded and installed the latest Java plugin in /usr/java, and I do see underneath that layer of folders a plugins folder with a .so file. So, I went to /usr/lib/firefox/plugins and did this: ln -s /usr/java/jre1.6.0_20/plugin/i386/ns7/libjavaplugin_oji.so Then, I also read on the web that one has to create a ~/.mozilla/plugins folder, cd into it, and then run the same ln -s command again. Another site recommended finding one's ~/.mozilla/firefox and renaming it to ~/.mozilla/firefox.old (after you backed up your bookmarks) and then launching firefox again so that it creates ~/.mozilla/firefox and uses the new Java plugin. Well, all these attempts have completely failed and it is incredibly frustrating. I do about:plugins to see my plugins and all I get are the Flash and the default null plugin. I do not see the Java one. Also, they said on my tools menu with Firefox 3.6.3 I would see a Java Console menu and I don't see that either. I found a pluginreg.dat somewhere deep under ~/.mozilla/firefox, but it does not list the Java plugin inside -- only Flash and the null plugin. Please help me install Java. I need to help a client out and need to connect to his PC remotely with gotomypc, which requires Java inside firefox.
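
    A likely cause: Firefox 3.x dropped the old OJI plugin interface, so libjavaplugin_oji.so will never appear in about:plugins. JRE 6u10 and later ship an NPAPI plugin, libnpjp2.so, which is the one Firefox 3.6 loads. A sketch assuming the JRE path from the question:

        # Remove the stale OJI symlink and link the NPAPI plugin instead
        rm -f ~/.mozilla/plugins/libjavaplugin_oji.so
        mkdir -p ~/.mozilla/plugins
        ln -s /usr/java/jre1.6.0_20/lib/i386/libnpjp2.so ~/.mozilla/plugins/
        # restart Firefox, then re-check about:plugins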

  • Remote Desktop to Server 2008 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation: Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: username Account Domain: hostname Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d Sub Status: 0xc0000064 Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: JESSE-PC Source Network Address: - Source Port: - Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain but my account is local, so I'm logging in using a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate, and connecting from home I get a warning about that, but connecting from work I just get the logon error.

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a linux prod machine and a Win7 64bit dev machine. My workflow includes dumping the production MySQL database on the linux machine and restoring it in my local MySQL database on the windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my windows dev machine. Since then I'm unable to restore the db on it. I keep receiving the following error: Query: CREATE TABLE `auth_group` ( `id` int(11) NOT NULL auto_increment, `name` varchar(80) collate utf8_unicode_ci NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `name` (`name`) ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci Error occured at:2010-06-26 17:16:14 Line no.:30 Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121) Notice that this is the first create table statement in the sql dump file. This error occurs both on MySQL Community Server 5.1.41 and 5.1.48 and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what's different in my configuration between before the reinstall and now, or why it has this effect. Restoring from a sql dump is something I need to keep on doing, so I need a permanent fix and not a tailored workaround.
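
    Error 1005 with errno 121 decodes to a duplicate-key error at the InnoDB layer; on a freshly reinstalled machine this often means the server is reusing an old ibdata1 whose internal data dictionary still lists the tables even though the .frm files are gone. A hedged sketch, safe only if the local dev server holds no data worth keeping (the Windows paths are assumptions):

        # perror ships with MySQL and decodes the error number
        perror 121
        # -> MySQL error code 121: Duplicate key on write or update

        # On a throwaway dev install: stop MySQL, remove the stale InnoDB
        # system files so they are recreated empty, then restart:
        #   net stop MySQL
        #   del "C:\ProgramData\MySQL\MySQL Server 5.1\data\ibdata1"
        #   del "C:\ProgramData\MySQL\MySQL Server 5.1\data\ib_logfile0"
        #   del "C:\ProgramData\MySQL\MySQL Server 5.1\data\ib_logfile1"
        #   net start MySQL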

  • deploying war on tomcat fails to start

    - by Asghar
    I have a Java application which uses JAX-WS. When I deployed it on my Tomcat 5 server, it deployed successfully, but it fails to start: SEVERE: WSSERVLET11: failed to parse runtime descriptor: java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName at javax.xml.namespace.QName.<init>(xml-commons-apis-1.3.02.jar.so) at gnu.xml.stream.XMLParser.getAttributeName(libgcj.so.7rh) at com.sun.xml.ws.util.xml.XMLStreamReaderFilter.getAttributeName(XMLStreamReaderFilter.java:228) at com.sun.xml.ws.streaming.XMLStreamReaderUtil$AttributesImpl.<init>(XMLStreamReaderUtil.java:355) at com.sun.xml.ws.streaming.XMLStreamReaderUtil.getAttributes(XMLStreamReaderUtil.java:198) at com.sun.xml.ws.transport.http.DeploymentDescriptorParser.parseAdapters(DeploymentDescriptorParser.java:204) at com.sun.xml.ws.transport.http.DeploymentDescriptorParser.parse(DeploymentDescriptorParser.java:147) at com.sun.xml.ws.transport.http.servlet.WSServletContextListener.contextInitialized(WSServletContextListener.java:124) at org.apache.catalina.core.StandardContext.listenerStart(catalina-5.5.23.jar.so) at org.apache.catalina.core.StandardContext.start(catalina-5.5.23.jar.so) at org.apache.catalina.manager.ManagerServlet.start(catalina-manager-5.5.23.jar.so) at org.apache.catalina.manager.HTMLManagerServlet.start(catalina-manager-5.5.23.jar.so) at org.apache.catalina.manager.HTMLManagerServlet.doGet(catalina-manager-5.5.23.jar.so) at javax.servlet.http.HttpServlet.service(tomcat5-servlet-2.4-api-5.5.23.jar.so) at javax.servlet.http.HttpServlet.service(tomcat5-servlet-2.4-api-5.5.23.jar.so) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(catalina-5.5.23.jar.so) at org.apache.catalina.core.ApplicationFilterChain.doFilter(catalina-5.5.23.jar.so) at org.apache.catalina.core.StandardWrapperValve.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.core.StandardContextValve.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.core.StandardHostValve.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.valves.ErrorReportValve.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.core.StandardEngineValve.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.connector.CoyoteAdapter.service(catalina-5.5.23.jar.so) at org.apache.coyote.http11.Http11Processor.process(tomcat-http-5.5.23.jar.so) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(tomcat-http-5.5.23.jar.so) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(tomcat-util-5.5.23.jar.so) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(tomcat-util-5.5.23.jar.so) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(tomcat-util-5.5.23.jar.so) at java.lang.Thread.run(libgcj.so.7rh)
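
    The giveaway is the gnu.xml.stream and libgcj.so.7rh frames: Tomcat is running on GCJ rather than a Sun/Oracle JDK, and the JAX-WS descriptor parser trips over GCJ's StAX classes. A sketch of the usual fix on a RHEL/CentOS-style tomcat5 install (the JDK path is an assumption):

        # Check which Java runtime tomcat actually resolves to
        readlink -f "$(which java)"     # a gcj setup points into java-1.4.2-gcj

        # Point tomcat5 at a real JDK instead, then restart
        echo 'JAVA_HOME=/usr/java/jdk1.6.0_45' >> /etc/sysconfig/tomcat5
        service tomcat5 restart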

  • android getting data from sql

    - by sagar
    Hello, I am new to Android. I want to connect to a SQL server to store and fetch data, so I would appreciate if someone could help me with Android code for doing this. I had tried to do this with Java and it was working, but now I want to create an application for Android. My Java code is: import java.sql.*; public class MysqlTest { public static void main (String[] args) { Connection conn = null; try { String userName = "pietro"; //change it to your username String password = "pietro"; //change it to your password String url = "jdbc:mysql://192.168.0.67:3306/registro"; Class.forName("com.mysql.jdbc.Driver").newInstance(); conn = DriverManager.getConnection(url, userName, password); Statement s = (Statement) conn.createStatement(); // code for create a tabel in server s.execute("create table School2 (rolno integer,sub text)"); // code for create a tabel in server s.execute("insert into School2(rolno,sub)values(10,'java')"); //code for add value in tabel s.execute("select rolno,sub from School2");//code for add value in tabel s.close(); System.out.println("Database connection established"); } catch (Exception e) { System.err.println("Cannot connect to database server"); } finally { if (conn != null) { try { conn.close (); System.out.println("Database connection terminated"); } catch (Exception e) { /* ignore close errors */ } } } } }

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows is prompting about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc). Or maybe I've just never encountered this before; I'm really not sure. I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) as Samba requires. Any ideas would be greatly appreciated. Thank you! Samba config: [global] workgroup = WORKGROUP server string = %h server include = /etc/samba/dhcp.conf dns proxy = no log level = 2 syslog = 2 log file = /var/log/samba/log.%m max log size = 1000 syslog only = yes panic action = /usr/share/samba/panic-action %d encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = no passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes socket options = TCP_NODELAY IPTOS_LOWDELAY guest account = nobody load printers = no disable spoolss = yes printing = bsd printcap name = /dev/null unix extensions = yes wide links = no create mask = 0777 directory mask = 0777 use sendfile = no null passwords = no local master = yes time server = yes wins support = yes ea support = yes store dos attributes = yes Note: I found this related question, but it explains the loss due to the user trying to transfer from NTFS to FAT32.
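
    Since ea support maps those Windows properties onto extended attributes, it is worth confirming that xattr writes actually succeed on the exported path, and that the share's own section (not just [global]) picks the settings up. A hedged sketch (the export path and share name are assumptions):

        # On the NAS, against the exported filesystem
        touch /export/share/xattr-test
        setfattr -n user.test -v hello /export/share/xattr-test
        getfattr -d /export/share/xattr-test    # should print user.test="hello"

        # Check the effective per-share config; per-share values override [global]
        testparm -s 2>/dev/null | grep -A 20 '\[yourshare\]'

        # Samba needs a restart after smb.conf changes
        service samba restart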

  • Startup script for Red5 on Ubuntu 9.04

    - by user49249
    I am creating a startup script for Red5 on Ubuntu. Red5 is installed in /opt/red5. The following is a working script from a CentOS box on which Red5 is running: [code] ==========Start init script ========== #!/bin/sh PROG=red5 RED5_HOME=/opt/red5/dist DAEMON=$RED5_HOME/$PROG.sh PIDFILE=/var/run/$PROG.pid # Source function library . /etc/rc.d/init.d/functions [ -r /etc/sysconfig/red5 ] && . /etc/sysconfig/red5 RETVAL=0 case "$1" in start) echo -n $"Starting $PROG: " cd $RED5_HOME $DAEMON >/dev/null 2>/dev/null & RETVAL=$? if [ $RETVAL -eq 0 ]; then echo $! > $PIDFILE touch /var/lock/subsys/$PROG fi [ $RETVAL -eq 0 ] && success $"$PROG startup" || failure $"$PROG startup" echo ;; stop) echo -n $"Shutting down $PROG: " killproc -p $PIDFILE RETVAL=$? echo [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$PROG ;; restart) $0 stop $0 start ;; status) status $PROG -p $PIDFILE RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|restart|status}" RETVAL=1 esac exit $RETVAL [/code] What do I need to replace for Ubuntu in the above script? My Red5 is in /opt/red5/ and to start it manually I always run /opt/red5/dist/red5.sh. I did not find /etc/rc.d/init.d/functions on Ubuntu on my laptop, and /etc/init.d/functions does not exist either. I would like to be able to use them with service as Red Hat distributions do. I checked /lib/lsb/init-functions.
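
    Ubuntu does not ship /etc/rc.d/init.d/functions; the equivalents are /lib/lsb/init-functions (for log_daemon_msg and friends) and start-stop-daemon (for daemonizing and pidfile handling). A minimal, untested sketch of the same script in Ubuntu idiom, keeping the paths from the question:

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          red5
        # Required-Start:    $remote_fs $network
        # Required-Stop:     $remote_fs $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Red5 server
        ### END INIT INFO

        . /lib/lsb/init-functions

        RED5_HOME=/opt/red5/dist
        DAEMON=$RED5_HOME/red5.sh
        PIDFILE=/var/run/red5.pid

        case "$1" in
          start)
            log_daemon_msg "Starting red5"
            start-stop-daemon --start --background --make-pidfile \
              --pidfile $PIDFILE --chdir $RED5_HOME --exec $DAEMON
            log_end_msg $?
            ;;
          stop)
            log_daemon_msg "Stopping red5"
            start-stop-daemon --stop --pidfile $PIDFILE --retry 10
            log_end_msg $?
            ;;
          restart)
            $0 stop
            $0 start
            ;;
          status)
            status_of_proc -p $PIDFILE "$DAEMON" red5
            ;;
          *)
            echo "Usage: $0 {start|stop|restart|status}"
            exit 1
            ;;
        esac

    Installed as /etc/init.d/red5, made executable, and registered with update-rc.d red5 defaults, it should then respond to service red5 start.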

  • Why does dd finish instantly when piping to cat?

    - by agsamek
    I start bash on Cygwin and type: dd if=/dev/zero | cat /dev/null It finishes instantly. When I type: dd if=/dev/zero > /dev/null it runs as expected and I can issue killall -USR1 dd to see the progress. Why does the former invocation finish instantly? Is it the same on a Linux box? * Explanation of why I asked such a stupid question, and a possibly not-so-stupid question: I was compressing HDD images and some were compressed incorrectly. I ended up with the following script showing the problem: while sleep 1 ; do killall -v -USR1 dd ; done & dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c wc should print 1000000000 at the end. The problem is that it does not on my machine: bash-3.2$ dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c 13+0 records in 12+0 records out 60000000 bytes (60 MB) copied, 0.834 s, 71.9 MB/s 27+0 records in 26+0 records out 130000000 bytes (130 MB) copied, 1.822 s, 71.4 MB/s 200+0 records in 200+0 records out 1000000000 bytes (1.0 GB) copied, 13.231 s, 75.6 MB/s 1005856128 Is it a bug or am I doing something wrong once again?
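
    The short answer: cat was given a file operand, so it never reads the pipe. It opens /dev/null, hits EOF immediately, exits, and dd is killed by SIGPIPE on its next write; Linux behaves the same way. To pass the stream through cat you read stdin and redirect the output:

        # cat with a file argument ignores stdin; the pipeline ends at once
        dd if=/dev/zero | cat /dev/null

        # what was probably intended: cat reads stdin, output is discarded
        dd if=/dev/zero | cat > /dev/null     # runs until killed; USR1 works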

  • using gmail as email relay for sendmail

    - by Nikita
    I used to be able to send emails using a gmail account & sendmail configured using one of the guides on the Internet, for example: http://appgirl.net/blog/configuring-sendmail-to-relay-through-gmail-smtp/ This is a small server and I've recently moved it to a different house. And sendmail has stopped working. The only thing different in the network setup is a new router. What is happening: In the log files, I see the following error: ...stat=Deferred: smtp.gmail.com: No route to host When I run from the command line: strace sendmail -f A -t B -u "Subject" -m "Message" -tls=yes ssl=yes -s smtp.gmail.com:587 -xu A -xp XYZ It hangs on this call: recvfrom(3, "m0\201\203\0\1\0\0\0\0\0\0\4ares\3lan\0\0\34\0\1", 8192, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.1.254")}, [16]) = 26 close(3) = 0 time(NULL) = 1339997943 open("/etc/localtime", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=3477, ...}) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=3477, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb76ff000 read(3, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\4\0\0\0\0"..., 4096) = 3477 _llseek(3, -24, [3453], SEEK_CUR) = 0 read(3, "\nEST5EDT,M3.2.0,M11.1.0\n", 4096) = 24 close(3) = 0 munmap(0xb76ff000, 4096) = 0 socket(PF_FILE, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 3 connect(3, {sa_family=AF_FILE, path="/dev/log"}, 110) = 0 send(3, "<18>Jun 18 01:39:03 sendmail[268"..., 96, MSG_NOSIGNAL) = 96 nanosleep({60, 0}, So it looks like at some point it tries to resolve the DNS name, but I don't have anything running on 53, so it dies out and then just hangs. The other interesting thing is that msmtp works just fine on the same server. Update: ares in the strace output is actually the name of my server, but the .254 IP address is the address of the router. Could anyone tell me why this is happening or what further steps I can take to investigate the issue? Thanks!
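
    The strace shows the resolver querying the new router (192.168.1.254) with the .lan search domain and then giving up, so this looks like name resolution or routing from the new network rather than sendmail itself. A hedged diagnostic sketch:

        # Is the router's DNS proxy answering at all?
        dig @192.168.1.254 smtp.gmail.com
        cat /etc/resolv.conf                  # which nameservers are configured?

        # Try a public resolver and test reachability of the submission port
        dig @8.8.8.8 smtp.gmail.com
        nc -vz smtp.gmail.com 587             # many home routers/ISPs block mail ports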

  • Getting custom web.config sections and their contents in Powershell

    - by Rob
    I have a web application installed in c:\inetpub\wwwroot_Site1\AppName which has a custom section group and section as follows: <configSections> <sectionGroup name="Libraries"> <section name="Custom.Section.Name" type="System.Configuration.NameValueSectionHandler,system, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=null"/> <section name="Custom.Section.Name2" type="System.Configuration.NameValueSectionHandler,system, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=null"/> </sectionGroup> </configSections> I've written the following snippet of PowerShell: Import-Module WebAdministration Get-WebConfiguration //Libraries IIS:\Sites\Site1\AppName Which correctly returns:
    Name         Sections                  Groups
    ====         ========                  ======
    Libraries    Custom.Section.Name
                 Custom.Section.Name2
    What I can't fathom is how, via either Get-WebConfiguration or Get-WebConfigurationProperty, to obtain access to the <add key="x" value="y" /> elements that are direct children of Custom.Section.Name in the actual "body" of the configuration file.

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has been mainly done by running dd commands like this: dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000 In parallel for each disk. However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in JBOD configuration, so each disk can be addressed directly. I have two questions: Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN-% numbers in iotop, which I find hard to explain because the target is /dev/null How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
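
    Plain dd reads go through the page cache, so with 128 GB of RAM part of what gets measured is caching and readahead rather than the arrays, which can also produce odd SWAPIN numbers in iotop. A sketch that takes the cache out of the picture (device names are assumptions):

        # O_DIRECT bypasses the page cache so each array is measured honestly
        for dev in /dev/mapper/mpath1 /dev/mapper/mpath2; do
            dd if=$dev of=/dev/null bs=1M count=3000 iflag=direct &
        done
        wait

    A purpose-built tool such as fio also gives more control (queue depth, parallel jobs) than many hand-launched dd processes.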

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great, we get a lot of very insightful stats. It is clear this is not as accurate as running a profiler trace, as you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached. For example queries like the following: SELECT TOP 30 a.Id FROM Posts a JOIN Posts q ON q.Id = a.ParentId JOIN PostTags pt ON q.Id = pt.PostId WHERE a.PostTypeId = 2 AND a.DeletionDate IS NULL AND a.CommunityOwnedDate IS NULL AND a.CreationDate > @date AND LEN(a.Body) > 300 AND pt.Tag = @tag AND a.Score > 0 ORDER BY a.Score DESC The problem is that the ideal plan really depends on the date selected (screenshot of ideal plan): However if the wrong plan is cached, it totally chokes when the date range is big: (notice the big fat lines) To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal. Executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen, however no execution counts and stats are tracked in sys.dm_exec_query_stats. Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql

  • logrotate: neither rotate nor compress empty files

    - by Andrew Tobey
    I have just set up an (r)syslog server to receive the logs of various clients, which works fine. Only logrotate is still not behaving as intended. I want logrotate to create a new logfile for each day, but only to keep and store (i.e. compress) non-empty files. My logrotate config currently looks like this: # sample configuration for logrotate being a remote server for multiple clients /var/log/syslog { rotate 3 daily missingok notifempty delaycompress compress dateext nomail postrotate reload rsyslog >/dev/null 2>&1 || true endscript } # local i.e. the system's very own logs: keep logs for a whole month /var/log/kern.log /var/log/kernel-info /var/log/auth.log /var/log/auth-info /var/log/cron.log /var/log/cron-info /var/log/daemon.log /var/log/daemon-info /var/log/mail.log /var/log/rsyslog /var/log/rsyslog-info { rotate 31 daily missingok notifempty delaycompress compress dateext nomail sharedscripts postrotate reload rsyslog >/dev/null 2>&1 || true endscript } # received i.e. logs from the clients /var/log/path-to-logs/*/* { rotate 31 daily missingok notifempty delaycompress compress dateext nomail } What I end up with is some sort of "summarized" files such as filename-datestampDay-Day and corresponding .gz files. What I do have are empty files, which are eventually zipped. So is the notifempty directive in fact responsible for these DayX-DayY files, for days on which really nothing happened? What would be an efficient way to drop both empty log files and their .gz files, so that I eventually only keep logs/compressed files that truly contain data?
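
    If the aim is simply never to keep empty archives, one stopgap is pruning them after rotation, since an "empty" gzip file still weighs about 20 bytes. A hedged sketch that could run from cron or a postrotate script (the path is taken from the config above; the name pattern assumes dateext-style filenames):

        # delete compressed archives that contain no data (~20 bytes = empty gzip)
        find /var/log/path-to-logs -name '*.gz' -size -25c -delete
        # delete zero-length rotated files that were never compressed,
        # matching only dateext-named files so live logs are untouched
        find /var/log/path-to-logs -type f -name '*-20[0-9][0-9]*' -empty -delete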

  • xen + debian network after upgrade squeeze to wheezy

    - by rush
    I've got a Debian + Xen server. After a system upgrade to the stable version the network doesn't come up after boot. Every time after a reboot I need to bring it up manually. The network configuration was not changed during the upgrade. Here is /etc/network/interfaces: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 11.22.33.44 netmask 255.255.255.0 gateway 11.22.33.1 nameserver 8.8.8.8 After boot, ip r shows no route and eth0 has no IP address. Setting the IP and route manually works fine and the network starts working. Here are the network-related messages I found in dmesg (nothing looks interesting): [ 3.894401] ACPI: Fan [FAN3] (off) [ 3.894444] ACPI: Fan [FAN4] (off) [ 4.178348] e1000e 0000:00:19.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:1e:67:14:66:c9 [ 4.178351] e1000e 0000:00:19.0: eth0: Intel(R) PRO/1000 Network Connection [ 4.178392] e1000e 0000:00:19.0: eth0: MAC: 10, PHY: 11, PBA No: 0100FF-0FF [ 4.178413] e1000e 0000:02:00.0: Disabling ASPM L0s L1 [ 4.178432] xen: registering gsi 16 triggering 0 polarity 1 -- [ 4.223667] ata5: DUMMY [ 4.223668] ata6: DUMMY [ 4.289153] e1000e 0000:02:00.0: eth1: (PCI Express:2.5GT/s:Width x1) 00:1e:67:14:66:c8 [ 4.289155] e1000e 0000:02:00.0: eth1: Intel(R) PRO/1000 Network Connection [ 4.289245] e1000e 0000:02:00.0: eth1: MAC: 3, PHY: 8, PBA No: 1000FF-0FF [ 4.506908] usb 1-1: new high-speed USB device number 2 using ehci_hcd [ 4.542920] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300) -- [ 10.362999] EXT4-fs (dm-23): mounted filesystem with ordered data mode. Opts: (null) [ 10.419103] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null) [ 10.988255] ADDRCONF(NETDEV_UP): eth1: link is not ready [ 13.175533] Event-channel device installed. [ 13.287555] XENBUS: Unable to read cpu state -- [ 13.288670] XENBUS: Unable to read cpu state [ 13.965939] Bridge firewalling registered [ 14.134048] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx [ 14.283862] ADDRCONF(NETDEV_UP): peth0: link is not ready [ 14.284543] ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready [ 17.800627] e1000e: peth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx [ 17.801377] ADDRCONF(NETDEV_CHANGE): peth0: link becomes ready [ 18.307278] device peth0 entered promiscuous mode [ 24.538899] eth1: no IPv6 routers present [ 28.570902] peth0: no IPv6 routers present I've upgraded two servers and I see this behaviour on both of them. How can I fix this and get the network to start automatically on boot?
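
    One suspect in that interfaces file is the nameserver line: it is not a valid ifupdown option (the resolvconf syntax is dns-nameservers), and wheezy's ifupdown is stricter about unknown options than squeeze's, which could leave eth0 unconfigured at boot while manual ip commands still work. A hedged sketch:

        # Re-run the stanza by hand; parse errors are printed with -v
        ifdown eth0 2>/dev/null; ifup -v eth0

        # Corrected stanza to try (dns-nameservers needs the resolvconf package):
        #   auto eth0
        #   iface eth0 inet static
        #       address 11.22.33.44
        #       netmask 255.255.255.0
        #       gateway 11.22.33.1
        #       dns-nameservers 8.8.8.8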

  • Weekly cron job executing twice

    - by yes123
    Hi guys, I have placed a .sh file that runs a PHP script weekly. This script should run only once, but every Sunday it runs at 1:30 am and 6:50 am. Any way to fix this? Update 1: /etc/cron.weekly/cronweek: #!/bin/bash /usr/bin/php -f /home/my/path/to/script/cronweek.php Update 2: crontab file: # /etc/crontab: system-wide crontab # Unlike any other crontab you don't have to run the `crontab' # command to install the new version when you edit this file # and files in /etc/cron.d. These files also have username fields, # that none of the other crontabs do. SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin # m h dom mon dow user command 17 * * * * root cd / && run-parts --report /etc/cron.hourly 25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ) 47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly ) 52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ) # */1 * * * * root /usr/local/rtm/bin/rtm 35 > /dev/null 2> /dev/null
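
    Two different run times usually means two schedulers firing: the crontab's weekly run-parts entry at 6:47 (which would land around 6:50) plus either anacron catching up or a second copy of the job somewhere. A diagnostic sketch:

        # find every place the job is referenced
        grep -r cronweek /etc/cron* /var/spool/cron 2>/dev/null

        # does anacron also own the weekly jobs?
        cat /etc/anacrontab
        ls -l /var/spool/anacron/

        # confirm which scheduler ran it and when
        grep -Ei 'cron|anacron' /var/log/syslog | grep -i week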

  • Error authenticating git repository with Redmine

    - by woni
    I've setup Redmine 2.1 on my Debian Squeeze server following this Tutorial HowTo configure Redmine for advanced git integration (I tried to use the grack path). Redmine server is running properly, but I have a problem granting users access to git repositories. When I try to clone a repository it says: error: The requested URL returned error: 500 while accessing The apache error.log shows this entry: [Fri Sep 28 15:50:56 2012] [crit] [client xx.xx.xx.xx] configuration error: couldn't check user. Check your authn provider!: /repo.git/info/refs It also asks me for user and password when cloning, but it shouldn't if I understand the tutorial right. I'm using the Redmine authentication module: <VirtualHost *:80> ServerName my.server.at DocumentRoot "/var/www/my.server.at/public" PerlLoadModule Apache::Redmine <Directory "/var/www/my.server.at/public"> Options None AllowOverride None Order allow,deny Allow from all </Directory> SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER" SetEnv GIT_PROJECT_ROOT /var/git/my.server.at/ SetEnv GIT_HTTP_EXPORT_ALL ScriptAlias /git/ /usr/lib/git-core/git-http-backend <Location /> Order allow,deny Allow from all AuthType Basic AuthName Git Require valid-user AuthBasicAuthoritative Off AuthUserFile /dev/null AuthGroupFile /dev/null PerlAccessHandler Apache::Authn::Redmine::access_handler PerlAuthenHandler Apache::Authn::Redmine::authen_handler RedmineDSN "DBI:mysql:database=redmine;host=localhost" RedmineDbUser "user" RedmineDbPass "password" RedmineGitSmartHttp yes </Location> </VirtualHost> Can someone help me please and explain the error and what I can do to solve my problem?

  • Heavy write to Galera cluster - table locked, cluster practically unusable

    - by Joe
    I set up Galera Cluster on 3 nodes. It works perfectly for reading data. I wrote a simple application to run some tests on the cluster. Unfortunately I have to say that the cluster fails totally when I try to do some writing. Maybe it can be configured differently, or am I doing something wrong? I have a simple stored procedure: CREATE PROCEDURE testproc(IN p_idWorker INTEGER) BEGIN DECLARE t_id INT DEFAULT -1; DECLARE t_counter INT ; UPDATE test SET idWorker = p_idWorker WHERE counter = 0 AND idWorker IS NULL limit 1; SELECT id FROM test WHERE idWorker = p_idWorker LIMIT 1 INTO t_id; SELECT ABS(MAX(counter)/MIN(counter)) FROM TEST INTO t_counter; SELECT COUNT(*) FROM test WHERE counter = 0 INTO t_counter; IF t_id >= 0 THEN UPDATE test SET counter = counter + 1 WHERE id = t_id; UPDATE test SET idWorker = NULL WHERE id = t_id; SELECT t_counter AS res; ELSE SELECT 'end' AS res; END IF; END $$ Now my simple C# application creates, for example, 3 MySQL clients in separate threads and each one executes the procedure every 100ms until there is no record where column 'counter' = 0. Unfortunately, after about 10 seconds something goes bad. On the servers there is a query stuck in the 'query_end' state that never finishes. After that you cannot run an update on the test table; MySQL returns: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction. You can't even restart MySQL. What you can do is restart the server, sometimes the whole cluster. Is Galera Cluster so unreliable when you do massive concurrent writing/updates? Hard to believe.

  • linux automatically changing permissions on resolv.conf

    - by rikr
    On various Linux servers I see the permissions of the /etc/resolv.conf file change automatically. In the normal state: -r--r--r-- 1 root root 103 Jul 4 11:50 resolv.conf In the changed state: -r--r----- 1 root root 103 Jul 4 11:50 resolv.conf I installed auditd to monitor it, and these are the two entries around the change: type=PATH msg=audit(07/04/2012 12:20:02.719:303) : item=0 name=/etc/resolv.conf inode=137102 dev=fe:00 mode=file,644 ouid=root ogid=root rdev=00:00 type=CWD msg=audit(07/04/2012 12:20:02.719:303) : cwd=/ type=SYSCALL msg=audit(07/04/2012 12:20:02.719:303) : arch=x86_64 syscall=open success=yes exit=3 a0=7feeb1405dec a1=0 a2=1b6 a3=0 items=1 ppid=1585 pid=3445 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=4294967295 comm=hostid exe=/usr/bin/hostid key=(null) type=PATH msg=audit(07/04/2012 12:50:03.727:304) : item=0 name=/etc/resolv.conf inode=137102 dev=fe:00 mode=file,440 ouid=root ogid=root rdev=00:00 type=CWD msg=audit(07/04/2012 12:50:03.727:304) : cwd=/ type=SYSCALL msg=audit(07/04/2012 12:50:03.727:304) : arch=x86_64 syscall=open success=yes exit=3 a0=7f2bcf7abdec a1=0 a2=1b6 a3=0 items=1 ppid=1585 pid=3610 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=4294967295 comm=hostid exe=/usr/bin/hostid key=(null) Any ideas?
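
    Both captured records are ordinary opens by hostid (reads, not permission changes), so the watch needs to cover attribute changes to catch the chmod itself. A sketch:

        # watch resolv.conf for writes and attribute (chmod/chown) changes
        auditctl -w /etc/resolv.conf -p wa -k resolv-perms

        # after the mode flips again, identify the responsible process
        ausearch -k resolv-perms -i | less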

  • Is there any reason this cronjob would fail in cron, but not on the command line?

    - by Treffynnon
    I have written a little one liner that will email me when a list of files changes - I used sha512 to generate a list of hashes and then periodically check that those hashes still match. */5 * * * * /usr/bin/sha512sum --status -c /sha512.sumlist && echo "Success" > /dev/null || echo "Check robots.txt and index.html in /var/www as staging sites are now potentially exposed to the world and the damned googlebot" | /usr/bin/mail -s "Default staging server files have changed" [email protected] It works fine on the command line with: /usr/bin/sha512sum --status -c /sha512.sumlist && echo "Success" > /dev/null || echo "Check robots.txt and index.html in /var/www as staging sites are now potentially exposed to the world and the damned googlebot" | /usr/bin/mail -s "Default staging server files have changed" [email protected] As soon as I run it as a cronjob though it emails every time it runs with the failure message instead of only when the sha512sum check should fail. Is there something silly I have missed in a rush? I forgot to mention that I am running an Ubuntu machine.
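
    The classic difference is cron's environment: a minimal PATH and a working directory of $HOME, so a sum list built with relative paths verifies against the wrong directory. A hedged way to see the real failure is to log it once without --status (which suppresses all output):

        # temporary crontab line: capture what actually mismatches under cron
        */5 * * * * cd / && /usr/bin/sha512sum -c /sha512.sumlist > /tmp/sumcheck.log 2>&1

        # if the list was built with relative paths, rebuild it with absolute ones
        sha512sum /var/www/robots.txt /var/www/index.html > /sha512.sumlist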

  • Running Solr on VPS problem

    - by Camran
    I have a VPS with Ubuntu OS. I run Solr on my local machine (a Windows XP laptop) just fine. I have configured Jetty and Solr on the server just the same way as on my computer. I have also downloaded the JRE and installed it on the server. However, whenever I try to run the start.jar file, the PuTTY terminal shows a bunch of text but gets stuck. I could paste the text here but it is very long, so unless somebody wants to see it I won't. Also, I can't view the Solr admin page at all. Does anybody have experience with this kind of problem? Maybe Java isn't correctly installed? It is a VPS so maybe the installation is different. Thanks UPDATE: These are the last lines from the terminal, in other words, this is where it stops every time: INFO: [] webapp=null path=null params={event=firstSearcher&q=static+firstSearcher+warming+query+from+solrconfig.xml} hits=0 status=0 QTime=9 May 28, 2010 8:58:42 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener done. May 28, 2010 8:58:42 PM org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener newSearcher INFO: Loading spell index for spellchecker: default May 28, 2010 8:58:42 PM org.apache.solr.core.SolrCore registerSearcher INFO: [] Registered new searcher Searcher@63a721 main Also you should know that I installed Jetty by just dragging the folders from my HD to the VPS server.
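
    Those last lines are Jetty finishing its warm-up queries and then staying in the foreground, which over SSH looks like a hang; closing PuTTY would then kill Solr. A sketch for running it detached and checking reachability (the paths are assumptions):

        # start Jetty/Solr detached from the SSH session
        cd /opt/solr/example
        nohup java -jar start.jar > solr.log 2>&1 &

        # confirm it is listening (Solr's example Jetty defaults to port 8983)
        netstat -lnt | grep 8983
        # then browse to http://your-vps-ip:8983/solr/admin/ and check firewall rules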

  • validate weblogic security realm user through java

    - by user1877246
    I have installed WebLogic 10.3.4.0 and have created a user in the default security realm 'myrealm'. The authenticator is DefaultAuthenticator. Now I have to authenticate the user through a standalone Java application, but the application is responding with 'LDAP: error code 49 - Invalid Credentials': CODE-START ** Properties l_props = new Properties(); LdapContext l_ctx = null; l_props.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); l_props.put(Context.PROVIDER_URL, "ldap://localhost:7001"); l_props.put(Context.SECURITY_AUTHENTICATION, "simple"); l_props.put(Context.SECURITY_PRINCIPAL, "cn=username"); l_props.put(Context.SECURITY_CREDENTIALS, "password"); l_ctx = new InitialLdapContext(l_props, null); ** CODE-END ** * ERROR-START * javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials] at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3041) at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2987) at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2789) at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2703) at com.sun.jndi.ldap.LdapCtx.(LdapCtx.java:293) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:175) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:193) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:136) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:66) at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667) at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288) at javax.naming.InitialContext.init(InitialContext.java:223) at javax.naming.ldap.InitialLdapContext.(InitialLdapContext.java:134) at com.iflex.fcat.misc.TestLDAP.createInitialLdapContext(TestLDAP.java:258) at com.iflex.fcat.misc.TestLDAP.authenticate(TestLDAP.java:170) at com.iflex.fcat.misc.TestLDAP.main(TestLDAP.java:125) ERROR-END
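
    WebLogic's embedded LDAP expects a full DN on a simple bind, not a bare cn=username; users created in myrealm typically live under uid=<name>,ou=people,ou=myrealm,dc=<domain>. A hedged bind test from the shell before touching the Java code (the domain name is an assumption):

        # the admin server speaks LDAP on its listen port (7001 here)
        ldapsearch -x -h localhost -p 7001 \
          -D "uid=myuser,ou=people,ou=myrealm,dc=base_domain" \
          -w mypassword \
          -b "ou=people,ou=myrealm,dc=base_domain" "(uid=myuser)"

    Whatever DN binds successfully here is the string to pass as Context.SECURITY_PRINCIPAL.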

  • How to start and stop a systemd unit with another?

    - by Andy Shinn
    I am using CoreOS to schedule systemd units with fleet. I have two units (firehose.service and firehose-announce.service. I am trying to get the firehose-announce.service to start and stop along with the firehose.service. Here is the unit file for firehose-announce.service: [Unit] Description=Firehose etcd announcer BindsTo=firehose@%i.service After=firehose@%i.service Requires=firehose@%i.service [Service] EnvironmentFile=/etc/environment TimeoutStartSec=30s ExecStartPre=/bin/sh -c 'sleep 1' ExecStart=/bin/sh -c "port=$(docker inspect -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' firehose-%i); echo -n \"Adding socket $COREOS_PRIVATE_IPV4:$port/tcp to /firehose/upstream/firehose-%i\"; while netstat -lnt | grep :$port >/dev/null; do etcdctl set /firehose/upstream/firehose-%i $COREOS_PRIVATE_IPV4:$port --ttl 300 >/dev/null; sleep 200; done" RestartSec=30s Restart=on-failure [X-Fleet] X-ConditionMachineOf=firehose@%i.service I am trying to use BindsTo with the notion that start and stop of firehose.service will also start or stop firehose-announce.service. But this never happens correctly. If firehose.service is stopped, then firehose-announce.service goes to failed state. But when I start firehose.service, the firehose-announce.service doesn't start up. What am I doing wrong here?
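
    BindsTo and Requires point the dependency from firehose-announce to firehose, which gives stop propagation but nothing that makes starting firehose pull the announcer in; start propagation needs an edge in the other direction. One hedged fix (unit names from the question) is a drop-in on the firehose template:

        # hypothetical drop-in: starting firehose@X also wants its announcer
        sudo mkdir -p /etc/systemd/system/firehose@.service.d
        sudo tee /etc/systemd/system/firehose@.service.d/10-announce.conf <<'EOF'
        [Unit]
        Wants=firehose-announce@%i.service
        EOF
        sudo systemctl daemon-reload

    With fleet scheduling both units, the equivalent is adding Wants=firehose-announce@%i.service to the firehose@.service unit file itself before submitting it.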
