Search Results

Search found 5286 results on 212 pages for 'logs'.


  • Why apache doesn't restart after configuring SSL?

    - by poz2k4444
    I've installed apache2 and then configured it to work with SSL following this and this tutorial; the problem appears when I try to restart the service, and the following error is thrown: (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443 no listening sockets available, shutting down Unable to open logs The output of netstat -anp | grep 443 just displays Firefox and nothing else. How can I solve this and get the service running?
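
    A hedged troubleshooting sketch (the paths assume a Debian/Ubuntu apache2 layout, and the duplicate Listen directive is only a guess based on the symptom):

        # As root, so owning PIDs/names are shown for every socket:
        sudo netstat -tlnp | grep ':443'

        # A common cause of this exact make_sock error is "Listen 443"
        # being declared twice, e.g. in both ports.conf and the SSL vhost:
        grep -Rn 'Listen 443' /etc/apache2/

    If the second command shows two Listen 443 lines, removing one of them and restarting apache2 usually clears the error.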

    Read the article

  • Apache2 cgi's crash on odbc db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a cgi from apache2. Ruby cgi's that don't try to access the ODBC datasource work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So, the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these cgi scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail of any aspect if that will help. Details: Mac OS X Server 10.5.8 Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB) Apache 2.2.13 ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0] ruby-odbc 0.9997 dbd-odbc (0.2.5) dbi (0.4.3) mod_ruby 1.3.0 Perl -- 5.8.8 DBI -- 1.609 DBD::ODBC -- 1.23 odbc driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib) odbc datasource: FileMaker Server 10 (v10.0.2.206) ) a minimal version of a script (anonymized) that will crash in apache but run successfully from a shell: #!/usr/bin/ruby require 'cgi' require 'odbc' cgi = CGI.new("html3") aConnection = ODBC::connect('DBFile', "username", 'password') aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044") aQuery.execute aRecord = aQuery.fetch_hash.inspect aQuery.drop aConnection.disconnect # aRecord = '{"zzz_kP_ID"=>81044.0}' cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } } Example of running this from a shell: gamma% ./minimal.rb (offline mode: enter name=value pairs on standard input) Content-Type: text/html Content-Length: 134 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>% gamma% ) typical crash log lines: Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236] Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0 Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253] Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
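
    Since the same scripts work from an interactive shell but die under Apache, one low-tech way to narrow it down is to dump the CGI environment and diff it against the shell's. A minimal sketch, assuming plain CGI execution is enabled; the ODBC-related variables to look for (ODBCINI, ODBCSYSINI, DYLD_LIBRARY_PATH) are only the usual suspects for a driver stack like SequeLink, not confirmed for this setup:

        #!/bin/sh
        # env.cgi -- drop into the same CGI directory as minimal.rb,
        # then compare its output against `env | sort` from a shell.
        echo "Content-Type: text/plain"
        echo ""
        env | sort

    Any variable present in the shell but missing under Apache can then be supplied with SetEnv or PassEnv in the vhost configuration to test the theory.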

    Read the article

  • Combine multiple unix commands into one output

    - by Ben McCormack
    I need to search our mail logs for a specific e-mail address. We keep a current file named maillog as well as a week's worth of .bz2 files in the same folder. Currently, I'm running the following commands to search for the address: grep [email protected] maillog bzgrep [email protected] *.bz2 Is there a way to combine the grep and bzgrep commands into a single output? That way, I could pipe the combined results to a single e-mail or a single file.
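
    One way to do this is to group both commands so their combined output flows through a single pipe; the address and recipient below are placeholders for the redacted ones:

        { grep 'someone@example.com' maillog
          bzgrep 'someone@example.com' *.bz2
        } | mail -s 'maillog search results' admin@example.com

    Redirecting the group to a file (replace the pipe with > results.txt) works the same way, and bzcat *.bz2 piped into a single grep is another equivalent spelling for the compressed part.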

    Read the article

  • ActiveMQ Pure Master / Slave - Out of sync

    - by pico
    What i have : 1 master broker and 1 slave broker both in ActiveMQ 5.4.0 What i use : waitForSlave on master side and failover uri on slave side (in the master connector URI) What i want to do : I want to wait some delay (like 5 seconds) in case of a tiny network failures between master and slave before starting slave transpôrt connectors So i put this in slave config : <broker xmlns="http://activemq.apache.org/schema/core" brokerName="slave" dataDirectory="${activemq.base}/data" useJmx="true" persistent="true" populateJMSXUserID="true" masterConnectorURI="failover://(tcp://master:61616)?initialReconnectDelay=1000&amp;maxReconnectDelay=30000" shutdownOnMasterFailure="false" advisorySupport="false"> It seems to work but after a network hang between master and slave, the slave reconnect successfully and then the master logs a lot of : 2010-10-18 17:08:44,421 | ERROR | Slave Failed | org.apache.activemq.broker.ft.MasterBroker | ActiveMQ Task java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1040-634226732611718750-0:0 at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93) at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412) at org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:561) at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:76) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69) at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218) at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98) at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36) On the slave side everything is fine. So after that, i've tried to stop the master to see if the slave is capable of turning master after these "network hangs". 
The master took long time to shutdown (10 seconds) and then some error message appears in slave logs : 2010-10-18 17:09:32,915 | WARN | Async error occurred: java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 | org.apache.activemq.broker.TransportConnection.Service | VMTransport: vm://slave#5 java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93) at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412) at org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:600) at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:74) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69) at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218) at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98) at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36) Are they any ways to keep my kaha stores (they are individual stores) synchronised? The main problem is that the slave never turn master after a master failure, it stay block on this message : 2010-10-18 17:09:33,681 | WARN | Transport (master/172.21.60.61:61616) failed to tcp://master:61616 , attempting to automatically reconnect due to: java.net.SocketException: Software caused connection abort: socket write error | org.apache.activemq.transport.failover.FailoverTransport | ActiveMQ Transport: tcp://master/172.21.60.61:61616 I'm totally stuck with these syncs problems, any help welcome! Regards

    Read the article

  • Binary Log Format in MySQL

    - by amritansu
    The reference manual for MySQL 5.6 states that "Some changes, however, still use the statement-based format. Examples include all DDL (data definition language) statements such as CREATE TABLE, ALTER TABLE, or DROP TABLE." Does this statement mean that even if we have ROW format for binary logs, all DDL will be logged in the binary log as statement-based? How does this affect replication? Kindly help me to understand this.
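
    Per the quoted manual text, DDL is indeed always written in statement format even when binlog_format is ROW; the slave simply replays the statement, so replication of CREATE/ALTER/DROP is not broken by it. A quick way to see this on a test server (the binlog file name is only an example):

        mysql -e "SHOW VARIABLES LIKE 'binlog_format'"
        # DDL shows up as plain SQL text even in a ROW-format binlog:
        mysqlbinlog --base64-output=decode-rows -v mysql-bin.000001 | grep -i -B2 'CREATE TABLE'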

    Read the article

  • SBS 2003 Event ID 3005 (ActiveSync Related)

    - by lance-gallant
    I've recently noticed the following error in my server logs. It happens whenever a mobile device user attempts to synchronise using OMA. The error is logged on the server, but the user is able to sync, albeit slowly. Event Type: Error Event Source: Server ActiveSync Event Category: None Event ID: 3005 Date: 16/03/2010 Time: 01:22:20 PM User: xxxxxx Computer: xxxxxx Description: Unexpected Exchange mailbox Server error: Server: xxxxxx User: xxxxxx HTTP status code: [409]. Verify that the Exchange mailbox Server is working correctly.

    Read the article

  • Auditing events 4656 and 4658 on Windows folder on Server 2008

    - by PCurd
    During an overnight system state backup we are seeing thousands of success audit events (4656, 4658) on c:\windows\servicing, system32 and other folders under the Windows directory. We use file success auditing on some files, so I can't disable it, but this deluge is filling up the logs and making reporting tricky. What is the harm of changing the auditing settings on the Windows folder? What are the recommended settings to put on the files for those doing system state backups? Thanks,

    Read the article

  • Setting up two screens in Xorg

    - by viraptor
    I'be got two Nvidia cards, but Xorg activates only one of them. The following config is based on the nvidia configurator output: Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" LeftOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Module" Load "dbe" Load "extmod" Load "type1" Load "freetype" Load "glx" EndSection Section "InputDevice" Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" Identifier "Keyboard0" Driver "keyboard" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "HP LE2201w" HorizSync 24.0 - 83.0 VertRefresh 50.0 - 76.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor1" VendorName "Unknown" ModelName "Acer AL2017" HorizSync 30.0 - 82.0 VertRefresh 56.0 - 76.0 Option "DPMS" EndSection Section "Device" Identifier "Card0" Driver "nvidia" VendorName "nVidia Corporation" BoardName "GeForce 6100 nForce 405" BusID "PCI:0:13:0" EndSection Section "Device" Identifier "Card1" Driver "nvidia" VendorName "nVidia Corporation" BoardName "GeForce 8400 GS" BusID "PCI:2:0:0" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection What I see in the log file is: (==) Log file: "/var/log/Xorg.0.log", Time: Fri Mar 19 11:08:08 2010 (==) Using config file: "/etc/X11/xorg.conf" (==) ServerLayout "Layout0" (**) |-->Screen "Screen0" (0) (**) | |-->Monitor "Monitor0" (==) No device specified for screen "Screen0". Using the first device section listed. (**) | |-->Device "Card0" (**) |-->Screen "Screen1" (1) (**) | |-->Monitor "Monitor1" (==) No device specified for screen "Screen1". Using the first device section listed. (**) | |-->Device "Card0" (**) |-->Input Device "Keyboard0" (**) |-->Input Device "Mouse0" (**) Option "Xinerama" "0" (==) Automatically adding devices (==) Automatically enabling devices even though later on both cards are detected: (--) PCI:*(0:0:13:0) 10de:03d1:1019:2601 nVidia Corporation C61 [GeForce 6100 nForce 405] rev 162, Mem @ 0xfb000000/16777216, 0xd0000000/268435456, 0xfc000000/16777216, BIOS @ 0x????????/131072 (--) PCI: (0:2:0:0) 10de:0422:0000:0000 nVidia Corporation G86 [GeForce 8400 GS] rev 161, Mem @ 0xf8000000/16777216, 0xe0000000/268435456, 0xf6000000/33554432, I/O @ 0x0000bc00/128, BIOS @ 0x????????/131072 [ --- some more logs --- ] (II) Mar 19 11:08:10 NVIDIA(0): NVIDIA GPU GeForce 6100 nForce 405 (C61) at PCI:0:13:0 (II) Mar 19 11:08:10 NVIDIA(0): (GPU-0) [ --- some more logs --- ] (II) Mar 19 11:08:12 NVIDIA(GPU-1): NVIDIA GPU GeForce 8400 GS (G86) at PCI:2:0:0 (GPU-1) Unfortunately later on only one card is initialised and one screen is active. Xrandr shows only one screen too. Any ideas on how to fix it?
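
    The repeated log line "No device specified for screen ... Using the first device section listed" suggests the Screen sections reference device identifiers that do not exist: they say Device "Device0"/"Device1", while the Device sections are named "Card0" and "Card1", so both screens fall back to the first card. A sketch of the corrected Screen sections, everything else left as posted:

        Section "Screen"
            Identifier   "Screen0"
            Device       "Card0"        # was "Device0", which matches no Device section
            Monitor      "Monitor0"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier   "Screen1"
            Device       "Card1"        # was "Device1"
            Monitor      "Monitor1"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection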

    Read the article

  • Multiple syslog-ng destination loghosts

    - by pablo808
    I am currently forwarding logs to one remote destination loghost. filter f_windows { program("Security-Audit*"); }; log { source(r_sys); filter(f_windows); destination(d_windows); }; log { source(r_sys); filter (f_windows); destination(d_loghost); }; I would like to forward these logs to two additional remote destination loghosts. The manual defines destination syntax as: destination <identifier> { destination-driver(params); destination-driver(params); ... }; I tried these different configs: Define additional destination hosts in d_loghost: destination d_loghost { udp("server1" port(514)); udp("server2" port(514)); udp("server3" port(514));}; filter f_windows { program("Security-Audit*"); }; log { source(r_sys); filter (f_windows); destination(d_loghost); }; Define additional destination hosts in their own d_loghost definitions: destination d_loghost1 { udp("server1" port(514)); destination d_loghost2 { udp("server2" port(514)); destination d_loghost3 { udp("server3" port(514)); filter f_windows { program("Security-Audit*"); }; log { source(r_sys); filter (f_windows); destination(d_loghost1); }; log { source(r_sys); filter (f_windows); destination(d_loghost2); }; log { source(r_sys); filter (f_windows); destination(d_loghost3); }; Both fail, unfortunately; what am I missing? Thanks.
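
    As pasted, the second attempt is missing the closing "};" on each destination block, which alone would make syslog-ng reject the configuration. Two hedged variants that follow the documented syntax (server names as in the question; checking with syslog-ng -s before restarting is cheap):

        # Variant A: one destination holding several drivers
        destination d_loghost {
            udp("server1" port(514));
            udp("server2" port(514));
            udp("server3" port(514));
        };
        log { source(r_sys); filter(f_windows); destination(d_loghost); };

        # Variant B: separate destinations, all referenced from one log path
        destination d_loghost1 { udp("server1" port(514)); };
        destination d_loghost2 { udp("server2" port(514)); };
        destination d_loghost3 { udp("server3" port(514)); };
        log {
            source(r_sys);
            filter(f_windows);
            destination(d_loghost1);
            destination(d_loghost2);
            destination(d_loghost3);
        };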

    Read the article

  • Exchange Server is rejecting message after "MAIL FROM" with "500 5.3.3" with tarpit despite being a Trusted Receiver

    - by Don Rhummy
    I'm getting the message "500 5.3.3 Unrecognized command" from the Exchange server and seeing in the Exchange Server logs that it's tarpitting my SMTP sender, despite the fact that: I added a Receive Connector for my IP that allows the connection and uses "Externally Secure"; I ran the commands (with the actual server name): Set-ReceiveConnector "MyTrusted connector (Servername)" -MaxAcknowledgementDelay 0 Set-ReceiveConnector "MyTrusted connector (Servername)" -TarpitInterval 0 Despite all that, it STILL fails! Any idea what's wrong?
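
    One thing worth ruling out before anything else: if the sending IP is not inside the custom connector's RemoteIPRanges (or another connector's range is more specific), the session is actually handled by the Default connector and the tarpit settings above never apply. A hedged check from the Exchange Management Shell:

        Get-ReceiveConnector | fl Name,Bindings,RemoteIPRanges,AuthMechanism,PermissionGroups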

    Read the article

  • MySQL daemon keeps terminating unexpectedly

    - by Yehia A.Salam
    The MySQL daemon on my CentOS server keeps crashing, i got the logs from /var/logs/mysqld but still i am not sure how to fix this: 121114 16:22:56 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended 121114 21:55:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 121114 21:55:11 [Note] Plugin 'FEDERATED' is disabled. 121114 21:55:11 InnoDB: The InnoDB memory heap is disabled 121114 21:55:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins 121114 21:55:11 InnoDB: Compressed tables use zlib 1.2.3 121114 21:55:11 InnoDB: Using Linux native AIO 121114 21:55:11 InnoDB: Initializing buffer pool, size = 128.0M 121114 21:55:11 InnoDB: Completed initialization of buffer pool 121114 21:55:11 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 121114 21:55:11 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 121114 21:55:12 InnoDB: Waiting for the background threads to start 121114 21:55:13 InnoDB: 1.1.6 started; log sequence number 77177262 121114 21:55:13 [Note] Event Scheduler: Loaded 0 events 121114 21:55:13 [Note] /usr/libexec/mysqld: ready for connections. Version: '5.5.12' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL) by Remi 121115 00:19:44 mysqld_safe Number of processes running now: 0 121115 00:19:44 mysqld_safe mysqld restarted 121115 0:19:47 [Note] Plugin 'FEDERATED' is disabled. 121115 0:19:47 InnoDB: The InnoDB memory heap is disabled 121115 0:19:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins 121115 0:19:47 InnoDB: Compressed tables use zlib 1.2.3 121115 0:19:47 InnoDB: Using Linux native AIO 121115 0:19:47 InnoDB: Initializing buffer pool, size = 128.0M InnoDB: mmap(137363456 bytes) failed; errno 12 121115 0:19:47 InnoDB: Completed initialization of buffer pool 121115 0:19:47 InnoDB: Fatal error: cannot allocate memory for the buffer pool 121115 0:19:47 [ERROR] Plugin 'InnoDB' init function returned error. 121115 0:19:47 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 121115 0:19:47 [ERROR] Unknown/unsupported storage engine: InnoDB 121115 0:19:47 [ERROR] Aborting Edit #1 total used free shared buffers cached Mem: 496 370 126 0 24 110 -/+ buffers/cache: 234 261 Swap: 1023 9 1014 Edit #2 Also, largest table in my mysql is 20MB, so my the memory used should be pretty moderate. SELECT CONCAT(table_schema, '.', table_name), CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows, CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA, CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx, CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size, ROUND(index_length / data_length, 2) idxfrac FROM information_schema.TABLES ORDER BY data_length + index_length DESC LIMIT 10;
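
    The restart at 00:19 dies on "mmap(137363456 bytes) failed; errno 12 ... cannot allocate memory for the buffer pool", and the box only has about 496 MB of RAM, so a plausible reading is that the kernel OOM killer took mysqld down (hence "Number of processes running now: 0") and the automatic restart then could not grab 128 MB for InnoDB. A hedged first check:

        # Was the OOM killer involved?
        dmesg | grep -i -E 'out of memory|oom'
        grep -i -E 'out of memory|oom' /var/log/messages

    If so, shrinking MySQL's footprint in /etc/my.cnf is the usual mitigation; the sizes below are illustrative guesses for a ~512 MB server, not tuned values:

        [mysqld]
        innodb_buffer_pool_size = 64M
        key_buffer_size         = 16M
        max_connections         = 50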

    Read the article

  • Troubleshooting Windows Explorer

    - by edward.r.cox
    I have a .NET application that receives messages from a web server and alerts the user. On one Windows XP SP3 computer, when my application receives the message, the Windows Explorer shell crashes. (I can see the message received in the listview just before Explorer crashes.) I don't see any errors in the Windows event logs. Is there somewhere I should be looking to troubleshoot this problem?

    Read the article

  • Most common account names used in ssh brute force attacks

    - by Charles Stewart
    Does anyone maintain lists of the most frequently guessed account names that are used by attackers brute-forcing ssh? For your amusement, from my main server's logs over the last month (43 313 failed ssh attempts), with root not getting as far as sshd: cas@txtproof:~$ grep -e sshd /var/log/auth* | awk ' { print $8 }' | sort | uniq -c | sort | tail -n 13 32 administrator 32 stephen 34 administration 34 sales 34 user 35 matt 35 postgres 38 mysql 42 oracle 44 guest 86 test 90 admin 16513 checking
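
    A slightly more targeted variant of the same pipeline: on most distributions guesses for nonexistent accounts are logged as "Invalid user NAME from IP", so pulling the word after "user" avoids miscounting tokens such as "checking" (the log path and field layout are assumptions; adjust for the local syslog format):

        grep 'Invalid user' /var/log/auth.log* \
          | awk '{ for (i = 1; i < NF; i++) if ($i == "user") { print $(i+1); break } }' \
          | sort | uniq -c | sort -rn | head -n 20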

    Read the article

  • httpd dead but subsys locked

    - by McShark
    Hello, today I modified max_execution_time in php.ini; when I restarted the server, I got this error: Stopping httpd: [FAILED] Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down Unable to open logs I killed the httpd processes (killall httpd) and started it fine, but then I can't open any website on the server. service httpd status outputs: httpd dead but subsys locked I removed the httpd file from /var/lock/subsys/ but the problem remains. Please help!
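
    A hedged recovery sequence for this state; it assumes the leftover processes really are orphaned httpd children, so check the netstat output before killing anything, and note the pid file may live at /var/run/httpd/httpd.pid on some releases:

        netstat -tlnp | grep ':80 '            # which PID still owns port 80?
        killall -9 httpd                       # only if that PID is a stale httpd
        rm -f /var/lock/subsys/httpd /var/run/httpd.pid
        service httpd start
        tail -n 50 /var/log/httpd/error_log    # if pages still do not load

    Starting via the init script (rather than apachectl directly) matters here, because "dead but subsys locked" simply means the lock file and the running processes are out of step.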

    Read the article

  • ShareWebDb_log.ldf is 97GB - How to reduce?

    - by pierre
    We run an SBS 2008 server with WSS. On the drive I have set aside for WSS, I'm fast running out of space due to the ShareWebDb_log.ldf file. I've tried to do what I've read online (change the recovery mode, back up and truncate), but I can't actually see how to do this via the SQL Server Management Studio tool. Can anyone shed any light? The database for this file does not show in the list, and I cannot expand Management > SQL Server Logs (because it's too large?).
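
    On SBS 2008 the WSS databases normally live in the Windows Internal Database instance rather than the regular SQL instance, which would explain why the database is not listed; connecting SSMS (run as administrator) to \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query usually makes it appear. A hedged T-SQL sketch of the shrink itself; the logical log file name is assumed to match the physical one, so confirm it first, and take a backup before changing the recovery model:

        USE ShareWebDb;
        SELECT name, type_desc FROM sys.database_files;   -- confirm the logical log name

        ALTER DATABASE ShareWebDb SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (ShareWebDb_log, 1024);           -- target size in MB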

    Read the article

  • How can I find the space used by a SQL Transaction Log?

    - by Sean Earp
    The SQL Server sp_spaceused stored procedure is useful for finding out a database size, unallocated space, etc. However (as far as I can tell), it does not report that information for the transaction log (and looking at database properties within SQL Server Management Studio also does not provide that information for transaction logs). While I can easily find the physical space used by a transaction log by looking at the .ldf file, how can I find out how much of the log file is used and how much is unused?
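
    DBCC SQLPERF(LOGSPACE) reports exactly this: one row per database with the total log size in MB and the percentage of it currently in use:

        DBCC SQLPERF(LOGSPACE);
        -- columns: Database Name, Log Size (MB), Log Space Used (%), Status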

    Read the article

  • Tomcat service start issue

    - by Srinivasan
    Hi, I have installed Tomcat as a service successfully. When I start it from Control Panel > Administrative Tools > Services, it throws an error like this: The Apache Tomcat 4.1 service on local computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service. What is the error?

    Read the article

  • MySQL: Pacemaker cannot start the failed master as a new slave?

    - by quanta
    I'm going to setup failover for MySQL replication (1 master and 1 slave) follow this guide: https://github.com/jayjanssen/Percona-Pacemaker-Resource-Agents/blob/master/doc/PRM-setup-guide.rst Here're the output of crm configure show: node serving-6192 \ attributes p_mysql_mysql_master_IP="192.168.6.192" node svr184R-638.localdomain \ attributes p_mysql_mysql_master_IP="192.168.6.38" primitive p_mysql ocf:percona:mysql \ params config="/etc/my.cnf" pid="/var/run/mysqld/mysqld.pid" socket="/var/lib/mysql/mysql.sock" replication_user="repl" replication_passwd="x" test_user="test_user" test_passwd="x" \ op monitor interval="5s" role="Master" OCF_CHECK_LEVEL="1" \ op monitor interval="2s" role="Slave" timeout="30s" OCF_CHECK_LEVEL="1" \ op start interval="0" timeout="120s" \ op stop interval="0" timeout="120s" primitive writer_vip ocf:heartbeat:IPaddr2 \ params ip="192.168.6.8" cidr_netmask="32" \ op monitor interval="10s" \ meta is-managed="true" ms ms_MySQL p_mysql \ meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" globally-unique="false" target-role="Master" is-managed="true" colocation writer_vip_on_master inf: writer_vip ms_MySQL:Master order ms_MySQL_promote_before_vip inf: ms_MySQL:promote writer_vip:start property $id="cib-bootstrap-options" \ dc-version="1.0.12-unknown" \ cluster-infrastructure="openais" \ expected-quorum-votes="2" \ no-quorum-policy="ignore" \ stonith-enabled="false" \ last-lrm-refresh="1341801689" property $id="mysql_replication" \ p_mysql_REPL_INFO="192.168.6.192|mysql-bin.000006|338" crm_mon: Last updated: Mon Jul 9 10:30:01 2012 Stack: openais Current DC: serving-6192 - partition with quorum Version: 1.0.12-unknown 2 Nodes configured, 2 expected votes 2 Resources configured. ============ Online: [ serving-6192 svr184R-638.localdomain ] Master/Slave Set: ms_MySQL Masters: [ serving-6192 ] Slaves: [ svr184R-638.localdomain ] writer_vip (ocf::heartbeat:IPaddr2): Started serving-6192 Editing /etc/my.cnf on the serving-6192 of wrong syntax to test failover and it's working fine: svr184R-638.localdomain being promoted to become the master writer_vip switch to svr184R-638.localdomain Current state: Last updated: Mon Jul 9 10:35:57 2012 Stack: openais Current DC: serving-6192 - partition with quorum Version: 1.0.12-unknown 2 Nodes configured, 2 expected votes 2 Resources configured. 
============ Online: [ serving-6192 svr184R-638.localdomain ] Master/Slave Set: ms_MySQL Masters: [ svr184R-638.localdomain ] Stopped: [ p_mysql:0 ] writer_vip (ocf::heartbeat:IPaddr2): Started svr184R-638.localdomain Failed actions: p_mysql:0_monitor_5000 (node=serving-6192, call=15, rc=7, status=complete): not running p_mysql:0_demote_0 (node=serving-6192, call=22, rc=7, status=complete): not running p_mysql:0_start_0 (node=serving-6192, call=26, rc=-2, status=Timed Out): unknown exec error Remove the wrong syntax from /etc/my.cnf on serving-6192, and restart corosync, what I would like to see is serving-6192 was started as a new slave but it doesn't: Failed actions: p_mysql:0_start_0 (node=serving-6192, call=4, rc=1, status=complete): unknown error Here're snippet of the logs which I'm suspecting: Jul 09 10:46:32 serving-6192 lrmd: [7321]: info: rsc:p_mysql:0:4: start Jul 09 10:46:32 serving-6192 lrmd: [7321]: info: RA output: (p_mysql:0:start:stderr) Error performing operation: The object/attribute does not exist Jul 09 10:46:32 serving-6192 crm_attribute: [7420]: info: Invoked: /usr/sbin/crm_attribute -N serving-6192 -l reboot --name readable -v 0 The full logs: http://fpaste.org/AyOZ/ The strange thing is I can starting it manually: export OCF_ROOT=/usr/lib/ocf export OCF_RESKEY_config="/etc/my.cnf" export OCF_RESKEY_pid="/var/run/mysqld/mysqld.pid" export OCF_RESKEY_socket="/var/lib/mysql/mysql.sock" export OCF_RESKEY_replication_user="repl" export OCF_RESKEY_replication_passwd="x" export OCF_RESKEY_test_user="test_user" export OCF_RESKEY_test_passwd="x" sh -x /usr/lib/ocf/resource.d/percona/mysql start: http://fpaste.org/RVGh/ Did I make something wrong?
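
    Whatever caused the original start failure, Pacemaker remembers it (the "Failed actions" entries plus a failcount on serving-6192) and will not retry the resource there until that history is cleared, so after fixing my.cnf a cleanup is normally needed as well. A hedged sketch, using the resource and node names from the question:

        crm_mon -1 -f                      # one-shot status including failcounts
        crm resource cleanup p_mysql       # forget the failed start/demote actions
        crm_mon -1                         # serving-6192 should now be retried as a slave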

    Read the article

  • Can't get Monit to work

    - by Andrea
    I am trying to configure Monit on my local machine to get a taste at how it works, but I have some issues. What I am trying to do is to get any evidence that Monit is up and running correctly and is actually monitoring something. So my /etc/monit/monitrc looks like set daemon 60 set logfile /var/log/monit.log set idfile /var/lib/monit/id set statefile /var/lib/monit/state set eventqueue basedir /var/lib/monit/events slots 100 set httpd port 2812 and allow username:password check process apache2 with pidfile /usr/local/apache/logs/apache2.pid start program = "/etc/init.d/apache2 start" stop program = "/etc/init.d/apache2 stop" if failed port 6543 protocol http then exec "/usr/bin/touch /tmp/monit" If I understand correctly, since apache does not listen on port 6543 (it is just a random number) I should get an error, and as a consequence the file /tmp/monit should be created. So I start monit by sudo service monit start sudo monit monitor apache2 Unfortunately no such file is created. Instead the web console shows an error for apache - execution failed. The log says 'apache2' failed to start. What am I doing wrong? EDIT As suggested in the comments, I ran monit in verbose mode, by monit -vv monitor apache2 (the exact command suggested in the comments failed). The output is Runtime constants: Control file = /etc/monit/monitrc Log file = /var/log/monit.log Pid file = /var/run/monit.pid Debug = True Log = True Use syslog = False Is Daemon = True Use process engine = True Poll time = 60 seconds with start delay 0 seconds Expect buffer = 256 bytes Event queue = base directory /var/lib/monit/events with 100 slots Mail from = (not defined) Mail subject = (not defined) Mail message = (not defined) Start monit httpd = True httpd bind address = Any/All httpd portnumber = 2812 httpd signature = True Use ssl encryption = False httpd auth. style = Basic Authentication The service list contains the following entries: Process Name = apache2 Pid file = /usr/local/apache/logs/apache2.pid Monitoring mode = active Start program = '/etc/init.d/apache2 start' timeout 30 second(s) Stop program = '/etc/init.d/apache2 stop' timeout 30 second(s) Existence = if does not exist 1 times within 1 cycle(s) then restart else if succeeded 1 times within 1 cycle(s) then alert Pid = if changed 1 times within 1 cycle(s) then alert Ppid = if changed 1 times within 1 cycle(s) then alert Port = if failed localhost:6543 [HTTP via TCP] with timeout 5 seconds 1 times within 1 cycle(s) then exec '/usr/bin/touch /tmp/prova-monit' timeout 0 cycle(s) else if succeeded 1 times within 1 cycle(s) then alert System Name = system_andrea-Vostro-420-Series Monitoring mode = active
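
    One detail worth checking first: the check uses /usr/local/apache/logs/apache2.pid, but on a stock Ubuntu/Debian apache2 the pid file normally lives under /var/run (e.g. /var/run/apache2.pid). With a wrong pidfile Monit always believes the process is down, keeps invoking the start program, and reports "failed to start" even though Apache is running. A hedged variant of the check; the pidfile path is an assumption, so confirm it against the PidFile setting or the contents of /var/run:

        check process apache2 with pidfile /var/run/apache2.pid
            start program = "/etc/init.d/apache2 start"
            stop program  = "/etc/init.d/apache2 stop"
            # the deliberately wrong port stays, so the exec action should
            # fire and create /tmp/monit once the check itself is healthy
            if failed port 6543 protocol http then exec "/usr/bin/touch /tmp/monit"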

    Read the article

  • Adding tail behaviour where enter adds blank lines to less

    - by gonvaled
    I love less, which I can use to follow logs with the +F flag (or the Shift-F hotkey), search forwards and backwards, and generally move freely through the document. But there is one thing missing in less: usually I am at the end of the file, and I want to see new things happening. In tail -f I would just hit enter several times, and new log lines would just appear clearly separated from old lines. Is it possible to add this to less? How?

    Read the article

  • How to get a screensaver at the Windows 7 login screen?

    - by AlexV
    I'm using Windows 7 Ultimate 64-bit. For the past couple of weeks, I no longer have a screensaver on the Windows login screen. If a user logs in and then locks his session, the screensaver will start eventually, but if no user is logged in the login screen will never start the screensaver. Anyone know how to set a screensaver there?
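
    The logon desktop takes its screensaver settings from the .DEFAULT hive rather than from any user's profile, so re-enabling it is usually a matter of restoring three values there. A hedged sketch from an elevated command prompt; the .scr path and the 600-second timeout are just examples:

        reg add "HKEY_USERS\.DEFAULT\Control Panel\Desktop" /v ScreenSaveActive /t REG_SZ /d 1 /f
        reg add "HKEY_USERS\.DEFAULT\Control Panel\Desktop" /v ScreenSaveTimeOut /t REG_SZ /d 600 /f
        reg add "HKEY_USERS\.DEFAULT\Control Panel\Desktop" /v SCRNSAVE.EXE /t REG_SZ /d C:\Windows\system32\scrnsave.scr /f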

    Read the article

  • How to run Django 1.3/1.4 on uWSGI on nginx on EC2 (Apache2 works)

    - by Tadeck
    I am posting a question on behalf of my administrator. Basically he wants to set up Django app (made on Django 1.3, but will be moving to Django 1.4, so it should not really matter which one of these two will work, I hope) on WSGI on nginx, installed on Amazon EC2. The app runs correctly when Django's development server is used (with ./manage.py runserver 0.0.0.0:8080 for example), also Apache works correctly. The only problem is with nginx and it looks there is something else wrong with nginx / WSGI or Django configuration. His description is as follows: Server has been configured according to many tutorials, but unfortunately Nginx and uWSGI still do not work with application. ProjectName.py: import os, sys, wsgi os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProjectName.settings") from django.core.wsgi import get_wsgi_application application = get_wsgi_application() I run uWSGI by comand: uwsgi -x /etc/uwsgi/apps-enabled/projectname.xml XML file: <uwsgi> <chdir>/home/projectname</chdir> <pythonpath>/usr/local/lib/python2.7</pythonpath> <socket>127.0.0.1:8001</socket> <daemonize>/var/log/uwsgi/proJectname.log</daemonize> <processes>1</processes> <uid>33</uid> <gid>33</gid> <enable-threads/> <master/> <vacuum/> <harakiri>120</harakiri> <max-requests>5000</max-requests> <vhost/> </uwsgi> In logs from uWSGI: *** no app loaded. going in full dynamic mode *** In logs from Nginx: XXX.com [pid: XXX|app: -1|req: -1/1] XXX.XXX.XXX.XXX () {48 vars in 989 bytes} [Date] GET / => generated 46 bytes in 77 m secs (HTTP/1.1 500) 2 headers in 63 bytes (0 switches on core 0) added /usr/lib/python2.7/ to pythonpath. Traceback (most recent call last): File "./ProjectName.py", line 26, in <module> from django.core.wsgi import get_wsgi_application ImportError: No module named wsgi unable to load app SCRIPT_NAME=XXX.com| Example tutorials that were used: http://projects.unbit.it/uwsgi/wiki/RunOnNginx https://docs.djangoproject.com/en/1.4/howto/deployment/wsgi/ Do you have any idea what has been done incorrectly, or what should be done to make Django work on uWSGI on nginx on EC2?
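
    Two things stand out in the logs. "no app loaded. going in full dynamic mode" means the XML never tells uWSGI which WSGI module to serve, and "ImportError: No module named wsgi" means the interpreter uWSGI runs is still seeing Django 1.3 (django.core.wsgi only exists from 1.4 on), whereas manage.py runserver may be using a different Python or virtualenv. A hedged sketch of the config with the missing module option; the paths and module name follow the question's layout, so treat them as assumptions:

        <uwsgi>
            <chdir>/home/projectname</chdir>
            <pythonpath>/home/projectname</pythonpath>
            <!-- without a module/wsgi-file option uwsgi starts with no app loaded -->
            <!-- "ProjectName" here means the ProjectName.py shown above; adjust to
                 wherever the WSGI file actually lives -->
            <module>ProjectName:application</module>
            <socket>127.0.0.1:8001</socket>
            <master/>
            <processes>1</processes>
        </uwsgi>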

    Read the article

  • Corrupted version of WordPad

    - by Mike GH
    Somehow, in trying to clean up my computer's hard drive, I have inadvertently corrupted WordPad, which I have long used for viewing many types of files, including logs (i.e., log4j). Now when I right-click one of these files and choose "open with" and WordPad, I get an error (fn is not a valid Win32 application). I've researched this issue and tried reinstalling wordpad.exe, but I still get the same error. Any suggestions would be appreciated.
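
    Since the error suggests the wordpad.exe binary itself is damaged (or zero bytes), letting System File Checker restore the original copy is a reasonable first step; run it from an elevated command prompt, and note it may ask for installation media on older systems (the path in the second line is the usual Vista/7-era location, an assumption here):

        sfc /scannow
        dir "%ProgramFiles%\Windows NT\Accessories\wordpad.exe"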

    Read the article
