Search Results

Search found 7542 results on 302 pages for 'debug backtrace'.


  • Heartbeat won't successfully start up resources from a cold boot when a failed node is present

    - by Matthew
    I currently have two ubuntu servers running Heartbeat and DRBD. The servers are directory connected with a 1000Mbps crossover cable on eth1 and have access to an IP camera LAN on eth0. Now, let's say that one node is down and the remaining functional node is booting after having been shut down. The node that is still functioning won't start up heartbeat and provide access to the drbd resource from a cold boot. I have to manually restart heartbeat by sudo service heartbeat restart to get everything up and running. How can I get it to start fine from a cold start, when only one server is present? Here is the ha.cf: debug /var/log/ha-debug logfile /var/log/ha-log logfacility none keepalive 2 deadtime 10 warntime 7 initdead 60 ucast eth1 192.168.2.2 ucast eth0 10.1.10.201 node EMserver1 node EMserver2 respawn hacluster /usr/lib/heartbeat/ipfail ping 10.1.10.22 10.1.10.21 10.1.10.11 auto_failback off Some material from the syslog: harc[4604]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/status status mach_down[4632]: 2012/11/27_13:54:49 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired mach_down[4632]: 2012/11/27_13:54:49 info: mach_down takeover complete for node emserver2. Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: Initial resource acquisition complete (T_RESOURCES(us)) Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: mach_down takeover complete. IPaddr[4679]: 2012/11/27_13:54:49 INFO: Resource is stopped Nov 27 13:54:49 EMserver1 heartbeat: [4605]: info: Local Resource acquisition completed. harc[4713]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp ip-request-resp[4713]: 2012/11/27_13:54:49 received ip-request-resp IPaddr::10.1.10.254 OK yes ResourceManager[4732]: 2012/11/27_13:54:50 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server IPaddr[4759]: 2012/11/27_13:54:50 INFO: Resource is stopped ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 start IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated nic for 10.1.10.254: eth0 IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated netmask for 10.1.10.254: 255.255.255.0 IPaddr[4816]: 2012/11/27_13:54:50 INFO: eval ifconfig eth0:0 10.1.10.254 netmask 255.255.255.0 broadcast 10.1.10.255 IPaddr[4804]: 2012/11/27_13:54:50 INFO: Success ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/drbddisk r0 start Filesystem[4965]: 2012/11/27_13:54:50 INFO: Resource is stopped ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 start Filesystem[5039]: 2012/11/27_13:54:50 INFO: Running start for /dev/drbd1 on /shr Filesystem[5033]: 2012/11/27_13:54:51 INFO: Success ResourceManager[4732]: 2012/11/27_13:54:51 info: Running /etc/init.d/nfs-kernel-server start Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: Local Resource acquisition completed. (none) Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: local resource transition completed. Nov 27 13:57:46 EMserver1 heartbeat: [4586]: info: Heartbeat shutdown in progress. (4586) Nov 27 13:57:46 EMserver1 heartbeat: [5286]: info: Giving up all HA resources. 
ResourceManager[5301]: 2012/11/27_13:57:46 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/init.d/nfs-kernel-server stop ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop Filesystem[5372]: 2012/11/27_13:57:46 INFO: Running stop for /dev/drbd1 on /shr Filesystem[5372]: 2012/11/27_13:57:47 INFO: Trying to unmount /shr Filesystem[5372]: 2012/11/27_13:57:47 INFO: unmounted /shr successfully Filesystem[5366]: 2012/11/27_13:57:47 INFO: Success ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/drbddisk r0 stop ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop IPaddr[5509]: 2012/11/27_13:57:47 INFO: ifconfig eth0:0 down IPaddr[5497]: 2012/11/27_13:57:47 INFO: Success Nov 27 13:57:47 EMserver1 heartbeat: [5286]: info: All HA resources relinquished. Nov 27 13:57:48 EMserver1 heartbeat: [4586]: info: killing /usr/lib/heartbeat/ipfail process group 4603 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBFIFO process 4589 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4590 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4591 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4592 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4593 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4594 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4595 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4596 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4597 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4598 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4599 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4589 exited. 11 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4596 exited. 10 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4598 exited. 9 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4590 exited. 8 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4595 exited. 7 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4591 exited. 6 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4592 exited. 5 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4593 exited. 4 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4597 exited. 3 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4594 exited. 2 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4599 exited. 1 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: emserver1 Heartbeat shutdown complete. 
Here is some more from the log ResourceManager[2576]: 2012/11/28_16:32:42 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server IPaddr[2602]: 2012/11/28_16:32:42 INFO: Running OK Filesystem[2653]: 2012/11/28_16:32:43 INFO: Running OK Nov 28 16:32:52 EMserver1 heartbeat: [1695]: WARN: node emserver2: is dead Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Dead node emserver2 gave up resources. Nov 28 16:32:52 EMserver1 ipfail: [1807]: info: Status update: Node emserver2 now has status dead Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Link emserver2:eth1 dead. Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: NS: We are still alive! Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: Link Status update: Link emserver2/eth1 now has status dead Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Asking other side for ping node count. Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Checking remote count of ping nodes. Nov 28 16:32:57 EMserver1 heartbeat: [1695]: info: Heartbeat shutdown in progress. (1695) Nov 28 16:32:57 EMserver1 heartbeat: [2734]: info: Giving up all HA resources. ResourceManager[2751]: 2012/11/28_16:32:57 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/init.d/nfs-kernel-server stop ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop Filesystem[2829]: 2012/11/28_16:32:57 INFO: Running stop for /dev/drbd1 on /shr Filesystem[2829]: 2012/11/28_16:32:57 INFO: Trying to unmount /shr Filesystem[2829]: 2012/11/28_16:32:58 INFO: unmounted /shr successfully Filesystem[2823]: 2012/11/28_16:32:58 INFO: Success ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/drbddisk r0 stop ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop IPaddr[2971]: 2012/11/28_16:32:58 INFO: ifconfig eth0:0 down IPaddr[2958]: 2012/11/28_16:32:58 INFO: Success Nov 28 16:32:58 EMserver1 heartbeat: [2734]: info: All HA resources relinquished. Nov 28 16:32:59 EMserver1 heartbeat: [1695]: info: killing /usr/lib/heartbeat/ipfail process group 1807 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBFIFO process 1777 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1778 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1779 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1780 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1781 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1782 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1783 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1784 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1785 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1786 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1787 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1778 exited. 11 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1779 exited. 
10 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1780 exited. 9 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1781 exited. 8 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1782 exited. 7 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1783 exited. 6 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1784 exited. 5 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1785 exited. 4 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1786 exited. 3 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1787 exited. 2 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1777 exited. 1 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: emserver1 Heartbeat shutdown complete. If I restarted heartbeat at this point... the resources heartbeat controls would start up fine.... please help!
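
    For anyone hitting the same symptom, a reasonable first check is whether heartbeat and drbd are actually registered to start at boot (and in the right order), and how long the startup scripts are willing to wait for the absent peer. A rough sketch of what to look at, assuming Ubuntu's SysV init layout:

        sudo update-rc.d heartbeat defaults         # make sure the init links exist at all
        ls /etc/rc2.d | grep -E 'drbd|heartbeat'    # drbd should come up before heartbeat
        grep initdead /etc/ha.d/ha.cf               # initdead 60 delays takeover for 60s after boot
        grep -A3 startup /etc/drbd.conf             # wfc-timeout / degr-wfc-timeout control how long
                                                    # boot blocks waiting for the dead peer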

  • Key-Based SSH Permission denied (publickey) Ubuntu 12-04

    - by user125176
    I have configured sshd to accept key-based ssh logins with LogLevel on DEBUG, and uploaded my public key to ~/.ssh/authorized_keys, where permissions are set as: 700 ~/.ssh 600 ~/.ssh/authorized_keys From root, I can su - USERNAME. From the client I get Permission denied (publickey). On the server side, the log shows: "Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys': Permission denied". Client protocol version 2.0; client software version OpenSSH_5.2 match: OpenSSH_5.2 pat OpenSSH* Enabling compatibility mode for protocol 2.0 Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 permanently_set_uid: 105/65534 [preauth] list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 [preauth] SSH2_MSG_KEXINIT sent [preauth] SSH2_MSG_KEXINIT received [preauth] kex: client->server aes128-ctr hmac-md5 none [preauth] kex: server->client aes128-ctr hmac-md5 none [preauth] SSH2_MSG_KEX_DH_GEX_REQUEST received [preauth] SSH2_MSG_KEX_DH_GEX_GROUP sent [preauth] expecting SSH2_MSG_KEX_DH_GEX_INIT [preauth] SSH2_MSG_KEX_DH_GEX_REPLY sent [preauth] SSH2_MSG_NEWKEYS sent [preauth] expecting SSH2_MSG_NEWKEYS [preauth] SSH2_MSG_NEWKEYS received [preauth] KEX done [preauth] userauth-request for user USERNAME service ssh-connection method none [preauth] attempt 0 failures 0 [preauth] PAM: initializing for "USERNAME" PAM: setting PAM_RHOST to "USERHOSTNAME" PAM: setting PAM_TTY to "ssh" userauth_send_banner: sent [preauth] userauth-request for user USERNAME service ssh-connection method publickey [preauth] attempt 1 failures 0 [preauth] test whether pkalg/pkblob are acceptable [preauth] Checking blacklist file /usr/share/ssh/blacklist.RSA-4096 Checking blacklist file /etc/ssh/blacklist.RSA-4096 temporarily_use_uid: 1001/1002 (e=0/0) trying public key file /home/USERNAME/.ssh/authorized_keys Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys': Permission denied restore_uid: 0/0 temporarily_use_uid: 1001/1002 (e=0/0) trying public key file /home/USERNAME/.ssh/authorized_keys2 Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys2': Permission denied restore_uid: 0/0 Failed publickey for USERNAME from IPADDRESS port 57523 ssh2 Connection closed by IPADDRESS [preauth] do_cleanup [preauth] monitor_read_log: child log fd closed do_cleanup PAM: cleanup
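
    Because sshd temporarily drops to the user's uid before reading the key file (and, with StrictModes, refuses group- or world-writable paths), this error usually traces back to ownership of ~/.ssh or to the permissions on the home directory itself rather than to the key. A quick sketch of the usual checks, with the username as a placeholder:

        ls -ld /home/USERNAME /home/USERNAME/.ssh /home/USERNAME/.ssh/authorized_keys
        sudo chown -R USERNAME:USERNAME /home/USERNAME/.ssh
        chmod 755 /home/USERNAME                       # must not be group/world writable
        chmod 700 /home/USERNAME/.ssh
        chmod 600 /home/USERNAME/.ssh/authorized_keys
        # on Ubuntu 12.04, an encrypted (ecryptfs) home is another common cause, since the
        # directory is unreadable until the user has logged in at least once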

  • How to get remote firewall administration working with Windows Server Core 2008 R2?

    - by Daniel15
    I'm setting up a Windows Server Core 2008 R2 installation in a VMware virtual machine before setting it up on a live VPS. I've gotten remote administration via MMC working on my computer (a PC running Windows 7) for things like event logs, but I can't seem to get the firewall administration working. No matter what I do, I get the following error message: You do not have the correct permissions to open the Windows Firewall with Advanced Security console. You must be a member of the Administrators group or the Network Operators group to perform this task. For more information, contact your system administrator. Error code: 0x5. I've used cmdkey to add valid server credentials on my computer, and enabled remote management with the following commands: netsh advfirewall firewall set rule group="remote administration" new enable=yes netsh advfirewall firewall set rule group="windows firewall remote management" new enable=yes netsh advfirewall set currentprofile settings remotemanagement enable I am not running on a domain (just a workgroup), and this is the only Windows Server 2008 computer I have. I've tried turning off the firewall completely, but remote administration still fails. How do I debug this issue? Does anyone know how to fix it? I found a few forum topics about it (e.g. Remotely managing Windows Firewall on Server Core gives access denied (error 0x5) on Windows Server TechCenter) but they didn't help (I've already tried most of the fixes listed).
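
    On a workgroup (non-domain) target, one frequently suggested culprit is UAC remote restrictions stripping the administrative token from remote local-account logons; the registry switch below is the workaround that usually gets mentioned. It is offered here as a sketch to try on the Server Core box, not a guaranteed fix:

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f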

  • ip-up does not trigger when using built-in cisco vpn on mac osx lion

    - by Yasser Sobhdel
    I am using the Cisco VPN client on Lion and I want to make ip-up and ip-down work. There is no sign of any action being taken when I connect or disconnect this VPN connection. I doubt whether the syntax has changed, or even whether this kind of connection triggers ip-up at all. Logically it should go through ppp, but when using the scripts and instructions from the following pages, there is no sign of any output in the log file: http://www.macfreek.nl/mindmaster/Modify_PPTP_Routing_Table http://www.aidanfindlater.com/use-vpn-for-specific-sites-on-mac-os-x Looking for errors (there is no sign of any) using the following page: http://hints.macworld.com/article.php?story=20060616150640529 I couldn't find the /var/log/ppp/vpnd.log log file. The scripts have also been given full permissions (0755, a+x, even 777) using, for example: sudo chmod a+x /etc/ppp/ip-up Any clue on how to debug this would be appreciated. I am totally confused: netstat -rn -f inet doesn't show the routes, and even when the routes are added manually, closing the VPN connection does not run ip-down, so the routes must be deleted manually.
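
    Before digging further it may help to confirm two things: whether a bare-bones /etc/ppp/ip-up fires at all, and whether the built-in Cisco IPsec client creates a ppp interface in the first place (it is racoon-based IPsec rather than PPP, so the /etc/ppp scripts may simply never be invoked for this connection type). A minimal test, log path arbitrary:

        #!/bin/sh
        # /etc/ppp/ip-up -- log every invocation and its arguments
        /bin/echo "$(date) ip-up called with: $*" >> /tmp/ip-up-test.log

        # after connecting, check what actually came up:
        ifconfig                # look for a new pppN/utunN interface
        ps aux | grep racoon    # the native Cisco IPsec client runs through racoon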

  • Cisco 891w multiple VLAN configuration

    - by Jessica
    I'm having trouble getting my guest network up. I have VLAN 1 that contains all our network resources (servers, desktops, printers, etc). I have the wireless configured to use VLAN1 but authenticate with wpa2 enterprise. The guest network I just wanted to be open or configured with a simple WPA2 personal password on it's own VLAN2. I've looked at tons of documentation and it should be working but I can't even authenticate on the guest network! I've posted this on cisco's support forum a week ago but no one has really responded. I could really use some help. So if anyone could take a look at the configurations I posted and steer me in the right direction I would be extremely grateful. Thank you! version 15.0 service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption ! hostname ESI ! boot-start-marker boot-end-marker ! logging buffered 51200 warnings ! aaa new-model ! ! aaa authentication login userauthen local aaa authorization network groupauthor local ! ! ! ! ! aaa session-id common ! ! ! clock timezone EST -5 clock summer-time EDT recurring service-module wlan-ap 0 bootimage autonomous ! crypto pki trustpoint TP-self-signed-3369945891 enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-3369945891 revocation-check none rsakeypair TP-self-signed-3369945891 ! ! crypto pki certificate chain TP-self-signed-3369945891 certificate self-signed 01 (cert is here) quit ip source-route ! ! ip dhcp excluded-address 192.168.1.1 ip dhcp excluded-address 192.168.1.5 ip dhcp excluded-address 192.168.1.2 ip dhcp excluded-address 192.168.1.200 192.168.1.210 ip dhcp excluded-address 192.168.1.6 ip dhcp excluded-address 192.168.1.8 ip dhcp excluded-address 192.168.3.1 ! ip dhcp pool ccp-pool import all network 192.168.1.0 255.255.255.0 default-router 192.168.1.1 dns-server 10.171.12.5 10.171.12.37 lease 0 2 ! ip dhcp pool guest import all network 192.168.3.0 255.255.255.0 default-router 192.168.3.1 dns-server 10.171.12.5 10.171.12.37 ! ! ip cef no ip domain lookup no ipv6 cef ! ! multilink bundle-name authenticated license udi pid CISCO891W-AGN-A-K9 sn FTX153085WL ! ! username ESIadmin privilege 15 secret 5 $1$g1..$JSZ0qxljZAgJJIk/anDu51 username user1 password 0 pass ! ! ! class-map type inspect match-any ccp-cls-insp-traffic match protocol cuseeme match protocol dns match protocol ftp match protocol h323 match protocol https match protocol icmp match protocol imap match protocol pop3 match protocol netshow match protocol shell match protocol realmedia match protocol rtsp match protocol smtp match protocol sql-net match protocol streamworks match protocol tftp match protocol vdolive match protocol tcp match protocol udp class-map type inspect match-all ccp-insp-traffic match class-map ccp-cls-insp-traffic class-map type inspect match-any ccp-cls-icmp-access match protocol icmp class-map type inspect match-all ccp-invalid-src match access-group 100 class-map type inspect match-all ccp-icmp-access match class-map ccp-cls-icmp-access class-map type inspect match-all ccp-protocol-http match protocol http ! ! policy-map type inspect ccp-permit-icmpreply class type inspect ccp-icmp-access inspect class class-default pass policy-map type inspect ccp-inspect class type inspect ccp-invalid-src drop log class type inspect ccp-protocol-http inspect class type inspect ccp-insp-traffic inspect class class-default drop policy-map type inspect ccp-permit class class-default drop ! 
zone security out-zone zone security in-zone zone-pair security ccp-zp-self-out source self destination out-zone service-policy type inspect ccp-permit-icmpreply zone-pair security ccp-zp-in-out source in-zone destination out-zone service-policy type inspect ccp-inspect zone-pair security ccp-zp-out-self source out-zone destination self service-policy type inspect ccp-permit ! ! crypto isakmp policy 1 encr 3des authentication pre-share group 2 ! crypto isakmp client configuration group 3000client key 67Nif8LLmqP_ dns 10.171.12.37 10.171.12.5 pool dynpool acl 101 ! ! crypto ipsec transform-set myset esp-3des esp-sha-hmac ! crypto dynamic-map dynmap 10 set transform-set myset ! ! crypto map clientmap client authentication list userauthen crypto map clientmap isakmp authorization list groupauthor crypto map clientmap client configuration address initiate crypto map clientmap client configuration address respond crypto map clientmap 10 ipsec-isakmp dynamic dynmap ! ! ! ! ! interface FastEthernet0 ! ! interface FastEthernet1 ! ! interface FastEthernet2 ! ! interface FastEthernet3 ! ! interface FastEthernet4 ! ! interface FastEthernet5 ! ! interface FastEthernet6 ! ! interface FastEthernet7 ! ! interface FastEthernet8 ip address dhcp ip nat outside ip virtual-reassembly duplex auto speed auto ! ! interface GigabitEthernet0 description $FW_OUTSIDE$$ES_WAN$ ip address 10...* 255.255.254.0 ip nat outside ip virtual-reassembly zone-member security out-zone duplex auto speed auto crypto map clientmap ! ! interface wlan-ap0 description Service module interface to manage the embedded AP ip unnumbered Vlan1 arp timeout 0 ! ! interface Wlan-GigabitEthernet0 description Internal switch interface connecting to the embedded AP switchport trunk allowed vlan 1-3,1002-1005 switchport mode trunk ! ! interface Vlan1 description $ETH-SW-LAUNCH$$INTF-INFO-FE 1$$FW_INSIDE$ ip address 192.168.1.1 255.255.255.0 ip nat inside ip virtual-reassembly zone-member security in-zone ip tcp adjust-mss 1452 crypto map clientmap ! ! interface Vlan2 description guest ip address 192.168.3.1 255.255.255.0 ip access-group 120 in ip nat inside ip virtual-reassembly zone-member security in-zone ! ! interface Async1 no ip address encapsulation slip ! ! ip local pool dynpool 192.168.1.200 192.168.1.210 ip forward-protocol nd ip http server ip http access-class 23 ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ! ! ip dns server ip nat inside source list 23 interface GigabitEthernet0 overload ip route 0.0.0.0 0.0.0.0 10.165.0.1 ! access-list 23 permit 192.168.1.0 0.0.0.255 access-list 100 remark CCP_ACL Category=128 access-list 100 permit ip host 255.255.255.255 any access-list 100 permit ip 127.0.0.0 0.255.255.255 any access-list 100 permit ip 10.165.0.0 0.0.1.255 any access-list 110 permit ip 192.168.0.0 0.0.5.255 any access-list 120 remark ESIGuest Restriction no cdp run ! ! ! ! ! ! control-plane ! ! alias exec dot11radio service-module wlan-ap 0 session Access point version 12.4 no service pad service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption ! hostname ESIRouter ! no logging console enable secret 5 $1$yEH5$CxI5.9ypCBa6kXrUnSuvp1 ! aaa new-model ! ! aaa group server radius rad_eap server 192.168.1.5 auth-port 1812 acct-port 1813 ! aaa group server radius rad_acct server 192.168.1.5 auth-port 1812 acct-port 1813 ! 
aaa authentication login eap_methods group rad_eap aaa authentication enable default line enable aaa authorization exec default local aaa authorization commands 15 default local aaa accounting network acct_methods start-stop group rad_acct ! aaa session-id common clock timezone EST -5 clock summer-time EDT recurring ip domain name ESI ! ! dot11 syslog dot11 vlan-name one vlan 1 dot11 vlan-name two vlan 2 ! dot11 ssid one vlan 1 authentication open eap eap_methods authentication network-eap eap_methods authentication key-management wpa version 2 accounting rad_acct ! dot11 ssid two vlan 2 authentication open guest-mode ! dot11 network-map ! ! username ESIadmin privilege 15 secret 5 $1$p02C$WVHr5yKtRtQxuFxPU8NOx. ! ! bridge irb ! ! interface Dot11Radio0 no ip address no ip route-cache ! encryption vlan 1 mode ciphers aes-ccm ! broadcast-key vlan 1 change 30 ! ! ssid one ! ssid two ! antenna gain 0 station-role root ! interface Dot11Radio0.1 encapsulation dot1Q 1 native no ip route-cache bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding bridge-group 1 spanning-disabled ! interface Dot11Radio0.2 encapsulation dot1Q 2 no ip route-cache bridge-group 2 bridge-group 2 subscriber-loop-control bridge-group 2 block-unknown-source no bridge-group 2 source-learning no bridge-group 2 unicast-flooding bridge-group 2 spanning-disabled ! interface Dot11Radio1 no ip address no ip route-cache shutdown ! encryption vlan 1 mode ciphers aes-ccm ! broadcast-key vlan 1 change 30 ! ! ssid one ! antenna gain 0 dfs band 3 block channel dfs station-role root ! interface Dot11Radio1.1 encapsulation dot1Q 1 native no ip route-cache bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding bridge-group 1 spanning-disabled ! interface GigabitEthernet0 description the embedded AP GigabitEthernet 0 is an internal interface connecting AP with the host router no ip address no ip route-cache ! interface GigabitEthernet0.1 encapsulation dot1Q 1 native no ip route-cache bridge-group 1 no bridge-group 1 source-learning bridge-group 1 spanning-disabled ! interface GigabitEthernet0.2 encapsulation dot1Q 2 no ip route-cache bridge-group 2 no bridge-group 2 source-learning bridge-group 2 spanning-disabled ! interface BVI1 ip address 192.168.1.2 255.255.255.0 no ip route-cache ! ip http server no ip http secure-server ip http help-path http://www.cisco.com/warp/public/779/smbiz/prodconfig/help/eag access-list 10 permit 192.168.1.0 0.0.0.255 radius-server host 192.168.1.5 auth-port 1812 acct-port 1813 key ***** bridge 1 route ip
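
    One thing that stands out in configs like this is that the guest SSID ("two") is open with no key management, and only VLAN 1 has a cipher mapped on the radio. A hedged IOS sketch of what a WPA2-personal guest SSID usually needs on the AP side (the pre-shared key is a placeholder; adapt before use):

        dot11 ssid two
           vlan 2
           authentication open
           authentication key-management wpa version 2
           wpa-psk ascii 0 GuestPassw0rd
        !
        interface Dot11Radio0
           encryption vlan 2 mode ciphers aes-ccm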

  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because of its blazing insert performance, its document-oriented model, and its ability to let the engine drop inserts when needed for performance. But there is this big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150gb of raw data (we currently run on a system with 16gb RAM). So, for us to have 500gb or a tb of data, we would need 50gb or 80gb of RAM. Yes, I know this is possible. We can add servers and use mongo sharding. We can buy a special server box that can take 100 or 200 gb of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?) Bottom line: MongoDB is FOSS, but we have to spend $$$$$$$ on hardware to run it? We would rather buy commercial software! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!

  • Apache not Forwarding Client x509 Certificate to Tomcat via mod_proxy

    - by hooknc
    Hi Everyone, I am having difficulties getting a client x509 certificate to be forwarded to Tomcat from Apache using mod_proxy. From observations and reading a few logs it does seem as though the client x509 certificate is being accepted by Apache. But, when Apache makes an SSL request to Tomcat (which has clientAuth="want"), it doesn't look like the client x509 certificate is passed during the ssl handshake. Is there a reasonable way to see what Apache is doing with the client x509 certificate during its handshake with Tomcat? Here is the environment I'm working with: Apache/2.2.3 Tomcat/6.0.29 Java/6.0_23 OpenSSL 0.9.8e Here is my Apache VirtualHost SSL config: <VirtualHost xxx.xxx.xxx.xxx:443> ServerName xxx ServerAlias xxx SSLEngine On SSLProxyEngine on ProxyRequests Off ProxyPreserveHost On ErrorLog logs/ssl_error_log TransferLog logs/ssl_access_log LogLevel debug SSLProtocol all -SSLv2 SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW SSLCertificateFile /usr/local/certificates/xxx.crt SSLCertificateKeyFile /usr/local/certificates/xxx.key SSLCertificateChainFile /usr/local/certificates/xxx.crt SSLVerifyClient optional_no_ca SSLOptions +ExportCertData CustomLog logs/ssl_request_log \ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" <Proxy *> AddDefaultCharset Off Order deny,allow Allow from all </Proxy> ProxyPass / https://xxx.xxx.xxx.xxx:8443/ ProxyPassReverse / https://xxx.xxx.xxx.xxx:8443/ </VirtualHost> Then here is my Tomcat SSL Connector: <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" address="xxx.xxx.xxx.xxx" maxThreads="150" scheme="https" secure="true" keystoreFile="/usr/local/certificates/xxx.jks" keypass="xxx_pwd" clientAuth="want" sslProtocol="TLSv1" proxyName="xxx.xxx.xxx.xxx" proxyPort="443" /> Could there possibly be issues with SSL Renegotiation? Could there be problems with the Truststore in our Tomcat instance? (We are using a non-standard Truststore that has partner organization CAs.) Is there better logging for what is happening internally with Apache for SSL? Like what is happening to the client cert or why it isn't forwarding the certificate when tomcats asks for one? Any reasonable assistance would be greatly appreciated. Thank you for your time.
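
    A point worth keeping in mind here: mod_proxy opens its own fresh TLS connection to Tomcat and does not hold the client's private key, so it cannot re-present the client certificate during that second handshake; clientAuth="want" on the Tomcat connector will therefore never see it. The common workaround is to forward the certificate as a request header instead. A sketch, assuming mod_headers is loaded; the header name is arbitrary:

        # inside the VirtualHost, alongside SSLOptions +ExportCertData
        RequestHeader set X-SSL-Client-Cert "%{SSL_CLIENT_CERT}s"

    Tomcat (or the web application behind it) would then need to read and validate that header itself, for example via a valve or servlet filter, rather than relying on the connector's clientAuth.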

  • Drive XML returning Windows Volume Shadow Service Error

    - by Ssvarc
    I'm trying to image a SATA laptop hard drive, attached to my computer via a USB adapter, using DriveImageXML. I'm running Win7 Ultimate 64 bit. DriveImageXML is returning: Could not initialize Windows Volume Shadow Service (VSS). ERROR C:\Program Files (x86)\Runtime Software\DriveImage XML\vss64.exe failed to start. ERROR TIMEOUT Make sure VSSVC.EXE is running in your task manager. Click Help for more information. VSSVC.EXE is running in Task Manager, as is VSS64.exe. Looking at the FAQ on the Runtime webpage this turned up: Please verify in Settings-Control Panel-Administrative Tools-Services that the following services are enabled: MS Software Shadow Copy Provider, Volume Shadow Copy. Also make sure you are able to stop and start these services. Possible reasons for VSS failures: For VSS to work, at least one volume in your computer must be NTFS. If you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image. You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help. Both of those services are running and can be started and stopped without issue. Using "regsvr32" to register "oleaut32.dll" succeeds, but "oleaut.dll" returns: The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found. Some other information that might be relevant: browsing to the drive is successful, but accessing certain folders returns an "access" error. Windows runs a permissions adder that adds the current user profile to the NTFS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?
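
    When VSS acts up, the vssadmin output usually shows whether the providers and writers are healthy before any imaging tool gets involved; a short check from an elevated command prompt:

        vssadmin list providers
        vssadmin list writers
        vssadmin list shadowstorage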

  • IPMI not functioning with Network Bonding

    - by muhammed sameer
    Hey, I am having problems with running IPMI on my servers that have network bonding enabled. Platform: CentOS release 5.3 (Final) Kernel: 2.6.18-92.el5 64bit Dell PowerEdge 1950 Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet I have bonded the interface eth0 and eth1 as active passive, with eth0 as the active interface, below is conf description from /proc Bonding Mode: fault-tolerance (active-backup) Primary Slave: eth0 Currently Active Slave: eth0 MII Status: up MII Polling Interval (ms): 30 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth0 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:22:19:56:b9:cd Slave Interface: eth1 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:22:19:56:b9:cf My IPMI device is as follows IPMI Device Information Interface Type: KCS (Keyboard Control Style) Specification Version: 2.0 I2C Slave Address: 0x10 NV Storage Device: Not Present Base Address: 0x0000000000000CA8 (I/O) Register Spacing: 32-bit Boundaries I Have used openIPMI as well as freeipmi both to control the chassis via the IPMI card, but on servers which have bonding enabled, the command times out, below is the full run of the command with debug info. ipmi_lan_send_cmd:opened=[0], open=[4482848] IPMI LAN host 70.87.28.115 port 623 Sending IPMI/RMCP presence ping packet ipmi_lan_send_cmd:opened=[1], open=[4482848] No response from remote controller Get Auth Capabilities command failed ipmi_lan_send_cmd:opened=[1], open=[4482848] No response from remote controller Get Auth Capabilities command failed Error: Unable to establish LAN session Failed to open LAN interface Unable to get Chassis Power Status On the other hand I configured IPMI on a box with the same specs as mentioned above without bonding and IPMI works perfectly. Has anyone faced this problem with IPMI + Bonding ? I would be thankful is someone helps circumvent this issue. Muhammed Sameer
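
    Since the BMC on this class of hardware shares the physical port with eth0, bonding can interfere with the ARP/MAC behaviour the BMC depends on. A hedged sketch of checks from the OS side with ipmitool (LAN channel assumed to be 1, which is typical on these Dells):

        ipmitool lan print 1                 # confirm the BMC IP, netmask and gateway survived
        ipmitool lan set 1 arp respond on    # have the BMC answer ARP for its own address
        ipmitool lan set 1 arp generate on   # and emit gratuitous ARPs periodically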

  • SSI includes not working on Debian with Apache

    - by Mike
    I'm trying to get SSI to work on Debian running Apache, however the .shtml files are not being parsed. From a PHP file with phpinfo() I can see that the following show up in the loaded modules section: mod_mime_xattr mod_mime mod_mime_magic In /etc/apache2/mods-enabled/mime.conf I have (among other things): AddType text/html .shtml AddOutputFilter INCLUDES .shtml In /etc/apache2/sites-enabled/domain.com.conf (for the virtual host in question) I have: <Directory /home/username/public_html> Options +Includes allow from all AllowOverride All </Directory> and for good measure, I added the following as well: <Directory /> Options +Includes </directory> In the user's .htaccess file, I tried adding: Options +Includes AddType text/html shtml AddHandler server-parsed shtml Nothing seems to work. How can I even debug this? Edit: Here is the output of ls /etc/apache2/mods-enabled/ in case this helps actions.conf dav_svn.load proxy_balancer.load actions.load deflate.conf proxy.conf alias.conf deflate.load proxy_connect.load alias.load dir.conf proxy_http.load auth_basic.load dir.load proxy.load auth_digest.load env.load python.load authn_file.load fcgid.conf reqtimeout.conf authz_default.load fcgid.load reqtimeout.load authz_groupfile.load mime.conf rewrite.load authz_host.load mime.load ruby.load authz_user.load mime_magic.conf setenvif.conf autoindex.conf mime_magic.load setenvif.load autoindex.load mime-xattr.load ssl.conf cgi.load negotiation.conf ssl.load dav_fs.conf negotiation.load status.conf dav_fs.load php5.conf status.load dav.load php5.load suexec.load dav_svn.conf proxy_balancer.conf
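
    Notably, the loaded-modules list above contains the MIME modules but nothing that actually processes server-side includes: the INCLUDES output filter lives in mod_include, which Debian does not enable by default. A likely first step (Debian/Ubuntu commands assumed):

        sudo a2enmod include
        sudo /etc/init.d/apache2 restart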

  • Mediawiki extension error

    - by vinylguitar
    I'm running the latest version of mediawiki using MoWeS Portable II from my desktop. I just installed this extension on the wiki http://www.mediawiki.org/wiki/Extension:MsUpload It adds an option to upload files (to be embedded in an article) to the edit screen of an article. After installing it when I try and edit an article I get the following error: Fatal error: Call to undefined method OutputPage::addModules() in C:\Users\User\Desktop\knowledge mapedia 10 25 13 copy\mowes_portable\www\mediawiki\extensions\MsUpload\msupload.php on line 65 Also here is what I posted in the localsettings.php file (I put it in at the end of localsettings.php if it makes a difference): Start --------------------------------------- MsUpload $wgMSU_ShowAutoKat = false; #autocategorisation $wgMSU_CheckedAutoKat = false; #checkbox for autocategorisation checked $wgMSU_debug = false; #debug mode $wgMSU_ImgParams = '400px'; #default max-size for inserted image $wgMSU_UseDragDrop = true; #show drag&drop area require_once "$IP/extensions/MsUpload/msupload.php"; End --------------------------------------- MsUpload require_once "$IP/extensions/msupload/msupload.php"; At line 65 in the localsettings.php file there is the following: line 64 ## Database settings line 65 $wgDBtype = "mysql"; line 66 $wgDBserver = "localhost"; line 67 $wgDBname = "mediawiki"; line 68 $wgDBuser = "root"; line 69 $wgDBpassword = ""; Any idea what I'm doing wrong?

  • [Zend Framework - Ubuntu 10.04 - LAMP - First Project] I get a 500 error on http://localhost/zftutorial/p

    - by meyosef
    Hi, I'm new to Zend and LAMP. My software: Zend Framework, Ubuntu 10.04, LAMP. I made my first Zend project with Zend tool (following this tutorial: http://akrabat.com/wp-content/uploads/Getting-Started-with-Zend-Framework.pdf). But when I go to http://localhost/zftutorial/public I get a 500 error. My $ dir -l of zftutorial: drwxr-xr-x 6 root root 4096 2010-06-01 23:54 application drwxr-xr-x 2 root root 4096 2010-06-01 23:54 docs drwxr-xr-x 3 root root 4096 2010-06-02 00:23 library drwxr-xr-x 3 root root 4096 2010-06-02 00:00 nbproject drwxr-xr-x 2 root root 4096 2010-06-01 23:54 public drwxr-xr-x 4 root root 4096 2010-06-01 23:54 tests My /etc/apache2/sites-available/default: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Thanks,
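
    A 500 on a fresh Zend Framework skeleton is very often mod_rewrite not being enabled (the stock public/.htaccess depends on it), and the Apache error log normally says so directly. A hedged sketch of the usual first checks:

        sudo a2enmod rewrite
        sudo /etc/init.d/apache2 restart
        tail -n 20 /var/log/apache2/error.log   # the actual cause is usually spelled out here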

  • How can I set my bootloader to load my primary (C:) partition?

    - by acidzombie24
    I created 4 partitions and want to use them to have seperate Windows XP, Windows 7, (possibly) Windows Vista installations, and "WinDummy" (to test applications in Vista, XP or another OS). I used Norton Ghost to install an OS to the drive in about 3 minutes. My problem is that I installed the spare first on the 4th partition, then Windows 7 on the second. I tried to set the bootloader (with easybcd) to use the first partition - but it doesn't want to. Heres my debug screen on easybcd As you can see, the device is set to H and i cant figure out how to change it. I can make my bootloader use Windows 7 first, but I can't make it use my C: install of XP instead of my spare H:. How would I fix this? Windows Boot Manager -------------------- identifier {9dea862c-5cdd-4e70-acc1-f32b344d4795} device partition=H: description Windows Boot Manager locale en-US inherit {7ea2e1ac-2e61-4728-aaa3-896d9d0a9f0e} default {bc2d8409-8640-11de-aa7e-a477d86453c4} resumeobject {bc2d8405-8640-11de-aa7e-a477d86453c4} displayorder {bc2d8409-8640-11de-aa7e-a477d86453c4} {bc2d8406-8640-11de-aa7e-a477d86453c4} {bc2d8404-8640-11de-aa7e-a477d86453c4} {466f5a88-0af2-4f76-9038-095b170dc21c} toolsdisplayorder {b2721d73-1db4-4c62-bf78-c548a880142d} timeout 3 Real-mode Boot Sector --------------------- identifier {bc2d8409-8640-11de-aa7e-a477d86453c4} device partition=C: path \NTLDR description Windows XP Windows Boot Loader ------------------- identifier {bc2d8406-8640-11de-aa7e-a477d86453c4} device partition=D: path \Windows\system32\winload.exe description Windows 7 locale en-US inherit {6efb52bf-1766-41db-a6b3-0ee5eff72bd7} recoverysequence {bc2d8407-8640-11de-aa7e-a477d86453c4} recoveryenabled Yes osdevice partition=D: systemroot \Windows resumeobject {bc2d8405-8640-11de-aa7e-a477d86453c4} nx OptIn Windows Boot Loader ------------------- identifier {bc2d8404-8640-11de-aa7e-a477d86453c4} device partition=E: path \Windows\system32\winload.exe description Blank osdevice partition=E: systemroot \Windows Windows Legacy OS Loader ------------------------ identifier {466f5a88-0af2-4f76-9038-095b170dc21c} device partition=H: path \ntldr description Windows XP Spare
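
    EasyBCD is essentially a front end over bcdedit, so when the GUI refuses to change a device it can be worth editing the store directly from an elevated prompt; a hedged sketch using the identifiers from the dump above (whether the store accepts the change depends on where it actually lives):

        bcdedit /set {bootmgr} device partition=C:
        bcdedit /default {bc2d8409-8640-11de-aa7e-a477d86453c4}
        bcdedit /enum all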

  • HAProxy + NodeJS gets stuck on TCP Retransmission

    - by sled
    I have a HAProxy + NodeJS + Rails Setup, I use the NodeJS Server for file upload purposes. The problem I'm facing is that if I'm uploading through haproxy to nodejs and a "TCP (Fast) Retransmission" occurs because of a lost packet the TX rate on the client drops to zero for about 5-10 secs and gets flooded with TCP Retransmissions. This does not occur if I upload to NodeJS directly (TCP Retransmission happens too but it doesn't get stuck with dozens of retransmission attempts). My test setup is a simple HTML4 FORM (method POST) with a single file input field. The NodeJS Server only reads the incoming data and does nothing else. I've tested this on multiple machines, networks, browsers, always the same issue. Here's a TCP Traffic Dump from the client while uploading a file: ..... TCP 1506 [TCP segment of a reassembled PDU] >> everything is uploading fine until: TCP 1506 [TCP Fast Retransmission] [TCP segment of a reassembled PDU] TCP 66 [TCP Dup ACK 7392#1] 63265 > http [ACK] Seq=4844161 Ack=1 Win=524280 Len=0 TSval=657047088 TSecr=79373730 TCP 1506 [TCP Retransmission] [TCP segment of a reassembled PDU] >> the last message is repeated about 50 times for >>5-10 secs<< (TX drops to 0 on client, RX drops to 0 on server) TCP 1506 [TCP segment of a reassembled PDU] >> upload continues until the next TCP Fast Retransmission and the same thing happens again The haproxy.conf (haproxy v1.4.18 stable) is the following: global log 127.0.0.1 local1 debug maxconn 4096 # Total Max Connections. This is dependent on ulimit nbproc 2 defaults log global mode http option httplog option tcplog frontend http-in bind *:80 timeout client 6000 acl is_websocket path_beg /node/ use_backend node_backend if is_websocket default_backend app_backend # Rails Server (via nginx+passenger) backend app_backend option httpclose option forwardfor timeout server 30000 timeout connect 4000 server app1 127.0.0.1:3000 # node.js backend node_backend reqrep ^([^\ ]*)\ /node/(.*) \1\ /\2 option httpclose option forwardfor timeout queue 5000 timeout server 6000 timeout connect 5000 server node1 127.0.0.1:3200 weight 1 maxconn 4096 Thanks for reading! :) Simon
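
    Since the direct-to-Node path behaves and only the proxied path stalls, capturing both legs on the HAProxy host at once makes it much easier to see which side stops ACKing during the stall; a small capture sketch using the ports from the config above:

        tcpdump -i any -s 0 -w /tmp/haproxy-upload.pcap 'port 80 or port 3200'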

  • Debugging UI Problems in IE8 (Was IE8 on Windows 7 Authentication Mess)

    - by alharaka
    UPDATE: I think the real question I need to ask here is: how does a technician debug UI problems with Internet Explorer, as opposed to HTML rendering issues, which have pretty good tools? I am aware of the SysInternals tools and others mentioned below, but maybe I am not harnessing their power properly. Someone else in the TechNet forum I mentioned had a similar issue. Again, I have lots of data; I am not sure how to properly interpret it. ORIGINAL POST: So I tried the venerable TechNet forums to solve this issue. In short, the Windows Security dialog has no place to put credentials, rendering it pretty much useless. This happens to apply to a whole bunch of our intranet websites, and only a select number of users with a few laptops have this problem. It ends up looking like this. Things I have tried so far: Disabling local Group Policy (not domain connected) Disabling local Security Policy Resetting IE settings A few system restores Re-registering a bunch of IE DLL's and all other steps here Reinstalling IE8 (dism /online /disable-feature /featurename:"internet-explorer-optional-x86", reboot, dism /online /enable-feature /featurename:"internet-explorer-optional-x86", and reboot) An SFC scan, which found nothing Still, nothing. Not only am I fed up, but I have begun to really work with APIExplorer and Procmon as mentioned in the TechNet original because I want to know WHAT is happening, not just fix it. Any thoughts?

  • Unknown nginx Error Messages

    - by Sparsh Gupta
    Hello, I am getting some nginx errors as I can see them in my error.log which I am unable to understand. They look like: ERRORS: 2011/03/13 21:48:21 [crit] 14555#0: *323314343 open() "/usr/local/nginx/proxy_temp/0/95/0000000950" failed (13: Permission denied) while reading upstream, client: XX.XX.XX.XX, server: , request: "GET /abc.jpg 2 HTTP/1.0", upstream: "http://192.168.162.141:80/abc.jpg", host: "example.com", referrer: "http://domain.com" 2011/03/13 22:00:07 [crit] 14552#0: *324171134 open() "/usr/local/nginx/proxy_temp/1/95/0000000951" failed (13: Permission denied) while reading upstream, client: XX.XX.XX.XY, server: , request: "GET mno.png HTTP/1.1", upstream: "http://192.168.162.141:80/mno.png", host: "example.com", referrer: "http://domain2.com" I also looked at these locations but found that there is no file by this name. root@li235-57:/var/log/nginx# /usr/local/nginx/proxy_temp/1/ 00/ 01/ 02/ 03/ 04/ 05/ 06/ 07/ 08/ 09/ 10/ 11/ 12/ 13/ 14/ 15/ 16/ 17/ 18/ 19/ 20/ 21/ 22/ 23/ 24/ 25/ 26/ 27/ 28/ 29/ 30/ 31/ 32/ 33/ 34/ 35/ 36/ 37/ root@li235-57:/var/log/nginx# ls /usr/local/nginx/proxy_temp/0/ 01/ 02/ 03/ 04/ 05/ 06/ 07/ 08/ 09/ 10/ 11/ 12/ 13/ 14/ 15/ 16/ 17/ 18/ 19/ 20/ 21/ 22/ 23/ 24/ 25/ 26/ 27/ 28/ 29/ 30/ 31/ 32/ 33/ 34/ 35/ 36/ 37/ Can someone help me whats going on / how can I debug this more and better fix this Thanks
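
    "Permission denied" on proxy_temp almost always means the temp directory is owned by a different user than the one the nginx worker processes run as (commonly because nginx was first started under a different user). A quick sketch, paths taken from the errors above and the worker user to be substituted:

        grep -E '^\s*user' /usr/local/nginx/conf/nginx.conf   # who the workers run as
        ls -ld /usr/local/nginx/proxy_temp
        chown -R nobody /usr/local/nginx/proxy_temp           # replace 'nobody' with the worker user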

  • Compiling mod_auth_kerb on OS X

    - by bshacklett
    I'm trying to get mod_auth_kerb installed, but I can't seem to find any information on compiling it on OS X. I'm getting the following when I attempt to compile: ./apxs.sh "-I. -Ispnegokrb5 -I/include " "-dynamic -g -O2 -arch x86_64 -Wl,-search_paths_first -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lresolv -lresolv" "" "/Applications/XAMPP/xamppfiles/bin/apxs" "-c" "src/mod_auth_kerb.c" /Applications/XAMPP/xamppfiles/build/libtool --silent --mode=compile gcc -prefer-pic -I/Applications/XAMPP/xamppfiles/include -L/Applications/XAMPP/xamppfiles/lib -mmacosx-version-min=10.4 -arch i386 -arch ppc -DDARWIN -DSIGPROCMASK_SETS_THREAD_MASK -no-cpp-precomp -I/Applications/XAMPP/xamppfiles/include -I/Applications/XAMPP/xamppfiles/include -I/Applications/XAMPP/xamppfiles/include -I/Applications/XAMPP/xamppfiles/include -I. -Ispnegokrb5 -I/include -c -o src/mod_auth_kerb.lo src/mod_auth_kerb.c && touch src/mod_auth_kerb.slo src/mod_auth_kerb.c: In function ‘authenticate_user_krb5pwd’: src/mod_auth_kerb.c:1030: warning: passing argument 8 of ‘verify_krb5_user’ discards qualifiers from pointer target type src/mod_auth_kerb.c: In function ‘authenticate_user_krb5pwd’: src/mod_auth_kerb.c:1030: warning: passing argument 8 of ‘verify_krb5_user’ discards qualifiers from pointer target type /Applications/XAMPP/xamppfiles/build/libtool --silent --mode=link gcc -o src/mod_auth_kerb.la -dynamic -g -O2 -arch x86_64 -Wl,-search_paths_first -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lresolv -lresolv -rpath /Applications/XAMPP/xamppfiles/modules -module -avoid-version src/mod_auth_kerb.lo ld: warning: in src/.libs/mod_auth_kerb.o, missing required architecture x86_64 in file warning: no debug symbols in executable (-arch x86_64) I'm configuring as follows: ./configure --with-krb4=no CFLAGS='-g -O2 -arch x86_64' I should mention that I'm using XAMPP with the development package on this machine.
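
    The "missing required architecture x86_64" warning suggests the objects are compiled i386/ppc (the flags XAMPP's apxs injects) while the link step asks for x86_64, so the two halves never match. A hedged sketch of forcing one consistent architecture through configure (pick the arch that matches the XAMPP Apache binary, checked with the file command):

        file /Applications/XAMPP/xamppfiles/bin/httpd      # see which architecture Apache itself is
        ./configure --with-krb4=no CFLAGS='-g -O2 -arch i386' LDFLAGS='-arch i386'
        make clean && make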

  • Puppet classes out of order despite explicit arrow operator use

    - by Alexandr Kurilin
    Absolute puppet beginner here. I'm experiencing an interesting behavior with my puppet manifests and would love to know what I'm doing wrong. Let's for example say I'm configuring the instance with the following ordered classes: class { 'update_system': } -> class { 'facter': } -> class { 'user_sshkey': user => 'ubuntu', type => 'rsa', } -> class { 'tmux': user => 'ubuntu', } -> class { 'vim': user => 'ubuntu', } -> class { 'bashrc': user => 'ubuntu' } -> notify {"Configuring DB role":} -> class { 'postgresql': } when I run the manifest with the --debug switch, by looking at notify statements I can see the classes be executed in the following order: 1. update_system starts 2. a cron type inside of postgresql class (the very **last** class in that ordered list above) is executed 3. postgres::install starts 5. facter starts installing 6. postgres::configure and postgres::service start 7. the vim class is executed 8. "Configuring DB role" notification is made. All the way at the end here. etc Basically the thing is all over the place, the order doesn't seem to follow the arrow operators in any way. I'm guessing I'm missing something here that would force the classes to execute one at a time. Could it be that I'm missing some kind of anchor pattern here? Invalid containment?
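
    This looks like the classic containment problem: the arrows order the class resources themselves, but the resources declared inside those classes are not contained by them, so they float freely in the graph. The anchor pattern (anchor is a resource type from puppetlabs-stdlib) is the usual pre-Puppet-3.4 workaround; a minimal sketch with assumed subclass names:

        class postgresql {
          anchor { 'postgresql::begin': } ->
          class  { 'postgresql::install': } ->
          class  { 'postgresql::config': }  ->
          class  { 'postgresql::service': } ->
          anchor { 'postgresql::end': }
        }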

  • Getting AWStats to work in Ubuntu 12.04

    - by koogee
    I'm new to apache and i'm trying to set up AWStats on my ubuntu 12.04 server. I've followed the guide at Ubuntu docs https://help.ubuntu.com/community/AWStats I set it up according to the instructions and awstats is able to generate initial stats from apache log successfully. I placed the links to awstats in the default virtual host file. However when I try to run http://server-ip-address:8080/awstats/awstats.pl, I get: Error: SiteDomain parameter not defined in your config/domain file. You must edit it for using this version of AWStats. Setup ('/etc/awstats/awstats.conf' file, web server or permissions) may be wrong. Check config file, permissions and AWStats documentation (in 'docs' directory). Here is my /etc/apache2/sites-available/default file: <VirtualHost *:8080> ServerAdmin webmaster@localhost DocumentRoot /home/saad/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /home/saad/www/> Options Indexes FollowSymLinks MultiViews AllowOverride AuthConfig Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /awstatsclasses "/usr/share/awstats/lib/" Alias /awstats-icon "/usr/share/awstats/icon/" Alias /awstatscss "/usr/share/doc/awstats/examples/css" ScriptAlias /awstats/ /usr/lib/cgi-bin/ Options ExecCGI -MultiViews +SymLinksIfOwnerMatch ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> The only three variables I edited in /etc/awstats/awstats.conf are: LogFile="/var/log/apache2/access.log" SiteDomain="server-name.noip.org" HostAliases="localhost 127.0.0.1 server-name.no-ip.org" The apache server works fine and i'm able to access other pages stored on the server. Any guidance would be welcome.
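
    awstats.pl resolves its configuration from the config= query parameter (or the host name) to a file called awstats.<name>.conf, so the bare /etc/awstats/awstats.conf is often simply never read; creating a per-domain copy is the usual fix. A sketch using the domain from the settings above:

        sudo cp /etc/awstats/awstats.conf /etc/awstats/awstats.server-name.noip.org.conf
        # then browse to http://server-ip-address:8080/awstats/awstats.pl?config=server-name.noip.org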

  • How do I install OpenStack on a single Ubuntu 12.04 node?

    - by Sam Edwards
    I'm having trouble installing OpenStack in Ubuntu 12.04, for various reasons: The official Ubuntu website recommends Juju and MAAS. However, this is a single node I am trying to get OpenStack installed on, and MAAS requires "two or more nodes" according to the docs. Additionally, I don't have any experience in MAAS and Juju and would rather stick to technologies I am more familiar with so that I can debug problems that arise. I have tried StackGeek but this fails because the node only has a single Ethernet port. The node does, however, have the second hard drive required for the nova storage. I have tried DevStack but cannot log into the dashboard. The login form appears fine, but as soon as I try to submit the page, my browser begins loading indefinitely. I have tried installing straight from packages, but I get an Internal Server Error in the dashboard upon trying to log in, with no helpful logs anywhere in sight to aid me in debugging the issue. Each of these attempts was with a fresh Ubuntu 12.04 LTS setup; I'm finding it really strange that no matter what I try, I cannot get OpenStack installed. Is this even a stable/mature project? Why am I encountering so many bugs?
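
    For a genuinely single-node lab box, DevStack is still probably the shortest route, and a dashboard that hangs at login is frequently down to an incomplete localrc; a minimal sketch of the kind of localrc that tended to work for all-in-one installs of that vintage (IP and passwords are placeholders):

        HOST_IP=192.168.1.10
        ADMIN_PASSWORD=secret
        MYSQL_PASSWORD=$ADMIN_PASSWORD
        RABBIT_PASSWORD=$ADMIN_PASSWORD
        SERVICE_PASSWORD=$ADMIN_PASSWORD
        SERVICE_TOKEN=tokentoken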

  • Apache2: not defined domains directing to the same virtual host

    - by rafaame
    I have Apache2 configured on a Debian box with virtual hosts, and several domains pointing to the box's IP address. The domains whose virtual hosts are configured work perfectly. But if I type into the browser a domain that points to the box but whose virtual host is not configured, I end up at a virtual host belonging to another domain on the box. It is not random, either; it is always the same virtual host, and I don't know why it picks that one. The correct behavior would be for domains that are not configured as virtual hosts to return a hostname error or something similar, right? Does someone know how to fix the problem? One of my virtual host config files: <VirtualHost *:80> ServerAdmin [email protected] ServerName dl.domain.com DocumentRoot /var/www/dl.domain.com/public_html/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/dl.domain.com/public_html/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> My apache2.conf: http://www.speedyshare.com/files/29107024/apache2.conf Thanks for the help
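
    Background worth stating plainly: for name-based hosting, Apache hands any Host header that matches no ServerName/ServerAlias to the first vhost loaded for that address, which is why one "random" site always wins. The usual remedy is an explicit catch-all vhost whose file name sorts first and which refuses (or redirects) unknown names; a sketch in the same Apache 2.2 style as the configs above (file name and paths assumed):

        # /etc/apache2/sites-available/000-default-catchall
        <VirtualHost *:80>
            ServerName catchall.invalid
            DocumentRoot /var/www/catchall
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>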

  • Can't install py-subversion on freebsd 8.2

    - by max taldykin
    I'm trying to install python bindings for subversion: # cd /usr/ports/devel/py-subversion # make ===> Patching for py26-subversion-1.6.15 ===> Applying extra patch /usr/ports/devel/py-subversion/../../devel/subversion/files /bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in cannot open /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in: No such file or directory *** Error code 2 Yes, there is no such file in subversion/files, but there is file patch-subversion::bindings::swig::perl::natives::Makefle.PL.in (with colons instead of hyphens). After renaming and rerunning make I got another error: # make ===> Patching for py26-subversion-1.6.15 ===> Applying extra patch /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in cannot open /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in: No such file or directory *** Error code 2 But now there is nothing like bindings-* in subversion/files. So, the question is why is it so and how can I install py-subversion? PS: FreeBSD is running on virtual private server, so I think it is somehow patched. # uname -a FreeBSD mskhug.ru 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0 r50: Thu Feb 24 10:15:34 IRKT 2011 [email protected]:/root/src/sys/amd64/compile/DEBUG amd64
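
    A mismatch like this between devel/py-subversion and the extra patches it expects under devel/subversion/files usually means the two ports came from different checkouts of the tree rather than anything VPS-specific; refreshing the whole tree and rebuilding is the usual cure. A sketch, assuming the tree was installed via portsnap:

        portsnap fetch update      # or csup/cvsup, if that is how the tree is maintained
        cd /usr/ports/devel/py-subversion && make clean && make install clean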

  • Using virtual IP with stunnel and haproxy

    - by beardtwizzle
    Hi there, We have a load-balancer setup, in which an HTTPS Request flows through the following steps:- Client -> DNS -> stunnel on Load-Balancer -> HAProxy on LB -> Web-Server This setup works perfectly when stunnel is listening to the local IP of the Load-Balancer. However in our setup we have 2 load-balancers and we want to be able to listen to a virtual IP, which only ever exists on one LB at a time (keepalived flips the IP to the second LB if the first one falls over). HAProxy has no problem in doing this (and I can ping the assigned virtual IP on the load-balancer I'm testing), but it seems stunnel hates the concept. Has anyone achieved this before (below is my stunnel config - as you can see I'm actually listening for ALL traffic on 443):- cert= /etc/ssl/certs/mycert.crt key = /etc/ssl/certs/mykey.key ;setuid = nobody ;setgid = nogroup pid = /etc/stunnel/stunnel.pid debug = 3 output = /etc/stunnel/stunnel.log socket=l:TCP_NODELAY=1 socket=r:TCP_NODELAY=1 [https] accept=443 connect=127.0.0.1:8443 TIMEOUTclose=0 xforwardedfor=yes Sorry for the long-winded question!
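
    stunnel can bind a specific address with accept = IP:port, but binding an address that currently lives on the other load balancer also needs the kernel's nonlocal-bind switch, otherwise the bind fails whenever keepalived has moved the VIP away. A sketch with a placeholder VIP:

        # allow binding the floating IP even while the peer holds it
        sysctl -w net.ipv4.ip_nonlocal_bind=1     # persist it in /etc/sysctl.conf

        # stunnel.conf
        [https]
        accept  = 10.0.0.100:443
        connect = 127.0.0.1:8443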

  • Making OpenSSL work on PHP Windows 2008 server with FastCGI

    - by KacieHouser
    I have been researching all day. Here is what I have done: In C:/PHP/php.ini and C:/PHP/php-cgi-fcgi.ini I have made the extension_dir = "C:/PHP/ext" I uncommented extension=php_openssl.dll I went to http://windows.php.net/download/ and got the thread safe version with the PHP 5.4 (5.4.8) version of DLL's In C:/PHP/ext I replaced the php_openssl.dll with the one I downloaded In System32 and SysWOW64 I added the following DLL's ssleay.dll libeay.dll I restarted the IIS server in the Server Manager under Web Server and stopped and started the World Wide Web Publishing Service That didn't work, so I tried same thing with the unthreaded versions. I still get: Fatal error: Call to undefined function ftp_ssl_connect() in C:\inetpub\wwwroot\REMOVED_dev\save_data.php on line 5 Here are related things from phpinfo(): System Windows NT DEV-WEB1 6.1 build 7601 (Windows Server 2008 R2 Standard Edition Service Pack 1) i586 Compiler MSVC9 (Visual C++ 2008) Architecture x86 Configure Command cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=C:\php-sdk\oracle\instantclient11\sdk,shared" "--with-enchant=shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze" "--with-pgo" Server API CGI/FastCGI Configuration File (php.ini) Path C:\Windows Loaded Configuration File C:\PHP\php-cgi-fcgi.ini Scan this dir for additional .ini files (none) Additional .ini files parsed (none) Registered PHP Streams php, file, glob, data, http, ftp, zip, compress.zlib, compress.bzip2, https, ftps, sqlsrv, phar Registered Stream Socket Transports tcp, udp, ssl, sslv3, sslv2, tls FTP support enabled Protocols dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, pop3, pop3s, rtsp, scp, sftp, smtp, smtps, telnet, tftp openssl OpenSSL support enabled OpenSSL Library Version OpenSSL 0.9.8t 18 Jan 2012 OpenSSL Header Version OpenSSL 0.9.8x 10 May 2012 What am I missing here?
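
    One detail that trips people up on Windows builds of PHP from that era: ftp_ssl_connect() only exists when the FTP extension itself was compiled against OpenSSL, so loading php_openssl.dll (which the phpinfo() above shows is working) does not by itself create the function. A tiny check script helps separate the two cases; nothing in it is specific to this server:

        <?php
        var_dump(extension_loaded('openssl'));        // true once php_openssl.dll is loaded
        var_dump(function_exists('ftp_ssl_connect')); // stays false unless FTP was built with SSL support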

  • Starfield Wildcard SSL Certificate Not Trusted in All Browsers

    - by Austen Cameron
    I am at a loss as to what else I might try in order to debug this issue with a Starfield Wildcard SSL Certificate. The problem is that in certain browsers (Safari or the most-updated chrome you can get for OS X 10.5.8 for example) the certificate comes up as untrusted, even on the root domain. My server setup / background info: General LAMP setup - CentOS 6.3 - on a Godaddy VPS Starfield Technologies Wildcard SSL certificate Installed using the instructions from godaddy's support pages ssl.conf lines are basically as follows: SSLCertificateFile /path/to/cert/mysite.com.cert SSLCertificateKeyFile /path/to/cert/mysite.key SSLCertificateChainFile /path/to/cert/sf_bundle.crt Everything seemingly worked fine until the other night when I noticed the problem in OS X, I assume it's more browser version related, but have only been able to replicate it on that particular machine. What I have tried: Updating sf_bundle.crt from godaddy's cert repository and Starfield's repository versions Following This ServerFault answer from Jim Phares - changing the ChainFile line to sf_intermediate.crt from Starfield's repository Using http://www.sslshopper.com/ssl-checker.html on my url It says the domain is correctly listed on the certificate but comes up with an error that reads The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate. What might I try next to remedy the untrusted certificate issue? Let me know if there is any other information needed that might help debugging this issue. Thanks in advance!
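
    The quickest external check of whether the intermediate is actually being served (as opposed to merely installed) is to pull the presented chain with openssl; if only one certificate comes back, the SSLCertificateChainFile line is not taking effect. The hostname below is a placeholder:

        openssl s_client -connect www.example.com:443 -showcerts < /dev/null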
