Search Results

Search found 28784 results on 1152 pages for 'start'.

  • Unable to Manage DNS via MMC

    - by IT Helpdesk Team Manager
    When trying to access the DNS service on a Microsoft Windows Server 2003 (Build 3790) domain controller/schema master via the MMC DNS snap-in, or locally via the DNS MMC from Administrative Tools, I'm getting a red "X" through the icon for the DNS server. The inability to access DNS management via MMC happens on all domain controllers as well.

    We've looked at items such as the DHCP Client service not being started, incorrect DNS setup (the machine points at itself and another DC), and the DNS service not running (it is, and all DNS queries via NSLOOKUP work correctly). DNSLint returns the correct information and functions as expected.

    There is the following entry in the DNS event log:

        The DNS server could not initialize the remote procedure call (RPC) service. If it is not running, start the RPC service or reboot the computer. The event data is the error code. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        0000: 0000051b

    dnscmd fails with "RPC server unavailable" even though RPC is started:

        C:\Documents and Settings\Administrator.DOMAIN>dnscmd /Info
        Info query failed
        status = 1722 (0x000006ba)
        Command failed: RPC_S_SERVER_UNAVAILABLE 1722 (000006ba)

    DCDIAG /TEST:DNS /V /E produces the following errors:

        Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running)
        [Error details: 1753 (Type: Win32 - Description: There are no more endpoints available from the endpoint mapper.)]
        Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running)
        [Error details: 1722 (Type: Win32 - Description: The RPC server is unavailable.)]
        The DNS server could not initialize the remote procedure call (RPC) service. If it is not running, start the RPC service or reboot the computer. The event data is the error code.

    A DNS query for _ldap._tcp.dc._msdcs. returns the correct results. All domain and AD-related activities are working, except that I can't manage my DNS via MMC or dnscmd. Any thoughts or solutions would be greatly appreciated.

    EDIT: Adding registry export per request:

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc
        Class Name:      <NO CLASS>
        Last Write Time: 10/18/2012 - 2:29 PM
        Value 0  Name: DCOM Protocols      Type: REG_MULTI_SZ  Data: ncacn_ip_tcp
        Value 1  Name: UuidSequenceNumber  Type: REG_DWORD     Data: 0xb19bd0f

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\ClientProtocols
        Class Name:      <NO CLASS>
        Last Write Time: 3/9/2007 - 12:11 PM
        Value 0  Name: ncacn_np      Type: REG_SZ  Data: rpcrt4.dll
        Value 1  Name: ncacn_ip_tcp  Type: REG_SZ  Data: rpcrt4.dll
        Value 2  Name: ncadg_ip_udp  Type: REG_SZ  Data: rpcrt4.dll
        Value 3  Name: ncacn_http    Type: REG_SZ  Data: rpcrt4.dll
        Value 4  Name: ncacn_at_dsp  Type: REG_SZ  Data: rpcrt4.dll

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NameService
        Class Name:      <NO CLASS>
        Last Write Time: 2/20/2006 - 4:48 PM
        Value 0  Name: DefaultSyntax         Type: REG_SZ  Data: 3
        Value 1  Name: Endpoint              Type: REG_SZ  Data: \pipe\locator
        Value 2  Name: NetworkAddress        Type: REG_SZ  Data: \\.
        Value 3  Name: Protocol              Type: REG_SZ  Data: ncacn_np
        Value 4  Name: ServerNetworkAddress  Type: REG_SZ  Data: \\.

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NetBios
        Class Name:      <NO CLASS>
        Last Write Time: 2/20/2006 - 4:48 PM

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\RpcProxy
        Class Name:      <NO CLASS>
        Last Write Time: 3/9/2007 - 12:11 PM
        Value 0  Name: Enabled     Type: REG_DWORD  Data: 0x1
        Value 1  Name: ValidPorts  Type: REG_SZ     Data: pdc:100-5000

        Key Name:        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\SecurityService
        Class Name:      <NO CLASS>
        Last Write Time: 2/20/2006 - 4:48 PM
        Value 0  Name: 9   Type: REG_SZ  Data: secur32.dll
        Value 1  Name: 10  Type: REG_SZ  Data: secur32.dll
        Value 2  Name: 14  Type: REG_SZ  Data: schannel.dll
        Value 3  Name: 16  Type: REG_SZ  Data: secur32.dll
        Value 4  Name: 1   Type: REG_SZ  Data: secur32.dll
        Value 5  Name: 18  Type: REG_SZ  Data: secur32.dll
        Value 6  Name: 68  Type: REG_SZ  Data: netlogon.dll
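    Not from the original post, but a hedged next step: error 1753 ("there are no more endpoints available from the endpoint mapper") usually means the DNS server never registered its RPC endpoint with the endpoint mapper on port 135, so it is worth querying the mapper directly and bouncing just the DNS service. A diagnostic sketch (PortQry is a separate Microsoft download; "pdc" is a placeholder for the DC's name):

        portqry -n pdc -e 135
        net stop dns && net start dns
        dnscmd /Info

    If dnscmd answers right after the restart but fails again later, something after boot is tearing the registration down; security software and third-party RPC registry changes are the usual suspects, which is why the Rpc key export above is worth comparing against a healthy DC.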

  • How can I stop ssh from offering the wrong key?

    - by Alvaro Maceda
    (This is a problem with ssh, not gitolite.) I've configured gitolite on my home server (Ubuntu 12.04 server, OpenSSH). I want a special identity file to administer the repositories, so I need to access my own host through ssh using two different identity keys. This is the content of my .ssh/config file:

        Host gitadmin.gammu.com
            User git
            IdentityFile /home/alvaro/.ssh/id_gitolite_mantra

        Host git.gammu.com
            User git
            IdentityFile /home/alvaro/.ssh/id_alvaro_mantra

    This is the content of my hosts file:

        # Git
        127.0.0.1 gitadmin.gammu.com
        127.0.0.1 git.gammu.com

    So I should be able to communicate with gitolite this way to access with the "normal" account:

        $ ssh git.gammu.com

    and this way to access with the administrative account:

        $ ssh gitadmin.gammu.com

    When I try to access with the normal account, all is OK:

        alvaro@mantra:~/.ssh$ ssh git.gammu.com
        PTY allocation request failed on channel 0
        hello alvaro, this is gitolite 2.2-1 (Debian) running on git 1.7.9.5
        the gitolite config gives you the following access:
             @R_   @W_  testing
        Connection to git.gammu.com closed.

    When I do the same with the administrative account:

        alvaro@mantra:~$ ssh gitadmin.gammu.com
        PTY allocation request failed on channel 0
        hello alvaro, this is gitolite 2.2-1 (Debian) running on git 1.7.9.5
        the gitolite config gives you the following access:
             @R_   @W_  testing
        Connection to gitadmin.gammu.com closed.

    It should show the administrative repository. If I launch ssh with the verbose option:

        ssh -vvv gitadmin.gammu.com
        ...
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /home/alvaro/.ssh/id_alvaro_mantra (0x7f7cb6c0fbc0)
        debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7f7cb6c044d0)
        debug1: Authentications that can continue: publickey,password
        debug3: start over, passed a different list publickey,password
        debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/alvaro/.ssh/id_alvaro_mantra
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-rsa blen 279
        ...

    It's offering the key id_alvaro_mantra, and it shouldn't! The same happens when I specify the key with the -i option:

        ssh -i /home/alvaro/.ssh/id_gitolite_mantra -vvv gitadmin.gammu.com
        ...
        debug2: key: /home/alvaro/.ssh/id_alvaro_mantra (0x7fa365237f90)
        debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7fa365230550)
        debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7fa365231050)
        ... (same authentication sequence as above) ...
        debug1: Offering RSA public key: /home/alvaro/.ssh/id_alvaro_mantra
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-rsa blen 279
        debug2: input_userauth_pk_ok: fp 36:b1:43:36:af:4f:00:e5:e1:39:50:7e:07:80:14:26
        debug3: sign_and_send_pubkey: RSA 36:b1:43:36:af:4f:00:e5:e1:39:50:7e:07:80:14:26
        debug1: Authentication succeeded (publickey).
        ...

    What the hell is happening? I'm missing something, but I can't find what. These are the contents of my ~/.ssh directory:

        -rw-rw-r-- 1 alvaro alvaro  395 nov 14 18:00 authorized_keys
        -rw-rw-r-- 1 alvaro alvaro  326 nov 21 10:21 config
        -rw------- 1 alvaro alvaro  137 nov 20 20:26 environment
        -rw------- 1 alvaro alvaro 1766 nov 20 21:41 id_alvaromaceda.es
        -rw-r--r-- 1 alvaro alvaro  404 nov 20 21:41 id_alvaromaceda.es.pub
        -rw------- 1 alvaro alvaro 1766 nov 14 17:59 id_alvaro_mantra
        -rw-r--r-- 1 alvaro alvaro  395 nov 14 17:59 id_alvaro_mantra.pub
        -rw------- 1 alvaro alvaro  771 nov 14 18:03 id_developer_mantra
        -rw------- 1 alvaro alvaro 1679 nov 20 12:37 id_dos_pruebasgit
        -rw-r--r-- 1 alvaro alvaro  395 nov 20 12:37 id_dos_pruebasgit.pub
        -rw------- 1 alvaro alvaro 1679 nov 20 12:46 id_gitolite_mantra
        -rw-r--r-- 1 alvaro alvaro  397 nov 20 12:46 id_gitolite_mantra.pub
        -rw------- 1 alvaro alvaro 1675 nov 20 21:44 id_gitpruebas.es
        -rw-r--r-- 1 alvaro alvaro  408 nov 20 21:44 id_gitpruebas.es.pub
        -rw------- 1 alvaro alvaro 1679 nov 20 12:34 id_uno_pruebasgit
        -rw-r--r-- 1 alvaro alvaro  395 nov 20 12:34 id_uno_pruebasgit.pub
        -rw-r--r-- 1 alvaro alvaro 2434 nov 21 10:11 known_hosts

    There are a bunch of other keys which aren't offered... why is id_alvaro_mantra offered and not the other keys? I can't understand. I need some help, don't know where to look.
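    One setting worth knowing here (not mentioned in the post): unless told otherwise, ssh offers every key held by the agent, and any key previously loaded with ssh-add, before the IdentityFile from the matching Host block, so the first agent key the server accepts wins. IdentitiesOnly restricts ssh to the configured file. A minimal sketch of the same .ssh/config with that setting added:

        Host gitadmin.gammu.com
            User git
            IdentityFile /home/alvaro/.ssh/id_gitolite_mantra
            IdentitiesOnly yes

        Host git.gammu.com
            User git
            IdentityFile /home/alvaro/.ssh/id_alvaro_mantra
            IdentitiesOnly yes

    Running ssh-add -l to see what the agent is holding (and ssh-add -D to clear it) is a quick way to confirm whether the agent is the source of the stray key.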

  • UFW as an active service on Ubuntu

    - by lamcro
    Every time I restart my computer and check the status of the UFW firewall (sudo ufw status), it is disabled, even if I then enable and restart it. I tried putting sudo ufw enable as one of the startup applications, but it asks for the sudo password every time I log on, and I'm guessing it does not protect anyone else who logs on to my computer. How can I set up ufw so it is activated when I turn on my computer and protects all accounts?

    Update: I just tried /etc/init.d/ufw start, and it activated the firewall. Then I restarted the computer, and again it was disabled.

    Content of /etc/ufw/ufw.conf:

        # /etc/ufw/ufw.conf
        #
        # set to yes to start on boot
        ENABLED=yes
        # set to one of 'off', 'low', 'medium', 'high'
        LOGLEVEL=full

    Content of /etc/default/ufw:

        # /etc/default/ufw
        #
        # Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
        # accepted). You will need to 'disable' and then 'enable' the firewall for
        # the changes to take affect.
        IPV6=no

        # Set the default input policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT.
        # ACCEPT enables connection tracking for NEW inbound packets on the INPUT
        # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note
        # that if you change this you will most likely want to adjust your rules.
        DEFAULT_INPUT_POLICY="DROP"

        # Set the default output policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT.
        # ACCEPT enables connection tracking for NEW outbound packets on the OUTPUT
        # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note
        # that if you change this you will most likely want to adjust your rules.
        DEFAULT_OUTPUT_POLICY="ACCEPT"

        # Set the default forward policy to ACCEPT, DROP or REJECT. Please note that
        # if you change this you will most likely want to adjust your rules
        DEFAULT_FORWARD_POLICY="DROP"

        # Set the default application policy to ACCEPT, DROP, REJECT or SKIP. Please
        # note that setting this to ACCEPT may be a security risk. See 'man ufw' for
        # details
        DEFAULT_APPLICATION_POLICY="SKIP"

        # By default, ufw only touches its own chains. Set this to 'yes' to have ufw
        # manage the built-in chains too. Warning: setting this to 'yes' will break
        # non-ufw managed firewall rules
        MANAGE_BUILTINS=no

        #
        # IPT backend
        #
        # only enable if using iptables backend
        IPT_SYSCTL=/etc/ufw/sysctl.conf

        # extra connection tracking modules to load
        IPT_MODULES="nf_conntrack_ftp nf_nat_ftp nf_conntrack_irc nf_nat_irc"

    Update: Followed your advice and ran update-rc.d with no luck.

        lester@mcgrath-pc:~$ sudo update-rc.d ufw defaults
        update-rc.d: warning: /etc/init.d/ufw missing LSB information
        update-rc.d: see <http://wiki.debian.org/LSBInitScripts>
         Adding system startup for /etc/init.d/ufw ...
           /etc/rc0.d/K20ufw -> ../init.d/ufw
           /etc/rc1.d/K20ufw -> ../init.d/ufw
           /etc/rc6.d/K20ufw -> ../init.d/ufw
           /etc/rc2.d/S20ufw -> ../init.d/ufw
           /etc/rc3.d/S20ufw -> ../init.d/ufw
           /etc/rc4.d/S20ufw -> ../init.d/ufw
           /etc/rc5.d/S20ufw -> ../init.d/ufw

        lester@mcgrath-pc:~$ ls -l /etc/rc?.d/*ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc0.d/K20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc1.d/K20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc2.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc3.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc4.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc5.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc6.d/K20ufw -> ../init.d/ufw
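    A diagnostic angle the post doesn't cover (a sketch, not a confirmed fix): since ENABLED=yes is set and the rc symlinks exist, the likely culprit is something later in the boot sequence flushing the iptables rules that ufw loaded. Comparing the raw iptables state right after boot against what ufw reports shows whether the rules were never loaded, or loaded and then wiped:

        # right after a reboot, before touching ufw
        sudo ufw status
        sudo iptables -L -n -v | head -20

        # look for another firewall/network package that could be flushing rules
        dpkg -l | grep -iE 'firestarter|shorewall|guarddog'

    If iptables shows the ufw chains populated while ufw claims it is inactive, the status flag is stale rather than the firewall being down, which points at the init script order instead of the rules themselves.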

  • smartctl -t long isn't finishing

    - by xenoterracide
    I've been running smartctl -t long on a drive for about two days now, and it seems to be stalled at 10% remaining. Short and conveyance tests both passed. I have to send one of the two drives I purchased back because I found bad blocks on it with badblocks (none on this drive, and it's made over a pass already). I'm just wondering if I should be concerned about this.

        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD10EARS-00Y5B1
        Serial Number:    WD-WMAV51582123
        Firmware Version: 80.00A80
        User Capacity:    1,000,204,886,016 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Mon May 10 22:19:52 2010 EDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        General SMART Values:
        Offline data collection status:  (0x82) Offline data collection activity
                                                was completed without error.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      ( 241) Self-test routine in progress...
                                                10% of test remaining.
        Total time to complete Offline
        data collection:                 (20100) seconds.
        Offline data collection
        capabilities:                    (0x7b) SMART execute Offline immediate.
                                                Auto Offline data collection on/off support.
                                                Suspend Offline collection upon new command.
                                                Offline surface scan supported.
                                                Self-test supported.
                                                Conveyance Self-test supported.
                                                Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering
                                                power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                General Purpose Logging supported.
        Short self-test routine
        recommended polling time:        (   2) minutes.
        Extended self-test routine
        recommended polling time:        ( 231) minutes.
        Conveyance self-test routine
        recommended polling time:        (   5) minutes.
        SCT capabilities:              (0x3031) SCT Status supported.
                                                SCT Feature Control supported.
                                                SCT Data Table supported.

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           2
          3 Spin_Up_Time            0x0027 131   131   021    Pre-fail Always  -           6408
          4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           12
          5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
          7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0032 100   100   000    Old_age  Always  -           148
         10 Spin_Retry_Count        0x0032 100   253   000    Old_age  Always  -           0
         11 Calibration_Retry_Count 0x0032 100   253   000    Old_age  Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           10
        192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           7
        193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           174
        194 Temperature_Celsius     0x0022 106   102   000    Old_age  Always  -           41
        196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
        197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
        200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description    Status                    Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Conveyance offline  Completed without error         00%               99  -
        # 2  Extended offline    Interrupted (host reset)        10%               30  -
        # 3  Short offline       Completed without error         00%                0  -

        SMART Selective self-test log data structure revision number 1
         SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
            1        0        0  Not_testing
            2        0        0  Not_testing
            3        0        0  Not_testing
            4        0        0  Not_testing
            5        0        0  Not_testing
        Selective self-test flags (0x0):
          After scanning selected spans, do NOT read-scan remainder of disk.
        If Selective self-test is pending on power-up, resume after 0 minute delay.
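    Worth noting (hedged reading of the output above, not a diagnosis): the self-test log already shows "# 2 Extended offline Interrupted (host reset)", and the recommended extended polling time is only 231 minutes, so after two days the test is not progressing; it has almost certainly been interrupted rather than stalled mid-scan. A sketch for aborting and retrying, assuming the drive is /dev/sda:

        # abort whatever self-test is (nominally) in progress
        smartctl -X /dev/sda

        # start a fresh extended test and check the log after ~4 hours
        smartctl -t long /dev/sda
        smartctl -l selftest /dev/sda

    Keeping the disk otherwise idle during the test helps; background I/O, or the host resetting the link (as the log records), is a common reason extended tests never complete.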

  • VPN in Ubuntu fails every time

    - by fazpas
    I am trying to set up a VPN connection in Ubuntu 10.04 to use the service from relakks.com. I used the network manager to add the VPN connection, with these settings:

        Gateway: pptp.relakks.com
        Username: user
        Password: pwd
        IPv4 Settings: Automatic (VPN)
        Advanced: MSCHAP & MSCHAPv2 checked
                  Use point-to-point encryption (security: default)
                  Allow BSD data compression checked
                  Allow deflate data compression checked
                  Use TCP header compression checked

    The connection always fails. Here is the syslog:

        Jun 27 20:11:56 desktop NetworkManager: <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'...
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 2064
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN plugin state changed: 3
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN connection 'Relakks' (Connect) reply received.
        Jun 27 20:11:56 desktop pppd[2067]: Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded.
        Jun 27 20:11:56 desktop pppd[2067]: pppd 2.4.5 started by root, uid 0
        Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
        Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp1, iface: ppp1): no ifupdown configuration found.
        Jun 27 20:11:56 desktop pppd[2067]: Using interface ppp1
        Jun 27 20:11:56 desktop pppd[2067]: Connect: ppp1 <--> /dev/pts/0
        Jun 27 20:11:56 desktop pptp[2071]: nm-pptp-service-2064 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
        Jun 27 20:11:57 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1024).
        Jun 27 20:11:59 desktop kernel: [   56.564074] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=61 TOS=0x00 PREC=0x00 TTL=52 ID=40460 DF PROTO=47
        Jun 27 20:11:59 desktop kernel: [   56.944054] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=40461 DF PROTO=47
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown)
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request'
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state)
        Jun 27 20:11:59 desktop pppd[2067]: Modem hangup
        Jun 27 20:11:59 desktop pppd[2067]: Connection terminated.
        Jun 27 20:11:59 desktop NetworkManager: <info> VPN plugin failed: 1
        Jun 27 20:11:59 desktop NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
        Jun 27 20:11:59 desktop pppd[2067]: Exit.

    Can someone identify something in the syslog? I've been googling and reading about PPTP but couldn't find anything about the error "read returned zero, peer has closed".
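    Not from the original post, but one detail in the log stands out: the two kernel "Inbound ... PROTO=47" lines show the local firewall logging GRE (IP protocol 47) packets from the VPN server immediately before pptp reports "read returned zero, peer has closed". PPTP carries the tunnel itself over GRE, so if those packets are being dropped rather than merely logged, the connection will die exactly like this. A sketch for testing that theory, assuming iptables and ppp0 as the internet-facing interface:

        # temporarily allow GRE both ways, then retry the VPN
        sudo iptables -I INPUT -i ppp0 -p gre -j ACCEPT
        sudo iptables -I OUTPUT -o ppp0 -p gre -j ACCEPT

    If the connection then completes, the GRE rule should be made permanent in whatever firewall tool generated those "Inbound" log entries.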

  • Strange end-user experience with Liferay, Glassfish and Apache on Red Hat

    - by Pete Helgren
    I've tried multiple forums to get to the bottom of this. I hope I can get some direction here. This is the stack I am working with:

        Red Hat Enterprise Linux Server release 5.6 (Tikanga)
        Liferay 6.0.6 on Glassfish 3.0.1
        MySQL 5.0.77
        Apache 2.2.3

    The Liferay portal provides a variety of portlets to end users: static content (web pages), static resources (primarily PDF and MP3 files 1 MB - 80 MB in size), file upload and download capabilities (primarily 40-60 MB MP3 files), and online streaming of those MP3 files.

    Here is the strange end-user experience. Under normal load, with 20-30 users uploading, downloading or streaming files and 20-30 accessing static content (some of it downloads), we see the following:

        1) Clicking a link triggers the download of a portion of an MP3 (the portion is a few seconds long).
        2) Clicking on a link triggers the download of the page content rather than rendering.
        3) Clicking a link causes the page to dump binary data to the end user rather than the expected content.
        4) Clicking a link returns the text of a javascript file rather than rendering the page.

    Each occurrence is totally random (or appears so). Sometimes it works, sometimes it doesn't. It seems to have no relation to browser or client OS. The strange events seem to occur much more frequently when using an SSL connection rather than regular http.

    Apache serves as a proxy server only (reverse). It basically passes all the requests through to Glassfish. There isn't any static content proxy-served by Apache. We rebuilt the entire stack from scratch and redeployed the portlet WARs and still have the same issues. Liferay is running as a single server (not clustered). We disabled mod_cache in Apache.

    The problems are more frequent as the server load grows. This morning the load is pretty light and we are seeing few problems, but use of the site will grow, particularly tonight around 9pm CST through Wednesday morning. You could try the site (http://preview.bsfinternational.org) during those times, and I would expect that you might experience one of the weirdnesses as you randomly click links on the site (https is invoked only when signed in). Again, https seems to exacerbate the issue.

    This seems very much like a caching issue, but I don't know where in the stack to start peeling the onion. Apache? Liferay? Glassfish? MySQL? Maybe even Red Hat? We are stumped, and most forums we have posted to (Liferay and Glassfish) have returned very few suggestions. I just need an idea of where to start looking. I understand that we could have a portlet

    EDIT: Opening the files in a hex editor that appear to be pages that download rather than render, we see that the first 4000 characters are "junk" and then the "HTTP/1.1 ..." 'normal' header is seen. So something is dumping a jumble of characters up to offset 4000 (when viewing it in a hex editor). Perhaps a clue? Ideas?
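    Not from the original post, but the hex-editor finding (garbage followed by a complete "HTTP/1.1" header inside the body) is the classic signature of responses being interleaved on a reused backend connection, which would fit "reverse proxy only" and "worse under load/SSL". A diagnostic sketch for the Apache vhost, using standard mod_proxy/mod_env directives (the backend address is a placeholder):

        # force a fresh backend connection per request, purely to test the
        # keep-alive/interleaving theory; expect a performance hit while set
        SetEnv proxy-nokeepalive 1
        SetEnv force-proxy-request-1.0 1
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/

    If the corruption disappears with these set, the fix belongs in the Apache-to-Glassfish connection handling rather than in Liferay or MySQL.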

  • Nginx + PHP - No input file specified for 1 server block. Other server block works fine

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf:

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                server_name testapp.com;
                root /www/app/www/;
                index index.php index.html index.htm;

                location ~ \.php$ {
                    fastcgi_intercept_errors on;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }

            server {
                listen 80 default_server;
                root /www;
                index index.html index.php;

                location ~ \.php$ {
                    fastcgi_intercept_errors on;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }
        }

    Relevant bits from php-fpm.conf:

        ; Chroot to this directory at the start. This value must be defined as an
        ; absolute path. When this value is not set, chroot is not used.
        ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one
        ; of its subdirectories. If the pool prefix is not set, the global prefix
        ; will be used instead.
        ; Note: chrooting is a great security feature and should be used whenever
        ; possible. However, all PHP paths will be relative to the chroot
        ; (error_log, sessions.save_path, ...).
        ; Default Value: not set
        ;chroot =

        ; Chdir to this directory at the start.
        ; Note: relative path can be used.
        ; Default Value: current directory or / when chroot
        chdir = /www

    In my hosts file, I redirect two domains, testapp.com and test.com, to 127.0.0.1. My web files are all stored in /www.

    From the above settings, if I visit test.com/phpinfo.php and test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded "No input file specified." error. So, at this point, I pull out the log files and have a look:

        2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com"

    This baffles me because I have checked again and again, and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works, which means the file exists and the permissions are correct.

    Why is this happening, and what are the root causes of things breaking for just the testapp.com v-host?

    Just an update to my investigation: I have commented out chroot and chdir in php-fpm.conf to narrow down the problem. If I remove the location ~ \.php$ block for testapp.com, then nginx will send me a bin file which contains the PHP code. This means that on nginx's side, things are fine. The problem is that something must be mangling the file paths when passing them to PHP-FPM. Having said that, it is quite strange that the default_server v-host works fine, because its root is /www, whereas things just won't work for the testapp.com v-host, whose root is /www/app/www.
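    One way (not tried in the post) to take nginx out of the equation entirely is to hand PHP-FPM a request directly and see whether it can open the file itself. A sketch using the cgi-fcgi utility from the FastCGI devkit (the libfcgi tools on Ubuntu; paths match the config above):

        SCRIPT_NAME=/index.php \
        SCRIPT_FILENAME=/www/app/www/index.php \
        REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect 127.0.0.1:9000

    If this also fails with "No such file or directory" while the same test against /www/index.php succeeds, the problem is inside PHP-FPM's view of the filesystem; a leftover chroot or chdir from before the config edit is a frequent cause, since php-fpm must be restarted, not merely have its config changed, for those to go away.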

  • How to troubleshoot a problem with OpenVPN Appliance Server not able to connect

    - by Peter
    1) I have a Windows Server 2008 Standard SP2 box.
    2) I am running Hyper-V and have the OpenVPN Appliance Server virtual machine running.
    3) I have configured it as documented. The only issue was that the legacy network adapter does not have a setting the instructions mention, "Enable spoofing of MAC Addresses". My understanding is that before R2, this was on by default.
    4) The server is running, and the web interfaces look good.
    5) I am trying to connect from a Vista 64 box and cannot.

    5a) If I set it to UDP, I am stuck at "Authorizing" and the client log looks like:

        10/11/09 15:00:42: INFO: OvpnConfig: connect...
        10/11/09 15:00:42: INFO: Gui listen socket at 34567
        10/11/09 15:00:42: INFO: sending start command to instantiator...
        10/11/09 15:00:42: INFO: start 34567 ?C:\Users\Peter\AppData\Roaming\OpenVPNTech\config?02369512D0C82A04B88093022DA0226202218022A902264022AE022B?
        10/11/09 15:00:42: INFO: Got line from MI->>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info
        10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
        10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
        10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
        10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
        10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
        10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
        10/11/09 15:00:43: INFO: Processing PASSWORD.
        10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
        10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
        10/11/09 15:00:43: INFO: Got auth request from active_config from 0
        10/11/09 15:00:47: INFO: Sending Credentials....
        10/11/09 15:00:47: INFO: Sending 25 bytes for username.
        10/11/09 15:00:47: INFO: Sent 25 bytes for username.
        10/11/09 15:00:47: INFO: Sending 30 bytes for password.
        10/11/09 15:00:47: INFO: Sent 30 bytes for password.
        10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
        10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
        10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
        10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
        10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
        10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
        10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
        10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378

    5b) If I set the server up for TCP and try to connect, I get a loop of "Authorizing" and reconnecting. The log looks like:

        10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
        ... (same sequence as above) ...
        10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378
        10/11/09 15:00:54: INFO: Got line from MI->>BYTECOUNT:2560,3888
        ...

    Is there any way to turn on robust logging on the server to understand what is happening? Any ideas on how to hunt this down?
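    For the logging question, a hedged sketch (the appliance is a packaged VM, so the exact paths below are assumptions, and client.ovpn is a placeholder for the exported profile): OpenVPN's verbosity is controlled by the verb directive on both ends, and raising it exposes the TLS/authentication exchange that the GUI's management-interface log hides:

        # client side: rerun the profile with protocol-level logging
        openvpn --config client.ovpn --verb 5

        # server side (from the appliance's console or an SSH session): raise
        # verb in the server config, restart, and watch while a client connects
        grep -n '^verb' /etc/openvpn/server.conf   # hypothetical path
        tail -f /var/log/openvpn.log               # hypothetical path

    At verb 5, a UDP path blocked by NAT and a rejected credential look clearly different in the server log, which should separate the 5a and 5b symptoms.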

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question.

    Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP.

    Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because of how many of our business operations are done without an internet connection, and how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.

    Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following?

        - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
        - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
        - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
        - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
        - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
        - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a two-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds on clients, if necessary.

    I want to emphasize that this isn't a shopping-list request. While I wish there was a program out there that did what I want, I've looked pretty hard and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!
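    Not a product recommendation, just an illustration of the interrupted/resumed, delta-only piece of the puzzle: rsync over SSH already provides resumable, delta-only transfers, and a thin wrapper that simply gives up when the server is unreachable can be fired from a scheduled task whenever a link appears. A minimal sketch under stated assumptions (cwRsync or similar on the Windows clients, a server named backupsrv, all paths hypothetical):

        #!/bin/sh
        # push-backup.sh - resumable delta backup; safe to re-run on any schedule.
        SRC="/cygdrive/c/Users"
        DST="backup@backupsrv:/backups/$(hostname)/"

        # --timeout: give up quickly when the laptop is offline
        # --partial: keep half-transferred files so the next run resumes them
        # --backup/--backup-dir: keep replaced files for single-file restore
        rsync --timeout=60 --partial --compress --archive \
              --backup --backup-dir="deleted-$(date +%F)" \
              "$SRC" "$DST"

    Pairing something like this with rsnapshot or rdiff-backup on the server side would give the dated, browsable history for the "I deleted the TPS reports" case, while Volume Shadow Copy keeps covering the local-to-local requirement.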

  • php-fpm + persistent sockets = 502 bad gateway

    - by leeoniya
    Put on your reading glasses - this will be a long-ish one.

    First, what I'm doing. I'm building a web-app interface for some particularly slow TCP devices. Opening a socket to them takes 200ms, and an fwrite/fread cycle takes another 300ms. To reduce the need for both of these actions on each request, I'm opening a persistent TCP socket, which reduces the response time by the aforementioned 200ms. I was hoping PHP-FPM would share the persistent connections between requests from different clients (and indeed it does!), but there are some issues which I haven't been able to resolve after 2 days of interneting, reading logs and modifying settings. I have somewhat narrowed it down, though.

    Setup:

        Ubuntu 13.04 x64 Server (fully updated) on Linode
        PHP 5.5.0-6~raring+1 (fpm-fcgi)
        nginx/1.5.2

    Relevant config:

        nginx:
            worker_processes 4;

        php-fpm/pool.d:
            pm = dynamic
            pm.max_children = 2
            pm.start_servers = 2
            pm.min_spare_servers = 2

    Let's go from coarse to fine detail of what happens. After a fresh start I have 4x nginx processes and 2x php5-fpm processes waiting to handle requests. Then I send requests every couple of seconds to the script. The first takes a while to open the socket connection and returns with the data in about 500ms; the second returns data in 300ms (yay, it's re-using the socket); the third also succeeds in about 300ms; the fourth request = 502 Bad Gateway, same with the fifth. The sixth request once again returns data, except now it took 500ms again. The process repeats for several cycles, after which every 4 requests result in 2x 502 Bad Gateways and 2x 500ms data responses.

    If I double all the fpm pool values and have 4x php-fpm processes running, the cycle settles in with 4x successful 500ms responses followed by 4x Bad Gateway errors. If I don't use persistent sockets, this issue goes away, but then every request is 500ms.

    What I suspect is happening: the persistent socket keeps each php-fpm process from idling and ties it up, so the next one gets chosen until none are left; as they error out, maybe they are restarted and become available on the next round-robin loop, but the socket dies with the process.

    I haven't yet checked the 'slowlog', but the nginx error log shows lots of this:

        *188 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:...

    All the suggestions on the internet regarding fixing nginx/php-fpm/502 Bad Gateway relate to high load or fcgi_pass misconfiguration. This is not the case here. Increasing buffers/sizes, changing timeouts, switching from unix socket to TCP socket for fcgi_pass, upping connection limits on the system... none of this stuff applies here.

    I've had some other success with setting pm = ondemand rather than dynamic, but as soon as the initial fpm process gets killed off after idling, the persistent socket is gone for all subsequent php-fpm spawns.

    For the PHP script, I'm using stream_socket_client() with a STREAM_CLIENT_PERSISTENT flag, a while/stream_select() loop to detect socket data, and fread($sock, 4096) to grab the data. I don't call fclose(), obviously.

    If anyone has some additional questions or advice on how to get a persistent socket without tying up the php-fpm processes beyond the request completion, or maybe some other things to try, I'd appreciate it.

    Some useful links:

        Nginx + php-fpm - recv() error
        Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server)
        Nginx + PHP-FPM "error 104 Connection reset by peer" causes occasional duplicate posts
        http://www.linuxquestions.org/questions/programming-9/php-pfsockopen-552084/
        http://stackoverflow.com/questions/14268018/concurrent-use-of-a-persistent-php-socket
        http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/#Heading3
        http://stackoverflow.com/questions/242316/how-to-keep-a-php-stream-socket-alive
        http://php.net/manual/en/install.fpm.configuration.php
        https://www.google.com/search?q=recv%28%29+failed+%28104:+Connection+reset+by+peer%29+while+reading+response+header+from+upstream+%22502%22&ei=mC1XUrm7F4WQyAHbv4H4AQ&start=10&sa=N&biw=1920&bih=953&dpr=1
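    One way (not tried in the post) to watch the suspicion directly is to see which php-fpm workers hold established connections to the device after each request; if a worker sits connected while another handles the next request, the round-robin explanation fits. A sketch, assuming the device answers on port 4000 (a placeholder) and iproute2's ss is available:

        # which php-fpm workers currently hold an open socket to the device?
        # (root is needed for the -p process column)
        sudo ss -tnp state established '( dport = :4000 )'

        # watch worker PIDs across requests; a PID disappearing right before
        # a 502 would confirm workers are being killed along with the socket
        watch -n1 'pgrep -l php5-fpm'

    If the PIDs stay stable but the sockets vanish, the device side is closing idle connections instead, which calls for a different fix (an application-level keepalive inside the stream_select() loop).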

  • Mac Terminal.app: Force '^C' to be printed when editing current prompt, then aborting it

    - by Stefan Lasiewski
    This is the opposite of "Prevent '^C' from being printed when aborting editing current prompt". I'm using Bash.

    When I'm editing the command line in Bash and I hit Control-C to abort it, the '^C' character does not display. I would like to see this character. I tried commands like stty -ctlecho and stty ctlecho (which I borrowed from the other question), but this didn't work for me. This behavior seems to be true with my environment on Ubuntu, CentOS and Mac OS X.

    This only happens within Apple's Terminal.app. If I SSH to a remote Linux or FreeBSD box, then ^C is printed. So this is clearly just a local setting.

    Update: Here is the output of stty -a, as requested by @quack quixote:

        $ stty -a
        speed 9600 baud; 41 rows; 88 columns;
        lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl
                -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo
                -extproc
        iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel iutf8
                -ignbrk brkint -inpck -ignpar -parmrk
        oflags: opost onlcr -oxtabs -onocr -onlret
        cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow
                -dtrflow -mdmbuf
        cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>; eol2 = <undef>;
                erase = ^?; intr = ^C; kill = ^U; lnext = ^V; min = 1; quit = ^\;
                reprint = ^R; start = ^Q; status = ^T; stop = ^S; susp = ^Z;
                time = 0; werase = ^W;

    After typing stty sane, stty -a outputs the following. The only difference is the parameter of -iutf8.

        $ stty sane
        $ stty -a
        speed 9600 baud; 41 rows; 157 columns;
        lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl
                -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo
                -extproc
        iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel -iutf8
                -ignbrk brkint -inpck -ignpar -parmrk
        oflags: opost onlcr -oxtabs -onocr -onlret
        cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow
                -dtrflow -mdmbuf
        cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>; eol2 = <undef>;
                erase = ^?; intr = ^C; kill = ^U; lnext = ^V; min = 1; quit = ^\;
                reprint = ^R; start = ^Q; status = ^T; stop = ^S; susp = ^Z;
                time = 0; werase = ^W;
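    One thing not tried in the post: on newer shells, the ^C echo at the prompt is handled by readline rather than by the tty driver, which would explain why the stty ctlecho experiments change nothing. Bash 4.1+/readline 6.1+ has a dedicated variable for it; a sketch, assuming your interactive shell is new enough (check echo $BASH_VERSION, since Apple ships an older bash by default):

        # ~/.inputrc - ask readline itself to echo control characters
        set echo-control-characters on

        # or just for the current session
        bind 'set echo-control-characters on'

    If the stock /bin/bash turns out to be too old for this setting, a newer bash from MacPorts or Homebrew would be the prerequisite.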

  • Exchange 2003 to Exchange 2010 post-migration GAL/OAB problem

    - by user68726
    I am very new to Exchange, so forgive my newbie-ness. I've exhausted Google trying to find a way to solve my problem, so I'm hoping some of you gurus can shed some light on my next steps. Please forgive my bungling around through this.

    The problem: I cannot download/update the Global Address List (GAL) and Offline Address Book (OAB) on my Outlook 2010 clients. I get:

        Task 'emailaddress' reported error (0x8004010F): 'The operation failed. An object cannot be found.'

    (Note I've replaced my actual email address with 'emailaddress'.) I'm using cached Exchange mode; if I turn it off, Outlook hangs completely from the moment I start it up.

    Background information: I migrated mailboxes, public store, etc. from a Small Business Server 2003 box with Exchange 2003 to a Server 2008 R2 box with Exchange 2010, based primarily on an Experts Exchange how-to article. The Exchange server is up and running as an internet-facing Exchange server with all of the roles necessary to send and receive mail, and in that capacity it is working fine. I "thought" I had successfully migrated everything from the SBS03 box, and due to huge numbers of errors in everything from AD to the Exchange install itself, I removed the reference to the SBS03 server in ADSI Edit. I've still got access to the old SBS03 box, but as I said, the number of errors in everything is preventing even the uninstall of Exchange (or the starting of the Exchange Information Store service), so I'm quite content to leave that box completely out of the picture while trying to solve my problem.

    After research, I discovered this is most likely because I failed to run the update-globaladdresslist (or get/update) command from the Exchange shell before I removed the Exchange 2003 server from ADSI Edit (and the network). If I run the command now, it gives me:

        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Offline Address Book - first administrative group" is invalid and couldn't be updated.
        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Schedule+ Free Busy Information - first administrative group" is invalid and couldn't be updated.
        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameArchive" is invalid and couldn't be updated.
        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameContacts" is invalid and couldn't be updated.

    (Note that I've replaced my domain with "domainname.com" and my organization name with "containername".)

    What I've tried: I don't want to use the old OAB or GAL; I don't care about either, and our GAL and distribution lists needed to be organized anyway. So at this point I really just want to get rid of the old reference to the "first administrative group" and move on. I've tried to create a new GAL and tell Exchange 2010 to use that GAL instead of the old GAL, but I'm obviously missing some of the commands or something dumb I need to do to start over with a blank slate/GAL/OAB. I'm very tempted to completely delete the entire "first administrative group" tree from ADSI Edit and see if that gets rid of the ridiculous reference that no longer exists, but I don't want to break something else.

    Commands run to try to create a new GAL and tell Exchange 2010 to use that GAL:

        New-GlobalAddressList -Name NAMEOFNEWGAL
        Set-GlobalAddressList GUID -Name NAMEOFNEWGAL

    This did nothing for me, except that now when I run Get-GlobalAddressList (or with the | FL pipe) I see two GALs listed: the "Default Global Address List" and the "NAMEOFNEWGAL" that I created.

    After a little more research this morning, it looks like you can't change/delete/remove the default address list, and the only way to do what I'm trying to do would be to maybe remove the default address list via ADSI Edit and recreate it with a command something like:

        New-GlobalAddressList -Name "Default Global Address List" -IncludedRecipients AllRecipients

    This would be acceptable, but I've searched and searched and can't find instructions or a breakdown of where exactly the default GAL lives in AD, and whether I'd have to remove multiple child references/records.

    Of interest, I'm getting an event ID 9337 in my application log on my Exchange 2010 box:

        OALGen did not find any recipients in address list \Global Address List. This offline address list will not be generated.
        - \NAMEOFMYOAB

    which pretty much confirms my suspicion that the empty GAL/OAB is what's causing the Outlook client 0x8004010F error. Help please!
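    For the OAB half specifically, a hedged sketch (names are placeholders, and this assumes the new GAL from above now populates correctly): Exchange 2010 can generate a brand-new OAB from the new GAL, mark it as default, and regenerate it, which sidesteps the orphaned "first administrative group" OAB entirely:

        New-OfflineAddressBook -Name "NEWOAB" -AddressLists "NAMEOFNEWGAL"
        Set-OfflineAddressBook "NEWOAB" -IsDefault $true
        Update-OfflineAddressBook "NEWOAB"
        Get-OfflineAddressBook | Format-List Name,IsDefault,AddressLists

    Mailbox databases also carry their own OAB assignment (Set-MailboxDatabase ... -OfflineAddressBook), so pointing them at the new OAB is worth checking before the Outlook clients retry their download.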

  • Bridged virtual interface is not available or visible to ifconfig.

    - by Omniwombat
    Hello all. I'm running Ubuntu 9.04, kernel 2.6.28-18, and vmware-server 2.0.1. I'm attempting to set up a virtual Linux machine to use a bridged interface rather than NAT or host-only. Both NAT and host-only work just fine.

    When running vmware-config.pl, I set /dev/vmnet0 to bridge eth0, /dev/vmnet1 to host-only, and /dev/vmnet8 to NAT. When I run ifconfig -a I see the physical interface (eth0), and vmnet1 and vmnet8, both of which are up and have IP addresses assigned to them. I also see other various interfaces that are not relevant here.

    In the web console, when I ask that the guest machine's network card be bridged, it states that a bridged setup is "Not available" and shows the disabled-device icon. Inside the guest machine, I do have an eth0 interface, which I can set to anything I like; however, it can't see my external network or the host.

    I do see errors in my vmware/hostd.log which state: "The network bridge on device vmnet0 is not running. The virtual machine will not be able to communicate with the host or with other machines on your network", which confirms the problem.

    vmnet-bridge is running, and I see the following in my process table:

        /usr/bin/vmnet-bridge -d /var/run/vmnet-bridge-0.pid -n 0 -i eth0

    I confirm that the /var/run/vmnet-bridge-0.pid file is there and that it points to the correct process.

    I saw this question relating to Ubuntu 9.04 and bridged interfaces, in which the poster determined that the vsock library was not getting built due to a flaw in the vmware-config.pl script. I applied the patch, reran the script, and confirm that vsock.ko and vsock.o are in my /lib directory structure. vsock does show up in an lsmod.

    My /etc/vmware directory has /vmnet1 and /vmnet8 subdirectories. They contain configuration utilities for running DHCP and NAT-type services, as expected. There is no vmnet0 subdirectory. My /etc/vmware/netmap.conf file DOES show entries for vmnet0: both the name and the device, as I configured them from the script.

    My /dev directory contains devices vmnet0 through vmnet9. They have major device number 119 and minor device numbers 0 through 9. /proc/net/dev shows statistics for vmnet1 and vmnet8, but not vmnet0. I have a /proc/vmnet directory, but it's empty.

    When I start or stop the vmware service with /etc/init.d/vmware start, I see the following:

        Starting VMware services:
           Virtual machine monitor                                 done
           Virtual machine communication interface                 done
           VM communication interface socket family:               done
           Virtual ethernet                                        done
           Bridged networking on /dev/vmnet0                       done
           Host-only networking on /dev/vmnet1 (background)        done
           DHCP server on /dev/vmnet1                              done
           Host-only networking on /dev/vmnet8 (background)        done
           DHCP server on /dev/vmnet8                              done
           NAT service on /dev/vmnet8                              done
           VMware Server Authentication Daemon (background)        done
           Shared Memory Available                                 done
        Starting VMware management services:
           VMware Server Host Agent (background)                   done
           VMware Virtual Infrastructure Web Access
        Starting VMware autostart virtual machines:
           Virtual machines                                        done

    Nothing appears to be wrong there. What n00b thing am I doing such that vmnet0, and only vmnet0, does not show up in the interface list?
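    A couple of by-hand checks worth trying (a sketch, not a known fix): the init output claims the vmnet0 bridge started, but /proc/net/dev disagrees, so restarting just the bridge manually may surface an error that the init script swallows:

        # confirm the vmnet kernel module is actually loaded
        lsmod | grep vmnet

        # kill the existing bridge and rerun it by hand with the exact
        # arguments from the process table above, watching for errors
        sudo killall vmnet-bridge
        sudo /usr/bin/vmnet-bridge -d /var/run/vmnet-bridge-0.pid -n 0 -i eth0

    If that errors out or silently exits, rerunning vmware-config.pl and letting it recreate the bridged network is the usual next step. The missing /etc/vmware/vmnet0 subdirectory may simply reflect that bridged networks, unlike NAT/host-only, need no DHCP config, so its absence alone isn't conclusive.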

  • Automatically starting svnserve on Snow Leopard

    - by Cleggy
    I have installed Subversion onto my iMac running Snow Leopard, but am having trouble getting svnserve to start up automatically. As I understand it (I'm still fairly green with OSX), the best way to do that is to utilize launchd. To that end, I have created the following .plist file in the /Library/LaunchDaemons folder. If I use launchctl to execute this file, svnserve starts as expected, but it doesn't automatically start when the system starts up or I log in.

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Disabled</key>
            <false/>
            <key>Label</key>
            <string>org.tigris.subversion.svnserve</string>
            <key>UserName</key>
            <string>Dave</string>
            <key>ProgramArguments</key>
            <array>
                <string>/opt/subversion/bin/svnserve</string>
                <string>--inetd</string>
                <string>--root=/Users/Shared/SVNrep</string>
            </array>
            <key>ServiceDescription</key>
            <string>Subversion Standalone Server</string>
            <key>Sockets</key>
            <dict>
                <key>Listeners</key>
                <array>
                    <dict>
                        <key>SockFamily</key>
                        <string>IPv4</string>
                        <key>SockServiceName</key>
                        <string>svn</string>
                        <key>SockType</key>
                        <string>stream</string>
                    </dict>
                    <dict>
                        <key>SockFamily</key>
                        <string>IPv6</string>
                        <key>SockServiceName</key>
                        <string>svn</string>
                        <key>SockType</key>
                        <string>stream</string>
                    </dict>
                </array>
            </dict>
            <key>inetdCompatibility</key>
            <dict>
                <key>Wait</key>
                <false/>
            </dict>
        </dict>
        </plist>

    If anyone here could provide any suggestions as to how to get this to work, I'd really appreciate it.
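    One detail that commonly bites here (an assumption, since the post doesn't say how the job was loaded): launchctl load without -w does not clear a per-job "disabled" override, so the daemon runs when loaded manually but is skipped at boot. A sketch:

        # reload the daemon, clearing any disabled flag so it persists across reboots
        sudo launchctl unload /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist 2>/dev/null
        sudo launchctl load -w /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist
        sudo launchctl list | grep svnserve

    launchd is also strict about ownership and permissions for /Library/LaunchDaemons: the plist should be owned by root:wheel with mode 644, which chown root:wheel and chmod 644 will fix if the file was created as a regular user.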

  • Windows 7 sometimes boots in VGA mode

    - by TuxRug
    I have an Asus G50VT-x5 laptop with nVidia GeForce 9800M GS graphics. Normally, Windows boots normally, but about 20% of the time (rough estimate), it will boot with the fallback VGA driver, maxing out at 800x600 with no Aero. I've checked the system logs and there is nothing indicating an error loading the nVidia driver. The logs even state that the NVIDIA Display Driver service started successfully, even though the system has booted in safe graphics mode. This has been happening for a while, but it's happening a little more often now than it was before. Since the first time my system exhibited this behavior, I have updated my graphics driver a handful of times.

    I used System Information for Windows (SIW) to check for problems there, but the only thing that stood out was the following:

        Core Temperature       4486449 °C (8075639 °F)
        Shaders Temperature    1171513530 °C (2108724330 °F)

    I know this reading is incorrect, because my laptop is nowhere near the surface of the sun and my desk has not burst into flames. When it's operating normally, I get a sane reading like [Core Temperature 58 °C (136 °F)] with no Shaders Temperature listed.

    All I have to do to resolve the issue is reboot. I have seen no stability issues with the graphics or anything else. A long time ago, I had an issue with this computer where my framerate would suddenly drop during a 3D game from 40fps to <1fps, but after looking at the temperature readout immediately after quitting a game, I removed the bottom panel and blew the dust out of the vent and heatsink. Since then I have had no drops in framerate under any situation.

    I have uploaded a zip containing the SIW reports for when the problem is occurring and when the computer is operating normally. I don't have a paid account, so it can only be downloaded 10 times; please only download the reports if you think you can use them. If you try to download the reports and they are no longer available, please comment and I will re-upload them. If you want to look at the files, they are on Rapidshare.

    EDIT: It happened again, and I looked a little deeper into the System logs. When this happens, there are a lot of errors about other device drivers being unable to start. All of these errors are for PnP drivers. Also, my USB keyboard and mouse take a few moments before they actually start working, although this sometimes happens on a normal boot as well. I am quite sure this is related, so I am adding the pnp tag.

    Also, CHKDSK will not run on boot. Even if a check is scheduled or a volume is manually set as dirty, CHKDSK will be skipped entirely, not even leaving an entry in the System logs. I tried running CHKNTFS /D, which did not work. I then manually changed my HKLM\System\CurrentControlSet\Control\Session Manager BootExecute value to the default listed on Microsoft's website. That did not work either. I ended up booting to repair mode and running CHKDSK there, which found a number of minor inconsistencies on my system drive, but none on my data drive. I have no idea if this is related.

    Some more information for those who don't download my SIW report file:

        - Antivirus and firewall are ESET Smart Security.
        - I have three different virtualization programs installed: VMware Player, Windows Virtual PC, and VirtualBox. The network adapters for these show up in the log of failed device starts.

    EDIT 2: I tried running sfc /scannow, which reported that it found corrupted files that could not be fixed. The CBS log is extremely cryptic. I tried booting to my install disk, launching repair mode, and doing an offline sfc from there, which produced the same result.
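    A way (not from the original post) to pull the relevant boot-time failures out of the System log in one shot, instead of scrolling Event Viewer, is the built-in wevtutil from an elevated command prompt:

        :: most recent critical/error System events, newest first, plain text
        wevtutil qe System /q:"*[System[(Level=1 or Level=2)]]" /c:30 /rd:true /f:text

    Comparing this dump from a bad boot against a good one should show whether the PnP driver failures always precede the display driver falling back, which would point at a shared cause early in boot rather than at the nVidia driver itself.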

  • Cannot create zpool; how to get rid of an Intel RAID volume?

    - by nagylzs
    This is a FreeBSD 9.1 amd64 computer. It has 5 disks installed. The ada0 and ada1 disks are used with a hardware RAID to provide the root filesystem:

      root@gw:/home/gandalf # ls /dev | grep ada
      ada0 ada1 ada2 ada3 ada4
      root@gw:/home/gandalf # zpool status
        pool: zroot
       state: ONLINE
        scan: none requested
      config:
              NAME          STATE   READ WRITE CKSUM
              zroot         ONLINE     0     0     0
                raid/r0s1a  ONLINE     0     0     0
      errors: No known data errors

    I want to create a raidz pool for the remaining disks:

      root@gw:/home/gandalf # zpool create -f data raidz1 ada2 ada3 ada4
      cannot create 'data': one or more devices is currently unavailable
      root@gw:/home/gandalf # dmesg | grep ada2
      ada2 at ata4 bus 0 scbus6 target 0 lun 0
      ada2: <WDC WD20EARS-00MVWB0 51.0AB51> ATA-8 SATA 2.x device
      ada2: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
      ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
      ada2: Previously was known as ad16
      root@gw:/home/gandalf # dmesg | grep ada3
      ada3 at ata5 bus 0 scbus7 target 0 lun 0
      ada3: <SAMSUNG HD103UJ 1AA01118> ATA-7 SATA 2.x device
      ada3: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
      ada3: 953868MB (1953523055 512 byte sectors: 16H 63S/T 16383C)
      ada3: Previously was known as ad18
      GEOM_RAID: Intel-fb8732fa: Disk ada3 state changed from NONE to ACTIVE.
      GEOM_RAID: Intel-fb8732fa: Subdisk Volume0:0-ada3 state changed from NONE to ACTIVE.
      root@gw:/home/gandalf # dmesg | grep ada4
      ada4 at ata6 bus 0 scbus8 target 0 lun 0
      ada4: <TOSHIBA DT01ACA100 MS2OA750> ATA-8 SATA 3.x device
      ada4: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
      ada4: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
      ada4: Previously was known as ad20

    Aha, so ada3 is already part of another RAID volume? Let's see:

      root@gw:/home/gandalf # dmesg | grep GEOM_RAID
      GEOM_RAID: SiI-130628113902: Array SiI-130628113902 created.
      GEOM_RAID: SiI-130628113902: Disk ada0 state changed from NONE to ACTIVE.
      GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:1-ada0 state changed from NONE to STALE.
      GEOM_RAID: SiI-130628113902: Disk ada1 state changed from NONE to ACTIVE.
      GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:0-ada1 state changed from NONE to STALE.
      GEOM_RAID: SiI-130628113902: Array started.
      GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:0-ada1 state changed from STALE to ACTIVE.
      GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:1-ada0 state changed from STALE to RESYNC.
      GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:1-ada0 rebuild start at 0.
      GEOM_RAID: SiI-130628113902: Volume SiI Raid1 Set state changed from STARTING to SUBOPTIMAL.
      GEOM_RAID: SiI-130628113902: Provider raid/r0 for volume SiI Raid1 Set created.
      GEOM_RAID: Intel-fb8732fa: Array Intel-fb8732fa created.
      GEOM_RAID: Intel-fb8732fa: Force array start due to timeout.
      GEOM_RAID: Intel-fb8732fa: Disk ada3 state changed from NONE to ACTIVE.
      GEOM_RAID: Intel-fb8732fa: Subdisk Volume0:0-ada3 state changed from NONE to ACTIVE.
      GEOM_RAID: Intel-fb8732fa: Array started.
      GEOM_RAID: Intel-fb8732fa: Volume Volume0 state changed from STARTING to DEGRADED.
      GEOM_RAID: Intel-fb8732fa: Provider raid/r1 for volume Volume0 created.

    Yes, indeed. I want to get rid of raid/r1 completely. However, the controller was already set to "IDE" mode in the BIOS, so why is it creating a RAID volume? I have also tried overwriting the first 16k of ada3 and rebooting the computer, but it did not help. How can I delete /dev/raid/r1?

      root@gw:/home/gandalf # graid status
         Name      Status  Components
      raid/r0  SUBOPTIMAL  ada0 (ACTIVE (RESYNC 4%))
                           ada1 (ACTIVE (ACTIVE))
      raid/r1    DEGRADED  ada3 (ACTIVE (ACTIVE))
      root@gw:/home/gandalf # graid delete raid/r1
      graid: Array 'raid/r1' not found.
      root@gw:/home/gandalf # graid delete /dev/raid/r1
      graid: Array '/dev/raid/r1' not found.

    Thanks
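    A note on the two failing delete attempts above: graid(8) appears to address arrays by their geom name as printed in dmesg (here Intel-fb8732fa), not by the /dev/raid/rN provider, and Intel metadata is stored near the end of each member disk, which is why zeroing the first 16k of ada3 changed nothing. A sketch of both approaches; the sector arithmetic is an assumption, so verify it with diskinfo before running dd:

      # try deleting the array by its geom name first (a guess based on
      # the dmesg output above; 'raid/r1' is only the provider name)
      graid delete Intel-fb8732fa

      # failing that, wipe the metadata area at the END of the disk;
      # ada3 reports 1953523055 sectors in dmesg
      SECTORS=$(diskinfo /dev/ada3 | awk '{print $4}')   # total sector count
      dd if=/dev/zero of=/dev/ada3 bs=512 seek=$((SECTORS - 1024)) count=1024

    After a reboot, the Intel-fb8732fa array should no longer be assembled and ada3 should be usable for the raidz pool.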

    Read the article

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two-node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but I am not sure which is the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:

      <rm>
        <failoverdomains>
          <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
            <failoverdomainnode name="rhc-n1"/>
          </failoverdomain>
          <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
            <failoverdomainnode name="rhc-n2"/>
          </failoverdomain>
        </failoverdomains>
        <resources>
          <script file="/etc/init.d/clvm" name="clvmd"/>
          <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
        </resources>
        <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
          <script ref="clvmd">
            <clusterfs ref="gfs"/>
          </script>
        </service>
        <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
          <script ref="clvmd">
            <clusterfs ref="gfs"/>
          </script>
        </service>
      </rm>

    This is what I mean: when using the clusterfs resource agent to handle a GFS partition, it is not unmounted by default (unless the force_unmount option is given). This way, when I issue

      clusvcadm -s shared-storage-inst1

    clvm is stopped, but GFS is not unmounted, so the node can no longer alter the LVM structure on shared storage, yet it can still access the data. And even though a node can do that quite safely (dlm is still running), this seems rather inappropriate to me, since clustat reports that the service on that node is stopped. Moreover, if I later try to stop cman on that node, it will find the dlm locking produced by GFS and fail to stop.

    I could simply have added force_unmount="1", but I would like to know the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue as to how the decision was made.

    Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script (https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources), or even simply enable services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like:

      chkconfig gfs2 on

    If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but it has no control over the status of its resources. I have some experience with Pacemaker, and I'm used to all resources being controlled by the cluster, so that action can be taken not only when there are connectivity issues but also when any of the resources misbehave.

    So, which is the right way for me to go?

    1. Leave the GFS partition mounted (any reasons to do so?).
    2. Set force_unmount="1". Won't this break anything? Why is this not the default? (A sketch follows below.)
    3. Use a script resource <script file="/etc/init.d/gfs2" name="gfs"/> to manage the GFS partition.
    4. Start it at boot and don't include it in cluster.conf (any reasons to do so?).

    This may be the sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or thoughts on the issue. What does /etc/cluster/cluster.conf look like, for example, when configuring gfs with Conga or ccs (they are not available to me, since for now I have to use Ubuntu for the cluster)? Thank you very much!
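    For what it's worth, a minimal sketch of option 2, with force_unmount as the only changed attribute (everything else copied from the resource definition above):

      <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs"
                 device="/dev/vg-cs/lv-gfs" force_unmount="1"/>

    With that in place, clusvcadm -s should unmount /mnt/gfs and release its dlm locks before clvmd is stopped, which would also let cman shut down cleanly; whether that is safe for your workload is exactly the open question above.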

    Read the article

  • Hardware recommendations / parts list for a modern, quiet ZFS NAS box - 2011-Feb edition [closed]

    - by dandv
    I want to build some really reliable storage for my data, and it seems that ZFS is the only filesystem at the moment that does live checksumming. That rules out DroboPro, so I'm looking to build a quiet ZFS NAS that would start with 4 hard drives of 2 TB or larger. I'd like this system to be very reliable and relatively future-proof for 2-3 years, so I'm willing to invest some $$$ and buy higher-end components. I did see questions here and on other forums about low-cost servers, but I'm not looking for those. I'd be super happy to go for an off-the-shelf solution, but I haven't found one that's quiet.

    I started doing the research (summarized on my wiki), but I realized that it just gets too complicated for what I know as a software dude, and I'm entering analysis-paralysis territory. At this point, I'm basically looking for a parts list for a configuration that will work (and is modern), and I know there are folks around here who are way more competent than me. I've built computers and am comfortable assembling one and messing with *nix; I can follow guides; I just want to end the decision process for the hardware and software configuration.

    What I've researched so far (not that these are very modern components):

    Case: I think I've settled on the Antec Twelve Hundred, because it cools well, is quiet, and simply has 12 bays that allow elastic mounting. The SilverStone Raven is its counter-candidate, but I find its construction quite odd.

    PSU: I'm torn between the Antec CP-850 and the Nexus RX-8500, but I did this research more than a year ago. The Nexus has a very uniform power profile, and I'd rather not have the Antec spin up and down based on load. On the other hand, I'm not sure how often my file server will draw more than 400 W under use.

    Hard drives: I've read that WD Black drives are actually WD RE3 with a software setting changed. I'd also like to buy different drive types, not just 4 WDs. Recommendations? Right now I have a 2 TB Hitachi Deskstar 7K3000.

    Motherboard, CPU and RAM: I have no idea, other than that the RAM must be ECC. I already asked a question here about ECC RAM, but I was misguided and was looking for a motherboard that would support USB 3.0 as well. I've learned to go with eSATA, or worry about USB later.

    Then there's the (liquid) cooling, the Wi-Fi card, and FreeBSD vs. OpenSolaris Express.

    Lastly, I'm wondering if I can make this PC into a media server by adding a Blu-ray drive and a good sound card. But support for Blu-ray is spotty on Linux, and I don't know if Windows 7 on VirtualBox would get sufficient hardware access to output HDMI or S/PDIF signals. (Running OpenSolaris virtualized is not an option because of the reliability risk.) Then there are HDCP concerns. Suggestions on that would be appreciated as well, but I don't want us to get sidetracked.

    A specific shopping list for the core components would be great, so I can start ordering and in the meantime educate myself on the other issues. Finally, I think this could become a great FAQ for those technically inclined to build their own ZFS server but confused by the dizzying array of options out there, and I promise to compile the results and share my experience building and benchmarking the server.

    Read the article

  • DRBD not syncing between my nodes when IP is reset

    - by ramdaz
    I am trying to set up DRBD by following the article at http://www.howtoforge.com/setting-up-network-raid1-with-drbd-on-ubuntu-11.10-p2. I am using Ubuntu 10.04 and DRBD 8.3.11. In the first run I had everything working perfectly, and when shifting the systems to a production environment I decided to redo the metadata creation and start from scratch. The IPs had changed entirely in the production environment.

    Issuing drbdadm create-md r0 on both servers runs successfully. But when I do "drbdadm -- --overwrite-data-of-peer primary all" on the primary, it fails to start the resync. My config file is as given below:

      resource r0 {
        protocol C;
        syncer { rate 50M; }
        startup {
          wfc-timeout 15;
          degr-wfc-timeout 60;
        }
        net {
          cram-hmac-alg sha1;
          shared-secret "aklsadkjlhdbskjndsf8738734jkfkjfkjf";
        }
        on primaryds {
          device /dev/drbd0;
          disk /dev/md2;
          address 172.16.7.1:7788;
          meta-disk internal;
        }
        on secondaryds {
          device /dev/drbd0;
          disk /dev/md2;
          address 172.16.7.3:7788;
          meta-disk internal;
        }
      }

    Status on the primary:

      root@primaryds:~# cat /proc/drbd
      version: 8.3.7 (api:88/proto:86-91)
      GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root@primaryds, 2012-05-12 15:08:01
      0: cs:WFBitMapS ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
         ns:0 nr:0 dw:0 dr:200 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5690352828

    Status on the secondary:

      root@secondaryds:/etc/drbd.d# cat /proc/drbd
      version: 8.3.7 (api:88/proto:86-91)
      GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root@secondaryds, 2012-05-12 15:25:25
      0: cs:WFBitMapT ro:Secondary/Primary ds:Inconsistent/UpToDate C r----
         ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5690352828

    Log on the primary:

      May 30 13:42:23 primaryds kernel: [ 1584.057076] block drbd0: role( Secondary -> Primary ) disk( Inconsistent -> UpToDate )
      May 30 13:42:23 primaryds kernel: [ 1584.086264] block drbd0: Forced to consider local data as UpToDate!
      May 30 13:42:23 primaryds kernel: [ 1584.086303] block drbd0: Creating new current UUID
      May 30 13:42:26 primaryds kernel: [ 1586.405551] block drbd0: drbd_sync_handshake:
      May 30 13:42:26 primaryds kernel: [ 1586.405564] block drbd0: self E8A075F378173D4B:0000000000000004:0000000000000000:0000000000000000 bits:1422588207 flags:0
      May 30 13:42:26 primaryds kernel: [ 1586.405574] block drbd0: peer 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:1422588207 flags:0
      May 30 13:42:26 primaryds kernel: [ 1586.405582] block drbd0: uuid_compare()=2 by rule 30
      May 30 13:42:26 primaryds kernel: [ 1586.405587] block drbd0: Becoming sync source due to disk states.
      May 30 13:42:26 primaryds kernel: [ 1586.405592] block drbd0: Writing the whole bitmap, full sync required after drbd_sync_handshake.
      May 30 13:42:27 primaryds kernel: [ 1588.171638] block drbd0: 5427 GB (1422588207 bits) marked out-of-sync by on disk bit-map.
      May 30 13:42:27 primaryds kernel: [ 1588.172769] block drbd0: conn( Connected -> WFBitMapS )

    Log on the secondary:

      May 30 13:42:24 secondaryds kernel: [ 1563.304894] block drbd0: peer( Secondary -> Primary ) pdsk( Inconsistent -> UpToDate )
      May 30 13:42:24 secondaryds kernel: [ 1563.339674] block drbd0: drbd_sync_handshake:
      May 30 13:42:24 secondaryds kernel: [ 1563.339685] block drbd0: self 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:1422588207 flags:0
      May 30 13:42:24 secondaryds kernel: [ 1563.339695] block drbd0: peer E8A075F378173D4B:0000000000000004:0000000000000000:0000000000000000 bits:1422588207 flags:0
      May 30 13:42:24 secondaryds kernel: [ 1563.339703] block drbd0: uuid_compare()=-2 by rule 20
      May 30 13:42:24 secondaryds kernel: [ 1563.339709] block drbd0: Becoming sync target due to disk states.
      May 30 13:42:24 secondaryds kernel: [ 1563.339714] block drbd0: Writing the whole bitmap, full sync required after drbd_sync_handshake.
      May 30 13:42:26 secondaryds kernel: [ 1565.652342] block drbd0: 5427 GB (1422588207 bits) marked out-of-sync by on disk bit-map.
      May 30 13:42:26 secondaryds kernel: [ 1565.652965] block drbd0: conn( Connected -> WFBitMapT )

    The servers stop responding once they reach this stage. I have tried redoing it a couple of times, but nothing happens. Why could the resync not be taking place? I would appreciate any advice or directions.
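    One thing worth ruling out before touching DRBD itself: cs:WFBitMapS / cs:WFBitMapT means both nodes have agreed on a full sync and are now waiting to exchange the dirty bitmap, and hanging exactly there is commonly a replication link that passes the small handshake packets but drops the larger bitmap traffic (a firewall, MTU, or routing change that came with the new production IPs). A sketch of checks from the primary, assuming the addresses from the config above and eth0 as the replication interface:

      ping -c 3 172.16.7.3            # basic reachability
      nc -z -v 172.16.7.3 7788        # is the DRBD port open end to end?
      iptables -L -n | grep 7788      # any filter rules in the way?
      tcpdump -ni eth0 port 7788      # watch whether bitmap packets actually flow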

    Read the article

  • /usr/sbin/sshd isn't linked against PAM on one of my systems. What is wrong and how can I fix it?

    - by marc.riera
    Hi, I'm using AD as my user account server with LDAP. Most of the servers run with UsePAM yes, except this one; its sshd lacks PAM support:

      root@linserv9:~# ldd /usr/sbin/sshd
      linux-vdso.so.1 => (0x00007fff621fe000)
      libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000)
      libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000)
      libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000)
      libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000)
      libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000)
      libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000)
      libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000)
      /lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000)

    I have these packages installed:

      root@linserv9:~# dpkg -l | grep -E 'pam|ssh'
      ii denyhosts 2.6-2.1 an utility to help sys admins thwart ssh hac
      ii libpam-modules 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules for PAM
      ii libpam-runtime 0.99.7.1-5ubuntu6.1 Runtime support for the PAM library
      ii libpam-ssh 1.91.0-9.2 enable SSO behavior for ssh and pam
      ii libpam0g 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules library
      ii libpam0g-dev 0.99.7.1-5ubuntu6.1 Development files for PAM
      ii openssh-blacklist 0.1-1ubuntu0.8.04.1 list of blacklisted OpenSSH RSA and DSA keys
      ii openssh-client 1:4.7p1-8ubuntu1.2 secure shell client, an rlogin/rsh/rcp repla
      ii openssh-server 1:4.7p1-8ubuntu1.2 secure shell server, an rshd replacement
      ii quest-openssh 5.2p1_q13-1 Secure shell

    What am I doing wrong? Thanks.

    Edit:

      root@linserv9:~# cat /etc/pam.d/sshd
      # PAM configuration for the Secure Shell service
      # Read environment variables from /etc/environment and
      # /etc/security/pam_env.conf.
      auth required pam_env.so # [1]
      # In Debian 4.0 (etch), locale-related environment variables were moved to
      # /etc/default/locale, so read that as well.
      auth required pam_env.so envfile=/etc/default/locale
      # Standard Un*x authentication.
      @include common-auth
      # Disallow non-root logins when /etc/nologin exists.
      account required pam_nologin.so
      # Uncomment and edit /etc/security/access.conf if you need to set complex
      # access limits that are hard to express in sshd_config.
      # account required pam_access.so
      # Standard Un*x authorization.
      @include common-account
      # Standard Un*x session setup and teardown.
      @include common-session
      # Print the message of the day upon successful login.
      session optional pam_motd.so # [1]
      # Print the status of the user's mailbox upon successful login.
      session optional pam_mail.so standard noenv # [1]
      # Set up user limits from /etc/security/limits.conf.
      session required pam_limits.so
      # Set up SELinux capabilities (need modified pam)
      # session required pam_selinux.so multiple
      # Standard Un*x password updating.
      @include common-password

    Edit 2: UsePAM yes fails. With this configuration, ssh fails to start:

      root@linserv9:/home/admmarc# cat /etc/ssh/sshd_config | grep -vE "^[ \t]*$|^#"
      Port 22
      Protocol 2
      ListenAddress 0.0.0.0
      RSAAuthentication yes
      PubkeyAuthentication yes
      AuthorizedKeysFile .ssh/authorized_keys
      ChallengeResponseAuthentication yes
      UsePAM yes
      Subsystem sftp /usr/lib/sftp-server

    The error it gives is as follows:

      root@linserv9:/home/admmarc# /etc/init.d/ssh start
      * Starting OpenBSD Secure Shell server sshd
      /etc/ssh/sshd_config: line 75: Bad configuration option: UsePAM
      /etc/ssh/sshd_config: terminating, 1 bad configuration options
      ...fail!
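    The ldd output (no libpam anywhere) together with the "Bad configuration option: UsePAM" error suggests that /usr/sbin/sshd on this box comes from the quest-openssh package, which appears to have been built without PAM support, rather than from Ubuntu's openssh-server. A sketch of how to confirm and undo that; treat the reinstall line as one possible fix, not the only one:

      dpkg -S /usr/sbin/sshd                      # which package owns the binary?
      ldd /usr/sbin/sshd | grep libpam            # stock Ubuntu sshd links libpam0g
      apt-get install --reinstall openssh-server  # restore the PAM-enabled sshd

    Alternatively, if you need quest-openssh for its AD integration, check whether a PAM-enabled build of it is available before replacing it.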

    Read the article

  • How to troubleshoot an OpenVPN Appliance Server that clients cannot connect to

    - by Peter
    1) I have Windows Server 2008 Standard SP2.
    2) I am running Hyper-V and have the OpenVPN Appliance Server virtual machine running.
    3) I have configured it as the instructions said; the only issue was that the legacy network adapter does not have the "Enable spoofing of MAC Addresses" setting the instructions mention. My understanding is that before R2, this was on by default.
    4) The server is running and the web interfaces look good.
    5) I am trying to connect from a Vista 64 box and cannot.

    5a) If I set it to UDP, I am stuck at "Authorizing" and the client log looks like:

      10/11/09 15:00:42: INFO: OvpnConfig: connect...
      10/11/09 15:00:42: INFO: Gui listen socket at 34567
      10/11/09 15:00:42: INFO: sending start command to instantiator...
      10/11/09 15:00:42: INFO: start 34567 ?C:\Users\Peter\AppData\Roaming\OpenVPNTech\config?02369512D0C82A04B88093022DA0226202218022A902264022AE022B?
      10/11/09 15:00:42: INFO: Got line from MI->>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info
      10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
      10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
      10/11/09 15:00:43: INFO: Processing PASSWORD.
      10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
      10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
      10/11/09 15:00:43: INFO: Got auth request from active_config from 0
      10/11/09 15:00:47: INFO: Sending Credentials....
      10/11/09 15:00:47: INFO: Sending 25 bytes for username.
      10/11/09 15:00:47: INFO: Sent 25 bytes for username.
      10/11/09 15:00:47: INFO: Sending 30 bytes for password.
      10/11/09 15:00:47: INFO: Sent 30 bytes for password.
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
      10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
      10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378

    5b) If I set the server to TCP and try to connect, I get a loop of authorizing and reconnecting. The log looks like:

      10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
      10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
      10/11/09 15:00:43: INFO: Processing PASSWORD.
      10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
      10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
      10/11/09 15:00:43: INFO: Got auth request from active_config from 0
      10/11/09 15:00:47: INFO: Sending Credentials....
      10/11/09 15:00:47: INFO: Sending 25 bytes for username.
      10/11/09 15:00:47: INFO: Sent 25 bytes for username.
      10/11/09 15:00:47: INFO: Sending 30 bytes for password.
      10/11/09 15:00:47: INFO: Sent 30 bytes for password.
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
      10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
      10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378
      10/11/09 15:00:54: INFO: Got line from MI->>BYTECOUNT:2560,3888
      ...

    Is there any way to turn on robust logging on the server to understand what is happening? Any ideas on how to hunt this down?
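    Regarding logging: OpenVPN itself can be made much more talkative on both ends. These are standard OpenVPN directives (where exactly the appliance exposes its server-side config is an assumption; on the client they go into the .ovpn profile):

      verb 5      # per-packet read/write marks plus TLS state transitions
      mute 20     # collapse long runs of repeated messages

    Also worth noting: a client stuck at AUTH while the byte counters keep climbing is consistent with the tunnel partially working but traffic dying at the virtual NIC. If the appliance bridges clients onto the LAN (TAP mode), a legacy adapter without the "Enable spoofing of MAC Addresses" option can pass the TLS handshake yet drop the bridged frames, so the missing Hyper-V setting from step 3 is a prime suspect.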

    Read the article

  • Can I make my drives visible and change their partition type without losing my data?

    - by user165408
    I have made a lot of mistakes, and now I cannot see my hard disk, nor can I start the operating system on my laptop. All my passwords and important files are on the HDD, without any backup. I followed this course of action:

    1. Changed my hard disk partitions to dynamic just to get a 5th partition (1st mistake).
    2. Decreased the partitions to 4 again.
    3. Backed up the operating system from the 4th to the 3rd partition with Norton Ghost.
    4. Booted from a live CD for Windows XP.
    5. Formatted the 4th partition and moved all my important data from the 1st and 2nd partitions to the 4th partition.
    6. Deleted the 1st and 2nd partitions and created one partition from half of the empty space. So I have just 3 partitions and empty space between the 1st and 2nd partitions.
    7. Tried to install Windows 8 to the first partition, but it refused because the partition is dynamic. It did not allow installing to the other partitions either.
    8. Tried to install Windows XP to the 1st partition, but it said that if I continued I could not use the other drives, so I backed out of installing it.
    9. Booted from the Windows XP live CD, then increased the 1st partition by less than 400 MB of empty space. I expected the space to become adjacent, but it was shown as 2 partitions.
    10. In My Computer I see just 3 drives. Using Norton Ghost, I recovered my OS to the 1st partition (2nd mistake; it was on the 4th partition originally).
    11. Booted from a Windows XP live CD. I tried to install bcdedit to the live CD, but it did not work. Then I tried to install EaseUS Partition Master Home Edition. It installed with errors; when I started it, it showed an error like "there is no hard disk". I looked in My Computer and my drives were not there.
    12. Booted from the Norton Ghost CD, and it did not show my drives either, although before I was able to see them. I checked the partition numbers shown by the Norton Ghost utility and they still have the same numbers, so I ought to see my drives, but I cannot.

    My hard disk is now shown as an external dynamic disk, so I cannot see any of my PC's drives in the live Windows XP. There are two options: the first is to import the external (foreign) disk, and the second is to convert the disk to basic. Will they delete my data?

    I am afraid of booting from CDs such as the Windows XP live CD, the Norton Ghost CD, and the operating system CD/DVD, because they may overwrite a few MB of their data onto my data. These recovery tools already exist on the Windows XP live CD from The Ultimate Boot CD for Windows. Can any of them help me?

      CompuAppa SwissKnife V3
      DBXtract
      Disk Investigator
      Fab's AutoBackup 2.0
      FileRecovery
      Floppy Repair
      Free Undelete
      Handy Recovery
      Recovery Manager
      Restoration
      Restoration Help File by UBCD4Win
      UnChk
      Unstoppable Copier

    Finally: how can I make my drives visible again without losing my data? How can I convert my dynamic partitions to basic without losing my data?
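    Two points that may help before anything destructive is tried. First, in Windows Disk Management, "Import Foreign Disks" is designed to preserve the data on a dynamic disk, whereas "Convert to Basic Disk" requires the volumes to be deleted first, so of the two options shown, only importing is safe. Second, for simple (not spanned or striped) volumes, a dynamic disk differs from a basic one mainly by the MBR partition type byte, 0x42 instead of the normal type, so flipping it back is usually non-destructive. A sketch from a Linux live CD; the device name and the assumption that all your volumes are simple are exactly that, assumptions, and a sector-level backup should come first:

      # back up the MBR before touching anything
      dd if=/dev/sda of=/mnt/usb/sda-mbr.bin bs=512 count=1
      # then change each partition's type from 42 back to its original
      # value (07 for NTFS) with the 't' command, and write with 'w'
      fdisk /dev/sda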

    Read the article

  • How do I reset/update my BIOS for Optiplex GX280?

    - by Sam Langlhey
    So far this has been a nightmare for me, which has been frustrating me constantly. I am using a Dell Optiplex GX280 with Windows XP Home Edition, running BIOS version A04. Recently, I rebooted the PC to find that it's not booting. It gets to the Windows boot-up screen with the progress bar, only to restart into the same process again, over and over.

    Frustrated as I am, I inserted the Windows recovery CD to either repair or reinstall the operating system, only to find that this was not possible. I hit F8 to get the boot options; each boot option I selected gave me an error saying: "Selected boot device is not available."

    Right after that, I went into the BIOS settings and ran a diagnostic test, which recognized all the boot devices on board. Now I cannot even repair or reinstall Windows XP, because the system does not boot from any of the boot devices. The surprise came when I removed the hard drive from the computer and loaded it into another computer successfully; that's right, there is nothing wrong with the hard drive. After that I was totally puzzled.

    I found a few pointers online saying that the BIOS start-up block itself might be corrupted and I might need to flash/update the BIOS. I found detailed instructions on how to create a boot disk by downloading the BIOS firmware from the manufacturer's website. I did exactly as instructed below:

    1. Download the latest (or your chosen) version of the BIOS file for your computer or motherboard from the manufacturer's support site.
    2. Rename the downloaded file to AMIBOOT.ROM.
    3. Copy the file to a floppy disk.
    4. Insert the floppy disk into the floppy drive.
    5. Turn on the system.

    After I did that and powered on the PC to boot from the floppy drive, it gave me this error message: "Non-System Disk or Disk Error. Replace and Strike any key when ready." I did all that, and I kept pressing [Ctrl]+[Home] to force it, but it did not produce any satisfying result.

    Desperate as I am, my next attempt is to try the instructions below. Since I want to be ready in the event it does not work, do you have any solution you can provide? Please keep in mind that I cannot boot from any of the devices at this moment. My only hope now is to come up with a solution that works through the floppy drive, since that's the only drive unaffected. Thank you very much for your advice and support in advance.

      To create a Windows startup disk, insert a floppy disk into the drive of a similarly configured, working Windows XP system, launch My Computer, right-click the floppy disk icon, and select the Format command from the context menu. When you see the Format dialog box, leave all the default settings as they are and click the Start button. Once the format operation is complete, close the Format dialog box to return to My Computer, double-click the drive C icon to access the root directory, and copy the following three files to the floppy disk: Boot.ini, NTLDR, Ntdetect.com
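    Before trying that: the AMIBOOT.ROM hot-key recovery procedure is specific to AMI BIOS cores, and the GX280 ships a Dell BIOS, which would explain both the "Non-System Disk" error (the floppy was formatted but never made bootable) and why [Ctrl]+[Home] does nothing. Since you can still enter Setup and run diagnostics, the BIOS core is evidently alive, so a normal DOS-based reflash should be possible. A sketch; the flash utility file name below is illustrative, so use whatever Dell's support site actually provides for the GX280:

      rem On the working XP machine, tick "Create an MS-DOS startup disk"
      rem in the Format dialog so the floppy actually boots, then:
      copy GX280A08.EXE A:\
      rem Boot the GX280 from the floppy and run the utility at the prompt:
      A:\> GX280A08.EXE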

    Read the article

  • Router 2wire, Slackware desktop in DMZ mode, iptables policy against ping, but still pingable

    - by user135501
    I'm in DMZ mode, so I'm doing my own firewalling. Everything tests stealthy and OK, but Shields Up reports that pings are answered. Yesterday I couldn't make a connection to game servers work because ping blocking was enabled on the router. I disabled it there, yet my machine remains pingable even with my own firewall in place.

    What exactly is the relationship between my machine and the router in DMZ mode (DMZ applies to my machine; there is a bunch of others behind the router firewall)? Why does the router still affect whether I'm pingable, and why, with the router's ping block disabled, do my iptables rules for this scenario not work? Please ignore the commented rules; I uncomment them as needed. These two should do the job, right?

      iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
      echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

    Here are my iptables:

      #!/bin/sh
      # Begin /bin/firewall-start

      # Insert connection-tracking modules (not needed if built into the kernel).
      #modprobe ip_tables
      #modprobe iptable_filter
      #modprobe ip_conntrack
      #modprobe ip_conntrack_ftp
      #modprobe ipt_state
      #modprobe ipt_LOG

      # allow local-only connections
      iptables -A INPUT -i lo -j ACCEPT

      # free output on any interface to any ip for any service
      # (equal to -P ACCEPT)
      iptables -A OUTPUT -j ACCEPT

      # permit answers on already established connections
      # and permit new connections related to established ones (eg active-ftp)
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

      # Gamespy & NWN
      #iptables -A INPUT -p tcp -m tcp -m multiport --ports 5120:5129 -j ACCEPT
      #iptables -A INPUT -p tcp -m tcp --dport 6667 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
      #iptables -A INPUT -p tcp -m tcp --dport 28910 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
      #iptables -A INPUT -p tcp -m tcp --dport 29900 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
      #iptables -A INPUT -p tcp -m tcp --dport 29901 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
      #iptables -A INPUT -p tcp -m tcp --dport 29920 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
      #iptables -A INPUT -p udp -m udp -m multiport --ports 5120:5129 -j ACCEPT
      #iptables -A INPUT -p udp -m udp --dport 6500 -j ACCEPT
      #iptables -A INPUT -p udp -m udp --dport 27900 -j ACCEPT
      #iptables -A INPUT -p udp -m udp --dport 27901 -j ACCEPT
      #iptables -A INPUT -p udp -m udp --dport 29910 -j ACCEPT

      # Log everything else: What's Windows' latest exploitable vulnerability?
      iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT"

      # set a sane policy: everything not accepted > /dev/null
      iptables -P INPUT DROP
      iptables -P FORWARD DROP
      iptables -P OUTPUT DROP
      iptables -A INPUT -p icmp --icmp-type echo-request -j DROP

      # be verbose on dynamic ip-addresses (not needed in case of static IP)
      echo 2 > /proc/sys/net/ipv4/ip_dynaddr

      # disable ExplicitCongestionNotification - too many routers are still ignorant
      echo 0 > /proc/sys/net/ipv4/tcp_ecn

      # ping death
      echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

      # If you are frequently accessing ftp-servers or enjoy chatting you might
      # notice certain delays because some implementations of these daemons have
      # the feature of querying an identd on your box for your username for
      # logging. Although there's really no harm in this, having an identd
      # running is not recommended because some implementations are known to be
      # vulnerable.
      # To avoid these delays you could reject the requests with a 'tcp-reset':
      #iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset
      #iptables -A OUTPUT -p tcp --sport 113 -m state --state RELATED -j ACCEPT

      # To log and drop invalid packets, mostly harmless packets that came in
      # after netfilter's timeout, sometimes scans:
      #iptables -I INPUT 1 -p tcp -m state --state INVALID -j LOG --log-prefix "FIREWALL:INVALID"
      #iptables -I INPUT 2 -p tcp -m state --state INVALID -j DROP

      # End /bin/firewall-start
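    A quick way to settle who is answering those pings: watch for the probes on the box itself while re-running the Shields Up test. With the script above, echo-requests hit the LOG rule and are then dropped (both by the explicit rule and by the DROP policy), so this host should not be replying at all. If tcpdump (or the FIREWALL:INPUT kernel log) shows nothing while the test still reports ping replies, the 2wire is answering ICMP on its WAN side itself despite DMZ mode, and no local iptables rule can change that. A sketch, assuming eth0 is the interface facing the router:

      tcpdump -ni eth0 icmp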

    Read the article

  • Lustre - issues with simple setup

    - by ethrbunny
    Issue: I'm trying to assess the (possible) use of Lustre for our group. To this end I've been trying to create a simple system to explore the nuances. I can't seem to get past the 'llmount.sh' test with any degree of success.

    What I've done: each system (throwaway PCs with a 70 GB HD and 2 GB RAM) is installed with CentOS 6.2. I then update everything, install the Lustre kernel from downloads.whamcloud.com, and add the various (appropriate) lustre and e2fs RPM files. Systems are rebooted and tested with 'llmount.sh' (and then cleaned up with 'llmountcleanup.sh'). All is well to this point.

    First I create an MDS/MDT system via:

      /usr/sbin/mkfs.lustre --mgs --mdt --fsname=lustre --device-size=200000 --param sys.timeout=20 \
        --mountfsoptions=errors=remount-ro,user_xattr,acl --param lov.stripesize=1048576 \
        --param lov.stripecount=0 --param mdt.identity_upcall=/usr/sbin/l_getidentity \
        --backfstype ldiskfs --reformat /tmp/lustre-mdt1

    and then:

      mkdir -p /mnt/mds1
      mount -t lustre -o loop,user_xattr,acl /tmp/lustre-mdt1 /mnt/mds1

    Next I take 3 systems and create a 2 GB loop mount on each via:

      /usr/sbin/mkfs.lustre --ost --fsname=lustre --device-size=200000 --param sys.timeout=20 \
        --mgsnode=lustre_MDS0@tcp --backfstype ldiskfs --reformat /tmp/lustre-ost1
      mkdir -p /mnt/ost1
      mount -t lustre -o loop /tmp/lustre-ost1 /mnt/ost1

    The logs on the MDT box show the OSS boxes connecting up. All appears OK.

    Last I create a client and attach to the MDT box:

      mkdir -p /mnt/lustre
      mount -t lustre -o user_xattr,acl,flock luster_MDS0@tcp:/lustre /mnt/lustre

    Again, the log on the MDT box shows the client connection. It appears to be successful.

    Here's where the issues (appear to) start. If I do a 'df -h' on the client, it hangs after showing the system drives. If I attempt to create files (via 'dd') on the Lustre mount, the session hangs and the job can't be killed. Rebooting the client is the only solution. If I do an 'lctl dl' from the client, it shows that only 2 of the 3 OST boxes are found and 'UP':

      [root@lfsclient0 etc]# lctl dl
      0 UP mgc MGC10.127.24.42@tcp 282d249f-fcb2-b90f-8c4e-2f1415485410 5
      1 UP lov lustre-clilov-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4
      2 UP lmv lustre-clilmv-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4
      3 UP mdc lustre-MDT0000-mdc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5
      4 UP osc lustre-OST0000-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5
      5 UP osc lustre-OST0003-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5

    Doing an 'lfs df' from the client shows:

      [root@lfsclient0 etc]# lfs df
      UUID                  1K-blocks  Used   Available  Use%  Mounted on
      lustre-MDT0000_UUID   149944     16900  123044     12%   /mnt/lustre[MDT:0]
      OST0000 : inactive device
      OST0001 : Resource temporarily unavailable
      OST0002 : Resource temporarily unavailable
      lustre-OST0003_UUID   187464     24764  152636     14%   /mnt/lustre[OST:3]
      filesystem summary:   187464     24764  152636     14%   /mnt/lustre

    Given that each OSS box has a 2 GB (loop) mount, I would expect to see this reflected in the available size. There are no errors on the MDS/MDT box to indicate that multiple OSS/OST boxes have been lost.

    EDIT: each system has all other systems defined in /etc/hosts and entries in iptables to provide access.

    So I'm clearly making several mistakes. Any pointers as to where to start correcting them?
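    A sketch of how to narrow down the two unreachable OSTs; "Resource temporarily unavailable" in lfs df, with no errors on the MDS, usually means those OSTs registered with the MGS but the client cannot reach the OSS nodes directly, which would also explain df and dd hanging. The NID placeholder is hypothetical; substitute each OSS address in turn:

      lctl ping <oss_nid>@tcp       # from the client, for each OSS node
      lctl get_param osc.*.import   # per-OST import state as the client sees it
      # on each OSS, confirm the standard Lustre port is open to the client:
      iptables -L -n | grep 988

    Also note that the client mount command uses luster_MDS0 while the OSTs were formatted with --mgsnode=lustre_MDS0; if that is not just a transcription slip, it is worth fixing first.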

    Read the article

< Previous Page | 529 530 531 532 533 534 535 536 537 538 539 540  | Next Page >