Search Results

Search found 1693 results on 68 pages for 'hostname'.

Page 30/68 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • Enabled Apache mod_status but server-status was not found on SUSE Enterprise 11 SP1

    - by Charles Yeung
    In /etc/apache2/httpd.conf I have removed the Include line for mod_status and added the following at the end of the file:

        LoadModule status_module /usr/lib/apache2/mod_status.so
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            AllowOverride None
            Order Deny,Allow
            Deny from all
            Allow from all
        </Location>

    Then I restarted Apache and went to http://HOSTNAME/server-status, but I get "page not found". Does anyone know why? Is there any further step needed to see the Apache status? Thanks
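
    A hedged sanity check, assuming the stock SLES 11 Apache layout: confirm the module really loads and probe the handler from the server itself. (On SLES, modules are normally enabled with a2enmod status, which adds them to APACHE_MODULES in /etc/sysconfig/apache2, rather than with a hand-written LoadModule line.)

        # verify the status module is actually loaded
        apache2ctl -M 2>/dev/null | grep -i status

        # request the handler locally, bypassing any name-based vhost surprises
        curl -s -o /dev/null -w "%{http_code}\n" http://localhost/server-status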

    Read the article

  • Nginx Proxying to Multiple IP Addresses for CMS' Website Preview

    - by Matthew Borgman
    First-time poster, so bear with me. I'm relatively new to Nginx, but have managed to figure out what I've needed... until now. Nginx v1.0.15 is proxying to PHP-FPM v5.3.10, which is listening at http://127.0.0.1:9000. [Knock on wood] everything has been running smoothly in terms of hosting our CMS and many websites. Now, we've developed our CMS and configured Nginx such that each supported website has a preview URL (e.g. http://[WebsiteID].ourcms.com/) where the site can be, you guessed it, previewed in those situations where DNS doesn't yet resolve to our server, etc. Specifically, we use Nginx's Map module (http://wiki.nginx.org/HttpMapModule) and a regular expression in the server_name of the CMS' server{ } block to 1) look up a website's primary domain name from its preview URL and then 2) forward the request to the "matched" primary domain. The corresponding Nginx configuration:

        map $host $h {
            123.ourcms.com www.example1.com;
            456.ourcms.com www.example2.com;
            789.ourcms.com www.example3.com;
        }

    and

        server {
            listen [OurCMSIPAddress]:80;
            listen [OurCMSIPAddress]:443 ssl;
            root /var/www/ourcms.com;
            server_name ~^(.*)\.ourcms\.com$;
            ssl_certificate /etc/nginx/conf.d/ourcms.com.chained.crt;
            ssl_certificate_key /etc/nginx/conf.d/ourcms.com.key;
            location / {
                proxy_pass http://127.0.0.1/;
                proxy_set_header Host $h;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    (Note: I do realize that the regex in the server_name should be "tighter" for security reasons and match only the format of the website ID (i.e. a UUID in our case).) This configuration works for 99% of our sites... except those that have a dedicated IP address for an installed SSL certificate. A "502 Bad Gateway" is returned for these and I'm unsure as to why. This is how I think the current configuration works for any request that matches the regex (e.g. http://123.ourcms.com/): Nginx looks up the website's primary domain from the mapping and, because of the proxy_pass http://127.0.0.1/ directive, passes the request back to Nginx itself; since the proxied request carries a Host header corresponding to the website's primary domain name (via the proxy_set_header Host $h directive), Nginx then handles the request as if it were a direct request for that hostname. Please correct me if I'm wrong in this understanding. Should I be proxying to those websites' dedicated IP addresses? I tried this, but it didn't seem to work. Is there a setting in the Proxy module that I'm missing? Thanks for the help. MB
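
    One hedged way to see where the 502 comes from (the hostname below is the illustrative one from the map above): replay the proxied request by hand against the local listener with the mapped Host header. If the dedicated-IP/SSL site's server block only listens on its dedicated address, there is nothing on 127.0.0.1 to answer it, and this request fails exactly like the proxy does.

        # pretend to be the proxy: talk to the local nginx, present the mapped primary domain
        curl -v -H 'Host: www.example1.com' http://127.0.0.1/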

    Read the article

  • Port forwarding using a BT Home Hub 2.0 (Supplied to new BT Infinity Customers in the UK)

    - by Jasarien
    I don't usually have trouble with port forwarding, I've been able to do it successfully on a number of different routers, including Linksys, Belkin, Netgear and Apple (Time Capsule / Airport Extreme). So I'm quite confused here. I had been using my Apple Time Capsule as my router for a few years now, with several port mappings all working fine. But it died recently, so I've had to resort to using the BT Home Hub 2.0 that was supplied with my BT Infinity broadband subscription. The forwarding interface for the Home Hub is simplified for the most part, allowing you to select an application or game and assign it to a particular computer on the network which you choose from a list that the Home Hub has 'discovered'. My Mac Pro has a manually assigned static IP 192.168.1.4 and my router is static at 192.168.1. I have chosen SSH from the list of applications and assigned it to my Mac Pro (the only computer in the list currently). The Home Hub also has a feature to keep a DNS service updated, and I have set it to keep my external IP address updated on my hostname. This is how I had it setup in the past with other routers and not had trouble before. I am able to ping my hostname (and external IP) from outside the network and get a response. But when I try to connect using SSH, the connection times out. The Home Hub also has "Firewall settings". The currently selected setting is: Default: Allow all outgoing connections and block all incoming traffic. Games and application sharing is allowed. But I've tried changing it to: Disabled: All traffic is allowed to pass through your BT Home Hub to your devices. Note: you’ll still need to use the games and application sharing feature to make sure that certain applications work properly. And the connection still times out... So frustrating. The OS X firewall on my Mac is disabled, so I don't think that's in the way. I have tried setting the port forwarding manually, instead of relying on the preset "SSH" option (incase it's not using the port I expect). So I set up my own "application" (as the Home Hub calls it) and forwarded external port 22 TCP to internal port 22 TCP to 192.168.1.4 - but that just gives the same result - unable to connect. Next, with the router's firewall disabled and OS X's firewall disabled, I ran the Shields Up test (https://www.grc.com/x/ne.dll?bh0bkyd2) and the result was that all my service ports (0 - 1055) are in 'Stealth' mode. I.e. nothing even exists at my IP as far as any outsider is concerned... Strange. The only thing that seems to work is setting my Mac Pro as the DMZ - which I don't want to do for obvious reasons. Any help with this would be extremely appreciated, thanks.
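
    A couple of hedged elimination steps (the external hostname below is illustrative): confirm the Mac is actually listening on 22 and reachable on the LAN, then test the forwarded port from outside the network.

        # on the Mac Pro: is anything listening on 22 at all?
        sudo lsof -nP -iTCP:22 -sTCP:LISTEN

        # from another machine on the LAN: can 192.168.1.4:22 be reached directly?
        nc -vz 192.168.1.4 22

        # from outside the network (e.g. a phone hotspot):
        nc -vz my-hostname.example.com 22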

    Read the article

  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. Therefore I'm deploying my virtual machines via lxc-create, using the default Ubuntu template. The DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual effort on the deployment minimal. Now, the default Ubuntu template assigns IP addresses via DHCP. Therefore, I need a local DHCP server to assign IP addresses to the nodes, so I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option, the client needs to get the right hostname and IP address from the start. Question What DHCP server do I use and how do I get it to assign the IP address based only on the host-name DHCP option by performing a DNS lookup on that very host name? What I've tried I tried to make it work using the ISC DHCP server, however, the manual clearly states: Please be aware that only the dhcp-client-identifier option and the hardware address can be used to match a host declaration, or the host-identifier option parameter for DHCPv6 servers. For example, it is not possible to match a host declaration to a host-name option. This is because the host-name option cannot be guaranteed to be unique for any given client, whereas both the hardware address and dhcp-client-identifier option are at least theoretically guaranteed to be unique to a given client. I also tried to create a class that matches the hostname like this: class "my-client-name" { match if option host-name = "my-client-name"; fixed-address my-client-name.my-domain.com; } Unfortunately the fixed-address option is not allowed in class statements. I can replace it with a 1-size pool, which works as expected: subnet 10.103.0.0 netmask 255.255.0.0 { option routers 10.103.1.1; class "my-client-name" { match if option host-name = "my-client-name"; } pool { allow members of "my-client-name"; range 10.103.1.2 10.103.1.2; } } However, this would require me to administer the IP addresses in two places (Amazon Route53 and the DHCP server), which I would prefer not to do. About security Since this is only used in the bootstrapping phase on an internal network and is then replaced by a static network configuration by Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.
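
    Since fixed-address isn't accepted inside a class, one hedged workaround that keeps Route 53 as the single source of truth is to generate those one-address class/pool pairs from DNS with a small script (hostnames, domain and output path are illustrative; the generated file would be included inside the existing subnet block):

        #!/bin/sh
        # regenerate one-address pools from DNS so the addresses live only in Route 53
        for h in node1 node2 node3; do
            ip=$(dig +short "$h.my-domain.com" A)
            [ -n "$ip" ] || continue
            printf 'class "%s" { match if option host-name = "%s"; }\n' "$h" "$h"
            printf 'pool { allow members of "%s"; range %s %s; }\n' "$h" "$ip" "$ip"
        done > /etc/dhcp/hosts-from-dns.conf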

    Read the article

  • Kickstart: ifcfg-eth0 file generated by kickstart when installing from network but by initrd when installing from USB

    - by dooffas
    When I install Fedora 19 with a kickstart file over the network, the ifcfg-eth0 file is generated by the kickstart:

        # Generated by parse-kickstart

    However, if I use the same kickstart file and install from a USB stick, the ifcfg file is generated by initrd:

        # Generated by dracut initrd

    The line in the kickstart file that sets the network settings is as follows:

        network --device=eth0 --bootproto=dhcp --hostname=SOMEHOSTNAME

    Is there a way to keep the network device settings set in the kickstart file when not installing via the network?
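
    A hedged workaround is to overwrite the file in %post, so the result is the same no matter which installer path generated it (the values below mirror the kickstart network line and are illustrative):

        %post
        # force the interface config we want, whether parse-kickstart or dracut generated it
        printf 'DEVICE=eth0\nBOOTPROTO=dhcp\nONBOOT=yes\n' \
            > /etc/sysconfig/network-scripts/ifcfg-eth0
        %end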

    Read the article

  • Can you please explain substitution in RewriteRule

    - by Scott
    I have the following statements in an .htaccess file:

        RewriteCond %{HTTP_HOST} ^myOldDomain\.com$ [NC]
        RewriteRule ^(.*)$ https://myNewDomaink.com/$1 [R=301,L]

    It works fine. I basically found some sample code and modified it for my specific purpose. What I don't quite understand is: why does $1 refer to the portion of the supplied URL after the hostname, and where is this documented? There is no backreference in the RewriteCond.
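
    For what it's worth, $1 is a backreference to the first capturing group in the RewriteRule's own pattern, not to the RewriteCond; in a .htaccess (per-directory) context the pattern is matched against the URL path with the directory prefix stripped, which is why (.*) ends up holding everything after the hostname. Backreferences to RewriteCond groups use %1 instead, and the $N/%N behaviour is described in the mod_rewrite documentation under "Back-references". A worked trace using the domains from the rule above:

        # request:                http://myOldDomain.com/blog/post.html
        # pattern ^(.*)$ is applied to "blog/post.html"  ->  $1 = "blog/post.html"
        # substitution:           https://myNewDomaink.com/blog/post.html   (301 redirect)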

    Read the article

  • Is it possible to detect Android and iOS devices based on DHCP requests?

    - by abbot
    I want to configure a DHCP server so that it puts "regular" smartphones and tablets into a separate subnet. Is it possible to detect whether a DHCP request comes from an Android or iOS device based on the request itself? For example, a Sony Android phone that was nearby set the following DHCP options in its request, which are potentially useful for identification:

        bootp.option.vendor_class_id == "dhcpcd-5.2.10:Linux-2.6.32.9-perf:armv7l:mogami"
        bootp.option.hostname == "android-c7d342d011ea6419"

    Are there any known common patterns in DHCP request options better than a MAC prefix?
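
    A hedged way to survey what identifying options your real clients actually send is to capture DHCP traffic on the server for a while (interface name is illustrative); fingerprinting approaches generally key on the vendor class, the hostname, and the ordering of the parameter request list (option 55).

        # capture and decode DHCP requests, including vendor class and hostname options
        tcpdump -i eth0 -n -vvv -s 0 port 67 or port 68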

    Read the article

  • Setting Manager path in Tomcat6

    - by Tom
    Hi gurus, I want to change the context path of the Manager app in Tomcat 6 (http://tomcat.apache.org/tomcat-6.0-doc/manager-howto.html). I changed $CATALINA_BASE/conf/[enginename]/[hostname]/manager.xml to:

        <Context path="/adm" docBase="${catalina.home}/webapps/manager"
                 privileged="true" antiResourceLocking="false" antiJARLocking="false">

    Note the path="/adm", yet the Manager app is still served at /manager. How can I change the Manager path in Tomcat 6? Thanks a lot. Tom
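
    For context descriptors placed under conf/[enginename]/[hostname]/, Tomcat derives the context path from the descriptor's filename and ignores the path attribute, so a hedged fix is simply to rename the file (paths below assume the default Catalina/localhost layout):

        cd $CATALINA_BASE/conf/Catalina/localhost
        mv manager.xml adm.xml    # the app should then answer at /adm instead of /manager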

    Read the article

  • How do I remotely deploy a software package on a Linux system?

    - by David MZ
    I have a software package I wrote in Mono and I want to be able to deploy it to an Ubuntu server as part of my move to a continuous integration and deployment workflow. I was wondering if there is a tool to help me do that. Some of the tasks I will need are:

        - Register a new domain/hostname with Linux
        - Install packages using apt-get
        - Copy files
        - Run some bash scripts

    What are the solutions to streamline and automate this process? I understand there is more than one answer to this; I would love to hear the pros and cons of each method. Thank you
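
    Tools such as Ansible, Fabric or Capistrano are the usual answers here; as a hedged illustration of the underlying steps, a hand-rolled push over ssh might look like the sketch below (host, package and paths are all illustrative, and DNS/hostname registration is normally done through your DNS provider rather than on the box):

        #!/bin/sh
        # minimal push-style deploy over ssh
        HOST=deploy@app01.example.com
        ssh "$HOST" 'sudo apt-get update && sudo apt-get install -y mono-runtime'
        scp -r ./publish "$HOST":/opt/myapp/
        ssh "$HOST" 'sudo bash /opt/myapp/post-deploy.sh'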

    Read the article

  • Invoke-Command with IP address as target server does not work at all

    - by Praveen
    Please see the following command; even with Trusted Hosts enabled, it does not work:

        Invoke-Command -ComputerName <IP address> -port 5985 -Credential (New-Object System.Management.Automation.PSCredential ('Domain\User', (ConvertTo-SecureString 'passwd' -AsPlainText -Force))) -Authentication CredSSP -ScriptBlock {Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010;Get-Mailbox}

    This works fine when -ComputerName is a hostname; with an IP address it does not work at all.

    Read the article

  • Ping Unknown Host on CentOS at EC2

    - by organicveggie
    Weird problem. We have a collection of servers running CentOS 5 on EC2. The setup includes two DNS servers and two LDAP servers. DNS has a CNAME pointing at the primary LDAP server. One machine (and only one machine) is giving me problems. I can ssh into the server using LDAP authentication. But once I'm on the machine, ping won't resolve the LDAP host even though DNS seems to work fine. Here's ping: $ ping ldap.mycompany.ec2 ping: unknown host ldap.mycompany.ec2 Here's the output of dig: $ dig ldap.mycompany.ec2 ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> ldap.studyblue.ec2 ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2893 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ldap.mycompany.ec2. IN A ;; ANSWER SECTION: ldap.mycompany.ec2. 3600 IN CNAME ec2-hostname.compute-1.amazonaws.com. ec2-hostname.compute-1.amazonaws.com. 55 IN A aaa.bbb.ccc.ddd ;; Query time: 12 msec ;; SERVER: 10.32.159.xxx#53(10.32.159.xxx) ;; WHEN: Tue May 31 11:16:30 2011 ;; MSG SIZE rcvd: 107 And here is resolv.conf: $ cat /etc/resolv.conf search mycompany.ec2 nameserver 10.32.159.xxx nameserver 10.244.19.yyy And here is my hosts file: $ cat /etc/hosts 10.122.15.zzz bamboo4 bamboo4.mycompany.ec2 127.0.0.1 localhost localhost.localdomain And here's nsswitch.conf $ cat /etc/nsswitch.conf passwd: files ldap shadow: files ldap group: files ldap sudoers: ldap files hosts: files dns bootparams: nisplus [NOTFOUND=return] files ethers: files netmasks: files networks: files protocols: files rpc: files services: files netgroup: files ldap publickey: nisplus automount: files ldap aliases: files nisplus So DNS works the way I would expect. And I can ping the ldap server by ip address. And I can even access the box with SSH using LDAP authentication. Any suggestions?
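
    One hedged next step: dig talks directly to the name servers in resolv.conf, while ping resolves through the NSS "hosts" line (files dns), so testing that path narrows things down. If nscd happens to be running on this box, restarting it (service nscd restart) also rules out a stale cache.

        # resolve through nsswitch exactly as ping/ssh would
        getent hosts ldap.mycompany.ec2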

    Read the article

  • Exchange 2010 Internal Auto Discover Migrate away from current .local DNS name

    - by Bryan
    We have an Exchange 2010 Server, running within our Active Directory domain, with an internal hostname of server.example.local. The server is configured for Exchange anywhere, but currently has a self signed certificate with a name of server.example.local installed. Internally, clients connect and work fine, but externally, we are having certificate errors as you would expect. I'm about to purchase a UCC SSL Certificate to install on the server with all the relevant SANs on the certificate to correct this, but due to obvious problem obtaining a trusted cert with .local as a subject alternative name, I'm looking to configure clients on the internal network so that they don't use any reference to the .local hostname. I've configured our external DNS name for the server as exchange.example.com, and have created an CNAME for autodiscover.example.com which also (correctly) points to exchange.example.com. I've also configured internal DNS records for these two hostnames which point to the internal interface of the same server. I don't anticipate any problems here. I'm now trying to reconfigure Auto Discover internally, so that Outlook attempts to connect to exchange.example.com. I've followed the steps in KB940726 to prepare for this, and this appeared to work fine. No errors were generated and I was able to verify the CAS name in AD using ADSI edit. I've just tried testing this with a newly created test user account complete with a new Exchange mailbox, and Outlook 2007 connects fine on the internal network, but looking deeper in the Exchange profile, Outlook is still resolving the server name as server.example.local. Could it be the self signed cert, that is causing Outlook to display the server name as server.example.local, or is there still something wrong with my internal autodiscover configuration? Edit I've proven it isn't the certificate that is responsible for outlook returning server.example.local, by installing another self certified certificate with a name of test.example.com. When creating a new outlook profile, I get the mismatch error I'm expceting, but after accepting the cert, and finishing the config of the Outlook profile, again it still shows server.example.local as the server name. This means that if I were to purchase the UCC cert now, that external client would work fine, but internal clients would show a certificate name mismatch. Any ideas where to start diagnosing this?

    Read the article

  • symlink and sudo executable

    - by CodeMedic
    If I have the sudoers entries below:

        usera ALL=(userb) NOPASSWD: /home/userc/bin/executable-file
        usera ALL=(userb) NOPASSWD: /home/userc/bin/link-to-another-executable-file

    then when I log on as usera, the first of the following commands works:

        sudo -u userb /home/userc/bin/executable-file

    but NOT the second one:

        sudo -u userb /home/userc/bin/link-to-another-executable-file
        Sorry, user usera is not allowed to execute '/home/userc/bin/link-to-another-executable-file' as userb on hostname.

    Any ideas?
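
    Two hedged checks before changing anything: ask sudo what it thinks usera may run, and compare the link target against what sudoers lists (the path sudo matches against may not be the symlink path you typed):

        # list the exact commands sudoers grants usera (run as root)
        sudo -l -U usera

        # show what the symlink resolves to; sudoers may need the resolved path instead
        readlink -f /home/userc/bin/link-to-another-executable-file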

    Read the article

  • dnsmasq local network works for some but hostnames are not resolving for others

    - by prggmr
    I have set up a local network and it seems that some of us can use it properly while others can't. The problem seems to be that the local hostnames I set up don't get resolved for everyone. An overview of how the network is set up: I am running an Ubuntu 10.01 server using dnsmasq; this server is set up to act as our primary DNS server, configured via our router. dnsmasq is configured with the options:

        domain-needed
        bogus-priv

    I use the /etc/hosts file to determine the hostnames:

        192.168.1.10 ra.xsi
        192.168.1.10 test.xsi

    From my machine, if I dig the hostnames they resolve properly:

        ; <<>> DiG 9.4.3-P1 <<>> ra.xsi
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61671
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;ra.xsi. IN A
        ;; ANSWER SECTION:
        ra.xsi. 0 IN A 192.168.1.10
        ;; Query time: 9 msec
        ;; SERVER: 192.168.1.10#53(192.168.1.10)
        ;; WHEN: Wed Nov 9 12:28:34 2011
        ;; MSG SIZE rcvd: 40

    Ping also works:

        PING ra.xsi (192.168.1.10): 56 data bytes
        64 bytes from 192.168.1.10: icmp_seq=0 ttl=64 time=0.834 ms
        64 bytes from 192.168.1.10: icmp_seq=1 ttl=64 time=0.699 ms
        ^C
        --- ra.xsi ping statistics ---
        2 packets transmitted, 2 packets received, 0% packet loss
        round-trip min/avg/max/stddev = 0.699/0.766/0.834/0.068 ms

    And login via SSH works using the hostname. For those that cannot connect using hostnames, if I dig from their machine it appears the name is being resolved, but they cannot ping, SSH, or reach the hostname over HTTP:

        ; <<>> DiG 9.6.0-APPLE-P2 <<>> ra
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12554
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;ra.xsi. IN A
        ;; ANSWER SECTION:
        ra.xsi. 0 IN A 192.168.1.10
        ;; Query time: 8 msec
        ;; SERVER: 192.168.1.10#53(192.168.1.10)
        ;; WHEN: Wed Nov 9 12:05:50 2011
        ;; MSG SIZE rcvd: 36

    I've been banging my head against this and just can't seem to figure it out.
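
    The failing dig banner (DiG 9.6.0-APPLE-P2) suggests an OS X client, and on OS X dig bypasses the system resolver that ping, SSH and the browser actually use, so a hedged check on an affected machine is to query through that resolver path instead:

        # query through the OS X resolver framework rather than going straight to DNS
        dscacheutil -q host -a name ra.xsi

        # show which resolvers and search domains the system is actually using
        scutil --dns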

    Read the article

  • Confused with DKIM, SPF and Exim Configs

    - by 0pt1m1z3
    I've now spent 2 hours trying to figure out this issue and I am about to give up and go to bed. I've been having issues with Gmail rejecting emails from my VPS server because of false spam alerts (probably caused by lfd sending too many emails). So I changed my Exim config to send emails from a different IP (my VPS comes with 3) and that fixed the issue. I also enabled DKIM and SPF on my domains for added measure. But now, all my emails appear as ("From: Sender Name via server.domain1.com") where server.domain1.com is my VPS hostname. I previously had the same issue in Outlook and turning off "Set SMTP Sender: headers" solved that problem. But I believe adding the DKIM and SPF now makes Gmail add "via server.domain1.com" to my messages. How do I fix this? This is a typical header for a message (as it appears at gmail): Delivered-To: [email protected] Received: by 10.60.44.163 with SMTP id f3csp248622oem; Thu, 29 Mar 2012 21:23:18 -0700 (PDT) Received: by 10.50.106.200 with SMTP id gw8mr452788igb.10.1333081398523; Thu, 29 Mar 2012 21:23:18 -0700 (PDT) Return-Path: <[email protected]> Received: from domain2.com ([X.X.X.X]) by mx.google.com with ESMTPS id y1si810998igb.3.2012.03.29.21.23.18 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 29 Mar 2012 21:23:18 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates X.X.X.X as permitted sender) client-ip=X.X.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates X.X.X.X as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=server.domain1.com; s=default; h=Date:Message-Id:From:Content-type:MIME-Version:Subject:To; bh=wF8bBRgh01EYg4t5DAeVPv1Ps906UVIeRnQCb/HvSYw=; b=k/Pg7lnrO+Ud/z1mOTv+O/3DiJzzQgyBhfIizIaFHM8tF/eNJt5P2k+9yQB224sxYstZIWwVRBJmiqvcM1QhARv1HWqWma0crppZ3JOn+LRHANan634OBi+58SIRA+gu; Received: (Exim 4.77) id 1SDTVE-0005HA-9Y for [email protected]; Fri, 30 Mar 2012 00:31:56 -0400 To: [email protected] Subject: Password Reset Request MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 From: Sender Name <[email protected]> Message-Id: <[email protected]> Date: Fri, 30 Mar 2012 00:31:56 -0400 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - server.domain1.com X-AntiAbuse: Original Domain - domain2.com X-AntiAbuse: Originator/Caller UID/GID - [507 504] / [47 12] X-AntiAbuse: Sender Address Domain - server.domain1.com
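
    Gmail shows "via ..." when the domain it authenticated (here the d=server.domain1.com in the DKIM-Signature) doesn't match the From: domain. A hedged adjustment, assuming Exim's built-in DKIM support, is to sign with the sender's domain rather than the server hostname; the selector and key path below are illustrative, and a matching public key would need to be published for each sending domain.

        # in the remote_smtp transport of the Exim configuration:
        #   dkim_domain      = ${lc:${domain:$h_from:}}
        #   dkim_selector    = default
        #   dkim_private_key = /etc/exim/dkim/${dkim_domain}.pem
        # then publish default._domainkey.domain2.com in DNS with the matching public key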

    Read the article

  • Why won't IE let users login to a website unless in In Private mode?

    - by Richard Fawcett
    I'm not entirely sure this belongs on SuperUser.com. I also considered ServerFault.com and StackOverflow.com, but on balance, I think it should belong here? We host a website which has the same code responding to multiple domain names. On 28th December (without any changes deployed to the website) a percentage of users suddenly could not login, and the blank login page was just rendered again even when the correct credentials were entered. The issue is still ongoing. After remote controlling an affected user's PC, we've found the following: The issue affects Internet Explorer 9. The user can login from the same machine on Chrome. The user can login from an In Private browser session using IE9. The user can login if the website is added to the Trusted Sites security zone. The user can NOT login from an IE session in safe mode (started with iexplore -extoff). Only one hostname that the website responds to prevents login, the same user account on the other hostname works fine (note that this is identical code and database running server side), even though that site is not in trusted sites zone. Series of HTTP requests in the failure case: GET request to protected page, returns a 302 FOUND response to login page. GET request to login page. POST to login page, containing credentials, returns redirect to protected page. GET request to protected page... for some reason auth fails and browser is redirected to login page, as in step 1. Other information: Operating system is Windows 7 Ultimate Edition. AV system is AVG Internet Security 2012. I can think of lots of things that could be going wrong, but in every case, one of the findings above is incompatible with the theory. Any ideas what is causing login to fail? Update 06-Jan-2012 Enhanced logging has shown that the .ASPXAUTH cookie is being set in step 3. Its expiry date is 28 days in the future, its path is /, the domain is mysite.com, and its value is an encrypted forms ticket, as expected. However, the cookie is not being received by the web server during step 4. Other cookies are being presented to the server during step 4, it's just this one that is missing. I've seen that cookies are usually set with a domain starting with a period, but mine isn't. Should it be .mysite.com instead of mysite.com? However, if this was wrong, it would presumably affect all users?

    Read the article

  • In your ssh config is it possible to have one host entry for multiple machines on the same domain

    - by Joshua Olson
    I'd like to be able to do something like:

        Host *
            HostName *.mydomain.com
            ...

    so I can type something like:

        ssh test
        ssh ci
        ssh dev

    instead of having to type:

        ssh test.mydomain.com
        ssh ci.mydomain.com
        ssh dev.mydomain.com

    Right now I have separate entries for each one, but we have dozens of machines, so I'd rather have a default than have to duplicate everything so many times.
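
    A hedged sketch of one way to get this with a single entry: the %h token in ssh_config expands to the name given on the command line, so the entry below makes ssh dev connect to dev.mydomain.com (newer OpenSSH releases can achieve the same with CanonicalizeHostname/CanonicalDomains):

        Host test ci dev
            HostName %h.mydomain.com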

    Read the article

  • Why do I sometimes get 'sh: $'\302\211 ... ': command not found' in xterm/sh?

    - by amn
    Sometimes when I simply type a valid command like 'find ...', or anything really, I get back the following, which is completely unexpected and confusing (... is command name I type): sh: $'\302\211...': command not found There is some corruption going on I think. I don't use color in my prompt, I am using the Bash shell in POSIX mode as sh (chsh to /bin/sh and so on - $SHELL is sh). What is going on and why does this keep happening? Anything I can debug? I think this is more of an xterm issue than sh, or at least a combination of the two. Files, for context: My /etc/profile, as distributed with Arch Linux x86-64: # /etc/profile #Set our umask umask 022 # Set our default path PATH="/usr/local/sbin:/usr/local/bin:/usr/bin" export PATH # Load profiles from /etc/profile.d if test -d /etc/profile.d/; then for profile in /etc/profile.d/*.sh; do test -r "$profile" && . "$profile" done unset profile fi # Source global bash config if test "$PS1" && test "$BASH" && test -r /etc/bash.bashrc; then . /etc/bash.bashrc fi # Termcap is outdated, old, and crusty, kill it. unset TERMCAP # Man is much better than us at figuring this out unset MANPATH My /etc/shrc, which I created as a way to have sh parse some file on startup, when non-login shell. This is achieved using ENV variable set in /etc/environment with the line ENV=/etc/shrc: PS1='\u@\H \w \$ ' alias ls='ls -F --color' alias grep='grep -i --color' [ -f ~/.shrc ] && . ~/.shrc My ~/.profile, I am launching X when logging in through first virtual tty: [[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && exec xinit -- -dpi 111 My ~/.xinitc, as you can see I am using the system as a Virtual Box guest: xrdb -merge ~/.Xresources VBoxClient-all awesome & exec xterm And finally, my ~/.Xresources, no fancy stuff here I guess: *faceName: Inconsolata *faceSize: 10 xterm*VT100*translations: #override <Btn1Up>: select-end(PRIMARY, CLIPBOARD, CUT_BUFFER0) xterm*colorBDMode: true xterm*colorBD: #ff8000 xterm*cursorColor: S_red Since ~/.profile references among other things /etc/bash.bashrc, here is its content: # # /etc/bash.bashrc # # If not running interactively, don't do anything [[ $- != *i* ]] && return PS1='[\u@\h \W]\$ ' PS2='> ' PS3='> ' PS4='+ ' case ${TERM} in xterm*|rxvt*|Eterm|aterm|kterm|gnome*) PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033]0;%s@%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"' ;; screen) PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"' ;; esac [ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion I have no idea what that case statement does, by the way, it does look a bit suspicious though, but then again, who am I to know.
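
    One hedged observation: the \302\211 in the message is the UTF-8 sequence 0xC2 0x89, i.e. the C1 control character U+0089, which suggests the terminal (a paste, a key binding, or an echoed escape sequence) is prepending an invisible byte to the command line. Capturing the raw bytes makes that visible:

        # type or paste the failing command after "read", then inspect the raw bytes;
        # od -c will show the stray byte as \302 \211 if it is really there
        read -r line; printf '%s' "$line" | od -c | head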

    Read the article

  • ssh client problem: Connection reset by peer

    - by yonix
    I'm having a really annoying problem on my Ubuntu laptop. I noticed it today, after upgrading to Ubuntu 11.04, although I'm not entirely sure this is the cause as I played with my ssh keys a few days ago. The problem is, whenever I try to ssh to ANY host I get the following error: Read from socket failed: Connection reset by peer running with -vvv gives the following output: OpenSSH_5.8p1 Debian-1ubuntu3, OpenSSL 0.9.8o 01 Jun 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to hostname [10.0.0.2] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /root/.ssh/id_rsa type -1 debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: identity file /root/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 1.99, remote software version OpenSSH_4.2 debug1: match: OpenSSH_4.2 pat OpenSSH_4* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-1ubuntu3 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "hostname" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: loaded 0 keys debug1: SSH2_MSG_KEXINIT sent Read from socket failed: Connection reset by peer My /etc/ssh/ssh_config: Host * SendEnv LANG LC_* HashKnownHosts yes GSSAPIAuthentication no GSSAPIDelegateCredentials no I can connect to my laptop from any other server via ssh, and I can also ssh localhost from my laptop successfully. I can connect to all these other server from other laptops, and I don't see anything in the logs of the other servers regarding my failed attempt. I tried to stop iptables, didn't help. I tried several tricks I could find online with my /etc/ssh/ssh_config, but I was unsuccessful in solving the problem... Any ideas? Edit: This is the log from one of the hosts I try to connect to: May 1 19:15:23 localhost sshd[2845]: debug1: Forked child 2847. 
May 1 19:15:23 localhost sshd[2845]: debug3: send_rexec_state: entering fd = 8 config len 577 May 1 19:15:23 localhost sshd[2845]: debug3: ssh_msg_send: type 0 May 1 19:15:23 localhost sshd[2845]: debug3: send_rexec_state: done May 1 19:15:23 localhost sshd[2847]: debug1: rexec start in 5 out 5 newsock 5 pipe 7 sock 8 May 1 19:15:23 localhost sshd[2847]: debug1: inetd sockets after dupping: 3, 3 May 1 19:15:23 localhost sshd[2847]: Connection from 10.0.0.7 port 55747 May 1 19:15:23 localhost sshd[2847]: debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1 Debian-1ubuntu3 May 1 19:15:23 localhost sshd[2847]: debug1: match: OpenSSH_5.8p1 Debian-1ubuntu3 pat OpenSSH* May 1 19:15:23 localhost sshd[2847]: debug1: Enabling compatibility mode for protocol 2.0 May 1 19:15:23 localhost sshd[2847]: debug1: Local version string SSH-2.0-OpenSSH_5.3 May 1 19:15:23 localhost sshd[2847]: debug2: fd 3 setting O_NONBLOCK May 1 19:15:23 localhost sshd[2847]: debug2: Network child is on pid 2848 May 1 19:15:23 localhost sshd[2847]: debug3: preauth child monitor started May 1 19:15:23 localhost sshd[2847]: debug3: mm_request_receive entering May 1 19:15:23 localhost sshd[2848]: debug3: privsep user:group 74:74 May 1 19:15:23 localhost sshd[2848]: debug1: permanently_set_uid: 74/74 May 1 19:15:23 localhost sshd[2848]: debug1: list_hostkey_types: ssh-rsa,ssh-dss May 1 19:15:23 localhost sshd[2848]: debug1: SSH2_MSG_KEXINIT sent May 1 19:15:23 localhost sshd[2848]: debug3: Wrote 784 bytes for a total of 805 May 1 19:15:23 localhost sshd[2848]: fatal: Read from socket failed: Connection reset by peer
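
    The reset comes straight after the client sends its key-exchange proposal, and the OpenSSH 5.8 client offers a noticeably larger proposal (ECDSA keys, newer KEX methods) than older clients did, so one hedged narrowing-down step is to retry with a trimmed offer; if this connects, something on the path or in the old server is likely choking on the bigger KEXINIT rather than on your keys or config:

        ssh -vvv -o KexAlgorithms=diffie-hellman-group14-sha1 \
            -o HostKeyAlgorithms=ssh-rsa hostname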

    Read the article

  • Redirect Root Directory to Subdirectory using mod_rewrite

    - by manyxcxi
    I am trying to redirect /folder to / using .htaccess, but all I am getting is the Apache HTTP Server Test Page. My root directory looks like this:

        /
          .htaccess
          /folder
          /folder2
          /folder3

    My .htaccess looks like this:

        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/folder/
        RewriteRule (.*) /folder/$1

    What am I doing wrong? I checked my httpd.conf (I'm running CentOS) and the mod_rewrite module is being loaded. As a side note, my server is not a www server; it's simply a virtual machine, so its hostname is centosvm.
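
    Two hedged checks, assuming the stock CentOS paths: seeing the test page usually means either that .htaccess is being ignored (AllowOverride None on the DocumentRoot) or that the default welcome page configuration is still answering /.

        # .htaccess is ignored unless the DocumentRoot's <Directory> allows overrides
        grep -n AllowOverride /etc/httpd/conf/httpd.conf

        # the "Apache HTTP Server Test Page" is served by this file when / has no index
        cat /etc/httpd/conf.d/welcome.conf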

    Read the article

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of googles and trying days i cant get the load balancer/failover in squid on windows to work. Iam using squid 2.7. My webservers are 2 single NIC lighttpd and one dual nic lighttpd. server1 in this example is running squid on port 80 and lighttpd on port 8080 (just to test) Requirements: All 3 webservers running lighttpd should be balanced two option for load balancing: Best would be if server1 is busy server2 takes over, if server2 is busy server3 takes over, etc.. Round robin style evenly distributed load. Eg server1 takes first call, server2 second etc.. All requests should be treated the same way (no url rewriting or so on) Sent host headers have to be redirected to every server as http host header, speaking of "server1", "server1.company.internal" and "10.211.1.1". My approach: acl all src all acl manager proto cache_object http_port 80 accel defaultsite=server1.company.internal vhost #reverse proxy entries cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1 cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1 cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1 cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2 #decl of names of squid host acl registered_name_hostdomain dstdomain server1.company.internal acl registered_name_host dstdomain server1 #ip of squid host acl registered_name_ip dstdomain 10.211.2.1 # access: redirects the correct squid hostname http_access allow registered_name_hostdomain http_access allow registered_name_host http_access allow registered_name_ip http_access deny all cache_peer_access server1_nic1 allow registered_name_hostdomain cache_peer_access server1_nic1 allow registered_name_host cache_peer_access server1_nic1 allow registered_name_ip cache_peer_access server2_nic1 allow registered_name_hostdomain cache_peer_access server2_nic1 allow registered_name_host cache_peer_access server2_nic1 allow registered_name_ip cache_peer_access server3_nic1 allow registered_name_hostdomain cache_peer_access server3_nic1 allow registered_name_host cache_peer_access server3_nic1 allow registered_name_ip cache_peer_access server3_nic2 allow registered_name_hostdomain cache_peer_access server3_nic2 allow registered_name_host cache_peer_access server3_nic2 allow registered_name_ip cache_peer_access server1_nic1 deny all cache_peer_access server2_nic1 deny all cache_peer_access server3_nic1 deny all cache_peer_access server3_nic2 deny all never_direct allow all Problems: Load balancer does not load balance other than to first server. Only if the first server is killed in any way the second will take over. I have seen the others working at some point, but definitely not as the intended load balancing described above. If the cache_peer_access is not defined sometimes the wrong hostname is sent to the backend webserver and this always depends on the defaultsite= parameter. Probably because the host header on the request to squid is not set and its replaced by defaultsite. Leaving out defaultsite didnt solve the problem. The only workaround i found for this is the current approach with cache_peer_access. Questions: Does the cache_peer_access influence the round-robin? Is there a better workaround to pass the host header to the backed webservers? Which parameters do increase the speed of load balancing or does anyone have a better approach? -Martin

    Read the article

  • Is there a way to setup a hotspot with a domain name rather than IP address?

    - by WagnerMatosUK
    Basically I've set up a hotspot and it's currently being accessed through an IP address. I'd like to use a hostname instead. This is for internal use only, meaning the ODROID device being used as the access point is connected to the internet via Ethernet and only a few devices will access the AP. My setup details: Arch Linux on an ODROID U3 device, using hostapd and a DHCP server. PS: I'm quite inexperienced with networking, so I might be missing something obvious here. Thanks in advance
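
    One hedged approach is to swap dnsmasq in as the hotspot's DNS server (name and address below are illustrative): point a name at the AP, and make sure the DHCP server hands out the AP's address as the DNS server (option domain-name-servers in dhcpd.conf if you stay with ISC dhcpd).

        # point a name at the AP and serve DNS to hotspot clients
        printf 'interface=wlan0\naddress=/portal.lan/192.168.12.1\n' >> /etc/dnsmasq.conf
        systemctl enable dnsmasq && systemctl start dnsmasq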

    Read the article

  • Free, web based alternative to Visio?

    - by Lars
    I have used Visio to map out my network structure, and have used the export function to create an HTML page that is searchable by IP, hostname etc. This is a really nice tool and I use it often. However, I would like for users who do not use Internet Explorer to be able to use the search features. What are some alternatives to Visio here? I want to draw a network diagram where objects are searchable. Thanks!

    Read the article

  • Failing to connect; my fault?

    - by Majid
    A client has supplied the following info (fake values used here) for connecting to his host so I can do my work. I have tried connecting from several locations and failed - the connection times out. Am I doing something wrong, or is he forgetting some crucial info?

        IP: 216.x.y.z
        MySql Password : password1
        User : user1
        Password : password2
        SSH Port : 49226
        ==========
        root password : password1
        Hostname ; host.com
        Username ; user2
        Password : password3
        FTP Port : 201
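
    A hedged first pass using the supplied values: check whether the non-standard SSH port is reachable at all, then connect explicitly on that port with verbose output to see where it stalls.

        # is the port reachable at all?
        nc -vz 216.x.y.z 49226

        # connect explicitly on the supplied port with verbose output
        ssh -vvv -p 49226 user2@host.com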

    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >