Search Results

Search found 12484 results on 500 pages for 'seraphims host'.


  • How to completely disable IPv6 for loopback interface on RHEL 5.6

    - by Marc D
    I've done lots of research on how to disable IPv6 on Red Hat Linux and I have it almost completely disabled. However, the loopback interface is still getting an inet6 loopback address (::1/128), and I can't find where IPv6 is still enabled for loopback. To disable IPv6 I added the following settings to /etc/sysctl.conf:
      net.ipv6.conf.default.disable_ipv6=1
      net.ipv6.conf.all.disable_ipv6=1
    and also added the following line to /etc/sysconfig/network:
      NETWORKING_IPV6=no
    After rebooting, the inet6 address is gone from my physical interface (eth0), but it is still there for lo:
      # ip addr show
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
          link/ether 00:50:56:xx:xx:xx brd ff:ff:ff:ff:ff:ff
          inet 10.x.x.x/21 brd 10.x.x.x scope global eth0
    If I manually remove the IPv6 address from loopback and then bounce the interface, it comes back:
      # ip addr del ::1/128 dev lo
      # ip addr show lo
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
      # ip link set lo down
      # ip link set lo up
      # ip addr show lo
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
    I believe IPv6 should be disabled at the kernel level, as confirmed by sysctl:
      # sysctl net.ipv6.conf.lo.disable_ipv6
      net.ipv6.conf.lo.disable_ipv6 = 1
    Any ideas on what else would cause the loopback interface to get an IPv6 address?
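
    One detail worth noting, as an assumption rather than something from the post: on RHEL 5 the ipv6 kernel module assigns ::1 to lo when it initializes, so a commonly suggested alternative is to keep the module from initializing at all and then reboot. A sketch:
      # /etc/modprobe.d/disable-ipv6.conf (file name is illustrative; RHEL 5 also reads /etc/modprobe.conf)
      options ipv6 disable=1
      # after a reboot, verify:
      ip addr show lo      # ::1/128 should no longer appear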

  • Reverse proxy for a subdirectory in nginx

    - by Maple
    I want to set up a reverse proxy on my VPS for my Heroku app (http://lovemaple.heroku.com), so that if I visit mysite.com/blog I get the content from http://lovemaple.heroku.com. I followed the instructions on the Apache wiki:
      location /couchdb {
        rewrite /couchdb/(.*) /$1 break;
        proxy_pass http://localhost:5984;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    I changed it to fit my situation:
      location /blog {
        rewrite /blog/(.*) /$1 break;
        proxy_pass http://lovemaple.heroku.com;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    When I visit mysite.com/blog, the page shows up, but the JS/CSS files cannot be loaded (404). Their links become mysite.com/style.css instead of mysite.com/blog/style.css. What's wrong and how can I fix it?
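
    A sketch of one common explanation and fix (assumptions about the blog, not a verified answer): the blog's HTML references its assets with root-relative URLs such as /style.css, so those requests never match location /blog and fall through to the VPS root. Either make the app generate links under /blog, or also proxy the asset paths, e.g.:
      # hypothetical extra location; adjust the pattern to the paths the blog really uses
      location ~* \.(css|js|png|jpg)$ {
        proxy_pass http://lovemaple.heroku.com;
        proxy_set_header Host lovemaple.heroku.com;
      }
    Note that Heroku routes requests by Host header, so proxy_set_header Host $host (which sends mysite.com) may itself cause problems; sending the app's own hostname, as above, is the safer assumption.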

  • Getting VPN to work from Virtual PC

    - by strobaek
    Setup: the host is running Windows 7. The virtual PC is a Windows Server 2008 machine running under VMware Workstation 6.5. From the host I have a VPN to e.g. TFS and other resources. From the VPC I need to connect to e.g. a SQL server via the VPN. My problem is that I cannot get a connection from the VPC. If I'm sitting on the corporate network, everything works fine (but then I don't need the VPN). From home - where the VPN is required - it does not work. I have two network adapters defined/configured, one as Bridged and one as Host Only. If I change the bridged one to NAT I have no connectivity at all from the VPC. I have no problems connecting from my host to the VPC. Thanks.

  • Configuring nginx server to handle requests from multiple domains

    - by KillABug
    Use case: I am working on a web application which lets users create HTML templates and publish them on Amazon S3. To publish the websites I use nginx as a proxy server. When a user enters a website URL, I want to check whether the request comes from my application, i.e. app.mysite.com (this won't change), and route it to Apache for regular access; if it comes from some other domain, like a regular URL www.mysite.com (this needs to be handled dynamically and can be random), it goes to the S3 bucket that hosts the template. My current configuration is:
      user nginx;
      worker_processes 1;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;
      events {
        worker_connections 1024;
      }
      http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;
        charset utf-8;
        keepalive_timeout 65;
        server_tokens off;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay off;
        # Default server block to catch undefined host names
        server {
          listen 80;
          server_name app.mysite.com;
          access_log off;
          error_log off;
          location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
          }
        }
      }
      # Load all the sites
      include /etc/nginx/conf.d/*.conf;
    Update, as I was not clear enough: my question is how I can handle both domains in the config file. My nginx instance is a proxy server on port 80 on an EC2 instance. The same instance also hosts my application, which runs on Apache on a different port. So any request for my application will come from the domain app.mysite.com, and I also want to proxy the hosted templates on S3, which live inside a bucket, say sites.mysite.com/coolsite.com/index.html. So if someone hits coolsite.com I want to proxy it to sites.mysite.com/coolsite.com/index.html and not to app.mysite.com. Hope I am clear. The other server block:
      # Server for S3
      server {
        # Listen on port 80 for all IPs associated with your machine
        listen 80;
        # Catch all other server names
        server_name _;    # I want it to handle domains other than app.mysite.com
        # This code gets the host without www. in front and places it inside
        # the $host_without_www variable.
        # If someone requests www.coolsite.com, then $host_without_www will have the value coolsite.com
        set $host_without_www $host;
        if ($host ~* www\.(.*)) {
          set $host_without_www $1;
        }
        location / {
          # This code rewrites the original request, and adds the host without www in front
          # E.g. if someone requests
          #   /directory/file.ext?param=value
          # from the coolsite.com site the request is rewritten to
          #   /coolsite.com/directory/file.ext?param=value
          set $foo 'http://sites.mysite.com';
          # echo "$foo";
          rewrite ^(.*)$ $foo/$host_without_www$1 break;
          # The rewritten request is passed to S3
          proxy_pass http://sites.mysite.com;
          include /etc/nginx/proxy_params;
        }
      }
    Also, I understand I will have to make DNS changes in the CNAME of the domain. I guess I will have to add app.mysite.com under the CNAME of the template domain name? Please correct me if I'm wrong. Thank you for your time.
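
    A small way to sanity-check the two server blocks before touching any DNS (illustrative commands; substitute the EC2 instance's public IP): send requests straight to the proxy with different Host headers and see which block answers:
      # should reach the Apache-backed application block
      curl -is -H "Host: app.mysite.com" http://<EC2-public-IP>/ | head -n 5
      # should reach the catch-all block and be rewritten to the S3 template path
      curl -is -H "Host: coolsite.com" http://<EC2-public-IP>/index.html | head -n 5
    As for DNS, the general rule is that each customer domain (coolsite.com) has to resolve to the proxy itself, e.g. via an A record for the EC2 address or a CNAME to a hostname that resolves there; pointing it at S3 directly would bypass the nginx rewrite.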

  • nconf nagios config no services defined

    - by user1508056
    I've set up Nagios Core on OS X 10.7 Server via MacPorts fine. It seems to load fine, the sample config files all copied over to /opt/local/etc/nagios/objects/ fine, and they are specified correctly in the nagios.cfg file. I then installed nconf manually and got it running without much of a fight. Then I clicked on "Generate Nagios config" in nconf and get 1 warning and 4 errors. When I expand the error box, here is what I see:
      Nagios Core 3.5.0
      Copyright (c) 2009-2011 Nagios Core Development Team and Community Contributors
      Copyright (c) 1999-2009 Ethan Galstad
      Last Modified: 03-15-2013
      License: GPL
      Website: http://www.nagios.org
      Reading configuration data...
      Read main config file okay...
      Read object config files okay...
      Running pre-flight check on configuration data...
      Checking services...
      Error: There are no services defined!
      Checked 0 services.
      Checking hosts...
      Error: There are no hosts defined!
      Checked 0 hosts.
      Checking host groups...
      Checked 0 host groups.
      Checking service groups...
      Checked 0 service groups.
      Checking contacts...
      Error: There are no contacts defined!
      Checked 0 contacts.
      Checking contact groups...
      Checked 0 contact groups.
      Checking service escalations...
      Checked 0 service escalations.
      Checking service dependencies...
      Checked 0 service dependencies.
      Checking host escalations...
      Checked 0 host escalations.
      Checking host dependencies...
      Checked 0 host dependencies.
      Checking commands...
      Checked 0 commands.
      Checking time periods...
      Checked 0 time periods.
      Checking for circular paths between hosts...
      Checking for circular host and service dependencies...
      Checking global event handlers...
      Checking obsessive compulsive processor commands...
      Checking misc settings...
      Warning: Nothing specified for illegal_macro_output_chars variable!
      Total Warnings: 1
      Total Errors: 3
    I've tried several different things (played with cache settings, changed file permissions/ownership, edited some config files manually, etc.) but nothing gets me past this step. The thing is, when I run 'sudo nagios -v /opt/local/etc/nagios/nagios.cfg' the output shows it is reading a number of services, a localhost, and a contact in the .cfg files... so I'm pretty confident those are OK and the problem is that nconf isn't reading the correct .cfg files or something like that. Any ideas what to double check? I did lots of googling and found nothing on this specific issue, so either I'm special (I'm not) or I am overlooking something really simple. The path to the nagios binary is listed as /opt/local/bin/nagios, if that matters. Also, all the nagios files are owned by nagios:nagios, whereas the nconf files are owned by my user, with only the directories/files specified in the nconf docs belonging to the _www user and/or group (things like output, temp, config, etc.). Thanks.
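
    A sketch of what may be going on (assumptions about the nconf workflow and paths, not a confirmed diagnosis): "Generate Nagios config" runs its pre-flight check against the files nconf itself generates from its database, not against the sample files in objects/ that 'nagios -v' reads, so an empty or undeployed nconf setup reports zero hosts/services even though the sample config is fine. The usual deployment step looks roughly like this:
      # unpack nconf's generated config (it is typically written to output/NagiosConfig.tgz)
      tar xzf /path/to/nconf/output/NagiosConfig.tgz -C /opt/local/etc/nagios/import/
      # and in nagios.cfg, point at the deployed directories instead of (or alongside) objects/:
      cfg_dir=/opt/local/etc/nagios/import/global
      cfg_dir=/opt/local/etc/nagios/import/Default_collector
    If hosts and services do exist in the nconf UI and it still reports zero of everything, the nagios binary and config paths in nconf's own settings are the next thing to verify.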

  • SNMP HOSTMIB.MIB not loading?

    - by Eriedor
    Forgive me if the answer is something glaringly obvious, but I just can't seem to get access to any OIDs under the HOST branch in SNMP. I've used an SNMP browser to inspect a few of my systems and none of them show a HOST branch under ISO.ORG.DOD.INTERNET.MGMT.MIB-2. Any thoughts as to why? I'm looking to monitor a few computers' hardware resources via SNMP and unfortunately all such OIDs live under the missing HOST branch. Any thoughts?
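
    One thing worth checking, stated as an assumption since the post doesn't say what agents the systems run: many default Net-SNMP snmpd.conf files limit the community's view to the system subtree, which hides HOST-RESOURCES-MIB (.1.3.6.1.2.1.25) even though the agent supports it. A sketch:
      # /etc/snmp/snmpd.conf on the monitored host (restart snmpd afterwards)
      view systemview included .1.3.6.1.2.1.25
      # then, from the management station:
      snmpwalk -v2c -c public <agent-host> 1.3.6.1.2.1.25.1    # hrSystem should now answer
    The Windows SNMP service exposes HOST-RESOURCES by default, so on Windows targets the community and permitted-managers settings are the more likely culprit.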

  • How can I keep my web browser from redirecting to localhost when using WAMP on Windows 7?

    - by Josh
    I'm currently using Windows 7 with WAMP to try to work on some software, but my web browsers will not accept cookies from the "localhost" domain. I tried creating a few bogus domains in my hosts file by pointing them to 127.0.0.1, but when I type them in I am automatically redirected back to localhost. I have also configured virtual hosts in Apache to correspond with the domains I added to the hosts file, and it still redirects back to localhost. Is there anything special I must do on Windows 7 to get around this localhost redirect? I'll include my hosts file here:
      # Copyright (c) 1993-2009 Microsoft Corp.
      #
      # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
      #
      # This file contains the mappings of IP addresses to host names. Each
      # entry should be kept on an individual line. The IP address should
      # be placed in the first column followed by the corresponding host name.
      # The IP address and the host name should be separated by at least one
      # space.
      #
      # Additionally, comments (such as these) may be inserted on individual
      # lines or following the machine name denoted by a '#' symbol.
      #
      # For example:
      #
      #      102.54.94.97     rhino.acme.com          # source server
      #       38.25.63.10     x.acme.com              # x client host
      # localhost name resolution is handled within DNS itself.
      #       127.0.0.1       localhost
      #       ::1             localhost
      127.0.0.1    magento.localhost.com    www.localhost.com
    Thanks for looking :)
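
    For reference, a minimal sketch of the Apache side that usually goes with those hosts-file entries (paths and the vhost file location are assumptions about the WAMP install): the first matching VirtualHost doubles as the default, so a request whose Host header doesn't match any ServerName falls through to it, which looks like a "redirect" to localhost even though no redirect happens.
      # httpd-vhosts.conf (Apache 2.2 style); make sure it is Include'd from httpd.conf
      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName localhost
          DocumentRoot "C:/wamp/www"
      </VirtualHost>
      <VirtualHost *:80>
          ServerName magento.localhost.com
          DocumentRoot "C:/wamp/www/magento"
      </VirtualHost>
    If the address bar literally changes to "localhost", that would be the application itself (e.g. a configured base URL) issuing the redirect rather than Apache.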

  • Hyper-V CPU Utilization, Good Tools?

    - by yzorg
    I just learned a ton from this post: Host CPU% doesn't include child VM CPU%. Specifically, I learned that both the 'host OS' and the 'child VMs' are siblings within the hypervisor layer. Are there good utilities for 'watching' the total CPU and other resource counters at the hypervisor (hardware) layer? I know perfmon (watching the special Hyper-V CPU counters) is the standard answer, but I've stayed away from perfmon for ad-hoc monitoring. Are there good OSS or free tools to 'watch' the resource utilization as I create multiple new VMs running on the server? I'm a developer, so if there aren't any good UI tools to surface this data I'd consider creating one, but only if needed. P.S. My specific scenario is that I'm creating new web, SQL and back-end server VMs for new Windows 8 Server and SQL 2012 (the entire application stack). I need to monitor them for utilization and know when I need to grow beyond one host (I'll need to split the VMs onto separate hosts as I hit the hardware limits of the first host, and diagnose problems).
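
    Not a full monitoring product, but for quick ad-hoc watching from the host the built-in Hyper-V counters can be sampled without opening perfmon, e.g. with typeperf (the counter names below are the standard Hyper-V ones; worth confirming they exist on your build with 'typeperf -q'):
      typeperf "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" -si 5
      typeperf "\Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time" -si 5
    The first line is total physical CPU use across the host and all guests; the second breaks it out per virtual processor.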

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After days of googling and trying, I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My webservers are two single-NIC lighttpd machines and one dual-NIC lighttpd machine. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).
    Requirements:
      All 3 webservers running lighttpd should be balanced.
      Two options for load balancing: best would be if server1 is busy, server2 takes over; if server2 is busy, server3 takes over, etc. Or round-robin style, evenly distributed load, e.g. server1 takes the first call, server2 the second, etc.
      All requests should be treated the same way (no URL rewriting or so on).
      The Host header sent by the client has to be passed on to every backend as the HTTP Host header, whether the client used "server1", "server1.company.internal" or "10.211.1.1".
    My approach:
      acl all src all
      acl manager proto cache_object
      http_port 80 accel defaultsite=server1.company.internal vhost
      # reverse proxy entries
      cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
      cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1
      cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
      cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2
      # declaration of names of the squid host
      acl registered_name_hostdomain dstdomain server1.company.internal
      acl registered_name_host dstdomain server1
      # ip of the squid host
      acl registered_name_ip dstdomain 10.211.2.1
      # access: redirects the correct squid hostname
      http_access allow registered_name_hostdomain
      http_access allow registered_name_host
      http_access allow registered_name_ip
      http_access deny all
      cache_peer_access server1_nic1 allow registered_name_hostdomain
      cache_peer_access server1_nic1 allow registered_name_host
      cache_peer_access server1_nic1 allow registered_name_ip
      cache_peer_access server2_nic1 allow registered_name_hostdomain
      cache_peer_access server2_nic1 allow registered_name_host
      cache_peer_access server2_nic1 allow registered_name_ip
      cache_peer_access server3_nic1 allow registered_name_hostdomain
      cache_peer_access server3_nic1 allow registered_name_host
      cache_peer_access server3_nic1 allow registered_name_ip
      cache_peer_access server3_nic2 allow registered_name_hostdomain
      cache_peer_access server3_nic2 allow registered_name_host
      cache_peer_access server3_nic2 allow registered_name_ip
      cache_peer_access server1_nic1 deny all
      cache_peer_access server2_nic1 deny all
      cache_peer_access server3_nic1 deny all
      cache_peer_access server3_nic2 deny all
      never_direct allow all
    Problems:
      The load balancer does not balance to anything other than the first server. Only if the first server is killed in some way will the second take over. I have seen the others working at some point, but definitely not with the intended load balancing described above.
      If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend webserver, and this always depends on the defaultsite= parameter - probably because the Host header on the request to Squid is not set and is replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found for this is the current approach with cache_peer_access.
    Questions:
      Does cache_peer_access influence the round-robin?
      Is there a better workaround to pass the Host header to the backend webservers?
      Which parameters increase the speed of load balancing, or does anyone have a better approach?
    -Martin

  • getaddrinfo(3) failed

    - by user101289
    I'm trying to connect to a webservice using a PHP wrapper (which uses curl under the covers). On my local Linux machine running PHP 5.3 it works perfectly. However, when I move to a remote server (also running PHP 5.3 on Linux), the call to the webservice URL returns:
      getaddrinfo(3) failed for http://server.host.com:8080/login
    I get a similar error from a ping on the remote host:
      ping: unknown host http://server.host.com:8080/login
    But when I issue a curl request from the command line, it returns the expected URL. Can anyone shed any light on this issue? Thanks!
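
    A hedged observation rather than a diagnosis: both error messages show the entire URL, scheme, port and path included, being handed to the resolver, and getaddrinfo() only accepts a bare host name, so something on the remote box appears to be passing the full URL where only server.host.com belongs (a wrapper setting, a proxy variable, or how the endpoint is configured). Illustrative checks to separate DNS problems from URL-handling problems:
      # resolver check with the bare name; this should succeed independent of PHP
      getent hosts server.host.com
      ping -c 3 server.host.com
      # if your code assembles the request itself, split the URL first (sketch):
      php -r '$p = parse_url("http://server.host.com:8080/login"); var_dump($p);'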

  • Setting a valid IP on VMware VMs

    - by Shahin At
    I have a VPS, and on it I installed VMware Workstation version 9. I have 3 valid IPs:
      XXX.152.193.66
      XXX.152.193.101
      XXX.152.193.103
      Gateway: XXX.152.193.65
    Two of the IPs are set on the host, and I want to set one IP on a VM. The VM's network is bridged and the IP is set on the VM, but that IP cannot be pinged from outside the internal network, and from inside the VM I cannot ping the gateway. What can I do to solve this problem? The IP is otherwise unassigned and is only set on the VM, which uses bridged network mode; the gateway, netmask and DNS are set, but it still does not ping. When I set this IP on the host it worked without problems, but I do not know why it does not work on the VM. My host OS is Windows Server 2003, the firewall is off, and RRAS (Routing and Remote Access) is enabled for VPN service. Do I need to create an IP route on the host or in the virtual machine?

  • Can't remote into Virtual PC

    - by Spamela
    I used to be able to remote into my Virtual PCs. It has been working for at least a year. Yesterday it just stopped working... I cannot figure it out... Things I have triple-checked:
      1. My Virtual PCs have "Allow Remote Access" checked.
      2. My Virtual PCs have an account in the Administrators group that is password protected.
      3. My host's entry in the registry for the Terminal Services port is still the default of 3389.
    So here is the strange thing: I can't even remote into the Virtual PC from its host, much less another PC... From the host, I can ping the Virtual PC and get a response, but when trying to remote into it from the host I get the following error:
      Remote Desktop can't connect to the remote computer for one of these reasons:
      1) Remote access to the server is not enabled.
      2) The remote computer is turned off.
      3) The remote computer is not available on the network.
    My host is running Windows 7. The Virtual PCs are running XP. Thank you for looking at this!
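
    A couple of quick checks from inside an XP guest that narrow this down (standard built-in commands, nothing Virtual PC specific; whether they apply is an assumption about the setup):
      rem the RDP listener should show LISTENING on 0.0.0.0:3389
      netstat -an | find "3389"
      rem XP SP2/SP3 firewall summary - an update or policy change can switch it back on
      netsh firewall show state
      rem the Terminal Services service should be RUNNING
      sc query TermService
    If the listener and firewall look right from inside the guest but the host still can't connect, the problem is more likely the virtual network adapter or its binding than Remote Desktop itself.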

  • Setup IIS 7.5 with multiple website bindings and SSL?

    - by JK01
    On IIS 7.5 I am trying to achieve this with two websites:
      Default Web Site is bound to:
        (blank host header, port 80 - http)
        (blank host header, port 443 - https)
        go.example.com
        www71.example.com
        the IP address of go.example.com
      The 2nd website, "Beta", is bound to:
        beta.example.com
        (blank host header, port 443 - https) * using blank only because it doesn't seem to be possible to bind https to a named host header
    Both need to work with SSL. But I have these problems:
      When I type in beta.example.com, I see the go.example.com site instead.
      I cannot seem to add the SSL binding to both websites at once (I have a single *.example.com wildcard certificate). The Beta site will not even start if I add the https binding to it.
    This is how I have set it up: What is the correct way to set it up?
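
    A sketch of the usual workaround on IIS 7.5, which predates SNI (assuming the *.example.com wildcard certificate is already installed and attached to the existing 443 binding): the GUI won't let you type a host name on an https binding, but appcmd (or editing applicationHost.config) will, and with a wildcard certificate both sites can then share port 443 under different host headers:
      %windir%\system32\inetsrv\appcmd set site /site.name:"Default Web Site" /+bindings.[protocol='https',bindingInformation='*:443:go.example.com']
      %windir%\system32\inetsrv\appcmd set site /site.name:"Beta" /+bindings.[protocol='https',bindingInformation='*:443:beta.example.com']
    The blank :443 binding on Beta would then be removed. This also explains the first symptom: while Beta refuses to start because of the conflicting blank 443 binding, none of its bindings (including beta.example.com on port 80) are active, so those requests fall through to the default site.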

  • Why is ssh agent forwarding not working?

    - by J. Pablo Fernández
    On my own computer, running Mac OS X, I have this in ~/.ssh/config:
      Host *
        ForwardAgent yes
      Host b1
        ForwardAgent yes
    b1 is a virtual machine running Ubuntu 12.04. I ssh to it like this:
      ssh pupeno@b1
    and I get logged in without being asked for a password because I already copied my public key. Due to forwarding, I should be able to ssh to pupeno@b1 from b1 and it should work without asking me for a password, but it doesn't: it asks me for a password. What am I missing? This is the verbose output of the second ssh:
      pupeno@b1:~$ ssh -v pupeno@b1
      OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: /etc/ssh/ssh_config line 19: Applying options for *
      debug1: Connecting to b1 [127.0.1.1] port 22.
      debug1: Connection established.
      debug1: identity file /home/pupeno/.ssh/id_rsa type -1
      debug1: identity file /home/pupeno/.ssh/id_rsa-cert type -1
      debug1: identity file /home/pupeno/.ssh/id_dsa type -1
      debug1: identity file /home/pupeno/.ssh/id_dsa-cert type -1
      debug1: identity file /home/pupeno/.ssh/id_ecdsa type -1
      debug1: identity file /home/pupeno/.ssh/id_ecdsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
      debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: sending SSH2_MSG_KEX_ECDH_INIT
      debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
      debug1: Server host key: ECDSA 35:c0:7f:24:43:06:df:a0:bc:a7:34:4b:da:ff:66:eb
      debug1: Host 'b1' is known and matches the ECDSA host key.
      debug1: Found key in /home/pupeno/.ssh/known_hosts:1
      debug1: ssh_ecdsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,password
      debug1: Next authentication method: publickey
      debug1: Trying private key: /home/pupeno/.ssh/id_rsa
      debug1: Trying private key: /home/pupeno/.ssh/id_dsa
      debug1: Trying private key: /home/pupeno/.ssh/id_ecdsa
      debug1: Next authentication method: password
      pupeno@b1's password:
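
    A hedged reading of that trace (the standard agent-forwarding checklist, not a definite diagnosis): forwarding can only hand out keys that the local agent actually holds, and the output above shows no agent identities being offered at all, only on-disk key files on b1 being tried. Things worth checking:
      # on the Mac, before connecting: is the key in the agent at all?
      ssh-add -l              # "The agent has no identities." means there is nothing to forward
      ssh-add ~/.ssh/id_rsa   # path is an assumption; add whichever key b1 trusts
      # on b1, inside the first ssh session: did a forwarded agent socket arrive?
      echo "$SSH_AUTH_SOCK"   # empty means forwarding never happened for this session
      ssh-add -l              # should list the Mac's key via the forwarded agent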

  • virtualized windows 2003 domain with CentOS 5.3 and poor connectivity

    - by Chris Gow
    I have a test lab set up running a virtualized Windows 2003 domain on a CentOS 5.3 (Xen) host, and I am experiencing connectivity problems with guests running on other hosts that are part of the same domain. Here's the setup: On Computer A I have CentOS 5.3 running as the host, with virtualized Windows 2003 servers for a primary domain controller, a backup domain controller and an Exchange server. The primary domain controller also acts as a WINS and DNS server. The Windows domain is on a separate subnet from my company's corporate network. Connectivity to any of the virtualized guests on Computer A is fine (remote desktop, ping, what have you). I have another host computer (Computer B) that also has a virtualized Windows 2003 Server guest that is part of the same domain. However, connectivity to that guest is flaky at best. I continuously get at least 60% packet loss when I try to ping the guest, and due to that flakiness I cannot access any of the services it runs (remote desktop, web). Now here's the interesting part: it seems to affect only machines in the same domain that run on a different computer than the domain controller. On Computer B there is another Windows 2003 guest that is not part of the test domain and is on my corporate network. There are no connectivity issues with that guest machine. The problem does not seem to be specific to Computer B either: I created a test VM on my local computer within the test domain and it exhibits the same behaviour as the guest on Computer B. A couple of items to note:
      - The host OS on both Computer A and B is the same CentOS 5.3 64-bit.
      - The guest OSes are Windows 2003 64-bit and 32-bit (the guest on Computer B is 32-bit).
      - The guest OSes are all up to date (as of Monday).
      - The host OS on Computer A was upgraded from CentOS 5.2 to 5.3.
    Update: Sorry I did not follow up with the comments from below. Computers A and B have been moved to their own dedicated switch and the problem has gone away. I'm not sure what the underlying problem(s) were, though.

  • Does exportfs disrupt users already utilizing those filesystems?

    - by CptSupermrkt
    I need to modify a server's /etc/exports file to export to an additional host. After modifying this file, for it to take effect (i.e. for the additional host to have access to the designated filesystem), I believe I have to run "exportfs" on the server exporting the filesystem. Does this disrupt users who are currently using filesystems exported from that serving host? I'm hoping to add this new host "silently", without disruption. Any additional advice related to this (common traps, things to be careful of, etc.) would be appreciated if you have any. Edit: just in case... uname -a returns:
      2.6.32-358.18.1.el6.x86_64 #1 SMP Fri Aug 2 17:04:38 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux
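
    As general NFS practice (worth verifying against your distribution's docs rather than taking as a guarantee): re-exporting only synchronizes the kernel's export table with /etc/exports, so adding a brand-new host entry does not touch mounts that existing clients already have; it is removing or changing existing export options that can disturb them. A typical low-impact sequence:
      # after adding the new host to /etc/exports
      exportfs -ra              # re-export everything listed in /etc/exports
      exportfs -v               # confirm the new host appears with the intended options
      showmount -e localhost    # what clients will now see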

  • Fresh Apache install can't be connected to

    - by Wayne M
    I've got to be missing something here. I have a brand new CentOS server with a LAMP install on it. My domain host (GoDaddy) has the server's IP address configured as the "A Record". Since the server will host subdomains I have enabled NameVirtualHost and set up a virtual host pointing to the web app on the server. I haven't touched anything else in Apache, and it's listening on Port 80 like it should be. However, I can't connect to the server either by DNS or by IP address. I've set up several servers exactly like this one and never run into this before. What could be causing this? Did someone on the host set up a firewall or something that blocks port 80? As I said, I can't connect to the server via anything, but it's a barebones box with LAMP installed on it.
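
    Worth noting that stock CentOS ships with iptables rules that reject inbound port 80, so even a perfectly configured Apache is unreachable from outside until the firewall is opened. Illustrative checks and fix (standard commands on CentOS 5/6):
      # is Apache actually listening?
      netstat -tlnp | grep ':80'
      # is the firewall dropping the traffic?
      iptables -L -n --line-numbers
      # open port 80 and make the rule survive reboots
      iptables -I INPUT -p tcp --dport 80 -j ACCEPT
      service iptables save
    If the port is open and it still fails from outside, checking whether the hosting provider runs its own firewall in front of the box would be the next step.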

  • Best way to run site through https on server which can't add additional certs

    - by penguin
    So I'm in a curious situation in that I am using a particular server to host things which I can't host anywhere else (it has access to user databases etc. which can't otherwise be accessed). I've been in quite a bit of discussion with the sysadmin, and it looks like the only way to run our site www.foo.com over https may be through some sort of proxy. Currently, users go to www.foo.com and are redirected to https:// host-server.com/foo, as there is an SSL cert installed on that. I want users to be on https:// www.foo.com. I'm told that for various reasons it's going to be very difficult to add an additional SSL cert to the host server. So I was wondering if it is possible to have the DNS records point to a new server, which then creates the HTTPS connection with the browser, forwards requests to https:// host-server.com/foo and feeds the replies back to the original requester. Does this make sense? And would it be at all feasible? My experience with SSL is limited at best, so thanks in advance for your help :) ps gaps in hyperlinks as ServerFault was getting unhappy with the number of links I was posting!
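
    What's being described is a TLS-terminating reverse proxy, and it is a standard pattern. A minimal sketch of the new front-end box, with nginx chosen purely as an example and all names and paths as placeholders (the certificate for www.foo.com lives on this new server, so nothing has to change on the host server):
      server {
          listen 443 ssl;
          server_name www.foo.com;
          ssl_certificate     /etc/ssl/www.foo.com.crt;
          ssl_certificate_key /etc/ssl/www.foo.com.key;
          location / {
              proxy_pass https://host-server.com/foo/;
              proxy_set_header Host host-server.com;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }
    The usual caveat: if the application generates absolute links or redirects to host-server.com/foo, users will leak back to the old hostname, so the app (or proxy_redirect and response rewriting on the proxy) has to account for that.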

  • Setting up Metasploitable in VirtualBox

    - by SetSlapShot
    I'm supposed to try to use Kali to run exploits against Metasploitable, but I'll burn that bridge when I get there. My question right now is: how do I set up a host-only network in VirtualBox? I heard that it is unsafe to run Metasploitable in bridged networking mode, and that host-only or NAT is better. When I run Metasploitable on NAT, the Kali box (attacker) has the same IP address as the Metasploitable box, and nmap doesn't really return anything except what I can only assume is a scan of its own ports. I tried to create a host-only network in VirtualBox. I left the adapter settings at the default and unchecked "DHCP server". Now when I run ifconfig on the Metasploitable box, there is no IP address listed. Am I setting up / connecting to / not configuring the host-only network correctly?
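
    A sketch of the host-only setup from the command line (the GUI equivalent works the same way; vboxnet0 and the 192.168.56.x range are VirtualBox's usual defaults and are assumptions here). Since the DHCP server was left unchecked, the guests never received addresses, which matches the empty ifconfig output:
      VBoxManage hostonlyif create
      VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
      VBoxManage dhcpserver add --ifname vboxnet0 --ip 192.168.56.2 --netmask 255.255.255.0 \
          --lowerip 192.168.56.100 --upperip 192.168.56.200 --enable
    Then attach both the Kali and Metasploitable adapters to that host-only network (Attached to: Host-only Adapter, vboxnet0) and reboot or renew DHCP in the guests; alternatively, a static address in the same subnet inside Metasploitable works just as well.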

  • NAT for static private addresses

    - by biggdman
    Could someone please help me out with the following scenario: I have a machine that hosts 3 LXC containers and acts as a router for them. The LXC containers have private IP addresses set on the interfaces that are connected to the host. I want to provide Internet access to the containers, and I want to configure the host system so that it translates only the addresses that are configured statically on the LXC containers' interfaces. Should I try to configure the host so it translates each of the 3 private addresses to the public address of the host's interface that is connected to the Internet?
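
    That is exactly what source NAT does, and translating the three container addresses to the host's public address is the usual arrangement. A sketch of the iptables side, with the container addresses and the outbound interface name as placeholders since the post doesn't give them:
      # allow the host to forward packets at all
      sysctl -w net.ipv4.ip_forward=1
      # translate only the three static container addresses, nothing else
      iptables -t nat -A POSTROUTING -s 10.0.3.11/32 -o eth0 -j MASQUERADE
      iptables -t nat -A POSTROUTING -s 10.0.3.12/32 -o eth0 -j MASQUERADE
      iptables -t nat -A POSTROUTING -s 10.0.3.13/32 -o eth0 -j MASQUERADE
    If the host's public address is static, -j SNAT --to-source <public-ip> does the same job slightly more efficiently than MASQUERADE.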

  • VM clients cannot access WAN

    - by Saariko
    I have a new VM host on my network, on a Dell R620. The dedicated iDRAC is connected with a static IP of 192.168.3.x. NIC #1 is connected to my router. The ESXi 5.1 host has the IP 192.168.3.250, and the vSphere appliance has a static IP of 192.168.3.241. All the clients on the new host are in the same 192.168.3.x network, and all clients are Windows 2008 R2. My problem is that none of the clients can access the WAN: I can't ping anything beyond my router. I CAN ping anything within my router, even if it's on a different subnet, 192.168.0.x (the router rules are intact and working). I can ping the gateway (192.168.3.254). One thing that I checked, and that is bothering me (but I don't know if it has any relevance), is that in the host's networking properties there is a vmnic0 (picture) that shows as if it only recognizes the 192.168.0.x network - is that so? The command route print shows me the following details, where I have a duplicate entry for 0.0.0.0 (and one is wrong), which is probably also why it's not working.
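
    If the duplicate 0.0.0.0 entry really is the culprit, the usual fix inside a Windows 2008 R2 guest looks like this (the gateway value is the one given above; the "wrong" gateway is a placeholder, since the route output isn't included in the post):
      route print -4
      route delete 0.0.0.0 mask 0.0.0.0 <wrong-gateway>
      rem -p makes the corrected default route persistent across reboots
      route add 0.0.0.0 mask 0.0.0.0 192.168.3.254 -p
    Duplicate default routes most often come from a default gateway being configured on more than one network adapter in the guest, so checking the IPv4 settings of each vNIC is worthwhile before adding routes by hand.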

  • Gitolite SSH URL Format

    - by KPthunder
    So I got gitolite set up. Simple. But there is one issue I am having: the SSH URLs follow the format git@host:repo. I'm used to Bitbucket / GitHub, where the URLs follow the format git@host:user/repo. Is there a way to get the latter format using gitolite? Another question: I have my ~/.ssh/config file set up with the following entry:
      Host <host>
        User <user>
        IdentityFile <path/to/public/key>
    I don't have any configuration specifying git as a user, and yet I am able to clone git@host:repo without problem. Obviously, my SSH client is using my public key to access the server, which is why gitolite is letting me clone the repo, but how does my SSH client know to use my public key, which is only configured for the <user> user and not the git user?
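
    On the first question: gitolite repository names may contain slashes, so the GitHub-style URLs are just a naming convention you can adopt in the config yourself (a sketch; names are examples):
      # conf/gitolite.conf in the gitolite-admin repo
      repo alice/project
          RW+ = alice
      # then: git clone git@host:alice/project
    On the second question, as general OpenSSH behaviour rather than anything gitolite-specific: the User line in ~/.ssh/config only supplies a default when no user is given on the command line, while IdentityFile applies to every connection matching that Host pattern regardless of the user name, so git@host still offers that key (along with any keys held by the agent), and gitolite identifies you by the key rather than by the user.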

  • Transferring domain from one registrar to another

    - by Macha
    I have a domain from my old web host, which was free with my hosting account. After a few years, I am moving to a VPS. Most of my other domains were registered with Namecheap, so it was just a matter of changing a few DNS records. However, given that my old host does not provide me with a DNS control panel, and I don't want to be paying a full hosting bill for just domains, I'm now looking into transferring it. My old host says there will be a charge of $15 to them. Namecheap's page seems to imply you don't need the current registrar to do anything, but it also seems to be based on sending an email to the address listed in whois. Of course, my old host has WhoisGuard on the domain, so the only email on it is [email protected] (and not a unique [email protected], just [email protected]), which doesn't go to me. Again, there doesn't seem to be an option to disable this. So, is it a case of paying my old host's fee and paying again for the domain from Namecheap, or is there some other way to transfer my domain? (I'm not really sure which of the trilogy sites this is best for.)

  • Cannot configure hostname: it keeps changing after reboot (CentOS 6 + nginx) [on hold]

    - by The Wolf
    I just finished this tutorial I found online: http://www.unixmen.com/install-lemp-nginx-with-mariadb-and-php-on-centos-6/ Now I am having trouble setting a hostname; you can see the result at http://www.intodns.com/busilak.com. Here are my configs.
    /etc/hosts:
      127.0.0.1   localhost.localdomain localhost localhost4.localdomain4 localhost4
      # Auto-generated hostname. Please do not remove this comment.
      198.49.66.204   host.busilak.com busilak.com host
      ::1   localhost localhost.localdomain localhost6 localhost6.localdomain6
    /etc/sysconfig/network:
      NETWORKING="yes"
      GATEWAYDEV="venet0"
      NETWORKING_IPV6="yes"
      IPV6_DEFAULTDEV="venet0"
      HOSTNAME="host.busilak.com"
    /etc/nginx/conf.d/default.conf:
      server {
          #listen       80;
          #server_name  host.busilak.com;
          #charset koi8-r;
          #access_log   logs/host.access.log main;
          location / {
              root /usr/share/nginx/html;
              index index.html index.htm;
          }
          error_page 404 /404.html;
          location = /404.html {
              root /usr/share/nginx/html;
          }
    Question: Is there anything I should have done? I just want to use my domain busilak.com as the default domain for my server, so that when I open busilak.com it points straight to my VPS's IP address.
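
    Two hedged observations, since the post doesn't say what kind of VPS this is: first, for busilak.com to be the default site, the DNS piece is an A record for busilak.com (and www) pointing at 198.49.66.204, plus an nginx server block that actually names the domain, roughly:
      server {
          listen 80;
          server_name busilak.com www.busilak.com;
          root  /usr/share/nginx/html;
          index index.html index.htm;
      }
    Second, on the hostname reverting after reboot: the venet0 interface suggests an OpenVZ container, and on OpenVZ the hostname (and sometimes /etc/hosts) is re-applied at boot from the host node, so it has to be changed through the provider's panel or with vzctl set --hostname on the node rather than only inside the container - that part is an assumption worth confirming with the provider.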
