Search Results

Search found 952 results on 39 pages for '443'.

  • Bind ADFS 2.0 service to a specific IP address

    - by ccellar
    I have one server with ADFS 2.0 and a few websites on it. One of the websites is Dynamics CRM, which listens on a specific IP address on port 443. Dynamics CRM provides a metadata file for configuration purposes which can be used to configure a relying party trust with ADFS. It is accessible at the URL:

        https://auth.contoso.com/FederationMetadata/2007-06/federationmetadata.xml

    The problem is that ADFS 2.0 installs a service which registers the following urlacl:

        https://+:443/FederationMetadata/2007-06/

    This means the result of accessing https://auth.contoso.com/FederationMetadata/2007-06/federationmetadata.xml is the metadata file of ADFS, not the one of Dynamics CRM. I've tried deleting the default urlacl and adding (one of them at a time):

        https://192.168.1.2:443/FederationMetadata/2007-06/
        https://adfs.mydomain.com:443/FederationMetadata/2007-06/

    but neither of them worked. Instead, the ADFS service failed to start up completely. Is there any way to bind this service to an IP address? At the moment I see only two alternatives:

    1. Bind the service to a non-standard port. This leads to problems, because it means the ADFS website also has to use a non-standard HTTPS port.
    2. Install ADFS 2.0 on a different server (this is my favorite alternative - however, it is not possible in every situation...)
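
    For reference, the HTTP.SYS reservations described above are managed with netsh; a minimal sketch of the delete/re-add cycle (the IP is the question's example, and the service account name is a hypothetical placeholder):

        rem show the reservation ADFS created
        netsh http show urlacl | findstr FederationMetadata
        rem remove the wildcard reservation
        netsh http delete urlacl url=https://+:443/FederationMetadata/2007-06/
        rem re-add it bound to one address, granted to the ADFS service account
        netsh http add urlacl url=https://192.168.1.2:443/FederationMetadata/2007-06/ user=CONTOSO\adfs-svc

    Whether ADFS 2.0 tolerates a non-wildcard reservation at all is exactly what the question is about, so treat this as the mechanics only.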

  • My current iptables configuration doesn't work [on hold]

    - by Brad
        sudo chkconfig iptables off
        /etc/init.d/iptables on

        ### Clear/flush iptables
        sudo iptables -F
        sudo iptables -P INPUT ACCEPT
        sudo iptables -P OUTPUT ACCEPT
        sudo iptables -P FORWARD ACCEPT

        ### Allow SSH
        iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

        ### Allow YUM updates
        sudo iptables -A OUTPUT -o eth0 -p tcp --dport 80 --match owner --uid-owner 0 --state NEW,ESTABLISHED -j ACCEPT
        sudo iptables -A OUTPUT -o eth0 -p tcp --dport 443 --match owner --uid-owner 0 --state NEW,ESTABLISHED -j ACCEPT

        ### Add your rules from the link above, here
        # ftp,smtp,imap,http,https,pop3,imaps,pop3s
        sudo iptables -A INPUT -i eth0 -p tcp -m multiport --dports 21,25,143,80,443,110,993,995 -m state --state NEW,ESTABLISHED -j ACCEPT
        sudo iptables -A OUTPUT -o eth0 -p tcp -m multiport --sports 21,25,143,80,110,443,993,995 -m state --state NEW,ESTABLISHED -j ACCEPT

        ## allow dns
        sudo iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT && sudo iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT

        # handling pings
        sudo iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT && sudo iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT
        sudo iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT && sudo iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT

        # manage ddos attacks
        sudo iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT

        ## Implement some logging so that we know what's getting dropped
        sudo iptables -N LOGGING
        sudo iptables -A INPUT -j LOGGING
        sudo iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
        sudo iptables -A LOGGING -j DROP

        # once a rule affects traffic then it is no longer managed
        # so if the traffic has not been accepted, block it
        sudo iptables -A INPUT -j DROP
        sudo iptables -I INPUT 1 -i lo -j ACCEPT
        sudo iptables -A OUTPUT -j DROP

        # allow only internal port forwarding
        sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        sudo iptables -P FORWARD DROP

        # create an iptables config file
        sudo iptables-save > /root/dsl.fw

        ### Append the following to the rc.local file
        sudo nano /etc/rc.local
        ####--- /sbin/iptables-restore < sudo /root/dsl.fw
        ####--- /etc/init.d/iptables save

        ## check to see if this setting is working great.
        sudo service iptables restart

        ## log out/in testing
        sudo chkconfig iptables on

    What is the problem with this setup? If I restart the server, it doesn't let me back in over SSH, and there may be a problem with yum.

    Original source of information: https://gist.github.com/Jonathonbyrd/1274837#file-instructions
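
    Two details in the script above may be worth a hedged illustration (these are guesses at the failure, not a confirmed diagnosis): the YUM rules use --state without loading the state module, so iptables should reject those rules when the script runs, and the rc.local restore line as written ("< sudo /root/dsl.fw") is not valid shell, so nothing gets restored at boot. Corrected sketches of those two fragments:

        # state matching needs the module loaded explicitly (-m state)
        sudo iptables -A OUTPUT -o eth0 -p tcp --dport 80 -m owner --uid-owner 0 -m state --state NEW,ESTABLISHED -j ACCEPT

        # restore the saved rules at boot (line for /etc/rc.local)
        /sbin/iptables-restore < /root/dsl.fw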

  • Setting up SSL virtual hosts in Apache

    - by Bart van Heukelom
    I'm trying to set up SSL, with SNI, in my Apache and am getting the often-seen "ssl_error_rx_record_too_long" error in Firefox when accessing the site (https://test.me.dev.xxxx.net), from which I can conclude that the server is listening on port 443, but doesn't know to use SSL on it. The server is Ubuntu 9.04 with Apache 2.2.11. I enabled SSL in the default way (a2enmod ssl). Here is my relevant config:

        NameVirtualHost *:*
        Listen 80
        <IfModule mod_ssl.c>
            Listen 443
        </IfModule>
        ...
        <VirtualHost *:*>
            DocumentRoot /home
            ServerAlias *.dev.xxxx.net
            UseCanonicalName Off
            # project.user.dev.xxxx.net
            VirtualDocumentRoot /home/%2/dev/%1/web
            SSLEngine On
            SSLCertificateFile /etc/apache2/certs/dev.crt
            SSLCertificateKeyFile /etc/apache2/certs/dev.key
        </VirtualHost>

    What is wrong?
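
    A hedged sketch of the usual fix (not verified against this setup): with a single <VirtualHost *:*> serving both ports, Apache has no way to know that only the 443 socket speaks TLS, so plain HTTP gets served there and the browser reports ssl_error_rx_record_too_long. Splitting the vhost per port keeps SSLEngine confined to 443:

        <VirtualHost *:80>
            ServerAlias *.dev.xxxx.net
            UseCanonicalName Off
            VirtualDocumentRoot /home/%2/dev/%1/web
        </VirtualHost>

        <VirtualHost *:443>
            ServerAlias *.dev.xxxx.net
            UseCanonicalName Off
            VirtualDocumentRoot /home/%2/dev/%1/web
            SSLEngine On
            SSLCertificateFile /etc/apache2/certs/dev.crt
            SSLCertificateKeyFile /etc/apache2/certs/dev.key
        </VirtualHost>

    For what it's worth, Apache only gained SNI support in 2.2.12, so on 2.2.11 name-based selection of certificates won't work regardless.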

  • Specify IPSEC port range using ipsec-tools

    - by Sandman4
    Is it possible to require IPSEC on a port range? I want to require IPSEC for all incoming connections except a few public ports like 80 and 443, but don't want to restrict outgoing connections. My SPD rules would look like:

        spdadd 0.0.0.0/0 0.0.0.0/0[80] tcp -P in none;
        spdadd 0.0.0.0/0 0.0.0.0/0[443] tcp -P in none;
        spdadd 0.0.0.0/0 0.0.0.0/0[0....32767] tcp -P in esp/require/transport;

    In the setkey manpage I see IP ranges, but no mention of port ranges. (The idea is to use IPSEC as a sort of VPN to protect internal communications between multiple servers. Instead of configuring permissions based on source IPs, or configuring specific ports, I want to demand IPSEC on anything which is not meant to be public - I feel it's less error-prone this way.)
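
    A hedged sketch of a common workaround (assuming the SPD is consulted with the first matching policy winning, which is worth verifying on your kernel): skip the range syntax entirely and pair per-port "none" exceptions with a catch-all "require" policy using [any]:

        spdadd 0.0.0.0/0 0.0.0.0/0[80] tcp -P in none;
        spdadd 0.0.0.0/0 0.0.0.0/0[443] tcp -P in none;
        spdadd 0.0.0.0/0 0.0.0.0/0[any] tcp -P in ipsec esp/transport//require;

    The policy string here uses setkey's documented protocol/mode/src-dst/level form (esp/transport//require); the installed policies can be inspected with setkey -DP.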

  • Setup Exchange 2007 ActiveSync web application on a separate server

    - by mwillmott
    Hello, I have Exchange 2007 installed on SBS 2008. I also run a web server on the network. I only have one static IP, and all traffic through port 443 is routed to the webserver. I would like to publish the ActiveSync application externally. If I temporarily route 443 traffic to the SBS, then it is published (along with OWA and everything else, which I don't want). Is there a way to host the ActiveSync application on the web server (Server 2008 with IIS7), or to have the web server route traffic meant for the ActiveSync application on to the SBS? I have tried creating a site on the webserver which uses the ActiveSync folder on the SBS, but that does not seem to work. Thanks, Michael
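
    One hedged approach (an assumption, not something the question confirms was tried): leave 443 pointed at the IIS7 box and reverse-proxy only the ActiveSync path to the SBS using the Application Request Routing and URL Rewrite modules. A minimal web.config sketch, with sbs.internal.local standing in for the SBS's internal name:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Proxy ActiveSync to SBS" stopProcessing="true">
                <match url="^Microsoft-Server-ActiveSync(.*)" />
                <action type="Rewrite" url="https://sbs.internal.local/Microsoft-Server-ActiveSync{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    This assumes ARR is installed with its proxy mode enabled; the hostname is a placeholder.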

  • could not bind socket while restarting haproxy

    - by shreyas
    I'm restarting HAProxy with the following command:

        haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

    but I get the following messages:

        [ALERT] 183/225022 (9278) : Starting proxy appli1-rewrite: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli2-insert: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli3-relais: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli4-backup: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy ssl-relay: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli5-backup: cannot bind socket

    My haproxy.cfg file looks like the following:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen appli1-rewrite 0.0.0.0:10001
            cookie SERVERID rewrite
            balance roundrobin
            server app1_1 192.168.34.23:8080 cookie app1inst1 check inter 2000 rise 2 fall 5
            server app1_2 192.168.34.32:8080 cookie app1inst2 check inter 2000 rise 2 fall 5
            server app1_3 192.168.34.27:8080 cookie app1inst3 check inter 2000 rise 2 fall 5
            server app1_4 192.168.34.42:8080 cookie app1inst4 check inter 2000 rise 2 fall 5

        listen appli2-insert 0.0.0.0:10002
            option httpchk
            balance roundrobin
            cookie SERVERID insert indirect nocache
            server inst1 192.168.114.56:80 cookie server01 check inter 2000 fall 3
            server inst2 192.168.114.56:81 cookie server02 check inter 2000 fall 3
            capture cookie vgnvisitor= len 32
            option httpclose          # disable keep-alive
            rspidel ^Set-cookie:\ IP= # do not let this cookie tell our internal IP address

        listen appli3-relais 0.0.0.0:10003
            dispatch 192.168.135.17:80

        listen appli4-backup 0.0.0.0:10004
            option httpchk /index.html
            option persist
            balance roundrobin
            server inst1 192.168.114.56:80 check inter 2000 fall 3
            server inst2 192.168.114.56:81 check inter 2000 fall 3 backup

        listen ssl-relay 0.0.0.0:8443
            option ssl-hello-chk
            balance source
            server inst1 192.168.110.56:443 check inter 2000 fall 3
            server inst2 192.168.110.57:443 check inter 2000 fall 3
            server back1 192.168.120.58:443 backup

        listen appli5-backup 0.0.0.0:10005
            option httpchk *
            balance roundrobin
            cookie SERVERID insert indirect nocache
            server inst1 192.168.114.56:80 cookie server01 check inter 2000 fall 3
            server inst2 192.168.114.56:81 cookie server02 check inter 2000 fall 3
            server inst3 192.168.114.57:80 backup check inter 2000 fall 3
            capture cookie ASPSESSION len 32
            srvtimeout 20000
            option httpclose          # disable keep-alive
            option checkcache         # block response if set-cookie & cacheable
            rspidel ^Set-cookie:\ IP= # do not let this cookie tell our internal IP address
            #errorloc 502 http://192.168.114.58/error502.html
            #errorfile 503 /etc/haproxy/errors/503.http
            errorfile 400 /etc/haproxy/errors/400.http
            errorfile 403 /etc/haproxy/errors/403.http
            errorfile 408 /etc/haproxy/errors/408.http
            errorfile 500 /etc/haproxy/errors/500.http
            errorfile 502 /etc/haproxy/errors/502.http
            errorfile 503 /etc/haproxy/errors/503.http
            errorfile 504 /etc/haproxy/errors/504.http

    What is wrong with my approach?
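
    For what it's worth, a hedged way to narrow this down (generic debugging, not a confirmed diagnosis): "cannot bind socket" usually means the listen ports are still held, for example because the PID file is stale and -sf therefore signals nothing. Checking who owns the ports before restarting is cheap:

        # who is holding the frontend ports right now?
        netstat -lntp | egrep ':10001|:10002|:10003|:10004|:10005|:8443'
        # does the pid file actually match a running haproxy process?
        cat /var/run/haproxy.pid
        ps -p $(cat /var/run/haproxy.pid)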

  • Setting up a transparent SSL proxy

    - by badunk
    I've got a Linux box set up with 2 network cards to inspect traffic going through port 80. One card is used to go out to the internet, the other one is hooked up to a network switch. The point is to be able to inspect all HTTP and HTTPS traffic on devices hooked up to that switch for debugging purposes. I've written the following rules for iptables:

        nat
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
        -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On 192.168.2.1:1337, I've got a transparent HTTP proxy using Charles (http://www.charlesproxy.com/) for recording. Everything's fine for port 80, but when I add similar rules for port 443 (SSL) pointing to port 1337, I get an error about an invalid message through Charles. I've used SSL proxying on the same computer before with Charles (http://www.charlesproxy.com/documentation/proxying/ssl-proxying/), but have been unsuccessful doing it transparently for some reason. Some resources I've googled say it's not possible - I'm willing to accept that as an answer if someone can explain why. As a note, I have full access to the described setup, including all the clients hooked up to the subnet - so I can accept self-signed certs by Charles. The solution doesn't have to be Charles-specific, since in theory any transparent proxy will do. Thanks!

    Edit: After playing with it a little, I was able to get it working for a specific host. When I modify my iptables to the following (and open 1338 in Charles for reverse proxy):

        nat
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.2.1:1338
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 1338
        -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    I am able to get a response, but with no destination host. In the reverse proxy, if I just specify that everything from 1338 goes to a specific host that I wanted to hit, it performs the handshake properly and I can turn on SSL proxying to inspect the communication. The setup is less than ideal, because I don't want to assume everything from 1338 goes to that host - any idea why the destination host is being stripped? Thanks again
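
    A hedged explanation sketch (the mechanism is standard netfilter behavior; the command assumes the conntrack-tools package is installed): REDIRECT/DNAT rewrite the packet's destination before the proxy ever sees it, and with HTTPS there is no Host header readable until after the TLS handshake, so unless the proxy asks the kernel for the pre-NAT address (the SO_ORIGINAL_DST socket option), the destination is simply gone. The original destinations being lost can be observed in the connection-tracking table:

        # the "orig" tuple shows where the client was really going;
        # the "reply" tuple shows the proxy address it was rewritten to
        conntrack -L -p tcp | grep 1338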

  • How can I use VirtualDocumentRoot to serve the www subdomain with SSL enabled?

    - by mdgreenwald
    I am able to serve http://www.domain.com and http://domain.com. https://domain.com works fine too. But https://www.domain.com doesn't work, for some reason. I even created a www.domain.com in my sites-available folder and also enabled it. I reloaded the configuration and yet it still doesn't work. I have a wildcard certificate, so that is NOT the problem.

        <IfModule mod_ssl.c>
            <VirtualHost *:443>
                ServerAdmin [email protected]
                ServerName *.domain.com:443
                ServerAlias www.domain.com
                VirtualDocumentRoot /var/www/%0

    Thanks for any help.
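
    A hedged sketch of one likely fix (this rewrites the vhost, so treat it as an assumption): ServerName isn't supposed to contain a wildcard - wildcards belong in ServerAlias - so the name-based match for www may never fire. Something like:

        <IfModule mod_ssl.c>
            <VirtualHost *:443>
                ServerAdmin [email protected]
                ServerName domain.com
                ServerAlias *.domain.com
                VirtualDocumentRoot /var/www/%0
                SSLEngine on
                SSLCertificateFile /etc/ssl/certs/wildcard.domain.com.crt     # paths are placeholders
                SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
            </VirtualHost>
        </IfModule>

    together with a NameVirtualHost *:443 line if name-based vhosts on 443 aren't already enabled.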

  • Unknown protocol when trying to connect to remote host with stunnel

    - by RaYell
    I'm trying to set up a stunnel for WebDAV on Windows. I want to connect port 80 on my local interface to port 443 on another machine in my network. I can ping the remote machine. However, when I use the tunnel, I'm getting this error all the time:

        SSL state (accept): before/accept initialization
        SSL_accept: 140760FC: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

    There is nothing in the logs on the other machine, and here's my stunnel connection config:

        [https]
        accept = 127.0.0.2:80
        connect = 10.0.0.60:443
        verify = 0

    I've set it up to accept all certificates, so this shouldn't be a problem with the self-signed certificate the remote host uses. Does anyone know what might be the reason this connection cannot be established?
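
    A hedged sketch of the likely fix (inferred from the error, not confirmed): stunnel runs each service in server mode by default, i.e. it expects to receive TLS on the accept side - and "SSL23_GET_CLIENT_HELLO: unknown protocol" is what that looks like when a plain-HTTP client connects. For plain-in, TLS-out, the service needs client mode:

        [https]
        client = yes
        accept = 127.0.0.2:80
        connect = 10.0.0.60:443
        verify = 0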

  • old ssl certificate didn't go away on apache2

    - by user1212143
    I have replaced the old SSL certificate with a new one and restarted Apache several times, but the old certificate still shows in the web browser, and also when I run:

        openssl s_client -connect 127.0.0.1:443 -showcerts

    I have also deleted all the old certificate files, so I'm not sure where Apache is still reading the old certificate from, and why it is not reading the new one. Here is my ssl.conf:

        Listen 0.0.0.0:443
        SSLEngine on
        SSLOptions +StrictRequire
        <Directory />
            SSLRequireSSL
        </Directory>
        SSLProtocol -all +TLSv1 +SSLv3
        SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
        SSLMutex file:/usr/apache2/logs/ssl_mutex
        SSLRandomSeed startup file:/dev/urandom 1024
        SSLRandomSeed connect file:/dev/urandom 1024
        SSLSessionCache shm:/usr/apache2/logs/ssl_cache_shm
        SSLSessionCacheTimeout 600
        SSLPassPhraseDialog builtin
        SSLCertificateFile /usr/apache2/conf/ssl.crt/server.crt
        SSLCertificateKeyFile /usr/apache2/conf/ssl.key/server.key
        SSLVerifyClient none
        SSLProxyEngine off
        <IfModule mime.c>
            AddType application/x-x509-ca-cert .crt
            AddType application/x-pkcs7-crl .crl
        </IfModule>
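
    A few hedged checks that usually settle where the certificate is coming from (generic, not specific to this box):

        # does the cert on disk match what the server is sending?
        openssl x509 -in /usr/apache2/conf/ssl.crt/server.crt -noout -subject -dates -fingerprint
        # is another config file loading a different SSLCertificateFile?
        grep -r SSLCertificateFile /usr/apache2/conf/
        # is the running apache actually the one that was restarted?
        ps aux | grep httpd

    If the fingerprints differ, Apache is reading a different file than the one replaced - or a different Apache instance is answering on 443.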

  • SSL URL gives a 404

    - by terrid25
    I have recently created an SSL cert on my server - a *.key and a *.csr file. I then created the *.crt and the *.ca-bundle with Comodo. I have 2 current vhosts:

    vhost for http://www.example.com:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "/home/example/public_html/example.com/httpdocs"
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>

    vhost for https://www.example.com:

        NameVirtualHost *:443
        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/example_com.crt
            SSLCertificateKeyFile /etc/ssl/certs/server.key
            <Directory /home/example/public_html/example.com/httpdocs>
                AllowOverride All
            </Directory>
            DocumentRoot /home/example/public_html/example.com/httpdocs
            ServerName example.com
        </VirtualHost>

    The problem is, when I go to https://www.example.com I get a 404. I'm not sure if the vhost(s) is correct, or why I get a 404. Has anyone ever seen this before? I have enabled mod_ssl and restarted Apache. Many thanks
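
    For diagnosis, a hedged sketch (standard checks, nothing site-specific): a 404 over a working TLS connection usually means the request landed in a different vhost with a different DocumentRoot, so it's worth asking Apache which vhost owns *:443:

        apachectl -S                      # lists vhosts per port and which one is the default
        curl -vk https://www.example.com/ # -k skips cert validation; watch which root serves it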

  • One domain, dedicated SSL IP on whm

    - by Vanja D.
    It's long, but please read carefully. I am trying to install an SSL certificate on my dedicated server with WHM/cPanel. I have a dedicated IP to use with the SSL certificate. My main domain is example.com (NOT www.example.com), and I have an account and website already running on it.

    1. I bought the certificate for the main domain (example.com, without www).
    2. I installed the certificate (successfully). I used the example.com domain, the dedicated IP and the same cPanel user which owns example.com (non-SSL).
    3. I double-checked ConfigServer for port 443 being open.

    RESULT: https://example.com won't open; an SSL check tool returns an "SSL is not configured on this port (443)" error.

    I have three questions:

    1. Where did I go wrong; what did I miss?
    2. Is it possible to have one domain on two IPs (one for HTTP, one for HTTPS)?
    3. Is it possible to have an SSL host with the same user as the regular one?
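
    A hedged first check (generic; the IP below is a placeholder for the dedicated IP): see whether anything answers TLS on that address at all, which separates a binding problem from a certificate problem:

        openssl s_client -connect 203.0.113.10:443

    A connection refused or timeout points at Apache not listening on the dedicated IP; a handshake with the wrong certificate points at vhost/IP assignment inside WHM.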

  • pound: multiple domains

    - by niklassaers
    Hi guys, I've been using pound to run mydomain.dk. Now I've bought some other domains and SSL certificates for mydomain.no, mydomain.se and mydomain.eu. My old config looked roughly like this:

        ListenHTTPS
            Address 81.19.246.120
            Port 443
            Cert "/usr/local/etc/pound.keys/mydomain.dk.pem"
            Service
                BackEnd
                    Address 10.0.10.10
                    Port 8080
                End
            End
        End

    At places like here I've seen that I can use HeadRequire in the Service part, but I want the Host header to go together with the Cert, ideally something like:

        ListenHTTPS
            Address 81.19.246.120
            Port 443
            HostAndCert "mydomain.dk" "/usr/local/etc/pound.keys/mydomain.dk.pem"
            HostAndCert "mydomain.se" "/usr/local/etc/pound.keys/mydomain.se.pem"
            HostAndCert "mydomain.no" "/usr/local/etc/pound.keys/mydomain.no.pem"
            HostAndCert "mydomain.eu" "/usr/local/etc/pound.keys/mydomain.eu.pem"
            Service
                BackEnd
                    Address 10.0.10.10
                    Port 8080
                End
            End
        End

    Any suggestions or clues to how I can accomplish this? Cheers, Nik
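
    (HostAndCert above is the asker's wished-for syntax, not a real directive.) A hedged sketch of how this is commonly done instead, assuming a Pound version with SNI support (2.6 or later): list several Cert directives and Pound picks the certificate whose CN/subjectAltName matches the hostname the client sends via SNI, falling back to the first one:

        ListenHTTPS
            Address 81.19.246.120
            Port 443
            Cert "/usr/local/etc/pound.keys/mydomain.dk.pem"
            Cert "/usr/local/etc/pound.keys/mydomain.se.pem"
            Cert "/usr/local/etc/pound.keys/mydomain.no.pem"
            Cert "/usr/local/etc/pound.keys/mydomain.eu.pem"
            Service
                BackEnd
                    Address 10.0.10.10
                    Port 8080
                End
            End
        End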

  • ssl between balancer members?

    - by jemminger
    I have apache running on one machine as a load balancer:

        <VirtualHost *:443>
            ServerName ssl.example.com
            DocumentRoot /home/example/public
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/example.crt
            SSLCertificateKeyFile /etc/pki/tls/private/example.key

            <Proxy balancer://myappcluster>
                BalancerMember http://app1.example.com:12345 route=app1
                BalancerMember http://app2.example.com:12345 route=app2
            </Proxy>

            ProxyPass / balancer://myappcluster/ stickysession=_myapp_session
            ProxyPassReverse / balancer://myappcluster/
        </VirtualHost>

    Note that the balancer takes requests on SSL port 443, but then communicates with the balancer members on a non-SSL port. Is it possible to have the forwarding to the balancer members be under SSL too? If so, is this the best/recommended way? If so, do I have to have another SSL cert for each balancer member? Does the SSLProxyEngine directive have anything to do with this?
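
    A hedged sketch (standard mod_proxy behavior, but unverified against this cluster): SSLProxyEngine is exactly what lets Apache originate TLS to the backends; the members then need https:// URLs and their own certificates:

        SSLProxyEngine On
        <Proxy balancer://myappcluster>
            BalancerMember https://app1.example.com:12345 route=app1
            BalancerMember https://app2.example.com:12345 route=app2
        </Proxy>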

  • How does virtual machine port opening work

    - by Xianlin
    I have a question regarding VM ports. Say I have a virtual machine and a host machine, and the open ports on the host are 80, 22 and 443 only. If I open ports 80, 22 and 443 on the VM, it should work. But if I open port 21 on the VM, will it work? If it works, does it mean that port 21 on the host is open as well? My understanding is that the network traffic goes from the VM's virtual network adapter to the host's physical network adapter, so the ports on these two network adapters should match. Am I correct to say this?

  • nginx - Redirect specific page paths to https while keeping everything else on http (in a single server call)?

    - by Kris Anderson
    From what I've gathered so far, it's clear that running if statements in nginx should be avoided at all costs. Most of the examples I've found so far regarding specific page redirects involve multiple server blocks being used. But isn't that a bit wasteful? I'm not sure, but I would think multiple server blocks to accomplish this would be somewhat slower than a single one when under heavy load. My current server block is this:

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;
            # other code
        }

    What I want to do is redirect certain HTTP requests to HTTPS requests. For example, I want /login/ and /my-account/ to always be forced to use SSL. If you're on /help/ though, I want that served over the default HTTP. Is there a way to accomplish this within a single server block? Or is there no downside to using 2 server blocks to get this working? nginx seems to be under pretty active development, and a lot of the older guides I followed were from times when you couldn't listen for requests on port 80 and 443 within the same server block. But now that nginx has been updated to support that (I'm running 1.2.4), I'm wondering if there's a "best practice" way of handling this today. Any help would be greatly appreciated.

    EDIT: I did find this guide: http://redant.com.au/blog/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/ and I updated my code as follows:

        map $uri $my_preferred_proto {
            default "http";
            ~^/#/user/login "https";
        }

        server {
            listen 10.0.0.60:80;  ## listen for ipv4; this line is default and implied
            listen 10.0.0.60:443 default ssl;

            if ($my_preferred_proto = "none") {
                set $my_preferred_proto $scheme;
            }
            if ($my_preferred_proto != $scheme) {
                return 301 $my_preferred_proto://mysite.com$request_uri;
            }

    It's not working though. When I change the default to https, everything is redirected to SSL, so it does somewhat work. But the redirect of /#/user/login is not redirecting to HTTPS. Any ideas? Also, is this a good way to go about this?
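
    One hedged observation with a sketch (the paths below are the ones named above; the diagnosis is an inference): everything after # in a URL is a fragment, which browsers never send to the server, so a map pattern like ~^/#/user/login can never match $uri - and in nginx config a literal # starts a comment anyway, so the line wouldn't even parse as intended. Matching the real request paths should behave as described:

        map $uri $my_preferred_proto {
            default        "http";
            ~^/login/      "https";
            ~^/my-account/ "https";
        }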

  • Does Outlook continue to auto-discover account settings for already configured accounts? Can it be prevented?

    - by Oliver Salzburg
    fail2ban just locked me out of our website because something from my desktop was hammering port 443 on the server (a port which is not in use there). I saw my IP also requesting "GET /autodiscover/autodiscover.xml HTTP/1.1", so I assume that's what's going on on port 443 as well. But I only have 1 email account configured in Outlook, and it's working just fine. The account is for the address [email protected], and said server will answer for example.com, but that server is not our MX and it is also not configured as an Exchange server in my mail account. So why is Outlook still trying to retrieve those auto-configuration settings?

  • Tomato QoS: Why is some traffic unclassified when there are classifications for it?

    - by Armitage
    Ok, I am trying to tweak my router to give priority to some traffic. My classifications seem to cover just about everything, but I still see ~60 to ~80% of the traffic as unclassified:

        TCP 192.168.1.100 64137 192.168.1.1 80  Unclassified
        TCP 192.168.1.100 64175 192.168.1.1 80  Unclassified
        TCP 192.168.1.100 64144 192.168.1.1 443 Unclassified

    I assume that the 64### ports are just what my WAP uses to send packets inside my home network. But my classifications seem to cover any traffic for destination ports 80 and 443 (partial list):

        TCP     Dst Port: 80,443       High    WWW
        TCP/UDP Dst Port: 1024-65535   Lowest  Bulk Traffic

    Why do I have so much unclassified traffic if I have a classification that should cover it?

  • In ufw is there any way to disable logging for a particular rule?

    - by thomasrutter
    I am using UFW with a default logging policy of "low". I would like to keep this logging on for the default deny action, but disable it for a particular IP address only. So I'd like to create one particular new rule that doesn't have logging. Is there a way to achieve this? I have a rather uncomplicated ufw setup so far, like this:

        Status: active
        Logging: on (low)
        Default: deny (incoming), allow (outgoing)
        New profiles: skip

        To         Action  From
        --         ------  ----
        22/tcp     LIMIT   Anywhere
        80/tcp     ALLOW   Anywhere
        443/tcp    ALLOW   Anywhere
        22/tcp     ALLOW   Anywhere (v6)
        80/tcp     ALLOW   Anywhere (v6)
        443/tcp    ALLOW   Anywhere (v6)
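
    A hedged workaround sketch, since ufw's CLI has per-rule "log" keywords but no per-rule "don't log": handle the noisy address in /etc/ufw/before.rules, which is evaluated before ufw's logging chains, so matching packets are discarded without ever reaching a LOG rule. The IP is a placeholder:

        # /etc/ufw/before.rules - inside the *filter section, before the COMMIT line
        -A ufw-before-input -s 203.0.113.7 -j DROP

    followed by a "ufw reload" to apply it.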

  • Lighttpd as a proxy to a https host

    - by Homer J. Simpson
    Hi, I am trying to set up lighttpd as a proxy from one server to another (which is running Apache with SSL), and I'm having trouble with the HTTPS part. I want to be able to capture HTTPS requests and let the Apache server handle them. I'm trying this:

        $SERVER["socket"] == ":443" {
            $HTTP["host"] == "www.mydomain.com" {
                proxy.server = ( "" => ( (
                    "host" => "123.123.123.123", # the Apache
                    "port" => 443 ) ) )
            }
        }

    Normal port 80 requests are working fine. What am I doing wrong?

    Edit: Additionally, error.log doesn't show anything. Requests to https://www.mydomain.com never finish.
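
    A hedged sketch of one workable shape (it changes the design, so treat it as an assumption): lighttpd's mod_proxy speaks plain HTTP to its backend, and without ssl.engine on the socket it also can't decrypt the request far enough to evaluate $HTTP["host"]. Terminating SSL at lighttpd and proxying in the clear avoids both problems; the pemfile path is a placeholder:

        $SERVER["socket"] == ":443" {
            ssl.engine  = "enable"
            ssl.pemfile = "/etc/lighttpd/ssl/www.mydomain.com.pem" # cert + key concatenated
            $HTTP["host"] == "www.mydomain.com" {
                proxy.server = ( "" => ( (
                    "host" => "123.123.123.123", # the Apache
                    "port" => 80 ) ) )
            }
        }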

  • Multiple SSL Certificates Running on Mac OS X 10.6

    - by frodosghost.mp
    I have been running into walls with this for a while, so I posted at Stack Overflow, and I was pointed over here... I am attempting to set up multiple IP addresses on Snow Leopard so that I can develop with SSL certificates. I am running XAMPP - I don't know if that is the problem, but I guess I would run into the same problems, considering the built-in Apache is turned off.

    So first up I looked into starting up the IPs on startup. I got up and running with a new StartupItem that runs correctly, because I can ping both addresses:

        ping 127.0.0.2
        ping 127.0.0.1

    And both of them work. So now I have IP addresses, which as you may know are not standard on OS X. I edited the /etc/hosts file to include the new sites too:

        127.0.0.1 site1.local
        127.0.0.2 site2.local

    I had already changed httpd.conf to use httpd-vhosts.conf - because I had a few sites running on the one IP address. I have edited the vhosts file so a site looks like this:

        <VirtualHost 127.0.0.1:80>
            DocumentRoot "/Users/jim/Documents/Projects/site1/web"
            ServerName site1.local
            <Directory "/Users/jim/Documents/Projects/site1">
                Order deny,allow
                Deny from All
                Allow from 127.0.0.1
                AllowOverride All
            </Directory>
        </VirtualHost>

        <VirtualHost 127.0.0.1:443>
            DocumentRoot "/Users/jim/Documents/Projects/site1/web"
            ServerName site1.local
            SSLEngine On
            SSLCertificateFile "/Applications/XAMPP/etc/ssl-certs/myssl.crt"
            SSLCertificateKeyFile "/Applications/XAMPP/etc/ssl-certs/myssl.key"
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
            <Directory "/Users/jim/Documents/Projects/site1">
                Order deny,allow
                Deny from All
                Allow from 127.0.0.1
                AllowOverride All
            </Directory>
        </VirtualHost>

    In the above code, you can change the 1's to 2's and it is the setup for the second site. They do use the same certificate, which is why they are on different IP addresses. I also included the NameVirtualHost information at the top of the file:

        NameVirtualHost 127.0.0.1:80
        NameVirtualHost 127.0.0.2:80
        NameVirtualHost 127.0.0.1:443
        NameVirtualHost 127.0.0.2:443

    I can ping site1.local and site2.local. I can use telnet (telnet site2.local 80) to get into both sites. But in Safari I can only get to the first site, site1.local - navigating to site2.local gives me either the localhost main page (which is included in the vhosts) or an "Access forbidden!" error. I am unsure what to do; any suggestions would be awesome.

  • Problems setting up home web server

    - by putmatrix
    Has anyone been able to get a server working with the router SMCWBR14T-G? Although I have been able to get Apache set up correctly, and my website works on the internal IP 192.168.2.101, I've been running into a dead end when trying to get it to show up on my external IP. In my router there is no option for port forwarding, but there are options for a 'virtual server'. Following the manual, I have it set up like this: http://imgur.com/zrcV7.png I also disabled the firewall. I configured Apache to listen on ports 80, 81, and 443, none of which solved the problem. However, the addresses 192.168.2.101:443 and :81 load fine. The problem is that I still cannot load the web site from my external IP, either from my computer or from outside.

  • is it really necessary to run Apache as a front-end to Glassfish/JBoss/Tomcat?

    - by Caffeine Coma
    I'm primarily a Java developer, and I come to you with a question that straddles the divide between developers and sysadmins. Years ago, when it was a novel thing to run Tomcat as an app server, it was customary to front it with Apache. As I understand it, this was done because:

    1. Java was considered "slow", and it was helpful to have Apache serve static content directly.
    2. Tomcat couldn't listen on ports 80/443 unless run as root, which was dangerous.

    Java is no longer considered slow, and I doubt adding Apache to the mix will actually help speed things up. As for the ports issue, there are probably simpler ways to connect app servers to ports 80/443 these days (see the sketch below). So my question is: is there really any benefit to fronting Java webapps with Apache these days? If so, is Apache still the way to go? Should I look at Nginx? Instead of Tomcat I'm using Glassfish, if that matters.
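
    As a concrete example of those "simpler ways" (a hedged sketch of one common approach, not the only one): keep the app server on an unprivileged port and let the kernel redirect the privileged ones:

        # run as root once at boot; Tomcat/Glassfish stays unprivileged on 8080/8443
        iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443

    Alternatives in the same spirit include authbind, or granting the JVM binary the CAP_NET_BIND_SERVICE capability via setcap.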

  • Two Tomcat SSL Providers & One FreeBSD

    - by mosg
    Hello everyone. Question: on FreeBSD 8 I need to have two different HTTPS ports open (443 and 444, for example). In other words, I need two providers working simultaneously:

    1. An ordinary SSL certificate signed by Thawte, on port 443
    2. A special Russian security provider (DIGTProvider, based on CryptoPro CSP software) on port 444

    I should also mention that the second provider is the main one. Here are some of the DIGTProvider setup steps:

    Add to ${JRE_HOME}/lib/security/java.security these lines:

        security.provider.N=com.digt.trusted.jce.provider.DIGTProvider
        ssl.SocketFactory.provider=com.digt.trusted.jsse.provider.DigtSocketFactory

    Uncomment and edit the HTTPS section in conf/server.xml:

        sslProtocol="GostTLS" (added)

    Edit bin/catalina.sh and add:

        export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/opt/cprocsp/lib/ia32"
        export JAVA_OPTS="${JAVA_OPTS} -Dcom.digt.trusted.jsse.server.certFile=/home//server-gost.cer -Dcom.digt.trusted.jsse.server.keyPasswd=11111111"

    As I understand it, if I just define two SSL connectors in Tomcat's server.xml configuration file, Tomcat won't start, because in the JRE you can use only one JSSE provider. Thanks for help.

  • Set up linux box for secure local hosting a-z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details:

    - CentOS 5.5 x86_64
    - httpd: Apache/2.2.3
    - mysql: 5.0.77 (to be upgraded)
    - php: 5.1 (to be upgraded)

    The requirements:

    - SECURITY!!
    - Secure file transfer
    - Secure client access (SSL certs and CA)
    - Secure data storage
    - Virtualhosts/multiple subdomains
    - Local email would be nice, but not critical

    The steps:

    Download the latest CentOS DVD iso (torrent worked great for me).

    Install CentOS. While going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

    Basic config: set up users, networking/IP address etc. Yum update/upgrade.

    Upgrade PHP/MySQL: to upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add the IUS repository to our package manager:

        cd /tmp
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
        yum list | grep -w \.ius\.   # list all the packages in the IUS repository; use this to find the PHP/MySQL versions and libraries you want to install

    Remove the old version of PHP and install the newer version from IUS:

        rpm -qa | grep php   # list all of the installed php packages we want to remove
        yum shell            # open an interactive yum shell
        remove php-common php-mysql php-cli                # remove installed PHP components
        install php53 php53-mysql php53-cli php53-common   # add the packages you want
        transaction solve    # important!! checks for dependencies
        transaction run      # important!! does the actual installation of packages
        [control+d]          # exit yum shell
        php -v
        PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

    Upgrade MySQL from the IUS repository:

        /etc/init.d/mysqld stop
        rpm -qa | grep mysql   # see installed mysql packages
        yum shell
        remove mysql mysql-server   # remove installed MySQL components
        install mysql51 mysql51-server mysql51-devel
        transaction solve      # important!! checks for dependencies
        transaction run        # important!! does the actual installation of packages
        [control+d]            # exit yum shell
        service mysqld start
        mysql -v
        Server version: 5.1.42-ius Distributed by The IUS Community Project

    Upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide

    Install rssh (restricted shell) to provide scp and sftp access without allowing ssh login:

        cd /tmp
        wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        useradd -m -d /home/dev -s /usr/bin/rssh dev
        passwd dev

    Edit /etc/rssh.conf to grant SFTP access to rssh users:

        vi /etc/rssh.conf

    Uncomment or add:

        allowscp
        allowsftp

    This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html

    Set up virtual interfaces:

        ifconfig eth1:1 192.168.1.3 up   # start up the virtual interface
        cd /etc/sysconfig/network-scripts/
        cp ifcfg-eth1 ifcfg-eth1:1       # copy the default script and match its name to our virtual interface
        vi ifcfg-eth1:1                  # modify the eth1:1 script

    Modify the ifcfg-eth1:1 script so it looks like this:

        DEVICE=eth1:1
        IPADDR=192.168.1.3
        NETMASK=255.255.255.0
        NETWORK=192.168.1.0
        ONBOOT=yes
        NAME=eth1:1

    Add more virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots or the network starts/restarts.

        service network restart

        Shutting down interface eth0:     [ OK ]
        Shutting down interface eth1:     [ OK ]
        Shutting down loopback interface: [ OK ]
        Bringing up loopback interface:   [ OK ]
        Bringing up interface eth0:       [ OK ]
        Bringing up interface eth1:       [ OK ]

        ping 192.168.1.3

        64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms

    Virtualhosts: in the rssh section above I added a user to use for SFTP. In this user's home directory, I created a folder called 'https'. This is where the documents for this site will live, so I need to add a virtualhost that will point to it. I will use the above virtual interface for this site (herein called dev.site.local).

        vi /etc/httpd/conf/httpd.conf

    Add the following to the end of httpd.conf:

        <VirtualHost 192.168.1.3:80>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    I put a dummy index.html file in the https directory just to check everything out. I tried browsing to it, and was met with permission denied errors. The logs only gave an obscure reference to what was going on:

        [Mon May 17 14:57:11 2010] [error] [client 192.168.1.100] (13)Permission denied: access to /index.html denied

    I tried chmod 777 et al., but to no avail. Turns out, I needed to chmod +x the https directory and its parent directories:

        chmod +x /home
        chmod +x /home/dev
        chmod +x /home/dev/https

    This solved that problem.

    DNS: I'm handling DNS via our local Windows Server 2003 box. However, the CentOS documentation for BIND can be found here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-bind.html

    SSL: to get SSL working, I changed the following in httpd.conf:

        NameVirtualHost 192.168.1.3:443   # make sure this line is in httpd.conf
        <VirtualHost 192.168.1.3:443>     # change port to 443
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Unfortunately, I kept getting (Error code: ssl_error_rx_record_too_long) errors when trying to access a page with SSL. As JamesHannah gracefully pointed out below, I had not set up the locations of the certs in httpd.conf, and thus the page itself was getting thrown at the browser as the cert, making the browser balk.

    So first, I needed to set up a CA and make certificate files. I found a great (if old) walkthrough on the process here: http://www.debian-administration.org/articles/284. Here are the relevant steps I took from that article:

        mkdir /home/CA
        cd /home/CA/
        mkdir newcerts private
        echo '01' > serial
        touch index.txt   # this and the above command are for the database that will keep track of certs

    Create an openssl.cnf file in the /home/CA/ dir and edit it per the walkthrough linked above. (For reference, my finished openssl.cnf file looked like this: http://pastebin.com/raw.php?i=hnZDij4T)

        openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 3650 -config ./openssl.cnf   # creates the cacert.pem, which gets distributed and imported to the browser(s)

    Modified openssl.cnf again per the walkthrough instructions.

        openssl req -new -nodes -out dev.req.pem -config ./openssl.cnf   # generates the certificate request, and key.pem, which I renamed dev.key.pem

    Modified openssl.cnf again per the walkthrough instructions.

        openssl ca -out dev.cert.pem -config ./openssl.cnf -infiles dev.req.pem   # create and sign the certificate
        cp dev.cert.pem /home/dev/certs/cert.pem
        cp dev.key.pem /home/certs/key.pem

    I updated httpd.conf to reflect the certs and turn SSLEngine on:

        NameVirtualHost 192.168.1.3:443
        <VirtualHost 192.168.1.3:443>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            SSLEngine on
            SSLCertificateFile /home/dev/certs/cert.pem
            SSLCertificateKeyFile /home/dev/certs/key.pem
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Put the CA cert.pem in a web-accessible place, and downloaded/imported it into my browser. Now I can visit https://dev.site.local with no errors or warnings.

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure SSL email would be appreciated.
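
    To close the SSL section with a hedged verification step (a generic check, not from the original walkthrough): confirming from another machine on the LAN that the served certificate chains to the new CA catches most vhost/cert mix-ups:

        openssl s_client -connect 192.168.1.3:443 -CAfile cacert.pem
        # look for "Verify return code: 0 (ok)" at the end of the output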
