Search Results

Search found 6253 results on 251 pages for 'apache2 ssl'.


  • How do I get nginx to issue 301 requests to HTTPS location, when SSL handled by a load-balancer?

    - by growse
    I've noticed that there's functionality enabled in nginx by default, whereby a URL request without a trailing slash for a directory which exists in the filesystem automatically has a slash added through a 301 redirect. E.g. if the directory css exists within my root, then requesting http://example.com/css will result in a 301 to http://example.com/css/. However, I have another site where the SSL is offloaded by a load balancer. In this case, when I request https://example.com/css, nginx issues a 301 redirect to http://example.com/css/, despite the fact that the HTTP_X_FORWARDED_PROTO header is set to https by the load balancer. Is this an nginx bug? Or a config setting I've missed somewhere?
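
    A hedged sketch of one possible workaround, assuming nginx's map module is available and that the load balancer reliably sets X-Forwarded-Proto: derive the client-facing scheme from the header and issue the trailing-slash redirect explicitly, so the automatic redirect (which only sees $scheme, i.e. http behind the offloader) never fires. The map block belongs in the http context; names and paths are illustrative.

        # http context: pick the scheme the client actually used
        map $http_x_forwarded_proto $client_scheme {
            default $scheme;
            https   https;
        }

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            # issue the directory slash redirect ourselves, with the forwarded scheme
            if (-d $document_root$uri) {
                rewrite ^(.*[^/])$ $client_scheme://$host$1/ permanent;
            }
        }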

    Read the article

  • Configure custom SSL certificate for RDP on Windows Server 2012 in Remote Administration mode?

    - by Ryan Bolger
    So the release of Windows Server 2012 has removed a lot of the old Remote Desktop related configuration utilities. In particular, there is no more Remote Desktop Session Host Configuration utility that gave you access to the RDP-Tcp properties dialog that let you configure a custom certificate for the RDSH to use. In its place is a nice new consolidated GUI that is part of the overall "edit deployment properties" workflow in the new Server Manager. The catch is that you only get access to that workflow if you have the Remote Desktop Services role installed (as far as I can tell). This seems like a bit of an oversight on Microsoft's part. How can we configure a custom SSL certificate for RDP on Windows Server 2012 when it's running in the default Remote Administration mode without needlessly installing the Remote Desktop Services role?
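
    One commonly cited approach that avoids installing the role (hedged: it relies on the Win32_TSGeneralSetting WMI class being present in Remote Administration mode, and on the certificate already sitting in the computer's Personal store): bind the RDP-Tcp listener to the certificate by thumbprint from an elevated PowerShell prompt. The thumbprint below is a placeholder.

        # placeholder thumbprint: copy it from the certificate's details, without spaces
        $thumb = "1234567890ABCDEF1234567890ABCDEF12345678"

        # fetch the RDP-Tcp listener settings and bind the certificate to it
        $path = (Get-WmiObject -Class "Win32_TSGeneralSetting" `
                   -Namespace root\cimv2\TerminalServices `
                   -Filter "TerminalName='RDP-Tcp'").__path
        Set-WmiInstance -Path $path -Arguments @{SSLCertificateSHA1Hash = $thumb}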

    Read the article

  • How to do IIS SSL server redirects correctly? Is meta refresh needed?

    - by Jesse
    Hi all! I think our backend programmer/server admin is handling our SSL redirects in a pretty wonky way - see it in action here: www.mchenry.edu/parentorientation. First off, see how it redirects to index2.asp? Is this necessary? Can't she easily redirect to the original index.asp but have it be https:// instead? Also, she is using a meta refresh on the original index.asp page to redirect to index2.asp as well, and she says this is a backup, in case the server configs change and the server can't handle the redirect - the webpage itself would then take over. Finally, she said she tried using the server redirect alone but that it kept looping on itself - what did she do wrong? Is this even possible? Is she giving us a snow job or what? I want a better understanding of what is happening here so I can talk to my boss about it, because this is driving me up the wall. Thanks for any info you can provide.
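
    For reference, a redirect loop usually means the HTTPS request is matched by the same redirect rule again. A hedged sketch of a loop-free HTTP-to-HTTPS rule for IIS 7+, assuming the URL Rewrite module is installed (rule name is illustrative): it only fires when HTTPS is off, so index.asp can stay the single landing page with no index2.asp or meta refresh needed.

        <rule name="Redirect to HTTPS" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
        </rule>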

    Read the article

  • AnyConnect SSL VPN split tunneling for a single website?

    - by Daniel Lucas
    We have a Cisco ASA 5510. We use split tunneling for AnyConnect SSL VPN clients. All internal addresses are tunneled. Everything else is routed through the client's own internet connection. We use a SaaS service that only responds to requests when they come from one of our own public IP addresses. Because of this, VPN users are unable to access it currently. Is there a way to specify that a specific website should be tunneled and all others should not? NOTE: Worst case we will use a web bookmark on the clientless portal to translate through our network, but I'd like to see if the above is possible first.
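
    A hedged sketch of the usual approach, assuming the SaaS endpoints resolve to a stable, known set of public IP addresses - split tunneling on the ASA is decided per network, not per hostname (all addresses below are placeholders):

        ! internal networks that should be tunneled
        access-list SPLIT-TUNNEL standard permit 10.0.0.0 255.0.0.0
        ! add the SaaS provider's public address(es) to the tunnel as well
        access-list SPLIT-TUNNEL standard permit 203.0.113.10 255.255.255.255

        group-policy ANYCONNECT-GP attributes
          split-tunnel-policy tunnelspecified
          split-tunnel-network-list value SPLIT-TUNNEL

    If the provider publishes many or changing addresses, this breaks down and the clientless-portal bookmark may be the more maintainable fallback.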

    Read the article

  • How to secure both root domain and wildcard subdomains with one SSL cert?

    - by Question Overflow
    I am trying to generate a self-signed SSL certificate to secure both example.com and *.example.com. Looking at the answers to this question and this one, there seems to be an equal number of people agreeing and disagreeing on whether this can be done. However, the website of a certification authority seems to suggest that it can. Currently, these are the changes added to my openssl configuration file:

        [req]
        req_extensions = v3_req

        [req_distinguished_name]
        commonName = example.com

        [v3_req]
        subjectAltName = @alt_names

        [alt_names]
        DNS.1 = example.com
        DNS.2 = *.example.com

    I tried the above configuration and generated a certificate. When navigating to https://example.com, it produces the usual warning that the cert is "self-signed". After acceptance, I navigate to https://abc.example.com and an additional warning is produced, saying that the certificate is only valid for example.com. The certificate details only list example.com in the certificate hierarchy, with no sign of any wildcard subdomain being present. I am not sure whether this is due to a misconfiguration, or whether the common name should have a wildcard, or whether this simply cannot be done.
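
    One likely culprit (hedged): openssl req -x509 does not copy request extensions into the certificate unless told to. A sketch, assuming the configuration above is saved as openssl.cnf, that applies the v3_req section at signing time and then checks whether both SANs actually made it into the issued certificate:

        $ openssl req -new -x509 -nodes -days 365 \
              -newkey rsa:2048 -keyout example.key -out example.crt \
              -config openssl.cnf -extensions v3_req

        # verify both names are present in the issued certificate
        $ openssl x509 -in example.crt -noout -text | grep -A1 "Subject Alternative Name"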

    Read the article

  • Slash after domain in URL missing for Rails site

    - by joshee
    After redirecting users in a Rails app, for some reason the slash after the domain is missing. Generated URLs are invalid and I'm forced to manually correct them. The problem only occurs on a subdomain. On a different primary domain (same server), everything works ok. For example, after logging out, the site is directing to https://www.sub.domain.comlogin/ rather than https://www.sub.domain.com/login. I suspect the issue has something to do with the vhost setup, but I'm not sure. Here are the broken and working vhosts.

    BROKEN SUBDOMAIN:

        <VirtualHost *:80>
            ServerName www.sub.domain.com
            ServerAlias sub.domain.com
            Redirect permanent / https://www.sub.domain.com
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.sub.domain.com
            ServerAlias sub.domain.com
            RailsEnv production

            # SSL Engine Switch
            SSLEngine on
            # SSL Cipher Suite:
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            # Server Certificate
            SSLCertificateFile /path/to/server.crt
            # Server Private Key
            SSLCertificateKeyFile /path/to/server.key

            # Set header to indentify https requests for Mongrel
            RequestHeader set X_FORWARDED_PROTO "https"

            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0

            DocumentRoot /home/usr/www/www.sub.domain.com/current/public/
            <Directory "/home/usr/www/www.sub.domain.com/current/public">
                AllowOverride all
                Allow from all
                Options -MultiViews
            </Directory>

    WORKING PRIMARY DOMAIN:

        <VirtualHost *:80>
            ServerName www.diffdomain.com
            ServerAlias diffdomain.com
            Redirect permanent / https://www.diffdomain.com
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.diffdomain.com
            ServerAlias diffdomain.com
            ServerAlias *.diffdomain.com
            RailsEnv production

            # SSL Engine Switch
            SSLEngine on
            # SSL Cipher Suite:
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            # Server Certificate
            SSLCertificateFile /path/to/server.crt
            # Server Private Key
            SSLCertificateKeyFile /path/to/server.key

            # Set header to indentify https requests for Mongrel
            RequestHeader set X_FORWARDED_PROTO "https"

            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0

            DocumentRoot /home/usr/www/www.diffdomain.com/current/public/
            <Directory "/home/usr/www/www.diffdomain.com/current/public">
                AllowOverride all
                Allow from all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    Please let me know if there's anything else I could provide that would help determine what's wrong here.

    UPDATE: tried adding a trailing slash to the redirect command, but still no luck.
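
    A quick way to see which layer drops the slash (a hedged sketch; the paths are illustrative) is to inspect the Location header Apache actually sends, bypassing the browser:

        $ curl -sI http://www.sub.domain.com/login | grep -i '^Location'
        # compare with the working domain
        $ curl -sI http://www.diffdomain.com/login | grep -i '^Location'

    If Apache's header is already missing the slash, the vhost Redirect is at fault; if it is correct there, the redirect generated inside the Rails app (or influenced by the Mongrel proxy headers) is the likelier suspect.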

    Read the article

  • Two SSL certificates required for two Apache servers using mod_proxy to serve HTTPS?

    - by Nick
    Our application originally used a single Apache server with mod_perl installed to serve up all HTTPS requests. Due to memory issues I've added a lighter Apache installation and used ProxyPass to hand off the Perl requests to the mod_perl enabled server. We currently have an SSL certificate installed on the mod_perl server but I'm struggling to understand whether we need a certificate for both servers or only the lightweight server which is receiving the original requests. Or can a certificate be used for more than one server on a single machine? Thanks in advance for any help/pointers.
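
    In the usual setup, only the server that terminates the client's TLS connection needs a certificate: if the proxied hop runs over plain HTTP on the loopback interface, the backend never performs an SSL handshake at all. A hedged sketch of the front-end vhost, with ports and paths as placeholders:

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /path/to/server.crt
            SSLCertificateKeyFile /path/to/server.key

            # hand Perl requests to the mod_perl Apache listening on loopback
            ProxyPass        /perl/ http://127.0.0.1:8080/perl/
            ProxyPassReverse /perl/ http://127.0.0.1:8080/perl/
        </VirtualHost>

    A second certificate would only come into play if the ProxyPass target itself were an https:// URL.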

    Read the article

  • Importing an existing X.509 certificate and private key into a Java keystore for the ActiveMQ SSL context

    - by Aleksandar Ivanisevic
    I have this in the activemq config:

        <sslContext>
          <sslContext keyStore="file:/home/alex/work/amq/broker.ks"
                      keyStorePassword="password"
                      trustStore="file:${activemq.base}/conf/broker.ts"
                      trustStorePassword="password"/>
        </sslContext>

    I have a pair of an x509 cert and a key file. How do I import those two to be used in the ssl and ssl+stomp connectors? All examples I could google always generate the key themselves, but I already have a key. I have tried

        keytool -import -keystore ./broker.ks -file mycert.crt

    but this only imports the certificate and not the key file, and results in

        2009-05-25 13:16:24,270 [localhost:61612] ERROR TransportConnector - Could not accept connection : No available certificate or key corresponds to the SSL cipher suites which are enabled.

    I have tried concatenating the cert and the key but got the same result. How do I import the key?
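
    keytool cannot import a bare private key, but it can import a whole keystore. A commonly used workaround (a hedged sketch; alias and passwords are placeholders, and it assumes OpenSSL is available and a JDK 6 or later keytool, which has -importkeystore): bundle the cert and key into a PKCS#12 file first, then convert that into the JKS broker keystore.

        # 1. wrap the existing cert + key in a PKCS#12 container
        openssl pkcs12 -export -in mycert.crt -inkey mykey.key \
            -out broker.p12 -name broker -password pass:password

        # 2. convert the PKCS#12 container into the JKS keystore ActiveMQ expects
        keytool -importkeystore -srckeystore broker.p12 -srcstoretype PKCS12 \
            -srcstorepass password -destkeystore broker.ks -deststorepass password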

    Read the article

  • How do I make calls to a WCF service with jquery ajax from an SSL-secured page?

    - by NovaJoe
    I have a WCF service returning JSON to jQuery ajax calls and presenting the results on an ASPX page. When the page is NOT under SSL, the ajax calls work perfectly. When the page IS under SSL, the calls fail. I understand that this behavior must be due to the Same Origin Policy (SOP). So, how do I set up my WCF service to accept calls from an SSL-secured page? Does the WCF service also need to be secured? If so, how do I do this? Thanks, Joe
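
    Browsers treat http:// and https:// as different origins, and a page served over SSL calling a plain-HTTP endpoint is also blocked as mixed content, so the usual fix is to expose the same service over HTTPS. A hedged sketch of the binding change, assuming webHttpBinding is already used for the JSON endpoint; the service, contract, and behavior names are illustrative and the site needs an HTTPS binding in IIS:

        <bindings>
          <webHttpBinding>
            <!-- same binding as before, but carried over SSL -->
            <binding name="secureWeb">
              <security mode="Transport" />
            </binding>
          </webHttpBinding>
        </bindings>
        <services>
          <service name="MyApp.JsonService">
            <endpoint address="" binding="webHttpBinding"
                      bindingConfiguration="secureWeb"
                      behaviorConfiguration="webBehavior"
                      contract="MyApp.IJsonService" />
          </service>
        </services>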

    Read the article

  • OpenSSL in C++ email client - server closes connection with TLSv1 Alert message

    - by mice
    My app connects to an IMAP email server. One client configured his server to reject SSLv2 certificates, and now my app fails to connect to the server. All other email clients connect to this server successfully. My app uses openssl. I debugged by creating a minimal openssl client and attempting to connect to the server. Below is the code which connects to the mail server (using Windows sockets, but the same problem occurs with unix sockets). The server sends its initial IMAP greeting message, but after the client sends its first command, the server closes the connection. In Wireshark, I see that after sending the command to the server, it returns TLSv1 error message 21 (Encrypted Alert) and the connection is gone. I'm looking for the proper setup of OpenSSL for this connection to succeed. Thanks

        #include <stdio.h>
        #include <memory.h>
        #include <errno.h>
        #include <sys/types.h>
        #include <winsock2.h>
        #include <openssl/crypto.h>
        #include <openssl/x509.h>
        #include <openssl/pem.h>
        #include <openssl/ssl.h>
        #include <openssl/err.h>

        #define CHK_NULL(x) if((x)==NULL) exit(1)
        #define CHK_ERR(err,s) if((err)==-1) { perror(s); exit(1); }
        #define CHK_SSL(err) if((err)==-1) { ERR_print_errors_fp(stderr); exit(2); }

        SSL *ssl;
        char buf[4096];

        void write(const char *s){
            int err = SSL_write(ssl, s, strlen(s));
            printf("> %s\n", s);
            CHK_SSL(err);
        }

        void read(){
            int n = SSL_read(ssl, buf, sizeof(buf) - 1);
            CHK_SSL(n);
            if(n==0){
                printf("Finished\n");
                exit(1);
            }
            buf[n] = 0;
            printf("%s\n", buf);
        }

        void main(){
            int err=0;
            SSLeay_add_ssl_algorithms();
            SSL_METHOD *meth = SSLv23_client_method();
            SSL_load_error_strings();
            SSL_CTX *ctx = SSL_CTX_new(meth);
            CHK_NULL(ctx);

            WSADATA data;
            WSAStartup(0x202, &data);
            int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            CHK_ERR(sd, "socket");

            struct sockaddr_in sa;
            memset(&sa, 0, sizeof(sa));
            sa.sin_family = AF_INET;
            sa.sin_addr.s_addr = inet_addr("195.137.27.14");
            sa.sin_port = htons(993);
            err = connect(sd,(struct sockaddr*) &sa, sizeof(sa));
            CHK_ERR(err, "connect");

            /* ----------------------------------------------- */
            /* Now we have TCP connection. Start SSL negotiation. */
            ssl = SSL_new(ctx);
            CHK_NULL(ssl);
            SSL_set_fd(ssl, sd);
            err = SSL_connect(ssl);
            CHK_SSL(err);

            // Following two steps are optional and not required for data exchange to be successful.
            /*
            printf("SSL connection using %s\n", SSL_get_cipher(ssl));
            X509 *server_cert = SSL_get_peer_certificate(ssl);
            CHK_NULL(server_cert);
            printf("Server certificate:\n");
            char *str = X509_NAME_oneline(X509_get_subject_name(server_cert),0,0);
            CHK_NULL(str);
            printf(" subject: %s\n", str);
            OPENSSL_free(str);
            str = X509_NAME_oneline(X509_get_issuer_name (server_cert),0,0);
            CHK_NULL(str);
            printf(" issuer: %s\n", str);
            OPENSSL_free(str);
            // We could do all sorts of certificate verification stuff here before deallocating the certificate.
            X509_free(server_cert);
            */

            printf("\n\n");
            read();                     // get initial IMAP greeting
            write("1 CAPABILITY\r\n");  // send 1st command
            read();                     // get reply to cmd; server closes connection here
            write("2 LOGIN a b\r\n");
            read();

            SSL_shutdown(ssl);
            closesocket(sd);
            SSL_free(ssl);
            SSL_CTX_free(ctx);
        }
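
    One thing worth ruling out first (a hedged guess, given the server was specifically hardened against SSLv2): SSLv23_client_method() starts the handshake with an SSLv2-compatible hello, which some hardened servers refuse. A minimal sketch of the change to the context setup in main(), assuming an OpenSSL version where SSL_OP_NO_SSLv2 is available:

        /* keep the version-flexible method, but never offer SSLv2 --
         * the client hello then goes out in SSLv3/TLS format only */
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
        CHK_NULL(ctx);
        SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2);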

    Read the article

  • How to use a self-signed SSL certificate when developing with Trigger.io?

    - by user610345
    Our backend is in rails, and for several reasons the development environment has to be run with rails using a self-signed SSL certificate. This works fine on the desktop after manually trusting the certificate. Using Trigger.io, we're developing a mobile application targeting iOS from the same backend. It would be ideal for us to be able to run the rails server with SSL (so we can compare the browser output) and still have the iOS simulator connect properly without complaining about invalid certs. Production is using a proper ssl-cert, but what's the best way to set up the simulator?

    Read the article

  • Using Nortel Netdirect on Windows 7 64bit

    - by Matt Lewis
    Does anyone know how to use Nortel Netdirect (Version 7.1.3.0) with Windows 7 64 bit (Home Premium)? There are several ways available to me for connecting, all of which work for me on a 32-bit XP machine:

    1. Nortel Contivity VPN client (v6_02.022). The installer appears to be 16-bit, so I can't even install it on a 64-bit machine.
    2. Web-based SSL via IE
    3. Web-based SSL via Firefox

    The Web-based SSL process is supposed to load Netdirect and start it up, establishing the VPN connection. Using Firefox, I'm able to authenticate with my smartcard, but when it tries to download the applet, the process stops with a message box saying that it couldn't download the zip file. If I run Firefox in Vista compatibility mode, it gets a little farther, and manages to start Netdirect, but then exits after notifying me that the netdirect adapter was not installed. Using IE, I'm able to authenticate with my smartcard, then the java applet starts, but dies with the following sent to the java console:

        load: class NetDirect not found.
        java.lang.ClassNotFoundException: NetDirect
            at sun.plugin2.applet.Applet2ClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.plugin2.applet.Plugin2ClassLoader.loadCode(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: unknown_ca
            at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Unknown Source)
            at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.recvAlert(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(Unknown Source)
            at sun.net.www.protocol.https.HttpsClient.afterConnect(Unknown Source)
            at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at java.net.HttpURLConnection.getResponseCode(Unknown Source)
            at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader.getBytes(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader.access$000(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            ... 7 more
        Exception: java.lang.ClassNotFoundException: NetDirect

    I've tried installing certificates using java's keytool, but that didn't change the outcome.
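
    The unknown_ca alert in the trace means the Java plugin, not the browser, is rejecting the gateway's certificate chain, so the CA has to land in the trust store of the exact JRE the plugin runs. A hedged sketch, assuming a 32-bit JRE under the default Windows path and that the gateway's CA certificate has been exported to ca.crt (path, alias, and the changeit default password are placeholders/defaults):

        keytool -importcert -trustcacerts -alias corpvpn -file ca.crt ^
            -keystore "C:\Program Files (x86)\Java\jre6\lib\security\cacerts" ^
            -storepass changeit

    The usual pitfall when keytool "didn't change the outcome" is importing into a different installed JRE than the one the browser plugin actually uses.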

    Read the article

  • Authentication in Apache2 with mod_dav_svn

    - by Poita_
    I'm having some trouble setting up authentication in Apache2 for a SVN repository that's being served using mod_dav_svn. Here is my Apache config for the directory:

        <Location /svn>
            DAV svn
            SVNParentPath /var/svn/repos
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dev.passwd
            Require valid-user
        </Location>

    I can use svn with the projects under /var/svn/repos, so I know that the DAV is working, but when I do svn updates or commits (or anything), Apache doesn't ask for any authentication... It does the exact same thing whether the Auth directives are there or not. The permissions on the repository directory (and all subdirectories/files) only give permission to www-data (the Apache2 user/group). I have also ensured that all relevant modules are enabled (in particular mod_auth is enabled, as are all mod_dav* modules). Any ideas why svn commands aren't authenticating? Thanks in advance.
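
    One quick thing to verify (hedged: on Apache 2.2, Basic auth lives in mod_auth_basic plus a provider such as mod_authn_file, not the old mod_auth): confirm those modules are actually loaded, enable them if not, and re-test.

        $ apache2ctl -M | grep -E 'auth|dav'
        $ sudo a2enmod auth_basic authn_file
        $ sudo /etc/init.d/apache2 restart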

    Read the article

  • Apache2 server does not start: cannot open shared object file

    - by sid__
    I am working with Apache and Passenger for a Rails project, and during a restart I got the following error:

        Cannot load /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/mod_passenger.so into server: /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/mod_passenger.so: cannot open shared object file: No such file or directory

    However there is no change in the apache configuration file. I have attached the snippet from the conf file:

        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11
        PassengerRuby /usr/bin/ruby1.8

    I am also unable to locate the shared object file in the location pointed to by the server, though I am not really sure how the .so file works (how it is created or destroyed). I would also appreciate it if someone could explain to me what exactly has happened. I understand the shared object file is missing; what could be the reason it got deleted?
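
    mod_passenger.so lives inside the passenger gem's own directory, so anything that removes or prunes the gem (a gem cleanup, an uninstall, a Ruby upgrade) silently removes the Apache module too. A hedged sketch of the usual recovery, assuming the RubyGems/Passenger layout shown in the paths above:

        # reinstall the exact gem version the LoadModule line points at
        $ sudo gem install passenger -v 2.2.11

        # recompile the Apache module inside the gem directory
        $ sudo passenger-install-apache2-module

        # then restart Apache
        $ sudo /etc/init.d/apache2 restart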

    Read the article

  • Accessing your web server via IPv6

    Being able to run your systems on IPv6, have automatic address assignment and the ability to resolve host names are the necessary building blocks in your IPv6 network infrastructure. Now that everything is in place, it is about time to enable another service to respond to IPv6 requests. The following article will guide you through the steps to enable Apache2 httpd to listen and respond to incoming IPv6 requests. This is the fourth article in a series on IPv6 configuration:

    1. Configure IPv6 on your Linux system
    2. DHCPv6: Provide IPv6 information in your local network
    3. Enabling DNS for IPv6 infrastructure
    4. Accessing your web server via IPv6

    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system.

    Surfing the web - IPv6 style

    Enabling IPv6 connections in Apache 2 is fairly simple. But first let's check whether your system has a running instance of Apache2 or not. You can check this like so:

        $ service apache2 status
        Apache2 is running (pid 2680).

    In case you got a 'service unknown' you have to install Apache to proceed with the following steps:

        $ sudo apt-get install apache2

    Out of the box, Apache binds to all your available network interfaces and listens to TCP port 80. To check this, run the following command:

        $ sudo netstat -lnptu | grep "apache2\W*$"
        tcp6       0      0 :::80                   :::*                    LISTEN      28306/apache2

    In this case Apache2 is already binding to IPv6 (and implicitly to IPv4). If you only got a tcp output, then your HTTPd is not yet IPv6 enabled. Check your Listen directive; depending on your system this might be in a different location than the default in Ubuntu.

        $ sudo nano /etc/apache2/ports.conf

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz
        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    Just in case you don't have a ports.conf file, look for it like so:

        $ cd /etc/apache2/
        $ fgrep -r -i 'listen' ./*

    And modify the related file instead of ports.conf - which most probably is either apache2.conf or httpd.conf anyway. Okay, please bear in mind that Apache can only bind once to the same interface and port. So, eventually, you might be interested in adding another port which explicitly listens to IPv6 only. In that case, you would add the following in your configuration file:

        Listen 80
        Listen [2001:db8:bad:a55::2]:8080

    But this is completely optional... Anyway, just to complete all steps, you save the file and then check the syntax like so:

        $ sudo apache2ctl configtest
        Syntax OK

    Okay, now let's apply the modifications to our running Apache2 instance:

        $ sudo service apache2 reload
         * Reloading web server config apache2   ...done.

        $ sudo netstat -lnptu | grep "apache2\W*$"
        tcp6       0      0 2001:db8:bad:a55:::8080 :::*                    LISTEN      5922/apache2
        tcp6       0      0 :::80                   :::*                    LISTEN      5922/apache2

    There we have two daemons running and listening on different TCP ports. Now that the basics are in place, it's time to prepare any website to respond to incoming requests on the IPv6 address. Open up any configuration file you have below your sites-enabled folder:

        $ ls -al /etc/apache2/sites-enabled/
        ...
        $ sudo nano /etc/apache2/sites-enabled/000-default

        <VirtualHost *:80 [2001:db8:bad:a55::2]:8080>
                ServerAdmin [email protected]
                ServerName server.ios.mu
                ServerAlias server

    Here we have to check and modify the VirtualHost directive and enable it to respond to the IPv6 address and port our web server is listening to. Save your changes, run the configuration test and reload Apache2 in order to apply your modifications. After these steps succeed you can launch your favourite browser and navigate to your IPv6-enabled web server.

    [Image: Accessing an IPv6 address in the browser]

    That looks like a successful surgery to me... Note: In case you received a timeout, check whether your client is operating on IPv6, too.
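
    To confirm the new listener from a client, you can also point curl directly at the IPv6 literal (a hedged aside, assuming curl is installed; the -g flag stops curl from treating the brackets as glob characters):

        $ curl -g -6 'http://[2001:db8:bad:a55::2]:8080/'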

    Read the article

  • OpenSSL: certificate signature failure error

    - by e-t172
    I'm trying to wget La Banque Postale's website.

        $ wget https://www.labanquepostale.fr/
        --2009-10-08 17:25:03--  https://www.labanquepostale.fr/
        Resolving www.labanquepostale.fr... 81.252.54.6
        Connecting to www.labanquepostale.fr|81.252.54.6|:443... connected.
        ERROR: cannot verify www.labanquepostale.fr's certificate, issued by `/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA':
          certificate signature failure
        To connect to www.labanquepostale.fr insecurely, use `--no-check-certificate'.
        Unable to establish SSL connection.

    I'm using Debian Sid. On another machine, which is also running Debian Sid with the same software versions, the command works perfectly. ca-certificates is installed on both machines (I tried removing it and reinstalling it in case a certificate got corrupted somehow; no luck). Opening https://www.labanquepostale.fr/ in Iceweasel on the same machine works perfectly. Additional information:

        $ openssl s_client -CApath /etc/ssl/certs -connect www.labanquepostale.fr:443
        CONNECTED(00000003)
        depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        verify error:num=7:certificate signature failure
        verify return:0
        ---
        Certificate chain
         0 s:/1.3.6.1.4.1.311.60.2.1.3=FR/2.5.4.15=V1.0, Clause 5.(b)/serialNumber=421100645/C=FR/postalCode=75006/ST=PARIS/L=PARIS/streetAddress=115 RUE DE SEVRES/O=LA BANQUE POSTALE/OU=DISF2/CN=www.labanquepostale.fr
           i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA
         1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA
           i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
         2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
           i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
         3 s:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
           i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        <base64-encoded certificate removed for legibility>
        -----END CERTIFICATE-----
        subject=/1.3.6.1.4.1.311.60.2.1.3=FR/2.5.4.15=V1.0, Clause 5.(b)/serialNumber=421100645/C=FR/postalCode=75006/ST=PARIS/L=PARIS/streetAddress=115 RUE DE SEVRES/O=LA BANQUE POSTALE/OU=DISF2/CN=www.labanquepostale.fr
        issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 5101 bytes and written 300 bytes
        ---
        New, TLSv1/SSLv3, Cipher is RC4-MD5
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : RC4-MD5
            Session-ID: 0009008CB3ADA9A37CE45B464E989C82AD0793D7585858584ACE056700035363
            Session-ID-ctx:
            Master-Key: 1FB7DAD98B6738BEA7A3B8791B9645334F9C760837D95E3403C108058A3A477683AE74D603152F6E4BFEB6ACA48BC2C3
            Key-Arg   : None
            Start Time: 1255015783
            Timeout   : 300 (sec)
            Verify return code: 7 (certificate signature failure)
        ---

    Any idea why I get certificate signature failure? As if this wasn't strange enough, copy-pasting the "server certificate" mentioned in the output and running openssl verify on it returns OK...
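
    Since the same chain verifies on a sibling machine, suspicion falls on the local trust store rather than on the site. A hedged sketch of how to narrow it down, assuming a standard Debian layout (the chain file name is a placeholder):

        # does verification fail against the system hash directory?
        $ openssl verify -CApath /etc/ssl/certs server_chain.pem

        # rebuild the hash symlinks in case they are stale or broken
        $ sudo c_rehash /etc/ssl/certs

        # compare openssl package and library versions across the two machines
        $ dpkg -l openssl libssl\* | grep ^ii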

    Read the article

  • Strategy to isolate multiple nginx SSL apps with a single domain via sub-URIs?

    - by icpu
    Warning: so far I have only learnt how to use nginx to serve apps with their own domain and server block, but I think it's time to dive a little deeper. To mitigate the need for multiple SSL certificates or expensive wildcard certificates, I would like to serve multiple apps (e.g. Rails apps, PHP apps, Node.js apps) from one nginx server_name, e.g.:

        rooturl/railsapp
        rooturl/nodejsapp
        rooturl/phpshop
        rooturl/phpblog

    I am unsure of the ideal strategy. Some approaches I have seen or thought about:

    1. Multiple location rules: this seems to cause conflicts between the individual apps' config requirements, e.g. differing rewrite and access requirements.
    2. Isolating apps by backend internal port: is this possible? Each port routing to its own config, so the config is isolated and can be bespoke to each app's requirements?
    3. Reverse proxy: I am a little ignorant of how this works; is this what I need to research, and is it actually 2 above? Help online always seems to proxy to another server, e.g. Apache.

    What is an effective way to isolate config requirements for apps served from a single domain via sub-URIs?
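
    Your option 3 is essentially options 1 and 2 combined: nginx terminates SSL once for the single certificate, and each sub-URI proxies to an app listening on its own internal port, keeping every app's quirks (rewrites, access rules) inside its own backend. A hedged sketch, with placeholder local ports and paths:

        server {
            listen 443 ssl;
            server_name example.com;
            ssl_certificate     /etc/nginx/ssl/example.com.crt;
            ssl_certificate_key /etc/nginx/ssl/example.com.key;

            # each app is isolated behind its own internal port
            location /railsapp/  { proxy_pass http://127.0.0.1:3000/; }
            location /nodejsapp/ { proxy_pass http://127.0.0.1:3001/; }
            location /phpshop/   { proxy_pass http://127.0.0.1:3002/; }
            location /phpblog/   { proxy_pass http://127.0.0.1:3003/; }
        }

    The trailing slash on each proxy_pass URL makes nginx strip the location prefix before forwarding. The part that still needs per-app work is link generation: each app must be told it lives under a sub-URI (a base-path or relative-URL setting), which is usually the hard bit.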

    Read the article

  • Apache Mod_rewrite rule working on one server, but not another

    - by Mason
    I am using mod_jk and mod_rewrite on httpd 2.2.15. I have a rule:

        RewriteCond %{REQUEST_URI} !^/video/play\.xhtml.*
        RewriteRule ^/video/(.*) /video/play.xhtml?vid=$1 [PT]

    I just want to rewrite something like /video/videoidhere to /video/play.xhtml?vid=videoidhere. This works perfectly on my developer machine, but on production I get a 404 (generated by JBoss, not Apache). Here is the tail of access.log and rewrite.log on prod (broken); the rewrite.log is exactly the same on dev (working):

        applying pattern '^/video/(.*)' to uri '/video/46279d4daf5440b2844ec831413dcc3b'
        RewriteCond: input='/video/46279d4daf5440b2844ec831413dcc3b' pattern='!^/video/play\.xhtml.*' => matched
        rewrite '/video/46279d4daf5440b2844ec831413dcc3b' -> '/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b'
        split uri=/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b -> uri=/video/play.xhtml, args=vid=46279d4daf5440b2844ec831413dcc3b
        forcing '/video/play.xhtml' to get passed through to next API URI-to-filename handler

        "GET /video/46279d4daf5440b2844ec831413dcc3b HTTP/1.1" 404 420 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.6) Gecko/20100628 Ubuntu/10.04 (lucid) Firefox/3.6.6"

    I can access http://www.fivi.com/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b but not /video/46279d4daf5440b2844ec831413dcc3b. Both servers are even using the EXACT same httpd.conf and modules. I built Apache with:

        ./configure --prefix /usr/local/apache2.2.15 --enable-alias --enable-rewrite --enable-cache \
            --enable-disk_cache --enable-mem_cache --enable-ssl --enable-deflate

    Thanks, Mason

    ---- UPDATE ----

    mod-jk.conf:

        JkWorkersFile /usr/local/apache2.2.15/conf/workers.properties
        JkLogFile /var/log/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
        JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories
        JkRequestLogFormat "%w %V %T"
        JkShmFile run/jk.shm
        <Location /jkstatus>
            JkMount status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    workers.properties:

        worker.node1.port=8009
        worker.node1.host=75.102.10.74
        worker.node1.type=ajp13
        worker.node1.lbfactor=20
        worker.node1.ping_mode=A    #As of mod_jk 1.2.27
        worker.node2.port=8009
        worker.node2.host=75.102.10.75
        worker.node2.type=ajp13
        worker.node2.lbfactor=10
        worker.node2.ping_mode=A    #As of mod_jk 1.2.27
        worker.loadbalancer.type=lb
        worker.loadbalancer.balance_workers=node2,node1
        worker.loadbalancer.sticky_session=True
        worker.status.type=status

    httpd.conf:

        ServerName www.fivi.com:80
        Include /usr/local/apache2.2.15/conf/mod-jk.conf
        NameVirtualHost *

        <VirtualHost *>
            ServerName *
            DocumentRoot /usr/local/apache2/htdocs
            JkUnMount /* loadbalancer
            RedirectMatch 301 /(.*) http://www.fivi.com/$1
        </VirtualHost>

        <VirtualHost *>
            ServerName www.fivi.com
            ServerAlias www.fivi.com images.fivi.com
            JkMount /* loadbalancer
            JkMount / loadbalancer

        [root@fivi conf]# /usr/local/apache2.2.15/bin/httpd -M
        Loaded Modules:
         core_module (static)
         authn_file_module (static)
         authn_default_module (static)
         authz_host_module (static)
         authz_groupfile_module (static)
         authz_user_module (static)
         authz_default_module (static)
         auth_basic_module (static)
         cache_module (static)
         disk_cache_module (static)
         mem_cache_module (static)
         include_module (static)
         filter_module (static)
         deflate_module (static)
         log_config_module (static)
         env_module (static)
         headers_module (static)
         setenvif_module (static)
         version_module (static)
         ssl_module (static)
         mpm_prefork_module (static)
         http_module (static)
         mime_module (static)
         status_module (static)
         autoindex_module (static)
         asis_module (static)
         cgi_module (static)
         negotiation_module (static)
         dir_module (static)
         actions_module (static)
         userdir_module (static)
         alias_module (static)
         rewrite_module (static)
         so_module (static)
         jk_module (shared)
        Syntax OK
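
    One way to tell whether Apache or JBoss is at fault (a hedged sketch; it assumes the JBoss nodes also expose plain HTTP on port 8080, which may not be true in this setup): request the rewritten URI from a backend node directly and compare against the front end.

        # straight to a backend worker, bypassing Apache and mod_jk
        $ curl -sI 'http://75.102.10.74:8080/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b'

        # through Apache, pre-rewrite
        $ curl -sI 'http://www.fivi.com/video/46279d4daf5440b2844ec831413dcc3b'

    If the backend 404s on the rewritten URI too, the difference lives in the JBoss deployment rather than in mod_rewrite.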

    Read the article

  • My virtualhost not working for non-www version

    - by johnlai2004
    I have a development web server (ubuntu + apache) that can be accessed via the URL glacialsummit.com. For some reason, http://www.glacialsummit.com serves pages from the /srv/www/glacialsummit.com/ directory, but http://glacialsummit.com serves pages from the /var/www/ directory. Here's what some of my virtualhost config files look like.

    filename: /etc/apache2/sites-enabled/glacialsummit.com

        <VirtualHost 97.107.140.47:80>
            ServerAdmin [email protected]
            ServerName glacialsummit.com
            ServerAlias www.glacialsummit.com
            DocumentRoot /srv/www/glacialsummit.com/public_html/
            ErrorLog /srv/www/glacialsummit.com/logs/error.log
            CustomLog /srv/www/glacialsummit.com/logs/access.log combined
        </VirtualHost>

        <VirtualHost 97.107.140.47:443>
            ServerAdmin [email protected]
            ServerName glacialsummit.com
            ServerAlias www.glacialsummit.com
            DocumentRoot /srv/www/glacialsummit.com/public_html/
            ErrorLog /srv/www/glacialsummit.com/logs/error.log
            CustomLog /srv/www/glacialsummit.com/logs/access.log combined
            SSLEngine on
            SSLCertificateFile /etc/ssl/localcerts/www.glacialsummit.com.crt
            SSLCertificateKeyFile /etc/ssl/localcerts/www.glacialsummit.com.key
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>
            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
        </VirtualHost>

        <VirtualHost 97.107.140.47:80>
            ServerAdmin [email protected]
            ServerName project.glacialsummit.com
            ServerAlias www.project.glacialsummit.com
            DocumentRoot /srv/www/project.glacialsummit.com/public_html/
            ErrorLog /srv/www/project.glacialsummit.com/logs/error.log
            CustomLog /srv/www/project.glacialsummit.com/logs/access.log combined
        </VirtualHost>

        ## i have many other vhosts that work fine in this file

    filename: /etc/apache2/sites-enabled/000-default

        <VirtualHost 97.107.140.47:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

    filename: /etc/apache2/ports.conf

        NameVirtualHost 97.107.140.47:80
        Listen 80

        <IfModule mod_ssl.c>
            # SSL name based virtual hosts are not yet supported, therefore no
            # NameVirtualHost statement here
            Listen 443
        </IfModule>

    How do I make http://glacialsummit.com serve web pages from /srv/www/glacialsummit.com/public_html just like http://www.glacialsummit.com?
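
    A hedged first step: ask Apache itself which vhost it thinks owns each hostname. With name-based vhosts on one IP:port, the first vhost loaded becomes the catch-all for any Host header that matches nothing, and 000-default sorts before glacialsummit.com in sites-enabled.

        # dump the parsed vhost table: shows the default vhost and which
        # ServerName/ServerAlias each name resolves to
        $ sudo apache2ctl -S

    If glacialsummit.com does not show up against the /srv/www vhost in that output, a ServerName typo or a DNS mismatch between the bare and www names is the likely cause.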

    Read the article

  • Ubuntu unattended-upgrades stops apache

    - by Robbie
    This morning I was alerted to the fact that both apache instances serving my app were not responding to requests from my load balancer. I attempted apachectl restart and it said apache was not running. So I started apache on both instances and got the service up again. I then followed the logs and worked out that both had performed upgrades via the unattended-upgrades package moments before they stopped responding.

    /var/log/unattended-upgrades/unattended-upgrades.log:

        2013-07-02 06:30:51,875 INFO Starting unattended upgrades script
        2013-07-02 06:30:51,875 INFO Allowed origins are: ['o=Ubuntu,a=precise-security']
        2013-07-02 06:33:57,771 INFO Packages that are upgraded: accountsservice apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common apparmor apport apt apt-transport-https apt-utils bind9-host binutils dbus dnsutils gnupg gpgv isc-dhcp-client isc-dhcp-common krb5-locales libaccountsservice0 libapt-inst1.4 libapt-pkg4.12 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdns81 libdrm-intel1 libdrm-nouveau1a libdrm-radeon1 libdrm2 libexpat1 libfreetype6 libgc1c2 libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27 libisc83 libisccc80 libisccfg82 liblwres80 libruby1.8 libx11-6 libx11-data libxcb1 libxext6 libxml2 linux-firmware linux-image-virtual linux-libc-dev linux-virtual multiarch-support openssl perl perl-base perl-modules python-apport python-crypto python-keyring python-problem-report python-software-properties ri1.8 ruby1.8 ruby1.8-dev sudo tzdata update-manager-core
        2013-07-02 06:33:57,772 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-02_06:33:57.772399.log'
        2013-07-02 06:36:10,584 INFO All upgrades installed

    I'm running Ubuntu 12.04 on Amazon EC2 servers. I have unattended-upgrades installed and configured as follows.

    /etc/apt/apt.conf.d/50unattended-upgrades:

        // Automatically upgrade packages from these (origin:archive) pairs
        Unattended-Upgrade::Allowed-Origins {
                "${distro_id}:${distro_codename}-security";
        //      "${distro_id}:${distro_codename}-updates";
        //      "${distro_id}:${distro_codename}-proposed";
        //      "${distro_id}:${distro_codename}-backports";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        };

    /etc/apt/apt.conf.d/20auto-upgrades:

        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";

    I've struggled to find documentation about what happens to running processes during an upgrade. Is this expected behaviour, or should unattended-upgrades restart apache after upgrading it? What can I do to ensure apache is restarted correctly? Should I just blacklist the apache package?
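
    If you decide hands-off Apache upgrades are riskier than slightly delayed patches, the blacklist route is a small change (a hedged sketch; entries must match the package names, and you would then patch Apache manually during maintenance windows):

        // /etc/apt/apt.conf.d/50unattended-upgrades
        Unattended-Upgrade::Package-Blacklist {
                "apache2";
                "apache2.2-bin";
                "apache2.2-common";
        };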

    Read the article

  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello, just upgraded an old box to Ubuntu 10.04.2 LTS. Apache will not display images to a browser that are over about 2K. Small images seem to display fine. Static HTML and PHP continue to work fine as well. Installed:

        apache2 2.2.14-5ubuntu8.4
        apache2-mpm-prefork 2.2.14-5ubuntu8.4
        apache2-utils 2.2.14-5ubuntu8.4
        apache2.2-bin 2.2.14-5ubuntu8.4
        apache2.2-common 2.2.14-5ubuntu8.4

    Here is an ngrep of an image that doesn't display in the browser:

        T 192.168.0.4:32907 - 192.168.0.54:80 [AP]
        GET /path/path/logo.png HTTP/1.1..Host: 192.168.0.54..Connection: keep-alive..Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13..Accept-Encoding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3....

        T 192.168.0.54:80 - 192.168.0.4:32907 [A]
        HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apache/2.2.14 (Ubuntu)..Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT..ETag: "17b6f4-15fe-491dd63eb2f40"..Accept-Ranges: bytes..Content-Length: 5630..Keep-Alive: timeout=15, max=100..Connection: Keep-Alive..Content-Type: image/png.....PNG........IHDR...!...v.......%.....sRGB.........bKGD..............pHYs.................tIME..
        etc...

    This looks OK to me! I have tried Firefox and Chrome; both display small images fine, but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt; it also takes a long time to save, which makes me think the browser cannot see the Content-Length header sent from apache. Also, when I look at the saved image file it includes the headers from apache, along with a bit of garbage at the top, like so (vi logo.png):

        ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M
        Date: Wed, 09 Mar 2011 04:47:04 GMT^M
        Server: Apache/2.2.14 (Ubuntu)^M
        Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M
        ETag: "17b6ff-157c-491dd63eb2f40"^M
        Accept-Ranges: bytes^M
        Content-Length: 5500^M
        Keep-Alive: timeout=15, max=94^M
        Connection: Keep-Alive^M
        Content-Type: image/png^M
        ^M
        PNG^M
        etc...

    Any ideas? It's driving me nuts. There is nothing in the apache error logs, and permissions are fine (because the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this ubuntu box either. Thanks heaps!! Dave

    ps: just tried IE from a different computer, same problem :(
    pps: rebooted server, no help.
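
    The saved file starting with raw packet-header bytes before the HTTP response is the classic signature of a sendfile/offload problem: small images fit in a single packet and survive, larger ones get mangled on the way out. A hedged first experiment is to turn off Apache's zero-copy paths and retest (both are standard Apache 2.2 directives; shown server-wide, though they can be scoped per directory):

        # in apache2.conf or an included conf file
        EnableSendfile Off
        EnableMMAP Off

    If that cures it, the underlying culprit is often TCP checksum offload on the NIC; ethtool -K eth0 tx off rx off is the usual test at the OS layer.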

    Read the article

  • ServerAlias overrides ServerName?

    - by GusDeCooL
    I tried to set up a virtual host with apache2 on my ubuntu server. My ServerName is not working; it serves the wrong document, but the ServerAlias serves the right one. How can that happen? Here is my virtual host config:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName bungamata.web.id
            ServerAlias www.bungamata.web.id
            DocumentRoot /home/gusdecool/host/bungamata.web.id
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/gusdecool/host/bungamata.web.id>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    If you access http://bungamata.web.id it shows the wrong document, but if you open http://www.bungamata.web.id it opens the right document. The right document should have the content "testing gan".
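
    Since Apache matches name-based vhosts purely on the Host header, a split like this usually means the two names don't actually reach the same vhost, or even the same server. A hedged pair of checks, assuming dig and apache2ctl are available:

        # do the bare name and the www name resolve to the same address?
        $ dig +short bungamata.web.id
        $ dig +short www.bungamata.web.id

        # which vhost is the default, and which one matches each name?
        $ sudo apache2ctl -S

    If the bare name resolves to a different IP, or falls through to the first-loaded (default) vhost, that explains the wrong document.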

    Read the article

  • Homepage 301 Redirect to SSL Homepage

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have 1 site where we implemented a 301 redirect on the homepage from http to https. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our webmaster account I notice that we are not being provided with any webmaster information (search queries, backlinks) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried that Google Fetch is not getting the correct title tags and meta information from our homepage, and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google correctly indexes all our pages and can move between the https and http pages without issues. Does anybody have any advice on how we can correctly set this up, or be sure that Google is fetching the correct information?

    Read the article

  • Mac OS - Built SVN from source, now Apache2 not loading sites

    - by Geuis
    This relates to another question I asked earlier today. I built SVN 1.6.2 from source. In the process, it has completely screwed up my dev environment. After I built SVN, Apache wasn't loading. It was giving me this error:

        Syntax error on line 117 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/mod_dav_svn.so into server: dlopen(/usr/libexec/apache2/mod_dav_svn.so, 10): no suitable image found. Did find:\n\t/usr/libexec/apache2/mod_dav_svn.so: mach-o, but wrong architecture

    It appears that SVN overwrote the old mod_dav_svn.so; I am not able to get it to build as FAT, and I can't recover whatever was originally there. I resolved this (temporarily?) by commenting out the line that was loading the mod_dav_svn.so and got Apache to start at that point. However, even though Apache is running, I am now getting this error when trying to access my dev sites:

        Directory index forbidden by Options directive: /usr/share/tomcat6/webapps/ROOT/

    I have Apache2 sitting in front of Tomcat6. I access my local dev site using the internal name "http://localthesite". I have had virtual directories set up that have worked until this SVN debacle. Tomcat is installed at /usr/local/apache-tomcat, and webapps is /usr/local/apache-tomcat/webapps. Our production servers deploy tomcat to /usr/share/tomcat6, so I have symlinks set up on my system to replicate this as well. These point back to the actual installation path. This has all been working fine as well. None of our configurations for Apache2, Tomcat, or .htaccess have changed. Over the weekend, I performed a "Repair Disk Permissions" on the system. This was before I discovered the mod_dav_svn.so problem. I have been reading up on this all morning and the most common answer is that there is an Options -Indexes set. We have this in a config file, but when I removed it during testing I still got the same errors from Apache. At this point, I'm assuming I either totally borked the native Apache2 installation on this Mac, or there is a permissions error somewhere that I'm missing. The permissions error could be from the SVN installation, or from my repair process. Does anyone have any idea what could be the problem? I'm totally blocked right now and have no idea where to check next.
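
    The "wrong architecture" part, at least, has a conventional fix on Mac OS X: the system Apache runs 64-bit, so any module it loads must contain a matching slice. A hedged sketch of rebuilding Subversion's module as a universal binary (exact flag placement varies by Subversion version; treat this as a starting point, not a recipe):

        # check which architectures the module currently contains
        $ file /usr/libexec/apache2/mod_dav_svn.so

        # rebuild with both architectures baked in
        $ ./configure CFLAGS="-arch i386 -arch x86_64" \
                      LDFLAGS="-arch i386 -arch x86_64"
        $ make && sudo make install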

    Read the article
