Search Results

Search found 3868 results on 155 pages for 'wildcard ssl'.


  • HAProxy NGInx SSL setup

    - by Niclas
    I've been looking around at different setups for a server cluster supporting SSL, and I would like to sanity-check my idea with you. Requirements: all servers in the cluster should be under the same full domain name (http and https); routing to subsystems is done on URI matching in HAProxy; all URIs must support SSL. Wish: centralize the routing rules.

               ---<----http-----<--
               |                  |
      Inet -->HA--+---https--->NGInx_SSL_1..N
                  |
                  +---http---> Apache_1..M
                  |
                  +---http---> NodeJS

    Idea: configure HAProxy to route all SSL traffic (mode=tcp, algorithm=source) to an NGInx cluster that turns the https traffic into http. Re-pass the http traffic from NGInx back to HAProxy, which load-balances it according to the HA config. My question is simply: is this the best way to configure things based on the requirements above?
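    For what it's worth, here is a rough haproxy.cfg sketch of that idea; all names, addresses and the /node ACL are illustrative assumptions, not taken from the question. The 443 frontend runs in TCP mode so the TLS stream is passed through untouched to the NGInx terminators, and the plain-HTTP frontend is the one NGInx would send the decrypted traffic back to:

      frontend fe_https
          bind *:443
          mode tcp                        # pass TLS through untouched
          default_backend be_nginx_ssl

      backend be_nginx_ssl
          mode tcp
          balance source                  # pin a client to one TLS terminator
          server nginx1 10.0.0.11:443 check
          server nginx2 10.0.0.12:443 check

      frontend fe_http
          bind *:80
          mode http
          acl is_node path_beg /node      # illustrative URI-based routing
          use_backend be_nodejs if is_node
          default_backend be_apache

      backend be_apache
          mode http
          balance roundrobin
          server apache1 10.0.0.21:80 check
          server apache2 10.0.0.22:80 check

      backend be_nodejs
          mode http
          server node1 10.0.0.31:3000 check

    The main trade-off of this shape is the extra hop (and extra connection state) for every HTTPS request. Depending on the HAProxy version, terminating TLS in HAProxy itself may also be an option, which would remove that hop; otherwise the sketch above matches the requirement of keeping the routing rules in one place.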


  • Problem installing SSL on centos 5.2 with plesk

    - by Haluk
    Hello, I'm trying to install an SSL certificate on a dedicated CentOS 5.2 server. I followed the hosting company's instructions, but the SSL is not working. When I try to access my website using https, Firefox gives the following error: "uses an invalid security certificate. The certificate expired on 3/13/2010 11:56 AM. (Error code: sec_error_expired_certificate)". I'm not sure where the problem is. You should also know that this server has Plesk installed; even though I'm not using it, it could be overriding my httpd.conf or ssl.conf. Thanks!
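    One quick way to narrow this down is to compare the certificate Apache is actually serving with the one you installed; Plesk keeps its own per-domain certificate settings, so the file referenced in httpd.conf/ssl.conf is not necessarily the one being presented. A rough check, with the hostname and path as placeholders:

      # Certificate actually presented on port 443, with its validity dates
      openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates

      # Expiry date of the certificate file you installed on disk
      openssl x509 -noout -enddate -in /path/to/your/certificate.crt

    If the two differ, Apache (or a Plesk-generated vhost config) is loading an older certificate than the one you think you installed.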


  • disable "SSL 2.0+ upgrade support" in nginx

    - by Bhargava
    I evaluated the SSL configuration of my server with the Qualys SSL Labs page ( https://www.ssllabs.com/ssldb/index.html ) and found the entry "SSL 2.0+ upgrade support" marked as yes. I want to disable this SSLv2 handshake too. I searched around and found http://forum.nginx.org/read.php?2,104032m, which points to creating an openssl.cnf file. I have a naive question here: after creating the file, does one need to re-key the certificate for this to work? Are there any other steps to follow? I use nginx 1.0.11 and OpenSSL "OpenSSL 1.0.0e-fips 6 Sep 2011". I have set ssl_ciphers in nginx to SSLv3 TLSv1;
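    For what it's worth, re-keying should not be necessary: the certificate itself is independent of which protocol versions the server offers. A minimal sketch of the nginx side, inside the HTTPS server block (the directive values here are illustrative, not taken from the question):

      ssl_protocols TLSv1;                      # or "SSLv3 TLSv1" if older clients matter
      ssl_ciphers   HIGH:!aNULL:!MD5:!SSLv2;    # also drop SSLv2-era ciphers
      ssl_prefer_server_ciphers on;

    Whether the "SSL 2.0+ upgrade support" flag (the SSLv2-compatible ClientHello) clears can also depend on how the local OpenSSL build behaves, which is where the openssl.cnf advice from that forum thread comes in.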


  • SSL and IP addresses on a dedicated server

    - by spike5792
    I've just moved from a shared web hosting server running WHM/cPanel with six domains and one dedicated IP address. One of the six domains has an SSL certificate. I have since moved to a dedicated server, also with one dedicated IP and running cPanel/WHM, with the same six domains. I want one of the domains to have the SSL certificate, but I am being told that it's not possible unless I buy another dedicated IP address. I want to question the hosting provider on this, but they haven't really acknowledged it; they've just kept saying that it needs its own IP, as the IP I am currently using is shared between my six domains. Does anyone have experience of this, and can you tell me why my new, expensive dedicated hosting provider can't set up SSL using the certificate as I had it before on my shared server?


  • lighttpd with multiple IPs, each with a UCC certificate and many hostnames

    - by Dave
    I'd like to get lighttpd working with UCC certificates, but I can't seem to figure out the correct syntax. Essentially, for each IP address, I have one UCC certificate and a bunch of hostnames.

      $SERVER["socket"] == "10.0.0.1:443" {
          ssl.engine  = "enable"
          ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
          ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"
          $HTTP["host"] =~ "mywebsite.com" {
              server.document-root = "/var/www/mywebsite.com/htdocs"
          }
      }

    The above works fine for one hostname, but as soon as I try to set up another hostname (note the same SSL cert):

      $SERVER["socket"] == "10.0.0.1:443" {
          ssl.engine  = "enable"
          ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
          ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"
          $HTTP["host"] =~ "anotherwebsite.com" {
              server.document-root = "/var/www/anotherwebsite.com/htdocs"
          }
      }

    ...I get this error:

      Duplicate config variable in conditional 6 global/SERVERsocket==10.0.0.1:443: ssl.engine

    Is there any way I can put a conditional so that ssl.engine is only enabled if it is not already enabled? Or do I have to put all my $HTTP["host"]s inside the same $SERVER["socket"] (which will make config file management more difficult for me), or is there some entirely different way to do it? This has to be repeated for multiple IPs too (so I'll have a bunch of $SERVER["socket"] == "10.0.0.2:443" blocks, etc.), each with one UCC cert and many hostnames. Am I going about this the wrong way entirely? My goal is to conserve IP addresses when I have many websites that are related and can share an SSL certificate, but each still needs its own SSL-accessible version from the appropriate hostname (instead of a single secure.mywebsite.com).
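    As a point of comparison, here is a rough sketch of the "everything inside one socket block" layout, which is the arrangement lighttpd expects: ssl.engine and ssl.pemfile are set once per $SERVER["socket"], and the nested $HTTP["host"] conditionals only switch document roots (the hostnames and paths are taken from the question; the regexes are illustrative):

      $SERVER["socket"] == "10.0.0.1:443" {
          ssl.engine  = "enable"
          ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
          ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"

          $HTTP["host"] =~ "^(www\.)?mywebsite\.com$" {
              server.document-root = "/var/www/mywebsite.com/htdocs"
          }
          $HTTP["host"] =~ "^(www\.)?anotherwebsite\.com$" {
              server.document-root = "/var/www/anotherwebsite.com/htdocs"
          }
      }

    A second $SERVER["socket"] == "10.0.0.2:443" block with its own pemfile and its own host conditionals would then cover the next certificate group. This is also why the "duplicate config variable" error goes away: ssl.engine is never set twice for the same socket.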


  • Setting up SSL with 389 Directory Server for LDAP authentication

    - by GioMac
    I've got 389 Directory Server running on RHEL 5 with groups, users, posix, etc. RHEL clients are authenticating users with LDAP; no problems, everything works perfectly, but passwords are sent in plaintext and are visible with a network sniffer. So I decided to run with SSL:

    1. Created a CA and got both the private and public CA certificates.
    2. Using the CA certs, generated a private/public certificate pair for 389DS (combined into one file, the 1st file) according to the 389DS certificate request, and imported it together with the CA public cert (the 2nd file) into 389DS from the graphical console.
    3. Enabled SSL in 389DS.
    4. On the client, enabled SSL for LDAP using authconfig-gtk, specifying only the CA public certificate.

    It doesn't work. How should this be done? What is the best way to integrate it safely?
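    For reference, a minimal sketch of what the client side on RHEL 5 usually ends up with once authconfig has done its work; the server name and CA path below are assumptions, and the hostname used in the URI has to match the CN in the 389DS server certificate:

      # /etc/ldap.conf (nss_ldap / pam_ldap)
      uri ldaps://ldap.example.com/
      ssl on
      tls_cacertfile /etc/openldap/cacerts/ca.crt
      tls_checkpeer yes

      # /etc/openldap/ldap.conf (OpenLDAP tools such as ldapsearch)
      URI        ldaps://ldap.example.com
      TLS_CACERT /etc/openldap/cacerts/ca.crt

    Testing with "ldapsearch -x -H ldaps://ldap.example.com -b <basedn>" from the client is a quick way to see whether the failure is in certificate trust (CA file, hostname mismatch) or in the PAM/NSS configuration.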


  • Auto enter pass phrase in case of Python ssl Client/Server

    - by rauch
    I need to create a client/server application to send files from clients to a server. I use simple ssl sockets for that and authenticate with certificates.

      ms = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      ssl_sock = ssl.wrap_socket(ms,
                                 keyfile=".../newCA/my_client.key",
                                 certfile=".../newCA/my_client.crt",
                                 server_side=0,
                                 cert_reqs=ssl.CERT_REQUIRED,
                                 ca_certs=".../newCA/CA/my-ca.crt")
      ssl_sock.connect((HOST, MPORT))

    And the server side:

      msock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      self.ssl_sock = ssl.wrap_socket(msock,
                                      keyfile=".../newCA/my_server.key",
                                      certfile=".../newCA/my_server.crt",
                                      server_side=1,
                                      cert_reqs=ssl.CERT_REQUIRED,
                                      ca_certs=".../newCA/CA/my-ca.crt")
      self.ssl_sock.bind(('', self.PORT))
      self.ssl_sock.listen(self.QUEUE_MAX)

    The problem is the following: when a client tries to connect to the server, it prompts "Enter the pass phrase" for the private key on both sides, server and client. In Java we would set the system property javax.net.ssl.keyStorePassword and it would be used automatically; how is this done in Python? I can't enter the pass phrase every time a client connects.
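    Two common ways around the prompt: strip the passphrase from the key once (e.g. "openssl rsa -in my_client.key -out my_client.nopass.key") and protect the file with permissions instead, or, if a newer Python is an option (3.4+ assumed here), build an ssl.SSLContext and hand the passphrase to load_cert_chain. A rough client-side sketch; the host, port and passphrase are placeholders and the paths mirror the ones in the question:

      import socket
      import ssl

      HOST, MPORT = "server.example.com", 4433   # placeholders

      # load_cert_chain() accepts a password (string or callable), so the key
      # can stay encrypted on disk without any interactive prompt.
      context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                           cafile=".../newCA/CA/my-ca.crt")
      context.load_cert_chain(certfile=".../newCA/my_client.crt",
                              keyfile=".../newCA/my_client.key",
                              password="my-key-passphrase")

      with socket.create_connection((HOST, MPORT)) as raw_sock:
          with context.wrap_socket(raw_sock, server_hostname=HOST) as ssl_sock:
              ssl_sock.sendall(b"hello")

    The server side is symmetric (Purpose.CLIENT_AUTH plus its own load_cert_chain call). The older ssl.wrap_socket API used in the question has no password argument, which is why the OpenSSL prompt appears.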


  • Redirect for .htaccess Wildcard Subdomains

    - by waywardspooky
    Hello, I've been trying to figure out a way to redirect requests for wildcard subdomains to a specific folder (called 'core') and have the requested page/file served from that folder. For example, making all calls to -http://johnny5.mysite.net go to -http://mysite.net/core/, or -http://docholliday.mysite.net/login.php to -http://mysite.net/core/login.php, or, as a final example, -http://jamesbrown.mysite.net/images/feelgood.jpg to -http://mysite.net/core/images/feelgood.jpg. The problem I've been having is getting the redirect to serve the requested page/file from 'core'. I've been able to get the wildcard subdomains to redirect requested pages/files to the root ( -http://mysite.net ), but not to the specific folder ( -http://mysite.net/core/ ). Here's what I have:

      Options +FollowSymlinks
      RewriteEngine On
      RewriteBase /
      RewriteCond %{HTTP_HOST} !^(www|mail|ftp)\.[a-z-]+\.[a-z]{2,6} [NC]
      RewriteCond %{HTTP_HOST} ([a-z-]+\.[a-z]{2,6})$ [NC]
      RewriteRule ^/(.*)$ http://%1/core/$1 [L]

    I've tried several things, like removing the http://%1 in the RewriteRule, but I can't seem to get it to work in the way I described above.
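    One direction worth trying (a sketch only; it assumes the wildcard subdomains share mysite.net's document root, and the hostname regexes are illustrative) is an internal rewrite rather than an external redirect, so the browser keeps the subdomain URL while Apache serves the file from /core/:

      Options +FollowSymlinks
      RewriteEngine On
      RewriteBase /

      # Don't rewrite requests that are already inside /core/ (avoids a loop)
      RewriteCond %{REQUEST_URI} !^/core/ [NC]
      # Only act on subdomains other than www/mail/ftp
      RewriteCond %{HTTP_HOST} !^(www|mail|ftp)\.mysite\.net$ [NC]
      RewriteCond %{HTTP_HOST} ^[a-z0-9-]+\.mysite\.net$ [NC]
      # No R flag and no scheme/host in the target, so this stays internal
      RewriteRule ^(.*)$ /core/$1 [L]

    Note that in a per-directory (.htaccess) context the pattern is matched without the leading slash, which is one reason the original ^/(.*)$ rule may never have fired.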


  • Flash Media Server slow over SSL

    - by Antilogic
    We are using FMS to host a VoD site. We host FMS internally (we do not use a CDN). We recently installed an SSL certificate to alleviate connection issues for clients (their networks either block or don't support RTMP); however, we're noticing that connections are drastically slower when streaming over RTMPS (on the order of Mbps). I know SSL adds some overhead, but both client and server show almost no signs of exertion. Speedtest.net and a locally hosted speed test confirm that bandwidth is not an issue. I'm really not a network guru, so I'm at a loss as to where to check next. Do any of you have an idea why streaming media would run so slowly over SSL?


  • Adding SSL to Heroku site post launch

    - by dineth
    I have a Rails API that I want to deploy on Heroku. $20/month for an SSL site on Heroku is a little steep, given I am not earning anything from this app yet. I am after advice, and wondering whether it is possible to add SSL sometime in the future. This is for an iOS app that I'm writing. Basically, the idea is that I continue to use https://myapp.heroku.com through their piggyback SSL. Once I get some cash in, I want to transition to using https://www.myapp.com. At that point the API would still need to work for app users who haven't upgraded to a new version of the app that points to the new domain. Anyone know if this is possible? Would both URLs continue to work? My gut feeling tells me this is not possible. Any advice would help. Thanks!


  • Apache Ubuntu SSL Configuration

    - by JSP
    Where besides the vhost configuration can SSL be configured? I see an SSL configuration in sites-available, but it's not an enabled vhost (and the certificate it points to is expired). Using apache2 -V shows me the configuration directory is /etc/apache2, but I cannot for the life of me find the SSL configuration, and it's driving me crazy. Any suggestions on where to look or what I'm missing? Ubuntu 12:

      Linux ip-10-39-119-18 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
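    Two quick ways to track it down with standard Apache/Debian tooling (nothing here is specific to this box): ask Apache which vhosts it actually loaded and where they came from, and grep the whole config tree for SSL directives.

      # Lists every loaded vhost with the file and line number it was defined in
      apache2ctl -S

      # Finds every file under /etc/apache2 that enables SSL or points at a certificate
      grep -Rni "sslengine\|sslcertificatefile" /etc/apache2/

    If neither turns anything up, the certificate may be referenced from a file pulled in by an Include outside /etc/apache2, which the apache2ctl -S vhost listing would still reveal.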


  • Exchange 2007 OWA not listening on SSL port

    - by krs1
    I have an Exchange 2007 server that went down after a power failure. It has OWA access via SSL both externally and internally. OWA is working fine from the internal network; however, I am getting a timeout when I attempt to connect externally. I pulled up Wireshark and noticed that the server actually redirects to SSL. For some reason the server is not listening on the SSL port, and this seems to be causing the timeout. I normally do only development work, but I'm stuck with this since my sysadmin took off for the week and isn't answering my phone calls. As far as I know it shouldn't be a firewall issue. Aside from me not wanting to work on the damn thing, what should I look for?
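    A first sanity check on the Exchange box itself (plain Windows tooling, no Exchange-specific commands) is to confirm whether anything is bound to port 443 at all:

      netstat -ano | findstr :443

    If nothing is listening, the next place to look is the OWA website's SSL binding in IIS Manager; a certificate binding that was lost or corrupted during the power failure would produce exactly this "redirects to SSL but then times out" behaviour.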


  • SSL on Apache seems to significantly affect WebDAV performace

    - by takesides
    I'm using Apache 2.2 running on Windows Server 2008 R2 as a WebDAV server for clients to upload large media files (roughly 100-2000 MB). I am finding that when I have SSL enabled (OpenSSL 0.9.8o) and use HTTPS for the uploads, the throughput is around 13 Mbps, but when I disable it and just use HTTP I get around 80 Mbps. I can't understand why this is happening, as my understanding was that the heavy SSL work was done at the beginning of the connection. Does anyone have any idea why the performance is so drastically affected by enabling SSL? Cheers.


  • Help: My SSL Certificate expired but I can't renew this weekend, is there a way I can disable SSL through IIS?

    - by shogun
    Or perhaps force a redirect to HTTP when they request HTTPS? I tried removing the SSL port settings under 'Advanced', but it broke the login page. I don't want to do a deploy/recompile right now, but I am able to edit the views. However, I am thinking there may be a way for IIS to do this, but then again I think the .NET code tries to force SSL. Crap. This is bull crap, because I was bugging people about getting the new certificate since December and they were like "oh, we will take care of it"... I think it may have even gotten to the point where they were getting annoyed with me hassling them about it! (Rant over.) And yes, the key point is that the browser's security error will cause more support tickets than removing SSL entirely for one day would, because users would very likely not even notice it being gone.


  • Apache HTTP Not Working When SSL Enabled

    - by dominic7il
    I've got a very bizarre problem: after enabling SSL support in Apache, I'm only able to access my site via SSL and not through http as well. I can confirm that Apache is definitely listening on both ports 80 and 443 (according to netstat). Additionally, the Apache access logs are showing the requests; it's just that going in through http results in a timeout and I'm never actually able to reach the content. Like I said, going through https works. Here is my httpd.conf: http://pastebin.com/kG2dPjJ2 and here is my httpd-ssl.conf: http://pastebin.com/thqvjgGJ. Can anyone spot any issues with those configurations, or have any suggestions at all? I've searched and searched, but there appear to be very few people who have experienced the same. Also worth mentioning that I did a comparison between those configurations and those of a working setup and couldn't spot anything.


  • Setting up a transparent SSL proxy

    - by badunk
    I've got a Linux box set up with 2 network cards to inspect traffic going through port 80. One card is used to go out to the internet; the other one is hooked up to a network switch. The point is to be able to inspect all HTTP and HTTPS traffic on devices hooked up to that switch for debugging purposes. I've written the following rules for iptables:

      nat
      -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
      -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
      -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On 192.168.2.1:1337, I've got a transparent http proxy using Charles (http://www.charlesproxy.com/) for recording. Everything's fine for port 80, but when I add similar rules for port 443 (SSL) pointing to port 1337, I get an error about an invalid message through Charles. I've used SSL proxying on the same computer before with Charles (http://www.charlesproxy.com/documentation/proxying/ssl-proxying/), but have been unsuccessful doing it transparently for some reason. Some resources I've googled say it's not possible; I'm willing to accept that as an answer if someone can explain why. As a note, I have full access to the described setup, including all the clients hooked up to the subnet, so I can accept self-signed certs generated by Charles. The solution doesn't have to be Charles-specific, since in theory any transparent proxy will do. Thanks!

    Edit: After playing with it a little, I was able to get it working for a specific host. When I modify my iptables to the following (and open 1338 in Charles for reverse proxy):

      nat
      -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
      -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
      -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.2.1:1338
      -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 1338
      -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    ...I am able to get a response, but with no destination host. In the reverse proxy, if I just specify that everything from 1338 goes to the specific host I wanted to hit, it performs the handshake properly and I can turn on SSL proxying to inspect the communication. The setup is less than ideal, because I don't want to assume everything from 1338 goes to that host. Any idea why the destination host is being stripped? Thanks again.
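    The destination "disappears" because DNAT/REDIRECT rewrite the packet's destination address before it ever reaches the proxy, and with HTTPS there is no Host header the proxy can read before it has to pick an upstream. On Linux, a truly transparent proxy recovers the original destination from the kernel with the SO_ORIGINAL_DST socket option; a minimal sketch in Python (assuming a Linux host and a proxy you control; whether Charles can make use of this is a separate question):

      import socket
      import struct

      SO_ORIGINAL_DST = 80  # from linux/netfilter_ipv4.h; not exported by the socket module

      def original_destination(client_sock):
          """Return the (ip, port) the client really asked for, pre-DNAT."""
          dst = client_sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
          port = struct.unpack("!H", dst[2:4])[0]
          ip = socket.inet_ntoa(dst[4:8])
          return ip, port

    With that, the proxy can open its own TLS connection to the real destination (or pick the right forged certificate) instead of hard-coding a single upstream host for everything arriving on port 1338.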


  • ISAPI filter with LDAP over SSL only works as administrator

    - by Zac
    I have created an ISAPI filter for IIS 6.0 that tries to authenticate against Active Directory using LDAP. The filter works fine when authenticating regularly over port 389, but when I try to use SSL, I always get the 0x51 Server Down error at the ldap_connect() call. Even skipping the connect call and using ldap_simple_bind_s() results in the same error. The weird thing is that if I change the app pool identity to the local admin account, then the filter works fine and LDAP over SSL is successful. I created an exe with the same code below and ran it on the server as admin and it works. Using the default NETWORK SERVICE identity for the site's app pool is what seems to be the problem. Any thoughts as to what is happening? I want to use the default identity, since I don't want the website to have elevated admin privileges. The server is in a DMZ outside the network and domain where our DCs are that run AD. We have a valid certificate on our DCs for AD as well. Code:

      // Initialize LDAP connection
      LDAP * ldap = ldap_sslinit(servers, LDAP_SSL_PORT, 1);
      ULONG version = LDAP_VERSION3;
      if (ldap == NULL) {
          strcpy(error_msg, ldap_err2string(LdapGetLastError()));
          valid_user = false;
      } else {
          // Set LDAP options
          ldap_set_option(ldap, LDAP_OPT_PROTOCOL_VERSION, (void *) &version);
          ldap_set_option(ldap, LDAP_OPT_SSL, LDAP_OPT_ON);
          // Make the connection
          ldap_response = ldap_connect(ldap, NULL); // <-- Error occurs here!
          // Bind and continue...
      }

    UPDATE: I created a new user without admin privileges and ran the test exe as the new user, and I got the same Server Down error. I added the user to the Administrators group and got the same error as well. The only user that seems to work with LDAP over SSL authentication on this particular server is administrator. The web server with the ISAPI filter (and where I've been running the test exe) is running Windows Server 2003. The DCs with AD on them are running 2008 R2. Also worth mentioning: we have a WordPress site on the same server that authenticates against LDAP over SSL using PHP (OpenLDAP), and there's no problem there. I have an ldap.conf file that specifies TLS_REQCERT never, and the user running the PHP code is IUSR.


  • nginx crashes on ssl after about a minute

    - by Scott
    Here are my configuration files.

    ssl.conf:

      # HTTPS server
      #
      server {
          listen 443 ssl;
          server_name api.domain.com;

          error_log /var/log/nginx/api.error.log;

          location / {
              root /var/www/api.domain.com;
              index index.html index.php index.php;
              try_files $uri $uri/ /index.php?$args;
          }

          ssl on;
          ssl_certificate /etc/nginx/api.domain.com.crt;
          ssl_certificate_key /etc/nginx/api.domain.com.key;
          ssl_session_timeout 5m;
          ssl_protocols SSLv2 SSLv3 TLSv1;
          ssl_ciphers HIGH:!aNULL:!MD5;
          ssl_prefer_server_ciphers on;

          # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
          location ~ \.php$ {
              # root html;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_split_path_info ^(.+\.php)(.*)$;
              fastcgi_param SCRIPT_FILENAME /var/www/api.domain.com$fastcgi_script_name;
              fastcgi_param HTTPS on;
              include fastcgi_params;
          }

          location ~ /\.ht {
              deny all;
          }
      }

    nginx.conf:

      user nginx;
      worker_processes 1;

      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          include /etc/nginx/mime.types;
          default_type application/octet-stream;

          log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';

          access_log /var/log/nginx/access.log main;

          sendfile on;
          #tcp_nopush on;

          #keepalive_timeout 0;
          keepalive_timeout 65;

          gzip on;

          include /etc/nginx/conf.d/*.conf;
      }

    I have a server running on port 80 that runs with no issues. As soon as I turn on this api server running on ssl, it works for about a minute and then crashes and gives a 504 Gateway Time-out. Running nginx/1.2.3.
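    One thing worth noting about that 504: nginx returns it when the upstream (here PHP-FPM on 127.0.0.1:9000) stops answering within the FastCGI timeouts, so the nginx error log and the PHP-FPM status are the first places to look rather than the SSL settings themselves. If the backend is merely slow, raising the FastCGI timeouts inside the PHP location block is a cheap experiment (values below are illustrative, not recommendations):

      fastcgi_connect_timeout 60s;
      fastcgi_send_timeout    120s;
      fastcgi_read_timeout    120s;

    If the 504s keep coming regardless of the timeouts, the problem is more likely in the PHP-FPM pool (exhausted workers, crashes) than in the ssl.conf shown above.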


  • Multiple SSL Certificates Running on Mac OS X 10.6

    - by frodosghost.mp
    I have been running into walls with this for a while, so I posted at Stack Overflow and was pointed over here... I am attempting to set up multiple IP addresses on Snow Leopard so that I can develop with SSL certificates. I am running XAMPP; I don't know if that is the problem, but I guess I would run into the same problems considering the built-in Apache is turned off. So first up I looked into starting up the IPs on boot. I got up and running with a new StartupItem that runs correctly, because I can ping the IP addresses:

      ping 127.0.0.2
      ping 127.0.0.1

    And both of them work. So now I have IP addresses, which as you may know are not standard on OS X. I edited the /etc/hosts file to include the new sites too:

      127.0.0.1 site1.local
      127.0.0.2 site2.local

    I had already changed httpd.conf to use httpd-vhosts.conf, because I had a few sites running on the one IP address. I have edited the vhosts file so a site looks like this:

      <VirtualHost 127.0.0.1:80>
          DocumentRoot "/Users/jim/Documents/Projects/site1/web"
          ServerName site1.local
          <Directory "/Users/jim/Documents/Projects/site1">
              Order deny,allow
              Deny from All
              Allow from 127.0.0.1
              AllowOverride All
          </Directory>
      </VirtualHost>

      <VirtualHost 127.0.0.1:443>
          DocumentRoot "/Users/jim/Documents/Projects/site1/web"
          ServerName site1.local
          SSLEngine On
          SSLCertificateFile "/Applications/XAMPP/etc/ssl-certs/myssl.crt"
          SSLCertificateKeyFile "/Applications/XAMPP/etc/ssl-certs/myssl.key"
          SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
          <Directory "/Users/jim/Documents/Projects/site1">
              Order deny,allow
              Deny from All
              Allow from 127.0.0.1
              AllowOverride All
          </Directory>
      </VirtualHost>

    In the above code, you can change the 1's to 2's and it is the setup for the second site. They do use the same certificate, which is why they are on different IP addresses. I also included the NameVirtualHost information at the top of the file:

      NameVirtualHost 127.0.0.1:80
      NameVirtualHost 127.0.0.2:80
      NameVirtualHost 127.0.0.1:443
      NameVirtualHost 127.0.0.2:443

    I can ping site1.local and site2.local. I can use telnet (telnet site2.local 80) to get into both sites. But in Safari I can only get to the first site, site1.local; navigating to site2.local gives me either the localhost main page (which is included in the vhosts) or an "Access forbidden!" error. I am unsure what to do; any suggestions would be awesome.
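    Since everything hinges on the second loopback address, it may be worth confirming exactly what the StartupItem does and what Apache is bound to. On OS X the extra loopback IP normally has to be added as an alias on lo0, and the lsof check below is just one way to see whether httpd is actually listening on it (both commands are a sketch, not a confirmed fix):

      # Add the loopback alias (this is presumably what the StartupItem wraps)
      sudo ifconfig lo0 alias 127.0.0.2 up

      # Confirm httpd is listening on 443 for both addresses
      sudo lsof -iTCP:443 -sTCP:LISTEN

    If httpd only shows up on 127.0.0.1:443, or the 127.0.0.2 vhost never matches, falling through to the default localhost vhost is exactly the Safari behaviour described above.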


  • DNS CNAME - SSL-certificate issue.

    - by Phoibe
    Hey, I have obtained an SSL certificate from Thawte for domain.com. My infrastructure has since changed due to heavy load: mx.domain.com is the SMTP relay, storage.domain.com is the mail storage, and domain.com points at the web server. Each of these is hosted on its own dedicated/virtual server with an individual IP. I do not want to put the web server on the mail-storage machine for security reasons, but I do want to use my SSL certificate for the mail storage (POP3S/IMAPS). Is that possible, or how do I solve that issue?


  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me too. But it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner. My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy periods serving the pages. I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?


  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on Stack Overflow, but it may be better suited for Server Fault... I'm having a problem running Apache + Subversion with SSL behind an Nginx proxy, and I'm hoping someone might have the answer. I've scoured Google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using Subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and apache config files in question:

    Nginx:

      upstream subversion_hosts {
          server 127.0.0.1:80;
      }

      server {
          listen x.x.x.x:80;
          server_name hostname;

          access_log /srv/log/nginx/http.access_log main;
          error_log  /srv/log/nginx/http.error_log info;

          # redirect all requests to https
          rewrite ^/(.*)$ https://hostname/$1 redirect;
      }

      # HTTPS server
      server {
          listen x.x.x.x:443;
          server_name hostname;

          passenger_enabled on;
          root /path/to/rails/root;

          access_log /srv/log/nginx/ssl.access_log main;
          error_log  /srv/log/nginx/ssl.error_log info;

          ssl on;
          ssl_certificate     server.crt;
          ssl_certificate_key server.key;

          add_header Front-End-Https on;

          location /svn {
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

              set $fixed_destination $http_destination;
              if ( $http_destination ~* ^https(.*)$ ) {
                  set $fixed_destination http$1;
              }
              proxy_set_header Destination $fixed_destination;

              proxy_pass http://subversion_hosts;
          }
      }

    Apache:

      Listen 127.0.0.1:80

      <VirtualHost *:80>
          # in order to support COPY and MOVE, etc - over https (443),
          # ServerName _must_ be the same as the nginx servername
          # http://trac.edgewall.org/wiki/TracNginxRecipe
          ServerName hostname
          UseCanonicalName on

          <Location /svn>
              DAV svn
              SVNParentPath "/srv/svn"
              Order deny,allow
              Deny from all
              Satisfy any
              # Some config omitted ...
          </Location>

          ErrorLog /var/log/apache2/subversion_error.log

          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/subversion_access.log combined
      </VirtualHost>

    From what I could tell while researching this problem, the server name has to match on both the apache server and the nginx server, which I've done. Additionally, this problem seems to stick around even if I change the configuration to use http only.

