Search Results

Search found 45013 results on 1801 pages for 'example'.

  • DKIM passes everywhere apart from Yahoo!

    - by Ian
    Hi, I'm using dkim-milter with Postfix on Ubuntu (I think I used these instructions for setting up). Using reflectors such as Port25, BlackOps and Altn.com, I get passes for DKIM:

        X-DKIM: OpenDKIM Filter v2.0.1 medusa.blackops.org o2SGTMSg005616
        Authentication-Results: medusa.blackops.org; dkim=pass (1024-bit key) [email protected]; dkim-adsp=pass
        dkim=pass header.d=example.com (b=miSIxi7TMX; 1:0:good);
        Authentication-Results: verifier.port25.com header.d=example.com; dkim=pass (matches From: [email protected]);

    Yahoo, however, gives this:

        Authentication-Results: mta1031.mail.ukl.yahoo.com from=; domainkeys=neutral (no sig); from=example.com; dkim=permerror (key failed)

    Where, obviously, example.com is my site's domain. Is anyone aware of anything different about Yahoo! that would stop these signatures from verifying? TIA
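
    For reference, one quick check in a permerror situation like this is to query the public key record exactly as a verifier would; the selector name below is a placeholder for whatever s= value appears in the DKIM-Signature header:

        # replace "selector" with the s= value from your DKIM-Signature header
        dig TXT selector._domainkey.example.com +short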

  • Indirect Postfix bounces create new user directories

    - by hheimbuerger
    I'm running Postfix on my personal server in a data centre. I am not a professional mail hoster and not a Postfix expert; it is just used for a few domains served from that server. IIRC, I mostly followed this howto when setting up Postfix.

    Mail addressed to one of the domains the server manages is delivered locally (to /srv/mail) to be fetched with Dovecot. Mail to other domains requires use of SMTPS. The mailbox configuration is stored in MySQL.

    The problem I have is that I suddenly found new mailboxes being created on the disk. Let's say I have the domain 'example.com'. Then I would have lots of new directories, e.g.:

        /srv/mail/example.com/abenaackart
        /srv/mail/example.com/abenaacton

    There are no entries for these addresses in my database, neither as mailboxes nor as aliases. It's clearly spam with auto-generated names: most of them start with 'a', a few with 'b' and a couple of random ones with other letters.

    At first I was afraid of an attack, but all security restrictions seem to work. If I try to send mail to these addresses, I get a "Recipient address rejected: User unknown in virtual mailbox table" during the RCPT TO stage. So I looked into the mails stored in these mailboxes. It turns out that all of them are bounces. It seems each one was sent from a randomly generated name to an alias that really exists on my system but points to an invalid destination address on another host. So Postfix accepted it and tried to redirect it to the other mail server, which rejected it. The rejection bounced back to my Postfix server, which then took the bounce and stored it locally, because it seemed to originate from one of the addresses it manages.

    Example:

    1. My Postfix server handles the example.com domain.
    2. [email protected] is configured to redirect to [email protected].
    3. [email protected] has since been deleted from the Hotmail servers.
    4. A spammer sends mail with FROM:[email protected] and TO:[email protected].
    5. My Postfix server accepts the mail and tries to hand it off to hotmail.com.
    6. hotmail.com sends a bounce back.
    7. My Postfix server accepts the bounce and delivers it to /srv/mail/example.com/bob.

    The last step is what I don't want. I'm not quite sure what it should do instead, but creating hundreds of new mailboxes on my disk is not it... Any ideas how to get rid of this behaviour? I'll happily post parts of my configuration, but I'm not really sure where to start debugging the problem at this point.
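
    One approach sometimes suggested for this accept-then-bounce pattern (a sketch only, not a tested fix for this setup) is Postfix recipient address verification, which probes the forwarding destination before accepting mail for the alias, so undeliverable forwards are rejected at RCPT time instead of bounced later:

        # /etc/postfix/main.cf (sketch; merge with your existing restrictions)
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination,
            reject_unverified_recipient
        # cache probe results so destinations are not re-verified per message
        address_verify_map = btree:$data_directory/verify_cache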

  • Exim forwards not going out through TLS

    - by Blake
    I'm trying to get Exim to use STARTTLS when sending emails that are just forwards. I have a server accepting email at example-accepting.com for users, and I want [email protected] to forward all email to [email protected]. If I do this from the command line on example-accepting.com...

        echo "test" | mail -s "ssl/tls test" [email protected]

    ...success!! Sent via TLS. BUT, if I send an email to [email protected], the forward fails: it is NOT sent via TLS. I've tried forwarding via both /etc/aliases and the user's .forward file. The email is indeed sent, but NOT via TLS. Why is it that "mail" from the command line works like it should, but a .forward is not using TLS? Thanks
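
    For reference, TLS for outgoing deliveries in Exim is controlled on the smtp transport, regardless of how the message entered the queue; a sketch of the relevant options (assuming a stock remote_smtp transport) looks like this:

        # in the transports section of the Exim configuration (sketch only)
        remote_smtp:
          driver = smtp
          hosts_try_tls = *        # attempt STARTTLS with every host
          # hosts_require_tls = *  # uncomment to fail rather than fall back to plaintext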

  • One domain, dedicated SSL IP on whm

    - by Vanja D.
    It's long, but please read carefully. I am trying to install an SSL certificate on my dedicated server with WHM/cPanel, and I have a dedicated IP to use with the certificate. My main domain is example.com (NOT www.example.com), and I already have an account and website running on it. I bought the certificate for the main domain (example.com, without www). I installed the certificate successfully, using the example.com domain, the dedicated IP, and the same cPanel user which owns example.com (non-SSL). I double-checked ConfigServer for port 443 being open. RESULT: https://example.com won't open, and an SSL check tool returns an "SSL is not configured on this port (443)" error. I have three questions:

    1. Where did I go wrong; what did I miss?
    2. Is it possible to have one domain on two IPs (one for HTTP, one for HTTPS)?
    3. Is it possible to have an SSL host with the same user as the regular one?
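
    One way to narrow this down (a diagnostic sketch; 203.0.113.10 is a placeholder for the actual dedicated IP) is to test the TLS handshake against the dedicated IP directly and see whether anything answers on 443 at all:

        # connect straight to the dedicated IP, presenting the expected hostname
        openssl s_client -connect 203.0.113.10:443 -servername example.com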

  • Web Server Scripting Hack to Maintain State and Keep a Domain Cookieless

    - by jasonspalace
    Hello, I am looking for a solution on a LAMP server to keep a site such as "example.com" cookieless, where static content is served from "static.example.com" and rules are in place to rewrite requests for "www.example.com" to "example.com". I am really hoping to avoid setting up a separate cookieless domain for the static content, due to an unanswered SEO concern with regard to CNAMEing to a CDN. Is there a way, or a safe hack, where a second domain such as "www.example2.com" is CNAMEd, aliased, or otherwise combined with "example.com" to somehow trick a PHP application into maintaining state with a cookie dropped on "www.example2.com", thereby keeping all of "example.com" cookieless? If such a solution is feasible, what implications would exist with regard to SSL and cross-browser compatibility, other than requiring users to accept cookies from third-party domains and possibly needing an additional SSL certificate to keep the cookie secure? Thanks in advance to all.

  • port to subdomain

    - by takeshin
    I have installed Hudson using apt-get, and the Hudson server is available at example.com:8080. For example.com I use the standard port *:80 and some virtual hosts set up this way:

        # /etc/apache2/sites-enabled/subdomain.example.com
        <Virtualhost *:80>
            ServerName subdomain.example.com
            ...
        </Virtualhost>

    Here is info about the Hudson process:

        /usr/bin/daemon --name=hudson --inherit --env=HUDSON_HOME=/var/lib/hudson --output=/var/log/hudson/hudson.log --pidfile=/var/run/hudson/hudson.pid -- /usr/bin/java -jar /usr/share/hudson/hudson.war --webroot=/var/run/hudson/war
        987 ?        Sl     1:08 /usr/bin/java -jar /usr/share/hudson/hudson.war --webroot=/var/run/hudson/war

    How should I forward http://example.com:8080 to http://hudson.example.com?
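
    A common way to publish a service like this under its own hostname (a sketch, assuming mod_proxy and mod_proxy_http are enabled and hudson.example.com already resolves to this server) is a dedicated name-based virtual host that proxies to the Hudson port:

        <VirtualHost *:80>
            ServerName hudson.example.com
            ProxyPreserveHost On
            # pass everything through to Hudson listening on port 8080
            ProxyPass        / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </VirtualHost>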

  • ssl between balancer members?

    - by jemminger
    I have Apache running on one machine as a load balancer:

        <VirtualHost *:443>
            ServerName ssl.example.com
            DocumentRoot /home/example/public
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/example.crt
            SSLCertificateKeyFile /etc/pki/tls/private/example.key
            <Proxy balancer://myappcluster>
                BalancerMember http://app1.example.com:12345 route=app1
                BalancerMember http://app2.example.com:12345 route=app2
            </Proxy>
            ProxyPass / balancer://myappcluster/ stickysession=_myapp_session
            ProxyPassReverse / balancer://myappcluster/
        </VirtualHost>

    Note that the balancer accepts requests on SSL port 443, but then communicates with the balancer members on a non-SSL port. Is it possible for the forwarding to the balancer members to be under SSL too? If so, is this the best/recommended way, and do I have to have another SSL cert for each balancer member? Does the SSLProxyEngine directive have anything to do with this?
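
    For what it's worth, SSLProxyEngine is indeed the directive that allows mod_proxy to open TLS connections to backends; a sketch of the relevant changes (assuming each member terminates TLS itself, here on a hypothetical port 12346 with its own certificate) might look like:

        SSLProxyEngine on
        <Proxy balancer://myappcluster>
            # the scheme changes to https; each member presents its own certificate
            BalancerMember https://app1.example.com:12346 route=app1
            BalancerMember https://app2.example.com:12346 route=app2
        </Proxy>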

  • Setting alias for DynDNS domain

    - by metalball
    Hey all, I've created a DynDNS domain for testing my local sites, and I'm having trouble with pointing the root domain at it. At my registrar (GoDaddy) I've created a CNAME for www pointing to example.dyndns.com, so going to the URL www.example.com reaches my site. But if I go to example.com, I reach the IP of the A record instead. I can't set the IP of the A record to my own IP because I have a dynamic IP that changes constantly, and an A record can only point to an IP, not a domain. When I try to create a CNAME record for @ pointing to example.dyndns.com, I get the error "A record of a different type exists for the hostname @, could not create CNAME". The only records using the '@' host are NS records, which I can't delete, and when I tried to set another NS record for @ pointing to example.dyndns.com, I lost connection to my site :) So what can I do to make the example.com URL reach my site? Thanx!
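
    As an aside, the "record of a different type exists" error reflects the general DNS rule that a CNAME cannot coexist with any other record type at the same name, and the zone apex always carries NS and SOA records. A quick way to see what already occupies the apex (a diagnostic only):

        # list the records currently present at the zone apex
        dig example.com ANY +noall +answer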

  • Nginx $scheme doesn't always work while using SSL for one specific page

    - by jjiceman
    I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo). It is working well for a few of the site administrators but not for myself. I was wondering if anyone has any advice on how to improve this configuration:

        ...
        ssl_certificate example.net.crt;
        ssl_certificate_key example.key;

        server {
            listen 80 default;
            listen 443 ssl;
            server_name www.example.net example.net;
            access_log /srv/www/example.net/logs/access.log;
            error_log /srv/www/example.net/logs/error.log;
            root /srv/www/example.net/public_html;
            index index.php index.html;

            location / {
                if ( $scheme = https ) {
                    rewrite ^ http://example.net$request_uri? permanent;
                }
                try_files $uri $uri/ /index.php?$uri&$args;
                index index.php index.html;
            }

            location ^~ /admin.php {
                if ( $scheme = http ) {
                    rewrite ^ https://example.net$request_uri? permanent;
                }
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
            }

            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS off;
            }
        }
        ...

    It seems that the extra information in the location ^~ /admin.php block is unnecessary; does anyone know an easy way to avoid the duplicate code? Without it, requests skip the PHP block and nginx just returns the raw PHP files. Currently it applies HTTPS correctly in Firefox when I navigate to admin.php; in Chrome, it downloads the admin.php page instead. And when returning to the non-HTTPS site in Firefox, it does not correctly return to HTTP but stays on SSL. Like I said earlier, this only happens for me; the other admins can go back and forth without a problem. Is this an issue on my end that I can fix? And does anyone know of ways to reduce the duplicated options in the configuration? Thanks in advance!
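
    On the duplication question, one common pattern (a sketch; the include file name is arbitrary) is to move the shared FastCGI directives into a separate file and include it from each PHP location, leaving only the HTTPS difference inline:

        # /etc/nginx/php_fastcgi.inc (hypothetical shared snippet)
        try_files $uri /index.php;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        # then, inside the server block:
        location ^~ /admin.php {
            include /etc/nginx/php_fastcgi.inc;
            fastcgi_param HTTPS on;
        }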

  • subdomain/virtualhost problem on unix + apache

    - by Aaron
    Hello, I'm having a strangely difficult time setting up a subdomain (x.example.com). The main site works fine, but I get 404 errors attempting to hit x.example.com no matter how I set up the VirtualHost config:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.example.com
            DocumentRoot /var/www/example.com/htdocs
            ServerAlias example.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerName x.example.com
            ErrorLog /var/logs/x-error-log
            CustomLog /var/logs/x-access-log common
            DocumentRoot /var/www/x/htdocs
        </VirtualHost>

    As far as I can tell, this is a vanilla setup. Any suggestions would be appreciated.
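
    When a name-based vhost falls through like this, it can help to ask Apache how it actually parsed the configuration (a diagnostic, not a fix):

        # dump the parsed virtual host table; x.example.com should appear
        # as a namevhost under *:80
        apachectl -S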

  • Warning: DocumentRoot * does not exist on Centos 6

    - by user1213807
    My virtual host lines:

        <VirtualHost *:80>
            ServerAdmin webmaster@dummy-host.example.com
            DocumentRoot /www/docs/example.com
            ServerName example.com
            ErrorLog logs/example.com-error_log
            CustomLog logs/example.com.com-access_log common
        </VirtualHost>

    And when I restart Apache (sudo apachectl -k stop) I get this error:

        Warning: DocumentRoot [/www/docs/example.com] does not exist

    I've checked some things: all file and directory permissions are OK, everything is 755. I thought this error might be about SELinux, so I disabled it, but that's not working; I still get the same error. How can I fix this problem?
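
    A couple of quick checks that fit this situation (a sketch; paths taken from the config above):

        # confirm the directory really exists as Apache sees it
        ls -ld /www/docs/example.com

        # if SELinux is suspected, check mode and context, then restore defaults
        getenforce
        ls -Zd /www/docs/example.com
        restorecon -Rv /www/docs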

  • 403 in Response to OPTIONS when updating working copy having full access

    - by user23419
    There is an SVN repository (a single repository) at http://example.net/svn. The repository contains several projects (directories):

        http://example.net/svn/Project1
        http://example.net/svn/Project2

    A user has full access to the Project1 directory and no access to either the root or Project2. Everything works fine for a while: the user checks out http://example.net/svn/Project1, commits and updates it successfully. But sometimes trying to update leads to the following error:

        Command: Update
        Error: Server sent unexpected return value (403 Forbidden) in response to OPTIONS
        Error: request for 'http://example.net/svn'
        Finished!

    Why does TortoiseSVN request something in the root??? I have noticed that this happens after somebody else commits a copy or move operation. Checking out http://example.net/svn/Project1 again helps, until the next time... The main question: how do I set up access rights for the user to avoid these errors? Note that it is not an option to grant the user any read or write access to the root directory, for security reasons.
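
    For context, path-based restrictions of the kind described are usually expressed in mod_authz_svn's authz file; a sketch of the layout as described (section names assume the AuthzSVNAccessFile format for a single repository) looks like:

        [/]
        * =

        [/Project1]
        user = rw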

  • Space a valid delimiter for email addresses in email header?

    - by semanticalo
    Is it syntactically correct to delimit multiple email recipients in the "To" header of an email with spaces only, or do I need to use another delimiter (a semicolon or the like)? Example (the MIME data reads as follows):

        Date: Mon, 04 Oct 2010 06:14:16 +0200
        From: [email protected]
        To: [email protected] [email protected] [email protected]
        Subject: Test Subject

    The above is processed fine by many email applications, but I need to know whether it's correct according to the standard (RFC). Unfortunately I haven't found anything useful on the internet so far. Thanks a million for your help!
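
    For reference, RFC 5322 defines the To field as a comma-separated address-list, so a standards-conformant version of the header above would be:

        To: [email protected], [email protected], [email protected]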

  • How to setup multiple Apache SSL sites using multiple IP addresses

    - by Jeff
    How do you set up a single Apache2 config to host multiple HTTPS sites, each on its own IP address? There will also be multiple HTTP sites on just a single IP address. I do not want to use Server Name Indication (SNI) as described here, and I'm only concerned with the important top-level Apache directives; that is, I just need to know the skeleton of how my config should look. The basic setup looks like this:

        Hosted on 1.1.1.1:80 (HTTP)
         - example.com
         - example.net
         - example.org

        Hosted on 2.2.2.2:443 (HTTPS)
         - secure.com

        Hosted on 3.3.3.3:443 (HTTPS)
         - secure.net

        Hosted on 4.4.4.4:443 (HTTPS)
         - secure.org

    And here are the important config directives I have so far, which is the closest I've come to a working iteration, but still no dice. I know I'm close, I just need a little push in the right direction.

        Listen 1.1.1.1:80
        Listen 2.2.2.2:443
        Listen 3.3.3.3:443
        Listen 4.4.4.4:443

        NameVirtualHost 1.1.1.1:80
        NameVirtualHost 2.2.2.2:443
        NameVirtualHost 3.3.3.3:443
        NameVirtualHost 4.4.4.4:443

        # HTTP VIRTUAL HOSTS:

        <VirtualHost 1.1.1.1:80>
            ServerName example.com
            DocumentRoot /home/foo/example.com
        </VirtualHost>

        <VirtualHost 1.1.1.1:80>
            ServerName example.net
            DocumentRoot /home/foo/example.net
        </VirtualHost>

        <VirtualHost 1.1.1.1:80>
            ServerName example.org
            DocumentRoot /home/foo/example.org
        </VirtualHost>

        # HTTPS VIRTUAL HOSTS:

        <VirtualHost 2.2.2.2:443>
            ServerName secure.com
            DocumentRoot /home/foo/secure.com
            SSLEngine on
            SSLCertificateFile /home/foo/ssl/secure.com.crt
            SSLCertificateKeyFile /home/foo/ssl/secure.com.key
            SSLCACertificateFile /home/foo/ssl/ca.txt
        </VirtualHost>

        <VirtualHost 3.3.3.3:443>
            ServerName secure.net
            DocumentRoot /home/foo/secure.net
            SSLEngine on
            SSLCertificateFile /home/foo/ssl/secure.net.crt
            SSLCertificateKeyFile /home/foo/ssl/secure.net.key
            SSLCACertificateFile /home/foo/ssl/ca.txt
        </VirtualHost>

        <VirtualHost 4.4.4.4:443>
            ServerName secure.org
            DocumentRoot /home/foo/secure.org
            SSLEngine on
            SSLCertificateFile /home/foo/ssl/secure.org.crt
            SSLCertificateKeyFile /home/foo/ssl/secure.org.key
            SSLCACertificateFile /home/foo/ssl/ca.txt
        </VirtualHost>

    For what it's worth, I prefer to have each of my SSL sites on its own IP instead of including one of them on the primary VHOST IP. Any links showing a standard setup would be more than welcome!

  • Linux - Create ftp account with read/write access to only 1 folder

    - by Gublooo
    Hey guys.... I have never worked on Linux and don't plan on working on it either - the only command I probably know is "ls" :) I am hosting my website on Eapps and use their cPanel to set everything up, so I've never worked with Linux directly. Now I have this one-time case where I need to provide access to a contractor to fix the CSS issues on my website. He basically needs FTP (read/write) access to certain folders. At a high level, this is my code structure:

        /home/webadmin/example.com/html/images
                                       /css
                                       /js
                                       /login.php
                                       /facebook.php
        /home/webadmin/example.com/application/library
                                              /views
                                              /models
                                              /controllers
                                              /config
                                              /bootstrap.php
        /home/webadmin/example.com/cgi-bin

    I want the new user to have access to only these folders:

        /home/webadmin/example.com/html/js
        /home/webadmin/example.com/html/css
        /home/webadmin/example.com/application/views

    He should not be able to view even the content of the other folders, including files like bootstrap.php or login.php. If any sysadmins can help me set this account up, I will really appreciate it. Thanks
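
    One possible approach (a sketch, assuming shell access and a vsftpd-style chrooted FTP setup; Eapps specifics may differ, and the user name "cssfix" is made up) is to create a locked-down FTP user and expose only those three directories inside the user's home via bind mounts:

        # create an FTP-only user with no shell login
        useradd -m -s /usr/sbin/nologin cssfix
        passwd cssfix

        # expose exactly the three directories inside the user's home
        mkdir -p /home/cssfix/css /home/cssfix/js /home/cssfix/views
        mount --bind /home/webadmin/example.com/html/css /home/cssfix/css
        mount --bind /home/webadmin/example.com/html/js /home/cssfix/js
        mount --bind /home/webadmin/example.com/application/views /home/cssfix/views

        # in /etc/vsftpd.conf, keep the user jailed to the home directory:
        #   chroot_local_user=YES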

  • Http to https behavior for visits from Internet Explorer client

    - by Emile
    My website has an SSL cert (example URL: https://subdomain.example.com). Under Apache it's set up for both port 80 and port 443, so under the following configuration anyone who goes to http://subdomain.example.com is sent to https://subdomain.example.com. But for visits from Internet Explorer the redirect doesn't happen; instead, HTTP visits get an "Internet Explorer cannot display the web page" error with a list of client-side solutions to try. Any ideas on how to fix the config so IE visits behave the same as the other browsers (that is, send HTTP to HTTPS automatically)?

        NameVirtualHost *:443

        <VirtualHost *:80>
            DocumentRoot /var/www/somewebroot
            ServerName subdomain.example.com
        </VirtualHost>

        <VirtualHost *:443>
            DocumentRoot /var/www/somewebroot
            ServerName subdomain.example.com
            # SSL CERTS HERE
        </VirtualHost>

    *Tested IE8, IE9 beta
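
    Note that the port-80 vhost above contains no redirect of its own, so whatever redirect the non-IE browsers are following may be coming from the application. An explicit server-side redirect (a sketch using the stock mod_alias directive) would make the behaviour browser-independent:

        <VirtualHost *:80>
            ServerName subdomain.example.com
            # send all plain-HTTP traffic to the HTTPS site
            Redirect permanent / https://subdomain.example.com/
        </VirtualHost>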

  • Create a AD-LDS partition under a child of the primary partition

    - by ixe013
    I have an AD LDS instance running on Server 2008 R2, with this application partition created at installation:

        dc=enterprise,dc=example,dc=com

    I have successfully followed this procedure to create application partitions. They are named:

        cn=stuff,dc=enterprise,dc=example,dc=com
        cn=things,dc=enterprise,dc=example,dc=com

    If I configure my client(s) to follow referrals, I can search from dc=enterprise,dc=example,dc=com and find objects under cn=stuff and cn=things. How can I create (or move after the fact) the stuff and things partitions so they are logically located under an OU below the initial partition, ending up with something like:

        cn=stuff,ou=applications,dc=enterprise,dc=example,dc=com
        cn=things,ou=applications,dc=enterprise,dc=example,dc=com

  • Nginx Rewrite to Previous Directory

    - by ThinkBohemian
    I am trying to move my blog from blog.example.com to example.com/blog. To do this I would rather not move anything on disk, so instead I changed my nginx configuration file to the following:

        location /blog {
            if (!-e $request_filename) {
                rewrite ^.*$ /index.php last;
            }
            root /home/demo/public_html/blog.example.com/current/public/;
            index index.php index.html index.html;
            passenger_enabled off;
            index index.html index.htm index.php;
            try_files $uri $uri/ @blog;
        }

    This works great, but when I visit example.com/blog, nginx looks for:

        /home/demo/public_html/blog.example.com/current/public/blog/index.php

    instead of:

        /home/demo/public_html/blog.example.com/current/public/index.php

    Is there a way to put in a rewrite rule so that the server automatically takes the /blog/ prefix out? Something like:

        location /blog {
            rewrite \\blog\D \;
        }
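
    The usual way to map a URL prefix onto a directory that does not contain that prefix is nginx's alias directive rather than root (a sketch; try_files has known quirks in combination with alias, so test carefully):

        location /blog/ {
            # alias substitutes the matched prefix instead of appending it,
            # so /blog/index.php maps to .../public/index.php
            alias /home/demo/public_html/blog.example.com/current/public/;
            index index.php index.html;
        }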

  • DKIM on postfix relay server

    - by Danijel Krmar
    I have a Postfix/amavis relay server with the domain name mail.example.com. It will be a relay for dozens of VPSs, which will have domains like hostname.example.net. So I actually have two questions.

    Is it possible to DKIM-sign the mails originating from the VPSs on the Postfix relay server, or do the mails have to be signed on the VPSs where they actually originate? And would an amavis configuration like this be OK?

        # DKIM key
        dkim_key('example.com', 'dkim', '/var/dkim/DKIMkey.pem');

        # Cover subdomains in @dkim_signature_options_bysender_maps:
        @dkim_signature_options_bysender_maps = ( {
            # Cover subdomains of example.net.
            '.example.net' => { d => 'example.com' },
        });

    Or have I misunderstood the whole concept? Do I even need to sign subdomains if they go through a relay server, or is it enough to just sign with the relay server's domain?

  • How to combine wildcards and spaces (quotes) in an Windows command?

    - by Jan Fabry
    I want to remove directories of the following format:

        C:\Program Files\FogBugz\Plugins\cache\[email protected]_NN

    NN is a number, so I want to use a wildcard (this is part of a post-build step in Visual Studio). The problem is that I need to combine quotes around the path name (for the space in Program Files) with a wildcard to match the end of the path. I already found out that rd is the remove command that accepts wildcards, but where do I put the quotes? I have tried no ending quote (which works for dir), ...example.com*", ...example.com"*, ...example.com_??", ...cache\"[email protected]*, and ...cache"\[email protected]*, but none of them work. (How many commands to remove a file/directory are there in Windows, anyway? And why do they all differ in capabilities?)
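
    For what it's worth, a common workaround when a command will not expand a wildcard itself is to let for /d do the globbing and quote each match individually (a sketch; use a single % instead of %% if running it directly at a prompt rather than from a batch file or build step):

        for /d %%D in ("C:\Program Files\FogBugz\Plugins\cache\[email protected]_*") do rd /s /q "%%D"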

  • Why my dns server ip got blacklisted instead of my email server ip?

    - by Khurram Masood
    We are hosting our own DNS server. Our scenario is as follows:

        DNS server IP: a.b.c.1
            ns1.example.com      - reverse lookup to a.b.c.1

        Mail server IP: a.b.c.2
            mail.example.com     - reverse lookup to a.b.c.2
            smtp.example.com     - no reverse lookup
            pop.example.com      - no reverse lookup

        Web server IP: a.b.c.3
            example.com          - reverse lookup to a.b.c.3
            www.example.com      - no reverse lookup

    A few days back our DNS server IP got blacklisted and all our services were down from outside. We had also added a new DNS server on a separate network, which caused our domain and machines with the same names as above to resolve to different IPs; can this be a cause of being blacklisted? All the blacklists point towards spamming. Can anyone please explain why my DNS IP got blacklisted instead of my email or web server IP?

  • Apache virtual host documentroot in other folders

    - by giuseppe
    I am trying to set up a couple of VirtualHosts in my Apache, but I would like to put the DocumentRoot of these virtual hosts in folders outside the basic www folder. Whatever I do, I always get "Permission denied". My httpd.conf follows:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerAdmin webmaster@dummy-host.example.com
            DocumentRoot /home/giuseppe/www
            ServerName www.example.com/www
            ErrorLog logs/host.www.projects-error_log
            CustomLog logs/dummy-host.example.com-access_log common
            <Directory "/home/giuseppe/www">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@dummy-host.example.com
            DocumentRoot /home/developper
            ServerName www.example.com
            ErrorLog logs/host.developper-error_log
            CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>
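
    Two things commonly bite in this exact setup (a sketch; paths taken from the config above): Apache needs search (execute) permission on every directory leading down to a DocumentRoot under /home, and the second vhost has no <Directory> block granting access at all:

        # let the web server traverse the home directories (execute bit only)
        chmod o+x /home /home/giuseppe /home/developper

        # and grant access to the second DocumentRoot in httpd.conf:
        <Directory "/home/developper">
            Options Indexes FollowSymLinks
            Order allow,deny
            Allow from all
        </Directory>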

  • Route multiple subdomains on one external ip to multiple internal ips

    - by Abenil
    I have several subdomains (git.example.org, build.example.org, etc.), a router with an external IP, and several virtual machines on a host computer with internal IPs. Now I want to route git.example.org to the internal IP 10.0.2.1 and build.example.org to the internal IP 10.0.2.2. How can I do this?

    I set up the router so that all traffic on port 80 goes to my host computer with the internal IP 10.0.2.3, and installed Squid on that computer. I added the following lines to the squid.conf file:

        cache_peer 10.0.2.1 parent 80 0 no-query originserver name=server_1
        cache_peer_domain server_1 git.example.org

        cache_peer 10.0.2.2 parent 80 0 no-query originserver name=server_2
        cache_peer_domain server_2 build.example.org

    But this is not working for me. :( Any help appreciated. Regards, Nils

    Update: Here is the solution for Apache: http://serverfault.com/a/273693
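
    One detail that is easy to miss in a Squid reverse-proxy setup like this (a sketch, untested against this exact configuration) is that Squid must also listen in accelerator mode and explicitly allow the traffic:

        # listen on port 80 as a reverse proxy, routing on the Host header
        http_port 80 accel vhost

        acl our_sites dstdomain git.example.org build.example.org
        http_access allow our_sites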
