Search Results

Search found 6253 results on 251 pages for 'apache2 ssl'.


  • Debug HTTPS errors

    - by Murkin
    Hello everyone. When I access my site, the SSL session is established successfully while the page loads. A few seconds after the page has loaded, Firefox indicates that SSL is no longer available. I am guessing it's some script (all I have is Google Analytics and Facebook). How can I see what caused Firefox (or IE/Chrome) to drop the SSL indicator, and why?
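
    A quick way to check the usual suspect, mixed content, is to list any plain-http:// resources the HTTPS page references. This is only a sketch; www.example.com stands in for the real site, and resources injected by JavaScript at runtime will not show up here, so the browser's error console is still worth checking.

        # Fetch the page over HTTPS and list any http:// URLs embedded in the markup
        curl -s https://www.example.com/ | grep -o 'http://[^"]*' | sort -u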

  • Running HTTP and HTTPS connections for a single domain (say, www.example.com) through a Cisco ACE SS

    - by Paddu
    My web application setup has a Cisco ACE load balancing across a server farm, and I want to use the ACE as an SSL endpoint as well. To make this work, the network architect has come up with a design where all secure pages have to be served from secure.my-domain.com, while non-secure pages are served from www.my-domain.com. The reason for this is apparently that configuring the Cisco ACE to accept HTTPS requests on port 443 for a particular public IP prevents the simultaneous acceptance of HTTP requests on port 80 for the same IP. While I'm not a networking (or Cisco) expert, this seems intuitively wrong, as it would prevent any website behind a Cisco ACE from serving pages on http://www.my-domain.com and https://www.my-domain.com simultaneously. In this situation, my questions are:
    1. Is this truly a limitation of the Cisco ACE when used as an SSL endpoint?
    2. If not, can I assume that we can set up the ACE to accept connections for a particular IP on ports 80 and 443, and have it function as an SSL endpoint for the incoming requests on 443? Links to appropriate documentation are most welcome here.
    3. Assuming the setup in question 2, can I then redirect both sets of requests to the same server farm on the same port?

  • Two servers, two domains, one IP: mod_proxy beginner

    - by Gutsav
    I run two virtual web servers (both running Apache2 on Debian). I have just one external IP but two domains, and I want one domain going to each of the servers. I've understood that I need a reverse proxy, and I enabled both the mod_proxy and the mod_proxy_http modules on the "primary server". Do I need to enable anything on the "secondary server"? I also understood that I need to add something to a virtual host file, but what? On the primary server I have a virtual host file for one of the domains, and some for subdomains. I want domain1.tld to go to the primary server (port 80 is forwarded to it, so that works) and domain2.tld to go to the other server (internal IP 192.168.0.x). No ports need to be forwarded to it, right? So, what do I add, and in which virtual host file? Or a new one? Other questions suggest adding ProxyPass and ProxyPassReverse, but I'm lost anyway, and I just don't understand the Apache documentation. Thanks in advance.
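
    For what it's worth, a minimal sketch of the kind of virtual host this usually means, placed on the primary server (assuming name-based virtual hosting is already active, and with 192.168.0.x standing in for the secondary server's LAN address; adjust the names to the real domains):

        <VirtualHost *:80>
            ServerName domain2.tld
            ServerAlias www.domain2.tld

            # Pass the original Host header through and relay every request
            # for this name to the secondary server on the LAN
            ProxyPreserveHost On
            ProxyPass        / http://192.168.0.x/
            ProxyPassReverse / http://192.168.0.x/
        </VirtualHost>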

  • HTTP request hangs for exactly 150 seconds, then gives incomplete response. How do I find out wh

    - by Nathan
    I am hosting a WordPress blog and having a strange problem. When I connect to the server (http://71.65.199.125/ at the time of this writing) it displays the title correctly and half of a download bar, indicating it has received some of the page; then it hangs for exactly 150 seconds (timed it twice), then it sends the rest of the page, but without the stylesheet. After that it hangs indefinitely, continuing to say "connecting..." without making any progress. If you have any clues as to what might be happening, or how I could print debug logs from PHP or something to see what it is looking for during that hang time, that would probably help. Recent changes I have made: switched WordPress themes (however, I did see it work once with the new theme); moved the server to another building, with an identical ISP and Linksys router forwarding setup; added a favicon.gif file to /var/www without linking to it from any of the WordPress pages; and had an unanticipated power interruption. System info: Ubuntu 9.04, Apache2, PHP 5, WordPress 2.9.2. Thank you.
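
    One way to narrow down where those 150 seconds go is to time the request phases from another machine while tailing the server logs; this is only a sketch, and the log path assumes the Debian/Ubuntu default:

        # Time each phase of the request; a long gap before the first byte
        # points at the server side (PHP) rather than the network
        curl -o /dev/null -s -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' http://71.65.199.125/

        # In another terminal on the server, watch what Apache/PHP log during the stall
        tail -f /var/log/apache2/error.log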

  • Apache Bench reports different results for the same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up.
    1) If I run: ab -n 15000 http://mysite.com/index.php, Apache Bench returns "Time per request: 41ms", but "Document Length: 0 bytes" and "HTML transferred: 0 bytes", with a transfer rate of 7.9 KB/s.
    2) If I run: ab -n 15000 http://mysite.com/, Apache Bench returns "Time per request: 83ms" along with the expected document length and HTML transferred totals.
    The APC cache status reports identical hit counts for both tests, and Apache Bench reports no errors in either case. Overall there are no errors on any test sites and all logs are clean, etc. The directory index is set to index.php, so I would expect both of these test runs to produce a similar result. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.
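
    A quick sanity check outside ab is to compare what the server actually returns for the two URLs; this sketch just reports the status code and body size for each (mysite.com being the placeholder from the question):

        curl -s -o /dev/null -w '%{http_code}  %{size_download} bytes\n' http://mysite.com/index.php
        curl -s -o /dev/null -w '%{http_code}  %{size_download} bytes\n' http://mysite.com/

    If /index.php comes back with a redirect or an empty body here as well, the discrepancy lies in the server configuration rather than in ab.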

  • Pages load in browser fine, but 404 not found reported for the page during the GET on all pages except index

    - by user885983
    I believe this question is more suited to Server Fault (please correct me if not). This issue appears very similar to this question (except there are no 301 Moved Permanently responses for any pages). The domain is yorkshirebadges.co.uk. For example, loading yorkshirebadges.co.uk or yorkshirebadges.co.uk/index.php reports no 404s during network inspection, but every other page (/contact.php, /products.php) reports a 404 Not Found. mod_rewrite is being used on the site; I checked this out but didn't see any obvious errors. It's included below for reference:

        RewriteEngine on
        RewriteRule ^store/material/([^/\.]+)/price/?([^/\.]+)?$ products.php?prodType=$1&price=$2
        RewriteRule ^store/price/?([^/\.]+)?$ products.php?price=$1;
        RewriteRule ^store/material/?([^/\.]+)?$ products.php?prodType=$1
        RewriteRule ^store/([^/\.]+)/?$ products.php?prodCat=$1
        RewriteRule ^store/([^/\.]+)/price/([^/\.]+)$ products.php?prodCat=$1&price=$2
        RewriteRule ^store/Type/?([^/\.]+) products.php?prodType=$1
        RewriteRule ^store/([^/\.]+)/?([^/\.]+)?$ view-product-details.php?cat=$1&prodName=$2
        RewriteRule ^store/([^/\.]+)/material/?([^/\.]+)?$ products.php?prodCat=$1&prodType=$2
        RewriteRule analytics http://www.google.com/analytics

        <IfModule mod_suphp.c>
        suPHP_ConfigPath /home/yorkshir
        <Files php.ini>
        order allow,deny
        deny from all
        </Files>
        </IfModule>

    Chrome's network inspection (and Firebug on Firefox) reports 404s on all pages except the index; the server is Apache2. Really scratching my head on this one!
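
    If the server is still on Apache 2.2, turning on the rewrite log is one way to see which rule (if any) is producing the 404 status; a sketch of the directives, which go in the server or virtual host configuration rather than in .htaccess (they were removed in Apache 2.4, which uses "LogLevel alert rewrite:trace3" instead):

        RewriteLog      /var/log/apache2/rewrite.log
        RewriteLogLevel 3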

  • Rails app returns HTTP 422 for new ServerAlias - Internet Explorer only

    - by Snips
    I have a long-standing Rails app running on Mac OS X (Apache2). The setup uses Apache virtual hosts and Passenger, and the Rails app also uses HTTP Basic Authentication. I need to migrate the app from one domain to another, with an overlap period during which both domain names are accessible simultaneously. To do this, I've added the new domain name as a ServerAlias of the existing domain name in the Passenger virtual host config. I can now browse the Rails app under both the legacy URL and the new URL from any of Safari, Chrome, Firefox, or Internet Explorer. I can also POST updates to the Rails app using Safari, Chrome, or Firefox. All good. Except that attempts to post updates from Internet Explorer result in the Rails app rejecting the update; the Rails app log contains the message ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken). I have other domains and aliases working just fine on this same machine. Any suggestions as to what is causing the Rails app to reject posts from IE would be appreciated.

  • Apache can't be viewed from outside my LAN

    - by Javier Martinez
    I fixed it in the PORTS TRIGGER menu of my router. Thank you anyway.

    I have a weird problem related to (I think) my cable router and my configured vhosts in Apache2. The point is that I can't access any of my configured vhosts from outside my LAN if I set the Apache HTTP port to 80 and add a NAT rule for it. If instead I set my Apache port to 81 (or anything else), with its respective NAT rule on my router, it works. My router is an ARRIS TG952S and I am using Apache/2.2.22 (Debian).

    ports.conf:

        NameVirtualHost *:80
        Listen 80

    vhost1.mydomain.net.conf:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName vhost1.mydomain.net
            ServerAlias vhost1.mydomain.net www.vhost1.mydomain.net

    vhost2.mydomain.net.conf:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName vhost2.mydomain.net
            ServerAlias vhost2.mydomain.net www.vhost2.mydomain.net

    DNS records (using FreeDNS) are:

        mydomain.net        --> pointing to another server
        vhost1.mydomain.net --> pointing to my server
        vhost2.mydomain.net --> pointing to my server

    iptables -L -n:

        Chain INPUT (policy ACCEPT)
        target                    prot opt source     destination
        fail2ban-apache-noscript  tcp  --  0.0.0.0/0  0.0.0.0/0    multiport dports 80,443
        fail2ban-apache           tcp  --  0.0.0.0/0  0.0.0.0/0    multiport dports 80,443
        fail2ban-ssh              tcp  --  0.0.0.0/0  0.0.0.0/0    multiport dports 22

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

        Chain fail2ban-apache (1 references)
        target  prot opt source     destination
        RETURN  all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-apache-noscript (1 references)
        target  prot opt source     destination
        RETURN  all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-ssh (1 references)
        target  prot opt source     destination
        RETURN  all  --  0.0.0.0/0  0.0.0.0/0

    Thank you.

  • trouble running multiple domains on tomcat behind apache via mod_jk

    - by mkoryak
    I am having trouble setting up Tomcat 6 with two virtual hosts behind Apache2. If I have just one host defined in Tomcat and one JK worker, everything works fine. As soon as I define another JK worker and a corresponding Tomcat host, I get this error in jk.log:

        9:3075328656] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (69.164.218.75:8009) (errno=111)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [error] ajp_send_request::jk_ajp_common.c (1507): (dogself) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [info] ajp_service::jk_ajp_common.c (2447): (dogself) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [error] ajp_service::jk_ajp_common.c (2466): (dogself) connecting to tomcat failed.
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [info] jk_handler::mod_jk.c (2615): Service error=-3 for worker=dogself

    My Tomcat server.xml looks like this:

        <Service name="Catalina">
          <Connector port="8080" protocol="HTTP/1.1"
                     connectionTimeout="20000"
                     URIEncoding="UTF-8"
                     redirectPort="8443" />
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
          <Engine name="Catalina" defaultHost="dogself.com">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                   resourceName="UserDatabase"/>
            <Host name="dogself.com" appBase="webapps-dogself"
                  unpackWARs="true" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            </Host>
            <Host name="nousophia.com" appBase="webapps-test"
                  unpackWARs="true" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            </Host>
          </Engine>
        </Service>

    My workers.properties looks like this:

        # workers.properties - ajp13
        #
        # List workers
        worker.list=dogself,nousophia

        # Define dogself
        worker.dogself.port=8009
        worker.dogself.host=dogself.com
        worker.dogself.type=ajp13

        worker.nousophia.port=8009
        worker.nousophia.host=nousophia.com
        worker.nousophia.type=ajp13

    Tomcat has been started/restarted. I followed these directions for setting it up: http://stackoverflow.com/questions/1765399/linking-apache-to-tomcat-with-multiple-domains - can someone confirm that it would work as above?
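
    Since errno=111 is "connection refused", it may be worth checking, on the machine running Tomcat, which address the AJP connector is actually bound to and whether it is reachable at the address the workers resolve to (both worker hosts resolve to the public IP here). A rough check:

        # Show what is listening on the AJP port and on which address
        sudo netstat -tlnp | grep 8009
        # or, on newer systems:
        sudo ss -tlnp | grep 8009

    If Apache and Tomcat share the same machine and 8009 is only bound to a local address, pointing both workers at localhost is a common setup.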

  • 3 simple questions about file permissions

    - by Camran
    1. I wonder, is this a good setup of permissions in the /var directory?

        drwxr-xr-x  2 root root  4096 2010-05-30 03:34 backups
        drwxr-xr-x  7 root root  4096 2010-05-29 17:55 cache
        drwxr-xr-x 29 root root  4096 2010-05-29 17:55 lib
        drwxrwsr-x  2 root staff 4096 2009-07-14 04:36 local
        drwxrwxrwt  3 root root    60 2010-06-02 03:34 lock
        drwxr-xr-x  9 root root  4096 2010-06-02 03:34 log
        drwxrwsr-x  2 root man   4096 2009-09-20 20:36 mail
        drwxr-xr-x  2 root root  4096 2009-09-20 20:36 opt
        drwxrwxrwt 12 root root   420 2010-06-02 12:12 run
        drwxr-xr-x  4 root root  4096 2009-09-20 20:37 spool
        drwxrwxrwt  2 root root  4096 2009-07-14 04:36 tmp
        drwxr-xr-x 14 user root  4096 2010-05-30 22:21 www

    2. Could you give me a brief explanation of the columns above? The first one is the permissions. The second is a number. The third and fourth say "root root", for example. The fifth is another number (4096, for example), and the others are obvious.

    3. Could you give me a brief explanation of the folders above, especially the "lock" and "tmp" folders? lock contains an apache2 folder which seems empty. Thanks.
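
    As a cross-check on question 2, GNU stat can print the same fields with labels, which makes it easier to see what each ls -l column means (just a sketch, run against any one of the entries):

        # perms = permission bits, links = hard-link count, then owner, group,
        # size in bytes, and the name
        stat -c 'perms=%A links=%h owner=%U group=%G size=%s name=%n' /var/www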

  • Using <VirtualHost> over .htaccess for mod_rewrite

    - by DarkWolffe
    I have a LAMP stack installed on Ubuntu 12.10 with three sites created under /etc/apache2/sites-available, all of which are working. My problem lies in wanting to use those files instead of .htaccess for appending the .php file extension to extensionless URLs. My file currently stands as such:

        # The VGC
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName thevgc.net
            ServerAlias www.thevgc.net
            DocumentRoot /var/www/www
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/www/>
                Options Indexes +FollowSymLinks +MultiViews Includes
                RewriteEngine On
                RewriteBase /
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.*)$ $1.php [L,QSA]
                AddType application/x-httpd-php .php
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    I'm almost certain I'm doing something wrong. All I know is that my .htaccess files refused to append the extension, or rather to find the file with the same name and load that, so I wanted to go about it this way. Any suggestions? Here is an example page from my site.

  • Apache vhosts config: Host Name instead of IP Address

    - by Johe Green
    I have a domain (example.com) hosted at an external provider. I directed the subdomain sub.example.com to my ubuntu server (12.04 with apache2). On my ubuntu server I have a vhost setup like this. The rest of the config is basically apache 2 standard:

        <VirtualHost *:80>
            ServerName sub.example.com
            ServerAlias sub.example.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/sub.example.com

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            WSGIScriptAlias / /home/application/sub.example.com/wsgi.py
            <Directory /home/application/sub.example.com>
                <Files wsgi.py>
                    Order allow,deny
                    Allow from all
                </Files>
            </Directory>
        </VirtualHost>

    When I enter http://sub.example.com in my browser my application shows up fine. But the domain is replaced by the IP address of my server. Do I have to configure my server somewhere else to deliver all its content under my domain sub.example.com?

  • why won't php 5.3.3 compile libphp5.so on redhat ent

    - by spatel
    I'm trying to upgrade to PHP 5.3.3 from PHP 5.2.13. However, the Apache module, libphp5.so, will not be compiled. Below is the output I got, along with the configure options I used. The configure statement is a reduced version of what I normally use.

        './configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs' ...

        *** Warning: inter-library dependencies are not known to be supported.
        *** All declared inter-library dependencies are being dropped.
        *** Warning: libtool could not satisfy all declared inter-library
        *** dependencies of module libphp5.  Therefore, libtool will create
        *** a static module, that should work as long as the dlopening
        *** application is linked with the -dlopen flag.

        copying selected object files to avoid basename conflicts...
        Generating phar.php
        Generating phar.phar
        PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
        clicommand.inc
        pharcommand.inc
        directorytreeiterator.inc
        directorygraphiterator.inc
        invertedregexiterator.inc
        phar.inc
        Build complete.
        Don't forget to run 'make test'.

    PHP 5.2.13 recompiles just fine, so something is up with 5.3.3. Any help would be greatly appreciated!

  • LDAP SSL connect problem

    - by juergen
    I set up a test domain for my LDAP SSL tests and it is not working. I am using Windows Server 2008 R2 SP1. This is how far I have come:
    1. I generated and installed my self-signed certificate on the test domain controller.
    2. On the server I can log in to LDAP over SSL with the MS ldp.exe tool.
    3. Using ldp.exe on a client that is not in this domain, the login fails with error 0x51 = "failed to connect". (I don't have a client computer that is in this domain right now.)
    4. I tested the certificate by using it in IIS on the test server, and I can reach the default page of the test server over SSL (from the client that is not in the domain).
    5. Analysing the traffic between client and server, I can see that the server is sending a certificate to the client.
    Why isn't this working on my client computer?
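
    One more check that can be run from the outside client is to hit the LDAPS port directly and look at the certificate chain the server presents; a sketch (the host name is a placeholder - substitute the name the client actually uses to connect, since a name mismatch or an untrusted self-signed issuer are both common reasons for a failed LDAPS connection):

        openssl s_client -connect testdc.example.local:636 -showcerts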

  • What causes PHP pages to consistently download instead of running normally

    - by Jonathan
    Hi, I'm running an Ubuntu server in a VM to test out different web forum solutions. I have set up ~/public_html/ to be accessible via the Apache2 web server, and that works fine. However, when I go to a .php file in a browser (using my VM's IP address/~username/phpfile.php) it does not display it as it should. Instead it offers to save the file / asks what program to open it with. Interestingly, that dialog box does recognise that it is a PHP file. I have the following version of PHP installed on the system:

        PHP 5.3.2-1ubuntu4.5 with Suhosin-Patch (cli) (built: Sep 17 2010 13:49:46)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    And the following server:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Nov 18 2010 21:19:09

    If anyone knows what might be causing this and potential solutions, it would make me very happy :)

    EDIT: It turns out this behaviour only occurs for files in the ~/public_html/ directory; all PHP files in /var/www/ work fine. Prizes go to whoever can explain why? :D (And by prizes I just mean a well done, no actual prizes I'm afraid.)
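
    One thing worth checking (an assumption based on how the stock Debian/Ubuntu PHP packages of that era are configured, so verify against your own file): /etc/apache2/mods-available/php5.conf ships with a stanza that deliberately switches the PHP engine off under per-user public_html directories, which makes Apache fall back to serving the raw file. Commenting it out and reloading Apache re-enables PHP there.

        # Excerpt as typically shipped in /etc/apache2/mods-available/php5.conf
        <IfModule mod_userdir.c>
            <Directory /home/*/public_html>
                php_admin_value engine Off
            </Directory>
        </IfModule>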

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS, running an Apache2 web server (the site itself is incidentally Python-based, using Django). We recently needed to add support for PHP5 to let us use a third-party PHP library (incidentally, for serving minified versions of JS and CSS files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, the install appears to finish successfully, but a few minutes later the server becomes impossible to connect to - without us taking any further action (we have not yet run sudo apt-get install libapache2-mod-php5, which I think would be the next step if everything worked) and without actually running any PHP scripts on the server. Looking at the 'Monitoring' tab for the server in the EC2 Management Console reveals that a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI image, observed that it was stable, and then tried installing PHP5 again and observed the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?

  • HAProxy crashes on all requests in 1.5-dev12

    - by Daniel Hough
    I'm having an issue where HAProxy crashes with no explanation when I switch from 1.4.12 to 1.5-dev12. The reason I'm switching is the SSL offloading. My config file doesn't have any errors; it's quite simple and it works well with 1.4. But for some reason, when I run it with 1.5-dev12, I see the logs noting that my two backends have been set up, and then when I hit one of the frontends I get an HTTP 400 in the browser and suddenly HAProxy isn't running anymore when I check. I understand that a common workaround for the lack of SSL support in HAProxy is to use Stud, and I may go with that since I need an SSL solution for my service, but before I delve into that world I thought I might see if anybody has experienced the same problems and might know how to fix it. The server is Ubuntu 10.04 and I followed the make instructions on the Exceliance blog here.

    EDIT: On the advice of Kyle Brandt, I did a bit more investigation. I attached gdb to the haproxy process, and when the crash occurred this is what I got:

        Program received signal SIGSEGV, Segmentation fault.
        0x0804e5c2 in dequeue_all_listeners (list=0x9e1a418) at src/protocols.c:184
        184         list_for_each_entry_safe(listener, l_back, list, wait_queue) {

    P.S. HAProxy is awesome, so thank you Exceliance for providing us with something so useful :)

  • apache virtualhost: Auto subdomain with exception

    - by Ineentho
    I've been searching for a way to automatically redirect domains to a specific folder, and found a good answer here on Server Fault: "Apache2 VirtualHost auto subdomain" (the accepted answer). So far everything works well; however, now I need to add an exception to this. The result I want is this:

        http://localhost/         --> E:/websites/
        http://specialDomain2/    --> E:/websites/
        http://normal1.com/       --> E:/websites/normal1.com/
        http://normalDomain.com/  --> E:/websites/normal2.com/

    I get the expected result for the two last domains, but localhost doesn't work. I copied the script from the question above and tried to add something like:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteMap lowercase int:tolower

            # if already rewritten and we have the right path, stop right here
            RewriteRule ^(E:/websites/[^/]+/.*)$ $1 [L]

            RewriteRule ^localhost/(.*)$ E:/websites/$1 [L]   # <-- Added this row

            RewriteRule ^(.+) ${lowercase:%{SERVER_NAME}}$1 [C]
            RewriteRule ^(www\.)?([^/]+)/(.*)$ E:/websites/$2/$3 [L,E=VHOST_ROOT:E:/websites/$2/]
        </VirtualHost>

    I thought this would make sense, since I would translate it to: if the URL is localhost/*, do nothing (because of the [L] flag) and use the default document root specified earlier; else, continue. What's wrong with this? Thanks for any help!

  • Is it bad to redirect http to https?

    - by jasondavis
    I just installed an SSL certificate on my server. I use a web hosting panel called ZPanel, which is an open source project. It then set up a redirect for all traffic on my domain on port 80 to the corresponding page on port 443. In other words, all my http://example.com traffic is now redirected to the appropriate https://example.com version of the page. The redirect is done in my Apache virtual hosts file with something like this:

        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]

    My question is: are there any drawbacks to using SSL? Since this is not a 301 redirect, will I lose link juice/ranking in search engines by switching to HTTPS? I appreciate the help. I have always wanted to set up SSL on a server, just for the practice of doing it, and I finally decided to do it tonight. It seems to be working well so far, but I am not sure if it's a good idea to use this on every page. My site is not eCommerce and doesn't handle sensitive data; it's mainly for looks and the thrill of installing it for learning.

    UPDATED ISSUE: Strangely, Bing creates this screenshot from my site now that it is using HTTPS everywhere...
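
    For what it's worth, if the search-ranking concern is the main one, the usual advice is to make the redirect permanent; a sketch based on the rule above, with an explicit 301 status so search engines treat the https:// URLs as the canonical ones (check how ZPanel regenerates this file before editing it by hand):

        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L]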

  • Accessing a webpage folder with .htaccess in it via apache webdav?

    - by pingo
    I have set up WebDAV access in order to enable an external user to upload the content of his web page to his folder on my server, which is served by Apache to the web. This way he can update his web page via WebDAV. Now the problem is that the user requires a .htaccess file, and of course .htaccess breaks WebDAV, probably because it overrides settings (new files cannot be uploaded any more via WebDAV if the .htaccess below exists). I am running Apache 2.2.17 and this is my WebDAV config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive solution to fixing this problem. The idea was to specify an alias to the web page folder where WebDAV would be enabled, and then set AllowOverride to None so that the .htaccess would have no effect. Of course I then found out that the AllowOverride directive is not valid inside <Location />. The .htaccess file looks like this:

        #opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like to have the web page accessible from the web, but at the same time be able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? Also, if possible, I would like a solution that permits the existence of the .htaccess, so that the user still has the power to set up access rules for his web page.

  • Apache directory access with virtual host

    - by alexeygaidamaka
    I have a virtual host with a configuration like the one below. When I try to get into foobar.com/dir and provide a valid username/password pair, I get a 403 Forbidden page instead of that directory's contents. www.foobar.com/dir has 777 rights and .htpasswd is chmodded 644, but I can't figure out why I am still not seeing the contents. Please give me a hint.

        ServerAdmin webmaster@localhost
        ServerName www.foobar.com
        ServerAlias www.foobar.com
        DocumentRoot /var/www/foobar

        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>

        <Directory /var/www/foobar>
            Options -Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /var/www/foobar/dir>
            AllowOverride AuthConfig
            AuthName "Authorize yourself, please!"
            AuthType Basic
            AuthUserFile /etc/apache2/.htpasswd
            AuthGroupFile /dev/null
            Allow from All
            Order Allow,Deny
            Require valid-user

  • Relative path incorrect in the view layer when hosting a rails3 app in a subdirectory using passenger and apache

    - by Saifis
    I want to host multiple Rails apps on a single server using sub-directories, and have encountered some relative path problems. I have made a symbolic link to the app's public directory and placed it in the /var/www/html directory:

        var/www/html/
            /test_app  (symbolic link to the public folder of test_app)

    and set up Apache like so:

        LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so
        PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12
        PassengerRuby /usr/local/bin/ruby

        <VirtualHost *:80>
            ServerName test.com
            DocumentRoot /var/www/html
            Options Indexes FollowSymLinks -MultiViews
            RailsBaseURI /test_app
            </Location>
        </VirtualHost>

    The links in the app itself work just fine; they all acknowledge the test_app/ directory. However, when it comes to showing images from the public directory in the view, the relative path goes wrong. Say I have /system/files/1/aaa.png; it goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png. As far as I understand, this is an Apache setting problem rather than something to be done in Rails. If it's possible, I would prefer to have it contained in the Apache conf file rather than having to alter the code.

  • Website latency and bad tcp packets

    - by Mistero Lupo
    I have multiple websites hosted on a Linode VPS and I'm having an issue with one of them: every page that I try to load has about 10 seconds of latency. The Apache logs are clean and the other websites on the same machine are running well. At first glance I thought it was a memory problem, since the VPS has only 512 MB, but from the Linode dashboard the CPU and disk I/O are normal. Anyway, here is the RAM status:

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:           487        463         23          0          2         55
        -/+ buffers/cache:        404         82
        Swap:          255        155        100

    Only 23 MB free, but if it was a memory problem, why are the other websites behaving as usual? I took a live capture with Wireshark, and there are some duplicate SYN ACK packets just before the 10-second gap. I'm out of ideas and looking for some clues. (Wireshark live capture screenshot.) As you can see from the image, the gap is after the last bad TCP packet. Thank you in advance.

    UPDATE: I've checked the Apache2 logs at debug error level, and this is where something is happening:

        151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'index.php'
        151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (1) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] pass through /home/fmaisi/sites/www.fmaisi.it/public_html/index.php
        151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] strip per-dir prefix: /home/fmaisi/sites/www.fmaisi.it/public_html/wp-content/plugins/wp-filebase/wp-filebase_css.php -> wp-content/plugins/wp-filebase/wp-filebase_css.php
        151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'wp-content/plugins/wp-filebase/wp-filebase_css.php'

    As you can see, there is a gap of 14 seconds after the pass-through of index.php. Any suggestions? I'm out of ideas again.

  • MySQL Query That Is Returning a Blatantly Incorrect Result

    - by user866190
    I am building a VPS node that is running Ubuntu 10.10 LTS, Apache2, MySQL 5.1 and PHP5. I could not log in to my website admin through the browser, even though I am using the correct login details, so I logged in from the command line to check the results. When I run this query I get the expected result:

        mysql> select * from users;
        +----+------------+-----------------------+------------+
        | id | username   | email                 | password   |
        +----+------------+-----------------------+------------+
        |  1 | myUserName | [email protected] | myPassword |
        +----+------------+-----------------------+------------+

    And the same goes for this query:

        mysql> select * from users where id = 1;
        +----+------------+-----------------------+------------+
        | id | username   | email                 | password   |
        +----+------------+-----------------------+------------+
        |  1 | myUserName | [email protected] | myPassword |
        +----+------------+-----------------------+------------+
        1 row in set (0.00 sec)

    But when I run this query I get this unexpected response:

        mysql> select * from users where username = 'myUserName' and password = 'myPassword';
        Empty set (0.00 sec)

    I am not sure why this is happening. Any help would be greatly appreciated. BTW, I will be encrypting the user details, but for now I just want to get it set up. Please help. Thanks.
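
    A common cause of exactly this symptom is invisible trailing whitespace or control characters in the stored values (for example from the import or signup code), which an equality comparison trips over even though the SELECT output looks identical. A query along these lines would expose that (just a diagnostic sketch against the table shown above):

        SELECT username, LENGTH(username) AS ulen, HEX(username) AS uhex,
               password, LENGTH(password) AS plen, HEX(password) AS phex
        FROM users
        WHERE id = 1;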
