Search Results

Search found 10808 results on 433 pages for 'apache regexp'.


  • APC fragmentation woes on Apache AWS EC2 Small instance with WordPress and W3TC

    - by two7s_clash
    AWS EC2 Small instance, Apache 2 running WordPress and W3TC. Within an hour, my APC fragmentation hits 100%. My APC settings, from /etc/php.d/apc.ini, are:

        apc.enabled = 1
        apc.shm_segments = 1
        apc.shm_size = 100M
        apc.optimization = 0
        apc.num_files_hint = 512
        apc.user_entries_hint = 1024
        apc.ttl = 7200
        apc.user_ttl = 7200
        apc.gc_ttl = 3600
        apc.cache_by_default = 1
        apc.use_request_time = 1
        apc.filters = "apc\.php$"
        apc.mmap_file_mask = "/tmp/apc.XXXXXX"
        apc.slam_defense = 0
        apc.file_update_protection = 2
        apc.enable_cli = 0
        apc.max_file_size = 2M
        apc.stat = 1
        apc.write_lock = 1
        apc.report_autofilter = 0
        apc.include_once_override = 0
        apc.rfc1867 = 0
        apc.rfc1867_prefix = "upload_"
        apc.rfc1867_name = "APC_UPLOAD_PROGRESS"
        apc.rfc1867_freq = 0
        apc.localcache = 0
        apc.localcache.size = 256M
        apc.coredump_unmap = 0
        apc.stat_ctime = 0
        apc.canonicalize = 1
        apc.lazy_functions = 0
        apc.lazy_classes = 0

    More detail can be seen here; the settings are mostly cribbed from here. The shm_size was meant to be whittled down from such a high value after some observation, but apparently even a value that large isn't high enough. I found a similar question/answer here. I do have some virtual hosts set up, but they aren't being touched much at all. Having users logged into the admin panel of WP does make things worse, but that's certainly not the main culprit. The asker of that question suggests that W3TC is probably causing the problem, which the plugin author seems to agree with, but there aren't any helpful details beyond that. Why is it causing the problem? Do I just accept it for now and turn off object caching with APC? Is there nothing I can do? Does having APC turned on without using it for object caching actually help anything? Would memcache be an OK substitute just for object caching here? Finally, maybe I just shouldn't worry so much about the fragmentation?
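
    Since the question itself floats memcache as a substitute for object caching, here is a minimal sketch of that swap. The package names assume a Debian/Ubuntu-style image and the menu path is from memory, so treat both as illustrative:

        # run a local memcached and give PHP a client for it
        sudo apt-get install memcached php5-memcache
        sudo service memcached start
        # then in wp-admin, under Performance -> General Settings -> Object Cache,
        # switch the method from APC to Memcached, leaving APC as opcode cache only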

  • Trying to run an ASP.NET MVC application using Mono on Apache with FastCGI

    - by Arda Xi
    I have a hosting account with DreamHost and I would like to use the same account to run ASP.NET applications. I have an application deployed in a subdomain, with a .htaccess containing a handler like this:

        # Define the FastCGI Mono launcher as an Apache handler and let
        # it manage this web-application (its files and subdirectories)
        SetHandler monoWrapper
        Action monoWrapper /home/arienh4/<domain>/cgi-bin/mono.fcgi virtual

    My mono.fcgi is set up as such:

        #!/bin/sh
        #umask 0077
        exec >>/home/arienh4/tmp/mono-fcgi.log
        exec 2>>/home/arienh4/tmp/mono-fcgi.err
        echo $(date +"[%F %T]") Starting fastcgi-mono-server2
        cd /
        chmod 0700 /home/arienh4/tmp/mono-fcgi.sock
        echo $$>/home/arienh4/tmp/mono-fcgi.pid
        # stdin is the socket handle
        export PATH="/home/arienh4/mono/bin:$PATH"
        export LD_LIBRARY_PATH="/home/arienh4/mono/lib:$LD_LIBRARY_PATH"
        export TMP="/home/arienh4/tmp"
        export MONO_SHARED_DIR="/home/arienh4/tmp"
        exec /home/arienh4/mono/bin/mono /home/arienh4/mono/lib/mono/2.0/fastcgi-mono-server2.exe \
            /logfile=/home/arienh4/logs/fastcgi-mono-web.log /loglevels=All \
            /applications=/:/home/arienh4/<domain>

    I took this from the Mono site for CGI. I'm not sure if I'm doing it correctly, though. This configuration results in this error:

        Request exceeded the limit of 10 internal redirects due to probable configuration error.
        Use 'LimitInternalRecursion' to increase the limit if necessary.
        Use 'LogLevel debug' to get a backtrace.

    I have no idea what's causing this. As far as I can see, Mono isn't even hit (no log files are created).
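
    One thing worth checking, offered as a guess rather than a confirmed fix: SetHandler monoWrapper in .htaccess applies to every path under the directory, including the mono.fcgi launcher itself, so the Action target can re-trigger the same handler and loop until the internal-redirect limit trips. A sketch of an exemption (the handler name cgi-script is an assumption about how the host executes it):

        # hypothetical .htaccess addition: let the launcher run as plain CGI
        # instead of being routed back through monoWrapper
        <FilesMatch "mono\.fcgi$">
            SetHandler cgi-script
        </FilesMatch>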

  • apache vhost not working consistently

    - by petrus
    I have a vhost on my webserver whose sole and unique goal is to return the client IP address:

        petrus@bzn:~$ cat /home/vhosts/domain.org/index.php
        <?php
        echo $_SERVER['REMOTE_ADDR'];
        echo "\n"
        ?>

    This helps me troubleshoot networking issues, especially when NAT is involved. As such, I don't always have domain name resolution and this service needs to work even if queried by its IP address. I'm using it this way:

        petrus@hive:~$ echo "GET /" | nc 88.191.124.41 80
        191.51.4.55
        petrus@hive:~$ echo "GET /" | nc domain.org 80
        191.51.4.55
        router#more http://88.191.124.41/index.php
        88.191.124.254

    However I found that it wasn't working from at least one computer:

        petrus@seth:~$ echo "GET /" | nc domain.org 80
        petrus@seth:~$
        petrus@seth:~$ echo "GET /" | nc 88.191.124.41 80
        petrus@seth:~$

    What I checked: this is not related to IPv6:

        petrus@seth:~$ echo "GET /" | nc -4 ydct.org 80
        petrus@seth:~$
        petrus@hive:~$ echo "GET /" | nc ydct.org 80
        2a01:e35:ee8c:180:21c:77ff:fe30:9e36

    The netcat version is the same on both machines, except for the platform, i386 vs x64 (the French output "nc est haché" is just bash saying "nc is hashed"):

        petrus@seth:~$ type nc
        nc est haché (/bin/nc)
        petrus@seth:~$ file /bin/nc
        /bin/nc: symbolic link to `/etc/alternatives/nc'
        petrus@seth:~$ ls -l /etc/alternatives/nc
        lrwxrwxrwx 1 root root 15 2010-06-26 14:01 /etc/alternatives/nc -> /bin/nc.openbsd
        petrus@hive:~$ type nc
        nc est haché (/bin/nc)
        petrus@hive:~$ file /bin/nc
        /bin/nc: symbolic link to `/etc/alternatives/nc'
        petrus@hive:~$ ls -l /etc/alternatives/nc
        lrwxrwxrwx 1 root root 15 2011-05-26 01:23 /etc/alternatives/nc -> /bin/nc.openbsd

    It works when used without the pipe:

        petrus@seth:~$ nc domain.org 80
        GET /
        2a01:e35:ee8c:180:221:85ff:fe96:e485

    And the piping works at least with a test service (netcat listening on 1234/tcp and output to stdout):

        petrus@bzn:~$ nc -l -p 1234
        GET /
        petrus@bzn:~$

        petrus@seth:~$ echo "GET /" | nc domain.org 1234
        petrus@seth:~$

    I don't know if this issue is more related to netcat or Apache, but I'd appreciate any pointers to troubleshoot it! The IP addresses have been modified but kept consistent for easy reading. bzn is the server, hive is a working client and seth is the client on which I have the issue.
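
    A hunch to test, offered as an assumption rather than a diagnosis: nc.openbsd exits as soon as stdin reaches EOF, so with a fast pipe the connection can be torn down before Apache's reply arrives, while interactive use keeps the socket open. Debian/Ubuntu builds accept a -q delay (flag availability is itself an assumption here):

        # keep the connection open 2 seconds after EOF on stdin
        printf 'GET / HTTP/1.0\r\n\r\n' | nc -q 2 domain.org 80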

  • SSI includes not working on Debian with Apache

    - by Mike
    I'm trying to get SSI to work on Debian running Apache, however the .shtml files are not being parsed. From a PHP file with phpinfo() I can see that the following show up in the loaded modules section:

        mod_mime_xattr
        mod_mime
        mod_mime_magic

    In /etc/apache2/mods-enabled/mime.conf I have (among other things):

        AddType text/html .shtml
        AddOutputFilter INCLUDES .shtml

    In /etc/apache2/sites-enabled/domain.com.conf (for the virtual host in question) I have:

        <Directory /home/username/public_html>
            Options +Includes
            allow from all
            AllowOverride All
        </Directory>

    and for good measure, I added the following as well:

        <Directory />
            Options +Includes
        </directory>

    In the user's .htaccess file, I tried adding:

        Options +Includes
        AddType text/html shtml
        AddHandler server-parsed shtml

    Nothing seems to work. How can I even debug this? Edit: here is the output of ls /etc/apache2/mods-enabled/ in case this helps:

        actions.conf          dav_svn.load        proxy_balancer.load
        actions.load          deflate.conf        proxy.conf
        alias.conf            deflate.load        proxy_connect.load
        alias.load            dir.conf            proxy_http.load
        auth_basic.load       dir.load            proxy.load
        auth_digest.load      env.load            python.load
        authn_file.load       fcgid.conf          reqtimeout.conf
        authz_default.load    fcgid.load          reqtimeout.load
        authz_groupfile.load  mime.conf           rewrite.load
        authz_host.load       mime.load           ruby.load
        authz_user.load       mime_magic.conf     setenvif.conf
        autoindex.conf        mime_magic.load     setenvif.load
        autoindex.load        mime-xattr.load     ssl.conf
        cgi.load              negotiation.conf    ssl.load
        dav_fs.conf           negotiation.load    status.conf
        dav_fs.load           php5.conf           status.load
        dav.load              php5.load           suexec.load
        dav_svn.conf          proxy_balancer.conf
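
    One observation that may be the whole answer, based purely on the listing above: SSI is implemented by mod_include, and there is no include.load among the enabled modules; the mime directives only map the type and filter. A sketch of the check:

        sudo a2enmod include
        sudo service apache2 restart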

  • Apache mod-pagespeed installation affects mod-spdy?

    - by tim peterson
    Recently my site (an https connection, running on an Amazon EC2 Ubuntu Apache 2.2) has this issue where I need to load the page several times (3-4) before it will load normally. It will then keep loading normally as long as I request pages regularly (every couple of seconds), and will stall again if I don't load pages for a few minutes. It has nothing to do with my application, because I don't have this problem with the exact same codebase on the Apache installation on my laptop. The only things I've changed, to my knowledge, are that I recently installed mod_spdy and then, a few weeks later, mod_pagespeed (https://developers.google.com/speed/pagespeed/mod). However, I have since turned mod_pagespeed off by setting "mod_pagespeed off" in its pagespeed.conf. Unfortunately, that didn't solve the problem. The lines below are how the last 10 lines of my error.log all look:

        # tail -f /var/log/apache2/error.log
        ...
        [32728:32729:ERROR:mod_spdy.cc(162)] request->chunked == 1 in request GET / HTTP/1.1
        [Sat Jun 02 04:50:08 2012] [warn] [client 50.136.93.153] [stream 5] [32728:32729:WARNING:http_to_spdy_filter.cc(113)] HttpToSpdyFilter is not the last filter in the chain: chunk

    Any thoughts? Thank you, tim
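
    Given that both error lines come from mod_spdy rather than mod_pagespeed, a reasonable isolation step is to disable mod_spdy alone and see whether the stalls stop. This is a sketch; the module name "spdy" assumes the Google .deb registered itself under mods-available:

        sudo a2dismod spdy
        sudo service apache2 restart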

  • Apache Reverse Proxy not working inside a VirtualHost running a Mono Web Application

    - by Arwen
    I have a Mono web application running with the virtual host below, on Apache 2.2.20 / Ubuntu 11.10. I tried to add a reverse proxy inside this VirtualHost so I can make asynchronous or AJAX-type calls back to this same domain; my asynchronous requests would otherwise have problems in many browsers when calling services on another domain (the cross-domain request problem). I want to reverse-proxy calls to this other service via http://www.whatever.com/monkey/, so I added the Location directive below and the Proxy directive at the top to try to make this work. It is weird, though: nothing I do seems to have any effect. I can put the exact same markup in my default website VirtualHost file and it works great. What is the deal? Are some of these Mono directives causing problems?

        <VirtualHost *:80>
            ServerName www.whatever.com
            ServerAlias whatever.com *.whatever.com
            ServerAdmin [email protected]
            DocumentRoot /home/myuser/web/whatever

            ProxyRequests off
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
            <Location /monkey/>
                ProxyPass http://www.google.com/
                ProxyPassReverse http://www.google.com/
            </Location>

            MonoServerPath www.whatever.com "/usr/bin/mod-mono-server2"
            MonoSetEnv www.whatever.com MONO_IOMAP=all
            MonoApplications www.whatever.com "/:/home/myuser/web/whatever"
            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias www.whatever.com
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>
            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
        </VirtualHost>
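
    A plausible explanation, offered as a guess: <Location> sections merge in the order they appear in the file, so the later <Location "/"> (with SetHandler mono) also applies to /monkey/ and overrides the proxy. A sketch of a test, moving the proxy block after the Mono block and resetting the handler there:

        <Location /monkey/>
            SetHandler None        # keep mono from claiming this path
            ProxyPass http://www.google.com/
            ProxyPassReverse http://www.google.com/
        </Location>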

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off:

        Header unset ETag
        FileETag None

    I know that I should use either Expires or Cache-Control, and that I should use either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:

        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>

    With this server I am not going to be able to rely on staff to rename images, CSS and JS in web applications, so I do not want to set the expiry far in the future without knowing (with good certainty) that most or all browsers will check to see if content has changed. What I do not want is someone calling me to say the website is broken because they replaced an image and it's not showing up. But I do want to take as much advantage of caching and expires as I can, while still ensuring that browsers will check with the server to see if components have changed. I have access to both .htaccess and the Apache .conf file, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration to achieve my goals for this client's server? Thanks for your help
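
    One configuration that matches the stated goal, letting browsers cache but forcing a revalidation on every reuse so they get cheap 304s instead of stale files, is sketched below. It assumes mod_headers is enabled, and the extension list is illustrative:

        <FilesMatch "\.(ico|pdf|flv|jpe?g|png|gif|js|css|swf)$">
            # "no-cache" means "store, but revalidate before each reuse";
            # Last-Modified/ETag then let the server answer 304 Not Modified
            Header set Cache-Control "no-cache"
        </FilesMatch>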

  • Running SSL locally on a hosts redirected domain name with Ubuntu and Apache

    - by Matthew Brown
    I recently made some changes to my Ubuntu computer so that a domain name resolves to my local copy of Apache. I edited /etc/hosts and added:

        127.0.0.1 thisbit.example.com

    Then I set up a VirtualHost for the responses I wished to create. That all works fine, and my testing is now shooting ahead without harm or risk to the production server. Now for my next trick: I need to test the authentication, and so need to do this with HTTPS. Basically, https://auth.example.com needs to work on my PC without the SSL causing an issue, which I imagine it would, as I am clearly not the true https://auth.example.com; but for the basis of this exercise I need to pretend that I am. It might be that the apps I'm testing don't bother checking the certificate (many are in Java, which I'm no expert with). What gotchas am I likely to encounter, and what is the best way of not letting my own hacks spoil my testing? I'm guessing the place to start is to enable SSL with Apache... I've never done that before, as it has never come up before.
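
    As a starting point, a self-signed certificate is usually enough for this kind of local pretending. A minimal sketch follows; the paths and CN are illustrative, and Java clients will reject the certificate unless it is imported into their truststore:

        sudo a2enmod ssl
        sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
            -subj "/CN=auth.example.com" \
            -keyout /etc/ssl/private/auth.example.com.key \
            -out /etc/ssl/certs/auth.example.com.crt
        # then add a *:443 VirtualHost with SSLEngine On pointing at these files,
        # and (for Java apps) import the .crt with keytool -importcert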

  • Apache/Mongrel/Redmine installation problem (VirtualHost/ProxyPass)

    - by Riddler
    I am installing Redmine as per this step-by-step instruction: http://justnotes.co.cc/2010/02/11/how-to-install-redmine-on-ubuntu/. I am using Ubuntu 10.04.1, Apache 2.2.14, Mongrel 1.1.5. On the VirtualHost configuration stage, I am using this:

        <VirtualHost *:80>
            ServerName myserver.lv
            ProxyPass /redmine/ http://localhost:8000/
            ProxyPassReverse /redmine/ http://localhost:8000
            ProxyPreserveHost on
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
        </VirtualHost>

    But when I direct my browser to http://<my-server's-ip>/redmine/, what I see is not the Redmine web application but "Index of /redmine", i.e. an index of the files from the root directory of Redmine. Any idea how to fix that? P.S. I tried removing the VirtualHost stuff altogether and instead adding the following simple clauses to apache2.conf:

        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>
        ProxyPass /redmine/ http://localhost:8000/
        ProxyPassReverse /redmine/ http://localhost:8000/
        ProxyPreserveHost on

    As a result, the behavior changes! Now http://<my-server's-ip>/redmine/ produces the source code of Redmine's start page, so it is served, but apparently not rendered. At the same time, http://<my-server's-ip>:8000/ still works perfectly fine, so Mongrel is serving the Redmine application as it should; it's just that something is wrong with my VirtualHost/proxying clauses in the .conf file.
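
    Two checks worth making first, as a sketch rather than a confirmed diagnosis: an "Index of /redmine" means the request never reached the proxy at all, which is what happens when the proxy modules aren't loaded or when a different VirtualHost than the one edited is answering:

        sudo a2enmod proxy proxy_http
        sudo apache2ctl -S          # confirm which vhost actually answers on port 80
        sudo service apache2 reload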

  • nginx starts up before apache

    - by paullb
    I've been fumbling through setting up Redmine on an Ubuntu (12.04) box, and somewhere along the line nginx got set up; now Apache no longer loads because nginx has already grabbed the port. I tried removing nginx with the command below, but that didn't seem to make any difference: when I restarted the server and pointed my web browser at it, I still got the "Welcome to nginx" message.

        sudo apt-get purge nginx

    I have confirmed that nginx is gone, because when I run the above now I get this output:

        Package nginx is not installed, so not removed

    Yet every time I start the machine it is running again. I noticed the following among the running processes (if that is helpful):

        root      923  0.0  0.0  76784  1280 ?  Ss  03:00  0:00  nginx: master process /usr/sbin/nginx
        www-data  925  0.0  0.0  77092  1704 ?  S   03:00  0:00  nginx: worker process
        www-data  926  0.0  0.1  77092  2204 ?  S   03:00  0:00  nginx: worker process
        www-data  927  0.0  0.0  77092  1704 ?  S   03:00  0:00  nginx: worker process
        www-data  928  0.0  0.0  77092  1704 ?  S   03:00  0:00  nginx: worker process

    Any advice for bringing back apache2 as the "default" (for lack of a better term) web server?
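
    A likely explanation, offered as an assumption: on Debian/Ubuntu, "nginx" is a meta-package, and the binary and init script actually ship in nginx-full, nginx-light or nginx-common, so purging "nginx" alone can leave the real package installed and starting at boot. A diagnostic sketch:

        dpkg -l | grep nginx                          # see which nginx-* packages remain
        sudo apt-get purge nginx-full nginx-light nginx-common   # purge whichever are listed
        sudo update-rc.d -f nginx remove              # clear any leftover init.d links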

  • Apache ProxyPassReverse and https

    - by joshuaball
    Hi, I would like to map all traffic on 80 and 443 from foo.com to an internal server: 192.168.1.101. I have a VirtualHost (Apache 2.2 on Ubuntu) set up as follows (note: I had to break up the hyperlinks below because I am a 'new user'):

        <VirtualHost *:80>
            ServerName foo.com
            ServerAlias *.foo.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://192.168.1.101/
            ProxyPassReverse / http://192.168.1.101/
        </VirtualHost>

    And that works great for http traffic. However, I can't seem to do the same thing for https. I have tried:

    - Changing VirtualHost *:80 to *, but that doesn't work (I need http to map to http and https to https)
    - Creating a new VirtualHost entry for *:443 that redirects to http://192.168.1.101/, but that fails as well (browser timeouts)

    I did some searching, here and elsewhere, and the closest question I could find was this, but it didn't quite answer it. Also, just out of curiosity, I tried mapping all ports to https (by changing the two ProxyPass lines from http to https and removing the :80 from the VirtualHost), and that didn't work either. How would you do that as well? Any thoughts? Thanks in advance.
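
    For the 443 side, the piece that is easy to miss (sketched here under the assumption that a certificate for foo.com is available; paths are illustrative) is that the front vhost must terminate SSL itself, and proxying onward to an https:// backend additionally needs SSLProxyEngine:

        <VirtualHost *:443>
            ServerName foo.com
            ServerAlias *.foo.com
            SSLEngine On
            SSLCertificateFile    /etc/ssl/certs/foo.com.crt
            SSLCertificateKeyFile /etc/ssl/private/foo.com.key
            ProxyRequests Off
            ProxyPreserveHost On
            SSLProxyEngine On          # only needed because the target is https
            ProxyPass / https://192.168.1.101/
            ProxyPassReverse / https://192.168.1.101/
        </VirtualHost>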

  • Apache/Passenger and cpulimit

    - by Dave Smylie
    I run a Ruby on Rails site that processes email; the email is dumped directly into the web app via a POST from Postfix. At times I can get a burst of email coming in, causing a prolonged surge in CPU usage and making my VPS provider understandably unhappy with me. These emails don't need to be processed in a timely manner; they just need to be (eventually) processed. Obviously I can't just nice the process, as that only considers the CPU usage on my VPS and can't take into account the CPU usage on the other VPSs. I have found a utility called cpulimit that will let you put hard limits on CPU usage for a particular process (e.g. 20%). This seems ideal for the purpose, but I can't work out how to integrate it with Apache/Passenger. Passenger starts up a Ruby process for each server and restarts them periodically; each time, the pid will change, and cpulimit needs to be given a pid number to act on. Has anyone got any ideas how I could get Passenger to fire off a call to this command when it's starting up this particular virtual host?
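
    Short of a Passenger hook, one workaround sketch is a cron job that re-discovers the current Rails pids and attaches cpulimit to any that aren't limited yet. The process-name pattern is an assumption; check ps output for the real one:

        #!/bin/sh
        # run from cron every minute; limits each matching process to 20% CPU
        for pid in $(pgrep -f "Rails: /path/to/app"); do
            pgrep -f "cpulimit.*-p $pid" >/dev/null || cpulimit -p "$pid" -l 20 &
        done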

  • Configuring suExec to work with Apache and PHP via FastCGI

    - by RandomPsychology
    I have installed ISPConfig 3 on an Ubuntu VPS and configured it for Apache + PHP via FastCGI and suExec. I am able to upload PHP apps (e.g. WordPress) and run them normally with suExec. However, for some reason the PHP scripts cannot write data to disk. For instance, trying to upgrade a plugin via WordPress' web interface causes it to fail with the error "Could not create directory /path/to/wp-content/upgrade/plugin.tmp." Trying to upload media and other assets also fails via the web. I've checked owner/group on the directory structure and it looks good. The suExec log also seems normal, and I don't see any indicative errors in the web server logs. I can also confirm that changing the owner/group on the directories does result in the expected error in suexec.log. Additionally, I have the directory permissions set to u=rw,g=r,o= and I've also tried setting g=rw. None of this results in my scripts being able to write to the directories. What am I doing wrong?
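
    The permissions quoted at the end may be the whole story (an observation, not a certainty): on directories the execute bit is the search bit, so u=rw with no x means even the owning user cannot traverse into the directory or create entries in it. A sketch, with illustrative paths:

        # directories generally need rwx for the owning user to be writable;
        # apply to the trees WordPress must write into
        find /path/to/wp-content -type d -exec chmod u=rwx,g=rx,o= {} +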

  • Failed to start apache: Can't open /etc/apache2/envvars

    - by bumperbox
    I have had this problem a couple of times now and I am not sure what is causing it:

        Failed to start apache : .: 45: Can't open /etc/apache2/envvars

    When I look at a directory listing, I get these question marks next to envvars; does anyone know what that means? The OS is Ubuntu 10, if that helps.

        drwxr-xr-x  7 root root 4096 Jan 29 11:56 .
        drwxr-xr-x 83 root root 4096 Feb  4 10:34 ..
        -rw-r--r--  1 root root 8113 Sep 29 01:52 apache2.conf
        -rw-r--r--  1 root root 8027 Oct  3 22:26 apache2.conf.dpkg-old
        drwxr-xr-x  2 root root 4096 Jan 29 11:56 conf.d
        ??????????  ? ?    ?       ?            ? envvars
        -rw-r--r--  1 root root    0 Oct  3 22:25 httpd.conf
        ??????????  ? ?    ?       ?            ? magic
        drwxr-xr-x  2 root root 4096 Jan 29 11:56 mods-available
        drwxr-xr-x  2 root root 4096 Jan 29 10:18 mods-enabled
        ??????????  ? ?    ?       ?            ? ports.conf
        drwxr-xr-x  2 root root 4096 Jan 29 11:56 sites-available
        drwxr-xr-x  2 root root 4096 Jan 29 11:55 sites-enabled

    UPDATE: Just heard back from the hosting company. They moved my VPS to a new hardware node last night, and something at their end wasn't quite right, which caused the issue.
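
    For reference, a row of question marks means ls could not stat() the entry: the name still exists in the directory, but the inode behind it is unreadable, which on a VPS usually points at the backing store (consistent with the hosting company's explanation above). If it recurs, a sketch of the usual recovery:

        sudo touch /forcefsck && sudo reboot     # schedule a filesystem check at boot
        # or restore the affected files from the package (assumption: envvars,
        # magic and ports.conf ship with apache2.2-common on Ubuntu 10.04)
        sudo apt-get install --reinstall apache2.2-common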

  • Apache Alias Isn't In Directory Listing

    - by Phunt
    I've got a site running on my home server that's just a front end for me to grab files remotely. There are no pages, just a directory listing (Options Indexes...). I wanted to add a link to a directory outside of the webroot, so I made an alias. After a minute of dealing with permissions, I can now navigate to the directory by typing the URL into the browser, but the directory isn't listed in the root index. Is there a way to do this without creating a symlink in the root? Server: Ubuntu 11.04, Apache 2.2.19. Relevant vhost:

        <VirtualHost *:80>
            ServerName some.url.net
            DocumentRoot "/var/www/some.url.net"
            <Directory /var/www/some.url.net>
                Options Indexes FollowSymLinks
                AllowOverride None
                Order Allow,Deny
                Allow From All
                AuthType Basic
                AuthName "TPS Reports"
                AuthUserFile /usr/local/apache2/passwd/some.url.net
                Require user user1 user2
            </Directory>
            Alias /some_alias "/media/usb_drive/extra files"
            <Directory "/media/usb_drive/extra files">
                Options Indexes FollowSymLinks
                Order Allow,Deny
                Allow From All
            </Directory>
        </VirtualHost>
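
    Probably not, unfortunately: mod_autoindex only lists what is physically present in the directory, and aliases live in Apache's URL space rather than the filesystem, so they never appear in the index. Since FollowSymLinks is already on above, the usual answer is the symlink the question hoped to avoid (a sketch):

        ln -s "/media/usb_drive/extra files" /var/www/some.url.net/some_alias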

  • Hostname and SSL (apache) issue on Debian

    - by user105566
    I have been trying to set up an SSL virtual host:

        ServerAdmin [email protected]
        ServerName moclm.tap.pt
        DocumentRoot /var/www/tapme/
        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
        <Directory /var/www/tapme/>
            Options -Indexes FollowSymLinks MultiViews
            AllowOverride All
            #Order allow,deny
            #allow from all
        </Directory>
        SSLEngine on
        SSLCertificateFile /etc/ssl/moclm.cer
        SSLCertificateKeyFile /etc/ssl/moclm.pem
        </VirtualHost>

    For some reason, the server automatically redirects HTTP to HTTPS (http:// to https://). Apache is not configured to redirect, and the application was working fine on port 80 only. I have no knowledge of how the internal network works, as I am working remotely. The SSL error log shows:

        [Tue Oct 02 22:40:32 2012] [error] Hostname linemnt01.tap.pt provided via SNI and hostname moclm.tap.pt provided via HTTP are different

    I thought maybe the hostname had some issue, and changed the hostname of the server from "linemnt01.tap.pt" to "moclm.tap.pt", but the issue is still there. I am getting the following error in the browser:

        Bad Request
        Your browser sent a request that this server could not understand.

    I have in /etc/hosts:

        127.0.0.1 localhost.localdomain localhost moclm.tap.pt moclm

    and openssl returns:

        $ openssl verify -CAfile cert-CA.cer moclm.cer
        moclm.tap.pt.cer: OK

    I have been trying to troubleshoot the issue but no luck. Need help. Thanks
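
    The log line itself says some client sent "linemnt01.tap.pt" in the TLS handshake (SNI) while sending "moclm.tap.pt" in the HTTP Host header, and Apache answers such mismatches with 400 Bad Request. A way to reproduce and isolate it by hand, as a diagnostic sketch:

        # deliberately send the old name via SNI while requesting the new one;
        # if this triggers the same Bad Request, some client or intermediary
        # (e.g. a proxy in the internal network) is still negotiating TLS
        # with the old hostname
        openssl s_client -connect moclm.tap.pt:443 -servername linemnt01.tap.pt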

  • Phusion Passenger (Apache, Sinatra) suddenly not working for a single site on my server

    - by Kerrick
    I've had Phusion Passenger working for a few of my sites for months. Then, today, it stopped working for a single site. I hadn't changed anything (I hadn't even SSH'ed into the server for a week), and everything is set up the way it should be for it to work. Plus, it's working fine for other sites! I'm about to pull my hair out trying to find what's wrong, so I was hoping y'all could help. Passenger is not working on kerricklong.com: I only get the "It works!" Apache default page. If I look at the headers, it's not even serving the X-Powered-By: Phusion Passenger (mod_rails/mod_rack) header that I get on my other (currently working) Passenger-powered sites on the same server, which runs Ubuntu Server 10.04. The following is in my /etc/apache2/sites-available/kerricklong.com file, but it's identical (with names and paths changed) to the configuration file for a site that is working:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName kerricklong.com
            ServerAlias *.kerricklong.com
            DocumentRoot /redacted/path/to/kerricklong.com/public
            ErrorLog /redacted/path/to/kerricklong.com/logs/error.log
            <Directory /redacted/path/to/kerricklong.com/public>
                Allow from all
                Options -MultiViews
                Include /etc/apache2/h5bp.conf
            </Directory>
            php_flag engine off
        </VirtualHost>

    I've got the necessary tmp/, logs/, and public/ directories, along with config.ru. I've also run sudo a2dissite then sudo a2ensite, sudo service apache2 restart, and rebooted the server to try to fix it. What gives?
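
    Since the default "It works!" page is answering, Apache is evidently not matching this vhost at all. A quick way to confirm that theory (a sketch) is to dump the parsed vhost table and check that the a2ensite symlink survived:

        apache2ctl -S                          # is kerricklong.com listed under *:80?
        ls -l /etc/apache2/sites-enabled/      # is the symlink actually there?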

  • Apache 2.4, Ubuntu 12.04 Forbidden Errors

    - by tubaguy50035
    I just installed Apache 2.4 today, and I'm having some issues getting the vhost configuration to work correctly. Below is the vhost conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /hosting/Client/site.com/www
            ServerName site.com
            ServerAlias www.site.com
            <Directory "/hosting/Client/site.com/www">
                Options +Indexes +FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
            DirectoryIndex index.html
        </VirtualHost>

    There is an index.html file in /hosting/Client/site.com/www. When I go to the site, I receive a 403 forbidden error. The www-data group is the group on the www folder, to which I've already given all permissions (r/w/x). I'm really at a loss as to why this is happening. Any thoughts? If I remove the vhost and go straight to the IP address, I get the default "It works!" page, so I know the server itself is working. The error log says "client denied by server configuration". apache2ctl -S dump:

        nick@server:~$ apache2ctl -S
        /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted)
        VirtualHost configuration:
        *:80                   is a NameVirtualHost
                 default server site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                 port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                         alias www.site.com
                 port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                         alias www.site.com
        ServerRoot: "/etc/apache2"
        Main DocumentRoot: "/var/www"
        Main ErrorLog: "/var/log/apache2/error.log"
        Mutex watchdog-callback: using_defaults
        Mutex default: dir="/var/lock/apache2" mechanism=fcntl
        Mutex mpm-accept: using_defaults
        PidFile: "/var/run/apache2.pid"
        Define: DUMP_VHOSTS
        Define: DUMP_RUN_CFG
        Define: ENALBLE_USR_LIB_CGI_BIN
        User: name="www-data" id=33 not_used
        Group: name="www-data" id=33 not_used

    Output of namei -mo /hosting/Client/site/www/index.html:

        f: /hosting/Client/site.com/www/index.html
        drwxr-xr-x root root     /
        drwxr-xr-x root root     hosting
        drwxr-xr-x root root     Client
        drwxr-xr-x nick www-data site.com
        drwxr-xr-x nick www-data www
        -rw-rwxr-x nick www-data index.html
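
    This is the classic Apache 2.2-to-2.4 trap, and the evidence here ("client denied by server configuration" together with Order/Allow syntax) points straight at it: 2.4 moved access control to mod_authz_core, so the old directives deny access unless the compatibility module mod_access_compat is loaded. A sketch of the 2.4-native form:

        <Directory "/hosting/Client/site.com/www">
            Options +Indexes +FollowSymLinks
            Require all granted      # replaces "Order allow,deny / Allow from all"
        </Directory>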

  • Ways to go about optimizing website performance: WordPress, Amazon EC2, Apache and RDS MySQL

    - by fuzzybee
    I have 6 WordPress websites running on a single EC2 instance. All the websites connect to databases in the same single RDS instance. Earlier today, traffic to the largest website peaked and the RDS instance hit a bottleneck: CPU utilization was 100% for over an hour. It affected all of my websites, as it took them all forever to load. In order to prevent such an issue from happening again, which of the following will matter most, so that I invest time and effort in it first of all? (I will work on all of them later; I just need to prioritise now.)

    - Improve caching for all websites
    - Fine-tune the database server
    - Fine-tune my Apache server

    What will be the effect on user experience for my websites? Some quick searches show that I should limit the number of concurrent connections to my web server, but wouldn't that prevent users from accessing my websites? More background:

    - My largest website has 140k visits and 660k page views a month; the other 5 websites add up to much less than that.
    - I'm using a large EC2 instance as the web server.
    - I'm using a medium RDS instance as the database server.

    What I've already done: use the W3 Total Cache plugin for caching on most of the websites, especially the largest one (there is barely anything else I could do in terms of caching for the largest website). Am I using my resources wastefully, or is there simply not enough resource for my websites? Or rather, how do I answer that question myself?
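
    Before choosing, it may pay to measure where the RDS time actually goes. A sketch follows; these are standard MySQL parameters, set on RDS through a DB parameter group rather than my.cnf:

        slow_query_log = 1                 # log statements slower than long_query_time
        long_query_time = 1                # seconds
        log_queries_not_using_indexes = 1

    If the slow log fills with repeated identical SELECTs, caching wins; if it shows a few heavy unindexed queries, database tuning wins.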

  • apache name virtual host - two domains and SSL

    - by Tom
    I'm trying to set up Apache (2.2.3) to run two websites with SSL, using both different domains and different IP addresses. Both websites run fine on port 80, but when I tried to enable SSL for website2 I got an ssl_error_bad_cert_domain error: website2 picks up the SSL cert for website1. Here is my setup in httpd.conf:

        # Website1
        NameVirtualHost 192.168.10.1:80
        <VirtualHost 192.168.10.1:80>
            DocumentRoot /var/www/html
            ServerName www.website1.org
        </VirtualHost>

        NameVirtualHost 192.168.10.1:443
        <VirtualHost 192.168.10.1:443>
            SSLEngine On
            SSLCertificateFile conf/ssl/website1.cer
            SSLCertificateKeyFile conf/ssl/website1.key
        </VirtualHost>

        # Website2
        NameVirtualHost 192.168.10.2:80
        <VirtualHost 192.168.10.2:80>
            DocumentRoot /var/www/html/chart
            ServerName www.website2.org
        </VirtualHost>

        NameVirtualHost 192.168.10.2:443
        <VirtualHost 192.168.10.2:443>
            SSLEngine On
            SSLCertificateFile conf/ssl/website2.cer
            SSLCertificateKeyFile conf/ssl/website2.key
        </VirtualHost>

    Update: in answer to Shane (this wouldn't fit in the comment box), here is the output from apachectl -S:

        VirtualHost configuration:
        192.168.10.2:80        is a NameVirtualHost
                 default server www.website2.org (/etc/httpd/conf/httpd.conf:1033)
                 port 80 namevhost www.website2.org (/etc/httpd/conf/httpd.conf:1033)
        192.168.10.2:443       is a NameVirtualHost
                 default server bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1040)
                 port 443 namevhost bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1040)
        192.168.10.1:80        is a NameVirtualHost
                 default server www.website1.org (/etc/httpd/conf/httpd.conf:1017)
                 port 80 namevhost www.website1.org (/etc/httpd/conf/httpd.conf:1017)
        192.168.10.1:443       is a NameVirtualHost
                 default server bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1024)
                 port 443 namevhost bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1024)
        wildcard NameVirtualHosts and _default_ servers:
        _default_:443          192.168.10.1 (/etc/httpd/conf.d/ssl.conf:81)
        Syntax OK
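
    Two things stand out in that dump, offered as leads rather than certainties: neither :443 vhost declares a ServerName (hence "bogus_host_without_reverse_dns"), and the stock _default_:443 vhost from conf.d/ssl.conf is still active on 192.168.10.1 with whatever certificate it was installed with. A sketch of the tidy-up:

        <VirtualHost 192.168.10.2:443>
            ServerName www.website2.org
            SSLEngine On
            SSLCertificateFile conf/ssl/website2.cer
            SSLCertificateKeyFile conf/ssl/website2.key
        </VirtualHost>
        # ...and comment out the <VirtualHost _default_:443> block in conf.d/ssl.conf
        # (around line 81 per the dump) or point it at the intended certificate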

  • Java issues with Apache 2.0 Agent 2.202 for RHEL5 Linux 64bit

    - by Richard
    In trying to install the Apache 2.0 Agent 2.202 for RHEL5 Linux 64bit, the dialogue appears as follows:

        $ ./setup
        Error : java is not present in path.
        Please enter JAVAHOME path to pick up java:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/
        Launching installer...
        Attach to native process failed

        $ ./setup
        Error : java is not present in path.
        Please enter JAVAHOME path to pick up java:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib
        ./setup: line 80: [: 107:: integer expression expected
        ./setup: line 83: [: 107:: integer expression expected
        Error : Incorrect java version (1.2.2 or above is needed).
        Please enter JAVAHOME path to pick up java:

    On the server we have the following JREs, and I've tried both:

        $ sudo rpm -qa | egrep "(openjdk|icedtea)"
        java-1.6.0-openjdk-1.6.0.0-1.27.1.10.8.el5_8
        $ find 2>/dev/null | grep -i '/jre/'
        ./usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre/bin/
        ...
        ./usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/

    Any suggestions? I know I'm overlooking something. In previous searches I've only found one other posting that comes close, but it has no responses (http://forum.parallels.com/showthread.php?t=76556).
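
    Since the setup script only falls into its (apparently buggy) prompt because java isn't on the PATH, one low-risk sketch is to satisfy it before launch instead of answering the prompt, using the JRE path from the listing above:

        export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre
        export PATH="$JAVA_HOME/bin:$PATH"
        java -version      # confirm the shell now resolves java
        ./setup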

  • domain2.com redirects to domain1.com in Apache

    - by Dmitry Mikhaylov
    I created a new virtual host, but when I try to request it, Apache redirects me to another virtual host. What could cause this problem?

        <VirtualHost XXX.XXX.XXX.XXX:80>
            ServerName domain1.com
            AddDefaultCharset utf-8
            CustomLog /var/www/httpd-logs/domain1.com.access.log combined
            DocumentRoot /home/user/www/domain1.com
            ErrorLog /var/www/httpd-logs/domain1.com.error.log
            ServerAdmin [email protected]
            ServerAlias www.domain1.com
            SuexecUserGroup user user
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            AddType application/x-httpd-php-source .phps
            php_admin_value open_basedir "/home/user:."
            php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f [email protected]"
            php_admin_value upload_tmp_dir "/home/user/mod-tmp"
            php_admin_value session.save_path "/home/user/mod-tmp"
            ScriptAlias /cgi-bin/ /home/user/www/domain1.com/cgi-bin/
        </VirtualHost>

        <VirtualHost XXX.XXX.XXX.XXX:80>
            ServerName domain2.com
            CustomLog /dev/null combined
            DocumentRoot /home/user/www/domain2.com
            ErrorLog /dev/null
            ServerAdmin [email protected]
            ServerAlias www.domain2.com
            SuexecUserGroup user user
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            AddType application/x-httpd-php-source .phps
            php_admin_value open_basedir "/home/user:."
            php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f [email protected]"
            php_admin_value upload_tmp_dir "/home/user/mod-tmp"
            php_admin_value session.save_path "/home/user/mod-tmp"
        </VirtualHost>

    "apache2ctl -S" output:

        VirtualHost configuration:
        XXX.XXX.XXX.XXX:80     is a NameVirtualHost
                 default server domain1.com (/etc/apache2/apache2.conf:266)
                 port 80 namevhost domain1.com (/etc/apache2/apache2.conf:266)
                 port 80 namevhost domain2.com (/etc/apache2/apache2.conf:284)
        XXX.XXX.XXX.XXX:443    is a NameVirtualHost
                 default server domain1.com (/etc/apache2/apache2.conf:246)
                 port 443 namevhost domain1.com (/etc/apache2/apache2.conf:246)
        wildcard NameVirtualHosts and _default_ servers:
        *:443                  is a NameVirtualHost
                 default server www.example.com (/etc/apache2/apache2.conf:239)
                 port 443 namevhost www.example.com (/etc/apache2/apache2.conf:239)
        *:80                   is a NameVirtualHost
                 default server domain1.com (/etc/apache2/sites-enabled/000-default:1)
                 port 80 namevhost domain1.com (/etc/apache2/sites-enabled/000-default:1)
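
    Reading that dump, domain2.com is registered only under the IP-specific vhost set, while a separate *:80 set from sites-enabled/000-default also claims domain1.com; mixing IP-based and wildcard NameVirtualHosts this way often makes requests land in the wrong set and fall back to its first (default) vhost. A sketch of the test, under the assumption that this overlap is the cause:

        sudo a2dissite 000-default      # or convert every vhost to the same *:80 form
        sudo apache2ctl configtest && sudo service apache2 reload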

  • Apache / PHP Begins to Deny SQL Requests after about 2000

    - by Daniel Stern
    We have a web page on our server that we use to run administrative scripts. For example, we might run the script "unenrolStudents()", which runs 5,000 SQL SET commands one after another and sets 5,000 student entries in an SQL database to unenrolled. However, we are finding that after running a few thousand queries (it is not totally consistent) we will be "locked out" by our server. Symptoms of the lockout:

    - unable to connect to the server with WinSCP
    - opening PuTTY with that connection shows a blank screen (no login/pass prompt)
    - clearing cookies/cache in Chrome does NOT fix the lockout
    - other computers in the office ALSO become locked out
    - the lockout can be triggered by a high frequency of requests (10,000 in 1 second) or by fewer over time (10,000 in 500 seconds will still cause a lockout, even though the frequency is much lower)

    We believe this is a security feature of our own Apache. I know we are using Suhosin, but I didn't configure it, so I don't know. How can I disable this locking effect so that I can confidently run all my SQL requests and they will go through? Has anyone else dealt with this and found workarounds? Thanks, DS
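
    One observation, hedged: since the block hits SSH (PuTTY/WinSCP) as well as HTTP and affects the whole office, it looks less like Apache or Suhosin (both operate inside the web stack) and more like a host-level firewall or intrusion tool, such as fail2ban, CSF/LFD or mod_evasive, banning the source IP. A sketch of where to look, from a shell on a non-blocked network:

        sudo iptables -L -n | grep -i drop        # is the office IP listed?
        ls /etc/fail2ban /etc/csf 2>/dev/null     # which tool is installed?
        sudo grep -r "your.office.ip" /var/log/ 2>/dev/null | tail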

  • Git push over http (using git-http-backend) and Apache is not working

    - by Ole_Brun
    I have desperately been trying to get push working through the "smart HTTP" mode using git-http-backend. However, after many hours of testing and troubleshooting, I am still left with:

        error: Cannot access URL http://localhost/git/hello.git/, return code 22
        fatal: git-http-push failed

    I am using the latest versions of Ubuntu (12.04), Apache2 (2.2.22) and Git (1.7.9.5), and have followed different tutorials found on the Internet, like this one: http://www.parallelsymmetry.com/howto/git.jsp. My VHost file currently looks like this:

        <VirtualHost *:80>
            SetEnv GIT_PROJECT_ROOT /var/www/git
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            DocumentRoot /var/www/git

            ScriptAliasMatch \
                "(?x)^/(.*?)\.git/(HEAD | \
                info/refs | \
                objects/info/[^/]+ | \
                git-(upload|receive)-pack)$" \
                /usr/lib/git-core/git-http-backend/$1/$2

            <Directory /var/www/git>
                Options +ExecCGI +SymLinksIfOwnerMatch -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I have changed the ownership of the /var/www/git folder to root.www-data, and for my test repositories I have enabled anonymous push by doing git config http.receivepack true. I have also tried with authenticated users, but with the same outcome. The repositories were created using:

        sudo git init --bare --shared [repo-name]

    While looking at the Apache access.log, it appears to me that WebDAV is trying to be used and that git-http-backend is never fired:

        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/info/refs?service=git-receive-pack HTTP/1.1" 200 207 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/HEAD HTTP/1.1" 200 232 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "PROPFIND /git/hello.git/ HTTP/1.1" 405 563 "-" "git/1.7.9.5"

    What am I doing wrong? Is it an issue with the versions of Git and/or Apache that I am using, perhaps? BTW: I have read all the Git HTTP-related questions on ServerFault and StackOverflow, and none of them provided me with a solution, so please don't mark this as a duplicate.
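
    A pattern worth noticing, though it is an inference rather than a verified fix: the repositories live under /var/www/git but the URLs start with /git/, so the ScriptAliasMatch capture for /git/hello.git becomes "git/hello" and the backend would look for /var/www/git/git/hello.git; when the smart protocol fails like that, the client quietly falls back to the dumb protocol, hence the PROPFIND. A simpler mapping to try:

        ScriptAlias /git/ /usr/lib/git-core/git-http-backend/
        # git-http-backend then resolves /git/hello.git against
        # GIT_PROJECT_ROOT (/var/www/git) via PATH_INFO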

  • APACHE - PHP - Bounce emails

    - by user1179459
    I want to improve our mailing lists by handling all the bounces we get on our websites. I have one website with over 8,000 users and another with over 1,500, and notifications are emailed to them constantly (job alerts, email alerts and so on). I am using POP connections with Exim on an Apache server; most scripts are written in PHP and generate email on the fly. Problems I have:

    - Some users registered a long time ago, and by now quite a few have bouncing email addresses.
    - Some users register with dummy addresses like [email protected]: syntactically valid, but a mailbox that never existed. Is there any chance of stopping this without making them log into the email account and click confirmation links (which often don't work and are too annoying for the end user)?
    - The server is sending unnecessary emails that could be avoided if I knew which addresses don't exist.

    Solutions I need:

    1. Is there a way I can download the bounce list somewhere (WHM/cPanel)? I know Exim has it, but it's not readable; I need a file like CSV or similar that I can scan, so I can write a PHP script to delete those users from the database.
    2. Is there any function in PHP that can check the existence of an email address on the fly, so I can make the mailer class check before it sends?
    3. Will bouncing emails eat up a lot of server resources (memory/CPU) when they are processed, or are they minimal enough that I don't have to worry about this at all?
    4. Is there maybe an open-source or Linux tool to capture them, view them as reports and clean them up?

    I am not a Linux expert or server admin, but I do a lot of PHP coding, so please be descriptive with the solutions, especially if they are Linux commands. Thank you!
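
    On question 2, a hedged answer: PHP cannot confirm that a mailbox exists (only an SMTP conversation with the receiving server can, and many servers block or lie to such probes), but it can cheaply reject addresses whose domain cannot receive mail at all. A sketch:

        <?php
        // returns false for syntactically bad addresses and for domains
        // with no MX (or fallback A) record; a real mailbox may still bounce
        function email_domain_ok($email) {
            if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
                return false;
            }
            $domain = substr(strrchr($email, '@'), 1);
            return checkdnsrr($domain, 'MX') || checkdnsrr($domain, 'A');
        }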
