Search Results

Search found 19615 results on 785 pages for 'apache config'.

Page 42/785 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • Setting up Apache directory root behavior

    - by Corey
    I'm running Apache on a Windows machine for local testing and I'm new to it. Currently, if I navigate to localhost/ in a web browser, it displays an index.html page if one exists; otherwise, it displays the directory listing. How can I make it so that navigating to a directory tries more than just index.html? What I need is for Apache to serve index.html, index.htm, or index.php if any of them exists. How can I also disable directory listings? I would like it to return a 403 Forbidden error if no index page exists.
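
    A minimal sketch of the directives involved, assuming mod_dir and mod_autoindex are loaded and a hypothetical document root of C:/Apache/htdocs:

        # Try these index files, in order, when a directory is requested
        DirectoryIndex index.html index.htm index.php

        <Directory "C:/Apache/htdocs">
            # -Indexes turns off automatic directory listings, so a directory
            # with no index file answers with 403 Forbidden instead
            Options -Indexes +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>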

    Read the article

  • Redirect non-www SSL traffic to www SSL (Apache)

    - by The NinjaSysadmin
    Hello, I'm attempting to get a redirect working and for some reason I can't see the problem today. I have a vhost file within httpd that listens on standard port 80 and on port 443. I'm attempting to redirect https://domain.com/(.*) to https://www.domain.com/$1 so that the rest of the URL remains intact. My config is as follows: ServerName www.domain.com ServerAlias tempdomain.testdomain.co.uk ServerAlias domain.com The rewrite rule I'm using is: RewriteCond %{HTTP_HOST} ^domain.com$ RewriteRule ^(.*)$ https://www.domain.com$1 [R=301,L] I've also tried removing the . and $ but nothing changes. When I visit the URL https://domain.com/secure.page?action=comp it doesn't redirect to https://www.domain.com/secure.page?action=comp. I do also have other SSL pages; the above was just an example. Can anyone point out what I'm missing?
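
    A hedged sketch of where the rule could live: unless rewrite directives are inherited into it, the port 443 vhost needs its own copy of the rule, otherwise only plain-HTTP requests ever get redirected. Domain names below are the question's placeholders:

        <VirtualHost *:443>
            ServerName www.domain.com
            ServerAlias domain.com

            RewriteEngine On
            # Any host other than www.domain.com gets sent to the www name,
            # keeping the path (the query string is re-appended automatically)
            RewriteCond %{HTTP_HOST} !^www\.domain\.com$ [NC]
            RewriteRule ^(.*)$ https://www.domain.com$1 [R=301,L]

            # SSLEngine on, SSLCertificateFile ... as already configured
        </VirtualHost>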

    Read the article

  • Apache: Stealth 404 the admin area until authenticated via basic auth, then allow access

    - by Kzqai
    Given an administrative area with URLs like this: wp-admin/ wp-admin/whatever wp-admin/another-page wp-adminsecretlogin/ A standard basic-auth setup would put a username and password prompt on all of those URLs and reject failed auth attempts. This is a pretty obvious signal that something exists there, and thus is an invitation to script/brute-force access. I would like to instead require basic auth everywhere, but when not authenticated, not prompt for a username and password, and instead return a 404 Not Found error for all URLs except the wp-adminsecretlogin/ URL. At that individual-to-the-site URL, basic auth could go through and unlock the rest of the administrative functionality (though the standard application login would still be necessary). How would I do that via Apache .htaccess or .conf directives?

    Read the article

  • Apache reverse proxy setup

    - by nixnotwin
    I have a JBoss application server on machine1. The application's address is http://ip-address:8080/webapp. I wanted just the IP to point to the application, so on machine2 I set up an Apache proxy. But that only moves things to port 80; the webapp directory cannot be removed, so through the proxy the address is http://ip-address/webapp. Is there a way to have just the IP point to the application, so that the address http://ip-address opens the application's web page?
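
    A minimal mod_proxy sketch for machine2, assuming mod_proxy and mod_proxy_http are loaded and ip-address stands in for machine1; mapping the site root onto the /webapp/ context hides the path from clients:

        <VirtualHost *:80>
            ProxyRequests Off
            ProxyPreserveHost On

            # Map / to the JBoss context and fix up redirects coming back
            ProxyPass        / http://ip-address:8080/webapp/
            ProxyPassReverse / http://ip-address:8080/webapp/
        </VirtualHost>

    Links that the application itself generates with a hard-coded /webapp prefix may still need attention (mod_proxy_html or an application setting), so treat this as a starting point rather than a complete answer.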

    Read the article

  • Specific Apache + MySQL settings for a lightweight site

    - by Good Person
    I have a small website with Joomla and Moodle set up, and both seem very slow. The server (CentOS release 5.5 (Final)) is a virtual dedicated server with about 2GB of RAM. I don't expect to ever get more than 10-15 people on at the same time (and even that is high). What settings could I change in Apache, MySQL, or even the OS to increase the performance of my site? I'm not concerned about running out of resources if I get too many visitors. If you need more specific data, leave a comment and I'll edit the question.
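
    As a rough, hedged starting point for a 2GB VPS serving a handful of concurrent users (the numbers below are assumptions to measure against, not universal recommendations), the prefork MPM and MySQL buffers might be scaled down along these lines:

        # httpd.conf (prefork MPM) - keep the worker pool small
        <IfModule prefork.c>
            StartServers         2
            MinSpareServers      2
            MaxSpareServers      5
            MaxClients          30
            MaxRequestsPerChild 500
        </IfModule>
        KeepAlive On
        KeepAliveTimeout 3

        # /etc/my.cnf - modest buffers for a small dataset
        [mysqld]
        key_buffer_size         = 64M
        query_cache_size        = 32M
        table_cache             = 256
        innodb_buffer_pool_size = 256M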

    Read the article

  • Making a .so file for Apache

    - by Josh
    I am using CentOS 5. I am trying to use mod_security, which requires liblua. I was not able to find Lua in the default repos, and I prefer not to use any third-party repos. With this in mind, I downloaded the Lua source from the official site. After compiling, the only file that comes close is liblua.a. I need a liblua-5.1(.4).so file for Apache. How do I make a .so file for liblua?
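
    A hedged sketch of one way to get a shared object out of the stock Lua 5.1.4 sources, which only produce the static liblua.a by default; the install path and soname below are assumptions:

        # Rebuild the objects as position-independent code
        cd lua-5.1.4/src
        make clean
        make CFLAGS="-O2 -Wall -fPIC" generic   # or the "linux" target if readline-devel is installed

        # Leave out the standalone-binary objects, then link the rest into a .so
        rm -f lua.o luac.o print.o
        gcc -shared -o liblua-5.1.4.so *.o

        sudo cp liblua-5.1.4.so /usr/local/lib/
        sudo ln -s /usr/local/lib/liblua-5.1.4.so /usr/local/lib/liblua-5.1.so
        sudo ldconfig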

    Read the article

  • Resolve local subdomain on Apache for paths within user dir

    - by MaoPU
    On Apache 2.2.x I've activated mod_userdir. I used the default setup, so that http://localhost/~name/ maps to ~name/public_html/, and a path within public_html, e.g. ~name/public_html/mySite, can be reached through http://localhost/~name/mySite. How can I achieve that the same path can also be reached through http://mySite.name.localhost/? I don't want a manual approach like the one suggested in other SF questions (such as http://serverfault.com/q/133921/53624), but rather an automatic mapping of all available paths to the corresponding URLs. I think several steps will be needed: change the mod_userdir configuration so that the subdomain of localhost is connected with all available user names on the machine; the second step would maybe involve mod_rewrite, so that the sub-subdomain could be matched to the path within ~name/public_html. What would be your preferred way?
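
    One hedged way to get an automatic mapping is mod_vhost_alias instead of mod_userdir, assuming home directories live under /home and every *.localhost name resolves to 127.0.0.1 (e.g. via a local dnsmasq, since /etc/hosts cannot hold wildcards):

        # Requires mod_vhost_alias
        <VirtualHost *:80>
            ServerName localhost
            ServerAlias *.localhost

            # For mySite.name.localhost:  %2 = name, %1 = mySite
            #   -> /home/name/public_html/mySite
            VirtualDocumentRoot /home/%2/public_html/%1
            UseCanonicalName Off
        </VirtualHost>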

    Read the article

  • Set up Apache in Amazon AWS

    - by hudarsono
    Hi, I tried to set up Apache 2.2 on Amazon AWS using the Amazon AMI. I installed httpd and PHP and configured httpd.conf to use /var/www/html, then put an index.html file in that folder. But when I try to browse to my domain, which is ec2-122-248-255-181.ap-southeast-1.compute.amazonaws.com, nothing loads. I did start httpd by running apachectl start, and I can see it listening on port 80. Does anybody know what is wrong?
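
    A short, hedged checklist: if httpd answers locally but not from outside, the usual culprit on EC2 is the instance's security group not allowing inbound TCP 80; these commands on the instance help narrow it down before looking at the AWS console:

        # Does Apache answer locally at all?
        curl -I http://localhost/

        # Is it listening on all interfaces rather than only 127.0.0.1?
        netstat -tlnp | grep :80

        # Any local firewall rules in the way?
        sudo iptables -L -n

    If all of that looks right, the remaining step is usually opening port 80 in the instance's security group.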

    Read the article

  • curl makes a site work externally once run locally (Apache)

    - by Kyle_at_NU
    Currently, when I visit mysite.mydomain.com from outside the local network, the browser shows: "This is the default web page for this server." Nothing to see here; this is not even the "It works!" Apache page. Then if, locally (Apache2 on Ubuntu Server 12.04 with curl installed), I type curl mysite.mydomain.com, I get the site I expect, and the next time I visit the page externally I get the correct site. Has anyone seen this before? Tips/suggestions?
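
    This doesn't explain the curl behaviour by itself, but a cheap first step is ruling out virtual-host matching, since "default web page" usually means no vhost claimed the name; a hedged sketch for Apache 2.2 on Ubuntu 12.04, docroot as a placeholder:

        # /etc/apache2/ports.conf
        NameVirtualHost *:80
        Listen 80

        # /etc/apache2/sites-available/mysite
        <VirtualHost *:80>
            ServerName mysite.mydomain.com
            ServerAlias www.mysite.mydomain.com
            DocumentRoot /var/www/mysite
        </VirtualHost>

    Then sudo a2ensite mysite && sudo service apache2 reload, and check that external requests really reach this box rather than some upstream cache.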

    Read the article

  • Apache RewriteRule ignoring RewriteCond?

    - by winsmith
    So I have Apache running on OS X Server 10.4 (don't ask) with multiple sites. In 0002_[example.com].conf, I have this bit of code: <Directory "/Library/WebServer/Documents/secret/"> RewriteEngine On RewriteCond %{REMOTE_ADDR} !^137\.250\. RewriteRule .* /messages/secret.html </Directory> However, in this configuration the RewriteCond always seems to evaluate to false, since the secret directory gets shown even if the client's address does not begin with 137.250. If I change the config to this <Directory "/Library/WebServer/Documents/secret/"> RewriteEngine On RewriteRule .* /messages/secret.html RewriteCond %{REMOTE_ADDR} !^137\.250\. </Directory> the condition either does not get evaluated at all or always evaluates to true. Either way, all clients get blocked. What am I doing wrong?
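
    For the record, a RewriteCond only applies to the RewriteRule that immediately follows it, so the first ordering is the correct one; with the condition placed after the rule, the rule runs unconditionally. If the goal is simply "only 137.250.* may see this directory, everyone else gets the message page", a hedged alternative that avoids mod_rewrite entirely is:

        <Directory "/Library/WebServer/Documents/secret/">
            Order deny,allow
            Deny from all
            # Only clients whose address starts with 137.250. get through
            Allow from 137.250.
            # Everyone else sees the message page instead of a bare 403
            ErrorDocument 403 /messages/secret.html
        </Directory>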

    Read the article

  • Apache Alias - Chiliproject

    - by asdz
    I'm trying to set up ChiliProject (a Ruby application for project management) and I have already set up Apache. However, I want ChiliProject to live at http://abc.com/chiliproject, as I want abc.com itself to be used for another application. The following is my ChiliProject vhost setting: ServerName abc.com DocumentRoot /var/www/chiliproject/public Alias /chiliproject /var/www/chiliproject/public Options -MultiViews AllowOverride all When I go to abc.com, the ChiliProject page appears, but when I go to abc.com/chiliproject I reach a 404 page not found instead. If I change the DocumentRoot to /var/www, the page at abc.com is what I want, but abc.com/chiliproject comes up as a directory view of my page.
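
    If the app is served through Phusion Passenger (common for ChiliProject, though the question doesn't say), the usual sub-URI pattern is a symlink into the main docroot plus a base-URI directive rather than a plain Alias; a hedged sketch with assumed paths and an assumed Passenger 2/3:

        # ln -s /var/www/chiliproject/public /var/www/html/chiliproject

        <VirtualHost *:80>
            ServerName abc.com
            DocumentRoot /var/www/html

            # Passenger 2/3 syntax; newer versions call this PassengerBaseURI
            RailsBaseURI /chiliproject
            <Directory /var/www/html/chiliproject>
                Options -MultiViews
                AllowOverride all
            </Directory>
        </VirtualHost>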

    Read the article

  • GUI for viewing Apache headers

    - by user49249
    Is there any GUI for viewing the HTTP headers served by a chain of reverse proxy servers? I have a cloud setup with a few proxy servers between the client and the server that actually handles the request; all of them are Unix servers. When there is a problem I can't get a clue about, being able to post it here means downloading and FTPing the headers along with all the logs, logging in to each proxy server in the chain, opening a browser with the X display exported to some remote server, observing the HTTP responses, checking the request on each of those servers, and then posting the logs with the configuration and responses; that takes at least 2-3 hours just to type an email. Is there a shorter way to do this?
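
    Not a GUI, but a hedged note: most of those round trips can be collapsed into a couple of curl calls run from anywhere in (or outside) the chain; host names and paths below are placeholders:

        # Show request and response headers for one hop, discard the body
        curl -sv -o /dev/null http://backend-host:8080/some/path

        # Headers only, following redirects end-to-end through the proxies
        curl -sIL http://public-hostname/some/path

        # Test a single proxy in the chain with the public Host header
        curl -sv -o /dev/null -H 'Host: www.example.com' http://proxy2-internal-ip/some/path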

    Read the article

  • Apache mod_rewrite not working properly on Mac OS X 10.6 (Snow Leopard)

    - by DashRantic
    Hello all, I'm trying to create a PHP website with clean URLs using Apache's mod_rewrite and a .htaccess file. mod_rewrite seems to be working; however, it claims it cannot find files on my server that do exist. Just as a basic test, this is what my .htaccess file looks like at the moment; going to [mysite]/page should rewrite to the index.php file: Options +FollowSymLinks RewriteEngine on RewriteRule ^page$ index.php As far as I know, I have set up the .conf file appropriately as well: <Directory "/Users/myuser/Sites/"> Options Indexes MultiViews AllowOverride All Order allow,deny Allow from all </Directory> However, when I try accessing the URL set up via mod_rewrite ( localhost/~myuser/mysite/page ), I get this: Not Found The requested URL /Users/myuser/Sites/mysite/index.php was not found on this server. However, that file does exist, and that is the proper location! The site works fine otherwise; if I go to localhost/~myuser/mysite/index.php, everything works fine, minus any sort of clean URLs of course. Has anyone seen this before/have any ideas as to what I'm doing wrong?
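
    Since the site lives under a userdir sub-path rather than at the server root, one hedged guess is that the per-directory rewrite needs a RewriteBase, so the substitution is attached to the right URL prefix instead of being treated as a filesystem path; a sketch of the .htaccess with that added:

        Options +FollowSymLinks
        RewriteEngine on

        # The URL prefix this directory corresponds to, so "index.php"
        # resolves to /~myuser/mysite/index.php
        RewriteBase /~myuser/mysite/
        RewriteRule ^page$ index.php [L]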

    Read the article

  • 2nd Apache server fails to start

    - by ito3
    Hi, I have determined that my 2nd server fails to start because of this entry in its conf; once I remove the entry, the server starts up as normal. Alias /Reports/ "//abc/filedir/a/" <Directory "//abc/filedir/a/"> Order allow,deny Allow from all </Directory> I have a primary Apache server that also points to the folder with the same setting. I would like to know why the 2nd server fails to start; is it because the first server has locked the folder? //abc is my NAS server running on Windows 2003. Thanks

    Read the article

  • GhettoVCB.sh log is wrong

    - by Michael
    2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
    2010-02-25 16:03:02 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
    2010-02-25 16:03:02 -- info: ============================== ghettoVCB LOG START ==============================
    2010-02-25 16:03:02 -- info: CONFIG - ADAPTER_FORMAT = buslogic
    2010-02-25 16:03:02 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
    2010-02-25 16:03:02 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
    2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nfs_storage_backup/vm1
    2010-02-25 16:03:02 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
    2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
    2010-02-25 16:03:02 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
    2010-02-25 16:03:02 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
    2010-02-25 16:03:02 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
    2010-02-25 16:03:02 -- info: CONFIG - ADAPTER_FORMAT = buslogic
    2010-02-25 16:03:02 -- info: CONFIG - LOG_LEVEL = info
    2010-02-25 16:03:02 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
    2010-02-25 16:03:02 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
    2010-02-25 16:03:02 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
    2010-02-25 16:03:02 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
    2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
    2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
    2010-02-25 16:03:02 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
    2010-02-25 16:03:02 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
    2010-02-25 16:03:02 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
    2010-02-25 16:03:02 -- info: CONFIG - LOG_LEVEL = info
    2010-02-25 16:03:02 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
    2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
    2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
    2010-02-25 16:03:02 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
    2010-02-25 16:03:13 -- info: Initiate backup for VM1
    2010-02-25 16:03:13 -- info: Initiate backup for VM1
    2010-02-25 16:03:13 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-02-25" for VM1
    2010-02-25 16:03:13 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-02-25" for VM1
    Failed to clone disk : The file already exists (39).
    Destination disk format: VMFS thin-provisioned
    Cloning disk '/vmfs/volumes/datastore1/machine/VM1.vmdk'...
    2010-02-25 16:04:16 -- info: Removing snapshot from VM1 ...
    Destination disk format: VMFS thin-provisioned
    Cloning disk '/vmfs/volumes/datastore1/machine/VM1.vmdk'...

    How can I fix this issue? The backup is working, but the log shows what looks like two backups running at exactly the same time.

    Read the article

  • Apache Commons Codec with Android: could not find method

    - by dqminh
    Today I tried including the apache.commons.codec package in my Android application and couldn't get it running. Android could not find the method org.apache.commons.codec.binary.* and output the following errors in DDMS: 01-12 08:41:48.161: ERROR/dalvikvm(457): Could not find method org.apache.commons.codec.binary.Base64.encodeBase64URLSafeString, referenced from method com.dqminh.app.util.Util.sendRequest 01-12 08:41:48.161: WARN/dalvikvm(457): VFY: unable to resolve static method 10146: Lorg/apache/commons/codec/binary/Base64;.encodeBase64URLSafeString ([B)Ljava/lang/String; 01-12 08:41:48.161: WARN/dalvikvm(457): VFY: rejecting opcode 0x71 at 0x0004 Any clue how to solve this problem? Thanks a lot.

    Read the article

  • Use multiple WSGI mount points in Apache with an Nginx reverse proxy

    - by Thomas
    I am trying to set up multiple virtual hosts on the same server with Nginx and Apache and have run into a curious configuration issue. I have nginx configured with a generic upstream to Apache: upstream backend { server 1.1.1.1:8080; } I'm trying to set up multiple subdomains in nginx that hit different mount points in Apache. Each acts like the following examples: server { listen 80; server_name foo.yoursite.com; location / { proxy_pass http://backend/bar/; include /etc/nginx/proxy.conf; } ... } server { listen 80; server_name delta.yoursite.com; location / { proxy_pass http://backend/gamma/; include /etc/nginx/proxy.conf; } ... } These mount points are pointed at Django projects; however, each of the URLs comes back prepended with the Apache mount-point path. So, if I call the Django URL entry for foo.yoursite.com/wiki/biz/, Django appears to return foo.yoursite.com/bar/wiki/biz/. Similarly, if I call the URL entry for delta.yoursite.com/wiki/biz/, I get delta.yoursite.com/gamma/wiki/biz/. Is there any way to get rid of the prefix being returned on the URL entries by Django and Apache?
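
    One hedged way around this is to stop encoding the application in the path at all: pass the original Host header through and give Apache one name-based vhost per subdomain, each mounting its WSGI app at the root, so Django never sees a /bar or /gamma prefix in SCRIPT_NAME. Ports and host names come from the question; the wsgi path is a placeholder:

        # nginx: same upstream, but no path suffix on proxy_pass
        server {
            listen 80;
            server_name foo.yoursite.com;
            location / {
                proxy_pass http://backend;   # note: no /bar/ suffix
                # proxy.conf is assumed to pass the Host header through
                include /etc/nginx/proxy.conf;
            }
        }

        # Apache (mod_wsgi): one vhost per subdomain, app mounted at /
        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerName foo.yoursite.com
            WSGIScriptAlias / /srv/www/foo/django.wsgi
        </VirtualHost>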

    Read the article

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook "Permission Denied"

    - by Andy R
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed. Here are the configuration settings: - Users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf: <Location /svn/repos> DAV svn SVNPath /home/svn/repos AuthType Basic AuthName "Subversion Repository" AuthUserFile /etc/apache2/dav_svn.passwd Require valid-user </Location> I created a post-commit hook file that writes a simple message to a file in the repository root: /home/svn/repos/hooks/post-commit: #!/bin/sh REPOS="$1" REV="$2" /bin/echo 'worked' > ${REPOS}/postcommit.log I set the entire repository to be owned by www-data (the Apache user) and assigned 755 permissions to the post-commit script. When I test the post-commit script as the www-data user in an empty environment, it works: sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7 But when I commit on a client machine, the commit is successful yet the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error, even with an empty pre-commit script: "Commit failed (details follow): Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied" I did a few searches on Google for this error and I presume that this is an issue with the Apache user (www-data) not having adequate permissions, specifically to access /dev/null. I also read that the reason post-commit fails silently is that it doesn't report via stdout. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository and edited the Apache virtualhost to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf: <Directory /> Options FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> Any ideas/suggestions would be greatly appreciated! Thanks
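
    The "Can't create null stdout ... Permission denied" part usually points at /dev/null itself rather than the hook: if its permissions have been clobbered (or a MAC layer such as AppArmor is denying access), Apache cannot open it when spawning the hook. A couple of hedged checks:

        # /dev/null should be a character device, world readable/writable
        ls -l /dev/null
        # expected: crw-rw-rw- 1 root root 1, 3 ... /dev/null

        # If it was replaced by a regular file or its mode changed, restore it
        sudo rm -f /dev/null
        sudo mknod -m 666 /dev/null c 1 3

        # If AppArmor/SELinux is active, look for denials as well
        sudo grep -i denied /var/log/syslog /var/log/audit/audit.log 2>/dev/null | tail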

    Read the article

  • Apache unresponsive on Vista [closed]

    - by William Hudson
    I had been running Apache on Vista for around a year, but recently upgraded my workstation. I did a clean install of Vista Ultimate and installed the latest version of the Apache server for win32 (2.2.11, no SSL). The service runs fine and there were no errors reported during the install, nor are there any errors in the Apache logs. However, any attempt to access the web site on localhost (or 127.0.0.1) just hangs the browser. I have used netstat to check who is listening to port 80 and it shows httpd.exe. I have also tried adjusting the .conf file to use port 8080 but this had no effect either (except to change the netstat output). This is a development system with quite a few other pieces of software installed. However, when I tried installing IIS, it worked fine (I removed it soon after before reattempting the Apache install). Using the older 2.0 version of Apache has no effect. Windows firewall is not running. I have disabled my NOD32 anti-virus. Any ideas what is going on? Regards, William

    Read the article

  • PHP mkdir and Apache ownership

    - by elcorazon
    Is there a way to make PHP running under Apache create folders owned by the owner of the site that creates them, instead of owned by apache? Using WordPress, it creates new folders to upload into, but these are owned by apache.apache and not by the site they are running in. This also happens with osTicket. For now we have to SSH into the server and chmod the folder, but it would seem there should be a setting somewhere to override the ownership outside of any program that does it.
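
    mod_php itself always writes as the apache user; changing that needs something like suPHP, mpm-itk, or PHP over FastCGI with suexec. A hedged halfway measure is group ownership plus the setgid bit, so new upload folders at least inherit the site's group and stay group-writable; the group name and path are placeholders:

        # Put the site owner and apache in a shared group, then let new
        # files and folders inherit that group via the setgid bit
        sudo chgrp -R sitegroup /var/www/site/wp-content/uploads
        sudo chmod -R g+rwX /var/www/site/wp-content/uploads
        sudo find /var/www/site/wp-content/uploads -type d -exec chmod g+s {} \;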

    Read the article

  • Apache local configuration to resolve files correctly

    - by Alex E.
    Hello, I am new at this so bear with me. I have just configured Apache and PHP to work on my local Mac OS X computer. PHP works fine, except when I try to load the files for my live sites. The live sites have separate directories and are sorted by client name, etc. I've created symlinks to them in the default document root of the local web server. My issue is that Apache doesn't seem to want to load any of the root-relative paths found in the HTML pages. For example, I have src="/css/main.css" but Apache doesn't load the file, and similarly for images; it just resolves as a file not found 404 error. I then thought it might be the symlinks, so I copied the full directory into the Apache document root and still had the same result. I would really love to set up my local development environment to run Apache, PHP, and MySQL so I can develop locally and then publish when ready. I also tried the MAMP installation and had the same issues. Any help at all with this would be greatly appreciated. If my explanation wasn't clear please let me know. Thanks! Alex.
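
    Root-relative links like /css/main.css resolve against the document root of whatever host served the page, so they break when a live site is served out of a sub-folder (or symlink) of localhost. One hedged fix is a name-based virtual host per client site; the host name and paths below are placeholders:

        # /etc/hosts
        127.0.0.1   clientsite.local

        # httpd.conf or extra/httpd-vhosts.conf (Apache 2.2)
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName clientsite.local
            DocumentRoot "/Users/alex/Sites/clients/clientsite/htdocs"
            <Directory "/Users/alex/Sites/clients/clientsite/htdocs">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    With that in place, http://clientsite.local/css/main.css resolves against the site's own document root, the same way it will in production.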

    Read the article

  • Apache not loading Xdebug, but does when started from the Command Line

    - by JamesD
    I know that this sounds odd, but believe me, it's what is happening. Here are my system settings: Windows7 Apache 2.2 PHP 5.2.12 Xdebug 2.0.5 I have XDebug configured in my PHP.ini file. When I run php -m, I do in fact see that Xdebug is loaded. Now, if I start Apache AS A SERVICE (or by the Apache Monitor), and run phpinfo(), it is NOT showing Xdebug as being loaded. However, (now here's the crazy part), if I go to my Apache bin directory, and simply run httpd.exe, and then go and look at phpinfo(), Xdebug now shows as being loaded! Also, comparing some phpinfo() when started via service or by command line, it looks like the php.ini file is the same for either case. Everything looks the same except for the Xdebug being loaded part. Please, if you have any ideas it would be greatly appreciated.
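
    For reference, a hedged sketch of how Xdebug is normally wired into a thread-safe PHP 5.2 on Windows (the DLL path is a placeholder). If the service reads a different php.ini than the console session, or the service account cannot read the DLL's folder, this line silently does nothing when Apache starts as a service, which would match the symptom:

        ; php.ini - Xdebug must load as a Zend extension, not via extension=
        zend_extension_ts = "C:\php\ext\php_xdebug.dll"

        [xdebug]
        xdebug.remote_enable = 1
        xdebug.remote_host   = 127.0.0.1
        xdebug.remote_port   = 9000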

    Read the article

  • Nginx for static files, Apache isn't working now...

    - by matthewsteiner
    So anything that is a "static file" and exists will just be served by nginx; otherwise, the request should be passed off to Apache. Right now, static files are working correctly. However, if something is passed to Apache for example.com or subdomain.example.com, Apache just spits out the "Apache 2 Test Page" that you get if there's nothing there. Here's my nginx.conf: location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ { root /var/www/vhosts/example.com/public/; access_log off; expires 30d; } location / { proxy_pass http://127.0.0.1:8080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } Apache worked fine before, so I'm guessing it has to do with the way nginx is "asking". I'm not sure though. Any ideas?
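
    Since nginx forwards everything to 127.0.0.1:8080 with the original Host header, Apache needs name-based virtual hosts defined on that port; if none matches (or none exists on 8080), it falls back to the default test page. A hedged sketch of the Apache 2.2 side, docroot assumed from the nginx config:

        # Apache should listen where nginx forwards to
        Listen 127.0.0.1:8080
        NameVirtualHost 127.0.0.1:8080

        <VirtualHost 127.0.0.1:8080>
            ServerName example.com
            ServerAlias www.example.com subdomain.example.com
            DocumentRoot /var/www/vhosts/example.com/public
        </VirtualHost>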

    Read the article
