Search Results

Search found 3247 results on 130 pages for 'apache2'.


  • I Can't Get Ruby on Rails + Passenger + Apache to Work

    - by Luke Crowe
    I'm sorry if this is a stupid question, but I can't get Ruby on Rails to work on my Apache server. I'm using Phusion Passenger (mod_rails, mod_rack) for app deployment. Here is the RoR-specific configuration in my website's Apache configuration file:

        Alias /rails /var/www/syyborg.com/ruby/blog/public
        <Directory /var/www/syyborg.com/ruby/blog/public>
            Options FollowSymLinks
            AllowOverride None
            Order Allow,Deny
            Allow from All
        </Directory>
        RailsBaseURI /rails

    Again, I really have very little knowledge of this kind of thing; I have never set up a server from scratch before. Anyway, my Rails app, as you can see, is located at /var/www/syyborg.com/ruby/blog/, and I am trying to access it from http://syyborg.com/rails. However, when I try to load the site, I get a "403 Forbidden" error. Any help would be greatly appreciated, and I can provide further details if they are required. Thanks in advance!

    Read the article
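
    One plausible cause, going by Passenger's documented sub-URI setup: Passenger expects the app's public/ directory to be symlinked into the DocumentRoot rather than exposed through an Alias. A sketch, where the DocumentRoot path is an assumption to adjust:

        # Symlink the app's public dir into the (assumed) DocumentRoot first:
        # ln -s /var/www/syyborg.com/ruby/blog/public /var/www/syyborg.com/htdocs/rails
        <VirtualHost *:80>
            ServerName syyborg.com
            DocumentRoot /var/www/syyborg.com/htdocs
            RailsBaseURI /rails
            <Directory /var/www/syyborg.com/ruby/blog/public>
                Options FollowSymLinks
                AllowOverride None
                Order Allow,Deny
                Allow from All
            </Directory>
        </VirtualHost>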

  • pure-ftpd debian, can't get www-data user working

    - by lynks
    I'm trying to add FTP access to the Apache web files. In the past I have done this with an ftpuser and group arrangement; this time I would like to make it possible to log in directly as www-data (the default Apache user on Debian) to keep things a bit cleaner. I have checked and re-checked all the common issues:

        MinUID is set to 1 (www-data has uid 33)
        www-data has its shell set to /bin/bash in /etc/passwd
        PAMAuthentication is off
        UnixAuthentication is on

    I have restarted pure-ftpd using /etc/init.d/pure-ftpd restart. The resulting pure-ftpd invocation is:

        /usr/sbin/pure-ftpd -l unix -A -Y 1 -u 1 -E -O clf:/var/log/pure-ftpd/transfer.log -8 UTF-8 -B

    My syslog contains:

        Oct 7 19:46:40 Debian-60-squeeze-64 pure-ftpd: ([email protected]) [WARNING] Can't login as [www-data]: account disabled

    And my FTP client is giving me:

        530 Sorry, but I can't trust you

    Am I missing something obvious?

    Read the article
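
    One hedged reading of "account disabled": with UnixAuthentication, pure-ftpd consults /etc/shadow, and system accounts like www-data normally carry * or ! in the password field, which pure-ftpd treats as a disabled account. Two sketches around it (the virtual-user route assumes a puredb-enabled build with PureDB in the auth chain):

        # Option 1: give www-data a real password, which unlocks the account.
        passwd www-data

        # Option 2: leave www-data locked and map a pure-ftpd virtual user
        # onto its uid/gid instead (prompts for an FTP-only password).
        pure-pw useradd webftp -u www-data -g www-data -d /var/www
        pure-pw mkdb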

  • Apache to read from /home/user/public_html on CentOS 5.7

    - by C.S.Putra
    This is my first experience using CentOS 5.7 / Linux as my web server OS, and I have just finished installing Apache. I then created a new account using WHM. The account is created and the domain name can be accessed. I have put the web files under /home/user/public_html/, but when I access the domain I assigned to that user when creating the account in WHM, it doesn't read those files. In /usr/local/apache/conf/httpd.conf:

        <VirtualHost 175.103.48.66:80>
            ServerName domain.com
            ServerAlias www.domain.com
            DocumentRoot /home/user/public_html
            ServerAdmin [email protected]
            User veevou # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup group1 group1
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                SuexecUserGroup group1 group1
            </IfModule>
            CustomLog /usr/local/apache/domlogs/domain.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            CustomLog /usr/local/apache/domlogs/domain.com combined
            ScriptAlias /cgi-bin/ /home/user/public_html/cgi-bin/
        </VirtualHost>

    Instead of reading from /home/user/public_html/, Apache reads the /var/www/html/ folder. How do I set up Apache so that when users access www.domain.com, they get the files under /home/user/public_html/? Please advise. Thanks

    Read the article
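
    A hedged first check, since requests falling through to the default /var/www/html usually mean the virtual host is not being matched at all: on Apache 2.2, the NameVirtualHost address must pair exactly with the <VirtualHost> address, and the /home/user/public_html tree must be readable by Apache. A sketch (Directory settings are assumptions to adjust):

        # The address here must match the <VirtualHost ...> line exactly,
        # or the first/default vhost answers instead.
        NameVirtualHost 175.103.48.66:80

        <VirtualHost 175.103.48.66:80>
            ServerName domain.com
            DocumentRoot /home/user/public_html
            <Directory /home/user/public_html>
                Options -Indexes +FollowSymLinks
                AllowOverride All
                Order Allow,Deny
                Allow from all
            </Directory>
        </VirtualHost>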

  • apache rewrite debian vs windows

    - by user1079002
    I have two simple rewrite rules, which I have only just learned about:

        RewriteEngine On
        RewriteRule ^dl/(.*)/.*$ dl/$1/index.php [L]
        RewriteRule ^index.php$ upload.js [L]

    Both work on Windows for the URL localhost/upload/dl/mkdji/index.php. On Debian, however, only the second rule works (for www.domain.com/index.php); the first does not fire for www.domain.com/dl/oksoks/index.php. After dl/ comes some random string. Obviously I'm missing something regarding directory depth, but I don't know what. The .htaccess file is in localhost/upload on Windows and in the root of the domain.com folder on Debian. What am I missing here?

    Read the article
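
    A hedged diagnostic for the Debian side: per-directory rewrites strip a different prefix depending on where the .htaccess lives (under /upload on Windows, the document root on Debian), so pinning the base explicitly and watching the Apache 2.2 rewrite log is a reasonable first step:

        # .htaccess (AllowOverride must permit FileInfo):
        RewriteEngine On
        RewriteBase /
        RewriteRule ^dl/(.*)/.*$ dl/$1/index.php [L]
        RewriteRule ^index\.php$ upload.js [L]

        # In the server or <VirtualHost> config, not .htaccess - shows what
        # mod_rewrite actually receives; remove once solved:
        # RewriteLog /var/log/apache2/rewrite.log
        # RewriteLogLevel 5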

  • ErrorDocument not working when accessing .htaccess

    - by oxguy3
    I've been setting up ErrorDocuments for a website I'm working on, and generally they've been working. However, after I set the 403 ErrorDocument, I noticed that it didn't work when I tried to access the .htaccess file itself. When I access a different forbidden file, the ErrorDocument appears just fine. How can I make the ErrorDocument work for the .htaccess file too? If you didn't follow my explanation, here are links to show what I mean:

        ErrorDocument works fine: http://keycraft.haydencity.net/.ftpquota
        ErrorDocument doesn't work: http://keycraft.haydencity.net/.htaccess

    Read the article
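
    A hedged guess rather than a confirmed fix: the 403 for .ht* files usually comes from a block living at server-config scope, so declaring the ErrorDocument at that same scope (main config or vhost, not only in .htaccess) may be what's missing:

        # Stock Apache configs ship something like this at server scope:
        <FilesMatch "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </FilesMatch>
        # Hedged: declare the 403 page at the same scope, so the deny and
        # the error document are merged in the same configuration pass.
        ErrorDocument 403 /errors/403.html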

  • from svn to git (+ LDAP + password-less updates + passworded access control)

    - by Jayen
    We have an SVN setup, and there are some things we dislike about it and some things we like. We want to move to git, but we're not sure exactly what setup will work for us. We're currently using SVN (w/ Authz) + Apache (w/ WebDAV & LDAP):

        1. Hook to update the live site [like]
        2. Live site update requires no additional interaction [like]
        3. Live site update uses stored password [dislike]
        4. Commits require centralized-password authentication [like]
        5. Commit from live site changes stored credentials [dislike]
        6. Access control (per repository) for commits [like]

    Point 5 above is the one that keeps stuffing us up. Someone makes a commit from the live site and then the hook breaks. We're thinking of using gitosis/gitolite to get access control, but as they use ssh keys, we won't be requiring passwords. We're also thinking of using git-http-backend, with Apache for authentication, but then do we lose access control? Can the live site be automatically updated from a hook if Apache requires authentication? Can we combine git-http-backend and gitosis/gitolite somehow? Can we store http credentials with git?

    Read the article
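
    Not a full answer to all six points, but the git-http-backend half has a documented Apache shape (adapted from the git-http-backend man page, with LDAP substituted as the auth provider; the paths and LDAP URL are placeholders). Per-repository access control would still need a layer like gitolite in front, and git's credential.helper (git 1.7.9+) can store the HTTP credentials the live-site update uses:

        SetEnv GIT_PROJECT_ROOT /var/git
        SetEnv GIT_HTTP_EXPORT_ALL
        ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/
        <Location /git/>
            AuthType Basic
            AuthName "Git repositories"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
            Require valid-user
        </Location>
        # On the live-site checkout, so the update hook never prompts:
        #   git config credential.helper store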

  • Lock down Wiki access to password only but remain open to a subnet via .htaccess

    - by Treffynnon
    Basically we have a Wiki that has some sensitive information stored in it - not the best, I know, but my predecessor set it up. I want to request password access from anyone who is not on the local network subnet; those on the local subnet should be able to proceed without entering a password. The following .htaccess does not seem to work any more, as it is letting non-local visitors in without requiring the password:

        AuthName "Our Wiki"
        AuthType Basic
        AuthUserFile /path/to/passwd/file
        AuthGroupFile /dev/null
        Require valid-user
        Allow from 192.168
        Satisfy Any
        order deny,allow

    And I cannot work out why. The WikkaWiki it is supposed to be protecting was recently upgraded, which clobbered the .htaccess file, so I restored the above from memory/googling. Maybe I am missing an important directive? The full .htaccess is as follows:

        AuthName "Our Wiki"
        AuthType Basic
        AuthUserFile /path/to/passwd/file
        AuthGroupFile /dev/null
        Require valid-user
        Allow from 192.168
        Satisfy Any

        SetEnvIfNoCase Referer ".*($LIST_OF_ADULT_WORDS).*" BadReferrer
        order deny,allow
        deny from env=BadReferrer

        <IfModule mod_rewrite.c>
            # turn on rewrite engine
            RewriteEngine on
            RewriteBase /

            # if request is a directory, make sure it ends with a slash
            RewriteCond %{REQUEST_FILENAME} -d
            RewriteRule ^(.*/[^/]+)$ $1/

            # if not rewritten before, AND requested file is wikka.php
            # turn request into a query for a default (unspecified) page
            RewriteCond %{QUERY_STRING} !wakka=
            RewriteCond %{REQUEST_FILENAME} wikka.php
            RewriteRule ^(.*)$ wikka.php?wakka= [QSA,L]

            # if not rewritten before, AND requested file is a page name
            # turn request into a query for that page name for wikka.php
            RewriteCond %{QUERY_STRING} !wakka=
            RewriteRule ^(.*)$ wikka.php?wakka=$1 [QSA,L]
        </IfModule>

    Read the article
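
    One likely culprit, given how the access directives merge within a single .htaccess scope: the "order deny,allow" also governs the "Allow from 192.168" line, and under Deny,Allow ordering every host that is not explicitly denied is allowed. The host check therefore succeeds for everyone, and Satisfy Any never asks for the password. A sketch of the subnet-or-password part (the BadReferrer blacklist relies on Deny,Allow ordering and would need its own arrangement):

        AuthName "Our Wiki"
        AuthType Basic
        AuthUserFile /path/to/passwd/file
        AuthGroupFile /dev/null
        Require valid-user
        # Under Allow,Deny ordering, only hosts matching an Allow pass the
        # host check; everyone else must satisfy the password instead.
        Order allow,deny
        Allow from 192.168
        Satisfy Any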

  • How to best configure SYN flood protection in csf while keeping web responses fast

    - by Binh Nguyen
    My server goes down randomly 4-5 times a day because its load spikes very quickly. I have installed csf, and with some configuration the server is now stable, with load around 5. BUT the big issue is that real users find it very hard to access the website, especially from the IE browser (you can test at xaluan.com). The following is the config used in csf:

        SYNFLOOD = "1"
        SYNFLOOD_RATE = "100/s"
        SYNFLOOD_BURST = "10"
        CONNLIMIT = "80;30"
        PORTFLOOD = "80;tcp;70;5"
        CT_LIMIT = "29"
        # other settings left at their defaults

    I have been playing around with this config for a week, but still have no workaround. If I increase the rate to SYNFLOOD_RATE = "140/s" or more, the website responds very fast, but with the bad side effect that the server load also climbs quickly - normally around 20, and up to a few hundred at peak times. What I need is fast response times while the load stays low. Please help. Thanks. PS: the server runs nginx as a frontend, plus Apache, MySQL and PHP; the home page has around 70 elements, which are cached by the browser after the first visit.

    Read the article
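
    For what it's worth, csf.conf's own comments warn that SYNFLOOD delays every new connection and should only be enabled while actually under a SYN flood, which matches the slow-first-connection symptom here. A hedged direction, with values that are starting points to tune rather than known-good numbers:

        # Keep SYNFLOOD off during normal operation, per the csf.conf note,
        # and lean on per-IP limits to contain abusive clients instead.
        SYNFLOOD = "0"
        CONNLIMIT = "80;30"
        PORTFLOOD = "80;tcp;100;5"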

  • Adding PHP to Apache

    - by user528451
    Where I work, we use ancient technology that belongs in a museum, and I have to get everything done through system admins. They are telling me that in order to get PHP, they will need to upgrade the operating system as well as the Apache version.

        lcas100[67]% uname -a
        Linux lcas100 2.6.9-11.ELsmp #1 SMP Fri May 20 18:26:27 EDT 2005 i686 i686 i386 GNU/Linux
        lcas100[68]% cat /etc/*-release
        LSB_VERSION="1.3"
        Red Hat Enterprise Linux AS release 4 (Nahant)
        lcas100[75]% /ots/apache/bin/httpd -v
        Server version: Apache/1.3.31 (Unix)
        Server built: Nov 3 2004 18:47:31

    This doesn't make sense to me, because Apache 1.3.x supports PHP: http://php.net/manual/en/install.unix.apache.php. Furthermore, we have another machine that runs PHP on the exact same OS and OS version. The reason I want it on the former machine is that it is mounted on a different file system. Lastly, they tell me that all software the Apache webserver runs would need to be reinstalled/recompiled (assuming an Apache upgrade WAS needed); I am not even sure about that. Are they full of it? Thanks

    Read the article
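
    The asker's instinct looks right: PHP builds against Apache 1.3 as a DSO via apxs, with no OS upgrade required. A sketch against the existing install (the PHP version and install prefix are placeholders; note that Apache 1.3 wants --with-apxs, not the 2.x --with-apxs2):

        # From an unpacked PHP source tree:
        ./configure --with-apxs=/ots/apache/bin/apxs --prefix=/ots/php
        make
        make install
        # Then in httpd.conf:
        #   LoadModule php5_module libexec/libphp5.so
        #   AddType application/x-httpd-php .php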

  • Weird Apache Crash (with Dump) zend_hash_find (), libphp5.so

    - by Jacob84
    To be honest, I don't have experience working with Apache; I'm just putting my best intentions into solving this and don't know if I'm going about it right, so any help will be greatly appreciated. We have a PHP page which is throwing the following message in the browser:

        Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.

    The logs in /var/log/httpd don't seem to help, because it seems that Apache is unable to write any information - perhaps the error occurs at some stage of the process that makes logging impossible? I've read about the procedure for getting a core dump out of Apache, and here is the content:

        Reading symbols from /lib64/libgpg-error.so.0...(no debugging symbols found)...done.
        Loaded symbols for /lib64/libgpg-error.so.0
        Reading symbols from /usr/lib64/php/modules/zip.so...(no debugging symbols found)...done.
        Loaded symbols for /usr/lib64/php/modules/zip.so
        Core was generated by `/usr/sbin/httpd'.
        Program terminated with signal 11, Segmentation fault.
        #0  0x00007fb828fff712 in zend_hash_find () from /etc/httpd/modules/libphp5.so
        Missing separate debuginfos, use: debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64

    I've looked through the PHP files and haven't found any direct call to zend_hash_find (which seems to be causing the error), and searching Google turned up nothing related. Can somebody please help? Is there any step I need to take to learn more? Thanks a lot, as always!

    Read the article
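
    zend_hash_find() is internal to the Zend engine, not something PHP scripts call directly, so the useful clue is which frames sit above it in the backtrace. A hedged next step, following the hint at the bottom of the dump (the core file path is a placeholder):

        # Install the debug symbols gdb is asking for, then pull a full trace:
        debuginfo-install httpd php
        gdb /usr/sbin/httpd /path/to/core
        (gdb) bt full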

  • Mysql not loading correctly

    - by mcondiff
    PHP 5.3.2, Apache 2.2.15, MySQL 5.1.x, Windows XP SP3. I have now configured everything correctly, but I get a timeout when trying to connect to MySQL via PHP. So frustrated. I don't get an error message; the script just times out:

        Fatal error: Maximum execution time of 60 seconds exceeded

    I have made sure I have the correct paths. Any idea why this might be happening? I do a php -v from the command line and everything is normal, no errors. I upgraded PHP from 5.2.6 to 5.3.2 - do there seem to be problems or bugs with this? I am essentially using my previous php.ini with the paths edited. I am lost. Help! If you need anything from phpinfo() or httpd.conf or php.ini, let me know.

    Read the article
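
    A hedged guess that fits a silent timeout on exactly this stack: PHP 5.3 on Windows can resolve "localhost" to the IPv6 ::1 while MySQL listens only on IPv4, so the connect blocks until max_execution_time expires. Pointing at the IPv4 address, either in the connect call or via these php.ini defaults, is a cheap test:

        ; Only helps if the code relies on the default host; otherwise
        ; change the host argument in the connect call to 127.0.0.1.
        mysql.default_host = 127.0.0.1
        mysqli.default_host = 127.0.0.1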

  • Kerberos authentication not working for one single domain

    - by Buddy Casino
    We have a strange problem regarding Kerberos authentication with Apache mod_auth_kerb. We use a very simple krb5.conf in which only a single (main) AD server is configured. There are many domains in the forest, and SSO seems to be working for most of them, except one. I don't know what is special about that domain; the error message I see in the Apache logs is "Server not found in Kerberos database":

        [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(1025): [client xx.xxx.xxx.xxx] Using HTTP/[email protected] as server principal for password verification
        [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(714): [client xx.xxx.xxx.xxx] Trying to get TGT for user [email protected]
        [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(625): [client xx.xxx.xxx.xxx] Trying to verify authenticity of KDC using principal HTTP/[email protected]
        [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(640): [client xx.xxx.xxx.xxx] krb5_get_credentials() failed when verifying KDC
        [Wed Aug 31 14:56:02 2011] [error] [client xx.xxx.xxx.xxx] failed to verify krb5 credentials: Server not found in Kerberos database
        [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(1110): [client xx.xxx.xxx.xxx] kerb_authenticate_user_krb5pwd ret=401 user=(NULL) authtype=(NULL)

    When I kinit that user on the machine on which Apache is running, it works. I also checked that DNS lookups work, including reverse lookup. Can anyone tell me what's going on?

    Read the article
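
    "Server not found in Kerberos database" in a multi-domain forest is often a realm-mapping problem rather than DNS: with only the main AD server configured, the service principal of the odd domain gets looked up against the wrong realm/KDC. A hedged krb5.conf sketch, all names being placeholders:

        [libdefaults]
            default_realm = MAIN.EXAMPLE.COM
            dns_lookup_kdc = true

        [domain_realm]
            # Map the misbehaving domain (and its hosts) onto its own realm;
            # without a mapping, ticket requests go to a KDC that does not
            # know the principal and answers "Server not found".
            .problem-domain.example.com = PROBLEM.EXAMPLE.COM
            problem-domain.example.com = PROBLEM.EXAMPLE.COM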

  • Plesk + Apache + PHP (FastCGI): Constant session permissions problems, conflicts between HTTP / HTTPS

    - by Hans Engel
    I've just moved a collection of sites over to a brand-new server running Apache 2.2.3, PHP 5.3, and Plesk 10.1.1. I am having problems with file permissions on PHP sessions, which are being stored in /var/lib/php/session. I originally set the permissions like so for this folder:

        drwxrwx--- 2 apache psacln 8192 Mar 22 23:25 session

    This worked fine for HTTP sessions. Files were being saved in that folder with these permissions:

        -rw------- 1 client1 psacln 0 Mar 22 23:24 sess_507...
        -rw------- 1 client2 psacln 0 Mar 22 23:25 sess_8o1...

    The problem, however, is that PHP scripts accessed via HTTPS do not seem to be run by the same client1 or client2 user. I deleted the files in the session directory and accessed a login page via HTTPS to see how sessions were being saved when initiated via this protocol:

        -rw------- 1 apache apache 0 Mar 22 23:25 sess_507...

    So, for whatever reason, sessions initiated by clients browsing over HTTPS are saved by apache:apache, while sessions from HTTP clients are saved as someclient:psacln. What I'd like to ask:

        1. How can I avoid this problem with session permissions? When a session is created via unencrypted HTTP and the client then visits an HTTPS portion of the site, permission errors are shown, since apache:apache tries to access the session file created by someclient:psacln. The converse is also true.
        2. Can I change the user which runs the Apache HTTPS server, via Plesk or the command line? If not, can I have PHP sessions save with rw-rw---- permissions, and then add apache to the psacln group?

    Any other suggestions on how to fix this issue?

    Read the article
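
    On question 2: the session-file mode is settable, which makes the add-apache-to-psacln route workable in principle. A hedged sketch (the "depth;mode;path" save_path syntax is documented in the stock php.ini; whether Plesk overwrites it, and whether the converse HTTPS-to-HTTP direction also needs the client users added to apache's group, are things to verify):

        ; php.ini - create session files 0660 instead of the default 0600:
        session.save_path = "0;660;/var/lib/php/session"

        # shell - let the apache user reach group psacln's session files:
        usermod -a -G psacln apache
        chmod 770 /var/lib/php/session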

  • How do I use .htaccess conditional redirects for multiple domains?

    - by John
    I'm managing about 15 domains for a particular promotion. Each domain has specific redirects in place, as shown below. Rather than make 15 different .htaccess files that I would later have to manage separately, I'd like to use a single .htaccess file and symlink it into each website's directory. The trouble is that I can't figure out how to make the rules apply only to a specific domain. Every time I visit www.redirectsite2.com, it sends me to www.targetsite.com/search.html?state=PA&id=75, when it should instead be sending me to www.targetsite.com/search.html?state=NJ&id=68. How exactly do I make multiple RewriteRules apply for a given domain and only that domain? Is this even possible within a single .htaccess file?

        Options +FollowSymlinks
        # redirectsite1.com
        RewriteEngine On
        RewriteBase /

        # start processing rules for www.redirectsite1.com
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{HTTP_HOST} ^www\.redirectsite1\.com$
        # rule for organic visit first
        RewriteRule ^$ http://targetsite.com/search.html?state=PA&id=75 [QSA,R,L]
        RewriteRule ^PGN$ http://targetsite.com/search.html?state=PA&id=26 [QSA,R,NC,L]
        RewriteRule ^NS$ http://targetsite.com/search.html?state=PA&id=27 [QSA,R,NC,L]
        RewriteRule ^INQ$ http://targetsite.com/search.html?state=PA&id=28 [QSA,R,NC,L]
        RewriteRule ^AA$ http://targetsite.com/search.html?state=PA&id=29 [QSA,R,NC,L]
        RewriteRule ^PI$ http://targetsite.com/search.html?state=PA&id=30 [QSA,R,NC,L]
        RewriteRule ^GV$ http://targetsite.com/search.html?state=PA&id=31 [QSA,R,NC,L]
        # catch-all rule, using the same id as the organic visit
        RewriteRule ^([a-z]+)?$ http://targetsite.com/search.html?state=PA&id=75 [QSA,R,NC,L]
        # end processing rules for www.redirectsite1.com

        # begin rules for redirectsite2.com
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{HTTP_HOST} ^www\.redirectsite2\.com$
        # rule for organic visit first
        RewriteRule ^$ http://targetsite.com/search.html?state=NJ&id=68 [QSA,R,L]
        RewriteRule ^SL$ http://targetsite.com/search.html?state=NJ&id=6 [QSA,R,NC,L]
        RewriteRule ^APP$ http://targetsite.com/search.html?state=NJ&id=8 [QSA,R,NC,L]
        # catch-all rule, using the same id as the organic visit
        RewriteRule ^([a-z]+)?$ http://targetsite.com/search.html?state=NJ&id=68 [QSA,R,NC,L]

    Thanks for any help you may be able to provide!

    Read the article
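
    The behaviour described is documented mod_rewrite semantics: RewriteCond directives guard only the single RewriteRule that immediately follows them, so the HTTP_HOST check applies to the first rule of each block and every later rule fires for any domain. A sketch of the fix, repeating the host condition per rule:

        RewriteCond %{HTTP_HOST} ^www\.redirectsite1\.com$ [NC]
        RewriteRule ^PGN$ http://targetsite.com/search.html?state=PA&id=26 [QSA,R,NC,L]

        RewriteCond %{HTTP_HOST} ^www\.redirectsite2\.com$ [NC]
        RewriteRule ^SL$ http://targetsite.com/search.html?state=NJ&id=6 [QSA,R,NC,L]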

  • How do I install PHP 5.3 on CentOS?

    - by fivelitresofsoda
    I have to install PHP 5.3 on my CentOS server. If I do yum install php, the base repository installs 5.1.6, which is too old for the applications I need to install. So I've been trying to use the IUS repository, following the official instructions from IUS:

        [root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-2.ius.el5.noarch.rpm
        [root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        [root@linuxbox ~]# rpm -Uvh ius-release*.rpm epel-release*.rpm

    OK. Now I simply do yum install php53 etc. for all I need... but I get this error:

        Running rpm_check_debug
        Running Transaction Test
        Finished Transaction Test
        Transaction Check Error:
          file /usr/bin/php from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/bin/php-cgi from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/share/man/man1/php.1.gz from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /etc/php.ini from install of php53u-common-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-common-5.1.6-27.el5_5.3.x86_64

        Error Summary
        -------------

    I have no idea how to solve this. I think I have to remove the base packages first, but as someone new to Linux, I don't know how to do that.

    Read the article
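
    Two hedged routes past the transaction check, both assuming nothing on the box depends on the stock 5.1.6 packages:

        # Route 1: remove the conflicting base packages, then install IUS:
        yum remove php php-cli php-common
        yum install php53u php53u-cli php53u-common

        # Route 2: IUS ships a yum plugin that swaps the whole package set
        # in one transaction:
        yum install yum-plugin-replace
        yum replace php --replace-with php53u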

  • Is this normal? Multiple httpd processes

    - by ilcreatore
    I'm testing a new server. This isn't really a peak time for my server (2pm), but it's still running a bit slow. I was checking the ESTABLISHED connections using the following command:

        # netstat -ntu | grep :80 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n

    (Output screenshot: http://i.stack.imgur.com/cZuvP.jpg) My MaxClients is set to 50. As you can see in the picture, only 10 people are eating most of my RAM. I have a server with 4GB of RAM (2.7GB free for Apache), but each Apache process is eating 53MB, which means I can only allow about 50 processes. KeepAlive is Off, but I notice those connections aren't closing fast enough - is that normal?

    Read the article
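
    To the headline question: yes, multiple httpd processes are normal - under the default prefork MPM, Apache handles each concurrent connection in its own child process. The levers are how many children may exist and how often they are recycled. A hedged sketch for roughly 2.7GB of headroom at ~53MB per child (numbers are illustrative, not tuned for the actual workload):

        <IfModule prefork.c>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           50
            # Recycle children so slowly leaked memory is returned:
            MaxRequestsPerChild 500
        </IfModule>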

  • Linux / apache web-server segmentation fault warnings

    - by jeroen
    Lately I have been receiving a lot of segmentation fault warnings on my web server. The warnings look like:

        [notice] child pid xxxx exit signal Segmentation fault (11)

    I have consulted the server provider (it is a dedicated Red Hat Enterprise server) and they could not find anything. Since the errors started, I have already tried the following:

        I have added more RAM
        I have turned off / turned on several PHP modules (they sent me to a web page where someone had the same problem, caused by an excessive number of PHP modules)

    At the moments the warnings occur, there seems to be plenty of free RAM left and the number of processes is very low (the number of httpd processes is about a quarter of the maximum allowed). What can be causing these errors?

    Edit: current versions: Apache 2.0.52, PHP 5.2.8, RHEL 4.

    Edit 2: Although I asked this a long time ago, I was never able to solve it until I upgraded to PHP 5.3.

    Read the article

  • How to port Apache rewrite rules to cherokee?

    - by saint
    I'm pretty new to Cherokee. It's great and pretty straightforward, except for URL rewrites. Is there a straightforward guide to them? Let me know. Also, how would I port this:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

    Thanks

    Read the article

  • Apache ProxyPassReverse and https

    - by joshuaball
    Hi, I would like to map all traffic on 80 and 443 from foo.com to an internal server: 192.168.1.101. I have a VirtualHost (Apache 2.2 on Ubuntu) set up as follows (note: I had to break up the hyperlinks below because I am a 'new user'):

        <VirtualHost *:80>
            ServerName foo.com
            ServerAlias *.foo.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://192.168.1.101/
            ProxyPassReverse / http://192.168.1.101/
        </VirtualHost>

    And that works great for HTTP traffic. However, I can't seem to do the same thing for HTTPS. I have tried:

        Changing VirtualHost *:80 to * - but that doesn't work (I need it http-http and https-https)
        Creating a new VirtualHost entry for *:443 that redirects to http://192.168.1.101/, but that fails as well (browser timeouts)

    I did some searching, here and elsewhere, and the closest question I could find was this, but that didn't quite answer it. Also, just out of curiosity, I tried mapping all ports to https (by changing the two ProxyPass lines from http to https and removing the :80 from the VirtualHost), and that didn't work either. How would you do that as well? Any thoughts? Thanks in advance.

    Read the article
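
    A hedged sketch of the missing 443 side: terminate TLS in a *:443 vhost (mod_ssl) and proxy onward over plain HTTP, assuming a certificate for foo.com exists at the paths shown. SSLProxyEngine On would be needed instead only if the backend itself spoke https:

        Listen 443
        <VirtualHost *:443>
            ServerName foo.com
            SSLEngine On
            SSLCertificateFile    /etc/ssl/certs/foo.com.crt
            SSLCertificateKeyFile /etc/ssl/private/foo.com.key
            ProxyRequests Off
            ProxyPreserveHost On
            # TLS ends here; the backend is reached over plain HTTP.
            ProxyPass / http://192.168.1.101/
            ProxyPassReverse / http://192.168.1.101/
        </VirtualHost>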

  • How Do I Cache Just the Homepage with Apache .htaccess?

    - by Volomike
    This config is close...

        <FilesMatch "\.(php)$">
            Header set Cache-Control "max-age=7200, must-revalidate"
        </FilesMatch>

    ...but it does all PHP pages, not just the home page like I want. Basically the developer said he wants example.com to be cached, while http://example.com/electronics/ would not be cached. Note the developer is using pretty URLs with an MVC framework that runs everything through index.php.

    Read the article
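
    Since every pretty URL is funneled through index.php, FilesMatch can't tell the home page apart from any other page, but the request URI still can. A sketch that stays at .htaccess level, assuming mod_setenvif and mod_headers are enabled:

        # Flag only the site root (and the bare index.php), then attach the
        # caching header to flagged requests alone.
        SetEnvIf Request_URI "^/(index\.php)?$" CACHE_HOMEPAGE
        Header set Cache-Control "max-age=7200, must-revalidate" env=CACHE_HOMEPAGE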

  • Apache/Passenger and cpulimit

    - by Dave Smylie
    I run a Ruby on Rails site that processes email - the email is dumped directly into the web app via a POST from Postfix. At times I can get a burst of email coming in, causing a prolonged surge in CPU usage and making my VPS provider understandably unhappy with me. These emails don't need to be processed in a timely manner - they just need to be (eventually) processed. Obviously I can't just nice the process, as that only looks at the CPU usage on my VPS and can't take into account the CPU usage of the other VPSs. I have found a utility called cpulimit that will put hard limits on CPU usage for a particular process (e.g. 20%). This seems ideal for the purpose, but I can't work out how to integrate it with Apache/Passenger. Passenger starts up a Ruby process for each server and restarts them periodically, so the pid changes each time, while cpulimit needs to be given a pid to act on. Has anyone got any ideas how I could get Passenger to fire off a call to this command when it's starting up this particular virtual host?

    Read the article
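
    Not a Passenger hook, but one hedged workaround: a watcher loop that re-attaches cpulimit whenever new app processes appear, so Passenger restarts stop mattering. The process-title pattern, the 20% limit, and cpulimit's -z flag (exit when the target dies) are assumptions to check against the installed versions:

        #!/bin/sh
        # Re-attach cpulimit to every Passenger process for this app.
        while true; do
            for pid in $(pgrep -f 'Rails: /path/to/mail-app'); do
                # Skip pids that already have a limiter attached.
                pgrep -f "cpulimit.*-p $pid" >/dev/null \
                    || cpulimit -p "$pid" -l 20 -z &
            done
            sleep 10
        done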

  • How to restrict Apache Location directive to certain sub-domain?

    - by ohho
    On our server www.example.com, we use the <Location> directive to proxy traffic to a back-end server:

        <Location /app1>
            ProxyPass http://192.168.1.20
        </Location>

    Then we added a sub-domain uat.example.com, which points to the same IP address as www.example.com. We want to use it as a proxy for the client to test an app being developed. Hopefully, the client can access it via http://uat.example.com/app2_uat. Now if we add a Location:

        <Location /app2_uat>
            ProxyPass http://192.168.1.30
        </Location>

    the client can access both http://www.example.com/app2_uat and http://uat.example.com/app2_uat. How can I restrict the Location such that only http://uat.example.com/app2_uat is accessible? (i.e. http://www.example.com/app2_uat should not be accessible.)

    Read the article
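
    Directives in the global server scope apply to every hostname the server answers for; scoping a Location to one hostname is what name-based virtual hosts are for. A sketch (Apache 2.2 syntax assumed):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.example.com
            <Location /app1>
                ProxyPass http://192.168.1.20
            </Location>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName uat.example.com
            <Location /app2_uat>
                ProxyPass http://192.168.1.30
            </Location>
        </VirtualHost>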

  • mod_rewrite to page with HTTP auth

    - by Joe
    I'm trying to use mod_rewrite to proxy http://myserver/cam1 to an internal, HTTP-auth protected server at http://admin:admin@192.168.99.130/cgi/mjpg/mjpg.cgi. No matter what I try, though, requests to http://myserver/cam1 always prompt me for the username and password. I've tried all of these to no avail:

        RewriteRule ^/cam1 http://admin:admin@192.168.99.130/cgi/mjpg/mjpg.cgi [P,L]
        RewriteRule ^/cam1 http://192.168.99.130/cgi/mjpg/mjpg.cgi [E=Authorization:Basic\ YWRtaW46YWRtaW4=,P,L]
        RewriteRule ^/cam1 http://192.168.99.130/cgi/mjpg/mjpg.cgi [E=HTTP_USERID:admin,E=HTTP_PASSWORD:admin,P,L]

    Anybody have any other ideas?

    Read the article
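
    One technique that avoids putting credentials in the rewrite target at all: let mod_headers set the Authorization header on the proxied request (YWRtaW46YWRtaW4= is the base64 of admin:admin, matching the attempts above). A sketch, assuming mod_headers and mod_proxy are loaded:

        <Location /cam1>
            RequestHeader set Authorization "Basic YWRtaW46YWRtaW4="
            ProxyPass http://192.168.99.130/cgi/mjpg/mjpg.cgi
            ProxyPassReverse http://192.168.99.130/cgi/mjpg/mjpg.cgi
        </Location>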

  • Disable ProxyPass rules within a virtual host on apache 2

    - by chinto
    I have global ProxyPass rules in httpd.conf, at the top level:

        ProxyPass /test/css http://myserver:7788/test/css
        ProxyPassReverse /test/css http://myserver:7788/test/css

    and I have a virtual host:

        Listen localhost:7788
        NameVirtualHost localhost:7788
        <VirtualHost localhost:7788>
            Alias /test/css/ "C:/jboss/server/default/deploy/test.ear/test-web-app.war/css/"
        </VirtualHost>

    I would like to disable all global ProxyPass rules inside this virtual host, but NoProxy doesn't seem to work. (The reason I want this is that I have the global rules below, which create a 502 proxy loop if applied within this virtual host:

        # pass all requests to the application server
        ProxyPass /test http://localhost:8080/test
        ProxyPassReverse /test http://localhost:8080/test

    ) What I'm trying to do is serve all static content (like CSS) using Apache, while still proxying all other requests to the application server.

    Read the article
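
    mod_proxy has an exclusion form for exactly this: a ProxyPass target of ! marks a path as not-to-be-proxied, and exclusions must appear before any broader rules. Declaring them inside the vhost is a hedged sketch to try:

        <VirtualHost localhost:7788>
            # Exclusions first, so these paths are served locally here:
            ProxyPass /test/css !
            ProxyPass /test !
            Alias /test/css/ "C:/jboss/server/default/deploy/test.ear/test-web-app.war/css/"
        </VirtualHost>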

  • I can't log in to Nagios web interface

    - by M. Saâd
    When I try to log in to Nagios in my web browser, after repeatedly entering my login and password on my Nagios page at http://127.0.0.1/nagios/, I get this:

        Authorization Required
        This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.
        Apache/2.2.15 (Red Hat) Server at 127.0.0.1 Port 80

    I changed the password:

        htpasswd -c /etc/nagios/htpasswd.users nagiosadmin

    and restarted the server:

        service httpd restart

    but without result!

    Read the article
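
    A couple of hedged checks, since the usual culprits are a mismatch between the file htpasswd writes and the file Apache reads, or (on Red Hat) SELinux denying access to a freshly recreated file:

        # Confirm the AuthUserFile in the Nagios Apache config matches the
        # file that was just rewritten, and that apache can read it:
        grep -r AuthUserFile /etc/httpd/conf.d/
        ls -l /etc/nagios/htpasswd.users
        # Restore the SELinux context in case htpasswd -c recreated the
        # file with the wrong one:
        restorecon -v /etc/nagios/htpasswd.users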
