Search Results

Search found 13625 results on 545 pages for 'apache math'.

  • Forwarding Apache requests (port 80) to Tomcat (port 8080)?

    - by iftrue
    I want to run a Tomcat application through a regular website URL, such as www.xyz.com. I would like the root of this domain to act as the base directory for the web application, so each request to www.xyz.com/a/b/c is served by www.xyz.com:8080/myApp/a/b/c. Ideally, I would be able to do this transparently and only for certain webapps; www.xyz.com:8080 should still respond to requests directly. What do I need to do to make this happen?
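
    A minimal mod_proxy sketch of this kind of mapping, assuming Tomcat runs on the same host and the webapp context is /myApp (both taken from the question):

        # requires mod_proxy and mod_proxy_http
        <VirtualHost *:80>
            ServerName www.xyz.com
            # Map the site root onto the Tomcat webapp context
            ProxyPass        / http://localhost:8080/myApp/
            ProxyPassReverse / http://localhost:8080/myApp/
        </VirtualHost>

    Tomcat keeps answering on port 8080 directly, since the proxy only adds a second route to the same connector.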

  • I run Webmin and I want it to be accessible at two URLs, both using ProxyPass in Apache

    - by user36644
    This is what I am trying to do:

        NameVirtualHost *
        <VirtualHost *>
            ServerName testsite.org
            ServerAdmin [email protected]
            DocumentRoot /var/www/
        </VirtualHost>
        <VirtualHost *>
            ServerName panel.testsite.org
            ProxyPass / http://panel.testsite.org:10000/
            ProxyPassReverse / http://panel.testsite.org:10000/
        </VirtualHost>
        <VirtualHost 12.34.56.78>
            ServerName newsite.com
            ServerAdmin [email protected]
            DocumentRoot /var/newsite/
        </VirtualHost>
        <VirtualHost 12.34.56.78>
            ServerName panel.newsite.com
            ProxyPass / http://panel.newsite.com:10000/
            ProxyPassReverse / http://panel.newsite.com:10000/
        </VirtualHost>

    The problem is that Apache won't accept the second vhost with the IP 12.34.56.78 because it says one already exists. panel.newsite.com and newsite.com have the same IP, so I am not sure how I can make it so that only the URL panel.newsite.com gets proxied to port 10000 but no other URL on newsite.com does.
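
    A hedged guess at the cause: without a NameVirtualHost declaration for that specific address, Apache treats the two IP-based vhosts as duplicates rather than name-based siblings. A sketch of the fix, in the Apache 2.2 syntax the config above already uses:

        # Declare the address as name-based so several ServerNames can share it
        NameVirtualHost 12.34.56.78
        <VirtualHost 12.34.56.78>
            ServerName newsite.com
            ...
        </VirtualHost>
        <VirtualHost 12.34.56.78>
            ServerName panel.newsite.com
            ...
        </VirtualHost>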

  • How can I password prompt certain IPs and allow all others free access using Apache?

    - by Moak
    SOLVED: The idea is that if the visitor comes from China they have to pass basic authentication; any other IP address can visit the site without being hassled (including proxies).

        # ...about 1400 rules like these in total...
        SetEnvIf Remote_Addr 222.249.128.0/19 china
        SetEnvIf Remote_Addr 222.249.160.0/20 china
        SetEnvIf Remote_Addr 222.249.176.0/20 china

        AuthType Basic
        AuthName "Restricted"
        AuthUserFile /www/passwd/users
        Require valid-user

        Order allow,deny
        Allow from All
        Deny from env=china
        Satisfy any

  • Apache: Limit the Number of Requests/Traffic per IP?

    - by Ian Kern
    I would like to allow each IP to use up to, say, 1 GB of traffic per day, and if that limit is exceeded, all requests from that IP are then dropped until the next day. However, a simpler solution where the connection is dropped after a certain number of requests would suffice. Is there already some sort of module that can do this? Or perhaps I can achieve this through something like iptables? Thanks
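
    For what it's worth, third-party modules like mod_limitipconn (concurrent connections per IP) or mod_cband (per-vhost traffic quotas) target this space. On the iptables side, a sketch that limits the rate of new connections rather than bytes (the numbers are placeholders, and the hashlimit flags assume a reasonably recent iptables):

        # Drop new connections from any single IP above 60 per minute on port 80
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
                 -m hashlimit --hashlimit-name http --hashlimit-mode srcip \
                 --hashlimit-above 60/minute --hashlimit-burst 100 -j DROP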

  • PHP include path problem: same code works with Ubuntu's default Apache and PHP conf, but not on CentOS

    - by Neo
    So the same code works on my Ubuntu server, but when I upload it to my dedicated hosting server running CentOS it seems to add an extra prefix of .:/usr/share/pear:/usr/share/php: to the include path. I tried setting the include path to different things but it just doesn't work. The file is in a directory called language in the same folder as the file that is including it, and I'm using:

        include dirname(__FILE__).DIRECTORY_SEPARATOR."language".DIRECTORY_SEPARATOR."storage.inc";

    and

        include dirname(__FILE__)."/language/language.php";

    and

        include "language/language.php";

    and a lot of other combinations, but I can't get it to find the file.

        Fatal error: require_once() [function.require]: Failed opening required
        '/home/neo/public_html/migration/include/class/core/storage.inc'
        (include_path='.:/usr/share/pear:/usr/share/php:/home/neo/public_html/migration')
        in /home/neo/public_html/migration/include/class/core/class_lang.inc on line 153
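
    A sketch that sidesteps the distro's include_path entirely by anchoring everything to the current file's directory; this assumes the language directory really does sit next to class_lang.inc, as the question describes:

        <?php
        // Prepend this file's directory so relative includes resolve the same way
        // on both servers, regardless of what php.ini puts in include_path
        set_include_path(dirname(__FILE__) . PATH_SEPARATOR . get_include_path());
        require_once dirname(__FILE__) . '/language/storage.inc';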

  • Apache server rewrite rules: how to avoid "implicitly forcing redirect (rc=302)"?

    - by Olivier Pons
    Hi! I've got a very annoying problem: our webserver handles two domains (more actually, but let's say two for a simpler example):

        pretassur.fr
        pretassuragentimmobilier.fr

    Here's what I want to do: change (whatever1).pretassuragentimmobilier.fr(/whatever2) to (whatever1).pretassur.fr(/whatever2)?theme=agentimmobilier. So here's my RewriteRule:

        RewriteCond %{SERVER_NAME} (([a-z]+\.)*)pretassuragentimmobilier.(fr|com)
        RewriteRule ^(.+) http://%1pretassur.fr$1 [E=THEME:pretassur_agent,QSA]
        # if THEME not empty, set it:
        RewriteCond %{ENV:THEME} ^(.+)$
        RewriteRule (.*) $1?IDP=%{ENV:THEME} [QSA]

    The big (huge) problem is, let's have a look at the rewrite logs:

        [pretassurmandataireimmo.com] (5) => setting env variable 'THEME' to 'pretassur_mandataire'
        [pretassurmandataireimmo.com] => (2) implicitly forcing redirect (rc=302) with http://pretassur.fr/

    Aaaaaaaaarg! "Implicitly forcing redirect" is exactly what I don't want! I want to rewrite to pretassur.fr internally, not make a real redirect! Right now, if you type http://pretassurmandataireimmo.com it is redirected to http://pretassur.fr/?IDP=pretassur_mandataire (try it). I don't want that! I want to display the page http://pretassur.fr/?IDP=pretassur_mandataire without touching the original host. Any idea? Thanks a lot!
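
    A hedged sketch of the usual way out: when the substitution is an absolute URL on another hostname, mod_rewrite can only redirect externally or, with mod_proxy loaded, proxy the request internally via the [P] flag, which keeps the original host in the browser:

        # Proxy instead of redirect; requires mod_proxy and mod_proxy_http
        RewriteCond %{SERVER_NAME} (([a-z]+\.)*)pretassuragentimmobilier\.(fr|com)
        RewriteRule ^(.+) http://%1pretassur.fr$1?IDP=pretassur_agent [P,QSA]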

  • I have an internal machine with Apache that I can access by IP address but not by computer name

    - by Parris
    I have an Ubuntu machine running lampp. While on the machine I can type localhost or the computer's name to access htdocs. From another machine I can only access it via its IP address. This just started happening recently when someone rearranged the network cables and removed a hub sitting between the machine and the network, which makes me think it wasn't all that stable to begin with anyway. Any suggestions on where I should start looking?
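
    If the LAN has no internal DNS, name resolution was probably coming from broadcast/NetBIOS and got flaky with the cabling change; Apache itself is likely fine. A quick hedged check from a Linux client (the hostname and address are placeholders):

        # Does the name resolve at all, and to the right address?
        nslookup ubuntu-box
        ping ubuntu-box
        # Pinning it in the client's hosts file isolates Apache from the DNS question
        echo "192.168.1.50  ubuntu-box" >> /etc/hosts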

  • Rolling Apache2 log files

    - by Andrew B
    I'm using a CollabNet SVN distribution on Linux, and the log files are configured through the standard Apache httpd.conf. It's been a while since I dealt directly with Apache, but my memory and Google seem to indicate that the only way to rotate Apache log files is outside of Apache, using a periodic script. Is there some convenient way I'm missing to rotate these?
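
    One built-in option, sketched against stock Apache (the rotatelogs path and log directory vary by distro and by the CollabNet bundle):

        # Piped logging: Apache's bundled rotatelogs starts a new file every 24h
        CustomLog "|/usr/sbin/rotatelogs /var/log/httpd/access.%Y-%m-%d.log 86400" combined
        ErrorLog  "|/usr/sbin/rotatelogs /var/log/httpd/error.%Y-%m-%d.log 86400"

    The other common route is logrotate with a graceful restart in postrotate, which is the "periodic script" most distros already ship for their packaged Apache.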

  • How to allow unprivileged apache/PHP to do a root task (CentOS)

    - by Chris
    I am setting up a sort of personal dropbox for our customers on a CentOS 6.3 machine. The server will be accessible through SFTP and a proprietary HTTP service based on PHP. This machine will be in our DMZ so it has to be secure. Because of this I have Apache running as an unprivileged user, hardened the security on Apache, the OS and PHP, applied a lot of filtering in iptables, and applied some restrictive TCP Wrappers. Now, you might have suspected this one was coming: SELinux is also set to enforcing.

    I'm setting up PAM to use MySQL so my users in the web application can log in. These users will all be in a group that can use SSH only for SFTP, and users will be chrooted to their own 'home' folder. To allow this, SELinux wants the folders to have the user_home_t tag, and the parent directory needs to be writable by root only. If these restrictions are not met, SELinux kills the SSH pipe immediately. The files need to be accessible through both HTTP and SFTP, so I have made a SELinux module to allow Apache to search/attr/read/write etc. on directories with the user_home_dir_t tag.

    As SFTP users are stored in MySQL, I want to set up their home dirs upon user creation. This is a problem since Apache has no write access to /home; it's writable by root only, since that's required to keep SELinux and OpenSSH happy. Basically I need to let Apache do only a few tasks as root, and only within /home. So I need to somehow elevate the privileges temporarily, or let root do these tasks for Apache instead. What I need Apache to do with root privileges is the following:

        mkdir /home/userdir/
        mkdir /home/userdir/userdir
        chmod -R 0755 /home/userdir
        umask 011 /home/userdir/userdir
        chcon -R -t user_home_t /home/userdir
        chown -R user:sftp_admin /home/userdir/userdir
        chmod 2770 /home/userdir/userdir

    This would create a home for the user. One idea that might work is cron, but that would mean the server checks for users without a home every minute, and account creation would freeze the interface for an average of 30 seconds before it can be confirmed, which I'd rather avoid. Does anybody know if something can be done with sudoers? Any other ideas are welcome. Thanks for your time!
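
    Sudoers can cover this; a hedged sketch, where the script name is a placeholder you would write yourself (it must validate its single argument carefully, since it runs as root and is callable by the web user):

        # /etc/sudoers.d/apache-homes  (edit with: visudo -f /etc/sudoers.d/apache-homes)
        apache ALL=(root) NOPASSWD: /usr/local/sbin/make_sftp_home.sh

    The PHP side can then call it synchronously at account-creation time, avoiding the cron polling delay:

        sudo /usr/local/sbin/make_sftp_home.sh username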

  • How do I uninstall PHP and Apache on Linux?

    - by ngache
    It's installed wrongly, so I need to uninstall and reinstall them. They were installed from source. How can I efficiently uninstall them first? I tried make uninstall in the PHP source dir, but only got:

        make: *** No rule to make target `uninstall'.  Stop.

    Thanks!
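
    A hedged sketch, assuming the default configure prefixes (check the flags originally passed to ./configure; if the build trees still exist, `grep prefix config.log` in each shows the exact invocation):

        # Source builds usually stay inside their --prefix, so removing that
        # tree removes the install; defaults shown, adjust to your prefixes
        rm -rf /usr/local/apache2   # Apache httpd default prefix
        rm -rf /usr/local/php       # PHP, if configured with --prefix=/usr/local/php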

  • How to configure Apache's mod_proxy_html to work as an AJAX proxy?

    - by dcerecedo
    I'm trying to build a web site that lets you view and manipulate data from any page on any other website. To do that, I have to bypass 'Allow Origin' problems: I'm loading the other domain's content in an iframe and I have to manipulate its content with JavaScript downloaded from my domain.

    My first attempt was to write a simple proxy myself, requesting the other domain's page through a server proxy coded in Java that not only serves the content but rebuilds links (src's and href's) in it, so that the content referenced by those links also gets downloaded through my handmade proxy. The result is not bad, but it has problems with URLs in CSS and scripts. It was then that I realized mod_proxy_html is supposed to do exactly this job. The problem is that I cannot figure out how to make it work as expected.

    Let's suppose my server runs at my-domain.com, and to proxy and transform content from another domain I'd make a request like this:

        my-domain.com/proxy?url=http://another-domain.com/some/content

    I'd want mod_proxy_html to serve the content and rewrite URLs found in http://another-domain.com/some/content in the following ways:

        Absolute URLs not from another-domain.com: no rewriting
        Root-relative URLs: /other/content -> /proxy?url=http://another-domain.com/other/content
        Relative URLs: other/content -> /proxy?url=http://another-domain.com/some/content/other/content
        Parent-relative URLs: ../other/content -> /proxy?url=http://another-domain.com/some/other/content

    The URL should be specified at runtime, not configuration time. Can this be achieved with mod_proxy_html? Could anyone provide a simple working configuration to start with?

    EDIT 1 - First approach. The following site config works fine with sites that use absolute URLs everywhere, like http://www.huffingtonpost.es/. You could try this config on localhost: http://localhost/asset/http://www.huffingtonpost.es/

        <VirtualHost *:80>
            ServerName localhost
            LogLevel debug
            ProxyRequests off
            RewriteEngine On
            RewriteRule ^/asset/(.*) $1 [P]
            ProxyHTMLURLMap $1 /asset/
            <Location /asset/>
                ProxyPassReverse /
                ProxyHTMLURLMap / /asset/
            </Location>
        </VirtualHost>

    But as explained in the documentation, if I hit a site using relative URLs, I'd like to have these rewritten in the HTML via mod_proxy_html. So I should change the Location block as follows:

        <Location /asset/>
            ProxyPassReverse /
            # Depending on your system use one line or the other
            # Ubuntu:
            #SetOutputFilter proxy-html
            # any other system:
            ProxyHTMLEnable On
            ProxyHTMLURLMap / /asset/
        </Location>

    ...which doesn't seem to work. Comments, hints and ideas welcome!
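
    One hedged tweak worth trying on top of that second Location block: by default mod_proxy_html only rewrites HTML attributes, while ProxyHTMLExtended also turns on rewriting inside inline scripts and stylesheets (mod_proxy_html 3.x syntax):

        <Location /asset/>
            ProxyPassReverse /
            ProxyHTMLEnable On
            # Also rewrite URLs embedded in inline CSS and JavaScript
            ProxyHTMLExtended On
            ProxyHTMLURLMap / /asset/
        </Location>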

  • Apache VirtualHost, multiple sites: one SSL with redirect and one regular HTTP

    - by pedalpete
    I've got a server with one site which I am redirecting to https via:

        <VirtualHost *:80>
            DocumentRoot /var/www/html/secure
            ServerName secure.com
            Redirect / https://secure.com
        </VirtualHost>

    That works no problem. Now I'm trying to add another, non-secure site:

        <VirtualHost *:80>
            DocumentRoot /var/www/html/notsecure
            ServerName notsecure.com
        </VirtualHost>

    Of course, because the redirect is on '/', all sites are getting redirected. I've tried changing the Redirect to the full document root, but no luck.
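
    A hedged guess: a Redirect in one vhost should not leak into another, so Apache is probably funneling both hostnames into the first vhost because name-based matching isn't on. With Apache 2.2 that takes an explicit declaration (2.4 enables it automatically):

        NameVirtualHost *:80

    Running `apachectl -S` shows which vhost each ServerName actually lands in, which would confirm or rule this out.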

  • What is the proper way to set up the Apache document root in terms of privileges?

    - by racl101
    I have just installed Ubuntu 9.10 server edition on my machine and I wish to run my own personal local server with other users on the same LAN. First, I was wondering what directory structure is best for the web root: should I just use /var/www/ and start throwing web documents there, or should I create a folder elsewhere (maybe in the home directory)? Second, in the /var/www/ directory only the root user can create documents, but I wish to have other users be able to create files in the document root and upload them via FTP. Should I change the permissions of the /var/www/ folder? Or again, should I create the document root elsewhere with different permissions? What is the safest way of doing this?
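
    A common pattern, sketched with placeholder names: keep /var/www root-owned but give a dedicated group write access, rather than loosening permissions for everyone:

        # 'webdev' and 'alice' are placeholders
        groupadd webdev
        usermod -aG webdev alice
        chgrp -R webdev /var/www
        chmod -R 2775 /var/www   # setgid bit: new files inherit the webdev group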

  • What is the best way to configure the number of workers in Apache?

    - by rbm
    My site receives a lot of traffic for two hours during the day (2000 hits per minute); the rest of the day it receives less traffic (500 hits per minute). I have been experimenting with the MaxClients and MaxSpareServers values, but I still get some downtime during peak hours. How can I calculate the best values for my configuration based on the amount of RAM that I have? Each process is around 36-40 MB of memory.

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:          3096        793       2302          0          0          0
        -/+ buffers/cache:         793       2302
        Swap:            0          0          0

    Values that I am using now:

        <IfModule prefork.c>
            StartServers           10
            MinSpareServers        22
            MaxSpareServers        60
            ServerLimit            90
            MaxClients             90
            MaxRequestsPerChild   400
        </IfModule>
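
    The usual back-of-the-envelope for prefork, applied to the numbers in the question (a starting point, not gospel):

        MaxClients ~= (RAM available to Apache) / (average process size)
                   ~= 2302 MB / 40 MB
                   ~= 57

    By the same arithmetic, a MaxClients of 90 needs roughly 90 x 40 = 3600 MB at full load, more than the box's 3096 MB total, and with no swap configured that points at the OOM killer during peaks. Lowering MaxClients, or slimming each process by unloading unused modules, is the direction the numbers suggest.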

  • 403 Forbidden for web root on Apache on Mac OS X v10.7, but can access user directories

    - by philosophistry
    When I access http://localhost/ I get 403 Forbidden, but if I access http://localhost/~username it serves up pages. Things I've tried: checking error logs; swapping out with the original httpd conf files; changing DocumentRoot to my user directory (after all, that should work if I can access ~username). I've seen 30-plus Q&A sites that all point to people having trouble with user directories being forbidden. I have the opposite problem, and so I'm tearing my hair out here.
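
    For comparison, a sketch of what the docroot block should look like in 10.7's stock Apache 2.2 config (/etc/apache2/httpd.conf); if this block is missing or over-strict, the restrictive <Directory /> default higher up in that file wins and produces exactly this 403:

        <Directory "/Library/WebServer/Documents">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>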

  • Redirect a URL to another URL with Apache and WAMP?

    - by user1719496
    I was wondering how to make a simple redirection. I've got WAMP installed on my computer, and I want it so that when I go to abc.com it redirects to xyz.com. I did this in the httpd.conf file, but it isn't fully working: it works, but only when I go to localhost. What I want is that when I go to abc.com it goes to xyz.com, and I can't do that. Here is my conf:

        <VirtualHost *:80>
            ServerName abc.com
            Redirect permanent / http://www.xyz.com/
        </VirtualHost>
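
    A hedged note: for that vhost to ever match, the request has to arrive carrying Host: abc.com, which means abc.com must resolve to the WAMP machine in the first place. For purely local testing that's a hosts-file entry:

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1  abc.com www.abc.com

    For visitors elsewhere, abc.com's public DNS has to point at this machine instead.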

  • Password-protect a directory, but not that directory's files? [Apache]

    - by Onion
    Hi, I was just wondering if it is possible to protect a directory with a username/password combination using .htaccess and .htpasswd files, but not protect the files within it. That is, one would be able to link, say, images within that directory to friends, but browsing the directory itself would not be allowed without a username/password. Thanks to all in advance.
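
    A sketch along the lines of the Deny-from-env trick in the China question above, requiring auth only when the bare directory (the index/listing) is requested; untested, and the URI pattern and paths are assumptions:

        # In the directory's .htaccess: flag requests for the directory itself
        SetEnvIf Request_URI "/$" dir_request
        AuthType Basic
        AuthName "Listing restricted"
        AuthUserFile /path/to/.htpasswd
        Require valid-user
        Order allow,deny
        Allow from all
        Deny from env=dir_request
        Satisfy any

    Requests for individual files pass the host-based Allow and skip auth; requests ending in a slash fail it and fall back to the password prompt.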

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle-testing this and failed to achieve my goal, which is to deny all access to all directories except the Public directory, and to allow access to the other directories only from specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled:

        sudo a2enmod ssl
        sudo a2enmod proxy
        sudo a2enmod proxy_http
        sudo a2enmod rewrite
        sudo a2ensite default-ssl

    Outside of the script I copied sites-available to sites-enabled, then reloaded Apache. I have a directory created for Railo CFML located at /var/www/Railo/. Navigating the browser to http://Server_IP_Address/Railo forces SSL and relocates to https://Server_IP_Address/Railo, which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appear to be working for the sites-enabled VirtualHost.

    The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this:

        Railo
            error
            Public
            NotPublic
            Sandbox

    These are my sites-enabled configurations:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            # Default Deny All to prevent walking backwards in file system
            Alias /Railo/ "/var/www/Railo/"
            <Directory ~ ".*/Railo/(?!Public).*">
                Order Deny,Allow
                Deny from All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc

            RewriteEngine on
            RewriteCond %{SERVER_PORT} !^443$
            RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]
        </VirtualHost>

    and

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            Alias /Railo/ "/var/www/Railo/"
            <Directory ~ "/var/www/Railo/(?!Public).*">
                Order Deny,Allow
                Deny from All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            # (the stock Ubuntu default-ssl comment blocks and commented-out
            # SSL example directives are unchanged from the distribution file)

            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

            DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html

            # Proxy .cfm and .cfc requests to Railo
            ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1
            ProxyPassReverse / http://127.0.0.1:8888/

            # Deny access to admin except for local clients
            <Location /railo-context/admin/>
                Order deny,allow
                Deny from all
                # Allow from <Omitted>
                # Allow from <Omitted>
                Allow from 127.0.0.1
            </Location>
        </VirtualHost>
        </IfModule>

    The apache2.conf includes the following:

        # Include the virtual host configurations:
        Include sites-enabled/

        <IfModule !mod_jk.c>
            LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        </IfModule>
        <IfModule mod_jk.c>
            JkMount /*.cfm ajp13
            JkMount /*.cfc ajp13
            JkMount /*.do ajp13
            JkMount /*.jsp ajp13
            JkMount /*.cfchart ajp13
            JkMount /*.cfm/* ajp13
            JkMount /*.cfml/* ajp13
            # Flex Gateway Mappings
            # JkMount /flex2gateway/* ajp13
            # JkMount /flashservices/gateway/* ajp13
            # JkMount /messagebroker/* ajp13
            JkMountCopy all
            JkLogFile /var/log/apache2/mod_jk.log
        </IfModule>

    I believe I understand most of this except the jk_module inclusion, which logs an error I can't sort out:

        [warn] No JkShmFile defined in httpd.conf. Using default /etc/apache2/logs/jk-runtime-status

    I've checked my regular expression against the directory paths with RegexBuddy just to be sure it was correct. The problem doesn't appear to be regex-related, although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking Railo admin site access.
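
    One hedged observation: the <Directory> blocks guard the filesystem path, but .cfm/.cfc requests are handed off to Tomcat by mod_jk/mod_proxy before Apache maps the URL to a file, so those requests may never hit the Directory blocks at all. Matching the URL space instead would cover both cases; an untested sketch with a placeholder admin IP:

        <LocationMatch "^/Railo/(?!Public/)">
            Order Deny,Allow
            Deny from all
            Allow from 203.0.113.10
        </LocationMatch>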

  • htaccess correct, Apache logs still showing the evil visitors with 200 code

    - by bulgin
    I hope someone can help me. Please take a look at the following snippet of Apache logs:

        95-169-172-157.evilvisitor.com - - [12/Nov/2012:09:46:02 -0500] "GET /the-page-I-dont-want-to-deliver.html HTTP/1.1" 200 9171 "http://hackers.ru/" "Mozilla/4.0 (MSIE 6.0; Windows NT 5.1; Search)"

    I have the following included in my .htaccess for the root directory of the website, and there are no other .htaccess files anywhere that would affect this:

        RewriteEngine On
        Options +FollowSymLinks
        ServerSignature Off
        ErrorDocument 403 "Nothing Interesting Here"
        order allow,deny
        deny from evilvisitor.com
        deny from hackers.ru
        deny from anonymouse.org
        allow from all

    I also have GeoIP functioning properly and have this included there:

        # for stuff from different countries
        RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^(UA|TR|RU|RO|LV|CZ|IR|HR|KR|TW|NO|NL|NO|IL|SE)
        RewriteRule ^(.*)$ - [F,L]

    I know this works because whenever I attempt to access the website from a proxy in, say, Spain, I get the error message. I also know it works because when accessing the website from anonymouse.org, the proper error code page is displayed. So why am I still getting these visitors, who successfully access the page I don't want them to see with an Apache 200 code, when it should be an error code?
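
    A hedged thought: `deny from evilvisitor.com` only matches when the reverse DNS lookup of the client's IP resolves (and double-checks forward) to that hostname at request time; if it doesn't, the deny silently never fires, even though the log later shows the hostname. Denying the observed address range directly sidesteps DNS entirely; sketched from the logged address 95-169-172-157.evilvisitor.com:

        # 95-169-172-157.evilvisitor.com -> 95.169.172.157
        deny from 95.169.172.0/24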

  • Apache Lucene: Is Relevance Score Always Between 0 and 1?

    - by Eamorr
    Greetings, I have the following Apache Lucene snippet that's giving me some nice results:

        int numHits = 100;
        int resultsPerPage = 100;
        IndexSearcher searcher = new IndexSearcher(reader);
        TopScoreDocCollector collector = TopScoreDocCollector.create(numHits, true);
        Query q = parser.parse(queryString);
        searcher.search(q, collector);
        ScoreDoc[] hits = collector.topDocs(0 * resultsPerPage, resultsPerPage).scoreDocs;
        Results r = new Results();
        r.length = hits.length;
        for (int i = 0; i < hits.length; i++) {
            Document doc = searcher.doc(hits[i].doc);
            double distanceKm = getGreatCircleDistance(lucene2double(doc.get("lat")), lucene2double(doc.get("lng")),
                    Double.parseDouble(userLat), Double.parseDouble(userLng));
            double newRelevance = ((1 / distanceKm) * Math.log(hits[i].score) / Math.log(2)) * (0 - 1);
            System.out.println(hits[i].doc + "\t" + hits[i].score + "\t" + doc.get("content")
                    + "\tKm=" + distanceKm + "\trlvnc=" + String.valueOf(newRelevance));
        }

    What I want to know is: is hits[i].score always between 0 and 1? It seems that way, but I can't be sure. I've even checked the Lucene documentation (class ScoreDoc) to no avail. You'll see I'm calculating the log of the score inside the newRelevance value. I need hits[i].score to be between 0 and 1, because if it is below zero I'll get an error, and above 1 the sign will change from negative to positive. I hope some Lucene expert out there can offer me some insight. Many thanks,
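
    For what it's worth, a hedged note: raw Lucene scores are non-negative but not, in general, capped at 1 (they are unnormalized sums of term weights). One way to force the log's argument into (0, 1] is to divide by the best score in the result set; a sketch against the code above (topDocs should only be called once per search):

        // Sketch: take TopDocs once and normalize each hit by its best score
        TopDocs top = collector.topDocs(0, resultsPerPage);
        ScoreDoc[] hits = top.scoreDocs;
        float maxScore = top.getMaxScore();
        // ...inside the loop:
        double norm = hits[i].score / maxScore;   // in (0, 1] when maxScore > 0
        double newRelevance = -(1.0 / distanceKm) * (Math.log(norm) / Math.log(2));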

  • JavaScript A* pathfinding: enemy movement in a 3D environment

    - by faiz
    I am trying to implement a pathfinding algorithm using PATHFINDING.JS in a 3D WebGL world. I have made a 200x200 matrix and placed my enemy (a SWAT character) in it. I am confused about implementing the path: I have tried walking the path by comparing each array value with the SWAT's position. It works! But THE ENEMY KEEPS GOING THROUGH THE UNWALKABLE AREA OF MY MATRIX; for example, the enemy should not pass through 119,100 (x=119, z=100), but it moves through that co-ordinate too. Can anyone help me out in this regard?

    Problem facing: the enemy (SWAT character) keeps moving through walls/unwalkable areas.
    Wanted solution: the enemy does not move through the unwalkable path.

        function draw() {
            grid = new PF.Grid(200, 200);
            grid.setWalkableAt(119, 100, false);
            grid.setWalkableAt(107, 100, false);
            grid.setWalkableAt(103, 104, false);
            grid.setWalkableAt(103, 100, false);
            grid.setWalkableAt(135, 100, false);
            grid.setWalkableAt(103, 120, false);
            grid.setWalkableAt(103, 112, false);
            grid.setWalkableAt(127, 100, false);
            grid.setWalkableAt(123, 100, false);
            grid.setWalkableAt(139, 100, false);
            grid.setWalkableAt(103, 124, false);
            grid.setWalkableAt(103, 128, false);
            grid.setWalkableAt(115, 100, false);
            grid.setWalkableAt(131, 100, false);
            grid.setWalkableAt(103, 116, false);
            grid.setWalkableAt(103, 108, false);
            grid.setWalkableAt(111, 100, false);
            grid.setWalkableAt(103, 132, false);

            finder = new PF.AStarFinder();

            f1 = Math.abs(first_person_controller.position.x);
            f2 = Math.abs(first_person_controller.position.z);
            ff1 = Math.round(f1);
            ff2 = Math.round(f2);
            s1 = Math.abs(swat.position.x);
            s2 = Math.abs(swat.position.z);
            ss1 = Math.round(s1);
            ss2 = Math.round(s2); // bug in the original: this rounded s1, so the start z
                                  // was wrong and the computed path cut through walls

            path = finder.findPath(ss1, ss2, ff1, ff2, grid);
            size = path.length - 1;
            Ai();
        }

        function Ai() {
            // Note: x (index i) and z (index j) advance independently below, so the
            // enemy can drift off the computed path diagonally between waypoints
            if (i < size) {
                if (swat.position.x >= path[i][0]) {
                    swat.position.x -= 0.3;
                    if (Math.floor(swat.position.x) == path[i][0]) { i = i + 1; }
                } else if (swat.position.x <= path[i][0]) {
                    swat.position.x += 0.3;
                    if (Math.floor(swat.position.x) == path[i][0]) { i = i + 1; }
                }
            }
            if (j < size) {
                if (Math.abs(swat.position.z) >= path[j][1]) {
                    swat.position.z -= 0.3;
                    if (Math.floor(Math.abs(swat.position.z)) == path[j][1]) { j = j + 1; }
                } else if (Math.abs(swat.position.z) <= path[j][1]) {
                    swat.position.z += 0.3;
                    if (Math.floor(Math.abs(swat.position.z)) == path[j][1]) { j = j + 1; }
                }
            }
        }
