Search Results

Search found 2680 results on 108 pages for 'soft 404'.


  • Url rewriting stops working after changing default port on iis7

    - by Somesh
    I have migrated websites from an IIS6 webserver on Windows Server 2003 to an IIS7 webserver on Windows Server 2008 using the msdeploy tool. The application pool settings are changed to "Enable 32-bit Applications=true", "Managed_Pipeline_Mode=Classic", "Identity=NetworkService", Framework=v1.1/2.0. All the websites work fine on the default port, along with the URL rewriting migrated from IIS6. When I start the webserver on a port other than the default by changing the bindings, URL rewriting stops working and I get 404 errors in the logs. I don't think I have to change the handler mappings because I am running in Classic mode. How can I troubleshoot this?

    Read the article

  • IIS 7.5 Request Filtering logs versus UrlScan 3.1

    - by Mouffette
    When IIS 7.5 Request Filtering blocks a request it seems to add an entry to the regular IIS web logs with a 404. a) Is there any way to send detailed Request Filtering logs to a separate file? UrlScan could specify LoggingDirectory and keep this "noise" out of our real IIS logs. b) Also, is there a way to get more information about why Request Filtering blocked a request? UrlScan logged the rule that caused the denial and allowed control over redirection using RejectResponseUrl, which was especially convenient on non-production sites. c) If these features are important, is the recommended practice still to install UrlScan 3.1 on IIS 7.5 (Windows 2008 R2) and disable Request Filtering? Any guidance is appreciated.

    Read the article

  • Some people suddenly say they can't access my site, but I and everyone else can

    - by Cain
    I have had this problem many times before and I'm still unable to find out what is causing it. It happened to me some months ago, but it fixed itself and everything went back to normal. Everything was working fine for quite a while until 2 days ago. Some people are reporting they can't access my site and get a 404 error, while I can access it normally and so can many other users. I don't know what the common denominator is, since both groups of people are from the same countries and use the same browsers, OS, etc., so the issue doesn't seem related to that. I have reported this problem before to both my host and my domain registrar, but neither of them claims responsibility for it. Who is to blame, then? What can I do to find out what's causing the issue and solve it? Thank you.
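
    A first troubleshooting step, as a hedged sketch (assuming a Linux/macOS machine with standard DNS tools, and example.com standing in for the real domain), is to compare what different resolvers return and which server the request actually reaches:

        # what the system resolver and a public resolver return
        dig +short example.com
        dig +short example.com @8.8.8.8
        dig +short NS example.com          # authoritative name servers

        # fetch the page while printing the IP that was contacted
        curl -sv http://example.com/ -o /dev/null 2>&1 | grep -E 'Trying|HTTP/'

    If the users who get the 404 see a different A record than you do, the problem is stale or partially propagated DNS rather than the host itself; if they reach the same IP, the 404 is coming from the web server or a proxy in front of it.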

    Read the article

  • robot hammering apache2

    - by user1571418
    My apache2 log is bombarded with lines like: 108.5.114.118 - - [03/Aug/2012:15:23:28 +0200] "GET http://xchecker.net/tmp_proxy2012/http/engine.php HTTP/1.0" 404 1690 "http://xchecker.net/tmp_proxy2012/http/engine.php" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Win 9x 4.90)" I am puzzled by this: why is a request for some weird xchecker.net domain ending up on my server in the first place?! The requests come every few dozen seconds, so it must be a robot. Any ideas what it is? By the way, that URL is valid; apparently it contains some test page...
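
    Requests carrying a full URL in the request line are usually open-proxy probes: the bot is checking whether the server will proxy for it. A hedged mitigation sketch (assuming mod_rewrite is available; the ProxyRequests line only matters if mod_proxy is loaded):

        # never act as a forward proxy
        ProxyRequests Off

        # refuse proxy-style requests whose request line carries an absolute URL
        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^[A-Z]+\s+https?:// [NC]
        RewriteRule ^ - [F]

    Answering them with a 404, as the log already shows, is harmless; the rule above just turns the noise into 403s and makes the probes cheaper to serve.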

    Read the article

  • IIS7 defaulting to default.php instead of default.aspx

    - by emzero
    Hi guys. My client has just got a new dedicated server running Win2008 (we had 2003 before), IIS7, etc. I started setting up a small ASP.NET 2.0 web application we have, running in its own .NET 2.0 app pool. The problem is that when I browse the site root (locally or remotely), I get a 404 because the URL now points to http://domain/default.php, when it should be default.aspx. Yes, I've checked the Default Documents settings for the website, and I deleted everything but default.aspx (default.php was not even listed). Finally, I'll note that if I navigate to http://domain/default.aspx, the site works perfectly and I can follow links without a problem. Any idea why this is happening? Or at least where I should start looking? Thanks!
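
    One thing worth checking is whether a web.config in the site root (or one inherited from a parent) overrides what IIS Manager displays. A minimal sketch of pinning the default document in the application's own web.config, using the standard system.webServer syntax:

        <configuration>
          <system.webServer>
            <defaultDocument enabled="true">
              <files>
                <clear />
                <add value="default.aspx" />
              </files>
            </defaultDocument>
          </system.webServer>
        </configuration>

    The <clear /> removes anything inherited from a higher level, so default.php can no longer win even if it is defined further up the configuration hierarchy.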

    Read the article

  • nginx points the sub-directory of an alias folder to the base directory

    - by Starry
    I am new to nginx and I am confused about a configuration. My web site serves folders from different locations: location / { root /Path1; } location ^~ /personal { alias /Path2; } When I query http://mysite/personal, I am accessing the content of /Path2 instead of /Path1, which is what I want. Now I want to add a sub-directory of /personal with specific configuration, so I add: location /personal/download { autoindex on; } But I get a 404 error when querying http://mysite/personal/download. According to the error log, I am directed to /Path1/personal/download, which is not correct. How can I configure nginx so that all access to http://mysite/personal/* is directed to the same directory under /Path2?
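
    The new location has no root or alias of its own, and alias is not inherited from the /personal block, so the request falls back to the /Path1 root, which is exactly what the error log shows. A hedged sketch of one way to fix it, using the paths from the question:

        location ^~ /personal {
            alias /Path2;
        }

        # the sub-directory gets its own alias so it also maps into /Path2
        location ^~ /personal/download {
            alias /Path2/download;
            autoindex on;
        }

    Because nginx picks the longest matching prefix, /personal/download/* is handled by the second block and everything else under /personal by the first.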

    Read the article

  • How to enable mod_info in Apache?

    - by Amit Nagar
    I went through the Apache guide for enabling mod_info. Per the docs: to configure mod_info, add the following to your httpd.conf file: <Location /server-info> SetHandler server-info </Location> You may wish to use mod_access inside the directive to limit access to your server configuration information: <Location /server-info> SetHandler server-info Order deny,allow Deny from all Allow from yourcompany.com </Location> Once configured, the server information is obtained by accessing http://your.host.dom/server-info. In my case this link is not giving any info; I get an HTTP 404 Not Found error. Is there anything I need to install, such as mod_info.c? Is there anything I need to add, such as an AddModule line? In the error log: File does not exist: /usr/local/apache2/htdocs/example1/server-info. I have 3 virtual hosts, one of which is the default and uses example1 as its DocumentRoot. I am not sure where this page (server-info) is supposed to live. server-status, by contrast, works fine.
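
    For reference, the directives quoted from the guide lost their angle brackets above; a hedged sketch of what the working block usually looks like (Apache 2.2 syntax, with yourcompany.com as the placeholder from the docs, and assuming your build shipped the module file):

        # make sure the module is actually loaded (httpd -M should list info_module)
        LoadModule info_module modules/mod_info.so

        <Location /server-info>
            SetHandler server-info
            Order deny,allow
            Deny from all
            Allow from yourcompany.com
        </Location>

    The handler generates the page itself, so no server-info file needs to exist under htdocs; the "File does not exist" error suggests the <Location> block is not in effect for the virtual host that answered the request, so put it in the global config or inside the vhost you actually query.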

    Read the article

  • convert .htaccess to nginx

    - by Chip Gà Con
    It's me again :( I was trying to install siwapp on my webserver but I couldn't make it work with nginx. Here is the .htaccess file content: RewriteCond %{REQUEST_FILENAME} !index.php RewriteRule (.*)\.php$ index.php/$1 RewriteCond $1 !^(index\.php|nhototamsu|assets|cache|xd_receiver\.html|photo|ipanel|automap|xajax_js|files|robots\.txt|favicon\.ico|ione\.ico|(.*)\.xml|ror\.xml|tool|google6afb981101589049\.html|googlec0d38cf2adbc25bc\.html|widget|iradio_admin|services|wsdl) RewriteRule ^(.*)$ index.php/$1 [QSA,L] When I access http://myurl.com/tin-tuc/tuyen-sinh/tu-van/2012/04/25757-phan-van-qua-giua-khoi-a1-va-khoi-a.html, nginx can't display the page correctly; it says "404 Not Found" (new URL: http://myurl.com/tin-tuc/tuyen-sinh/tu-van/2012/04/25757-phan-van-qua-giua-khoi-a1-va-khoi-a.html)
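
    A rough nginx equivalent of those rules, as a hedged sketch (assuming PHP runs behind FastCGI on 127.0.0.1:9000 and the application reads the request path from PATH_INFO, as the index.php/$1 rewrite suggests):

        location / {
            # real files and directories are served directly, which replaces the
            # long exclusion list; everything else goes to index.php/<path>
            try_files $uri $uri/ /index.php$uri$is_args$args;
        }

        location ~ ^/index\.php(/|$) {
            fastcgi_split_path_info ^(/index\.php)(/.*)$;
            fastcgi_pass  127.0.0.1:9000;                  # assumption: local php-fpm/fcgi
            fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            fastcgi_param PATH_INFO       $fastcgi_path_info;
            include fastcgi_params;
        }

    If the framework reads the path from a q= query parameter instead of PATH_INFO, the try_files fallback would need to be /index.php?q=$uri&$args instead.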

    Read the article

  • Nginx trailing slash not working

    - by user1573604
    I am using the rule below to match URLs and add a trailing slash. Under location /: rewrite ^([^.]*[^/])$ $1/ permanent; According to the logs below it is working, however the GET request is still for the old URL, prior to adding the slash, which results in a 404. 2013/11/08 12:33:19 [notice] 29919#0: 92 "^([^.][^/])$" matches "/foo", client: 100.100.100.100, server: _, request: "GET /foo HTTP/1.1", host: "100.100.100.100" 2013/11/08 12:33:19 [notice] 29919#0: *92 rewritten redirect: "/foo/", client: 100.100.100.100, server: _, request: "GET /foo HTTP/1.1", host: "100.100.100.100" Any suggestions as to why this might be happening?

    Read the article

  • path problem with mod_rewrite, XDebug, PDT, XAMPP and Windows XP

    - by Delirium tremens
    My mod_rewrite turns accounts/create into index.php?folder=accounts&action=create, but PDT ignores it, so when I try to start a PHP Script debug session I have to type a folder location in the file field, and PDT doesn't accept it. When PDT auto-generates the URL for the PHP Web Page debug session, I go to http://localhost/myframe/index.php?XDEBUG%5FSESSION%5FSTART=ECLIPSE%5FDBGP&KEY=12569067976875, but myframe is in the frameworks folder, so I get a 404 error. When I set a breakpoint, uncheck Auto Generate, add frameworks before myframe in the URL, set Start Debug from http://localhost/frameworks/myframe/accounts/create under Advanced, and click Debug, the debugger doesn't stop at the breakpoint.

    Read the article

  • Arch Linux with an nginx/django setup refuses to display ANYTHING

    - by Holland
    I'm on Amazon Ec2, with an Arch Linux server. While I truly am loving it, I'm having the issue of actually getting nginx to display anything. Everytime I try to throw my hostname into the browser, the browser states that it's not available for some reason - almost as if the host doesn't even exist. One thing I'd like to know is, how can I get this up and running? Is there a specific arch linux configuration I have to do to make it web accessible? I have port 80 open, as well as port 22. I've tried using gunicorn, python-flup, and nginx. Nginx Config user http; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name _; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; #charset koi8-r; location ^~ /media/ { root /path/to/media; } location ^~ /admin-media/ { root /usr/lib/python2.7/site-packages/django/contrib/admin/media; } location / { root /path/to/root/; fastcgi_pass 127.0.0.1:8080; fastcgi_param SERVER_NAME $server_name; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param QUERY_STRING $query_string; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_pass_header Authorization; fastcgi_intercept_errors off; fastcgi_index index.html; index index.htm index.html; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /etc/nginx/html/50x.html; } } # server { # listen 80; # server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; # location / { # root html; # index index.html index.htm; # } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { root html; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} #} # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } I can't quite tell if it's a server issue or a configuration issue: I've followed so many guides now I can't even count them all. 
    The thing is that Django itself is working fine, and the permissions on the document root where the site files are stored are 777. On top of that, I have a git repo which works perfectly fine, and django, python, and runfcgi all start without issues. The same goes for gunicorn when I run gunicorn_django -b 0.0.0.0:8000 in my document root. Here is the output from that: 2012-04-15 05:17:37 [3124] [INFO] Starting gunicorn 0.14.2 2012-04-15 05:17:37 [3124] [INFO] Listening at: http://0.0.0.0:8081 (3124) 2012-04-15 05:17:37 [3124] [INFO] Using worker: sync 2012-04-15 05:17:37 [3127] [INFO] Booting worker with pid: 3127 As far as I know, everything seems fine, and so do nginx's error.log and access.log. The access log is completely blank, for that matter. I just feel lost here; what would be a step in the right direction for debugging an issue like this?
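
    A hedged first pass at narrowing this down, using plain shell (commands that are standard on Arch; your-ec2-host stands in for the instance's public DNS name):

        # is nginx running, and does the config parse?
        sudo nginx -t
        ss -tlnp | grep ':80'            # should show nginx bound to 0.0.0.0:80

        # does it answer locally, bypassing any network or firewall issue?
        curl -sI http://127.0.0.1/

        # from your own machine: is port 80 reachable from outside?
        curl -sI http://your-ec2-host/

    If the local curl gets a response but the remote one times out, the usual culprit on EC2 is the security group rather than the instance itself; in that case nothing reaches nginx at all, which matches the blank access log you describe.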

    Read the article

  • subdomain/virtualhost problem on unix + apache

    - by Aaron
    Hello, I'm having a strangely difficult time setting up a subdomain (x.example.com). The main site works fine, but I get 404 errors attempting to hit x.example.com no matter how I set up the VirtualHost config. NameVirtualHost *:80 <VirtualHost *:80> ServerName www.example.com DocumentRoot /var/www/example.com/htdocs ServerAlias example.com </VirtualHost> <VirtualHost *:80> ServerName x.example.com ErrorLog /var/logs/x-error-log CustomLog /var/logs/x-access-log common DocumentRoot /var/www/x/htdocs </VirtualHost> As far as I can tell, this is a vanilla set up. Any suggestions would be appreciated.
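
    Two quick checks that usually explain this kind of thing, as a sketch (use apachectl or apache2ctl depending on the distribution):

        # dump how Apache parsed the name-based vhosts; x.example.com should be
        # listed under *:80 rather than falling through to the default vhost
        apachectl -S

        # confirm the name resolves to this server at all; with no A/CNAME record
        # the request never reaches the new vhost
        dig +short x.example.com

    If apachectl -S lists the vhost correctly and DNS points at the server, the 404 is coming from inside the x vhost (for example an empty /var/www/x/htdocs), and its ErrorLog/CustomLog files will say so.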

    Read the article

  • 310 too many redirects after moving drupal site to fast-cgi

    - by Jaels
    Here is the trouble: when I follow this link - http://znak.net.ua - it redirects to http://znak.net.ua/ru/ru/ru/ru/ru/ and I get Error 310 (net::ERR_TOO_MANY_REDIRECTS). This started happening when I switched from mod_php to FastCGI. Here is my .htaccess: ErrorDocument 404 "The requested file favicon.ico was not found. DirectoryIndex index.php <IfModule mod_php4.c> </IfModule> <IfModule sapi_apache2.c> </IfModule> <IfModule mod_php5.c> </IfModule> <IfModule mod_expires.c> ExpiresActive On ExpiresDefault A1209600 ExpiresByType text/html A1 </IfModule> <IfModule mod_rewrite.c> RewriteEngine on RewriteRule ^(.*)$ http://znak.net.ua/ru/$1 [L,R=301] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !=/favicon.ico RewriteRule ^(.*)$ ru/index.php?q=$1 [L,QSA] </IfModule>
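
    The unconditional redirect rule appears to be what loops: every request, including ones already under /ru/, gets another /ru/ prepended, which is exactly the /ru/ru/ru/... pattern in the address bar. A hedged sketch of guarding that rule (same .htaccess, only conditions added in front of the 301):

        RewriteEngine on

        # only add the /ru/ prefix when it is not already there,
        # and leave requests for real files (css, images, ...) alone
        RewriteCond %{REQUEST_URI} !^/ru/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ http://znak.net.ua/ru/$1 [L,R=301]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^(.*)$ ru/index.php?q=$1 [L,QSA]

    With the conditions in place the redirect fires at most once per request.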

    Read the article

  • passing URL vars to a wordpress page and pretty-fying it with .htaccess

    - by Jonah
    I have WordPress installed in a directory called welcome, and /welcome/samples is a "page" (created via WordPress). It has a PHP template waiting for $_REQUEST['category']. When a user goes to /welcome/samples/fun, I want "fun" passed to the samples PHP template in the form welcome/samples/?category=fun, but I want the URL to remain in its original form - the rewrite is currently replacing it with the ugly "?cat...etc" form. # Outside the wordpress block so it won't be overwritten Options +FollowSymlinks RewriteEngine On RewriteRule ^samples/([^/]+)$ /welcome/samples?cat=$1 [R,L] # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase /welcome/ RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /welcome/index.php [L] </IfModule> # END WordPress I tried rewriting with simply samples?cat=$1 but I was getting a 404. I tried putting RewriteBase /welcome/ in the first block. Without the [R] flag it doesn't work at all. I keep trying different permutations... and failing :( Perhaps I'm missing some basic concepts... thanks if you take the time to even read through this :) ciao
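
    Instead of fighting WordPress's own catch-all rule in .htaccess, one alternative is to register the pretty URL through WordPress's rewrite API, which keeps the address bar untouched because no external redirect is involved. A hedged sketch (category_slug is a hypothetical query-var name; the samples template would read it with get_query_var('category_slug') rather than $_REQUEST):

        <?php
        // in the theme's functions.php or a small plugin

        // allow WordPress to carry our custom variable
        add_filter('query_vars', function ($vars) {
            $vars[] = 'category_slug';   // hypothetical name, pick your own
            return $vars;
        });

        // map /welcome/samples/fun onto the "samples" page with category_slug=fun
        add_action('init', function () {
            add_rewrite_rule(
                '^samples/([^/]+)/?$',
                'index.php?pagename=samples&category_slug=$matches[1]',
                'top'
            );
        });

    After adding this, flush the rewrite rules once (visit Settings > Permalinks) so WordPress regenerates its rule set.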

    Read the article

  • fail2ban with Cloudflare

    - by tatersalad58
    I'm using fail2ban to block web vulnerability scanners. It works correctly when the site is visited with CloudFlare bypassed, but a user can still access the site when going through CloudFlare. I have mod_cloudflare installed. Is it possible to block users with iptables when using CloudFlare? Ubuntu Server 12.04 32-bit Access.log: 112.64.89.231 - - [29/Aug/2012:19:16:01 -0500] "GET /muieblackcat HTTP/1.1" 404 469 "-" "-" Jail.conf [apache-probe] enabled = true port = http,https filter = apache-probe logpath = /var/log/apache2/access.log action = iptables-multiport[name=apache-probe, port="http,https", protocol=tcp] maxretry = 1 bantime = 30 # Test Apache-probe.conf [Definition] failregex = ^<HOST>.*"GET \/muieblackcat HTTP\/1\.1".* ignoreregex =
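
    Because visitors reach the origin from CloudFlare's edge addresses, an iptables rule against the offender's restored IP never matches the packets that actually arrive, so the local ban only takes effect when CloudFlare is bypassed. One common alternative is to ban at CloudFlare itself: fail2ban ships a cloudflare action that calls the CloudFlare API. This is a hedged sketch only - check that your fail2ban version includes action.d/cloudflare.conf and which parameter names it expects before relying on it:

        [apache-probe]
        enabled  = true
        port     = http,https
        filter   = apache-probe
        logpath  = /var/log/apache2/access.log
        # ban at CloudFlare's edge; credentials are placeholders
        action   = cloudflare[cfuser="you@example.com", cftoken="YOUR_CF_API_KEY"]
        maxretry = 1
        bantime  = 30

    Keeping the existing iptables-multiport action as a second action line alongside it still covers traffic that hits the origin directly.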

    Read the article

  • Googlebot can't access my site; Webmaster Tools replies "Unreachable robots.txt"

    - by Ahmad Ahmadi
    When I try to fetch my site as Googlebot in Webmaster Tools, it returns "Unreachable robots.txt". After investigating, I can see that Googlebot reaches my server: tcpdump | grep google shows Google hitting my server from IP 66.249.81.172 or 66.249.75.111, but there is nothing in the access log, error log, or the other Apache logs: cat access_log | grep google or cat error_log | grep 66.249.81.172 Other bots (Bing, ...) can access Apache but Google can't. There is no problem with my robots.txt or its permissions; since robots.txt is not required anyway, I deleted it, but Webmaster Tools again returned "Unreachable robots.txt", not "404 Not Found"! Information about the server: Server OS: CentOS 6 Web Server: Apache 2.x Firewall: iptables is stopped SELinux is disabled There is nothing else security-related on the server. How can I investigate this problem, and is there any other command that can help me find it?
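
    Since grepping the Apache logs shows nothing, a packet capture of one of those Googlebot addresses tells you whether the TCP handshake completes and whether your replies make it back out. A sketch (replace eth0 with the real interface):

        # capture the whole conversation with one Googlebot IP
        tcpdump -i eth0 -nn host 66.249.81.172 and port 80 -w googlebot.pcap

        # inspect it later; SYNs with no SYN/ACK, or large replies that are
        # retransmitted and never ACKed, point at filtering or an MTU problem
        tcpdump -nn -r googlebot.pcap

    If the handshake never completes, Apache never sees the request, which would explain why the access and error logs stay empty while tcpdump still shows Google's packets arriving.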

    Read the article

  • Need help with some mod_rewrite on lighttpd

    - by Christoph
    Hello, I can't get my mod_rewrite configuration working; I'm using WordPress and MyBB. I need your help, because I can't figure it out... Here is the code: http://i.imgur.com/9I7nX.png The problem is with the third, fifth, and sixth lines. With the third, comments aren't displayed (404 error). With the fifth, forum categories are not working. Finally, with the sixth, posts aren't working. I'd appreciate any help. Thanks!

    Read the article

  • Testing Tomcat with Virtual Hosts

    - by Marty Pitt
    I'm trying to test Tomcat virtual hosts on my dev machine (Windows 7/Tomcat 6). I'd like to have requests for localhost, test1.localhost and test2.localhost all route through to the same Tomcat instance. I've edited my hosts file to look as follows: 127.0.0.1 localhost ::1 localhost 127.0.0.1 test1.localhost 127.0.0.1 test2.localhost and modified the Engine in server.xml as follows: <Engine defaultHost="localhost" name="Catalina"> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase" /> <Host appBase="webapps" autoDeploy="true" name="localhost" unpackWARs="true" xmlNamespaceAware="false" xmlValidation="false"> <Alias>test1.localhost</Alias> <Alias>test2.localhost</Alias> </Host> </Engine> However, I'm getting a 404 when hitting test1.localhost:8080/myWebApp, although localhost:8080/myWebApp works fine. I can ping test1.localhost fine. What have I missed?

    Read the article

  • max length of url 257 characters for mod_rewrite?

    - by Daniel
    My URL scheme is /foo/var1-var2-var3.../bar and I am using these mod_rewrite rules: RewriteBase /foo/ RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^ index.php [PT,L] If the length of the string 'var1-var2...' is greater than 257 characters, then a 403 Forbidden and a 404 are returned. However, if the 'var1-var2...' string is 257 characters or less and is then followed by a slash, the remainder of the URL may be any length. How does one overcome this limit?

    Read the article

  • dev_install failed on ARM chromebook

    - by user1027721
    I'm following this guide to get access to emerge on Chrome OS: http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/install-software-on-base-images Unfortunately I always get the same error, which is: $ sudo dev_install Starting installation of developer packages. First, we download the necessary files. Downloading https://commondatastorage.googleapis.com/chromeos-dev-installer/board/daisy/full-3.168.0.0/packages/app-misc/mime-types-8.tbz2 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 127 100 127 0 0 252 0 --:--:-- --:--:-- --:--:-- 305 [: 184: -ne: unexpected operator Extracting /usr/local/portage/packages/app-misc/mime-types-8.tbz2 I think it somehow returns a 404 every time. Thanks for your help

    Read the article

  • nginx regex locations w/ different roots not working as expected

    - by Wells Oliver
    I have the following two rules: location / { root /var/www/default; } location ~* /myapp(.*)$ { root /home/me/myapp/www; try_files $uri $uri/ /handle.php?url=$uri&$args; } When I browse to myapp/foo it works, sort of; the error is logged as a 404: *3 open() "/var/www/default/handle.php" failed (2: No such file or directory) - so it is handling the regex match but just not using the right document root. Why is this? For the record, I am trying to get /myapp/* requests handled by the second location and everything else by the first.
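
    The log line is the clue: the try_files fallback is a new internal request for /handle.php, which no longer matches the /myapp regex, so it is served by location / and its /var/www/default root. A hedged sketch that gives the fallback URI its own location (how PHP is executed here is an assumption; adjust the fastcgi part to however handle.php is normally run):

        # the fallback needs a location of its own so "location /" never sees it
        location = /handle.php {
            root /home/me/myapp/www;
            fastcgi_pass 127.0.0.1:9000;                 # assumption: local php-fpm
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~* /myapp(.*)$ {
            root /home/me/myapp/www;
            try_files $uri $uri/ /handle.php?url=$uri&$args;
        }

    Exact-match locations win over both prefix and regex matches, so the internal redirect lands on the first block and the correct document root.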

    Read the article

  • Rework filename from mod_pagespeed back to normal files

    - by British Sea Turtle
    I am hoping someone can help me with this problem. I am moving to a new server and am not using mod_pagespeed any more. However, we note that we have lots of external links that point to images on our server using the strange mod_pagespeed filenames. This is not an issue in itself, but we do not want lots of 404 errors. So I have lots of links like the following: http://www.domain.com/images/150x150xlink.png.pagespeed.ic.pPXw45HSQm.png http://www.domain.com/images/paris_01.gif.pagespeed.ce.vfrkuKUaj0.gif http://www.doamin.com/images/1st2.gif.pagespeed.ce.OUg38q6VbZ.gif How can I redirect them to: http://www.domain.com/images/150x150xlink.png http://www.domain.com/images/paris_01.gif http://www.doamin.com/images/1st2.gif There are thousands of files like this, so I am hoping for a simple solution with mod_rewrite. I tried the following but it does not work, so any help would be appreciated. RewriteCond %{REQUEST_URI} \.gif\.pagespeed\. [NC] RewriteRule ^(.*?\.gif)\..*\.gif$ $1 [NC,L]
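
    The rule above only covers .gif, and its lazy group can fight with the trailing pattern. A more general hedged sketch that strips the .pagespeed.<filter>.<hash>.<ext> suffix for any extension, based on the naming pattern visible in the example URLs (test it against a sample of real links before deploying):

        # .htaccess in the site root
        # images/foo.png.pagespeed.ic.pPXw45HSQm.png  ->  /images/foo.png
        RewriteEngine On
        RewriteRule ^(.+)\.pagespeed\.[^.]+\.[^.]+\.[^.]+$ $1 [R=301,L]

    The 301 keeps whatever link equity the external pages pass and lets browsers cache the new location, so the old mod_pagespeed URLs fade out over time.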

    Read the article

  • Apache2 alias in virtual host

    - by 0x7c00
    I have multiple virtual hosts on one server and plan to set up an alias in one of them. So I added Alias /foo/ /path/to/foo/ inside the VirtualHost directive, but it has no effect; a request for host1/foo/ returns 404. But if I add this to /etc/apache2/mods-available/alias.conf, it works. The problem is that host2 will then also share this alias. Is there a way to make the alias work only for host1? By the way, when I run apache2ctl -l there's no mod_alias.c listed, which is weird.
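
    A hedged sketch of how this usually looks when it is scoped to a single vhost (Debian-style Apache; paths and names are placeholders). Note that apache2ctl -l only lists modules compiled into the binary, so a shared mod_alias not appearing there is normal; apache2ctl -M shows shared modules as well:

        # enable the module if needed (Debian/Ubuntu):
        #   a2enmod alias && service apache2 reload

        <VirtualHost *:80>
            ServerName host1.example.com
            DocumentRoot /var/www/host1

            # keep the trailing slashes consistent on both arguments
            Alias /foo/ /path/to/foo/

            <Directory /path/to/foo/>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Because the Alias sits inside host1's VirtualHost block, host2 never sees it, which is the separation you are after.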

    Read the article

  • httpd.conf for case-insensitive file serving

    - by Anton Gogolev
    I'm a complete newbie with regard to managing Apache, so excuse me if I'm phrasing something incorrectly. I have a web site -- say, http://domain.com. The problem is that when I try to open http://domain.com/index.html in a web browser it displays the page, but when I attempt to access http://domain.com/Index.html (note the capital I), it responds with HTTP 404. How do I configure Apache to serve both these files (and directories, for that matter) in a case-insensitive manner? Current httpd.conf is here. EDIT Dan C, thanks for the hint. I basically want to allow users to download files from my server and don't really want them to be aware that Index.html and index.html are in fact different. I'd also like to know what the ramifications of this decision are.
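
    Apache can approximate this with mod_speling, which (despite the name) also corrects case mismatches by redirecting to the real filename. A hedged sketch - the module path depends on the build, and the CheckCaseOnly directive only exists on newer Apache versions:

        # load the module (name/path varies by distribution)
        LoadModule speling_module modules/mod_speling.so

        <Directory "/var/www/html">
            # redirects Index.html to index.html when exactly one match exists
            CheckSpelling On
            # on newer versions this limits correction to case differences only
            # CheckCaseOnly On
        </Directory>

    The correction is a redirect, so users briefly see the canonical lower-case URL; that is usually acceptable when the goal is simply that downloads never 404 because of case.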

    Read the article

  • Nagios3 gives a warning on HTTP service monitoring

    - by Dez
    I've already set up my local network to be monitored by Nagios3. The problem is that Nagios3 reports a warning for the HTTP monitoring service of a Debian server at IP 192.168.1.52, which has an individual virtual host and a mass virtual host for application development. I get this status message: HTTP WARNING: HTTP/1.1 404 Not Found I checked with the Nagios plugins directly; servername below is the vhost ServerName used in the Apache configuration. /usr/lib/nagios/plugins/check_http -H servername -I 192.168.1.52 gives this status message: HTTP OK HTTP/1.1 200 OK - 37900 bytes in 0.504 seconds |time=0.503946s;;;0.000000 size=37900B;;;0 But when I check like this: /usr/lib/nagios/plugins/check_http -I 192.168.1.52 I get the same status message as the warning, so I assume Nagios is not set up quite right because it doesn't pass the vhost name for that server, which the check_http results show it should. Where should I look to fix this warning?
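
    The stock check_http command passes only the address (-I), so Apache answers with whatever the default vhost returns for that IP; that is why your manual run with -H succeeds. A hedged sketch of a command/service pair that also passes the vhost name (Debian Nagios3 object syntax, assuming the usual generic-service template; names are placeholders):

        define command {
            command_name    check_http_vhost
            command_line    /usr/lib/nagios/plugins/check_http -I '$HOSTADDRESS$' -H '$ARG1$'
        }

        define service {
            use                     generic-service
            host_name               debian-server
            service_description     HTTP servername
            check_command           check_http_vhost!servername
        }

    With the argument after the ! carrying the ServerName, the plugin sends the right Host header and the check matches what you verified on the command line.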

    Read the article
