Search Results

Search found 50839 results on 2034 pages for 'http 404'.

Page 124/2034

  • Why is my site not ranking for a particular keyword

    - by user543087
    My site is 3 days short of being 6 months old. The website is unique in that there is no competitor to this type of site in India: it provides a comparison of payment gateways in India, apart from the payment gateway companies themselves. I've optimized it for the keyword "payment gateway". I've changed the URLs twice, most recently 3 months ago, after which Google Webmaster Tools reported plenty of 404s. I corrected the useful 404s and left the meaningless ones as they are. What is the reason it's not ranking well for "payment gateways"? Even sites with a single page about payment gateways seem to rank better than this one. Is it due to: 1) the many outbound links to in-context companies and information, or 2) the 404s reported in Google Webmaster Tools? Another site of mine successfully gets 1,500 unique visitors daily and ranks well in Google. I don't know why this one does not!

    Read the article

  • Set maximum requests per IP in IIS7

    - by Maxim V. Pavlov
    I have a web site deployed to IIS 7. One of its pages has 15+ .js files linked to it, and the last two files referenced in the <head> tag (loaded last) get a 403 Forbidden response from the server. I enabled Failed Request Tracing and was able to see the detailed error code, which is 403.502. I suppose that over a very short period of time I am simply requesting too much and IIS blocks me. Is there a way I can raise the limit to allow a larger number of requests and get rid of the 403.502 error?
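    Substatus 403.502 is one of the codes raised by the Dynamic IP Restrictions module, so the limit being hit most likely lives in that module's settings rather than in a general IIS option. A hedged web.config sketch, assuming the module is installed and that its request-rate / concurrency limits are what is rejecting the last scripts (the numbers are placeholders to raise, not recommendations):

        <system.webServer>
          <security>
            <dynamicIpSecurity>
              <!-- requests allowed from one IP per interval; raise it until the
                   page's 15+ parallel .js requests fit comfortably -->
              <denyByRequestRate enabled="true"
                                 maxRequests="50"
                                 requestIntervalInMilliseconds="1000" />
              <!-- simultaneous requests allowed from one IP -->
              <denyByConcurrentRequests enabled="true" maxConcurrentRequests="30" />
            </dynamicIpSecurity>
          </security>
        </system.webServer>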

    Read the article

  • Varnish: User specific pages

    - by jchong0707
    I'm new to Varnish and am interested in using it to speed up my web application. I wanted to know whether Varnish can handle caching and serving user-specific content. For example, say I have a page, /welcome, which is dynamically generated in the backend and is user-specific. If John Smith visits /welcome, the page shows 'Welcome John Smith', and if Bob Smith visits /welcome, it shows 'Welcome Bob Smith'. Ideally both of those /welcome pages would be cached, one per unique user. Is this something Varnish can do? (Is this even a good use case for Varnish?) Thanks!
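    Varnish can do this if whatever identifies the user (typically the session cookie) is made part of the cache key. A minimal sketch in Varnish 3 VCL, assuming /welcome is the only page that needs per-user caching; by default vcl_recv would refuse to look up requests carrying cookies, so that is overridden for this path:

        sub vcl_recv {
            if (req.url ~ "^/welcome") {
                # force a cache lookup even though the request carries a cookie
                return (lookup);
            }
        }

        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            }
            # add the Cookie header so each user maps to a distinct cached object
            if (req.http.Cookie) {
                hash_data(req.http.Cookie);
            }
            return (hash);
        }

    Whether this is a good use case depends on how often the same user re-requests the page: if each user hits /welcome once per session, the cache mostly stores objects that are never reused.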

    Read the article

  • Apache 'The specified module could not be found' error

    - by Weiwei
    Hi all, I got this message when I started the Apache service: The Apache service named reported the following error: httpd.exe: Syntax error on line 128 of C:/data/apache/conf/httpd.conf: Cannot load C:/data/apache/modules/mod_wsgi.so into server: The specified module could not be found. Not sure what went wrong; I do have C:/data/apache/modules/mod_wsgi.so. Thanks for any help.
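    On Windows, "The specified module could not be found" often refers to a dependency of the module rather than the .so file named in the error, typically the Python DLL that this mod_wsgi build was compiled against not being on the PATH. A hedged httpd.conf sketch (the Python path is an assumption) that loads the DLL explicitly can confirm or rule that out:

        # load the interpreter DLL mod_wsgi depends on before the module itself;
        # adjust the path to the Python version this mod_wsgi.so was built for
        LoadFile "C:/Python27/python27.dll"
        LoadModule wsgi_module "C:/data/apache/modules/mod_wsgi.so"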

    Read the article

  • Client-based program to track response time for online webservice

    - by Søren Haagerup
    I am helping a customer with general IT support, and they have a problem with a hosted web-based system being slow. The provider of the system blames the client's computer, and the client calls me for help. I blame the provider, but it is hard to get them to do something about it without rock-solid evidence. And every time the provider comes around for a TeamViewer session, everything of course runs smoothly. Does there exist a client program or browser plugin that tracks statistics about response time for specific web services?
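    Short of a dedicated monitoring tool, even a rough client-side probe produces the kind of evidence a provider cannot wave away. A sketch, assuming curl is available on the affected client and using a placeholder URL for the hosted system:

        #!/bin/sh
        # log a timestamp and the total response time once a minute
        URL="https://hosted-service.example.com/some/page"   # placeholder
        while true; do
            printf '%s %s\n' "$(date -Is)" \
                "$(curl -o /dev/null -s -w '%{time_total}s (HTTP %{http_code})' "$URL")" \
                >> response-times.log
            sleep 60
        done

    A few days of that log, ideally collected from more than one client, makes it much easier to show whether the slowness is on the provider's side.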

    Read the article

  • How to install GIMP 2.7?

    - by Bucic
    I'm asking this here since Beta 2 is usable and this question would come up sooner rather than later ;) After issuing the standard sudo add-apt-repository ppa:matthaeus123/mrw-gimp-svn and sudo apt-get update I get: W: Failed to fetch http://ppa.launchpad.net/matthaeus123/mrw-gimp-svn/ubuntu/dists/precise/main/source/Sources 404 Not Found W: Failed to fetch http://ppa.launchpad.net/matthaeus123/mrw-gimp-svn/ubuntu/dists/precise/main/binary-amd64/Packages 404 Not Found W: Failed to fetch http://ppa.launchpad.net/matthaeus123/mrw-gimp-svn/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead. Is there any way to get around this, excluding compiling from source, which usually introduces even more multilevel errors?
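    The 404s mean the PPA publishes nothing at all for precise, so apt will keep failing until that source is disabled or replaced by one that actually carries precise builds. A sketch of the cleanup step only (it silences the errors; it does not by itself install GIMP 2.7):

        # drop the PPA that has no precise packages, then refresh the index
        sudo add-apt-repository --remove ppa:matthaeus123/mrw-gimp-svn
        # if --remove is not available, delete the matching .list file under
        # /etc/apt/sources.list.d/ instead
        sudo apt-get update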

    Read the article

  • How to use PAM to restrict HTTP access for some users?

    - by MaxB
    I've read that PAM can be used to restrict HTTP access for some users, but I can't figure out how to do it in Ubuntu 12.04. The /etc/security/time.conf man page contains this example: All users except for root are denied access to console-login at all times: login ; tty* & !ttyp* ; !root ; !Al0000-2400 For this to work, /etc/pam.d/login needs to have a line account requisite pam_time.so This example works, and I tried to adapt it to limit HTTP access from the console. I added http ; tty* & !ttyp* ; !root ; !Al0000-2400 to /etc/security/time.conf, and created /etc/pam.d/http with account requisite pam_time.so This doesn't work. I can still use wget as non-root from the console.

    Read the article

  • Tell browsers to cache until last modified date changes?

    - by Chad Johnson
    My web site consists of static HTML files which are usually republished once per day, and sometimes more. I'm using Apache. In the vhost settings for my site, I'd like to tell browsers to cache HTML files indefinitely, until Apache sees that they are modified. So as soon as an HTML file is changed, Apache should immediately begin telling browsers it's changed and send the updated file. As soon as a new file is published, browsers should immediately begin receiving that...they should never receive old versions of files. Maybe ExpiresByType text/html modification and no "plus x days." Is something like this possible?
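    What is being described is plain validation caching: Apache already sends Last-Modified (and usually an ETag) for static files, so it is enough to tell browsers they must revalidate before every reuse; they then get a cheap 304 Not Modified until the file actually changes. A minimal vhost sketch, assuming mod_headers is enabled:

        <IfModule mod_headers.c>
            <FilesMatch "\.html?$">
                # "no-cache" lets the browser store the file but forces an
                # If-Modified-Since / If-None-Match check on every reuse;
                # Apache answers 304 until the file's mtime changes
                Header set Cache-Control "no-cache"
            </FilesMatch>
        </IfModule>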

    Read the article

  • How can Facebook's session get mixed up because of NAT and/or Proxy

    - by Alex
    I have received some reports from a customer (a very large company) about clients who use Facebook. These clients claim that once in a while, when they log in to Facebook, they end up in someone else's session. I know the network is NATed and then proxied before reaching Facebook.com, but I'm not able to explain how this issue can occur. Is it possible that the proxy is not sending the right session back to the clients? How can they end up with someone else's session when Facebook's sessions are cookie-based? Has anyone seen this before?

    Read the article

  • Can I enable gzip/deflate in IIS6 without restart?

    - by nibblebot
    I have gone through the steps of enabling static compression for my IIS 6.0 site: 1) enable it in IIS Manager, 2) enable edit-while-running, 3) add the extensions I need to compress (js, css) directly to the metabase, and 4) wait for metabase.xml to update to the latest major history version. It is still not compressing JS and CSS. Is there any way to enable this without an iisreset?
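    For reference, the same metabase properties can be set from the command line with adsutil.vbs instead of hand-editing metabase.xml; this sketch assumes the default AdminScripts location, and it may still take an application-pool recycle (if not a full iisreset) before the compression filter picks the change up:

        cd %SystemDrive%\Inetpub\AdminScripts
        rem "set" replaces the whole list, so include the default extensions too
        cscript adsutil.vbs set W3SVC/Filters/Compression/GZIP/HcFileExtensions "htm" "html" "txt" "js" "css"
        cscript adsutil.vbs set W3SVC/Filters/Compression/DEFLATE/HcFileExtensions "htm" "html" "txt" "js" "css"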

    Read the article

  • Serving static web files off a non-standard port

    - by Nimmy Lebby
    I'm close to deploying a Django project to production and am looking over some infrastructure decisions. Something that came up was serving static files with a different server, such as lighttpd. However, we're starting off with a single dedicated server, so our only option would be to use a non-standard port for the static file webserver. Is there precedent for this? I.e., does anyone "big" do this? Any particular port I should use or shy away from? Can anyone think of downsides to going this route?

    Read the article

  • how can I see a page's referrer in Chrome?

    - by EmmyS
    I've read the "answers" on this question, which is pretty much what I'm asking, but no one actually provides an answer. Nowhere in the Developer Tools (that I can see, anyway) is there a clear indicator of the current page's referring page. This is something that's really easy to find in Firefox; just right-click and choose Page Info. Where is this functionality in Chrome? If it's in the developer tools, which tab is it under? If it's not, is there an extension I can use to get this info? I've tried the Web Developer extension, and can't seem to find this very basic piece of info there, either.

    Read the article

  • View unknown content-types in Firefox?

    - by dirtside
    A certain URL on my server returns Content-Type: application/json. The filename ends with .phtml, so whenever I go to it, Firefox 3.5 asks me if I want to save it or open it with another program. But the answer is neither; I want to view it in Firefox as if it was text/plain. Suggestions? The Applications tab of the Preferences window doesn't give this as an option for the "PHTML" type which now is listed in there (ever since I tried to open it with no external program).

    Read the article

  • Webmin apache on CentOS 6.3 results in 403 forbidden, permissions are OK

    - by Mario De Schaepmeester
    First of all, I will mention that the permissions are fine for the document root directory, which is /webapps/nimbus/www/public_html. The www directory contains a PHP application; if PHP itself doesn't work that's a problem for later, as I've tested with a plain HTML file and it does not work either. I just get 403 Forbidden responses. The permissions are 755 on webapps and all subdirectories. I've checked other questions here and on the internet, but they were all about those permissions. Whatever info you still need, just ask; I don't know what's relevant, as this is the first time I'm using Webmin or configuring Apache.
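    With the Unix permissions ruled out, the usual remaining suspect on CentOS is SELinux: a document root outside /var/www does not get the httpd_sys_content_t label by default, and Apache then returns 403 even though the mode bits are fine. A quick check and a hedged fix, using the paths from the question:

        # show the SELinux context of the document root
        ls -ldZ /webapps/nimbus/www/public_html
        # relabel the tree so httpd may read it (chcon is the quick test;
        # semanage fcontext + restorecon would make the change persistent)
        sudo chcon -R -t httpd_sys_content_t /webapps
        # any remaining denials show up in /var/log/audit/audit.log and the Apache error_log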

    Read the article

  • apt-get update error - binary-i386, binary-amd64 [duplicate]

    - by magamig
    This question already has an answer here: "How can I fix a 404 Error when updating packages?" (5 answers). When I run sudo apt-get update it shows me the following error: W: Failed to fetch http://ppa.launchpad.net/directhex/ppa/ubuntu/dists/trusty/main/binary-amd64/Packages 404 Not Found W: Failed to fetch http://ppa.launchpad.net/directhex/ppa/ubuntu/dists/trusty/main/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead. I have googled for solutions, but none of what I found has worked for me. Please give me your suggestions.

    Read the article

  • how do I set up Apache's Content-Encoding header?

    - by Nick
    When attempting to validate my site with the W3C validator, it returns the error, "Don't know how to decode Content-Encoding 'none'". Firebug confirms that my server is sending the header, "Content-Encoding: none". But I can't find any directive in apache2.conf or in my vhost that sets the Content-Encoding header. Where does the directive go, and what should it be set to? UPDATE: On further examination it seems something is wrong with mod_deflate (gzip). It's zipping my css files just fine, but is not zipping the html generated by my php scripts. I have: AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css And the pages are showing a mime type of: "text/html". But content encoding is "none" and they aren't zipping. Perhaps these issues are related?
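    Stock Apache does not normally emit "Content-Encoding: none", so it is worth checking whether that header is added by PHP or by something in front of the server, and comparing what actually comes back for a script versus a static file. A quick probe with placeholder URLs:

        # request a PHP page and a stylesheet while offering gzip, dump the headers
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://example.com/index.php \
            | grep -iE 'content-(encoding|type)'
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://example.com/style.css \
            | grep -iE 'content-(encoding|type)'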

    Read the article

  • Using nginx and/or varnish to cache server-generated 301 redirects

    - by rlotun
    I'm implementing a sort of url-shortener service. What happens is that I have some backend app server that takes in a request, does some computation and returns a 301 redirected url back upstream to an nginx frontend: request ---> nginx ----> app_server What I want to be able to do is cache this returned 301 url for the same request (a specific url with a "short code"). Does nginx do this caching automatically? Or should I drop in something like varnish in between nginx and the app_server? I can easily cache this in memcache, but that would require hitting the app_server, which I'm sure can be dispensed with after the first request. Thanks.
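    nginx will not cache proxied responses until a cache zone is configured, but once one exists the redirects can be cached explicitly with proxy_cache_valid, which keeps repeat lookups for the same short code off the app server entirely. A minimal sketch with placeholder names and paths:

        proxy_cache_path /var/cache/nginx/shortener levels=1:2 keys_zone=shortener:10m;

        server {
            listen 80;

            location / {
                proxy_cache       shortener;
                proxy_cache_key   $scheme$host$request_uri;
                proxy_cache_valid 301 1d;             # keep the app server's 301s for a day
                proxy_pass        http://app_server;  # upstream defined elsewhere
            }
        }

    Varnish would work here too, but for caching plain 301s the built-in proxy cache avoids adding another moving part.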

    Read the article

  • Rewrite rule to show as directory using .htaccess

    - by chanchal1987
    I want to implement a rewrite rule in my .htaccess file to show a specific URL as a directory on my server. Here is the rule I wrote: RewriteRule ^(.*)/$ ?page=$1 [NC] — this rewrites URLs like www.mysite.com/abc/ to www.mysite.com/index.php?page=abc, but if I request www.mysite.com/abc it throws a 404 error. How can I write a rewrite rule that matches both www.mysite.com/abc and www.mysite.com/abc/? Edit: my current .htaccess file (after the 3rd revision of Litso's answer) looks like this:

        ##
        ErrorDocument 401 /index.php?error=401
        ErrorDocument 400 /index.php?error=400
        ErrorDocument 403 /index.php?error=403
        ErrorDocument 500 /index.php?error=500
        ErrorDocument 404 /index.php?error=404
        DirectoryIndex index.htm index.html index.php
        RewriteEngine on
        RewriteBase /
        Options +FollowSymlinks
        RewriteRule ^(.+)\.html?$ $1.php
        RewriteCond !-d
        RewriteRule ^(.*)/$ ?page=$1 [NC,L]
        RewriteCond %{REQUEST_URI} !index.php
        RewriteRule ^(.*)$ ?page=$1 [NC,L]
        ##
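    One way to catch both forms with a single rule is to make the trailing slash optional and guard against rewriting real files and directories. A sketch, assuming page names contain no slashes or dots:

        RewriteEngine on
        RewriteBase /
        # leave anything that exists on disk alone
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # /abc and /abc/ both become index.php?page=abc
        RewriteRule ^([^/.]+)/?$ index.php?page=$1 [NC,L]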

    Read the article

  • nginx rule to capture header and append value as query string

    - by John Schulze
    I have an interesting problem I need to solve in nginx: one of the sites I'm building receives inbound traffic on port 80 (and only port 80), and requests may have a certain header set. If this header is present I need to capture its value and append it as a query-string parameter before doing a temporary redirect (rewrite) to a different (secure) server, while passing that parameter and any other query-string params along. This should be very doable, but how?! Many thanks, JS
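    A sketch of one way to do it; the header name (X-Custom-Header), the target host, and the parameter name are placeholders, since the question does not name them. The map picks the right separator so an existing query string survives the append (the header value is passed through unescaped):

        # in the http {} context
        map $is_args $qs_sep {
            "?"     "&";
            default "?";
        }

        server {
            listen 80;

            location / {
                # $http_x_custom_header is nginx's automatic variable for the
                # X-Custom-Header request header
                if ($http_x_custom_header) {
                    return 302 https://secure.example.com$uri$is_args$args${qs_sep}token=$http_x_custom_header;
                }
                proxy_pass http://backend;   # normal handling when the header is absent
            }
        }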

    Read the article

  • Enforcing a specific order for cookie headers

    - by Paul
    We have an application that cares about the order of cookie headers. It shouldn't, since the order isn't mandated by the standards, and indeed we're getting the headers in various orders. So we would like to rewrite the headers in Apache so that the cookie headers always appear in a specific order. Is there any way of doing this? An ideal solution would deal specifically with cookie headers, but something that lets us mess with header order more generally would do too.

    Read the article

  • Circumventing a manual HTML login page for "unclassified" websites

    - by auramo
    The IT department just made my life a little bit harder again: they introduced a manual HTML login page for all websites they have not "classified". This means that applications which try to access unclassified websites, e.g. to download plugins, no longer work. Examples: Eclipse plugin installation, Maven builds, etc. What would be the easiest workaround for this? The best I've come up with is to try to extend/customize Ruby's httpproxy.rb that comes with WEBrick, automating the manual login whenever that login response page is detected. This sounds quite painful, and I think there might (or should) be simpler options?
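    For what the httpproxy.rb route would roughly look like: WEBrick's HTTPProxyServer takes a ProxyContentHandler callback that sees every proxied response, which is the hook where the portal's login page could be detected and the login replayed. A bare sketch, with the detection string and port as assumptions:

        require 'webrick'
        require 'webrick/httpproxy'

        # called for every response that passes through the proxy
        handler = proc do |req, res|
          # 'corporate-login-form' is a placeholder for whatever marks the portal page
          if res.body && res.body.include?('corporate-login-form')
            warn "login portal intercepted for #{req.unparsed_uri}"
            # here one would POST the credentials to the portal, then re-issue the
            # original request and replace res.body / res.status with the real answer
          end
        end

        proxy = WEBrick::HTTPProxyServer.new(Port: 8080, ProxyContentHandler: handler)
        trap('INT') { proxy.shutdown }
        proxy.start

    Eclipse and Maven would then be pointed at localhost:8080 as their HTTP proxy.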

    Read the article
