Search Results

Search found 1969 results on 79 pages for '404'.

Page 39/79 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • handling a GET error properly

    - by Andrew Heath
    I have a website that takes two primary GET strings: ?type=GAME&id=SomeGameID and ?type=SCENARIO&id=SomeScenarioID. For reasons unknown, I have recently begun receiving requests with erroneous GET strings from both Yandex and Baidu. They are always of the form ?type=GAME&id=SomeScenarioID. None of my users are triggering these errors, so I am (sort of) confident that this is not due to an HTML template error on my part. There is also no HTTP_REFERER showing up in the $_SERVER array, so I'm guessing these are direct requests built from bad database data on their end. I see two options for dealing with these bad requests, and would like to know which is recommended, or whether there are other, better options I have not thought of: (1) simply 404 the request, since it is incorrect; (2) redirect the request to ?type=SCENARIO&id=SomeScenarioID, because the scenario IDs are always valid and the breakage comes from asking for the wrong type. (Both options are sketched after this item.)

    Read the article
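
    A minimal PHP sketch of the two options above, purely illustrative: gameExists() and scenarioExists() are assumed lookup helpers, not functions from the original site.

        <?php
        // Hypothetical front-controller fragment; gameExists() and scenarioExists()
        // are assumed helpers, not part of the original site.
        $type = isset($_GET['type']) ? $_GET['type'] : '';
        $id   = isset($_GET['id'])   ? $_GET['id']   : '';

        if ($type === 'GAME' && !gameExists($id) && scenarioExists($id)) {
            // Option 2: the ID really belongs to a scenario, so redirect permanently.
            header('Location: /?type=SCENARIO&id=' . urlencode($id), true, 301);
            exit;
        }

        if (($type === 'GAME' && !gameExists($id)) || ($type === 'SCENARIO' && !scenarioExists($id))) {
            // Option 1: the request is simply wrong, so answer with a plain 404.
            header('HTTP/1.1 404 Not Found');
            exit;
        }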

  • Files inside Alias folder not accessible

    - by John Isaacks
    In my apache2.conf I have an alias set up like this: Alias /cake/ /var/www-cake/repo <Directory /var/www-cake/repo> Order allow,deny Allow from all AllowOverride All Options +Indexes </Directory> Inside the /var/www-cake/repo directory I have just one file, index.php. When I go to http://linux-server/cake/ I get a directory listing that shows the index.php file, but when I click on the file it takes me to http://linux-server/cake/index.php, where I get a 404 Page Not Found error. What do I need to do to make the files accessible? (One likely fix is sketched after this item.)

    Read the article
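
    A hedged guess rather than a confirmed diagnosis: Alias is a literal prefix substitution, so with a URL prefix ending in a slash but a filesystem path that does not, /cake/index.php maps to /var/www-cake/repoindex.php, which does not exist. A minimal sketch with the trailing slashes made consistent:

        # Sketch only: keep the trailing slashes consistent on both sides of the Alias.
        Alias /cake/ /var/www-cake/repo/
        <Directory /var/www-cake/repo/>
            Order allow,deny
            Allow from all
            AllowOverride All
            Options +Indexes
        </Directory>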

  • Using env variables with RewriteRule and ErrorDocument

    - by misterte
    Hi, I'm having problems with the following while configuring my Apache server to rewrite some URLs: SetEnv PATH_TO_DIR /directory RewriteRule ^%{PATH_TO_DIR}/([a-zA-Z0-9_\-]+)/([a-zA-Z0-9_\-\.]+)/?$ /index.php?dir=$1&file=$2 ErrorDocument 404 %{PATH_TO_DIR}/index.php?dir=null&file=error This configuration used to work perfectly fine until I introduced SetEnv PATH_TO_DIR. I need to use it because there are lots of rules, not just these. Can anyone point out my mistake? Apache returns %{PATH_TO_DIR}/index.php?dir=null&file=error literally when I try anything (www.site.com/foo/bar, for instance), and it returns the ErrorDocument if I just try to fetch the index. I know it's not a problem with the rewrite rules themselves, because they work when I remove the PATH_TO_DIR variable and hard-code the path. Thanks! A. (One possible workaround is sketched after this item.)

    Read the article
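
    A likely explanation, offered as a guess: %{...} is only interpreted by mod_rewrite (and SetEnv values are set too late in the request cycle to be visible during rewriting anyway), while ErrorDocument performs no variable expansion at all, so the literal string comes back. A minimal sketch of one workaround, assuming Apache 2.4 or later, where Define substitutes ${VAR} when the configuration file is parsed:

        # Requires Apache 2.4+: ${VAR} defined with Define is expanded at config-parse time.
        Define PATH_TO_DIR /directory

        RewriteEngine On
        RewriteRule ^${PATH_TO_DIR}/([a-zA-Z0-9_\-]+)/([a-zA-Z0-9_\-\.]+)/?$ /index.php?dir=$1&file=$2 [L]
        ErrorDocument 404 ${PATH_TO_DIR}/index.php?dir=null&file=error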

  • 14.04 LTS, 32-bit, Software Updater error "Failed to download repository information: Check your internet connection"

    - by Lucas W
    There isn't much to say about this one: when I run Software Updater, I get the above error message. That can't be good. Interestingly, when I click on "Settings..." and then close the settings dialogue that pops up, all of a sudden Software Updater successfully finds updates and installs them. I thought I should bring this to the attention of the Ubuntu community. sudo apt-get update returns the following: W: Failed to fetch http://ppa.launchpad.net/deluge-team/ppa/ubuntu/dists/trusty/main/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead. I have screen captures, but I don't have enough reputation points to post them. (A possible fix is sketched after this item.)

    Read the article
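
    The 404 points at a PPA (deluge-team) that apparently publishes no packages for trusty, so apt cannot fetch that index. A hedged sketch of one way to drop the stale source and refresh, assuming the PPA is no longer needed:

        sudo add-apt-repository --remove ppa:deluge-team/ppa
        sudo apt-get update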

  • How do I completely remove phpmyadmin?

    - by blade19899
    I messed up my phpMyAdmin. I hadn't logged in to it in a while and had forgotten my password, so I purged it like so: sudo apt-get purge phpmyadmin. I got some error messages asking for my password, but since I had forgotten it I just pressed ignore. After that I installed phpmyadmin again: sudo apt-get install phpmyadmin. This time I won't be forgetting my password. But now, when I log in to phpMyAdmin, I get a 404 Not Found error page. Question: how do I completely remove phpmyadmin so that a fresh install works again? (One possible sequence is sketched after this item.) Note: I am running Ubuntu 12.10 (AMD64).

    Read the article
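
    A hedged sketch of one purge-and-reinstall sequence on Ubuntu 12.10 (Apache 2.2); dpkg-reconfigure re-runs the package's web-server and database setup questions, and the symlink line is only needed if the package did not add its Apache include by itself:

        sudo apt-get purge phpmyadmin
        sudo apt-get install phpmyadmin
        sudo dpkg-reconfigure phpmyadmin
        # Only if /etc/apache2/conf.d/phpmyadmin.conf is still missing afterwards:
        sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
        sudo service apache2 restart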

  • Looking for a CDN

    - by Bill
    Most of the CDNs that I've seen require you to upload your content in advance. I'm looking for a CDN that, upon receiving a request for a resource it hasn't seen, will contact my application server. If the application server returns something, it should be sent to the user and then cached in the CDN. If not, it should just return a 404. If the user requests an unexpired item, the CDN should serve it without bothering my app server. Does anything like this exist? Is there a way to get CloudFront to work like this?

    Read the article

  • Google still has record of my old site URL - what to do?

    - by Mayeenul Islam
    I had a blog site, i.e. http://example2.com, then I bought a new domain, i.e. http://example.com, and 301 (permanently) redirected example2.com to example.com. But in Google Webmaster Tools, when I get a 404 and click into the link to see the "Linked from" tab, it shows links like: http://example.com/post-1 http://example2.com/feed http://example2.com/post-1 According to Google, if you change your domain you should keep the redirection in place for at least 4-6 months, and that period has almost passed. So why does Google still have traces of my old site? The issue matters because I don't want to pay for the old domain anymore. I tried deleting my existing sitemap.xml and recreating it from the new site, but such links are still stored. What could I do? (The kind of redirect involved is sketched after this item for reference.)

    Read the article
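
    For reference, a minimal .htaccess sketch of the domain-wide 301 described above, assuming Apache with mod_rewrite and the placeholder domain names used in the question; it only keeps working for as long as the old domain stays registered and pointed at a server carrying this rule:

        RewriteEngine On
        # Send every request for the old domain to the same path on the new one.
        RewriteCond %{HTTP_HOST} ^(www\.)?example2\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]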

  • Shouldn't storage classes be taught early in a C class or book?

    - by Adam Mendoza
    Shouldn't storage classes be taught early in a C class or book? I notice that a lot of books, even some of the better ones, cover them toward the end, and some just relegate them to an appendix. I would teach them together with variables: the topic is foundational, and unfortunately many readers never make it that far into a book. Now that auto has a different meaning (versus simply being optional), it may confuse people who didn't realize it has always been there. For example, C Programming: A Modern Approach covers them in section 18.2, Storage Classes (p. 401): Properties of Variables (401), The auto Storage Class (402), The static Storage Class (403), The extern Storage Class (404), The register Storage Class (405), The Storage Class of a Function (406), Summary (407).

    Read the article

  • Why is nesting or piggybacking errors within errors bad in general?

    - by dietbuddha
    Why is nesting or piggybacking errors within errors bad in general? To me it seems bad intuitively, but I'm suspicious in that I cannot adequately articulate why; this may be because it is not bad in general, only in specific instances. Why is it detrimental to design error/exception handling in such a way? The specific instance is a REST service. There is a desire by some to use HTTP errors (specifically the 500 response) as a way to indicate any problem with specific instances of a resource. An example of an instance resource in this case would be: http://server/ticket/80 # instance http://server/ticket # not an instance So this is the behavior being proposed: if ticket 80 does not exist, return an HTTP response code of 500, and within the body of the error return the "real" error as an additional error code and description. If the ticket resource itself doesn't exist, return a response code of 404.

    Read the article

  • Google Webmaster Tools shows invalid data

    - by Altar
    Webmaster Tools shows 1 URL error (a not-found page). The report says that 5 pages are linking to a page (let's call it x) that does not exist (and because it doesn't exist, it returns a soft 404). HOWEVER, I looked at the source code of those 5 pages and none of them links to the x page. It is as if Google is seeing an old version of the pages that did point to x. What is the problem? How do I know whether Google cached an old version of those 5 pages?

    Read the article

  • mismatch of version of libkdcraw20 and libkdcraw-data

    - by naveen jankar
    I'm using Ubuntu 12.04 LTS. When I install digiKam, I get a mismatch between the versions of libkdcraw20 and libkdcraw-data it wants and those available in the repositories: it wants version 4.8.5-0ubuntu0.2 (in other words, the latest version according to Synaptic), but the available one is 4.8.5-0ubuntu0.3 in both cases. Is there a work-around? Or how do I request that the Ubuntu maintainers rectify this? Addenda: in Synaptic I selected digiKam to be installed. It downloaded all the dependencies, but the two in question were not found; the message it gave was "W: Failed to fetch security.ubuntu.com/ubuntu/pool/main/libk/libkdcraw/… 404 Not Found [IP: 91.189.91.13 80]". I searched for the two files in the repositories and found that the version available there is ubuntu0.3 instead of ubuntu0.2.

    Read the article

  • Static HTML to Wordpress Migration SEO Implications?

    - by Kayle
    Recently I migrated a client's site to a new server and a new home within WordPress so they could more easily edit their website and start a blog section. The static site was 10 years old and, according to my client, showed up consistently at place #3 for its primary keyword; it has dropped to rank #6-8 following the migration. At launch we made sure the URLs were identical (save for the removal of ".htm", which we used 301 redirects to compensate for), and we generated a new XML sitemap and pinged Google with the new site. We keep a 404 log to make sure we're not losing any incoming links. We also have Google Webmaster Tools on this site and have zero errors/suggestions; everything seems OK. I was told by numerous sources that Google would not penalize us for the use of 301s, but it's the only thing I can think of right now that is different about the site, other than the platform. Any ideas about what we could be getting docked for? (The ".htm removal" redirect is sketched after this item for reference.)

    Read the article
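
    For reference, a minimal .htaccess sketch of the ".htm removal" redirect described above, assuming Apache with mod_rewrite enabled on the new host; each old .htm URL is 301-redirected to the same path without the extension:

        RewriteEngine On
        # /somepage.htm -> /somepage, with a permanent (301) redirect
        RewriteRule ^(.+)\.htm$ /$1 [R=301,L]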

  • "Invalid operation" status code in a HATEOAS REST API

    - by FinnNk
    In a HATEOAS API, links are returned which represent possible state transitions. A conforming client should just be retrieving and following those links, but if a non-conforming client is constructing URIs rather than following the supplied links, what would be the most appropriate status code/response to return? 400 would work, together with some information in the response body - this is what we're currently doing. 403 I guess would be wrong, as it implies that the request could never work - but potentially the link may be available in the future. 404 sounds plausible - at this point in time the resource doesn't exist. What do people think? I know that conditional requests can handle requests based on stale responses (resulting in e.g. 412s), but this is a slightly different situation.

    Read the article

  • How can I stop a bot attack on my site?

    - by tnorthcutt
    I have a site (built with WordPress) that is currently under a bot attack (as best I can tell). A file is being requested over and over, and the referrer is (almost every time) turkyoutube.org/player/player.swf. The file being requested is deep within my theme files and is always followed by "?v=" and a long string (i.e. r.php?v=Wby02FlVyms&title=izlesen.tk_Wby02FlVyms&toke). I've tried setting an .htaccess rule for that referrer, which seems to work, except that now my 404 page is being loaded over and over, which still uses lots of bandwidth. Is there a way to create an .htaccess rule that requires no bandwidth usage on my part? I also tried creating a robots.txt file, but the attack seems to be ignoring that. #This is the relevant part of the .htaccess file: RewriteCond %{HTTP_REFERER} turkyoutube\.org [NC] RewriteRule .* - [F]

    Read the article

  • Forward to other domain with CNAME

    - by xybrek
    In my GoDaddy DNS manager I made an A record for *.mirror under my domain. Now when I access the URL 123.mirror.mydomain.com from the browser I can see that my app loads, and it's all OK. My problem is with a CNAME on another domain that points to the URL above: accessing 123.otherdomain.com, which I expect to "forward" to 123.mirror.mydomain.com, I only get a 404 error. The IP involved, 173.194.71.121, is actually ghs.googlehosted.com. What am I missing here? Why can't 123.otherdomain.com, which points to 123.mirror.mydomain.com, open that page? It seems Google is handling the web page request.

    Read the article

  • Blocking path scanning

    - by clinisbut
    I'm seeing a number of very suspicious requests in my access log: /i /im /imaa /imag /image /images /images/d /images/di /images/dis They start from a known resource (in the above example, /images/disrupt.jpg) and all come from the same IP. The request rate varies from 1/sec to 10/sec and seems somewhat random. They are obviously trying to find something, and it seems they are using a script. How do I block this kind of behaviour? I thought of blocking the IP, at least for a given time, keeping in mind that: the request intervals seem legitimate (at least I think so); I don't want to end up blocking a search-engine bot, which may hit 404 URLs too (and that's a different problem, I know). Do they always use the same IP?

    Read the article

  • Web Platform Installer issues deploying Azure SDK 1.4 on refreshed systems.

    - by Enrique Lima
    Recently I have been doing quite a bit of testing of different ways to deploy the Azure SDKs. After a couple of very successful systems, I started running into issues last night. Here is the problem: if I go to the Windows Azure website, go to Develop, click on SDK and Tools, then Get Tools & SDK, it launches the Web Platform Installer. All seems well at that point, except that it goes through the initial process and finds the SDK files for 1.4, but since the tools for Visual Studio are still 1.3, the location throws back a 404, which causes the installer to fail. NOTE: if you already had SDK 1.3 and the tools in place, it will go through. The fix is to go directly to the Microsoft Download Center and download the files. Here is the link … http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7a1089b6-4050-4307-86c4-9dadaa5ed018

    Read the article

  • Tips for managing internal and external links using WordPress [closed]

    - by keruilin
    So I'm looking for ways to optimize my site for both users and search engines. I've read several articles and looked at several different plugins, and to say the least I'm thoroughly confused about what the best practices are for managing internal and external links. Here is a list of some of my questions: Which internal links should be set to "nofollow"? Which external links should be set to "nofollow"? To what degree does actively managing links contribute to your PR? Should you use "nofollow" blindly on all links in comments? If a link to an external site is broken (404 or whatever), should you "nofollow" that link? What about "noindex"? As you can see, lots of questions. I'm hoping that you experienced webmasters can give a newb some best-practice advice.

    Read the article

  • Ubuntu 12.10 Help! Everything is incredibly slow

    - by Keith
    I installed 12.10 from USB onto this machine: Intel Celeron 2.00 GHz, 496 MB RAM. I had to add "nomodeset" to the GRUB boot options or I could not see the GUI; I have an Nvidia graphics card. It takes about 2 minutes to boot. The icons on the left of the desktop take about a minute to slowly open their menus. I have a network connection, but Mozilla gives 404 errors and I cannot update. Where can I find a blow-by-blow explanation for troubleshooting and repairing this problem?

    Read the article

  • RewriteRule working local but not on remote server

    - by m0tv
    I have a .htaccess file with one simple RewriteRule: RewriteEngine on RewriteRule ^([A-Za-z0-9-]+)$ ?site=$1 I want to have a URL like http://www.example.com/imprint and forward it to http://www.example.com/?site=imprint I checked this rule with a RewriteRule tester, which gave me the results I want, and on my local development system it works well too. But on the remote server the URLs just give me a 404 error. Other, simpler rewrite rules work with no problems, so everything must be set up correctly (I think). The problem is that I don't have access to any error logs or the server configs, so the only thing I can do is guess. Can anyone tell me if there's something wrong with this rule, or anything else I can do or test to solve this? Does anyone have an idea what could be wrong on the server? (One variant worth trying is sketched after this item.)

    Read the article
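
    Since other rules work on the remote server, mod_rewrite itself is evidently enabled and the pattern is probably fine; per-directory differences such as a missing RewriteBase are a common culprit on shared hosts. A hedged sketch of a variant worth trying, with an explicit RewriteBase and guards so real files and directories are left alone (paths are placeholders):

        RewriteEngine On
        RewriteBase /
        # Leave existing files and directories untouched.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # /imprint -> /index.php?site=imprint
        RewriteRule ^([A-Za-z0-9-]+)/?$ index.php?site=$1 [L,QSA]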

  • Configuring php on Ubuntu server

    - by mk_89
    I have been following this tutorial: http://www.howtoforge.com/installing-apache2-with-php5-and-mysql-support-on-ubuntu-12.04-lts-lamp I have got to the part where I run a simple test to determine whether PHP has been installed properly. The installation went fine: I installed PHP 5 with apt-get install php5 libapache2-mod-php5 and then restarted the server. To check whether PHP 5 works, I created a file with vi /var/www/info.php containing <?php phpinfo(); ?> but when I try to open it on my server I get a 404 Not Found error. What could be the problem?

    Read the article

  • Deleting Pages and SEO

    - by Lynda
    I am in the process of re-designing my website. I will be changing the URL structure of several pages, and in some cases pages are going to be deleted outright because they are no longer necessary or are obsolete. My question is this: I do not want to hurt SEO by having a lot of 404 errors pop up. For the pages whose URLs are changing I will set up 301 redirects, but how do I handle the pages that I am deleting outright, which have nothing to redirect to? From my understanding, if those pages start returning 404s it will hurt SEO. Is this correct? How do I handle the deletion of pages? (One option is sketched after this item.)

    Read the article
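
    One hedged option for deliberately removed pages, rather than letting them fall back to generic 404s, is to answer with 410 Gone, which tells crawlers the removal is intentional and permanent. A minimal sketch using Apache's mod_alias, with placeholder paths:

        # Each deliberately deleted page answers 410 Gone instead of 404.
        Redirect gone /old-section/obsolete-page.html
        Redirect gone /old-section/another-removed-page.html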

  • I can not download anything

    - by Jason Machen
    I am very new to Ubuntu but decided to wipe my Windows 7 and install it. I cannot download anything from the Software Center; this is the error message I get. I can use the web in all other ways, including this site. What can I do? Thanks, Jason W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/main/source/Sources 404 Not Found [IP: 91.189.91.13 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/restricted Plus about 20 other lines.

    Read the article

  • I am unable to use the Wubi installer; I get the message "ERROR TaskList: Cannot download the metalink and therefore the ISO"

    - by pat
    I used Wubi a few months ago on both XP and Win7 systems with no problem, but I have been unable to install on either for the last 2 weeks. - pat From the log: 09-05 11:36 DEBUG CommonBackend: Could not find any ISO or CD, downloading one now 09-05 11:36 DEBUG TaskList: New task get_metalink 09-05 11:36 DEBUG TaskList: ### Running get_metalink... 09-05 11:36 DEBUG downloader: downloading http://cdimage.ubuntu.com/xubuntu/releases/12.04/release/xubuntu-12.04-desktop-amd64.metalink > C:\ubuntu\install 09-05 11:36 ERROR CommonBackend: Cannot download metalink file http://cdimage.ubuntu.com/xubuntu/releases/12.04/release/xubuntu-12.04-desktop-amd64.metalink err=[Errno 14] HTTP Error 404: Not Found

    Read the article

  • Site is working with http:// not with http://www

    - by Forza
    My site works if I type it as domain.com, but if I type it as www.domain.com I get a 404 error page. The domain is registered with Google Apps for Business and the hosting is done by another company. I have some A records pointing the site to this server; however, it appears the A records are only working halfway. What do I have to do to make it work for both http:// and http://www? (A guess at the missing piece is sketched after this item.)

    Read the article
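
    A hedged guess based on the symptoms above: the zone has a record for the bare domain but nothing (or the wrong target) for the www host. A minimal sketch of the missing entry, with a placeholder IP; the CNAME form simply makes www follow whatever the bare domain resolves to:

        ; existing record (bare domain), placeholder IP
        domain.com.       IN  A      203.0.113.10
        ; missing piece: make www follow the bare domain
        www.domain.com.   IN  CNAME  domain.com.

    If www.domain.com already resolves and it is the web server itself answering with the 404, the fix is more likely on the hosting side, e.g. adding www.domain.com as a ServerAlias (or as an additional domain in the hosting control panel) for the same site.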
