Search Results

Search found 499 results on 20 pages for 'robots'.

Page 15/20 | < Previous Page | 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • When the canonical page itself changes url

    - by lulalala
    This is a continuation of the question How to handle canonical url changes like Stack Overflow. Say I have the canonical URL:

        questions/11/car   <--- canonically linked from --- questions/11/

    What happens if I want to change the canonical URL to questions/11/car-with-sgx? Obviously, questions/11/ will point to the new canonical URL, but how should the old questions/11/car point to the new one? There are two ways:

    1. 301 redirect the old canonical URL to the new canonical URL.
    2. Have the old canonical URL canonically link to the new canonical URL.

    According to this post: "[By using a canonical link instead of a redirect,] OldPage.html's rankings will drop over time due to fewer internal links, but the canonical tag won't make it disappear entirely. It could theoretically remain in their index until one of the following occurs: it is redirected permanently via 301; it returns a 404 for an extended period of time (they will keep checking for a while before dropping a URL); or a meta robots 'noindex' tag is added."

    If this is true, I really need to use a redirect from the old canonical URL to the new canonical URL, which means I need to keep a log of this content's previous canonical URLs so I know which ones to redirect. This is a bit of a hassle to do.
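
    For reference, the redirect option is a one-liner per retired URL. A hypothetical Apache sketch using this question's example paths (the domain is a placeholder, not from the question):

        Redirect 301 /questions/11/car http://example.com/questions/11/car-with-sgx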

    Read the article

  • Google Analytics setting cookies on static content despite being on entirely separate domain

    - by Donald Jenkins
    I recently decided to comply with the YSlow recommendation that static content be hosted on a cookieless domain. As I already use the root of my domain (donaldjenkins.com) to host my website—on which Google Analytics sets a few cookies—that meant I had to move the CNAME URL for the CDN serving the static files from cdn.donaldjenkins.com to an entirely separate, dedicated domain. I purchased cdn.dj (yes, it's a real Djibouti domain name), hosted the files on the root (which contains nothing else, other than a robots.txt file) and set a CNAME of e.cdn.dj for the CDN. This setup works, but I was rather surprised to find that YSlow was still flagging the static files for not being cookie-free (screenshot omitted).

    The cdn.dj domain was new and was never used for anything other than hosting these static files. Running HttpFox on the site shows the _utma and _utmz Google Analytics cookies are being set on the static files listed above—despite their being hosted on an entirely separate, dedicated domain. Here's my Google Analytics code:

        //Google Analytics tracking code
        var _gaq=[['_setAccount','UA-5245947-5'],['_trackPageview']];
        (function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0];
        g.src=('https:'==location.protocol?'//ssl':'//www')+'.google-analytics.com/ga.js';
        s.parentNode.insertBefore(g,s)}(document,'script'));
        // [END] Google Analytics tracking code

    I'm not obsessing about this issue—I know it's not really affecting server performance—but I'd just like to understand what is causing it not to go away.
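
    A commonly suggested mitigation (an assumption here, not a confirmed diagnosis of this particular setup) is to pin the GA cookies to the exact host serving the pages with _setDomainName, so they cannot be scoped any wider:

        var _gaq = [
          ['_setAccount', 'UA-5245947-5'],
          ['_setDomainName', 'www.donaldjenkins.com'], // restrict GA cookie scope to this host
          ['_trackPageview']
        ];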

    Read the article

  • Take a Tour of Google’s Data Centers

    - by Jason Fitzpatrick
    Miles of cables, robots archiving backup tapes, and quarter-million-gallon cooling tanks: take a tour of Google's data centers to see just how the search giant fuels the engine that delivers your search results so quickly. The collection of photos covers data centers around the world and offers a rare behind-the-scenes look at their operations. In some cases, we're even treated to a literal behind-the-scenes view, as seen in the photo above from the Mayes County, Oklahoma data center:

        A rare look behind the server aisle. Here hundreds of fans funnel hot air from the server racks into a cooling unit to be recirculated. The green lights are the server status LEDs reflecting from the front of our servers.

    Hit up the link below for the full tour, which includes photos and information about the data centers, the people that run them, and even a Street View style tour inside.

    Where the Internet Lives [Google Data Centers]

    Read the article

  • Soft 404 error on redirected outbound links

    - by Techlands
    I have a redirect script on my site which sends visitors to an affiliate site. However, in the last month I've noticed that Google Webmaster Tools is reporting my outbound links as 404 errors. Here is the breakdown of how it's set up. My outbound links are coded like this:

        <a href="/f/c123" rel="nofollow" target="_blank">Link Title</a>

    My redirect script then performs a 302 redirect to the affiliate link.

    Originally I had the affiliate (CJ) links directly in the HTML, but I noticed over time that this had some impact on my site's traffic. So I changed them to a redirect script and my traffic returned. This worked with no issues for over a year, but now I'm getting soft 404 errors in Google Webmaster Tools. I did try adding a rule to my robots.txt to block any links starting with /f/, but I'm not sure if this will help or whether Google will still report soft 404 errors. As a possible option, I am considering changing the a tag to a button tag and using an onclick event to load the link.
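
    For what it's worth, the robots.txt rule described above would look like this (syntax only; whether Googlebot keeps reporting soft 404s for URLs it can no longer fetch is exactly the open question):

        User-agent: *
        Disallow: /f/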

    Read the article

  • Web Essentials extension now available for Visual Studio Express

    - by ihaynes
    Originally posted on: http://geekswithblogs.net/ihaynes/archive/2014/08/12/web-essentials-extension-now-available-for-visual-studio-express.aspx

    Visual Studio Express has always had one big issue for me: the inability to install any (or very few) of the normal Visual Studio extensions. One of the best extensions for front-end development and design is the Web Essentials extension by Mads Kristensen on the VS team. It was a hugely welcome surprise to find that this extension is now available, in full, for VS Express, and has been since May, darn it! It has a huge number of useful features, not least being able to minify HTML, CSS and JavaScript files, which is almost a prerequisite for responsive sites. It can also bundle them together. There are lots of other features for HTML editing: IntelliSense and syntax highlighting for robots.txt files, 'surround with tag', IntelliSense for meta tags (including iOS, Twitter and Facebook), generating vendor-specific CSS, and so on. The full details can be found at the dedicated site: http://vswebessentials.com. This extension alone makes VS Express far more useful and worth having.

    Read the article

  • convert .htaccess to nginx

    - by Chip Gà Con
    It's me again :( I was trying to install Siwapp on my web server but I couldn't make it work with nginx. Here is the .htaccess file content:

        RewriteCond %{REQUEST_FILENAME} !index.php
        RewriteRule (.*)\.php$ index.php/$1
        RewriteCond $1 !^(index\.php|nhototamsu|assets|cache|xd_receiver\.html|photo|ipanel|automap|xajax_js|files|robots\.txt|favicon\.ico|ione\.ico|(.*)\.xml|ror\.xml|tool|google6afb981101589049\.html|googlec0d38cf2adbc25bc\.html|widget|iradio_admin|services|wsdl)
        RewriteRule ^(.*)$ index.php/$1 [QSA,L]

    When I access http://myurl.com/tin-tuc/tuyen-sinh/tu-van/2012/04/25757-phan-van-qua-giua-khoi-a1-va-khoi-a.html, nginx couldn't display the page correctly; it said "404 Not Found".
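
    A rough nginx translation might look like the following (an untested sketch: it assumes a PHP FastCGI backend on 127.0.0.1:9000, and it approximates the long whitelist above by serving any file that actually exists on disk and routing everything else through index.php):

        # Serve real files and directories; hand everything else to the front controller
        location / {
            try_files $uri $uri/ @siwapp;
        }

        location @siwapp {
            rewrite ^/(.*)$ /index.php/$1 last;
        }

        # PHP entry point, with the trailing path passed as PATH_INFO
        location ~ ^/index\.php(/|$) {
            fastcgi_split_path_info ^(/index\.php)(/.*)$;
            fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
        }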

    Read the article

  • New META TAGS with positive effects for seo ranking in 2011 and beyond

    - by Sam
    Hi all, I'm trying to make an up-to-date chart of meta tags, for all of us, with their purposes, their use, and their good (or bad) effects on search engines/being found. Also, does anybody know new/promising meta tags? I will add yours to my list, so this chart is the result of live discussion and stays up to date. Also, it would be creative to invent your own useful meta tag, because we are the ones making the web, or aren't we?

    LEGEND
    P PURPOSE? What does this meta tag do in 2011, if anything
    N NECESSARY? Does every site really need it or not?
    G GOOD Whether it will have a good effect for your site to be found
    I INVENTED meta tag; who knows, it may be accepted in a year!

    META "METANAME" = PURPOSE? - NECESSARY? - GOOD EFFECT?

    #### important
    meta "title" = P concise summary + teaser - N very - G extremely
    meta "description" = P description + teaser - N yes - G very
    meta "robots" = P if needed, to skip default dmoz/yahoodir listing - N no - G ?

    #### new & promising! Thanks for input (John, )
    meta "original-source" = P url of whoever broke the news gets credits - N ? - G ?
    meta "syndication-source" = P url for syndication of published news - N ? - G ?
    meta "canonical" = P ? - N ? - G ?

    #### seems obsolete
    meta "keywords" = P some keywords - N+G not for google but yahoo likes them
    meta "language" = P overrule guesswork by defining language - N no - G ?
    meta "page-topic" = P topic/theme - N ? - G ?
    meta "abstract" = P short summary - N ? - G ?
    meta "copyright" = ?

    #### invented by me
    meta "audience" = P filtered audience: "+seniors, +parents, -children, -youth"
    meta "mood" = P specifies textual style: "discussion, informative, commercial, sexual, fictional, scientific, romantic, therapeutic, technical"
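
    For concreteness, here is how the "important" group looks in an actual document head (a minimal sketch; the content values are invented placeholders, and note that "title" is strictly an element rather than a meta tag):

        <head>
          <title>Concise summary of the page</title>
          <meta name="description" content="Short description that doubles as a teaser in results pages.">
          <!-- skip the default DMOZ/Yahoo Directory snippets -->
          <meta name="robots" content="noodp, noydir">
        </head>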

    Read the article

  • Microsoft Robotics Studio in Ubuntu, with Wine ?

    - by Arkapravo
    I am using Ubuntu 9.10 and I am a bit of a robotics enthusiast. I have used KiKS (in MATLAB, for simulating Khepera robots), MobotSim (in Windows, simulates a point-like robot using a BASIC editor) and Player/Stage (with C/C++ on Ubuntu 9.04). My question: can MS Robotics Studio be installed on Ubuntu Linux using Wine (I am using 1.1.31)? Has anyone done it? Is there any other way to install MS Robotics Studio on Unix (any flavour)? Thanks for your reply!

    Read the article

  • problem with Webmaster Google Sitemap

    - by Alex
    I have a WP MU 3.6.1 with Domain Mapping (0.5.4.3), W3TC (0.9.3) and Google XML Sitemaps (4.0 BETA). I have 4 different sitemaps:

        sub-1.com/sitemap.xml
        sub-2.com/sitemap.xml
        sub-3.com/sitemap.xml
        sub-4.com/sitemap.xml

    In Google Webmaster Tools I got 59 errors & 14 warnings.

    Sitemap errors - Errors: "We encountered an error while trying to access your Sitemap. Please ensure your Sitemap follows our guidelines and can be accessed at the location you provided and then resubmit. General HTTP error: 404 not found. Sitemap: sub-2.com/sitemap-pt-post-2011-02.xml" etc. (but when I click on my sitemap links they work fine)

    Sitemap errors - Warnings: "URLs not accessible. When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted. Sitemap: sub-2.com/sitemap-misc.xml HTTP Error: 404 URL: /sitemap.html" (but when I click on my sitemap links they work fine)

    Sitemap errors - Index errors: "URLs not accessible. When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted. HTTP Error: 404 URL: /sitemap-pt-post-2010-09.xml" (but when I click on my sitemap links they work fine)

    Web pages: 3,276 submitted, 3,247 indexed.

    What do I have to put in Network Admin > Performance (W3TC) > Page Cache > Cache Preload > Sitemap URL? I have added "/sitemap.xml".

    My robots.txt: http://pastebin.com/3K2U0mQa
    My .htaccess: http://pastebin.com/efJJ6zwy

    How can I make it work right?

    Read the article

  • MS Bing web crawler out of control causing our site to go down

    - by akaDanPaul
    Here is a weird one that I am not sure what to do about. Today our company's e-commerce site went down. I tailed the production log and saw that we were receiving a ton of requests from this range of IPs: 157.55.98.0-157.55.100.0. I googled around and found out that it is an MSN web crawler. So essentially the MS web crawler overloaded our site, causing it not to respond, even though our robots.txt file contains the following:

        Crawl-delay: 10

    So what I did was ban the IP range in iptables. But what I am not sure about is how to follow up. I can't find anywhere to contact Bing about this issue, I don't want to keep those IPs blocked because I am sure we will eventually get de-indexed from Bing, and it doesn't really seem like this has happened to anyone else before. Any suggestions?

    Update - my server / web stats: Our web server is running Nginx, Rails 3, and 5 Unicorn workers. We have 4 GB of memory and 2 virtual cores. We have been running this setup for over 9 months now and never had an issue; 95% of the time our system is under very little load. On average we receive 800,000 page views a month, and this never comes close to bringing down / slowing down our web server. Taking a look at the logs, we were receiving anywhere from 5 up to 40 requests/second from this IP range. In all my years of web development I have never seen a crawler hit a website so many times. Is this new with Bing?
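
    For the record, banning a range like that in iptables can be done with a CIDR rule plus the iprange match; a sketch of the ban the asker describes, using the range quoted above:

        # 157.55.98.0 through 157.55.99.255
        iptables -A INPUT -s 157.55.98.0/23 -j DROP
        # 157.55.100.0 through 157.55.100.255, via the iprange match
        iptables -A INPUT -m iprange --src-range 157.55.100.0-157.55.100.255 -j DROP

    One caveat worth hedging: Crawl-delay directives generally need to sit inside a specific User-agent record (e.g. msnbot) to be honored, so it may be worth checking how that robots.txt is grouped.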

    Read the article

  • Apache virtual host for drupal test site

    - by bsreekanth
    Hello, I am a programmer trying to launch my first website. Through different helpful posts on SF and elsewhere, I set up an account with Linode and set up a slice (Debian, Apache, etc.). I have a Drupal site under development, and I'd like to have a test site on the Linode server as well, with the following requirements: What is the best way to set up and protect the test site alongside the actual (production) site? Are virtual hosts the answer? To protect the test site, is .htaccess authentication sufficient to prevent access from the public and robots? I am also modifying the theme, database contents, etc., so having two sites under one Drupal installation may not be a good idea. What do you suggest? Thanks in advance. bsreekanth.
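
    As a point of reference, the usual pattern is one virtual host per site, with HTTP basic authentication on the test one; a minimal Apache sketch (all names and paths are placeholders, not taken from the question):

        <VirtualHost *:80>
            ServerName test.example.com
            DocumentRoot /var/www/drupal-test
            <Directory /var/www/drupal-test>
                AuthType Basic
                AuthName "Test site"
                AuthUserFile /etc/apache2/.htpasswd-test
                Require valid-user
            </Directory>
        </VirtualHost>

    Basic auth also keeps well-behaved crawlers out, since they receive a 401 instead of page content.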

    Read the article

  • Duplicating someone's content legitimately & writing HTML to support that

    - by Codecraft
    I want to add content from other blogs to my own (with the authors' permission) to help build additional relevant content and support articles I've found useful that others have written. I'm looking into how to do this responsibly, i.e. by giving the original content author a boost and not competing against them for search traffic which should go to their site. In order to keep my duplicate content out of search, and to hint to the search engines where the original content is to be found, I've implemented:

        <head>
        <meta name='robots' content='noindex, follow'>
        <link rel='canonical' href='http://www.originalblog.com/original-post.html' />
        </head>

    Additionally, to boost the original article and to let readers know where it came from, I'll be adding something like this:

        <div>
        Article originally written by <a href='http://www.authorswebsite.com'>Authors Name</a>
        and reproduced with permission.<br/>
        <a href='http://www.originalblog.com/original-post.html' target='new'>
        Read the original article here.
        </a>
        </div>

    All that remains is a way to 'officially' credit the original author in the HTML for the search spiders to see. Can anyone tell me a way to do this, possibly using rel="author" (as far as I can see, that's only good for my own original content)? Or perhaps it doesn't matter, given that the reproduced pages will be kept out of search engines? Also, have I overlooked anything in the approach?

    Read the article

  • What naughty ways are there of driving traffic?

    - by Tom Wright
    OK, so this is purely for my intellectual curiosity and I'm not interested in illegal methods (no botnets please). But say, for instance, that some organisation incentivised link sharing in a bid to drive publicity. How could I drive traffic to my link? Obviously I could spam all my friends on social networking sites, which is what they want me to do, but that doesn't sound as fun as trying to game the system. (Not that I necessarily dispute the merit of this particular campaign.) The ideas I've come up with so far (in order of increasing deviousness) include:

    Link-dropping - This is too close to what they want me to do to be devious, but I've done it here (sorry) and I've done it on Twitter. I'm subverting it slightly by focusing on the game aspects rather than their desired message.

    AdWords - Not very devious at all, but effectively free with the vouchers I've accrued. That said, I must be pretty poor at choosing keywords, because I've seen very few hits (~5) so far.

    Browser testing websites - The target has a robots.txt which prevents Browsershots from processing it, but I got around this by including it in an iframe on a page that I hosted.

    But my creative juices have run dry, I'm afraid. Does anyone have any cheeky/devious/cunning/all-of-the-above ideas for driving traffic to my page?

    Read the article

  • HTTPS To http redirect issue. How to overcome?

    - by Akshay
    I have already seen suggestions on how to rewrite HTTPS to HTTP, and am currently using this technique:

        RewriteCond %{SERVER_PORT} ^443$
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [L,R=301]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    Problem: I am currently on a HostGator VPS and have found Google indexing my HTTPS pages. Weird for me, as I never bought an SSL certificate; my site is a blog only. When I spoke on the Google forums (https://productforums.google.com/forum/#!msg/webmasters/2Hz46t44nwk/7voZWudFtAQJ), they said I should redirect HTTPS to HTTP. Now that I have redirected it using the above method, I am still getting SSL warnings in browsers, and I found that Google is still indexing my new pages with HTTPS. I feel that, since I do not have an SSL certificate, adding a redirect on HTTPS doesn't work. So if Google indexes my HTTPS pages, should I go and buy an SSL certificate just to tell it there to redirect HTTPS to HTTP? Why would I do that? Please help me; this has reduced my traffic by nearly 30%. I have even told search engines to go to this file (which disallows everything) if they are on HTTPS:

        Options +FollowSymlinks
        RewriteCond %{SERVER_PORT} ^443$
        RewriteRule ^robots.txt$ robots_ssl.txt

    Read the article

  • prevent search engines indexing depending on domain

    - by Javier
    We have a dedicated server with a hosting company with a couple of dozen websites on it. It happens that the nameservers' (e.g. ns1.domain.com, ns2.domain.com) IPs coincide with some client sites, let's say webclient1.com and webclient2.com. The problem is that for certain searches in Google, some results show up like ns1.domain.com/result instead of webclient1.com/result, which is pretty wrong and annoying for our clients. In fact, if you type ns1.domain.com or ns2.domain.com into the browser, it will load some client pages instead. Is there any way to prevent Google from tracking those results, only in the case where the robots come in through the ns domains? This may not be the right place to ask this as well, but why is it happening? Is it a result of a bad server configuration? I'm pretty new to these matters, so thank you in advance for any help!
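
    One way to do what the question describes, assuming Apache with mod_rewrite (a sketch; the ns hostnames are the question's examples, and robots_ns.txt is a hypothetical file name):

        RewriteEngine on
        # When a crawler arrives via the nameserver hostnames, serve a blocking robots.txt
        RewriteCond %{HTTP_HOST} ^ns[12]\.domain\.com$ [NC]
        RewriteRule ^robots\.txt$ /robots_ns.txt [L]

    with robots_ns.txt containing:

        User-agent: *
        Disallow: /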

    Read the article

  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially if I only had one post a day. The situation is something like this:

        www.sitename.com/blog/archives/2013/06/01 - daily archive for 1 June 2013
        www.sitename.com/blog/archives/2013/06/my-post-name.html

    So here we have two pages that are basically identical, except the daily archive has some meaningless title like "Daily Archive for 1 June 2013". And I have no control over which content Google decides is the primary content. It's quite possible (and likely) that the daily archive could be the "primary" content and the actual post itself the "duplicate". Once I realised it was doing this, I modified the daily archive template to include:

        <meta name="robots" content="noindex">

    Here we are a few weeks later and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not have been crawled yet, but I am worried that the original post (which should be the PRIMARY content) has been marked as duplicate content by Google. Now that I've noindexed the daily archives, I might end up with noindexed content AND the original articles still flagged as duplicates, and nothing will show up in search at all. Have I screwed myself here, or is there a way out?

    Read the article

  • Why is <my site url> not indexed by search engines? [closed]

    - by Henrik Erlandsson
    The site was indexed fine until about a year ago. The only thing I can think of is that search engines throw up at using h5 before h4, or that some person (fantasizing now) has reported my site as unsafe to every search engine. However, I'm not here to speculate. The site validates, and has an RSS feed on the front page, for Pat Morita's sake! To me, it looks like the kind of site search engines would feast on. It's got more than a dozen blogs on it, if nothing else. Hah. :) I was thinking you could identify basically what has changed in search engines (currently Google, Yahoo and Bing, which used to work fine) in the last year to make them not find news and blog articles on this site. The site was submitted to Google, oh, way back in 2006. With online crawler tests I get mixed results: some crawlers index fine, some go blank. I don't really know which ones are reliable and am looking to you guys for advice on that. Yes, I am prepared to again verify my site with Google and upload a sitemap, but that's not the topic here. I really would first like to know what change on the site last year could make search engines not index it. (Yes, the robots.txt is fine; there should be nothing to discourage bots there.) It's a very intriguing problem, one which I have yet to find the reason for but would like to know the reason for. Any and all input appreciated, but I would heavily enjoy pertinent advice the most. ;)

    Edit: Some Google searches that don't show up include - aca630. All of these are posted in the news and blogs that are on the front page there. Now, these search terms are extremely specific, as the term is almost unique on the web, and ACA630 is also a very qualified search term that can't be confused with mainstream search terms.

    Read the article

  • Does Google sometime prevent new white hat sites from ranking at all in some verticals?

    - by JVerstry
    Assume someone wants to implement a new viagra or acai berry e-commerce website. There is a lot of competition, and this site does not really bring anything new, other than a new online counter to buy products at a nice price. Assume this site does not use any black hat techniques and that it stays within Google's quality guidelines, and assume it has no (or few) backlinks (from non-authoritative websites). Assume this website's pages are indexed properly in Webmaster Tools and that no penalties are reported, no site improvements are suggested, Google crawls the site daily as reported in GWT, and there are no robots.txt configuration issues. Does Google sometimes decide not to rank such a site for any user query (for weeks) because of a lack of original content? The reason I am asking is that I am trying to understand the possible cause of a similar situation I am observing with two sites. If so, what is the way out to start ranking for these sites? If not, does it mean the cause is elsewhere for sure? Any confirmed info to get out of the maze is welcome.

    Read the article

  • Empty APN list on your Android phone? Restart the phone to fix it

    - by Gopinath
    Today I tried to connect to the internet on my Google Galaxy Nexus running 4.2.2 using a cellular data connection, and it failed. I tried reaching a customer care representative to figure out why the data connection was not working, but the robots (Interactive Voice Response systems) never allowed me to reach a human. After digging through the settings I found that an empty list of APNs (Access Point Names) was the reason I could not connect to the internet. I am not sure what caused the APN list to vanish, but I tried to create a new one matching the settings required for AT&T mobile. To my surprise, the newly created APN was not shown in the APN list either. So there was clearly something wrong with the phone: my APNs were not shown, and a newly created one would not display. A simple Google search on this problem turned up many forum discussions, and the solution is to restart the phone. As soon as I restarted my phone, the APN list was automatically populated and I was able to connect to the internet on my mobile.

    Read the article

  • How do I convince my boss that it's OK to use an application to access an outside website?

    - by Cyberherbalist
    That is, if you agree that it's OK. We have a need to maintain an accurate internal record of bank routing numbers, and my boss wants me to set up a process where once a week someone goes to the Federal Reserve's website, clicks on the link to get the list of routing numbers (or the link giving the updates since a particular date), and then manually uploads the resulting text file to an application that will make the update to our data. I told him that a manual process was not at all necessary, and that I could write a routine that would fetch the FED's routing numbers from within the application that keeps our data updated, and put it on whatever schedule was appropriate. But he is greatly opposed to doing this, and calls it "hacking the Federal Reserve website." I think he's afraid that the FED is going to come after us. I showed him the FED's robots.txt file, and the only thing it forbids is automated indexing of pages with the extension .cf*:

        User-agent: *  # applies to all robots
        Disallow: CF   # disallow indexing of all CF* directories and pages

    This says nothing about accessing the same data automatically that you could access manually. Does anyone have a good counterargument to the idea that we'd be "hacking" the FED?
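
    For argument's sake, the "routine" in question is only a few lines in any language; a Python sketch (the URL is an invented placeholder, not the FED's actual endpoint):

        import urllib.request

        # Hypothetical location of the routing-number text file.
        URL = "https://example.com/fedach-directory.txt"

        def fetch_routing_numbers():
            """Download the same text file a person would get by clicking the link."""
            with urllib.request.urlopen(URL) as resp:
                return resp.read().decode("ascii", errors="replace")

        if __name__ == "__main__":
            lines = fetch_routing_numbers().splitlines()
            print("fetched", len(lines), "lines")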

    Read the article

  • Why was my site rejected for Google Adsense?

    - by hyuun jjang
    I have a 3-year-old blog with around 16 articles/tutorials about programming problems and solutions. It has been getting a lot of views lately, so I decided to apply for a Google AdSense account. When I first applied via Blogger, Google replied with the following statement:

        Page Type: In order to participate in Google AdSense, publishers' websites and application information must satisfy the following guidelines:
        - Your website must be your own top-level domain (www.example.com and not www.example.com/mysite).
        - You must provide accurate personal information with your application that matches the information on your domain registration.
        - Your website must contain substantial, original content...

    So, as I understood it, I decided to buy a domain and point my Blogger blog to that new naked domain. Here is the newly bought domain where all the contents of my old blog reside: http://icodeya.com/

    I reapplied, hoping that this time I would make the cut. But then I got this reply:

        Further detail: Unable to review your site: While reviewing http://www.icodeya.com/, we found that your site was down or unavailable. We suggest you check whether there was a typo in the URL submitted. When your site is operational, you can resubmit your application with the correct site by following the directions below.

    I'm a bit disappointed. Maybe I did something wrong with the DNS configuration or something, but you can clearly see that my site is fully functional. I heard that Google sends robots to crawl the site, etc. It's just sad, because I invested in a domain name and now I can't even find ways to earn from it. Any tips?

    Read the article

  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems not to be taking effect. Here's the story so far: we're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner). So, the timeline of what I've tried, starting on July 29th:

    Used 301 redirects for all pages (e.g. thenewhive.com/tag/art = newhive.com/tag/art). At this point we noticed that we had disappeared from search results when searching "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company.

    So on August 5th I verified the new domain in Webmaster Tools (the old domain was already verified) and submitted a change of address request with Webmaster Tools / Configuration / Change of Address.

    Then after another week, on August 13th, I went to Webmaster Tools / Health / Fetch as Google, fetched our homepage and a couple of sub-pages (all successfully), and clicked "Submit to Index" for the homepage.

    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues. Crawl Errors: no data available. From Health - Index Status:

        Total indexed: 0
        Ever crawled: 42,490
        Not selected: 12
        Blocked by robots: 0

    I'm really at a loss here; any help would be appreciated.

    Read the article

  • Apache .htaccess problem: No input file specified.

    - by Michal M
    Hello everyone, can someone help me with this? I feel like I've been hitting my head against a wall for over 2 hours now. I've got Apache 2.2.8 + PHP 5.2.6 installed on my machine, and the .htaccess with the code below works fine, no errors:

        RewriteEngine on
        RewriteCond $1 !^(index\.php|css|gfx|js|swf|robots\.txt|favicon\.ico)
        RewriteRule ^(.*)$ /index.php/$1 [L]

    The same code on my hosting provider's server gives me a 404 error code and outputs only: "No input file specified." index.php IS there. I know they have Apache installed (I cannot find version info anywhere) and they're running PHP v5.2.8. I'm on Windows XP 64-bit; they're running some Linux and PHP in CGI/FastCGI mode. Can anyone suggest what could be the problem? PS: if that's important, this is for CodeIgniter to work with friendly URLs.
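
    A commonly reported fix for exactly this symptom (hedged, since the host's PHP configuration isn't known here): when PHP runs as CGI/FastCGI, the path-info style rewrite often fails with "No input file specified", and a query-string rewrite works instead:

        RewriteEngine on
        RewriteCond $1 !^(index\.php|css|gfx|js|swf|robots\.txt|favicon\.ico)
        RewriteRule ^(.*)$ /index.php?/$1 [L]

    CodeIgniter can be told to expect this with $config['uri_protocol'] = 'QUERY_STRING' in config.php, if needed.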

    Read the article

  • Existing laravel 4 project gives 404 in browser

    - by Richard A
    I'm trying to set up a development environment on a virtual machine running Ubuntu 14.04 LTS using Nginx and HHVM. To do this, I followed the tutorial here. This goes well with a new installation of Laravel. But when I import an existing Laravel 4 project and try to open that on my actual machine (which will serve as the client, running Windows 7), I'm getting a 404 File Not Found error on the screen while connecting to http://sav.savrichard.dev. I did add this to the hosts file with the correct IP address. The virtual machine is receiving the request and responds with a 404 error. How do I solve this error? I'm pretty new to Ubuntu so I'm not exactly sure what's wrong. The project is located at /var/www/sav.savrichard.net and the server configuration is as follows:

        server {
            listen 80 default_server;
            root /var/www/sav.savrichard.net/public;
            index index.html index.htm index.php;
            server_name sav.savrichard.dev;
            access_log /var/log/nginx/localhost.sav.savrichard.dev-access.log;
            error_log /var/log/nginx/localhost.sav.savrichard.dev-error.log error;
            charset utf-8;
            location / {
                try_files \$uri \$uri/ /index.php?\$query_string;
            }
            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt { log_not_found off; access_log off; }
            error_page 404 /index.php;
            include hhvm.conf;
            # Deny .htaccess file access
            location ~ /\.ht {
                deny all;
            }
        }

    And the hhvm.conf file is:

        location ~ \.(hh|php)$ {
            fastcgi_keep_conn on;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Read the article

  • Blogger homepage won't update!

    - by Sims Siniron
    I am new to blogging and Webmaster Tools. When I add a new post to my blog, it usually automatically updates my homepage as well. But since last January 14th, my homepage hasn't been updated in the Google SERPs. As a result, I am losing my popularity in the SERPs: previously, when I posted a new article, 70-80% of my posts would reach the first page of results, but since the problem occurred, none of them reach the top 15 pages of the Google SERPs :( On 1/12/12, Google Webmaster Tools sent me a "Notice of DMCA removal from Google Search" message indicating one of my URLs contained some infringing content, which I deleted after receiving their notice. Not only that, I also checked all of my posts for any additional infringing content. After removing it, I filled out Google's Content Removed Notification form to notify them, and Google sent me feedback that they had received it, suggesting "In the future, if you have removed the allegedly infringing content from your site (and won't put it back), please use the correct form", which I also filled out. Now my question is: is everything alright with what I did? Although my new posts are indexed in the Google SERPs with "..", why won't Google's robots update my homepage, which previously updated automatically whenever a new article was published?

    Read the article
