Search Results

Search found 9717 results on 389 pages for 'gkt pro'.

Page 137/389 | < Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >

  • Fetch as Google error 403

    - by Bojan Vidanovic
    Two weeks ago, Google stopped being able to access my website. In Webmaster Tools I can't fetch any page; I always get error 403, and the website has completely disappeared from the Google search results. I can't figure out why it suddenly can't see the site anymore. I've checked .htaccess and there is nothing there that blocks Google's crawlers, and robots.txt is fine too. The site is accessible normally for users. Has anyone had this problem? Please help!

    Read the article

  • WordPress .htaccess preventing subfolder access

    - by John K.
    This is sort of a goofy setup, but it's not in my power to reconfigure it at this time. I'm running in a shared hosting environment. The domain is example.com. This is an add-on domain on the host side, with example.com being redirected to the www/example.com sub-directory. That directory houses a standard WordPress site which acts as the main site when you visit example.com. The .htaccess file within that directory is:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^wp-admin/profile\.php$ /ssm/welcome [R]
        </IfModule>

    I have a subdirectory at the root level, alongside the /example.com subdirectory, that houses a CakePHP application. That subdirectory is /tracker. My problem is that when I attempt to browse to example.com/tracker, I get a 404 from WordPress because permalinks are on. What I think I need is either a rewrite rule in the WordPress .htaccess file that short-circuits the existing rewrite rules and permits example.com/tracker to work independently of the WordPress install, or a rewrite rule at the root level that short-circuits the redirect to the /example.com directory in the first place. Not sure how well I explained that, so here's a summary of the www/ directory structure:

        example.com/
        tracker/

    Add-on domain www.example.com redirecting to the /example.com directory with WordPress, and a tracker/ directory running CakePHP which I would like to access via www.example.com/tracker. If you need further info or clarification let me know!

    Read the article

  • Copyright of pictures uploaded to a website?

    - by All
    I want to run a website like a stock photo site. How can I be sure that the uploader is the real copyright holder of a picture? Is it possible to leave the responsibility for the copyright claim to the uploader, or is the webmaster ultimately responsible for the website's content? It generally confuses me that, for example, stock photo websites need a form signed by the model for photos showing a human face. How can they be sure that the signature actually belongs to the model? How do they keep themselves safe from a possible lawsuit in this case (e.g. if selling photos of a model with a fake signature)?

    Read the article

  • Shared Hosting Provider [closed]

    - by Garry
    Possible Duplicate: How to find web hosting that meets my requirements? I've been with Dreamhost for 5 years but the amount of downtime I have experienced over the last 6 months has been outrageous. As of now (2012) which hosting provider would you recommend? Most of my sites are small to medium readership blogs running WordPress. I've been looking at Inmotion and Hostgator. Reliability is paramount. Thanks

    Read the article

  • Meaning of Crawl errors

    - by com
    My question is about the definition of Crawl errors in Google Webmaster Tools. Crawl errors are divided into a few sections.

    HTTP section. I assume that all broken links in this section were somehow found by the crawler and that they are not links from the sitemap. If these links were found by scanning the sitemap pages for links, why doesn't it mention the source page, like the Sitemap section does with its Linked From column? Please correct me if I am wrong.

    Sitemap section. It looks like all of those links came from my sitemap. But there is a Linked From column; I already know that all of those broken links are from the sitemap, so in order to fix the errors I should revise my sitemap. Am I wrong?

    Not followed section. I don't know what this means. It looks like it accumulates all links that caused a redirect, but for some reason Google considers all of those redirects to be wrong redirects. Do you know if there is any set of rules for how a wrong redirect is determined? I actually found where my mistake was: I tried to normalize a URL and redirect it to the right URL, but I did the normalization in the wrong way.

    Not found section. This section is like the HTTP section but with 404 errors, and it has a Linked From column. But very often Linked From says "unavailable". What does that mean? Can Google not tell me how it found this non-existent page? How is this section related to the Sitemap section; does it contain all the 404 links from the sitemap too? There are too many 404 links, many more than in the sitemap. I took a look at what is in Linked From, and I saw that a link came from the sitemap two months ago. But why does Google keep it indexed? The link is already dead and the new sitemap doesn't have it. Is there any expiry date for old links?

    Unreachable section. This section seems to be for 500 errors. It doesn't have a Linked From column. There are too many completely meaningless links; I really don't know where this stuff came from, and without Linked From I am not able to figure out how to deal with it.

    Sorry for such a big topic, but I just want to make clear what every section stands for, because that is crucial in order to deal with all of these problems. Hopefully it will be useful not just for me. Thanks!

    Read the article

  • Canonical links for huge websites

    - by Florin
    Let's say I have 5 products that are identical except for the product code, the product color specification and the product image. The title, meta tags and description are identical (by the way, the color is chosen in a select form). I made 4 of the products link canonical to the 1 that is the master, based on many factors. If the master becomes inactive or goes out of stock, one product from the other 4 will become the new master and the rest will become canonical to it. The question is: by becoming the master after having been canonical, will the site suffer a penalty from Google, or will it work just fine? What will Google think about this strategy?
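
    A minimal PHP sketch of the canonical-master strategy described above, using hypothetical product data: the first active, in-stock variant becomes the canonical master and every variant page points its rel="canonical" at it, falling back to a self-referencing canonical if nothing is available.

        <?php
        // Hypothetical variant data for one product that exists in several colors.
        $variants = [
            ['code' => 'SHIRT-RED',   'url' => '/products/shirt-red',   'active' => true,  'stock' => 0],
            ['code' => 'SHIRT-BLUE',  'url' => '/products/shirt-blue',  'active' => true,  'stock' => 12],
            ['code' => 'SHIRT-GREEN', 'url' => '/products/shirt-green', 'active' => false, 'stock' => 3],
        ];

        // Pick the first active, in-stock variant as the canonical master.
        function canonicalMaster(array $variants): ?array
        {
            foreach ($variants as $variant) {
                if ($variant['active'] && $variant['stock'] > 0) {
                    return $variant;
                }
            }
            return null; // nothing available: caller falls back to a self-referencing canonical
        }

        $master  = canonicalMaster($variants);
        $current = $variants[0]; // the variant page currently being rendered

        $canonicalUrl = 'https://www.example.com' . ($master ? $master['url'] : $current['url']);
        echo '<link rel="canonical" href="' . htmlspecialchars($canonicalUrl) . '">' . "\n";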

    Read the article

  • Google search does not show sub-pages from my website

    - by Chang
    My website appears in Google search, but only the first page. Of course I have sub-pages linked from the first page, but the sub-pages do not show in Google search. Not in Yahoo, not in Bing. What should I do? The sub-pages have not shown up for three years. (I tried searching site:mydomain.com and pressed the 'repeat the search with the omitted results included' link.) What would you suspect the reason is? My website addresses were like xxx.php?yy=zzz, etc., so I changed them to /yy/zzz using mod_rewrite. I also thought it might be (X)HTML standard violations, so I have now fixed those too. I hope Google will soon have my entire website, but I am a little bit pessimistic. Do you have any thoughts?

    Read the article

  • URL structure for content that is updated daily

    - by Brendon
    A small, simple site I am working on displays a single page with the day's best offers on it. The user is able to move back and forth between previous days. Which of the following URL structures works best?

        Structure 1
            /index.html      -- today's best offers
            /2013-06-29.html -- yesterday's best offers, etc.

        Structure 2
            /index.html      -- 302 redirects to /2013-06-30.html (or whatever today is)
            /2013-06-30.html -- today's best offers
            /2013-06-29.html -- yesterday's best offers, etc.

    I quite like structure 2 from the user's point of view (they can share content easily), but I am a bit concerned about updating the redirect from /index.html every single day -- would this perhaps have unintended SEO consequences?
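
    A minimal sketch of how structure 2's daily redirect could update itself, assuming the host can serve a PHP index page instead of a static index.html:

        <?php
        // index.php -- hypothetical front page for "structure 2".
        // Issues a temporary (302) redirect to today's dated offers page,
        // so the redirect target changes every day without manual edits.

        $today  = date('Y-m-d');            // e.g. "2013-06-30"
        $target = '/' . $today . '.html';   // e.g. "/2013-06-30.html"

        header('Location: ' . $target, true, 302);
        exit;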

    Read the article

  • Having a good domain name and using domain aliases (I use notlong.com)?

    - by Michal P.
    I use only free servers. After creating my website, http://pundaquit.republika.pl, I decided to make that domain reachable through a simpler domain name, so I used the domain alias service http://notlong.com/ and now have the simple domain name http://pundaquit.notlong.com. The second advantage of using an alias here was to be independent from my file host, which I will have to change at some point. I haven't found a better alias service than notlong, because notlong.com is easy to remember. After that I encountered many problems:

        * most forums and social services treat the notlong address as spam,
        * Bing so far hasn't accepted the http://pundaquit.notlong.com domain,
        * and others.

    Is there another way to have a good free domain name? And what about the situation when your hosting server expires? Only a lasting layer of domain aliases makes you independent from the real file hosts.

    Read the article

  • Multilingual mobile site and Google SEO [closed]

    - by kollo
    Possible Duplicate: How should I structure my urls for both SEO and localization? What's the preferred SEO-compliant setup for a mobile website that is multilingual? I have:

        web:
            en: http://mysite.com
            fr: http://fr.mysite.com
            es: http://es.mysite.com
        mobi:
            http://m.mysite.com

    Should I use http://m.fr.mysite.com for my mobile French version? Nothing is specified on the Google blog for mobile: http://googlewebmastercentral.blogspot.co.uk/2011/12/new-markup-for-multilingual-content.html

    Read the article

  • Garbled text in server logs

    - by Glenn Dayton
    I recently looked over my server's logs and I found a bunch of garbled text. Here is a link to the full log, and here is a snapshot of what it looks like: ¹^œÌÓûFF™ÃŒ-ôÚÏàÃÒNRs§cÝi ~F#J"|³Ôq0ã~QQbA ¼¹¦’š¶É3œßå<ú€Ç©XAwdL?R°ÝbÒt©ôÇ·Æ…÷q˜ÇѺ| Þ,߯¡Êr yR¤Q¹Jêlš‘AzP\ ¦ÂY„ÉÉ,æ™ U™»ì³ÔÝáCÿ42‹Ö.nŽÉ2%ÓN8i4Œ®¿‘•"-se•äŽ¿ÊÁ§€þ 8åv%'#Äpžs/ÙÍ:¡1ÑÖÃå ºu|Q®!ÏyÆ,­NR@¶ËȯRDkã=ÿÀܸ ›¼Ô ’ð>ÓÌBftdÃ8–é}‰[øbãÝÁ嘲b¾W n´tT­œpäNëëÔ ·RUÓP+ÅuKÁ£¬\âÌ®:J<ÍÁ0:Q%ª(Œ˜E-ÁI:ï™4®hæœT†«);°Çda@´#èì}‡£ü•{57ý]¼|øÓñð÷ÈÌð‡MkŠâ•C~$Óô#ÙV¾Núå.#Á]vôžóæ» V&8)%øVSž“±ÔQLåÓý1–ŽÃßQ$¹ýž")ÈûQcÄý_ÔüGP=s‹vq#Pmoo.tigertutorialscomµÐOKÃ0ð»Ÿâ‘ØH“ What is this? and is someone trying to do something to my website?

    Read the article

  • Reason why a Brand new website is ranking for a top keyword? [on hold]

    - by Prasad EBK
    It's been noticed that one of our (new) competitors' websites is ranking 5th for a top keyword with high competition. The website is barely 2 months old. When I checked, not much SEO had been done on the website other than basic title/description tags. No backlinks. The website pushed our website down and took its place for the keyword. The only reason that came to my mind is the latest Penguin update. Or is the ranking just temporary, and will it eventually be pushed back? It has been holding on for at least one month and it's irritating. Thanks in advance.

    Read the article

  • Which Bliki (Blog+Wiki) solution can you recommend?

    - by asmaier
    I'm searching for a good Bliki solution, meaning a combination of blog and wiki that I can install on my own web space. I would like to be able to write articles in the wiki style, much like with MediaWiki. So I want to use a wiki markup language, have a revision history, comments, internal links to other pages (maybe in other languages) and be able to collaboratively edit the articles. On the other hand, I would like to have a blog-like view of my articles, showing new articles (and changes to existing articles) in a time-ordered fashion. It would be nice if it were possible to search through the articles and also tag them, so one could generate a tag cloud. A nice feature would also be the ability to order the articles according to views, or even a voting system for the articles. A permission system to keep certain articles private, showing them only to people logged in to the platform, would also be good. Apart from these nice-to-have features, an absolute must-have for the Bliki platform I'm looking for is the ability to handle math equations (written in LaTeX syntax) and display them either as pictures, like MediaWiki, or even better using MathJax. At the moment I'm using a web service called Wikidot which offers some of the mentioned features, however the free version shows too many advertisements, the blog feature is not mature, the design is quite ugly and loading of the page is often slow. So I want to install a Bliki solution on my own web space. Can you recommend any solution for that?

    Read the article

  • How to open the console in different browsers?

    - by Šime Vidas
    Chrome: Press CTRL + SHIFT + I to open the Developer Tools, then click the "Open console" icon in the bottom left corner.

    IE9: Press F12 to open the developer tools, then open the Script tab and click the "Console" button on the right.

    Firefox 4: Press CTRL + SHIFT + K to open the Web Console.

    What about Opera 11 and Safari 5? Clarification: by console I mean the JavaScript console that lets you input and execute JavaScript code.

    Read the article

  • How to properly remove URLs from Google's index?

    - by ElHaix
    On some of our sites, we now have several thousand pages that dilute our website's keyword density. The website is an MVC site with SEO routing. If I submit a new sitemap with, say, only the 2000 or so pages that we want indexed, even though navigating to the diluting pages still works, will Google re-index the site with only those 2000 pages, dropping the superfluous ones? For example, I want to keep roughly 2000 URLs like the following:

        www.mysite.com/some-search-term-1/some-good-keywords
        www.mysite.com/some-search-term-2/some-more-good-keywords

    And remove several thousand like the following that have already been indexed:

        www.mysite.com/some-search-term-xx/some-poor-keywords
        www.mysite.com/some-search-term-xx/some-poor-more-keywords

    These pages are not actually "removed", as navigating to their URLs still renders a page. Even though there are potentially hundreds of thousands of pages, I only want, say, 2000 to be re-indexed and retained, and the others removed (without having to do this manually). Thanks.
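
    Beyond resubmitting the sitemap, one programmatic option in a PHP front controller is to send an X-Robots-Tag noindex header on the low-value routes so crawlers drop them over time. A rough sketch; the shouldBeIndexed() rule below is a made-up placeholder, not the site's real routing logic.

        <?php
        // Hypothetical helper: decide whether the current route is one of the
        // pages worth keeping in the index.
        function shouldBeIndexed(string $path): bool
        {
            // Placeholder rule mirroring the example URLs above.
            return !preg_match('#^/some-search-term-[^/]+/some-poor-#', $path);
        }

        $path = parse_url($_SERVER['REQUEST_URI'] ?? '/', PHP_URL_PATH);

        if (!shouldBeIndexed($path)) {
            // Ask crawlers not to index this page while still following its links.
            header('X-Robots-Tag: noindex, follow');
        }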

    Read the article

  • Dealing with blackhat SEO companies and low quality link building competitors [closed]

    - by Mikko Ohtamaa
    I have often faced cases where the competitors of my client use blackhat SEO tactics: they contract an SEO company to do link building for their websites and products. Here is an example of a typical fake blog created only for link-building purposes:

        * A very low-content article, http://marshallfab.com/fundus-camera-explained.html, in an obvious fake blog: no author information, partially machine-generated text, and all blog posts are solely about link building.
        * Following the link you get to the promoted company page, http://www.patternless.com/ ...
        * ... which, unsurprisingly, links to the SEO company homepage in the footer text, http://www.affordableseofl.com/ ...
        * ... who are not shy about advertising their "extremely aggressive SEO plan".

    Does Google have any feedback channel where one could submit cases like this, so that Google would punish the link builders? Are there any means of bringing these blackhat companies to public shame, to damage their reputation?

    Read the article

  • Does your company name in an article title damage search engine relevance?

    - by user492681
    I've been wondering about this for a while but have never come across a solid answer. Many websites include their name in all the title tags of their articles. This is often apparent in WordPress blogs etc., e.g.: "Tsunami hits Japan and leaves thousands homeless | My Website Name". The issue I have is that search engines strip the stop words out of this sentence, leaving the words that are compared against the body text. So if I want my article to rank well and be relevant, in this case about the terrible tsunami that has recently struck Japan, what is to stop the "My Website Name" section of the title devaluing the relevance of the article? Am I over-worrying, or should I take this into consideration? Thanks in advance for any advice.

    Read the article

  • Facebook Comments Lost

    - by Rish
    I am using Facebook comments on a couple of my blogs at the moment, and I just found that somehow, magically, all the previous comments made on posts are gone and are no longer being displayed. I'm using WordPress for all of these blogs and Facebook Comments for WordPress to manage all the Facebook comments, but somehow they all disappeared all of a sudden. Another problem I've been facing lately is that I can't seem to moderate the Facebook comments. When I go to http://developers.facebook.com/tools/comments, where there should be a list of all the comments made on my sites (against the applications that I created just for the sake of comments), there is nothing there. This has been the case from the start, before the comments vanished from my site today. So technically, there are two issues to solve here.

    Read the article

  • Projects to learn web development

    - by David McDavidson
    I'm trying to get a job as a web developer, but the great majority of job offers require previous experience and a portfolio to prove you've got the required skills. Unfortunately I don't have any real experience or anything to show. The best way to learn is to try and tackle real-world problems, so I'd like to know: what would be some nice projects for learning that would also look good in a portfolio?

    Read the article

  • Use Outlook password for website verification

    - by Jack Lockyer
    I am currently building an internal employee dashboard for our global company (it is hosted on an external website for logistical reasons). I'd like (need) to password-protect the page, as we will be displaying sensitive information. My question is: is it possible to integrate with Outlook passwords? We have over 350 staff, all of whom use Outlook on a daily basis, and I'd love for the website to check whether the visitor is logged into Outlook and, if they're not, prompt them to log in. Is that possible? If it is, I'll get it developed straight away.

    Read the article

  • Why the difference in Google search results when searching with a script and when searching with a browser?

    - by Jayapal Chandran
    I wrote some code to find the position of my domain in the Google search results for a search keyword. I also did the same search with the browser, and the two results are different. Let me explain in detail. I have a website and I wanted to know on which page number of the Google results my domain appears for a given search string. For example, when I search for 'code snippets' I want to find on which page of the Google search a certain domain appears. I wrote a PHP script to search page by page, starting from page 1 up to page n, and I did the same task using a browser. The script returned page 4, but when browsing I can see the domain appearing on the second page. Here is the search string I use in my code:

        /search?hl=en&output=search&sclient=psy-ab&q=code+snippets&start=0&btnG=

    For each request I change start=0 to start=1, start=2, etc., and in the response I check whether my domain appears. Any idea why the search results differ?
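
    A rough PHP sketch of the kind of script described above, assuming 10 results per page and that Google returns parseable HTML to a plain HTTP client (it often will not; results fetched this way are also not personalized or localized the way a browser session is, which is one common reason the positions differ):

        <?php
        // Report on which result page a domain first appears for a query.
        // Purely illustrative: automated querying of Google result pages
        // may be blocked or rate-limited.
        function findResultPage(string $query, string $domain, int $maxPages = 10): ?int
        {
            for ($page = 0; $page < $maxPages; $page++) {
                $url = 'https://www.google.com/search?hl=en&q=' . urlencode($query)
                     . '&start=' . ($page * 10);

                $context = stream_context_create([
                    'http' => ['header' => "User-Agent: Mozilla/5.0\r\n"],
                ]);
                $html = @file_get_contents($url, false, $context);

                if ($html !== false && stripos($html, $domain) !== false) {
                    return $page + 1; // 1-based page number
                }
                sleep(2); // pause between requests
            }
            return null; // not found within $maxPages pages
        }

        $page = findResultPage('code snippets', 'example.com');
        echo $page !== null ? "Found on page $page\n" : "Not found\n";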

    Read the article

  • How to handle URLs with diacritic characters

    - by user359650
    I am wondering how to handle URLs that correspond to strings containing diacritics (á, ü, é, ...). I believe what we mostly see are URLs where the diacritic characters were converted to their closest ASCII equivalent, for instance "Rånades på Skyttis i Ö-vik" converted to ranades-pa-skyttis-i-o-vik. However, depending on the language, such a conversion might be incorrect. For instance in German, ü should be converted to ue and not just u, as seen with the below URL representing the Bayern München string as bayern-muenchen: http://www.bundesliga.de/en/liga/clubs/fc-bayern-muenchen/index.php. What I've also noticed is that browsers can render non-ASCII characters when they are percent-encoded in the URL, which is the approach Wikipedia has chosen, for instance http://de.wikipedia.org/wiki/FC_Bayern_M%C3%BCnchen, which the browser renders with the ü character in the address bar. Therefore I'm considering the following approach for creating URL slugs:

        (1) convert strings while replacing non-ASCII characters with their recommended ASCII representation: Bayern München -> bayern-muenchen
        (2) also convert strings to percent encoding: Bayern München -> bayern-m%C3%BCnchen
        (3) create a 301 redirect from version (1) to version (2)

    Version (1) URLs could be used for marketing purposes (e.g. mywebsite.com/bayern-muenchen), but the URLs that would end up being displayed in the browser bar would be version (2) URLs (e.g. mywebsite.com/bayern-münchen). Can you foresee particular problems with this approach? (Wikipedia is not doing it and I wonder why, apart from the fact that they don't need to market their URLs.)
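
    A minimal PHP sketch of slug variants (1) and (2) above, using a small hand-rolled transliteration table for the German-style rules; a real implementation would need a per-language table or the intl extension's Transliterator class.

        <?php
        // Variant (1): ASCII slug with language-aware replacements (partial, illustrative map).
        function asciiSlug(string $name): string
        {
            $map = [
                'ä' => 'ae', 'ö' => 'oe', 'ü' => 'ue', 'ß' => 'ss',
                'Ä' => 'Ae', 'Ö' => 'Oe', 'Ü' => 'Ue',
                'á' => 'a', 'à' => 'a', 'å' => 'a', 'é' => 'e', 'è' => 'e', 'ø' => 'o',
            ];
            $name = strtolower(strtr($name, $map));
            $name = preg_replace('/[^a-z0-9]+/', '-', $name); // everything else becomes a hyphen
            return trim($name, '-');
        }

        // Variant (2): keep the non-ASCII characters but percent-encode them.
        function percentEncodedSlug(string $name): string
        {
            $name = mb_strtolower(trim($name), 'UTF-8');
            $name = preg_replace('/\s+/u', '-', $name);  // spaces -> hyphens
            return rawurlencode($name);                  // ü -> %C3%BC; ASCII letters and hyphens pass through
        }

        echo asciiSlug('Bayern München') . "\n";          // bayern-muenchen
        echo percentEncodedSlug('Bayern München') . "\n"; // bayern-m%C3%BCnchen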

    Read the article

  • Website Stopped Showing in Google Search Results Suddenly

    - by Aman Virk
    I have a design and development blog, http://www.thetutlage.com (1.5 years old), which was doing really well in Google search; I was getting over 70% of my traffic from Google. Now, suddenly, over the last two days that has dropped from 70% to 20%, and when I search for the exact posts that I created, even after appending my website name, Google shows no results for them. Sample search text: "JQuery Game Programming Creating A Ping Pong Game Part 1". I have a post with exactly the same title and it does not show up anywhere in Google search. I am totally shocked; I write my own unique content and follow the Google guidelines like the Bible. Also, there is no message under my Webmaster Tools account stating any problem or error.

    Read the article

  • Why does 301 redirect work for http but not for https?

    - by Tom G
    Through my domain registrar I have set up a domain, essayme.co.uk, to automatically forward to https://google.com. If I go to http://essayme.co.uk it works as expected and redirects me to https://google.com.

        $ curl -i http://essayme.co.uk
        HTTP/1.1 301 Moved Permanently
        Cache-Control: max-age=900
        Content-Type: text/html
        Location: https://google.com
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Sat, 07 Jun 2014 11:14:16 GMT
        Content-Length: 0
        Age: 0
        Connection: keep-alive

    However, if I go to https://essayme.co.uk it just freezes and times out.

        $ curl -i https://essayme.co.uk
        curl: (7) Failed connect to essayme.co.uk:443; Operation timed out

    What is happening in the second case? (And, if possible, how can I get the redirect to work for https?)

    Problem background/clarification: I don't have an SSL certificate for the essayme.co.uk domain above, but I do for my live domain (let's call it mywebsite.com), and I was seeing the exact same problem on that domain (hence why I'm trying to debug it). Unfortunately I can't experiment with the live domain (as it's live) and I would like to avoid having to buy a second certificate for essayme.co.uk just for debugging (unless absolutely necessary).

    The problem I was seeing: my live domain, mywebsite.com (not its real name), has a valid SSL certificate. Visiting https://www.mywebsite.com displayed the webpage as expected. I had set up forwarding (like in the question above) from the naked domain (mywebsite.com) to https://www.mywebsite.com. Visiting http://mywebsite.com redirected to https://www.mywebsite.com as expected. However, visiting https://mywebsite.com would freeze and time out (as in the question above). I also tried forwarding it to http://www.otherwebsite.com as an experiment (i.e. forwarding to another site that does not use SSL), but the result was the same: visiting http://mywebsite.com redirected to http://www.otherwebsite.com as expected, while visiting https://mywebsite.com would freeze and time out again. So I set up essayme.co.uk as an experiment to try and understand why it doesn't work.
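
    The curl error above ("Failed connect to essayme.co.uk:443") suggests that nothing at the forwarding host is accepting connections on port 443 at all; a quick probe like the following PHP sketch (using the question's example hostname) can confirm that.

        <?php
        // Probe ports 80 and 443: a timeout on 443 would be consistent with the
        // registrar's forwarding service only listening for plain HTTP.
        $host = 'essayme.co.uk';

        foreach ([80, 443] as $port) {
            $errno  = 0;
            $errstr = '';
            $socket = @fsockopen($host, $port, $errno, $errstr, 5.0); // 5 second timeout

            if ($socket === false) {
                echo "Port $port: no connection ($errstr)\n";
            } else {
                echo "Port $port: connected\n";
                fclose($socket);
            }
        }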

    Read the article

  • Adding a regular expression in PHP does not work

    - by John Smiith
    I added the pattern ([a-zA-Z0-9\_\-]+) but it does not work. I want to include all CSS files; is there any other way to do it? My code:

    css.php:

        header("Content-type: text/css");
        $css = array(
            '([a-zA-Z0-9\\_\\-]+).css',
        );
        foreach ($css as $css_file) {
            $css_get = file_get_contents($css_file);
            echo $css_get;
        }

    call.php:

        <link href="css.php" rel="stylesheet" type="text/css" />

    I also want to rewrite css.php to css.css so the public sees css.css instead of css.php. How can I do that using a PHP script?
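
    A short sketch of what the code above seems to be aiming for: file_get_contents() expects a literal path rather than a pattern, so matching every .css file needs glob() instead (serving the result under the name css.css would additionally need a server rewrite rule, which is outside this PHP sketch).

        <?php
        // css.php -- concatenate every .css file in this directory and
        // serve the result as a single stylesheet.
        header('Content-Type: text/css');

        foreach (glob('*.css') as $cssFile) {
            echo file_get_contents($cssFile), "\n";
        }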

    Read the article
