Search Results

Search found 14053 results on 563 pages for 'upk pro (knowledge pathwa'.


  • Facebook Comments Lost

    - by Rish
    I am using Facebook comments on a couple of my blogs at the moment, and I just found that all the previous comments made on posts are gone and are no longer being displayed. I'm using WordPress for all of these blogs and the Facebook Comments for WordPress plugin to manage the Facebook comments, but somehow they all disappeared all of a sudden. Another problem I've been facing lately is that I can't seem to moderate the Facebook comments. When I go to http://developers.facebook.com/tools/comments, where there should be a list of all the comments made on my sites (against the Applications I created just for the sake of comments), there is nothing there. This has been the case from the start, even before the comments vanished from my site today. So technically, there are two issues to solve here.

    Read the article

  • Has anyone bought Market Samurai and had a good experience?

    - by ZakGottlieb
    When a piece of marketing software offers an affiliate program, it's hard to ever find an objective review of it, so I thought I might try on Quora. It just boggles my mind that it costs only $97 flat, when other SEO or keyword research tools like Wordtracker cost almost the same PER MONTH and don't seem to offer much more, if anything... Can anyone explain this, and would anyone recommend Market Samurai WITHOUT posting a link to it in their review? :)

    Read the article

  • Meaning of Crawl errors

    - by com
    My question is about the definition of Crawl errors in Google Webmaster Tools. Crawl errors are divided into a few sections; let me go through them one by one.

    HTTP section. I assume all the broken links in this section were somehow found by the crawler and are not the links from my sitemap. If these links were found by scanning the pages in my sitemap for links, why doesn't it mention the source page, like the Sitemap section does with its Linked From column? Please correct me if I am wrong.

    Sitemap section. It looks like all of these links came from my sitemap. But there is a Linked From column, and I already know these broken links are from the sitemap - so am I right that, in order to fix the errors, I should revise my sitemap?

    Not followed section. I don't know what this means. It seems to accumulate all links that caused a redirect, but for some reason Google considers all those redirects wrong. Is there a set of rules for determining a wrong redirect? I actually found my mistake: I tried to normalize URLs and redirect them to the right URL, but I did the normalization in a wrong way.

    Not found section. This section is like the HTTP section but with 404 errors, and it does have a Linked From column. But very often Linked From shows "unavailable". Does that mean Google cannot tell me how it found the non-existent page? And how does this section relate to the Sitemap section - does it contain all the 404 links from the sitemap too? There are far more 404 links here than in the sitemap. When I looked at Linked From, I saw that one link came from a sitemap two months ago. Why does Google keep it indexed when the link is already dead and the new sitemap doesn't contain it? Is there an expiry date for old links?

    Unreachable section. This looks like the section for 500 errors, and it has no Linked From column. It contains many seemingly meaningless links; I really don't know where they came from, and without Linked From I am not able to figure out how to deal with them.

    Sorry for such a big topic, but I just want to make clear what every section stands for, because that's crucial for dealing with all these problems. Hopefully it will be useful not just for me. Thanks!
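
    For the normalization mistake mentioned under "Not followed", a minimal sketch of a correct URL-normalizing 301 in Apache's .htaccess, assuming a hypothetical example.com domain, mod_rewrite enabled, and www as the canonical host:

        RewriteEngine On
        # Send the bare domain to the www host with a single permanent redirect,
        # preserving the requested path, so every page has one canonical URL.
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]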

    Read the article

  • Google indexed pages a day ago and they were showing in search, but today everything vanished

    - by ganesh
    We had a robots.txt that disallowed all robots while we were in development. We are live now. We changed robots.txt according to our requirements a day ago and submitted the pages for indexing via Google Webmaster Tools. After this we could see proper results in search, and Google image search was working as expected. Suddenly, today, all of this vanished from Google Search, and I can again see the old result, i.e. the under-construction message. I checked robots.txt in Google Webmaster Tools and it's OK - no crawling errors. Kindly let me know what exactly happened, and how can I report this issue to Google?
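
    For reference, the switch from development to live usually comes down to a single robots.txt rule; a minimal sketch (the actual files are not shown in the question, so this is an assumption):

        # Development: block every crawler from the whole site
        User-agent: *
        Disallow: /

        # Live: an empty Disallow permits crawling of all paths
        User-agent: *
        Disallow: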

    Read the article

  • How to handle URLs with diacritic characters

    - by user359650
    I am wondering how to handle URLs that correspond to strings containing diacritics (á, ü, ...). I believe what we're mostly seeing are URLs where diacritic characters were converted to their closest ASCII equivalent, for instance Rånades på Skyttis i Ö-vik converted to ranades-pa-skyttis-i-o-vik. However, depending on the language, such a conversion might be incorrect. For instance in German, ü should be converted to ue and not just u, as seen in the below URL representing the Bayern München string as bayern-muenchen: http://www.bundesliga.de/en/liga/clubs/fc-bayern-muenchen/index.php. I've also noticed that browsers can render non-ASCII characters when they are percent-encoded in the URL, which is the approach Wikipedia has chosen, for instance http://de.wikipedia.org/wiki/FC_Bayern_M%C3%BCnchen, which the browser renders with the umlaut intact. Therefore I'm considering the following approach for creating URL slugs: (1) convert strings while replacing non-ASCII characters with their recommended ASCII representation: Bayern München - bayern-muenchen; (2) also convert strings to percent encoding: Bayern München - bayern_m%C3%BCnchen; (3) create a 301 redirect from version (1) to version (2). Version (1) URLs could be used for marketing purposes (e.g. mywebsite.com/bayern-muenchen), but the URLs that would end up being displayed in the browser bar would be version (2) URLs (e.g. mywebsite.com/bayern-münchen). Can you foresee particular problems with this approach? (Wikipedia is not doing it, and I wonder why, apart from the fact that they don't need to market their URLs.)
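
    A minimal sketch of the two-step slug creation in JavaScript, assuming a hand-rolled transliteration table (a real implementation would need per-language tables, which is exactly the difficulty described above):

        // German replacements shown; anything left over falls back to a plain
        // ASCII strip. This table is illustrative, not exhaustive.
        var translit = { 'ä': 'ae', 'ö': 'oe', 'ü': 'ue', 'ß': 'ss' };

        function asciiSlug(s) {
          return s.toLowerCase()
            .replace(/[äöüß]/g, function (c) { return translit[c]; })
            .replace(/[^a-z0-9]+/g, '-')   // collapse everything else to hyphens
            .replace(/^-|-$/g, '');        // trim leading/trailing hyphens
        }

        function encodedSlug(s) {
          // Percent-encoded variant, as in the Wikipedia URL above.
          return encodeURIComponent(s.toLowerCase().replace(/\s+/g, '-'));
        }

        // asciiSlug('Bayern München')   -> "bayern-muenchen"
        // encodedSlug('Bayern München') -> "bayern-m%C3%BCnchen"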

    Read the article

  • How to block FeedReader from fetching my content onto their site?

    - by Wei Kai
    As you can see from the picture, my site's content is duplicated by FeedReader and indexed at Google. When I click the FeedReader link, it uses some sort of iframe to draw content from my site live. This amounts to content duplication, and I believe it harms my site. (Stack Overflow doesn't allow me to post the image directly because my account is new; please click the link to see the picture - many thanks.) What can I do to prevent FeedReader from fetching my content onto their site? I know robots.txt can perform such a function, but I don't know how to do it. Any help would be much appreciated. I also raised this issue with FeedReader two days ago, but have yet to get any reply from them.
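
    Two hedged sketches. The robots.txt rule assumes FeedReader's fetcher honors robots.txt and identifies itself with a "FeedReader" user-agent token (check your access logs for the real string); the header, for Apache with mod_headers, stops the live iframe embedding described above regardless:

        # robots.txt - only effective if their fetcher respects it
        User-agent: FeedReader
        Disallow: /

        # .htaccess - forbid other sites from framing your pages
        Header always set X-Frame-Options SAMEORIGIN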

    Read the article

  • A good resource to get the most out of Google Analytics

    - by glinch
    I was wondering if anyone could offer me some advice as to the best resources out there (ideally books) on Google Analytics. I have a basic understanding but a lot of room for improvement. The book "Advanced Web Metrics with Google Analytics" by Brian Clifton appears to be a good starting point, but it is already quite dated, even though it was published in March 2010. Any advice would be greatly appreciated.

    Read the article

  • Displaying page sections using opacity in CSS3 without navigating or scrolling down [closed]

    - by Senthil Kumaran
    Here is my app - http://www.shalgreetings.com/. I am trying to prevent the scroll bar from jumping down to an image section, so that the whole app - logo, header and other controls - stays visible at all times while people navigate through the different #sections. I am not sure where in the CSS I am making the mistake, as clicking on #sections still moves the page. This app's original inspiration code got this right. Can anyone point out where the problem seems to be in the above app?
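
    One common approach, sketched under the assumption that the sections are siblings with ids that the navigation links target: stack all sections in the same spot inside a positioned container, so following an anchor changes which section is visible via :target instead of moving the page.

        #sections { position: relative; height: 600px; overflow: hidden; }
        #sections .section {
          position: absolute;      /* every section occupies the same spot */
          top: 0; left: 0;
          opacity: 0;              /* hidden by default */
          transition: opacity 0.5s;
        }
        #sections .section:target {
          opacity: 1;              /* the section named in the URL hash fades in */
        }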

    Read the article

  • Google Analytics: how many visitors have visited n times?

    - by Riley
    I'm trying to gauge how many loyal users I have by counting the number of people who have visited the site 10 times. How can I answer this question with Google Analytics? "Visitor Loyalty" is a tempting answer, but the label for loyalty is "Visits that were the visitor's nth visit," and I want something more like "Visitors that visited n times." For example, we have 40 visits in the "51-100" visit range, but that could be a single user who visited 90 times, or two users who visited 70 times each. The whole chart makes a good logic puzzle (I wonder if there's a unique solution) but doesn't easily answer the question I have.
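
    A small sketch of why the chart has no unique solution: both visitor distributions below produce the same 40 visits in the 51-100 bucket.

        // Visits a single visitor contributes to a loyalty bucket: one visit
        // for each ordinal n between lo and hi that the visitor reached.
        function visitsInBucket(totalVisits, lo, hi) {
          return Math.max(0, Math.min(totalVisits, hi) - lo + 1);
        }

        console.log(visitsInBucket(90, 51, 100));      // 40 - one heavy visitor
        console.log(visitsInBucket(70, 51, 100) * 2);  // 40 - two lighter visitors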

    Read the article

  • For Google Rich Snippets: Is it 'harmful' to add the same `hreview-aggregate` microformat markup in several places?

    - by Oliver
    We are currently incorporating microformats markup for reviews into a client's web application, and we were wondering whether it can be harmful to provide the same information on more than one page, e.g. on a dynamic search page and on the specific product page. Does anybody have any experience with this? UPDATE: Actually, I was wondering: if Google shows a link to the page a review comes from, how would they decide which of the sources of the review to link to? Or don't they?
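
    For reference, a minimal hReview-aggregate block of the kind in question; the class names follow the published microformat, while the product name and figures are placeholders:

        <div class="hreview-aggregate">
          <span class="item"><span class="fn">Example Product</span></span>
          <span class="rating">
            <span class="average">4.2</span> out of <span class="best">5</span>
          </span>
          based on <span class="count">17</span> reviews
        </div>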

    Read the article

  • Which Bliki (Blog+Wiki) solution can you recommend?

    - by asmaier
    I'm searching for a good Bliki solution, meaning a combination of blog and wiki that I can install on my own web space. I would like to be able to write articles in wiki style, much as with MediaWiki. So I want to use a wiki markup language, have a revision history, comments, and internal links to other pages (maybe in other languages), and be able to collaboratively edit the articles. On the other hand, I would like a blog-like view of my articles, showing new articles (and changes to existing articles) in a time-ordered fashion. It would be nice if it were possible to search through the articles and also tag them, so one could generate a tag cloud. Another nice feature would be the ability to order articles by views, or even a voting system for the articles. A permission system to keep certain articles private, showing them only to people logged in to the platform, would also be good. Apart from these nice-to-have features, an absolute must-have for the Bliki platform I'm searching for is the ability to handle math equations (written in LaTeX syntax) and display them either as images, like MediaWiki, or even better using MathJax. At the moment I'm using a web service called Wikidot, which offers some of the mentioned features, but the free version shows too many advertisements, the blog feature is not mature, the design is quite ugly, and page loading is often slow. So I want to install a Bliki solution on my own web space. Can you recommend a solution for that?

    Read the article

  • Why does 301 redirect work for http but not for https?

    - by Tom G
    Through my domain registrar I have set up a domain, essayme.co.uk, to automatically forward to https://google.com. If I go to http://essayme.co.uk it works as expected and redirects me to https://google.com:

        $ curl -i http://essayme.co.uk
        HTTP/1.1 301 Moved Permanently
        Cache-Control: max-age=900
        Content-Type: text/html
        Location: https://google.com
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Sat, 07 Jun 2014 11:14:16 GMT
        Content-Length: 0
        Age: 0
        Connection: keep-alive

    However, if I go to https://essayme.co.uk it just freezes and times out:

        $ curl -i https://essayme.co.uk
        curl: (7) Failed connect to essayme.co.uk:443; Operation timed out

    What is happening in the second case? (And, if possible, how can I get the redirect to work for https?)

    Problem background/clarification: I don't have an SSL certificate for the essayme.co.uk domain above, but I do for my live domain (let's call it mywebsite.com), and I was seeing the exact same problem on that domain (hence why I'm trying to debug it). Unfortunately I can't experiment with the live domain (as it's live), and I would like to avoid having to buy a second certificate for essayme.co.uk just for debugging (unless absolutely necessary). The problem I was seeing: my live domain, mywebsite.com (not its real name), has a valid SSL certificate, and I had set up forwarding from the naked domain (mywebsite.com) to https://www.mywebsite.com. Visiting https://www.mywebsite.com displayed the webpage as expected, and visiting http://mywebsite.com redirected to https://www.mywebsite.com as expected, but visiting https://mywebsite.com would freeze and time out (as with essayme.co.uk above). I also tried forwarding the naked domain to http://www.otherwebsite.com as an experiment (i.e. forwarding to a site that does not use SSL), but the result was the same: http://mywebsite.com redirected as expected, while https://mywebsite.com froze and timed out again. So I set up essayme.co.uk as an experiment to try to understand why it doesn't work.
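
    A quick check that costs nothing, assuming openssl is available: a 301 is an HTTP response, so over https it can only be sent after a TLS handshake completes, and that handshake needs a certificate and a listener on port 443. If the command below hangs the same way curl does, the redirect never gets a chance to run.

        # Attempt a raw TLS handshake against the forwarding host
        openssl s_client -connect essayme.co.uk:443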

    Read the article

  • Text limit on analytics event code

    - by Theo G
    I am just about to add the event code to a button that downloads a PDF. Event code fields: _trackEvent(category, action, opt_label, opt_value, opt_noninteraction). Example of event code: onClick="_gaq.push(['_trackEvent', 'Videos', 'Play', 'Baby\'s First Birthday']);" I was just wondering if anyone knows whether there is a text limit on the opt_label (opt_value is numeric, so a string like the one below would go in the label). Do you think the following would be too long: 'Elmhurst School says IPC has made all the difference'?

    Read the article

  • Moving from one DNS provider to another

    - by Senthil Kumaran
    I had registered with a particular DNS provider, X, and I have been unhappy with their services, so when the time for renewal came I did not renew and let the domain expire. I am hoping that once it has expired with this provider, I will be able to register the same domain name with an alternative provider that I have tested and am satisfied with. What kind of precautions should I take? The domain name is not a critical one - it belongs to an NGO - but we would prefer to own it again without any change in the name. The expiry notice says: "Domains can be renewed between 90 days before and 14 days after the expiry date. If domains are not renewed they will be removed from the account and set for deletion." Should I wait until it gets deleted at their end so that I can register it with another provider?

    Read the article

  • Projects to learn web development

    - by David McDavidson
    I'm trying to get a job as a web developer, but the great majority of job offers require previous experience and a portfolio to prove you've got the required skills. Unfortunately I don't have any real experience or anything to show. The best way to learn is to try to tackle real-world problems, so I'd like to know: what would be some good projects for learning that will also look good in a portfolio?

    Read the article

  • Fetch as Google error 403

    - by Bojan Vidanovic
    Two weeks ago Google stopped being able to access my website: in Webmaster Tools I can't fetch any page - I always get error 403 - and the website has completely disappeared from the Google search results. I can't figure out why it suddenly can't see the site anymore. I've checked .htaccess and there's nothing that blocks Google's crawlers, and robots.txt is fine too. The site is accessible normally for users. Has anyone had this problem? Please help!

    Read the article

  • Breadcrumb dilemma - SEO impact

    - by HaCos
    I am updating the breadcrumb module of an e-commerce website, implementing microdata (schema.org). My dilemma is about showing the last page: (a) should the product name appear in the breadcrumb or not? (b) Should it be an active link to the current page or not? E.g.: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fwww.urbanspoon.com%2Fr%2F23%2F1600592%2Frestaurant%2FPoint-Breeze%2FAlma-Pan-Latin-Kitchen-Pittsburgh - the Urbanspoon example doesn't link the last page, but is this right?
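
    A sketch of the markup under discussion using schema.org microdata, with the final crumb shown as plain text rather than a self-referencing link (names and URLs are placeholders based on the example):

        <ol itemscope itemtype="http://schema.org/BreadcrumbList">
          <li itemprop="itemListElement" itemscope itemtype="http://schema.org/ListItem">
            <a itemprop="item" href="/point-breeze"><span itemprop="name">Point Breeze</span></a>
            <meta itemprop="position" content="1" />
          </li>
          <li itemprop="itemListElement" itemscope itemtype="http://schema.org/ListItem">
            <!-- current page: name shown, but not linked -->
            <span itemprop="name">Alma Pan-Latin Kitchen</span>
            <meta itemprop="position" content="2" />
          </li>
        </ol>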

    Read the article

  • Download/submission directories are dead? Are they good for SEO?

    - by fborozan
    We have just released a new major version of our software product. In the past, if you wanted visibility, you would create a standardized PAD file and submit it to hundreds of download directories, or you used a web service that would do that for you. These directories would then serve as the first incoming links to your web site. How about today? I think download directories are pretty much dead. Do you think this is still a good SEO approach today, or are these software download directories useless?

    Read the article

  • Social media measurement tools [on hold]

    - by user29187
    I work for a non-profit and we are currently trying to develop a system to measure and evaluate our social media platforms (Facebook and Twitter). We would like to track a variety of social metrics, including but not limited to:

    Twitter: # of followers, # of mentions, # of retweets
    Facebook: # of likes, # of people talking about this, # of page views

    We are currently using a paid platform for this. I wondered if there is a way to configure Google Analytics to do this, or if there are any other free or clever ways to track social engagement with our brand?
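
    If you are on the classic ga.js tracker (as other snippets on this page suggest), Google Analytics has a built-in social interaction call that covers part of this; a sketch, assuming _gaq is already loaded and wired to your share buttons' callbacks:

        // _trackSocial(network, socialAction, opt_target)
        _gaq.push(['_trackSocial', 'facebook', 'like', 'http://www.example.com/page']);
        _gaq.push(['_trackSocial', 'twitter', 'tweet', 'http://www.example.com/page']);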

    Read the article

  • Reason why a brand new website is ranking for a top keyword? [on hold]

    - by Prasad EBK
    We've noticed that one of our (new) competitors' websites is ranking #5 for a top keyword with high competition. The website is barely two months old. When I checked, not much SEO had been done on the website other than basic title/description tags, and there are no backlinks. The website pushed ours down and took its place for the keyword. The only reason that came to my mind is the latest Penguin update. Or is the ranking just temporary - will it eventually be pushed back? It has been holding on for at least one month, and it's irritating. Thanks in advance.

    Read the article

  • Link tags in iframe widget

    - by john Smith
    I have a rating community site, and I'm offering small iframe widgets with the average rating and some other info. Does it make sense (for visibility, SEO) to add link tags to the head, like:

        <link rel="alternate" type="application/rss+xml" title="RSS 2.0" href="rssfeed" />
        <link rel="index" title="main-profile" href="main-profile">

    to get a logical association of the widget with the related pages? How would you do this?

    Read the article

  • How to deal with malicious domain redirections?

    - by user359650
    It is possible for anybody to buy a domain name containing negative terms and point it at someone's website in order to damage their reputation. For instance, someone could buy the domain child-pornography.com and point it to the address 64.34.119.12, which is the address behind stackoverflow.com, and people navigating to the domain in question would end up viewing content from Stack Exchange, which would be detrimental to Stack Exchange's image. To illustrate this, I added the entry 64.34.119.12 child-pornography.com to my /etc/hosts file and tested: I was served Stack Overflow's helpful 404 page under the offending domain. I personally found this user experience terrible, as someone could think that Stack Exchange is in favor of child pornography and awaiting support from the community to create a Q&A site about it. I tested other websites and experienced other behaviors, which I would categorize as follows:

    1 - Useful 404 page (happens with stackoverflow.com): For me the worst way of handling this, as the image of the targeted website is directly associated with the offending domain. The more useful the 404 page, the stronger the impression that the targeted website would be willing to help with child pornography.

    2 - Redirection (happens with microsoft.com): When accessing child-pornography.com you get redirected to www.microsoft.com. This isn't as bad as the above, as the offending domain name never appears alongside the targeted website's content, but it is still bad in my opinion, as it gives the impression the targeted website bought the offending domain and redirected it to their website to get more traffic.

    3 - Server error (happens with lemonde.fr): You get an error from the web server whose page doesn't contain any content that can be associated with the targeted website (e.g. a default Apache 404 page, or a completely blank page). I believe this is good, as the identity of the targeted website isn't revealed.

    Those are the behaviors I experienced, but I also thought about a fourth way of dealing with this, described below.

    4 - Disclaimer page (I haven't found any website implementing this technique): Display a message such as: "You ended up here because someone bought and linked the child-pornography.com domain to our website. We do not own this domain and do not associate ourselves with it. This request has been logged by our servers and we will raise this issue with the competent authorities to have the domain taken down. If you want to access our website, please click here." The good thing about this method is that it can be implemented at the application layer (useful if you don't have control over the web server, as with some hosting solutions), allows you to protect yourself from any liability, and offers the visitor a redirect to your own website.

    Which of the above options would you implement to deal with malicious domain linking (IMO only options 3 and 4 are worth considering)?
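
    Option 4 can indeed live at the application layer as a plain Host-header check; a minimal Node.js sketch, with the allowed host names as assumptions:

        var http = require('http');

        // Hosts we actually own; anything else reached us via a foreign domain.
        var allowedHosts = ['example.com', 'www.example.com'];

        http.createServer(function (req, res) {
          var host = (req.headers.host || '').split(':')[0];
          if (allowedHosts.indexOf(host) === -1) {
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            res.end('You reached this site through the domain "' + host +
                    '", which we do not own or endorse. This request has been ' +
                    'logged. Our real address is http://www.example.com/');
            return;
          }
          // ... normal request handling for our own domains ...
          res.end('Welcome!');
        }).listen(80);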

    Read the article

  • Loading main JavaScript on every page? Or breaking it up across relevant pages?

    - by Kyle
    I have a 700kb (decompressed) JS file which is loaded on every page. I used to have 12 JavaScript files on each page, but to reduce HTTP requests I combined them all into one file. This file is ~130kb gzipped and is served with gzip; however, on the client it is still unpacked and loaded on every page. Is this a performance issue? I've profiled the JavaScript with the Firebug profiler but did not see any issues. The problem/illusion I am facing is that there are jQuery libraries compressed into that file that are sometimes not used on the current page. For example, jQuery DataTables is 200kb compressed and is only loaded on two of my website's pages. Another is jqPlot, which is another 200kb. I now have 400kb of excess code that isn't executed on 80% of the pages. Should I leave everything in one file? Should I take out the jQuery libraries and load only the relevant JS on the current page?
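
    One middle ground is to keep a small common bundle and lazy-load the heavyweight libraries only on the pages that use them; a minimal sketch with hypothetical file names and page markers:

        // Inject a script tag on demand and run a callback once it has loaded.
        function loadScript(src, onLoad) {
          var s = document.createElement('script');
          s.src = src;
          s.onload = onLoad;
          document.getElementsByTagName('head')[0].appendChild(s);
        }

        // Only pages containing a table ever pay the ~200kb DataTables cost.
        if (document.getElementById('report-table')) {   // hypothetical marker
          loadScript('/js/jquery.dataTables.min.js', function () {
            $('#report-table').dataTable();
          });
        }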

    Read the article

  • How to force users to use a subdomain?

    - by David Stockinger
    I am hosting a webshop with OpenCart, and its current URL is e.g. http://mydomain.com/shop/. I have created two subdomains (http://pg.mydomain.com/ and http://shop.mydomain.com/), and both are already working as they should. However, can I restrict direct access to mydomain.com/shop/ while leaving all the files (index.php, etc.) there? Since both subdomains point to http://mydomain.com/shop/, I thought this would restrict all access. In the end, I would like my two shops to be accessible through http://pg.mydomain.com/ and http://shop.mydomain.com/, but not through http://mydomain.com/shop/, while leaving all the files in http://mydomain.com/shop/.
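
    One way to do this, assuming Apache with mod_rewrite in the site root's .htaccess, is to leave the files in /shop/ but 301 any request that arrives via the main domain over to the subdomain (hosts below are placeholders):

        RewriteEngine On
        # Requests that arrive on the main domain for /shop/...
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        # ...are sent to the subdomain instead of being served directly.
        RewriteRule ^shop/(.*)$ http://shop.mydomain.com/$1 [R=301,L]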

    Read the article

  • Canonical links for huge websites

    - by Florin
    Let's say I have 5 products that are identical except for the product code, the product color specification, and the product image. The title, meta tags, and description are identical (by the way, the color is chosen in a select form). Based on many factors, I made 4 of the products link canonically to the one that is the master. If the master becomes inactive or goes out of stock, one of the other 4 products will become the new master and the rest will become canonical to it. The question is: when a product switches from being canonical to being the master, will the site suffer a penalty from Google, or will it work just fine? What will Google think about this strategy?
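
    For reference, the mechanism in question is a single head element on each of the four duplicate product pages pointing at whichever product is currently the master (placeholder URL):

        <link rel="canonical" href="http://www.example.com/product/master-colour" />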

    Read the article
