Search Results

Search found 9724 results on 389 pages for 'pro zeck'.


  • MediaWiki: how to make DISPLAYTITLE be used in categories listings

    - by Konstantin Boyandin
    The problem: a MediaWiki-driven site uses subpages to build its page hierarchy. When I add a page like Page1/Page2/Subpage, exactly that string appears in listings and looks clumsy. I can't simply use the short subpage title (Subpage in this example), since it can appear in different contexts and could confuse users. I can use the DISPLAYTITLE magic word, with appropriate values of $wgRestrictDisplayTitle and $wgAllowDisplayTitle, to reassign the page title and have it shown on the page itself. However, when I look at categories listing this page, I still see "Page1/Page2/Subpage" instead of the assigned title. Is there a simple way (via a hack or a relevant extension) to make the new title appear in every listing as well?
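
    For reference, a minimal sketch of the DISPLAYTITLE setup described above (values and page names are illustrative, and this on its own does not change the category listings, which is the open question):

      # LocalSettings.php -- allow DISPLAYTITLE overrides
      $wgAllowDisplayTitle = true;
      # if I recall the semantics correctly, false permits titles that differ from the page name
      $wgRestrictDisplayTitle = false;

      On the page Page1/Page2/Subpage itself:
      {{DISPLAYTITLE:Subpage}}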

  • Why has my blog's Google Authorship image disappeared from the Google SERPs?

    - by Sathiya Kumar
    I have a Blogspot blog and used Google Authorship markup so that my photo appears in the Google SERP for my keywords. My image showed for the last two months, but when checking the SERP for my blog I found that my authorship markup is no longer working: my image, name and Google+ follower count are not appearing next to my Blogspot URL. I didn't make any changes to my Google+ profile or to the blog's header tag where I put the authorship code. I tried to find the reason but couldn't find a useful answer. Can anyone answer this question? Please let me know if you have experienced something like this.
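
    For context, the authorship markup in question usually looked something like this (the profile URL is a placeholder):

      <!-- in the blog template's <head>: link the content to a Google+ profile -->
      <link rel="author" href="https://plus.google.com/PROFILE_ID"/>
      <!-- or as a visible byline link -->
      <a href="https://plus.google.com/PROFILE_ID?rel=author">Google+ profile</a>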

  • Search ranking for important keywords has gone down drastically [duplicate]

    - by Vaivhav
    This question already has an answer here: How to diagnose a search engine ranking drop? (5 answers)
    Firstly, we are a small entrepreneurial team of three and I am more of an amateur webmaster for the company's website, as we cannot really afford a technical person or department right now. A few weeks ago, our website traffic and rankings for most keywords dropped overnight. I did a lot of reading afterwards and learned about Penguin 2.1, which people said is the reason for the drop. Nothing like this had ever happened before. I have now gone through the entire Google Webmaster help section. It says that if a manual penalty is taken against us, we would see a message on the Manual Actions page; so far, we haven't received any notice from Google for web spam. Some SEO people I contacted said they found spam links in our backlink profile. I do believe I mistakenly purchased a cheap link/SEO scheme when I was still very new to SEO. That was more than a year ago, and since then we have been legitimate. Moreover, how do I find out which links are spam and which are not? Our content is all original, fresh, and the best you will find in our niche. We also have a blog, but on a different domain (wordpress.com), from which we send anchored links to our business website. Is this a good thing to do? Now, how should we proceed to recover our traffic and rankings? I tried searching in Webmaster Tools for a way to contact Google and ask why the traffic decreased suddenly, but I couldn't find a contact form or anything similar. Can someone please go through our website and help clarify the reason for the drop, along with a solution? I would really appreciate it, as I can't figure this out and it's taking a lot of time. Vaivhav
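
    If spammy backlinks do turn out to be part of the problem, one commonly suggested step (not a guaranteed fix) is to submit a disavow file through Google's Disavow Links tool; the entries below are purely illustrative:

      # disavow.txt -- uploaded via Google's Disavow Links tool
      # drop every link from an obviously spammy domain
      domain:cheap-links-example.com
      # drop a single spammy page
      http://spam-directory-example.net/links/page1.html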

  • Advertising Campaign Bidding Network

    - by David
    I hope this is the correct Stack Exchange site to post on; it seems the most relevant. I'm looking for more ways to monetize my site. I'm already using the usual Google AdSense, TradeDoubler, ad networks, etc. What I want to know is whether anyone knows of a service or network out there (similar to TradeDoubler) which allows you to accept, or bid on, campaigns (primarily CPM based). For example: I log in, I get a list of available advertising campaigns and the prices they are willing to pay, or I can put in a bid with an impression count and CPM rate. I'm looking for something ideally Europe or UK based, but I'm open to others. Thanks!

  • Google Fetch issue

    - by Karen
    When I do a Google fetch on any of my web pages, the results are all the same (below). I'm not a programmer, but I'm pretty sure this is not correct. Out of all the fetches I have done, only one was different: its content length was about 6x the one below, and it showed meta tags etc. Maybe this explains other issues I've been having with the site, such as a drop in indexed pages. A meta tag analyzer says I have no title tag, meta tags or description, even though I set them on all pages. I had an SEO team working on the site and they were stumped as to why pages were not getting indexed, so they figure it was some type of code error. Are they right?

      HTTP/1.1 200 OK
      Cache-Control: private
      Content-Type: text/html; charset=utf-8
      Content-Encoding: gzip
      Vary: Accept-Encoding
      Server: Microsoft-IIS/7.5
      X-AspNet-Version: 4.0.30319
      X-Powered-By: ASP.NET
      Date: Thu, 11 Oct 2012 11:45:41 GMT
      Content-Length: 1054

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <title></title>
        <script type="text/javascript">
          function getCookie(cookieName) {
            if (document.cookie.length > 0) {
              cookieStart = document.cookie.indexOf(cookieName + "=");
              if (cookieStart != -1) {
                cookieStart = cookieStart + cookieName.length + 1;
                cookieEnd = document.cookie.indexOf(";", cookieStart);
                if (cookieEnd == -1) cookieEnd = document.cookie.length;
                return unescape(document.cookie.substring(cookieStart, cookieEnd));
              }
            }
            return "";
          }
          function setTimezone() {
            var rightNow = new Date();
            var jan1 = new Date(rightNow.getFullYear(), 0, 1, 0, 0, 0, 0); // jan 1st
            var june1 = new Date(rightNow.getFullYear(), 6, 1, 0, 0, 0, 0); // june 1st
            var temp = jan1.toGMTString();
            var jan2 = new Date(temp.substring(0, temp.lastIndexOf(" ") - 1));
            temp = june1.toGMTString();
            var june2 = new Date(temp.substring(0, temp.lastIndexOf(" ") - 1));
            var std_time_offset = (jan1 - jan2) / (1000 * 60 * 60);
            var daylight_time_offset = (june1 - june2) / (1000 * 60 * 60);
            var dst;
            if (std_time_offset == daylight_time_offset) {
              dst = "0"; // daylight savings time is NOT observed
            } else {
              // positive is southern, negative is northern hemisphere
              var hemisphere = std_time_offset - daylight_time_offset;
              if (hemisphere >= 0) std_time_offset = daylight_time_offset;
              dst = "1"; // daylight savings time is observed
            }
            var exdate = new Date();
            var expiredays = 1;
            exdate.setDate(exdate.getDate() + expiredays);
            document.cookie = "TimeZoneOffset=" + std_time_offset + ";";
            document.cookie = "Dst=" + dst + ";expires=" + exdate.toUTCString();
          }
          function checkCookie() {
            var timeOffset = getCookie("TimeZoneOffset");
            var dst = getCookie("Dst");
            if (!timeOffset || !dst) {
              setTimezone();
              window.location.reload();
            }
          }
        </script>
      </head>
      <body onload="checkCookie()">
      </body>
      </html>

  • Which is the best image hosting site for my website's images? [closed]

    - by rahul dagli
    Possible Duplicate: How to find web hosting that meets my requirements?
    I currently have a website and blog on a limited web hosting plan. When I upload images to my hosting server they consume a lot of bandwidth and space, so I was thinking of hosting the images on some other image hosting site and hotlinking them from my site. I found a few sites like ImageShack, Photobucket, TinyPic and Imgur; however, all of them have certain restrictions. The features I am looking for are as follows:
    1. At least 10 GB of space
    2. At least 500 GB of bandwidth (because I have very high traffic)
    3. Very high speed even under heavy load, e.g. 1000 visitors per hour
    4. Ultra-reliable servers (99.9% uptime)
    5. Privacy controls
    6. Never deletes images, even if inactive
    7. Lets me create and manage albums
    8. A company that will stay in business for at least the next 10 years
    9. Free of cost
    10. Allows hotlinking/direct linking of images

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question: Will I get penalised for "sneaky JavaScript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)?
    Goal: I want to implement Hijax to make AJAX content accessible to non-JavaScript users and search engine crawlers.
    Background: I'm working on a static file server (GitHub Pages). No server-side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY: I don't want to repeat the common OUTER template (header, navigation menu, footer, etc.) in all my files; it lives in the main index.html.
    The Hijax setup: index.html contains all the OUTER html/css/js, i.e. the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" html, and a navigation menu with a Hijax link to an "about" page. With JavaScript disabled (e.g. a crawler) the link leads to /about.html. With JavaScript enabled (e.g. most people) the link updates the URL hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");.
    AJAX content: about.html does not contain anything extra to try and cloak content for crawlers. It contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data, e.g. <html><head><title>About</title>...</head><body></body></html>, but it has NO OUTER HTML template (header, navigation menu, footer, etc). Its <body> contains a <div id="inner-container"> which holds the content that is injected into index.html, and a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page's "inner content", with a link to index.html to get the full page layout with menu.
    The (sneaky?) redirect: Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page there is no OUTER html template, so I need a JavaScript redirect to get the person over to /#about (deep-linking to the "about" state in index.html). I'm thinking of doing a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with its full OUTER template, but the core "page" content is practically identical.
    Known issue with inbound links (e.g. sharing / bookmarking): It seems that if a user shares the URL /#about on their blog, Google ignores everything after the # when allocating inbound links to my site and allocates the value to the / page; see http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try to minimise this issue by offering "share" buttons on the page with the appropriate URLs, i.e. /about.html.
    Duplicate: Sorry, I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google and then realised it probably belongs more on this Stack Exchange site. Not sure if I should delete the Stack Overflow question or just leave it on both sites? Please leave a comment.
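
    A rough sketch of the client-side half of such a setup (illustrative only; element IDs and file names follow the description above, and it assumes jQuery is loaded):

      // on index.html: turn hash changes into content loads
      function loadSection(hash) {
        // "#about" -> pull the inner container of about.html into the template
        var page = hash.replace(/^#\/?/, "") || "home";
        $("#content").load(page + ".html #inner-container");
      }
      $(window).on("hashchange", function () { loadSection(window.location.hash); });

      // on about.html (the standalone fallback page): if JavaScript is available,
      // send the visitor to the enhanced version inside index.html
      // window.location.replace("/#about");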

  • How can I assign id to subject input box in simple machines forum?

    - by Mahesh Bhat
    I have installed Simple Machines Forum. For my requirements I need to change the subject input box. Presently the subject input box has the following code:
      <input type="text" class="input_text" maxlength="80" size="80" tabindex="1" name="subject">
    I want it to be changed to:
      <input type="text" id="subject" class="input_text" maxlength="80" size="80" tabindex="1" name="subject">
    How can I do this? I browsed through many files in Simple Machines Forum and couldn't find it. Can anyone familiar with Simple Machines Forum please help me?
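
    If editing the theme template turns out to be awkward, a purely client-side workaround is to add the id after the page loads; a minimal sketch (it assumes the subject field is the only input named "subject" on the page):

      // give the subject input an id so it can be targeted by CSS/JS
      document.addEventListener("DOMContentLoaded", function () {
        var subject = document.querySelector('input[name="subject"]');
        if (subject && !subject.id) {
          subject.id = "subject";
        }
      });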

  • What are typical examples of old-school website design?

    - by Pierre 303
    I want to build a website for a retro thing that was popular in the mid 90s (the beginning of the commercial internet), so I want to use old designs that were very popular at that time. The first things that come to my mind are those "under construction" animated GIFs people put everywhere, but also those awful repeating backgrounds. So yes, I want my website to look exactly like the mid nineties ;) (Please suggest practical and usable features; I guess a Java applet menu would not work today, nor would saying at the bottom that this website is optimized for Netscape 3.) EDIT: for those who want to see the result: Retrology
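
    As a tiny illustration of the kind of markup that era produced (image file names are placeholders):

      <body background="tiled_stars.gif" bgcolor="#000000" text="#00FF00" link="#FF00FF">
        <center>
          <marquee>Welcome to my home page!!!</marquee>
          <img src="under_construction.gif" alt="Under construction">
          <font face="Comic Sans MS" size="5">Best viewed in Netscape 3 at 800x600</font>
        </center>
      </body>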

  • Shared hosting bandwidth limits

    - by mike
    I have a shared hosting account with a 20 GB monthly bandwidth limit. I have exceeded my monthly limit, and according to my host my counter is never reset; they say they use a rolling 30-day counter. So, for example: I make payment on the 1st of each month, and say I use 20 GB in the last week of the month. My bandwidth counter is not reset on the 1st of the new month, and my bandwidth only becomes available again in the last week of the new month. Is this common practice among shared hosting companies? It sounds a bit shady to me. Surely my counter should be reset on the 1st of every month when I make payment, and 20 GB of bandwidth should be available from the day payment is made?

  • Are shorter URLS better for SEO?

    - by articlestack
    Many people shorten their URLs. But as I understand it, this adds the overhead of an extra redirect, others cannot guess the target article from the URL, and it should be less friendly for "inurl:..." type searches. Should I shorten the URLs of my sites? Is there any advantage to short URLs besides the fact that they take fewer characters in anchor tags on the page (good for page load)?

  • Webmaster Tools word count

    - by Henrik Erlandsson
    Is there a way to verify that Googlebot finds the headings and the content, for example by word count? I'm asking because I tried a program called Screaming Frog, which fails to even fetch the first h1 on a validated page for about 1/3 of all the pages(!), which made me uneasy. Even though the site looks hunky-dory in Webmaster Tools, I'd like to know what a Googlebot-like content crawler finds on my pages and in what order. Any tips on such tools are appreciated. This is not about keyword count.
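
    For a quick sanity check you could fetch a page yourself and count what a plain HTTP client sees; a rough sketch (Node.js 18+, everything here is illustrative and much cruder than a real crawler):

      // fetch a URL, strip scripts/styles/tags, then report h1 count and a rough word count
      const url = process.argv[2];
      (async () => {
        const res = await fetch(url, {
          headers: { "User-Agent": "Mozilla/5.0 (compatible; content-check)" },
        });
        const html = await res.text();
        const h1s = html.match(/<h1[\s>]/gi) || [];
        const text = html
          .replace(/<script[\s\S]*?<\/script>/gi, " ")
          .replace(/<style[\s\S]*?<\/style>/gi, " ")
          .replace(/<[^>]+>/g, " ");
        const words = text.split(/\s+/).filter(Boolean);
        console.log("h1 tags found:", h1s.length);
        console.log("approximate word count:", words.length);
      })();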

  • Why can the number of 'indexed' images go down?

    - by Roman Matveev
    I have a site with several thousand images. All of those images are included in the sitemap submitted to Google Webmaster Tools. The number of 'submitted' images is OK, but the number of 'indexed' images is significantly lower than the number of 'submitted' ones, and it is going DOWN! I'd understand if not all of my images got indexed (although that is also unclear and very frustrating for me), but I cannot understand how the indexing can go in the negative direction. All the images stay in their places, and the pages containing them stay unchanged; at least they are intended to. Any thoughts?

  • Webmaster Tools hentry errors and authorless pages

    - by Ben Racicot
    Within Google Webmaster Tools, under Search Appearance > Structured Data, I'm getting a series of errors: Error: Missing required hCard "author". Most of my 44 errors list: Missing: author, Missing: entry-title, Missing: updated. There seems to be no CLEAR explanation of these errors. Either these classes exist without their required nested classes, or they are expected to exist because of something else, possibly an itemscope or itemtype=''. The question: how do you specify with rich snippets that the page is about a location and has no human author?
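
    For comparison, a complete hentry block that would usually pass that check looks roughly like this (class names come from the microformat; the content is placeholder text, and using the site name as the "author" on authorless pages is just one common workaround, not an official fix):

      <div class="hentry">
        <h1 class="entry-title">Location name</h1>
        <span class="updated" title="2013-06-01T12:00:00Z">June 1, 2013</span>
        <span class="vcard author"><span class="fn">Example Site</span></span>
        <div class="entry-content">Details about the location...</div>
      </div>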

  • What is the maximum time for a user to return to Google for the visit to be flagged as a bounce in GA?

    - by Anonymous
    I know that Google measures bounce rates by how fast a user returns to the results page after clicking through to a website. Roughly what is the maximum duration of the visit for it still to be considered a bounce when the user returns, i.e. <5 seconds, <30 seconds? I'm mainly interested because a lot of users clicking through my PPC adverts (AdWords) appear to be bouncing, despite my ads having a high Quality Score and the pages being entirely related to the ad copy and tied as closely as possible to what I think users may be searching for with the key phrases I've selected, so the high bounce rate (100% on some keywords) seems a bit strange. If a bounce isn't determined by time, but simply by whether a user returns to the SERP after visiting my site, after any amount of time, that would make more sense; but the average visit duration for my keywords with a 100% bounce rate in GA is 00:00:00, which suggests users immediately returned to the SERPs, which again is odd. Is my GA data being skewed by https or anything like that? Scratching my head here.

  • When will my old page stop appearing on Google?

    - by Bane
    I recently bought a new address for my Blogger blog, moving from yannbane.blogspot.com to www.yannbane.com. However, www.yannbane.com addresses do not appear when they are searched for! Is this normal? How much time will it take for Google to update its index? yannbane.blogspot.com 301s to www.yannbane.com. Both are added to my Webmaster Tools account, but it shows no data for www.yannbane.com (strangely). And finally, is there something I could do to speed up the process?

  • Can many different log-in pages result in duplicate-content or low-quality SEO penalties?

    - by Noam
    I have internal pages that rely on an external API and that I would like to build on user request. Two options I thought about:
    1. Make lots of 'thin' pages that specify that if you want content about X, you need to log in, and then the page will be built. Pros: the user understands what they'll get by logging in. Cons: the SEO implications of such a solution, due to the mass of 'low quality' and 'cross-site duplicate' content.
    2. Make them all redirect to ONE generic log-in page. Pros: no duplicate low-quality content. Cons: lots of internal links to the same log-in page.
    Which would you recommend?
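
    If the thin log-in pages route is chosen, one common mitigation (hedged, not a guarantee against either issue) is to keep those pages out of the index while still letting crawlers follow their links:

      <!-- on each thin "log in to see X" page -->
      <meta name="robots" content="noindex, follow">
      <!-- and/or point all of them at the canonical log-in URL -->
      <link rel="canonical" href="http://www.example.com/login">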

  • How can I store post rankings that change with time?

    - by Daniel Fein
    I'm trying to learn how to code a website algorithm like Reddit.com, where there are thousands of posts that need to be ranked. Their ranking algorithm works like this (you don't have to read it, this is more of a general question): http://amix.dk/blog/post/19588 Right now I have posts stored in a database; I record their dates and they each have upvote and downvote fields. What I want to figure out is: how do you store their rankings? When specific posts have ranking values that change with time, how can you store them? If they aren't stored, do you rank every post every time a user loads the page? When would you store the rankings? Do you run a cron job to automatically give every post a new value every x minutes? Do you store their value, which is temporary, perhaps until that post reaches its minimum score and is forgotten?
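
    For what it's worth, the 'hot' formula described in that article only depends on the vote counts and the submission time, so the score can live in an ordinary column and be recomputed only when a vote arrives, with no cron job; a sketch of that idea (constants follow the article, names are mine):

      // hot score as in the linked article: log-scaled votes plus an age bonus fixed at post time
      function hotScore(upvotes, downvotes, postedAtSeconds) {
        var s = upvotes - downvotes;
        var order = Math.log(Math.max(Math.abs(s), 1)) / Math.LN10; // log base 10
        var sign = s > 0 ? 1 : (s < 0 ? -1 : 0);
        var seconds = postedAtSeconds - 1134028003;                 // seconds since the article's epoch
        return Math.round((sign * order + seconds / 45000) * 1e7) / 1e7;
      }

      // on every up/downvote: UPDATE posts SET score = <hotScore(...)> WHERE id = ...;
      // listing pages then just ORDER BY score DESC, with no per-request recalculation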

  • Will these types of 403 errors affect my ranking?

    - by Gkhan14
    Let's say I have a directory where all of the content returns a 403 Forbidden error; however, a few of the images in the subdirectories of the main directory do NOT return a 403. Will this affect my ranking? For example:
    test.com/system/ (HAS 403 ERROR FOR ALL FILES)
    test.com/system/pie/ (HAS 403 ERROR FOR ALL FILES)
    test.com/system/pie/image.png (DOES NOT HAVE A 403 ERROR, AND THIS IMAGE IS EMBEDDED ON A PAGE OF test.com, e.g. test.com/pie/)
    This sort of pattern repeats for about 10 different images. The directory is like a secret "system"; however, all of the content on the main site (test.com) is still accessible to everyone.

  • Track anchor links with Google Analytics

    - by Fredrik
    I have searched for how to track anchor links in Analytics, but couldn't get it working. I have this code in the header:
      <script>
      (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
      m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
      })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
      ga('_setAllowAnchor', true);
      ga('create', 'UA-*******-1', '****.com');
      ga('send', 'pageview');
      </script>
    My links look like this:
      <a href='#/contact'><span>Contact</span></a>
    I also tried links like this:
      <a href='#/contact' onClick="_gaq.push(['_trackPageview', location.pathname+location.search+location.hash]);"><span>Contact</span></a>
    Are there any tips on what I can do?
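
    One thing that might be worth trying with the analytics.js snippet above (a hedged sketch; note that _gaq and _setAllowAnchor belong to the older ga.js library, so they may simply be ignored here) is to send a virtual pageview whenever the hash changes:

      // with analytics.js loaded as above, report each hash-based "page" as a pageview
      window.addEventListener('hashchange', function () {
        ga('send', 'pageview', location.pathname + location.search + location.hash);
      });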

  • Amazon Affiliate search using a movie title

    - by Matt Walker
    I am currently working on a movie trailer site. I have over 300 movies and I do not want to add an Amazon affiliate link to each one individually. Does Amazon offer any sort of API that will allow me to use a movie title to search for a DVD on Amazon? For example, for the movie Skyfall the affiliate link would be amazon.com/search/dvd/skyfall/affiliateid (I just made that link up, as I don't know how their system works; I just want it to search on the movie title). Thanks in advance for any help you can give me!
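
    Amazon's Product Advertising API has had an ItemSearch operation that can do roughly this; a hedged sketch, from memory, of what such a request looks like (shown unsigned; a real request also needs a Timestamp and an HMAC Signature, and the placeholders are yours to fill in):

      http://webservices.amazon.com/onca/xml
        ?Service=AWSECommerceService
        &Operation=ItemSearch
        &SearchIndex=DVD
        &Title=Skyfall
        &AssociateTag=YOUR-AFFILIATE-TAG
        &AWSAccessKeyId=YOUR-ACCESS-KEY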

  • Are bad backlinks causing thousands of 404 and 410 errors in webmaster tools?

    - by Natália
    Our Webmaster Tools account is showing 250,000 errors related to weird links from other sites. These URLs mostly come from non-existent sites or are being generated directly against our website. Here are some examples:
    oursite.com/&q=videos+caseros+sexo+pornos+gratis&sa=X&ei=R638T8eTO8WphAfF2vG8Bg&ved=0CCAQFjAC%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F5%2Fpage%2F4/page/3
    Our site is a popular Spanish adult site, yet we don't use the keywords mentioned in this URL; apparently this link comes from our own site. Some more examples:
    oursite.com/&q=losmejoresvideosporno&sa=X&ei=U__8T-BnqK7RBdjmhYsH&ved=0CBUQFjAA%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3/page/4
    Once again: not our queries, not our URLs.
    oursite/tag/tetonas
    We think it might be another site with a policy of extremely bad SEO based on other sites' branding and keywords: thirdsite/buscador/tetonas-oursite
    The questions: if other sites are generating these URLs, how can we prevent this? Why is the tag page being generated if no link was added on the other site? What should we do with these errors: 301? 410 Gone? I have read all the similar Q&As here but none of them seems to solve our problem. It is unlikely to be a bad ad (I inspected them all). Maybe some old content which Google suddenly decided to recrawl? Maybe a third party's bad SEO policy? Maybe all of the above?
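
    If you decide these junk URLs should answer with 410 Gone rather than 404, a minimal mod_rewrite sketch for an .htaccess file might look like this (the pattern is illustrative and matches only paths that start with &q=, as in the examples above):

      RewriteEngine On
      # answer the scraper-generated "&q=..." paths with 410 Gone
      RewriteRule ^&q= - [G,L]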

  • Redirect pages to fix crawl errors

    - by sarah
    Google is giving me crawl errors for pages that I have removed, like www.mysite.com/mypage.html. I want to redirect these pages to the new page www.mysite.com/mysite/mypage. I tried to do that using .htaccess, but instead of fixing the problem the crawl errors increased and a new one appeared: www.mysite.com/www.mysite.com. This is my .htaccess file:
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /sitename/
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /sitename/index.php [L]
      </IfModule>
      # END WordPress
    Should I add the following after the rewrite rule, or should I do something else?
      RewriteRule ^pagename\.html$ http://www.sitename.com/pagename [R=301]
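
    A hedged sketch of how that file could be arranged (page names are placeholders and the WordPress block stays as-is): the 301 rule generally needs to come before the WordPress catch-all, with an absolute target URL and an [L] flag so later rules don't rewrite the request again:

      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /sitename/
      # old removed page -> new location, before WordPress gets to rewrite it
      RewriteRule ^mypage\.html$ http://www.mysite.com/mysite/mypage [R=301,L]
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /sitename/index.php [L]
      </IfModule>
      # END WordPress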

  • rel="nofollow" SEO impact

    - by Torez
    I saw a technique used where there was a block with three parts: 1. an image (wrapped in an anchor tag), 2. a heading (anchor tag with heading text), 3. a paragraph (regular p tag with synopsis content), e.g.:
      <li class="block">
        <a rel="nofollow" class="thumb" href="#"><img src="images/placeholder_service_thumbnail.jpg" alt="" /></a>
        <a class="h3" href="#">Good SEO Heading</a>
        <p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu...</p>
      </li>
    The anchor wrapping the image carries rel="nofollow". So the idea is that the user can still click the image and go to the details page, but the image link does not count for ranking; only the heading link counts for that specific page. Q: Is this the correct approach? Does this even do anything? What is the best practice?

  • Website attack: form submissions triggering alert emails

    - by IberoMedia
    We are experiencing website attacks that trigger the submission of a form and send alert emails. The normal form submission process is to fill in a couple of text fields; when the user is redirected, the next page processes $_POST, and if $_POST exists, the email to the intended form recipients is triggered. What is happening right now is that we are receiving the form submission emails three at a time with the same information. The information within each batch is identical, but the batches differ from one another. The form has no CAPTCHA, and if possible we would like to keep it that way. The website had worked fine and had no spam problems until today. We have monitoring software for the website, but whoever is submitting this form over and over is not being recorded by the tracking software. WHY IS THIS? IS THE PERSON ACTUALLY VISITING THE WEBSITE? The only suspicious visit tracked was on November 10th, and that record ALSO shows three forms submitted (this is how I identified a possible first visit by the attacker). Then no incidents until today. WHAT IS THE GOAL of the spam attack? Is the attacker expecting us to respond to the bogus emails? What can they achieve with repeated submissions of the form? Why are three emails triggered in a row; does this indicate that they may be using a script? This is a PHP website. Is there a way for a client to view the PHP code of a page? Thank you
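
    Since the form should stay CAPTCHA-free, one low-friction option to consider is a honeypot field; a minimal PHP sketch (field name and structure are illustrative; bots that auto-fill every input give themselves away, while the existing processing stays unchanged):

      <!-- in the form: a field humans never see or fill in -->
      <input type="text" name="website_url" value="" style="display:none" tabindex="-1" autocomplete="off">

      <?php
      // at the top of the page that processes $_POST
      if (!empty($_POST)) {
          if (!empty($_POST['website_url'])) {
              // the hidden field was filled in: almost certainly an automated submission
              exit;
          }
          // ...existing code that builds and sends the alert email...
      }
      ?>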
