Search Results

Search found 9717 results on 389 pages for 'gkt pro'.

Page 113/389 | < Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >

  • How to effectively use an overseas SEO team?

    - by Dan Gayle
    My company is currently in contract with a 20+ person team in the Philippines, previously used for comment linking and guest-blogging spun articles. That's a practice we're stopping, but we don't want to sever ties with the team: they work hard, they're really cheap, and they produce excellent accounting and reporting of their actions. What are some ways we can best put them to use as a link-generating or content-generating resource? Their English is fair, but not of high enough quality to use them for any direct content creation. Thanks

    Read the article

  • How to write good blog post tags

    - by keruilin
    It seems that you have three choices in deciding how to write tags for your blog posts:

    1. Make them user friendly
    2. Make them highly searchable
    3. A combo of the two

    For example, say I have a blog post with write-ups on the top 10 iPad apps for business travel (e.g., Evernote, Dragon Diction, Instapaper, etc.):

    - User-friendly tags: ipad apps, business travel
    - Searchable keywords (analyzed with Google Keyword Analyzer): ipad apps, ipad travel apps, evernote ipad, instapaper, instapaper ipad
    - Combo: ipad apps, ipad travel apps

    So my question comes down to this: which is really the best choice -- 1, 2, or 3? Note: these visible post tags will also serve as the meta keywords for the post page.
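
    For reference on that last note: Google has said publicly that it ignores the keywords meta tag for web ranking, so the meta side of this decision arguably carries no weight either way. The markup in question would look something like this (hypothetical post page):

        <head>
          <title>Top 10 iPad Apps for Business Travel</title>
          <!-- the visible post tags, reused verbatim as meta keywords -->
          <meta name="keywords" content="ipad apps, ipad travel apps">
        </head>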

    Read the article

  • How to get tens of millions of pages indexed by Googlebot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... yes, it provides unique, useful content. We continually process raw data from public records, and by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data. Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions:

    1. Is the rate of indexing directly correlated with PR -- correlated enough that purchasing an old domain with good PR would get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
    2. Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approx. 20MM pages indexed in just over one year's time, along with an Alexa ranking around 2000.

    Noteworthy qualities we have in place:

    - Page download speed is pretty good (250-500 ms).
    - No errors (no 404 or 500 errors when getting spidered).
    - We use Google Webmaster Tools and log in daily.
    - Friendly URLs are in place.
    - I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is a Google video of Matt Cutts speaking of a staged on-boarding of large sites, too, in order to avoid increased scrutiny (at approx. 2:30 in the video).
    - Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page.
    - Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
    - We had previously set the crawl rate to the highest in Webmaster Tools (only about a page every two seconds, max). I recently turned it back to "let Google decide", which is what is advised.
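
    For reference, if and when sitemaps are submitted (staged or otherwise), the sitemap protocol caps each file at 50,000 URLs, so a site this size would use a sitemap index pointing at many per-chunk files. A minimal sketch with hypothetical filenames:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://www.example.com/sitemaps/records-0001.xml.gz</loc>
            <lastmod>2011-06-01</lastmod>
          </sitemap>
          <sitemap>
            <loc>http://www.example.com/sitemaps/records-0002.xml.gz</loc>
            <lastmod>2011-06-01</lastmod>
          </sitemap>
          <!-- ...one entry per 50,000-URL chunk; an index may list up to 50,000 sitemaps -->
        </sitemapindex>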

    Read the article

  • Fix 403 errors in Google Webmaster Tools

    - by Justin
    Hi team, I have a domain that has "fallen off a cliff" for searches in Google. Searches that used to be in position 1-4 are now gone from page 1. The same search in Bing shows the typical position expected (top 5 results). In reviewing Google Webmaster Tools, I am seeing two problems:

    1. The sitemap is reporting two errors: a general HTTP error (HTTP 403, Forbidden) and "URLs not accessible". However, the URL they report as not accessible is accessible: I can click the link Google provides and it works fine.
    2. There are 6,000 crawl errors of type 403. Again, most of the pages reported as 403 are accessible in my browser (I tried various browsers as well). About half are from January, the other half from November.

    There are no IP-specific firewall rules on ports 80 and 443 that could block the Googlebot. Using the user agent switcher add-on for Firefox, I confirmed that the pages load when the user agent is the Googlebot. A search of just "site:thedomain.com" does confirm there are over 9,000 pages in the index, but most searches don't return the site. I believe the 403 issues are the cause of the fall in search rankings, but I can't seem to find any information online with ideas about how to address this. Any ideas? jpe
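
    One complement to the in-browser user-agent switcher test: with 6,000 affected URLs, it is worth checking from the command line whether a security module 403s based on the user-agent string, since curl makes the comparison scriptable across many URLs at once (the URL below is hypothetical):

        # fetch headers only, first as curl, then claiming to be Googlebot
        curl -I http://thedomain.com/some-reported-403-page.html
        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
             http://thedomain.com/some-reported-403-page.html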

    Read the article

  • Google is not indexing my entire site despite having a sitemap

    - by Anusha
    I have an e-commerce website, www.beyondtime.in. I have been constantly monitoring Googlebot crawling on my website and my Webmaster account. Lately, I have found two issues that I have not been able to understand:

    1. Googlebot has only been crawling www.beyondtime.in/telecom.php, even though that URL isn't even valid. What needs to be done to get Google to crawl the other pages of the website as well?
    2. The second question is about my Google Webmaster account, where I've submitted my sitemap with 227 URLs. Of those, only 156 have been indexed, and none of the images on my website have been indexed by Google.
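
    On the images point: Google supports an image extension to the sitemap protocol, so one option is to declare images alongside the pages that embed them. A sketch with hypothetical URLs:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
          <url>
            <loc>http://www.beyondtime.in/some-page.php</loc>
            <image:image>
              <image:loc>http://www.beyondtime.in/images/some-image.jpg</image:loc>
            </image:image>
          </url>
        </urlset>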

    Read the article

  • Lost Traffic from Google After Adding a Meta Tag

    - by Marian
    I have a site, aroundnails.com, with an English version on the subdomain en.aroundnails.com. Reading about language-related meta tags for Google, I placed this tag on the main page of the main site:

    <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" />

    In this way I tried to tell Google that the site at en.aroundnails.com is the English version of the main site, not a duplicate. After a fortnight I had lost a huge part of my traffic from Google -- more than half. At the beginning of September I removed this tag, but traffic has remained at the same level. I hope somebody can help me solve this issue.
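
    For what it's worth, hreflang annotations are meant to be reciprocal: each language version should carry a link to every version, including itself, on every annotated page, or Google may disregard the markup. A sketch of what both home pages would then carry (the main site's language code is a placeholder here, since it isn't stated):

        <link rel="alternate" hreflang="xx" href="http://aroundnails.com/" />
        <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" />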

    Read the article

  • Squeezing as much SEO out of a URL as possible

    - by John Isaacks
    I am working on an e-commerce site, and I told our SEO consultant that I plan to make the URL scheme /products/<id>/<name>. This is similar to Stack Overflow's URLs, which are /questions/<id>/<title>. He asked me if I could change the URL scheme to /p/<id>/<name> instead. I know why he wants this change: the word "products" isn't needed to find the correct product and doesn't offer any SEO, so shortening it to just p would make the relevant keywords in the <name> weigh more. His main priority is maximizing SEO, but the part I don't think he is considering is how this affects the semantics of the site. Having the word "products" looks like it has meaning and a reason for being there; just having a p looks chaotic and ugly to me. I also don't think it makes that much of a difference, does it? Stack Overflow doesn't use /q/<id>/<title> and they do just fine -- though I realize there are many factors at play here, not just the URL. So I want some outside opinions on which is the better way, and why.
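
    Whichever prefix wins, the usual pattern (and what Stack Overflow does) is to let the <id> alone identify the product, treat the <name> segment as purely cosmetic, and 301-redirect to the canonical slug when it doesn't match. A mod_rewrite sketch, assuming Apache and a hypothetical product.php handler:

        # /p/123/any-old-slug -> product handler; the id alone finds the product
        RewriteEngine On
        RewriteRule ^p/([0-9]+)(/.*)?$ /product.php?id=$1 [L,QSA]

        # product.php then compares the requested slug to the real one and
        # issues a 301 to /p/123/correct-name if they differ.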

    Read the article

  • How can I monitor a website for malicious changes to its files?

    - by rossmcm
    I had an occasion recently where our website was compromised: a link farm was added to a couple of the pages on one occasion, and on another occasion a large and nasty .aspx file was put on the server. I won't mention the host's name (Hostway), but I was pretty annoyed that someone was able to do this. No, it wasn't a leaky password -- around 10 sites hosted by HW with consecutive IP addresses got trashed. Anyway, what I need is a utility or service (preferably free) that takes a snapshot of my website's contents and then regularly monitors the files (size and datestamp) for unauthorized changes or additions, and alerts me. I've used web services that monitor one file for changes, but I'm looking for something a bit more aggressive.
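
    If script access to the host (or a synced local copy) is available, a cron-driven checksum comparison is more trustworthy than size/datestamp checks, since both of those can be preserved by an attacker. A minimal sketch in Node.js: run once to record a baseline, then on a schedule to diff against it.

        // integrity-check.js -- walk the site root, hash every file, diff against a manifest
        'use strict';
        const crypto = require('crypto');
        const fs = require('fs');
        const path = require('path');

        const SITE_ROOT = process.argv[2] || '.';        // directory to monitor
        const MANIFEST = 'integrity-manifest.json';

        function hashFile(file) {
          return crypto.createHash('sha256').update(fs.readFileSync(file)).digest('hex');
        }

        function walk(dir, out = {}) {
          for (const name of fs.readdirSync(dir)) {
            const full = path.join(dir, name);
            if (fs.statSync(full).isDirectory()) walk(full, out);
            else out[full] = hashFile(full);
          }
          return out;
        }

        const current = walk(SITE_ROOT);
        if (!fs.existsSync(MANIFEST)) {
          fs.writeFileSync(MANIFEST, JSON.stringify(current, null, 2));
          console.log('Baseline recorded:', Object.keys(current).length, 'files');
        } else {
          const baseline = JSON.parse(fs.readFileSync(MANIFEST, 'utf8'));
          for (const f in current) {
            if (!(f in baseline)) console.log('ADDED:  ', f);     // e.g. a dropped .aspx shell
            else if (baseline[f] !== current[f]) console.log('CHANGED:', f);
          }
          for (const f in baseline) if (!(f in current)) console.log('REMOVED:', f);
        }

    Emailing the diff instead of logging it, or running the check from a separate machine, is the obvious next step.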

    Read the article

  • [SSL] Becoming a Root CA

    - by Max13
    Hi everybody, I'm the founder of a small French non-profit organization. Currently, we provide free web and shell hosting. On that subject: is there a way to become a trusted certificate authority, in order to give free SSL certificates to my customers, while avoiding both becoming an intermediate CA (and paying a lot for that) and paying a lot for each individual certificate? Thank you for your help.
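
    For context: getting a root certificate into browser and OS trust stores requires passing WebTrust/ETSI-style audits, which costs far more than buying certificates ever would. What a small organization can do for free is run a private CA and have its members install the root certificate themselves; an OpenSSL sketch:

        # create the CA key and a self-signed root certificate (~10 years)
        openssl genrsa -out ca.key 4096
        openssl req -x509 -new -key ca.key -sha256 -days 3650 -out ca.crt

        # sign a member's certificate signing request with that CA
        openssl x509 -req -in member.csr -CA ca.crt -CAkey ca.key \
                -CAcreateserial -days 365 -out member.crt

    Visitors outside the organization would still see browser warnings, which is exactly the problem publicly trusted roots exist to solve.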

    Read the article

  • Question about MochaHost.com Hosting Plans [duplicate]

    - by Wassim
    This question already has an answer here: "How to find web hosting that meets my requirements?" (5 answers)

    This is not advertising; I've just found a website (MochaHost) that offers some great things for just $3/month, like:

    - 2 lifetime free domains
    - Unlimited space and bandwidth
    - SVN (Subversion) support
    - SSH access
    - PHP 5, Perl, Python, and Rails

    I'd like to know if any of you have taken a hosting plan from them. What do you think about it?

    Read the article

  • PayPal Automatic Billing API

    - by Dale Burrell
    PayPal offers Automatic Billing Buttons (https://merchant.paypal.com/us/cgi-bin/?cmd=_render-content&content_ID=developer/e_howto_html_autobill_buttons#id105ED800NBF), which allow regular billing for different amounts. After a couple of hours of googling, I cannot find out how to access this functionality using the API, so that it can be automated as opposed to done manually via the PayPal account. Is it possible? Can someone point me to a sample or reference?

    Read the article

  • Email links open in a new window [closed]

    - by Dan
    I'm asking this as an opinion question. How does everyone treat email links opening in a new window when the user's default email client is web based? This way? <a href="mailto:[email protected]">email me</a> -- it will open fine for app-based email clients, but opens in the same window for web-based clients. Or this way? <a href="mailto:[email protected]" target="_blank">email me</a> -- it will open in a new tab for web-based email clients, but leaves a blank tab behind for app-based clients. I can't really seem to find the best of both worlds. What does everyone else do?

    Read the article

  • .com vs .me for personal and blogging sites: which one is better for SEO?

    - by Sameer Manas
    I have a domain under my name with a .com extension, and I am planning to use it for my portfolio and also as a regular blog. Considering SEO and ranking, which is the better way to implement this?

    myname.com - portfolio || myname.com/blog - blog
    (or)
    myname.com - blog || myname.me - portfolio

    I have absolutely no idea how TLDs impact SEO and ranking, so I'd appreciate the experts' advice on this. Thanks in advance.

    Read the article

  • Forum engine with full LDAP integration [closed]

    - by Andrian Nord
    We are looking for a forum engine that can actually maintain its user data in LDAP, perhaps via mods. The core point is the ability to maintain the data -- i.e., all user profile settings, like nickname, password, email, avatar, birthday, and others (preferably configurable). One example of good LDAP integration, of the level I'm expecting, is Drupal's: it allows mapping any user attribute into LDAP and keeps it in sync with the database. A year ago I did a small research pass over the existing free/FOSS engines and found a few forum engines with LDAP integration, namely SMF, phpBB, and some others. The most maintained solution was phpBB3, which supports LDAP integration out of the box, but it is unable to sync data with changes made on the LDAP server by other software. Actually, it wasn't even propagating changes back, to say nothing of mapping additional attributes (other than name/password/email). Also, I haven't found any forum whose architecture has a proper abstraction over user settings, so I doubt these engines (including phpBB) could be modded to add such functionality without dramatic changes to the core codebase. More recent research showed that even some commercial software, like IPB, is unable to keep its database synced with an LDAP directory or map additional attributes. In other words, all the support I've seen so far is simple user creation upon first login, which is not good for us, as the forum is not the primary site and should not maintain its own user base (to reduce the risk of collisions). LDAP integration is required because many other services (FTP, email, Jabber, a Drupal site) use the same user base. Currently we have a forum embedded in the Drupal site, but we are unsatisfied with its features. BTW, we are using Linux, and this is not a duplicate of this question, as its author seems to be satisfied with the behaviour described above. So, my question is: are there any (preferably FOSS and free) forum engines that can import, export, keep in sync, or otherwise integrate with an LDAP user database (preferably with the ability to map additional fields to LDAP attributes)?
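
    Whatever engine ends up on the shortlist, it helps to pin down exactly which attributes must stay in sync; a quick way to inspect what a user entry currently carries (base DN and uid are hypothetical):

        ldapsearch -x -H ldap://ldap.example.org \
            -b "ou=people,dc=example,dc=org" "(uid=jdoe)" cn mail jpegPhoto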

    Read the article

  • How to correctly track analytics when using an iframe

    - by Sherry Ann Hernandez
    In our main .aspx page we have this analytics code:

    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-1301114-2']);
      _gaq.push(['_setDomainName', 'florahospitality.com']);
      _gaq.push(['_setAllowLinker', true]);
      _gaq.push(['_trackPageview']);
      _gaq.push(function() {
        var pageTracker = _gat._getTrackerByName();
        var iframe = document.getElementById('reservationFrame');
        iframe.src = pageTracker._getLinkerUrl('https://reservations.synxis.com/xbe/rez.aspx?Hotel=15159&template=flex&shell=flex&Chain=5375&locale=en&arrive=11/12/2012&depart=11/13/2012&adult=2&child=0&rooms=1&start=availresults&iata=&promo=&group=');
      });
      (function() {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>

    Then inside this page is an iframe, and inside the iframe we set up this analytics code:

    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-1301114-2']);
      _gaq.push(['_setDomainName', 'reservations.synxis.com']);
      _gaq.push(['_setAllowLinker', true]);
      _gaq.push(['_trackPageview', 'AvailabilityResults']);
      (function() {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>

    The problem is that I see two pageviews when I look at the AvailabilityResults page: one is direct traffic and the other is CPC. How come they have different sources? I was expecting both of them to be direct traffic.
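
    One detail that stands out: the two trackers declare different _setDomainName values, so the session has to travel via the linker parameters; if that hand-off fails for any hit, the iframe starts a fresh session with its own attribution, which would produce exactly this "one direct, one cpc" split. For two unrelated domains, Google's classic ga.js cross-domain setup uses the same configuration on both sides -- a sketch:

        // on BOTH florahospitality.com and reservations.synxis.com
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-1301114-2']);
        _gaq.push(['_setDomainName', 'none']);   // per the ga.js cross-domain docs
        _gaq.push(['_setAllowLinker', true]);
        _gaq.push(['_trackPageview']);
        // the parent page still passes _getLinkerUrl(...) into the iframe src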

    Read the article

  • Why do people crawl sites without downloading pictures?

    - by Michael
    Let me show you what I mean:

    IP              Pages  Hits  Bandwidth
    85.xx.xx.xxx    236    236   735.00 KB
    195.xx.xxx.xx   164    164   533.74 KB
    95.xxx.xxx.xxx  90     90    293.47 KB

    It's very clear that these people are crawling my site with bots. There's no way you could visit my site and use under 1 MB of bandwidth. You might say there's the possibility that they could be browsing the site using some browser or plug-in that does not download images, JS/CSS files, etc., but the simple fact of the matter is that there are not 90-236 pages linked from the home page (outside of WP files), even if you visited every page twice. I could understand if these people were crawling the site for pictures, but once again, the bandwidth indicates that this isn't what is happening. Why, then, would they crawl the site simply to view the HTML/txt/JS/etc. files? The only thing I can come up with is that they are scanning for outdated versions of WordPress, SQL injection vulnerabilities, etc., which makes me inclined to outright ban the IPs. But I'm curious: is it possible that these are legitimate users, or at the very least, not intending to be harmful?
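
    If the raw access logs are available, the user-agent and the exact URLs requested usually settle the question. For an Apache combined-format log, something like this summarizes what one suspect IP asked for (the IP prefix is kept redacted, as above):

        grep '^85\.' access.log | awk -F'"' '{print $6, $2}' | sort | uniq -c | sort -rn | head

    Vulnerability scanners tend to give themselves away with requests for paths that don't exist on the site (wp-login.php probes, setup scripts, and the like).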

    Read the article

  • Make Offscreen Sliding Content Without Hurting SEO [duplicate]

    - by etangins
    This question already has an answer here: "How bad is it to use display: none in CSS?" (5 answers)

    On my website I have content which is positioned off the screen and then slides in when you click a button. For example, when you click the news button, content slides in with news. It didn't occur to me that this might be labeled a black-hat SEO technique: I have content positioned off screen with CSS that links elsewhere on my site, and a search engine could very easily interpret that as me hiding content for SEO purposes. Obviously, my intention was not to hide content, but to make a sort of UI/UX content slider where content slides into view when a button is clicked. How can I make something to this effect (where content slides in and out) that would not compromise SEO?
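
    One commonly suggested middle ground is to keep the panel in the document flow and collapse it with a transform that a visible control undoes, rather than parking it at -9999px. Nobody outside Google can promise how any given technique is scored, but content reachable through an ordinary visible button is generally considered legitimate. A sketch:

        <style>
          /* the panel stays in the DOM; it slides in when .open is toggled */
          #news { transform: translateX(100%); transition: transform 0.3s ease; }
          #news.open { transform: translateX(0); }
        </style>
        <button onclick="document.getElementById('news').classList.toggle('open')">News</button>
        <div id="news">...news content...</div>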

    Read the article

  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially on days with only one post. The situation is something like this:

    www.sitename.com/blog/archives/2013/06/01 - daily archive for 1 June 2013
    www.sitename.com/blog/archives/2013/06/my-post-name.html - the post itself

    So here we have two pages that are basically identical, except the daily archive has some meaningless title like "Daily Archive for 1 June 2013", and I have no control over which version Google decides is the primary content. It's quite possible (and likely) that the daily archive could be the "primary" content and the actual post the "duplicate". Once I realised it was doing this, I modified the daily archive template to include <meta name="robots" content="noindex">. Here we are a few weeks later and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not have been recrawled yet, but I am worried that the original posts (which should be the primary content) have been marked as duplicate content by Google. Now that I've noindexed the daily archives, I might end up with noindexed archives AND the original articles still flagged as duplicates -- and nothing will show up in search at all. Have I screwed myself here, or is there a way out?
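
    A small refinement on the template fix: the usual advice for thin archive pages is noindex,follow, so the archives drop out of the index while their links to the real posts keep getting crawled:

        <meta name="robots" content="noindex,follow">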

    Read the article

  • What alternatives to Animated GIFs are supported on all modern browsers?

    - by Clay Nichols
    I was looking for an alternative to animated GIFs for animated images, but per CanIUse, support for APNG seems to be being phased out, and MNG support isn't even listed there (and pages about it don't even mention Chrome, suggesting those pages are very, very old). Clarification: this is for a web app, so it'll need to support:

    - Safari on iPad (so it can't depend on extensions)
    - Chrome on Windows and Mac
    - Safari 6.0+ on Mac
    - Chrome on Android
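
    One technique with broad support across exactly that list is a sprite sheet driven by a CSS steps() animation. A sketch assuming a hypothetical 10-frame strip at 100px per frame (older WebKit builds need -webkit- prefixed animation properties):

        <style>
          .anim {
            width: 100px;
            height: 100px;
            background: url(frames.png) 0 0 no-repeat;   /* 1000x100 strip */
            animation: play 1s steps(10) infinite;
          }
          @keyframes play {
            to { background-position: -1000px 0; }       /* 10 frames x 100px */
          }
        </style>
        <div class="anim"></div>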

    Read the article

  • Easiest solution to setup payments for a conference registration page?

    - by Keith G
    I've got a fair amount of website development experience, but I've been asked to set up a conference registration page on short notice, and I have absolutely zero experience with shopping carts, payment processing, etc. What is the quickest and easiest way to get this thing up and running? Here are my criteria:

    - The site is currently hosted on GoDaddy.com, and someone has suggested using their QuickCart.
    - We cannot use any option that visits the paypal.com domain, because it has been blocked by a large segment of the potential audience (on a military base).
    - We need a $0 option for speakers.
    - Cancellations can be accepted, so something that could handle that would be a bonus.
    - There is no "product" other than a confirmation that they have registered for the conference.

    Read the article

  • Is there a search engine that indexes the source code of a web page?

    - by Dexter
    I need to search the web for sites in our industry that use the same AdWords management company, to ensure that said company is not violating our contract, as they have been accused of doing. They use a tracking code in the template of every page, which has a certain domain in its URL, and I'm wondering if it's possible to "Google" the source code using some bot that crawls the code rather than the content. For example, I once bought an unlimited license for an image gallery, and I was asked to type the license number in a comment just before the script. I thought it was just so a human could look at the source and find out whether someone had paid, but it turned out they had a crawler looking for their source code and that comment. If it ran across the code on your site, it would look for the comment, and if it found one, it would check whether it was a valid existing license. If not, it would first notify you of your noncompliance, and then notify the owner of the script. Edit: I'm looking to index HTML and JavaScript only, not the server-side languages or Java.
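
    Absent a ready-made service, a do-it-yourself version of that license crawler is not much code: fetch each candidate page and regex-search the delivered HTML, which includes inline JavaScript, matching the edit above. A sketch for Node 18+ (whose global fetch this assumes); the URLs and the signature are hypothetical:

        'use strict';
        const sites = ['http://example.com/', 'http://example.net/'];
        const signature = /tracker\.some-adwords-firm\.com/i;  // the code to look for

        for (const url of sites) {
          fetch(url)
            .then(res => res.text())
            .then(html => { if (signature.test(html)) console.log('MATCH:', url); })
            .catch(err => console.error('FAILED:', url, err.message));
        }

    Externally referenced .js files would need a second pass that extracts <script src> URLs and fetches those too.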

    Read the article

  • #id URLs first display the full page, then move to the #id

    - by guisasso
    I've noticed this in the new version of Chrome, and in IE9 and 10. Some URLs in a photo gallery have an #id in them, as they are supposed to display a full view of a picture: a div lower down the page has the #id that I call via a.com/1.html#id. This was never an issue until lately, when I noticed a bit of a lag. The issue: the website loads normally, then the view moves to the #id as it's supposed to, but sometimes with noticeable lag, perhaps because of the high resolution of the pictures. Is there any way to avoid this, or to make the page move to the correct #id even before it has fully loaded?
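
    The late jump usually happens because the anchor's position keeps shifting as images above it load. Giving every gallery image explicit dimensions lets the browser reserve the space during layout, so the #id target stays put and the initial scroll lands correctly (filename hypothetical):

        <img src="photos/large-01.jpg" width="1200" height="800" alt="">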

    Read the article

  • Allow access to a WordPress site only via links in email newsletters

    - by Shane
    I send out a personal email newsletter, and have been looking into sending it via a service like MailChimp or sendy.co. Many of these email services suggest, or require, that the newsletter content be available online, in case the recipient's email app doesn't render it properly, or at all. The thing is, I don't want my newsletter contents visible to the whole world, nor do I want to require existing recipients to have (or be assigned) accounts and passwords. So the question is: how can my WordPress site's content be viewable only by clicking the link to it in the email newsletter? It shouldn't be findable in a Google search, but once at the site, the visitor should be able to view previous newsletters. It seems an .htaccess file would do the trick, but I have been unable to figure out the syntax. Thanks for your help. I have copied below two other questions, with answers, which helped me word my question clearly. Similar to this request about allowing access to a certain group while still restricting access to the world: "Is there a way to password protect a directory in cPanel, without the user being prompted for the password when they access it via the web?" And this person's question is the closest I could find to my situation: "Restrict direct folder access via .htaccess except via specific links".
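
    One low-friction pattern is a shared secret token in the newsletter's links, enforced in .htaccess, plus a header telling search engines to stay out. A sketch assuming Apache with mod_rewrite and mod_headers (token and path hypothetical, and anyone holding the link can of course share it):

        # keep the newsletter pages out of search indexes
        Header set X-Robots-Tag "noindex, nofollow"

        # forbid /newsletter/ URLs unless the link carries ?key=SECRET
        RewriteEngine On
        RewriteCond %{REQUEST_URI} ^/newsletter/
        RewriteCond %{QUERY_STRING} !(^|&)key=SECRET(&|$)
        RewriteRule .* - [F]

    Since ordinary navigation drops the query string, browsing older issues would need the token on every internal link, or a cookie set on the first visit (mod_rewrite's CO= flag can do that) and tested instead of the query string.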

    Read the article

  • Multi-option step-by-step walkthrough? [closed]

    - by James Simpson
    I'm looking ideally for a service (but maybe a script) that would allow me to create a step-by-step walkthrough, customised by the options users choose in earlier steps. It is difficult to describe, and if there is a better term I could Google for, please let me know! Basically, I want to start with a few options for the user to click on, then change what comes next based on that click, and be able to do this through a whole walkthrough (explaining how to set a service up). As mentioned, if there were a SaaS I could use (in the vein of desk.com), that would be perfect. Thank you, and please ask questions if I've described it poorly!
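
    For the script route, the underlying structure is just a decision tree keyed by the user's clicks; a minimal sketch in JavaScript (step names and texts hypothetical):

        // each node has a prompt plus a map from the clicked option to the next node
        var steps = {
          start: { prompt: 'Which platform do you use?', options: { Windows: 'win', Mac: 'mac' } },
          win:   { prompt: 'Download the Windows installer, then...', options: { Next: 'done' } },
          mac:   { prompt: 'Open Terminal, then...', options: { Next: 'done' } },
          done:  { prompt: 'All set!', options: {} }
        };

        function show(id) {
          var step = steps[id];
          console.log(step.prompt);  // in a real page, render this and wire each
          Object.keys(step.options).forEach(function (label) {
            console.log('  [' + label + '] -> ' + step.options[label]);  // option to a button calling show()
          });
        }
        show('start');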

    Read the article

  • Trade-off: lower the number of URLs in the sitemap from 43k to 23k, or update sitemap.xml only weekly?

    - by Tobias
    We rewrote the sitemap creation process. The sitemap now contains 43,000 URLs, 20k more than before, and URLs change daily. The script that creates the complete sitemap takes more than 30 hours to run, so we cannot rebuild it every day. Let's say that increasing the speed of the script is not possible. What should I do?

    A: Stay with the 23k URLs and update daily
    B: Increase the number of URLs to 43k and update weekly
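
    A third option the protocol allows: split the sitemap into chunks under one index, rebuild the large stable chunks weekly, and regenerate only a small daily file of URLs that changed, so freshness and total coverage stop competing. A sketch with hypothetical filenames:

        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://www.example.com/sitemap-stable.xml.gz</loc>
            <lastmod>2013-06-02</lastmod>   <!-- rebuilt weekly -->
          </sitemap>
          <sitemap>
            <loc>http://www.example.com/sitemap-changed.xml</loc>
            <lastmod>2013-06-07</lastmod>   <!-- rebuilt daily, only changed URLs -->
          </sitemap>
        </sitemapindex>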

    Read the article
