
  • Why do some video posts from the same blog appear in Google with thumbnails, while others do not?

    - by jayarjo
    We own a media blog - basically a big collection of various videos streamed through our branded player. The interesting thing is that some of our posts show up in Google search results with a thumbnail denoting that the post in question is in fact a video, but more often they do not. We basically wonder why. What affects this, and can we control it somehow? All posts (their single pages) have Facebook Open Graph meta tags in place.
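
    For reference, a minimal sketch of the Open Graph video tags such a post page would typically carry (the URLs and values below are hypothetical placeholders, and OG tags alone don't guarantee a thumbnail - Google's video results are more reliably driven by a video sitemap or schema.org VideoObject markup):

    <!-- Open Graph video markup; all URLs are placeholders -->
    <meta property="og:type" content="video.other" />
    <meta property="og:title" content="Example video post" />
    <meta property="og:image" content="http://example.com/thumbs/clip.jpg" />
    <meta property="og:video" content="http://example.com/player.swf?clip=123" />
    <meta property="og:video:type" content="application/x-shockwave-flash" />
    <meta property="og:video:width" content="640" />
    <meta property="og:video:height" content="360" />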

  • Does having a Google "stop word" in a domain name have less SEO benefit than not having it?

    - by Dan
    Let me explain. Let's say the keyword I want to optimize for is "green giraffes", but the domain greengiraffes.com (singular, plural, no hyphen, hyphen, etc.) is not available. I know that the search results for "green giraffes" and "about green giraffes" are essentially the same because "about" is a stop word. Does that therefore also mean that the domain name "aboutgreengiraffes.com" is as good as "greengiraffes.com" in terms of SEO value? Are all stop words equal in that regard, or is a shorter one (such as "e" or "z") better?

  • Approach to retrieving files from a server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have two approaches to solve this, but I would like to know which approach will put the least stress on my application and server: (1) the updater sends a list of the files it wants to the server, and the server zips them up and returns the single compressed file; or (2) the updater sends a request for each separate file to the server, which simply returns that file. The application will be used mainly in Belgium and The Netherlands, where connections/bandwidth tend to be pretty decent. The average size of a single file should be around 100 KB, 1 MB at most. I expect an update to have anywhere between 10 and 50 new files, and at most 100 people/day updating the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem; any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
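
    As a rough illustration of the first approach, here is a minimal server-side sketch, assuming a PHP endpoint (the server's language is not stated in the question, so that is an assumption) that receives the requested file names and streams back a single zip:

    <?php
    // Hypothetical endpoint: bundle the requested update files into one zip.
    // Assumes $_POST['files'] is an array of file names; the path is a placeholder.
    $releaseDir = '/var/releases/latest/';
    $requested  = isset($_POST['files']) ? (array) $_POST['files'] : array();

    $zipPath = tempnam(sys_get_temp_dir(), 'upd');
    $zip = new ZipArchive();
    $zip->open($zipPath, ZipArchive::OVERWRITE);
    foreach ($requested as $name) {
        $safe = basename($name); // avoid path traversal
        if (is_file($releaseDir . $safe)) {
            $zip->addFile($releaseDir . $safe, $safe);
        }
    }
    $zip->close();

    header('Content-Type: application/zip');
    header('Content-Length: ' . filesize($zipPath));
    readfile($zipPath);
    unlink($zipPath);

    Zipping trades server CPU for fewer round trips; at 10-50 files of ~100 KB each and ~100 updates a day, either approach is likely fine, so per-file requests may win on simplicity.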

  • CSS alignment differs per page, can't find reason [migrated]

    - by Floran
    I list products on my homepage and on a company details page, using the exact same HTML, but for some reason the product appears different. The product name is "Artikel 1". Here the product is displayed correctly: http://www.zorgbeurs.nl/ - notice how the green price area is right below the product. But here: http://www.zorgbeurs.nl/bedrijven/76/mymedical the green price area is all the way at the bottom of the page. Why?

  • Will many links to the same page without nofollow penalize the host site in the search engine rankings?

    - by Evgeny
    Maybe a silly question, but I'll give it a shot :). In my forum app I would like to allow users with sufficiently high reputation to display links to their home pages under every post - without the nofollow attribute (lower-rep users will have the nofollow). I am happy to help the site contributors improve their own rankings, but I'm not sure if this can actually deteriorate the rank of the host (the site that hosts those links), as potentially the same link to a user's home page may be peppered across the pages of the host. What do you think? Thanks.
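
    For illustration, the reputation gate itself is tiny; a hedged PHP sketch (the forum app's language, the field names, and the threshold of 1000 are all assumptions):

    <?php
    // Render a user's homepage link, dropping nofollow above a reputation threshold.
    function homepage_link(array $user) {
        $rel = ($user['reputation'] >= 1000) ? '' : ' rel="nofollow"'; // hypothetical threshold
        return '<a href="' . htmlspecialchars($user['homepage']) . '"' . $rel . '>'
             . htmlspecialchars($user['name']) . '</a>';
    }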

  • Webmaster Tools hentry errors and authorless pages

    - by Ben Racicot
    Within Google Webmaster Tools (Search Appearance > Structured Data) I'm getting a series of errors: 'Error: Missing required hCard "author"', and most of my 44 errors also list 'Missing: author', 'Missing: entry-title', and 'Missing: updated'. There seems to be no clear explanation of these errors. Either these classes exist without their required nested classes, or they are expected to exist because of something else, possibly an itemscope or itemtype=''. The question: how do you specify with rich snippets that the page is about a location and has no human author?
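
    One hedged option, assuming the pages really do describe locations, is to drop the hentry/hCard classes and mark the page up as a schema.org Place instead, which carries no author expectation; a minimal sketch with placeholder content:

    <!-- Hypothetical markup: describe the page as a Place, not an article -->
    <div itemscope itemtype="http://schema.org/Place">
      <h1 itemprop="name">Example Location</h1>
      <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
        <span itemprop="streetAddress">1 Example Street</span>,
        <span itemprop="addressLocality">Example City</span>
      </div>
      <p itemprop="description">Short description of the location.</p>
    </div>

    The hentry errors themselves usually come from theme CSS class names (hentry, entry-title, updated) being picked up as microformats, so removing or renaming those classes also tends to silence the warnings.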

  • navigation menus and SEO

    - by Rodolfo
    I've always had my doubts about the effect of navigation menus on SEO - you know, the menus at the top that show on every page of the site, linking to the main sections and subsections. My issue is that, if not rendered dynamically (i.e. after the page is loaded or something), from a search engine's point of view they probably look like a whole bunch of links at the beginning of the page, links that may have nothing to do with the page being analyzed. So they're probably not only confusing the engine but also passing link "juice" to the wrong pages, or reducing its value. When I've asked SEO people about this, I usually get a "Google is smart, they'll recognize it as a menu and ignore it" response, but I'm not convinced (the "Google is smart" argument sounds almost like a religious discussion to me). So does it affect SEO negatively or not? Are there any official posts on this topic?

  • Alternative to nofollow: custom 302 url shortener?

    - by Dogweather
    Here's the scenario: lots of blogging platforms make it tedious to insert nofollow into links within the post content, i.e., you need to edit the HTML, format it correctly, etc. I have a client who posts lots of content with links that should be nofollow'ed, and I thought of a novel way to handle this, since the blogging platform they're using makes it hard: I install a URL shortener web app on the client's domain. The shortener works as normal, except it redirects via 302 instead of 301. The PageRank will therefore stay at the shortener's domain, and not flow on to the target site. Part 2: in order to get the PageRank to collect meaningfully, say on the site's home page, the shortened URLs would be generated like this: /link?12345 instead of /link/12345, and the path /link would 301 to the home page. This way the id is a param, not a path element, so all the incoming shortened links go to one path, which transfers PageRank to the home page. So that's my idea. I wanted to see if anybody could find problems with it. Thanks!
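
    A minimal sketch of the mechanics, assuming a PHP front controller mapped to /link and a hypothetical lookup table of ids to targets:

    <?php
    // /link front controller: /link?12345 -> 302 to target; bare /link -> 301 home.
    $targets = array('12345' => 'http://target-site.example/page'); // hypothetical store
    $id = $_SERVER['QUERY_STRING'];

    if ($id !== '' && isset($targets[$id])) {
        header('Location: ' . $targets[$id], true, 302); // temporary: PageRank stays here
    } else {
        header('Location: http://client-domain.example/', true, 301); // collect on the home page
    }
    exit;

    Whether search engines actually treat a 302 as withholding PageRank is of course up to them; this only implements the mechanics described above.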

  • How much to charge for a WordPress installation?

    - by Jack Duluoz
    I know this isn't properly a technical question, but I hope it's OK here. The question is simple: how much should I charge a customer for a WordPress installation and configuration? Configuration simply means I have to install a theme for him (not provided by me) and various plugins, and maybe edit some lines of code here and there to make the whole thing work. More info: I don't do this for a living; I'm just doing it for this single customer. He told me he wants to customize some features of the blog, which I think will require a bit of code editing, but these will be small modifications, because I already told him that more substantial modifications will be billed separately. I don't know exactly how long this will take, but probably just one day for the setup and a few more days to adapt the blog to the customer's requests, which will eventually come up later.

  • Webmaster Tools 500 crawl errors for ASP faceted navigation pages that do not exist

    - by user19007
    I am getting 2,500 HTTP 500 URL errors in Google Webmaster Tools. These pages are faceted navigation results that cannot be reached by a site visitor; these pages do not exist. We are using faceted navigation on the Volusion platform (ASP.NET, I think). I have specified URL parameters in Webmaster Tools so that Google will not try to index anything faceted, but this does not stop the errors from being generated. I am concerned about how this might affect SEO (bleeding PageRank). I can provide additional information if needed. I am not sure how to solve this; I have started down the path of creating 301s, but I'm having some difficulty there as well.

  • How can I view localized versions of my site?

    - by Max Vernon
    We are adding internationalization to our site. We get the client's IP address from the headers and look it up against the IP2Location database to get the client's country. Several of our clients reported seeing a blank page over the weekend. We'd like to be able to get screenshots, or use a browser from many different countries on an ongoing basis, for testing code changes. I need to know what the site looks like when accessed from various countries, since several elements vary by country. I've used Tor and Vidalia, along with the Tor-customized Firefox browser; however, it appears the CSS gets mangled. I have also used http://webpagetest.org to check the site, but the screenshot it gives is too small to be really useful. Is there a site or a service I can use to get screenshots of, or interact with, my website from various countries?
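
    Independent of third-party services, one common workaround is a developer-only override so each country's code path can be exercised from anywhere; a hedged PHP sketch (the lookup helper ip2location_country() is hypothetical, standing in for however the IP2Location query is actually done):

    <?php
    // Allow testers to force a country via ?debug_country=XX (guard this in production).
    function client_country() {
        if (isset($_GET['debug_country']) && preg_match('/^[A-Z]{2}$/', $_GET['debug_country'])) {
            return $_GET['debug_country']; // e.g. ?debug_country=DE
        }
        return ip2location_country($_SERVER['REMOTE_ADDR']); // hypothetical lookup helper
    }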

  • Why is domainpark.cgi being called from my website?

    - by Sean
    I used to test my site on www.exampleone.com, and I have now moved to the real domain www.realdomain.com; www.exampleone.com is now parked by 1and1 (the default). When I check which requests are made by www.realdomain.com, I see domainpark.cgi and park.js from Sedo Parking being requested, as well as the JS that serves the ads by adclicks. How do I get rid of this? It's not on the index page at all, and it's causing a lot of strain and slowing my site down.

  • When acquiring a domain name for product xyz, is it still important to buy .net and .org versions too?

    - by Borek
    I am buying a domain name for service xyz, and obviously I have bought the .com first. In the past it was automatic to also buy the .net and .org versions, but I've been asking myself: why would I do that? To serve customers who mistakenly enter a different TLD? (Would someone accidentally do that these days?) To avoid the chance that a competitor acquires those TLDs and plays some dirty game on my customers? If there is a good reason, or a few, to buy the .net and .org versions these days, I'd like to see them listed. Thanks.

  • What meta tag or microdata should I use for a dictionary web application?

    - by vonPetrushev
    I have a web application that serves as a dictionary, and it ranks well on Google when searching for a rare word in my language (the dictionary's target language). I want the result to appear for define: some-word queries, as well as in the search results when someone uses the Dictionary filter tool. Should I add some special meta tag to the head of the HTML? How about microdata? Does Google have a special webmaster tool for registering dictionaries like wordnetweb.princeton.edu or en.wiktionary.org?
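
    I'm not aware of a registration tool for dictionaries, but as an illustration, schema.org microdata can at least make the entry structure explicit. A sketch using the DefinedTerm type (note: DefinedTerm is a later schema.org addition, so treat its availability as an assumption):

    <!-- Hypothetical markup for a single dictionary entry page -->
    <div itemscope itemtype="http://schema.org/DefinedTerm">
      <h1 itemprop="name">somerareword</h1>
      <p itemprop="description">The definition of the word goes here.</p>
    </div>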

  • Why is Google PageRank not showing after redirecting www to non-www?

    - by muhammad usman
    I have a fashion website. I had redirected my non-www domain to the www domain, and my preferred domain in Google Webmaster Tools was the www version. Now I have redirected www to the non-www domain and changed my preferred domain as well. Now Google PageRank is not showing for even a single page. Would anybody please help me and let me know if I have done something wrong? Below is my .htaccess redirect code:

    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    RewriteCond %{HTTP_HOST} ^www\.deemasfashion\.com$
    RewriteRule ^deemasfashion\.com/?(.*)$ http://deemasfashion.com/$1 [R=301,L]
    RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/
    RewriteRule ^index\.html$ http://deemasfashion.com/ [R=301,L]
    RewriteRule ^index\.htm$ http://deemasfashion.com/ [R=301,L]
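
    For what it's worth, the www-stripping rule above can never fire as written: a RewriteRule pattern matches only the URL path, never the hostname, and the catch-all index.php rule runs first anyway. The conventional form of the redirect, placed before the front-controller rules, looks like this:

    RewriteEngine On
    # Redirect www to the bare domain; the host is tested in the condition,
    # while the rule pattern matches only the path.
    RewriteCond %{HTTP_HOST} ^www\.deemasfashion\.com$ [NC]
    RewriteRule ^(.*)$ http://deemasfashion.com/$1 [R=301,L]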

  • Does Submit to Index on a page with new content update Content Keywords for the site?

    - by Dan Kanze
    Using Google Webmaster Tools, I'm trying to update the Content Keywords of my site, and I'm confused about the relationship between Submit to Index and Content Keywords. Does Fetch as Google followed by Submit to Index on a previously indexed page containing new content expedite updating the Content Keywords crawled by the real Googlebot? Does Submit to Index only submit new URLs, so that previously indexed URLs still point to the older cached version until Google crawls for the new content on its own? Does Submit to Index have anything to do with Content Keywords, or with crawling new content, whether on a previously indexed page or a never-indexed page?

  • Are webhosts that require NS instead of a CNAME common?

    - by billpg
    I've just signed up with a webhost (which I prefer not to name) and I'm reasonably happy with it. The only nit was when I was ready to put a site online and asked the support line what name I should point my 'www' CNAME to. They responded that they don't do that, and that I need to set my domain's NS records for the hosting to work: "Why would you ever want to do it that way? Our service to you includes DNS, and our servers are probably much better than the one your registrar provides." This was a bit of a surprise, as all of the other webhosts I've worked with happily support this. I've set up (e.g.) gallery.myfriend.example for friends by having them configure their DNS to CNAME 'gallery' to the name of a shared server at a webhost, and the webhost does name-based hosting for 'gallery.myfriend.example'. (Of course, if the webhost ever tells me I'm being moved from A.webhost.example to B.webhost.example, it would be my responsibility to change where the CNAME points. Really good webhosts would instead create myname.webhost.example for the IP of whichever server my stuff happens to be on, so I'd never have to worry about keeping my CNAME up to date.) Is my impression correct that most webhosts will happily support a service that begins with a CNAME hosted elsewhere, or is it really more common that webhosts will only provide service if they control the DNS too?

  • How long does it take for Google Webmasters to index site after submitting sitemap? [closed]

    - by Venkatesh Hodavdekar
    Possible Duplicate: Why isn't my website in Google search results? I submitted my website to Google search today using Google Webmaster Tools and a sitemap. The sitemap status says OK and shows that 12 URLs have been recognized. I was wondering how long it takes for the links to get indexed, as the indexed URLs option says "No data available. Please check back soon." I am not sure if it is showing this message due to some error, or if everything is fine.

  • Moving to a custom domain in Blogger spoiled my PageRank. Need help

    - by Chankey Pathak
    I had www.chankeypathak.blogspot.com as my blog on Blogger. I then purchased the domain www.chankeypathak.com and followed the procedure of redirecting all old pages to the new ones (301 redirect). Everything is working fine, but the only problem is that I have lost my PageRank: before, it was 2 for www.chankeypathak.blogspot.com, and now it shows unranked for www.chankeypathak.com. How can I get my PageRank back?

  • PHP W3C Validator API, is this good? [closed]

    - by Josh Purcell
    I was trying to find a way to check whether my site's code was valid without continuously going over to the W3C validator, so I decided to make an "API" (though it really isn't one!). I just wanted to know if anybody can find a better solution than the one I have made. This is what I currently use, called with ?uri=http://www.mydomain.com :

    <?php
    if (!$_GET['uri']) {
        echo "No URI!";
    } else {
        $CheckURI = "http://validator.w3.org/check?uri=" . $_GET['uri'];
        $URL = file_get_contents($CheckURI);
        $Start = strpos($URL, "<title>") + 7;
        $End = strpos($URL, "</title>");
        $Title = substr($URL, $Start, $End - $Start);
        if (preg_match('[Invalid]', $Title)) {
            // Code is INVALID
            echo "<a href='$CheckURI' title='This is not good!' target='_BLANK'>INVALID Source</a>";
        } elseif (preg_match('[Valid]', $Title)) {
            // Code is VALID
            echo "<a href='$CheckURI' title='Check It Yourself!' target='_BLANK'>Valid Source</a>";
        } else {
            // It went WRONG
            echo "";
        }
    }
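
    As a possibly sturdier alternative to scraping the page title, the W3C validator also reports its verdict in an HTTP response header, which avoids parsing HTML entirely; a sketch:

    <?php
    // Read the validator's X-W3C-Validator-Status header
    // ('Valid', 'Invalid' or 'Abort') instead of scraping the <title>.
    $uri = isset($_GET['uri']) ? $_GET['uri'] : '';
    if ($uri === '') {
        die('No URI!');
    }
    $check   = 'http://validator.w3.org/check?uri=' . urlencode($uri);
    $headers = get_headers($check, 1);
    $status  = isset($headers['X-W3C-Validator-Status']) ? $headers['X-W3C-Validator-Status'] : 'Unknown';
    echo "<a href='" . htmlspecialchars($check) . "' target='_blank'>$status</a>";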

  • .htaccess / 301 redirection question

    - by John K
    All my WordPress post URLs generate subdirectories with duplicate content, and I do not know what regular expression to use to consistently 301 redirect domain.com/category/post/random-number/ to domain.com/category/post/, and domain.com/category/post/random-number/another-random-number/ to domain.com/category/post/ as well. Here is an example of my problem:
    http://www.example.com/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/
    http://www.example.com/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/1345257927000/
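
    A hedged .htaccess sketch, assuming the numeric segments only ever appear after the category/post slug as in the example above:

    RewriteEngine On
    # Strip one or more trailing all-numeric path segments:
    # /category/post/123/ and /category/post/123/456/ -> /category/post/
    RewriteRule ^([^/]+/[^/]+)(/[0-9]+)+/?$ /$1/ [R=301,L]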

  • Why would a web site keep my signup information for a limited time only?

    - by Alois Mahdal
    I have just created an account at (some web service - actually Transifex, a localization service). The registration form requested the typical things: account name, e-mail address, password (twice), and an optional company name and phone number. What confused me was this sentence on the confirmation page (the one right after submitting the form): "We will store your signup information for 7 days on our server." Can anybody explain what this means? What exactly are they referring to by "signup information", if it's something that should be kept for only 7 days? Or is my account going to be destroyed after that time? (Well, that could make sense for some special services, but not for this one.)

  • Active Directory auto login to website for domain users

    - by Darkcat Studios
    I am putting together an intranet for a company. I have set up authentication to get into the intranet from a login box linked to AD via LDAP. However, the client wants (if possible) users to be automatically authenticated into the intranet if they are logged into the domain. AD and IIS 7.5 are on separate servers (in the same network). I believe that I need to use Windows Authentication to do this - but will that work, given that the web server is not part of the domain? Do I need to tell IIS where the AD server is? The next part could be more complex: once the user has authenticated, I need to pull the user's details from AD, I guess with LDAP; however, I will need to know the user's username in order to do this, won't I? As the user hasn't had to type it in, how do I get it? The intranet site is in ASP.NET 4 (VB).
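
    For reference, a hedged sketch of the classic ASP.NET setup: enable Windows authentication and deny anonymous users in web.config, after which the authenticated account name (typically DOMAIN\username) is available as User.Identity.Name for the LDAP lookup. Note that integrated authentication generally requires the web server to be a domain member (or to have a trust configured), so the separate-server scenario is worth testing first:

    <!-- Hypothetical web.config fragment for integrated Windows authentication -->
    <configuration>
      <system.web>
        <authentication mode="Windows" />
        <authorization>
          <deny users="?" /> <!-- reject anonymous users -->
        </authorization>
      </system.web>
    </configuration>

    In IIS 7.5 you would also enable Windows Authentication and disable Anonymous Authentication for the site itself.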

  • What is the replacement for the Web Intents HTML standard?

    - by Tom
    "Web Intents" were deprecated in Chrome 24 (November/2011) and are no longer supported in any browser: We also gathered a lot of valuable data and feedback from our experimental support for Web Intents and decided to disable the feature in today's Beta release. Is there an HTML5 standard that I can look into as an alternative to what Web Intents intended? I'm interested in how web services can be stitched together. For example, imagine a website that can import a image from any number of web-services, modify the image in some way, then push the file back to any number of other web-services, all via HTML5 standards.

  • Is it possible for a web server to send more files than requested, and have the browser accept them?

    - by Osiris
    I've created a basic web server for a school project, and it serves static content without a problem. I thought of having the server parse all htm/html files for links to .js/.css/image files, and send these files to the client without the client having to request them later. E.g., the browser requests index.htm, and the server responds with index.htm and image.jpg. I modified the server to send two distinct HTTP responses for a "GET /index.html HTTP/1.1" (one for the HTML page and one for the image), but the browser ended up requesting the image when it was good and ready. Is there any way to bypass this? (Use a multipart response, perhaps?) Will these files be accepted by most browsers, or will they be rejected for security reasons?
