Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.


  • Search engine bots accessing strange URLs

    - by casasoft
    We have ELMAH enabled on our site and get errors whenever a Page Not Found error is triggered on the website. We recently redesigned the website, so we understand that search engine robots might try to access previously indexed pages, resulting in Page Not Found errors. For this reason, we have set up permanent redirects from such previously indexed pages to the respective new pages. The website in question is www.chambercollege.com; for example, a previously indexed URL was www.chambercollege.com/special-offers.aspx. This page is no longer accessible, so we created the necessary permanent redirect to the respective new page at www.chambercollege.com/en/content/special-offers-161/. Now we are starting to receive Page Not Found errors from search engine bots (e.g. the MSN bot) trying to access the URL www.chambercollege.com/special-offers.aspx/images/shadow_right.jpg/. Any idea how a search engine could make up such a strange URL, and any suggestions on the best way to handle it?
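
    URLs like this usually arise when a crawler resolves a relative image path (images/shadow_right.jpg) against the old page URL as if it were a directory. Since the site runs ELMAH/ASP.NET, a hedged sketch of one way to silence the noise, assuming IIS with the URL Rewrite module installed (the rule goes inside <system.webServer> in web.config; the rule name and pattern are illustrative):

        <rewrite>
          <rules>
            <rule name="Gone: stray paths under old aspx URL" stopProcessing="true">
              <match url="^special-offers\.aspx/.+" />
              <action type="CustomResponse" statusCode="410"
                      statusReason="Gone" statusDescription="Page removed" />
            </rule>
          </rules>
        </rewrite>

    A 410 tells well-behaved bots to stop retrying those variants, while the existing 301 for the bare /special-offers.aspx keeps working.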

  • Interactive map using javascript [on hold]

    - by Denis
    I'm trying to learn HTML and JavaScript, but I can't find any information about how to create an interactive map/picture using JavaScript. For example, I take a map showing part of my town and write some information about a few buildings on it, so that when I move my mouse over those buildings the information is displayed. It should look similar to this: http://davidlynch.org/projects/maphilight/docs/demo_usa.html. I need to use JavaScript to get this done.
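
    A minimal sketch of the usual approach, an image map plus a little JavaScript; the image file, coordinates, and text below are invented placeholders:

        <img src="town-map.jpg" usemap="#town" alt="Town map">
        <map name="town">
            <!-- coords are left,top,right,bottom pixel positions on the image -->
            <area shape="rect" coords="40,60,120,140" href="#" data-info="Town Hall, built 1896">
            <area shape="rect" coords="200,80,280,160" href="#" data-info="Public Library">
        </map>
        <div id="info"></div>
        <script>
            // Show a building's text while the mouse is over its area
            var info = document.getElementById('info');
            var areas = document.querySelectorAll('map[name="town"] area');
            for (var i = 0; i < areas.length; i++) {
                areas[i].onmouseover = function () {
                    info.textContent = this.getAttribute('data-info');
                };
                areas[i].onmouseout = function () {
                    info.textContent = '';
                };
            }
        </script>

    The maphilight plugin in the linked demo builds on exactly this kind of <map>/<area> markup.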

  • Multi language switch links translated or in current language?

    - by FFish
    Should I: A: translate the language links into the current language (if I am on the English version): <a href="en/">English</a> | <a href="it/">Italian</a> | <a href="fr/">French</a> B: leave the links in their native languages: <a href="en/">English</a> | <a href="it/">Italiano</a> | <a href="fr/">Français</a> From a user perspective option B is the obvious choice, but what about SEO?
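
    Whichever anchor text wins, a commonly recommended companion for the SEO side is to declare the language alternates explicitly in each page's head; a minimal sketch (domain hypothetical):

        <link rel="alternate" hreflang="en" href="http://example.com/en/">
        <link rel="alternate" hreflang="it" href="http://example.com/it/">
        <link rel="alternate" hreflang="fr" href="http://example.com/fr/">

    This tells crawlers the three URLs are translations of one another, independent of how the visible links are labelled.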

  • Is SEO affected negatively by having densely encoded identifiers of content in URLs?

    - by casperOne
    This isn't about where to put the id of a piece of unique content in URLs, but more about densely packing the URL (or, does it just not matter). Take, for example, a hypothetical post in a blog: http://tempuri.org/123456789/seo-friendly-title The ID that uniquely identifies this is 123456789. This corresponds to a look-up and is the direct key in the underlying data store. However, I could encode that in, say, hexadecimal, like so: http://tempuri.org/75bcd15/seo-friendly-title And that would be shorter. One could take it even further and have more compact encodings; since URLs are case-sensitive, one could imagine an encoding that uses numbers, lowercase and uppercase letters, for a base of 62 (26 upper case + 26 lower case + 10 digits): 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz For a resulting URL of: http://tempuri.org/8M0kX/seo-friendly-title The question is, does densely packing the ID of the content (an ID is mandatory for look-ups) have a negative impact on SEO (and, dare I ask, might it have any positive impact), or is it just not worth the time? Note that this is not for a URL shortening service, so saving space in the URL for browser limitation purposes is not an issue.
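
    For reference, a minimal sketch of the base-62 encoding described above (JavaScript, using the exact alphabet from the question):

        var ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';

        // Encode a non-negative integer ID as a base-62 string
        function encodeId(n) {
            if (n === 0) return ALPHABET.charAt(0);
            var s = '';
            while (n > 0) {
                s = ALPHABET.charAt(n % 62) + s;
                n = Math.floor(n / 62);
            }
            return s;
        }

        encodeId(123456789); // "8M0kX", matching the example URL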

  • Selecting Dynamic ID JQuery [migrated]

    - by Vedran Wex Maricevic
    I need to select a dynamic id using jQuery, and once I select it I need to perform an action on it. This is the HTML that I have: <input id="content_photos_attributes_1355755712119_image" name="content[photos_attributes][1355755712119][image]" size="30" type="file"> Please note the id value: the text is always the same, but the number changes (I do not have control over that change). What I need to do is create an onclick handler for that element. This is what I have so far, and it is not working. <script type="text/javascript"> jQuery.noConflict(); jQuery("input[id *= 'content_photos_attributes_']").click(function() { alert("Image deletion is clicked"); }); </script> It really makes no difference whether I select that element by its id or by its name.
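
    One hedged guess: if the input is inserted into the DOM after this script runs (likely, given the timestamp-style number in the id), a direct .click() binding never sees it. A sketch using a delegated handler instead, assuming jQuery 1.7+ so .on() is available:

        jQuery(document).on('click', "input[id^='content_photos_attributes_']", function () {
            // Fires even for inputs added to the page later;
            // [id^=...] matches any id starting with the fixed prefix
            alert("Image deletion is clicked");
        });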

  • How will this affect my SEO ranking?

    - by dunc
    I run a fishkeeping website based on the WordPress (PHP) CMS. I've recently put a fairly complex "filter" in place which searches my content for mentions of fish species profiles and turns them into active links. For example, asdasd this is a test about abdomen to see if the caudal fin will work asdadasdas try again with abdomen and A. panduro and Apistogramma panduro ...becomes asdasd this is a test about abdomen to see if the caudal fin will work asdadasdas try again with abdomen and <a href="/?p=1703" class="link_species">A. panduro</a> and <a href="/?p=1703" class="link_species">Apistogramma panduro</a> On the rest of my website, the species are linked with pretty URLs such as /species/apistogramma-panduro/, but due to the way this filter works, the only information I have access to is the id of the post. As such, I'm using /?p=1703 or whatever the ID is. What I'd like to know is: how much will this affect my SEO rating/ranking? Will it be detrimental if I don't rewrite the function? Thanks in advance,
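
    If the filter runs inside WordPress, the pretty URL can usually be recovered from the same ID rather than emitting /?p=1703; a sketch in PHP assuming a standard WordPress install (ID and names taken from the example above):

        // get_permalink() resolves a post ID to its configured pretty permalink
        $url = get_permalink(1703); // e.g. http://example.com/species/apistogramma-panduro/
        echo '<a href="' . esc_url($url) . '" class="link_species">A. panduro</a>';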

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have Googled for an answer all day and still couldn't find one. I have a virtual subdomain, www.static.example.com, which is a mirror of www.example.com, meaning I have just one root folder for both the subdomain and the domain. I want to redirect crawlers to a different robots.txt file - robots_static.txt - whenever they see .static in the URL; in that file I will forbid indexing via a Disallow rule. I want to do this because I have duplicate content in Google's search results: the subdomain shows exactly the same content as the main domain. Does anyone know how I can make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks.

    EDIT: This is my .htaccess file:

        ##
        # @package    Joomla
        # @copyright  Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license    GNU General Public License version 2 or later; see LICENSE.txt
        ##

        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On
        RewriteEngine On

        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
        Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
        Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        #
        FileETag none
        #
        ########## End - Remove Etags
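
    As a side check, one way to see which file is actually served for the static host, independent of the label in Webmaster Tools, is to request it with that Host header explicitly; a sketch using curl (host names as in the question):

        curl -s -H "Host: www.static.example.com" http://www.example.com/robots.txt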

  • Pixels - A cry for some insight

    - by CarrotFile
    I'm pretty new to web development and I'd love some clarification. Despite reading more than one book on the topic, I cannot seem to wrap my head around the pixel concept. I run into problems with this when trying to use CSS and pixel units for designs that fit different screen sizes. To my understanding, a pixel is the most basic unit a monitor uses to compose an image on the screen. So if my resolution is 800 by 600, everything on my screen is rendered using those 800*600 basic building blocks. If I were to increase my screen resolution, three things would occur: A. The basic image building block (the pixel) would shrink in size B. The pixels would move closer together C. Well, more pixels would now be available All of these combined lead to a sharper (depending on the viewing distance) and more detailed image. So far so good. Here is where I start getting lost: to my knowledge, a pixel is not a physical, real object; monitors are not embedded with a fixed few thousand pixels. I am drawn to this conclusion because anyone can change their screen's resolution, making a pixel on the screen bigger or smaller and adding to or subtracting from the total number of pixels on screen. On top of that, I have heard that different monitors have different pixel densities, for example Apple's Retina displays. Taking all of the above as my knowledge base, these are my questions: If a pixel has no constant real-world size, why does comparing different pixel densities matter? Each screen company could define its own pixel concept and declare the higher density. What does a bigger pixel density mean? Say we take two screens with the same physical dimensions but different pixel densities: am I to assert that the main difference would be the higher-density screen being able to display a higher maximum resolution? Or am I to assert that, given the same resolution on both monitors, the higher-density one would display a sharper, smaller image? If a pixel is not a fixed size within one monitor, is it a fixed size between the same resolution on two different monitors? For example, would two different monitors, set to the same resolution, be composed of pixels of the same size and quantity? I'd love some help (:

  • Where to prepare (legal) "embedded" documents for the web [on hold]

    - by WHITECOLOR
    Say I need to have a terms-of-use document on a site, placed as marked-up HTML. Where is it better to prepare this document initially if it is going to be published on the web? Considering that such documents are prepared and changed by lawyers, it is not very practical for them to work in a text editor, and it is not very efficient to manually process the content for the web every time it changes. For now my solution is: prepare the document text in Google Docs, keeping a simple structure (title, ordered list of features, simple paragraphs), save it as HTML (which will contain some unnecessary markup), and then use a custom conversion tool (a script) to convert the HTML saved from Google Docs into simple HTML and inject the new version into the site. What is common practice for this problem? What is the workflow for preparing and publishing such documents?
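
    For the conversion step, a ready-made tool may be enough instead of a custom script; a sketch assuming pandoc is installed (file names hypothetical), which sheds Google Docs' inline styling by round-tripping through a plain intermediate format:

        pandoc gdoc-export.html -f html -t markdown | pandoc -f markdown -t html -o terms.html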

  • What happens when you close an Adsense account?

    - by rakibtg
    I need to change my payee name. I asked in the Google AdSense product forum, and one of the top contributors replied: "You will have to close the account & apply again with using your real payee name. That's why they specifically state that the payee name needs to match the full name on your bank account." https://support.google.com/adsense/answer/47333?hl=en This makes sense, but I have a few questions because the support page does not contain enough detail to help me. My questions are: What happens when you close your AdSense account? If I apply again, what will the process be to regain my account? I mean, should I apply for a website again, and will the AdSense team then review and approve it? Is there any chance my application will be disapproved? What about current checks? I have two checks in hand, so will Google send those checks to me again with my new payee name? Has anyone experienced this problem? I asked on the Google forum but got no answer!

  • Is Azure Compatible with JPEG XR?

    - by Shawn Eary
    I just put an F#/MVC app into a Windows Azure solution as a Web Role. Before the migration, my JPEG XR (*.WDP) files were displayed on the client in IE9 without issue, via both my local and hosted sites. Now, after migration to Windows Azure, my JPEG XR files are displayed neither in my local Windows Azure compute emulator nor when deployed to http://*.cloudapp.net. Is there some sort of conflict between Windows Azure and (JPEG XR) *.wdp files? If so, what is the accepted best practice for overcoming it?
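
    IIS, which serves static files in an Azure web role, typically refuses to serve extensions that have no registered MIME type, so a missing mapping for .wdp is a plausible culprit. A sketch of the usual fix, registering the type in web.config (assuming this is indeed the cause):

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- Let IIS serve JPEG XR (.wdp) files -->
              <mimeMap fileExtension=".wdp" mimeType="image/vnd.ms-photo" />
            </staticContent>
          </system.webServer>
        </configuration>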

  • Domain name at different host to website

    - by Corbula
    I'm making a website for someone whose domain name and current website are hosted at Fasthosts. I've built them a new website hosted at a different host, unlimitedwebhosting. The site I've made them is in a directory like this: www.mysite.com/dev/0002 So Fasthosts: is the registrar for the domain name, and also has all of the email addresses and their current site. unlimitedwebhosting: has the new site in a subdirectory, like .com/dev/0002. Is it possible to keep the domain name and email addresses hosted at Fasthosts, have the new website hosted in my unlimitedwebhosting account, and somehow point the domain at the new website?
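
    One common arrangement, sketched below with invented values, is to leave registration and email at Fasthosts and point only the web traffic at the new host; the new host must also be set up to serve the domain (e.g. as an addon domain mapped to the /dev/0002 folder):

        ; hypothetical zone kept at Fasthosts
        mysite.com.        IN A      203.0.113.10            ; unlimitedwebhosting web server IP
        www.mysite.com.    IN CNAME  mysite.com.
        mysite.com.        IN MX 10  mail.fasthosts.example. ; email stays at Fasthosts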

  • Mediawiki hosting? [closed]

    - by Oatman
    Possible Duplicate: How to find web hosting that meets my requirements? What is the best MediaWiki hosting provider? I want a fairly simple, reliable wiki attached to a site's subdomain (wiki.mysite.com). I'd prefer it as a service, with updates handled for me - I don't want to see any code! I imagine I'll change my DNS to point to the provider and pay a few bucks a month. Nice and simple. There seem to be a few providers who offer this; which have you had success with?

  • Link to pages on site without .html extension appearing in browser?

    - by Anime163
    I've modified my .htaccess file to allow access to HTML files without having to include the extension on the end, for example: www.mysite.com/document directs to www.mysite.com/document.html However, when I want to link to pages within my site using something like <a href="page.html"></a>, I still get the .html appearing in the URL. So can I exclude the extension and write the link as <a href="page"></a>, so that the extension doesn't appear in the browser? Or is there a better way to do it?
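
    Assuming the existing rule internally rewrites extensionless URLs to the matching .html files, links can simply omit the extension; a typical sketch of such a rule, for comparison with the real .htaccess:

        RewriteEngine On
        # Serve /document.html when /document is requested and no such file or directory exists
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}.html -f
        RewriteRule ^(.+)$ $1.html [L]

    With that in place, writing <a href="page"> is the normal approach, and the .html never shows in the address bar.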

  • Caching preventing users seeing site updates

    - by Timmeh
    I'm experiencing a caching issue I can't explain, and it is happening across browsers, IPs and ISPs. If a user force-refreshes, they see the new content; if they then refresh or return to the page, the old one displays. I've tried using headers via PHP such as header( 'Expires: Sat, 26 Jul 1997 05:00:00 GMT' ); header( 'Last-Modified: ' . gmdate( 'D, d M Y H:i:s' ) . ' GMT' ); header( 'Cache-Control: no-store, no-cache, must-revalidate' ); header( 'Cache-Control: post-check=0, pre-check=0', false ); header( 'Pragma: no-cache' ); laid out correctly at the very beginning of the file, but the problem persists. A pan-ISP proxy is unlikely. Suggestions?
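
    One way to see which headers actually reach a client, and whether anything between the server and the browser rewrites or caches the response, is to inspect the raw reply; a diagnostic sketch using curl (URL hypothetical):

        curl -sI http://www.example.com/page.php | grep -iE 'cache|expires|pragma|age'

    An Age: header, or Cache-Control values different from the ones set in PHP, would point at an intermediate cache rather than the browsers.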

  • How should I handle search engines auto-correcting the spelling of a site's name?

    - by Nathan G.
    A client's site and company is called 'Tranin Communications' (Tranin is her last name). It ranks well in searches for her name but rather poorly in searches for the name of her site/company. I realized that this is largely due to* search engines (Google especially) assuming that the query was misspelled and automatically including results for both 'train communications' and 'communications training'. Both of those queries yield many high-ranking sites that completely drown out hers. Sometimes Google even shows results for 'communications training' instead of 'tranin communications', hiding her site altogether. Is there a way to report an incorrect auto-correction to Google, or something I can do to discourage this behavior (e.g. a meta tag)? My searches have come up cold; any suggestions would be appreciated. *I've come to this conclusion because her site ranks very highly when the same queries are put in quotes.

  • Next lowest value in MySQL Database [migrated]

    - by Justin Edwards
    SELECT * FROM `experience` WHERE `reqexp` <> '4793' ORDER BY 'lvl' DESC LIMIT 1 Here is what I want to do. I am making an online game for a client and need a MySQL query that takes an arbitrary value and finds the level associated with that amount of experience. In this case, I need to find the next value lower than 4793 that already exists in the database, so I can determine the player's appropriate level. Any ideas?
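
    The posted query has two likely bugs: <> means "not equal" rather than "less than", and the single quotes around 'lvl' make it a string literal, so the ORDER BY sorts by a constant. A sketch of what is presumably intended (assuming reqexp rises with level, as the question implies):

        SELECT *
        FROM `experience`
        WHERE `reqexp` < 4793
        ORDER BY `reqexp` DESC
        LIMIT 1;

    This returns the single row with the highest reqexp still below 4793, i.e. the player's current level.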

  • In terms of SEO, is it better to have a URL broken down by folder, or with dashed names?

    - by VictorKilo
    I am creating a friendly URL interpreter for my website. I have read dozens of similar topics on this site, but none that seem to address my particular situation. What I want to know is whether it's better to have: A well broken-down URL where each category is represented by a folder domain.com/1036/OR/Lane/Lowell/Wetleau-Subdivision -OR- A URL which groups all of the categories and terms together domain.com/1036/Wetleau-Subdivision-Lowell-OR-Lane I am asking only in terms of what is best for SEO, not necessarily human readability. My thinking is that it may be better to group them all together as in the second example, my reasoning being that all of those terms represent the page and are more likely to draw a result. I am a complete SEO newbie though, and I crave some expert guidance. Thank you in advance for any help given.

  • Will Google penalize my website if I hide the H1 tag?

    - by mickburkejnr
    I read an article today in which the author stated that if you put keywords on a page but then hide them with CSS, Google will penalize your site. This makes sense. It got me thinking, though, about my own technique when I build a website. If, for example, the logo contains the name of the website, I tend to put the name of the website in an H1 tag and then hide the tag. I don't know why I do it; I've always done it. I also include any text held in an image in the alt attribute of the img tag. But because I am hiding the H1 tag, does this leave me open to Google penalizing the website for hiding this one tag?
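
    For what it's worth, the pattern most often flagged is display:none-style hiding; a common alternative that keeps the H1 in the document while showing the logo image is classic image replacement. A sketch (class name and sizes hypothetical):

        <h1 class="logo">Example Site Name</h1>

        .logo {
            background: url(logo.png) no-repeat;
            width: 200px;
            height: 60px;
            text-indent: -9999px;  /* slide the text off-screen instead of removing it */
            overflow: hidden;
        }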

  • Determining if a visitor left your server

    - by Jeepstone
    We have an Apache server running a PHP website; the site is an e-commerce shop. We currently use Barclays as the payment provider, but we see a lot of customers drop out at the point at which we transfer them to the payment gateway (hosted by Barclays). I can see specific instances in the shop where orders have been created but not paid for, or have failed, but I need to ascertain whether the user definitely left our server (or just failed to reach Barclays). Is there anything in the server/access logs that states when a user transferred to a different domain?
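
    Access logs only record requests to your own server, so the closest built-in signal is the last request each session made before going quiet. Alternatively, the handoff itself can be made loggable by routing it through a small script; a PHP sketch (gateway URL, parameter, and log path all hypothetical):

        <?php
        // gateway.php - record the handoff, then send the customer on to the gateway
        $order = isset($_GET['order']) ? $_GET['order'] : 'unknown';
        error_log(date('c') . " order=$order handed off\n", 3, '/var/log/shop-handoff.log');
        header('Location: https://payment-gateway.example/pay?order=' . urlencode($order));
        exit;

    A logged handoff with no later return request means the customer was at least sent towards Barclays before disappearing.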

  • Should I Use WordPress Category Archives or Regular Pages When Considering SEO?

    - by user1151640
    I've built a WordPress site based on posts and category archives (no pages). The menu links to different category archive pages, each with a description, an image, and the relevant posts. Now that almost everything is finished, I've started to worry and wonder whether that was a good decision from an SEO standpoint. Will Google consider category archives a bad choice for sitelinks compared to regular pages?

  • Redirecting 2 or more domains to same hosting server

    - by mtk
    I have the domains A.com, A.co.in and A.in, purchased from site X. I have a hosting space/account purchased from site Y, which has provided me with two DNS entries (nameservers) to be set in the account at the site from which I purchased the domains. I have successfully changed the DNS entries for A.com to these two nameservers, and I can see my index.html page when I hit A.com. Problem: Along the same lines, I have changed the DNS entries for A.co.in and A.in, but hitting those sites in a browser gives no response, just the browser's 'site not found' page. Please let me know how to set this up so that when I hit any of the domains, the website is rendered from the hosting server. What am I doing wrong here? Note: It has been more than 3 days since changing the DNS entries, so I don't think this is the DNS propagation problem I have heard about from some people. Please provide some detailed explanation, as I am very, very new to this. This is my first hosting ;) -Thanks
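
    A quick way to confirm whether the nameserver change actually took effect for the two .in domains is to query the delegation directly; a sketch using dig (domain names as in the question):

        dig +short NS A.co.in
        dig +short NS A.in

    If these don't return site Y's two nameservers, the delegation never happened at the registrar; if they do, the hosting account probably just needs the extra domains added (as parked or addon domains) so the server answers for them.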

  • Which Adult Ads Service is best / highest paying [closed]

    - by shamittomar
    I have a sex education and sexual health website. Evidently, I cannot place Google AdSense or AdBrite advertisements, as they disallow mature content and even anything remotely related to it. Now, I want to know what other options I have for showing ads. I do NOT want to place very obscene or nude ads, but I would like to have some kind of ads on the website to make it sustainable. So, what options do I have? Which adult advertising network gives the highest payouts?

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good-quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings in some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago, with all the basics done: URL rewriting, an XML sitemap submitted to Google, and most importantly, good-quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low-quality links from link farms/directories. My question is what the correct way to deal with this is. Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have the links removed. I've already submitted a request to Google asking for the penalty to be removed, as we're trying to clean up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent-quality, relevant links. Is there anything more I can do?

  • Setting Up CDN...beginner questions

    - by RRN
    I am using MediaTemple web hosting, which uses Edgecast's CDN network. I am planning to sign up for the CDN service, and I have read through the guide, but I am confused about the 'Update your code' section: Does the CDN apply to my website automatically (without editing the HTML) after I update the DNS zone file? How can I control which files go through the CDN and which do not? Also, I need to make sure it won't affect my Google Analytics results.
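
    With an origin-pull CDN of this kind, the usual pattern is that nothing is served from the edge until asset URLs reference the CDN host; a hypothetical sketch:

        <!-- served from your own server, as before -->
        <img src="/images/photo.jpg">

        <!-- served through the CDN (host name invented; typically a CNAME in your zone) -->
        <img src="http://cdn.mydomain.com/images/photo.jpg">

    Only files referenced via the CDN host go through the CDN, which is also how you control what it serves; the page HTML itself, including the Google Analytics snippet, still comes from the origin.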
