Search Results


  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and its associated caption appears in several galleries. For instance, a photo of a goldfinch taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions for dealing with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), every photo on my site would be found exactly once. However, Google's content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?
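
    If blocking via robots.txt feels risky, one alternative is a "noindex,follow" robots meta tag on the duplicate galleries, which asks engines not to index those pages while still following the links they contain. A minimal PHP sketch, assuming each gallery page can set a hypothetical $gallery_type variable before a shared header include:

        <?php
        // header.php (sketch): suppress indexing of duplicate galleries.
        // $gallery_type is a hypothetical flag set by each gallery page;
        // "noindex,follow" keeps the page out of the index but still lets
        // crawlers follow the photo links it contains.
        if (isset($gallery_type) && $gallery_type !== 'species') {
            echo '<meta name="robots" content="noindex,follow">' . "\n";
        }
        ?>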

  • How to determine the amount to spend per phrase on Adwords research?

    - by Anonymous -
    My company would like to start a PPC advertising campaign. Whilst I understand the concept and how to set everything up from a technical point of view, this is something I've never done before. Logically, we'd like to test a wide range of keywords that we think would lead to conversions, which we've put together through brainstorming and with some help from Google's External Keyword Tool. A sub-question whilst I remember - am I correct in thinking that, in Google's keyword tool, keywords with low competition yet high monthly searches are good picks, since there will be fewer advertisers and our bid per click will therefore be lower? Is there a common benchmark or process for doing a round of tests with keywords? Should we wait for 100 clicks on each keyword, see which ones have led to the most sales (or rather, sales that are sustainable given the cost per click of that keyword), then drop the ones which aren't converting and put that budget onto the converting keywords? We realistically have a few hundred keywords/phrases we would like to test, but spending $100 per keyword/phrase works out as quite an expensive test. It would be nice to be able to spend $5-10 per phrase, but I don't think the sample size would be large enough to determine anything reliable. Another approach might be to set up all the keywords, and keep those that bring the most sales within x hours/days. What is the common procedure with things like this? I know there is a plethora of companies that specialize in exactly this, but this is something we anticipate doing a lot in the future, so it would make sense to do it in-house if at all possible.

  • Google Analytics Visitors drop-off for certain region of site only

    - by crmpicco
    I have an issue with the tracking on my site where I have seen a dramatic drop-off of visitors from a certain region. I have four regions on my site at the moment: UK, EU, US and RoW (Rest of the World). The UK, EU and US regions are unaffected; only the RoW region suffers this drop-off. I have included a screen shot below from my GA account, which shows this effect. My GA code, which is included on every page on the site, is below. I have changed the UA account number intentionally for this example. There have been no changes made to the GA account or the tracking code in the live environment for some considerable time, but for some reason I am seeing the drop-off for this region only. In the code below I am not tracking page views on certain pages, as I have event tracking set up for those pages.

        <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-18721873-5']);
        _gaq.push(['_setCookiePath', '/row/']);
        if (typeof(p_page) != 'undefined') {
            // do nothing if user is on above pages
            // N.B. there are a series of conditions in this if statement
            // checking that we are not on a particular page
        } else {
            _gaq.push(['_trackPageview']);
        }
        </script>

  • Guest blogging, reciprocal links & nofollow

    - by sam
    When I write a guest post for another site and link back to myself within the post, that counts as an inbound link. But if I then write something on my own blog like "have a look at this post I wrote for _", linking to the guest post, the links would be reciprocal, correct? Thus cancelling each other out. If I made the link back to my article a nofollow link, would I still get the link juice? And if I write a guest post and that site later wants to write a guest post on my site, what's the best way to handle it, since won't these two also cancel each other out and have no effect?

  • SEO issue - External links rel="nofollow" or NOT!

    - by Mary Melody
    Previously I created a website for promo codes and coupons. It has hundreds of external links to the retailers' websites, and I used the rel="nofollow" attribute on them. But my site's SEO ranking was very bad, especially on Google. So I removed the rel="nofollow" attribute, but there was no improvement. The only difference between this site and my other sites is the external links - my other sites rank well on Google. Now I'm creating a site for reviews, so I'm in a similar situation again. I just want to know: how does SEO interact with external links, and what might have happened in my case?

  • Should I set NOINDEX header for my JS, CSS and image files?

    - by Yoga
    Is there any harm if my site sends NOINDEX headers for all my static assets? For image files, I mean the valueless ones, e.g. background images, button images, etc. Update: more background information. My concern comes from Google recently saying that they also execute JS and might fetch content via Ajax. So, for example, if I send noindex for my jQuery script and Google is then unable to use it to load Ajax content, I suppose that is not good for my site's SEO, right?
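
    For context, a NOINDEX header on static files is normally sent as the X-Robots-Tag response header. A minimal Apache sketch, assuming mod_headers is enabled and these extensions are the assets in question:

        # .htaccess sketch: mark common static assets as noindex.
        <FilesMatch "\.(png|gif|jpe?g|css|js)$">
            Header set X-Robots-Tag "noindex"
        </FilesMatch>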

  • What would be the most efficient use of e-mail addresses for ecommerce

    - by user18323
    I am not sure if this is exactly the right section. What do you find the most efficient, clear and reachable set of e-mail addresses to use for customers? I thought the standard 'sales' and 'support' might work, but it can look a little samey. Do users respond better when e-mails are answered by staff with named e-mail addresses, such as [email protected], rather than a generic one? I am currently developing a small-scale ecommerce website to gain some hands-on insight into web retail. Any additional advice would be brilliant.

  • Is there a limit of emails/pictures per Gravatar account?

    - by Steve Taylor
    I'm building a site to connect patients to doctors. Each doctor will have a profile picture. I'm quite happy to manually maintain the profile pictures as there won't be that many doctors nor will they have a need to change their picture very often, if at all. I thought of using Gravatar to host all these profile pictures. The idea is to create a single Gravatar account then keep adding email addresses to it in the form [email protected] and associating each one with a new image. Does anyone know, however, if I will run into any per-account limit? If so, it wouldn't be feasible because I would end up with a bunch of Gravatar accounts instead of just the one.
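
    For reference, Gravatar serves images keyed on the MD5 hash of the trimmed, lowercased e-mail address, so each profile only needs an address string. A minimal PHP sketch (the helper name and sample address are hypothetical):

        <?php
        // Build a Gravatar image URL for a given e-mail address.
        function gravatar_url($email, $size = 80) {
            $hash = md5(strtolower(trim($email)));
            return 'http://www.gravatar.com/avatar/' . $hash . '?s=' . (int) $size;
        }

        echo gravatar_url('doctor.smith@example.com', 128);
        ?>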

  • Parse text file on click and display

    - by John R
    I am thinking of a methodology for rapid retrieval of code snippets. I imagine an HTML table with a setup like this:

                 one        two        ...   ten
        one                 oneTwo()   ...   oneTen()
        two      twoOne()              ...   twoTen()
        ...
        ten      tenOne()   tenTwo()   ...

    When a user clicks a function in this HTML table, a snippet of code is shown in another div tag or perhaps a popup window (I'm open to different solutions). I want to maintain only one PHP file, named utilities.php, that contains a class called 'util'. This file & class hold all the functions referenced in the above table (it is also used on various projects and is functional code). A key idea is that I do not want to update the HTML documentation every time I write or update a function in utilities.php. I should be able to click a function in the table and have PHP open the utilities file, parse out the appropriate function and display it in an HTML window. Questions: 1) I will be coding this in PHP and JavaScript, but am wondering if similar scripts are available (for all or part) so I don't reinvent the wheel. 2) Quick & easy Ajax suggestions are appreciated too (I'll probably use jQuery, but am rusty). 3) What methodology would work for parsing the functions out of the utilities.php file (I'm not too good with regex)?
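
    On question 3, a brace-counting extractor is often simpler than a pure regex. A minimal sketch, naive about braces inside strings or comments, so treat it as a starting point rather than a finished parser:

        <?php
        // Return the source of one named function/method from a PHP file.
        function extract_function($file, $name) {
            $src = file_get_contents($file);
            // Locate the function header.
            $pattern = '/function\s+' . preg_quote($name, '/') . '\s*\(/';
            if (!preg_match($pattern, $src, $m, PREG_OFFSET_CAPTURE)) {
                return null;
            }
            $start = $m[0][1];
            // Walk forward from the opening brace, tracking nesting depth.
            $depth = 0;
            for ($i = strpos($src, '{', $start); $i < strlen($src); $i++) {
                if ($src[$i] == '{') {
                    $depth++;
                } elseif ($src[$i] == '}' && --$depth == 0) {
                    return substr($src, $start, $i - $start + 1);
                }
            }
            return null;
        }

        // Hypothetical Ajax endpoint: snippet.php?fn=oneTwo
        // (in production, whitelist $_GET['fn'] before using it)
        echo '<pre>' . htmlspecialchars(extract_function('utilities.php', $_GET['fn'])) . '</pre>';
        ?>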

  • How to combat negative SEO?

    - by Perturbed
    Someone has decided to create a hate blog on a hosted blogging service (wordpress.com) that bashes my company. The blog contains posts that flame me and my service and contain outright falsehoods about how I run my business. Without going into details, I'm pretty sure the author of the blog is an owner of a competing service (although it is written completely anonymously). Frankly, I'm not sure whether the content would qualify as defamation, but I really don't like the idea of spending money on a lawyer just to attempt to prove it. I also have no interest in retorting or even replying to the blog in any way - I feel this would lend legitimacy to the ludicrous claims that have been posted. Unfortunately, whoever wrote the blog was pretty smart about using the keywords people commonly search for my service with. Because my customer base is relatively small and local, our PageRank is not incredibly high. As a result, when someone Googles our business name, the blog is usually within the top five results (thankfully it's never above the business' actual website, but it's usually within eyeshot). It's incredibly frustrating to hear from customers who have seen the link (luckily, most of the time they think the author is crazy). Is there anything I can do to combat this? Would it be worthwhile to set up my own wordpress.com blog, in an effort to outrank this one with a more active blog of my own? TL;DR: Someone made a hate blog using wordpress.com and it is now on the first page of my business' search results. What are my options?

  • php application deployment to redhat [closed]

    - by Subhash Dike
    I am from a .NET background, with most of my work in the Windows environment with IIS etc. I have been given a task to deploy a PHP application on a Red Hat Linux box. All I have are the credentials and the server IP of that box. Can someone explain how to get started creating a new website on Apache 2.0 on the Red Hat box and then putting the PHP code onto it? I do not have physical access to the server. I know it's a very basic question, but I am in the dark on these things. Pointers to documentation are fine too, or at least the analogy with IIS should help to a certain extent.
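
    As a rough orientation only: deploying a PHP site here usually means copying the files up over SSH, making sure the PHP module is installed, and adding a virtual host to httpd.conf. A sketch with hypothetical paths and names:

        # httpd.conf sketch (Apache 2.0); ServerName and paths are examples.
        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/html/myapp
            DirectoryIndex index.php
        </VirtualHost>

    Then something like scp -r ./myapp user@SERVER-IP:/var/www/html/ to upload, and service httpd restart to pick up the config, assuming you have shell access with those credentials.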

  • Source code not matching uploaded HTML file

    - by benhowdle89
    I'm not sure if this is the right place to ask, but I'm having a hugely frustrating problem with Coda and my website (I'm not sure which one is causing the issue). I'm using Coda to make changes to my website; Coda uses built-in FTP to save changes to your web page, so when you hit Save, it uploads the new file. I've been using Coda for months and never had a problem until now. I am making changes to the HTML of my index.php and hitting Save. It successfully uploads the file, but no changes are reflected in the source code in ANY browser. I even logged into cPanel on my website (i.e. www.example.com:2082) and looked at the file - the changes have been made successfully. But in the actual webpage's source code in the browser, no changes?? I have tried adding which made no difference. Interestingly, when I make changes to style.css the changes are instant. I have emptied the cache on all of my browsers, but I'm still having the issue. Does this sound like a Coda problem, or has anyone heard of such a thing?

  • Is there any good reason I would want my website to be framed?

    - by minitech
    I'm building a website that's not security-critical in any way at all, so having somebody put a page in an <iframe> is not particularly dangerous to its users. However, as my website doesn't have script plugins that will be used anywhere else, is there any reason why I shouldn't just apply: X-Frame-Options: Deny to every page on my website? Is there any valid reason for any other website to embed mine? I've seen plenty of content-stealing ones and attempts to hijack user accounts, but never an actual good usage of frames that's not an explicit feature of the website.
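
    For reference, the header can be sent site-wide from Apache with one line (assuming mod_headers is enabled); a per-page PHP alternative is shown as a comment:

        # .htaccess / httpd.conf sketch: deny framing on every response.
        Header always set X-Frame-Options "DENY"
        # Per-page PHP alternative: header('X-Frame-Options: DENY');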

  • Is this safe? <a href=http://javascript:...>

    - by KajMagnus
    I wonder if href and src attributes on <a> and <img> tags are always safe w.r.t. XSS attacks if they start with http:// or https://. For example, is it possible to append javascript: ... to the href or src attribute in some manner so as to execute code? This is disregarding whether or not the destination page is e.g. a phishing site, or whether the <img src=...> triggers a terribly troublesome HTTP GET request. Background: I'm processing text with Markdown and then sanitizing the resulting HTML (using Google Caja's JsHtmlSanitizer). Some sample code in Google Caja assumes all hrefs and srcs that start with http:// or https:// are safe - I wonder if it's safe to use that sample code. Kind regards, Kaj-Magnus
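
    One way to make the prefix assumption explicit is to parse the scheme rather than string-match it. A minimal PHP sketch (the function name is hypothetical):

        <?php
        // Accept an href only when its scheme parses as http or https.
        function is_safe_href($url) {
            $scheme = parse_url($url, PHP_URL_SCHEME);
            return in_array(strtolower((string) $scheme), array('http', 'https'), true);
        }

        var_dump(is_safe_href('http://example.com/page'));  // bool(true)
        var_dump(is_safe_href('javascript:alert(1)'));      // bool(false)
        ?>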

  • Accented characters representation in the URL

    - by Dan
    We support various languages on our website, including Spanish, French and Swedish. For now, the links on the site are NOT encoded before being sent to the browser: we send the real accented characters where they exist (i.e. href="www.example.com/héllo.html") and not their hex representation. This works and looks good in all browsers, including Chrome, FF and IE. However, we care a great deal about SEO. We got a tip that encoding the links before sending them to the browser (so instead of linking to http://www.example.com/héllo, we would link to http://www.example.com/h%E9llo) will improve the way search engines 'understand' the links and the keywords in the URL. This involves some work on our side, so we wanted to know if there's truth in that tip, but we couldn't find anything addressing the issue.
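
    If you do decide to encode, note that the percent form depends on the byte encoding of the page: UTF-8 and Latin-1 produce different escapes for the same letter. A small PHP sketch:

        <?php
        // rawurlencode() escapes the bytes it is given, so the result
        // depends on the source encoding of the string.
        $slug = 'héllo';           // stored as UTF-8 bytes here
        echo rawurlencode($slug);  // h%C3%A9llo (UTF-8)
        // The %E9 form in the example above is the Latin-1 byte for "é".
        ?>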

  • Rewrite rule to show as directory using .htaccess

    - by chanchal1987
    I want to implement a rewrite rule in my .htaccess file to show a specific URL as a directory of my server. This is the rule I wrote:

        RewriteRule ^(.*)/$ ?page=$1 [NC]

    This rewrites URLs like www.mysite.com/abc/ to www.mysite.com/index.php?page=abc. But if I request www.mysite.com/abc (no trailing slash) it throws a 404 error. How can I write a rewrite rule which matches both www.mysite.com/abc and www.mysite.com/abc/? Edit: my current .htaccess file (after the 3rd revision of Litso's answer) looks like this:

        ##
        ErrorDocument 401 /index.php?error=401
        ErrorDocument 400 /index.php?error=400
        ErrorDocument 403 /index.php?error=403
        ErrorDocument 500 /index.php?error=500
        ErrorDocument 404 /index.php?error=404
        DirectoryIndex index.htm index.html index.php
        RewriteEngine on
        RewriteBase /
        Options +FollowSymlinks
        RewriteRule ^(.+)\.html?$ $1.php
        RewriteCond !-d
        RewriteRule ^(.*)/$ ?page=$1 [NC,L]
        RewriteCond %{REQUEST_URI} !index.php
        RewriteRule ^(.*)$ ?page=$1 [NC,L]
        ##
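
    One common pattern, sketched below and untested against the full file above, makes the trailing slash optional in a single rule and skips real files and directories first:

        # Sketch: match /abc and /abc/ with one rule; the /? makes the
        # trailing slash optional, and the conditions leave real files
        # and directories alone.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([^/.]+)/?$ ?page=$1 [NC,L]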

  • How to create email request forms and auto-responder?

    - by mfc
    I'm building a site in CSS, and I'm pretty new to any code or script other than HTML and CSS. I'm trying to create a landing page that requires an email address from visitors, and to set up an auto-responder that sends to the newly submitted address. This would also be a signup for email newsletters. I have some idea how to create the form and have looked into it a bit, but I don't know how to make it a requirement for getting past the landing page into the actual website, or how to set up the auto-responder. Any help would be much appreciated, or if someone knows of a source that explains how to do this in particular it would be wonderful. I tried lynda.com, but everything is so general and I can't seem to find info on exactly how to do this, even though I know it's quite common. Thanks!
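
    A minimal PHP sketch of the gate plus autoresponse - field names, paths and addresses are hypothetical, mail() assumes the host has a working mail setup, and a newsletter service's API would be sturdier in production:

        <?php
        // landing.php (sketch): require an e-mail before letting the
        // visitor through, and send a simple autoresponse.
        if ($_SERVER['REQUEST_METHOD'] == 'POST') {
            $email = isset($_POST['email']) ? trim($_POST['email']) : '';
            if (filter_var($email, FILTER_VALIDATE_EMAIL)) {
                // TODO: store $email in your newsletter list here.
                mail($email, 'Welcome!', 'Thanks for signing up.',
                     'From: news@example.com');
                header('Location: /home.php');  // into the actual site
                exit;
            }
        }
        ?>
        <form method="post" action="">
            Your e-mail: <input type="text" name="email">
            <input type="submit" value="Enter site">
        </form>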

  • Redirect public traffic to a different subfolder, while local traffic remains unchanged

    - by ecnepsnai
    I would like local (intranet) HTTP traffic to go to the /var/www/html folder while any public traffic goes to the subfolder /var/www/html/public. I've tried this configuration, with some variation, in httpd.conf:

        <VirtualHost PRIVATE-IP>
            DocumentRoot /var/www/html
            ServerName ecn
            ErrorLog /var/www/logs/error/private
            CustomLog /var/www/logs/access/private common
        </VirtualHost>

        <VirtualHost PUBLIC-IP>
            DocumentRoot /var/www/html/public
            ServerName PUBLIC-DOMAIN-NAME
            ErrorLog /var/www/logs/error/public
            CustomLog /var/www/logs/access/public common
        </VirtualHost>

    PUBLIC-IP, PRIVATE-IP and PUBLIC-DOMAIN-NAME are all replaced with the correct values in the actual document. The problem is that local traffic can browse fine, but remote traffic is directed to the root folder and gets a 403 (because I have that folder blocked off through my .htaccess file). If I append /public to the URL it works fine.

  • Why do 410 pages show as errors in Google Webmaster Tools?

    - by ElHaix
    To remove links from our site, we return a 410 code for the links we want removed and show "The page you requested was removed." In Webmaster Tools, I see all the 410 pages in Crawl Errors / Not Found. I'm worried that because they appear in Crawl Errors they could be negatively affecting our SEO rankings. Is that the case, and if so, should I change the return code from 410 to something else?
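
    For reference, this is roughly how a PHP page would return the 410 described above (a sketch, not the poster's actual handler):

        <?php
        // Removed-page handler (sketch): send 410 Gone and stop.
        header('HTTP/1.1 410 Gone');
        echo 'The page you requested was removed.';
        exit;
        ?>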

  • No Obvious Answer - Query-Strings and Javascript

    - by nchaud
    Say I have a main page, /my-site/all-my-bath-soaps, which lists all my products. It has a search-filter text box that uses JavaScript to filter the products shown on the page (the URL doesn't change as you filter). Now, from many other parts of the site I want to navigate to this products page and see specific products. E.g. <a href="/my-site/all-my-bath-soaps?filter='Nivea-Soap'"> will go to /all-my-bath-soaps and apply JavaScript filtering to show just that product and hide the DOM nodes for all the others. The problem is that if the user changes the text in the filter from 'Nivea-Soap' to 'Lynx', the JavaScript works fine and shows the new products, but the URL stays at ?filter='Nivea-Soap'. Is there anything I can do about this? Of course, I don't want to reload the page with a new query string every time the criteria change. It would be great to move the ?filter=... criteria into POST data instead - but how to do that with a link, I don't know...
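
    One middle ground, sketched below in JavaScript, is to rewrite the query string in place as the filter changes - history.replaceState where available, falling back to the hash - so the URL always reflects the current filter without a reload. Here applyFilter stands in for the existing show/hide logic:

        // Keep the visible filter in the URL without reloading the page.
        function onFilterChange(filter) {
            applyFilter(filter);  // assumed existing DOM filtering code
            var qs = '?filter=' + encodeURIComponent(filter);
            if (window.history && window.history.replaceState) {
                window.history.replaceState(null, '', qs);
            } else {
                window.location.hash = 'filter=' + encodeURIComponent(filter);
            }
        }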

  • Trouble with .htaccess redirection

    - by mike23
    I use this redirect rule to send users from www.domain.com/admin to www.domain.com/wp-admin on a WordPress site:

        RedirectMatch 301 \@admin http://www.domain.com/wp-admin

    The problem is that instead of redirecting to wp-admin/, it redirects to an article called "Administrators are awesome people" (slug: administrators-are-awesome-people). I can guess what is going on: WP sees that there is an article slug starting with "admin" and redirects to it, overruling my own rule. Is there a way to be more specific, like saying "redirect URLs whose path is exactly admin"?
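
    An anchored pattern is one way to say "exactly admin"; a sketch:

        # Only redirect when the path is exactly /admin (optionally with a
        # trailing slash); the ^ and $ anchors stop it from matching
        # /administrators-are-awesome-people.
        RedirectMatch 301 ^/admin/?$ http://www.domain.com/wp-admin/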

  • Webmaster tools showing 404 for non existent folder pages

    - by Jody
    Google Webmaster Tools is reporting some/many 404 URLs that don't exist on my site. The links are things such as domain.com/xyz/. That URL doesn't exist, but domain.com/xyz/index.html does. The "linked from" pages all show proper links to /xyz/index.html. The page without index.html DOES 404, but why is Google even trying these URLs if they are not linked to? My real question: is there a way to make Google stop attempting to load these pages, and ultimately remove them from the Crawl Errors report? Thanks.
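
    If /xyz/ is a real directory on disk, one low-effort option (a sketch, assuming Apache with mod_dir) is to let the bare directory URL serve the index file instead of returning 404, which would also clear the errors over time:

        # Serve index.html (then index.php) for requests ending in "/".
        DirectoryIndex index.html index.php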

  • Does redirecting old site's URLs to new site's front page hurt a page's ranking?

    - by Kaivosukeltaja
    An old site that is being rewritten needs to have its URLs redirected to the new site. There are a few hundred pages that may or may not have corresponding pages on the new site, probably with different slugs, and adding mappings manually would require more hours than we can spare. It was suggested that all old URLs be redirected to the new front page, but I remember reading somewhere that this incurs a penalty in page rank because it's what link farmers do. Is this true, or can we take the easy way out?

  • Object-based content management system

    - by Adam Maras
    I remember hearing within the last year or two about a content management system, either released or in development, that was centered on product/item information. I'm aware that several CMSes have this capability, but this particular one was built specifically for that task. I also remember it winning some sort of award or recognition for upcoming software products. However, I can't for the life of me remember what this CMS was called or who was developing it. Does anyone know what package I'm talking about?

  • Storing User-uploaded Images

    - by Nyxynyx
    What is the usual practice for handling user-uploaded photos and storing them in the database and on the server?

    For a user profile image:

        1. After receiving the image file from the user, rename the file to <image_id>_<username>
        2. Move the image to /images/userprofile
        3. Add the image filename to a users table containing profile details like first_name, last_name, age, gender, birthday

    For an image attached to a review written by a user:

        1. After receiving the image file from the user, rename the file to <image_id>_<review_id>
        2. Move the image to /images/reviews
        3. Add the image filename to a reviews table containing details like review_id, review_content, user_id, score

    Question 1: How should I go about storing the image filenames if the user can upload multiple photos for a particular review? Serialize them? Question 2: Or have another table, review_images, with columns review_id, image_id, image_filename, just for tracking images? Will doing a JOIN when retrieving the image_filename from this table slow down performance noticeably? Question 3: Should all the images be stored in a single folder? Will there be a problem when we have 100K photos in the same folder? Is there a more efficient way to go about this?
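
    On Question 2, a join table is the conventional shape; a sketch in MySQL-flavoured SQL, with names taken from the question and column sizes as guesses. A JOIN on an indexed review_id is typically cheap. For Question 3, a common trick is to shard files into subfolders keyed by the first characters of a hash of the filename, so no single directory ends up holding 100K files:

        -- One row per uploaded review photo (sketch).
        CREATE TABLE review_images (
            image_id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
            review_id      INT UNSIGNED NOT NULL,
            image_filename VARCHAR(255) NOT NULL,
            PRIMARY KEY (image_id),
            KEY idx_review_id (review_id)   -- keeps the JOIN fast
        );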
