Search Results

Search found 5380 results on 216 pages for 'webmasters'.

Page 144/216

  • Does SEO optimisation count on the responsive side of a site?

    - by Rick Donohoe
    I'm looking at making some SEO fixes, and at this point I'm sorting out the heading structure and keywords - H1s, H2s, etc. We have a site with a number of similar blocks, where one is always visible and one is hidden depending on the screen size. This is our method of making a single site responsive. Firstly, how does this technique affect SEO, and in general does the responsive side of a site matter at all to search engines? What I mean is: if the site has different content depending on screen size, which content would the search spider crawl?

    Read the article

  • Google nofollow, Disavow and Link Removal Requests

    - by PsychoDad
    I am the owner of http://www.YouReview.net and I constantly get requests from people asking me to remove links to their sites, threatening that otherwise they will disavow the links and I will face Google penalties. All of this is a bit frustrating because, first, I use nofollow on any link outside the YouReview.net domain, and second, I've never heard of Google penalizing a site for linking to other websites. My question is twofold: do disavowed links penalize the site that was disavowed, and does the "nofollow" attribute absolutely guarantee that the link is not followed and not counted for search engine ranking? Why don't more people know about nofollow?

    Read the article

  • How important is the uniqueness of your domain name?

    - by Corey
    I've finally come up with a domain name that I like and that is available. The name is nonsensical and doesn't translate into anything meaningful in any language, as far as I know. It's something like "FOOBARite". (Don't steal that!) I'm wondering about a few search issues. Results-wise, searching for it in Google currently returns about 15k results, none of which are relevant (dead Twitter pages, various unpopular online handles, and botched French translations). However, Google starts off with a spelling suggestion that removes a letter ("Did you mean: FOOBARit?"). That returns about 250k results for several different and unrelated websites/organizations by that name. One is a technology provider, another is a sign-language organization, another is the name of a font... None of them seem particularly popular, and there's not much activity on any of those pages. Anyway, the two names are pronounced differently; they're just a letter apart. Should I go with my idea, or is this one-letter variation going to cause me problems? If my site becomes ranked well enough, will Google's spelling suggestion go away? I don't want users to search for my site name and be told they've spelled it wrong.

    Read the article

  • Asterisk in URL?

    - by KajMagnus
    Are there any reasons I shouldn't use an asterisk (*) in a URL? Background: with asterisks, I could provide these nice and user-friendly (or what do you think??) URLs:

        example.com/some/folder/search-phrase*  - search for pages with names starting with "search-phrase", located in /some/folder/.
        example.com/some/**/*search-phrase*     - search for any page with "search-phrase" anywhere in its name.
        example.com/some/folder/*               - list all pages in /some/folder/ (rather than showing the /some/folder/index page).

    Read the article

  • Client website compromised, found a strange .php file. Any ideas?

    - by Kevin Strong
    I do support work for a web development company, and today I found a suspicious file called "hope.php" on the website of one of our clients. It contained several eval(gzuncompress(base64_decode('....'))) commands, which on a site like this usually indicates that it has been hacked. Searching Google for the compromised site, we got a bunch of results that link to hope.php with various query strings, each of which seems to generate a different group of SEO terms (of those search results, the second from the top is legitimate; all the rest are not). Here is the source of "hope.php": http://pastebin.com/7Ss4NjfA And here is the decoded version I got by replacing the eval()s with echo(): http://pastebin.com/m31Ys7q5 Any ideas where this came from or what it is doing? I've of course already removed the file from the server, but I've never seen code like this, so I'm rather curious about its origin. Where could I go to find more info about something like this?

    Read the article

  • 'Buy the app' landing page implementations: redirect or javascript popup?

    - by benwad
    My site (using Django) has an app that I'm trying to push - I currently have a piece of middleware that redirects the user to a page advertising the app if they're accessing the site on an iPhone, and then sets a cookie so that the user isn't bugged by the message every time they visit. This works fine; however, checking the page with the mobile Googlebot checker shows that the Googlebot gets stuck in the redirect (since it doesn't store cookies) and therefore won't index the proper content. So, I'm trying to think of an alternative implementation that won't hurt the site's Google ranking and won't have any other adverse effects. I've considered a couple of options:

        1. Redirect (the current solution), but don't redirect if the user agent matches the Googlebot's UA string. This would be ideal; however, I'm not sure whether Google likes its bot being treated differently from other users, and I'm afraid the site's ranking may be penalised if I go ahead with this.
        2. Use a JavaScript popup instead of a redirect. This would make sure the Googlebot finds the content it needs; however, I envision this approach causing compatibility issues with the myriad mobile devices/browsers out there, and it may affect page load time.

    How valid are these options? And is there a better way to implement this feature? I've tried researching this topic but surprisingly can't find any reputable-looking blog posts that explore it.

    Read the article

  • Covering Yourself For Copyrighted Materials [on hold]

    - by user3177012
    I was thinking about developing a small community website where people of a certain profession can register and post their own blogs (which include an optional photo). I then got to thinking about how people might use this: if they are given the option to add a photo, they are likely to use one they simply found on Google, another social network, or an existing online blog/magazine article. So how do I protect myself from getting a fine slapped on me and make any infringement purely the fault of the individual uploader? I plan on having an option where the user can credit a photo by typing in the original photographer's name and web link (optional), and on making them tick a check box stating that the post is their own content and that they have permission to use any images - but is that enough to cover myself? How do other sites do it?

    Read the article

  • Joomla sometimes messes up URLs, probably cache involved

    - by Bakaburg
    I've been having this problem for a while and I really cannot get the hang of it... Every once in a while my Joomla site messes up link URLs, so that, for example, something like this:

        http://www.sism.org/index.php?option=com_comprofiler&task=userslist&listid=4&Itemid=123

    becomes something like this:

        http://www.sism.org/index.php/component/k2/administrator/components/com_dump/assets/css/images/stories/inrilievo/sism/htm/index.php?option=com_comprofiler&task=userslist&listid=4&Itemid=123

    The new page has the right content, but no CSS or other linked resources load. Usually I solve the problem by deleting all the cache and turning it off and on again. Of course this is pretty annoying, especially for my association. Does anyone have any clue about this? Judging by the URLs, the components involved seem to be K2 and Jdump. Thanks

    Read the article

  • No Obvious Answer - Query-Strings and Javascript

    - by nchaud
    Say I have a main page, /my-site/all-my-bath-soaps, which lists all my products. It has a search-filter text box that uses JavaScript to filter the products shown on that page (the URL doesn't change as they filter). Now, from many other parts of the site I want to navigate to this products page and see specific products. E.g. <a href="/my-site/all-my-bath-soaps?filter='Nivea-Soap'"> will go to /all-my-bath-soaps and apply JavaScript filtering to show just that product, hiding the DOM nodes for all the other products. The problem is that if the user changes the text in the filter from 'Nivea-Soap' to 'Lynx', the JavaScript will work fine and show the new products, but the URL stays at ?filter='Nivea-Soap'. Is there anything I can do about this? Of course, I don't want to reload the page with a new query string every time they change the search criteria. It would be great to somehow move the ?filter=... criteria into POST data instead - but I don't know how to do that with a link...

    Read the article

  • SEO: 301 for a page which has no mirror path?

    - by Alex
    Hello, I just did a 301 redirect, and the domain and the pages that have a mirror file path are fine. But I have one directory which is not going to be part of the new site, and I don't know how to redirect the old files that were there. I need something like this: oldDomain/oldDir/file.php needs to redirect to newDomain/differentDir/file.php. Is that possible? What is the 301 redirect rule for that?

    Update: I just added this rule, as suggested by @Itai, and it didn't work:

        redirectMatch permanent ^/outdoors/trees/tanoak.php$ http://www.comehike.com/outdoors/trees/129/Tanoak

    Any idea why?
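
    For the general case asked about, something like the following might do it - a minimal sketch, assuming Apache with mod_alias, where oldDir, differentDir, and newDomain.com stand in for the real names (none of which are given in the question):

        # .htaccess on the old domain: permanently redirect everything
        # under /oldDir/ to /differentDir/ on the new domain, carrying
        # the rest of the path (e.g. file.php) along in $1.
        RedirectMatch 301 ^/oldDir/(.*)$ http://newDomain.com/differentDir/$1

    The rule quoted in the update looks syntactically plausible, so if it still fails it may be worth checking whether some other rewrite rule catches the request first, or whether an old redirect is cached in the browser.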

    Read the article

  • .htaccess rules to rewrite URLs to front end page?

    - by Dizzley
    I am adding a new application to my site at example.com/app. I want views at that URL to always open myapp.php, e.g.:

        example.com/app  -> example.com/app/myapp.php
        example.com/app/ -> example.com/app/myapp.php

    What's the correct form of rewrite rules in the .htaccess file? I've got:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /app/
        RewriteRule ^myapp\.php$ - [L]
        RewriteRule ^myapp.php$ - [L]
        RewriteRule . - [L]
        </IfModule>

    ...based on what the Wordpress front-end does. But all I see at example.com/app is a directory of files. :( (I put those rewrites at the top of my .htaccess file.) Any ideas?

    Update: What actually worked:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} ^/app(/.*)?$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule . /app/myapp.php [L]

    This is good because:

        1. Explicit or implicit calls to app/myapp.php work.
        2. example.com/app redirects to app/myapp.php
        3. example.com/app/ redirects to app/myapp.php
        4. example.com/app/subfunction redirects to app/myapp.php
        5. All other calls to example.com/otherstuff are untouched.

    Item 4 is Wordpress-like Front Controller pattern behaviour. I think that rule RewriteCond %{REQUEST_URI} ^/app.*$ [NC] needs refining, as it allows /app-oh-my-goodness etc. through too. Thanks for the answers.

    Read the article

  • Chrome refused to execute this JavaScript file

    - by TestSubject528491
    In the head of my HTML page, I have:

        <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script>

    When I load the page in my browser (Google Chrome v27.0.1453.116) and enable the developer tools, it says:

        Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.

    Indeed, the script won't run. Why does Chrome think this is a plain text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally I thought the periods in the filename might be throwing the browser off, so in my HTML code I changed all the periods to the URL escape sequence %2E:

        <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script>

    This still did not work. The only thing that truly works (i.e. the browser gives no error and the JS successfully runs) is if I download the file, upload it to a local directory, and change the src value to the local file. I'd rather not do this, since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually a JavaScript type?

    Read the article

  • Terms and conditions for a simple website

    - by lonekingc4
    I finished building a website for an online chess club which I am a member of. This is my first website. The site has a blogging feature, so members can log in, write blog posts, and comment on other posts. Membership is limited to users of an online chess site (freechess.org); any member of that site can join this site as well. I was wondering: do I need to put up terms and conditions for my new website? If so, is there a template I could follow? I searched and found some, but they are all for big sites that have e-commerce, etc.

    Read the article

  • Is there a way to disallow crawling of only HTTPS in robots.txt?

    - by David Wilkins
    I just realized that Bingbot is crawling my company's website's pages over HTTPS. Bing already crawls the site over HTTP, so this seems frivolous. Is there a way to specify Disallow: / for HTTPS only? According to Wikipedia, each protocol has its own robots.txt, and according to Google's robots.txt specification, the robots.txt applies to HTTP and HTTPS. I don't want to Disallow: / for Bing entirely, just over HTTPS.
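
    If both protocols are served from the same document root, one common workaround is to serve a different robots.txt for HTTPS requests - a sketch, assuming Apache with mod_rewrite, where robots_https.txt is a second file invented here for illustration:

        # Serve robots_https.txt (which would contain "User-agent: *"
        # followed by "Disallow: /") whenever robots.txt is requested
        # over HTTPS; plain-HTTP requests get the normal robots.txt.
        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^robots\.txt$ robots_https.txt [L]

    This keeps crawling over HTTP untouched while telling bots to stay out of the HTTPS copy of the site.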

    Read the article

  • How to support tableless columns with WYSIWYG editor?

    - by Andy
    On the front page of a site I'm working on there's a small slideshow. It's not for pictures in particular - any content can go in - and I'm currently setting up the editing interface for the client. I'd like to be able to have one/two/more columns in the editable area, ideally via CSS. Does anyone know of a WYSIWYG editor that supports this? I'm using Drupal (and would prefer not to involve Panels, as it would take a bit of work to keep content entry a streamlined workflow), in case that matters to anyone. To start the ball rolling, one way would be to use templates. I know CKEditor supports templates, and it looks like TinyMCE might have something similar. I don't know how well these work with tableless columns (the CKEditor homepage demo uses tables to achieve its two-column effect). Holding out for a cool solution!

    Read the article

  • How to do a 3-tier architecture using PHP [closed]

    - by Ric
    I have a requirement from a client that my PHP web application be 3-tier. For example, I would have a web server running Apache in the DMZ, but it should NOT contain any DB connections. It should connect to a middle server that hosts the business objects but sits behind the firewall. Those objects then connect to my SQL cluster on another server. I have actually done this using .NET, but I am not sure how to set up my stack using PHP. I suppose I could have my UI front tier call the middle tier using REST-based web services, if I create the middle tier as a second web server, but this seems overly complex. The main reason for this is advanced security: we cannot have any passwords on the first-tier web server in the DMZ. The second reason is scalability: multiple servers on different tiers that can handle the requests. The last reason is deployment: it is easier if I can take one set of servers offline for testing before putting them back into production. Is there an open source project that shows how to do this? The only example I can find is a web server hosting files from a shared drive on another machine (kind of how DotNetNuke pretends to be 3-tier), but that is NOT secure.

    Read the article

  • Using Google Analytics to determine how much time a visitor spends in each section of my site

    - by flossfan
    I have a site with various pages, like:

        /about/history
        /about/team
        /contact/email-us
        /contact

    I want to figure out how much time people are spending on the entire /about section, and how much on the /contact section. If I run a query on the Google Analytics API and set the dimension to ga:pagePathLevel1 and the metric to ga:avgTimeOnPage, I get results like this:

        { pagePathLevel1: /about, avgTimeOnPage: 28 },
        { pagePathLevel1: /contact, avgTimeOnPage: 10 }

    This looks roughly like what I want, but I'm not sure how to interpret it: is the value of avgTimeOnPage the average time spent by any user on all pages that match that path, or the average time spent by any user on any single page that matches that path? I'm looking for the average time spent across all pages matching that path, but the time estimates look shorter than I'd expect.

    Read the article

  • Issue with image lightbox and enlargement / Jquery Mobile

    - by Matt
    I'm working on a redesign of my weather website using jQuery Mobile. I have it set up so that you drill down through a series of content containers to get to the weather info (each group of info opens in a dialog display). Everything has worked well, but I've run into an issue with my images. I have them sized so that they fit a mobile device's screen nicely, but because of that, when you look at them in a desktop browser you can't really make out what the image is. I've tried several image lightbox/enlargement solutions, but for some reason none of them have worked: either nothing happens or the images open in a new window. I thought this might be caused by jQuery Mobile somehow overriding the scripts and CSS of the lightboxes/enlargements I've tried. I'm not completely sure that this is the case, though, nor how to get around it so I can enlarge the images to their original size, preferably onclick. Here is a working example (for the most part - still some kinks to work out). If you look under the "Tropical" section at the "Satellite-Derived Products", you'll see what I mean. http://www.suncoaststormwatch.com/Beta/Index.html

    Read the article

  • How to show the right country domain in Google Places?

    - by Baumr
    Background: A site has multiple ccTLDs: example.com for the US, example.co.uk for UK users, example.de for Germans, etc. Googling for certain city keywords will return rich snippets with a list of Google Places results.

    Problem: When searching on Google Germany, the domain for US users (example.com) appears instead of the corresponding ccTLD (example.de). This is not good user experience, as users would most likely like to book on a site localized for them (e.g. language and currency).

    Question: What solutions are there? Is it possible to return different ccTLDs in rich snippets for Google searches in Germany/UK?

    Ideas: Would implementing the hreflang annotation resolve this? What about entering multiple corresponding URLs in the structured data markup?

    Read the article

  • SEO penalty for "duplicate" content when a site's also accessible via another domain name?

    - by tog22
    While testing searches for keywords on my site, I notice that a mirror of it at http://a8.8d.344a.static.theplanet.com/ sometimes appears as the top result rather than my primary domain. It looks like this is an alternative address for my server. Will the presence of identical content at this domain and at my primary domain result in a Google penalty? If so, what can I do about it? Thanks for any help...
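
    One common remedy - a sketch, assuming Apache with mod_rewrite, with www.example.com standing in for the poster's primary domain - is to 301-redirect every request that arrives under any other hostname to the canonical one, so the mirror stops being indexable as a separate site:

        # Canonicalize the hostname: any request not addressed to
        # www.example.com (such as the static.theplanet.com address)
        # gets a permanent redirect to the same path on the primary domain.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    A rel="canonical" link element pointing at the primary domain is a softer alternative where a server-level redirect isn't possible.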

    Read the article

  • Make an agenda-view Google Calendar entry display initially as if it had been clicked

    - by aslum
    So I've got a Google Calendar embedded in my web page. It's set to agenda view, so when you click on an entry it expands and shows you more information about the entry. I'd like to be able to link to the page with the embedded calendar from elsewhere, and have a specific entry already expanded (as if it had been clicked). Is this even possible? I'm not really sure where to start. PS: I don't have enough rep on this SE to create tags... and there isn't already a tag for "google-calendar"...

    Read the article

  • 301 re-direct all external links to new domain

    - by Dean Legg
    I have changed the main domain to a sub-domain and would like to redirect all external links to the new sub-domain. I have read a few articles but am having no luck editing the .htaccess, as my changes might be interfering with the rules already in there.

        Old: www.example.co.uk
        New: https://secure.example.co.uk

    The current rules are quite handy, because they seem to have sorted out the structure for all internal links. They have even updated the file path for images (or this could just be Wordpress, as the URL was updated under general settings). This is the current .htaccess:

        <files wp-config.php>
        order allow,deny
        deny from all
        </files>

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
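
    A sketch of one way to get the host-level redirect, assuming Apache with mod_rewrite; it would go above the # BEGIN WordPress block so it runs before WordPress's own rules:

        # Send any request that still arrives on the old hostname to
        # the same path on the new secure subdomain, as a 301.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.co\.uk$ [NC]
        RewriteRule ^(.*)$ https://secure.example.co.uk/$1 [R=301,L]

    Because the condition matches only the old hostname, requests that already reach secure.example.co.uk fall through to the WordPress rules untouched.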

    Read the article

  • Is it a good idea to add robots "noindex" meta tags to deep, low-content pages, e.g. product model data?

    - by Cognize
    I'm considering adding robots "noindex, follow" meta tags to the very numerous product data pages that are linked from the product style pages in our online store. For example, each product style has a page with full text content on the product:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE

    Then many data pages with technical data, one per model code, are linked from the product style page:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-1
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-2
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-3

    It is these technical data pages that I intend to add the noindex tag to, as I imagine this might stop them cannibalizing keyword authority from more important content-rich pages on the site. Any advice appreciated.
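
    If editing the page templates is awkward, the same directive can be sent as an HTTP header instead of a meta tag - a sketch, assuming Apache 2.4 with mod_headers, and assuming the model-code pages are exactly the URLs ending in a numeric suffix (as in the examples above):

        # Send "noindex, follow" as an X-Robots-Tag header for the
        # per-model data pages (paths ending in -<digits>), leaving
        # the main style pages indexable.
        <If "%{REQUEST_URI} =~ m#^/Product/.*-[0-9]+$#">
            Header set X-Robots-Tag "noindex, follow"
        </If>

    Search engines treat the X-Robots-Tag header the same way as the equivalent robots meta tag.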

    Read the article

  • Recommendations for a network of student-related content

    - by Javier Marín
    I am running a network of websites with notes, homework, essays, etc., where users share their own content. I'm having real trouble with the latest Google updates (Penguin, Panda, etc.) because the content is mainly poor quality and all on the same topic. For that reason, I want to create more websites and have a better chance of appearing in the SERPs. My question is: does Google analyze related websites in order to exclude them from the results? I've thought about distributing the websites around the world, on different hosts, but I'm afraid Google would link them via their Analytics, Webmaster Tools, or AdSense accounts - is that possible? What other recommendations do you have?

    Read the article

  • ISP issue browsing "sonos.com" - need to diagnose and prove [closed]

    - by john
    I am unable to browse to the website "sonos.com" with my ISP (Virgin). I have ruled out browsers, PCs, Macs, routers, wifi, etc. Other ISPs (even other Virgin connections in different areas!) serve this site no problem. I am 99% convinced there is a DNS issue lurking here. There is something fishy about the DNS for the site: online DNS sites tell me the right IP address for "sonos.com", but not for "www.sonos.com". Anyway, when I type "sonos.com", the browser (any/all of the 4 I tried) fails to display the page; Firefox gives a "connection was reset" error. If I browse to sonos.com using the IP address, it works OK. Browsing to www.sonos.com or sonos.com works fine with other ISPs, of course. Questions:

        1. Does anyone have any idea what might be going on here?
        2. Any suggestions as to tools/monitors to help investigate/prove what is going on? I can then take this up with Virgin and/or Sonos.

    Thanks

    Read the article
