Search Results

Search found 23807 results on 953 pages for 'seo software'.

  • Turning categories into pages and blocking category URLs via robots.txt - good for SEO?

    - by user2952353
    I am using a template which, on pages, lets me add sidebars and extra content above and below the content I pull from a category, which is very helpful. If I create pages to display my categories' content, won't the page URLs conflict with the category URLs? By conflict I mean causing a duplicate content error. What I thought might help was to block the blog's category URLs in robots.txt, e.g. /category/books and /category/music. Would that be good practice in order to avoid the duplicate content penalty? Any tips appreciated.
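    A minimal robots.txt sketch of the blocking described above (the /category/ paths come from the question). Note that robots.txt only stops crawling; it does not remove URLs Google already knows about, so a canonical link or a noindex meta tag on the category pages is the more usual fix for duplicate content:

      User-agent: *
      Disallow: /category/books
      Disallow: /category/music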

    Read the article

  • Is a subdomain per service a good idea for SEO?

    - by Kennie R.
    I am creating a site with quite a few services, such as a free account service, plus a subdomain for the site's blog, an article base, and other related services. Would having them all on subdomains be a good idea? Are there any caveats in existing search engines that you are aware of? I believe mapping foo.example.com to example.com/foo, to provide an alternative just in case, is a good idea for sitemaps; I like to keep things clean.
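    A sketch of the foo.example.com to example.com/foo mapping mentioned above, as an Apache rewrite (assumptions for illustration: mod_rewrite is available and a wildcard DNS record exists for *.example.com):

      RewriteEngine On
      # redirect any non-www subdomain to the matching path on the main domain
      RewriteCond %{HTTP_HOST} !^www\. [NC]
      RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.com$ [NC]
      RewriteRule ^(.*)$ http://example.com/%1/$1 [R=301,L]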

    Read the article

  • SEO: 301 for a page which has no mirror path?

    - by Alex
    Hello, I just did a 301, and the domain and the pages which have a mirror file path are fine. But I have one directory which is not going to be part of the new site, and I don't know how to redirect the old files that were there. I have oldDomain/oldDir/file.php and I need to make it redirect to newDomain/differentDir/file.php. Is that possible? What is the 301 redirect rule for that? Update: I just added this rule, as suggested by @Itai, and it didn't work: redirectMatch permanent ^/outdoors/trees/tanoak.php$ http://www.comehike.com/outdoors/trees/129/Tanoak -- any idea why?
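    For the pattern described (oldDir/file.php on the old domain to differentDir/file.php on the new one), a minimal mod_alias sketch for the old domain's .htaccess (oldDir, differentDir and the domains are placeholders from the question):

      RedirectMatch permanent ^/oldDir/file\.php$ http://newDomain.com/differentDir/file.php
      # or move the whole directory in one rule:
      RedirectMatch permanent ^/oldDir/(.*)$ http://newDomain.com/differentDir/$1

    A common reason a rule like the one in the update silently fails is that it sits in the new domain's configuration instead of the old one's, or that an earlier rewrite rule catches the request first.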

    Read the article

  • SEO penalty for "duplicate" content when a site's also accessible via another domain name?

    - by tog22
    While testing searches for keywords on my site, I notice that a mirror of it at http://a8.8d.344a.static.theplanet.com/ sometimes appears as the top result rather than my primary domain. It looks like this is an alternative address for my server. Will the presence of identical content at this address and at my primary domain result in a Google penalty? If so, what can I do about it? Thanks for any help.
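    One common fix, sketched here for Apache with mod_rewrite (www.example.com stands in for the primary domain), is to 301-redirect every other hostname the server answers to onto the primary one:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
      RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]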

    Read the article

  • What is the proper SEO way to add and remove links to our site and sitemap?

    - by ElHaix
    As content fluctuates on our site, we will obviously add new pages to our sitemap and link list for search engine indexing. Over time, certain links will become less relevant, and we would like to know how to avoid having them crawled. The pages themselves may not be removed - but we don't want to dilute the link list with less relevant links as time passes. I'm guessing the status code of the page would change - to what? Should they also be removed from the sitemap?
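    If a page still exists, its status code stays 200 and it simply leaves the sitemap; 301 (moved) or 410 (gone) apply only when the URL itself is retired. One alternative to deleting entries is lowering their priority, sketched here as a sitemap fragment with placeholder values:

      <url>
        <loc>http://www.example.com/older-article.html</loc>
        <priority>0.3</priority>
      </url>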

    Read the article

  • How should I study a competitor's off page SEO?

    - by Chris Adragna
    What do I need to do, and with what tools, to learn what a competitor has working for him or her off-page (free and paid tools -- please suggest both)? First of all, I'm supposing I want to see all of the sites linking in and what anchor text is used. Is there something that would report on the inbound anchor text, such as counting the keyword phrases used as anchor text? It would also be helpful to see where the PR is coming from, such as listing inbound links by the PR of the linking page. Lastly, if I'm missing something in the way of off-page attributes, please say so.

    Read the article

  • Linking to a competitor for the same keyword I am targeting: good or bad for SEO?

    - by Badal Surana
    I am linking to one of my competitors from my site for the same keyword I am targeting (my competitor is paying me for that). For example: my competitor and I are both targeting the keyword "foo", and my competitor is paying me to link to his site from mine with the keyword "foo". What I want to know is: if I do that, will my site's position go down in Google search results, or will it make no difference?
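    Worth noting: Google's guidelines ask that paid links be marked so they pass no ranking credit. A sketch, with the competitor's domain as a placeholder and the keyword "foo" from the question:

      <a href="http://competitor.example.com/" rel="nofollow">foo</a>

    A followed paid link is the arrangement that risks a penalty for both sites; the nofollow removes that risk, but it also removes the ranking value the competitor is paying for.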

    Read the article

  • What HTML and CSS markup is best for SEO for a list of questions (like on Stack Exchange sites)?

    - by Oleg9
    On StackOverflow, a question block (in the question list on the index page and so on) is represented by the following HTML code:

      <div class="question-summary narrow tagged-interesting" id="question-summary-19832613">
        <div onclick="window.location.href='/questions/19832613/how-to-display-only-transit-routesfor-trains-in-google-maps-api'" class="cp">
          <div class="votes">
            <div class="mini-counts">0</div>
            <div>votes</div>
          </div>
          <div class="status unanswered">
            <div class="mini-counts">0</div>
            <div>answers</div>
          </div>
          <div class="views">
            <div class="mini-counts">3</div>
            <div>views</div>
          </div>
        </div>
        <div class="summary">
          <h3>...</h3>
          <div class="tags t-javascript t-google-maps t-google t-google-maps-api-3"> </div>
          <div class="started">
            <a href="/questions/19832613/how-to-display-only-transit-routesfor-trains-in-google-maps-api" class="started-link"><span title="2013-11-07 09:52:29Z" class="relativetime">1 min ago</span></a>
            <a href="/users/1309392/shirish">Shirish</a>
            <span class="reputation-score" title="reputation score " dir="ltr">189</span>
          </div>
        </div>
      </div>

    It uses float positioning. My questions are: would CSS-styled tables be a better choice? (It is a table, isn't it?) Or does it just depend on preference, with no effect on the technical side (search engines and the like)? The background information (number of views, votes, etc.) comes first in the source, and I know that search engines limit how much of each page they read. So would it be better to order the divs by importance and then position them on the page with CSS (negative margins, absolute positioning)? Or is that not so important in this instance?
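    For comparison, a sketch of the same summary as a semantic list rather than floated divs (the class names here are invented for illustration). Search engines read source order, so the usual advice is to put the linked title first and the counters after it:

      <ul class="question-list">
        <li>
          <h3><a href="/questions/19832613/how-to-display-only-transit-routesfor-trains-in-google-maps-api">How to display only transit routes for trains in Google Maps API</a></h3>
          <dl class="stats">
            <dt>votes</dt><dd>0</dd>
            <dt>answers</dt><dd>0</dd>
            <dt>views</dt><dd>3</dd>
          </dl>
        </li>
      </ul>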

    Read the article

  • How do I optimize SEO in a multiblog WordPress install?

    - by user35585
    We are about to launch two product pages plus a corporate website. The goal is to keep a blog on all of the sites, but the question is how to do it in a way that keeps everything unified without confusing Google's crawlers. We considered the following options:

    1. One blog from which we retrieve two categories with custom CSS, so a single blog splits into two category-dependent blogs; this way we can get the feeds and point to them.
    2. Two product blogs whose posts we retrieve into a bigger, corporate blog.
    3. Three independent blogs.

    Although I was for the first option, since we would only have to reference our content from the product pages, I would sincerely like to hear your opinion. We are afraid duplicate content or strange link games may make us lose PageRank. How would you do it?
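    For the first option, WordPress already exposes a feed per category out of the box (assuming pretty permalinks; the domain and category slugs below are placeholders), so each product page can pull its own posts from the single install:

      http://blog.example.com/category/product-a/feed/
      http://blog.example.com/category/product-b/feed/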

    Read the article

  • How do sites avoid SEO issues / legalities with subdomain unique ids?

    - by JM4
    I was looking through a few websites recently and noticed a trend I'm not sure I understand. Sites are creating unique referral URLs for customers in the form http://customname.site.com (and http://www.site.com/customname functions the same way). Using Google Chrome, I can see that at some point the sites do a 302 redirect, apparently via some htaccess rule that takes the subdomain name (customname), applies it as a referral parameter, and keeps it in session for the rest of the visit. But there must be thousands of these custom URLs that people are typing in. How is each of these "subdomains" not treated as a separate URL that redirects to the same page (in short, generating tons of links all pointing to the same page, which Google would normally frown upon)? The links also appear on the sites themselves as clickable links, so I'm not sure how these are not tracked. Similarly, the "unique" URL is never indexed or cached in Google search results. How is this handled? A non-referral but real example: visiting http://sfgiants.com does a 302 redirect to the much longer proper San Francisco Giants MLB homepage. I am wondering how sfgiants.com is not indexed (assuming that shortened link appears on several MLB pages). Notes: (1) I know these are 302 redirects; I can see this in the site's network flow. (2) These links do in fact appear on the page itself; for example, the bottom of the page may say "Send this page to a friend! http://name.site.com/", which in turn redirects to something like http://www.site.com?id=name, so the id value can be stored in session.
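    A sketch of the mechanics described above, for Apache with a wildcard DNS entry (*.site.com) pointing at the server; site.com and the id parameter come from the question:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} !^www\. [NC]
      RewriteCond %{HTTP_HOST} ^([^.]+)\.site\.com$ [NC]
      # 302, matching what the asker observed in the network flow
      RewriteRule ^(.*)$ http://www.site.com/?id=%1 [R=302,L]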

    Read the article

  • How much of a benefit does a changing landing page give in terms of SEO?

    - by Glycan
    I have a friend with a small business and a website. He asked me whether he should put a section on his landing page, below the fold, with his most recent review (or something along those lines). Specifically, he wants to know if that's the most efficient use of his time. Is there a list of the things Google values, weighted against each other, so that these kinds of questions could be easily answered?

    Read the article

  • SEO: Is promoting your backlinks a good strategy for improving search results for your site's name?

    - by user4394
    I run a website in the sports space that's been around for about three years. I rank well for targeted keywords, but searching for the name of the site itself returns very poor results: it shows my site, its FB/Twitter pages, and then 15 pages of unrelated spam that happens to contain two words that, when combined, form my website's name. After that, my backlinks begin to show up sporadically. As far as I can tell, I simply don't have enough backlinks, and the backlinks I do have rank worse than the spam. (Site Explorer lists 200 external links to any page on our domain and 20 external links directly to the front page.) To counter this, my strategy is to promote my backlinks so they get a better page rank than the spam. Does that make sense? Am I going in the right direction, or should I just focus on getting more backlinks pointing directly to my site? Thanks in advance; I'd be happy to answer any questions I can (without giving away my site, of course).

    Read the article

  • How to reverse engineer the SEO on a website?

    - by Startup Crazy
    I have read this question; mine is a bit different. I want to know how I can reverse engineer another website that ranks best for some keywords. For example, some website called www.bla.com ranks high for many keywords, and I want to learn from it how my website can reach the same authority and the same ranking (or probably better, if I find something they are missing). Can anyone lay this out as a procedure for reverse engineering a website?

    Read the article

  • Can't remove software - Installed in NULL

    - by ChosSimbaOne
    I've installed software on our administration machine. The problem is that I cannot start the software or uninstall it, as it is not in any directory on the machine. I tried to install it at /pack/CST/... but it is not there, and a locate on CST or cst returns nothing. The software is installed from a DVD, not a repository. I rebooted the machine, thinking it might have something to do with the software being loaded in some sort of tmpfs, but that didn't help. I've looked through the entire /etc to check for any relation to the software, without success. I'm out of ideas as to what could cause this problem; does anyone have any? EDIT: I downloaded the ISO, which I mounted and installed with:

      sudo mount -o loop /path/to/iso.iso /path/to/mountpoint
      sudo /path/to/mountpoint/install.sh

    and ran the install GUI via an X session. I chose to install the software in /pack/CST/... but when the installer exited it said the software had been installed to /tmp/... There was nothing in /tmp, so I rebooted the machine and did a full find to see if there was anything left of the software, and removed what looked related. It had placed a script in each of the /etc/rc*.d folders, which I removed with:

      sudo update-rc.d -f scriptname remove

    I rebooted the machine again, just to be sure. When I run the installer again, it tells me that the software is installed in NULL and that I have to remove it before installing it. /pack/ is a mountpoint for /q/system/pack. What I expected was that the software would be installed in /pack/CST, but its state seems to be stuck somewhere in the system, and I am unable to locate where.
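    A couple of standard commands for hunting leftover install state (CST is the package name from the question; installers of this kind often record their state in a dotfile under the installing user's home directory or under /var):

      sudo find / -iname '*cst*' 2>/dev/null    # any file or directory carrying the name
      sudo grep -ri 'cst' /etc 2>/dev/null      # references in system configuration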

    Read the article

  • SEO/Google: How should I handle multiple countries and domains?

    - by Valorized
    Hello. I'm the webmaster of an online shop based in Austria (Europe), so we registered example.at. We also own other domain names like example-shop.com and example.info; currently all of those are 301-redirected to the .at one. Still available: example.net and example.org (and .ws/.cc); unfortunately not available: .de/.eu. The .com is currently owned by one of our partners; the contract ends in 2012, and until then we have no chance of getting it. Recently I read more about geo-targeting and noticed ONE big deal: the TLD .at is hardly recognised in Germany (google.de), whereas it is excellently listed in Austria (google.at). Because of the .at, I cannot set the target location manually (or to unlisted). More info: https://www.google.com/support/webmasters/bin/answer.py?answer=62399&hl=en - This is a big problem. I looked at Google Analytics and - although Germany is ten times the size of Austria - there are more visits from Austria. So, how should I configure the domains in order to get the best results in both Germany and Austria? I thought of some solutions:

    1. Stop redirecting the .info. There would then be a duplicate of the .at site, but in Webmaster Tools I could set the target location of the .info to Germany. As the .at still targets Austria, both countries would be targeted - however, I don't know whether Google punishes one of the sites for the duplicate content.
    2. Same as 1, but with .net or .org (I think .info is not a "nice" domain, and I believe search engines prefer .com, .net or .org to .info).
    3. Same as 1 (or 2), but with a rel="canonical" on the new domain pointing to the .at. Con: I don't think this will improve the situation, because it still tells Google that the .at one is more important, as in: "if .info points to .at, the target may still be Austria".
    4. rel="canonical" on the .at pointing to the new domain (.info, .net or .org). However, I fear this will hurt the listing on google.at: "the well-known .at is not important anymore, so let's focus on the .info, which is not well-known" - and therefore a bad position in search results.
    5. Redirect the .at to the new domain (.info, .net or .org) with a 301. Con: might be worse than 4; we might lose PageRank (or "the value of the page", since Google says PageRank is not important anymore), and this might be even more confusing for customers. With 3 or 4, customers are not redirected and never see the canonical meta tag.

    So, dear experts, please tell me what the best option would be! Thank you very much for your advice in advance, and please excuse the long question. I really appreciate this network! Please note: it is exactly the same content AND language - in Austria we speak German.
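    For options 3 and 4, the tag in question is a single line in the <head> of every page of the duplicate domain, pointing at the corresponding page on the preferred one; a sketch using the question's .at domain as the target (some-page.html is a placeholder):

      <link rel="canonical" href="http://www.example.at/some-page.html" />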

    Read the article

  • Need to add 30K new pages to a 10K page website - troubles ahead? (SEO)

    - by Jurga
    We have a situation with a website where we plan to add a huge amount of new pages. The domain is over 10 years old, approximately 10 thousand indexed pages, and the planned addition is approx. 30K new pages. Any idea how we should go about it? Must we schedule a gradual data release? Have you heard of any industry standards as to how many new pages per day / week / month should be added in order to appear natural and not get in trouble with Google? I.e. should we plan a bi-weekly addition of 5K?

    Read the article

  • Is the title attribute (not tag) important to SEO?

    - by JasonBirch
    The title attribute is an HTML standard element available across most tags. e.g. <li><a title="Widgets listed by household function" href="/widgets/by-function.html">by Function</a></li> I've used this attribute on some sites for usability; many browsers pop up a "tooltip" over the link with the more detailed description of what is on the other side. I've been wondering if doing so is having a negative effect on my rankings (hidden text?) or if it has any effect at all on onsite or offsite keyword relevance calculations. Does anyone know of any research done on this?

    Read the article

  • Is it really necessary to have completely validated markup and CSS for SEO purposes?

    - by Hitesh Manchanda
    Hi, while validating my CSS on http://jigsaw.w3.org/css-validator/ I am getting the following errors:

    1. Property zoom doesn't exist: 1
    2. Property -webkit-transition doesn't exist: all 200ms ease-in
    3. Property opacity doesn't exist in CSS level 2.1
    4. Property -moz-border-radius doesn't exist
    5. Property -webkit-border-radius doesn't exist

    Is it really required to validate the markup and CSS completely for SEO, or can these errors, which are mostly browser-specific, be ignored for now? If these errors have to be removed, can someone please suggest how.
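    All five complaints are about vendor-prefixed or CSS3 properties that a CSS 2.1 validator does not know, not about anything browsers mishandle. A sketch (selector and values invented for illustration):

      .rounded-box {
          zoom: 1;                               /* IE-only hasLayout trigger */
          -webkit-border-radius: 4px;            /* Safari/Chrome prefix */
          -moz-border-radius: 4px;               /* Firefox prefix */
          border-radius: 4px;                    /* the standard property, listed last */
          -webkit-transition: all 200ms ease-in; /* WebKit prefix */
          opacity: 0.8;                          /* CSS3, invalid only against CSS 2.1 */
      }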

    Read the article

  • Splitting HTML text in an SEO-friendly manner

    - by al nik
    I have some HTML like <h1>GreenWhiteRed</h1>. Is it SEO-friendly to split this text into something like <h1><span class="green">Green</span><span class="white">White</span><span class="red">Red</span></h1>? Does the text still rank well, and is it still interpreted as the single word 'GreenWhiteRed'?

    Read the article

  • Why is it still so hard to write software?

    - by nornagon
    Writing software, I find, is composed of two parts: the Idea, and the Implementation.

    The Idea is about thinking: "I have this problem; how do I solve it?" and further, "how do I solve it elegantly?" The answers to these questions are obtainable by thinking about algorithms and architecture. The ideas come partially through analysis and partially through insight and intuition. The Idea is usually the easy part. You talk to your friends and co-workers and you nut it out in a meeting or over coffee. It takes an hour or two, plus revisions as you implement and find new problems.

    The Implementation phase of software development is so difficult that we joke about it. "Oh," we say, "the rest is a Simple Matter of Code." Because it should be simple, but it never is.

    We used to write our code on punch cards, and that was hard: mistakes were very difficult to spot, so we had to spend extra effort making sure every line was perfect. Then we had serial terminals: we could see all our code at once, search through it, organise it hierarchically and create things abstracted from raw machine code. First we had assemblers, one level up from machine code. Mnemonics freed us from remembering the machine code. Then we had compilers, which freed us from remembering the instructions. We had virtual machines, which let us step away from machine-specific details. And now we have advanced tools like Eclipse and Xcode that perform analysis on our code to help us write code faster and avoid common pitfalls.

    But writing code is still hard. Writing code is about understanding large, complex systems, and the tools we have today simply don't go very far to help us with that. When I click "find all references" in Eclipse, I get a list of them at the bottom of the window. I click on one, and I'm torn away from what I was looking at, forced to context switch. Java architecture is usually several levels deep, so I have to switch and switch and switch until I find what I'm really looking for -- by which time I've forgotten where I came from. And I do that all day until I've understood a system. It's taxing mentally, and Eclipse doesn't do much that couldn't be done in 1985 with grep, except eat hundreds of megs of RAM.

    Writing code has barely changed since we were staring at amber on black. We have the theoretical groundwork for much more advanced tools, tools that actually work to help us comprehend and extend the complex systems we work with every day. So why is writing code still so hard?

    Read the article

  • Proactively using the 'lines of code' (LOC) metric in your software-development process?

    - by manuel aldana
    Hi there, I find LOC (lines of code) a simple but useful metric for getting an overview of a codebase's complexity (see also the blog entry 'implications of lines-of-code'). I wonder how many of you out there use this metric as a central part of retrospectives (for removing unused functionality or dead code). I think creating awareness that more lines of code mean more complexity in maintenance and extension is valuable.
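    A minimal shell sketch for tracking the metric per build (src/ and the *.java glob are placeholder assumptions; dedicated tools such as cloc or sloccount additionally separate comments and blank lines from code):

      find src -name '*.java' -print0 | xargs -0 wc -l | tail -n 1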

    Read the article
