Search Results

Search found 20852 results on 835 pages for 'local seo'.

  • How to specify an SEO-friendly URL like Twitter's www.twitter.com/<name> using the Yii framework

    - by hip10
    Hi, I am currently using the Yii framework, and I would like to know if anyone has any clue how to set up an SEO-friendly URL like www.twitter.com/<name> automatically in Yii. I know how to do this manually in config/main.php, but I would like to be able to generate it dynamically. (I have been able to do it in Grails as well.) In Yii, as far as I know, you need another path segment, like www.twitter.com/l/<name>, but I do not want that segment. Can anyone share? Thanks.
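    For reference, a minimal sketch of the usual Yii 1.x approach: a single parameterized rule in urlManager (in config/main.php) matches any name dynamically, so no per-user configuration has to be generated. The controller and action here are assumptions (a UserController with actionView($name)):

        'components' => array(
            'urlManager' => array(
                'urlFormat' => 'path',      // path-style URLs instead of ?r=...
                'showScriptName' => false,  // hide index.php (needs a rewrite rule)
                'rules' => array(
                    'login' => 'site/login',          // explicit routes first
                    '<name:[\w-]+>' => 'user/view',   // www.example.com/<name> catch-all
                ),
            ),
        ),

    Because the catch-all comes last, real routes such as /login still win; everything else falls through to user/view with $name bound from the URL.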

  • Joomla - SEO settings and mod_rewrite

    - by Stefano
    Hi, I'm using Joomla 1.5.14 with its SEO settings (SEF URLs) enabled. Now I need to map a few old URLs to the new site. Let's say I need to map http://mysite/old.html to the new Joomla page http://mysite/index.php?option=com_content&view=article&id=32&Itemid=70. I added the following to my .htaccess file, and it works:

        RewriteRule ^old\.html$ index.php?option=com_content&view=article&id=32&Itemid=70

    But if I use the SEF URL in .htaccess instead (say the page above can also be reached at http://mysite/contacts.html), I get a 404 error:

        RewriteRule ^old\.html$ contacts.html

    Now the question: is it possible to use SEF URLs in a RewriteRule? Where am I going wrong? Thank you in advance, stefano
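    A sketch of one likely fix, assuming contacts.html is the SEF URL from the question: an internal rewrite to contacts.html fails because contacts.html is itself a virtual URL that only Joomla's SEF machinery resolves, so Apache looks for a real file and returns a 404. An external 301 redirect makes the browser re-request /contacts.html, which Joomla's own .htaccess rules can then handle:

        RewriteRule ^old\.html$ /contacts.html [R=301,L]

    The 301 also tells search engines that old.html has moved permanently, which is the right signal for SEO.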

  • Is W3C-valid XHTML and CSS code with semantic, accessible markup enough for a site's SEO?

    - by metal-gear-solid
    I created a website for a client with W3C-valid XHTML and CSS code and semantic, accessible markup, and I told the client my code would be SEO-friendly; the way I code it would be good for the site's SEO. I put all my effort into writing good code. Now my non-technical client is asking me: should he still go to an SEO company, even after I delivered an SEO-friendly site? What would SEO companies do for him beyond what W3C-valid XHTML, CSS, and semantic, accessible markup already provide?

  • SEO with image link alt text vs standard text-based link

    - by Infiniti Fizz
    Hi, I'm currently developing a website where the main navigation is made up of image links, because the font used for them isn't standard. My client's only worry is whether this will mess up search engine optimization. Can I just add alt text to the images, like "link 1", or use the name attribute of the anchor tag? Or would it be better to have the navigation as plain anchor tags with the link names in them, like <a href="...">link 1</a>? I'm new to SEO, so I really don't know which to suggest to him. Thanks for your time, InfinitiFizz
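    One common compromise is the classic CSS image-replacement technique (a sketch only; the class name and image are hypothetical): keep a real text link for crawlers and screen readers, and swap in the styled image with CSS, so nothing SEO-relevant lives solely inside an image:

        <a href="/products" class="nav-products">Products</a>

        .nav-products {
            display: inline-block;
            width: 120px; height: 30px;
            background: url(nav-products.png) no-repeat;
            text-indent: -9999px;  /* pushes the text off-screen; crawlers still read it */
            overflow: hidden;
        }

    If you stay with <img> links instead, descriptive alt text is treated much like anchor text, so generic values like "link 1" waste the opportunity; use the actual link name.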

  • What is the ideal length of a URL slug?

    - by Sinan
    To make pretty URLs from article titles, I am using a simple function. However, lately I am concerned about the ideal length of these "slugs". It is said that too many dashes are bad, but some article titles can be long, and a very long URL may not be liked by Google. Of course, that defeats the whole idea of having URL slugs. So does anyone have any idea how long a URL slug should be? Should there be a limit on the number of dash characters used?
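    For illustration, a minimal PHP sketch of a slug function with a length cap (the 60-character limit is an arbitrary assumption, not a documented ranking threshold):

        function slugify($title, $maxLength = 60) {
            $slug = strtolower(trim($title));
            $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);  // runs of non-alphanumerics -> one dash
            $slug = trim($slug, '-');
            if (strlen($slug) > $maxLength) {
                $slug = substr($slug, 0, $maxLength);
                $slug = preg_replace('/-[^-]*$/', '', $slug);  // avoid cutting a word in half
            }
            return $slug;
        }

        echo slugify('What is the ideal length of a URL slug?');
        // prints: what-is-the-ideal-length-of-a-url-slug

    Capping the total slug length rather than the dash count keeps the URL readable no matter how many words the title has.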

  • Hide a single content block from search engines?

    - by jonas
    A header is automatically added on top of each content URL, but it's not relevant for search, and it messes up all the results by being the first line of every page (in the code it's the last line, but visually it's the first, which Google is able to notice).

    Solution 1: you could put the header (the content to exclude from Google searches) in an iframe with a static URL, domain.com/header.html, and a <meta name="robots" content="noindex" />. Are there trade-offs to this solution?

    Solution 2: you could deliver it conditionally via Apache mod_rewrite, PHP, or JavaScript.
    Trade-off(?): Google does not like it. Will Google ever try pages with a standard user's user agent and compare?
    Trade-off: the hidden content will be missing from the Google cache version as well.

    Example (add-header.php):

        <?php
        // NB: $path comes straight from the query string; sanitize it before production use
        $path = $_GET['path'];
        echo file_get_contents($_SERVER["DOCUMENT_ROOT"].$path);
        ?>

    Apache virtual host config:

        RewriteCond %{HTTP_USER_AGENT} !.*spider.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Yahoo.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Bing.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Yandex.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Baidu.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !.*bot.* [NC]
        RewriteCond %{SCRIPT_FILENAME} \.htm$ [NC,OR]
        RewriteCond %{SCRIPT_FILENAME} \.html$ [NC,OR]
        RewriteCond %{SCRIPT_FILENAME} \.php$ [NC]
        RewriteRule ^(.*)$ /var/www/add-header.php?path=$1 [L]

  • Will dynamically generated content via JavaScript hurt SEO?

    - by Luke101
    This is what I would like to do: load content dynamically, so that everything except the actual content is rendered by JavaScript. I will place all the required information in a JavaScript variable or array at the bottom of the page, then use JavaScript to place the content in the designated areas. These are the types of things I would like JavaScript to render:

    - Login menu
    - Header and logo info
    - Sidebar info
    - Footer info
    - Dialog popups
    - Ads

    None of the MEAT content will be rendered by JavaScript; I will use the backend server to put that content in the HTML. My logic is that proportionally more of the real content will be in the HTML, with everything else rendered by JavaScript. Will this help or hurt SEO?

  • JSP 2.0 SEO-friendly link encoding

    - by victor hugo
    Currently I have something like this in my JSP:

        <c:url value="/teams/${contact.id}/${contact.name}" />

    The important part of my URL is the ID; I just put the name in it for SEO purposes (just like stackoverflow.com does). I was wondering if there is a quick and clean way to encode the name (change spaces to +, strip accented Latin characters, etc.). I'd like it to look like this:

        <c:url value="/teams/${contact.id}/${supercool(contact.name)}" />

    Is there a function like that out there, or should I write my own?

  • Making friendly URLs in ASP.NET

    - by Khou
    How do I give my web app friendly URLs? Currently my app's URLs look like this:

        http://www.domain.com/Page.aspx?article=103

    but I would like them to look like this:

        http://www.domain.com/Page.aspx?Google-likes-url-friendly

    What would I need to do?

  • SEO for dynamic / AJAX pages

    - by Andrew
    I have a very dynamic, AJAX-powered website which also includes iframes, and for this reason it has a very bad SEO rank. It came to mind to make one more version of the site (text-based, no script) and serve it to the search engines based on the user agent. Please let me know if you think that is a feasible method, and if it's not, what else you would recommend. I don't want to lose any fancy AJAX features, but I also need to keep the website on the Google map. :) Thank you in advance for any answer! BTW, the website is developed in ASP.NET C#.

  • Using DataPager Control with AJAX and SEO

    - by Jonathan Wood
    I've just taken my first stab at making a ListView, ObjectDataSource, and DataPager run in an AJAX panel. I had trouble getting it to work until I removed the QueryStringField="page" attribute from the DataPager. This attribute causes the current page to be passed as a query argument in the URL; for obvious reasons, I guess that won't work when posting back via AJAX. Now my question is whether this hurts my SEO. When I used QueryStringField, the page links appeared as regular links with various query arguments, but now the links are just JavaScript. Haven't I hurt a search engine's ability to crawl related pages? Or is there another approach to this?

  • Creating an SEO and web accessibility analyzer

    - by rebellion
    I'm thinking of making a little web tool for analyzing the search engine optimization and web accessibility of a whole website. First of all, this is just a private tool for now. Crawling a whole website takes up a lot of resources and time. I've found that wget is the best option for downloading the markup of a whole site. I plan on using PHP/MySQL (maybe even CodeIgniter), but I'm not quite sure that's the right way to do it. There's always someone who recommends Python, Ruby, or Perl, but I only know PHP and a little bit of Rails. I've also found a great HTML DOM parser class in PHP on SourceForge. The thing is, I need feedback on what I should and should not do: everything from how I should run the crawl to what I should check for with regard to SEO and WCAG. So, what comes to your mind when you hear this?
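    For a feel of the checking side, a minimal PHP sketch using the built-in DOMDocument to run a few basic SEO/accessibility checks on a single fetched page (the URL is a placeholder; in the setup described, the HTML would come from wget's output on disk instead):

        <?php
        $html = file_get_contents('http://example.com/');
        $doc = new DOMDocument();
        libxml_use_internal_errors(true);  // tolerate real-world, non-valid markup
        $doc->loadHTML($html);

        // SEO: every page should have a non-empty <title>
        $titles = $doc->getElementsByTagName('title');
        if ($titles->length === 0 || trim($titles->item(0)->textContent) === '') {
            echo "Missing or empty <title>\n";
        }

        // WCAG: every <img> needs an alt attribute
        foreach ($doc->getElementsByTagName('img') as $img) {
            if (!$img->hasAttribute('alt')) {
                echo "Image without alt text: " . $img->getAttribute('src') . "\n";
            }
        }

        // SEO rule of thumb: exactly one <h1> per page
        $h1s = $doc->getElementsByTagName('h1');
        if ($h1s->length !== 1) {
            echo "Expected one <h1>, found {$h1s->length}\n";
        }

    Each check is one rule in what would eventually become a rule table; the crawler part (wget, or a PHP loop over discovered links) just feeds pages into it.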

  • ASP.NET MVC support for URLs with hyphens

    - by John
    Is there an easy way to get the MvcRouteHandler to convert all hyphens in the action and controller sections of an incoming URL to underscores, since hyphens are not supported in method or class names? This would let me support structures such as sample.com/test-page/edit-details, mapping to the action edit_details on the controller test_pagecontroller, while continuing to use the MapRoute method. I understand I can specify an action name attribute, and can support hyphens in controller names by manually adding routes, but I am looking for an automated way, to avoid errors when adding new controllers and actions.

  • SEO/PHP: How to convert a form-submit URL (GET method) into an SEO-friendly URL without JavaScript?

    - by elmas
    Hello, I have this code:

        <form action="index.php" method="get" class="search-form">
            <input type="text" size="35" name="search" class="searchBox" value="" />
            <input type="submit" value="Start Searching!" />
        </form>

    and currently I convert the URL with JavaScript:

        <script type="text/javascript">
        $(document).ready(function() {
            $('.search-form').submit(function() {
                var value = $('.search-form input:text').val();
                value = value.replace(/\W/g, '');  // strip non-word characters
                window.location.href = value + "-keyword" + ".html";
                return false;
            });
        });
        </script>

    Is there a method to convert the URL into an SEO-friendly one without JavaScript, maybe with PHP?
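    A minimal server-side sketch in PHP, assuming index.php is the form target from the question and that a rewrite rule elsewhere maps the resulting .html address back to a real script: let the GET form submit normally, then 301-redirect to the friendly URL:

        <?php
        // index.php: turn /index.php?search=foo into /foo-keyword.html
        if (isset($_GET['search']) && $_GET['search'] !== '') {
            $keyword = preg_replace('/\W+/', '-', $_GET['search']);  // non-word runs -> dashes
            $keyword = trim($keyword, '-');
            header('Location: /' . $keyword . '-keyword.html', true, 301);
            exit;
        }

    With this in place, the JavaScript becomes a progressive enhancement: with it, the browser never touches the ugly URL; without it, the server produces the same friendly address.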

  • Local variables in Java

    - by Mandar
    Hello, I went through the concepts of local variables and class variables, but I am stuck on a doubt: why is it that we cannot declare local variables as static? For example, suppose we have a play() function:

        void play() {
            static int i = 5;
            System.out.println(i);
        }

    Eclipse gives me the error: "Illegal modifier for parameter i". Thanks.

  • Local use of a MySQL database

    - by waanders
    Hi all, is it possible to use MySQL locally? I mean NOT on a server. I have read a lot about MySQL on a web server with PHP, Joomla, etc. I want to write a piece of software and use a local database to store results. Can I use MySQL for that? If so, is there a good tutorial anywhere on the net on how to do that?
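    It is possible: the MySQL server can run on the same machine as the program, which simply connects to localhost. A minimal PHP sketch, assuming MySQL is installed locally and that the database, table, and credentials (all hypothetical here) already exist:

        <?php
        $pdo = new PDO('mysql:host=127.0.0.1;dbname=results', 'user', 'secret');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // store one result locally
        $stmt = $pdo->prepare('INSERT INTO runs (score) VALUES (?)');
        $stmt->execute(array(42));

    The same idea works from any language with a MySQL driver. If running a full local server feels heavy, an embedded file-based database such as SQLite is the usual lighter-weight alternative.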

  • What do these remote addresses, local addresses, and states in TCPview mean?

    - by Joe
    I have been using TCPView lately to see what connections are made by different processes on my PC. Would somebody please explain what the following situations mean? Thanks.

        TCP   Local Address: PC1234567890:3883   Remote Address: PC1234567890:0   State: LISTENING

        TCP   Local Address: PC1234567890:4696   Remote Address: localhost:4697   State: ESTABLISHED
              Local Address: PC1234567890:4697   Remote Address: localhost:4696   State: ESTABLISHED

        UDP   Local Address: PC1234567890:1234   Remote Address: .   State:

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer to use for human beings. My question is whether they actually make a difference to the ranking algorithms.) Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post is addressing the worst case where URL rewriting is done poorly and so you'd be better off sticking with a dynamic URL rather than a mangled static "SEO-friendly" URL. There's no question dynamic URLs can be crawled by Google and can achieve high rankings. Maybe it would be easier to reframe the question more concretely: given 2 otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"? A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking or B) http://stackoverflow.com?question=505793 (a fake URL for comparison only)

  • How to combine a widget web-app framework with SEO-friendly CSS and JS files

    - by Ali
    Hi guys, I'm writing a webapp using the Zend Framework and a homebrew widget system. Every widget has a controller and can choose to render one of many views. This really helps us modularize, reconfigure, and reuse the widgets anywhere on the site. The problem is that the views of each widget contain their own JS and CSS code, which leads to very messy HTML when the whole page is put together: you get pockets of style and script tags everywhere. This is bad for a lot of reasons, as I'm sure you know, but it has a profound effect on our SEO as well. The solutions I've been able to come up with:

    1. Separate the CSS and JS of every view of every widget into its own file. This has serious drawbacks for load times (many more resources have to be loaded separately), and it makes coding very difficult, since you now have three or four files open just to edit one widget.

    2. Combine all the widget CSS into a single file (same with JS). This would also mean a massive load when someone first enters the site, it mixes the CSS and JS of all widgets together so they are harder to keep track of, and it has other problems I'm sure you can think of.

    3. Create a system that uses method 1 (separate CSS and JS for every widget) and, when delivering the page, stitches all the CSS and JS together. This obviously needs more processing time, plus the creation of such a system, etc.

    My question is what you think of these solutions, and whether there are pre-existing solutions (or any tech) that might help solve this problem. I really appreciate all of your thoughts and comments! Thanks guys, Ali
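    Option 3 is roughly what asset-combining tools do, and it is small enough to sketch in plain PHP. A minimal version (widget names, paths, and the cache location are all hypothetical) that concatenates each widget's stylesheet once and serves the cached result:

        <?php
        // combined-css.php: serve all widget CSS as one cached response
        $widgets   = array('header', 'news', 'sidebar', 'footer');
        $cacheFile = __DIR__ . '/cache/combined.css';

        if (!file_exists($cacheFile)) {
            $css = '';
            foreach ($widgets as $widget) {
                $css .= file_get_contents(__DIR__ . "/widgets/$widget/style.css") . "\n";
            }
            file_put_contents($cacheFile, $css);
        }

        header('Content-Type: text/css');
        readfile($cacheFile);

    The page then links a single stylesheet (combined-css.php) instead of one style block per widget, and the same pattern works for JS. In practice you would also invalidate the cache whenever a widget's files change.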

  • Discussion on SEO best practices for site development involving PHP...

    - by Bradley Herman
    Recently in our work I've started getting some experience with SEO (finally). It's something I've put off for a long time, because I've always maintained that SEO is buzzword b.s. pseudo-science, and that it's more about providing quality, relevant content (assuming proper header tags and the basics are covered). However, sometimes a client doesn't have stellar content yet still demands SEO and high rankings. While it's not how I design sites 100% of the time (as design dictates structure), I typically create a basic template from the design my boss gives me, then optimize it, then strip the top and bottom and move those into header.php and footer.php, using the following to bring in the header and footer for plain HTML requests but not for AJAX requests:

        <?php if (empty($_SERVER['HTTP_X_REQUESTED_WITH'])) { include('includes/header.php'); } ?>
        #content here
        <?php if (empty($_SERVER['HTTP_X_REQUESTED_WITH'])) { include('includes/footer.php'); } ?>

    Then I use jQuery to intercept page requests and use AJAX to fill in, for example, a #copy div with the new content. This avoids unnecessarily loading all the header and footer info every time, but still allows users without JavaScript to access pages without any problems. (Also worth thinking about: depending on the size of the content, do the extra HTTP requests this method adds put more strain on the server than a single, larger file would?) I don't have a really solid understanding of meta keywords and their SEO significance, but as I recall reading, the keywords, title, and description on a page should match up with the page's content; i.e., each page should have slightly different keywords and description while retaining some common ground. What I'm getting at here is trying to foster a discussion on whether my approach is flawed to begin with, whether there are things I can do (within reason) that keep the site structure simple but allow for better SEO practices, or whether my understanding of SEO is wrong. This isn't a question, per se, but hopefully a constructive discussion that more than just I can learn from. I appreciate any responses and hope to hear from you. Thanks!

  • SEO, ordering and duplication of content

    - by piquadrat
    I run a specialized news site and am trying to apply a little bit of SEO sauce to it. One of the most important things I hear is to avoid duplication of content. I've covered all the basics, but I'm stuck on the ordering of content. As an example, the archive of the site is orderable by date, views, and rating. Since we don't have that many news items, an archive page for a particular day usually has only a couple of items, so the following URLs all have the same content, albeit in different orderings:

        /news/archive/2010/05/16/
        /news/archive/2010/05/16/?o=views
        /news/archive/2010/05/16/?o=rating

    Do search engines penalize this particular kind of duplication of content? And if so, what's the best way to avoid the penalty? <link rel="canonical" />? Telling Google & Co. to ignore the o parameter? Marking the ordering links with nofollow? Only allowing indexation of the date-ordered archive pages through robots.txt (not sure if this is even possible)?
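    For reference, a sketch of the canonical option, using the date-ordered URL from the question as the preferred version; every ordering variant of the page would carry the same hint in its <head>:

        <link rel="canonical" href="http://mysite/news/archive/2010/05/16/" />

    This does not block crawling of the ?o= variants; it just tells search engines which URL should collect the credit.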

  • Apache local configuration to resolve files correctly

    - by Alex E.
    Hello, I am new at this, so bear with me. I have just configured Apache and PHP to work on my local Mac OS X computer. PHP works fine, except when I try to load the files for my live sites. The live sites have separate directories and are sorted by client name, etc. I've created symlinks to them in the default document root of the local web server. My issue is that Apache doesn't seem to want to load any of the paths referenced in the HTML pages. For example, I have src="/css/main.css", but Apache doesn't load the file; similarly for images, it just resolves as a 404 file-not-found error. I then thought it might be the symlinks, so I copied the full directory into the Apache document root, and still had the same result. I would really love to set up my local development environment to run Apache, PHP, and MySQL, so I can develop locally and publish when ready. I also tried the MAMP installation and had the same issues. Any help at all with this would be greatly appreciated. If my explanation wasn't clear, please let me know. Thanks! Alex.

  • Local SSL connections are causing redirect loop (after Ubuntu update)

    - by codeinthehole
    Following a recent Ubuntu update, my local websites are no longer serving their pages over SSL. For example, my .htaccess file attempts to ensure /sign-in is always served over HTTPS:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} /sign-in
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,QSA,R=301]

    However, when I make a request to /sign-in on the domain site2-local, I get the error "The page isn't redirecting properly", with the following in /var/log/apache2/error.log:

        [Tue Jun 08 12:20:57 2010] [info] [client 127.0.1.1] Connection to child 0 established (server site1-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Seeding PRNG with 656 bytes of entropy
        [Tue Jun 08 12:20:57 2010] [info] Initial (No.1) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.2) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.3) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.4) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.5) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.6) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.7) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.8) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.9) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.10) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:21:12 2010] [info] [client 127.0.1.1] (70007)The timeout specified has expired: SSL input filter read failed.
        [Tue Jun 08 12:21:12 2010] [info] [client 127.0.1.1] Connection closed to child 0 with standard shutdown (server site2-local:443)

    There is a connection to site1-local (another site on my machine which shares the certificate), which I don't understand. Does anyone know what is causing this issue?
