Search Results

Search found 8460 results on 339 pages for 'links'.

Page 22/339

  • The Quality of Inbound Links in Search Engine Optimization

    It should surprise no one that the majority of today's prospects search Google, Bing, Yahoo and other search engines before making a purchasing decision, for everything from pets to cars and homes. You need an Internet presence in some fashion so that relevant prospects can find you, because your competitors will most certainly be there in force.

    Read the article

  • Will redirecting a subdomain lose Google SEO links?

    - by user29160
    Because of a change in brand, I want to redirect our subdomain.domain.com to newdomain.com. The content being exactly the same, I was thinking of using a wildcard 301 redirect to newdomain.com. I noticed it is not possible to set up this redirect in Google Webmaster Tools as you can with a root domain. Is there a way I can do this redirect without losing all the backlinks referenced with Google? All the best, Mark
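
    (Not from the question, just a common pattern: assuming the old subdomain is served by Apache with mod_rewrite enabled, a path-preserving wildcard 301 might look like this.)

        # .htaccess on subdomain.domain.com -- hedged sketch, adjust to your setup
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^subdomain\.domain\.com$ [NC]
        RewriteRule ^(.*)$ http://newdomain.com/$1 [R=301,L]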

    Read the article

  • Can I use asterisks in URLs?

    - by KajMagnus
    Are there any reasons I shouldn't use an asterisk (*) in a URL? Background: with asterisks, I could provide these nice and user-friendly (or what do you think?) URLs:

    - example.com/some/folder/search-phrase* means search for pages with names starting with "search-phrase", located in /some/folder/.
    - example.com/some/**/*search-phrase* means search for any page with "search-phrase" anywhere in its name.
    - example.com/some/folder/* means list all pages in /some/folder/ (rather than showing the /some/folder/index page).
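
    (An editorial aside, not from the question: RFC 3986 lists "*" among the sub-delims, so it is legal in a path segment; the practical question is whether servers, proxies, and encoders preserve it. A quick Python sketch of how a standard encoder treats it:)

        # how Python's standard URL encoder handles "*" -- hedged illustration
        from urllib.parse import quote, unquote

        print(quote("search-phrase*"))            # search-phrase%2A (escaped by default)
        print(quote("search-phrase*", safe="*"))  # search-phrase*   (explicitly kept)
        print(unquote("search-phrase%2A"))        # search-phrase*   (round-trips either way)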

    Read the article

  • links for 2010-04-23

    - by Bob Rhubart
    Lip Service: Meeting enterprise architecture communication challenges is critical. The greatest obstacle to successful enterprise architecture is the one that enjoys a good night’s sleep, loves a nice hot shower, and sometimes cheats on its diet. (tags: oracle otn oraclemagazine enterprisearchitecture communication)

    Read the article

  • Internal Links - On Page SEO - How to Do it Right

    One of the main reasons we do on-page SEO is to communicate with the search engines. If you think of the code on your website page as a conversation with Google, you can instruct the search engine in how you want your website indexed.
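
    (An editorial sketch of the kind of markup-level instructions this alludes to; the markup below is an assumed example, not taken from the article.)

        <head>
          <meta name="robots" content="index, follow">
          <link rel="canonical" href="https://example.com/widgets/">
        </head>
        <body>
          <!-- descriptive anchor text tells engines what the target page is about -->
          <a href="/widgets/blue-widgets.html">Blue widgets buying guide</a>
          <!-- rel="nofollow" asks the engine not to pass link equity through this link -->
          <a href="/login" rel="nofollow">Log in</a>
        </body>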

    Read the article

  • Website Mistake #16 - Broken Links

    Obviously broken links can be a major problem on your website, and we've seen it happen time and time again. It's a common mistake, and an easy one to end up with without noticing. Just to clarify, what we're talking about are broken hyperlinks on your website: links that point to pages that no longer load.
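
    (An editorial sketch, not from the article: a minimal broken-link check in Python, assuming the third-party requests and beautifulsoup4 packages.)

        import requests
        from bs4 import BeautifulSoup
        from urllib.parse import urljoin

        def broken_links(page_url):
            # fetch the page and walk every anchor that carries an href
            html = requests.get(page_url, timeout=10).text
            for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                target = urljoin(page_url, a["href"])  # resolve relative links
                try:
                    status = requests.head(target, timeout=10,
                                           allow_redirects=True).status_code
                except requests.RequestException:
                    status = None  # unreachable hosts count as broken too
                if status is None or status >= 400:
                    yield target, status

        for url, status in broken_links("https://example.com/"):
            print(status, url)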

    Read the article

  • The Importance of Building Links in SEO Services

    SEO (Search Engine Optimization) services aim to optimize your website so that it comes up in the search engine results pages for popular keyword searches. This is no easy job. Remember, there can be thousands or even millions of web pages out there in your niche competing with you.

    Read the article

  • How to tell Mercurial to never create hard links

    - by scrapdog
    I am planning to use Mercurial in the near future on some projects. These projects will normally reside in a directory on my Windows machine, but I will be sharing these directories using VirtualBox so I can work on them directly from within Linux. I understand that Mercurial will sometimes create hard links when cloning repositories. I'm not sure how a VirtualBox shared directory handles these hard links (or if it even can), so I'd rather just tell Mercurial to never attempt to make hard links and always make a copy. My question: how do I globally disable Mercurial from hard linking? (Although if someone has gotten Mercurial and VirtualBox shared folders to work nicely with hard linking, I'd like to hear about it!)
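
    (An editorial note, hedged: I'm not aware of a global off-switch for hard links in core Mercurial, but the clone command's --pull flag forces a full copy instead of hard linking, which may be enough here.)

        # force a full copy at clone time rather than hard links -- hedged sketch
        hg clone --pull /path/to/source-repo /path/to/working-copy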

    Read the article

  • Does spreading content across domains improve ranking? [closed]

    - by usertest
    Possible Duplicate: The SEO Benefit of Breaking Up Content Onto Different Websites
    I was wondering if (assuming all your content is related) it would be better to put all your content under a single domain or on multiple domains that link to each other. Let's say I have Site A, which doesn't have a good search ranking. If I have a new product that I'm sure could get a good ranking on its own, would I get a better search ranking for Site A if I:

    - Add the new product as a new section to Site A, or
    - Put the product on a new Site B and link back to Site A?

    To give you an example: if you were developing a few browser plugins, would it be better (in terms of ranking) to showcase them all on the same site, or would you give them each their own domains that link to each other? Thanks.

    Read the article

  • ln -s links to wrong file

    - by user289075
    I've just installed MATLAB and want to be able to call it from the terminal. It works fine when I explicitly call it from its directory. I cd to /usr/local/bin and type sudo ln -s /usr/local/MATLAB/R2012a/bin/matlab matlab. When I then type "matlab" in the terminal, I get the error message "bash: /media/OS/MATLAB/bin/matlab: No such file or directory". I have no idea why it's trying to call matlab from /media. I've tried deleting the file from /usr/local/bin, but when I create it again the same thing happens. Any help would be very much appreciated.
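
    (A hedged editorial aside, not a confirmed diagnosis: bash caches the locations of commands it has already run, so a stale hash entry from an earlier install could explain the /media path. Some commands that might narrow it down:)

        type -a matlab                     # list every 'matlab' bash knows about
        hash -r                            # clear bash's cached command locations
        ls -l /usr/local/bin/matlab        # confirm what the symlink points at
        readlink -f /usr/local/bin/matlab  # follow the link to its final target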

    Read the article

  • SEO Strategies - When All Else Fails - Build Inbound Links

    There are times when you are trying to break ground in a niche with very high competition and you just seem to be getting nowhere fast. You have tweaked every page element you can think of to maximise the optimisation of every page on your website, you have built your Sitemap and submitted it to all the search engines, and still you have not moved up the ranks or made any progress.

    Read the article

  • Is the title attribute (not tag) important to SEO?

    - by JasonBirch
    The title attribute is a standard HTML attribute available on most tags, e.g. <li><a title="Widgets listed by household function" href="/widgets/by-function.html">by Function</a></li>. I've used this attribute on some sites for usability; many browsers pop up a "tooltip" over the link with the more detailed description of what is on the other side. I've been wondering if doing so is having a negative effect on my rankings (hidden text?) or if it has any effect at all on onsite or offsite keyword relevance calculations. Does anyone know of any research done on this?

    Read the article

  • Initializing links in jqtouch in AJAX loaded content

    - by Cody Caughlan
    I have an iPhone web app built in jQTouch. When content is loaded by links ("a" tags), the content on the next page is parsed by jQTouch and any links are initialized. However, when I load content via an AJAX call and append() it to an element, any links in that content are NOT initialized by jQTouch. Thus any clicks on those links are full-blown requests to a new resource and are not handled by jQTouch, so at that point you've effectively broken out of jQTouch. My AJAX code is:

        <div id="data"></div>
        <script type="text/javascript">
        $.ajax({
          url: '/mobile/nearby-accounts',
          type: 'GET',
          dataType: 'html',
          data: {lat: lat, lng: lng},
          success: function(html) {
            $('#data').empty().append(html);
            // Is there some method I can call on jqtouch to have any links
            // in $('#data') be hooked up to the jqtouch lifecycle?
          }
        });
        </script>

    Thanks in advance.
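
    (An editorial sketch, not an official jQTouch answer: one hedged workaround is to handle taps in the loaded content yourself and route them through jQTouch's documented goTo() method; this assumes your jQTouch instance is saved in a variable, e.g. var jQT = new $.jQTouch({...}).)

        // delegate clicks inside #data and drive navigation manually -- hedged sketch
        $('#data').delegate('a', 'click', function(e) {
          e.preventDefault();
          jQT.goTo($(this).attr('href'), 'slide');  // animate to the target pane
        });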

    Read the article

  • Regex to find external links from the html file using grep

    - by Amar
    Hello, for the past few days I've been trying to develop a regex that fetches all the external links from the web pages given to it, using grep. Here is my grep command:

        grep -h -o -r -e "\(\(mailto:\|\(\(ht\|f\)tp\(s\?\)\)\)\://\)\{1\}\(.*\?\)" "/mnt/websites_folder/folder_to_search"

    The grep seems to return everything after the external link on that line. For example, if an HTML file contains something like this on the same line:

        <a href="http://www.google.com">Google</a><p><a href='https://yahoo.com'>Yahoo</a></p>

    then the given grep command returns the following result:

        http://www.google.com">Google</a><p><a href='https://yahoo.com'>Yahoo</a></p>

    The idea here is that if an HTML file contains more than one link (irrespective of whether it's in an a, img, etc.) on the same line, the regex should fetch only the links and not all the content of that line. I managed to develop the same thing on rubular.com; the regex is as follows:

        ("|')(\b((ht|f)tps?:\/\/)(.*?)\b)("|')

    It works with the above input, but I am not able to replicate it in grep. Can anyone help? I can't modify the HTML files, so don't ask me to do that; nor can I look for each specific tag and check its attributes to get external links, as that adds processing time and my application doesn't demand it. Thank you.
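
    (An editorial sketch, hedged: basic and extended grep regexes have no lazy quantifier, so either exclude the characters that can close an attribute value, or use PCRE via grep -P where your grep supports it.)

        # ERE: stop the match at quotes, whitespace, or angle brackets
        grep -h -o -r -E "(mailto:|(ht|f)tps?://)[^\"' <>]+" "/mnt/websites_folder/folder_to_search"
        # PCRE (if available): a true non-greedy match; note the quotes are included
        grep -h -o -r -P "[\"'](ht|f)tps?://.*?[\"']" "/mnt/websites_folder/folder_to_search"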

    Read the article

  • jQuery load links not clickable

    - by john morris
    OK, I am loading a separate page with links in it into a page named index.php. It loads fine, but when I click one of the links inside the loaded content, nothing happens; they don't act as links. But if I do an alert('hi'); after the load('page.html');, then it will work. Any ideas on getting this to work without alerting something after it loads? Oh, also I can't use a callback, unless there is a way to update the GET variable: the page being loaded has a $_GET variable, and the links inside the loaded page are supposed to update that $_GET variable. Anyway, is there a way to make the links clickable after loading the page?
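
    (An editorial sketch, not from the post: handlers bound before load() runs won't exist on the replaced content, so bind them by delegation from a stable parent; #container is a hypothetical wrapper element.)

        // keep links inside the loaded content working -- delegated handler sketch
        $('#container').delegate('a', 'click', function(e) {
          e.preventDefault();
          $('#container').load($(this).attr('href'));  // load the clicked link's target
        });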

    Read the article

  • Stream post URL security and wall post links

    - by Jeff Lee
    Our app's mobile client can create wall post links to our app's web-facing pages. Since this happens in the context of a mobile app, we do this on behalf of our user using the Graph API's feed/message endpoint. I noticed that the links showing up in the wall posts are being routed through our app's auth dialog, which is NOT what we want. We just want transparent links, without forcing the client to auth our app, similar to what happens when you share to FB in Path. I went ahead and disabled the "Stream post URL" option several hours ago, but we still seem to be getting the re-routed links for wall posts. The target URLs for these links are within the domain we've registered for our Facebook app. Is there anything else I need to do to fix this?

    Read the article

  • IE7 - visited links revert to unvisited after page refresh

    - by Gerald
    Hello, a number of our users have just upgraded from IE6 to IE7. The upgraded users are reporting an issue with visited links reverting to their unvisited color after a page refresh. This only happens to links that are using JavaScript instead of a hard-coded URL:

        <script type="text/javascript">
        <!--
        function LoadGoogle() {
          var LoadGoogle = window.open('http://www.google.com');
        }
        -->
        </script>
        <a href="javascript:LoadGoogle()">Google using javascript</a>
        <a href="#" onclick="javascript:LoadGoogle()">Google using javascript OnClick</a>

    The above links will revert back to the unvisited color whenever the page is refreshed. It doesn't matter if the page is refreshed because of a postback, manually hitting the refresh or F5 button, or an auto-refresh function. Please note, the above code is an oversimplification of what is actually happening, but I believe it illustrates the issue well enough. This is causing a problem for our users because we are providing them with a list of items that are all opened into new windows via JavaScript when they are clicked, and the parent page refreshes when the users are finished with them. Each time the parent page is refreshed, all of these links revert back to their unvisited color, so our users are losing track of which items they've worked on. I've been digging around and it looks like this is intended behavior: IE7 doesn't register these links with the browser's history. Does anyone know a workaround that will allow us to keep these JavaScript links in the visited state without a major overhaul of the app's code? Thank you.
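
    (An editorial sketch of one classic workaround, not a confirmed fix for IE7: give each link a real href so the browser records a history entry, and keep the scripted window in onclick, returning false to cancel the default navigation.)

        <!-- the href is real, so :visited can apply; onclick still opens the popup -->
        <a href="http://www.google.com"
           onclick="window.open(this.href); return false;">Google</a>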

    Read the article

  • Most efficient way to store list structure in XML

    - by Mike
    Starting a new project, and I was planning on storing all of my web content in XML. I do not have access to a database, so this seemed like the next best thing. One thing I'm struggling with is how to structure the XML for links (which will later be transformed using XSLT). It needs to be fairly flexible as well. Below is what I started with, but I'm starting to question it:

        <links>
          <link>
            <url>http://google.com</url>
            <description>Google</description>
          </link>
          <link>
            <url>http://yahoo.com</url>
            <description>Yahoo</description>
            <link>
              <url>http://yahoo.com/search</url>
              <description>Search</description>
            </link>
          </link>
        </links>

    That should get transformed into a nested list of links: Google, then Yahoo with Search indented beneath it. Perhaps something like this might work better:

        <links>
          <link href="http://google.com">Google</link>
          <link href="http://yahoo.com">Yahoo
            <link href="http://yahoo.com/search">Search</link>
          </link>
        </links>

    Does anyone perhaps have a link that talks about structuring web content properly in XML? Thank you. :)
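
    (An editorial sketch, not from the post: assuming the attribute-based structure above, an XSLT 1.0 fragment along these lines would produce nested lists.)

        <!-- hedged sketch: recursive templates turn <link> nesting into nested <ul>s -->
        <xsl:template match="links">
          <ul><xsl:apply-templates select="link"/></ul>
        </xsl:template>
        <xsl:template match="link">
          <li>
            <a href="{@href}"><xsl:value-of select="text()"/></a>
            <xsl:if test="link">
              <ul><xsl:apply-templates select="link"/></ul>
            </xsl:if>
          </li>
        </xsl:template>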

    Read the article

  • Drupal theme preprocess function - primary links

    - by slimcady
    I recently wrote a theme function to add a class to my primary links, and it works great. I then wrote some CSS classes to style these links with custom background images. It worked out great. Now comes the problem: the link text for the primary links is still displayed. Normally this isn't a problem, as I would just wrap the link text in a <span> with a custom "hide" class. For example:

        <span class="hide"><a href="#">Link Text</a></span>

    So my question is: how can I loop through the primary links and wrap the text with a <span> like in my example? Here's the theme function I used to add my classes:

        function zkc_preprocess_page(&$vars, $hook) {
          // Make a shortcut for the primary links variables
          $primary_links = $vars['primary_links'];
          // Loop thru the menu, adding a new class for CSS selectors
          $i = 1;
          foreach ($primary_links as $link => $attributes) {
            // Append the new class to existing classes for each menu item
            $class = $attributes['attributes']['class'] . " item-$i";
            // Add revised classes back to the primary links temp variable
            // (note: 'attributes' must not be quoted as '$attributes' here)
            $primary_links[$link]['attributes']['class'] = $class;
            $i++;
          } // end the foreach loop
          // Reset the variable to contain the new markup
          $vars['primary_links'] = $primary_links;
        }
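
    (An editorial sketch, hedged: in Drupal 6, theme_links() prints a link's title as raw HTML when the item's 'html' key is TRUE, so one assumed approach is to wrap the title inside the same loop.)

        // inside the foreach -- wrap the visible text and flag it as HTML
        $primary_links[$link]['title'] = '<span class="hide">'
          . check_plain($attributes['title']) . '</span>';
        $primary_links[$link]['html'] = TRUE;  // stop Drupal escaping the span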

    Read the article

  • Extract Links and save pages

    - by Veejay
    I have a couple of pages, each with around 20 links on it. Each link leads to a different page. I want to provide the user an option to extract all the links on these pages and then download each of the (roughly 20) linked pages to their desktop. Are there any tools/addons/plugins that I can suggest to users?
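
    (An editorial sketch rather than a recommendation: for users comfortable with a command line, wget can crawl one level deep from a page and save everything it links to.)

        # follow each link on the page one level deep and save the results
        wget --recursive --level=1 --convert-links --page-requisites http://example.com/page.html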

    Read the article
