Search Results

Search found 8460 results on 339 pages for 'links'.

  • Splitting a sitemap by content type

    - by James
    I am currently tasked with submitting our website's sitemap to the search engines every week. We have a module that offers sitemap generation, but it does not work very well: not all pages are included and it does not split the sitemap by content. I've used various tools (online and offline) to generate the sitemaps, so generation is not the problem. The problem is that after every generation (which takes most of each Monday) I have to manually go through the sitemap and categorise the links into products, pages, categories and sub-categories. I've experimented successfully with XSL to split the sitemap, but it is still a labour-intensive process. Does anyone know of a good method to split the sitemap? Currently there are around 20,000 links (iirc) in total.
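
    One way to automate the categorisation step - a minimal sketch, not a feature of any particular module - is to post-process the generated sitemap with a short script that buckets URLs by path. The path patterns and output file names below are assumptions and would need to match how products, categories and pages are actually distinguished in the site's URLs.

      # Sketch: split sitemap.xml into per-content-type sitemaps by URL path.
      # The patterns in BUCKETS are placeholders; adjust to the real URL scheme.
      import xml.etree.ElementTree as ET

      NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
      BUCKETS = {
          "/product": "sitemap-products.xml",      # hypothetical patterns
          "/category": "sitemap-categories.xml",
      }
      DEFAULT = "sitemap-pages.xml"                # anything that matches no pattern

      def split_sitemap(source="sitemap.xml"):
          tree = ET.parse(source)
          groups = {name: [] for name in list(BUCKETS.values()) + [DEFAULT]}
          for url in tree.getroot().findall(f"{{{NS}}}url"):
              loc = url.find(f"{{{NS}}}loc").text
              target = next((f for pat, f in BUCKETS.items() if pat in loc), DEFAULT)
              groups[target].append(url)
          for filename, urls in groups.items():
              root = ET.Element(f"{{{NS}}}urlset")
              root.extend(urls)
              ET.ElementTree(root).write(filename, encoding="utf-8", xml_declaration=True)

      if __name__ == "__main__":
          split_sitemap()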

    Read the article

  • No description for any page on the website is available in Google despite robots.txt allowing crawling

    - by Abhijit
    I seem to have the weirdest issue with search engine optimization. I have asked the IT folks at my university, asked people on Joomla forums, and have been trying to sort this out with Google Webmaster Tools for more than two months, to little avail. I want to know if I have some blatantly wrong configuration somewhere that is causing search engines to be unable to index this site. I noticed a similar issue with another website I searched for online (ECEGSA - The University of British Columbia at gsa.ece.ubc.ca), which makes me believe this might be a concern other people are looking for an answer to. Here are the details: The website in question is http://gsa.ece.umd.edu/. It runs on Joomla 2.5.x (latest). The site has been up since around mid-December 2013, and I noticed right from the get-go that it was not being indexed correctly on Google. Specifically, I see the following message when I search for the website on Google: "A description for this result is not available because of this site's robots.txt – learn more." The thing is, from December until around March I used the default Joomla robots.txt file, which is: User-agent: * Disallow: /administrator/ Disallow: /cache/ Disallow: /cli/ Disallow: /components/ Disallow: /images/ Disallow: /includes/ Disallow: /installation/ Disallow: /language/ Disallow: /libraries/ Disallow: /logs/ Disallow: /media/ Disallow: /modules/ Disallow: /plugins/ Disallow: /templates/ Disallow: /tmp/ Nothing there should stop Google from indexing my website. Even more confusingly, in Google Webmaster Tools, under the "Blocked URLs" tab, when I test many of the links on the site, they all show up as "Allowed". I then tried adding a sitemap and referencing it in the robots.txt file. That did not help: same exact search result, same behavior in the "Blocked URLs" tab. Additionally, the "Sitemaps" tab reports an error for several links saying "URL is robotted out". I tried those exact links under "Blocked URLs" and they are allowed! I then tried deleting the robots.txt file. No use; same exact problem. Here is an example screenshot from Google's Webmaster Tools. At this point I cannot give a rational explanation for why this is happening, and neither can anyone in the IT department here. No one on the Joomla forums can seem to understand what is going on. Based on what I explained, does it seem that I have somehow configured something incorrectly in robots.txt, in .htaccess, or somewhere else?
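
    One quick sanity check, sketched below with nothing but the Python standard library, is to fetch the live robots.txt exactly as a crawler would and ask it whether Googlebot may fetch a given page. The robots.txt URL comes from the question; the list of test pages is an assumed example. If this reports the pages as allowed while Google still shows the robots.txt message, the cause is more likely a stale copy of robots.txt cached by Google, a different robots.txt being served on one of the http/https or www/non-www variants, or a meta robots tag, rather than the file quoted above.

      # Sketch: check what the live robots.txt actually allows for Googlebot.
      # ROBOTS_URL is from the question; TEST_URLS are assumed example pages.
      import urllib.robotparser

      ROBOTS_URL = "http://gsa.ece.umd.edu/robots.txt"
      TEST_URLS = [
          "http://gsa.ece.umd.edu/",
          "http://gsa.ece.umd.edu/index.php",
      ]

      rp = urllib.robotparser.RobotFileParser()
      rp.set_url(ROBOTS_URL)
      rp.read()  # downloads and parses the file

      for url in TEST_URLS:
          verdict = "allowed" if rp.can_fetch("Googlebot", url) else "blocked"
          print(f"{url} -> {verdict} for Googlebot")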

    Read the article

  • Will a rel=canonical link pointing to a 301 redirect pass less pagerank than one without a 301?

    - by tobek
    On this official Google page about canonical links it says: Can rel="canonical" be a redirect? Yes, you can specify a URL that redirects as a canonical URL. Google will then process the redirect as usual and try to index it. There is no mention that this might dilute the impact of the canonical link. However, Google has made clear elsewhere that 301 redirects do dilute PageRank - roughly as much as a link dilutes PageRank. Is that relevant here? I'm assuming the answer is "no" but I wanted to confirm. Relevant but not duplicate: Does Rel=Canonical Pass PR from Links or Just Fix Dup Content.

    Read the article

  • Embedding a generic Google search with autocomplete - not a custom site search

    - by picxelplay
    Most people's home page is google.com. My homepage is just a custom HTML page hosted on my computer. I do this because I am a web developer and I have several projects that I work on at one time, so I like to have quick links to all of them. On that page I usually just have a link to google.com for when I want to search. But below all of my quick links, I want to add a Google search box (with autocomplete). I first used a simple iframe to embed google.com into the page, but then my search results were confined to that iframe; I wanted to search for something and have the results open in a new tab. I then came across this code snippet, but it doesn't have autocomplete: http://www.refactory.org/s/google_search/view/2 How can I add autocomplete to this? Or is there a better way of doing it? Thanks in advance for any advice.

    Read the article

  • What effect does using itemprop="significantLinks" on anchors have for SEO?

    - by hdavis84
    As I've described in a previous post about span tags within head tags, I'm practicing the application of microdata via http://schema.org. Anyone who's browsed the documentation there knows that the explanations of how to use each property leave a lot of room for improvement. My question in this post is about the "significantLinks" property and how it affects SEO for on-page, in-content anchor text. Does anyone have more information on whether it is good to use for link optimization? I understand that schema.org says it is to be used on "non-navigational links" and that those links should be relevant to the current page's meaning. But will using this property hurt SEO or improve it for each page? Thanks in advance; by answering this with accurate information you are helping not just me, but many people who are trying to make their customers more successful by helping them rank for keywords relevant to their business, bringing them more search engine traffic.

    Read the article

  • Does adding a list of "recent articles" affect SEO?

    - by Groo
    We have a site which has a sidebar with sections (or "widgets" if you like) showing things like "Recent Articles", "Other Articles by this User", "Similar Articles", etc. The issue is, Google seems to take these links very seriously. In fact, if I have only a single article that is closely related to a certain phrase (and several other pages link to it in their sidebars), when I do a Google search, it lists all those other pages, highlighting that one link to the page that should actually be the most relevant one. And these pages don't even mention the phrase anywhere else. Is there a common approach to adding these sidebar links? For example, I might add them through Ajax after the page is loaded, but then won't crawlers have a harder time finding them?

    Read the article

  • Which torrent client has command line arguments to start/stop downloads?

    - by virpara
    First of all, I want to create a shell script to start/stop downloads in a torrent client. I don't need a CLI, but if you know how I can do that with a CLI from a shell script then that is okay. I use jDownloader, which is a GUI-based application but has some command line arguments, listed below, which I use to start/stop downloads:

      -h/--help  Show this help message
      -a/--add-link(s)  Add links
      -co/--add-container(s)  Add containers
      -d/--start-download  Start download
      -D/--stop-download  Stop download
      -H/--hide  Don't open Linkgrabber when adding Links
      -m/--minimize  Minimize download window
      -f/--focus  Get jD to foreground/focus
      -s/--show  Show JAC prepared captchas
      -t/--train  Train a JAC method
      -r/--reconnect  Perform a Reconnect
      -C/--captcha <filepath or url> <method>  Get code from image using JAntiCaptcha
      -p/--add-password(s)  Add passwords
      -n --new-instance  Force new instance if another jD is running

    So I can easily start/stop downloads as follows:

      jdownloader --start-download
      jdownloader --stop-download

    Now I want a torrent client that can do the same through a shell script.
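
    If Transmission is an acceptable client, its transmission-remote companion tool already exposes start/stop from the command line, so the jDownloader-style toggle can be scripted around it. Below is a rough Python sketch using subprocess; it assumes transmission-daemon is running locally with remote access enabled, and the flags should be double-checked against the man page of the installed version. The same two commands could equally be called straight from a shell script.

      # Sketch: start or stop all torrents via Transmission's transmission-remote.
      # Assumes transmission-daemon is reachable on localhost:9091; verify the
      # flags against `man transmission-remote` for your version.
      import subprocess
      import sys

      def transmission(action):
          flag = "--start" if action == "start" else "--stop"
          # "-t all" selects every torrent; add "-n user:pass" if auth is enabled
          subprocess.run(["transmission-remote", "-t", "all", flag], check=True)

      if __name__ == "__main__":
          if len(sys.argv) != 2 or sys.argv[1] not in ("start", "stop"):
              sys.exit("usage: torrents.py start|stop")
          transmission(sys.argv[1])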

    Read the article

  • Chromium and Google Chrome downloading

    - by user286166
    Well, I have sincerely enjoyed having Google Chrome on my Linux machine, and never had any problems until recently. I found that Google Chrome will simply not allow redirected downloads to start. I can download from direct links, but not from pages that redirect with a "Start in 3 seconds..." message. I immediately assumed it was the web page itself, so I refreshed many times. I then restarted the browser, and after another failed attempt, my computer. After that point, I suspected my internet provider was to blame. I tried the redirection link in an alternative browser (Midori), and it worked perfectly fine. I decided it must be the version of Chrome that Google put out, so I quickly installed Chromium, and to my dismay, ran across the same problem. I can live with copying and pasting the URL into Midori for redirected links, but I'd like the convenience of staying in my main browser. Thank you for any advice in advance. c:

    Read the article

  • Some post-VS2010 Launch Resources

    Here are some useful links related to the Vermont .NET VS2010 launch meeting on Monday night with our RECORD-breaking attendance! :) MSDN Visual Studio Developer Center: msdn.microsoft.com/vstudio. VS2010 comparison of the various SKUs: http://www.microsoft.com/visualstudio/en-us/products. VS2010 trial downloads: http://www.microsoft.com/visualstudio/en-us/download. And from MicrosoftFeed.Com, VS2010 wallpapers for the hardcore: 10+ Beautiful Microsoft Visual Studio 2010 Wallpapers.

    Read the article

  • Identical spam coming from many different (but similar) IP addresses

    - by DisgruntledGoat
    A forum I run has been the victim of spam user accounts recently - several accounts have been registered and their profiles filled with advertising/links. All of this is for the same company, or group of companies. I deleted several accounts weeks ago and blocked some IP addresses, but today they have come back with the same spam. Every account has a different IP address, but they are all of the form 122.179.*.* or 122.169.*.*. I am considering blocking those two IP ranges, but there are potentially thousands of IPs in each range. They appear to be assigned to India (although the spam is for an American company), so given that the site is for a western, English-speaking audience, maybe it doesn't matter. My questions: How are they posting from so many IPs? Is there likely to be a limit to the number of IPs they have access to? Is there anything else I can do at the IP level to block them? (I am looking into other measures like blocking usernames/links.)
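
    For what it's worth, those two ranges are just the /16 networks 122.179.0.0/16 and 122.169.0.0/16, so a registration-time check does not need thousands of individual rules. A rough sketch in Python is below; the networks come from the question, and where such a check would be hooked in depends entirely on the forum software.

      # Sketch: screen an IP against the two spam /16 ranges seen in the question.
      import ipaddress

      BLOCKED_NETWORKS = [
          ipaddress.ip_network("122.179.0.0/16"),   # 122.179.*.*
          ipaddress.ip_network("122.169.0.0/16"),   # 122.169.*.*
      ]

      def is_blocked(remote_ip: str) -> bool:
          addr = ipaddress.ip_address(remote_ip)
          return any(addr in net for net in BLOCKED_NETWORKS)

      # Example: call this in the registration handler before creating the account.
      print(is_blocked("122.179.45.200"))  # True
      print(is_blocked("93.184.216.34"))   # False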

    Read the article

  • duplicate pages

    - by Mert
    I made a small coding mistake and Google indexed my site incorrectly. This is the correct form: https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE but Google indexed my site like this: https://www.foo.com/urunler/171/cart.aspx First I fixed the problem and made a sitemap with only the correct links in it. Now I checked Webmaster Tools and I see this: Total indexed 513, Not selected 544, Blocked by robots 0. So I think this is caused by duplicate indexing, and the "not selected" pages are keeping the correct pages from being selected. I want to know how to fix these "https://www.foo.com/urunler/171/cart.aspx" links. Should I fix it in code, or should I ask Google to re-index my site? If I should redirect the wrong/duplicate links to the correct ones, what is the right way to do that? Thanks for your time in advance.

    Read the article

  • List of backlinks to a specific website, listed by decreasing PageRank

    - by Nicolas Raoul
    With backlinkwatch.com I can get a list of pages that link to a particular website. Unfortunately, it lists tons of obscure blogs and small forums, and it is hard to tell which links are really important. Is there a similar service where links are displayed sorted by "importance"? For instance, a link from the New York Times would be shown at the top of the list, while links from small blogs would not appear until a few pages in. "Importance" can be subjective, so I suggest using PageRank, but other metrics would be fine too.

    Read the article

  • Hosting a magnet link site which could possibly infringe copyrighted material?

    - by Griff
    For the last 3 months I have been building a crawler, an indexer and a lot of other things for what started out as a home project for indexing magnet links on the internet. As my project grew, I have thought about releasing my collected data (which at the minute is on a public domain but with no access) to the public. Whatever the crawler sucks in goes in, and whatever the indexer decides to index gets indexed, as it is a fully automated process. My question is as follows: considering that most of the data collected by what I have built points to illegal copyrighted material (as most magnet links do), where would it be best to host such a site? I notice all of the existing public torrent sites are hosted in India; is this because their laws are less strict on copyright infringement? Have any of you hosted such a site, and if so, what problems have you run into? And as always, any advice on being a webmaster for this type of website?

    Read the article

  • How to see the full file name on the Lubuntu Desktop?

    - by vasa1
    If a file has a long name, only a certain number of initial characters is displayed on the Lubuntu and Xfce desktops. For example, I have a file titled "interesting-links-on-default-browsers". It shows as "interesting-links-on-def..." in both DEs. However, single-clicking on the file in Xfce shows the full name, while in Lubuntu the truncated name remains. It's not a big deal, but if it's just a matter of setting something somewhere to make the full name visible on single-click, I'd be grateful for pointers.

    Read the article

  • SEO penalty for landing page redirects

    - by therealsix
    Using eBay as an example, let's say I have a large number of items whose URLs look like this: cgi.ebay.com/ebaymotors/1981-VW-Vanagon-manual-seats-seven-/250953153841 I want to give my client the ability to put links to these items on their website EASILY, without knowing or checking my URL. So I created a redirect service that maps their identifier to my URL: ebay.com/fake_redirect_service/shared_identifier9918 would redirect to the link above. This works great - my clients can easily set up these links with information they already have, and the user sees the page as usual. So on to the problem... I'm concerned that this redirect service will have a negative impact on my SEO ranking. Having a landing page redirect you immediately to a different URL seems like something a typical spam site would do. Will this hurt me? Any better solutions?
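
    The usual advice for a shim like this is to answer with a permanent (301) redirect, so that engines treat the shim URL as a pure forwarder and consolidate signals onto the destination, and to keep the shim URLs out of the sitemap. A rough sketch of such an endpoint is below; Flask and the in-memory lookup table are only for illustration (a real service would read the identifier-to-URL mapping from a database), and the identifier is the one used as an example in the question.

      # Sketch of the redirect service: look up the client-facing identifier and
      # answer with a 301. Flask and the dict are illustrative only.
      from flask import Flask, abort, redirect

      app = Flask(__name__)

      ITEM_URLS = {  # hypothetical mapping; a real service would use a database
          "shared_identifier9918":
              "http://cgi.ebay.com/ebaymotors/1981-VW-Vanagon-manual-seats-seven-/250953153841",
      }

      @app.route("/fake_redirect_service/<identifier>")
      def forward(identifier):
          url = ITEM_URLS.get(identifier)
          if url is None:
              abort(404)
          return redirect(url, code=301)  # permanent, so signals pass to the target

      if __name__ == "__main__":
          app.run()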

    Read the article

  • Free Website Content - Do Articles From Directories Work Anymore? Part 2

    A clever strategy for many SEO experts is to study a site that is ranked highly and then try to copy what those sites do to become so successful. Take a close look at highly ranked sites and you will notice that virtually all of them have a very high number of links pointing to other sites. Let me give an example of a site that is ranked very highly and is made up exclusively of links pointing to other sites - billions of them, in fact. I am talking about a site that receives over 100 million hits daily. Learn their secrets in this article.

    Read the article

  • Will Ubuntu be releasing an update for Cedar Trail Processors?

    - by Alan
    There is a file available on the Intel web site with the file name "cdv-gfx-drivers-1.0.1_bee.tar.bz2" and a date of July 6, 2012. It can be found by searching the Intel Download Center for the filename or the string "Linux* PowerVR Graphics/Media Drivers". The download page links to the file, the release notes, and a link, "Enabling hardware accelerated playback", that takes one to a page containing links to two PDF documents titled "Enabling Hardware Accelerated Playback for Intel® Atom™ Processor N2000/D2000 Series", one for Ubuntu and one for Fedora. The instructions and release notes speak to working with kernel 3.1.0, and since I do not feel I have the skills, knowledge or training to do anything but follow the instructions to the "T", I am very reluctant to try anything on my freshly updated 3.2.0 kernel. I would much rather use an Ubuntu-supported kernel update that applies these drivers and doesn't break anything in the process. Is it a case where this is so new that Canonical has not yet included these drivers but soon will do so?

    Read the article

  • vJUG: Worldwide Virtual JUG Created

    - by Tori Wieldt
    London Java Community leader and technical evangelist Simon Maple has created a Meetup called vJUG, with the aim of connecting Java developers in the virtual world. The aims for vJUG are: get technical leaders from around the world to present to vJUG members (without travel cost concerns!); work with local JUGs to provide worldwide content to their members and help JUGs present to a worldwide audience; provide content to devs without access to a local JUG; and be a hub that will stream content from other JUG sessions live. The vJUG is not intended to replace local JUG efforts. "The vJUG can never be, and will never be, as vibrant and valuable to its members as a proper local JUG can. Why? Because the true value in JUG meetings are the face to face interactions and personal networking," said Maple. "However, many people do not have access to a really active JUG with great speakers and awesome content. Or, like me, the closest JUG is about 90 mins away." WebEx and Google Hangouts are great, Maple explained, but he hopes vJUG will provide more coordination of online events. Maple hopes that in the future vJUG will provide: an events calendar with reminders and links to upcoming meetings; a newsletter with what's coming up and links to previous sessions; coordination of links to IRC channels which are active during presentations (to create a feeling of virtual community); comments and forums around sessions and presentations; a place where physical JUGs could advertise their sessions (e.g. a NY JUG event) to a worldwide audience, when streamed, via an event that people can sign up to; and a common WebEx or Hangout. Maple encourages both people who need a JUG and existing JUG members to join vJUG. "I'm looking forward to talking with many of you to get members, speakers, and JUG support!" Join vJUG now! (I sense a need for a logo...)

    Read the article

  • High-Powered Sites for low Cost

    - by HighAltitudeCoder
    Ahh, I am experiencing the intimidation of my very first post - visible by the whole world. Ok, here goes.   This first post is nothing exceptional.  It is simply a recommendation based (fittingly, I suppose) upon the job search you may be gearing up for.  I find myself in this very situation right now.  And, I will take my own recommendation after posting this entry. Job-Seekers: To the left you will notice two links under "Recommended Learning".  I have found these links to be invaluable when it comes to re-tooling, re-familiarizing, or otherwise resharping my skills when looking for that next job. Often, you will find job-postings with the text, usually posted after a laborious list of qualifications indicating the company's desire to hire candidates who know what they are doing: "...Looking for a candidate who can hit the ground running...".  The interesting thing about this post to me is I've encountered many individuals who, after speaking and working with them for some time, I've realized are perfectly capable of hitting the ground running - and FAST.  But what if they speed off in the wrong direction? The next time you spearhead a major task in your job, ask yourself: Am I headed in the wrong direction?  There are many ways to do this.  In fact, I've found in this new field there are more tempting ways to steer your project in the wrong direction than there are good ones.  I don't want to suggest that every one of my posts will fall into the "right direction" category, however I do think a healthy dose of introspection of the pros and cons will always be beneficial before you set off. That said, allow me to expound on the previously mentioned links. These web sites are invaluable.  They demonstrate the capabilities of existing as well as new and upcoming tools available in several IDE's.  I've viewed many tutorials in LearnVisualStudio.NET, and only one or two so far in TrainingSpot, however I've been delighted in their simplicity and straightforward approach to proper usage of the particular tool or concept being discussed.  They have not (so far in my experience) demonstrated ways in which to use the tools that become cumbersome, impractical, or error-prone. Each website has step-by-step videos that can be paused, replayed, and most importantly, they are done in real time.  As the author is typing, the viewer gets to experience the coding experience from a first-person perspective, including syntax errors, unexpected behaviors, IDE setup idiosyncracies, everything.  A subtle value I've gained from these videos is that a certain degree of confusion and introspection is normal when working with new tools and exploring new paths.  They (as well as your own experience) are not to be feared, but enjoyed.  I highly recommend them. Good work, guys!

    Read the article

  • The Best Ways For Link Building For New Websites

    Creating a website can be a very interesting and fun activity, but you also need to build links to make your website more successful. Building links helps to make the website more visible and to bring in more traffic. One of the best ways to do this is to target relevant keywords that can be easily remembered, so that you gain new readers frequently and are able to earn money from affiliate companies. You should also submit your articles to web directories, as this can help you get hundreds of backlinks, which can be very good for your website.

    Read the article

  • SEO: Is promoting your backlinks a good strategy for improving search results for my site's name?

    - by user4394
    I run a website in the sports space that's been around for about three years. I am successfully ranking well for targeted keywords, but searching for the name of the site itself returns very poor results - it shows my site, its Facebook/Twitter pages, and then 15 pages of unrelated spam that happen to contain two words that, when combined, form my website's name. After that, my backlinks begin to show up sporadically. As far as I can tell, I simply don't have enough backlinks, and the backlinks I do have are ranked worse than the spam. (Site Explorer lists 200 external links to any page on our domain and 20 external links directly to the front page.) To counter this, my strategy is to promote my backlinks so they get a better PageRank than the spam. Does that make sense? Am I going in the right direction, or should I just focus on getting more backlinks pointing directly to my site? Thanks in advance, and I'd be happy to answer any questions I can (without giving away my site, of course).

    Read the article

  • Duplicate pages indexed in Google

    - by Mert
    I did a small coding mistake and Google indexed my site incorrectly. This is the correct form: https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE But Google indexed my site like this: https://www.foo.com/urunler/171/cart.aspx First I fixed the problem and made a site map with only the correct link in it. Now I checked Webmaster Tools and I see this: Total indexed 513, Not selected 544, Blocked by robots 0. So I think this is caused by duplicate indexing, and it looks like the "not selected" pages are keeping the correct pages from being indexed. I want to know how to fix the "https://www.foo.com/urunler/171/cart.aspx" links. Should I fix it in code, or should I ask Google to re-index my site? If I should redirect wrong/duplicate links to correct ones, how should that be done?

    Read the article
