Search Results

Search found 3880 results on 156 pages for 'duplicate'.

Page 6 of 156

  • How to fix Duplicate sources.list entry?

    - by Harbhag
    I keep getting this warning whenever I run sudo apt-get update:

      W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise-updates/main i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise-updates_main_binary-i386_Packages)
      W: You may want to run apt-get update to correct these problems

    Below are the contents of my /etc/apt/sources.list file:

      deb http://archive.ubuntu.com/ubuntu precise main restricted
      deb-src http://archive.ubuntu.com/ubuntu precise main restricted
      deb http://archive.ubuntu.com/ubuntu precise-updates main restricted
      deb-src http://archive.ubuntu.com/ubuntu precise-updates main restricted
      deb http://archive.ubuntu.com/ubuntu precise universe
      deb-src http://archive.ubuntu.com/ubuntu precise universe
      deb http://archive.ubuntu.com/ubuntu precise-updates universe
      deb-src http://archive.ubuntu.com/ubuntu precise-updates universe
      deb http://archive.ubuntu.com/ubuntu precise multiverse
      deb-src http://archive.ubuntu.com/ubuntu precise multiverse
      deb http://archive.ubuntu.com/ubuntu precise-updates multiverse
      deb-src http://archive.ubuntu.com/ubuntu precise-updates multiverse
      deb http://archive.ubuntu.com/ubuntu precise-security main restricted
      deb-src http://archive.ubuntu.com/ubuntu precise-security main restricted
      deb http://archive.ubuntu.com/ubuntu precise-security universe
      deb-src http://archive.ubuntu.com/ubuntu precise-security universe
      deb http://archive.ubuntu.com/ubuntu precise-security multiverse
      deb-src http://archive.ubuntu.com/ubuntu precise-security multiverse

    How do I fix it?
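
    This warning usually means the same deb line is listed twice, often once in /etc/apt/sources.list and again in a file under /etc/apt/sources.list.d/. As a rough way to locate the culprit, here is a minimal Python sketch (assuming the standard apt paths) that prints any repeated entries and the files they live in:

      from collections import Counter
      from pathlib import Path

      # Gather every active deb/deb-src line from the standard apt locations.
      files = [Path("/etc/apt/sources.list")]
      sources_dir = Path("/etc/apt/sources.list.d")
      if sources_dir.is_dir():
          files += sorted(sources_dir.glob("*.list"))

      entries = []
      for f in files:
          if f.is_file():
              for raw in f.read_text().splitlines():
                  line = raw.strip()
                  if line.startswith(("deb ", "deb-src ")):
                      entries.append((line, f))

      # Report any line that occurs more than once, with the file it came from.
      counts = Counter(line for line, _ in entries)
      for line, f in entries:
          if counts[line] > 1:
              print(f"{f}: {line}")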

    Read the article

  • Conditional formatting of duplicate values in Excel

    - by jamiet
    One of the infrequent pleasures of being a data geek like me is that one does occasionally stumble across little-known yet incredibly useful features in a tool that you use day-in, day-out. Today this happened to me, and the feature is Excel's ability to highlight duplicate rows in a worksheet. Check this out: Notice that I have some data in my worksheet that contains duplicated values, and simply by selecting Conditional Formatting->Highlight Cells Rules->Duplicate Values… Excel will highlight (shown here in red) which rows are duplicated. It seems such a simple thing, but when you're working on a data integration project and the data that is being sent is of, well, let's say dubious quality, features like this are worth their weight in gold. I tweeted about this and it happened to catch a few people's attention, so I figured it might be worth blogging too. Note that I am using Excel 2013, but I happen to know that the feature exists in Excel 2010 and possibly in earlier versions too. Have a great weekend! @Jamiet
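
    As an aside, the same check is easy to script outside Excel. A minimal sketch using pandas, with toy data made up purely for illustration; duplicated(keep=False) flags every member of each duplicate group, mirroring what the conditional-formatting rule highlights:

      import pandas as pd

      # Toy data, made up for illustration.
      df = pd.DataFrame({"name": ["alice", "bob", "alice"],
                         "city": ["leeds", "york", "leeds"]})

      # keep=False marks all rows of each duplicate group, not just repeats.
      print(df[df.duplicated(keep=False)])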

    Read the article

  • Moving one site in Webmaster Tools to more than one site

    - by Towhid
    I have a question-and-answer site about immigration. I have now divided it into two sites: mysite.co.uk, about immigration to the UK, and mysite.com, with subdomains for every country, like australia.mysite.com, sweden.mysite.com, and so on. I have moved all the content from my first site into the .co.uk and .com sites and their subdomains to fill them. I know that Google will detect my two new sites as duplicates of the first one, which is very bad for SEO, and I don't think Google Webmaster Tools has a tool for this. Please guide me on how to fix this problem.

    Read the article

  • Programming Interview Question [duplicate]

    - by user136494
    This question already has an answer here: How to prepare yourself for programming interview questions? [duplicate] 6 answers I have an upcoming interview in a couple of days and had a question for you guys. I've heard that programming interviews include whiteboard problems, where you solve a simple problem on a whiteboard. My questions to you: How many whiteboard problems do you have to solve? Is there more than one? What are examples of whiteboard problems? Is FizzBuzz one of them? Where can I find practice problems? Does anyone know of any good websites?
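
    For reference, FizzBuzz is indeed the archetypal whiteboard warm-up; the whole exercise fits in a few lines, shown here in Python:

      # FizzBuzz: print 1..100, replacing multiples of 3 with "Fizz",
      # multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
      for i in range(1, 101):
          if i % 15 == 0:
              print("FizzBuzz")
          elif i % 3 == 0:
              print("Fizz")
          elif i % 5 == 0:
              print("Buzz")
          else:
              print(i)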

    Read the article

  • How do I get rid of duplicates in Rhythmbox even though the music in my Home folder is not showing duplicates?

    - by Drake
    I clicked Import within Rhythmbox in Ubuntu 12.04 so I could get music into Ubuntu from my Windows partition. The music appeared in my Rhythmbox library and I started playing it. However, when I restarted my computer the imported music did not show up. I looked in my music library and it was completely empty. So, I copied all of my music from my Windows partition into my Music folder and launched Rhythmbox, but now it shows duplicates of all of the music I have. How can I get rid of the duplicate files if they are not showing in my Home folder?
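
    One way to confirm whether the duplication is on disk or only in Rhythmbox's library database is to hash the files. A sketch, assuming the music lives under ~/Music; if it prints nothing, the duplicates exist only in Rhythmbox's database rather than the folder:

      import hashlib
      from collections import defaultdict
      from pathlib import Path

      def file_hash(path, chunk=1 << 20):
          # Hash in chunks so large audio files are not read into memory at once.
          h = hashlib.sha256()
          with open(path, "rb") as f:
              while block := f.read(chunk):
                  h.update(block)
          return h.hexdigest()

      music = Path.home() / "Music"
      by_hash = defaultdict(list)
      for p in (music.rglob("*") if music.is_dir() else []):
          if p.is_file():
              by_hash[file_hash(p)].append(p)

      for paths in by_hash.values():
          if len(paths) > 1:
              print("duplicates:", *paths)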

    Read the article

  • Republishing blog posts on a popular website

    - by Giorgi
    I started my blog about programming yesterday, and in order to promote it and increase traffic I submitted my RSS feed to CodeProject, which pulls my posts and publishes them at CodeProject. While this increases the number of people reading my posts (though they are reading them at CodeProject), I am worried that Google will penalize my site for duplicate content, especially considering that CodeProject has much more reputation than my new website. The post at CodeProject has a link back to my blog post, but it does not have rel=canonical. So my question is: which is better, a link from a high-reputation website and some traffic, or should I remove my feed from CodeProject so that my blog is not penalized? What if CodeProject adds rel=canonical to the link?

    Read the article

  • Detect duplicate in a subset from a set of elements

    - by Abhinav Shrivastava
    I have a set of numbers, say: 1 1 2 8 5 6 6 7 8 8 4 2... I want to detect duplicate elements in subsets (of a given size, say k) of the above numbers. For example, consider the sliding subsets for k=3:

      Subset 1: {1,1,2}
      Subset 2: {1,2,8}
      Subset 3: {2,8,5}
      Subset 4: {8,5,6}
      Subset 5: {5,6,6}
      Subset 6: {6,6,7}
      ...

    So my algorithm should detect that subsets 1, 5 and 6 contain duplicates. My approach: 1) Copy the first k elements to a temporary array (vector). 2) Using std::unique() from the C++ <algorithm> header, determine whether the number of elements changes. Any other clue how to approach this problem?
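
    A linear-time alternative to re-checking each window from scratch is to slide a count map along the sequence; a sketch in Python, with window numbers 1-based to match the example above:

      from collections import Counter

      def windows_with_duplicates(nums, k):
          # Counts of each value in the current window of size k.
          counts = Counter(nums[:k])
          hits = []
          for i in range(len(nums) - k + 1):
              if i > 0:  # slide: drop nums[i-1], add nums[i+k-1]
                  counts[nums[i - 1]] -= 1
                  if counts[nums[i - 1]] == 0:
                      del counts[nums[i - 1]]
                  counts[nums[i + k - 1]] += 1
              if len(counts) < k:  # fewer distinct values than slots => duplicate
                  hits.append(i + 1)
          return hits

      print(windows_with_duplicates([1, 1, 2, 8, 5, 6, 6, 7, 8, 8, 4, 2], 3))
      # -> [1, 5, 6, 8, 9]; the first six windows match the example (1, 5, 6)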

    Read the article

  • Will having a website duplicated on multiple top level domains be penalised by search engines [duplicate]

    - by user1020317
    This question already has an answer here: Will having multiple domains improve my seo? 7 answers I'm running a website for a global company, and although we rank first in search engine results here in Ireland, a search done from other countries doesn't rank us as highly. If I register the domain under other top-level domains (e.g. example.co.uk, example.nor, etc.) and then just mirror the .com site to those other domains, will I be penalised by search engines for having duplicate content? Has anyone else faced a similar problem and found a way to capture global search traffic? Thanks.

    Read the article

  • Bug Tracking Etiquette - Necromancy or Duplicate?

    - by Shauna
    I came across a really old (2+ years) feature request issue in a bug tracker for an open source project that was marked as "resolved (won't fix)" due to the lack of tools required to make the requested enhancement. In the time elapsed since that determination was made, new tools have been developed that would allow it to be resolved, and I'd like to bring that to the attention of the community for that application. However, I'm not sure what the generally accepted etiquette is for bug tracking in cases like this. Obviously, if the system explicitly states not to duplicate and will actively mark new items as duplicates (much in the way the SE sites do), then the answer would be to follow what the system says. But what about when the system doesn't explicitly say that, or a new user can't easily find a place that says what the system's preference is? Is it generally considered better to err on the side of duplication or necromancy? Does this differ depending on whether it's a bug or a feature request?

    Read the article

  • Pagination and duplicate content

    - by jazz090
    I have an archive page that lists the articles published. Because there were so many, I run a pagination script for 127.0.0.1/archive/2/?p=x&pp=y, where p is the page number and pp is the number of articles to display per page. The pagination looks like this: Prev 1 2 3 4 ... 12 NEXT, with each item linking to page p like <a href="?p=x">x</a>. I also have an items-per-page setter: 25 | 50 | 100 (<a href="?pp=y">y</a>). Now I have a PHP script that stores pp in a session variable. But I am worried about duplicate content (since larger pp values include the content of smaller ones) and also about content not getting indexed because it is not linked from the pagination: in the example above, pages 5-11 are not linked, so they may not get indexed. Any ideas on how to fix this?
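
    One common way to defuse the pp variants is to pick a single page size as canonical and have every variant declare it via a rel=canonical link. A minimal sketch, where DEFAULT_PP is a hypothetical choice:

      from urllib.parse import urlencode

      DEFAULT_PP = 25  # hypothetical choice of canonical page size

      def canonical_url(page):
          # Every ?pp=25/50/100 variant of a page declares the same canonical
          # URL, e.g. via <link rel="canonical" href="..."> in the page head.
          return "/archive/2/?" + urlencode({"p": page, "pp": DEFAULT_PP})

      print(canonical_url(3))  # /archive/2/?p=3&pp=25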

    Read the article

  • Preventing Duplicates on Google

    - by abel
    I am currently using a rewrite rule to enable access to .php pages without the php extension. However, to prevent old links from breaking, the pages can still be accessed via links containing the .php extension too. For example, domain.com/page.php can now be accessed at domain.com/page. All the links within the site now use domain.com/page type links. However, older incoming links will still point to the .php pages, meaning Google will index both pages and mark them as duplicates. I have two plans to remedy the situation. 1) Use a PHP 301 redirect: when a page is accessed with the .php extension, redirect each page individually using a 301 redirect in PHP. 2) Use a canonical tag: place a canonical tag on each page, pointing to the extensionless version. My question: are both methods equally efficacious in preventing Google from indexing my ".php" pages? Which method should be preferred, by convention or otherwise?
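
    The site in question uses PHP, but purely to illustrate the mechanics of the first method, here is a minimal extension-stripping 301 written as a Python WSGI sketch:

      from wsgiref.simple_server import make_server

      def app(environ, start_response):
          # 301-redirect any request ending in ".php" to the extensionless URL.
          # (A real version would also carry over the query string.)
          path = environ.get("PATH_INFO", "/")
          if path.endswith(".php"):
              start_response("301 Moved Permanently",
                             [("Location", path[: -len(".php")])])
              return [b""]
          start_response("200 OK", [("Content-Type", "text/plain")])
          return [b"canonical page\n"]

      if __name__ == "__main__":
          make_server("", 8000, app).serve_forever()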

    Read the article

  • How to disallow indexing but allow crawling?

    - by John Doe
    On the front page of my website, I have previews of articles (with a small introduction to them) that link to the full articles. I want to disallow the front page to prevent duplicate content. But if I do this (in robots.txt), would the full articles still be crawled? I mean, would they still be reached by the crawler even though I disallowed the only page that links to them? I don't want to stop the crawler from accessing the page and following the links in it; I just don't want it to index the information (which will be repeated in the full articles).

    Read the article

  • Time issue in libgdx render [duplicate]

    - by jaysingh
    This question is an exact duplicate of: deWitters Game loop in libgdx (Android). Please help me implement this loop in the render method; next_game_tick and GetTickCount() always contain the same time value, so the player position is not updated.

      @Override
      public void render() {
          float deltaTime = Gdx.graphics.getDeltaTime();
          Update(deltaTime);
          Render(deltaTime);
      }

      const int TICKS_PER_SECOND = 50;
      const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
      const int MAX_FRAMESKIP = 10;

      DWORD next_game_tick = GetTickCount();
      int loops;
      bool game_is_running = true;

      while( game_is_running ) {
          loops = 0;
          while( GetTickCount() > next_game_tick && loops < MAX_FRAMESKIP ) {
              update_game();
              next_game_tick += SKIP_TICKS;
              loops++;
          }
          display_game();
      }
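
    A likely cause: if the whole deWitters block, including the next_game_tick = GetTickCount() initialisation, runs inside render(), the tick is reset on every frame and never falls behind the clock, so update_game() never fires. One common restructuring keeps a persistent accumulator fed by the frame delta that libgdx already supplies; a sketch in Python, where update_game and display_game are stand-ins for the asker's hooks:

      TICKS_PER_SECOND = 50
      DT = 1.0 / TICKS_PER_SECOND   # fixed simulation step, in seconds
      MAX_FRAMESKIP = 10            # cap on catch-up updates per frame

      accumulator = 0.0

      def update_game(dt):
          pass  # advance player position etc. by a fixed dt

      def display_game():
          pass  # draw the current state

      def render(delta_time):
          # Called once per frame with the frame delta,
          # i.e. Gdx.graphics.getDeltaTime() in libgdx.
          global accumulator
          accumulator += delta_time
          updates = 0
          while accumulator >= DT and updates < MAX_FRAMESKIP:
              update_game(DT)
              accumulator -= DT
              updates += 1
          display_game()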

    Read the article

  • High Traffic Web Host Solution? [duplicate]

    - by Calsy
    Possible Duplicate: How to find web hosting that meets my requirements? I'm currently shopping around for a web host for the website we are hoping to release in the near future. This is my first real step into this area, so I'm just wondering what I should be looking for. It is an ASP.NET MVC website with an MS SQL Server backend. I need to know that the server will not buckle if traffic booms. Currently I'm looking at a managed dedicated server from SingleHop. Does anyone know better or have any advice?

    Read the article

  • Will duplicate international (i18n) content hinder SEO rankings?

    - by Rhys
    Google clearly states that duplicate content within a single domain, or across multiple domains, is not advised. This is understood, but I am not sure of any exceptions for sites with region-specific content that is often replicated across locales. For example, a site's /en-us/about page could be identical to /en-uk/about, whereas most likely /en-ja/about is unique. Are GYM (Google, Yahoo, Microsoft) smart enough to understand that the initial URL depth is a locale specifier? Is there any robots.txt, header, or similar trickery that I should include to outline the site's international structure?

    Read the article

  • SEO value of duplicating content externally

    - by Don
    I run a website that includes a blog which was hand-coded by myself and is hosted on the same domain. My partner in this endeavour thinks it would be a good idea to open up a blogger/wordpress blog and duplicate the on-site blog on this off-site blog. AFAIK the main reason for doing this is the SEO benefit of the inbound links that this off-site blog will create. I think this is a bad idea, because: 1) effectively what we're doing is creating a (very small scale) link farm; 2) we're more likely to be punished than rewarded (in SEO terms) for duplicating our content across domains; 3) this introduces a problem of synchronising our content across domains. For example, if a blog post is edited on the on-site blog, then ideally the off-site blog should be similarly updated. I know very little about SEO, so would be interested to hear what more informed readers have to say.

    Read the article

  • Duplicate pages indexed in Google

    - by Mert
    I did a small coding mistake and Google indexed my site incorrectly. This is the correct form: https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE. But Google indexed my site like this: https://www.foo.com/urunler/171/cart.aspx. First I fixed the problem and made a site map with only the correct link in it. Now I checked Webmaster Tools and I see this: Total indexed 513, Not selected 544, Blocked by robots 0. So I think this is caused by duplicate indexing, and it looks like the not-selected pages are keeping the correct pages out of the index. I want to know how to fix the "https://www.foo.com/urunler/171/cart.aspx" links. Should I fix this in code, or should I ask Google to re-index my site? If I should redirect wrong/duplicate links to the correct ones, how should that be done?

    Read the article

  • SEO Tips - Updating Your Content and Avoiding Duplicate Filters

    If you are just getting into search engine optimization, it's important to be aware that content is the most important factor in earning your website high rankings. If you decide to hire an SEO consultant, make sure that you explain to them exactly what you want your website to look like and the content you would like it to feature. Having fresh and relevant content on your website is key, as that will bring web crawlers back frequently. There are different ways of achieving this while avoiding having your website removed from a search engine's index due to duplicate content.

    Read the article

  • Duplicate page content and the Google index

    - by Kit Sunde
    I have static pages with dynamically expanding content that Google is indexing. I also have deep links to virtually duplicate pages which pre-expand the relevant section of content. It seems like Google is ignoring all my specialized pages and not putting them in the index, even after going through Webmaster Tools and submitting them for crawling and indexing manually. I also use the Google API for integrating search on the site, and the deep-linked pages won't show up. Is there a good solution for this?

    Read the article

  • Duplicate (Spotify) Icon in launcher

    - by user191231
    I have installed Spotify on Ubuntu 13.04 and have locked the icon to the launcher. But when I exit the program fully, or even restart, and use that icon to open Spotify, a new icon is created, or a different icon is generated with a ? on it. It is a clean install of Ubuntu 13.04, so I was wondering if this is a known bug or if there is a way of making sure it just doesn't create a duplicate icon. N.B. it has not happened as of yet with any other program I have installed (Chrome & Steam).

    Read the article

  • Pagination, Duplicate Content, and SEO

    - by Iamtotallylost
    Please consider a list of items (forum comments, articles, shoes, it doesn't matter) which are spread over multiple pages. Different sort orders are supported (by date, by popularity, by price, etc). So, a URL might look like this (I use the query style here to simplify things): /items?id=1234&page=42&sort=popularity or /items?id=1234&page=5&sort=date. Now, in terms of SEO, I think I should be worried about duplicate content. After all, each item appears at least as many times as there are sort orders. I've seen Matt Cutts talking about the rel=canonical link tag, but he also said that the canonical page should have very similar content. That is not the case here, because page #1 in a non-canonical sort order might have completely different items than page #1 in the canonical sort order. For a given non-canonical page, there is no clear canonical page listing all the same items, so I think rel=canonical won't help here. Then I thought about using the noindex meta tag on all pages with a non-canonical sort order, and not using it on pages with the canonical sort order. However, if I use that method, what will happen with backlinks pointing to non-canonical pages? Will they still spread their PageRank juice, even though the first page googlebot (or any other crawler) encounters is marked as "noindex"? Can you please comment on my problem and what you think is the best solution? If you think you have a better solution, please consider that 1) I do not want to use Javascript for this, and 2) I do not want all the items to be on one page. Thank you.
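
    One concrete shape for the noindex idea is to pick one sort order as canonical and emit the robots meta tag only on the others. A sketch, where CANONICAL_SORT is a hypothetical choice; whether noindexed pages keep passing link juice is exactly the open question above:

      CANONICAL_SORT = "date"  # hypothetical pick for the canonical ordering

      def robots_meta(sort_order):
          # Non-canonical sort orders stay crawlable but ask not to be indexed.
          if sort_order != CANONICAL_SORT:
              return '<meta name="robots" content="noindex, follow">'
          return ""

      print(robots_meta("popularity"))  # emits the noindex tag
      print(robots_meta("date"))        # canonical order: no tag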

    Read the article

  • Duplicate content issue after URL-change with 301-redirects

    - by David
    We have the following problem: we changed all URLs on our site from oldURL.html to newURL.html and set up 301 redirects (ca. 600 URLs). Google re-crawled our site and indexed all the new URLs (newURL.html), but didn't crawl the old URLs (oldURL.html) again, as there were no internal links pointing at them anymore after the URL change. This resulted in massive ranking drops, because (i) Google thought oldURL.html has exactly the same content as newURL.html, causing duplicate content issues, and (ii) Google did not transfer the juice from oldURL to newURL, because the 301 redirect was never noticed. Now we have reset all internal links to the old URLs again, which then redirect to the new URLs, in the hope that Google will re-crawl the pages once there are internal links pointing at them. This is partially happening, but at a really low speed, so it would take multiple months for all redirects to be noticed. I guess that is because Google thinks: "Aah, I already know oldURL.html, so no need to re-crawl it." Possible solutions we thought of are: 1) submitting as many of the old URLs to the index as possible via Webmaster Tools, to manually trigger a crawl (we are doing that already); 2) submitting a sitemap with all the old URLs, though we are not sure whether this is a good idea, because Google does not seem to like 301 redirects in a sitemap. Neither solution is perfect, and we cannot wait for three months just to regain our old rankings. What are your ideas? Best, David
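
    For the second idea, generating the sitemap itself is trivial; whether submitting redirecting URLs in it helps is the open question. A sketch, with OLD_URLS as a placeholder list:

      OLD_URLS = ["https://example.com/oldURL.html"]  # placeholder; real list has ~600 entries

      def sitemap(urls):
          # Emit a minimal sitemaps.org-format XML file listing the old URLs,
          # so the crawler revisits them and sees the 301s.
          items = "\n".join(f"  <url><loc>{u}</loc></url>" for u in urls)
          return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                  '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
                  + items + "\n</urlset>")

      print(sitemap(OLD_URLS))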

    Read the article

  • Remote Access to MSSQL Database From 1&1 Hosting [duplicate]

    - by Zerkey
    This question already has an answer here: How to find web hosting that meets my requirements? 5 answers I just paid $6/month for shared Windows hosting through 1&1 Hosting. I was having trouble connecting to my database from home, so I sent an email to support. I received the following response: "As we checked your concern here in our end, please be advised that due to limitation of Shared Hosting services, there is no option to connect the database to your SQL Management Studio or through Visual Studio. It is only possible for Dedicated Server package. You may only access the database using MyLittleAdmin at the Control Panel." A dedicated server is like $200 per month! What is the point of having database access only through a web console? I feel I am missing something here, or maybe the support agent is. Is there a way to access my MS SQL database on their servers through Visual Studio or SQL Management Studio from my machine? If not, is there a web host who allows this for less than $200 a month? EDIT: Marked as duplicate... I'm not asking for a list of web hosts; I'm asking how to remotely connect to my MSSQL database through 1&1's services.

    Read the article
