Search Results

Search found 4461 results on 179 pages for 'duplicate removal'.

Page 3 of 179

  • Does Google consider my blog page a duplicate if the page URL and the same URL with ‘showComment’ are cached separately?

    - by John Sanjay
    While checking the indexed pages of my blog, I found that Google has cached one of my pages, http://example.com/page.html, as well as http://example.com/page.html?showComment=1372054729698. Both URLs show up when I search site:http://example.com. I'm worried because the two pages have exactly the same content. Does Google consider them duplicates? If so, what can I do now? Is it really a big problem for my blog?

    Read the article

  • Author Bio on all pages - Is it duplicate content?

    - by Rana Prathap
    On a website with user-generated content, I provide an author bio under every article on the site. The bio is the same under every article the same author wrote. For some authors the bio is no longer than a couple of sentences, but for some descriptive writers it is a good 100 words. Those 100 words get repeated on almost 15 pages, some of them without substantial original content (such as haikus). Will this lead to duplicate content?

    Read the article

  • Does Google penalize pseudo-duplicate pages for different locations?

    - by mikewowb
    My company's home page was not specifically optimized for any location. Now I am planning to optimize it for Boston and create ten or so other landing pages for the other locations we serve. If we made these new pages by copying the original Boston one and changing the location's name (s/Boston/Montreal/), would Google consider them duplicate pages and penalize us? What is the best practice for this?

    Read the article

  • Duplicate ping packets in a Linux VirtualBox machine

    - by Darkmage
    I can't seem to figure out what is going on here. The Linux machine I am using runs as a VM on a Windows 7 host, with VirtualBox running as a service. If I ping the Win7 host I get normal results:

      root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.100
      PING 192.168.1.100 (192.168.1.100) 10(38) bytes of data.
      18 bytes from 192.168.1.100: icmp_seq=1 ttl=128 time=1.78 ms
      18 bytes from 192.168.1.100: icmp_seq=2 ttl=128 time=1.68 ms

    Pinging localhost is also fine:

      root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 localhost
      PING localhost (127.0.0.1) 10(38) bytes of data.
      18 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.255 ms
      18 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.221 ms

    But if I ping the gateway I get DUP packets:

      root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.1
      PING 192.168.1.1 (192.168.1.1) 10(38) bytes of data.
      18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.27 ms
      18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.46 ms (DUP!)
      18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.1 ms
      18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.4 ms (DUP!)

    If I ping another machine on the same LAN I still get duplicates, and pinging remote hosts also gives (DUP!) results:

      root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 www.vg.no
      PING www.vg.no (195.88.55.16) 10(38) bytes of data.
      18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.0 ms
      18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.3 ms (DUP!)
      18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.3 ms
      18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.6 ms (DUP!)

    Read the article

  • Highlighting duplicate column-pairs and counting the rows in Excel

    - by pleasehelpme
    Given the data below, the column-pairs with the same values for at least 4 consecutive rows should be highlighted (image for better visualization: http://i49.tinypic.com/2jeshtt.jpg):

      2 2
      3 4
      3 4
      3 4
      3 4
      2 3
      1 2
      2 2
      3 3
      3 3
      3 3
      3 3
      2 3
      2 3
      2 3
      2 3
      2 2
      3 4
      3 4
      3 4
      3 4
      3 4

    The output should be the same data with the column-pair values that are equal for at least 4 consecutive rows highlighted (image for better visualization: http://i48.tinypic.com/i2lzc8.jpg). Then I need to know the number of instances of N consecutive equal column-pairs. For the data above, N=4 should give 3 and N=5 should give 1, where N is the number of rows for which the column-pair is consecutively equal.
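
    A minimal sketch of the counting rule in plain Python (not an Excel formula; the rows are typed in by hand from the table above), in case it helps to pin down what counts as an instance:

      from itertools import groupby

      # The 22 (col_A, col_B) pairs from the question.
      rows = [(2, 2), (3, 4), (3, 4), (3, 4), (3, 4), (2, 3), (1, 2), (2, 2),
              (3, 3), (3, 3), (3, 3), (3, 3), (2, 3), (2, 3), (2, 3), (2, 3),
              (2, 2), (3, 4), (3, 4), (3, 4), (3, 4), (3, 4)]

      # Lengths of the maximal runs of consecutive identical pairs.
      run_lengths = [len(list(group)) for _, group in groupby(rows)]

      def count_runs(n):
          """Number of maximal runs that are exactly n rows long."""
          return sum(1 for length in run_lengths if length == n)

      print(count_runs(4))  # 3, matching the example above
      print(count_runs(5))  # 1

    In Excel itself the same rule would typically be expressed with conditional formatting, but the plain-Python version makes the "exactly N consecutive rows" interpretation explicit.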

    Read the article

  • Is a backlink with a duplicate description and title from a news site bad for SEO?

    - by Dejan Pelzel
    I have a blog with over a thousand posts. I have posted some of them to a news aggregator site with the same preview photo and description that I used on my own site, plus the link back to the post. Since the site is mainly videos and images, the description is usually a complete match of 4-6 lines of text. It now looks like I have been affected by Panda, and since I am not doing anything shady, I suspect it might be due to duplicate content. For example, when I search for the title of one of my posts, sometimes my site is not even returned, but the news aggregator site is. Could this be the problem with Panda?

    Read the article

  • How to handle possible duplicate content across multiple sites?

    - by ElHaix
    Let's say I have two sites that cover the same vertical/topic: one in the USA and one in Canada. Both sites have local content, which is obviously unique to each location, but they will share common news or blog pages. How do I avoid getting hit with duplicate content on both sites for those news/blog pages? If the content is exactly the same, I'm guessing I have to pick which site's version to noindex,nofollow. Is that correct, and if so, is that all I have to add to the links pointing to those pages and to the pages' meta tags?

    Read the article

  • Displaying the first few pages of a PDF on a page = duplicate content?

    - by Ace
    I am embedding Scribd PDFs on my website. These are exam-paper PDFs which are also available on other websites. Since the Scribd viewer is an embed/iframe, I think Google considers my page empty, with no content; does Google see iframe content at all? So I decided to display the first pages of each PDF as text on the page for Google. Then, for user experience, I hide the text and replace it with the Scribd embed code using JavaScript. I have two worries about this method. First, I am displaying the first pages of a PDF that may be hosted on other websites; will this be considered duplicate content? Second, I am hiding the content and replacing it with the Scribd embed via JavaScript; is that considered bad by Google?

    Read the article

  • How can I create multiple mini-sites with similar/duplicate content without hurting my search engine rank?

    - by ekpyrotic
    Essential background: I run a small company that lets members of the public post handwritten letters to their local politician (UK-based). Every week a number of early stage bills (called Early Day Motions) are submitted for debate in the House of Commons, and supporters of the motion will contact their local Members of Parliament, asking them to sign the motion. The crux: I want to target these EDMs with customised mini-sites, so when people search "EDM xxx", they find my customised mini-site, specifically targeting that EDM (i.e., "Send a handwritten letter to your MP asking them to sign EDM xxx"). At the moment, all these mini-sites (and my homepage) have duplicate content with only the relevant EDM name, number, and background image changed. (For example, http://mailmymp.com and http://mailmymp.com/edm/teaching-life-saving-skills-at-school-edm-550.php). The question: Firstly, will this hurt my potential search engine ranking? And, if so, what's the best way to target these political campaigns in an efficient manner without hurting my SEO prospects?

    Read the article

  • I have domain.com and domain.org pointing to the same site; should I use redirects to avoid duplicate content?

    - by bunzip
    I have both the .com and the .org for a domain name, and using Apache I point them to the same site content. I think this might be causing problems with search engines because of duplicate content. I want the .org to be the primary website. How do others handle this situation? Should I use 301 redirects to point all .com requests to the .org? Or should I just use link rel="canonical" on each page to point to the .org?

    Read the article

  • MySQL - SELECT FROM - don't show duplicate words - maybe ON DUPLICATE KEY?

    - by ali
    Hi, how can I use ON DUPLICATE KEY in this code to remove duplicate words? Or is there a better method that you know of? Thank you! This is my code:

      function sm_list_recent_searches($before = '', $after = '', $count = 20) {
          // List the most recent successful searches.
          global $wpdb, $table_prefix;
          $count = intval($count);
          $results = $wpdb->get_results(
              "SELECT `terms`, `datetime` FROM `{$table_prefix}searchmeter_recent`
               WHERE 3 < `hits` AND CHAR_LENGTH(`terms`) > 4
               ORDER BY `datetime` DESC LIMIT $count");
          if (count($results)) {
              foreach ($results as $result) {
                  echo '<a href="' . get_settings('home') . '/search/' . urlencode($result->terms) . '">'
                      . htmlspecialchars($result->terms) . '</a>' . ", ";
              }
          }
      }
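
    ON DUPLICATE KEY UPDATE only applies to INSERT statements, so it cannot de-duplicate a SELECT like this one. A hedged sketch of one alternative (an assumption about the intent, not tested against the actual table): collapse the duplicate terms in the query itself and keep each term's most recent datetime. The SQL is held in a Python string purely for illustration; in the plugin it would replace the query passed to $wpdb->get_results().

      # Placeholders {prefix} and {count} stand in for $table_prefix and $count.
      DEDUPED_RECENT_SEARCHES = """
      SELECT `terms`, MAX(`datetime`) AS `datetime`
      FROM `{prefix}searchmeter_recent`
      WHERE `hits` > 3 AND CHAR_LENGTH(`terms`) > 4
      GROUP BY `terms`
      ORDER BY `datetime` DESC
      LIMIT {count}
      """

      # Example of filling in the placeholders (values are illustrative only).
      print(DEDUPED_RECENT_SEARCHES.format(prefix="wp_", count=20))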

    Read the article

  • Removing “duplicate objects” with same attributes using Array.map

    - by keruilin
    As you can see in the current code below, I am finding duplicates based on the attribute recordable_id. What I need to do is find duplicates based on four matching attributes: user_id, recordable_type, hero_type, and recordable_id. How must I modify the code?

      heroes = User.heroes
      for hero in heroes
        hero_statuses = hero.hero_statuses
        seen = []
        hero_statuses.sort! {|a,b| a.created_at <=> b.created_at } # sort by created_at
        hero_statuses.each do |hero_status|
          if seen.map(&:recordable_id).include? hero_status.recordable_id # check if the id has been seen already
            hero_status.revoke
          else
            seen << hero_status # if not, add it to the seen array
          end
        end
      end
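
    The same idea sketched in Python rather than Ruby (the class and the revoke() call are stand-ins for the question's ActiveRecord objects): track the four-attribute combinations already seen and treat a record as a duplicate when its combination repeats.

      from dataclasses import dataclass
      from datetime import datetime

      @dataclass
      class HeroStatus:
          user_id: int
          recordable_type: str
          hero_type: str
          recordable_id: int
          created_at: datetime

          def revoke(self):
              print("revoking duplicate", self)

      def revoke_duplicates(hero_statuses):
          """Revoke every status whose four key attributes match an earlier one."""
          seen = set()
          for status in sorted(hero_statuses, key=lambda s: s.created_at):
              key = (status.user_id, status.recordable_type,
                     status.hero_type, status.recordable_id)
              if key in seen:   # all four attributes already seen -> duplicate
                  status.revoke()
              else:
                  seen.add(key)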

    Read the article

  • Battery management doesn't recognize removal of power supply

    - by Jason
    I have a Lenovo Y460p running Ubuntu 12.04 (64-bit). The battery charges normally, but after unplugging the power supply the correct battery indicator only shows very briefly; after about one second it reverts to the charging indicator. If the power supply is connected, the power statistics show "Supply Yes" and "Online Yes"; if it is not connected, they show "Supply Yes" and "Online No". My problem is almost exactly like the one in this post: Ubuntu 11.10 power management does not recognize removal of power supply. The only difference is that my system does not dual-boot with Windows; it is Ubuntu only. The computer in the other post is a Lenovo as well; I'm not sure if that has anything to do with it. Any help would be greatly appreciated. Thanks.

    Read the article

  • Removal of libsound2 file causes graphics loss

    - by Sajid Ahmad
    I was trying to install Skype on Ubuntu 12.10 desktop, but it was giving an error related to the libsound2:i386 package. To get past this I removed libsound2, thinking I would install it again later, but that removed all the graphics from my system: after the removal the system started reporting that it is running in low-graphics mode. I tried to install libsound2 again but couldn't. I then upgraded my Ubuntu release with do-release-upgrade, thinking it would pull in the missing package, but there are still no graphics on the system. I am using a Dell Inspiron 15. Please help me get the system's graphics back.

    Read the article

  • Flash removal and installation issue

    - by Theo
    I'm having an issue trying to uninstall and/or upgrade the Adobe Flash Player plug-in. Here's what I've run in the terminal:

      $ sudo apt-get install -f
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following packages were automatically installed and are no longer required:
        linux-headers-3.0.0-13-generic libgladeui-1-11 linux-headers-3.0.0-19-generic
        linux-headers-3.0.0-13 linux-headers-3.0.0-19 erlang-base
      Use 'apt-get autoremove' to remove them.
      The following packages will be REMOVED:
        adobe-flashplugin
      0 upgraded, 0 newly installed, 1 to remove and 2 not upgraded.
      1 not fully installed or removed.
      After this operation, 10.2 MB disk space will be freed.
      Do you want to continue [Y/n]? y
      (Reading database ... 375840 files and directories currently installed.)
      Removing adobe-flashplugin ...
      update-alternatives: error: no alternatives for iceape-flashplugin.
      update-alternatives: error: no alternatives for iceape-flashplugin.
      dpkg: error processing adobe-flashplugin (--remove):
       subprocess installed pre-removal script returned error exit status 2
      No apport report written because MaxReports is reached already
      postinst called with argument `abort-remove'
      dpkg: error while cleaning up:
       subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
       adobe-flashplugin
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    Please advise if you can. Let me know if there is any other info you need.

    Read the article

  • Crash due to removal of Elements like CCSprite from NSMutableArray

    - by mayuur
    So, here's how it goes. I am currently working on a Cocos2d game which consists of many obstacles. One obstacle gets added to the screen every 10 seconds like this:

      ObstacleSprite* newObstacle = [ObstacleSprite spriteWithFile:@"Obstacle.png" rect:CGRectMake(0, 0, 20, 20)];
      newObstacle.position = ccp(mainPlayer1.position.x, 10);
      [self addChild:newObstacle];
      [self.arrayForObstacles addObject:newObstacle];

    I insert these obstacles into arrayForObstacles because I also want to keep checking that the obstacles and the main player don't collide. I check that with this method:

      - (void) checkCollisionWithObstacle
      {
          if (mainPlayer1.playerActive) {
              for (int i = 0; i < [self.arrayForObstacles count]; i++) {
                  ObstacleSprite* newObstacle = [self.arrayForObstacles objectAtIndex:i];
                  if (newObstacle != nil) {
                      if (CGRectIntersectsRect([mainPlayer1 boundingBox], [newObstacle boundingBox])) {
                          mainPlayer1.livesLeft--;
                      }
                  }
              }
          }
      }

    THE ISSUE: when the score reaches a certain value, one of the obstacles gets deleted. Obstacles are removed in first-in, first-out (FIFO) order. To delete obstacles, I wrote the following method:

      - (void) keepUpdatingScore
      {
          // update new score
          mainPlayer1.score += 10;

          // remove an obstacle when the score passes 5000
          if (mainPlayer1.score > 5000 && mainPlayer1.score > 0) {
              mainPlayer1.playerActive = NO;
              if ([self.arrayForObstacles count] > 0) {
                  CCLOG(@"count is %d", [self.arrayForObstacles count]);
                  ObstacleSprite* newObstacle = [self.arrayForObstacles objectAtIndex:0];
                  [self.arrayForObstacles removeObjectAtIndex:0];
                  [self removeChild:newObstacle cleanup:YES];
                  CCLOG(@"count is %d", [self.arrayForObstacles count]);
              }
          }
      }

    It crashes when the score crosses the 5000 mark! UPDATE: the crash happens when execution next enters checkCollisionWithObstacle.

    Read the article

  • WordPress main website and mobile website duplicate content

    - by ObjectiveJ
    A client has asked for his WordPress website to be turned into a mobile website as well. I have never attempted this and know little about SEO, and the concern has come up that this may cause duplicate content issues with Google and that both sites could therefore drop in the rankings. I was looking at turning the website into a mobile site via one of the available WordPress mobile plugins. My question is whether duplicate content will be an issue. Has anyone tried this? After doing some reading I think it may be possible to tell Google not to index the mobile website, although as I understand it both versions would be served from the same set of files, so I am unsure whether telling Google not to index one of them would drop the other one as well. Can anyone with WordPress and SEO knowledge clear this up for me?

    Read the article

  • Delphi 7 compile error - “Duplicate resource(s)” between .res and .dfm

    - by Robo
    I got a very similar error to the one described here: http://stackoverflow.com/questions/97800/how-can-i-fix-this-delphi-7-compile-error-duplicate-resources However, the error I got is this:

      [Error] WARNING. Duplicate resource(s):
      [Error] Type 10 (RCDATA), ID TFMMAINTQUOTE:
      [Error] File P:\[PATH SNIPPED]\Manufacturing.RES resource kept; file FMaintQuote.DFM resource discarded.

    Manufacturing.res is the default resource file (the application is called Manufacturing.exe), and FMaintQuote is one of the forms. .dfm files are plain text files, so I'm not sure what resource is being duplicated, or how to find and fix it. If I compile the project again it works OK, but the exe's icon is different from the one I set in Project Options using the "Load Icon" button. The icon on the app is some sort of bell image that I don't recognize.

    Read the article

  • MySQL: updating a row and deleting the original in case it becomes a duplicate

    - by Silvio Donnini
    I have a simple table made up of two columns: col_A and col_B. The primary key is defined over both. I need to update some rows and assign col_A values that may generate duplicates, for example:

      UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70

    This statement sometimes yields a duplicate-key error. I don't want to simply ignore the error with UPDATE IGNORE, because then the rows that generate the error would remain unchanged; instead, I want them to be deleted when they would conflict with another row after being updated. I'd like to write something like:

      UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70 ON DUPLICATE KEY REPLACE

    which unfortunately isn't legal SQL, so I need help finding another way around it. I'm using PHP and could consider a hybrid solution (i.e. part query, part PHP code), but keep in mind that I have to perform this updating operation many millions of times. Thanks for your attention, Silvio. Reminder: UPDATE's syntax has problems with joins against the same table that is being updated.
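
    A hedged sketch of one workaround, assuming MySQL and the (col_A, col_B) primary key described above: run UPDATE IGNORE so the rows that would collide are skipped rather than erroring, then delete exactly those skipped rows. The statements are held in Python strings purely for illustration; they would be run back-to-back inside a transaction through whatever MySQL driver the surrounding PHP code uses.

      # Step 1: update every row that can be updated; rows whose new key would
      # duplicate an existing one are skipped (the error becomes a warning).
      UPDATE_OR_SKIP = "UPDATE IGNORE `table` SET `col_A` = 66 WHERE `col_B` = 70"

      # Step 2: the skipped rows still have col_B = 70 and col_A <> 66, which is
      # exactly "they would have conflicted", so drop them.
      DELETE_LEFTOVERS = "DELETE FROM `table` WHERE `col_B` = 70 AND `col_A` <> 66"

      for statement in (UPDATE_OR_SKIP, DELETE_LEFTOVERS):
          print(statement)  # stand-in for cursor.execute(statement)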

    Read the article

  • Duplicate one element from a PHP array

    - by robertdd
    How can I duplicate one element of an array? For example, I have this array:

      Array
      (
          [LRDEPN] => 0008.jpg
          [OABCFT] => 0030.jpg
          [SIFCFJ] => 0011.jpg
          [KEMOMD] => 0022.jpg
          [DHORLN] => 0026.jpg
          [AHFUFB] => 0029.jpg
      )

    If I want to duplicate 0011.jpg, how do I proceed? I want to get this:

      Array
      (
          [LRDEPN] => 0008.jpg
          [OABCFT] => 0030.jpg
          [SIFCFJ] => 0011.jpg
          [NEWKEY] => 0011.jpg
          [KEMOMD] => 0022.jpg
          [DHORLN] => 0026.jpg
          [AHFUFB] => 0029.jpg
      )
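
    A sketch of the idea in Python rather than PHP (a PHP associative array is treated as an ordered dict here; the key NEWKEY comes from the desired output above): copy the chosen value under a new key, inserted right after the original entry.

      def duplicate_entry(items, key_to_copy, new_key):
          """Return a new mapping with items[key_to_copy] repeated under new_key."""
          result = {}
          for key, value in items.items():
              result[key] = value
              if key == key_to_copy:
                  result[new_key] = value   # the duplicate lands right after the original
          return result

      images = {"LRDEPN": "0008.jpg", "OABCFT": "0030.jpg", "SIFCFJ": "0011.jpg",
                "KEMOMD": "0022.jpg", "DHORLN": "0026.jpg", "AHFUFB": "0029.jpg"}

      print(duplicate_entry(images, "SIFCFJ", "NEWKEY"))
      # The SIFCFJ entry is followed by NEWKEY => 0011.jpg, as in the question.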

    Read the article

  • Yet another ON DUPLICATE KEY UPDATE query

    - by Andy Gee
    I've been reading all the questions on here, but I still don't get it. I have two identical tables of considerable size, and I would like to update table packages_sorted with data from packages_sorted_temp without destroying the existing data in packages_sorted. Table packages_sorted_temp contains data in only two columns, db_id and quality_rank; table packages_sorted contains data in all 35 columns, but quality_rank is 0. The primary key on each table is db_id, and this is what I want to trigger the ON DUPLICATE KEY UPDATE with. In essence, how do I merge these two tables and change packages_sorted.quality_rank from 0 to the quality_rank stored in packages_sorted_temp under the same primary key? Here's what's not working:

      INSERT INTO `packages_sorted` ( `db_id` , `quality_rank` )
      SELECT `db_id` , `quality_rank`
      FROM `packages_sorted_temp`
      ON DUPLICATE KEY UPDATE `packages_sorted`.`db_id` = `packages_sorted`.`db_id`
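
    The UPDATE clause above only sets db_id to itself, which is a no-op, so quality_rank never changes. A hedged sketch of the adjustment, assuming MySQL (the SQL sits in a Python string purely for illustration, and merge_quality_ranks takes a hypothetical DB-API cursor that is not part of the question):

      # For db_ids that already exist in packages_sorted, only quality_rank is
      # overwritten with the value the INSERT would have used (VALUES(...)).
      # Note: db_ids that exist only in the temp table would be inserted as new,
      # mostly empty rows; an UPDATE ... JOIN would avoid that if it matters.
      MERGE_SQL = """
      INSERT INTO `packages_sorted` (`db_id`, `quality_rank`)
      SELECT `db_id`, `quality_rank`
      FROM `packages_sorted_temp`
      ON DUPLICATE KEY UPDATE `quality_rank` = VALUES(`quality_rank`)
      """

      def merge_quality_ranks(cursor):
          """Run the merge through a MySQL DB-API cursor (hypothetical)."""
          cursor.execute(MERGE_SQL)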

    Read the article

  • jQuery: Remove duplicate elements?

    - by Keith Donegan
    Say I have a list of links with duplicate values, as below:

      <a href="#">Book</a>
      <a href="#">Magazine</a>
      <a href="#">Book</a>
      <a href="#">Book</a>
      <a href="#">DVD</a>
      <a href="#">DVD</a>
      <a href="#">DVD</a>
      <a href="#">Book</a>

    How would I, using jQuery, remove the duplicates and be left with, for example, the following:

      <a href="#">Book</a>
      <a href="#">Magazine</a>
      <a href="#">DVD</a>

    Basically, I am looking for a way to remove any duplicate values found and show one of each link.
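
    The de-duplication rule itself, sketched in plain Python (the strings stand in for the link texts; removing the actual DOM nodes is left to jQuery): keep the first link for each distinct text and drop the rest.

      # dict.fromkeys preserves insertion order, so the first occurrence of each
      # text survives and later duplicates are dropped.
      links = ["Book", "Magazine", "Book", "Book", "DVD", "DVD", "DVD", "Book"]

      unique_links = list(dict.fromkeys(links))
      print(unique_links)  # ['Book', 'Magazine', 'DVD'] -- the expected result above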

    Read the article
