Search Results

Search found 886 results on 36 pages for 'duplicates'.


  • Remove Duplicates from JavaScript Array

    - by kramden88
    This seems like such a simple need, but I've spent an inordinate amount of time on it to no avail. I've looked at other questions on SO and haven't found what I need. I have a very simple JavaScript array, such as

        peoplenames = new Array("Mike", "Matt", "Nancy", "Adam", "Jenny", "Nancy", "Carl");

    that may or may not contain duplicates, and I need to remove the duplicates and put the unique values in a new array. That's it. I could post all the code I've tried, but I think it's pointless because none of it works. If anyone has done this and can help me out, I'd really appreciate it. JavaScript or jQuery solutions are both acceptable.
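
    A minimal sketch of one common approach (an editorial addition, not from the article): walk the array and copy each value into the result only if it is not already there.

        var peoplenames = ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Nancy", "Carl"];
        var unique = [];
        for (var i = 0; i < peoplenames.length; i++) {
            // indexOf returns -1 when the value has not been copied yet
            if (unique.indexOf(peoplenames[i]) === -1) {
                unique.push(peoplenames[i]);
            }
        }
        // unique is now ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]

    On modern engines the same result comes from Array.from(new Set(peoplenames)).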

    Read the article

  • assignment vs std::swap and merging and keeping duplicates in separate object

    - by rubenvb
    Say I have two std::set<std::string>s. The first one, old_options, needs to be merged with additional options contained in new_options. I can't just use std::merge (well, I do, but not only that) because I also check for doubles and warn the user accordingly. To this effect, I have:

        void merge_options( set<string> &old_options, const set<string> &new_options )
        {
            // find duplicates and create merged_options, a stringset containing the merged options
            // handle duplicates the way I want to
            // ...
            old_options = merged_options;
        }

    Is it better to use std::swap( merged_options, old_options ); or the assignment I have? Is there a better way to filter duplicates and return the merged set than consecutive calls to std::set_intersection and std::set_union to detect dupes and merge the sets? I know that's slower than a single traversal that does both at once, but these sets are small (performance is not critical) and I trust the Standard more than I trust myself.
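
    A hedged sketch of the single-traversal alternative (an editorial addition; the warning text is invented): std::set::insert already reports whether the element was new, so duplicates can be detected during the merge itself, and the swap-versus-assignment question disappears because old_options is modified in place.

        #include <iostream>
        #include <set>
        #include <string>

        void merge_options(std::set<std::string>& old_options,
                           const std::set<std::string>& new_options)
        {
            for (std::set<std::string>::const_iterator it = new_options.begin();
                 it != new_options.end(); ++it) {
                // insert() returns a pair; .second is false if *it was already present
                if (!old_options.insert(*it).second) {
                    std::cerr << "warning: duplicate option '" << *it << "'\n";
                }
            }
        }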

    Read the article

  • Finding mySQL duplicates, then merging data

    - by Michael Pasqualone
    I have a mySQL database with a tad under 2 million rows. The database is non-interactive, so efficiency isn't key. The (simplified) structure I have is:

        `id` int(11) NOT NULL auto_increment
        `category` varchar(64) NOT NULL
        `productListing` varchar(256) NOT NULL

    Now the problem I would like to solve is: I want to find duplicates on the productListing field and merge the data in the category field into a single result, deleting the duplicates. So given the following data:

        +----+-----------+----------------+
        | id | category  | productListing |
        +----+-----------+----------------+
        | 1  | Category1 | productGroup1  |
        | 2  | Category2 | productGroup1  |
        | 3  | Category3 | anotherGroup9  |
        +----+-----------+----------------+

    what I want to end up with is:

        +----+---------------------+----------------+
        | id | category            | productListing |
        +----+---------------------+----------------+
        | 1  | Category1,Category2 | productGroup1  |
        | 3  | Category3           | anotherGroup9  |
        +----+---------------------+----------------+

    What's the most efficient way to do this, either in a pure mySQL query or in PHP?
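
    A hedged sketch of one pure-MySQL route (an editorial addition; the table name products is an assumption, the columns are the question's): build a merged copy with GROUP_CONCAT, then swap it in for the original after inspecting it.

        -- Collapse rows that share a productListing, joining their categories.
        CREATE TABLE products_merged AS
        SELECT MIN(id) AS id,
               GROUP_CONCAT(category ORDER BY id) AS category,
               productListing
        FROM products
        GROUP BY productListing;

    GROUP_CONCAT's default separator is a comma, which matches the desired output; the merged table can then be renamed into the original's place.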

    Read the article

  • Squirrelmail receiving duplicate emails

    - by Austin
    A client of mine is experiencing issues with his email: whenever he receives email from a certain domain, it arrives in duplicate. Not only are the messages duplicated, but the duplicated items have a (+) sign next to them, which usually indicates an attachment. Could this be caused by a forwarding issue? Here are the headers:

        Return-Path: <[email protected]>
        Received: from bigcat.centralmasswebdesign.com (root@localhost) by tarbellconstruction.com
            (8.13.1/8.13.1) with ESMTP id o4OFnO23003379 for <[email protected]>;
            Mon, 24 May 2010 11:49:24 -0400
        X-ClientAddr: 72.249.26.200
        Received: from mf3.spamfiltering.com (mf3.spamfiltering.com [72.249.26.200]) by
            bigcat.centralmasswebdesign.com (8.13.1/8.13.1) with ESMTP id o4OFnOjF005520
            for <[email protected]>; Mon, 24 May 2010 11:49:24 -0400
        X-Envelope-From: [email protected]
        X-Envelope-To: [email protected]
        Received: From 67-132-16-226.dia.static.qwest.net (67.132.16.226) by mf3.spamfiltering.com
            (MAILFOUNDRY) id 6lzIAmdLEd+oFQAw for [email protected];
            Mon, 24 May 2010 15:49:23 -0000 (GMT)
        Received: from mail pickup service by WMA2-EXCH1.NELCO-USA.net with Microsoft SMTPSVC;
            Mon, 24 May 2010 11:49:18 -0400
        Content-Transfer-Encoding: 7bit
        Importance: normal
        Priority: normal
        X-MimeOLE: Produced By Microsoft MimeOLE V6.00.3790.4325
        Content-Class: urn:content-classes:message
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01CAFB58.AAB268D0"
        Subject: weekly activity report for week ending May 22, 2010
        Date: Mon, 24 May 2010 11:49:16 -0400
        Message-ID: <15BCC4D99E8CBF48A2FA37A318CFF5C801209CCC@wma2-exch1.NELCO-USA.net>
        X-MS-Has-Attach: yes
        X-MS-TNEF-Correlator:
        Thread-Topic: weekly activity report for week ending May 22, 2010
        thread-index: Acr7WKpdCelRCiocT1eBY2YN5Ma8DA==
        From: "Mike LeBlanc" <[email protected]>
        To: "Keith Berube" <[email protected]>, "Ken Tarbell" <[email protected]>
        X-OriginalArrivalTime: 24 May 2010 15:49:18.0361 (UTC) FILETIME=[AB546890:01CAFB58]

    Read the article

  • Finding duplicate values in a SQL table

    - by Alex
    It's easy to find duplicates with one field:

        SELECT name, COUNT(email)
        FROM users
        GROUP BY email
        HAVING COUNT(email) > 1

    So if we have a table

        ID   NAME   EMAIL
        1    John   [email protected]
        2    Sam    [email protected]
        3    Tom    [email protected]
        4    Bob    [email protected]
        5    Tom    [email protected]

    this query will give us John, Sam, Tom, Tom, because they all have the same e-mails. But what I want is to get duplicates with the same e-mails and names. I want to get Tom, Tom. I made a mistake and allowed duplicate name and e-mail values to be inserted. Now I need to remove/change the duplicates, but I need to find them first.
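
    A hedged sketch of the usual answer (an editorial addition, using the question's users table): group on both columns, so only rows that collide on name and e-mail together survive the HAVING filter.

        SELECT name, email, COUNT(*) AS occurrences
        FROM users
        GROUP BY name, email
        HAVING COUNT(*) > 1;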

    Read the article

  • How to keep only duplicates efficiently?

    - by Marc Eaddy
    Given an STL vector, I'd like an algorithm that outputs only the duplicates in sorted order, e.g.,

        INPUT : { 4, 4, 1, 2, 3, 2, 3 }
        OUTPUT: { 2, 3, 4 }

    The algorithm is trivial, but the goal is to make it as efficient as std::unique(). My naive implementation modifies the container in-place:

        void keep_duplicates(vector<int>* pv)
        {
            // Sort (in-place) so we can find duplicates in linear time
            sort(pv->begin(), pv->end());

            vector<int>::iterator it_start = pv->begin();
            while (it_start != pv->end())
            {
                size_t nKeep = 0;

                // Find the next different element
                vector<int>::iterator it_stop = it_start + 1;
                while (it_stop != pv->end() && *it_start == *it_stop)
                {
                    nKeep = 1; // This gets set redundantly
                    ++it_stop;
                }

                // If the element is a duplicate, keep only the first one (nKeep=1).
                // Otherwise, the element is not duplicated so erase it (nKeep=0).
                it_start = pv->erase(it_start + nKeep, it_stop);
            }
        }

    If you can make this more efficient, elegant, or general, please let me know. For example, use a custom sorting algorithm, or copy elements in the second loop to eliminate the erase() call.
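
    A hedged sketch of the copy-out variant the question hints at (an editorial addition): after sorting, a single pass emits each repeated value exactly once, with no erase() calls.

        #include <algorithm>
        #include <vector>

        std::vector<int> keep_duplicates(std::vector<int> v) // by value: we sort a scratch copy
        {
            std::sort(v.begin(), v.end());
            std::vector<int> out;
            for (size_t i = 1; i < v.size(); ++i) {
                // a value qualifies the first time it equals its predecessor
                if (v[i] == v[i - 1] && (out.empty() || out.back() != v[i])) {
                    out.push_back(v[i]);
                }
            }
            return out; // e.g. {4,4,1,2,3,2,3} -> {2,3,4}
        }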

    Read the article

  • SQL to get list of dates as well as days before and after without duplicates

    - by Nathan Koop
    I need to display a list of dates, which I have in a table:

        SELECT mydate AS MyDate, 1 AS DateType
        FROM myTable
        WHERE myTable.fkId = @MyFkId;

        Jan 1, 2010 - 1
        Jan 2, 2010 - 1
        Jan 10, 2010 - 1

    No problem. However, I now need to display the date before and the date after as well, with a different DateType:

        Dec 31, 2009 - 2
        Jan 1, 2010 - 1
        Jan 2, 2010 - 1
        Jan 3, 2010 - 2
        Jan 9, 2010 - 2
        Jan 10, 2010 - 1
        Jan 11, 2010 - 2

    I thought I could use a union:

        SELECT MyDate, DateType
        FROM (
            SELECT mydate - 1 AS MyDate, 2 AS DateType
            FROM myTable
            WHERE myTable.fkId = @MyFkId
            UNION
            SELECT mydate + 1 AS MyDate, 2 AS DateType
            FROM myTable
            WHERE myTable.fkId = @MyFkId
            UNION
            SELECT mydate AS MyDate, 1 AS DateType
            FROM myTable
            WHERE myTable.fkId = @MyFkId
        ) AS myCombinedDateTable

    This however includes duplicates of the original dates:

        Dec 31, 2009 - 2
        Jan 1, 2010 - 2
        Jan 1, 2010 - 1
        Jan 2, 2010 - 2
        Jan 2, 2010 - 1
        Jan 3, 2010 - 2
        Jan 9, 2010 - 2
        Jan 10, 2010 - 1
        Jan 11, 2010 - 2

    How can I best remove these duplicates? I am considering a temporary table, but am unsure if that is the best way to do it. It also seems this may cause performance issues, since I am running the same query three separate times. What would be the best way to handle this request?
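
    A hedged sketch of one way out (an editorial addition, using the question's table and parameter): group the combined rows by date and keep the lowest DateType, so an original date always wins over a generated neighbour.

        SELECT MyDate, MIN(DateType) AS DateType
        FROM (
            SELECT mydate AS MyDate, 1 AS DateType
            FROM myTable WHERE fkId = @MyFkId
            UNION ALL
            SELECT mydate - 1, 2 FROM myTable WHERE fkId = @MyFkId
            UNION ALL
            SELECT mydate + 1, 2 FROM myTable WHERE fkId = @MyFkId
        ) AS combined
        GROUP BY MyDate
        ORDER BY MyDate;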

    Read the article

  • How to remove duplicate entries from a mysql db?

    - by Yegor
    I have a table with some ids + titles. I want to make the title column unique, but it has over 600k records already, some of which are duplicates (sometimes several dozen times over). How do I remove all duplicates except one, so I can add a UNIQUE key to the title column afterwards?
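
    A hedged sketch of a common MySQL answer (an editorial addition; the table name my_table and the columns id and title are assumptions based on the description): delete every row whose title also appears under a lower id, then add the key.

        -- Keep the copy with the lowest id; delete the rest.
        DELETE t1
        FROM my_table t1
        INNER JOIN my_table t2
            ON t1.title = t2.title
           AND t1.id > t2.id;

        ALTER TABLE my_table ADD UNIQUE KEY (title);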

    Read the article

  • switch duplicates packets and forwards them along two routes

    - by sami
    There is a network including a router, two hosts, and a switch which connects the hosts to the router. I have a virtual machine on my system; its network adapter is set to act as a bridge, so the virtual machine and the real OS are my two hosts on different LANs. They use one network card and are connected to a switch. When one host sends a packet to the other, the switch duplicates the packet and forwards it to both the router and the other host. How can I solve this duplicate packet problem? Thanks.

    Read the article

  • Sending mails via Mutt and Gmail: Duplicates

    - by Chris
    I'm trying to set up mutt with Gmail for the first time. It seems to work pretty well; however, when I send a mail from Mutt it appears twice in Gmail's Sent folder. (I assume it's also sent twice; I'm trying to verify that.) My configuration (stripped of coloring):

        # A basic .muttrc for use with Gmail
        # Change the following six lines to match your Gmail account details
        set imap_user = "XX"
        set smtp_url = "[email protected]@smtp.gmail.com:587/"
        set from = "XX"
        set realname = "XX"

        # Change the following line to a different editor you prefer.
        set editor = "vim"

        # Basic config, you can leave this as is
        set folder = "imaps://imap.gmail.com:993"
        set spoolfile = "+INBOX"
        set imap_check_subscribed
        set hostname = gmail.com
        set mail_check = 120
        set timeout = 300
        set imap_keepalive = 300
        set postponed = "+[Gmail]/Drafts"
        set record = "+[Gmail]/Sent Mail"
        set header_cache=~/.mutt/cache/headers
        set message_cachedir=~/.mutt/cache/bodies
        set certificate_file=~/.mutt/certificates
        set move = no
        set include
        set sort = 'threads'
        set sort_aux = 'reverse-last-date-received'
        set auto_tag = yes
        hdr_order Date From To Cc
        auto_view text/html
        bind editor <Tab> complete-query
        bind editor ^T complete
        bind editor <space> noop

        # Gmail-style keyboard shortcuts
        macro index,pager y "<enter-command>unset trash\n <delete-message>" "Gmail archive message"
        macro index,pager d "<enter-command>set trash=\"imaps://imap.googlemail.com/[Gmail]/Bin\"\n <delete-message>" "Gmail delete message"
        macro index,pager gl "<change-folder>"
        macro index,pager gi "<change-folder>=INBOX<enter>" "Go to inbox"
        macro index,pager ga "<change-folder>=[Gmail]/All Mail<enter>" "Go to all mail"
        macro index,pager gs "<change-folder>=[Gmail]/Starred<enter>" "Go to starred messages"
        macro index,pager gd "<change-folder>=[Gmail]/Drafts<enter>" "Go to drafts"
        macro index,pager gt "<change-folder>=[Gmail]/Sent Mail<enter>" "Go to sent mail"

        # Don't prompt on exit
        set quit=yes

        ## =================
        # Color definitions
        ## =================
        set pgp_autosign
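
    An editorial note on one common cause (an assumption, not confirmed by the article): Gmail's SMTP server files its own copy of every outgoing message in Sent Mail, so the set record line above makes mutt upload a second copy. If that is what is happening here, the usual fix is:

        # Let Gmail keep the only copy of sent mail
        unset record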

    Read the article

  • Want to create an SQL function that removes table row duplicates [migrated]

    - by Hoser
    I'd be following the procedure outlined here (unless, of course, someone has a better way to do it), and I'm wondering if I could have some help being pointed in the right direction on how to start. Basically I need help first on HOW to create functions, and then general tips on making it adjustable for a varying number of columns, etc. This may be a very complicated task, as I have no previous experience writing SQL functions, so please let me know if this is a difficult task for an SQL newbie working with MS SQL 2005.
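
    A hedged starting point (an editorial addition; the table and column names are placeholders, not from the linked procedure). In SQL Server 2005, data-modifying logic has to live in a stored procedure rather than a function, and a ROW_NUMBER() CTE is the usual way to delete all but one copy of each duplicate:

        CREATE PROCEDURE dbo.RemoveDuplicateRows
        AS
        BEGIN
            WITH numbered AS (
                SELECT ROW_NUMBER() OVER (
                           PARTITION BY col1, col2   -- the columns that define a duplicate
                           ORDER BY id               -- which copy to keep
                       ) AS rn
                FROM dbo.MyTable
            )
            DELETE FROM numbered
            WHERE rn > 1;
        END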

    Read the article

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to get pictures from a folder – with subfolders which are updated automatically – together with their extensions. These files have to be copied into a folder where a PHP-based website will edit them (renaming them and creating an XML file) so they are downloadable and integrated into an XML feed. Because of the script's rename function, when I perform the copy again all the files are duplicated, since the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because I can't copy files with an external "history".

        #!/bin/bash
        find '/home/name/picture' -name '*.jpg' | while read FILE ; do
            rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ;
        done
        wget --spider 'http://myscript.php' ;
        #exit 0

    PS: As a little addition, I'd like to replace '.' with a space just after the *.jpeg copy. My PHP script has some problems identifying files with commas because of the extension. I'm thinking about a find command – like I did before – with a sed function? Is that a good idea?
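
    A hedged sketch of that rename step (an editorial addition; it uses bash parameter expansion rather than sed, and assumes the /var/www/media destination from the script above):

        # Replace every '.' in the base name with a space, keeping the .jpg extension.
        find /var/www/media -name '*.jpg' | while read -r f; do
            dir=$(dirname "$f")
            base=$(basename "$f" .jpg)
            new="$dir/${base//./ }.jpg"
            [ "$f" != "$new" ] && mv "$f" "$new"
        done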

    Read the article

  • rsync delete remote duplicates

    - by BlakBat
    I'm trying to delete remote duplicate files without transferring the non-existing files and without updating the existing files. If I specify both --existing and --ignore-existing (along with -av --remove-source-files), the operation is a no-op: nothing will be transferred, but nothing will be deleted either. The best I've got so far is to make a local copy of the destination, run rsync without --ignore-existing, and then rsync my local copy on top of the destination.

    Read the article

  • Windows 7 Mapped Network Drive Multiplying to Create Duplicates all the way to Z:

    - by bendiy
    A strange issue came in today from some users: at least two Windows 7 x64 boxes have duplicate mappings of a network drive. The drive is not mapped with a login script, but manually through "Map Network Drive". Everything has been fine for months, but all of a sudden Explorer looks like this:

        Files (\\fileServerPath) (S:)
        Files (\\fileServerPath) (T:)
        Files (\\fileServerPath) (U:)
        Files (\\fileServerPath) (V:)
        Files (\\otherServerPath) (W:)
        Files (\\fileServerPath) (X:)
        Files (\\fileServerPath) (Y:)
        Files (\\fileServerPath) (Z:)

    There are some other network drives mixed in there that did not duplicate. The drive is normally mapped to S:, but it decided to make its way to Z:. What is going on here? I've found this and will be trying it soon: http://social.technet.microsoft.com/Forums/en/w7itpronetworking/thread/b5647cc3-15d0-4776-bb00-a869bd8f930b

    Read the article

  • Case insensitive duplicates SQL

    - by hdx
    So I have a users table where user.username has many duplicates, like:

        username and Username and useRnAme
        john and John and jOhn

    That was a bug, and each of these groups of three records should have been only one. I'm trying to come up with a SQL query that lists all of these cases ordered by their creation date, so ideally the result should be something like this:

        username  jan01
        useRnAme  jan02
        Username  jan03
        john      feb01
        John      feb02
        jOhn      feb03

    Any suggestions will be much appreciated.
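
    A hedged sketch (an editorial addition; the created_at column is an assumption standing in for whatever holds the creation date): find the case-insensitive collisions first, then join back for the full rows.

        SELECT u.username, u.created_at
        FROM users u
        JOIN (
            SELECT LOWER(username) AS uname
            FROM users
            GROUP BY LOWER(username)
            HAVING COUNT(*) > 1
        ) dup ON LOWER(u.username) = dup.uname
        ORDER BY LOWER(u.username), u.created_at;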

    Read the article

  • Swapping columns in a file and removing duplicates

    - by LucaB
    Hi all, I have a file like this:

        term1 term2
        term3 term4
        term2 term1
        term5 term3
        .....
        .....

    What I need to do is remove duplicates regardless of the order of the terms; for example, term1 term2 and term2 term1 are duplicates to me. It is a really long file, so I'm not sure what would be faster. Does anyone have an idea on how to do this? awk, perhaps?
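
    A hedged awk sketch (an editorial addition; input.txt stands in for the real file name): normalize each pair so the lexically smaller term comes first, then print only the first occurrence of each normalized key.

        awk '{ key = ($1 < $2) ? $1 " " $2 : $2 " " $1
               if (!seen[key]++) print }' input.txt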

    Read the article

  • Best way to detect duplicates when using Spring Hibernate Template

    - by Dean Povey
    We have an application which needs to detect duplicates in certain fields on create. We are using Hibernate as our persistence layer with Spring's HibernateTemplate. My question is whether it is better to do an upfront lookup for the item before creating it, or to attempt the insert, catch the DataIntegrityViolationException, and then check whether it was caused by a duplicate entry.
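
    A hedged sketch of the catch-the-exception route (an editorial addition; Item, ItemDao, and DuplicateItemException are hypothetical names). Its advantage is that it leans on the database's unique constraint, so there is no race between a lookup and the insert:

        import org.springframework.dao.DataIntegrityViolationException;
        import org.springframework.orm.hibernate3.HibernateTemplate;

        public class ItemDao {
            private final HibernateTemplate hibernateTemplate;

            public ItemDao(HibernateTemplate hibernateTemplate) {
                this.hibernateTemplate = hibernateTemplate;
            }

            public void create(Item item) {
                try {
                    hibernateTemplate.save(item);
                } catch (DataIntegrityViolationException e) {
                    // Assumes a unique constraint on the duplicated fields; inspect
                    // e.getCause() if other integrity violations are possible.
                    // DuplicateItemException is a hypothetical unchecked exception.
                    throw new DuplicateItemException("item already exists", e);
                }
            }
        }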

    Read the article

  • Checking for duplicates in a vector

    - by xbonez
    I have to check a vector for duplicates. What is the best way to approach this? I take the first element and compare it against all other elements in the vector, then take the next element and do the same, and so on. Is this the best way to do it, or is there a more efficient way to check for dups?
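
    A hedged sketch of the usual improvement (an editorial addition): the pairwise scan described is O(n^2); sorting a copy and checking neighbours is O(n log n).

        #include <algorithm>
        #include <vector>

        bool has_duplicates(std::vector<int> v) // by value: we sort a scratch copy
        {
            std::sort(v.begin(), v.end());
            // after sorting, any duplicates sit next to each other
            return std::adjacent_find(v.begin(), v.end()) != v.end();
        }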

    Read the article

  • Problem with re.findall (duplicates)

    - by user559385
    Hello, I tried to fetch the source of the 4chan site and get links to threads. My problem is that my regexp isn't working. Source:

        import urllib2, re

        req = urllib2.Request('http://boards.4chan.org/wg/')
        resp = urllib2.urlopen(req)
        html = resp.read()

        print re.findall("res/[0-9]+", html)
        #print re.findall("^res/[0-9]+$", html)

    The problem is that print re.findall("res/[0-9]+", html) is giving duplicates, and I can't use print re.findall("^res/[0-9]+$", html). I have read the Python docs, but they didn't help.
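
    A hedged sketch (an editorial addition, continuing from the snippet above, in Python 2 to match the question): findall returns one entry per match, and each thread link appears several times in the page, hence the duplicates. Deduplicating while preserving first-seen order:

        seen = set()
        unique_links = []
        for link in re.findall("res/[0-9]+", html):
            if link not in seen:
                seen.add(link)
                unique_links.append(link)
        print unique_links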

    Read the article

  • Duplicate DNS Zones (Error 4515 in Event Log)

    - by Campo
    I am getting these two errors in the DNS event log (quoted at the end of the question). I have confirmed I do have duplicate zones, and I am wondering which ones to delete. The DomainDNSZone contains all of our DNS records, but it does not have the _msdcs zone; that is in the ForestDNSZone along with the duplicates that are not in use. Here is a picture of that. Three questions (I understand the advantages of having DNS in the ForestDNSZone):

    1. Why is DNS using the DomainDNSZone, and is that acceptable considering _msdcs is in the ForestDNSZone?

    2. If so, should I just delete DC=1.168.192.in-addr.arpa and DC=supernova.local from the ForestDNSZone, or should I try to make those the copies in use? What are the steps? I understand how to delete, that is simple, but if I must move zones, some info would be appreciated there.

    3. Just to confirm my understanding: I can delete the two duplicates in the ForestDNSZone and leave _msdcs.supernova.local, as that is required there. This should resolve the errors I see.

    Just FYI, when I look in those folders in the ForestDNSZone they have just 2 and 1 entries respectively, so they are obviously not in use compared to the others. I am pretty sure I understand the steps to complete this, but if you would like to provide that info, bonus points!

        Event Type: Warning
        Event Source: DNS
        Event Category: None
        Event ID: 4515
        Date: 1/4/2011
        Time: 2:14:18 PM
        User: N/A
        Computer: STANLEY
        Description: The zone 1.168.192.in-addr.arpa was previously loaded from the directory
            partition DomainDnsZones.supernova.local but another copy of the zone has been found
            in directory partition ForestDnsZones.supernova.local. The DNS Server will ignore
            this new copy of the zone. Please resolve this conflict as soon as possible. If an
            administrator has moved this zone from one directory partition to another this may be
            a harmless transient condition. In this case, no action is necessary. The deletion of
            the original copy of the zone should soon replicate to this server. If there are two
            copies of this zone in two different directory partitions but this is not a transient
            caused by a zone move operation then one of these copies should be deleted as soon as
            possible to resolve this conflict. To change the replication scope of an application
            directory partition containing DNS zones and for more details on storing DNS zones in
            the application directory partitions, please see Help and Support. For more
            information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: 89 25 00 00 %..

    and:

        Event Type: Warning
        Event Source: DNS
        Event Category: None
        Event ID: 4515
        Date: 1/4/2011
        Time: 2:14:18 PM
        User: N/A
        Computer: STANLEY
        Description: The zone supernova.local was previously loaded from the directory partition
            DomainDnsZones.supernova.local but another copy of the zone has been found in
            directory partition ForestDnsZones.supernova.local. The DNS Server will ignore this
            new copy of the zone. Please resolve this conflict as soon as possible. [The remainder
            of the description is identical to the first event.]
        Data: 0000: 89 25 00 00 %..

    Read the article

  • Removing Duplicate entries in grub2 Ubuntu 9.10

    - by Anders
    I have made a custom grub2 menu; however, both the default and the custom entries show together, so my grub looks like the list below (the bolded entries are my custom ones). How do I get rid of the duplicates?

        ubuntu,linux ...
        ubuntu,linux recovery
        memtest
        memtest
        windows7
        windows7
        ubuntu linux
        ubuntu linux recover

    I have tried apt-get remove and deleting old kernels. I have tried marking and removing older kernels. I am a bit lost. Thanks in advance! This is how I made my custom grub, by the way: I copied and pasted the grub.cfg menuentry code into the custom file and just renamed the titles so it would be perfectly clear for the user who doesn't want to know what version number it is.
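
    An editorial note on the usual cause (an assumption, not confirmed by the article): on Ubuntu 9.10, grub.cfg is regenerated from every executable script in /etc/grub.d, so pasting the stock entries into a custom script leaves 10_linux and 30_os-prober still generating the originals. One common fix is to disable the generators whose entries you have duplicated, then rebuild:

        # Stop the stock generators from emitting their (now duplicated) entries,
        # then regenerate /boot/grub/grub.cfg.
        sudo chmod -x /etc/grub.d/10_linux /etc/grub.d/30_os-prober
        sudo update-grub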

    Read the article
