Search Results

Search found 886 results on 36 pages for 'no duplicates'.

Page 13 of 36

  • Create a group indicator (SQL)

    - by user1723699
    I am looking to create a group indicator for a query using SQL (Oracle specifically). Basically, I am looking for duplicate entries for certain columns, and while I can find those, what I also want is some kind of indicator to say which rows the duplicates are from. Below is an example of what I am looking to do (looking for duplicates on Name, Zip, Phone). The rows with Name = aaa are all in the same group, the bb rows are not, and the c rows are. Is there even a way to do this? I was thinking something with OVER (PARTITION BY ...) but I can't think of a way to only increment for each group.

        +----------+---------+-----------+------------+-----------+-----------+
        | Name     | Zip     | Phone     | Amount     | Duplicate | Group     |
        +----------+---------+-----------+------------+-----------+-----------+
        | aaa      | 1234    | 5555555   | 500        | X         | 1         |
        | aaa      | 1234    | 5555555   | 285        | X         | 1         |
        | bb       | 545     | 6666666   | 358        |           | 2         |
        | bb       | 686     | 7777777   | 898        |           | 3         |
        | aaa      | 1234    | 5555555   | 550        | X         | 1         |
        | c        | 5555    | 8888888   | 234        | X         | 4         |
        | c        | 5555    | 8888888   | 999        | X         | 4         |
        | c        | 5555    | 8888888   | 230        | X         | 4         |
        +----------+---------+-----------+------------+-----------+-----------+
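    A minimal sketch of one possible approach, assuming an analytic DENSE_RANK is acceptable; the column names come from the example above, but the table name (contacts) and the alias grp (GROUP is a reserved word) are hypothetical. Note the group numbers follow the sort order of Name, Zip, Phone rather than the order the rows appear in:

        SELECT name,
               zip,
               phone,
               amount,
               CASE WHEN COUNT(*) OVER (PARTITION BY name, zip, phone) > 1
                    THEN 'X' END                             AS duplicate,
               DENSE_RANK() OVER (ORDER BY name, zip, phone) AS grp
        FROM   contacts;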

    Read the article

  • Populating a PHP array within a foreach loop

    - by patrick
    I am wanting to add each user into an array and check for duplicates before I do.

        $spotcount = 10;
        for ($topuser_count = 0; $topuser_count < $spotcount; $topuser_count++) // total spots
        {
            $spottop10 = $ids[$topuser_count];
            $top_10 = $gowalla->getSpotInfo($spottop10);
            $usercount = 0;
            $c = 0;
            $array = array();
            foreach ($top_10['top_10'] as $top10) // loop each spot
            {
                //$getuser = substr($top10['url'],7); // strip the url
                $getuser = ltrim($top10['url'], " users/");
                if ($usercount < 3) // loop only certain number of top users
                {
                    if (($getuser != $userurl) && (array_search($getuser, $array) !== true))
                    {
                        //echo " no duplicates! <br /><br />";
                        echo '<a href="http://gowalla.com'.$top10['url'].'"><img width="90" height="90" src="'.$top10['image_url'].'" title="'.$top10['first_name'].'" alt="Error" /></a>';
                        $array[$c++] = $getuser;
                    }
                    else
                    {
                        //echo "duplicate <br /><br />";
                    }
                }
                $usercount++;
            }
            print_r($array);
        }

    The previous code prints:

        Array ( [0] => 62151 [1] => 204501 [2] => 209368 )
        Array ( [0] => 62151 [1] => 33116 [2] => 122485 )
        Array ( [0] => 120728 [1] => 205247 [2] => 33116 )
        Array ( [0] => 150883 [1] => 248551 [2] => 248558 )
        Array ( [0] => 157580 [1] => 77490 [2] => 52046 )

    which is wrong. It does check for duplicates, but only against the contents of each foreach loop instead of the entire array. How is this possible if I am storing everything into $array?
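    A sketch of one likely fix, assuming the goal is a single de-duplicated list across all spots. Two things in the code above work against that: $array = array(); inside the outer for loop throws away the collected users on every pass, and array_search() returns the matching key (or false), never true, so the !== true test can never flag a duplicate. Keeping one array for the whole run and testing with in_array() addresses both (variables such as $gowalla, $ids, $spotcount and $userurl are taken from the question as-is):

        <?php
        $array = array();                          // one list shared across every spot
        for ($topuser_count = 0; $topuser_count < $spotcount; $topuser_count++) {
            $top_10 = $gowalla->getSpotInfo($ids[$topuser_count]);
            $usercount = 0;
            foreach ($top_10['top_10'] as $top10) {
                $getuser = ltrim($top10['url'], " users/");
                if ($usercount < 3 && $getuser != $userurl && !in_array($getuser, $array)) {
                    $array[] = $getuser;           // remember the user so later spots skip it
                }
                $usercount++;
            }
        }
        print_r($array);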

    Read the article

  • What is the difference between the add and offer methods in a queue?

    - by Finbarr
    Take the PriorityQueue for example: http://java.sun.com/j2se/1.5.0/docs/api/java/util/PriorityQueue.html#offer(E) According to the Collection API entry (http://java.sun.com/j2se/1.5.0/docs/api/java/util/Collection.html), the add method will often seek to ensure that an element exists within the Collection rather than adding duplicates. So my question is, what is the difference between the add and offer methods? Is it that the offer method will add duplicates regardless? (I doubt that it is, because if a Collection should only have distinct elements this would circumvent that.)
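    For what it's worth, the documented difference only becomes visible on capacity-restricted queues: Queue.add throws an IllegalStateException when the element cannot be inserted, whereas offer simply returns false. PriorityQueue is unbounded, so both behave identically there. A small sketch using ArrayBlockingQueue (a bounded Queue from java.util.concurrent) to make the contrast visible:

        import java.util.concurrent.ArrayBlockingQueue;

        public class AddVsOffer {
            public static void main(String[] args) {
                ArrayBlockingQueue<String> q = new ArrayBlockingQueue<String>(1); // capacity 1
                System.out.println(q.offer("a")); // true  - inserted
                System.out.println(q.offer("b")); // false - queue full, no exception
                q.add("c");                       // throws IllegalStateException: Queue full
            }
        }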

    Read the article

  • SQL - suppressing duplicate *adjacent* records

    - by Trevel
    I need to run a Select statement (DB2 SQL) that does not pull adjacent row duplicates based on a certain field. In specific, I am trying to find out when data changes, which is made difficult because it might change back to its original value. That is to say, I have a table that vaguely resembles:

        A, 5, Jan
        A, 12, Feb
        A, 12, Mar
        A, 12, Apr
        A, 9, May
        A, 9, Jun
        A, 5, Jul

    And I want to get the results:

        A, 5, Jan
        A, 12, Feb
        A, 9, May
        A, 5, Jul

    discarding adjacent duplicates but keeping the last row. The obvious

        Select Letter, Number, Min(Month) from Table group by Letter, Number

    does not work -- it doesn't include the last row.
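    A minimal sketch of one way to express "keep a row only where the value differs from the previous row", assuming a DB2 release with OLAP window functions (LAG) and hypothetical column names Letter, Number and Month, plus some column (month_seq here) that defines the row order:

        SELECT letter, number, month
        FROM (
            SELECT letter, number, month,
                   LAG(number) OVER (PARTITION BY letter ORDER BY month_seq) AS prev_number
            FROM   t
        ) changes
        WHERE prev_number IS NULL        -- first row for the letter
           OR prev_number <> number;     -- or the value just changed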

    Read the article

  • Consolidating values in a junction table

    - by senloe
    I have the following schema:

        Parcels      Segments     SegmentsParcels
        =========    ==========   =================
        ParcelID     SegmentID    ParcelID
        ...          Name         SegmentID
                     ...          id

    A user of the data wants to consolidate Segments.Names and gave me a list of current Segment.Names mapped to new Segment.Names (all of which currently exist). So now I have this list in a temporary table with the currentID and newID to map to. What I want to do is update the SegmentID in SegmentsParcels based on this map. I could use the statement:

        update SegmentParcels
        set segmentID = [newID]
        from newsegments
        where segmentID = currentid

    but this will create some duplicates, since I have a unique constraint on ParcelID and SegmentID in SegmentParcels. What is the best way to go about this? I considered removing the constraint and then dealing with removing the duplicates (which I did at one point and could probably do again), but I was hoping there was a simpler way.
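    A sketch of one way to keep the constraint in place, written in the same UPDATE ... FROM dialect as the statement above (SQL Server-style) and using the table/column names from the question: first delete the junction rows whose remapped SegmentID would collide with a row that already exists, then run the remap. This sketch does not cover the case where two old segments on the same parcel both map to the same new segment; that would need one extra de-duplication pass.

        -- 1) drop rows that would become duplicates after the remap
        delete sp
        from   SegmentsParcels as sp
        join   newsegments     as ns on ns.currentid = sp.SegmentID
        where  exists (select 1 from SegmentsParcels as keep
                       where keep.ParcelID  = sp.ParcelID
                         and keep.SegmentID = ns.[newID]);

        -- 2) remap what is left
        update sp
        set    sp.SegmentID = ns.[newID]
        from   SegmentsParcels as sp
        join   newsegments     as ns on ns.currentid = sp.SegmentID;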

    Read the article

  • Filtering subsets using Linq

    - by Nathan Matthews
    Hi All, Imagine I have a very long enumeration, too big to reasonably convert to a list. Imagine also that I want to remove duplicates from the list. Lastly, imagine that I know that only a small subset of the initial enumeration could possibly contain duplicates. The last point makes the problem practical. Basically I want to filter out the list based on some predicate and only call Distinct() on that subset, but also recombine with the enumeration where the predicate returned false. Can anyone think of a good idiomatic Linq way of doing this? I suppose the question boils down to the following: with Linq, how can you perform selective processing on a predicated enumeration and recombine the result stream with the rejected cases from the predicate?
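    A minimal sketch of the straightforward composition, assuming the predicate really does isolate the only items that can repeat and that the original ordering does not have to be preserved. Because Where is used twice, the source is enumerated twice and must be safely repeatable; DistinctWhere and mayBeDuplicate are made-up names:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class SelectiveDistinct
        {
            public static IEnumerable<T> DistinctWhere<T>(
                this IEnumerable<T> source, Func<T, bool> mayBeDuplicate)
            {
                // Items that can never repeat stream straight through; only the
                // suspect subset pays for Distinct's internal hash set.
                return source.Where(x => !mayBeDuplicate(x))
                             .Concat(source.Where(mayBeDuplicate).Distinct());
            }
        }

    If the enumeration can only be walked once, a single-pass alternative is an iterator that keeps a HashSet<T> solely for items matching the predicate and yields everything else unconditionally.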

    Read the article

  • Trim email list into domain list

    - by hanjaya
    The function below is part of a script that trims an email list from a file down to a domain list and removes duplicates.

        /* define a function that can accept a list of email addresses */
        function getUniqueDomains($list) {
            // iterate over list, split addresses and add domain part to another array
            $domains = array();
            foreach ($list as $l) {
                $arr = explode("@", $l);
                $domains[] = trim($arr[1]);
            }
            // remove duplicates and return
            return array_unique($domains);
        }

    What does $domains[] = trim($arr[1]); mean? Specifically the $arr[1]. What does [1] mean in this context? How come the variable $arr becomes an array variable?
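    A small illustration with a made-up address: explode() splits the string on "@" and returns an array of the pieces, so $arr becomes an array the moment that return value is assigned to it, and [1] picks the second element (indexes start at 0), i.e. the domain part:

        <?php
        $arr = explode("@", "someone@example.com");
        print_r($arr);            // Array ( [0] => someone [1] => example.com )
        echo trim($arr[1]);       // "example.com" - the part collected into $domains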

    Read the article

  • Equivalence Classes LISP

    - by orcik
    I need to write a program for equivalence classes that produces output like this:

        (equiv '((a b) (a c) (d e) (e f) (c g) (g h))) => ((a b c g h) (d e f))
        (equiv '((a b) (c d) (e f) (f g) (a e)))       => ((a b e f g) (c d))

    Basically, a set is a list in which the order doesn't matter, but elements don't appear more than once. The function should accept a list of pairs (elements which are related according to some equivalence relation), and return a set of equivalence classes without using iteration or assignment statements (e.g. do, set!, etc.). However, set utilities such as set-intersection, set-union and a function which eliminates duplicates in a list, and the built-in functions union, intersection, and remove-duplicates, are allowed. Thanks a lot!
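    A minimal sketch of one non-iterative approach in Common Lisp (assuming a fold via reduce is acceptable alongside the permitted set built-ins): walk the pairs, merging whichever classes each pair connects. Element and class order may differ from the sample output, which is fine for sets:

        (defun equiv (pairs)
          (reduce
           (lambda (classes pair)
             (let* ((a  (first pair))
                    (b  (second pair))
                    (ca (find a classes :test #'member))   ; existing class containing a, if any
                    (cb (find b classes :test #'member)))  ; existing class containing b, if any
               (cond ((and ca cb (eq ca cb)) classes)                      ; already together
                     ((and ca cb) (cons (union ca cb)                      ; merge two classes
                                        (remove cb (remove ca classes))))
                     (ca (cons (adjoin b ca) (remove ca classes)))         ; add b to a's class
                     (cb (cons (adjoin a cb) (remove cb classes)))         ; add a to b's class
                     (t  (cons (list a b) classes)))))                     ; start a new class
           pairs :initial-value '()))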

    Read the article

  • MySQL Query still executing after a day..?

    - by Matt Jarvis
    Hi - I'm trying to isolate duplicates in a 500MB database and have tried two ways to do it. One is creating a new table and grouping:

        CREATE TABLE test_table as
        SELECT * FROM items WHERE 1 GROUP BY title;

    But it's been running for an hour and in MySQL Admin it says the status is Locked. The other way I tried was to delete duplicates with this:

        DELETE bad_rows.*
        from items as bad_rows
        inner join (
            select post_title, MIN(id) as min_id
            from items
            group by title
            having count(*) > 1
        ) as good_rows on good_rows.post_title = bad_rows.post_title;

    ..and this has been running for 24 hours now, Admin telling me it's Sending data... Do you think either of these queries is actually still running? How can I find out if it's hung? (with Apple OS X 10.5.7)
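    Two general MySQL suggestions rather than a diagnosis (table and column names as in the question): SHOW FULL PROCESSLIST from a second connection shows each statement's state and how long it has been running, and KILL <id> aborts one that is clearly hopeless; also, both statements above have to rescan items per group if title is unindexed, so adding an index first usually shrinks this kind of duplicate hunt dramatically. The 100-character prefix below is an assumption for a long VARCHAR/TEXT title:

        -- from a second connection: is it still doing work?
        SHOW FULL PROCESSLIST;      -- check the Time and State columns
        -- KILL 1234;               -- abort by Id if needed

        -- make the grouped column cheap to scan
        ALTER TABLE items ADD INDEX idx_items_title (title(100));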

    Read the article

  • How can I remove a duplicate object from a MongoDB array?

    - by andrewrk
    My data looks like this:

        foo_list: [
          {
            id: '98aa4987-d812-4aba-ac20-92d1079f87b2',
            name: 'Foo 1',
            slug: 'foo-1'
          },
          {
            id: '98aa4987-d812-4aba-ac20-92d1079f87b2',
            name: 'Foo 1',
            slug: 'foo-1'
          },
          {
            id: '157569ec-abab-4bfb-b732-55e9c8f4a57d',
            name: 'Foo 3',
            slug: 'foo-3'
          }
        ]

    Where foo_list is a field in a model called Bar. Notice that the first and second objects in the array are complete duplicates. Aside from the obvious solution of switching to PostgreSQL, what MongoDB query can I run to remove duplicate entries from foo_list? Similar answers that do not quite cut it:

        http://stackoverflow.com/a/16907596/432
        http://stackoverflow.com/a/18804460/432

    Those answer the question if the array had bare strings in it. However, in my situation the array is filled with objects. I hope it is clear that I am not interested in a query; I want the duplicates to be gone from the database forever.
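    Because the array holds whole objects rather than bare strings, one workable approach is a one-off script that de-duplicates each array in code and writes it back. A minimal mongo shell sketch, assuming the collection is called bars and using JSON string equality as the definition of "same object":

        db.bars.find({ "foo_list.1": { $exists: true } }).forEach(function (doc) {
            var seen = {};
            var unique = doc.foo_list.filter(function (item) {
                var key = JSON.stringify(item);          // whole-object comparison
                if (seen[key]) return false;
                seen[key] = true;
                return true;
            });
            if (unique.length < doc.foo_list.length) {
                db.bars.update({ _id: doc._id }, { $set: { foo_list: unique } });
            }
        });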

    Read the article

  • Canonicalization of single, small pages like reviews or product categories [SEO]

    - by Valorized
    In general I pretty much like the idea of canonicalization, and in most cases Google explains possible procedures in a clear way. For example: if I have duplicates because of parameters (eg: &sort=desc), it's clear to use the canonical for the site, provided via the link element within the head-tag. However, I'm wondering how to handle "small - not to say thin-content - sites". What's my definition of a small site? An example: on one of my main sites, we use a directory-based url-structure:

        example.com/ (root)
        example.com/category-abc/
        example.com/category-abc/produkt-xy/

    Moreover, we provide one page that includes all products:

        example.com/all-categories/ (lists all products the same way as in the categories)

    In the case of reviews, we use a similar structure:

        example.com/reviews/product-xy/ (shows all reviews for one certain product)
        example.com/reviews/product-xy/abc-your-product-is-great/ (shows one certain review)
        example.com/reviews/ (shows all reviews for all products, latest first)

    Let's make it even more complicated: on every product page, the latest 2 reviews appear at the end of the page. So you see, a lot of potential duplicates. Q1: Should I create canonicals for a: example.com/category-abc/ to example.com/all-categories/; b: example.com/reviews/product-xy/abc-your-product-is-great/ to example.com/reviews/product-xy/ or to example.com/reviews/; or none of them? Q2: Can I link the collection of categories (all-categories/) and the collection of all reviews (reviews/ and reviews/product-xy/) to the single category and the single review, respectively? Example: example.com/reviews/ includes - let's say - 100 reviews. Can I somehow use a markup that tells search engines: "Hey, wait, you are now looking at a collection of 100 reviews - do not index this collection, you should rather prefer indexing every single review as a single page!"? In HTML it might be something like this (which - of course - does not work, it's only to show you what I mean):

        <div class="review" rel="canonical" href="http://example.com/reviews/product-xz/abc-your-product-is-great/">HERE GOES THE REVIEW</div>

    Reason: I don't think it is a great user experience if the user searches for "your product is great" and lands on example.com/reviews/ instead of example.com/reviews/product-xy/abc-your-product-is-great/. On the first page, he will have to search and might stop because of frustration. The second result, however, might lead to a conversion. The same applies to categories: if the user is searching for category Z, he might land on the all-categories page and have to scroll down to the (last) category to find what he searched for (Z). So what's best practice? What should I do? Thank you for your help!
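    For reference, the only canonical hint search engines act on is a link element in the head of the duplicate page pointing at the preferred URL; a per-element rel="canonical" like the div above is not part of the protocol. A minimal sketch (URLs taken from the example above), placed on any duplicate or parameterised variant of the single-review page:

        <!-- in the <head> of the duplicate page -->
        <link rel="canonical" href="http://example.com/reviews/product-xy/abc-your-product-is-great/" />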

    Read the article

  • How do I find and delete duplicate music tracks?

    - by John McKean Pruitt
    My issue is that for some reason I have duplicates of some music tracks; however, they are not named identically. For instance:

        Music/Prefuse 73/One Word Extinguisher/07. Detchibe.mp3
        Music/Prefuse 73/One Word Extinguisher/07 - Detchibe.mp3

    Notice they are duplicate songs, but the "07." versus "07 - " naming is tricking duplicate file finders that search based on file names. Any suggestions?
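    One content-based sketch, assuming a Unix-like system with GNU tools: compare files by checksum instead of name, either with fdupes or with a plain find/md5sum pipeline. Note this only catches bit-identical copies; the same song ripped twice at different bitrates will not match.

        # fdupes groups files by size and hash, so the names do not matter
        fdupes -r ~/Music            # list the duplicate sets
        fdupes -rdN ~/Music          # delete all but the first of each set - use with care

        # roughly the same by hand: group files sharing an MD5 hash
        find ~/Music -type f -name '*.mp3' -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate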

    Read the article

  • Algorithm to map an area [on hold]

    - by user37843
    I want to create a crawler that starts in a room and from that room moves North, East, West and South until there aren't any new rooms to visit. I don't want to have duplicates, and the output format per line should be something like this: current room, neighbour 1, neighbour 2 ... and in the end I want to apply a BFS algorithm to find the shortest path between 2 rooms. Can anyone offer me some suggestions on what to use? Thanks
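    A minimal sketch of the two pieces (the language and every name here are purely illustrative): an exploration loop that records each room's neighbours exactly once, and a breadth-first search over the resulting adjacency map for shortest paths:

        from collections import deque

        def map_area(start, neighbours):
            """neighbours(room) -> iterable of adjacent rooms (an assumed callback)."""
            graph, stack, seen = {}, [start], {start}
            while stack:
                room = stack.pop()
                graph[room] = list(neighbours(room))      # "current room, neighbour 1, ..." line
                for nxt in graph[room]:
                    if nxt not in seen:                   # 'seen' prevents duplicate visits
                        seen.add(nxt)
                        stack.append(nxt)
            return graph

        def shortest_path(graph, src, dst):
            queue, prev = deque([src]), {src: None}
            while queue:
                room = queue.popleft()
                if room == dst:
                    path = []
                    while room is not None:
                        path.append(room)
                        room = prev[room]
                    return path[::-1]
                for nxt in graph[room]:
                    if nxt not in prev:
                        prev[nxt] = room
                        queue.append(nxt)
            return None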

    Read the article

  • How to clean and add options to the Open With list of apps

    - by Luis Alvarado
    After installing several PPAs (Wine, PoL) and opening several files with other apps (like changing from Totem to VLC), I discovered that the Open With option has 2 problems: many items on the list are duplicated (as seen on the image for "A Wine Program"), and sometimes the app I want to open the file with is not shown there (for example, VirtualBox or VLC). So how can I edit this list to clean up the duplicates and add the missing apps?
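    A sketch of where to look, hedged because the exact file names depend on what the PPAs installed: the Open With list is built from .desktop files, and Wine/PlayOnLinux commonly drop duplicate entries in the per-user applications directory. Removing the stray files clears the duplicates; a missing app just needs its own .desktop file there.

        # duplicates in the Open With dialog usually come from stray entries here
        ls ~/.local/share/applications/
        rm ~/.local/share/applications/wine-extension-*.desktop   # example pattern - inspect first!

        # a missing app needs a .desktop file, e.g. ~/.local/share/applications/vlc.desktop with:
        #   [Desktop Entry]
        #   Type=Application
        #   Name=VLC
        #   Exec=vlc %U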

    Read the article

  • Argument list too long and copying to Samba Share

    - by Copy Run Start
    Ubuntu 12.04 LTS 64 bit. I'm trying to make a scheduled task copy from a directory with thousands of files to a Samba share (while skipping duplicates). I mapped my Samba share through the GUI. The command I tried:

        cp /home/security/Brick/* ~/.gvfs/"cam on atm-bak-01.local/Brick" -n

    I found this, but I don't know how to change the syntax to what I need:

        find -maxdepth 1 -name '*.prj' -exec mv -t ../prjshp {} +

    Any hints are greatly appreciated.
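    "Argument list too long" comes from the shell expanding the * into more arguments than the kernel allows on a single command line; find sidesteps that by handing the file names to cp in batches. A sketch adapted to the paths above (GNU cp options: -t names the target directory, -n skips files that already exist):

        find /home/security/Brick -maxdepth 1 -type f \
             -exec cp -n -t "$HOME/.gvfs/cam on atm-bak-01.local/Brick" {} +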

    Read the article

  • Duplicate ping response when running Ubuntu as virtual machine (VMWare)

    - by Stonerain
    I have the following setup: my router - 192.168.0.1; my host computer (Windows 7) - 192.168.0.3; and Ubuntu is running as a virtual machine on the host. The VMware network setting is Bridged mode. I've modified the Ubuntu network settings in /etc/network/interfaces, set the following config:

        iface eth0 inet static
            address 192.168.0.220
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1

    Internet works correctly, I can install packages. But it gets weird if I try to ping something. I get this:

        PING belpak.by (193.232.248.80) 56(84) bytes of data.
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=250 time=17.0 ms
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=249 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=248 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=247 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=246 time=17.0 ms (DUP!)
        ^CFrom 192.168.0.1 icmp_seq=2 Time to live exceeded

        --- belpak.by ping statistics ---
        2 packets transmitted, 1 received, +4 duplicates, +6 errors, 50% packet loss, time 999ms
        rtt min/avg/max/mdev = 17.023/17.041/17.048/0.117 ms

    I think even more interesting are the results of pinging the router itself:

        stonerain@ubuntu:~$ ping 192.168.0.1 -c 1
        PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
        From 192.168.0.3: icmp_seq=1 Redirect Network(New nexthop: 192.168.0.1)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=254 time=6.64 ms

        --- 192.168.0.1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 6.644/6.644/6.644/0.000 ms

    But if I set -c 2:

        ...
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=252 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=251 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=254 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=253 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=252 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=251 time=13.5 ms (DUP!)
        From 192.168.0.3: icmp_seq=2 Redirect Network(New nexthop: 192.168.0.1)
        64 bytes from 192.168.0.1: icmp_seq=2 ttl=254 time=7.87 ms

        --- 192.168.0.1 ping statistics ---
        2 packets transmitted, 2 received, +256 duplicates, 0% packet loss, time 1002ms
        rtt min/avg/max/mdev = 6.666/10.141/13.556/2.410 ms

    Pinging the host machine, on the other hand, works absolutely correctly: no DUPs, no errors. What seems to be the problem and how can I fix it? Thank you.
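    One way to narrow down where the duplication happens (a general suggestion, not a diagnosis): capture ICMP inside the guest while pinging, and compare with a capture on the Windows host. If each echo request leaves the guest once but several replies come back across the VMware bridge, the duplication is on the host/bridged-adapter side rather than in the guest's network config.

        # inside the Ubuntu guest (install tcpdump first if needed: sudo apt-get install tcpdump)
        sudo tcpdump -n -i eth0 icmp

        # in a second terminal
        ping -c 2 192.168.0.1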

    Read the article

  • How to separate production and test assets during development?

    - by bcsanches
    Hi folks, this is something of a complement to "Assets management, database or versioning system?". I am wondering how to separate development assets, especially programmers' assets, from production assets. For example, if we keep all the assets in the same repository, how do you deal with programmers' assets versus final game assets? Do you keep a separate directory for each of those, allowing duplicates? Or do you use some fancy scheme for stripping the "development" and "test" assets out of the final build?

    Read the article

  • Best way to set up servers for .NET performance [migrated]

    - by msigman
    Assume we have 3 physical servers and let's say we are only interested in performance, not reliability. Is it better to give each server a specific function, or to make them all duplicates and split the traffic between them? In other words, dedicate 1 as the DB server, 1 as the web server, and 1 as the reporting server/data warehouse, or is it better to put all three services on each server and use them as a web farm?

    Read the article

  • Auto-archive IMAP mail folders on OS X

    - by Pradeep
    Hi, I am trying to achieve the following: download all messages from the mail server (and remove downloaded messages from the server); downloaded messages should go into a local mailbox, preserving the folder structure as it was defined on the server; and the download process should be automatic and shouldn't create duplicates. I am on OS X and looking for solutions using Apple Mail or Thunderbird or similar. So far I have found POP is not the way to go (as it loses folder structure and can potentially cause duplicates). The solution described here seems very good but isn't yet available for Thunderbird or Apple Mail: http://getsatisfaction.com/mozilla_messaging/topics/auto_archive_and_keep_folder_structure. The other alternative is Outlook, which has auto-archive, but that is paid and I think exports to pst instead of the more common mbox format. Yet another alternative is http://www.pop4.org/, which adds support for folder management to POP but which I don't think is going to become usable soon. Any other better solutions? Thank you

    Read the article

  • Libraries merged folder views

    - by Stigma
    So I pretty much love the Windows 7 Libraries feature, and saw one use for them that I thought would be perfect, but I can't seem to manage it. Basically, a merged view of different folder structures. Suppose I make a new generic library and add three locations to it: C:\Test\, D:\Test\ and D:\temp\Test\. Now, these may look somewhat okay as long as there are no duplicates in these folders. (It wants to group them based on the included directory, which one can work around by looking on google - I don't have the precise trick on hand I am afraid.) But when you get collisions and, say, two of those directories have a Sub directory in them, stuff becomes unusable (assuming Arrange by: Folder view). You'll have multiple folders listed named Sub, which is pretty useless when looking for data. I want folders to get 'merged', which ought to be possible somehow since it can create these merged views based on artist, album etc in other views. So all subdirectories that are double (and recursively checking for doubles inside those, etc) ought to be merged for as far the View is concerned. If files have a collision, I don't really care what happens - hide one, show both, filter out duplicates, whatever. (Although an option would be nice...) Anyhow, is there anyone who knows how to get such a 'merged folder structure' functionality for Libraries? It would be really useful for me.

    Read the article

  • Tracking down source of duplicate email messages in Outlook / Exchange environment

    - by Ken Pespisa
    I have a few users, who are also Blackberry users, that occasionally have duplicate emails generated from their "mailbox". I put mailbox in quotes because I'm not exactly sure where the duplicates are created. One of these users is in non-cached mode, and the other is in cached mode, and both experience the problem. In fact, the non-cached mode user was originally experiencing the problem while in cached mode, and I made the switch a few weeks ago to attempt to solve the problem. Today I discovered the issue still exists. I'm not sure if the fact that they are blackberry users could be causing the problem at all. I don't see how, but felt I should mention it anyway. Does anyone have ideas on how I might begin to troubleshoot this? I can see in the non-cached user's mailbox "Sent Items" that the message was sent only once. I confirmed the message does not state that there was a conflict and in fact that makes sense because they are in non-cached mode. On the server, we have a mail journaling feature turned on for our third-party mail archiving system, and I can see that that system sees two sent messages. And likewise, the recipient does in fact have two messages in their inbox with consecutive message IDs ([email protected]) and ([email protected]). It would seem to me that the duplicates are generated on the client, but is there a way to tell for sure?

    Read the article

  • Check for unique rows, but ignore one particular column

    - by user269148
    I have an XML document that looks like this: columns A to S with headers, and 1922 rows. This is a backup of some SMS messages, and I want to get rid of duplicates. The problem is that the time in the readable_date header has been messed up. There is nothing wrong with the date, but the clock time is wrong, so I have split that column in three, with year, day and clock. I know I can use a standard filter, but it only looks for unique rows in a single column. What I want to perform is a row check similar to this: F(x) = check whether row 2 (columns A onward) equals row 3 (columns A onward), but ignore column R; if true, then delete row 3; otherwise check whether row 2 equals row 4, and so on. I need to ignore a particular column in the row every time, and need to do this for the complete sheet. The formula check should apply to every row, once the first one is done checking for duplicates... If anyone has a better solution, please say so. Anyway, can anyone help?
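    One spreadsheet-side sketch (Excel/LibreOffice Calc style, assuming the data occupies columns A to S, so skipping R means joining A:Q plus S): build a key in a helper column, flag every row whose key has already appeared above it, then filter on the flag and delete those rows. Place the formulas in row 2 (below the headers) and fill down:

        T2: =CONCATENATE(A2,"|",B2,"|",C2,"|",D2,"|",E2,"|",F2,"|",G2,"|",H2,"|",I2,"|",J2,"|",K2,"|",L2,"|",M2,"|",N2,"|",O2,"|",P2,"|",Q2,"|",S2)
        U2: =IF(COUNTIF($T$2:T2,T2)>1,"duplicate","")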

    Read the article

  • Distinctly LINQ – Getting a Distinct List of Objects

    - by David Totzke
    Let’s say that you have a list of objects that contains duplicate items and you want to extract a subset of distinct items. This is pretty straightforward in the trivial case where the duplicate objects are considered the same, such as in the following example:

        List<int> ages = new List<int> { 21, 46, 46, 55, 17, 21, 55, 55 };

        IEnumerable<int> distinctAges = ages.Distinct();

        Console.WriteLine("Distinct ages:");
        foreach (int age in distinctAges)
        {
            Console.WriteLine(age);
        }

        /* This code produces the following output:
           Distinct ages:
           21
           46
           55
           17
        */

    What if you are working with reference types instead? Imagine a list of search results where items in the results, while unique in and of themselves, also point to a parent. We’d like to be able to select a bunch of items in the list but then see only a distinct list of parents. Distinct isn’t going to help us much on its own as all of the items are distinct already. Perhaps we can create a class with just the information we are interested in, like the Id and Name of the parents.

        public class SelectedItem
        {
            public int ItemID { get; set; }
            public string DisplayName { get; set; }
        }

    We can then use LINQ to populate a list containing objects with just the information we are interested in and then get rid of the duplicates.

        IEnumerable<SelectedItem> list =
            (from item in ResultView.SelectedRows.OfType<Contract.ReceiptSelectResults>()
             select new SelectedItem { ItemID = item.ParentId, DisplayName = item.ParentName })
            .Distinct();

    Most of you will have guessed that this didn’t work. Even though some of our objects are now duplicates, because we are working with reference types, it doesn’t matter that their properties are the same; they’re still considered unique. What we need is a way to define equality for the Distinct() extension method.

    IEqualityComparer<T>

    Looking at the Distinct method we see that there is an overload that accepts an IEqualityComparer<T>. We can simply create a class that implements this interface and that allows us to define equality for our SelectedItem class.

        public class SelectedItemComparer : IEqualityComparer<SelectedItem>
        {
            public new bool Equals(SelectedItem abc, SelectedItem def)
            {
                return abc.ItemID == def.ItemID && abc.DisplayName == def.DisplayName;
            }

            public int GetHashCode(SelectedItem obj)
            {
                string code = obj.DisplayName + obj.ItemID.ToString();
                return code.GetHashCode();
            }
        }

    In the Equals method we simply do whatever comparisons are necessary to determine equality and then return true or false. Take note of the implementation of the GetHashCode method. GetHashCode must return the same value for two different objects if our Equals method says they are equal. Get this wrong and your comparer won’t work. Even though the Equals method returns true, mismatched hash codes will cause the comparison to fail. For our example, we simply build a string from the properties of the object and then call GetHashCode() on that. Now all we have to do is pass an instance of our IEqualityComparer<T> to Distinct and all will be well:

        IEnumerable<SelectedItem> list =
            (from item in ResultView.SelectedRows.OfType<Contract.ReceiptSelectResults>()
             select new SelectedItem { ItemID = item.dahfkp, DisplayName = item.document_code })
            .Distinct(new SelectedItemComparer());

    Enjoy. Dave Just because I can… Technorati Tags: LINQ,C#

    Read the article

  • LINQ To SQL ignore unique constraint exception and continue

    - by Martin
    I have a single table in a database, called Users:

        Users
        ------
        ID (PK, Identity)
        Username (Unique Index)

    I have set up a unique index on the Username column to prevent duplicates. I am then enumerating through a collection and creating a new user in the database for each item. What I want to do is just insert a new user and ignore the exception if the unique key constraint is violated (as it's clearly a duplicate record in that case). This is to avoid having to craft "where not exists" kinds of queries. First off, is this going to be any more efficient, or should my insert code be checking for duplicates instead? I'm drawn more to the database having that logic, as this prevents any other type of client from inserting duplicate data. My other issue is related to LINQ To SQL. I have the following code:

        public class TestRepo
        {
            DatabaseDataContext database = new DatabaseDataContext();

            public void Add(string username)
            {
                database.Users.InsertOnSubmit(new User() { Username = username });
            }

            public void Save()
            {
                database.SubmitChanges();
            }
        }

    And then I iterate over a collection and insert new users, ignoring any exceptions:

        TestRepo repo = new TestRepo();

        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            try
            {
                repo.Add(name);
                repo.Save();
            }
            catch { }
        }

    The first time this is run, great, I have three users in the table. If I remove the second one and run this code again, nothing is inserted. I expected the first insert to fail with the exception, the second to succeed (as I just removed that item from the DB) and the third to then fail. What seems to be happening is that once the SqlException is thrown (even though the loop continues to iterate), all of the subsequent inserts fail - even when there isn't a row in the table that would cause a unique violation. Can anyone explain this? P.S. The only workaround I could find was to instantiate the repo each time before the insert; then it worked exactly as expected - indicating that it's something to do with the LINQ To SQL DataContext. Thanks.
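    The behaviour is consistent with how the DataContext tracks changes: when SubmitChanges fails, the not-yet-inserted User stays in the context's pending change set, so every later SubmitChanges retries it, hits the same unique violation, and rolls the whole batch back. A sketch of one way to keep a single context going, using the names from the question; calling DeleteOnSubmit on an entity that is still pending insertion cancels that pending insert rather than issuing a DELETE:

        var db = new DatabaseDataContext();

        foreach (var name in new[] { "Tim", "Bob", "John" })
        {
            var user = new User { Username = name };
            db.Users.InsertOnSubmit(user);
            try
            {
                db.SubmitChanges();
            }
            catch (System.Data.SqlClient.SqlException)
            {
                // drop the failed row from the pending change set so the next
                // SubmitChanges doesn't retry it and fail again
                db.Users.DeleteOnSubmit(user);
            }
        }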

    Read the article
