Search Results

Search found 886 results on 36 pages for 'no duplicates'.

  • Split comma separated string to count duplicates

    - by josepv
    I have the following data in my database (comma-separated strings):

        "word, test, hello"
        "test, lorem, word"
        "test"
        ... etc

    How can I transform this data into a Dictionary in which each distinct word is paired with the number of times it occurs, i.e. {"test", 3}, {"word", 2}, {"hello", 1}, {"lorem", 1}? I will have approximately 3000 rows of data, in case that makes a difference to any solution offered. I am using .NET 3.5 and would be interested to see any solution using LINQ.
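
    A minimal LINQ sketch (works on .NET 3.5; it assumes the rows are already loaded as an IEnumerable<string> named rows and that System.Linq is imported):

        // split every row into trimmed words, then count the occurrences of each word
        Dictionary<string, int> counts = rows
            .SelectMany(row => row.Split(','))
            .Select(word => word.Trim())
            .GroupBy(word => word)
            .ToDictionary(g => g.Key, g => g.Count());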

  • Sampling Duplicates

    - by user3640982
    I have a dataset from which I need to sample. It is set up with an ID field and a Year field. I want every record from the most current year, and then the most current IDs sampled from every 3rd year going back. The data is ordered by year. For example:

        ID <- rep(1:3, 5)
        Year <- rep(c(1, 2, 3, 4, 5), each = 3)
        df <- data.frame(ID, Year)

           ID Year
        1   1    1
        2   2    1
        3   3    1
        4   1    2
        5   2    2
        6   3    2
        7   1    3
        8   2    3
        9   3    3
        10  1    4
        11  2    4
        12  3    4
        13  1    5
        14  2    5
        15  3    5

    So from this example, I would want to return:

          ID Year
        1  1    1
        2  2    1
        3  3    1
        4  1    4
        5  2    4
        6  3    4

    I'm thinking that some combination of duplicated() and which() should get what I want, but the problem is that duplicated() only tells whether a value has been repeated; it doesn't say which record is being repeated.

        which(duplicated(df$ID))
        [1]  4  5  6  7  8  9 10 11 12 13 14 15

    This is a problem since not every ID exists in every year. Any help would be appreciated. Thanks, Eric
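
    A minimal sketch of one approach (years_wanted, sampled and latest_per_id are hypothetical names; the worked example above appears to count back from year 4 rather than year 5, so the starting year may need adjusting):

        # the target years: the latest year, then every 3rd year before it
        years_wanted <- seq(max(df$Year), min(df$Year), by = -3)

        # all records falling in those years
        sampled <- df[df$Year %in% years_wanted, ]

        # if instead the single most recent record per ID is needed:
        # duplicated(fromLast = TRUE) marks all but the *last* occurrence,
        # which answers "which record is being repeated"
        latest_per_id <- df[!duplicated(df$ID, fromLast = TRUE), ]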

  • SQL to search duplicates

    - by Ram
    I have a table of animals like:

        Lion
        Tiger
        Elephant
        Jaguar
        Cheetah
        Puma
        Rhino

    I want to insert new animals into this table, and I am reading the animal names from a CSV file. Suppose I got the following names in the file: Lion, Tiger, Jaguar. As these animals are already in the "Animals" table, what should a single SQL query look like that determines whether the animals already exist in the table?
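
    A minimal sketch, assuming the table is Animals with a Name column:

        -- returns the names from the file that already exist in the table
        SELECT Name
        FROM Animals
        WHERE Name IN ('Lion', 'Tiger', 'Jaguar');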

  • How to prevent duplicates, macro or something?

    - by blez
    Well, the problem is that I've got a lot of code like this for each event passed to the GUI. How can I shorten this? Macros won't do the job, I guess. Is there a more generic way to do something like a 'template'?

        private delegate void DownloadProgressDelegate(object sender, DownloaderProgressArgs e);

        void DownloadProgress(object sender, DownloaderProgressArgs e)
        {
            if (this.InvokeRequired)
            {
                this.BeginInvoke(new DownloadProgressDelegate(DownloadProgress), new object[] { sender, e });
                return;
            }
            label2.Text = d.speedOutput.ToString();
        }

        private delegate void DownloadSpeedDelegate(object sender, DownloaderProgressArgs e);

        void DownloadSpeed(object sender, DownloaderProgressArgs e)
        {
            if (this.InvokeRequired)
            {
                this.BeginInvoke(new DownloadSpeedDelegate(DownloadSpeed), new object[] { sender, e });
                return;
            }
            string speed = "";
            speed = (e.DownloadSpeed / 1024).ToString() + "kb/s";
            label3.Text = speed;
        }
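
    A minimal sketch of one way to factor out the boilerplate: a single helper that marshals any Action onto the UI thread (OnUi is a hypothetical name; assumes WinForms and C# 3.0 or later for lambdas):

        // runs the body on the UI thread, invoking asynchronously when needed
        private void OnUi(Action body)
        {
            if (InvokeRequired)
                BeginInvoke(body);
            else
                body();
        }

        void DownloadProgress(object sender, DownloaderProgressArgs e)
        {
            OnUi(() => label2.Text = d.speedOutput.ToString());
        }

        void DownloadSpeed(object sender, DownloaderProgressArgs e)
        {
            OnUi(() => label3.Text = (e.DownloadSpeed / 1024) + "kb/s");
        }

    Each handler then collapses to one line, and the per-event delegate declarations disappear.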

  • filter duplicates in SQL join

    - by Will
    When using a SQL join, is it possible to keep only the rows where a single row from the left table matched exactly one row from the right table? For example:

        select * from A, B where A.id = B.a_id;

        a1 b1
        a2 b1
        a2 b2

    In this case, I want to remove all except the first row, where a single row from A matched exactly one row from B. I'm using MySQL.
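
    A minimal sketch, assuming the tables are A(id, ...) and B(a_id, ...) as above:

        -- keep only the A rows that have exactly one matching B row
        SELECT A.*, B.*
        FROM A
        JOIN B ON A.id = B.a_id
        WHERE A.id IN (
            SELECT a_id
            FROM B
            GROUP BY a_id
            HAVING COUNT(*) = 1
        );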

  • MSSQL - Select one random record not showing duplicates

    - by Lukes123
    I have two tables, Events and Photos, which relate via the 'Event_Id' column. I wish to select ONE random photo from each event and display them. How can I do this? I have the following, which displays all the associated photos. How can I limit it to one per event?

        SELECT Photos.Photo_Id, Photos.Photo_Path, Photos.Event_Id,
               Events.Event_Title, Events.Event_StartDate, Events.Event_EndDate
        FROM Photos, Events
        WHERE Photos.Event_Id = Events.Event_Id
          AND Events.Event_EndDate < GETDATE()
          AND Events.Event_EndDate IS NOT NULL
          AND Events.Event_StartDate IS NOT NULL
        ORDER BY NEWID()

    Thanks, Luke Stratton
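
    A minimal sketch using ROW_NUMBER() to rank each event's photos in random order and keep the first (assumes SQL Server 2005 or later):

        WITH ranked AS (
            SELECT p.Photo_Id, p.Photo_Path, p.Event_Id,
                   e.Event_Title, e.Event_StartDate, e.Event_EndDate,
                   -- a fresh random order per event on every run
                   ROW_NUMBER() OVER (PARTITION BY p.Event_Id ORDER BY NEWID()) AS rn
            FROM Photos p
            JOIN Events e ON p.Event_Id = e.Event_Id
            WHERE e.Event_EndDate < GETDATE()
              AND e.Event_EndDate IS NOT NULL
              AND e.Event_StartDate IS NOT NULL
        )
        SELECT Photo_Id, Photo_Path, Event_Id, Event_Title, Event_StartDate, Event_EndDate
        FROM ranked
        WHERE rn = 1;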

  • XSLT1.0: remove duplicates combined with an xsl:key

    - by Jannibal
    I have the following piece of XML:

        <research>
          <research.record>
            <research.record_number>1</research.record_number>
            <research.type>
              <value lang="en-US">some research type</value>
            </research.type>
            <research.type>
              <value lang="en-US">some other type of research</value>
            </research.type>
            <project.record>
              <priref>101</priref>
              <project.type>
                <value lang="en-US">some type of project</value>
              </project.type>
            </project.record>
          </research.record>
        </research>
        <research>
          <research.record>
            <research.record_number>2</research.record_number>
            <research.type>
              <value lang="en-US">some other type of research</value>
            </research.type>
            <research.type>
              <value lang="en-US">a third type of research</value>
            </research.type>
            <project.record>
              <priref>101</priref>
              <project.type>
                <value lang="en-US">some type of project</value>
              </project.type>
            </project.record>
          </research.record>
        </research>
        <research>
          <research.record>
            <research.record_number>3</research.record_number>
            <research.type>
              <value lang="en-US">some other type of research</value>
            </research.type>
            <research.type>
              <value lang="en-US">a fourth type</value>
            </research.type>
            <project.record>
              <priref>201</priref>
              <project.type>
                <value lang="en-US">some other type of project</value>
              </project.type>
            </project.record>
          </research.record>
        </research>
        <research>
        ... etc ...

    With XSLT 1.0 I transform this XML into a list of unique project records by using xsl:key. So far, so good... The problem is: I also want to show the unique research types for each unique project record. My desired output would be:

        project.record 101: some research type, some other type of research, a third type of research
        project.record 201: some other type of research, a fourth type

    Hope someone can help me out with the right XSLT/XPath. (I can only use XSLT 1.0.)
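
    A minimal sketch using two Muenchian keys, one to group records by priref and one to deduplicate type values within a priref (untested against the full document; it assumes the <research> elements share a single document root, and the key names are hypothetical):

        <xsl:key name="rec-by-proj" match="research.record"
                 use="project.record/priref"/>
        <xsl:key name="type-by-proj" match="research.type/value"
                 use="concat(ancestor::research.record[1]/project.record/priref, '|', .)"/>

        <xsl:template match="/">
          <!-- first research.record per priref -->
          <xsl:for-each select="//research.record[generate-id() =
                                generate-id(key('rec-by-proj', project.record/priref)[1])]">
            <xsl:text>project.record </xsl:text>
            <xsl:value-of select="project.record/priref"/>
            <xsl:text>: </xsl:text>
            <!-- first occurrence of each type value within this priref -->
            <xsl:for-each select="key('rec-by-proj', project.record/priref)/research.type/value
                                  [generate-id() = generate-id(key('type-by-proj',
                                   concat(ancestor::research.record[1]/project.record/priref, '|', .))[1])]">
              <xsl:value-of select="."/>
              <xsl:if test="position() != last()">, </xsl:if>
            </xsl:for-each>
            <xsl:text>&#10;</xsl:text>
          </xsl:for-each>
        </xsl:template>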

  • Stop duplicates from being added to an array of Ruby objects

    - by Dom
    How can I eliminate duplicate elements from an array of Ruby objects, using an attribute of the object to match identical objects? With an array of basic types I can use a Set, e.g.:

        array_list = [1, 3, 4, 5, 6, 6]
        array_list.to_set  # => #<Set: {1, 3, 4, 5, 6}>

    Can I adapt this technique to work with object attributes? Thanks.
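
    A minimal sketch of two common approaches, assuming the objects expose a name attribute to match on:

        # Ruby 1.9+: uniq takes a block that defines each element's identity
        unique = objects.uniq { |o| o.name }

        # older Rubies: group by the attribute and keep the first of each group
        unique = objects.group_by { |o| o.name }.map { |_, group| group.first }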

  • Grails: Duplicates & unique constraint validation

    - by rukoche
    OK, here is a stripped-down version of what I have in my app.

    Artist domain:

        class Artist {
            String name
            Date lastMined

            def artistService

            static transients = ['artistService']
            static hasMany = [events: Event]

            static constraints = {
                name(unique: true)
                lastMined(nullable: true)
            }

            def mine() {
                artistService.mine(this)
            }
        }

    Event domain:

        class Event {
            String name
            String details
            String country
            String town
            String place
            String url
            String date

            static belongsTo = [Artist]
            static hasMany = [artists: Artist]

            static constraints = {
                name(unique: true)
                url(unique: true)
            }
        }

    ArtistService:

        class ArtistService {
            def results = [
                [
                    name: "name",
                    details: "details",
                    country: "country",
                    town: "town",
                    place: "place",
                    url: "url",
                    date: "date"
                ]
            ]

            def mine(Artist artist) {
                results << results[0] // now we have a duplicate

                results.each {
                    def event = new Event(it)
                    if (event.validate()) {
                        if (artist.events.find{ it.name == event.name }) {
                            log.info "grrr! valid duplicate name: ${event.name}"
                        }
                        artist.addToEvents(event)
                    }
                }

                artist.lastMined = new Date()
                if (artist.events) {
                    artist.save(flush: true)
                }
            }
        }

    In theory event.validate() should return false and the event will not be added to the artist, but it doesn't... which results in a DB exception on artist.save(). Although I noticed that if the duplicate event is persisted first, everything works as intended. Is it a bug or a feature? :P
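
    A minimal sketch of a workaround, assuming the cause is that the unique constraint is checked with a database query, so it cannot see duplicates that are still unsaved in the current session (which would also explain why persisting the duplicate first "fixes" it):

        def mine(Artist artist) {
            def seenNames = [] as Set   // names already queued in this run

            results.each {
                def event = new Event(it)
                // validate() catches rows already persisted; seenNames.add()
                // returns false for the in-memory repeats the constraint misses
                if (event.validate() && seenNames.add(event.name)) {
                    artist.addToEvents(event)
                }
            }

            artist.lastMined = new Date()
            if (artist.events) {
                artist.save(flush: true)
            }
        }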

  • Indexing table with duplicates MySQL/MSSQL with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns:

        ID Store_ID Feature_ID Order_ID Viewed_Date Deal_ID IsTrial

    The ID is auto-generated. Store_ID goes from 1 to 8. Feature_ID goes from 1 to, let's say, 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs the number of views in a certain period (or overall) where trial is 0, for a particular store id and a particular feature. The query takes the form of:

        select count(viewed_date)
        from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
          and store_id = '2'
          and feature_id = '12'
          and Istrial = 0

    In MSSQL you can have a filtered index to use for Istrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need a better improvement than this. Right now I have more than 4 million rows. To answer a query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows. PS: I forgot to add the viewed_date filter to the query at first; it is included now.
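
    A minimal sketch of a tighter composite index: the equality columns first and the range column last, so the whole WHERE clause can be satisfied within one index range scan (the index name is hypothetical):

        ALTER TABLE theTable
          ADD INDEX idx_store_feature_trial_date
              (store_id, feature_id, Istrial, viewed_date);

    MySQL has no filtered indexes, but because Istrial sits before viewed_date in the index, the Istrial = 0 condition narrows the scan much as a filtered index would.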

  • is there a better way of replacing duplicates in a list (python)

    - by myeu2
    Given a list:

        l1 = ['a', 'b', 'c', 'a', 'a', 'b']

    desired output:

        ['a', 'b', 'c', 'a_1', 'a_2', 'b_1']

    I created the following code to get the output. It's messyyy...

        for index in range(len(l1)):
            counter = 1
            list_of_duplicates_for_item = [dup_index for dup_index, item in enumerate(l1)
                                           if item == l1[index] and l1.count(l1[index]) > 1]
            for dup_index in list_of_duplicates_for_item[1:]:
                l1[dup_index] = l1[dup_index] + '_' + str(counter)
                counter = counter + 1

    Is there a more pythonic way of doing this? I couldn't find anything on the web.
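
    A minimal sketch of a single-pass version (label_duplicates is a hypothetical name):

        from collections import defaultdict

        def label_duplicates(items):
            seen = defaultdict(int)   # how many times each value has appeared so far
            out = []
            for item in items:
                seen[item] += 1
                # first occurrence stays as-is; repeats get _1, _2, ...
                out.append(item if seen[item] == 1 else '%s_%d' % (item, seen[item] - 1))
            return out

        print(label_duplicates(['a', 'b', 'c', 'a', 'a', 'b']))
        # ['a', 'b', 'c', 'a_1', 'a_2', 'b_1']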

  • Remove duplicates in entries, scala way

    - by andersbohn
    I have a list of entries stating a production value in a given interval. Entries stating the exact same value at a later time add no information and can safely be left out.

        case class Entry(minute: Int, production: Double)

        val entries = List(Entry(0, 100.0), Entry(5, 100.0), Entry(10, 100.0),
                           Entry(20, 120.0), Entry(30, 100.0), Entry(180, 0.0))

    Experimenting with the Scala 2.8 collection functions, so far I have this working implementation:

        entries.foldRight(List[Entry]()) { (entry, list) =>
          list match {
            case head :: tail if (entry.production == head.production) => entry :: tail
            case head :: tail => entry :: list
            case List() => entry :: List()
          }
        }

        res0: List[Entry] = List(Entry(0,100.0), Entry(20,120.0), Entry(30,100.0), Entry(180,0.0))

    Any comments? Am I missing out on some Scala magic?
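
    A minimal sketch of an alternative that pairs each entry with its successor and keeps an entry only when its production differs from the previous one (assumes entries is non-empty):

        // zip the list with its own tail to compare neighbours
        val deduped = entries.head :: (entries zip entries.tail).collect {
          case (prev, cur) if prev.production != cur.production => cur
        }

        // deduped: List(Entry(0,100.0), Entry(20,120.0), Entry(30,100.0), Entry(180,0.0))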

  • sed script to remove file name duplicates

    - by dma_k
    Dear community, I hope the task below will be very easy for sed lovers. I am not a sed guru, but I need to express the following task in sed, as sed is more popular on Linux systems. The input text stream is something produced by "make depends" and looks like the following:

        pgm2asc.o: pgm2asc.c ../include/config.h amiga.h list.h pgm2asc.h pnm.h \
          output.h gocr.h unicode.h ocr1.h ocr0.h otsu.h barcode.h progress.h
        box.o: box.c gocr.h pnm.h ../include/config.h unicode.h list.h pgm2asc.h \
          output.h
        database.o: database.c gocr.h pnm.h ../include/config.h unicode.h list.h \
          pgm2asc.h output.h
        detect.o: detect.c pgm2asc.h pnm.h ../include/config.h output.h gocr.h \
          unicode.h list.h

    I need to catch only the C++ header files (i.e. those ending in .h), make the list unique, and print it as a space-separated list with src/ prepended as a path prefix. This is achieved by the following Perl script:

        make libs-depends | perl -e 'while (<>) { while (/ ([\w\.\/]+?\.h)/g) { $a{$1} = 1; } } print join " ", map { "src/$_" } keys %a;'

    The output is:

        src/unicode.h src/pnm.h src/progress.h src/amiga.h src/ocr0.h src/ocr1.h src/otsu.h src/barcode.h src/gocr.h src/../include/config.h src/list.h src/pgm2asc.h src/output.h

    Please help to express this in sed.
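
    One hedged note: sed has no built-in way to drop duplicates, so a minimal sketch keeps sed for the matching and prefixing step and delegates uniquifying to sort -u (assumes tokens are separated by spaces or tabs):

        make libs-depends |
          tr -s ' \t' '\n\n' |          # one token per line; lone '\' continuations are filtered out below
          sed -n 's|^.*\.h$|src/&|p' |  # keep only tokens ending in .h and prefix src/
          sort -u |                     # make the list unique
          tr '\n' ' '                   # rejoin into a single space-separated line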

  • Python - Removing duplicates from a string

    - by Daniel
    def remove_duplicates(strng):
        """
        Returns a string which is the same as the argument except only the
        first occurrence of each letter is present. Upper and lower case
        letters are treated as different. Only duplicate letters are removed;
        other characters such as spaces or numbers are not changed.

        >>> remove_duplicates('apple')
        'aple'
        >>> remove_duplicates('Mississippi')
        'Misp'
        >>> remove_duplicates('The quick brown fox jumps over the lazy dog')
        'The quick brown fx jmps v t lazy dg'
        >>> remove_duplicates('121 balloons 2 u')
        '121 balons 2 u'
        """
        s = strng.split()
        return strng.replace(s[0], "")

    I'm writing a function to get rid of duplicate letters, but so far I have been playing around for an hour and can't get anything to work. Help would be appreciated, thanks.
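
    A minimal sketch that satisfies the docstring above: track the letters seen so far and append a letter only on its first appearance, while non-letters pass through untouched:

        def remove_duplicates(strng):
            seen = set()
            out = []
            for ch in strng:
                if ch.isalpha():
                    if ch in seen:
                        continue          # duplicate letter: skip it
                    seen.add(ch)          # first occurrence: remember it
                out.append(ch)            # letters on first sight, plus all non-letters
            return ''.join(out)

        print(remove_duplicates('Mississippi'))   # 'Misp'
        print(remove_duplicates('121 balloons 2 u'))  # '121 balons 2 u'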

  • Handling primary key duplicates in a data warehouse load

    - by Meff
    I'm currently building an ETL system to load a data warehouse from a transactional system. The grain of my fact table is the transaction level. In order to ensure I don't load duplicate rows, I've put a primary key on the fact table, which is the transaction ID. I've encountered a problem with transactions being reversed: in the transactional database this is done via a status, which I pick up, so I can work out whether the transaction is being made or rolled back and load a reversal row in the warehouse accordingly. However, the reversal row will have the same transaction ID, and so I get a primary key violation. I've solved this for now by negating the primary key, so transaction ID 1 would be a payment and transaction ID -1 (in the warehouse only) would be the reversal. I have considered an alternative of adding a BIT column, where 0 is normal and 1 is a reversal, then making the PK the transaction ID plus the BIT column. My question is: is this a good practice, and has anyone else encountered anything like this? For reference, this is a payment processing system, so values will not be modified; there will only ever be transactions and reversals.
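
    A minimal sketch of the composite-key alternative (the table and column names are hypothetical):

        -- 0 = original transaction, 1 = reversal
        ALTER TABLE FactPayments
          ADD IsReversal BIT NOT NULL DEFAULT 0;

        ALTER TABLE FactPayments
          ADD CONSTRAINT PK_FactPayments
          PRIMARY KEY (TransactionID, IsReversal);

    This keeps the natural transaction ID intact for joins back to the source system, rather than overloading its sign with meaning.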

  • merging two tables, while applying aggregates on the duplicates (max,min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among the fields I have:

        Id       - the record id
        Count    - the number of times this Id has been reported
        FirstHit - the earliest timestamp with which this Id was reported
        LastHit  - the latest timestamp with which this Id was reported

    This table has only one record for any given Id. Every day I get another table (let's call it feed) with around half a million records, with these fields among many others:

        Id
        Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is to update log in the following way:

        Count    - the log Count value, plus the count() of records for that Id found in feed
        FirstHit - the earlier of the current value in log and the minimum value in feed for that Id
        LastHit  - the later of the current value in log and the maximum value in feed for that Id

    It should be noted that many of the Ids in feed are already in log. The simple thing that worked was to create a temporary table and insert into it the union of both, as in:

        Select Id, Min(Timestamp) As FirstHit, Max(Timestamp) as LastHit, Count(*) as Count
        FROM feed GROUP BY Id
        UNION ALL
        Select Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I do a select that aggregates Min(FirstHit), Max(LastHit) and Sum(Count):

        Select Id, Min(FirstHit), Max(LastHit), Sum(Count)
        FROM @temp GROUP BY Id;

    and that gives me the end result. I could then delete everything from log and replace it with everything in temp, or craft an update for the common records and insert the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this, perhaps doing the update in place in the log table?
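
    A minimal sketch of an in-place upsert using MERGE, assuming SQL Server (suggested by the @temp table variable) and the tables exactly as described:

        MERGE log AS t
        USING (
            SELECT Id, MIN([Timestamp]) AS FirstHit, MAX([Timestamp]) AS LastHit,
                   COUNT(*) AS Cnt
            FROM feed
            GROUP BY Id
        ) AS s
        ON t.Id = s.Id
        WHEN MATCHED THEN UPDATE SET
            t.[Count]  = t.[Count] + s.Cnt,
            t.FirstHit = CASE WHEN s.FirstHit < t.FirstHit THEN s.FirstHit ELSE t.FirstHit END,
            t.LastHit  = CASE WHEN s.LastHit  > t.LastHit  THEN s.LastHit  ELSE t.LastHit  END
        WHEN NOT MATCHED THEN
            INSERT (Id, [Count], FirstHit, LastHit)
            VALUES (s.Id, s.Cnt, s.FirstHit, s.LastHit);

    This touches only the Ids present in the day's feed instead of rewriting the whole log table.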

  • PHP mail sending duplicates with different timestamp

    - by brhea
    Hi all, I've got a PHP/AJAX form on my site at http://www.brianrhea.com (click Request Project). When I test the form from multiple computers, it works fine for me. However, I will sporadically receive a duplicate e-mail, and I have heard from at least one user who tried to submit that it gave them an alert error which I am unable to duplicate. This is the PHP that I'm using. Is there anything that stands out as a potential issue?

        <?php
        //Retrieve form data.
        //GET - user submitted data using AJAX
        //POST - in case user does not support javascript, we'll use POST instead
        $name = ($_GET['name']) ? $_GET['name'] : $_POST['name'];
        $email = ($_GET['email']) ? $_GET['email'] : $_POST['email'];
        $subject = ($_GET['subject']) ? $_GET['subject'] : $_POST['subject'];
        $comments = ($_GET['comments']) ? $_GET['comments'] : $_POST['comments'];

        //flag to indicate which method it uses. If POST set it to 1
        if ($_POST) $post = 1;

        //Simple server side validation for POST data, of course, you should validate the email
        if (!$name) $errors[count($errors)] = 'Please enter your name.';
        if (!$email) $errors[count($errors)] = 'Please enter your email.';
        if (!$subject) $errors[count($errors)] = 'Please choose a subject.';
        if (!$comments) $errors[count($errors)] = 'Please enter your comments.';

        //if the errors array is empty, send the mail
        if (!$errors) {
            //recipient
            $to = '[email protected]';
            //sender
            $from = $name . ' <' . $email . '>';
            //subject and the html message
            $subject = 'Comment from ' . $name;
            $message = '
            <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
            <html xmlns="http://www.w3.org/1999/xhtml">
            <head></head>
            <body>
            <table>
                <tr><td>Name</td><td>' . $name . '</td></tr>
                <tr><td>Email</td><td>' . $email . '</td></tr>
                <tr><td>Subject</td><td>' . $subject . '</td></tr>
                <tr><td>Comments</td><td>' . nl2br($comments) . '</td></tr>
            </table>
            </body>
            </html>';

            //send the mail
            $result = sendmail($to, $subject, $message, $from);

            //if POST was used, display the message straight away
            if ($_POST) {
                if ($result) echo 'Thank you! We have received your message.';
                else echo 'Please verify that you have entered a valid email address.';
            //else if GET was used, return the boolean value so that
            //ajax script can react accordingly
            //1 means success, 0 means failed
            } else {
                echo $result;
            }

        //if the errors array has values
        } else {
            //display the errors message
            for ($i = 0; $i < count($errors); $i++) echo $errors[$i] . '<br/>';
            echo '<a href="form.php">Back</a>';
            exit;
        }

        //Simple mail function with HTML header
        function sendmail($to, $subject, $message, $from) {
            $headers = "MIME-Version: 1.0" . "\r\n";
            $headers .= "Content-type:text/html;charset=iso-8859-1" . "\r\n";
            $headers .= 'From: ' . $from . "\r\n";

            $result = mail($to, $subject, $message, $headers);

            if ($result) return 1;
            else return 0;
        }
        ?>
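
    One hedged observation: the handler accepts the same submission over both GET (the AJAX path) and POST (the no-JavaScript fallback), so a client that fires both, e.g. when a script error lets the normal submit proceed after the AJAX call, would send the mail twice. A minimal sketch of a guard that reads from exactly one source per request:

        // a sketch: pick a single transport for this request instead of mixing GET and POST
        $src      = ($_SERVER['REQUEST_METHOD'] === 'POST') ? $_POST : $_GET;
        $name     = isset($src['name']) ? $src['name'] : '';
        $email    = isset($src['email']) ? $src['email'] : '';
        $subject  = isset($src['subject']) ? $src['subject'] : '';
        $comments = isset($src['comments']) ? $src['comments'] : '';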

  • Checking for duplicates with nested forms

    - by Cyrus
    I'm making a Rails 3.2.9 app that allows users to create pages, and they can embed YouTube videos through a nested form. I'm trying to figure out how to prevent duplicate video records from being stored in my db. I have a Video model that takes the YouTube url and parses out just the video id, storing that instead of the full user-submitted YouTube url, which may have extraneous query parameters. Here's the situation I'm trying to figure out: page1 has video1 - url: 123 and video2 - url: abc. Then another user creates page2 and submits video3 - url: def and video4 - url: 123. Currently each page has_many videos, but I think I should change it to a many-to-many relationship. How would I make it so that the url submitted as video4 in the nested form points to video1? Also, how would I make a nested form that creates objects connected through a join table?
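
    A minimal sketch of the many-to-many arrangement, assuming a hypothetical join model PageVideo and a Video#url column holding the parsed video id:

        class Video < ActiveRecord::Base
          has_many :page_videos
          has_many :pages, through: :page_videos
          validates :url, uniqueness: true
        end

        class Page < ActiveRecord::Base
          has_many :page_videos
          has_many :videos, through: :page_videos
        end

        # when handling the nested attributes, reuse an existing row for the same id
        video = Video.where(url: parsed_id).first_or_create
        page.videos << video unless page.videos.include?(video)

    With this shape, page2's "video4" simply becomes a second join row pointing at the existing video1 record.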

  • Difference Between Two Lists with Many Duplicates in Python

    - by Paul
    I have several lists that contain many of the same items, including duplicate items. I want to check which items in one list are not in the other list. For example, I might have one list like this:

        l1 = ['a', 'b', 'c', 'b', 'c']

    and one list like this:

        l2 = ['a', 'b', 'c', 'b']

    Comparing these two lists, I would want to return a third list like this:

        l3 = ['c']

    I am currently using some terrible code that I made a while ago, which I'm fairly certain doesn't even work properly, shown below.

        def list_difference(l1, l2):
            for i in range(0, len(l1)):
                for j in range(0, len(l2)):
                    if l1[i] == l1[j]:
                        l1[i] = 'damn'
                        l2[j] = 'damn'
            l3 = []
            for item in l1:
                if item != 'damn':
                    l3.append(item)
            return l3

    How can I better accomplish this task?
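
    A minimal sketch using multiset subtraction (Counter is available from Python 2.7):

        from collections import Counter

        def list_difference(l1, l2):
            # Counter subtraction keeps each element as many extra times
            # as it appears in l1 beyond its count in l2
            return list((Counter(l1) - Counter(l2)).elements())

        print(list_difference(['a', 'b', 'c', 'b', 'c'], ['a', 'b', 'c', 'b']))
        # ['c']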

  • program to determine number of duplicates in a sentence

    - by bhavna raghuvanshi
    public class duplicate {
            public static void main(String[] args) throws IOException {
                System.out.println("Enter words separated by spaces ('.' to quit):");
                Set<String> s = new HashSet<String>();
                Scanner input = new Scanner(System.in);
                while (true) {
                    String token = input.next();
                    if (".".equals(token)) break;
                    if (!s.add(token))
                        System.out.println("Duplicate detected: " + token);
                }
                System.out.println(s.size() + " distinct words:\n" + s);
            }
        }

    My program detects and prints duplicate words, but I also need to print the number of duplicate words. Please help me do it.
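
    A minimal sketch that keeps a running counter alongside the set (imports added; the unused throws IOException dropped; the class renamed Duplicate per Java convention):

        import java.util.HashSet;
        import java.util.Scanner;
        import java.util.Set;

        public class Duplicate {
            public static void main(String[] args) {
                System.out.println("Enter words separated by spaces ('.' to quit):");
                Set<String> s = new HashSet<String>();
                Scanner input = new Scanner(System.in);
                int duplicates = 0;                      // running total of repeated words
                while (true) {
                    String token = input.next();
                    if (".".equals(token)) break;
                    if (!s.add(token)) {                 // add() returns false if already present
                        duplicates++;
                        System.out.println("Duplicate detected: " + token);
                    }
                }
                System.out.println(s.size() + " distinct words:\n" + s);
                System.out.println(duplicates + " duplicate words");
            }
        }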
