Search Results

Search found 886 results on 36 pages for 'no duplicates'.

Page 31/36

  • Unit Testing - Algorithm or Sample based?

    - by ohadsc
    Say I'm trying to test a simple Set class:

        public class IntSet : IEnumerable<int>
        {
            public void Add(int i) { ... }
            // IEnumerable implementation...
        }

    Suppose I'm trying to test that no duplicate values can exist in the set. My first option is to insert some sample data into the set and test for duplicates using my knowledge of the data I used, for example:

        //OPTION 1
        void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
        {
            var set = new IntSet();

            //3 will be added 3 times
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);

            //I know 3 is the only candidate to appear multiple times
            int counter = 0;
            foreach (int i in set)
                if (i == 3) counter++;

            Assert.AreEqual(1, counter);
        }

    My second option is to test for my condition generically:

        //OPTION 2
        void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
        {
            var set = new IntSet();

            //The following could even be a list of random numbers with a duplicate
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);

            //I am not using my prior knowledge of the sample data;
            //the following line would work for any data
            CollectionAssert.AreEquivalent(new HashSet<int>(values), set);
        }

    Of course, in this example I conveniently have a set implementation to check against, as well as code to compare collections (CollectionAssert). But what if I didn't have either? This is the situation when you are testing your real-life custom business logic. Granted, testing for expected conditions generically covers more cases, but it becomes very similar to implementing the logic again (which is both tedious and useless - you can't use the same code to check itself!). Basically, I'm asking whether my tests should look like "insert 1, 2, 3, then check something about 3" or "insert 1, 2, 3 and check for something in general".

    EDIT - To help me understand, please state in your answer whether you prefer OPTION 1 or OPTION 2 (or neither, or that it depends on the case, etc.).

    Read the article

  • Stretch ListBox Items hit area to full width of the ListBox?

    - by Nicholas
    I've looked around for an answer on this, but the potential duplicates are more concerned with presentation than interaction. I have a basic list box, and each item's content is a simple string. The ListBox itself is stretched to fill its grid container, but each ListBoxItem's hit area does not mirror the ListBox width. It looks as if the hit area (pointer contact area) for each item is only the width of the text content. How do I make this stretch all the way across, regardless of the text size? I've set HorizontalContentAlignment to Stretch, but this doesn't solve my problem. My only other guess is that the content is actually stretching, but the background is invisible and so not capturing the mouse pointer.

        <ListBox Grid.Row="1"
                 x:Name="ProjectsListBox"
                 DisplayMemberPath="Name"
                 ItemsSource="{Binding Path=Projects}"
                 SelectedItem="{Binding Path=SelectedProject}"
                 HorizontalContentAlignment="Stretch"/>

    The XAML is pretty straightforward on this. If I mouse over the text in one of the items, then the entire width of the item becomes active. I guess I just need to know how to create an interactive background that is invisible.
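    A minimal sketch of the usual fix (assuming WPF): stretch the items through the ItemContainerStyle and give them an explicitly Transparent background, since a Transparent brush participates in hit testing while a null background does not.

        <ListBox Grid.Row="1"
                 x:Name="ProjectsListBox"
                 DisplayMemberPath="Name"
                 ItemsSource="{Binding Path=Projects}"
                 SelectedItem="{Binding Path=SelectedProject}">
            <ListBox.ItemContainerStyle>
                <Style TargetType="ListBoxItem">
                    <!-- Transparent, unlike null, still takes part in hit testing -->
                    <Setter Property="Background" Value="Transparent"/>
                    <Setter Property="HorizontalContentAlignment" Value="Stretch"/>
                </Style>
            </ListBox.ItemContainerStyle>
        </ListBox>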

    Read the article

  • auto-document exceptions on methods in C#/.NET

    - by Sarah Vessels
    I would like some tool, preferably one that plugs into VS 2008/2010, that will go through my methods and add XML comments about the possible exceptions they can throw. I don't want the <summary> or other XML tags to be generated for me, because I'll fill those out myself, but it would be nice if even on private/protected methods I could see which exceptions could be thrown. Otherwise I find myself going through the methods, hovering on all the method calls within them to see the list of exceptions, then updating that method's <exception> list to include those. Maybe a VS macro could do this? From this:

        private static string getConfigFilePath()
        {
            return Path.Combine(Environment.CurrentDirectory, CONFIG_FILE);
        }

    To this:

        /// <exception cref="System.ArgumentException"/>
        /// <exception cref="System.ArgumentNullException"/>
        /// <exception cref="System.IO.IOException"/>
        /// <exception cref="System.IO.DirectoryNotFoundException"/>
        /// <exception cref="System.Security.SecurityException"/>
        private static string getConfigFilePath()
        {
            return Path.Combine(Environment.CurrentDirectory, CONFIG_FILE);
        }

    Update: it seems the tool would have to go through the methods recursively, e.g., method1 calls method2, which calls method3, which is documented as throwing NullReferenceException; so both method2 and method1 are documented by the tool as also throwing NullReferenceException. The tool would also need to eliminate duplicates: if two calls within a method are documented as throwing DirectoryNotFoundException, the method would only list <exception cref="System.IO.DirectoryNotFoundException"/> once.

    Read the article

  • Uniq in awk; removing duplicate values in a column using awk

    - by D W
    I have a large data file in the following format:

        ENST00000371026   WDR78,WDR78,WDR78,   WD repeat domain 78 isoform 1,WD repeat domain 78 isoform 1,WD repeat domain 78 isoform 2,
        ENST00000371023   WDR32                WD repeat domain 32 isoform 2
        ENST00000400908   RERE,KIAA0458,       atrophin-1 like protein isoform a,Homo sapiens mRNA for KIAA0458 protein, partial cds.,

    The columns are tab separated. Multiple values within columns are comma separated. I would like to remove the duplicate values in the second column to result in something like this:

        ENST00000371026   WDR78           WD repeat domain 78 isoform 1,WD repeat domain 78 isoform 1,WD repeat domain 78 isoform 2,
        ENST00000371023   WDR32           WD repeat domain 32 isoform 2
        ENST00000400908   RERE,KIAA0458   atrophin-1 like protein isoform a,Homo sapiens mRNA for KIAA0458 protein, partial cds.,

    I tried the code below, but it doesn't seem to remove the duplicate values:

        awk '
        BEGIN { FS="\t" } ;
        {
            split($2, valueArray, ",");
            j=0;
            for (i in valueArray) {
                if (!(valueArray[i] in duplicateArray)) {
                    duplicateArray[j] = valueArray[i];
                    j++;
                }
            };
            printf $1 "\t";
            for (j in duplicateArray) {
                if (duplicateArray[j]) {
                    printf duplicateArray[j] ",";
                }
            }
            printf "\t";
            print $3
        }' knownGeneFromUCSC.txt

    How can I remove the duplicates in column 2 correctly?
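    For what it's worth, the 'in' test in that snippet never matches: 'in' checks an awk array's keys, but the values are stored under the numeric keys j, and duplicateArray is never cleared between lines. A hedged sketch of one fix (split("", seen) is the portable way to empty an array; whole-array delete is a gawk extension):

        awk 'BEGIN { FS = OFS = "\t" }
        {
            n = split($2, values, ",")
            out = ""
            split("", seen)                    # reset the lookup table for each line
            for (i = 1; i <= n; i++) {
                if (values[i] != "" && !(values[i] in seen)) {
                    seen[values[i]] = 1        # key the array by the value itself
                    out = out values[i] ","
                }
            }
            print $1, out, $3
        }' knownGeneFromUCSC.txt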

    Read the article

  • Periodically iterating over a collection that's constantly changing

    - by rwmnau
    I have a collection of objects that's constantly changing (my application is multi-threaded, and different threads are constantly submitting requests to modify objects in the collection, so it's unpredictable), and I want to display some information about what's currently in it. If I lock the collection, I can iterate over it and get my information without any problems - however, this causes problems for the other threads, which could have submitted multiple requests to modify the collection in the meantime and will be stalled. I've thought of a couple of ways around this, and I'm looking for any advice:

    1. Make a copy of the collection and iterate over it, allowing the original to continue updating in the background. The collection can get large, so this isn't ideal, but it's safe.
    2. Iterate over it using a For...Next loop, and catch an IndexOutOfBounds exception if an item is removed from the collection while we're iterating. This may occasionally cause duplicates to appear in my snapshot, so it's not ideal either.

    Any other ideas? I'm only after a moment-in-time snapshot, so I'm not concerned about reflecting changes in my application - my main concerns are that the collection can be updated with minimal latency and that updates are never lost.
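    A minimal sketch of the copy approach, assuming C# with a List<T> guarded by a plain lock (Order, orders, and Display are hypothetical placeholders); the lock is held only for the O(n) reference copy, not for the display work:

        private readonly object collectionLock = new object();
        private List<Order> orders = new List<Order>();

        public void DisplaySnapshot()
        {
            List<Order> snapshot;
            lock (collectionLock)
            {
                // Copies references only; writers are blocked just for this
                snapshot = new List<Order>(orders);
            }
            foreach (Order o in snapshot)
            {
                Display(o); // runs outside the lock
            }
        }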

    Read the article

  • Mapping element via joining table with NHibernate

    - by NhibernateIdiot
    This is stuff I've done lots of times before, but my mind is just blanking at the moment; I will try to give a simple overview of my current situation. I currently have the 3 tables shown below:

        Office           -> id, name
        Person           -> id, name
        Office_Personnel -> office_id, person_id

    I then have a model for Person (id, name) and Office; however, the Office model contains personnel information:

        public class Office
        {
            int Id { get; set; }
            string Name { get; set; }
            ICollection<Person> Personnel { get; set; }
        }

    Mapping Person is easy, but now I'm a bit stumped as to why Office won't map properly. I chose to use a set when mapping the Personnel, as there shouldn't be any duplicates, but it doesn't seem to work as I would expect:

        <set name="Personnel" table="office_personnel" cascade="all">
            <key column="office_id" />
            <one-to-many class="Person"/>
        </set>

    Now one thing that strikes me as odd is that there is no indication of what Person should bind to (person_id). It keeps trying to find an office_id column within the Person table. I'm sure this is just some simple problem and I'm being an idiot, but any help would be great!

    On a side note, I was weighing up whether I should even bother having a middle-man table, as I could put an Office_Id column directly on the Person table, but I'm not 100% sure that in my real project the Person class won't be in multiple Offices further down the line...
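    A hedged sketch of the usual fix: a collection reached through a join table is declared with many-to-many, which is exactly where the person_id column gets named. A one-to-many inside a set assumes the foreign key lives on the Person table itself, which is why NHibernate goes hunting for office_id there.

        <set name="Personnel" table="office_personnel" cascade="all">
            <key column="office_id" />
            <!-- many-to-many names the far-side column of the join table -->
            <many-to-many class="Person" column="person_id" />
        </set>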

    Read the article

  • Replace duplicate values in array with new randomly generated values

    - by RussellDias
    I have the function below (created by Gordon in a previous question that went unanswered), which creates an array of n values whose sum is equal to $max:

        function randomDistinctPartition($n, $max)
        {
            $partition = array();
            for ($i = 1; $i < $n; $i++) {
                $maxSingleNumber = $max - $n;
                $partition[] = $number = rand(1, $maxSingleNumber);
                $max -= $number;
            }
            $partition[] = $max;
            return $partition;
        }

    For example, if I set $n = 4 and $max = 30, then I should get the following:

        array(5, 7, 10, 8);

    However, this function does not take duplicates and 0s into account. What I would like - and have been trying to accomplish - is to generate an array of unique numbers that add up to my predetermined variable $max: no duplicate numbers, and no zero or negative integers.
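    A naive hedged sketch of one way to get there: draw n - 1 distinct values, let the last element be whatever remains, and retry whenever the remainder is non-positive or collides. It assumes $max comfortably exceeds n(n + 1)/2 (the smallest possible sum of n distinct positive integers); for tight inputs the retry loop becomes slow or may not terminate.

        function randomDistinctPartition($n, $max)
        {
            do {
                $parts = array();
                while (count($parts) < $n - 1) {
                    $r = rand(1, (int) ($max / 2));
                    if (!in_array($r, $parts)) {
                        $parts[] = $r;      // keep only distinct draws
                    }
                }
                $last = $max - array_sum($parts);
                // retry if the remainder is non-positive or already used
            } while ($last < 1 || in_array($last, $parts));
            $parts[] = $last;
            return $parts;
        }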

    Read the article

  • PHP – Slow String Manipulation

    - by Simon Roberts
    I have some very large data files, and for business reasons I have to do extensive string manipulation (replacing characters and strings). This is unavoidable. The number of replacements runs into hundreds of thousands.

    It's taking longer than I would like. PHP is generally very quick, but I'm doing so many of these string manipulations that it's slowing down, and script execution is running into minutes. This is a pain because the script is run frequently.

    I've done some testing and found that str_replace is fastest, followed by strstr, followed by preg_replace. I've also tried individual str_replace statements as well as constructing arrays of patterns and replacements.

    I'm toying with the idea of isolating the string manipulation operation and writing it in a different language, but I don't want to invest time in that option only to find that the improvements are negligible. Plus, I only know Perl, PHP and COBOL, so for any other language I would have to learn it first.

    I'm wondering how other people have approached similar problems? I have searched, and I don't believe this duplicates any existing questions.
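    One avenue that is sometimes worth benchmarking before reaching for another language: strtr() with an array of pairs walks the subject string once, trying the longest keys first and never rescanning replaced text, whereas a long series of str_replace calls rescans the whole string per call. A hedged sketch ($replacements standing in for whatever map the business rules dictate):

        <?php
        // Single pass over $text; longest keys are matched first
        $replacements = array(
            'pattern one' => 'replacement one',   // hypothetical pairs
            'pattern two' => 'replacement two',
        );
        $text = strtr($text, $replacements);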

    Read the article

  • Parse a CSV file extracting some of the values but not all.

    - by Yallaa
    Good day,

    I have a local csv file with values that change daily, called DailyValues.csv. I need to extract the value field of category2 and category4, then combine, sort and remove duplicates (if any) from the extracted values, and finally save the result to a new local file, NewValues.txt. Here is an example of the DailyValues.csv file:

        category,date,value
        category1,2010-05-18,value01
        category1,2010-05-18,value02
        category1,2010-05-18,value03
        category1,2010-05-18,value04
        category1,2010-05-18,value05
        category1,2010-05-18,value06
        category1,2010-05-18,value07
        category2,2010-05-18,value08
        category2,2010-05-18,value09
        category2,2010-05-18,value10
        category2,2010-05-18,value11
        category2,2010-05-18,value12
        category2,2010-05-18,value13
        category2,2010-05-18,value14
        category2,2010-05-18,value30
        category3,2010-05-18,value16
        category3,2010-05-18,value17
        category3,2010-05-18,value18
        category3,2010-05-18,value19
        category3,2010-05-18,value20
        category3,2010-05-18,value21
        category3,2010-05-18,value22
        category3,2010-05-18,value23
        category3,2010-05-18,value24
        category4,2010-05-18,value25
        category4,2010-05-18,value26
        category4,2010-05-18,value10
        category4,2010-05-18,value28
        category4,2010-05-18,value11
        category4,2010-05-18,value30
        category2,2010-05-18,value31
        category2,2010-05-18,value32
        category2,2010-05-18,value33
        category2,2010-05-18,value34
        category2,2010-05-18,value35
        category2,2010-05-18,value07

    I've found some helpful parsing examples at http://www.php.net/manual/en/function.fgetcsv.php and managed to extract all the values of the value column, but I don't know how to restrict it to only the values of category2/4, then sort and remove duplicates. The solution needs to be in php, perl or shell script. Any help would be much appreciated. Thank you in advance.
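    A hedged PHP sketch along the fgetcsv lines the asker was already exploring (the header row needs no special handling, since 'category' matches neither wanted name):

        <?php
        $values = array();
        $fh = fopen('DailyValues.csv', 'r');
        while (($row = fgetcsv($fh)) !== false) {
            // keep only the value column of the two wanted categories
            if ($row[0] === 'category2' || $row[0] === 'category4') {
                $values[] = $row[2];
            }
        }
        fclose($fh);

        $values = array_unique($values);   // drops repeats such as value10
        sort($values);
        file_put_contents('NewValues.txt', implode("\n", $values) . "\n");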

    Read the article

  • Objective-c: Comparing two strings not working correctly.

    - by Mr. McPepperNuts
    - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section
        {
            id <NSFetchedResultsSectionInfo> sectionInfo =
                [[self.fetchedResultsController sections] objectAtIndex:section];
            if (tempArray != nil) {
                for (int i = 0; i < [tempArray count]; i++) {
                    if ([[sectionInfo indexTitle] isEqualToString:[tempArray objectAtIndex:i]])
                    // if ([sectionInfo indexTitle] == [tempArray objectAtIndex:i])
                    {
                        NSLog(@"found");
                        break;
                    }
                    else
                    {
                        NSLog(@"Not found %@", [sectionInfo indexTitle]);
                        [tempArray addObject:[sectionInfo indexTitle]];
                        NSLog(@"array %@", tempArray);
                        return [tempArray objectAtIndex:i];
                    }
                }
            }
        }

    The comparison of the strings in the if statement never resolves to true. The sample data has two instances of duplicates for testing purposes. The commented-out line is an alternate, although I believe incorrect, attempt to compare the section title with the string in tempArray. What am I doing incorrectly? Also, all data is in capital letters, so the comparison is not a matter of lower to upper case.
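    For what it's worth, the loop returns out of the else branch on the first mismatch, so the remaining elements are never examined before the title is re-added. A hedged sketch of the scan-then-decide shape (what to return in each case is left to the original intent):

        NSString *title = [sectionInfo indexTitle];
        BOOL found = NO;
        for (NSString *existing in tempArray) {
            if ([title isEqualToString:existing]) {
                found = YES;    // saw this header already
                break;
            }
        }
        if (!found) {
            [tempArray addObject:title];
        }
        return title;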

    Read the article

  • On counting pairs of words that differ by one letter

    - by Quintofron
    Let us consider n words, each of length k. Those words consist of letters over an alphabet (whose cardinality is n) with a defined order. The task is to derive an O(nk) algorithm to count the number of pairs of words that differ by one position (no matter which one exactly, as long as it's only a single position).

    For instance, in the following set of words (n = 5, k = 4):

        abcd, abdd, adcb, adcd, aecd

    there are 5 such pairs: (abcd, abdd), (abcd, adcd), (abcd, aecd), (adcb, adcd), (adcd, aecd).

    So far I've managed to find an algorithm that solves a slightly easier problem: counting the number of pairs of words that differ by one GIVEN position (the i-th). To do this, I swap the letter at the i-th position with the last letter within each word, perform a radix sort (ignoring the last position in each word - formerly the i-th position), linearly detect words whose letters at positions 1 to k-1 are the same, and finally count the number of occurrences of each letter at the last (originally i-th) position within each set of duplicates and calculate the desired pairs (the last part is simple).

    However, the algorithm above doesn't seem to be applicable to the main problem (under the O(nk) constraint) - at least not without some modifications. Any idea how to solve this?
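    A hedged sketch of the standard hash-bucket version of the same idea, assuming all n words are distinct: wildcard each position in turn, and words that collide under the same wildcarded key differ exactly at that position. As written it costs O(nk^2), since each wildcarded key is an O(k) string copy; pushing it down to O(nk) needs the radix-sort or rolling-hash machinery the asker already has. On the example set it prints 5.

        from collections import Counter

        def count_one_letter_pairs(words):
            k = len(words[0])
            total = 0
            for i in range(k):
                # words agreeing everywhere except position i share a key
                buckets = Counter(w[:i] + '*' + w[i + 1:] for w in words)
                total += sum(m * (m - 1) // 2 for m in buckets.values())
            return total

        print(count_one_letter_pairs(["abcd", "abdd", "adcb", "adcd", "aecd"]))  # 5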

    Read the article

  • MSSQL Server using multiple ID Numbers

    - by vincer
    I have a web application that creates printable forms, and these forms have a unique number on them. The problem is that I have 2 forms for which separate number ranges need to be maintained:

        Form1 - numbered 2000000-2999999
        Form2 - numbered 3000000-3999999

        dbo.test2 - my form information table
        Tsel      - my auto-increment table for the 3000000-series numbers
        Tadv      - my auto-increment table for the 2000000-series numbers

    What I have done is create 2 tables with just an auto-increment row (one for the 2000000-series numbers and one for the 3000000-series numbers). I then created a trigger that adds a record to the corresponding table, reads back the auto-increment number, and adds it to the table that stores the form information, including the just-created auto-increment number for the right series of forms.

    Although it does work, I'm concerned that the numbers will get messed up under load. I'm not sure @@IDENTITY will always return the right value when many people are using the system. (I cannot have duplicates, and I need to use the numbering scheme shown above.) Thanks for any help. The trigger:

        CREATE TRIGGER MAKEANID2 ON dbo.test2
        AFTER INSERT
        AS
        SET NOCOUNT ON
        declare @someid int
        declare @someid2 int
        declare @startfrom int
        declare @test1 varchar(10)

        select @someid = @@IDENTITY
        select @test1 = (select name1 from test2 where sysid = @someid)

        if @test1 = 'select'
        begin
            insert into Tsel default values
            select @someid2 = @@IDENTITY
        end

        if @test1 = 'adv'
        begin
            insert into Tadv default values
            select @someid2 = @@IDENTITY
        end

        update test2 set name2 = (@someid2) where sysid = @someid
        SET NOCOUNT OFF
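    One commonly raised hazard, offered here as a hedged suggestion: @@IDENTITY returns the last identity value generated on the connection in any scope (including other triggers), while SCOPE_IDENTITY() is confined to the current scope, so the reads above are usually written as:

        insert into Tsel default values
        select @someid2 = SCOPE_IDENTITY()

    A separate wrinkle worth checking: an AFTER INSERT trigger fires once per statement, not per row, so a multi-row insert into dbo.test2 would need to work from the inserted pseudo-table rather than a single @someid.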

    Read the article

  • C++ map performance - Linux (30 sec) vs Windows (30 mins) !!!

    - by sonofdelphi
    I need to process a list of files. The processing action should not be repeated for the same file. The code I am using for this is:

        using namespace std;

        vector<File*> gInputFileList;           // Can contain duplicates; File has member sFilename
        map<string, File*> gProcessedFileList;  // Using a map to avoid linear search costs

        void processFile(File* pFile)
        {
            File* pProcessedFile = gProcessedFileList[pFile->sFilename];
            if (pProcessedFile != NULL)
                return; // Already processed

            foo(pFile); // foo() is the action to do for each file
            gProcessedFileList[pFile->sFilename] = pFile;
        }

        void main()
        {
            size_t n = gInputFileList.size();
            // Using array syntax (iterator syntax also gives identical performance)
            for (size_t i = 0; i < n; i++) {
                processFile(gInputFileList[i]);
            }
        }

    The code works correctly, but my problem is that when the input size is 1000, it takes 30 minutes - HALF AN HOUR - on Windows/Visual Studio 2008 Express (both Debug and Release builds). For the same input, it takes only 40 seconds to run on Linux/gcc! What could be the problem? The action foo() takes only a very short time to execute when used separately. Should I be using something like vector::reserve for the map?
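    A small hedged observation about the lookup itself: operator[] default-constructs and stores a NULL entry for every filename it is asked about, so every miss also inserts; find() avoids that and the second lookup on insert. (This alone would not explain a 30-minute run, which smells more like checked-iterator/debug-heap overhead in the VS build, but it is the idiomatic form.)

        void processFile(File* pFile)
        {
            // Single lookup, and no accidental insertion of empty entries
            map<string, File*>::iterator it = gProcessedFileList.find(pFile->sFilename);
            if (it != gProcessedFileList.end())
                return; // Already processed

            foo(pFile);
            gProcessedFileList.insert(make_pair(pFile->sFilename, pFile));
        }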

    Read the article

  • When to use CTEs to encapsulate sub-results, and when to let the RDBMS worry about massive joins.

    - by IanC
    This is a SQL theory question. I can provide an example, but I don't think it's needed to make my point. Anyone experienced with SQL will immediately know what I'm talking about.

    Usually we use joins to minimize the number of records by matching the left and right rows. However, under certain conditions, joining tables causes a multiplication of results, where the output contains every combination of the left and right records. I have a database which has 3 or 4 such joins. This turns what would be a few records into a multitude. My concern is that the tables will be large in production, so the number of these joined rows will be immense. Further, heavy math is performed on each row, and the idea of performing math on duplicate rows is enough to make anyone shudder.

    I have two questions. The first: is this something I should care about, or will SQL Server intelligently realize these rows are all duplicates and optimize all processing accordingly? The second: is there any advantage to grouping each part of the query so as to get only the distinct values going into the next part of the query, using something like:

        WITH t1 AS (
            SELECT DISTINCT ...  -- [or GROUP BY]
        ), t2 AS (
            SELECT DISTINCT ...
        ), t3 AS (
            SELECT DISTINCT ...
        )
        SELECT ...

    I have often seen DISTINCT applied to subqueries, and there is obviously a reason for doing this. However, I'm talking about something a little different and perhaps more subtle and tricky.

    Read the article

  • sql query selecting one name no matter how many rows it was mentioned in

    - by Baruch
    Basically what I'm trying to do is get the information from column x no matter how many times it is mentioned. That means if I have this kind of table:

        x     | y     | z
        ------+-------+---------
        hello | one   | bye
        hello | two   | goodbye
        hi    | three | see you

    I'm trying to create a query that gets all of the names mentioned in the x column, without duplicates, and puts them into a select list. My goal is a select list with TWO, not THREE, options: hello and hi. This is what I have so far, which isn't working; I hope you guys know the answer:

        function getList() {
            $options = "<select id='names' style='margin-right:40px;'>";
            $c_id = $_SESSION['id'];
            $sql = "SELECT * FROM names";
            $result = mysql_query($sql);
            $options .= "<option value='blank'>-- Select something --</option>";
            while ($row = mysql_fetch_array($result)) {
                $name = $row["x"];
                $options .= "<option value='$name'>$name</option>";
            }
            $options .= "</SELECT>";
            return "$options";
        }

    Sorry for the confusion; I edited my source.
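    Presumably the fix is as small as it looks, hedged only because the real table and column names may differ - ask the database for the distinct names directly:

        $sql = "SELECT DISTINCT x FROM names ORDER BY x";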

    Read the article

  • restrict duplicate rows in specific columns in mysql

    - by JPro
    I have a query like this:

        select testset,
               count(distinct results.TestCase) as runs,
               Sum(case when Verdict = "PASS" then 1 else 0 end) as pass,
               Sum(case when Verdict <> "PASS" then 1 else 0 end) as fails,
               Sum(case when latest_issue <> "NULL" then 1 else 0 end) as issues,
               Sum(case when latest_issue <> "NULL" and issue_type = "TC" then 1 else 0 end) as TC_issues
        from results
        join testcases on results.TestCase = testcases.TestCase
        where platform = "T1_PLATFORM"
          and testcases.CaseType = "M2"
          and testcases.dummy <> "flag_1"
        group by testset
        order by results.TestCase

    The result set I get is:

        testset  runs  pass  fails  issues  TC_issues
        T1         66   125     73      38         33
        T2         18    19     16      16         15
        T3         57    58     55      55         29
        T4         52    43     12       0          0
        T5        193   223    265     130         22
        T6         23    12     11       0          0

    My problem is that this results table has test cases running multiple times. I am able to restrict the runs using distinct TestCase, but for the pass and fail counts, since I am using case expressions, I am unable to eliminate the duplicates. Is there any way to achieve what I want? Any help please? Thanks.
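    One hedged way to attack it, assuming a duplicate means the same TestCase repeating within a testset and that any PASS among the repeats should count as a pass: collapse to one row per (testset, TestCase) in a derived table, then aggregate the collapsed rows.

        select testset,
               count(*) as runs,
               sum(passed) as pass,
               sum(1 - passed) as fails
        from (
            select testset,
                   results.TestCase,
                   max(case when Verdict = "PASS" then 1 else 0 end) as passed
            from results
            join testcases on results.TestCase = testcases.TestCase
            where platform = "T1_PLATFORM"
              and testcases.CaseType = "M2"
              and testcases.dummy <> "flag_1"
            group by testset, results.TestCase
        ) as one_row_per_case
        group by testset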

    Read the article

  • Optimize Duplicate Detection

    - by Dave Jarvis
    Background

    This is an optimization problem. Oracle Forms XML files have elements such as:

        <Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />

    where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:

        sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
        sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql

    I wrote a script to generate a list of exact duplicates using a brute-force approach.

    Problem

    There are 37,497 files to compare against each other; it takes 8 minutes to compare one file against all the others. Logically, if A = B and A = C, then there is no need to check if B = C. So the problem is: how do you eliminate the redundant comparisons? The script will complete in approximately 208 days.

    Script Source Code

    The comparison script is as follows:

        #!/bin/bash
        echo Loading directory ...
        for i in $(find sql/ -type f -name \*.sql); do
            echo Comparing $i ...
            for j in $(find sql/ -type f -name \*.sql); do
                if [ "$i" = "$j" ]; then continue; fi
                # Case insensitive compare, ignore spaces
                diff -IEbwBaq $i $j > /dev/null
                # 0 = no difference (i.e., duplicate code)
                if [ $? = 0 ]; then
                    echo $i :: $j >> clones.txt
                fi
            done
        done

    Question

    How would you optimize the script so that checking for cloned code is a few orders of magnitude faster?

    System Constraints

    Using a quad-core CPU with an SSD; trying to avoid using cloud services if possible. The system is a Windows-based machine with Cygwin installed - algorithms or solutions in other languages are welcome. Thank you!
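    A hedged sketch of the standard speed-up: canonicalize each file once (here lowercased with all whitespace stripped, roughly approximating the diff flags), hash the result, and only pair up files whose hashes collide - O(n) hashing instead of O(n^2) diffs. Listing every bucket member against one representative is enough, by the asker's own "A = B and A = C implies B = C" argument.

        #!/bin/bash
        # Hash a normalized copy of each file; equal hashes = candidate clones
        find sql/ -type f -name '*.sql' | while read -r f; do
            sum=$(tr -d '[:space:]' < "$f" | tr '[:upper:]' '[:lower:]' | md5sum | cut -d' ' -f1)
            echo "$sum $f"
        done | sort | awk '
            $1 == prev { print first " :: " $2 >> "clones.txt" }
            $1 != prev { prev = $1; first = $2 }
        '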

    Read the article

  • C/C++ Bit Array or Bit Vector

    - by MovieYoda
    Hi, I am learning C/C++ programming and have encountered the usage of 'bit arrays' or 'bit vectors'. I am not able to understand their purpose. Here are my doubts:

    - Are they used as boolean flags?
    - Can one use int arrays instead? (More memory, of course, but...)
    - What's this concept of bit-masking? If bit-masking consists of simple bit operations to get an appropriate flag, how does one program with them? Isn't it difficult to work out these operations in one's head to see what the flag would be, as opposed to decimal numbers?

    I am looking for applications, so that I can understand better. For example:

        Q. You are given a file containing integers in the range 1 to 1 million. There are some duplicates, and hence some numbers are missing. Find the fastest way of finding the missing numbers.

    For the above question, I have read solutions telling me to use bit arrays. How would one store each integer in a bit?
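    A hedged C sketch of the idea for that example: one bit per possible value, so the whole presence table fits in about 122 KB instead of roughly 4 MB of ints.

        #include <stdio.h>
        #include <string.h>

        #define MAXVAL 1000000

        static unsigned char bits[MAXVAL / 8 + 1];  /* one bit per value 1..MAXVAL */

        static void set_bit(long n)  { bits[n >> 3] |= (unsigned char)(1u << (n & 7)); }
        static int  test_bit(long n) { return (bits[n >> 3] >> (n & 7)) & 1; }

        int main(void)
        {
            long v;
            memset(bits, 0, sizeof bits);
            while (scanf("%ld", &v) == 1)     /* mark every value seen in the input */
                if (v >= 1 && v <= MAXVAL)
                    set_bit(v);
            for (v = 1; v <= MAXVAL; v++)     /* unmarked values are the missing ones */
                if (!test_bit(v))
                    printf("%ld\n", v);
            return 0;
        }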

    Read the article

  • Generating all unique crossword puzzle grids

    - by heydenberk
    I want to generate all unique crossword puzzle grids of a certain grid size (4x4 is a good size). All possible puzzles, including non-unique ones, are represented by a binary string with the length of the grid area (16 in the case of 4x4), so all possible 4x4 puzzles are represented by the binary forms of all numbers in the range 0 to 2^16 - 1.

    Generating these is easy, but I'm curious if anyone has a good solution for how to programmatically eliminate invalid and duplicate cases. For example, all puzzles with a single column or single row are functionally identical, hence eliminating 7 of those 8 cases. Also, according to crossword puzzle conventions, all squares must be contiguous. I've had success removing all duplicate structures, but my solution took several minutes to execute and probably was not ideal.

    I'm at something of a loss for how to detect contiguity, so if anyone has ideas on this it'd be much appreciated. I'd prefer solutions in Python, but write in whichever language you prefer. If anyone wants, I can post my Python code for generating all grids and removing duplicates, slow as it may be.
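    A hedged Python sketch of the standard contiguity test, assuming bit i of the integer encodes cell (i // size, i % size) and that "contiguous" means the filled squares form a single 4-connected region: flood-fill from any filled square and check that everything was reached.

        def is_contiguous(grid_bits, size=4):
            cells = {(r, c) for r in range(size) for c in range(size)
                     if grid_bits & (1 << (r * size + c))}
            if not cells:
                return False                 # empty grid: treat as invalid
            stack = [next(iter(cells))]      # flood fill from an arbitrary square
            seen = set()
            while stack:
                r, c = stack.pop()
                if (r, c) in seen:
                    continue
                seen.add((r, c))
                for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if nbr in cells and nbr not in seen:
                        stack.append(nbr)
            return seen == cells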

    Read the article

  • MYSQL variables - SET @var

    - by Lizard
    I am attempting to create a MySQL snippet that will analyse a table and remove duplicate entries (duplicates are based on two fields, not the entire record). I have the following code, which works when I hard-code the variables into the queries, but when I take them out and use variables I get MySQL errors. Below is the script:

        SET @tblname = 'mytable';
        SET @fieldname = 'myfield';
        SET @concat1 = 'checkfield1';
        SET @concat2 = 'checkfield2';

        ALTER TABLE @tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL;

        UPDATE @tblname SET `tmpcheck` = CONCAT(@concat1, '-', @concat2);

        CREATE TEMPORARY TABLE `tmp_table` (
            `tmpfield` VARCHAR( 100 ) NOT NULL
        ) ENGINE = MYISAM;

        INSERT INTO `tmp_table` (`tmpfield`)
        SELECT @fieldname FROM @tblname
        GROUP BY `tmpcheck` HAVING ( COUNT(`tmpcheck`) > 1 );

        DELETE FROM @tblname WHERE @fieldname IN (SELECT `tmpfield` FROM `tmp_table`);

        ALTER TABLE @tblname DROP `tmpcheck`;

    I am getting the following error:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds
        to your MySQL server version for the right syntax to use near
        '@tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL' at line 1

    Is this because I can't use a variable for a table name? What else could be wrong, or how would I get around this issue? Thanks in advance.
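    That is indeed the cause, as far as I know: user variables can supply values, but not identifiers such as table or column names. The usual hedged workaround is to assemble the statement text and run it as a prepared statement:

        SET @s = CONCAT('ALTER TABLE `', @tblname, '` ADD `tmpcheck` VARCHAR(255) NOT NULL');
        PREPARE stmt FROM @s;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
        -- repeat the CONCAT/PREPARE/EXECUTE pattern for each statement naming @tblname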

    Read the article

  • MySQL Multiple Table Join

    - by hitman001
    I have 3 tables that I'm trying to join, and I want distinct results.

        CREATE TABLE `car` (
            `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `name` varchar(255) NOT NULL DEFAULT '',
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB;

        mysql> select * from car;
        +----+-------+
        | id | name  |
        +----+-------+
        |  1 | acura |
        +----+-------+

        CREATE TABLE `tires` (
            `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `tire_desc` varchar(255) DEFAULT NULL,
            `car_id` int(10) unsigned NOT NULL,
            PRIMARY KEY (`id`),
            KEY `new_fk_constraint` (`car_id`),
            CONSTRAINT `new_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=InnoDB;

        mysql> select * from tires;
        +----+-------------+--------+
        | id | tire_desc   | car_id |
        +----+-------------+--------+
        |  1 | front_right |      1 |
        |  2 | front_left  |      1 |
        +----+-------------+--------+

        CREATE TABLE `lights` (
            `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `lights_desc` varchar(255) NOT NULL,
            `car_id` int(10) unsigned NOT NULL,
            PRIMARY KEY (`id`),
            KEY `new1_fk_constraint` (`car_id`),
            CONSTRAINT `new1_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=InnoDB;

        mysql> select * from lights;
        +----+-------------+--------+
        | id | lights_desc | car_id |
        +----+-------------+--------+
        |  1 | right_light |      1 |
        |  2 | left_light  |      1 |
        +----+-------------+--------+

    Here is my query:

        mysql> SELECT name, group_concat(tire_desc), group_concat(lights_desc)
               FROM car
               left join tires on car.id = tires.car_id
               left join lights on car.id = car_id
               group by car.id;
        +-------+-----------------------------------------------+-----------------------------------------------+
        | name  | group_concat(tire_desc)                       | group_concat(lights_desc)                     |
        +-------+-----------------------------------------------+-----------------------------------------------+
        | acura | front_right,front_right,front_left,front_left | right_light,left_light,right_light,left_light |
        +-------+-----------------------------------------------+-----------------------------------------------+

    I get duplicate entries, and this is what I would like to get:

        +-------+-------------------------+---------------------------+
        | name  | group_concat(tire_desc) | group_concat(lights_desc) |
        +-------+-------------------------+---------------------------+
        | acura | front_right,front_left  | right_light,left_light    |
        +-------+-------------------------+---------------------------+

    I cannot use distinct in group_concat because I might have legitimate duplicates which I would like to keep. Is there any way to do this query using joins and not using inner selects like the statement below?

        SELECT name,
            (select group_concat(tire_desc) from tires where car.id = tires.car_id),
            (select group_concat(lights_desc) from lights where car.id = lights.car_id)
        FROM car

    Also, if I do use inner selects, will there be any performance issues compared to joins?
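    A hedged alternative that stays a join: aggregate each child table down to one row per car in a derived table first, so the two one-to-many fans never multiply each other, while legitimate duplicates within a single child table survive.

        SELECT name, t.tires, l.lights
        FROM car
        LEFT JOIN (SELECT car_id, group_concat(tire_desc) AS tires
                   FROM tires GROUP BY car_id) t ON t.car_id = car.id
        LEFT JOIN (SELECT car_id, group_concat(lights_desc) AS lights
                   FROM lights GROUP BY car_id) l ON l.car_id = car.id;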

    Read the article

  • How can I get distinct values using Linq to NHibernate?

    - by Chris
    I've been trying to get distinct values using Linq to NHibernate, and I'm failing miserably. I've tried:

        var query = from requesters in _session.Linq<Requesters>()
                    orderby requesters.Requestor ascending
                    select requesters;
        return query.Distinct();

    as well as:

        var query = from requesters in _session.Linq<Requesters>()
                    orderby requesters.Requestor ascending
                    select requesters;
        return query.Distinct(new RequestorComparer());

    where RequestorComparer is:

        public class RequestorComparer : IEqualityComparer<Requesters>
        {
            #region IEqualityComparer<Requesters> Members

            bool IEqualityComparer<Requesters>.Equals(Requesters x, Requesters y)
            {
                //return x.RequestorId.Value.Equals(y.RequestorId.Value);
                return ((x.RequestorId == y.RequestorId) && (x.Requestor == y.Requestor));
            }

            int IEqualityComparer<Requesters>.GetHashCode(Requesters obj)
            {
                return obj.RequestorId.Value.GetHashCode();
            }

            #endregion
        }

    No matter how I structure the syntax, it never seems to hit .Distinct(). Without .Distinct() there are multiple duplicates by default in the table I'm querying - on the order of 195 total records when there should only be 22 distinct values returned. I'm not sure what I'm doing wrong but would greatly appreciate any assistance that can be provided. Thanks.

    Read the article

  • Why do I get 'Connection refused - connect(2)' for some models?

    - by Will
    I have a Rails application that had been running for the past 90 days and suddenly stopped working. Debugging the problem, I found that I can read from the DB but not write to it - at least for certain models. There is one model that I can save, whereas all others return Connection refused - connect(2) when I attempt to save them. They all used to work fine last month. I have no idea how to determine what the problem may be.

    Unfortunately, I do not have access to the actual server remotely right now, so I am limited in my debugging ability. I was able to get some non-tech people to run simple commands, though, which may help identify the problem. I will also be getting access tomorrow at some point.

    1. Check from the console:

        ./script/console
        >> a = Post.last.clone
        => #<Post id: nil, title: "test"...
        >> a.ex_id = 7
        >> a.save
        Connection refused - connect(2) ...

        >> b = Story.last.clone
        => #<Story id: nil, title: "test"...
        >> b.ex_id = 7
        >> b.save
        => true

    I am not sure why this works for Story and not Post; it is consistent over many tests.

    2. Check from mysql:

        ./script/dbconsole -p
        mysql> INSERT INTO Posts (`title`,`body`, `ex_id`) SELECT `title`, `body`, 7 FROM Posts WHERE ID = 1;
        Query OK, 1 row affected (0.01 sec)
        Records: 1  Duplicates: 0  Warnings: 0

    As you can see, I am able to write to the table with the same credentials that Rails uses. Does anyone know why I get connection refused in the console?

    Read the article

  • MYSQL how to sum rows with same key, then delete the duplicate rows

    - by Bhante-S
    What I have:

        key | data
        ----+-----
          1 |  22
          1 |   5
          2 |   6
          3 |   1
          3 |  -3

    What I want:

        key | data
        ----+-----
          1 |  27
          2 |   6
          3 |  -2

    I don't mind doing this with two or more queries, especially if they are simple - that makes for easier maintenance. Also, the tables are fairly small (under 2,000 records). The 'key' field is indexed and allows duplicates. Many thanks!
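    A hedged two-step sketch, with the table name mytable assumed and backticks around key because it is a reserved word in MySQL: build the summed rows in a temporary table, then swap them in.

        CREATE TEMPORARY TABLE summed AS
            SELECT `key`, SUM(data) AS data FROM mytable GROUP BY `key`;

        DELETE FROM mytable;
        INSERT INTO mytable (`key`, data) SELECT `key`, data FROM summed;
        DROP TEMPORARY TABLE summed;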

    Read the article

  • getting duplicate array output - java

    - by dowln
    Hello, could someone kindly help me out here? Thanks in advance. My code below outputs duplicate strings. I don't want to use Sets or ArrayList, and I am using java.util.Random. I am trying to write code that checks whether a string has already been randomly output and, if it has, doesn't display it. Where am I going wrong, and how do I fix this?

        public class Worldcountries
        {
            private static Random nums = new Random();
            private static String[] countries = { "America", "Candada", "Chile", "Argentina" };

            public static int Dice()
            {
                return (generator.nums.nextInt(6) + 1);
            }

            public String randomCounties()
            {
                String aTemp = " ";
                int numOfTimes = Dice();
                int dup = 0;

                for (int i = 0; i < numOfTimes; i++)
                {
                    // I think it's in the if statement where I am going wrong.
                    if (!countries[i].equals(countries[i]))
                    {
                        i = i + 1;
                    }
                    else
                    {
                        dup--;
                    }
                    // and maybe here
                    aTemp = aTemp + countries[nums.nextInt(countries.length)];
                    aTemp = aTemp + ",";
                }
                return aTemp;
            }
        }

    So the output I get (randomly) is "America, America, Chile" when it should be "America, Chile".
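    A hedged sketch of one fix within the stated constraints (no Sets, no ArrayList): remember which indices have been used in a boolean array, and test each pick against that - countries[i].equals(countries[i]) compares a string with itself and so tells you nothing.

        import java.util.Random;

        public class Worldcountries
        {
            private static Random nums = new Random();
            private static String[] countries = { "America", "Candada", "Chile", "Argentina" };

            public static int dice()
            {
                return nums.nextInt(6) + 1;
            }

            public String randomCountries()
            {
                boolean[] used = new boolean[countries.length]; // tracks picks without a Set
                StringBuilder result = new StringBuilder();
                int numOfTimes = dice();

                for (int i = 0; i < numOfTimes; i++)
                {
                    int pick = nums.nextInt(countries.length);
                    if (!used[pick])            // skip anything already chosen
                    {
                        used[pick] = true;
                        if (result.length() > 0)
                            result.append(", ");
                        result.append(countries[pick]);
                    }
                }
                return result.toString();
            }
        }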

    Read the article
