Search Results

Search found 886 results on 36 pages for 'duplicates'.

Page 24 of 36

  • How to select only the first rows for each unique value of a column

    - by nuit9
    Let's say I have a table of customer addresses: CName | AddressLine ------------------------------- John Smith | 123 Nowheresville Jane Doe | 456 Evergreen Terrace John Smith | 999 Somewhereelse Joe Bloggs | 1 Second Ave In the table, one customer like John Smith can have multiple addresses. I need the select query for this table to return only the first row found where there are duplicates in 'CName'. For this table it should return all rows except the 3rd (or the 1st - either of those two addresses is okay, but only one can be returned). Is there a keyword I can add to the SELECT query to filter based on whether the server has already seen the column value before?

    Read the article

  • Need an alternative to two left joins.

    - by Scarface
    Hey guys, quick question: I always use left join, but when I left join twice I always get funny results, usually duplicates. I am currently working on a query that left joins twice to retrieve the necessary information, but I was wondering whether I could build in another select statement so that I do not need two left joins or two queries, or whether there is a better way. For example, if I could select the topic.creator in table.topic first AS something, then I could select that variable in users and left join table.scrusersonline. Thanks in advance for any advice. SELECT * FROM scrusersonline LEFT JOIN users ON users.id = scrusersonline.id LEFT JOIN topic ON users.username = topic.creator WHERE scrusersonline.topic_id = '$topic_id' The whole point of this query is to check whether the topic.creator is online by retrieving his name from table.topic, matching his id in table.users, then checking if he is in table.scrusersonline. It produces duplicate entries, unfortunately, and is thus inaccurate in my mind.

    Read the article

  • How to sort TBB concurrent_vector or concurrent_queue?

    - by Jackie
    I have a solver in which I need to keep a set of objects of a self-defined data type in a concurrent_vector or queue. It has to be concurrent because the objects come from different threads. With this concurrent container, I hope to sort these objects, eliminate duplicates and send them back when other threads need them. However, I know TBB offers concurrent_vector and concurrent_queue, which can be read and written concurrently from different threads. But how to sort the objects inside such a container? Does anyone know how to do that? Thanks.

    Read the article

  • sql insert query needed

    - by masfenix
    Hey guys, so I have two tables; they are pictured below. I have a master table "all_reports" and a user table "user list". The master table may have users that do not exist in the user list; I need to add them to the user list. The master table may have duplicates in it (check picture). The master list does not contain all the information that the user list requires (no manager, no HR status, no department... again, check picture).

    Read the article

  • Finding matches between multiple JavaScript Arrays

    - by Chris Barr
    I have multiple arrays with string values and I want to compare them and only keep the matching results that are identical between ALL of them. Given this example code: var arr1 = ['apple', 'orange', 'banana', 'pear', 'fish', 'pancake', 'taco', 'pizza']; var arr2 = ['taco', 'fish', 'apple', 'pizza']; var arr3 = ['banana', 'pizza', 'fish', 'apple']; I would like to produce the following array that contains matches from all given arrays: ['apple', 'fish', 'pizza'] I know I can combine all the arrays with var newArr = arr1.concat(arr2, arr3); but that just gives me an array with everything, plus the duplicates. Can this be done easily without needing the overhead of libraries such as underscore.js? (Great, and now I'm hungry too!) EDIT: I suppose I should mention that there could be an unknown number of arrays; I was just using 3 as an example.
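
    A sketch of the reduce-by-intersection idea the question is circling around (shown here in Java for illustration; the same loop ports directly to plain JavaScript with filter/includes, no library needed): start from the first array as a set, then keep only the values that every later array also contains. The class and method names are invented for the example.

        import java.util.Arrays;
        import java.util.LinkedHashSet;
        import java.util.List;
        import java.util.Set;

        public class ArrayIntersection {

            // Intersect any number of arrays: seed with the first, then retain
            // only what each subsequent array also contains.
            static Set<String> intersectAll(List<String[]> arrays) {
                Set<String> result = new LinkedHashSet<>(Arrays.asList(arrays.get(0)));
                for (int i = 1; i < arrays.size(); i++) {
                    result.retainAll(Arrays.asList(arrays.get(i)));   // drop anything missing from this array
                }
                return result;
            }

            public static void main(String[] args) {
                String[] arr1 = {"apple", "orange", "banana", "pear", "fish", "pancake", "taco", "pizza"};
                String[] arr2 = {"taco", "fish", "apple", "pizza"};
                String[] arr3 = {"banana", "pizza", "fish", "apple"};
                System.out.println(intersectAll(Arrays.asList(arr1, arr2, arr3)));
                // prints [apple, fish, pizza] (order follows the first array)
            }
        }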

    Read the article

  • When is Java faster than C++ (or when is JIT faster than precompiled)?

    - by kostja
    I have heard that under certain circumstances, Java programs, or rather parts of Java programs, are able to be executed faster than the "same" code in C++ (or other precompiled code) due to JIT optimizations. This is due to the compiler being able to determine the scope of some variables, avoid some conditionals and pull similar tricks at runtime. Could you give an (or better, some) example where this applies? And maybe outline the exact conditions under which the compiler is able to optimize the bytecode beyond what is possible with precompiled code? NOTE: This question is not about comparing Java to C++. It's about the possibilities of JIT compiling. Please no flaming. I am also not aware of any duplicates; please point them out if you are.
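
    A sketch of one commonly cited pattern (an illustration only, not taken from the linked answers): a virtual call site that in practice only ever sees one concrete type. A JIT can observe at runtime that the call site is monomorphic, devirtualize and inline area(), and then optimize the loop body, whereas an ahead-of-time compiler translating totalArea() in isolation has to keep the virtual dispatch. The Shape and Circle names below are invented for the example.

        // Illustration: a call site the JIT can devirtualize. At runtime only
        // Circle instances reach area(), so the JIT can inline the call and
        // optimize the loop; an AOT compiler compiling totalArea() separately
        // must keep the indirect (virtual) call.
        interface Shape {
            double area();
        }

        final class Circle implements Shape {
            private final double r;
            Circle(double r) { this.r = r; }
            public double area() { return Math.PI * r * r; }
        }

        public class JitExample {
            static double totalArea(Shape[] shapes) {
                double sum = 0;
                for (Shape s : shapes) {
                    sum += s.area();   // effectively monomorphic -> candidate for inlining
                }
                return sum;
            }

            public static void main(String[] args) {
                Shape[] shapes = new Shape[1_000_000];
                for (int i = 0; i < shapes.length; i++) {
                    shapes[i] = new Circle(i % 10);
                }
                System.out.println(totalArea(shapes));
            }
        }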

    Read the article

  • Queue-like data structure with fast search and insertion

    - by Max
    I need a data structure with the following properties: it contains integer numbers, no duplicates; after it reaches the maximal size the first element is removed. So if the capacity is 3, then this is how it would look when putting sequential numbers into it: {}, {1}, {1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4, 5} etc. Only two operations are needed: inserting a number into this container (INSERT) and checking if the number is already in the container (EXISTS). The number of EXISTS operations is expected to be approximately 2 * the number of INSERT operations. I need these operations to be as fast as possible. What would be the fastest data structure or combination of data structures for this scenario?
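
    A minimal sketch of one possible combination (assuming an INSERT of an already-present value is simply ignored): an ArrayDeque records arrival order so the oldest element can be evicted, while a HashSet answers EXISTS in expected O(1). With capacity 3 and inserts 1 through 5 it ends up holding {3, 4, 5}, matching the sequence above.

        import java.util.ArrayDeque;
        import java.util.HashSet;

        public class BoundedDedupQueue {
            private final int capacity;
            private final ArrayDeque<Integer> order = new ArrayDeque<>();   // arrival order, for eviction
            private final HashSet<Integer> members = new HashSet<>();       // O(1) expected membership test

            public BoundedDedupQueue(int capacity) { this.capacity = capacity; }

            public void insert(int value) {
                if (!members.add(value)) {
                    return;                                  // already present: nothing to do
                }
                order.addLast(value);
                if (order.size() > capacity) {
                    members.remove(order.removeFirst());     // drop the oldest element
                }
            }

            public boolean exists(int value) {
                return members.contains(value);
            }

            public static void main(String[] args) {
                BoundedDedupQueue q = new BoundedDedupQueue(3);
                for (int i = 1; i <= 5; i++) q.insert(i);
                System.out.println(q.exists(2) + " " + q.exists(5));   // false true
            }
        }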

    Read the article

  • Java HashSet is allowing dupes; problem with comparable?

    - by IVR Avenger
    Hi, all. I've got a class, "Accumulator", that implements the Comparable compareTo method, and I'm trying to put these objects into a HashSet. When I add() to the HashSet, I don't see any activity in my compareTo method in the debugger, regardless of where I set my breakpoints. Additionally, when I'm done with the add()s, I see several duplicates within the Set. What am I screwing up here; why is it not comparing, and therefore allowing the dupes? Thanks, IVR Avenger
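
    The short answer is that HashSet never calls compareTo: it deduplicates with hashCode() and equals() (Comparable only matters for sorted collections such as TreeSet). A hedged sketch of what Accumulator would need, with a made-up "value" field standing in for whatever fields actually define identity:

        import java.util.HashSet;
        import java.util.Objects;
        import java.util.Set;

        // HashSet deduplicates with hashCode()/equals(); compareTo() is never consulted.
        public class Accumulator implements Comparable<Accumulator> {
            private final int value;   // hypothetical identity field

            public Accumulator(int value) { this.value = value; }

            @Override public int compareTo(Accumulator other) {
                return Integer.compare(this.value, other.value);
            }

            @Override public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof Accumulator)) return false;
                return this.value == ((Accumulator) o).value;
            }

            @Override public int hashCode() {
                return Objects.hash(value);
            }

            public static void main(String[] args) {
                Set<Accumulator> set = new HashSet<>();
                set.add(new Accumulator(42));
                set.add(new Accumulator(42));
                System.out.println(set.size());   // 1 once equals/hashCode are in place
            }
        }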

    Read the article

  • Removing duplicate SQL records to permit a unique key

    - by j pimmel
    I have a table ('sales') in a MySQL DB which rightfully should have had a unique constraint enforced to prevent duplicates. First removing the dupes and then setting the constraint is proving a bit tricky. Table structure (simplified): 'id (unique, autoinc)' product_id The goal is to enforce uniqueness for product_id. The de-duping policy I want to apply is to remove all duplicate records except the most recently created, e.g. the one with the highest id. Or, to put it another way, I would like to delete duplicate records, excluding the ids matched by the following query: select id from sales s inner join (select product_id, max(id) as maxId from sales group by product_id having count(product_id) > 1) groupedByProdId on s.product_id and s.id = groupedByProdId.maxId I've struggled with this on two fronts - writing the query to select the correct records to delete, and also the constraint in MySQL where a subselect FROM clause of a DELETE cannot reference the same table from which data is being removed.

    Read the article

  • Longest substring that appears n times

    - by xcoders
    For a string of length L, I want to find the longest substring that appears n (n<L) or more times in this string. For example, the longest substring that occurs 2 or more times in "BANANA" is "ANA", once starting from index 1, and once again starting from index 3. The substrings are allowed to overlap. In the string "FFFFFF", the longest string that appears 3 or more times is "FFFF". The brute force algorithm for n=2 would be selecting all pairs of indexes in the string, then running along until the characters are different. The running-along part takes O(L) and the number of pairs is O(L^2) (duplicates are not allowed but I'm ignoring that), so the complexity of this algorithm for n=2 would be O(L^3). For greater values of n, this grows exponentially. Is there a more efficient algorithm for this problem?
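
    One cheaper approach, sketched below (suffix arrays or suffix automata can do better still): binary-search the answer length, which works because if some substring of length k occurs n or more times, then so does some substring of length k-1; for each candidate length, count occurrences with a hash map. With plain substring keys this is roughly O(L^2 log L); rolling hashes would tighten it further.

        import java.util.HashMap;
        import java.util.Map;

        public class RepeatedSubstring {

            // Binary search on the answer length; findOfLength asks whether some
            // substring of that length occurs at least n times (overlaps allowed).
            static String longestRepeated(String s, int n) {
                int lo = 1, hi = s.length() - 1;
                String best = "";
                while (lo <= hi) {
                    int mid = (lo + hi) / 2;
                    String candidate = findOfLength(s, mid, n);
                    if (candidate != null) {
                        best = candidate;
                        lo = mid + 1;      // a longer one might exist
                    } else {
                        hi = mid - 1;      // too long, try shorter
                    }
                }
                return best;
            }

            // Returns some substring of length len occurring >= n times, or null.
            static String findOfLength(String s, int len, int n) {
                Map<String, Integer> counts = new HashMap<>();
                for (int i = 0; i + len <= s.length(); i++) {
                    String sub = s.substring(i, i + len);
                    if (counts.merge(sub, 1, Integer::sum) >= n) {
                        return sub;
                    }
                }
                return null;
            }

            public static void main(String[] args) {
                System.out.println(longestRepeated("BANANA", 2));   // ANA
                System.out.println(longestRepeated("FFFFFF", 3));   // FFFF
            }
        }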

    Read the article

  • How do you store sets in Cassandra?

    - by Ben W
    I'd like to convert this JSON to a data model in Cassandra, where each of the arrays is a set with no duplicates: var data = { "data1": { "100": [1, 2, 3], "200": [3, 4] }, "data2": { "k1": [1], "k2": [4, 5] } } I'd like to query like this: data["data1"]["100"] to retrieve the sets. Anyone know how you might model this in Cassandra? (The only thing I came up with was columns whose name was a set value and the value of the column was an empty string, but that felt wrong.) It's not OK to serialize the sets as JSON or some other string, which would make this much easier. Also, I should note that it's OK to split data1 and data2 into separate ColumnFamilies; it's not necessary that they're keys in the same one.

    Read the article

  • email tracking image duplicate requests

    - by DEH
    I am embedding tracking images within emails that are being sent from a custom-built opt-in CRM system. The image src is an encoded .gif, such as src="12_34_675.gif". The image is served by an ASP.NET HTTP handler that decodes the src encoding and serves a transparent image. Everything works fine, but some email clients request the image multiple times, creating duplicate entries. Some clients make three calls all within one second, and some seem to make tens of calls over a day or so. Mostly, email clients make single calls, but these few duplicates are very perplexing. I know I can code around them, but I'd really like to understand what's going on. I've checked the IIS log files, which show that the duplicate requests are coming from the client machines. I can't think what might be causing these duplicate HTTP requests. Help!

    Read the article

  • Creating a unique key based on file content in python

    - by Cawas
    I have many, many files to be uploaded to the server, and I just want a way to avoid duplicates. Thus, generating a unique and small key value from a big string seemed like something a checksum was intended to do, and hashing seemed like the evolution of that. So I was going to use the MD5 hash to do this. But then I read somewhere that "MD5s are not meant to be unique keys" and I thought that's really weird. What's the right way of doing this? Edit: by the way, I took two sources to get to the following, which is how I'm currently doing it and it's working just fine, with Python 2.5: import hashlib def md5_from_file (fileName, block_size=2**14): md5 = hashlib.md5() f = open(fileName) while True: data = f.read(block_size) if not data: break md5.update(data) f.close() return md5.hexdigest()

    Read the article

  • Prolog: using the sort/2 predicate

    - by Øyvind Hauge
    So I'm trying to get rid of the wrapper clause by using the sort library predicate directly inside split. What split does is just generating a list of numbers from a list that looks like this: [1:2,3:2,4:6] ---split-- [1,2,3,2,4,6]. But the generated list contains duplicates, and I don't want that, so I'm using the wrapper to combine split and sort, which then generates the desired result: [1,2,3,4,6]. I'd really like to get rid of the wrapper and just use sort within split, however I keep getting "ERROR: sort/2: Arguments are not sufficiently instantiated." Any ideas? Thanks :) split([],[]). split([H1:H2|T],[H1,H2|NT]) :- split(T,NT). wrapper(L,Processed) :- split(L,L2), sort(L2,Processed).

    Read the article

  • ASP.NET - DropDown DataBinding (Rebind?)

    - by Bob Fincheimer
    I have a drop down which has a method that binds data to it: dropDown.Items.Clear() dropDown.AppendDataBoundItems = True Select Case selType Case SelectionTypes.Empty dropDown.Items.Insert(0, New ListItem("", "")) Case SelectionTypes.Any dropDown.Items.Insert(0, New ListItem("ANY", "")) Case SelectionTypes.Select dropDown.Items.Insert(0, New ListItem("Select One", "")) End Select BindDropDown(val) The BindDropDown method simply sets the datasource, datakeyname, datavaluename, and then databinds the data. For a reason I cannot avoid, I MUST call this method twice sometimes. When it is called twice, all of the databound items show up two times, but the top item (the one I manually insert) is there only once. Is ASP.NET doing something weird when I databind twice, even though I clear the list in between? Or does it have something to do with the viewstate/controlstate? Edit: The entire page, and this control, has EnableViewState="false". Edit: The dropdown is inside a FormView. After the selected value is set I have to rebind the dropdown just in case the selected value is not there [because it is an inactive user]. After this, the FormView duplicates the databound items.

    Read the article

  • Can you explain this odd behavior with dragging items into nested sortables?

    - by RDL
    I have the following setup: A sortable list where one of the <li> has a table with lists in each cell. All of the lists are sortable with each other. Draggable items that can be added to any of the sortables Issue: When adding a draggable item ('drag 1', 'drag 2', 'drag 3') to one of the lists in the horizontal lists (table of lists) it duplicates the draggable when dropped. Sometimes it will create both copies in the same list or one in the item list and one in the column list. Here is a demo: http://jsfiddle.net/MQTgA/ Question: How do I prevent the second item being created when dropping the draggable?

    Read the article

  • Compilation hangs for a class with field double d = 2.2250738585072012e-308

    - by 01es
    I have come across an interesting situation. A coworker committed some changes which would not compile on my machine, either from the IDE (Eclipse) or from the command line (Maven). The problem manifested as the compilation process taking 100% CPU, and only killing the process would help to stop it. After some analysis the cause of the problem was located and resolved. It turned out to be the line "double d = 2.2250738585072012e-308" (without a semicolon at the end) in one of the interfaces. The following snippet reproduces it. public class WeirdCompilationIssue { double d = 2.2250738585072012e-308 } Why would the compiler hang? A language edge case?

    Read the article

  • LINQ2SQL: how to merge two columns from the same table into a single list

    - by TomL
    This is probably a simple question, but I'm only a beginner so... Suppose I have a table containing the home and work locations (cities) certain people use, something like: ID(int), namePerson(string), homeLocation(string), workLocation(string) where homeLocation and workLocation can both be null. Now I want all the different locations that are used merged into a single list. Something like: var homeLocation = from hm in Places where hm.Home != null select hm.Home; var workLocation = from wk in Places where wk.Work != null select wk.Work; List<string> locationList = new List<string>(); locationList = homeLocation.Distinct().ToList<string>(); locationList.AddRange(workLocation.Distinct().ToList<string>()); (which I guess would still allow duplicates if they have the same value in both columns, which I don't really want...) My question: how can this be put into a single LINQ statement? Thanks in advance for your help!

    Read the article

  • Learning JavaScript - What is the BEST ONLINE RESOURCE?

    - by Chris Jacob
    The Goal: Use votes to rank nominated sites. The first answer to reach 100+ votes will be accepted. Please answer following these 5 simple rules: ONE SITE per answer. Link to each page if nominating a "series" of resources on a SITE. No "offline" books. Only online resources (tutorials, API references, blogs, screencasts, etc). Don't add "subjective" details/notes in your answer. Add them as a comment to the answer. Don't post duplicates. If your favourite is already listed Up Vote It! Example Answer: Site Name http://www.example.com Example Answer (site with a series of resources): Site Name http://www.example.com Series Name A http://www.example.com/video/a/1 http://www.example.com/video/a/2 Series Name B http://www.example.com/video/b/1

    Read the article

  • Learning HTML - What is the BEST ONLINE RESOURCE?

    - by Chris Jacob
    The Goal: Use votes to rank nominated sites. The first answer to reach 100+ votes will be accepted. Please answer following these 5 simple rules: ONE SITE per answer. Link to each page if nominating a "series" of resources on a SITE. No "offline" books. Only online resources (tutorials, API references, blogs, screencasts, etc). Don't add "subjective" details/notes in your answer. Add them as a comment to the answer. Don't post duplicates. If your favourite is already listed Up Vote It! Example Answer: Site Name http://www.example.com Example Answer (site with a series of resources): Site Name http://www.example.com Series Name A http://www.example.com/video/a/1 http://www.example.com/video/a/2 Series Name B http://www.example.com/video/b/1

    Read the article

  • Why is the 'this' keyword not a reference type in C++ [closed]

    - by Dave Tapley
    Possible Duplicates: "Why 'this' is a pointer and not a reference?"; "SAFE Pointer to a pointer (well reference to a reference) in C#". The this keyword in C++ gets a pointer to the object I currently am. My question is: why is the type of this a pointer type and not a reference type? Are there any conditions under which the this keyword would be NULL? My immediate thought would be in a static function, but Visual C++ at least is smart enough to spot this and report "static member functions do not have 'this' pointers". Is this in the standard?

    Read the article

  • Best datastructure for frequently queried list of objects

    - by panzerschreck
    Hello, I have a list of objects, say List<Entity>. The Entity class has an equals method, based on a few attributes (a business rule), to differentiate one Entity object from another. The task that we usually carry out on this list is to remove all the duplicates, something like this: List<Entity> noDuplicates = new ArrayList<Entity>(); for(Entity entity: lstEntities) { int indexOf = noDuplicates.indexOf(entity); if(indexOf >= 0 ) { noDuplicates.get(indexOf).merge(entity); } else { noDuplicates.add(entity); } } Now, the problem I have been observing is that this part of the code is slowing down considerably as soon as the list has more than 10000 objects. I understand ArrayList is doing an O(N) search. Is there a faster alternative? Using HashMap is not an option, because the entity's uniqueness is built upon 4 of its attributes together; it would be tedious to put the key itself into the map. Will a sorted set help with faster querying? Thanks
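
    A sketch of one way around the O(N) indexOf scan (assuming Entity also overrides hashCode() consistently with its four-attribute equals(), which any hash-based container requires): use the entity itself as the key of a LinkedHashMap, so no separate key class is needed and the first-seen order is preserved. The field names below are invented stand-ins for the real attributes.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.Objects;

        public class DedupExample {

            // Stand-in for the question's Entity: equals/hashCode over four attributes.
            static class Entity {
                final String a, b, c, d;
                Entity(String a, String b, String c, String d) {
                    this.a = a; this.b = b; this.c = c; this.d = d;
                }
                void merge(Entity other) { /* combine non-key data as the real class would */ }

                @Override public boolean equals(Object o) {
                    if (this == o) return true;
                    if (!(o instanceof Entity)) return false;
                    Entity e = (Entity) o;
                    return a.equals(e.a) && b.equals(e.b) && c.equals(e.c) && d.equals(e.d);
                }
                @Override public int hashCode() { return Objects.hash(a, b, c, d); }
            }

            static List<Entity> removeDuplicates(List<Entity> lstEntities) {
                Map<Entity, Entity> unique = new LinkedHashMap<>();   // keeps first-seen order
                for (Entity entity : lstEntities) {
                    Entity existing = unique.putIfAbsent(entity, entity);
                    if (existing != null) {
                        existing.merge(entity);   // duplicate business key: merge into the kept one
                    }
                }
                return new ArrayList<>(unique.values());
            }

            public static void main(String[] args) {
                List<Entity> input = Arrays.asList(
                        new Entity("x", "y", "z", "w"),
                        new Entity("x", "y", "z", "w"),
                        new Entity("p", "q", "r", "s"));
                System.out.println(removeDuplicates(input).size());   // 2
            }
        }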

    Read the article

  • Pass information back from an iframe?

    - by ThinkBohemian
    Right now I'm building a Firefox plugin that duplicates some functionality on my website. It takes in an email address and then returns information to the user. The easiest way to do this in the plugin is to use an iframe and render that super simple form on my website. All of this works great, but to make the plugin really useful, I would like the plugin to have access to the information that the iframe renders, so it can use it in the current window that the user is in. Is it possible to pass information back through an iframe in this manner? I know there are quite a few domain access restrictions with iframes, so any help or insight is appreciated!

    Read the article

  • Mod_Rewrite: Insert "/" between variables and values in URL

    - by Abs
    Hello all, I am attempting to build clean, useful URLs using mod_rewrite. I am sure this is a common question, but I am not so hot with mod_rewrite: I have this URL: http://mysite.com/user.php?user=fatcatmat&sort=popularv I want to be able to rewrite it like this: http://mysite.com/user/user/fatcatmat/sort/popularv (Is there a way to remove duplicates in a URL?) I think I have managed to do the removal of the PHP extension: RewriteRule ^(.*)\$ $1.php [nc] Is the above correct? For the separate pages, I would have something like this: RewriteRule ^/?user(/)?$ user.php Main Question: It's a bit tedious to do all of the above, but is there a MEGA rewrite rule that will just place "/" in between variables and their values and remove the .php extension from all pages? Thank you for any help.

    Read the article

  • How to re-order a List<String>

    - by tarka
    I have created the following method: public List<String> listAll() { List worldCountriesByLocal = new ArrayList(); for (Locale locale : Locale.getAvailableLocales()) { final String isoCountry = locale.getDisplayCountry(); if (isoCountry.length() > 0) { worldCountriesByLocal.add(isoCountry); Collections.sort(worldCountriesByLocal); } } return worldCountriesByLocal; } It's pretty simple and it returns a list of world countries in the user's locale. I then sort it to get it alphabetical. This all works perfectly (except I seem to occasionally get duplicates of countries!). Anyway, what I need is to place the US and the UK at the top of the list regardless. The problem I have is that I can't isolate the index or the string that will be returned for the US and UK, because that is specific to the locale! Any ideas would be really appreciated.
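
    A sketch of one way to handle both problems (the duplicates come from getAvailableLocales() returning several locales per country; iterating the ISO country codes avoids that): build a sorted, de-duplicated list first, then pin the US and UK entries by looking their display names up via their ISO codes, so the code keeps working whatever the display locale is. Class and method names below are invented for the example.

        import java.text.Collator;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.Locale;
        import java.util.TreeSet;

        public class CountryList {

            public static List<String> listAll() {
                Locale display = Locale.getDefault();
                // TreeSet with a Collator: sorted once, locale-aware, no duplicates.
                TreeSet<String> countries = new TreeSet<>(Collator.getInstance(display));
                for (String iso : Locale.getISOCountries()) {
                    String name = new Locale("", iso).getDisplayCountry(display);
                    if (!name.isEmpty()) {
                        countries.add(name);
                    }
                }
                // Look the pinned countries up by ISO code, not by hard-coded name.
                String us = new Locale("", "US").getDisplayCountry(display);
                String uk = new Locale("", "GB").getDisplayCountry(display);
                countries.remove(us);
                countries.remove(uk);

                List<String> result = new ArrayList<>();
                result.add(us);
                result.add(uk);
                result.addAll(countries);
                return result;
            }

            public static void main(String[] args) {
                listAll().forEach(System.out::println);
            }
        }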

    Read the article
