Search Results

Search found 886 results on 36 pages for 'no duplicates'.

Page 14 of 36

  • Does replace into have a where clause?

    - by Lajos Arpad
    I'm writing an application that uses MySQL as its DBMS; we are downloading property offers and there were some performance issues. The old architecture looked like this: a property is updated. If the number of affected rows is not 1, the update is not considered successful; otherwise the update query solves our problem. If the update was not successful and the number of affected rows is more than 1, we have duplicates and we delete all of them. After deleting duplicates if needed, if the update was still not successful, an insert happens. This architecture was working well, but there were some speed issues, because properties are deleted if they have not been updated for 15 days. Theoretically the main problem is deleting properties, because some properties are alive for months and the indexes are very far from each other (we are talking about 500,000+ properties).

    Our host told me to use REPLACE INTO instead of deleting properties, and all deprecated properties should be considered DEAD. I've done this, but problems started to occur because of a syntax error, and I couldn't find an example anywhere of REPLACE INTO with a WHERE clause (I'd like to replace a DEAD property with the new property instead of deleting the old property and inserting a new one, to keep things optimized). My query looked like this:

        replace into table_name(column1, ..., columnn) values(value1, ..., valuen) where ID = idValue

    Of course, I've calculated idValue and handled everything, but I got a syntax error. I would like to know whether I'm wrong and there is a WHERE clause for REPLACE INTO. I've found an alternative solution, which is even better than REPLACE INTO (simply using an UPDATE query), because deletes happen behind the curtains if I use REPLACE INTO, but I would like to know if I'm wrong when I say that REPLACE INTO doesn't have a WHERE clause. For more reference, see this link: http://dev.mysql.com/doc/refman/5.0/en/replace.html Thank you for your answers in advance, Lajos Árpád
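
    For reference, MySQL's REPLACE INTO indeed has no WHERE clause; it decides which row to overwrite purely from a collision on the PRIMARY KEY or a UNIQUE index. A minimal sketch of the two usual alternatives, with a hypothetical properties table and made-up column names:

        -- Option 1: a plain UPDATE targeted by ID (no hidden DELETE + INSERT)
        UPDATE properties
        SET status = 'ALIVE', price = 123000, updated_at = NOW()
        WHERE id = 42;

        -- Option 2: upsert on the unique key instead of using a WHERE clause
        INSERT INTO properties (id, status, price, updated_at)
        VALUES (42, 'ALIVE', 123000, NOW())
        ON DUPLICATE KEY UPDATE
            status = VALUES(status),
            price = VALUES(price),
            updated_at = VALUES(updated_at);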

    Read the article

  • How do I detect whether a similar document is already stored in a Lucene index?

    - by Jenea
    Hi. I need to exclude duplicates in my database. The problem is that duplicates are not exact matches but rather similar documents. For this purpose I decided to use FuzzyQuery as follows:

        var fuzzyQuery = new global::Lucene.Net.Search.FuzzyQuery(
            new Term("text", queryText), 0.8f, 0);
        hits = _searcher.Search(query);

    The idea was to set the minimum similarity to 0.8 (which I think is high enough) so only similar documents are found, excluding those that are not sufficiently similar. To test this code I decided to see if it finds an already existing document. The variable queryText was assigned a value that is stored in the index. The code above found nothing; in other words, it doesn't even detect an exact match. The index was built by this code:

        doc.Add(new global::Lucene.Net.Documents.Field(
            "text", text,
            global::Lucene.Net.Documents.Field.Store.YES,
            global::Lucene.Net.Documents.Field.Index.TOKENIZED,
            global::Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS));

    I followed the recommendations from below and the results are: TermQuery doesn't return any results. A query constructed with

        var _analyzer = new RussianAnalyzer();
        var parser = new global::Lucene.Net.QueryParsers.QueryParser("text", _analyzer);
        var query = parser.Parse(queryText);
        var _searcher = new IndexSearcher(Settings.General.Default.LuceneIndexDirectoryPath);
        var hits = _searcher.Search(query);

    returns several results: the document with the exact match gets the maximum score, and several other documents with similar content follow.
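
    One likely reason the fuzzy query finds nothing: the "text" field is indexed TOKENIZED, so it contains individual terms, while the FuzzyQuery above wraps the entire queryText in a single Term. A rough sketch of building one fuzzy clause per token instead (assuming a Lucene.Net 2.x-style API; the whitespace split below is only a crude stand-in for running the same RussianAnalyzer over the query text):

        var booleanQuery = new global::Lucene.Net.Search.BooleanQuery();
        foreach (var token in queryText.ToLower().Split(' '))
        {
            if (token.Length == 0) continue;
            // one fuzzy clause per query term, minimum similarity 0.8
            booleanQuery.Add(
                new global::Lucene.Net.Search.FuzzyQuery(new Term("text", token), 0.8f, 0),
                global::Lucene.Net.Search.BooleanClause.Occur.SHOULD);
        }
        var fuzzyHits = _searcher.Search(booleanQuery);

    If the goal is whole-document similarity rather than per-term fuzziness, the contrib MoreLikeThis query may also be a better fit than FuzzyQuery.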

    Read the article

  • Script to copy files on CD and not on hard disk to a new directory

    - by John22
    I need to copy files from a set of CDs that have a lot of duplicate content - duplicated between the CDs themselves and with what's already on my hard disk. The file names of identical files are not the same, and they sit in sub-directories with different names. I want to copy the non-duplicate files from the CDs into a new directory on the hard disk. I don't care about the sub-directories - I will sort that out later - I just want the unique files. I can't find software to do that - see my post at SuperUser: http://superuser.com/questions/129944/software-to-copy-non-duplicate-files-from-cd-dvd Someone at SuperUser suggested I write a script using GNU's "find" and the Win32 version of some checksum tools. I glanced at that, and have not done anything like that before, so I'm hoping something exists that I can modify. I found a good program to delete duplicates, Duplicate Cleaner (it compares checksums), but it won't help me here, as I'd have to copy all the CDs to disk first, and each is probably about 80% duplicates - I don't have room for that, so I'd have to cycle through a few at a time, copying everything and then deleting 80% of it, working the hard drive a lot. Thanks for any help.
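
    In case it helps as a starting point, a rough sketch of that checksum approach in Python instead of find plus Win32 checksum tools (drive letters and folder names are placeholders; it hashes what is already on the hard disk, then copies only CD files whose content hash is new):

        import hashlib, os, shutil

        def file_hash(path, chunk=1024 * 1024):
            h = hashlib.sha1()
            with open(path, "rb") as f:
                while True:
                    data = f.read(chunk)
                    if not data:
                        return h.hexdigest()
                    h.update(data)

        def hashes_under(root):
            seen = set()
            for folder, _dirs, files in os.walk(root):
                for name in files:
                    seen.add(file_hash(os.path.join(folder, name)))
            return seen

        known = hashes_under(r"C:\my_files")           # existing collection on the hard disk
        dest = r"C:\from_cds"                          # where unique CD files should land
        os.makedirs(dest, exist_ok=True)
        for folder, _dirs, files in os.walk("D:\\"):   # the CD drive
            for name in files:
                src = os.path.join(folder, name)
                digest = file_hash(src)
                if digest not in known:
                    known.add(digest)                  # so later CDs skip this content too
                    shutil.copy2(src, os.path.join(dest, digest[:8] + "_" + name))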

    Read the article

  • Removing specific XML tags

    - by iTayb
    I'd like to make an application that removes duplicates from my wpl (Windows Playlist) files. The wpl structure is something like this:

        <?wpl version="1.0"?>
        <smil>
            <head>
                <meta name="Generator" content="Microsoft Windows Media Player -- 11.0.5721.5145"/>
                <meta name="AverageRating" content="55"/>
                <meta name="TotalDuration" content="229844"/>
                <meta name="ItemCount" content="978"/>
                <author/>
                <title>english</title>
            </head>
            <body>
                <seq>
                    <media src="D:\Anime con 2006\Shits\30 Seconds to Mars - 30 Seconds to Mars\30 Seconds to Mars - Capricorn.mp3" tid="{BCC6E6B9-D0F3-449C-91A9-C6EEBD92FFAE}" cid="{D38701EF-1764-4331-A332-50B5CA690104}"/>
                    <media src="E:\emule.incoming\Ke$ha - Coming Unglued.mp3" tid="{E2DB18E5-0449-4FE3-BA09-9DDE18B523B1}"/>
                    <media src="E:\emule.incoming\Lady Gaga - Bad Romance.mp3" tid="{274BD12B-5E79-4165-9314-00DB045D4CD8}"/>
                    <media src="E:\emule.incoming\David Guetta -Sexy Bitch Feat. Akon.mp3" tid="{46DA1363-3DFB-4030-A7A9-88E13DF30677}"/>
                </seq>
            </body>
        </smil>

    This looks like a standard XML file. How can I load the file and get the src value of each media tag? How can I remove a specific media element, in case of duplicates? Thank you very much.
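
    Assuming this is .NET (the question reads like C#), a minimal LINQ to XML sketch - the file paths are placeholders, and duplicates are taken to mean media elements with an identical src value:

        // requires: using System.Collections.Generic; using System.Linq; using System.Xml.Linq;
        var doc = XDocument.Load(@"C:\playlists\english.wpl");
        var seen = new HashSet<string>();
        foreach (var media in doc.Descendants("media").ToList())
        {
            var src = (string)media.Attribute("src");
            if (!seen.Add(src))
                media.Remove();            // second or later occurrence of this src: drop it
        }
        doc.Save(@"C:\playlists\english-deduped.wpl");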

    Read the article

  • Remove duplicate records/objects uniquely identified by multiple attributes

    - by keruilin
    I have a model called HeroStatus with the following attributes:

        id
        user_id
        recordable_type
        hero_type (can be NULL!)
        recordable_id
        created_at

    There are over 100 hero_statuses, and a user can have many hero_statuses, but can't have the same hero_status more than once. A user's hero_status is uniquely identified by the combination of recordable_type + hero_type + recordable_id. What I'm trying to say, essentially, is that there can't be a duplicate hero_status for a specific user. Unfortunately, I didn't have a validation in place to assure this, so I got some duplicate hero_statuses for users after I made some code changes. For example:

        user_id = 18, recordable_type = 'Evil', hero_type = 'Halitosis', recordable_id = 1, created_at = '2010-05-03 18:30:30'
        user_id = 18, recordable_type = 'Evil', hero_type = 'Halitosis', recordable_id = 1, created_at = '2009-03-03 15:30:00'
        user_id = 18, recordable_type = 'Good', hero_type = 'Hugs',      recordable_id = 1, created_at = '2009-02-03 12:30:00'
        user_id = 18, recordable_type = 'Good', hero_type = NULL,        recordable_id = 2, created_at = '2009-12-03 08:30:00'

    (The last two are obviously not dups. The first two are.) So what I want to do is get rid of the duplicate hero_status. Which one? The one with the most recent date. I have three questions:

        1. How do I remove the duplicates using a SQL-only approach?
        2. How do I remove the duplicates using a pure Ruby solution? Something similar to this: http://stackoverflow.com/questions/2790004/removing-duplicate-objects
        3. How do I put a validation in place to prevent duplicate entries in the future?
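
    A sketch of the SQL-only cleanup, assuming MySQL (for the null-safe <=> comparison) and that the older of each duplicate group should be kept:

        DELETE newer
        FROM hero_statuses AS newer
        JOIN hero_statuses AS older
          ON  newer.user_id          =  older.user_id
          AND newer.recordable_type  =  older.recordable_type
          AND newer.recordable_id    =  older.recordable_id
          AND newer.hero_type       <=> older.hero_type     -- NULL-safe equality
          AND newer.created_at       >  older.created_at;

    For prevention, a uniqueness validation scoped to the other identifying columns (e.g. validates_uniqueness_of :recordable_id, :scope => [:user_id, :recordable_type, :hero_type]) plus a matching unique database index is the usual combination; the validation alone is racy under concurrent writes.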

    Read the article

  • Java HashSet using a specified method

    - by threenplusone
    I have a basic class 'HistoryItem', like so:

        public class HistoryItem {
            private Date startDate;
            private Date endDate;
            private Info info;
            private String details;

            @Override
            public int hashCode() {
                int hash = (startDate == null ? 0 : startDate.hashCode());
                hash = hash * 31 + (endDate == null ? 0 : endDate.hashCode());
                return hash;
            }
        }

    I am currently using a HashSet to remove duplicates from an ArrayList based on the startDate & endDate fields, which is working correctly. However, I also need to remove duplicates based on different fields (info & details). My question is this: is there a way to specify a different method which HashSet will use in place of hashCode()? Something like this:

        public int hashCode_2() {
            int hash = (info == null ? 0 : info.hashCode());
            hash = hash * 31 + (details == null ? 0 : details.hashCode());
            return hash;
        }

        Set<HistoryItem> removeDups = new HashSet<HistoryItem>();
        removeDups.setHashMethod(hashCode_2);

    Or is there another way that I should be doing this?
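
    HashSet offers no such hook - it always goes through hashCode()/equals(). A common workaround is to dedupe through a map keyed on the alternate fields, roughly like this (a sketch; it assumes getInfo()/getDetails() accessors exist and that Info has sensible equals()/hashCode()):

        // requires java.util.{Arrays, Collection, LinkedHashMap, List, Map}
        Map<List<Object>, HistoryItem> byInfoAndDetails = new LinkedHashMap<>();
        for (HistoryItem item : items) {
            List<Object> key = Arrays.asList(item.getInfo(), item.getDetails());
            byInfoAndDetails.putIfAbsent(key, item);       // keeps the first item seen per key
        }
        Collection<HistoryItem> withoutInfoDetailDups = byInfoAndDetails.values();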

    Read the article

  • Heavy MySQL operation & Time Constraints [closed]

    - by Rahul Jha
    There is a performance issue that I am stuck with in my application, which is based on PHP & MySQL. The application is for data migration: data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate checking, ID generation), it has to be inserted into one central table and then into 5 different tables. There, an ID is generated, and that ID has to be written back to the central table. There are different sets of records and validation rules.

    The problem I am facing is that when I insert, say, a 4K-row file (containing 20 columns), it works fine and everything is inserted within 15 minutes. But when I insert the same records again, it takes one hour (ideally the earlier inserted data should simply be marked as duplicate). After going through the log file, I noticed that there is a MySQL SELECT statement where I check for duplicates and fetch the IDs that are duplicated. Then I call a function inside a for loop which basically inserts records into the 5 tables and updates the ID in the central table. This called function takes the major part of the whole processing time. P.S. The records have to be inserted record by record. Kindly suggest a solution.

        // This is the sample code
        $query = mysql_query("SELECT DISTINCT p1.ID
                              FROM table1 p1, table2 p2, table3 a
                              WHERE p2.datatype = 0
                                AND (p1.datatype = 1 || p1.datatype = 2)
                                AND p2.ID = 0
                                AND p1.ID = a.ID
                                AND p1.coulmn1 = p2.column1
                                AND p1.coulmn2 = p2.coulmn2
                                AND a.coulmn3 = p2.column3");
        $num = mysql_num_rows($query);
        for ($i = 0; $i < $num; $i++) {
            $f = mysql_result($query, $i, "ID");
            // calling function
            RecordInsert($f);
        }
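
    One cheap improvement to sketch, assuming the tables are InnoDB: commit the per-record work in chunks instead of letting every RecordInsert() call hit the disk as its own implicit transaction:

        // Sketch only: chunked commits around the existing per-record function.
        mysql_query("SET autocommit = 0");
        mysql_query("START TRANSACTION");
        for ($i = 0; $i < $num; $i++) {
            $f = mysql_result($query, $i, "ID");
            RecordInsert($f);                       // existing per-record insert/update logic
            if (($i + 1) % 500 == 0) {              // commit every 500 records
                mysql_query("COMMIT");
                mysql_query("START TRANSACTION");
            }
        }
        mysql_query("COMMIT");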

    Read the article

  • Generate and download a text file in javascript

    - by Mark B
    All my research so far suggests this can't be done, but I'm hoping someone here has some cunning ideas. I have a form on a website which allows users to bulk upload lots of URLs to add to a list on the server. There's quite a lot of server-side processing to do on each URL, so to avoid timeouts and to display progress, I've implemented the upload using jQuery to submit the URLs one at a time using ajax. This is all working nicely. However, part of the processing on each URL is deduplicating it against the complete list. The ajax call returns a status indicating either a successful upload or a rejection due to duplication. As the upload progresses, I tell the user how many URLs have been rejected as duplicates (along with overall progress and ETA). The problem now is how to give the user a complete list of the failed duplicate URLs. I've kept them in an array in my jQuery, and would like the user to be able to click on a link on the form to download a text file containing those URLs. Is this possible just using client-side processing? The server-side processing basically handles a single keyword at a time. I'd rather not have to store the duplicates in a database table with some kind of session key which gets sent with every ajax call, and is then used at the end to generate the text file server-side (and then gets cleaned up some time later). I can see how to do this, but it seems very clunky and a bit 20th century.
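
    For what it's worth, a purely client-side sketch that works in current browsers using a Blob and a temporary object URL (duplicateUrls stands for whatever array the rejected URLs are kept in; older browsers from this question's era may still need a server round-trip or a data: URI fallback):

        // duplicateUrls: the array of rejected URLs collected during the ajax upload
        function downloadDuplicates(duplicateUrls) {
            var blob = new Blob([duplicateUrls.join("\r\n")], { type: "text/plain" });
            var link = document.createElement("a");
            link.href = URL.createObjectURL(blob);
            link.download = "duplicate-urls.txt";    // suggested file name for the download
            document.body.appendChild(link);
            link.click();
            document.body.removeChild(link);
            URL.revokeObjectURL(link.href);
        }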

    Read the article

  • Java ArrayList remove dupes without sets

    - by Kieran
    I'm having problems removing duplicates from an ArrayList. It's for a college assignment. Here's the code I have already:

        public int numberOfDiffWords() {
            ArrayList<String> list = new ArrayList<>();
            for (int i = 0; i < words.size() - 1; i++) {
                for (int j = i + 1; j < words.size(); j++) {
                    if (words.get(i).equals(words.get(j))) {
                        // do nothing
                    } else {
                        list.add(words.get(i));
                    }
                }
            }
            return list.size();
        }

    The problem is in the numberOfDiffWords() method. The method that populates the list is working correctly, as my instructor has given me a sample string (containing 4465 words) to analyse - printing words.size() gives the correct result. I want to return the size of the new ArrayList with all duplicates removed. words is an ArrayList class attribute. UPDATE: I should have mentioned I'm only allowed to use dynamic index-based storage for this part of the assignment, which means no hash-based storage.
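
    For comparison, a sketch of the usual index-based fix: instead of comparing every pair, add a word to the result list only if it is not already there, which stays within plain ArrayList operations:

        public int numberOfDiffWords() {
            ArrayList<String> distinct = new ArrayList<>();
            for (String word : words) {
                if (!distinct.contains(word)) {   // linear scan; no hashing involved
                    distinct.add(word);
                }
            }
            return distinct.size();
        }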

    Read the article

  • How to create a generic method in C# that's applicable to many types - ints, strings, doubles etc.

    - by satyajit
    Let's say I have a method to remove duplicates from an integer array:

        public int[] RemoveDuplicates(int[] elems)
        {
            HashSet<int> uniques = new HashSet<int>();
            foreach (int item in elems)
                uniques.Add(item);
            elems = new int[uniques.Count];
            int cnt = 0;
            foreach (var item in uniques)
                elems[cnt++] = item;
            return elems;
        }

    How can I make this generic, so that it also accepts a string array and removes duplicates from it? How about a double array? I know I am probably mixing things up here between primitive and value types. For your reference, the following code won't compile:

        public List<T> RemoveDuplicates(List<T> elems)
        {
            HashSet<T> uniques = new HashSet<T>();
            foreach (var item in elems)
                uniques.Add(item);
            elems = new List<T>();
            int cnt = 0;
            foreach (var item in uniques)
                elems[cnt++] = item;
            return elems;
        }

    The reason is that all generic types should be closed at run time. Thanks for your comments.
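
    The immediate compile error is simply that the method itself is never declared generic (there is no <T> after RemoveDuplicates), so T is an unknown name. A minimal sketch of a working version and its use with different element types:

        public List<T> RemoveDuplicates<T>(List<T> elems)
        {
            // HashSet<T> keeps one copy of each distinct element for any T
            return new List<T>(new HashSet<T>(elems));
        }

        // usage: type inference picks T from the argument
        var ints    = RemoveDuplicates(new List<int>    { 1, 2, 2, 3 });
        var words   = RemoveDuplicates(new List<string> { "a", "a", "b" });
        var doubles = RemoveDuplicates(new List<double> { 1.5, 1.5, 2.0 });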

    Read the article

  • Algorithm to determine indices i..j of array A containing all the elements of another array B

    - by Skylark
    I came across this question on an interview questions thread. Here is the question: Given two integer arrays A[1..n] and B[1..m], find the smallest window in A that contains all elements of B. In other words, find a pair <i, j> such that A[i..j] contains B[1..m]. If A doesn't contain all the elements of B, then i, j can be returned as -1. The integers in A need not be in the same order as they are in B. If there is more than one smallest window (different windows of the same size), it's enough to return one of them. Example: A = [1, 2, 5, 11, 2, 6, 8, 24, 101, 17, 8] and B = [5, 2, 11, 8, 17]. The algorithm should return i = 2 (index of 5 in A) and j = 9 (index of 17 in A).

    Now I can think of two variations. Let's suppose that B has duplicates.

        1. This variation doesn't consider the number of times each element occurs in B. It just checks for all the unique elements that occur in B and finds the smallest corresponding window in A that satisfies the above problem. For example, if A = [1, 2, 4, 5, 7] and B = [2, 2, 5], this variation doesn't bother about there being two 2's in B and just checks A for the unique integers in B, namely 2 and 5, and hence returns i = 1, j = 3.

        2. This variation accounts for duplicates in B. If there are two 2's in B, then it expects to see at least two 2's in A as well. If not, it returns -1, -1.

    When you answer, please do let me know which variation you are answering. Pseudocode will do. Please mention space and time complexity if it is tricky to calculate. Mention whether your solution assumes array indices to start at 1 or 0, too. Thanks in advance.
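
    A sketch of the second variation (duplicate counts in B must be matched) using a two-pointer sliding window, 0-based indices, O(n + m) time and O(m) extra space:

        from collections import Counter

        def smallest_window(a, b):
            need = Counter(b)              # required count per element of B
            missing = len(b)               # required occurrences not yet inside the window
            best = (-1, -1)
            left = 0
            for right, value in enumerate(a):
                if need[value] > 0:
                    missing -= 1
                need[value] -= 1
                while missing == 0:        # window [left, right] covers B; try to shrink it
                    if best == (-1, -1) or right - left < best[1] - best[0]:
                        best = (left, right)
                    need[a[left]] += 1
                    if need[a[left]] > 0:  # just dropped a required occurrence
                        missing += 1
                    left += 1
            return best

        print(smallest_window([1, 2, 5, 11, 2, 6, 8, 24, 101, 17, 8], [5, 2, 11, 8, 17]))
        # -> (2, 9): the i = 2, j = 9 from the question's example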

    Read the article

  • Which options do I have for Java process communication?

    - by Dmitriy Matveev
    We have a place in the code of the following form:

        void processParam(Object param) {
            wrapperForComplexNativeObject result = jniCallWhichMayCrash(param);
            processResult(result);
        }

    processParam - a method which is called with many different arguments.
    jniCallWhichMayCrash - a native method which is intended to do some complex processing of its parameter and to create some complex object. It can crash in some cases.
    wrapperForComplexNativeObject - a wrapper type generated by SWIG.
    processResult - a method written in pure Java which processes its parameter by creating several kinds (by kinds I don't mean classes, more like hierarchies) of objects:

        1. Some non-unique objects which reference each other (from the same hierarchy); these objects can have duplicates created by invocations of the processParam() method with different parameter values. Since it's costly to keep all the duplicates, it's necessary to cache them.
        2. Some unique objects which reference each other (from the same hierarchy) and some of the objects of the 1st kind.

    After processParam is executed for each of the arguments from some set, the data created in processResult will be processed together. The problem is that the jniCallWhichMayCrash method may crash the entire JVM, and this would be very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore crashes inside the JVM and just skip some chunks of data when such crashes occur. In order to do this we should run the processParam function inside a separate process and pass the result somehow (HOW? HOW?! This is the question) to the main process; in case of any crashes we will only lose some part of the data (that's OK) without losing everything else. So for now the main problem is the implementation of transport between the different processes. Which options do I have? I can think about serialization and transmitting the binary data over streams, but serialization may not be very fast due to object complexity. Maybe I have some other options for implementing this?
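
    One common option is exactly the serialization route: run the risky call in a worker JVM started with ProcessBuilder and stream the serialized result back over the child's stdout, so a native crash only kills the worker. A rough sketch of the parent side (class names are hypothetical, and the result would need a Serializable, pure-Java representation of the native data):

        import java.io.ObjectInputStream;

        public class WorkerClient {
            public static Object processInWorker(String param) {
                try {
                    Process worker = new ProcessBuilder(
                            "java", "-cp", System.getProperty("java.class.path"),
                            "WorkerMain", param)            // WorkerMain: hypothetical child entry point
                        .start();
                    try (ObjectInputStream in = new ObjectInputStream(worker.getInputStream())) {
                        return in.readObject();             // blocks until the child writes its result
                    } finally {
                        worker.destroy();
                    }
                } catch (Exception crashedOrUnreadable) {
                    return null;                            // worker died mid-call: skip this chunk of data
                }
            }
        }

    The child's main method would do the mirror image: compute the result and write it with new ObjectOutputStream(System.out).writeObject(result). Sockets, memory-mapped files or a queue would work the same way; the serialization cost is usually acceptable compared to re-running a crashed JVM.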

    Read the article

  • Sharing mobile broadband between two MacBooks [closed]

    - by Poita_
    Possible Duplicates:
        Is there a software alternative to Mac OS X built in internet sharing services?
        How to troubleshoot problems sharing internet connection via WiFi on Mac OS X

    My wife and I both have MacBooks (one regular, one MacBook Pro). We're staying in temporary accommodation with no internet, so we got one of those mobile broadband USB dongles. We only have one (dongle) and were just wondering if there is any way we can share the internet connection between the two MacBooks. Thanks in advance.

    Read the article

  • Monitor utilities

    - by Adam Davis
    I'm a big fan of good monitor usage, but currently use only a few utilities to help me attain display nirvana across several systems and monitors. Part of this is due to not knowing what's available. Please list one monitor utility that you use, and what it does for you, per answer, and avoid duplicates - comment on and vote up the existing answers rather than adding a duplicate. Also, if there are existing questions that delve more specifically into one area of monitor utilities, link to them in a separate answer. -Adam

    Read the article

  • Looking for an application to act as a file bucket / consolidator - with auto tagging and duplicate elimination

    - by Notitze
    Is there a Windows application that can help with consolidating file libraries (documents, projects, pictures, music) into an archive? I'm looking for something that can:

        - eliminate redundant files by replacing duplicates with links
        - add tags from keywords and date (year)
        - allow an easy dump from multiple external media (USB sticks, USB drives)

    Nice to have would be:

        - auto indexing for easy search
        - anything else that makes it easy to find and retrieve a file

    Read the article

  • Duplicate file finder

    - by Andrija
    I need a free duplicate file finder/remover app, with the ability to find duplicate files/folders by name and/or by size and to remove one of the duplicates. Can you please recommend one? And why? Thanks. EDIT: Changed to CW. Please add more apps to the list if you know any.

    Read the article

  • Deleting Duplicated Lines In TEXT File?

    - by echolab
    I am trying to clean up a text file, and for some reason every line is duplicated 3 times. Am I able to get rid of the duplicates with regex or other tricks, or do you know software which could do that? The text file is like this:

        Party Started 10:17 (89/1/2)
        Party Started 10:17 (89/1/2)
        Party Started 10:17 (89/1/2)
        Jessica At Dinner 17:54 (89/1/2)
        Jessica At Dinner 17:54 (89/1/2)
        Jessica At Dinner 17:54 (89/1/2)

    How can I clean it up and get rid of the duplicated lines? It's about 69,587 lines.
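
    A sketch of a tiny Python script that keeps only the first occurrence of every line (file names are placeholders; ~70,000 lines fit comfortably in memory):

        seen = set()
        with open("log.txt", encoding="utf-8") as src, \
             open("log_deduped.txt", "w", encoding="utf-8") as dst:
            for line in src:
                if line not in seen:          # keep only the first copy of each line
                    seen.add(line)
                    dst.write(line)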

    Read the article

  • Import into Aperture 3 without moving files

    - by delwin
    I like managing my own files, and this is definitely possible with Aperture 3. But there seem to be two ways to import into Aperture: either by dragging and dropping, or by importing one folder at a time, manually, through the Aperture import window. BUT whenever I drag and drop the photos, it imports them into the Aperture library, making duplicates of everything. And if I add them manually through Aperture, I have to add one folder at a time, which is extremely tedious. Is there any solution?

    Read the article

  • How to force the iSCSI initiator to log in only once

    - by Disco
    I'm trying to set up a few CentOS nodes to connect to a Dell MD3600i array, and I'm running into the issue that the MD3600i shows 4 different portals (with different IP addresses). When I launch the initiator on the host side, well, it connects to every IP address it saw during the discovery phase, resulting in duplicates. How can I 'force' the initiator to discard every other IP and let me choose only one IP portal to connect to? Must be something damn stupid, but I can't figure out how. Thx

    Read the article

  • Is there a way to remove duplicate emails from a remote account?

    - by Mister IT Guru
    I have a user who has multiple duplicated emails across multiple folders on his IMAP account. How he managed to create them is beyond me, but mine is not to reason why, just to fix it! Can anyone recommend an application that I can use to remove the duplicates? (We're talking mailboxes in excess of 9G, and it's a remote server.) I don't mind what OS I have to use to clean up the mailbox, I'm just looking for some recommendations. Thanks

    Read the article

  • What is the best way to remove duplicate files on web hosting's FTP server?

    - by Eric Harrison
    For some reason (it happened before I started working on this project), my client's website has 2 duplicates of every single file, effectively tripling the size of the site. The files look much like this:

        wp-comments-post.php | 3,982 bytes
        wp-comments-post (john smith's conflicted copy 2012-01-12).php | 3,982 bytes
        wp-comments-post (JohnSmith's conflicted copy 2012-01-14).php | 3,982 bytes

    The hosting that the website is on has no access to bash or SSH. In your opinion, what would be the easiest way to delete these duplicate files, taking the least time?
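
    With only FTP access, a small script is probably the least painful route. A rough sketch in Python that walks the site over FTP and deletes anything whose name contains "conflicted copy" (host, credentials and start path are placeholders, and nlst() behaviour varies between servers, so test it on one directory first):

        from ftplib import FTP, error_perm

        def purge_conflicted(ftp, path):
            ftp.cwd(path)
            for name in ftp.nlst():
                if name in (".", ".."):
                    continue
                child = path.rstrip("/") + "/" + name
                try:
                    ftp.cwd(child)                   # only succeeds for directories
                    purge_conflicted(ftp, child)     # recurse, then return to this level
                    ftp.cwd(path)
                except error_perm:                   # not a directory, so it's a file
                    if "conflicted copy" in name:
                        print("deleting", child)
                        ftp.delete(name)

        ftp = FTP("ftp.example.com")
        ftp.login("username", "password")
        purge_conflicted(ftp, "/public_html")
        ftp.quit()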

    Read the article

  • Delete duplicate files with a Windows batch file?

    - by Chris Sobolewski
    I have a program that automatically copies files to a directory, and if it creates a duplicate it will name it like so:

        file with duplicate.xxx
        file with duplicate - 1.xxx
        file with duplicate - 2.xxx

    I need a way to delete all the duplicates with a Windows batch file. Something along the lines of:

        FOR %f IN (C:\files\*.*) DO del "%f - 1"

    However, that will not work, because it would resolve to "file with duplicate - 1.xxx - 1".
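
    A sketch of one way to do it: put the " - N" suffix inside the wildcard itself instead of appending it to %f. Previewing with dir first is wise, since this also matches any legitimate file whose name happens to end in " - 1" and so on:

        rem preview what would be removed
        dir "C:\files\* - 1.*" "C:\files\* - 2.*"

        rem delete copies named "... - 1.xxx" through "... - 9.xxx"
        rem (use %%n inside a .bat file, %n when typed directly at the prompt)
        for /L %%n in (1,1,9) do del "C:\files\* - %%n.*"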

    Read the article

  • How do I remove the Never-Ending Dropbox Folder?

    - by KronoS
    I don't know if this is an error with Eclipse or Dropbox, but I seem to be unable to delete a folder within my Dropbox folder. Somehow, over 8,300 folders were created, each a duplicate of the previous folder. I tried deleting from both the command line and Explorer, but get the same error. I've also deleted the share from the Dropbox website, and it deleted correctly from there, but during the sync from the client (my PC) to the site there is an error. Any ideas of what to do?

    Read the article
