Search Results

Search found 886 results on 36 pages for 'no duplicates'.

Page 9 of 36

  • A tool for finding duplicate code in PHP

    - by Toby
    Are there any tools available that can scan multiple .php files and report back duplicated lines or chunks of code? It doesn't have to be especially smart; it just needs to give me a starting point for manual review so I can improve the codebase of some of my apps.
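
    One possible pointer, not from the original question: phpcpd (the PHP Copy/Paste Detector) is a commonly used command-line tool for exactly this kind of scan. A minimal invocation might look like:

        # Scan a source tree and report duplicated blocks of PHP code.
        phpcpd src/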

    Read the article

  • How to find the duplicate and highest value in an array

    - by Jerry
    Hello guys, I have an array like this:

        $array = array('a' => '2', 'b' => '5', 'c' => '6', 'd' => '6', 'e' => '2');

    The values vary depending on the $_POST variables. My question is: how do I find the highest value in the array and return its key? In this case, I need to get 'c' and 'd' and the value 6. Not sure how to do this. Any help would be appreciated. Thanks.
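
    A minimal sketch using built-in functions: max() finds the highest value, and array_keys() with a search value returns every key that holds it:

        $array = array('a' => '2', 'b' => '5', 'c' => '6', 'd' => '6', 'e' => '2');
        $max   = max($array);               // "6"
        $keys  = array_keys($array, $max);  // array('c', 'd')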

    Read the article

  • remove duplicate from string in PHP

    - by Adnan
    Hello, I am looking for the fastest way to remove duplicate values from a string of comma-separated values. My string looks like this:

        $str = 'one,two,one,five,seven,bag,tea';

    I can do it by exploding the string into values and then comparing them, but I suspect that will be slow. What about preg_replace(): would it be faster? Has anyone done this with that function?
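
    A sketch of the explode() approach; for strings of this size it is usually fast enough that any preg_replace() gains would not be noticeable:

        $str    = 'one,two,one,five,seven,bag,tea';
        $unique = implode(',', array_unique(explode(',', $str)));
        // "one,two,five,seven,bag,tea"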

    Read the article

  • MySQL INSERT IGNORE not working

    - by gAMBOOKa
    Here's my table with some sample data:

        a_id | b_id
        -----------
        1    | 225
        2    | 494
        3    | 589

    When I run this query:

        INSERT IGNORE INTO table_name (a_id, b_id)
        VALUES ('4', '230'), ('2', '494')

    it inserts both rows, when it is supposed to ignore the second value pair (2, 494). No indexes are defined, and neither column is a primary key. What don't I know?
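
    A sketch of the usual explanation: INSERT IGNORE only skips rows that would violate a UNIQUE or PRIMARY KEY constraint, so with no index on the pair there is nothing for it to ignore. The index name below is made up for illustration:

        ALTER TABLE table_name ADD UNIQUE KEY uq_a_b (a_id, b_id);

        INSERT IGNORE INTO table_name (a_id, b_id)
        VALUES ('4', '230'), ('2', '494');  -- (2, 494) is now silently skipped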

    Read the article

  • XPath to find an element with a similar sibling

    - by user364902
    Suppose I have this XML:

        <x>
          <e a='1' b='A'/>
          <e a='1' b='B'/>
          <e a='1' b='A'/>
        </x>

    I'd like to write an XPath expression that finds any elements e which have attribute @b = 'A' and have the same value for attribute @a as another such element. The XPath can't reference the literal value of attribute @a, however; it can reference the literal value of attribute @b. Or more generally, I want to find whether there are any instances where two or more elements e[@b='A'] share the same value for attribute @a. Is this possible?
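
    A sketch in XPath 1.0, assuming the elements are siblings as in the example (a value-to-node-set comparison is true when any node in the set matches):

        //e[@b='A'][@a = preceding-sibling::e[@b='A']/@a
                    or @a = following-sibling::e[@b='A']/@a]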

    Read the article

  • changing existing duplicate entries in mysql

    - by Mladen
    Sorry for the (probably) noob question, but I'm new at this stuff. I have a table with a column 'position', and when inserting a new row I want to shift the position of every existing row whose position is at or above the inserted one. For example, if I add a row with position 4 and there is already a row at 4, that row should become 5, 5 should shift to 6, and so on. Also, is there a way to get the highest value for a column besides testing every row via PHP?
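
    A sketch, assuming a table called items (the table name and the 'name' column are made up for illustration): shift the existing rows first, then insert; MAX() gives the highest position without looping in PHP:

        UPDATE items SET position = position + 1 WHERE position >= 4;
        INSERT INTO items (name, position) VALUES ('new row', 4);

        SELECT MAX(position) FROM items;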

    Read the article

  • Moving x,y position of all array objects every frame in actionscript 3?

    - by Dylan Gallardo
    I have my code set up so that a movieclip from my library (a class called "block") is duplicated multiple times and added into an array like this:

        function makeblock(e:Event){
            newblock = new block();
            newblock.x = 10;
            newblock.y = 10;
            addChild(newblock);
            myarray[counter] = newblock; // adds a new block object into the array
            counter += 1;
        }

    Then I have a loop with a currently primitive way of handling my problem:

        stage.addEventListener(Event.ENTER_FRAME, gameloop);

        function gameloop(evt:Event):void {
            if (moveright == true){
                myarray[0].x += 5;
                myarray[1].x += 5;
                myarray[2].x += 5;
                // (and so on)

    My question is: how can I change the x,y values every frame for new objects added to the array, along with the previous ones, in a more elegant way than writing it out myself as array[0].x += 5, array[1], array[2], array[3] etc.? Ideally I would like this to scale to 500 or more objects in one array, so obviously I don't want to write each line individually. I also need it to perform consistently; would using a for loop or something to loop through the whole array and move each x += 5 be too slow? Anyway, if anyone has any ideas that'd be great!
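
    A minimal sketch: a plain for loop over the array each frame; iterating over a few hundred display objects per frame is normally not a performance problem in ActionScript 3:

        function gameloop(evt:Event):void {
            if (moveright == true) {
                for (var i:int = 0; i < myarray.length; i++) {
                    myarray[i].x += 5;
                }
            }
        }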

    Read the article

  • Adding to database. No repeat on refresh

    - by kevstarlive
    I have this code:

    Episode.php

        <?php
        $feedback = new feedback;
        $articles = $feedback->fetch_all();

        if (isset($_POST['name'], $_POST['post'])) {
            $cast      = $_GET['id'];
            $name      = $_POST['name'];
            $email     = $_POST['email'];
            $post      = nl2br($_POST['post']);
            $ipaddress = $_SERVER['REMOTE_ADDR'];

            if (empty($name) or empty($post)) {
                $error = 'All Fields Are Required!';
            } else {
                $query = $pdo->prepare('INSERT INTO comments (cast, name, email, post, ipaddress) VALUES(?, ?, ?, ?, ?)');
                $query->bindValue(1, $cast);
                $query->bindValue(2, $name);
                $query->bindValue(3, $email);
                $query->bindValue(4, $post);
                $query->bindValue(5, $ipaddress);
                $query->execute();
            }
        }
        ?>
        <div align="center">
            <strong>Give us your feedback?</strong><br /><br />
            <?php if (isset($error)) { ?>
                <small style="color:#aa0000;"><?php echo $error; ?></small><br /><br />
            <?php } ?>
            <form action="episode.php?id=<?php echo $data['cast_id']; ?>" method="post" autocomplete="off" enctype="multipart/form-data">
                <input type="text" name="name" placeholder="Name" /> /
                <input type="text" name="email" placeholder="Email" /><small style="color:#aa0000;">*</small><br /><br />
                <textarea rows="10" cols="50" name="post" placeholder="Comment"></textarea><br /><br />
                <input type="submit" onclick="myFunction()" value="Add Comment" /><br /><br />
                <small style="color:#aa0000;">* <b>Email will not be displayed publicly</b></small><br />
            </form>
        </div>

    Include.php

        class feedback {
            public function fetch_all() {
                global $pdo;
                $query = $pdo->prepare("SELECT * FROM comments");
                $query->bindValue(1, $cast);
                $query->execute();
                return $query->fetchAll();
            }
        }

    This code inserts into the database as it is supposed to, and after submission it reloads the current page given in the form action. But when I refresh the page to see the comment that was added, the browser asks me to resubmit the form, and if I confirm, the comment is added again. How can I stop this from happening? Maybe I could hide the comment box and display a thank-you message, but that would not stop a repeat entry. Please help. Thank you. Kev
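
    A sketch of the usual fix, the Post/Redirect/Get pattern: right after the successful INSERT, redirect back to the same page so that a refresh re-issues a GET instead of re-posting the form:

        $query->execute();
        header('Location: episode.php?id=' . urlencode($cast));
        exit;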

    Read the article

  • Entityframework duplicate record on second insert

    - by Delysid
    I am building an application for recipe/meal planning, and I have come across a problem I can't seem to figure out. I have a table for units of measure, where I keep the units in use; I only want unique units in there (for grocery list calculation and so forth). But when I use a unit from the table on a recipe, the first time it is okay and nothing is inserted into units of measure; the second time I get a "duplicate". I suspect it has something to do with the EntityKey, because the primary key is an identity column on the SQL Server (2008 R2). For some reason it works to change the object state on some objects (courses, see code), and that does not generate a duplicate, but the same approach does not work on the unit of measure. My insert method looks like this:

        public recipe Create(recipe recipe)
        {
            using (RecipeDataContext ctx = new RecipeDataContext())
            {
                foreach (recipe_ingredient rec_ing in recipe.recipe_ingredient)
                {
                    if (rec_ing.ingredient.ingredient_id == 0)
                    {
                        ingredient ing = (from _ing in ctx.ingredients
                                          where _ing.name == rec_ing.ingredient.name
                                          select _ing).FirstOrDefault();
                        if (ing != null)
                        {
                            rec_ing.ingredient_id = ing.ingredient_id;
                            rec_ing.ingredient = null;
                        }
                    }

                    if (rec_ing.unit_of_measure.unit_of_measure_id == 0)
                    {
                        unit_of_measure _uom = (from dbUom in ctx.unit_of_measure
                                                where dbUom.unit == rec_ing.unit_of_measure.unit
                                                select dbUom).FirstOrDefault();
                        if (_uom != null)
                        {
                            rec_ing.unit_of_measure_id = _uom.unit_of_measure_id;
                            rec_ing.unit_of_measure = null;
                        }
                    }

                    ctx.Recipes.AddObject(recipe);
                    // for some reason it works to change object state of this, and not generate a duplicate
                    ctx.ObjectStateManager.ChangeObjectState(recipe.courses[0], EntityState.Unchanged);
                }
                ctx.SaveChanges();
            }
            return recipe;
        }

    My data model looks like this: http://i.imgur.com/NMwZv.png

    Read the article

  • Printing distinct integers in an array

    - by ???
    I'm trying to write a small program that prints out the distinct numbers in an array. For example, if a user enters 1,1,3,5,7,4,3 the program will only print out 1,3,5,7,4. I'm getting an error on the else-if line in the function checkDuplicate. Here's my code so far:

        import javax.swing.JOptionPane;

        public class PrintDistinct {

            public static void main(String[] args) {
                int[] array = new int[10];
                for (int i = 0; i < array.length; i++) {
                    array[i] = Integer.parseInt(JOptionPane.showInputDialog("Please enter" + "an integer:"));
                }
                checkDuplicate(array);
            }

            public static int checkDuplicate(int array[]) {
                for (int i = 0; i < array.length; i++) {
                    boolean found = false;
                    for (int j = 0; j < i; j++)
                        if (array[i] == array[j]) {
                            found = true;
                            break;
                        }
                    if (!found)
                        System.out.println(array[i]);
                }
                return 1;
            }
        }
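
    A sketch of a shorter alternative for the body of checkDuplicate: a java.util.LinkedHashSet keeps insertion order and drops duplicates automatically, so the nested loops are not needed:

        // requires: import java.util.LinkedHashSet; import java.util.Set;
        Set<Integer> distinct = new LinkedHashSet<Integer>();
        for (int value : array) {
            distinct.add(value);
        }
        System.out.println(distinct); // prints e.g. [1, 3, 5, 7, 4]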

    Read the article

  • PHP - Pull out array keys that have matching values?

    - by MAZUMA
    Is there a way to look inside an array and pull out the entries (with their keys) whose 'title' values match one another? Questions like this have been asked, but I'm not finding anything conclusive. My array looks like this:

        Array
        (
            [0] => Array ( [title] => Title 1 [type] =>         [message] =>    )
            [1] => Array ( [title] => Title 2 [type] =>         [message] =>    )
            [2] => Array ( [title] => Title 3 [type] =>         [message] =>    )
            [3] => Array ( [title] => Title 2 [type] => Limited [message] => 39 )
            [4] => Array ( [title] => Title 4 [type] => Offline [message] => 41 )
            [5] => Array ( [title] => Title 5 [type] =>         [message] =>    )
        )

    And I want to get this:

        Array
        (
            [1] => Array ( [title] => Title 2 [type] =>         [message] =>    )
            [3] => Array ( [title] => Title 2 [type] => Limited [message] => 39 )
        )
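
    One way this could look (array_column() needs PHP 5.5+; on older versions the titles can be collected with a foreach instead): count how often each title occurs, then keep only the rows whose title appears more than once; array_filter() preserves the original keys:

        $counts = array_count_values(array_column($array, 'title'));

        $dupes = array_filter($array, function ($row) use ($counts) {
            return $counts[$row['title']] > 1;
        });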

    Read the article

  • find a duplicate series in SQL

    - by SomeMiscGuy
    I have a table with 3 columns, containing a variable number of records per series based on the first column, which is a foreign key. I am trying to determine whether I can detect when an entire series is duplicated across multiple rows:

        declare @finddupseries table
        (
            portid     int,
            asset_id   int,
            allocation float
        );

        INSERT INTO @finddupseries
        SELECT 250,  6, 0.05   UNION ALL
        SELECT 250, 66, 0.8    UNION ALL
        SELECT 250,  2, 0.105  UNION ALL
        SELECT 250,  4, 0.0225 UNION ALL
        SELECT 250,  5, 0.0225 UNION ALL
        SELECT 251, 13, 0.6    UNION ALL
        SELECT 251,  2, 0.3    UNION ALL
        SELECT 251,  5, 0.1    UNION ALL
        SELECT 252, 13, 0.8    UNION ALL
        SELECT 252,  2, 0.15   UNION ALL
        SELECT 252,  5, 0.05   UNION ALL
        SELECT 253, 13, 0.4    UNION ALL
        SELECT 253,  2, 0.45   UNION ALL
        SELECT 253,  5, 0.15   UNION ALL
        SELECT 254,  6, 0.05   UNION ALL
        SELECT 254, 66, 0.8    UNION ALL
        SELECT 254,  2, 0.105  UNION ALL
        SELECT 254,  4, 0.0225 UNION ALL
        SELECT 254,  5, 0.0225

        select * from @finddupseries

    The records for portid 250 and 254 match. Is there any way I can write a query to detect this? Edit: yes, the entire series must match. Also, a way to determine which series it matched would be helpful, as the actual table has around 10k records. Thanks!
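
    A rough sketch for SQL Server 2005+ (not fully general, but workable at this size): build a text "signature" per portid from its ordered rows using the FOR XML PATH concatenation trick, then report the portids whose signature occurs more than once:

        ;WITH sig AS (
            SELECT portid,
                   (SELECT CAST(asset_id AS varchar(20)) + ':' +
                           CAST(allocation AS varchar(30)) + ';'
                    FROM @finddupseries s2
                    WHERE s2.portid = s1.portid
                    ORDER BY asset_id
                    FOR XML PATH('')) AS signature
            FROM @finddupseries s1
            GROUP BY portid
        )
        SELECT portid, signature
        FROM sig
        WHERE signature IN (SELECT signature FROM sig GROUP BY signature HAVING COUNT(*) > 1)
        ORDER BY signature, portid;   -- matching series end up next to each other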

    Read the article

  • left join without duplicate values using MIN()

    - by Clipper87
    I have a table_1:

        id  custno
        1   1
        2   2
        3   3

    and a table_2:

        id  custno  qty  descr
        1   1       10   a
        2   1        7   b
        3   2        4   c
        4   3        7   d
        5   1        5   e
        6   1        5   f

    When I run this query to show the minimum order quantity for every customer:

        SELECT DISTINCT table_1.custno, table_2.qty, table_2.descr
        FROM table_1
        LEFT OUTER JOIN table_2 ON table_1.custno = table_2.custno
            AND qty = (SELECT MIN(qty) FROM table_2 WHERE table_2.custno = table_1.custno)

    I get this result:

        custno  qty  descr
        1       5    e
        1       5    f
        2       4    c
        3       7    d

    Customer 1 appears twice, each time with the same minimum qty (and a different description), but I only want to see customer 1 appear once. I don't care whether that is the record with 'e' as the description or the one with 'f'. How can I do this? Thx!
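
    A sketch of one way to break the tie: join on table_2.id instead of qty, picking the lowest id among the rows that share the minimum qty, so each customer appears at most once while customers with no orders are still returned; exact syntax may need tweaks depending on the database:

        SELECT t1.custno, t2.qty, t2.descr
        FROM table_1 t1
        LEFT OUTER JOIN table_2 t2
          ON t2.id = (SELECT MIN(x.id)
                      FROM table_2 x
                      WHERE x.custno = t1.custno
                        AND x.qty = (SELECT MIN(y.qty) FROM table_2 y WHERE y.custno = t1.custno));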

    Read the article

  • Remove all rows in duplication (different from distinct row selection)

    - by user1671401
    How can I remove EVERY row in a DataTable that takes part in a duplication, based on the values of two columns? Unfortunately, I am unable to find an equivalent LINQ query. (I don't even want distinct values.) The table below explains my problem: I want to delete every duplicated row based on Column_A and Column_B.

        COLUMN_A  COLUMN_B  COLUMN_C  COLUMN_D ...
        A         B
        C         D
        E         F
        G         H
        A         B
        E         F

    Expected output:

        COLUMN_A  COLUMN_B  COLUMN_C  COLUMN_D ...
        C         D
        G         H

    Please help.
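
    A sketch in LINQ to DataSet, assuming a DataTable variable named table with string columns "COLUMN_A" and "COLUMN_B" (requires System.Data, System.Linq and a reference to System.Data.DataSetExtensions): group by the two columns and keep only the groups that contain exactly one row:

        var survivors = table.AsEnumerable()
            .GroupBy(r => new { A = r.Field<string>("COLUMN_A"),
                                B = r.Field<string>("COLUMN_B") })
            .Where(g => g.Count() == 1)        // drop every group that has duplicates
            .SelectMany(g => g);

        DataTable result = survivors.Any()
            ? survivors.CopyToDataTable()      // rows C/D and G/H in the example
            : table.Clone();                   // CopyToDataTable throws on an empty sequence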

    Read the article

  • Duplicate values multi array

    - by BETA911
    As the title states, I'm searching for a way to remove duplicates from a multi-dimensional array. PHP is not my world, so I can't come up with a good and fast solution. I basically get this from the database: http://pastebin.com/vYhFCuYw . I want to check on the 'id' key, and if the array contains a duplicate 'id', the 'aantal' values should be added together. So the output basically has to be this: http://pastebin.com/0TXRrwLs . Thanks in advance!

    EDIT: As asked, attempt 1 out of many:

        function checkDuplicates($array) {
            $temp = array();
            foreach ($array as $k) {
                foreach ($array as $v) {
                    $t_id         = $k['id'];
                    $t_naam       = $k['naam'];
                    $t_percentage = $k['percentage'];
                    $t_aantal     = $k['aantal'];
                    if ($k['id'] == $v['id']) {
                        $t_aantal += $k['aantal'];
                        array_push($temp, array(
                            'id'         => $t_id,
                            'naam'       => $t_naam,
                            'percentage' => $t_percentage,
                            'aantal'     => $t_aantal,
                        ));
                    }
                }
            }
            return $temp;
        }
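
    A minimal sketch of a single-pass alternative: index the result by 'id'; when an id shows up again, just add its 'aantal' to the row already stored:

        function mergeDuplicates(array $rows) {
            $result = array();
            foreach ($rows as $row) {
                $id = $row['id'];
                if (isset($result[$id])) {
                    $result[$id]['aantal'] += $row['aantal'];
                } else {
                    $result[$id] = $row;
                }
            }
            return array_values($result); // re-index 0, 1, 2, ...
        }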

    Read the article

  • While making an RSS reader which saves articles, how can I prevent duplicates?

    - by Koning Baard
    Let's say I have an RSS feed which lists the 3 newest questions on SO. At 1 o'clock, the feed looks like this:

        While making an RSS reader which saves articles, how can I prevent duplicates?
        Convert char array to UNICODE in MFC C++
        How to deploy a Java Swing application with an embedded JavaDB database?

    At 2 o'clock, the feed looks like this (duplicate articles are marked with *):

        django url from another template than the one associated with the view-function
        While making an RSS reader which saves articles, how can I prevent duplicates? *
        Convert char array to UNICODE in MFC C++ *

    I want to download the RSS feed every 5 minutes, parse it, and save the articles that aren't already saved, but I do not want duplicates (items that remain in the new, updated feed, like the examples above). What can I use to determine whether an article has already been saved? Thanks
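
    A sketch of one common approach (the schema below is made up for illustration): RSS items usually carry a <guid> element, or at least a <link>; storing that value in a UNIQUE column lets the database reject articles that were already saved:

        CREATE TABLE articles (
            id    INT AUTO_INCREMENT PRIMARY KEY,
            guid  VARCHAR(255) NOT NULL UNIQUE,   -- the item's <guid> or <link>
            title TEXT,
            body  TEXT
        );

        -- MySQL syntax; INSERT IGNORE skips rows whose guid already exists.
        INSERT IGNORE INTO articles (guid, title, body)
        VALUES ('http://example.com/item-123', 'Example item', '...');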

    Read the article

  • What algorithms can I use to detect if articles or posts are duplicates?

    - by michael
    I'm trying to detect whether an article or forum post is a duplicate entry within the database. I've given this some thought and come to the conclusion that someone who duplicates content will do so in one of three ways (in descending difficulty to detect):

        1. simple copy-paste of the whole text
        2. copy and paste parts of the text, merging it with their own
        3. copy an article from an external site and masquerade it as their own

    Prepping Text For Analysis: basically remove any anomalies; the goal is to make the text as "pure" as possible. For more accurate results, the text is "standardized" by:

        - stripping duplicate white space and trimming leading and trailing white space
        - standardizing newlines to \n
        - removing HTML tags
        - stripping URLs, using a RegEx called Daring Fireball
        - stripping BB code (I use BB code in my application, so that goes too)
        - converting (ä)ccented and other foreign (non-English) characters to their plain form

    I store information about each article in (1) a statistics table and (2) a keywords table.

    (1) Statistics Table: the following statistics are stored about the textual content (much like this post): text length, letter count, word count, sentence count, average words per sentence, Automated Readability Index, Gunning fog score. For European languages, Coleman-Liau and the Automated Readability Index should be used, as they do not use syllable counting and so should produce a reasonably accurate score.

    (2) Keywords Table: the keywords are generated by excluding a huge list of stop words (common words), e.g., 'the', 'a', 'of', 'to', etc.

    Sample Data:

        text_length,       3963
        letter_count,      3052
        word_count,        684
        sentence_count,    33
        word_per_sentence, 21
        gunning_fog,       11.5
        auto_read_index,   9.9
        keyword 1,         killed
        keyword 2,         officers
        keyword 3,         police

    It should be noted that once an article is updated, all of the above statistics are regenerated and could have completely different values.

    How could I use the above information to detect whether an article that is being published for the first time already exists within the database? I'm aware that anything I design will not be perfect, the biggest risks being (1) content that is not a duplicate gets flagged as a duplicate and (2) the system lets duplicate content through. So the algorithm should generate a risk assessment number from 0 (no duplicate risk) through 5 (possible duplicate) to 10 (duplicate). Anything above 5 means there is a good possibility that the content is a duplicate; in that case the content could be flagged and linked to the articles that are possible duplicates, and a human could decide whether to delete or allow it.

    As I said before, I'm storing keywords for the whole article; however, I wonder if I could do the same on a per-paragraph basis. This would mean further separating my data in the DB, but it would also make it easier to detect case (2) from the list at the start of this post. I'm thinking of a weighted average between the statistics, but in what order, and what would the consequences be?
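
    A rough sketch (in PHP, and not the poster's system) of one signal that could sit alongside the statistics and keywords: compare two cleaned texts by the Jaccard similarity of their word 3-gram "shingles". Values near 1 suggest whole-text copying, mid-range values suggest partial copying, and the score could feed into the 0-10 risk number:

        function shingles($text, $n = 3) {
            $words = preg_split('/\s+/', strtolower(trim($text)), -1, PREG_SPLIT_NO_EMPTY);
            $set = array();
            for ($i = 0; $i <= count($words) - $n; $i++) {
                $set[implode(' ', array_slice($words, $i, $n))] = true;
            }
            return $set;
        }

        function jaccard($textA, $textB) {
            $a = shingles($textA);
            $b = shingles($textB);
            $intersection = count(array_intersect_key($a, $b));
            $union = count($a + $b); // '+' keeps each distinct shingle once
            return $union ? $intersection / $union : 0.0;
        }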

    Read the article

  • What is the easiest way to copy Chrome's logins/passes into KeePass without creating duplicates?

    - by ldigas
    Okay, here's the thing. I have most of my login info in two places: one is a KeePass file and the other is Chrome. Being a lazy sort of person, and since Chrome/KeePass integration never really started working the way it should, a couple of times a year I use the NirSoft tool to export the Chrome logins/passwords to a textual .csv file and then import it into KeePass, creating lots of duplicates in the process, which I then clean up, and so on. In the meantime, all the new logins I accumulate just stay in Chrome. As you might notice, this is not really the best way to do it. Is there a faster way to copy logins from Chrome to KeePass without creating duplicates in KeePass, or has anyone perhaps found a way to get KeePass to work with Chrome under Win XP SP3? KeePass 1.0 or 2.0, it doesn't make a difference as long as it works.

    Read the article

  • In SQL Server what is most efficient way to compare records to other records for duplicates with in

    - by Glenn
    We have a SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is with a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs, but I have to admit I'm having a lot of trouble with this one.

    For a concrete example, suppose we have one "orders" table with the following 6 fields: order_id, customer_id, product_id, quantity, sale_date, price. We want to look through the records to find suspect duplicates on the following example criteria, which get increasingly harder:

        1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
        2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
        3. Records that have the same customer_id and product_id, but quantities within 20 units of each other and sale_dates within five days of each other, should be considered suspect.

    Is it possible to satisfy each of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do this?
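
    A sketch for criterion 1 only, as a self-join on the orders table from the example; criteria 2 and 3 follow the same shape, replacing the equality tests with DATEDIFF(day, o1.sale_date, o2.sale_date) BETWEEN -5 AND 5 and ABS(o1.quantity - o2.quantity) <= 20:

        SELECT o1.order_id, o2.order_id AS suspect_match
        FROM orders o1
        JOIN orders o2
          ON  o2.product_id  = o1.product_id
          AND o2.sale_date   = o1.sale_date
          AND o2.quantity    = o1.quantity
          AND o2.customer_id <> o1.customer_id
          AND o2.order_id    > o1.order_id;   -- avoid reporting each pair twice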

    Read the article

  • MySQL, return only rows where there are duplicates among two columns.

    - by Richard Waite
    I have a table of contact information in MySQL: first name, last name, address, etc. I would like to run a query on this table that returns only the rows whose first and last name combination appears in the table more than once. I do not want to group the "duplicates" (which may only be duplicates of the first and last name, but not of other information like address or birthdate) - I want to return all the "duplicate" rows so I can look over the results and determine whether they are dupes or not. This seemed like it would be a simple thing to do, but it has not been. Every solution I can find either groups the dupes and gives me only a count (which is not useful for what I need to do with the results) or doesn't work at all. Is this kind of logic even possible in a query, or should I try to do this in Python or something?
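
    A sketch (the table and column names contacts, first_name and last_name are made up for illustration): find the duplicated name pairs first, then join back to pull every full row that shares one of those pairs:

        SELECT c.*
        FROM contacts c
        JOIN (
            SELECT first_name, last_name
            FROM contacts
            GROUP BY first_name, last_name
            HAVING COUNT(*) > 1
        ) d ON d.first_name = c.first_name
           AND d.last_name  = c.last_name
        ORDER BY c.last_name, c.first_name;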

    Read the article
