Search Results

Search found 13305 results on 533 pages for 'remove duplicates'.


  • stdio's remove() not always deleting on time.

    - by Kyte
    For a particular piece of homework, I'm implementing a basic data storage system using sequential files under standard C, which cannot load more than one record at a time. The basic part is creating a new file where the results of whatever we do with the original records are stored; the previous file is renamed, and a new one under the working name is created. The code is compiled with MinGW 5.1.6 on Windows 7. The problem is that this particular version of the code (I've got nearly identical versions of it floating around my functions) doesn't always remove the old file, so the rename fails and the stored data gets wiped by the fopen():

        FILE *archivo, *antiguo;
        remove("IndiceNecesidades.old");   // This randomly fails to work in time.
        rename("IndiceNecesidades.dat", "IndiceNecesidades.old");   // So rename() fails.
        antiguo = fopen("IndiceNecesidades.old", "rb");   // But apparently it still gets deleted, since this
                                                          // turns out null (and I never find the .old in my
                                                          // working folder after the program's done).
        archivo = fopen("IndiceNecesidades.dat", "wb");   // And here the data gets wiped.

    Basically, any time the .old file previously exists, there's a chance it isn't removed in time for the rename() to take effect successfully. There are no possible name conflicts, internal or external. The weird thing is that it happens only with this particular file. Identical snippets, except with the name changed to Necesidades.dat (they appear in three different functions), work perfectly fine:

        // I've yet to see this snippet fail.
        FILE *antiguo, *archivo;
        remove("Necesidades.old");
        rename("Necesidades.dat", "Necesidades.old");
        antiguo = fopen("Necesidades.old", "rb");
        archivo = fopen("Necesidades.dat", "wb");

    Any ideas on why this would happen, and/or how I can ensure the remove() call has taken effect by the time rename() executes? (I thought of just using a while loop to call remove() again so long as fopen() returns a non-null pointer, but that sounds like begging for a crash from flooding the OS with delete requests.)
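    One way to narrow this down: both remove() and rename() report failure through their return value and set errno, so checking them usually shows the real cause. On Windows, a file that still has an open FILE* on it can be neither deleted nor renamed over. A minimal diagnostic sketch (hypothetical rotate_files helper, not the poster's code):

        #include <stdio.h>
        #include <errno.h>
        #include <string.h>

        /* Rotate dat -> old, reporting exactly which step fails and why. */
        int rotate_files(const char *dat, const char *old)
        {
            if (remove(old) != 0 && errno != ENOENT) {      /* ENOENT: nothing to remove, that's fine */
                fprintf(stderr, "remove(%s): %s\n", old, strerror(errno));
                return -1;                                  /* EACCES here often means an unclosed FILE* */
            }
            if (rename(dat, old) != 0) {
                fprintf(stderr, "rename(%s -> %s): %s\n", dat, old, strerror(errno));
                return -1;                                  /* bail out instead of wiping dat with fopen("wb") */
            }
            return 0;
        }

    If rotate_files() fails, look for an fopen() of IndiceNecesidades.old or .dat whose matching fclose() isn't always reached; that asymmetry with the Necesidades.* snippets would explain why only this file misbehaves.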

    Read the article

  • Loop through multi-dimensional array and remove certain keys

    - by Webkungen
    Hi! I've got a nested tree structure which is based on the array below:

        Array
        (
            [1] => Array ( [id] => 1  [parent] => 0  [name] => Startpage  [uri] => 125  [basename] => index.php  [child] => )
            [23] => Array
                (
                    [id] => 23  [parent] => 0  [name] => Events  [uri] => 0  [basename] =>
                    [child] => Array
                        (
                            [24] => Array
                                (
                                    [id] => 24  [parent] => 23  [name] => Public news  [uri] => 0  [basename] =>
                                    [child] => Array
                                        (
                                            [27] => Array ( [id] => 27  [parent] => 24  [name] => Add  [uri] => 100  [basename] => news.public.add.php  [child] => )
                                            [28] => Array ( [id] => 28  [parent] => 24  [name] => Overview  [uri] => 101  [basename] => news.public.overview.php  [child] => )
                                        )
                                )
                            [25] => Array
                                (
                                    [id] => 25  [parent] => 23  [name] => Private news  [uri] => 0  [basename] =>
                                    [child] => Array
                                        (
                                            [29] => Array ( [id] => 29  [parent] => 25  [name] => Add  [uri] => 67  [basename] => news.private.add.php  [child] => )
                                            [30] => Array ( [id] => 30  [parent] => 25  [name] => Overview  [uri] => 68  [basename] => news.private.overview.php  [child] => )
                                        )
                                )
                            [26] => Array
                                (
                                    [id] => 26  [parent] => 23  [name] => Calendar  [uri] => 0  [basename] =>
                                    [child] => Array
                                        (
                                            [31] => Array ( [id] => 31  [parent] => 26  [name] => Add  [uri] => 69  [basename] => news.event.add.php  [child] => )
                                            [32] => Array ( [id] => 32  [parent] => 26  [name] => Overview  [uri] => 70  [basename] => news.event.overview.php  [child] => )
                                        )
                                )
                        )
                )
        )

    I'm looking for a (recursive?) function to loop through the array and remove some keys. In my system I can allow users access to certain functions/pages, and if I deny access to the whole "Events" block, the array will look like this:

        Array
        (
            [1] => Array ( [id] => 1  [parent] => 0  [name] => Startpage  [uri] => 125  [basename] => index.php  [child] => )
            [23] => Array
                (
                    [id] => 23  [parent] => 0  [name] => Events  [uri] => 0  [basename] =>
                    [child] => Array
                        (
                            [24] => Array ( [id] => 24  [parent] => 23  [name] => Public news  [uri] => 0  [basename] =>  [child] => )
                            [25] => Array ( [id] => 25  [parent] => 23  [name] => Private news  [uri] => 0  [basename] =>  [child] => )
                            [26] => Array ( [id] => 26  [parent] => 23  [name] => Calendar  [uri] => 0  [basename] =>  [child] => )
                        )
                )
        )

    As you can see above, the whole "Events" block is useless now, because there is no page associated with any option. So I need to find all entries where "basename" is null AND where "child" is not an array or the array is empty, and remove them. I found this function when searching the site:

        function searchAndDestroy(&$a, $key, $val){
            foreach($a as $k => &$v){
                if(is_array($v)){
                    $r = searchAndDestroy($v, $key, $val);
                    if($r){
                        unset($a[$k]);
                    }
                }elseif($key == $k && $val == $v){
                    return true;
                }
            }
            return false;
        }

    It can be used to remove a key anywhere in the array, but only based on one condition, for example remove all keys where "parent" equals "23". But I need to find and remove (unset) all keys where "basename" is null AND "child" isn't an array or the array is empty. Can anyone help me out and possibly tweak the function above? Thank you,
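    A possible direction, sketched under the poster's stated rule (not tested against their data): prune depth-first, so a branch whose children have all been removed is itself removed on the way back up.

        // Hedged sketch: drop nodes with an empty basename whose child
        // list is missing or empty, deepest nodes first.
        function pruneMenu(array &$tree) {
            foreach ($tree as $key => &$node) {
                if (is_array($node['child'])) {
                    pruneMenu($node['child']);
                }
                $childless = !is_array($node['child']) || count($node['child']) === 0;
                if (empty($node['basename']) && $childless) {
                    unset($tree[$key]);
                }
            }
            unset($node); // break the reference left by the foreach
        }

    Applied to the second dump above, this removes Public news, Private news and Calendar first, which empties Events' child list, so Events itself goes too.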

    Read the article

  • How to simply remove everything from a directory on Linux

    - by Tometzky
    How do I simply remove everything from the current or a specified directory on Linux? Several approaches:

        rm -fr *
        rm -fr dirname/*

    Does not work — it leaves hidden files (the ones that start with a dot), chokes on files starting with a dash in the current dir, and fails with too many files.

        rm -fr -- *
        rm -fr -- dirname/*

    Does not work — it still leaves hidden files and fails with too many files.

        rm -fr -- * .*
        rm -fr -- dirname/* dirname/.*

    Don't try this — it will also remove the parent directory, because ".." also starts with a ".".

        rm -fr * .??*
        rm -fr dirname/* dirname/.??*

    Does not work — it leaves files like ".a", ".b", etc., and fails with too many files.

        find -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -fr
        find dirname -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -fr

    As far as I know correct, but not simple.

        find -delete
        find dirname -delete

    AFAIK correct for the current directory, but used with a specified directory it will delete that directory too.

        find -mindepth 1 -delete
        find dirname -mindepth 1 -delete

    AFAIK correct, but is it the simplest way?
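    One more hedged option, for when the directory itself is disposable: remove it and recreate it, which sidesteps both hidden-file globbing and argument-list limits.

        # Loses the directory's ownership, permissions and any mount point,
        # so prefer the "find dirname -mindepth 1 -delete" form when those matter.
        rm -rf dirname && mkdir dirname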

    Read the article

  • SQL to retrieve the latest records, grouping by unique foreign keys

    - by jbox
    I'm writing a query to retrieve the latest posts in a forum backed by a SQL database. I've got a table called "posts". Each post has a foreign key relation to a "thread" and a "user", as well as a creation date. The trick is I don't want to show two posts by the same user or two posts in the same thread. Is it possible to create a query that contains all this logic?

        # Grab the last 10 posts.
        SELECT id, user_id, thread_id
        FROM posts
        ORDER BY created_at DESC
        LIMIT 10;

        # Grab the last 10 posts, max one post per user.
        SELECT id, user_id, thread_id
        FROM posts
        GROUP BY user_id
        ORDER BY created_at DESC
        LIMIT 10;

        # Grab the last 10 posts, max one post per user, max one post per thread???
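    A hedged sketch of one way to get the third query (assumes MySQL 8+ or another engine with window functions; column names follow the snippets above): keep only posts that are the newest for both their user and their thread, which guarantees at most one of each, though it can return fewer than ten rows.

        SELECT id, user_id, thread_id
        FROM (
            SELECT id, user_id, thread_id, created_at,
                   ROW_NUMBER() OVER (PARTITION BY user_id   ORDER BY created_at DESC) AS rn_user,
                   ROW_NUMBER() OVER (PARTITION BY thread_id ORDER BY created_at DESC) AS rn_thread
            FROM posts
        ) ranked
        WHERE rn_user = 1 AND rn_thread = 1
        ORDER BY created_at DESC
        LIMIT 10;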

    Read the article

  • How to delete duplicate vectors within a multidimensional vector?

    - by David
    I have a vector of vectors:

        vector< vector<int> > BigVec;

    It contains an arbitrary number of vectors, each of an arbitrary size. I want to delete not the duplicate elements of each vector, but any vectors that are exactly the same as another. I don't need to preserve the order of the vectors, so I can sort, etc. It should be a really simple problem to solve, but I'm new to this. My (not-working) best effort:

        for (int i = 0; i < BigVec.size(); i++) {
            for (int j = 1; j < BigVec.size(); j++) {
                if (BigVec[i][0] == BigVec[j][i]);
                {
                    BigVec.erase(BigVec.begin() + j);
                    i = 0;  // because I get the impression deleting a
                    j = 1;  // vector messes up a simple iteration through
                }
            }
        }

    I think there might be a solution using unique(), but I can't get that to work either.
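    For reference, a hedged sketch of the sort/unique/erase idiom the poster suspects exists: sorting groups identical inner vectors together (vectors compare lexicographically), std::unique shifts the unique survivors to the front, and erase trims the tail.

        #include <algorithm>
        #include <vector>

        void removeDuplicateRows(std::vector< std::vector<int> > &bigVec) {
            std::sort(bigVec.begin(), bigVec.end());
            bigVec.erase(std::unique(bigVec.begin(), bigVec.end()),
                         bigVec.end());
        }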

    Read the article

  • Why would I get a duplicate key error when updating a row?

    - by hdx
    I'm using Postgres and I'm getting a duplicate key error when updating a row:

        cursor.execute("UPDATE jiveuser SET userenabled = 0 WHERE userid = %s" % str(userId))

        psycopg2.IntegrityError: duplicate key value violates unique constraint "jiveuser_pk"

    I don't understand how updating a row can cause this error... any help will be much appreciated.
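    Two hedged asides (the statement itself looks incapable of violating a primary key): psycopg2 can bind the parameter itself, which rules out quoting surprises, and when an UPDATE that never touches the key column still trips the key's unique index, the usual suspects are a trigger or rule that writes another row, or a corrupted index in need of REINDEX.

        # Parameter binding instead of string interpolation (also blocks SQL injection):
        cursor.execute("UPDATE jiveuser SET userenabled = 0 WHERE userid = %s", (userId,))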

    Read the article

  • Xcode - duplicate Target - new Target fails to build

    - by SirRatty
    Hi all, using Xcode 3.2.5 on 10.6.6 (10J521). I have an Xcode project containing one target, "MyApp". It builds and runs successfully. As well as source and resource files, the target contains a "Copy Files" build phase which copies Sparkle.framework in. The framework is in the same directory as the project. I want to duplicate this target. Steps taken:

        1. Did "Clean all Targets".
        2. Right-clicked on the "MyApp" target within Xcode, then chose "Duplicate".
        3. Renamed the duplicated target to "MyAppTarget2".
        4. Selected "MyAppTarget2" as the Active Target from the popup menu in the top-left.
        5. Did "Build".

    The problem:

        error: Sparkle/Sparkle.h: No such file or directory

    This is puzzling! Each build step appears to have been replicated in the duplicated target, including the "Copy Files" phase. Sparkle.framework exists at the path indicated by [Get Info on the Copy Phase item]. If I right-click on the Sparkle.framework file within the "Copy Files" build phase of the duplicated target and select "Reveal in Finder", the correct Sparkle.framework file is shown. The required file exists at Sparkle.framework/Headers/Sparkle.h. If I switch back to the original "MyApp" target, it builds and runs successfully. Am I doing something obviously wrong here? Thanks.

    Read the article

  • R counting the occurrences of similar rows of a data frame

    - by Matt
    I have data in the following format called DF (this is just a made-up, simplified sample):

        eval.num  eval.count  fitness  fitness.mean  green.h.0  green.v.0  offset.0  random
        1         1           1500     1500          100        120        40        232342
        2         2           1000     1250          100        120        40        11843
        3         3           1250     1250          100        120        40        981340234
        4         4           1000     1187.5        100        120        40        4363453
        5         1           2000     2000          200        100        40        345902
        6         1           3000     3000          150        90         10        943
        7         1           2000     2000          90         90         100       9304358
        8         2           1800     1900          90         90         100       284333

    However, the eval.count column is incorrect and I need to fix it. It should report the number of rows with the same values for (green.h.0, green.v.0, and offset.0), looking only at the previous rows. The example above shows the expected values, but assume they are incorrect. How can I add a new column (say "count") which will count all previous rows that have the same values of the specified variables? I have gotten help on a similar problem of just selecting all rows with the same values for specified columns, so I suppose I could just write a loop around that, but it seems inefficient to me.
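    A hedged base-R sketch (column names taken from the sample): ave() with seq_along numbers each row within its (green.h.0, green.v.0, offset.0) group in row order, which is exactly "previous rows with the same values, plus this one".

        key <- paste(DF$green.h.0, DF$green.v.0, DF$offset.0, sep = "_")
        DF$count <- ave(seq_along(key), key, FUN = seq_along)

    On the sample above this reproduces the expected 1,2,3,4 / 1 / 1 / 1,2 sequence without an explicit loop.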

    Read the article

  • Access MP3 audio data independently of ID3 tags?

    - by kyl191
    Hi, this is a 2 part question. First off, is it possible to access the audio data in an MP3 independently of the ID3 tags, and secondly, is there any way to do so using available libraries? I recently consolidated my music collection from 3 computers and ended up with songs which had changed ID3 tags, but the audio data itself was unmodified. Running a search for duplicate files failed because the file changed with the ID3 tag change, but I think it should be possible to identify duplicate files if I just run a deduplication using the audio data for comparison. I know that it's possible to seek to a particular position past the ID3 header in the file, and directly read the data, but was wondering if there's a library that would expose the audio data so I could just extract the data, run a checksum on it, and store the computed result somewhere, then look for identical checksums. (Also, I'd probably have to use some kind of library when you take into account variable length headers.)
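    On the do-it-yourself side, a hedged stdlib-only sketch (it ignores APE tags and ID3v2 footers, so treat the result as approximate): strip an ID3v2 block from the front and the fixed 128-byte ID3v1 block from the end, then checksum what remains.

        import hashlib

        def audio_checksum(path):
            with open(path, "rb") as f:
                data = f.read()
            start = 0
            if data[:3] == b"ID3":  # ID3v2: 10-byte header, syncsafe size in bytes 6-9
                size = ((data[6] & 0x7F) << 21) | ((data[7] & 0x7F) << 14) \
                     | ((data[8] & 0x7F) << 7) | (data[9] & 0x7F)
                start = 10 + size
            end = len(data)
            if data[-128:-125] == b"TAG":  # ID3v1 occupies the last 128 bytes
                end -= 128
            return hashlib.sha256(data[start:end]).hexdigest()

    Identical checksums then flag candidate duplicates regardless of how the tags were edited.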

    Read the article

  • SQL Query Returning Duplicate Results

    - by Jesse Bunch
    Hi, I've been working on this query for a while now and I thought I had it where I wanted it, but apparently not. There are two records in the database (orders). The query should return two different rows, but instead returns two rows that have exactly the same values. I think it may be something to do with the GROUP BY or the derived tables I'm using, but my eyes are tired and not seeing the problem. Can any of you help? Thanks in advance.

        SELECT orders.billerID, orders.invoiceDate, orders.txnID, orders.bName,
               orders.bStreet1, orders.bStreet2, orders.bCity, orders.bState,
               orders.bZip, orders.bCountry, orders.sName, orders.sStreet1,
               orders.sStreet2, orders.sCity, orders.sState, orders.sZip,
               orders.sCountry, orders.paymentType, orders.invoiceNotes,
               orders.pFee, orders.shipping, orders.tax, orders.reasonCode,
               orders.txnType, orders.customerID,
               customers.firstName AS firstName,
               customers.lastName AS lastName,
               customers.businessName AS businessName,
               orderStatus.statusName AS orderStatus,
               IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax AS orderTotal,
               IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax
                   - IFNULL(payments.totalPayments, 0.00) AS orderBalance
        FROM orders
        LEFT JOIN customers ON orders.customerID = customers.id
        LEFT JOIN orderStatus ON orders.orderStatus = orderStatus.id
        LEFT JOIN (
            SELECT orderItems.orderID,
                   SUM(orderItems.itemPrice * orderItems.itemQuantity) AS itemTotal
            FROM orderItems
            GROUP BY orderItems.orderID
        ) orderItems ON orderItems.orderID = orders.id
        LEFT JOIN (
            SELECT payments.orderID, SUM(payments.amount) AS totalPayments
            FROM payments
            GROUP BY payments.orderID
        ) payments ON payments.orderID = orders.id
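    A hedged way to localize the fan-out (table names as in the query): add the joins back one at a time, or count the rows each join contributes per order; any count above 1 marks the join that duplicates results.

        SELECT o.id, COUNT(*) AS joined_rows
        FROM orders o
        LEFT JOIN customers c ON o.customerID = c.id
        GROUP BY o.id
        HAVING joined_rows > 1;

    Repeating that probe for orderStatus and the two derived tables (which are already grouped by orderID, so they should be safe) narrows it to one relationship.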

    Read the article

  • Duplicate a UITableViewCell - iPhone

    - by ncohen
    Hi everyone, I would like to add an effect to a cell of a UITableView: duplicate the cell, then move the duplicated cell while the original stays in place. My problem is duplicating the cell... I've tried:

        UITableViewCell *animatedCell = [[UITableViewCell alloc] init];
        animatedCell = [[self cellForRowAtIndexPath:indexPath] copy];

    but UIView doesn't seem to implement copy... How can I do it? Thanks
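    A hedged workaround, since UIView doesn't adopt NSCopying (assumes a tableView reference; needs #import <QuartzCore/QuartzCore.h> for -renderInContext:): render the cell's layer into an image and animate a UIImageView stand-in, which looks identical in flight.

        UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];

        // Draw the cell into an image context at screen scale.
        UIGraphicsBeginImageContextWithOptions(cell.bounds.size, NO, 0);
        [cell.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // The "duplicate" to animate; the real cell never moves.
        UIImageView *ghost = [[UIImageView alloc] initWithImage:snapshot];
        ghost.frame = [tableView convertRect:cell.frame toView:tableView.superview];
        [tableView.superview addSubview:ghost];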

    Read the article

  • iptables captive portal remove user

    - by Burgos
    I followed this guide: http://aryo.info/labs/captive-portal-using-php-and-iptables.html I am implementing a captive portal using iptables. I've set up a web server and iptables on a Linux router, and everything is working as it should. I can allow a user to access the internet with

        sudo iptables -I internet -t mangle -m mac --mac-source USER_MAC_ADDRESS -j RETURN

    and I can remove access with

        sudo iptables -D internet -t mangle -m mac --mac-source USER_MAC_ADDRESS -j RETURN

    However, after removal, the user can still open the last viewed page as many times as he wants (if he restarts his Ethernet adapter, future connections will be closed). On the blog page I found a script:

        /usr/sbin/conntrack -L \
          | grep $1 \
          | grep ESTAB \
          | grep 'dport=80' \
          | awk \
            "{ system(\"conntrack -D --orig-src $1 --orig-dst \" \
               substr(\$6,5) \" -p tcp --orig-port-src \" substr(\$7,7) \" \
               --orig-port-dst 80\"); }"

    which should remove their "redirection" connection track, as written. But when I execute that script, nothing happens; the user still has access to that page. When I execute /usr/sbin/conntrack -L | grep USER_IP after running the script, nothing is returned. So my question: is there anything else that can help me clear these tracks? Obviously I can't reset my own network adapter, nor the users'.
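    A hedged observation: if conntrack -L shows no flows for the client yet the page still loads, the copy is probably coming from the browser's local cache rather than a live connection, and no amount of conntrack flushing will remove it. Still, a broader flush than the port-80-only script is worth one try:

        # Drop every tracked flow originating from the client, not just dport=80.
        conntrack -D --orig-src "$USER_IP"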

    Read the article

  • How to triage this MySQL duplicate entry error after running Rails migration?

    - by keruilin
    I get the following error when I try to run this migration:

        == AddUniquenessConstraintOnAwards: migrating ================================
        -- add_index(:awards, [:badge_id, :game_week_id], {:unique=>true, :name=>:game_badge_index})
        rake aborted!
        An error has occurred, all later migrations canceled:

        Mysql::Error: Duplicate entry '35-8192' for key 'game_badge_index':
        CREATE UNIQUE INDEX `game_badge_index` ON `awards` (`badge_id`, `game_week_id`)

    Has anyone encountered this? What's the error telling me? How did you troubleshoot it and ultimately fix it?
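    What the error is saying: the awards table already holds at least two rows with badge_id = 35 and game_week_id = 8192, so MySQL refuses to build a unique index over that pair. A hedged first step is to list every offending pair before deciding which rows to delete or merge:

        SELECT badge_id, game_week_id, COUNT(*) AS n
        FROM awards
        GROUP BY badge_id, game_week_id
        HAVING n > 1;

    Once each group is reduced to a single row, the migration's add_index should go through.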

    Read the article

  • How to compare 2 lists and merge them in Python/MySQL?

    - by NJTechGuy
    I want to merge data. Following are my MySQL tables. I want to use Python to traverse through both lists (one with dupe = 'x' and the other with null dupes). For instance:

        a  b  c  d  e  f  key  dupe
        ---------------------------
        1  d  c  f  k  l  1    x
        2  g     h     j  1
        3  i     h  u  u  2
        4  u  r  t        2    x

    From the above sample table, the desired output is:

        a  b  c  d  e  f  key  dupe
        ---------------------------
        2  g  c  h  k  j  1
        3  i  r  h  u  u  2

    What I have so far:

        import string, os, sys
        import MySQLdb
        from EncryptedFile import EncryptedFile

        enc = EncryptedFile(os.getenv("HOME") + '/.py-encrypted-file')
        user = enc.getValue("user")
        pw = enc.getValue("pw")

        db = MySQLdb.connect(host="127.0.0.1", user=user, passwd=pw, db=user)
        cursor = db.cursor()
        cursor2 = db.cursor()
        cursor.execute("select * from delThisTable where dupe is null")
        cursor2.execute("select * from delThisTable where dupe is not null")
        result = cursor.fetchall()
        result2 = cursor2.fetchall()

        for cursorFieldname in cursor.description:
            for cursorFieldname2 in cursor2.description:
                if cursorFieldname[0] == cursorFieldname2[0]:
                    ### How do I compare the record with the same key value and update
                    ### the original row's null fields with the non-null values from
                    ### the duplicate? Please fill this void...
                    pass

        cursor.close()
        cursor2.close()
        db.close()

    Thanks guys!
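    A hedged sketch of the merge step (assumes the column layout above, so index 6 is the key column, and that both fetchall results use the same column order): index the null-dupe rows by key, then fill each None field from the matching dupe row.

        # result: rows where dupe IS NULL; result2: rows where dupe = 'x'
        originals = {row[6]: list(row) for row in result}
        for dupe_row in result2:
            orig = originals.get(dupe_row[6])
            if orig is None:
                continue
            for i, value in enumerate(orig):
                if value is None and dupe_row[i] is not None:
                    orig[i] = dupe_row[i]  # non-null original values win

    The merged rows in originals can then be written back with UPDATE statements and the dupe rows deleted.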

    Read the article

  • R selecting duplicate rows

    - by Matt
    Okay, I'm fairly new to R and I've tried to search the documentation for what I need to do, but here is the problem. I have a data.frame called heeds.data in the following form (some columns omitted for simplicity):

        eval.num, eval.count, ..., fitness, fitness.mean, green.h.0, green.v.0, offset.0, green.h.1, green.v.1, ..., green.h.7, green.v.7, offset.7, ...

    And I have selected a row meeting the following criteria:

        best.fitness <- min(heeds.data$fitness.mean[heeds.data$eval.count == 10])
        best.row <- heeds.data[heeds.data$fitness.mean == best.fitness, ]

    Now, what I want are all of the other rows whose columns green.h.0 through offset.7 (a contiguous section of columns) equal those of best.row. Basically I'm looking for rows that share some of the conditions of the "best" row. I thought I could just do this:

        heeds.best <- heeds.data$fitness[heeds.data$green.h.0 == best.row$green.h.0 & ...]

    But with 24 columns it seems like a stupid method. Looking for something a bit simpler with less manual typing. Thanks!
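    A hedged sketch (assumes the block really is named green.h.*, green.v.* and offset.* as above): select the 24 columns by name once, then compare every row against best.row in a single apply.

        cols <- grep("^(green\\.[hv]\\.|offset\\.)", names(heeds.data), value = TRUE)
        same <- apply(heeds.data[cols], 1,
                      function(r) all(r == unlist(best.row[cols])))
        heeds.best <- heeds.data[same, ]

    Adding or removing columns then only means adjusting the pattern, not rewriting two dozen conditions.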

    Read the article

  • Datamapper Clone Record w/ New ID

    - by BouncePast
    class Item
      include DataMapper::Resource
      property :id,    Serial
      property :title, String
    end

    item = Item.new(:title => 'Title 1')  # :id => 1

    item_clone = Item.first(:id => 1).clone
    item_clone.save

    This does "clone" the object as described, but how can this be done so that it applies a different ID once the record is saved, e.g. #
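    A hedged alternative to clone (standard DataMapper calls, though untested here): copy the attribute hash, drop the key, and create a fresh resource so the Serial property hands out the next id.

        attrs = Item.first(:id => 1).attributes
        attrs.delete(:id)                 # let Serial assign a new id
        item_clone = Item.create(attrs)   # saved copy with its own id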

    Read the article

  • How can I ignore an http request without clearing the browser?

    - by Timid Developer
    To prevent duplicate requests (i.e. pressing F5 right after clicking a command button), I've set up my page base class to ignore a request if it's detected as a duplicate. When I say 'ignore' I mean Response.End(). Now I thought I've seen this work before: where there's an issue, I just Response.End() and the user's page does nothing. I don't know the exact circumstances in which this worked, but I'm unable to repeat it now. Now when I call Response.End(), I just get an empty browser. More specifically, I get this html:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <HTML><HEAD>
        <META http-equiv=Content-Type content="text/html; charset=utf-8"></HEAD>
        <BODY></BODY></HTML>

    I set up the following test app to confirm the problem is not elsewhere in my app. Add the following to an aspx form:

        <asp:Label ID="lbl" Text="0" runat="server" /><br />
        <asp:Button ID="btnAdd1" Text="Add 1" runat="server" /><br />
        <asp:Button ID="btnAdd2" Text="Add 2" runat="server" /><br />
        <asp:Button ID="btnAdd3" Text="Add 3" runat="server" /><br />

    And here's the code-behind file:

        using System;

        namespace TestDupRequestCancellation
        {
            public partial class _Default : System.Web.UI.Page
            {
                protected void Page_Init(object sender, EventArgs e)
                {
                    btnAdd1.Click += btnAdd1_Click;
                    btnAdd2.Click += btnAdd2_Click;
                    btnAdd3.Click += btnAdd3_Click;
                }

                protected void Page_Load(object sender, EventArgs e)
                {
                    if (!IsPostBack)
                        CurrentValue = 0;
                    else if (Int32.Parse(lbl.Text) != CurrentValue)
                        Response.End();
                }

                protected void Page_PreRender(object sender, EventArgs e)
                {
                    lbl.Text = CurrentValue.ToString();
                }

                protected int CurrentValue
                {
                    get { return Int32.Parse(Session["CurrentValue"].ToString()); }
                    set { Session["CurrentValue"] = value.ToString(); }
                }

                void btnAdd3_Click(object sender, EventArgs e) { CurrentValue += 3; }
                void btnAdd2_Click(object sender, EventArgs e) { CurrentValue += 2; }
                void btnAdd1_Click(object sender, EventArgs e) { CurrentValue += 1; }
            }
        }

    When you load the page, clicking any button does what is expected, but if you press F5 at any time after pressing one of the buttons, it will be detected as a duplicate request and Response.End() is called, which promptly ends the task and leaves the user with an empty browser. Is there any way to leave the user with the page as it was, so they can just click a button? Also, please note that this code is the simplest I could come up with to demonstrate the problem; it's not meant to demonstrate how to check for duplicate requests.
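    A hedged alternative (same test page assumed): instead of ending the response with nothing in it, answer the duplicate with a redirect back to the same URL, so the browser performs a fresh GET and simply redraws the current state.

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
                CurrentValue = 0;
            else if (Int32.Parse(lbl.Text) != CurrentValue)
                Response.Redirect(Request.RawUrl, true);  // redraw instead of blank page
        }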

    Read the article

  • BB Code Parser (in formatting phase) with jQuery jammed due to messed up loops most likely

    - by Oskar
    Greetings everyone, I'm making a BB code parser but I'm stuck on the JavaScript front. I'm using jQuery and the caret library for noting selections in a text field. When someone selects a piece of text, a div with formatting options appears. I have a few issues.

    Issue 1: How can I make this work for multiple text fields? I'm drawing a blank; it detects the text field correctly until it enters the $("#BBtoolBox a").mousedown(function() {} loop, after which it starts acting on one field after another in what looks to me like a random pattern.

    Issue 2 (the MAIN one, and I'm guessing it's the reason for issue 1 as well): When I press a formatting option, it works on the first action but not on the ones afterwards. It keeps duplicating the variable parsed. (If I stick to one field, it never prints in the second.)

    Issue 3: If you find anything especially ugly in the code, please tell me how to improve it. I appreciate all the help I can get. Thanks in advance.

        $(document).ready(function() {
            BBCP();
        });

        function BBCP(el) {
            if(!el) { el = "textarea"; }

            // Stores the cursor position of selection start
            $(el).mousedown(function(e) {
                coordX = e.pageX;
                coordY = e.pageY;
            // Event of selection finish by using keyboard
            }).keyup(function() {
                BBtoolBox(this, coordX, coordY);
            // Event of selection finish by using mouse
            }).mouseup(function() {
                BBtoolBox(this, coordX, coordY);
            // Event of field unfocus
            }).blur(function() {
                $("#BBtoolBox").hide();
            });
        }

        function BBtoolBox(el, coordX, coordY) {
            // Variable containing the selected text by Caret
            selection = $(el).caret().text;

            // Ignore the request if no text is selected
            if(selection.length == 0) {
                $("#BBtoolBox").hide();
                return;
            }

            // Print the toolbox
            if(!document.getElementById("BBtoolBox")) {
                $(el).before("<div id=\"BBtoolBox\" style=\"left: "+ ( coordX + 5 ) +"px; top: "+ ( coordY - 30 ) +"px;\"></div>");
                // List of actions
                $("#BBtoolBox").append("<a href=\"#\" onclick=\"return false\"><img src=\"./icons/text_bold.png\" alt=\"B\" title=\"Bold\" /></a>");
                $("#BBtoolBox").append("<a href=\"#\" onclick=\"return false\"><img src=\"./icons/text_italic.png\" alt=\"I\" title=\"Italic\" /></a>");
            } else {
                $("#BBtoolBox").css({'left': (coordX + 3) +'px', 'top': (coordY - 30) +'px'}).show();
            }

            // Parse the text according to the action requested
            $("#BBtoolBox a").mousedown(function() {
                switch($(this).children(":first").attr("alt")) {
                    case "B": // bold
                        parsed = "[b]"+ selection +"[/b]";
                        break;
                    case "I": // italic
                        parsed = "[i]"+ selection +"[/i]";
                        break;
                }
                // Changes the field value by replacing the selection with the variable parsed
                $(el).val($(el).caret().replace(parsed));
                $("#BBtoolBox").hide();
                return false;
            });
        }
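    A hedged diagnosis for issue 2 (and likely issue 1): BBtoolBox() binds a new mousedown handler to the toolbox links on every selection, and each click then runs every stale copy, each closed over whatever el and selection it was created with. Unbinding before re-binding keeps exactly one live handler:

        $("#BBtoolBox a").unbind("mousedown").mousedown(function() {
            var tag = $(this).children(":first").attr("alt") === "B" ? "b" : "i";
            var parsed = "[" + tag + "]" + selection + "[/" + tag + "]";  // var: avoid the implicit global
            $(el).val($(el).caret().replace(parsed));
            $("#BBtoolBox").hide();
            return false;
        });

    Declaring selection, coordX and coordY with var instead of leaking them as globals would also keep multiple textareas from cross-talking.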

    Read the article

  • How to find duplicate values in SQL Server

    - by hgulyan
    Hi, I'm using SQL Server 2008. I have a table Customers:

        customer_number   int
        field1            varchar
        field2            varchar
        field3            varchar
        field4            varchar

    ...and a lot more columns that don't matter for my queries. Column customer_number is the primary key. I'm trying to find duplicate values and some differences between them. Please help me find all rows that have:

    1) the same field1, field2, field3, field4
    2) only 3 of the columns equal and one different (except rows from list 1)
    3) only 2 of the columns equal and two different (except rows from lists 1 and 2)

    In the end I'll have 3 tables with these results, plus an additional groupId, which will be the same for a group of similar rows (for example, in the 3-columns-equal case, rows that share the same 3 column values will form a separate group). Thank you.
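    A hedged sketch for case 1 (exact duplicates across all four fields), which also hands out the requested groupId; cases 2 and 3 would repeat the idea per column combination:

        SELECT c.customer_number, c.field1, c.field2, c.field3, c.field4,
               DENSE_RANK() OVER (ORDER BY c.field1, c.field2, c.field3, c.field4) AS groupId
        FROM Customers AS c
        WHERE EXISTS (SELECT 1 FROM Customers AS d
                      WHERE d.field1 = c.field1 AND d.field2 = c.field2
                        AND d.field3 = c.field3 AND d.field4 = c.field4
                        AND d.customer_number <> c.customer_number);

    DENSE_RANK gives every fully-matching group one shared number, and the EXISTS clause keeps only rows that actually have a twin.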

    Read the article

  • How to list all duplicated rows which may include NULL columns?

    - by Yousui
    Hi guys, I have a problem listing duplicated rows that include NULL columns. Let me show my problem first.

        USE [tempdb];
        GO
        IF OBJECT_ID(N'dbo.t') IS NOT NULL
        BEGIN
            DROP TABLE dbo.t
        END
        GO
        CREATE TABLE dbo.t
        (
            a NVARCHAR(8),
            b NVARCHAR(8)
        );
        GO
        INSERT t VALUES ('a', 'b');
        INSERT t VALUES ('a', 'b');
        INSERT t VALUES ('a', 'b');
        INSERT t VALUES ('c', 'd');
        INSERT t VALUES ('c', 'd');
        INSERT t VALUES ('c', 'd');
        INSERT t VALUES ('c', 'd');
        INSERT t VALUES ('e', NULL);
        INSERT t VALUES (NULL, NULL);
        INSERT t VALUES (NULL, NULL);
        INSERT t VALUES (NULL, NULL);
        INSERT t VALUES (NULL, NULL);
        GO

    Now I want to show all rows that have other rows duplicated with them, so I use the following query:

        SELECT a, b
        FROM dbo.t
        GROUP BY a, b
        HAVING count(*) > 1

    which gives us the result:

        a        b
        -------- --------
        NULL     NULL
        a        b
        c        d

    Now if I want to list all rows that contribute to the duplication, I use this query:

        WITH duplicate (a, b) AS
        (
            SELECT a, b
            FROM dbo.t
            GROUP BY a, b
            HAVING count(*) > 1
        )
        SELECT dbo.t.a, dbo.t.b
        FROM dbo.t
        INNER JOIN duplicate ON (dbo.t.a = duplicate.a AND dbo.t.b = duplicate.b)

    which gives me the result:

        a        b
        -------- --------
        a        b
        a        b
        a        b
        c        d
        c        d
        c        d
        c        d

    As you can see, all rows that include NULLs are filtered out. The reason, I think, is that I use the equals sign to test the condition (dbo.t.a = duplicate.a AND dbo.t.b = duplicate.b), and NULLs cannot be compared using the equals sign. So, in order to include the NULL-containing rows in the final result, I changed the query to:

        WITH duplicate (a, b) AS
        (
            SELECT a, b
            FROM dbo.t
            GROUP BY a, b
            HAVING count(*) > 1
        )
        SELECT dbo.t.a, dbo.t.b
        FROM dbo.t
        INNER JOIN duplicate ON (dbo.t.a = duplicate.a AND dbo.t.b = duplicate.b)
            OR (dbo.t.a IS NULL AND duplicate.a IS NULL AND dbo.t.b = duplicate.b)
            OR (dbo.t.b IS NULL AND duplicate.b IS NULL AND dbo.t.a = duplicate.a)
            OR (dbo.t.a IS NULL AND duplicate.a IS NULL AND dbo.t.b IS NULL AND duplicate.b IS NULL)

    And this query gives me the answer I wanted:

        a        b
        -------- --------
        NULL     NULL
        NULL     NULL
        NULL     NULL
        NULL     NULL
        a        b
        a        b
        a        b
        c        d
        c        d
        c        d
        c        d

    Now my question: as you can see, this query involves just two columns, and already needs that many test conditions to include the NULL rows. As the column count increases, the number of conditions grows astonishingly. How can I solve this problem? Great thanks.
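    A hedged rewrite that sidesteps the combinatorial ON clause: in T-SQL, INTERSECT compares NULLs as equal, so a correlated EXISTS over an INTERSECT gives a NULL-safe "same row" test that extends to more columns by just listing them in both SELECTs.

        SELECT t1.a, t1.b
        FROM dbo.t AS t1
        WHERE (SELECT COUNT(*)
               FROM dbo.t AS t2
               WHERE EXISTS (SELECT t1.a, t1.b
                             INTERSECT
                             SELECT t2.a, t2.b)) > 1;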

    Read the article

  • Removing python and then re-installing on Mac OSX

    - by JudoWill
    I was wondering if anyone had tips on how to completely remove a Python installation from Mac OS X (10.5.8), including virtual environments and its related binaries. Over the past few years I've completely messed up the installed site-packages, virtual environments, etc., and the only way I can see to fix it is to uninstall everything and re-install. I'd like to completely redo everything and use virtualenv, pip, etc. from the beginning. On the other hand, if anyone knows a way to do this without removing Python and re-installing, I'd be happy to hear about it. Thanks, Will
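    For a python.org framework build, a hedged sketch of the usual removal steps (the version number is an assumption; adjust it to what ls /Library/Frameworks/Python.framework/Versions shows, and never touch the Apple-supplied Python under /System):

        sudo rm -rf /Library/Frameworks/Python.framework
        sudo rm -rf "/Applications/Python 2.6"
        # remove the now-dangling symlinks the installer put in /usr/local/bin
        sudo find /usr/local/bin -type l -lname '*Python.framework*' -delete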

    Read the article

  • query structure - ignoring entries for the same event from multiple users?

    - by Andrew Heath
    One table in my MySQL database tracks game plays. It has the following structure:

        SCENARIO_VICTORIES [ID] [scenario_id] [game] [timestamp] [user_id] [winning_side] [play_date]

    ID is the auto-incremented primary key. timestamp records the moment of submission for the record. winning_side has one of three possible values: 1, 2, or 0 (meaning a draw). One of the queries done on this table calculates the victory percentage for each scenario, when that scenario's page is viewed. The output is expressed as:

        Side 1 win %
        Side 2 win %
        Draw %

    and queried with:

        SELECT winning_side, COUNT(scenario_id)
        FROM scenario_victories
        WHERE scenario_id='$scenID'
        GROUP BY winning_side
        ORDER BY winning_side ASC

    and then processed into the percentages and such. Sorry for the long setup. My problem is this: several of my users play each other and record their mutual results, so these battles are doubly represented in the victory percentages and result counts. Though this happens infrequently, the user base isn't large and the double entries have a noticeable effect on the data. Given the table and query above, does anyone have any suggestions for how I can "collapse" records that have the same play_date & game & scenario_id & winning_side so that they're only counted once?
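    A hedged sketch of one such collapse (same column names as above): deduplicate on the four identifying columns in a derived table first, then count, so each mutually-reported battle contributes one row no matter who filed it.

        SELECT winning_side, COUNT(*) AS victories
        FROM (SELECT DISTINCT play_date, game, scenario_id, winning_side
              FROM scenario_victories
              WHERE scenario_id = '$scenID') AS collapsed
        GROUP BY winning_side
        ORDER BY winning_side ASC

    One caveat: two genuinely independent battles that happen to share all four values would also merge, so recording a shared match id at submission time would be the more robust long-term fix.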

    Read the article

  • Duplicated Label in add_menu_page

    - by Blackdream
    I have created a function for theme customization:

        function create_theme_option() {
            add_menu_page(
                'Manage Options',        // Page title
                'Theme Option',          // WP administrator menu title
                'manage_options',        // Capability
                'theme-options',         // Menu slug (link to a page in your administration area)
                'deploy_theme_options',  // Callback function name
                get_template_directory_uri() . '/Plugins/Background Changer/images/icons/icon.png',  // Menu icon
                99);

            add_submenu_page("theme-options", "Theme Settings", "Theme Settings", 1, "theme-settings", "theme_settings");
            add_submenu_page("theme-options", "Manage Header", "Manage Header", 1, "manage-header", "manage_header");
            add_submenu_page("theme-options", "Social Media", "Social Media Links", 1, "social-media", "social_media");
            add_submenu_page("theme-options", "Catalog Manager", "Catalog Manager", 1, "catalog-manager", "catalog_manager");
        }

    But I noticed that next to the "Theme Option" label, another entry with the same "Theme Option" text appears. (The screenshot of the duplicated menu entry referenced here is not available.) How can I fix this? Please help!
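    A hedged explanation of the standard WordPress behavior: add_menu_page() automatically repeats the top-level item as the first submenu entry. Registering your own first submenu with the parent's slug replaces that auto-generated duplicate:

        // First submenu entry reuses the parent slug ('theme-options'), which
        // overrides the auto-added "Theme Option" duplicate with a custom label.
        add_submenu_page('theme-options', 'Manage Options', 'Theme Settings',
                         'manage_options', 'theme-options', 'deploy_theme_options');

    The remaining add_submenu_page() calls can then follow with their own slugs as before.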

    Read the article
