Search Results

Search found 6392 results on 256 pages for 'reduce duplicate'.

Page 51/256 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • How do I de-duplicate a list of nodes in XSLT - and return the last node encountered?

    - by Broam
    I've seen lots of "de-duplicate this xml" questions but everyone wants the first node or the nodes are identical. I have a bit of a bigger puzzle. I have a list of articles in XML, a relevant snippet is shown: <item><key>Article1</key><stamp>100</stamp></item> <item><key>Article1</key><stamp>130</stamp></item> <item><key>Article2</key><stamp>800</stamp></item> <item><key>Article1</key><stamp>180</stamp></item> <item><key>Article3</key><stamp>900</stamp></item> <item><key>Article3</key><stamp>950</stamp></item> <item><key>Article4</key><stamp>990</stamp></item> <item><key>Article5</key><stamp>999</stamp></item> I'd like a list of nodes where the keys are unique and where the last instance is returned, not the first: Stamp (integer) is always increasing for elements of a particular key. Ideally I'd like "largest stamp" but they're always in order so the shortcut is ok. Desired result: (Order doesn't really matter.) <item><key>Article2</key><stamp>800</stamp></item> <item><key>Article1</key><stamp>180</stamp></item> <item><key>Article3</key><stamp>950</stamp></item> <item><key>Article4</key><stamp>990</stamp></item> <item><key>Article5</key><stamp>999</stamp></item> I'm somewhat confused on how to get this list. Any ideas? I'm using the Saxon processor if it matters.
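
    In case it helps frame the problem: the grouping itself is usually done in XSLT with xsl:for-each-group (XSLT 2.0, which recent Saxon versions support) or with Muenchian keys (XSLT 1.0), picking the item with the largest stamp per key. Purely as an illustration of that "keep the latest per key" logic - not as XSLT - here is a minimal sketch in Python, with the data values copied from the snippet above.

      # Illustrative sketch only (Python, not XSLT): keep, for each key, the item
      # with the largest stamp.
      items = [
          {"key": "Article1", "stamp": 100},
          {"key": "Article1", "stamp": 130},
          {"key": "Article2", "stamp": 800},
          {"key": "Article1", "stamp": 180},
          {"key": "Article3", "stamp": 900},
          {"key": "Article3", "stamp": 950},
          {"key": "Article4", "stamp": 990},
          {"key": "Article5", "stamp": 999},
      ]

      latest = {}
      for item in items:
          best = latest.get(item["key"])
          if best is None or item["stamp"] > best["stamp"]:
              latest[item["key"]] = item

      # Order doesn't matter, matching the desired result above.
      for item in latest.values():
          print(item)   # Article1/180, Article2/800, Article3/950, Article4/990, Article5/999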

    Read the article

  • BST insert operation. don't insert a node if a duplicate exists already

    - by jeev
    the following code reads an input array, and constructs a BST from it. if the current arr[i] is a duplicate, of a node in the tree, then arr[i] is discarded. count in the struct node refers to the number of times a number appears in the array. fi refers to the first index of the element found in the array. after the insertion, i am doing a post-order traversal of the tree and printing the data, count and index (in this order). the output i am getting when i run this code is: 0 0 7 0 0 6 thank you for your help. Jeev struct node{ int data; struct node *left; struct node *right; int fi; int count; }; struct node* binSearchTree(int arr[], int size); int setdata(struct node**node, int data, int index); void insert(int data, struct node **root, int index); void sortOnCount(struct node* root); void main(){ int arr[] = {2,5,2,8,5,6,8,8}; int size = sizeof(arr)/sizeof(arr[0]); struct node* temp = binSearchTree(arr, size); sortOnCount(temp); } struct node* binSearchTree(int arr[], int size){ struct node* root = (struct node*)malloc(sizeof(struct node)); if(!setdata(&root, arr[0], 0)) fprintf(stderr, "root couldn't be initialized"); int i = 1; for(;i<size;i++){ insert(arr[i], &root, i); } return root; } int setdata(struct node** nod, int data, int index){ if(*nod!=NULL){ (*nod)->fi = index; (*nod)->left = NULL; (*nod)->right = NULL; return 1; } return 0; } void insert(int data, struct node **root, int index){ struct node* new = (struct node*)malloc(sizeof(struct node)); setdata(&new, data, index); struct node** temp = root; while(1){ if(data<=(*temp)->data){ if((*temp)->left!=NULL) *temp=(*temp)->left; else{ (*temp)->left = new; break; } } else if(data>(*temp)->data){ if((*temp)->right!=NULL) *temp=(*temp)->right; else{ (*temp)->right = new; break; } } else{ (*temp)->count++; free(new); break; } } } void sortOnCount(struct node* root){ if(root!=NULL){ sortOnCount(root->left); sortOnCount(root->right); printf("%d %d %d\n", (root)->data, (root)->count, (root)->fi); } }
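
    Purely as a sketch of the intended behaviour (insert into a BST, and bump a count instead of adding a node when the value already exists), here is the same idea in Python. This is only an illustration of the algorithm, not a corrected version of the C above; note that it sets data and count on every new node, which the C setdata() never does.

      # Sketch (Python, not C): BST insert that counts duplicates instead of re-inserting.
      class Node:
          def __init__(self, data, index):
              self.data = data
              self.fi = index      # first index of this value in the input array
              self.count = 0       # extra occurrences beyond the first
              self.left = None
              self.right = None

      def insert(root, data, index):
          if root is None:
              return Node(data, index)
          if data < root.data:
              root.left = insert(root.left, data, index)
          elif data > root.data:
              root.right = insert(root.right, data, index)
          else:
              root.count += 1      # duplicate: count it, don't allocate a node
          return root

      def postorder(node):
          if node is None:
              return
          postorder(node.left)
          postorder(node.right)
          print(node.data, node.count, node.fi)

      root = None
      for i, value in enumerate([2, 5, 2, 8, 5, 6, 8, 8]):
          root = insert(root, value, i)
      postorder(root)   # prints: 6 0 5 / 8 2 3 / 5 1 1 / 2 1 0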

    Read the article

  • How can I remove rows with unique values? As in only keeping rows with duplicate values?

    - by user1456405
    Here's the conundrum, I'm a complete and utter noob when it comes to programming. I understand the basics, but am still learning javascript. I have a spreadsheet of surveys, in which I need to see how particular users have varied over time. As such, I need to disregard all rows with unique values in a particular column. The data looks like this: Response Date Response_ID Account_ID Q.1 10/20/2011 12:03:43 PM 23655956 1168161 8 10/20/2011 03:52:57 PM 23660161 1168152 0 10/21/2011 10:55:54 AM 23672903 1166121 7 10/23/2011 04:28:16 PM 23694471 1144756 9 10/25/2011 06:30:52 AM 23732674 1167449 7 10/25/2011 07:52:28 AM 23734597 1087618 5 I've found a way to do so in VBA, which sucks as I have to use excel, per below: Sub Del_Unique() Application.ScreenUpdating = False Columns("B:B").Insert Shift:=xlToRight Columns("A:A").Copy Destination:=Columns("B:B") i = Application.CountIf(Range("A:A"), "<>") + 50 If i > 65536 Then i = 65536 Do If Application.CountIf(Range("B:B"), Range("A" & i)) = 1 Then Rows(i).Delete End If i = i - 1 Loop Until i = 0 Columns("B:B").Delete Application.ScreenUpdating = True End Sub But that requires mucking about. I'd really like to do it in Google Spreadsheets with a script that won't have to be changed. Closest I can get is retrieving all duplicate user ids from the range, but can't associate that with the row. That code follows: function findDuplicatesInSelection() { var activeRange = SpreadsheetApp.getActiveRange(); var values = activeRange.getValues(); // values that appear at least once var once = {}; // values that appear at least twice var twice = {}; // values that appear at least twice, stored in a pretty fashion! var final = []; for (var i = 0; i < values.length; i++) { var inner = values[i]; for (var j = 0; j < inner.length; j++) { var cell = inner[j]; if (cell == "") continue; if (once.hasOwnProperty(cell)) { if (!twice.hasOwnProperty(cell)) { final.push(cell); } twice[cell] = 1; } else { once[cell] = 1; } } } if (final.length == 0) { Browser.msgBox("No duplicates found"); } else { Browser.msgBox("Duplicates are: " + final); } } Anyhow, sorry if this is the wrong place or format, but half of what I've found so far has been from stack, I thought it was a good place to start. Thanks!
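
    The generic shape of what the poster is after is a two-pass "count, then filter": count how many times each Account_ID occurs, then keep only the rows whose count is greater than one. Here is a minimal sketch of that idea in Python (the column position of Account_ID and the row layout are assumptions based on the sample above); the same two-pass structure carries over directly to an Apps Script function working on the array returned by getValues().

      # Sketch: keep only rows whose Account_ID appears more than once.
      # Column index 2 (Account_ID) is assumed from the sample layout above.
      from collections import Counter

      rows = [
          ["10/20/2011 12:03:43 PM", 23655956, 1168161, 8],
          ["10/20/2011 03:52:57 PM", 23660161, 1168152, 0],
          ["10/21/2011 10:55:54 AM", 23672903, 1166121, 7],
          ["10/23/2011 04:28:16 PM", 23694471, 1144756, 9],
          ["10/25/2011 06:30:52 AM", 23732674, 1167449, 7],
          ["10/25/2011 07:52:28 AM", 23734597, 1087618, 5],
      ]

      ACCOUNT_ID = 2
      counts = Counter(row[ACCOUNT_ID] for row in rows)                # pass 1: count accounts
      repeats = [row for row in rows if counts[row[ACCOUNT_ID]] > 1]   # pass 2: keep repeats
      print(repeats)   # empty for this sample, since every Account_ID occurs exactly once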

    Read the article

  • The PASS Board of Directors Q&A Session

    - by andyleonard
    Friday afternoon (18 Oct 2013), the PASS Board of Directors met with interested members of the SQL Server Community to answer questions. Paraphrases of some questions and notes I collected during the session follow (Please note: this is not a transcript): Elections Kendall Van Dyke asked about duplicate voting. The Board responded that they had looked into the matter and identified duplicate memberships based on names and addresses, but with different email addresses. After filtering for duplicate...(read more)

    Read the article

  • Installing PHP-GTK with PHP 5.3 on OS X

    - by Shabbyrobe
    I'm having trouble getting php-gtk installed with php 5.3 on os x. I'm currently using macports to do it and when I try to install php-gtk, it spews 'duplicate static' errors: Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_php_php5-gtk/work/php-gtk-2.0.1" && /usr/bin/make -j2 all " returned error 2 Command output: ext/gtk+/gen_pango.c:2951: error: duplicate 'static' ext/gtk+/gen_pango.c:2957: error: duplicate 'static' ext/gtk+/gen_pango.c:3097: error: duplicate 'static' ext/gtk+/gen_pango.c:3103: error: duplicate 'static' Is there a way to coerce it into building, or an alternative way to install it?

    Read the article

  • Validation Meta tag for Bing [closed]

    - by Yannis Dran
    Author's note: I did my research before posting; the old "duplicate" was asked over a year ago, and things have changed since then. In addition, it was a generic question, whereas mine targets only ONE search engine: Bing. The FAQ is not clear about how we should deal with these "duplicate" cases. The "duplicate" can be found here: Validation Meta tags for search engines. Should I remove the validation meta tag for the Bing search engine after I validate the website?

    Read the article

  • Combinationally unique MySQL tables

    - by Jack Webb-Heller
    So, here's the problem (it's probably an easy one :P) This is my table structure: CREATE TABLE `users_awards` ( `user_id` int(11) NOT NULL, `award_id` int(11) NOT NULL, `duplicate` int(11) NOT NULL DEFAULT '0', UNIQUE KEY `award_id` (`award_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 So it's for a user awards system. I don't want my users to be granted the same award multiple times, which is why I have a 'duplicate' field. The query I'm trying is this (with sample data of 3 and 2) : INSERT INTO users_awards (user_id, award_id) VALUES ('3','2') ON DUPLICATE KEY UPDATE duplicate=duplicate+1 So my MySQL is a little rusty, but I set user_id to be a primary key, and award_id to be a UNIQUE key. This (kind of) created the desired effect. When user 1 was given award 2, it entered. If he/she got this twice, only one row would be in the table, and duplicate would be set to 1. And again, 2, etc. When user 2 was given award 1, it entered. If he/she got this twice, duplicate updated, etc. etc. But when user 1 is given award 1 (after user 2 has already been awarded it), user 2 (with award 1)'s duplicate field increases and nothing is added to user 1. Sorry if that's a little n00bish. Really appreciate the help! Jack

    Read the article

  • Simplest way to flatten document to a view in RavenDB

    - by degorolls
    Given the following classes: public class Lookup { public string Code { get; set; } public string Name { get; set; } } public class DocA { public string Id { get; set; } public string Name { get; set; } public Lookup Currency { get; set; } } public class ViewA // Simply a flattened version of the doc { public string Id { get; set; } public string Name { get; set; } public string CurrencyName { get; set; } // View just gets the name of the currency } I can create an index that allows client to query the view as follows: public class A_View : AbstractIndexCreationTask<DocA, ViewA> { public A_View() { Map = docs => from doc in docs select new ViewA { Id = doc.Id, Name = doc.Name, CurrencyName = doc.Currency.Name }; Reduce = results => from result in results group on new ViewA { Id = result.Id, Name = result.Name, CurrencyName = result.CurrencyName } into g select new ViewA { Id = g.Key.Id, Name = g.Key.Name, CurrencyName = g.Key.CurrencyName }; } } This certainly works and produces the desired result of a view with the data transformed to the structure required at the client application. However, it is unworkably verbose, will be a maintenance nightmare and is probably fairly inefficient with all the redundant object construction. Is there a simpler way of creating an index with the required structure (ViewA) given a collection of documents (DocA)? FURTHER INFORMATION The issue appears to be that in order to have the index hold the data in the transformed structure (ViewA), we have to do a Reduce. It appears that a Reduce must have both a GROUP ON and a SELECT in order to work as expected so the following are not valid: INVALID REDUCE CLAUSE 1: Reduce = results => from result in results group on new ViewA { Id = result.Id, Name = result.Name, CurrencyName = result.CurrencyName } into g select g.Key; This produces: System.InvalidOperationException: Variable initializer select must have a lambda expression with an object create expression Clearly we need to have the 'select new'. INVALID REDUCE CLAUSE 2: Reduce = results => from result in results select new ViewA { Id = result.Id, Name = result.Name, CurrencyName = result.CurrencyName }; This prduces: System.InvalidCastException: Unable to cast object of type 'ICSharpCode.NRefactory.Ast.IdentifierExpression' to type 'ICSharpCode.NRefactory.Ast.InvocationExpression'. Clearly, we also need to have the 'group on new'. Thanks for any assistance you can provide. (Note: removing the type (ViewA) from the constructor calls has no effect on the above)

    Read the article

  • Correlation formula explanation needed d3.js

    - by divakar
    function getCorrelation(xArray, yArray) { alert(xArray); alert(yArray); function sum(m, v) {return m + v;} function sumSquares(m, v) {return m + v * v;} function filterNaN(m, v, i) {isNaN(v) ? null : m.push(i); return m;} // clean the data (because we know that some values are missing) var xNaN = _.reduce(xArray, filterNaN , []); var yNaN = _.reduce(yArray, filterNaN , []); var include = _.intersection(xNaN, yNaN); var fX = _.map(include, function(d) {return xArray[d];}); var fY = _.map(include, function(d) {return yArray[d];}); var sumX = _.reduce(fX, sum, 0); var sumY = _.reduce(fY, sum, 0); var sumX2 = _.reduce(fX, sumSquares, 0); var sumY2 = _.reduce(fY, sumSquares, 0); var sumXY = _.reduce(fX, function(m, v, i) {return m + v * fY[i];}, 0); var n = fX.length; var ntor = ( ( sumXY ) - ( sumX * sumY / n) ); var dtorX = sumX2 - ( sumX * sumX / n); var dtorY = sumY2 - ( sumY * sumY / n); var r = ntor / (Math.sqrt( dtorX * dtorY )); // Pearson ( http://www.stat.wmich.edu/s216/book/node122.html ) var m = ntor / dtorX; // y = mx + b var b = ( sumY - m * sumX ) / n; // console.log(r, m, b); return {r: r, m: m, b: b}; } I am finding the correlation between the points I plot using this function, which was not written by me. My xarray=[120,110,130,132,120,118,134,105,120,0,0,0,0,137,125,120,127,120,160,120,148] and yarray=[80,70,70,80,70,62,69,70,70,62,90,42,80,72,0,0,0,0,78,82,68,60,58,82,60,76,86,82,70] I can't understand the function completely. Can anybody explain it with the data I pasted here? I also want to remove the zeros from the calculation.
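
    For reference, the arithmetic is the standard "computational" form of Pearson's correlation plus a least-squares regression line. With n the number of paired values that survive the NaN filtering (ntor is the shared numerator, dtorX and dtorY the two denominators), the returned values are:

      r = \frac{\sum xy - \frac{(\sum x)(\sum y)}{n}}
               {\sqrt{\left(\sum x^2 - \frac{(\sum x)^2}{n}\right)
                      \left(\sum y^2 - \frac{(\sum y)^2}{n}\right)}}
      \qquad
      m = \frac{\sum xy - \frac{(\sum x)(\sum y)}{n}}{\sum x^2 - \frac{(\sum x)^2}{n}},
      \qquad
      b = \frac{\sum y - m\sum x}{n}

    so r is the Pearson correlation coefficient and y = mx + b is the fitted line. To drop the zero readings as well, the filterNaN-style reduce could be extended to also skip indices whose value is 0, so those pairs never enter the sums (that reading of "remove the zeros" is an assumption).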

    Read the article

  • ERP Customizations...Are your CEMLI’s Holding You Back?

    - by Di Seghposs
    Upgrading your Oracle applications can be an intimidating and nerve-racking experience depending on your organization’s level of customizations. Often, customizations have an ongoing effect on your organization, causing increased complexity, less flexibility, and additional maintenance cost. Organizations that reduce their dependency on customizations: reduce complexity by up to 50%; reduce the cost of future maintenance and upgrades; and create a foundation for easier enablement of new product functionality and business value. Oracle Consulting offers a complimentary service called Oracle CEMLI Benchmark and Analysis, which is an effective first step in evaluating your E-Business Suite application CEMLI complexity. The service helps your organization understand how many customizations you have and how you rank against your peer group, and it identifies target areas for customization reduction by providing a catalogue of customizations by object type, CEMLI ID or Project ID, and Business Process. Whether you’re currently deployed on-premise, on a managed private cloud, or considering a move to the cloud, understanding your customizations is critical as you begin an upgrade. Learn how you can reduce complexity and overall TCO with this informative screencast. For more information or to take advantage of this complimentary service today, contact Oracle Consulting directly at [email protected]

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO) – Your pre-build & tested middleware platform

    - by JuergenKress
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle’s Fusion Middleware 11g product suite that can help to reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day one deliverables and quick start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution. Accenture recently released AFPO v5 for use with their clients. Accenture added significant updates in v5, including Day 1 images & documentation for WebCenter & ADF Mobile that are integrated with 30 other Oracle Middleware products, significantly reducing the services effort needed to stand these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile. AFPO v5 also delivers a starter kit for Oracle SOA Suite which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-starts and streamlines Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and making architectural decisions, so clients can ultimately reduce cost, risk and the time needed for an implementation. You'll find more information at: Accenture's website: www.accenture.com/afpo YouTube AFPO Telestration: http://www.youtube.com/watch?v=_x429DcHEJs Press Release Brochure Contacts: [email protected] Patrick J Sullivan (Accenture – Global Oracle Technology Lead), [email protected] SOA & BPM Partner Community For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: AFPO,Accenture,middleware platform,oracle middleware,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO)

    - by Lionel Dubreuil
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle’s Fusion Middleware 11g product suite that can help to reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day one deliverables and quick start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution. Accenture recently released AFPO v5 for use with their clients. Accenture added significant updates in v5, including Day 1 images & documentation for WebCenter & ADF Mobile that are integrated with 30 other Oracle Middleware products, significantly reducing the services effort needed to stand these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile. AFPO v5 also delivers a starter kit for Oracle SOA Suite which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-starts and streamlines Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and making architectural decisions, so clients can ultimately reduce cost, risk and the time needed for an implementation. You'll find more information at: Accenture's website: www.accenture.com/afpo YouTube AFPO Telestration: http://www.youtube.com/watch?v=_x429DcHEJs Press Release Brochure Contacts: [email protected] Patrick J Sullivan (Accenture – Global Oracle Technology Lead), [email protected]

    Read the article

  • How to avoid using duplicate savepoint names in nested transactions in nested stored procs?

    - by Gary McGill
    I have a pattern that I almost always follow, where if I need to wrap up an operation in a transaction, I do this: BEGIN TRANSACTION SAVE TRANSACTION TX -- Stuff IF @error <> 0 ROLLBACK TRANSACTION TX COMMIT TRANSACTION That's served me well enough in the past, but after years of using this pattern (and copy-pasting the above code), I've suddenly discovered a flaw which comes as a complete shock. Quite often, I'll have a stored procedure calling other stored procedures, all of which use this same pattern. What I've discovered (to my cost) is that because I'm using the same savepoint name everywhere, I can get into a situation where my outer transaction is partially committed - precisely the opposite of the atomicity that I'm trying to achieve. I've put together an example that exhibits the problem. This is a single batch (no nested stored procs), and so it looks a little odd in that you probably wouldn't use the same savepoint name twice in the same batch, but my real-world scenario would be too confusing to post. CREATE TABLE Test (test INTEGER NOT NULL) BEGIN TRAN SAVE TRAN TX BEGIN TRAN SAVE TRAN TX INSERT INTO Test(test) VALUES (1) COMMIT TRAN TX BEGIN TRAN SAVE TRAN TX INSERT INTO Test(test) VALUES (2) COMMIT TRAN TX DELETE FROM Test ROLLBACK TRAN TX COMMIT TRAN TX SELECT * FROM Test DROP TABLE Test When I execute this, it lists one record, with value "1". In other words, even though I rolled back my outer transaction, a record was added to the table. What's happening is that the ROLLBACK TRANSACTION TX at the outer level is rolling back as far as the last SAVE TRANSACTION TX at the inner level. Now that I write this all out, I can see the logic behind it: the server is looking back through the log file, treating it as a linear stream of transactions; it doesn't understand the nesting/hierarchy implied by either the nesting of the transactions (or, in my real-world scenario, by the calls to other stored procedures). So, clearly, I need to start using unique savepoint names instead of blindly using "TX" everywhere. But - and this is where I finally get to the point - is there a way to do this in a copy-pastable way so that I can still use the same code everywhere? Can I auto-generate the savepoint name on the fly somehow? Is there a convention or best-practice for doing this sort of thing? It's not exactly hard to come up with a unique name every time you start a transaction (could base it off the SP name, or somesuch), but I do worry that eventually there would be a conflict - and you wouldn't know about it because rather than causing an error it just silently destroys your data... :-(

    Read the article

  • How do I programmatically duplicate UIViews built in interface builder?

    - by Cam Spiers
    I want to build a sub-UIView in Interface Builder (contained within a UIScrollView) with some UILabels inside it. I then want to programmatically "copy" this view and adjust its left position (by multiples of its width) to achieve a "floating left" effect inside the UIScrollView. I then want to add this copied UIView back into the UIScrollView with different text for the UILabels. The problem is I don't know how, or whether, you can "copy" UIViews.

    Read the article

  • Duplicate content in ASP.NET MVC because of custom routes MapRoute(), are areas the rescue?

    - by artvolk
    I use custom routes for my URLs, and my actions become accessible via two URLs (not counting trailing slashes and lower/upper case letters): one via my custom route /my-custom-route-url/ and one via the default /controller/action. I see one possible solution -- put all controllers that use default routing (they are mostly backend) in one area, and place all the others in a separate area used without the default route. Maybe there is a better way?

    Read the article

  • How can I duplicate the UI functionality of Google Fastflip?

    - by marcamillion
    Trying to get the same functionality as Google FastFlip. Not just with the thumbnails, but even for the larger images too. Notice how when you click on a story, you see the main story front and center, and you see smaller half-screens of previous and next stories. How can I get that functionality? Is there a plugin for one of the JS frameworks that does this already? I also love the speed and feel of flipping through the images.

    Read the article

  • removing duplicate strings from a massive array in java efficiently?

    - by Preator Darmatheon
    I'm considering the best possible way to remove duplicates from an (unsorted) array of strings - the array contains millions or tens of millions of strings. The array is already prepopulated, so the optimization goal is only on removing dups, not on preventing dups from being inserted in the first place! I was thinking along the lines of doing a sort and then a binary search to get a log(n) search instead of an n (linear) search. This would give me n log n + n searches, which, although better than an unsorted (n^2) search, still seems slow. (I was also considering hashing, but I'm not sure about the throughput.) Please help! I'm looking for an efficient solution that addresses both speed and memory, since there are millions of strings involved, without using the Collections API!
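
    Both approaches are simpler than they sound: after a sort the duplicates sit next to each other, so a single linear compaction pass is enough (no binary search needed), and in Java that sort-and-compact variant needs only Arrays.sort, which sidesteps the Collections API constraint. Here is a minimal sketch of the two shapes in Python (illustrative only; the trade-off is the same in Java: O(n log n) with almost no extra memory versus expected O(n) with a hash set roughly the size of the unique string set).

      # Sketch of the two usual approaches (Python for brevity).

      def dedup_by_sort(strings):
          # O(n log n): sort, then one linear pass keeping the first of each run.
          strings.sort()
          out = []
          for s in strings:
              if not out or s != out[-1]:
                  out.append(s)
          return out

      def dedup_by_hash(strings):
          # Expected O(n), but needs a hash set holding every unique string.
          seen = set()
          out = []
          for s in strings:
              if s not in seen:
                  seen.add(s)
                  out.append(s)
          return out

      print(dedup_by_sort(["b", "a", "b", "c", "a"]))   # ['a', 'b', 'c']
      print(dedup_by_hash(["b", "a", "b", "c", "a"]))   # ['b', 'a', 'c'] (keeps first-seen order)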

    Read the article

  • How to find a duplicate element in an array of shuffled consecutive integers?

    - by SysAdmin
    I recently came across a question somewhere: Suppose you have an array of 1001 integers. The integers are in random order, but you know each of the integers is between 1 and 1000 (inclusive). In addition, each number appears only once in the array, except for one number, which occurs twice. Assume that you can access each element of the array only once. Describe an algorithm to find the repeated number. If you used auxiliary storage in your algorithm, can you find an algorithm that does not require it? What I am interested in is the second part, i.e. without using auxiliary storage. Do you have any idea?
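
    For the second part, the usual trick is arithmetic rather than storage: the 1001 values sum to 1 + 2 + ... + 1000 plus the repeated number once more, so a single pass and O(1) extra space suffice. (XOR-ing every element together with the numbers 1..1000 gives the same answer and avoids any overflow worry.) A minimal sketch:

      # Sketch: the duplicate is the actual total minus the expected sum 1 + 2 + ... + n.
      def find_duplicate(arr, n=1000):
          return sum(arr) - n * (n + 1) // 2   # O(n) time, O(1) extra space

      print(find_duplicate(list(range(1, 1001)) + [417]))   # 417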

    Read the article

  • Is there a tool for detecting Visual Studio projects with duplicate GUIDs?

    - by sharptooth
    When creating new Visual Studio C++ projects there are two ways: either run the wizard and then painfully change all the necessary settings in the project, or just copy an existing project, rename everything there and add files into it. The second variant is great, except that the .vcproj file stores a project GUID. This GUID is used to track project dependencies and the startup project when two or more projects are in one solution. If any two projects in one solution have identical GUIDs, problems can arise - dependencies are lost and the startup project is reset on the next solution reload. Clearly there's a need for a tool that scans a filesystem subtree and detects projects with identical GUIDs. Before I start writing one ... is there a ready-made tool for that?
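
    If no ready-made tool turns up, a small script is enough to flag collisions. Here is a hedged sketch in Python that walks a directory tree, pulls the ProjectGUID attribute out of each .vcproj and reports any GUID shared by more than one project file (the .vcproj extension and the ProjectGUID="{...}" attribute are assumptions based on the classic Visual C++ project format; newer .vcxproj files store the GUID differently).

      # Sketch: report Visual Studio C++ projects that share a project GUID.
      # Assumes classic .vcproj files, where the GUID appears as ProjectGUID="{...}".
      import os, re
      from collections import defaultdict

      GUID_RE = re.compile(r'ProjectGUID\s*=\s*"(\{[0-9A-Fa-f-]+\})"')

      def find_duplicate_guids(root_dir):
          seen = defaultdict(list)                       # GUID -> list of project files
          for dirpath, _, filenames in os.walk(root_dir):
              for name in filenames:
                  if not name.lower().endswith(".vcproj"):
                      continue
                  path = os.path.join(dirpath, name)
                  with open(path, encoding="utf-8", errors="ignore") as f:
                      match = GUID_RE.search(f.read())
                  if match:
                      seen[match.group(1).upper()].append(path)
          return {guid: paths for guid, paths in seen.items() if len(paths) > 1}

      for guid, paths in find_duplicate_guids(".").items():
          print(guid, "is used by:")
          for path in paths:
              print("   ", path)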

    Read the article

  • How to duplicate the "title" of an <a> as the "alt" of its "img", using regex, manually?

    - by jitendra
    How do I add the title of a link as the alt of its img, using regex, in Dreamweaver? I have to do this in a large document, and in multiple files. Before: <a title="Whatever is written here" href="#" target="_blank"> <img width="14" height="14" src="#" /></a> After: <a title="Whatever is written here" href="#" target="_blank"> <img width="14" height="14" src="#" alt="Whatever is written here" /></a>
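
    The general shape of the substitution is: capture the title text and everything up to the end of the <img ...> attributes, then write it back with an alt added. Here is a sketch of that regex in Python (the pattern assumes the markup always looks exactly like the Before snippet - title comes first on the <a>, and the <img> has no existing alt). Dreamweaver's regex find/replace should accept a similar pattern, typically with $1/$2 as the backreferences in the replace field, though that part is untested here.

      # Sketch: copy each link's title into its img's alt attribute.
      # Assumes the markup matches the Before snippet (title first, no alt yet).
      import re

      pattern = re.compile(r'(<a\s+title="([^"]*)"[^>]*>\s*<img\b[^>]*?)\s*/>')

      def copy_title_to_alt(html):
          # \1 = everything from <a title="..." through the img attributes,
          # \2 = the title text itself, re-used as the alt value.
          return pattern.sub(r'\1 alt="\2" />', html)

      before = ('<a title="Whatever is written here" href="#" target="_blank"> '
                '<img width="14" height="14" src="#" /></a>')
      print(copy_title_to_alt(before))
      # ... <img width="14" height="14" src="#" alt="Whatever is written here" /></a>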

    Read the article

  • Insert into a generic dictionary with possibility of duplicate keys?

    - by Chris Clark
    Is there any reason to favor one of these approaches over the other when inserting into a generic dictionary with the possibility of a key conflict? I'm building an in-memory version of a static collection so in the case of a conflict it doesn't matter whether the old or new value is used. If Not mySettings.ContainsKey(key) Then mySettings.Add(key, Value) End If Versus mySettings(key) = Value And then of course there is this, which is obviously not the right approach: Try mySettings.Add(key, Value) Catch End Try Clearly the big difference here is that the first and second approaches actually do different things, but in my case it doesn't matter. It seems that the second approach is cleaner, but I'm curious if any of you .net gurus have any deeper insight. Thanks!

    Read the article

  • JPA DAO integration test not throwing exception when duplicate object saved?

    - by HDave
    I am in the process of unit testing a DAO built with Spring/JPA and Hibernate as the provider. Prior to running the test, DBUnit inserted a User record with username "poweruser" -- username is the primary key in the users table. Here is the integration test method: @Test @ExpectedException(EntityExistsException.class) public void save_UserTestDataSaveUserWithPreExistingId_EntityExistsException() { User newUser = new UserImpl("poweruser"); newUser.setEmail("[email protected]"); newUser.setFirstName("New"); newUser.setLastName("User"); newUser.setPassword("secret"); dao.persist(newUser); } I have verified that the record is in the database at the start of this method. Not sure if this is relevant, but if I do a dao.flush() at the end of this method I get the following exception: javax.persistence.PersistenceException: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update

    Read the article
