Search Results

Search found 695 results on 28 pages for 'deletes'.


  • Creating meaningful routes in wizard style ASP.NET MVC form

    - by R0MANARMY
    I apologize in advance for a long question; I figured it was better to have a bit more information than not enough. I'm working on an application with a fairly complex form (~100 fields on it). In order to make the UI a little more presentable, the fields are organized into regions and split across multiple (~10) tabs (not unlike this, but each tab does a submit/redirect to the next tab). This large input form can also be in one of 3 views (read only, editable, print friendly). The form represents a large domain object (let's call it Foo). I have a controller for said domain object (FooController). It makes sense to me to have the controller be responsible for all the CRUD related operations. Here are the problems I'm having trouble figuring out.

    Goals:

    1. I'd like to keep to conventions so that Foo/Create creates a new record, Foo/Delete deletes a record, Foo/Edit/{foo_id} takes you to the first tab of the form, etc.
    2. I'd like to be able to not repeat the data access code, so that Foo/Edit/{foo_id}/tab1, Foo/View/{foo_id}/tab1, Foo/Print/{foo_id}/tab1, etc. all use the same data access code to get the data and just specify which view to use to render it.

    My current implementation has a massive FooController with Create, Delete, Tab1, Tab2, etc. actions. Tab actions are split out into separate files for organization (using partial classes, which may or may not be abuse of partial classes). The problem I'm running into is how to organize my controller(s) and routes to make that happen. I have the default route:

        {controller}/{action}/{id}

    which handles goal 1 properly but doesn't quite play nice with goal 2. I tried to address goal 2 by defining extra routes like so:

        routes.MapRoute(
            "FooEdit", "Foo/Edit/{id}/{action}",
            new { controller = "Foo", action = "Tab1", mode = "Edit", id = (string)null }
        );
        routes.MapRoute(
            "FooView", "Foo/View/{id}/{action}",
            new { controller = "Foo", action = "Tab1", mode = "View", id = (string)null }
        );
        routes.MapRoute(
            "FooPrint", "Foo/Print/{id}/{action}",
            new { controller = "Foo", action = "Tab1", mode = "Print", id = (string)null }
        );

    However, defining these extra routes causes Url.Action to generate routes like Foo/Edit/Create instead of Foo/Create. That leads me to believe I designed something very, very wrong, but this is my first attempt at an ASP.NET MVC project and I don't know any better. Any advice with this particular situation would be awesome, but feedback on design in similar projects is welcome.
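
    A sketch of one direction (an assumption on my part, not from the post): constrain {id} on the mode routes so URLs like Foo/Create can't match them, which is typically why Url.Action starts emitting Foo/Edit/Create.

        routes.MapRoute(
            "FooEdit", "Foo/Edit/{id}/{action}",
            new { controller = "Foo", action = "Tab1", mode = "Edit" },
            new { id = @"\d+" }   // hypothetical constraint: this route only matches numeric ids
        );

    With the constraint in place, Url.Action("Create", "Foo") no longer satisfies FooEdit ("Create" is not numeric in the {id} slot), so URL generation falls through to the default {controller}/{action}/{id} route.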


  • How to remove lowercase sentence fragments from text?

    - by Aaron
    Hello: I'm trying to remove lowercase sentence fragments from standard text files using regular expressions or a simple Perl one-liner. These are commonly referred to as speech or attribution tags, for example: he said, she said, etc. This example shows before and after using manual deletion.

    Original:

        "Ah, that's perfectly true!" exclaimed Alyosha.
        "Oh, do leave off playing the fool! Some idiot comes in, and you put us to shame!" cried the girl by the window, suddenly turning to her father with a disdainful and contemptuous air.
        "Wait a little, Varvara!" cried her father, speaking peremptorily but looking at them quite approvingly. "That's her character," he said, addressing Alyosha again. "Where have you been?" he asked him.
        "I think," he said, "I've forgotten something... my handkerchief, I think.... Well, even if I've not forgotten anything, let me stay a little." He sat down. Father stood over him. "You sit down, too," said he.

    All lowercase sentence fragments manually removed:

        "Ah, that's perfectly true!"
        "Oh, do leave off playing the fool! Some idiot comes in, and you put us to shame!"
        "Wait a little, Varvara!" "That's her character," "Where have you been?"
        "I think," "I've forgotten something... my handkerchief, I think.... Well, even if I've not forgotten anything, let me stay a little." He sat down. Father stood over him. "You sit down, too,"

    I've changed straight quotes " to balanced and tried:

        ” (...)+[.]

    Of course, this removes some fragments but deletes some text in balanced quotes and text starting with uppercase letters. [^A-Z] didn't work in the above expression. I realize that it may be impossible to achieve 100% accuracy, but any useful expression, Perl, or Python script would be deeply appreciated. Cheers, Aaron
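
    A rough cut as a Perl one-liner (a sketch, nowhere near 100%): after a closing quote, eat a run that starts with a lowercase letter, up to the first sentence-ending punctuation, or up to the next opening quote for mid-sentence tags like “he said,”. It assumes the text already uses balanced quotes and is UTF-8.

        perl -CSD -pe 's/”\s+[a-z][^“”]*?(?:[.!?](?=\s|$)|(?=\s*“))/”/g' in.txt > out.txt

    On the sample above, this keeps sentences starting with an uppercase letter (He sat down. etc.) and strips the trailing attribution tags; constructs it can't see (tags spanning lines, quotes nested inside tags) will slip through.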


  • ExecuteNonQuery on a stored proc causes it to be deleted

    - by FinancialRadDeveloper
    This is a strange one. I have a Dev SQL Server which has the stored proc on it, and the same stored proc, when used with the same code against the UAT DB, causes it to delete itself! Has anyone heard of this behaviour?

    SQL code:

        -- Check if user is registered with the system
        IF OBJECT_ID('dbo.sp_is_valid_user') IS NOT NULL
        BEGIN
            DROP PROCEDURE dbo.sp_is_valid_user
            IF OBJECT_ID('dbo.sp_is_valid_user') IS NOT NULL
                PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_is_valid_user >>>'
            ELSE
                PRINT '<<< DROPPED PROCEDURE dbo.sp_is_valid_user >>>'
        END
        go

        create procedure dbo.sp_is_valid_user
            @username as varchar(20),
            @isvalid as int OUTPUT
        AS
        BEGIN
            declare @tmpuser as varchar(20)

            select @tmpuser = username from CPUserData where username = @username

            if @tmpuser = @username
            BEGIN
                select @isvalid = 1
            END
            else
            BEGIN
                select @isvalid = 0
            END
        END
        GO

    Usage example:

        DECLARE @isvalid int
        exec dbo.sp_is_valid_user 'username', @isvalid OUTPUT
        SELECT valid = @isvalid

    The usage example works all day... but when I access it via C#, the proc deletes itself on the UAT SQL DB and not the Dev one!

    C# code:

        public bool IsValidUser(string sUsername, ref string sErrMsg)
        {
            string sDBConn = ConfigurationSettings.AppSettings["StoredProcDBConnection"];
            SqlCommand sqlcmd = new SqlCommand();
            SqlDataAdapter sqlAdapter = new SqlDataAdapter();
            try
            {
                SqlConnection conn = new SqlConnection(sDBConn);
                sqlcmd.Connection = conn;
                conn.Open();
                sqlcmd.CommandType = CommandType.StoredProcedure;
                sqlcmd.CommandText = "sp_is_valid_user";
                // params to pass in
                sqlcmd.Parameters.AddWithValue("@username", sUsername);
                // param for checking success passed back out
                sqlcmd.Parameters.Add("@isvalid", SqlDbType.Int);
                sqlcmd.Parameters["@isvalid"].Direction = ParameterDirection.Output;
                sqlcmd.ExecuteNonQuery();
                int nIsValid = (int)sqlcmd.Parameters["@isvalid"].Value;
                if (nIsValid == 1)
                {
                    conn.Close();
                    sErrMsg = "User Valid";
                    return true;
                }
                else
                {
                    conn.Close();
                    sErrMsg = "Username : " + sUsername + " not found.";
                    return false;
                }
            }
            catch (Exception e)
            {
                sErrMsg = "Error :" + e.Source + " msg: " + e.Message;
                return false;
            }
        }


  • Populating fields in modal form using PHP, jQuery

    - by Benjamin
    I have a form that adds links to a database, deletes them, and -- soon -- allows the user to edit details. I am using jQuery and Ajax heavily on this project and would like to keep all control in the same page. In the past, to handle editing something like details about another website (link entry), I would have sent the user to another PHP page with form fields populated with PHP from a MySQL database table. How do I accomplish this using a jQuery UI modal form and calling the details individually for that particular entry? Here is what I have so far-

        <?php while ($linkDetails = mysql_fetch_assoc($getLinks)) {?>
            <div class="linkBox ui-corner-all" id="linkID<?php echo $linkDetails['id'];?>">
                <div class="linkHeader"><?php echo $linkDetails['title'];?></div>
                <div class="linkDescription"><p><?php echo $linkDetails['description'];?></p>
                <p><strong>Link:</strong><br/>
                <span class="link"><a href="<?php echo $linkDetails['url'];?>" target="_blank"><?php echo $linkDetails['url'];?></a></span></p></div>
                <p align="right">
                    <span class="control">
                        <span class="delete addButton ui-state-default">Delete</span>
                        <span class="edit addButton ui-state-default">Edit</span>
                    </span>
                </p>
            </div>
        <?php }?>

    And here is the jQuery that I am using to delete entries-

        $(".delete").click(function() {
            var parent = $(this).closest('div');
            var id = parent.attr('id');
            $("#delete-confirm").dialog({
                resizable: false,
                modal: true,
                title: 'Delete Link?',
                buttons: {
                    'Delete': function() {
                        var dataString = 'id=' + id;
                        $.ajax({
                            type: "POST",
                            url: "../includes/forms/delete_link.php",
                            data: dataString,
                            cache: false,
                            success: function() {
                                parent.fadeOut('slow');
                                $("#delete-confirm").dialog('close');
                            }
                        });
                    },
                    Cancel: function() {
                        $(this).dialog('close');
                    }
                }
            });
            return false;
        });

    Everything is working just fine, just need to find a solution to edit. Thanks!
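
    One way to wire up the edit (a sketch, not from the post - the get_link.php endpoint and the field/dialog ids are assumptions): have the Edit handler fetch that entry's row as JSON, fill the dialog's inputs, then open it.

        $(".edit").click(function() {
            var id = $(this).closest('div').attr('id').replace('linkID', '');
            // hypothetical endpoint that SELECTs one row and echoes json_encode($row)
            $.getJSON("../includes/forms/get_link.php", { id: id }, function(link) {
                $("#edit-title").val(link.title);
                $("#edit-url").val(link.url);
                $("#edit-description").val(link.description);
                $("#edit-form").dialog('open');  // assumes a dialog built with autoOpen: false
            });
            return false;
        });

    Saving then mirrors the delete handler: POST the edited values plus the id to an update script, and rewrite the .linkHeader/.linkDescription contents in the success callback.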


  • Should I allow sending complete structures when using PUT for updates in a REST API or not?

    - by dafmetal
    I am designing a REST API and I wonder what the recommended way to handle updates to resources would be. More specifically, I would allow updates through a PUT on the resource, but what should I allow in the body of the PUT request? Always the complete structure of the resource? Always the subpart (that changed) of the structure of the resource? A combination of both?

    For example, take the resource http://example.org/api/v1/dogs/packs/p1. A GET on this resource would give the following:

        Request:
        GET http://example.org/api/v1/dogs/packs/p1
        Accept: application/xml

        Response:
        <pack>
          <owner>David</owner>
          <dogs>
            <dog>
              <name>Woofer</name>
              <breed>Basset Hound</breed>
            </dog>
            <dog>
              <name>Mr. Bones</name>
              <breed>Basset Hound</breed>
            </dog>
          </dogs>
        </pack>

    Suppose I want to add a dog (Sniffers the Basset Hound) to the pack. Would I support either:

        Request:
        PUT http://example.org/api/v1/dogs/packs/p1

        <dog>
          <name>Sniffers</name>
          <breed>Basset Hound</breed>
        </dog>

        Response:
        HTTP/1.1 200 OK

    or:

        Request:
        PUT http://example.org/api/v1/dogs/packs/p1

        <pack>
          <owner>David</owner>
          <dogs>
            <dog>
              <name>Woofer</name>
              <breed>Basset Hound</breed>
            </dog>
            <dog>
              <name>Mr. Bones</name>
              <breed>Basset Hound</breed>
            </dog>
            <dog>
              <name>Sniffers</name>
              <breed>Basset Hound</breed>
            </dog>
          </dogs>
        </pack>

        Response:
        HTTP/1.1 200 OK

    or both? If supporting updates through subsections of the structure is recommended, how would I handle deletes (such as when a dog dies)? Through query parameters?
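
    One pattern worth noting alongside the two options above (a sketch, not from the original question): strict PUT semantics replace the whole resource, so partial changes usually go through POST to a sub-collection (or PATCH), which also gives deletes a natural shape. The /dogs sub-collection and the sniffers id are assumptions about how the pack resource could be decomposed:

        POST http://example.org/api/v1/dogs/packs/p1/dogs          (add Sniffers)

        <dog>
          <name>Sniffers</name>
          <breed>Basset Hound</breed>
        </dog>

        DELETE http://example.org/api/v1/dogs/packs/p1/dogs/sniffers   (a dog dies)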


  • What's the best way to do base36 arithmetic in perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following:

    - Operate on positive N-digit numbers in base 36 (e.g. digits are 0-9, A-Z), where N is finite, say 9.
    - Provide basic arithmetic, at the very least the following 3: addition (A+B), subtraction (A-B), and whole division, e.g. floor(A/B).

    Strictly speaking, I don't really need a base10 conversion ability - the numbers will 100% of the time be in base36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base10 and back", converting to binary, or some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only 3 considerations are:

    - It fits the minimum specifications above.
    - It's "standard". Currently we're using an old homegrown module based on base10 conversion done by hand that is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution instead of re-writing my own bicycle from scratch, but I'm perfectly capable of building it if no better standard possibility exists.
    - It must be fast-ish (though not lightning fast). Something that takes 1 second to sum up two 9-digit base36 numbers is worse than anything I can roll on my own :)

    P.S. Just to provide some context, in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in the DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimensions are big both depth- and breadth-wise. The tree is VERY actively updated (inserts and deletes and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is a 9-character string built of A-Z0-9 (e.g. a base36 number). Additional considerations:

    - It is required that the local order is unique per child (and obviously unique per parent).
    - Any complete re-ordering of a parent is somewhat expensive, and thus the implementation is to try and assign - for a parent with X children - orders which are somewhat evenly distributed between 0 and 36**10-1, so that almost no tree inserts result in a full re-ordering.
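
    For what it's worth, a sketch using Math::Base36 from CPAN (it does go through base10 internally, which the post allows; the sample values are made up, and 9-digit base36 values stay well under the 2**53 exact-integer limit of Perl's doubles):

        use Math::Base36 qw(encode_base36 decode_base36);

        my $a = decode_base36('00000A12Z');                # base36 string -> integer
        my $b = decode_base36('000000B07');

        my $sum  = sprintf '%09s', encode_base36($a + $b);          # left-pad to 9 digits
        my $diff = sprintf '%09s', encode_base36($a - $b);
        my $div  = sprintf '%09s', encode_base36(int($a / $b));     # floor(A/B) for positive values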


  • Change a UIButton View's background upon press

    - by Michael
    This should be easy but it's got me stumped. I've got a button on each row of a table cell. The button is to delete the file associated with that row. I have an image that indicates if the file is present on the iPhone or not present. When the user presses the button, the target action method (showDeleteSheet) calls a UIActionSheet with a variable to give it the indexPath of the row pressed. Then the user presses delete on the action sheet, and the action sheet's clickedButtonAtIndex method deletes the file.

    Now, I need to update the button's background image property to change the image in the appropriate table cell to the not-downloaded image. This can be done in either the button's target action method or the action sheet's clickedButtonAtIndex method. Here is some code. In the table's cellForRowAtIndexPath method I create the button:

        UIButton *deleteButton = [[UIButton buttonWithType:UIButtonTypeCustom] retain];
        [deleteButton addTarget:self action:@selector(showDeleteSheet:) forControlEvents:UIControlEventTouchDown];
        deleteButton.frame = CGRectMake(246, 26, 30, 30);
        if (![AppData isDownloaded:[dets objectForKey:@"fileName"]]) {
            [deleteButton setBackgroundImage:notdownloadedImage forState:UIControlStateNormal];
        } else {
            // note: both branches set notdownloadedImage as posted; the else
            // presumably should use the downloaded image
            [deleteButton setBackgroundImage:notdownloadedImage forState:UIControlStateNormal];
        }

    In the button's target method:

        -(void)showDeleteSheet:(id)sender {
            // get the cell row that the button is in
            NSIndexPath *indexPath = [table indexPathForCell:(UITableViewCell *)[[sender superview] superview]];
            NSInteger currentSection = indexPath.section;
            NSInteger currentIndexRow = indexPath.row;
            NSLog(@"section = %i row = %i", currentSection, currentIndexRow);
            UIButton *deleteButton = [UIButton buttonWithType:UIButtonTypeCustom];
            [table reloadData];
            // if deleteButton is declared in .h then the other instances of deleteButton
            // show 'local declaration hides instance variable'
            [deleteButton setBackgroundImage:[UIImage imageNamed:@"not-downloaded.png"] forState:UIControlStateNormal];
            UIActionSheet *actionSheet = [[UIActionSheet alloc] initWithTitle:@"Delete this file?"
                                                                     delegate:self
                                                            cancelButtonTitle:@"Cancel"
                                                       destructiveButtonTitle:@"Delete"
                                                            otherButtonTitles:nil];
            actionSheet.tag = currentIndexRow;
            actionSheet.actionSheetStyle = UIActionSheetStyleDefault;
            [actionSheet showInView:self.view];
            [actionSheet release];
        }

    Perhaps the issue is that the call to [deleteButton setBackgroundImage...]; doesn't know which cell's button should be updated; if so, I don't know how to tell it which. Because I test whether the file is downloaded when the button is made and the background image is set, when I scroll down so the cell in question is de-queued and then scroll back up, it shows the correct image. I've tried to force the table to reload, but [table reloadData]; is doing nothing. I've tried reloadRowsAtIndexPaths and still no good. Anyone care to educate me on how this is done?
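
    A sketch of the usual pattern (assumptions: section 0, and the file-deletion and bookkeeping calls are placeholders): rather than mutating the button directly, delete the file in the action sheet delegate, update whatever backs isDownloaded:, and reload just that row so cellForRowAtIndexPath: redraws it with the right image.

        - (void)actionSheet:(UIActionSheet *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex {
            if (buttonIndex == actionSheet.destructiveButtonIndex) {
                // the row was stashed in actionSheet.tag by showDeleteSheet:
                NSIndexPath *indexPath = [NSIndexPath indexPathForRow:actionSheet.tag inSection:0];
                // ... delete the file and update the model backing isDownloaded: here ...
                [table reloadRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                             withRowAnimation:UITableViewRowAnimationNone];
            }
        }

    reloadData alone won't help while the downloaded-state check still answers "downloaded", which is consistent with the cell only fixing itself after the model catches up.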


  • AD - DirectoryServices: VBNET2.0 - Speaking architecture...

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context:

    - I'm stuck with VB.NET 2005 and .NET Framework 2.0;
    - The application must use the Windows authenticated user to manage AD;
    - The objects I have to handle are Groups, Users and OrganizationalUnits;
    - I intend to use the Façade design pattern to provide ease of use and fully reusable code;
    - I plan to write a factory for each of the objects managed (group, OU, user);
    - The use of Attributes should be useful here, I guess;
    - As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types.

    Obligatory features:

    - User creates new OUs manually;
    - User creates new groups manually;
    - User creates new users (these users are service accounts) manually;
    - Application reads an XML file which contains the OUs, groups and users to create;
    - Application informs the user about the OUs, groups and users that shall be created;
    - User specifies the domain environment where to migrate the XML input file's designated objects;
    - User makes changes if needed, and launches the task operations;
    - Application performs the operations required by the XML input file against the underlying AD, as specified by the user;
    - Application informs the user upon completion.

    Linear features:

    - User fetches OUs, groups, users;
    - User changes OUs, groups, users;
    - User deletes OUs, groups, users;
    - The application logs AD entries and operations performed, plus errors and exceptions.

    Nice-to-have features:

    - Application rolls back operations on error or exception.

    I've been working for weeks now to get acquainted with AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and am always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users.

    Any suggestions? Thanks for any help, code samples, ideas, architectural solutions, everything!


  • AD-DirectoryServices: .NET2.0 - Speaking architecture, approach and best practices... Suggestions?

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context:

    - I'm stuck with VB.NET 2005 and .NET Framework 2.0;
    - The application must use the Windows authenticated user to manage AD;
    - The objects I have to handle are Groups, Users and OrganizationalUnits;
    - I intend to use the Façade design pattern to provide ease of use and fully reusable code;
    - I plan to write a factory for each of the objects managed (group, OU, user);
    - The use of Attributes should be useful here, I guess;
    - As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types.

    Obligatory features:

    - User creates new OUs manually;
    - User creates new groups manually;
    - User creates new users (these users are service accounts) manually;
    - Application reads an XML file which contains the OUs, groups and users to create;
    - Application informs the user about the OUs, groups and users that shall be created;
    - User specifies the domain environment where to migrate the XML input file's designated objects;
    - User makes changes if needed, and launches the task operations;
    - Application performs the operations required by the XML input file against the underlying AD, as specified by the user;
    - Application informs the user upon completion.

    Linear features:

    - User fetches OUs, groups, users;
    - User changes OUs, groups, users;
    - User deletes OUs, groups, users;
    - The application logs AD entries and operations performed, plus errors and exceptions.

    Nice-to-have features:

    - Application rolls back operations on error or exception.

    I've been working for weeks now to get acquainted with AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and am always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users.

    Where I am so far:

    - I have been able to add groups to the AD;
    - I have been able to add users to the AD;
    - The created user is automatically disabled?
    - I seem to get confused by the use of an LDAP path to add objects. For instance, one needs two instances of the System.DirectoryServices.DirectoryEntry class just to add a group. Why is this? (See the sketch below.)

    Any suggestions? Thanks for any help, code samples, ideas, architectural solutions, everything!
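
    On the two-DirectoryEntry point, a minimal sketch (the LDAP path and names are placeholders): one entry is bound to the parent container, and Children.Add() hands back the second entry representing the new child, which you then commit.

        Dim parent As New DirectoryEntry("LDAP://OU=Services,DC=example,DC=com")
        Dim group As DirectoryEntry = parent.Children.Add("CN=MyGroup", "group")
        group.Properties("sAMAccountName").Value = "MyGroup"
        group.CommitChanges()

    And on the "automatically disabled" question: yes - AD creates new user accounts disabled until a password is set and the userAccountControl flags are updated to clear the disabled bit.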


  • a question about rails general practice with REST, json, and ajax

    - by Nik
    Hi all, I have this question concerning REST, I think. I have read a few REST tutorials, and the feeling I get from them is that each action in a RESTful controller tends to be lean and almost single-purpose: index gives off a collection of a model, show gives off one model, edit/new are prep places for changing/creating a model, update/create change and make a new model, destroy removes one model.

    After reading all these tutorials, REST seems to be a means to create an interface for a model, much like an ActiveResource type of thing. The mantra seems to be "the controller provides data and data only, and is also pretty convention-over-configuration, so expect projects_path to return a bunch of projects". I can understand that, and I like the cleanliness. But here's where I run into some trouble applying these guidelines in reality.

    Say three models: Project with attribute title, User with attribute name, and Location with attribute address. Say in views/users/index.html.erb I want to use Ajax to fetch and display a project in a div#project_display when the user clicks on a project element. I know that I can use views/projects/show.js.rjs like this:

        page.replace_html 'project_display', "#{@project.name}"

    where in the projects_controller.rb:

        def show
          @project = Project.find(params[:id])
          respond_to do |format|
            format.js
            # ... and other formats
          end
        end

    I have no problem doing that; I've done it for a couple of years now. BUT doesn't that mean that my JS response for the project#show action is LOCKED into presenting data to the div#project_display element, showing only whatever that RJS template says it should show? That's very limiting and doesn't sound very "interface"-like.

    I have never used JSON before, or much XML, so I thought: maybe the JS response should send back raw stuff, like JSON, and somehow the page from which the Ajax request was made would hold the instructions on what to do with this raw data. That sounds a lot more flexible, doesn't it? Because look back at that example: what if, in views/locations/index.html.erb, I want to do the exact same thing, except I want to put the response in div#project_goes_here and the response should be #{project.name}? I know this is a trivial change, but that's the point: RJS only allows one template at a time.

    So I think the JSON route is the way to go, but how does the already loaded page - the one the Ajax call came from - know when or how to "look forward" to incoming data? I read that PrototypeJS has this template thing; I wouldn't mind using it with JSON, but if you can demonstrate this or other means for displaying received-from-ajax data, I am all attention. Thank You
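
    A sketch of the JSON approach (Rails 2-era syntax to match the post; element ids and the exact JSON shape are assumptions - depending on include_root_in_json, the payload may be nested under "project"): the controller returns raw data, and each page's own callback decides where to put it.

        # projects_controller.rb
        def show
          @project = Project.find(params[:id])
          respond_to do |format|
            format.html
            format.json { render :json => @project }
          end
        end

        // any page, with Prototype - this page decides the target element:
        new Ajax.Request('/projects/' + id + '.json', {
          method: 'get',
          onSuccess: function(response) {
            var project = response.responseText.evalJSON();
            $('project_display').update(project.name);
          }
        });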


  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this:

        drop table Data

        create table Data
        (
            Id BIGINT NOT NULL,
            Date BIGINT NOT NULL,
            Value BIGINT,
            constraint Cx primary key (Date, Id)
        )

        create nonclustered index Ix on Data (Id, Date)

    There are no updates to the table, ever. Deletes can occur, but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular, and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this:

        select top 1 Date, Value from Data
        where Id = @p0
        order by Date desc

    because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs.

    If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return:

    1. More than one row, even though I asked for TOP 1?
    2. No rows, even though one exists and has been committed?
    3. Worst of all, a row that doesn't satisfy the WHERE clause?

    I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend:

        Secondly, if you read dirty data, the risk you run is of reading the entirely
        wrong row. For example, if your select reads an index to find your row, then
        the update changes the location of the rows (e.g.: due to a page split or an
        update to the clustered index), when your select goes to read the actual data
        row, it's either no longer there, or a different row altogether!

    Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous.

    Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?
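
    For reference, the row-versioning switches being weighed against NOLOCK are plain T-SQL (MyDb is a placeholder name):

        ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;   -- opt-in per transaction via SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;    -- makes plain READ COMMITTED reads use versioning instead of shared locks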


  • Parsing language for both binary and character files

    - by Thorsten S.
    The problem: You have some data, and your program needs specified input - for example, strings which are numbers. You are searching for a way to transform the original data into the format you need. And the problem is: the source can be anything. It can be XML, property lists, or binary which contains the needed data deeply embedded in binary junk. And your output format may vary as well: it can be number strings, floats, doubles....

    You don't want to program. You want routines which give you commands capable of transforming the data into the form you wish. Surely it would contain regular expressions, but it would be very well designed and would offer capabilities which are sometimes much easier and more powerful. Something like a super-grep which you can access (!) as program routines, not only as a tool. It allows:

    - joining/grouping/merging of results
    - inserting/deleting/finding/replacing
    - writing macros which allow executing a command chain repeatedly
    - meta-grouping (lists - tables - hypertables)

    Example (no, I am not looking for a solution to this, it is just an example): You want to read XML strings embedded in a binary file with variable-length records. Your tool reads the record length and deletes the junk surrounding your text. Now it splits open the XML and extracts the strings. Being Indian number glyphs and containing decimal commas instead of decimal points, your tool transforms them into ASCII and replaces commas with points. Now the results must be stored into matrices of variable length... etc., etc.

    I am searching for a good language / language design and, if possible, an implementation. Which design do you like, or even - if it does not fulfill the conditions - wouldn't you want to miss?

    EDIT: The question is whether a solution for the problem exists and, if yes, which implementations are available. You DO NOT implement your own sorting algorithm if Quicksort, Mergesort and Heapsort are available. You DO NOT invent your own text parsing method if you have regular expressions. You DO NOT invent your own 3D language for graphics if OpenGL/Direct3D is available. There are existing solutions, or at least papers describing the problem and giving suggestions. And there are people who may have worked on and experienced such problems and who can give ideas and suggestions. The idea that this problem is totally new and I should work it out and implement it myself without background knowledge seems to me, I must admit, totally off the mark.


  • How can I tell JAXB / Maven to generate multiple schema packages?

    - by M.R.
    Example:

        </plugin>
        <plugin>
          <groupId>org.jvnet.jaxb2.maven2</groupId>
          <artifactId>maven-jaxb2-plugin</artifactId>
          <version>0.7.1</version>
          <executions>
            <execution>
              <goals>
                <goal>generate</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <schemaDirectory>src/main/resources/dir1</schemaDirectory>
            <schemaIncludes>
              <include>schema1.xsd</include>
            </schemaIncludes>
            <generatePackage>schema1.package</generatePackage>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.jvnet.jaxb2.maven2</groupId>
          <artifactId>maven-jaxb2-plugin</artifactId>
          <version>0.7.1</version>
          <executions>
            <execution>
              <goals>
                <goal>generate</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <schemaDirectory>src/main/resources/dir2</schemaDirectory>
            <schemaIncludes>
              <include>schema2.xsd</include>
            </schemaIncludes>
            <generatePackage>schema2.package</generatePackage>
          </configuration>
        </plugin>
      </plugins>

    What happened: Maven executes the first plugin, then deletes the target folder and creates the second package, which is then the only one visible. I tried to set target/somedir1 for the first configuration and target/somedir2 for the second configuration, but the behavior does not change. Any ideas? I do not want to generate the packages directly in the src/main/java folder, because these packages are generated and should not be mixed with manually created classes.
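
    A sketch of the usual fix (untested against 0.7.1; the execution ids and directories are placeholders): declare the plugin once with two <execution> entries, give each its own id, move <configuration> inside each execution, and point each at its own <generateDirectory> so the second run does not wipe out the first run's output.

        <plugin>
          <groupId>org.jvnet.jaxb2.maven2</groupId>
          <artifactId>maven-jaxb2-plugin</artifactId>
          <version>0.7.1</version>
          <executions>
            <execution>
              <id>schema1</id>
              <goals><goal>generate</goal></goals>
              <configuration>
                <schemaDirectory>src/main/resources/dir1</schemaDirectory>
                <schemaIncludes><include>schema1.xsd</include></schemaIncludes>
                <generatePackage>schema1.package</generatePackage>
                <generateDirectory>${project.build.directory}/generated-sources/xjc1</generateDirectory>
              </configuration>
            </execution>
            <execution>
              <id>schema2</id>
              <goals><goal>generate</goal></goals>
              <configuration>
                <schemaDirectory>src/main/resources/dir2</schemaDirectory>
                <schemaIncludes><include>schema2.xsd</include></schemaIncludes>
                <generatePackage>schema2.package</generatePackage>
                <generateDirectory>${project.build.directory}/generated-sources/xjc2</generateDirectory>
              </configuration>
            </execution>
          </executions>
        </plugin>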


  • How do I delete a [sub]hash based off of the keys/values of another hash?

    - by Zack
    Let's assume I have two hashes. One of them contains a set of data that only needs to keep things that show up in the other hash. E.g.:

        my %hash1 = (
            test1 => { inner1 => { more => "alpha", evenmore => "beta" } },
            test2 => { inner2 => { more => "charlie", somethingelse => "delta" } },
            test3 => { inner9999 => { ohlookmore => "golf", somethingelse => "foxtrot" } }
        );

        my %hash2 = (
            major => {
                test2 => "inner2",
                test3 => "inner3"
            }
        );

    What I would like to do is delete the whole subhash in %hash1 if it does not exist as a key/value in $hash2{major}, preferably without modules. The information contained in "innerX" does not matter; it merely must be left alone (unless the subhash is to be deleted, in which case it can go away). In the example above, after this operation is performed, %hash1 would look like:

        my %hash1 = (
            test2 => { inner2 => { more => "charlie", somethingelse => "delta" } },
        );

    It deletes $hash1{test1} and $hash1{test3} because they don't match anything in %hash2. Here's what I've currently tried, but it doesn't work. Nor is it probably the safest thing to do, since I'm looping over the hash while trying to delete from it; however, I'm deleting at the each, which should be okay? This was my attempt, but Perl complains about: Can't use string ("inner1") as a HASH ref while "strict refs" in use at

        while (my ($test, $inner) = each %hash1) {
            if (exists $hash2{major}{$test}{$inner}) {
                print "$test($inner) is in exists.\n";
            }
            else {
                print "Looks like $test($inner) does not exist, REMOVING.\n";
                # not too sure if $inner is needed to remove the whole entry
                delete ($hash1{$test}{$inner});
            }
        }
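
    The error comes from treating the string value ("inner1") as another hash level; comparing the value directly avoids that. A sketch (deleting the key returned by each is safe in Perl, and delete $hash1{$test} drops the whole subhash):

        while (my ($test, $inner_hash) = each %hash1) {
            my ($inner) = keys %$inner_hash;   # the single inner key, e.g. "inner1"
            if (exists $hash2{major}{$test} && $hash2{major}{$test} eq $inner) {
                print "$test($inner) exists.\n";
            }
            else {
                print "Looks like $test($inner) does not exist, REMOVING.\n";
                delete $hash1{$test};          # remove the whole subhash
            }
        }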


  • Optimize MySQL query (ngrams, COUNT(), GROUP BY, ORDER BY)

    - by Gerardo
    I have a database with thousands of companies and their locations. I have implemented n-grams to optimize search. I am making one query to retrieve all the companies that match the search query, and another one to get a list of their locations and the number of companies in each location. The query I am trying to optimize is the latter.

    Maybe the problem is this: every company ('anunciante') has a field ('estado') for logical deletes. So, if 'estado' equals 1, the company should be retrieved. When I run the EXPLAIN command, it shows that the query goes through almost 40k rows, while the actual result (the companies that really match) is 80. How can I optimize this?

    This is my query ('XXX' represents an n-gram from the search query; the IN subquery below is repeated five times, once per n-gram):

        SELECT provincias.provincia AS provincia, provincias.id, COUNT(*) AS cantidad
        FROM anunciantes
        JOIN anunciante_invertido AS a_i0 ON anunciantes.id = a_i0.id_anunciante
        JOIN indice_invertido AS indice0 ON a_i0.id_invertido = indice0.id
        LEFT OUTER JOIN domicilios ON anunciantes.id = domicilios.id_anunciante
        LEFT OUTER JOIN localidades ON domicilios.id_localidad = localidades.id
        LEFT OUTER JOIN provincias ON provincias.id = localidades.id_provincia
        WHERE anunciantes.estado = 1
        AND indice0.id IN (SELECT invertido_ngrama.id_palabra FROM invertido_ngrama
                           JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama
                           WHERE ngrama.ngrama = 'XXX')
        -- ... the same IN condition four more times, each with its own n-gram ...
        GROUP BY provincias.id
        ORDER BY cantidad DESC

    And this is the query explained (hope it can be read in this format):

        id | select_type        | table            | type   | possible_keys                      | key           | key_len | ref                                          | rows  | Extra
        1  | PRIMARY            | anunciantes      | ref    | PRIMARY,estado                     | estado        | 1       | const                                        | 36669 | Using index; Using temporary; Using filesort
        1  | PRIMARY            | domicilios       | ref    | id_anunciante                      | id_anunciante | 4       | db84771_viaempresas.anunciantes.id           | 1     |
        1  | PRIMARY            | localidades      | eq_ref | PRIMARY                            | PRIMARY       | 4       | db84771_viaempresas.domicilios.id_localidad  | 1     |
        1  | PRIMARY            | provincias       | eq_ref | PRIMARY                            | PRIMARY       | 4       | db84771_viaempresas.localidades.id_provincia | 1     |
        1  | PRIMARY            | a_i0             | ref    | PRIMARY,id_anunciante,id_invertido | PRIMARY       | 4       | db84771_viaempresas.anunciantes.id           | 1     | Using where; Using index
        1  | PRIMARY            | indice0          | eq_ref | PRIMARY                            | PRIMARY       | 4       | db84771_viaempresas.a_i0.id_invertido        | 1     | Using index
        6  | DEPENDENT SUBQUERY | ngrama           | const  | PRIMARY,ngrama                     | ngrama        | 5       | const                                        | 1     | Using index
        6  | DEPENDENT SUBQUERY | invertido_ngrama | eq_ref | PRIMARY,id_palabra,id_ngrama       | PRIMARY       | 8       | func,const                                   | 1     | Using index
        5  | DEPENDENT SUBQUERY | ngrama           | const  | PRIMARY,ngrama                     | ngrama        | 5       | const                                        | 1     | Using index
        5  | DEPENDENT SUBQUERY | invertido_ngrama | eq_ref | PRIMARY,id_palabra,id_ngrama       | PRIMARY       | 8       | func,const                                   | 1     | Using index
        4  | DEPENDENT SUBQUERY | ngrama           | const  | PRIMARY,ngrama                     | ngrama        | 5       | const                                        | 1     | Using index
        4  | DEPENDENT SUBQUERY | invertido_ngrama | eq_ref | PRIMARY,id_palabra,id_ngrama       | PRIMARY       | 8       | func,const                                   | 1     | Using index
        3  | DEPENDENT SUBQUERY | ngrama           | const  | PRIMARY,ngrama                     | ngrama        | 5       | const                                        | 1     | Using index
        3  | DEPENDENT SUBQUERY | invertido_ngrama | eq_ref | PRIMARY,id_palabra,id_ngrama       | PRIMARY       | 8       | func,const                                   | 1     | Using index
        2  | DEPENDENT SUBQUERY | ngrama           | const  | PRIMARY,ngrama                     | ngrama        | 5       | const                                        | 1     | Using index
        2  | DEPENDENT SUBQUERY | invertido_ngrama | eq_ref | PRIMARY,id_palabra,id_ngrama       | PRIMARY       | 8       | func,const                                   | 1     | Using index
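
    For what it's worth, one standard rewrite (a sketch; 'XX1'...'XX5' stand for the five n-grams) replaces the five dependent IN subqueries with a single condition that requires all five n-grams to match:

        AND indice0.id IN (
            SELECT i.id_palabra
            FROM invertido_ngrama AS i
            JOIN ngrama AS n ON n.id = i.id_ngrama
            WHERE n.ngrama IN ('XX1', 'XX2', 'XX3', 'XX4', 'XX5')
            GROUP BY i.id_palabra
            HAVING COUNT(DISTINCT n.ngrama) = 5
        )

    Whether this helps depends on the MySQL version; older 5.x optimizers may still run IN subqueries as dependent, in which case joining against the same SELECT as a derived table is the fallback.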


  • Problem with tcp server when converting to service

    - by djerry
    Hello lads, I'm working on monitoring some objects (CDR packets). I'm setting up a TCP server and am listening on port 50043 for the packets. The program as a console application works just fine: my server works like it should and I'm receiving the packets. When I try to use it as a service, I cannot seem to get a client connected to my server. Is there something I need to change to deploy this as a service? The code below is from my application. This is my service class, where I start:

        protected override void OnStart(string[] args)
        {
            server = new TcpServer();
            server.StartServer();
        }

    This is the constructor of TcpServer:

        public TcpServer()
        {
            try
            {
                _server = new TcpListener(IPAddress.Any, 50043);
            }
            catch (Exception)
            {
                _server = null;
            }
        }

    This is the method I call after initialising the class:

        public void StartServer()
        {
            if (_server != null)
            {
                // Create an ArrayList for storing SocketListeners before starting the server.
                _socketListenersList = new ArrayList();

                // Start the server and start the thread to listen for client requests.
                _server.Start();
                _serverThread = new Thread(new ThreadStart(ServerThreadStart));
                _serverThread.Start();

                // Create a low-priority thread that checks and deletes client
                // SocketConnection objects that are marked for deletion.
                _purgingThread = new Thread(new ThreadStart(PurgingThreadStart));
                _purgingThread.Priority = ThreadPriority.Lowest;
                _purgingThread.Start();
            }
        }

    This is the thread that keeps checking whether any client tries to connect:

        private void ServerThreadStart()
        {
            // Client socket variable
            Socket clientSocket = null;
            TcpSocketListener socketListener = null;
            while (!_stopServer)
            {
                try
                {
                    // Wait for any client requests; if there is a request from any
                    // client, accept it (wait indefinitely).
                    clientSocket = _server.AcceptSocket();

                    // Create a SocketListener object for the client.
                    socketListener = new TcpSocketListener(clientSocket);

                    // Add the socket listener to an array list in a thread-safe fashion.
                    lock (_socketListenersList)
                    {
                        _socketListenersList.Add(socketListener);
                    }

                    // Start communicating with the client in a different thread.
                    socketListener.StartSocketListener();
                }
                catch (SocketException se)
                {
                    _stopServer = true;
                }
            }
        }

    When a packet first waits to be read and I get to clientSocket = _server.AcceptSocket();, it throws an exception (in a service, which is not very debuggable). Does anyone recognize this problem or can help me? Thanks in advance
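
    A first debugging step that usually pays off (a sketch; "TcpServer" as the source name is arbitrary, and creating an event source needs admin rights once): log the actual SocketException to the Windows event log, since a service has no console to show it.

        // in ServerThreadStart, using System.Diagnostics
        catch (SocketException se)
        {
            if (!EventLog.SourceExists("TcpServer"))
                EventLog.CreateEventSource("TcpServer", "Application");
            EventLog.WriteEntry("TcpServer", se.ToString(), EventLogEntryType.Error);
            _stopServer = true;
        }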


  • Unable to add item to dataset in Linq to SQL

    - by Mike B
    I am having an issue adding an item to my dataset in Linq to SQL. I am using the exact same method on other tables with no problem. I suspect I know the problem but cannot find an answer (I also suspect all I really need is the right search term for Google). Please keep in mind this is a learning project (although it is in use in a business). I have posted my code and DataContext below.

    What I am doing: I create a view model (relevant bits are shown) and a simple WPF window that allows editing of 3 properties that are bound to the Category object. Category is from the DataContext. Edit works fine, but add does not. If I check GetChangeSet() just before the db.SubmitChanges() call, there are no adds, edits or deletes. I suspect an issue with the fact that a Category added without a Subcategory would be an orphan, but I cannot seem to find the solution.

    Command code to open the window:

        CategoryViewModel vm = new CategoryViewModel();
        AddEditCategoryWindow window = new AddEditCategoryWindow(vm);
        window.ShowDialog();

    ViewModel relevant stuff:

        public class CategoryViewModel : ViewModelBase
        {
            public Category category { get; set; }

            // Constructor used to Edit a Category
            public CategoryViewModel(Int16 categoryID)
            {
                db = new OITaskManagerDataContext();
                category = QueryCategory(categoryID);
            }

            // Constructor used to Add a Category
            public CategoryViewModel()
            {
                db = new OITaskManagerDataContext();
                category = new Category();
            }
        }

    The code for saving changes:

        // Don't close window unless all controls are validated
        if (!vm.IsValid(this)) return;
        var changes = vm.db.GetChangeSet(); // DEBUG
        try
        {
            vm.db.SubmitChanges(ConflictMode.ContinueOnConflict);
        }
        catch (ChangeConflictException)
        {
            vm.db.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
            vm.db.SubmitChanges();
        }

    The XAML (edited for brevity):

        <TextBox Text="{Binding category.CatName, Mode=TwoWay, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" />
        <TextBox Text="{Binding category.CatDescription, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" />
        <CheckBox IsChecked="{Binding category.CatIsInactive, Mode=TwoWay}" />

    IssCategory in the Issues table is the old, text-based category. This field is no longer used and will be removed from the database as soon as this is working and pushed live.
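
    One thing jumps out (a guess based on the posted constructors): a Category created with new Category() is never attached to the DataContext, so it never shows up in the change set. Linq to SQL needs an explicit InsertOnSubmit for new entities, along the lines of:

        // Constructor used to Add a Category
        public CategoryViewModel()
        {
            db = new OITaskManagerDataContext();
            category = new Category();
            db.Categories.InsertOnSubmit(category); // assumes the table property is named Categories
        }

    Edits work without this because entities returned by a query are already tracked by the context.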


  • A better way to delete a list of elements from multiple tables

    - by manyxcxi
    I know this looks like a 'please write the code' request, but some basic pointers/principles for doing this the right way should be enough to get me going. I have the following stored procedure:

        CREATE PROCEDURE `TAA`.`runClean` (IN idlist varchar(1000))
        BEGIN
            DECLARE EXIT HANDLER FOR NOT FOUND ROLLBACK;
            DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
            DECLARE EXIT HANDLER FOR SQLWARNING ROLLBACK;
            START TRANSACTION;
            DELETE FROM RunningReports WHERE run_id IN (idlist);
            DELETE FROM TMD_INDATA_INVOICE WHERE run_id IN (idlist);
            DELETE FROM TMD_INDATA_LINE WHERE run_id IN (idlist);
            DELETE FROM TMD_OUTDATA_INVOICE WHERE run_id IN (idlist);
            DELETE FROM TMD_OUTDATA_LINE WHERE run_id IN (idlist);
            DELETE FROM TMD_TEST WHERE run_id IN (idlist);
            DELETE FROM RunHistory WHERE id IN (idlist);
            COMMIT;
        END $$

    It is called by a PHP script to clean out old run history. It is not particularly efficient, as you can see, and I would like to speed it up. The PHP script gathers the ids to remove from the tables with the following query:

        $query = "SELECT id, stop_time FROM RunHistory
                  WHERE config_id = $configId AND save = 0
                  AND NOT(stop_time IS NULL)
                  ORDER BY stop_time";

    It keeps the last five run entries and deletes all the rest. So, using this query to bring back all the IDs, it determines which ones to delete and keeps the five 'newest'. After gathering the IDs, it sends them to the stored procedure to remove them from the associated tables.

    I'm not very good with SQL, but I ASSUME that using an IN statement and not joining these tables together is probably the least efficient way I can do this, though I don't know enough to ask anything but "how do I do this better?" If possible, I would like to do this all in my stored procedure, using a query to gather all the IDs except for the five 'newest', then delete them. Another twist: run entries can be marked save (save = 1) and should not be deleted. The RunHistory table looks like this:

        CREATE TABLE `TAA`.`RunHistory` (
            `id` int(11) NOT NULL auto_increment,
            `start_time` datetime default NULL,
            `stop_time` datetime default NULL,
            `config_id` int(11) NOT NULL,
            [...]
            `save` tinyint(1) NOT NULL default '0',
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
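
    A sketch of doing the selection inside SQL (MySQL; assumes config_id is passed in as a parameter): MySQL won't allow LIMIT directly in an IN subquery against the table being deleted from, so the keeper ids go through a derived table, which also sidesteps the "can't specify target table for update" error.

        DELETE FROM RunHistory
        WHERE config_id = p_config_id
          AND save = 0
          AND stop_time IS NOT NULL
          AND id NOT IN (
              SELECT id FROM (
                  SELECT id FROM RunHistory
                  WHERE config_id = p_config_id AND stop_time IS NOT NULL
                  ORDER BY stop_time DESC
                  LIMIT 5
              ) AS keepers
          );

    The same id set would still need to be applied to the TMD_ tables (or, better, declared as foreign keys with ON DELETE CASCADE so one delete fans out automatically).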


  • EXC_MEMORY_ACCESS when trying to delete from Core Data ($cash solution)

    - by llloydxmas
    I have an application that downloads an XML file, parses the file, and creates Core Data objects while doing so. In the parse code I have a method called emptyDataContext that removes all items from Core Data before creating replacement items from the XML data. This method looks like this:

        - (void)emptyDataContext
        {
            NSFetchRequest *allCon = [[NSFetchRequest alloc] init];
            [allCon setEntity:[NSEntityDescription entityForName:@"Condition" inManagedObjectContext:managedObjectContext]];

            NSError *error = nil;
            NSArray *conditions = [managedObjectContext executeFetchRequest:allCon error:&error];
            DebugLog(@"ERROR: %@", error);
            DebugLog(@"RETRIEVED: %@", conditions);
            [allCon release];

            for (NSManagedObject *condition in conditions) {
                [managedObjectContext deleteObject:condition];
            }

            // Update the data model, effectively removing the objects we removed above.
            //NSError *error;
            if (![managedObjectContext save:&error]) {
                DebugLog(@"%@", [error domain]);
            }
        }

    The first time this runs, it deletes all objects and functions as it should, creating new objects from the XML file. I created an 'update' button that starts the exact same process of retrieving the file and proceeding with the parse & build. All is well until it's time to delete the Core Data objects: the deleteObject call produces an "EXC_BAD_ACCESS" error each time, and only on the second time through. Captured errors return null. If I log the conditions array, I get a list of NSManagedObjects on the first run; on the second run, the log request crashes exactly as the deleteObject call does.

    I have a feeling it is something very simple I'm missing or not doing correctly that causes this behavior. The data works great in my table views - it's only when trying to update that I get the crashes. I have spent days & days on this, trying numerous alternative methods. What's left of my hair is falling out. I'd be willing to ante up some cash for anyone willing to look at my code and see what I'm doing wrong. Just need to get past this hurdle. Thanks in advance for the help!


  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented: MSSQL and MySQL. Now, when we use MSSQL (with Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

        CRecordset r(&m_db);
        r.Open(CRecordset::snapshot, L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref");
        r.MoveFirst();
        while (!r.IsEOF())
        {
            // Retrieve
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }

    With the MySQL driver, everything runs as it should: the query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext() the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using:

        ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER);

    But this didn't help either. There are still long-running execs of sp_cursorfetch() et al. in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time, too (even if there is only one record in the result set). This, however, only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query itself doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time.

    Question 1: Is it possible to somehow get the Native Client to "cache" all results locally, or use local cursors in a similar fashion to what the MySQL driver seems to do? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk to the server again until the next query. We don't care about recordset updates, deletes, etc. or any of that nonsense. We only want to retrieve data. We take that recordset, get all the data, and delete it.

    Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?
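
    One avenue worth trying (a sketch, untested against Native Client 10): open the recordset forward-only and read-only, which lets the driver use a default result set ("firehose" cursor) instead of server-side cursors, so rows stream down without a round trip per MoveNext().

        CRecordset r(&m_db);
        r.Open(CRecordset::forwardOnly,
               L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID = b.Ref",
               CRecordset::readOnly);
        while (!r.IsEOF())   // forward-only: no MoveFirst(), only MoveNext()
        {
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }

    CRecordset::snapshot, by contrast, is exactly the open type that forces the MSSQL driver onto sp_cursoropen/sp_cursorfetch, which matches what Profiler is showing.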


  • Sort a list of pointers.

    - by YuppieNetworking
    Hello all, once again I find myself failing at some really simple task in C++. Sometimes I wish I could de-learn all I know about OO from Java, since my problems usually start by thinking like Java. Anyway, I have a std::list<BaseObject*> that I want to sort. Let's say that BaseObject is:

        class BaseObject {
        protected:
            int id;
        public:
            BaseObject(int i) : id(i) {};
            virtual ~BaseObject() {};
        };

    I can sort the list of pointers to BaseObject with a comparator struct:

        struct Comparator {
            bool operator()(const BaseObject* o1, const BaseObject* o2) const {
                return o1->id < o2->id;
            }
        };

    And it would look like this:

        std::list<BaseObject*> mylist;
        mylist.push_back(new BaseObject(1));
        mylist.push_back(new BaseObject(2));
        // ...
        mylist.sort(Comparator());
        // intentionally omitted deletes and exception handling

    Up to here, everything is a-okay. However, I introduced some derived classes:

        class Child : public BaseObject {
        protected:
            int var;
        public:
            Child(int id1, int n) : BaseObject(id1), var(n) {};
            virtual ~Child() {};
        };

        class GrandChild : public Child {
        public:
            GrandChild(int id1, int n) : Child(id1, n) {};
            virtual ~GrandChild() {};
        };

    So now I would like to sort following these rules:

    1. For any Child object c and BaseObject b, b < c.
    2. To compare BaseObject objects, use their ids, as before.
    3. To compare Child objects, compare their vars. If they are equal, fall back to rule 2.
    4. GrandChild objects should fall back to the Child behavior (rule 3).

    I initially thought that I could probably do some casts in Comparator. However, this casts away constness. Then I thought that I could probably compare typeids, but then everything looked messy and it is not even correct. How could I implement this sort, still using list<BaseObject*>::sort? Thank you
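
    For what it's worth, dynamic_cast does not cast away constness - casting const BaseObject* to const Child* is fine. A sketch of a comparator implementing rules 1-4 (GrandChild falls out of rule 3 automatically because a GrandChild is a Child; like the post's Comparator, it assumes id/var are accessible, e.g. via friendship or getters):

        struct Comparator {
            bool operator()(const BaseObject* o1, const BaseObject* o2) const {
                const Child* c1 = dynamic_cast<const Child*>(o1);
                const Child* c2 = dynamic_cast<const Child*>(o2);
                if (c1 && !c2) return false;           // rule 1: plain BaseObject sorts before Child
                if (!c1 && c2) return true;
                if (c1 && c2 && c1->var != c2->var)    // rule 3: Children compare by var...
                    return c1->var < c2->var;
                return o1->id < o2->id;                // rule 2: ...falling back to id
            }
        };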


  • Authenticating users in iPhone app

    - by Myron
    I'm developing an HTTP API for our web application. Initially, the primary consumer of the API will be an iPhone app we're developing, but I'm designing this with future uses in mind (such as mobile apps for other platforms). I'm trying to decide on the best way to authenticate users so they can access their accounts from the iPhone. I've got a design that I think works well, but I'm no security expert, so I figured it would be good to ask for feedback here. The design of the user authentication has 3 primary goals:

    1. Good user experience: We want to allow users to enter their credentials once, and remain logged in indefinitely, until they explicitly log out. I would have considered OAuth if not for the fact that the experience from an iPhone app is pretty awful, from what I've heard (i.e. it launches the login form in Safari, then tells the user to return to the app when authentication succeeds).
    2. No need to store the user creds with the app: I always hate the idea of having the user's password stored either in plain text or symmetrically encrypted anywhere, so I don't want the app to have to store the password to pass it to the API for future API requests.
    3. Security: We definitely don't need the intense security of a banking app, but I'd obviously like this to be secure.

    Overall, the API is REST-inspired (i.e. treating URLs as resources, and using the HTTP methods and status codes semantically). Each request to the API must include two custom HTTP headers: an API key (unique to each client app) and a unique device ID. The API requires all requests to be made using HTTPS, so that the headers and body are encrypted.

    My plan is to have an api_sessions table in my database. It has a unique constraint on the API key and unique device ID (so that a device may only be logged into a single user account through a given app), as well as a foreign key to the users table. The API will have a login endpoint, which receives the username/password and, if they match an account, logs the user in, creating an api_sessions record for the given API key and device ID. Future API requests will look up the api_session using the API key and device ID and, if a record is found, treat the request as being logged in under the user account referenced by the api_session record. There will also be a logout API endpoint, which deletes the record from the api_sessions table. Does anyone see any obvious security holes in this?


  • Why is Dictionary.First() so slow?

    - by Rotsor
    Not a real question, because I already found out the answer, but still an interesting thing. I always thought that a hash table is the fastest associative container if you hash properly. However, the following code is terribly slow. It executes only about 1 million iterations and takes more than 2 minutes of time on a Core 2 CPU.

    The code does the following: it maintains the collection todo of items it needs to process. At each iteration it takes an item from this collection (doesn't matter which item), deletes it, processes it if it wasn't processed (possibly adding more items to process), and repeats this until there are no items to process. The culprit seems to be the Dictionary.Keys.First() operation. The question is: why is it slow?

        Stopwatch watch = new Stopwatch();
        watch.Start();
        HashSet<int> processed = new HashSet<int>();
        Dictionary<int, int> todo = new Dictionary<int, int>();
        todo.Add(1, 1);
        int iterations = 0;
        int limit = 500000;
        while (todo.Count > 0)
        {
            iterations++;
            var key = todo.Keys.First();
            var value = todo[key];
            todo.Remove(key);
            if (!processed.Contains(key))
            {
                processed.Add(key);
                // process item here
                if (key < limit)
                {
                    todo[key + 13] = value + 1;
                    todo[key + 7] = value + 1;
                } // doesn't matter much how
            }
        }
        Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed);

    This results in:

        Iterations: 923007; Time: 00:02:09.8414388.

    Simply changing Dictionary to SortedDictionary yields:

        Iterations: 499976; Time: 00:00:00.4451514.

    That's 300 times faster, while having only 2 times fewer iterations. The same happens in Java, using HashMap instead of Dictionary and keySet().iterator().next() instead of Keys.First().
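
    The slowness makes sense once you look at the internals (my understanding, not from official docs): Dictionary never compacts its entry array on Remove, and Keys.First() starts a fresh enumeration from slot 0 on every call, scanning over the free slots left by earlier removals - so each "take any item" costs O(high-water mark), which goes quadratic here. A structure with an O(1) "take any" avoids it, a sketch:

        var processed = new HashSet<int>();
        var todo = new Stack<KeyValuePair<int, int>>();  // any take-order works for this algorithm
        todo.Push(new KeyValuePair<int, int>(1, 1));
        int limit = 500000;
        while (todo.Count > 0)
        {
            var item = todo.Pop();                        // O(1), unlike Keys.First()
            if (processed.Add(item.Key) && item.Key < limit)  // Add returns false if already seen
            {
                todo.Push(new KeyValuePair<int, int>(item.Key + 13, item.Value + 1));
                todo.Push(new KeyValuePair<int, int>(item.Key + 7, item.Value + 1));
            }
        }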


  • Delete duplicate rows, do not preserve one row

    - by Radley
    I need a query that goes through each entry in a database, checks if a single value is duplicated elsewhere in the database, and if it is, deletes both entries (or all, if more than two). The problem is that the entries are URLs, up to 255 characters, with no way of identifying the row. Some existing answers on Stackoverflow do not work for me due to performance limitations, or they use uniqueid, which obviously won't work when dealing with a string.

    Long version: I have two databases containing URLs (and only URLs). One database has around 3,000 urls and the other around 1,000. However, a large majority of the 1,000 urls were taken from the 3,000-url database. I need to merge the 1,000 into the 3,000 as new entries only. For this, I made a third database with combined URLs from both tables, about 4,000 entries. I need to find all duplicate entries in this database and delete them (both of them, without leaving either). I have followed the queries of a few examples on this site, but whenever I try to delete both entries it ends up deleting all the entries, or giving SQL errors.

    Alternatively: I have two databases, each containing a separate list. I need to check each row from one database against the other to find any that aren't duplicates, and then add those to a third database.

    Edit: I've got my own PHP solution, which is pretty hacky but works. I cannot answer my own question for 8 hours because I'm new, so here it is for now. I went with a PHP script to accomplish this, as I'm more familiar with PHP than MySQL. This generates a simple list of urls that only exist in the target database, not both. If you have more than 7,000 entries to parse, this may take a while, and you will need to copy/paste the results into a text file or expand the script to store them back into a database. I'm just doing it manually to save time. Note: uses MeekroDB.

        <?php
        require('meekrodb.2.1.class.php');
        DB::$user = 'root';
        DB::$password = '';
        DB::$dbName = 'testdb';

        $all = DB::query('SELECT * FROM old_urls LIMIT 7000');
        foreach ($all as $row) {
            $test = DB::query('SELECT url FROM new_urls WHERE url=%s', $row['url']);
            if (!is_array($test)) {
                echo $row['url'] . "\n";
            } else {
                if (count($test) == 0) {
                    echo $row['url'] . "\n";
                }
            }
        }
        ?>
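
    For the record, the "delete every copy" version can be done in one MySQL statement (a sketch; the combined table and url column names follow the description): collect the urls that occur more than once in a derived table, then run a multi-table delete against it, which sidesteps the usual "can't specify target table" error.

        DELETE c
        FROM combined AS c
        JOIN (
            SELECT url
            FROM combined
            GROUP BY url
            HAVING COUNT(*) > 1
        ) AS dupes ON c.url = dupes.url;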


  • Improving performance for WRITE operation on Oracle DB in Java

    - by Lucky
    I've a typical scenario and need to understand the best possible way to handle it, so here it goes. I'm developing a solution that will retrieve data from a remote SOAP-based web service and will then push this data to an Oracle database on the network. Also, this will be a scheduled task that executes every 15 minutes. I have event queues on the remote service that contain the INSERT/UPDATE/DELETE operations that have been done since the last retrieval, and once I retrieve the events for the last 15 minutes, it again accumulates events for the next retrieval.

    Now, it's just pushing data to Oracle, so all my interactions are INSERT and UPDATE statements. There are around 60 tables in Oracle, with some of them having 100+ columns. Moreover, for every 15-minute cycle there would be around 60-70 inserts, 100+ updates and 10-20 deletes. This will be an executable jar file that terminates after the operation and starts again on the next 15-minute cycle. So, I need to understand how I should handle WRITE operations (best practices) to improve performance for this application as a whole.

    Current test code (on every cycle):

    1. Connects to the remote service to get events.
    2. Creates a connection with the DB (single connection object).
    3. Identifies the type of operation (INSERT/UPDATE/DELETE) and the table on which it is done.
    4. After the above, calls the respective method based on the type of operation and table.
    5. Uses a PreparedStatement with positional parameters, and retrieves each column value from the remote service and assigns it to the statement parameters.
    6. Commits the statement and returns to the get-event class to process the next event.

    The above is repeated until all the retrieved events are processed, after which the program closes and then starts again on the next cycle, and everything repeats. Thanks for help!
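
    A sketch of the standard levers for this shape of workload (plain JDBC; the table, column and Event names are placeholders): turn off auto-commit, reuse one PreparedStatement per statement shape, and batch the writes so the ~200 statements per cycle travel in a handful of round trips with one commit per cycle instead of one per row.

        Connection conn = DriverManager.getConnection(url, user, pass);
        conn.setAutoCommit(false);                      // commit once per cycle, not per statement
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO SOME_TABLE (ID, COL_A, COL_B) VALUES (?, ?, ?)");
        for (Event e : insertEvents) {                  // hypothetical list of INSERT events
            ps.setLong(1, e.getId());
            ps.setString(2, e.getColA());
            ps.setString(3, e.getColB());
            ps.addBatch();
        }
        ps.executeBatch();                              // one round trip for the whole batch
        conn.commit();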

