Search Results

Search found 908 results on 37 pages for 'cascading deletes'.

Page 32/37 | < Previous Page | 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • Many-to-Many Relationship mapping does not trigger the EventListener OnPostInsert or OnPostDelete Events

    - by san
    I'm doing my auditing using the event listeners that NHibernate provides. It works fine for all mappings apart from HasManyToMany. My mappings are as follows:

        Table("Order");
        Id(x => x.Id, "OrderId");
        Map(x => x.Name, "OrderName").Length(150).Not.Nullable();
        Map(x => x.Description, "OrderDescription").Length(800).Not.Nullable();
        Map(x => x.CreatedOn).Not.Nullable();
        Map(x => x.CreatedBy).Length(70).Not.Nullable();
        Map(x => x.UpdatedOn).Not.Nullable();
        Map(x => x.UpdatedBy).Length(70).Not.Nullable();
        HasManyToMany(x => x.Products)
            .Table("OrderProduct")
            .ParentKeyColumn("OrderId")
            .ChildKeyColumn("ProductId")
            .Cascade.None()
            .Inverse()
            .AsSet();

        Table("Product");
        Id(x => x.Id, "ProductId");
        Map(x => x.ProductName).Length(150).Not.Nullable();
        Map(x => x.ProductDescription).Length(800).Not.Nullable();
        Map(x => x.Amount).Not.Nullable();
        Map(x => x.CreatedOn).Not.Nullable();
        Map(x => x.CreatedBy).Length(70).Not.Nullable();
        Map(x => x.UpdatedOn).Not.Nullable();
        Map(x => x.UpdatedBy).Length(70).Not.Nullable();
        HasManyToMany(x => x.Orders)
            .Table("OrderProduct")
            .ParentKeyColumn("ProductId")
            .ChildKeyColumn("OrderId")
            .Cascade.None()
            .AsSet();

    Whenever I update an order (e.g. change the OrderDescription and delete one of the products associated with it), the Order table is updated and the row in the OrderProduct table is deleted, as expected. The event listener I have associated with it captures the update of the Order table but does NOT capture the delete event when the OrderProduct row is deleted. This behaviour is observed only for many-to-many mapped relationships. Since I would also like to audit the OrderProduct deletion, it is an annoyance that the event listeners cannot capture the delete event. Any information about it would be greatly appreciated.
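    One possible direction, sketched rather than verified: in NHibernate, rows added to or removed from a many-to-many link table are collection events, not entity inserts/deletes, so they never reach IPostInsertEventListener/IPostDeleteEventListener; the collection listener interfaces should see them instead. AuditLog below is a hypothetical helper.

        using NHibernate.Event;

        // Fires for changes to mapped collections, e.g. the OrderProduct
        // link set behind HasManyToMany.
        public class CollectionAuditListener :
            IPostCollectionUpdateEventListener,
            IPostCollectionRemoveEventListener
        {
            public void OnPostUpdateCollection(PostCollectionUpdateEvent e)
            {
                // e.AffectedOwnerOrNull is the entity that owns the collection
                AuditLog.Record("collection-updated", e.AffectedOwnerOrNull);
            }

            public void OnPostRemoveCollection(PostCollectionRemoveEvent e)
            {
                AuditLog.Record("collection-removed", e.AffectedOwnerOrNull);
            }
        }

        // Registered while building the session factory, roughly:
        // cfg.EventListeners.PostCollectionUpdateEventListeners =
        //     new IPostCollectionUpdateEventListener[] { new CollectionAuditListener() };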


  • Bottom button bar overlaps the last element of ListView

    - by elto
    I have a ListView which is part of an Activity. I want the user to have a choice for batch deleting the items in the ListView, so when he chooses the corresponding option from the menu, every list item gets a checkbox next to it. When the user clicks any checkbox, a button bar slides up from the bottom (as in the Gmail app); clicking the delete button deletes the selected items, while clicking the cancel button on the bar unchecks all the checked items. This is my page layout.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:background="@android:color/transparent" >
            <FrameLayout
                android:layout_width="fill_parent"
                android:layout_height="fill_parent" >
                <LinearLayout
                    android:id="@+id/list_area"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent"
                    android:layout_weight="1" >
                    <ListView
                        android:id="@+id/mylist"
                        android:layout_width="fill_parent"
                        android:layout_height="wrap_content"
                        android:background="@android:color/transparent"
                        android:drawSelectorOnTop="false"
                        android:layout_weight="1" />
                    <TextView
                        android:id="@+id/empty_list_message"
                        android:layout_width="fill_parent"
                        android:layout_height="wrap_content"
                        android:textColor="#FFFFFF"
                        android:layout_gravity="center_vertical|center_horizontal"
                        android:text="@string/msg_for_emptyschd"
                        android:layout_margin="14dip"
                        android:layout_weight="1" />
                </LinearLayout>
                <RelativeLayout
                    android:id="@+id/bottom_action_bar"
                    android:orientation="horizontal"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:background="@drawable/schedule_bottom_actionbar_border"
                    android:layout_marginBottom="2dip"
                    android:layout_gravity="bottom"
                    android:visibility="gone" >
                    <Button
                        android:id="@+id/delete_selecteditems_button"
                        android:text="Delete Selected"
                        android:layout_width="140dip"
                        android:layout_height="40dip"
                        android:layout_alignParentLeft="true"
                        android:layout_marginLeft="3dip"
                        android:layout_marginTop="3dip" />
                    <Button
                        android:id="@+id/cancel_button"
                        android:text="Cancel"
                        android:layout_width="140dip"
                        android:layout_height="40dip"
                        android:layout_alignParentRight="true"
                        android:layout_marginRight="3dip"
                        android:layout_marginTop="3dip" />
                </RelativeLayout>
            </FrameLayout>
        </LinearLayout>

    So far I have got everything working, except that when the bottom bar becomes visible upon checkbox selection, it overlaps the last element of the list. All other list items can be scrolled up, but you can't scroll up the very last item of the list, so the user cannot select that item if he intends to. Here is the screenshot of the overlap. I have tried using the ListView footer option, but that appends the bar to the end of the list instead of keeping it fixed at the bottom of the screen. Is there a way I could "raise" the ListView enough so that the overlap won't happen? BTW, I have already tried adding a bottom margin to the ListView itself, or to the LinearLayout wrapping the ListView, right before making the button bar visible, but it introduces other bugs, like clicking one checkbox checking some other checkbox in the ListView.
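    One possible fix, sketched under the assumption that the bar can simply reserve its own space: replace the FrameLayout with a RelativeLayout and anchor the list area above the bar with layout_above, so the list shrinks instead of being covered:

        <RelativeLayout
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" >

            <!-- Bar pinned to the bottom; toggle visibility as before.
                 While it is "gone" it takes no space at all. -->
            <RelativeLayout
                android:id="@+id/bottom_action_bar"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:layout_alignParentBottom="true"
                android:visibility="gone" >
                <!-- the two buttons, unchanged -->
            </RelativeLayout>

            <!-- The list area ends where the bar begins, so the last
                 row is never overlapped. -->
            <LinearLayout
                android:id="@+id/list_area"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_above="@id/bottom_action_bar" >
                <!-- ListView and empty-message TextView, unchanged -->
            </LinearLayout>
        </RelativeLayout>

    Because the bar is declared before the list area, the @id reference in layout_above resolves without a forward declaration.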


  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes and inserts, and over time (a few days) we see significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings.

    - The table in our test starts out empty and grows to half a million rows.
    - It has fairly large rows (lots of text fields).
    - Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions).
    - The table is being used as a persistent store for a single process.
    - We are using PostgreSQL on Windows with a Java client.
    - I'm willing to give up insert and update performance to keep up the query performance.

    We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work, and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our queries to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.

    Additional information: table and index:

        CREATE TABLE icl_contacts (
            id bigint NOT NULL,
            campaignfqname character varying(255) NOT NULL,
            currentstate character(16) NOT NULL,
            xmlscheduledtime character(23) NOT NULL,
            ... 25 or so other fields, most of them fixed or varying character fields ...
            CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
        ) WITH (OIDS=FALSE);
        ALTER TABLE icl_contacts OWNER TO postgres;

        CREATE INDEX icl_contacts_idx ON icl_contacts
            USING btree (xmlscheduledtime, currentstate, campaignfqname);

    Analyze:

        Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
          -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
             Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is on understanding how PostgreSQL is managing the index and query over time (understanding why, not just fixing it). If it were to be done over or significantly refactored, there would be a lot of changes.
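    If the cause turns out to be index bloat from the constant churn, a hedged mitigation to test (PostgreSQL 8.2 and later, using the names from the post): build a replacement index concurrently and swap it in, which avoids the long exclusive lock of a plain drop-and-recreate:

        -- Build the replacement without blocking writers the way CREATE INDEX would.
        CREATE INDEX CONCURRENTLY icl_contacts_idx_new
            ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname);

        -- The swap itself is brief; queries then use the fresh, compact index.
        DROP INDEX icl_contacts_idx;
        ALTER INDEX icl_contacts_idx_new RENAME TO icl_contacts_idx;

    More aggressive autovacuum settings on this one table are the other usual lever, since dead index entries from the delete/insert churn are exactly what plain VACUUM is meant to reclaim.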


  • Creating meaningful routes in wizard style ASP.NET MVC form

    - by R0MANARMY
    I apologize in advance for a long question; I figured better a bit more information than not enough. I'm working on an application with a fairly complex form (~100 fields on it). In order to make the UI a little more presentable, the fields are organized into regions and split across multiple (~10) tabs (not unlike this, but each tab does a submit/redirect to the next tab). This large input form can also be in one of 3 views (read-only, editable, print-friendly). The form represents a large domain object (let's call it Foo). I have a controller for said domain object (FooController). It makes sense to me to have the controller be responsible for all the CRUD-related operations. Here are the problems I'm having trouble figuring out.

    Goals:

    1. I'd like to keep to conventions, so that Foo/Create creates a new record, Foo/Delete deletes a record, Foo/Edit/{foo_id} takes you to the first tab of the form, etc.
    2. I'd like to not repeat the data access code, so that Foo/Edit/{foo_id}/tab1, Foo/View/{foo_id}/tab1, Foo/Print/{foo_id}/tab1, etc. all use the same data access code to get the data and just specify which view to use to render it.

    My current implementation has a massive FooController with Create, Delete, Tab1, Tab2, etc. actions. Tab actions are split out into separate files for organization (using partial classes, which may or may not be abuse of partial classes). The problem I'm running into is how to organize my controller(s) and routes to make that happen. I have the default route:

        {controller}/{action}/{id}

    which handles goal 1 properly but doesn't quite play nicely with goal 2. I tried to address goal 2 by defining extra routes like so:

        routes.MapRoute(
            "FooEdit", "Foo/Edit/{id}/{action}",
            new { controller = "Foo", action = "Tab1", mode = "Edit", id = (string)null }
        );
        routes.MapRoute(
            "FooView", "Foo/View/{id}/{action}",
            new { controller = "Foo", action = "Tab1", mode = "View", id = (string)null }
        );
        routes.MapRoute(
            "FooPrint", "Foo/Print/{id}/{action}",
            new { controller = "Foo", action = "Tab1", mode = "Print", id = (string)null }
        );

    However, defining these extra routes causes Url.Action to generate routes like Foo/Edit/Create instead of Foo/Create. That leads me to believe I designed something very, very wrong, but this is my first attempt at an ASP.NET MVC project and I don't know any better. Any advice on this particular situation would be awesome, but feedback on design in similar projects is welcome.
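    A hedged sketch of one way to untangle this: collapse the three mode routes into a single route with a regex constraint on {mode}, so URLs such as Foo/Create no longer match it and fall through to the default route:

        // One route covers Edit/View/Print; the fourth MapRoute argument is a
        // constraint, so "Create", "Delete" etc. cannot be captured as a mode.
        routes.MapRoute(
            "FooTabs",
            "Foo/{mode}/{id}/{action}",
            new { controller = "Foo", action = "Tab1" },
            new { mode = "Edit|View|Print" }
        );

        // The default route stays after it, so Foo/Create resolves as before.
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" }
        );

    The tab actions can then read the mode value and pick a view accordingly, which keeps one set of data access code per tab.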


  • How to remove lowercase sentence fragments from text?

    - by Aaron
    Hello: I'm trying to remove lowercase sentence fragments from standard text files using regular expressions or a simple Perl one-liner. These are commonly referred to as speech or attribution tags, for example: he said, she said, etc. This example shows before and after manual deletion.

    Original:

        "Ah, that's perfectly true!" exclaimed Alyosha.
        "Oh, do leave off playing the fool! Some idiot comes in, and you put us to shame!" cried the girl by the window, suddenly turning to her father with a disdainful and contemptuous air.
        "Wait a little, Varvara!" cried her father, speaking peremptorily but looking at them quite approvingly.
        "That's her character," he said, addressing Alyosha again.
        "Where have you been?" he asked him.
        "I think," he said, "I've forgotten something... my handkerchief, I think.... Well, even if I've not forgotten anything, let me stay a little." He sat down. Father stood over him.
        "You sit down, too," said he.

    All lowercase sentence fragments manually removed:

        "Ah, that's perfectly true!"
        "Oh, do leave off playing the fool! Some idiot comes in, and you put us to shame!"
        "Wait a little, Varvara!"
        "That's her character,"
        "Where have you been?"
        "I think," "I've forgotten something... my handkerchief, I think.... Well, even if I've not forgotten anything, let me stay a little." He sat down. Father stood over him.
        "You sit down, too,"

    I've changed straight quotes (") to balanced ones and tried:

        ” (...)+[.]

    Of course, this removes some fragments but also deletes some text in balanced quotes and text starting with uppercase letters. [^A-Z] didn't work in the above expression. I realize that it may be impossible to achieve 100% accuracy, but any useful expression, Perl, or Python script would be deeply appreciated. Cheers, Aaron
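    A hedged starting point along the lines the post describes, assuming curly quotes and UTF-8 input: after a closing quote, delete any run that begins with a lowercase letter, up to the next opening quote (or the end of the text), so sentences that start with an uppercase word, like "He sat down.", survive:

        # Slurp the whole file (-0777); after a closing quote, drop a
        # lowercase-led run until the next opening quote.
        perl -Mutf8 -CSD -0777 -pe 's/”\s+[a-z][^“]*/”\n/g' input.txt

    This is only a heuristic: attribution tags that begin with a capitalized word ("Father said,") will slip through and would need a second pass or a word list.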


  • ExecuteNonQuery on a stored proc causes it to be deleted

    - by FinancialRadDeveloper
    This is a strange one. I have a dev SQL Server which has the stored proc on it, and the same stored proc, when used with the same code against the UAT DB, causes it to delete itself! Has anyone heard of this behaviour?

    SQL code:

        -- Check if user is registered with the system
        IF OBJECT_ID('dbo.sp_is_valid_user') IS NOT NULL
        BEGIN
            DROP PROCEDURE dbo.sp_is_valid_user
            IF OBJECT_ID('dbo.sp_is_valid_user') IS NOT NULL
                PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_is_valid_user >>>'
            ELSE
                PRINT '<<< DROPPED PROCEDURE dbo.sp_is_valid_user >>>'
        END
        go

        create procedure dbo.sp_is_valid_user
            @username as varchar(20),
            @isvalid as int OUTPUT
        AS
        BEGIN
            declare @tmpuser as varchar(20)
            select @tmpuser = username from CPUserData where username = @username
            if @tmpuser = @username
            BEGIN
                select @isvalid = 1
            END
            else
            BEGIN
                select @isvalid = 0
            END
        END
        GO

    Usage example:

        DECLARE @isvalid int
        exec dbo.sp_is_valid_user 'username', @isvalid OUTPUT
        SELECT valid = @isvalid

    The usage example works all day... when I access it via C# it deletes itself in the UAT SQL DB but not the dev one!!

    C# code:

        public bool IsValidUser(string sUsername, ref string sErrMsg)
        {
            string sDBConn = ConfigurationSettings.AppSettings["StoredProcDBConnection"];
            SqlCommand sqlcmd = new SqlCommand();
            SqlDataAdapter sqlAdapter = new SqlDataAdapter();
            try
            {
                SqlConnection conn = new SqlConnection(sDBConn);
                sqlcmd.Connection = conn;
                conn.Open();
                sqlcmd.CommandType = CommandType.StoredProcedure;
                sqlcmd.CommandText = "sp_is_valid_user";

                // params to pass in
                sqlcmd.Parameters.AddWithValue("@username", sUsername);

                // param for checking success passed back out
                sqlcmd.Parameters.Add("@isvalid", SqlDbType.Int);
                sqlcmd.Parameters["@isvalid"].Direction = ParameterDirection.Output;

                sqlcmd.ExecuteNonQuery();
                int nIsValid = (int)sqlcmd.Parameters["@isvalid"].Value;

                if (nIsValid == 1)
                {
                    conn.Close();
                    sErrMsg = "User Valid";
                    return true;
                }
                else
                {
                    conn.Close();
                    sErrMsg = "Username : " + sUsername + " not found.";
                    return false;
                }
            }
            catch (Exception e)
            {
                sErrMsg = "Error :" + e.Source + " msg: " + e.Message;
                return false;
            }
        }
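    A hedged diagnostic angle: nothing in the procedure itself issues a DROP, so something else on UAT is almost certainly re-running the deployment script (whose first batch is exactly that guarded DROP). On SQL Server 2005 and later, the default trace records object deletions, which should name the culprit:

        -- Who or what dropped objects recently, from the default trace.
        SELECT te.name, t.DatabaseName, t.ObjectName,
               t.ApplicationName, t.LoginName, t.StartTime
        FROM sys.fn_trace_gettable(
                 (SELECT path FROM sys.traces WHERE is_default = 1), DEFAULT) t
        JOIN sys.trace_events te ON t.EventClass = te.trace_event_id
        WHERE te.name = 'Object:Deleted'
        ORDER BY t.StartTime DESC;

    The ApplicationName and LoginName columns usually make it obvious whether the C# app, a deployment job, or something else ran the script against UAT.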


  • Populating fields in modal form using PHP, jQuery

    - by Benjamin
    I have a form that adds links to a database, deletes them, and -- soon -- allows the user to edit details. I am using jQuery and Ajax heavily on this project and would like to keep all control in the same page. In the past, to handle editing something like details about another website (link entry), I would have sent the user to another PHP page with form fields populated with PHP from a MySQL database table. How do I accomplish this using a jQuery UI modal form, calling the details individually for that particular entry? Here is what I have so far:

        <?php while ($linkDetails = mysql_fetch_assoc($getLinks)) {?>
        <div class="linkBox ui-corner-all" id="linkID<?php echo $linkDetails['id'];?>">
            <div class="linkHeader"><?php echo $linkDetails['title'];?></div>
            <div class="linkDescription"><p><?php echo $linkDetails['description'];?></p>
            <p><strong>Link:</strong><br/>
            <span class="link"><a href="<?php echo $linkDetails['url'];?>" target="_blank"><?php echo $linkDetails['url'];?></a></span></p></div>
            <p align="right">
                <span class="control">
                    <span class="delete addButton ui-state-default">Delete</span>
                    <span class="edit addButton ui-state-default">Edit</span>
                </span>
            </p>
        </div>
        <?php }?>

    And here is the jQuery that I am using to delete entries:

        $(".delete").click(function() {
            var parent = $(this).closest('div');
            var id = parent.attr('id');
            $("#delete-confirm").dialog({
                resizable: false,
                modal: true,
                title: 'Delete Link?',
                buttons: {
                    'Delete': function() {
                        var dataString = 'id=' + id;
                        $.ajax({
                            type: "POST",
                            url: "../includes/forms/delete_link.php",
                            data: dataString,
                            cache: false,
                            success: function() {
                                parent.fadeOut('slow');
                                $("#delete-confirm").dialog('close');
                            }
                        });
                    },
                    Cancel: function() {
                        $(this).dialog('close');
                    }
                }
            });
            return false;
        });

    Everything is working just fine; I just need to find a solution for edit. Thanks!
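    A hedged sketch of the edit flow, mirroring the delete handler: fetch the row as JSON and populate the dialog's fields before opening it. get_link.php and update_link.php are hypothetical endpoints (the first just echoes json_encode() of the selected row), and the #edit-* fields are assumed to exist in the dialog markup.

        $(".edit").click(function() {
            var parent = $(this).closest('div');
            var id = parent.attr('id').replace('linkID', '');
            // Ask the server for this entry's current values.
            $.getJSON("../includes/forms/get_link.php", { id: id }, function(link) {
                $("#edit-title").val(link.title);
                $("#edit-url").val(link.url);
                $("#edit-description").val(link.description);
                $("#edit-form").dialog({
                    modal: true,
                    title: 'Edit Link',
                    buttons: {
                        'Save': function() {
                            // Post the changed values back, then close.
                            $.post("../includes/forms/update_link.php", {
                                id: id,
                                title: $("#edit-title").val(),
                                url: $("#edit-url").val(),
                                description: $("#edit-description").val()
                            });
                            $(this).dialog('close');
                        },
                        Cancel: function() { $(this).dialog('close'); }
                    }
                });
            });
            return false;
        });

    On the PHP side, get_link.php can be as small as a SELECT by id followed by echo json_encode($row);.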


  • Should I allow sending complete structures when using PUT for updates in a REST API or not?

    - by dafmetal
    I am designing a REST API and I wonder what the recommended way to handle updates to resources would be. More specifically, I would allow updates through a PUT on the resource, but what should I allow in the body of the PUT request? Always the complete structure of the resource? Always the subpart (that changed) of the structure of the resource? A combination of both?

    For example, take the resource http://example.org/api/v1/dogs/packs/p1. A GET on this resource would give the following:

        Request:
        GET http://example.org/api/v1/dogs/packs/p1
        Accept: application/xml

        Response:
        <pack>
            <owner>David</owner>
            <dogs>
                <dog>
                    <name>Woofer</name>
                    <breed>Basset Hound</breed>
                </dog>
                <dog>
                    <name>Mr. Bones</name>
                    <breed>Basset Hound</breed>
                </dog>
            </dogs>
        </pack>

    Suppose I want to add a dog (Sniffers the Basset Hound) to the pack; would I support either:

        Request:
        PUT http://example.org/api/v1/dogs/packs/p1

        <dog>
            <name>Sniffers</name>
            <breed>Basset Hound</breed>
        </dog>

        Response:
        HTTP/1.1 200 OK

    or:

        Request:
        PUT http://example.org/api/v1/dogs/packs/p1

        <pack>
            <owner>David</owner>
            <dogs>
                <dog><name>Woofer</name><breed>Basset Hound</breed></dog>
                <dog><name>Mr. Bones</name><breed>Basset Hound</breed></dog>
                <dog><name>Sniffers</name><breed>Basset Hound</breed></dog>
            </dogs>
        </pack>

        Response:
        HTTP/1.1 200 OK

    or both? If supporting updates through subsections of the structure is recommended, how would I handle deletes (such as when a dog dies)? Through query parameters?
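    For what it's worth, one common reading of HTTP semantics, sketched with the post's own URIs: PUT replaces the entire resource at the request URI, so the full-pack body is the PUT-correct variant; adding or removing a single dog maps more naturally onto its own sub-resource, which also answers the delete question without query parameters. The per-dog URI below is an assumption for illustration; any stable identifier works.

        Request:
        POST http://example.org/api/v1/dogs/packs/p1/dogs

        <dog>
            <name>Sniffers</name>
            <breed>Basset Hound</breed>
        </dog>

        Response:
        HTTP/1.1 201 Created
        Location: http://example.org/api/v1/dogs/packs/p1/dogs/sniffers

        -- and when a dog dies --

        Request:
        DELETE http://example.org/api/v1/dogs/packs/p1/dogs/sniffers

        Response:
        HTTP/1.1 204 No Content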


  • What's the best way to do base36 arithmetic in perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following:

    - Operate on positive N-digit numbers in base 36 (i.e. the digits are 0-9, A-Z), where N is finite, say 9.
    - Provide basic arithmetic, at the very least these three operations: addition (A+B), subtraction (A-B), and whole division, e.g. floor(A/B).

    Strictly speaking, I don't really need base10 conversion ability; the numbers will 100% of the time be in base36, so I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base10 and back", converts to binary, or takes some more elegant approach that "natively" performs baseN operations (as stated above, to/from base10 conversion is not a requirement). My only three considerations are:

    - It fits the minimum specifications above.
    - It's "standard". Currently we're using an old homegrown module based on hand-rolled base10 conversion that is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution than re-write my own bicycle from scratch, but I'm perfectly capable of building it if no better standard possibility exists.
    - It must be fast-ish (though not lightning fast). Something that takes 1 second to sum up two 9-digit base36 numbers is worse than anything I can roll on my own :)

    P.S. Just to provide some context, in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in the DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimensions are big both depth- and breadth-wise, and the tree is VERY actively updated (inserts, deletes and branch moves). This is currently done with a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is a 9-character string built of A-Z0-9 (e.g. a base36 number). Additional considerations: the local order must be unique per child (and obviously unique per parent), and any complete re-ordering of a parent is somewhat expensive; thus the implementation tries to assign, for a parent with X children, orders that are somewhat evenly distributed between 0 and 36**9-1, so that almost no tree insert results in a full re-ordering.
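    A hedged sketch of the CPAN route, assuming Math::Base36's encode/decode interface: convert at the edges and do the arithmetic in native integers, which is safe for 9 base36 digits (36**9 is well below 2**53, let alone a 64-bit IV).

        use Math::Base36 qw(encode_base36 decode_base36);

        my $a = decode_base36('A1B2C3D4E');
        my $b = decode_base36('000000Z0Z');

        # Addition, subtraction and whole division in plain integers,
        # then re-encode zero-padded back to 9 digits.
        my $sum  = encode_base36($a + $b, 9);
        my $diff = encode_base36($a - $b, 9);
        my $div  = encode_base36(int($a / $b), 9);

    If pulling in a module is undesirable, the same shape works with two short hand-rolled helper subs over the 0-9A-Z alphabet.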


  • Change a UIButton View's background upon press

    - by Michael
    This should be easy but it's got me stumped. I've got a button on each row of a table cell. The button is to delete the file associated with that row. I have an image that indicates whether the file is present on the iPhone or not. When the user presses the button, the target action method (showDeleteSheet:) calls a UIActionSheet with a variable to give it the indexPath of the row pressed. Then the user presses delete on the action sheet, and the action sheet's clickedButtonAtIndex: method deletes the file. Now I need to update the button's background image property, to change the image in the appropriate table cell to the not-downloaded image. This can be done in either the button's target action method or the action sheet's clickedButtonAtIndex: method. Here is some code. In the table's cellForRowAtIndexPath: method I create the button:

        UIButton *deleteButton = [[UIButton buttonWithType:UIButtonTypeCustom] retain];
        [deleteButton addTarget:self action:@selector(showDeleteSheet:) forControlEvents:UIControlEventTouchDown];
        deleteButton.frame = CGRectMake(246, 26, 30, 30);
        if (![AppData isDownloaded:[dets objectForKey:@"fileName"]]) {
            [deleteButton setBackgroundImage:notdownloadedImage forState:UIControlStateNormal];
        }
        else {
            [deleteButton setBackgroundImage:notdownloadedImage forState:UIControlStateNormal];
        }

    In the button's target method:

        -(void)showDeleteSheet:(id)sender {
            // get the cell row that the button is in
            NSIndexPath *indexPath = [table indexPathForCell:(UITableViewCell *)[[sender superview] superview]];
            NSInteger currentSection = indexPath.section;
            NSInteger currentIndexRow = indexPath.row;
            NSLog(@"section = %i row = %i", currentSection, currentIndexRow);

            UIButton *deleteButton = [UIButton buttonWithType:UIButtonTypeCustom];
            [table reloadData];
            // if deleteButton is declared in .h then the other instances of deleteButton
            // show 'local declaration hides instance variable'
            [deleteButton setBackgroundImage:[UIImage imageNamed:@"not-downloaded.png"] forState:UIControlStateNormal];

            UIActionSheet *actionSheet = [[UIActionSheet alloc] initWithTitle:@"Delete this file?"
                                                                     delegate:self
                                                            cancelButtonTitle:@"Cancel"
                                                       destructiveButtonTitle:@"Delete"
                                                            otherButtonTitles:nil];
            actionSheet.tag = currentIndexRow;
            actionSheet.actionSheetStyle = UIActionSheetStyleDefault;
            [actionSheet showInView:self.view];
            [actionSheet release];
        }

    Perhaps the issue is that the call to [deleteButton setBackgroundImage...]; doesn't know which cell's button should be updated; if so, I don't know how to tell it which. Because I test whether the file is downloaded when the button is made and its background image is set, when I scroll down so that the cell in question is de-queued, then scroll back up, it shows the correct image. I've tried to force the table to reload, but [table reloadData]; does nothing, and I've tried reloadRowsAtIndexPaths with no luck either. Anyone care to educate me on how this is done?
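    A hedged sketch of the usual pattern: let the model drive the image. Update the downloaded state when the delete actually succeeds, then reload just that row, so cellForRowAtIndexPath: re-runs and picks the right image. (The freshly created deleteButton inside showDeleteSheet: belongs to no cell, which is why setting its image changes nothing on screen.) Section 0 is assumed below because actionSheet.tag only stores the row.

        - (void)actionSheet:(UIActionSheet *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex {
            if (buttonIndex == actionSheet.destructiveButtonIndex) {
                NSIndexPath *path = [NSIndexPath indexPathForRow:actionSheet.tag inSection:0];

                // ... remove the file here, and update whatever backs
                // [AppData isDownloaded:] so it now returns NO for this row ...

                [table reloadRowsAtIndexPaths:[NSArray arrayWithObject:path]
                             withRowAnimation:UITableViewRowAnimationNone];
            }
        }

    With that in place, the reloadData call and the throwaway UIButton inside showDeleteSheet: can be removed entirely.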


  • AD - DirectoryServices: .NET 2.0 - Speaking architecture, approach and best practices... Suggestions?

    - by Will Marcouiller
    I've been mandated to write an application to migrate Active Directory access models to another environment. Here's the context:

    - I'm stuck with VB.NET 2005 and .NET Framework 2.0;
    - The application must use the Windows-authenticated user to manage AD;
    - The objects I have to handle are Groups, Users and OrganizationalUnits;
    - I intend to use the Façade design pattern to provide ease of use and fully reusable code;
    - I plan to write a factory for each of the objects managed (group, OU, user);
    - The use of Attributes should be useful here, I guess;
    - As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types.

    Obligatory features:

    - User creates new OUs manually;
    - User creates new groups manually;
    - User creates new users (these are service accounts) manually;
    - Application reads an XML file which contains the OUs, groups and users to create;
    - Application informs the user about the OUs, groups and users that shall be created;
    - User specifies the domain environment where to migrate the XML input file's designated objects;
    - User makes changes if needed, and launches the task operations;
    - Application performs the operations required by the XML input file against the underlying AD, as specified by the user;
    - Application informs the user upon completion.

    Linear features:

    - User fetches OUs, groups, users;
    - User changes OUs, groups, users;
    - User deletes OUs, groups, users;
    - The application logs AD entries and operations performed, plus errors and exceptions.

    Nice-to-have features:

    - Application rolls back operations on error or exception.

    I've been working for weeks now to get acquainted with AD and the System.DirectoryServices assembly, but I don't seem to find a way to be fully satisfied with what I'm doing and am always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but I can't use it as I'm stuck with .NET 2.0, so no LINQ! I have, however, learned about Attributes, and seen that he works with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. Where I stand:

    - I have been able to add groups to the AD;
    - I have been able to add users to the AD;
    - The created user is automatically disabled?
    - I seem to get confused by the use of an LDAP path to add objects. For instance, one needs two instances of the System.DirectoryServices.DirectoryEntry class to add a group. Why is this?

    Any suggestions? Thanks for any help, code samples, ideas, architectural solutions, everything!
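    On the last two points, a hedged VB.NET 2.0 sketch (the LDAP path and names are made up): the two DirectoryEntry instances are the container you bind to and the child that Children.Add hands back; and a newly created user stays disabled until a password is set and the userAccountControl flags are cleared.

        Dim container As New DirectoryEntry("LDAP://OU=ServiceAccounts,DC=example,DC=com")

        ' Children.Add returns the second DirectoryEntry:
        ' the new, not-yet-committed child object.
        Dim user As DirectoryEntry = container.Children.Add("CN=svcAccount", "user")
        user.Properties("sAMAccountName").Value = "svcAccount"
        user.CommitChanges()                    ' the object must exist before SetPassword

        ' New accounts come up disabled (UF_ACCOUNTDISABLE = &H2);
        ' set a password first, then clear the flag.
        user.Invoke("SetPassword", New Object() {"S0me-Initial-P@ss!"})
        user.Properties("userAccountControl").Value = &H200 ' UF_NORMAL_ACCOUNT
        user.CommitChanges()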


  • A question about Rails general practice with REST, JSON, and Ajax

    - by Nik
    Hi all, I have this question concerning REST, I think. I have read a few REST tutorials, and the feeling I get from them is that each action in a RESTful controller tends to be lean and almost single-purpose: index gives off a collection of a model, show gives off one model, edit/new are preparation for changing/creating a model, update/create change and make new models, and destroy removes one model. After reading all these tutorials, REST seems to be a means to create an interface for a model, much like an ActiveResource type of thing. The mantra seems to be "the controller provides data and data only, and is pretty convention-over-configuration, so expect projects_path to return a bunch of projects". I can understand that, and I like the cleanliness. But here's where I run into some trouble applying these guidelines in reality.

    Say there are three models: Project with attribute title, User with attribute name, and Location with attribute address. Say in views/users/index.html.erb I want to use Ajax to fetch and display a project in div#project_display when the user clicks on a project element. I know that I can use views/projects/show.js.rjs like this:

        page.replace_html 'project_display', "#{@project.name}"

    where in projects_controller.rb:

        def show
          @project = Project.find(params[:id])
          respond_to do |format|
            format.js
            # ... other formats
          end
        end

    I have had no problem doing that for a couple of years now. BUT doesn't that mean that my JS response for the projects#show action is LOCKED into presenting data to the div#project_display element, showing only whatever that RJS template says it should show? That's very limiting and doesn't sound very "interface"-like. I have never used JSON before, or much XML, so I thought: maybe the JS response should send back raw data, like JSON, and the page the Ajax request was made from should somehow hold the instructions on what to do with that raw data. That sounds a lot more flexible, doesn't it? Because look back at that example: what if in views/locations/index.html.erb I want to do the exact same thing, except I want to put the response in div#project_goes_here and the response should be #{project.name}? I know this is a trivial change, but that's the point: the RJS approach only allows one template at a time. So I think the JSON route is the way to go, but how does the already-loaded page, the one the Ajax call came from, know when or how to "look forward" to the incoming data? I read that PrototypeJS has this template thing; I wouldn't mind using it with JSON, but if you can demonstrate this or other means for displaying data received from Ajax, I am all attention. Thank you
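    A hedged sketch of the JSON version (Rails 2.x-era syntax assumed): the controller stops dictating presentation, and each page supplies its own callback.

        # app/controllers/projects_controller.rb
        def show
          @project = Project.find(params[:id])
          respond_to do |format|
            format.html
            format.json { render :json => @project.to_json(:only => [:id, :name]) }
          end
        end

    Then views/users/index.html.erb and views/locations/index.html.erb each attach their own Prototype handler, roughly new Ajax.Request('/projects/' + id + '.json', { onSuccess: function(r) { $('project_display').update(r.responseText.evalJSON().name); } }), with the target div and formatting chosen by the calling page rather than the controller. (Whether the JSON is root-wrapped depends on the include_root_in_json setting, so the exact evalJSON() access may differ.)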


  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case, so that I can avoid the overhead. I have a table that looks something like this:

        drop table Data

        create table Data
        (
            Id BIGINT NOT NULL,
            Date BIGINT NOT NULL,
            Value BIGINT,
            constraint Cx primary key (Date, Id)
        )

        create nonclustered index Ix on Data (Id, Date)

    There are no updates to the table, ever. Deletes can occur, but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular, and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this:

        select top 1 Date, Value
        from Data
        where Id = @p0
        order by Date desc

    because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered indexes would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return:

    1. More than one row, even though I asked for TOP 1?
    2. No rows, even though one exists and has been committed?
    3. Worst of all, a row that doesn't satisfy the WHERE clause?

    I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan; this involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend:

        Secondly, if you read dirty data, the risk you run is of reading the
        entirely wrong row. For example, if your select reads an index to find
        your row, then the update changes the location of the rows (e.g.: due
        to a page split or an update to the clustered index), when your select
        goes to read the actual data row, it's either no longer there, or a
        different row altogether!

    Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous.

    Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based: transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row can be found with no shared lock, how is that different from just using NOLOCK?
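    Given that the worry is version-store overhead, a hedged middle ground worth benchmarking is READ_COMMITTED_SNAPSHOT, the statement-level sibling of full snapshot isolation (the database name below is assumed):

        -- Readers stop taking shared locks; statements see a consistent
        -- point-in-time view instead, which breaks the Ix-then-Cx vs
        -- Cx-then-Ix deadlock cycle without NOLOCK's anomalies.
        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

    Since the table is insert-only, rows should rarely have old versions to copy to tempdb, so the versioning cost here is mostly the 14-byte row overhead rather than version-store churn.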


  • Parsing language for both binary and character files

    - by Thorsten S.
    The problem: you have some data, and your program needs specified input, for example strings which are numbers. You are searching for a way to transform the original data into the format you need. And the problem is: the source can be anything. It can be XML, property lists, or binary which contains the needed data deeply embedded in binary junk. And your output format may vary as well: it can be number strings, floats, doubles...

    You don't want to program. You want routines which give you commands capable of transforming the data into the form you wish. Surely it contains regular expressions, but it is very well designed, and it offers capabilities which are sometimes much easier and more powerful. Something like a super-grep which you can access (!) as program routines, not only as a tool. It allows:

    - joining/grouping/merging of results
    - inserting/deleting/finding/replacing
    - writing macros which allow a command chain to be executed repeatedly
    - meta-grouping (lists - tables - hypertables)

    Example (no, I am not looking for a solution to this; it is just an example): you want to read XML strings embedded in a binary file with variable-length records. Your tool reads the record length and deletes the junk surrounding your text. Now it splits open the XML and extracts the strings. Being Indian number glyphs and containing decimal commas instead of decimal points, your tool transforms them into ASCII and replaces commas with points. Now the results must be stored into matrices of variable length... etc., etc.

    I am searching for a good language / language design and, if possible, an implementation. Which design do you like, or even, if it does not fulfill the conditions, wouldn't you want to miss?

    EDIT: The question is whether a solution to the problem exists and, if yes, which implementations are available. You DO NOT implement your own sorting algorithm if Quicksort, Mergesort and Heapsort are available. You DO NOT invent your own text parsing method if you have regular expressions. You DO NOT invent your own 3D language for graphics if OpenGL/Direct3D is available. There are existing solutions, or at least papers describing the problem and giving suggestions. And there are people who may have worked on and experienced such problems and who can give ideas and suggestions. The idea that this problem is totally new and that I should work it out and implement it myself without background knowledge seems to me, I must admit, totally off the mark.


  • How can I tell JAXB / Maven to generate multiple schema packages?

    - by M.R.
    Example:

        <plugin>
            <groupId>org.jvnet.jaxb2.maven2</groupId>
            <artifactId>maven-jaxb2-plugin</artifactId>
            <version>0.7.1</version>
            <executions>
                <execution>
                    <goals>
                        <goal>generate</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <schemaDirectory>src/main/resources/dir1</schemaDirectory>
                <schemaIncludes>
                    <include>schema1.xsd</include>
                </schemaIncludes>
                <generatePackage>schema1.package</generatePackage>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.jvnet.jaxb2.maven2</groupId>
            <artifactId>maven-jaxb2-plugin</artifactId>
            <version>0.7.1</version>
            <executions>
                <execution>
                    <goals>
                        <goal>generate</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <schemaDirectory>src/main/resources/dir2</schemaDirectory>
                <schemaIncludes>
                    <include>schema2.xsd</include>
                </schemaIncludes>
                <generatePackage>schema2.package</generatePackage>
            </configuration>
        </plugin>

    What happens: Maven executes the first plugin, then deletes the target folder and creates the second package, which is then the only one visible. I tried to set target/somedir1 for the first configuration and target/somedir2 for the second configuration, but the behavior does not change. Any ideas? I do not want to generate the packages directly in the src/main/java folder, because these packages are generated and should not be mixed with manually created classes.
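    A hedged sketch of the usual fix (generateDirectory is, I believe, the maven-jaxb2-plugin parameter that controls the output location): declare the plugin once with two executions, each with its own id, package and output directory, so the second run no longer wipes the first run's output.

        <plugin>
            <groupId>org.jvnet.jaxb2.maven2</groupId>
            <artifactId>maven-jaxb2-plugin</artifactId>
            <version>0.7.1</version>
            <executions>
                <execution>
                    <id>schema1</id>
                    <goals><goal>generate</goal></goals>
                    <configuration>
                        <schemaDirectory>src/main/resources/dir1</schemaDirectory>
                        <schemaIncludes><include>schema1.xsd</include></schemaIncludes>
                        <generatePackage>schema1.package</generatePackage>
                        <generateDirectory>${project.build.directory}/generated-sources/schema1</generateDirectory>
                    </configuration>
                </execution>
                <execution>
                    <id>schema2</id>
                    <goals><goal>generate</goal></goals>
                    <configuration>
                        <schemaDirectory>src/main/resources/dir2</schemaDirectory>
                        <schemaIncludes><include>schema2.xsd</include></schemaIncludes>
                        <generatePackage>schema2.package</generatePackage>
                        <generateDirectory>${project.build.directory}/generated-sources/schema2</generateDirectory>
                    </configuration>
                </execution>
            </executions>
        </plugin>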


  • How do I delete a [sub]hash based off of the keys/values of another hash?

    - by Zack
    Let's assume I have two hashes. One of them contains a set of data that should only keep things that show up in the other hash. E.g.:

        my %hash1 = (
            test1 => { inner1 => { more => "alpha", evenmore => "beta" } },
            test2 => { inner2 => { more => "charlie", somethingelse => "delta" } },
            test3 => { inner9999 => { ohlookmore => "golf", somethingelse => "foxtrot" } }
        );

        my %hash2 = (
            major => {
                test2 => "inner2",
                test3 => "inner3"
            }
        );

    What I would like to do is delete the whole subhash in %hash1 if it does not exist as a key/value pair in $hash2{major}, preferably without modules. The information contained in "innerX" does not matter; it must merely be left alone (unless the subhash is to be deleted, in which case it can go away). In the example above, after this operation %hash1 would look like:

        my %hash1 = (
            test2 => { inner2 => { more => "charlie", somethingelse => "delta" } },
        );

    It deletes $hash1{test1} and $hash1{test3} because they don't match anything in %hash2. Here's what I've currently tried, but it doesn't work. Nor is it probably the safest thing to do, since I'm looping over the hash while trying to delete from it; however, I'm deleting at the each, which should be okay? Perl complains: Can't use string ("inner1") as a HASH ref while "strict refs" in use.

        while (my ($test, $inner) = each %hash1) {
            if (exists $hash2{major}{$test}{$inner}) {
                print "$test($inner) is in exists.\n";
            }
            else {
                print "Looks like $test($inner) does not exist, REMOVING.\n";
                # not too sure if $inner is needed to remove the whole entry
                delete ($hash1{$test}{$inner});
            }
        }
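    A hedged corrected sketch: $hash2{major}{$test} holds a plain string (the expected inner key name), not a nested hash, so it should be compared with eq rather than dereferenced with another subscript; and since each %hash1 value has exactly one inner key, that key can be pulled out with keys. Deleting the key most recently returned by each() is documented as safe.

        while (my ($test, $inner) = each %hash1) {
            my ($innerkey) = keys %$inner;   # e.g. "inner1", the single inner key

            if (exists $hash2{major}{$test} && $hash2{major}{$test} eq $innerkey) {
                print "$test($innerkey) exists, keeping.\n";
            }
            else {
                print "$test($innerkey) does not exist, REMOVING.\n";
                delete $hash1{$test};        # drop the whole subhash
            }
        }

    Run against the sample data, this keeps test2 (inner2 matches) and removes test1 (no entry in %hash2) and test3 (inner9999 ne "inner3"), which matches the expected result.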


  • Optimize MySQL query (ngrams, COUNT(), GROUP BY, ORDER BY)

    - by Gerardo
    I have a database with thousands of companies and their locations. I have implemented n-grams to optimize search. I make one query to retrieve all the companies that match the search query, and another to get a list of their locations with the number of companies in each location. It is the latter query I am trying to optimize. Maybe the problem is this: every company ('anunciante') has a field ('estado') for logical deletes, and a company should be retrieved only if 'estado' equals 1. When I run the EXPLAIN command, it shows that the query goes through almost 40k rows, while the actual result (the really matching companies) is 80 rows. How can I optimize this? This is my query (XXX represents the n-grams for the search query):

        SELECT provincias.provincia AS provincia, provincias.id, COUNT(*) AS cantidad
        FROM anunciantes
        JOIN anunciante_invertido AS a_i0 ON anunciantes.id = a_i0.id_anunciante
        JOIN indice_invertido AS indice0 ON a_i0.id_invertido = indice0.id
        LEFT OUTER JOIN domicilios ON anunciantes.id = domicilios.id_anunciante
        LEFT OUTER JOIN localidades ON domicilios.id_localidad = localidades.id
        LEFT OUTER JOIN provincias ON provincias.id = localidades.id_provincia
        WHERE anunciantes.estado = 1
          AND indice0.id IN (SELECT invertido_ngrama.id_palabra
                             FROM invertido_ngrama
                             JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama
                             WHERE ngrama.ngrama = 'XXX')
          AND indice0.id IN (SELECT invertido_ngrama.id_palabra
                             FROM invertido_ngrama
                             JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama
                             WHERE ngrama.ngrama = 'XXX')
          AND indice0.id IN (SELECT invertido_ngrama.id_palabra
                             FROM invertido_ngrama
                             JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama
                             WHERE ngrama.ngrama = 'XXX')
          AND indice0.id IN (SELECT invertido_ngrama.id_palabra
                             FROM invertido_ngrama
                             JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama
                             WHERE ngrama.ngrama = 'XXX')
          AND indice0.id IN (SELECT invertido_ngrama.id_palabra
                             FROM invertido_ngrama
                             JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama
                             WHERE ngrama.ngrama = 'XXX')
        GROUP BY provincias.id
        ORDER BY cantidad DESC

    And this is the EXPLAIN output, reflowed (id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra):

        1  PRIMARY  anunciantes  ref  PRIMARY,estado  estado  1  const  36669  Using index; Using temporary; Using filesort
        1  PRIMARY  domicilios  ref  id_anunciante  id_anunciante  4  db84771_viaempresas.anunciantes.id  1
        1  PRIMARY  localidades  eq_ref  PRIMARY  PRIMARY  4  db84771_viaempresas.domicilios.id_localidad  1
        1  PRIMARY  provincias  eq_ref  PRIMARY  PRIMARY  4  db84771_viaempresas.localidades.id_provincia  1
        1  PRIMARY  a_i0  ref  PRIMARY,id_anunciante,id_invertido  PRIMARY  4  db84771_viaempresas.anunciantes.id  1  Using where; Using index
        1  PRIMARY  indice0  eq_ref  PRIMARY  PRIMARY  4  db84771_viaempresas.a_i0.id_invertido  1  Using index
        6  DEPENDENT SUBQUERY  ngrama  const  PRIMARY,ngrama  ngrama  5  const  1  Using index
        6  DEPENDENT SUBQUERY  invertido_ngrama  eq_ref  PRIMARY,id_palabra,id_ngrama  PRIMARY  8  func,const  1  Using index
        5  DEPENDENT SUBQUERY  ngrama  const  PRIMARY,ngrama  ngrama  5  const  1  Using index
        5  DEPENDENT SUBQUERY  invertido_ngrama  eq_ref  PRIMARY,id_palabra,id_ngrama  PRIMARY  8  func,const  1  Using index
        4  DEPENDENT SUBQUERY  ngrama  const  PRIMARY,ngrama  ngrama  5  const  1  Using index
        4  DEPENDENT SUBQUERY  invertido_ngrama  eq_ref  PRIMARY,id_palabra,id_ngrama  PRIMARY  8  func,const  1  Using index
        3  DEPENDENT SUBQUERY  ngrama  const  PRIMARY,ngrama  ngrama  5  const  1  Using index
        3  DEPENDENT SUBQUERY  invertido_ngrama  eq_ref  PRIMARY,id_palabra,id_ngrama  PRIMARY  8  func,const  1  Using index
        2  DEPENDENT SUBQUERY  ngrama  const  PRIMARY,ngrama  ngrama  5  const  1  Using index
        2  DEPENDENT SUBQUERY  invertido_ngrama  eq_ref  PRIMARY,id_palabra,id_ngrama  PRIMARY  8  func,const  1  Using index
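    One direction to test, sketched and unverified against this schema: fold the five dependent IN subqueries into a single derived table that finds words containing all five n-grams once, instead of re-running five subqueries per candidate row. 'XXX1' through 'XXX5' below stand for the five distinct n-grams of the search term.

        SELECT p.provincia, p.id, COUNT(*) AS cantidad
        FROM anunciantes a
        JOIN anunciante_invertido ai ON a.id = ai.id_anunciante
        JOIN (
            SELECT ivn.id_palabra
            FROM invertido_ngrama ivn
            JOIN ngrama n ON n.id = ivn.id_ngrama
            WHERE n.ngrama IN ('XXX1', 'XXX2', 'XXX3', 'XXX4', 'XXX5')
            GROUP BY ivn.id_palabra
            HAVING COUNT(DISTINCT n.ngrama) = 5   -- word contains all five ngrams
        ) palabras ON palabras.id_palabra = ai.id_invertido
        LEFT JOIN domicilios d  ON a.id = d.id_anunciante
        LEFT JOIN localidades l ON d.id_localidad = l.id
        LEFT JOIN provincias p  ON p.id = l.id_provincia
        WHERE a.estado = 1
        GROUP BY p.id
        ORDER BY cantidad DESC;

    The idea is to turn the "40k rows scanned for 80 matches" shape around: the derived table shrinks the candidate set first, and the estado filter then applies to far fewer rows.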


  • Unable to add item to dataset in Linq to SQL

    - by Mike B
    I am having an issue adding an item to my data context in LINQ to SQL. I am using the exact same method on other tables with no problem. I suspect I know the problem but cannot find an answer (I also suspect all I really need is the right search term for Google). Please keep in mind this is a learning project (although it is in use in a business). I have posted my code and data context below.

    What I am doing: I create a view model (relevant bits are shown) and a simple WPF window that allows editing of 3 properties that are bound to the Category object. Category is from the data context. Edit works fine, but add does not. If I check GetChangeSet() just before the db.SubmitChanges() call, there are no adds, edits or deletes. I suspect an issue with the fact that a Category added without a Subcategory would be an orphan, but I cannot seem to find the solution.

    Command code to open the window:

        CategoryViewModel vm = new CategoryViewModel();
        AddEditCategoryWindow window = new AddEditCategoryWindow(vm);
        window.ShowDialog();

    ViewModel (relevant parts):

        public class CategoryViewModel : ViewModelBase
        {
            public Category category { get; set; }

            // Constructor used to Edit a Category
            public CategoryViewModel(Int16 categoryID)
            {
                db = new OITaskManagerDataContext();
                category = QueryCategory(categoryID);
            }

            // Constructor used to Add a Category
            public CategoryViewModel()
            {
                db = new OITaskManagerDataContext();
                category = new Category();
            }
        }

    The code for saving changes:

        // Don't close window unless all controls are validated
        if (!vm.IsValid(this)) return;

        var changes = vm.db.GetChangeSet(); // DEBUG
        try
        {
            vm.db.SubmitChanges(ConflictMode.ContinueOnConflict);
        }
        catch (ChangeConflictException)
        {
            vm.db.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
            vm.db.SubmitChanges();
        }

    The XAML (edited for brevity):

        <TextBox Text="{Binding category.CatName, Mode=TwoWay, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" />
        <TextBox Text="{Binding category.CatDescription, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" />
        <CheckBox IsChecked="{Binding category.CatIsInactive, Mode=TwoWay}" />

    IssCategory in the Issues table is the old, text-based category. This field is no longer used and will be removed from the database as soon as this is working and pushed live.
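    A hedged guess at the missing piece: a newly constructed entity is not tracked by the DataContext until it is attached, which would explain the empty change set on the add path while edits (whose objects came from a query, and are therefore tracked) work fine. Roughly, with the Categories table property name assumed:

        // Constructor used to Add a Category
        public CategoryViewModel()
        {
            db = new OITaskManagerDataContext();
            category = new Category();
            // Without this the context never tracks the new object, so
            // GetChangeSet() stays empty and SubmitChanges() is a no-op.
            db.Categories.InsertOnSubmit(category);
        }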


  • Problem with TCP server when converting to a service

    - by djerry
    Hello lads, I'm monitoring some objects (CDR packets). I'm setting up a TCP server and am listening on port 50043 for the packets. As a console application the program works just fine: the server runs as it should and I receive the packets. But when I try to run it as a Windows service, no client can connect to my server. Is there something I need to change to deploy this as a service? The code below is from my application. This is my service class, where I start the server:

        protected override void OnStart(string[] args)
        {
            server = new TcpServer();
            server.StartServer();
        }

    This is the constructor of TcpServer:

        public TcpServer()
        {
            try
            {
                _server = new TcpListener(IPAddress.Any, 50043);
            }
            catch (Exception)
            {
                _server = null;
            }
        }

    This is the method I call after initialising the class:

        public void StartServer()
        {
            if (_server != null)
            {
                // Create an ArrayList for storing SocketListeners before starting the server.
                _socketListenersList = new ArrayList();

                // Start the server and start the thread that listens for client requests.
                _server.Start();
                _serverThread = new Thread(new ThreadStart(ServerThreadStart));
                _serverThread.Start();

                // Create a low-priority thread that checks and deletes client
                // SocketConnection objects that are marked for deletion.
                _purgingThread = new Thread(new ThreadStart(PurgingThreadStart));
                _purgingThread.Priority = ThreadPriority.Lowest;
                _purgingThread.Start();
            }
        }

    This is the thread that keeps checking whether any client is trying to connect:

        private void ServerThreadStart()
        {
            Socket clientSocket = null;
            TcpSocketListener socketListener = null;

            while (!_stopServer)
            {
                try
                {
                    // Wait for any client request and accept it (wait indefinitely).
                    clientSocket = _server.AcceptSocket();

                    // Create a SocketListener object for the client.
                    socketListener = new TcpSocketListener(clientSocket);

                    // Add the socket listener to an ArrayList in a thread-safe fashion.
                    lock (_socketListenersList)
                    {
                        _socketListenersList.Add(socketListener);
                    }

                    // Start communicating with the client on a different thread.
                    socketListener.StartSocketListener();
                }
                catch (SocketException se)
                {
                    _stopServer = true;
                }
            }
        }

    When the first packet is waiting to be read and execution reaches "clientSocket = _server.AcceptSocket();", it throws an exception, and inside a service that is not very debuggable. Does anyone recognize this problem or can help me? Thanks in advance
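    Since the complaint is that the exception is invisible inside a service, a hedged first step: write the SocketException to the Windows Event Log instead of swallowing it; the error code usually reveals whether it is a permissions or firewall problem for the service account.

        catch (SocketException se)
        {
            // A service has no console; the event log is the visible channel.
            // (The event source may need to be created once, with admin rights.)
            System.Diagnostics.EventLog.WriteEntry(
                "TcpServerService",
                "AcceptSocket failed: " + se.ErrorCode + " - " + se.Message,
                System.Diagnostics.EventLogEntryType.Error);
            _stopServer = true;
        }

    Typical culprits once the error is visible: the service running under an account without network rights, or the Windows firewall prompting interactively for the console build but silently blocking the service.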


  • A better way to delete a list of elements from multiple tables

    - by manyxcxi
    I know this looks like a 'please write the code' request, but some basic pointers/principles for doing this the right way should be enough to get me going. I have the following stored procedure:

        CREATE PROCEDURE `TAA`.`runClean` (IN idlist varchar(1000))
        BEGIN
            DECLARE EXIT HANDLER FOR NOT FOUND ROLLBACK;
            DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
            DECLARE EXIT HANDLER FOR SQLWARNING ROLLBACK;

            START TRANSACTION;
            DELETE FROM RunningReports WHERE run_id IN (idlist);
            DELETE FROM TMD_INDATA_INVOICE WHERE run_id IN (idlist);
            DELETE FROM TMD_INDATA_LINE WHERE run_id IN (idlist);
            DELETE FROM TMD_OUTDATA_INVOICE WHERE run_id IN (idlist);
            DELETE FROM TMD_OUTDATA_LINE WHERE run_id IN (idlist);
            DELETE FROM TMD_TEST WHERE run_id IN (idlist);
            DELETE FROM RunHistory WHERE id IN (idlist);
            COMMIT;
        END $$

    It is called by a PHP script to clean out old run history. It is not particularly efficient, as you can see, and I would like to speed it up. The PHP script gathers the IDs to remove from the tables with the following query:

        $query = "SELECT id, stop_time FROM RunHistory
                  WHERE config_id = $configId AND save = 0
                  AND NOT(stop_time IS NULL) ORDER BY stop_time";

    It keeps the last five run entries and deletes all the rest. Using this query to bring back all the IDs, it determines which ones to delete, keeps the 'newest' five, and sends the rest to the stored procedure to remove them from the associated tables. I'm not very good with SQL, but I ASSUME that using an IN clause and not joining these tables together is probably the least efficient way to do this; however, I don't know enough to ask anything but "how do I do this better?" If possible, I would like to do this all in my stored procedure, using a query to gather all the IDs except for the five 'newest', then deleting them. Another twist: run entries can be marked saved (save = 1) and should not be deleted. The RunHistory table looks like this:

        CREATE TABLE `TAA`.`RunHistory` (
            `id` int(11) NOT NULL auto_increment,
            `start_time` datetime default NULL,
            `stop_time` datetime default NULL,
            `config_id` int(11) NOT NULL,
            [...]
            `save` tinyint(1) NOT NULL default '0',
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
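    One hedged observation plus a sketch: as written, IN (idlist) compares run_id against the single string value of the parameter, not against a list of IDs, so the procedure likely never deletes what the caller intends. Moving the selection into the procedure sidesteps that entirely. A sketch for MySQL 5.x, keeping the five newest unsaved runs for a config (the huge second LIMIT argument is MySQL's documented "offset to end of result set" idiom):

        CREATE PROCEDURE `TAA`.`runClean` (IN configId INT)
        BEGIN
            -- Collect the doomed ids once; MySQL cannot reference the target
            -- table inside a subquery of its own DELETE, hence the temp table.
            CREATE TEMPORARY TABLE doomed AS
                SELECT id FROM RunHistory
                WHERE config_id = configId
                  AND save = 0
                  AND stop_time IS NOT NULL
                ORDER BY stop_time DESC
                LIMIT 5, 18446744073709551615;  -- skip the newest five

            START TRANSACTION;
            DELETE FROM RunningReports      WHERE run_id IN (SELECT id FROM doomed);
            DELETE FROM TMD_INDATA_INVOICE  WHERE run_id IN (SELECT id FROM doomed);
            DELETE FROM TMD_INDATA_LINE     WHERE run_id IN (SELECT id FROM doomed);
            DELETE FROM TMD_OUTDATA_INVOICE WHERE run_id IN (SELECT id FROM doomed);
            DELETE FROM TMD_OUTDATA_LINE    WHERE run_id IN (SELECT id FROM doomed);
            DELETE FROM TMD_TEST            WHERE run_id IN (SELECT id FROM doomed);
            DELETE FROM RunHistory          WHERE id     IN (SELECT id FROM doomed);
            COMMIT;

            DROP TEMPORARY TABLE doomed;
        END $$

    The original EXIT handlers can be kept as-is; they are omitted above only for brevity.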


  • EXC_BAD_ACCESS when trying to delete from Core Data ($cash solution)

    - by llloydxmas
    I have an application that downloads an XML file, parses it, and creates Core Data objects while doing so. In the parse code I have a method called emptyDataContext that removes all items from Core Data before creating the replacements from the XML data. This method looks like this:

        - (void)emptyDataContext
        {
            NSFetchRequest *allCon = [[NSFetchRequest alloc] init];
            [allCon setEntity:[NSEntityDescription entityForName:@"Condition"
                                          inManagedObjectContext:managedObjectContext]];

            NSError *error = nil;
            NSArray *conditions = [managedObjectContext executeFetchRequest:allCon error:&error];
            DebugLog(@"ERROR: %@", error);
            DebugLog(@"RETRIEVED: %@", conditions);
            [allCon release];

            for (NSManagedObject *condition in conditions) {
                [managedObjectContext deleteObject:condition];
            }

            // Update the data model, effectively removing the objects deleted above.
            if (![managedObjectContext save:&error]) {
                DebugLog(@"%@", [error domain]);
            }
        }

    The first time this runs it deletes all objects and functions as it should, creating new objects from the XML file. I created an 'update' button that starts the exact same process of retrieving the file and proceeding with the parse and build. All is well until it's time to delete the Core Data objects: the deleteObject call raises EXC_BAD_ACCESS every time, but only on the second pass. Captured errors return null. If I log the conditions array, I get a list of NSManagedObjects on the first run; on the second run, the log request crashes exactly as the deleteObject call does. I have a feeling it is something very simple I'm missing or am not doing correctly. The data works great in my table views; it's only the update that crashes. I have spent days on this and tried numerous alternative methods. What's left of my hair is falling out. I'd be willing to ante up some cash for anyone willing to look at my code and see what I'm doing wrong. I just need to get past this hurdle. Thanks in advance for the help!
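    A hedged debugging direction, given the works-once-crashes-twice pattern: it smells like either an over-released object (NSZombieEnabled usually names the culprit) or a context touched from the wrong thread, since NSXMLParser callbacks running on a download thread must not use a context created on the main thread. If it is the threading case, the era-appropriate fix is one context per thread over the shared coordinator:

        // If the download/parse runs off the main thread, give that thread
        // its own context instead of sharing the main-thread one.
        NSManagedObjectContext *parseContext = [[NSManagedObjectContext alloc] init];
        [parseContext setPersistentStoreCoordinator:
            [[self managedObjectContext] persistentStoreCoordinator]];

        // ... run emptyDataContext / the rebuild against parseContext ...

        NSError *error = nil;
        if (![parseContext save:&error]) {
            NSLog(@"save failed: %@", error);
        }
        [parseContext release];

    A second small point: logging [error domain] on failure hides the useful part; logging the whole error object (and its userInfo) is usually more informative.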


  • Sort a list of pointers.

    - by YuppieNetworking
    Hello all, once again I find myself failing at some really simple task in C++. Sometimes I wish I could de-learn all I know about OO in Java, since my problems usually start with thinking like Java. Anyway, I have a std::list<BaseObject*> that I want to sort. Let's say that BaseObject is:

        class BaseObject {
        protected:
            int id;
        public:
            BaseObject(int i) : id(i) {};
            virtual ~BaseObject() {};
        };

    I can sort the list of pointers to BaseObject with a comparator struct:

        struct Comparator {
            bool operator()(const BaseObject* o1, const BaseObject* o2) const {
                return o1->id < o2->id;
            }
        };

    And it would look like this:

        std::list<BaseObject*> mylist;
        mylist.push_back(new BaseObject(1));
        mylist.push_back(new BaseObject(2));
        // ...
        mylist.sort(Comparator());
        // deletes and exception handling intentionally omitted

    Up to here, everything is A-OK. However, I introduced some derived classes:

        class Child : public BaseObject {
        protected:
            int var;
        public:
            Child(int id1, int n) : BaseObject(id1), var(n) {};
            virtual ~Child() {};
        };

        class GrandChild : public Child {
        public:
            GrandChild(int id1, int n) : Child(id1, n) {};
            virtual ~GrandChild() {};
        };

    So now I would like to sort following these rules:

    1. For any Child object c and BaseObject b, b < c.
    2. To compare BaseObject objects, use their ids, as before.
    3. To compare Child objects, compare their vars. If they are equal, fall back to rule 2.
    4. GrandChild objects should fall back to the Child behavior (rule 3).

    I initially thought that I could probably do some casts in Comparator. However, that casts away constness. Then I thought I could compare typeids, but then everything looked messy, and it is not even correct. How could I implement this sort, still using list<BaseObject*>::sort? Thank you
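    A hedged comparator sketch implementing the four rules with dynamic_cast, which works fine on pointers-to-const (only const_cast removes constness). It assumes the comparator can read id and var, e.g. via friend declarations in BaseObject and Child, or public getters:

        struct Comparator {
            bool operator()(const BaseObject* o1, const BaseObject* o2) const {
                const Child* c1 = dynamic_cast<const Child*>(o1);
                const Child* c2 = dynamic_cast<const Child*>(o2);

                if (!c1 && !c2) return o1->id < o2->id; // rule 2: two plain bases
                if (!c1) return true;                   // rule 1: base sorts before child
                if (!c2) return false;

                if (c1->var != c2->var)                 // rule 3: children by var...
                    return c1->var < c2->var;
                return o1->id < o2->id;                 // ...ties fall back to id
            }
        };
        // A GrandChild is-a Child, so the same branch handles it (rule 4).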


  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented, MSSQL and MySQL. When we use MSSQL (with Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

        CRecordset r(&m_db);
        r.Open(CRecordset::snapshot,
               L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref");
        r.MoveFirst();
        while (!r.IsEOF())
        {
            // Retrieve
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }

    With the MySQL driver, everything runs as it should: the query returns and everything is lightning fast. With the MSSQL Native Client, however, things slow down, because on every MoveNext() the driver communicates with the server. I think this is due to server-side cursors, but I haven't found a way to disable them. I have tried:

        ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER);

    but this didn't help either; there are still long-running executions of sp_cursorfetch() et al. visible in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time too (even when there is only one record in the result set). This, however, only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query itself doesn't take that long: executing the same SQL via SQL Studio over the same connection returns in a reasonable time.

    Question 1: Is it possible to somehow get the Native Client to cache all results locally, in a similar fashion to what the MySQL driver seems to do? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all the data from a SELECT at once, then never talk to the server again until the next query. We don't care about recordset updates, deletes, etc., or any of that nonsense; we only want to retrieve data. We take the recordset, read all the data from it, and delete it.

    Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?
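    A hedged sketch of the usual MFC remedy: a snapshot recordset asks for a scrollable cursor, which SQL Server implements as a server-side API cursor (hence the sp_cursorfetch round trip per fetch). A forward-only, read-only recordset gets the default "firehose" result set instead, which streams all rows in one round trip:

        CRecordset r(&m_db);
        // forwardOnly + readOnly => default result set, so no
        // sp_cursorfetch round trips on each MoveNext().
        r.Open(CRecordset::forwardOnly,
               L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref",
               CRecordset::readOnly);

        while (!r.IsEOF())   // no MoveFirst(): a forward-only set starts on row 1
        {
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }
        r.Close();

    Given the stated read-only, read-everything usage, losing scrollability should cost nothing here.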


  • Authenticating users in iPhone app

    - by Myron
    I'm developing an HTTP API for our web application. Initially, the primary consumer of the API will be an iPhone app we're developing, but I'm designing this with future uses in mind (such as mobile apps for other platforms). I'm trying to decide on the best way to authenticate users so they can access their accounts from the iPhone. I've got a design that I think works well, but I'm no security expert, so I figured it would be good to ask for feedback here. The design of the user authentication has 3 primary goals:

    1. Good user experience: We want to allow users to enter their credentials once and remain logged in indefinitely, until they explicitly log out. I would have considered OAuth if not for the fact that the experience from an iPhone app is pretty awful, from what I've heard (i.e. it launches the login form in Safari, then tells the user to return to the app when authentication succeeds).
    2. No need to store the user credentials with the app: I always hate the idea of having the user's password stored either in plain text or symmetrically encrypted anywhere, so I don't want the app to have to store the password to pass it to the API for future API requests.
    3. Security: We definitely don't need the intense security of a banking app, but I'd obviously like this to be secure.

    Overall, the API is REST-inspired (i.e. treating URLs as resources, and using the HTTP methods and status codes semantically). Each request to the API must include two custom HTTP headers: an API key (unique to each client app) and a unique device ID. The API requires all requests to be made over HTTPS, so that the headers and body are encrypted.

    My plan is to have an api_sessions table in my database. It has a unique constraint on the API key and unique device ID (so that a device may only be logged into a single user account through a given app), as well as a foreign key to the users table. The API will have a login endpoint, which receives the username/password and, if they match an account, logs the user in, creating an api_sessions record for the given API key and device ID. Future API requests will look up the api_session using the API key and device ID and, if a record is found, treat the request as being logged in under the user account referenced by the api_session record. There will also be a logout endpoint, which deletes the record from the api_sessions table. Does anyone see any obvious security holes in this?
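    For concreteness, a hedged sketch of the api_sessions table as described (column names and types are assumptions):

        CREATE TABLE api_sessions (
            id          INTEGER PRIMARY KEY,
            api_key     VARCHAR(64) NOT NULL,   -- identifies the client app
            device_id   VARCHAR(64) NOT NULL,   -- unique per device
            user_id     INTEGER     NOT NULL REFERENCES users (id),
            created_at  TIMESTAMP   NOT NULL,
            UNIQUE (api_key, device_id)         -- one account per app+device
        );

    Login inserts (or replaces) a row here, every authenticated request does a lookup on (api_key, device_id), and logout deletes the row.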

