Search Results

Search found 12506 results on 501 pages for 'business transaction mana'.


  • Saving child collections with NHibernate

    - by Ben
    Hi, I am in the process of learning NHibernate, so bear with me. I have an Order class and a Transaction class. Order has a one-to-many association with Transaction. The Transactions table in my database has a not-null constraint on the OrderId foreign key.
    Order class:
      public class Order
      {
          public virtual Guid Id { get; set; }
          public virtual DateTime CreatedOn { get; set; }
          public virtual decimal Total { get; set; }
          public virtual ICollection<Transaction> Transactions { get; set; }
          public Order() { Transactions = new HashSet<Transaction>(); }
      }
    Order Mapping:
      <class name="Order" table="Orders">
        <cache usage="read-write"/>
        <id name="Id"> <generator class="guid"/> </id>
        <property name="CreatedOn" type="datetime"/>
        <property name="Total" type="decimal"/>
        <set name="Transactions" table="Transactions" lazy="false" inverse="true">
          <key column="OrderId"/>
          <one-to-many class="Transaction"/>
        </set>
      </class>
    Transaction class:
      public class Transaction
      {
          public virtual Guid Id { get; set; }
          public virtual DateTime ExecutedOn { get; set; }
          public virtual bool Success { get; set; }
          public virtual Order Order { get; set; }
      }
    Transaction Mapping:
      <class name="Transaction" table="Transactions">
        <cache usage="read-write"/>
        <id name="Id" column="Id" type="Guid"> <generator class="guid"/> </id>
        <property name="ExecutedOn" type="datetime"/>
        <property name="Success" type="bool"/>
        <many-to-one name="Order" class="Order" column="OrderId" not-null="true"/>
      </class>
    Really, I don't want a bidirectional association. There is no need for my Transaction objects to reference their Order object directly (I just need to access the transactions of an order). However, I had to add this so that Order.Transactions is persisted to the database.
    Repository:
      public void Update(Order entity)
      {
          using (ISession session = NHibernateHelper.OpenSession())
          {
              using (ITransaction transaction = session.BeginTransaction())
              {
                  session.Update(entity);
                  foreach (var tx in entity.Transactions)
                  {
                      tx.Order = entity;
                      session.SaveOrUpdate(tx);
                  }
                  transaction.Commit();
              }
          }
      }
    My problem is that this will then issue an update for every transaction in the order's collection (regardless of whether it has changed or not). What I was trying to get around was having to explicitly save the transactions before saving the order; instead, I want to just add the transactions to the order and then save the order:
      public void Can_add_transaction_to_existing_order()
      {
          var orderRepo = new OrderRepository();
          var order = orderRepo.GetById(new Guid("aa3b5d04-c5c8-4ad9-9b3e-9ce73e488a9f"));
          Transaction tx = new Transaction();
          tx.ExecutedOn = DateTime.Now;
          tx.Success = true;
          order.Transactions.Add(tx);
          orderRepo.Update(order);
      }
    Although I have found quite a few articles covering the setup of a one-to-many association, most of these discuss retrieving data rather than persisting it back. Many thanks, Ben
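    A common way to avoid the explicit SaveOrUpdate loop (a sketch, not necessarily what the asker settled on) is to add cascade="all-delete-orphan" (or cascade="save-update") to the <set> mapping and give Order an AddTransaction helper that sets both ends of the association, so a single session.Update(order) also persists new children:
      using System;
      using System.Collections.Generic;
      using NHibernate;

      public class Order
      {
          public virtual Guid Id { get; set; }
          public virtual DateTime CreatedOn { get; set; }
          public virtual decimal Total { get; set; }
          public virtual ICollection<Transaction> Transactions { get; set; }

          public Order() { Transactions = new HashSet<Transaction>(); }

          // Sets both ends of the association so the not-null OrderId constraint is satisfied.
          public virtual void AddTransaction(Transaction tx)
          {
              tx.Order = this;
              Transactions.Add(tx);
          }
      }

      public class OrderRepository
      {
          // With cascade="all-delete-orphan" on the <set>, NHibernate inserts any new
          // transient Transaction children here without an explicit loop. Whether
          // unchanged children still get UPDATEs depends on the mapping (e.g. a
          // version column or select-before-update="true").
          public void Update(Order entity)
          {
              using (ISession session = NHibernateHelper.OpenSession())
              using (ITransaction transaction = session.BeginTransaction())
              {
                  session.Update(entity);
                  transaction.Commit();
              }
          }
      }
    The test then calls order.AddTransaction(tx) instead of order.Transactions.Add(tx), and the repository no longer needs to know about the child entities at all.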

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx
    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain type of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred.
    Let's look at the problem without any context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:
      1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
      2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
      3. Don't do either.
    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers, as it turns out, don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. To save space, the two can be collapsed into a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.
    The implication of the original question, "how do you enforce the non-negative balance rule", then boils down to:
      1. Insert an entry in the ledger.
      2. Run validation of recent entries.
      3. Insert a reverse entry to roll back the transaction if validation failed.
    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on them with a pseudo entry stating the balance as of midnight or so. This lets you avoid doing math on the fly over too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.
    Back to some nagging questions though: "But mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time. In our case, in choosing a ledger approach, we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transaction" issue.
    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the new ledger entry. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.
    Next, we might worry about data loss. Here, Mongo offers several write-concerns. Write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. There are several other write-concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.
    Where does this leave us? Oh, yes. Eventual consistency. By now, we ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to. So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that your placing of an order does not mean that your product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).
    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never surfaces a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would be visible, since we do actually account for the attempt.
    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.
    So where are we now with the nagging questions? "No joins": huh? What are those for? "No transactions": you mean no cross-collection and no cross-document transactions? Granted, but we don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
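    To make the ledger pattern above concrete, here is a minimal, datastore-agnostic C# sketch (the class and method names are invented for illustration; in Mongo itself each LedgerEntry would simply be one inserted document) of the entry shape, the balance validation, and the compensating entry:
      using System;
      using System.Collections.Generic;
      using System.Linq;

      // One immutable ledger entry per transfer: never updated, only appended.
      public class LedgerEntry
      {
          public DateTime Timestamp { get; set; }
          public string FromAccountId { get; set; }
          public string ToAccountId { get; set; }
          public decimal Amount { get; set; }
          public bool IsCompensation { get; set; }
      }

      public static class LedgerValidation
      {
          // Balance = sum of credits minus sum of debits for the account.
          public static decimal BalanceOf(string accountId, IEnumerable<LedgerEntry> ledger)
          {
              return ledger.Sum(e => (e.ToAccountId == accountId ? e.Amount : 0m)
                                   - (e.FromAccountId == accountId ? e.Amount : 0m));
          }

          // Step 1: insert the entry. Step 2: validate. Step 3: append a compensating
          // entry (never an update) if the balance went negative.
          public static void Transfer(IList<LedgerEntry> ledger, string from, string to, decimal amount)
          {
              var entry = new LedgerEntry
              {
                  Timestamp = DateTime.UtcNow, FromAccountId = from, ToAccountId = to, Amount = amount
              };
              ledger.Add(entry);
              if (BalanceOf(from, ledger) < 0m)
              {
                  ledger.Add(new LedgerEntry
                  {
                      Timestamp = entry.Timestamp,   // same timestamp, so both appear together
                      FromAccountId = to, ToAccountId = from, Amount = amount, IsCompensation = true
                  });
              }
          }
      }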

    Read the article

  • Couldn't Make It to Oracle OpenWorld? Fear Not! Upcoming: Using the Oracle E-Business Suite SDK for Java in ADF Applications Webcast

    - by Juan Camilo Ruiz
    For those of you who didn't make it to Oracle OpenWorld, we have good news. The ADF and E-Business Suite teams are well aware that various ADF and Oracle E-Business Suite customers are looking for guidance on how to work with the Oracle E-Business Suite SDK for Java in ADF applications: its capabilities, limitations, etc. As some of you might know, Sara Woodhull from the Applications Technology Group (ATG) in Oracle E-Business Suite and I delivered a session on the topic at Oracle OpenWorld last week. The good news is that we are already planning to deliver this session again as a webcast, tentatively scheduled for Nov. 2, 2012. Stay tuned to this and the Oracle E-Business Suite Technology Stack blog for upcoming information about the webcast.

    Read the article

  • Where can I locate business data to use in my application?

    - by Aaron McIver
    This question talks about any and all free public raw data, which appeared to have some valuable pieces but nothing that really provides what I am looking for. Instead of using a socially defined listing of businesses (Foursquare), I would like a business listing data set of registered businesses and associated addresses that could then be searched based on location (coordinates). The critical need is that the data set should be filterable on varying criteria (give me all restaurants, coffee shops, etc...). If the data is free, that is great, but anywhere that sells this type of data would also suffice. Infochimps looked like a possibility, but perhaps something a bit more extensive exists. Where can I find a free or for-fee data set of registered businesses that is filterable by type of business and location?

    Read the article

  • How do I use Instagram to post to my business page but not my personal page?

    - by DEdesigns57
    I tried asking this question on the Facebook forums but got no answer; maybe someone here might be able to help me with this. I have a Facebook account, and within that same account is my Facebook page for my business. The problem is that every time I try to use Instagram to upload a photo, it gets uploaded to my personal/primary Facebook page and not the business page, which is under the same email address and password. Instagram asks for this email address and password, but how can I have Instagram upload to my business page and not my personal page? Or at the very least, how can I do both?

    Read the article

  • What's New in Tutor 14 - Integration with BPA/BPM Suite for a complete business process management solution.

    Hear about the latest strategies for maximizing the value of your Oracle Applications using technologies in Oracle Fusion Middleware. Today's businesses recognize that to be more innovative with their business applications, they need to shorten their application implementations, eliminate brittle integrations, and develop a simpler approach to securing and managing their applications. In this podcast we'll hear techniques for extending the reach of applications through improved user experience and collaboration, creating application extensions that minimize risk during upgrades, and making more informed decisions with integrated business intelligence. These approaches, applied with Oracle Fusion Middleware and Oracle Applications, can help lower TCO and provide rapid returns for your business.

    Read the article

  • Displaying single-instance business object data on SSRS

    - by Lachlan
    I have a SQL Server Reporting Services local (i.e. RDLC) report displayed in a ReportViewer, with two subreports. I am using business objects to populate the datasets. What is the best way to populate single-instance data on my report, e.g. a dynamic title, or a text box that lists a calculated value that is not based on the repeating report data? I am currently displaying data using the following style:
      public class MyRecordList
      {
          string Name { get; set; }
          List<MyRecord> Records { get; set; }
      }
      public class MyRecord
      {
          string Description { get; set; }
          string Value { get; set; }
      }
    I set the data source to the Records of an instance of MyRecordList, and they print out fine in a table. But adding a textbox and referring to Name displays nothing. I also tried turning Name into a List and referring to the first item in the list, using =First(Fields!Name.Value, "Report1_MyRecordList"), but still nothing is printed on the report.
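    One common way to get single-instance values like Name onto a local report (a sketch; the dataset name, parameter name, and report path are illustrative, and it assumes Name and Records are exposed as public properties) is to keep Records as the table's data source and pass the scalar as a report parameter declared in the RDLC, shown in a textbox with the expression =Parameters!ReportName.Value:
      using Microsoft.Reporting.WinForms;   // Microsoft.Reporting.WebForms for a web ReportViewer

      public static class ReportSetup
      {
          public static void Bind(ReportViewer viewer, MyRecordList data)
          {
              LocalReport report = viewer.LocalReport;
              report.ReportPath = "Report1.rdlc";   // illustrative path

              // Repeating rows: the Records list feeds the table, as before.
              report.DataSources.Clear();
              report.DataSources.Add(new ReportDataSource("Report1_MyRecordList", data.Records));

              // Single-instance value: declared as a parameter in the RDLC and
              // referenced from a textbox as =Parameters!ReportName.Value
              report.SetParameters(new[] { new ReportParameter("ReportName", data.Name) });

              viewer.RefreshReport();
          }
      }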

    Read the article

  • When connecting SAP Business One to SQL Server 2005, what is the best practice for setting up user logins?

    - by Nick
    We have SAP Business One - Fourth Shift Edition running here at a small manufacturing company. The consulting company that has come in to do the installation/implementation uses the "sa" id/pass to initially connect to the database to get the list of companies. From then on, I have to assume that it's the sa id/pass that is being used to connect the client software to the database. Is this appropriate? I don't know where this data is being stored... as an ODBC connection? Directly in the registry somewhere? Is it secure? Would it be better to set the user's network ID in the database security and then use the "trusted connection" setting instead? Or do most people create a separate login in the database for each user and use that in the client settings? It seems like the easiest way would be to add the users' network logins to the SQL Server security so they can use the "trusted connection"... but then wouldn't that allow ANY software to connect to the database from that machine? So anyway: what are the best practices for setting this up?

    Read the article

  • Are form validation and business validation together too much?

    - by Robert Cabri
    I've got this question about form validation and business validation. I see a lot of frameworks that use some sort of form validation library. You submit some values and the library validates the values from the form. If not OK, it will show some errors on your screen. If all goes to plan, the values will be set into domain objects. Here the values will be, or better said should be, validated (again). Most likely the same validation as in the validation library. I know two PHP frameworks that have this kind of construction: Zend and Kohana. When I look at programming and some principles like Don't Repeat Yourself (DRY) and the single responsibility principle (SRP), this isn't a good way. As you can see, it validates twice. Why not create domain objects that do the actual validation? Example: a form with username and email fields is submitted. The values of the username field and the email field will be populated into two different domain objects, Username and Email:
      class Username {}
      class Email {}
    These objects validate their data and, if not valid, throw an exception. Do you agree? What do you think about this approach? Is there a better way to implement validations? I'm confused about how a lot of frameworks/developers handle this stuff. Are they all wrong, or am I missing a point? Edit: I know there should also be some kind of client-side validation. That is a different ballgame in my opinion. If you have comments on this and a way to deal with it, please share.
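    As a minimal sketch of the self-validating domain object idea described above (shown in C#; the regex and names are illustrative only), the value object refuses to exist in an invalid state, so the form layer only maps input and collects exceptions rather than keeping a second copy of the rules:
      using System;
      using System.Text.RegularExpressions;

      // A self-validating value object: an invalid Email simply cannot be constructed.
      public sealed class Email
      {
          public string Value { get; private set; }

          public Email(string value)
          {
              if (string.IsNullOrWhiteSpace(value) ||
                  !Regex.IsMatch(value, @"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
                  throw new ArgumentException("Not a valid email address.", "value");
              Value = value;
          }
      }

      // Form handling then becomes mapping plus error collection:
      //   try { var email = new Email(input); } catch (ArgumentException ex) { errors.Add(ex.Message); }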

    Read the article

  • Separation of business logic

    - by bruno
    Dear all, when I was optimizing the architecture of the applications on our website, I came to a problem that I don't know the best solution for. At the moment we have a small DLL based on this structure: Database <-> DAL <-> BLL. The DAL uses business objects to pass data to the BLL, which passes it on to the applications that use this DLL. Only the BLL is public, so any application that includes this DLL can see the BLL. In the beginning, this was a good solution for our company. But as we add more and more applications onto that DLL, the BLL keeps getting bigger. Now we don't want some applications to be able to see BLL logic belonging to other applications, and I don't know what the best solution is for that. The first thing I thought of was to move and separate the BLL into other DLLs which I can include in my application. But then the DAL must be public so the other DLLs can get the data... and that seems like a good solution. My other solution is just to separate the BLL into different namespaces, and include only the namespaces you need in each application. But in this solution, you can still get direct access to the other BLLs if you want. So I'm asking for your opinions. Thanks, Bruno
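    One way to split the BLL into per-application assemblies without making the DAL public to everyone (a sketch; the assembly and type names are made up) is to keep the DAL types internal and grant access only to the BLL assemblies via InternalsVisibleTo:
      // In the DAL project's AssemblyInfo.cs: the DAL stays invisible to applications,
      // but the named BLL assemblies can still call it. (Strong-named assemblies also
      // need the PublicKey= part in the attribute.)
      using System.Runtime.CompilerServices;

      [assembly: InternalsVisibleTo("Company.Bll.Orders")]      // hypothetical BLL assembly
      [assembly: InternalsVisibleTo("Company.Bll.Customers")]   // hypothetical BLL assembly

      namespace Company.Dal
      {
          // internal, not public: applications referencing only a BLL DLL cannot reach it.
          internal class OrderRepository
          {
              internal void Save(object order)
              {
                  // data access code goes here
              }
          }
      }
    Each application then references only the BLL assembly it needs, and the shared DAL stays out of reach.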

    Read the article

  • Access modifiers - Property on business objects - getting and setting

    - by Mike
    Hi, I am using LINQ to SQL for the data access layer. I have business objects similar to what is in the data access layer. I have the data provider getting message #23. On instantiation of the message, in the message constructor, it gets the MessageType, makes a new instance of the MessageType class, and fills in the MessageType information from the database. Therefore, I want this to get the Name of the MessageType of the Message: user.Messages[23].MessageType.Name. I also want an administrator to set the MessageType: user.Messages[23].MessageType = MessageTypes.LoadType(3); but I don't want users to be able to publicly set MessageType.Name. However, when I make a new MessageType instance, the access modifier for the Name property is public, because I want to set it from an external class (my data access layer). I could change this property to internal, so that my class can access it like a public variable while my other application cannot modify it. This still doesn't feel right, as it seems like a public property. Are public access modifiers in this situation bad? Any tips or suggestions would be appreciated. Thanks.
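    One pattern that matches what is being asked (a sketch) is a public getter with an internal setter, so code in the assembly containing the data access layer can hydrate Name while other assemblies can only read it:
      using System;

      public class MessageType
      {
          public Guid Id { get; internal set; }

          // Readable everywhere; writable only inside the assembly that contains
          // the data access layer (which hydrates the object from the database).
          public string Name { get; internal set; }
      }

      // From another assembly:
      //   string name = user.Messages[23].MessageType.Name;   // compiles
      //   user.Messages[23].MessageType.Name = "Alert";        // compile-time error
    Combined with InternalsVisibleTo (if the data access layer lives in a separate assembly), this keeps the property read-only to the outside world without making it fully public.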

    Read the article

  • Access Controller Context/ TempData from business objects

    - by thanikkal
    I am trying to build a session/tempdata provider that can be swapped out. The default provider will work on top of ASP.NET MVC, and it needs to access the MVC TempData from the business object class. I know the TempData is available through the controller context, but I can't seem to find whether that is exposed through HttpContext or something similar. I don't really want to pass the controller context as an argument, as that would dilute my interface definition; only the ASP.NET-based session provider needs it, and the others (using a NoSQL DB, etc.) don't care about ControllerContext. To clarify further, I'm adding a little more code here. My ISession interface looks like this; when this code goes to production, the session/tempdata is expected to work using a NoSQL DB, but I would also like another implementation that works on top of ASP.NET MVC session/tempdata for my dev testing, etc.
      public interface ISession
      {
          T GetTempData<T>(string key);
          void PutTempData<T>(string key, T value);
          T GetSessiondata<T>(string key);
          void PutSessiondata<T>(string key, T value);
      }
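    One way to keep the ISession interface clean (a sketch with invented names) is to construct the MVC-backed implementation where the controller is already in hand (a controller factory, base controller, or DI registration) and pass it the TempDataDictionary and session state, so business objects only ever see ISession:
      using System.Web;
      using System.Web.Mvc;

      // MVC-backed implementation; built per request at the composition root,
      // so ControllerContext never leaks into the business layer.
      public class MvcSession : ISession
      {
          private readonly TempDataDictionary _tempData;
          private readonly HttpSessionStateBase _session;

          public MvcSession(TempDataDictionary tempData, HttpSessionStateBase session)
          {
              _tempData = tempData;
              _session = session;
          }

          // No null/missing-key handling here, for brevity.
          public T GetTempData<T>(string key) { return (T)_tempData[key]; }
          public void PutTempData<T>(string key, T value) { _tempData[key] = value; }
          public T GetSessiondata<T>(string key) { return (T)_session[key]; }
          public void PutSessiondata<T>(string key, T value) { _session[key] = value; }
      }
    The NoSQL-backed implementation then implements the same interface with no MVC references at all, which is the swappability the question is after.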

    Read the article

  • Generating jquery 'rules' from business model to UI in asp.net mvc

    - by jim
    Hi all, I've had a good look around and am certain that there's no matching question on SO, so here goes. Has anyone created a 'helper' method on their model that generates jQuery (or plain JavaScript) validation rules dynamically, based on the criteria/rules that are contained within the object and taken from a repository (i.e. the DB)? What I'm thinking of is a discrete set of partial views (and associated models) that have rules at the business logic 'level', and rather than (or in combination with) validating the rule(s) at postback, translating the same rules into tightly focused jQuery methods that work identically at the client (JS) and server (C#) levels. I can see benefits here re performance. Also, the rule definitions could be created in a single place (in C#) and the jQuery generated off of that, thus allowing a single edit to update both code streams. I appreciate that there would be limitations imposed by language-specific constraints, but the general principle could be quite interesting if used appropriately. I'm also aware that testability could be an issue when using two different language structures and hoping to achieve similar test outcomes - but that aside... any thoughts or experiences of something similar out there? Cheers, jimi
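    As a rough sketch of the idea (the attribute, property, and class names here are all invented), the rule metadata can live once on the model and be reflected both into the server-side check and into a structure that serializes to jQuery Validate's rules option:
      using System;
      using System.Collections.Generic;
      using System.Reflection;

      // Hypothetical rule metadata declared once on the model.
      [AttributeUsage(AttributeTargets.Property)]
      public class RuleAttribute : Attribute
      {
          public bool Required { get; set; }
          public int MaxLength { get; set; }
      }

      public class CustomerModel
      {
          [Rule(Required = true, MaxLength = 50)]
          public string Name { get; set; }
      }

      public static class RuleEmitter
      {
          // Produces e.g. { "Name": { "required": true, "maxlength": 50 } }; serialize it
          // to JSON and feed it to $("form").validate({ rules: ... }) on the client, and
          // walk the same attributes for the C# check at postback.
          public static Dictionary<string, Dictionary<string, object>> For(Type modelType)
          {
              var rules = new Dictionary<string, Dictionary<string, object>>();
              foreach (PropertyInfo prop in modelType.GetProperties())
              {
                  var attr = (RuleAttribute)Attribute.GetCustomAttribute(prop, typeof(RuleAttribute));
                  if (attr == null) continue;
                  var entry = new Dictionary<string, object>();
                  if (attr.Required) entry["required"] = true;
                  if (attr.MaxLength > 0) entry["maxlength"] = attr.MaxLength;
                  rules[prop.Name] = entry;
              }
              return rules;
          }
      }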

    Read the article

  • Are workflows good for web service business logic?

    - by JL
    I have a series of complex web services that are used in my SOA application. I am generally happy with the overall design of the application, but as the complexity grows, I am wondering whether Windows Workflow might be the way to go. My motivation for this is that you get a graphical representation of the application's functionality, so it would be easier to maintain the code by its business function rather than what I have now (a standard 3-tier class library structure). My concerns are: I would be introducing an abstraction into my code, and I don't want to spend time having to deal with possible WF quirks or bugs. I've never worked with WF; is it a solid technology? I don't want to hit any WF limitations that prevent me from developing my solution. Is a WF even the right solution for the task? Simply put, I am considering having my next web service in this app call a WF, and managing the tasks the web service needs to carry out in that workflow. I think it will be much neater and easier to maintain than a regular C# class library (maintainable by namespaces and classes). Do you think this is the right thing to do? I'm hoping for positive feedback on WF (.NET 4), but brutal honesty at the end of the day would help more. Thanks

    Read the article

  • Building the DropThings ASP.NET Solution

    - by Sephrial
    Hi all, I'm attempting to build a project called DropThings but I am getting all these errors and I'm not sure how to resolve them. Can anyone lend a helping hand? I'm wondering if anyone else can build the website and, if so, what steps you took. Thanks in advance! Source Code: http://code.google.com/p/dropthings/
    My configuration: Visual Studio 2008 SP1; SQL 2005 (with the database loaded and the Web.Config file configured); Microsoft Silverlight Projects 2008.
    Errors (error | project/file):
      Metadata file '...\Dropthings-v2.2.0\src\DashboardDataAccess\bin\Debug\Dropthings.DataAccess.dll' could not be found. | ...\Dropthings-v2.2.0\src\DashboardBusiness
      Build failed due to validation errors in ...\Dropthings-v2.2.0\src\DashboardDataAccess\DropthingsDataContext.dbml. Open the file and resolve the issues in the Error List, then try rebuilding the project. | ...\Dropthings-v2.2.0\src\DashboardDataAccess\DropthingsDataContext.dbml
      Metadata file '...\Dropthings-v2.2.0\src\DashboardBusiness\bin\Debug\Dropthings.Business.dll' could not be found. | ...\Dropthings-v2.2.0\src\Dropthings.Business.Activities
      Metadata file '...\Dropthings-v2.2.0\src\DashboardDataAccess\bin\Debug\Dropthings.DataAccess.dll' could not be found. | ...\Dropthings-v2.2.0\src\Dropthings.Business.Activities
      Metadata file '...\Dropthings-v2.2.0\src\Dropthings.Business.Activities\bin\Debug\Dropthings.Business.Activities.dll' could not be found. | ...\Dropthings-v2.2.0\src\Dropthings.Business.Workflows
      Metadata file '...\Dropthings-v2.2.0\src\DashboardBusiness\bin\Debug\Dropthings.Business.dll' could not be found. | ...\Dropthings-v2.2.0\src\Dropthings.Business.Workflows
      Metadata file '...\Dropthings-v2.2.0\src\DashboardDataAccess\bin\Debug\Dropthings.DataAccess.dll' could not be found. | ...\Dropthings-v2.2.0\src\Dropthings.Business.Workflows
      Metadata file '...\Dropthings-v2.2.0\src\DashboardDataAccess\bin\Debug\Dropthings.DataAccess.dll' could not be found. | Dropthings.Widget.Framework
      Metadata file '...\Dropthings-v2.2.0\src\DashboardBusiness\bin\Debug\Dropthings.Business.dll' could not be found. | ...\Dropthings-v2.2.0\src\Dropthings.Business.Facade
      Metadata file '...\Dropthings-v2.2.0\src\DashboardDataAccess\bin\Debug\Dropthings.DataAccess.dll' could not be found. | ...\Dropthings-v2.2.0\src\Dropthings.Business.Facade
      Metadata file '...\Dropthings-v2.2.0\src\DashboardDataAccess\bin\Debug\Dropthings.DataAccess.dll' could not be found. | Dropthings.Web.Framework
      Metadata file '...\Dropthings-v2.2.0\src\Dropthings.Business.Workflows\bin\Debug\Dropthings.Business.Workflows.dll' could not be found. | Dropthings.Web.Framework
      Metadata file '...\Dropthings-v2.2.0\src\Dropthings.Business.Activities\bin\Debug\Dropthings.Business.Activities.dll' could not be found. | Dropthings.Web.Framework
      Metadata file '...\Dropthings-v2.2.0\src\Dropthings.Widget.Framework\bin\Debug\Dropthings.Widget.Framework.dll' could not be found. | Dropthings.Web.Framework
      Metadata file '...\Dropthings-v2.2.0\src\Dropthings.Business.Facade\bin\Debug\Dropthings.Business.Facade.dll' could not be found. | Dropthings.Web.Framework
      Metadata file '...\Dropthings-v2.2.0\src\DashboardBusiness\bin\Debug\Dropthings.Business.dll' could not be found. | Dropthings.Web.Framework
      The type or namespace name 'Framework' does not exist in the namespace 'Dropthings.Web' (are you missing an assembly reference?) | ...\Dropthings-v2.2.0\src\Dropthings\web.config

    Read the article

  • How many inserts can you have in a SQL transaction?

    - by Mav
    I have a task to do that will require me to use a transaction to ensure that many inserts are either all completed or the entire update is rolled back. I am concerned about the amount of data that needs to be inserted in this transaction and whether it will have a negative effect on the server. We are looking at about 10,000 records in table1 and 60,0000 records into table2. Is this safe to do in a single transaction?
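    Volume-wise, tens of thousands of rows is well within what SQL Server can handle in one transaction; the main costs are transaction log space and how long locks are held. As a hedged C# sketch (connection string, table names, and DataTable preparation are illustrative), both loads can be enlisted in a single transaction with SqlBulkCopy so either everything commits or nothing does:
      using System.Data;
      using System.Data.SqlClient;

      public static class BatchLoader
      {
          public static void LoadBoth(DataTable table1Rows, DataTable table2Rows, string connectionString)
          {
              using (var conn = new SqlConnection(connectionString))
              {
                  conn.Open();
                  using (SqlTransaction tx = conn.BeginTransaction())
                  {
                      try
                      {
                          // Both bulk loads share the same transaction: either both
                          // sets of rows are committed or neither is.
                          using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tx))
                          {
                              bulk.DestinationTableName = "dbo.table1";
                              bulk.WriteToServer(table1Rows);

                              bulk.DestinationTableName = "dbo.table2";
                              bulk.WriteToServer(table2Rows);
                          }
                          tx.Commit();
                      }
                      catch
                      {
                          tx.Rollback();
                          throw;
                      }
                  }
              }
          }
      }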

    Read the article

  • Is there something like a "long running offline transaction" for NHibernate or any other ORM?

    - by Vilx-
    In essence this is a follow-up to this question. I'm beginning to feel that I should give up the whole idea, but I'll give it one more shot. What I want is pretty much like a DB transaction. It should track my changes to the DB and then in the end allow me to either commit or roll them back. If I insert an object, I should get it back in my next (appropriate) SELECT query. If I delete it, future SELECT queries should not return it. Etc. But there is one catch - this transaction would be very long-running. It would start when the user opened a form (I'm talking about Windows Forms here), and the commit/rollback would happen when the user closed it (with OK/Cancel). So it could take anywhere between seconds and days. This requirement rules out a standard DB transaction because that would lock the tables/rows it touched, and other users wouldn't be able to use the system. Also, the transaction should not commit ANY changes to the DB until it was really committed. So if one user makes some changes, others don't see them until the OK button is hit. This prevents errors in case the computer crashes or is disconnected from the network. I'm quite OK if the solution puts constraints on my model (I'm using MSSQL 2008, btw). I can design the DB/code any way I like. I'm also fine with the idea that a commit could fail because someone already modified one of the objects my transaction touched. Is there anything like this? I looked at NHibernate.Burrow, but I'm not sure that that's the thing I want. Added: It's the very beginning of the project, so I'm not tied to NHibernate. I started out with it, but I can still change easily.
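    NHibernate's usual answer to this is the session-per-conversation pattern (a long-lived session with a manual flush mode, flushed only when the form is accepted), but as a datastore-agnostic illustration of the same idea, here is a sketch with invented names: pending operations are queued in memory and replayed inside one short transaction, with optimistic version checks, only when the user hits OK:
      using System;
      using System.Collections.Generic;

      // Hypothetical abstraction over the real data access (NHibernate, ADO.NET, ...).
      public interface IDbGateway
      {
          void Insert(object entity);
          void UpdateIfVersionMatches(object entity, int expectedVersion);
          void Delete(object entity);
          void BeginShortTransaction();
          void CommitShortTransaction();
          void RollbackShortTransaction();
      }

      // Changes are applied only when Commit() runs; Cancel simply discards the queue,
      // so no database locks are held while the form is open.
      public class OfflineUnitOfWork
      {
          private readonly List<Action<IDbGateway>> _pending = new List<Action<IDbGateway>>();

          public void RegisterInsert(object entity) { _pending.Add(db => db.Insert(entity)); }

          public void RegisterUpdate(object entity, int expectedVersion)
          {
              // Optimistic check: fails if someone else changed the row in the meantime.
              _pending.Add(db => db.UpdateIfVersionMatches(entity, expectedVersion));
          }

          public void RegisterDelete(object entity) { _pending.Add(db => db.Delete(entity)); }

          public void Commit(IDbGateway db)
          {
              db.BeginShortTransaction();
              try { foreach (var op in _pending) op(db); db.CommitShortTransaction(); }
              catch { db.RollbackShortTransaction(); throw; }
          }

          public void Cancel() { _pending.Clear(); }
      }
    The genuinely hard part, as the question notes, is that reads issued while the form is open must also consult the pending queue to "see" uncommitted inserts and deletes; that is exactly what conversation-scoped ORM sessions try to solve.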

    Read the article

  • Keyboard navigation in typical WPF Business Line Application

    - by Guge
    I have a WPF line-of-business application with a tabbed user interface, menus, and toolbars. The application must be navigable with the keyboard; some users like to minimize mouse use. I have thrown together the included XAML sample to illustrate the problem of keyboard navigation. Using the Tab key I can only get to the menu and then to the toolbar. I have tried various attached properties for KeyboardNavigation and FocusManager, but I have not succeeded in the following goals:
      1. Navigate to the contents of the first tab page using the keyboard.
      2. Disable Tab key navigation from the tab pages to the menu and toolbar.
      3. Change tab pages using Ctrl+Tab.
      <Window x:Class="WpfApplication4.MainWindow"
              xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
              Title="MainWindow" Height="408" Width="569">
        <DockPanel>
          <Menu DockPanel.Dock="Top">
            <MenuItem Header="_File">
              <MenuItem Header="Open"/>
            </MenuItem>
            <MenuItem Header="_Edit"></MenuItem>
          </Menu>
          <ToolBarTray DockPanel.Dock="Top">
            <ToolBar>
              <Label Target="{Binding ElementName=SearchBox}">_Search</Label>
              <TextBox Name="SearchBox" Width="80"></TextBox>
              <Button>Search</Button>
            </ToolBar>
          </ToolBarTray>
          <StatusBar DockPanel.Dock="Bottom">Statusbar here</StatusBar>
          <TabControl>
            <TabItem Header="File 1">
              <WrapPanel>
                <Button>Click</Button>
                <CheckBox>Check</CheckBox>
                <Slider Minimum="0" Maximum="100" Width="150"/>
              </WrapPanel>
            </TabItem>
            <TabItem Header="File 2"></TabItem>
            <TabItem Header="File 3"></TabItem>
          </TabControl>
        </DockPanel>
      </Window>
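    A sketch of the usual levers (assuming x:Name="MainMenu", x:Name="MainToolBarTray", and x:Name="MainTabs" are added to the elements above): KeyboardNavigation attached properties take the menu and toolbar out of the Tab cycle, initial focus is moved into the first tab page, and WPF's TabControl should already switch pages with Ctrl+Tab by default.
      using System.Windows;
      using System.Windows.Input;

      namespace WpfApplication4
      {
          public partial class MainWindow : Window
          {
              public MainWindow()
              {
                  InitializeComponent();
                  Loaded += OnLoaded;
              }

              private void OnLoaded(object sender, RoutedEventArgs e)
              {
                  // Take the menu and toolbar out of the Tab cycle; they remain
                  // reachable via Alt / access keys.
                  KeyboardNavigation.SetTabNavigation(MainMenu, KeyboardNavigationMode.None);
                  KeyboardNavigation.SetTabNavigation(MainToolBarTray, KeyboardNavigationMode.None);

                  // Keep Tab cycling inside the tab pages.
                  KeyboardNavigation.SetTabNavigation(MainTabs, KeyboardNavigationMode.Cycle);

                  // Put initial focus on the first focusable control of the selected page.
                  MainTabs.MoveFocus(new TraversalRequest(FocusNavigationDirection.First));
              }
          }
      }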

    Read the article

  • How to make an access database where both users with and without an ID number can make a transaction

    - by louise
    I am trying to create an Access 2007 database that allows staff who already have ID numbers to make a transaction, and also allows guest users who do not have an ID number to make a transaction. What is the best way to do this in Access? A transaction involves taking an item out of inventory. Therefore, if a user (staff or external) has an item out of inventory, then no other user can get hold of that item. Thanks, any ideas would be most appreciated!

    Read the article

  • What transaction manager should I use for JDBC template when using JPA?

    - by Sajid
    I am using the standard JPA transaction manager for my JPA transactions. However, now I want to add some JDBC entities which will share the same datasource. How can I make the JDBC operations transactional with Spring transactions? Do I need to switch to a JTA transaction manager? Is it possible to use both JPA and JDBC transactional services with the same datasource? Even better, is it possible to mix these two kinds of transactions?

    Read the article

  • Best way to return result from business layer to presentation layer when using LINQ-to-SQL

    - by samsur
    I have a business layer with DTOs that are used in the presentation layer. This application uses Entity Framework. Here is an example of a class called RoleDTO:
      public class RoleDTO
      {
          public Guid RoleId { get; set; }
          public string RoleName { get; set; }
          public string RoleDescription { get; set; }
          public int? OrganizationId { get; set; }
      }
    In the BLL I want to have a method that returns a list of DTOs. I would like to know which is the better approach: returning an IQueryable or a list of DTOs. I feel that returning an IQueryable is not a good idea, because the connection needs to stay open. Here are the two different methods using the different approaches.
    First approach:
      public class RoleBLL
      {
          private servicedeskEntities sde;

          public RoleBLL()
          {
              sde = new servicedeskEntities();
          }

          public IQueryable<RoleDTO> GetAllRoles()
          {
              IQueryable<RoleDTO> role = from r in sde.Roles
                                         select new RoleDTO()
                                         {
                                             RoleId = r.RoleID,
                                             RoleName = r.RoleName,
                                             RoleDescription = r.RoleDescription,
                                             OrganizationId = r.OrganizationId
                                         };
              return role;
          }
      }
    Note: in the above method the data context is a private field set in the constructor, so that the connection stays open.
    Second approach:
      public static List<RoleDTO> GetAllRoles()
      {
          List<RoleDTO> roleDTO = new List<RoleDTO>();
          using (servicedeskEntities sde = new servicedeskEntities())
          {
              var roles = from pri in sde.Roles
                          select new { pri.RoleID, pri.RoleName, pri.RoleDescription };

              // Add the role entities to the DTO list and return. This is necessary
              // as anonymous types cannot be returned across methods.
              foreach (var item in roles)
              {
                  RoleDTO roleItem = new RoleDTO();
                  roleItem.RoleId = item.RoleID;
                  roleItem.RoleDescription = item.RoleDescription;
                  roleItem.RoleName = item.RoleName;
                  roleDTO.Add(roleItem);
              }
              return roleDTO;
          }
      }
    Please let me know if there is a better approach.
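    A third option (a sketch) keeps the second approach's short-lived context but projects straight into the DTO and materializes the list with ToList() before the context is disposed, avoiding both the long-lived connection of the first approach and the manual copy loop of the second:
      using System.Collections.Generic;
      using System.Linq;

      public static List<RoleDTO> GetAllRoles()
      {
          using (var sde = new servicedeskEntities())
          {
              // ToList() executes the query while the context is still open,
              // so only plain DTOs (no IQueryable) escape the business layer.
              return (from r in sde.Roles
                      select new RoleDTO
                      {
                          RoleId = r.RoleID,
                          RoleName = r.RoleName,
                          RoleDescription = r.RoleDescription,
                          OrganizationId = r.OrganizationId
                      }).ToList();
          }
      }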

    Read the article

  • How to configure a default global EJB transaction attribute in JBoss Server?

    - by seven
    My project needs to migrate from OC4J to JBoss, but it seems that the default EJB transaction attribute is different between them. For OC4J: if you do not specify any transaction attributes for an EJB method, then OC4J uses default transaction attributes. OC4J by default uses Required for CMP 2.0, NotSupported for MDBs, and Supports for all other types of EJBs (refer to the official doc). For JBoss: the default for all EJB types is Required (maybe; refer to an unofficial site). To migrate my project with less effort, how can I configure a default global EJB transaction attribute, e.g. Supports, in JBoss Server?

    Read the article

  • Would this prevent the row from being read during the transaction?

    - by acidzombie24
    I remember an example where reading in a transaction and then writing the data back is not safe, because another transaction may read/write it in between. So I would like to check the date and prevent the row from being modified or read until my transaction is finished. Would this do the trick? And are there any SQL variants that this will not work on? update tbl set id=id where date > expire_date and id = @id. Note: date > expire_date happens to be my condition; it could be anything. Would this prevent other transactions from reading the row until I commit or roll back?
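    On SQL Server specifically, a more explicit way to reserve the row for the rest of the transaction (a sketch; whether plain readers are also blocked still depends on the isolation level and snapshot settings) is to take the lock up front with SELECT ... WITH (UPDLOCK, HOLDLOCK) instead of a dummy UPDATE:
      using System.Data.SqlClient;

      public static class RowReservation
      {
          // UPDLOCK takes an update lock immediately (other writers and UPDLOCK readers
          // block until commit/rollback); HOLDLOCK keeps it for the whole transaction.
          public static bool TryReserveRow(SqlConnection conn, SqlTransaction tx, int id)
          {
              using (var cmd = new SqlCommand(
                  "select 1 from tbl with (updlock, holdlock) where id = @id and date > expire_date",
                  conn, tx))
              {
                  cmd.Parameters.AddWithValue("@id", id);
                  object found = cmd.ExecuteScalar();
                  return found != null;   // row matched and is now held until the transaction ends
              }
          }
      }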

    Read the article

  • How to avoid StaleObjectStateException when transaction updates thousands of entities?

    - by ThinkFloyd
    We are using Hibernate 3.6.0.Final with JPA 2 and Spring 3.0.5 for a large-scale enterprise application running on Tomcat 7 and MySQL 5.5. Most of the transactions in the application live for less than a second and update 5-10 entities, but in some use cases we need to update more than 10-20K entities in a single transaction, which takes a few minutes; hence, more than 70% of the time such a transaction fails with StaleObjectStateException because some of those entities were updated by some other transaction. We maintain a version column in all tables, and in case of StaleObjectStateException we generally retry, but since these transactions are very long anyway, I am not sure that retrying will let us escape StaleObjectStateException. Also, a lot of activities keep updating these entities during busy hours, so we cannot go with a pessimistic approach, because it could potentially halt many activities in the system. Please suggest how to fix this long-transaction issue; we cannot just spawn thousands of small independent transactions, because we cannot afford messed-up data when some succeed and some fail.

    Read the article
