Search Results

Search found 2819 results on 113 pages for 'healthcare transaction ba'.


  • Is there a best practice for maintaining history in a database?

    - by Pete
    I don't do database work that often, so this is totally unfamiliar territory for me. I have a table with a bunch of records that users can update. However, I now want to keep a history of their changes in case they want to roll back. Rollback in this case is not the db rollback but more like reverting changes two weeks later, when they realize they made a mistake; the distinction being that I can't have a transaction do the job. Is the current practice to use a separate table, or just a flag in the current table? It's a small database: 5 tables, each with < 6 columns, < 1000 rows total.
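
    For what it's worth, the separate-table approach is usually done with a history table populated by a trigger, so application code doesn't have to remember to write history rows. A minimal sketch, assuming SQL Server and a hypothetical Widget table with hypothetical columns:

        -- Hypothetical history table: one row per change, rows are never updated.
        CREATE TABLE WidgetHistory (
            HistoryId INT IDENTITY PRIMARY KEY,
            WidgetId  INT           NOT NULL,
            Name      NVARCHAR(50)  NULL,
            Price     DECIMAL(9,2)  NULL,
            ChangedAt DATETIME      NOT NULL DEFAULT GETDATE(),
            ChangedBy NVARCHAR(128) NULL
        );
        GO
        -- Snapshot the old values on every update, so any row can be reverted later.
        CREATE TRIGGER trg_Widget_History ON Widget AFTER UPDATE
        AS
            INSERT INTO WidgetHistory (WidgetId, Name, Price, ChangedBy)
            SELECT WidgetId, Name, Price, SUSER_SNAME()
            FROM deleted;
        GO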

    Read the article

  • How to do rolling balances in Linq2SQL

    - by David Liddle
    Given an account with a list of transactions, I would like to output a query that shows each transaction with the rolling balance (just like you would see on an online banking account).

        TRANSACTIONS
        - ID
        - DATE
        - AMOUNT

    Here is what I created in T-SQL; I was wondering whether it can be translated into Linq2SQL code:

        select T.ID,
               convert(char(10), T.DATE, 101) as 'DATE',
               T.AMOUNT,
               (select sum(O.AMOUNT)
                from TRANSACTIONS O
                where O.DATE < T.DATE
                   or (O.DATE = T.DATE and O.ID <= T.ID)) 'BALANCE'
        from TRANSACTIONS as T
        where T.DATE between @pStartDate and @pEndDate
        order by T.DATE, T.ID

    Alternatively, I guess my other option is just to call a stored procedure for this kind of result. However, I have Services which call Repositories and didn't really want to put the sproc call in the Repository.
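
    For reference, a rough Linq2SQL equivalent of the correlated subquery, assuming a DataContext named db with a Transactions table; this is a sketch, and the balance subquery runs once per row, so it has the same performance profile as the T-SQL:

        var results =
            from t in db.Transactions
            where t.Date >= startDate && t.Date <= endDate
            orderby t.Date, t.ID
            select new
            {
                t.ID,
                t.Date,
                t.Amount,
                // Rolling balance: sum everything on or before this transaction.
                Balance = db.Transactions
                    .Where(o => o.Date < t.Date ||
                                (o.Date == t.Date && o.ID <= t.ID))
                    .Sum(o => o.Amount)
            };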

    Read the article

  • Are there SqlExceptions which throw but commit their data anyway?

    - by Jonn
    I've recently encountered this error on one of my Windows services:

        System.Data.SqlClient.SqlException: The transaction log for database 'mydatabase' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

    The service is supposed to retry after catching an SqlException; what I didn't expect was that the data seemed to still be going through (I'm using an SqlBulkCopy, btw) regardless of the exception being thrown. I've never encountered this scenario before. I'd like to know if there are other scenarios where something like this might happen, and whether this is entirely possible at all in the first place? PS: If anyone knows the error code for the above exception, that would help a great deal as well.
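
    For reference: by default SqlBulkCopy is non-transactional, so rows copied before a failure remain in the table, which would explain the data "going through". A sketch of making the copy all-or-nothing with an external transaction (connection string, table and variable names are hypothetical):

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tran = conn.BeginTransaction())
            using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tran))
            {
                bulk.DestinationTableName = "dbo.MyTable";
                try
                {
                    bulk.WriteToServer(sourceTable); // sourceTable: a DataTable
                    tran.Commit();                   // nothing is visible until here
                }
                catch (SqlException)
                {
                    tran.Rollback();                 // undo any partially copied rows
                    throw;
                }
            }
        }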

    Read the article

  • Grails Services / Transactions / RuntimeException / Testing

    - by Rob
    I'm testing some code in a service with transactional set to true, which talks to a customer-supplied web service. The main part of it looks like:

        class BarcodeService {
            // ... some stuff ...
            try {
                cancelBarCodeResponse = cancelBarCode(cancelBarcodeRequest)
            } catch (myCommsException e) {
                throw new RuntimeException(e)
            }
            // ...
        }

    where myCommsException extends Exception. I have a test which looks like:

        // As there is no connection from my machine, it should fail
        shouldFailWithCause(RuntimeException) {
            barcodeServices.cancelBarcodeDetails()
        }

    The test fails because it's catching a myCommsException rather than the RuntimeException I thought I'd converted it to. Anyone care to point out what I'm doing wrong? Also, given that it's not a RuntimeException, will any transaction-related work done before my try/catch actually be written out rather than thrown away? Thanks

    Read the article

  • move data from one table to another, postgresql edition

    - by IggShaman
    Hi All, I'd like to move some data from one table to another (with a possibly different schema). The straightforward solution that comes to mind is:

        -- start a transaction with serializable isolation level
        INSERT INTO dest_table
        SELECT data FROM orig_table, other_tables WHERE <condition>;

        DELETE FROM orig_table USING other_tables WHERE <condition>;

        COMMIT;

    Now what if the amount of data is rather big, and the <condition> is expensive to compute? In PostgreSQL, a RULE or a stored procedure can be used to delete data on the fly, evaluating the condition only once. Which solution is better? Are there other options?
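
    One more option, on PostgreSQL 9.1 and later: a writable CTE moves the rows in a single statement, so <condition> is evaluated only once. A sketch using the question's placeholder names:

        WITH moved AS (
            DELETE FROM orig_table
            USING other_tables
            WHERE <condition>
            RETURNING orig_table.*
        )
        INSERT INTO dest_table
        SELECT * FROM moved;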

    Read the article

  • Why is checking in files called a 'commit'?

    - by Kjetil Klaussen
    The act of checking in files in a source control repository like git, mercurial or svn is called a commit. Does anyone know the reason behind calling it a commit instead of just a check-in? English is not my mother tongue, so it might be some linguistics I don't quite get here, but what am I actually committing to? (Hopefully I'm not committing a crime, but you never know.) Is it used in the sense of "to consign for preservation"? Is it related to database transactions (commit at the end of a transaction)?

    Read the article

  • Retrieve value of one column in a table

    - by user327094
    Here is my problem: I have 2 tables, Accounts and Transaction Logs. The Accounts table has a column "Amount", which is the base amount of an account, and the Trans Logs table also has a column "Amount", which is the additional amount (added to or subtracted from the base amount) for the account. I don't know how to retrieve that base amount to edit it and then save it back to the table. That means I need to get the value of the right column, using Acc_No to find it. I'm using a DataSet, by the way. I think it should go like this:

        Dim Amount As Decimal
        Amount = *the code to retrieve the base amount*
        Amount = Amount + txtAmount.Text
        *the code to save the new amount back to the Accounts table*

    Thank you!
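
    A minimal sketch of that round trip, assuming the DataSet was filled by a SqlDataAdapter and Acc_No is defined as the Accounts table's primary key (ds, adapter and txtAccNo are hypothetical names):

        ' Find the account row by its Acc_No primary key value.
        Dim row As DataRow = ds.Tables("Accounts").Rows.Find(txtAccNo.Text)

        Dim amount As Decimal = CDec(row("Amount"))    ' retrieve the base amount
        amount += Decimal.Parse(txtAmount.Text)        ' apply the transaction
        row("Amount") = amount                         ' update the DataRow

        adapter.Update(ds, "Accounts")                 ' save the change back to the table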

    Read the article

  • Monitoring Log Shipped Databases

    - by Registered User
    I need a consistent way to monitor databases that are read-only log-shipped copies of production databases. In the past I have relied on the following methods:

      - Set the job that restores logs to the database to kick off another job as its last step.
      - Set the job that restores logs to the database to insert a record in a control table as its last step.
      - Query the msdb database to check the status of the job that restores logs to the database.
      - Query a control table inside the database itself that gets a value immediately before transaction logs are backed up.
      - Query MAX values from tables inside the database to see if it has recent changes.

    Although the above methods work, they can't be implemented for every log-shipped database that I query, for various reasons. What is the best method for monitoring the "data as of" date for a log-shipped database?
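
    One more consistent option, where the monitoring process can query msdb on the secondary server: the restore history records when each log backup was applied and how fresh its data is. A sketch:

        -- Latest restore per database; backup_finish_date is the "data as of" point.
        SELECT rh.destination_database_name,
               MAX(rh.restore_date)       AS last_restore,
               MAX(bs.backup_finish_date) AS data_as_of
        FROM msdb.dbo.restorehistory rh
        JOIN msdb.dbo.backupset bs
            ON bs.backup_set_id = rh.backup_set_id
        GROUP BY rh.destination_database_name;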

    Read the article

  • Hibernate entities stored as HttpSession attribute values

    - by njudge
    I'm dealing with a legacy Java application with a large, fairly messy codebase. There's a fairly standard 'User' object that gets stored in the HttpSession between requests, so the servlets do stuff like this at the top:

        HttpSession session = request.getSession(true);
        User user = (User) session.getAttribute("User");

    The old user authentication layer (which I won't describe; suffice to say, it did not use a database) is being replaced with code mapped to the DB with Hibernate. So 'User' is now a Hibernate entity. My understanding of Hibernate object life cycles is a little fuzzy, but it seems like storing 'User' in the HttpSession now becomes a problem, because it will be retrieved in a different transaction during the next request. What is the right thing to be doing here? Can I just use the Hibernate Session object's update() method to reattach the User instance the next time around? Do I need to?
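
    For what it's worth, a sketch of reattaching the detached User on the next request (sessionFactory is a hypothetical injected SessionFactory). update() reattaches the instance itself but throws if the session already holds a User with that id; merge() returns a managed copy and is the safer default:

        HttpSession httpSession = request.getSession(true);
        User detached = (User) httpSession.getAttribute("User");

        Session session = sessionFactory.getCurrentSession();
        // merge() copies the detached state onto a managed instance and returns it;
        // use the returned object for the rest of this request/transaction.
        User user = (User) session.merge(detached);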

    Read the article

  • Storing task state between multiple django processes

    - by user366148
    I am building a logging bridge between rabbitmq messages and a Django application, to store background task state in the database for further investigation/review and to make it possible to re-publish tasks via the Django admin interface. I guess it's nothing fancy, just a standard Producer-Consumer pattern:

      1. The web application publishes to the message queue and inserts the initial task state into the database.
      2. The consumer, which is a separate Python process, handles the message and updates the task state depending on the task output.

    The problem is, some tasks are missing from the db and therefore never executed. I suspect it's because the consumer receives the message earlier than the db commit is performed. So basically, returning from Model.save() doesn't mean the transaction has ended, and the whole communication breaks. Is there any way I could fix this? Maybe some kind of post-transaction signal I could use? Thank you in advance.
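
    For reference, newer Django versions (1.9+) ship exactly that hook: transaction.on_commit runs a callback only after the surrounding transaction commits. A sketch, where TaskState and publish_task are hypothetical stand-ins for the model and the AMQP publish call:

        from django.db import transaction

        def create_task(request):
            state = TaskState.objects.create(status="pending")
            # The consumer can never see the message before the row exists:
            # the publish is deferred until the transaction actually commits.
            transaction.on_commit(lambda: publish_task(state.pk))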

    Read the article

  • Java training for .NET developers?

    - by C Keene
    We are working with a large retail bank on training 40-50 .Net developers to use Java. They are familiar with C# and .Net framework and have built dozens of "run the business" style apps in .Net. We need advice on how to provide basic Java familiarity, with a focus on back end logic, security and transaction management. Back end is Spring/Hibernate, front end is Ajax (Dojo). Are there any online, self-paced, Java courses that would be good for C#/.Net developers to get up to speed quickly?

    Read the article

  • Does this raw SQL make only one trip to the database, or many trips?

    - by Álvaro García
    Suppose I have this SQL:

        string strTSQL = "BEGIN TRAN delete from MyTable where ID = 1";
        strTSQL += ";delete from MyTable where ID = 2";
        strTSQL += ";delete from MyTable where ID = 3 COMMIT";

        using (Entities dbContext = new Entities())
        {
            dbContext.Database.ExecuteSqlCommand(strTSQL);
        }

    This uses a transaction in the database, so either all the commands are executed or none are. But since I execute it through EF, does it make only one trip to the database, or many? Thanks.

    Read the article

  • Who owes who money optimisation problem

    - by Francis
    Say you have n people, each of whom may owe the others money. In general it should be possible to reduce the number of transactions that need to take place. For example, if X owes Y £4 and Y owes X £8, then Y only needs to pay X £4 (1 transaction instead of 2). This becomes harder when X owes Y, but Y owes Z, who owes X as well. I can see that you can easily cancel out one particular cycle. It helps me to think of it as a fully connected graph, with the edges being the amounts each person owes. The problem seems to be NP-complete, but what kind of optimisation algorithm could I make, nevertheless, to reduce the total number of transactions? It doesn't have to be that efficient, as n is quite small for me.
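
    A simple heuristic that works well in practice: net out each person's balance, then repeatedly match the largest debtor with the largest creditor. This needs at most n-1 transfers. A sketch (assumes the balances sum to zero):

        def settle(balances):
            """balances: {person: net amount; positive = is owed, negative = owes}."""
            creditors = sorted((amt, p) for p, amt in balances.items() if amt > 0)
            debtors = sorted((-amt, p) for p, amt in balances.items() if amt < 0)
            transfers = []
            while creditors and debtors:
                c_amt, c = creditors.pop()          # largest creditor
                d_amt, d = debtors.pop()            # largest debtor
                paid = min(c_amt, d_amt)
                transfers.append((d, c, paid))      # d pays c
                if c_amt > paid:                    # creditor still owed something
                    creditors.append((c_amt - paid, c))
                    creditors.sort()
                if d_amt > paid:                    # debtor still owes something
                    debtors.append((d_amt - paid, d))
                    debtors.sort()
            return transfers

        # settle({'X': 4, 'Y': -4}) -> [('Y', 'X', 4)]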

    Read the article

  • Sum up values in SQL once all values are available

    - by James Brown
    I have events flowing into a MySQL database and I need to group and sum the events into transactions and store them away in another table. The data looks like:

        +----+---------+------+-------+
        | id | transid | code | value |
        +----+---------+------+-------+
        |  1 |       1 | b    |    12 |
        |  2 |       1 | i    |    23 |
        |  3 |       2 | b    |    34 |
        |  4 |       1 | e    |    45 |
        |  5 |       3 | b    |    56 |
        |  6 |       2 | i    |    67 |
        |  7 |       2 | e    |    78 |
        |  8 |       3 | i    |    89 |
        |  9 |       3 | i    |    90 |
        +----+---------+------+-------+

    The events arrive in batches and I would like to create the transaction by summing up the values for each transid, like:

        select transid, sum(value)
        from eventtable
        group by transid;

    but only after all the events for that transid have arrived. That is determined by the event with the code e (b for the beginning, e for the end and i for a varying number of intermediates). Being a novice in SQL, how could I implement the requirement for the existence of the end code before the summing?
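
    One way: MySQL treats boolean expressions as 0/1 inside aggregates, so the HAVING clause can require the end event, skipping unfinished transactions (transid 3 above would be excluded). A sketch:

        SELECT transid, SUM(value) AS total
        FROM eventtable
        GROUP BY transid
        HAVING SUM(code = 'e') = 1;  -- only groups whose end event has arrived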

    Read the article

  • How to know when the user touches the OK button of the last StoreKit alert "Thank you. Your purchase was successful"?

    - by Walchy
    I have integrated "In App Purchase" in a game to let the user unlock more levels. Everything works fine, but I have a little problem with the last alert: "Thank You. Your purchase was successful. [OK]". My program gets informed that the transaction completed successfully before this last alert pops up, and so my game starts running again - then the alert comes up, annoying the user. I would like to keep the game paused until the user touches the "OK" button, but since it is an alert from StoreKit I have no idea when this happens or how I could catch it. I don't want to create another dialog below the alert (this time my own, and therefore under my control) just asking the user to touch "OK" again - that would be a bad user experience. Anybody have any ideas?

    Read the article

  • Why Does TFS Allow Orphaned Content and How Do I Get Rid of It?

    - by Chad
    My TfsVersionControl database has grown to 40+ GB in size. We recently did a TFS Destroy on a folder tree that should have cleared up at least 10 GB, but instead it seemed to have no effect. When I look at the tables in TfsVersionControl, I am first shocked to see that there are no foreign keys at all in the database. Running a few queries, I see that there is some orphaning going on:

      - tbl_Content has 13.9 GB of records that don't have a related tbl_File record
      - tbl_File and tbl_Content have 2.4 GB that don't have a related tbl_Namespace record

    The cleanup job seems to be running nightly (prc_DeleteUnusedContent), and running it against the database manually doesn't remove any orphans. I see in the log for the cleanup job that it failed on 3/16, which is the morning after I destroyed the large amount of data. The error was due to a full transaction log. Could that error be the reason I'm left with all this orphaned data that can't be deleted? How can I permanently destroy this unneeded content?

    Read the article

  • Simple performance testing tool in C#?

    - by Tomas
    Hi, first of all: I need to do this as a university project, so I am not interested in using existing tools. I would like to know whether it is even possible to write a very simple tool that I could use for performance testing of web applications. It would only record actions (I don't know, maybe just packet sniffing?) and then replay them. However, while I have a basic idea (record packets on port 80 and send them again), I do not know how to measure the time for each transaction, as they are not differentiated. Any help is greatly appreciated, thank you!

    Read the article

  • Web Development In Java Using Netbeans

    - by GigaPr
    Hi, I am trying to implement a web application (university project) in Java using the following frameworks:

      - Spring Dependency Injection
      - Spring AOP (logging and transaction management)
      - Spring DAO
      - JDBC or Hibernate
      - Spring MVC
      - Log4J

    I created a new Web Application in NetBeans and it gives me a bunch of files and folders by default. Could anyone explain what these files are? Where should I put the code for the data access layer and business logic? Or where can I find a basic tutorial to get started (covering the data access layer, business layer and possibly code examples)? Thanks

    Read the article

  • Mock Objects properties not changing

    - by frictionlesspulley
    I am new to using mock tests in .NET. I am testing a financial transaction of the following nature:

        int amt = 20;

        // sets all the props and funcs and returns a FinanceAccount.
        // Note: I did not set up the amt of the account.
        var account = GetFinanceAccount();

        // service layer to be tested
        _financeService.tranx(account, amt);

        // checks if the amt was added to account.amt
        // here the amt comes out the same as that set in GetFinanceAccount.
        Assert.AreEqual(account.amt, amt);

    I know that the function tranx works correctly, but there is an issue with the test. Is there any good reference material on mocking in .NET?
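
    If the account is a mock built with a framework such as Moq (an assumption; the question doesn't name the library), writes to a property are only tracked once the property is set up as a stub, which would explain amt never changing. A hedged sketch, with IFinanceAccount and Amt as hypothetical names:

        var account = new Mock<IFinanceAccount>();
        account.SetupProperty(a => a.Amt, 0m);     // track reads/writes to Amt, starting at 0

        _financeService.tranx(account.Object, 20);

        Assert.AreEqual(20m, account.Object.Amt);  // now reflects the service's write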

    Read the article

  • SQL Server 2008 error message from stored procedure

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise. We met with the following error message from a stored procedure:

        Msg 1205, Level 13, State 52, Procedure Pr_FooV2, Line 9
        Transaction (Process ID 111) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

    I am wondering whether such messages are stored in log files? I searched the Log folder under my SQL Server 2008 installation root (in my environment, C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log), but cannot find any such messages. Thanks in advance, George
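
    For reference, deadlock 1205 messages are returned to the client but not written to the error log by default; enabling trace flag 1222 (or the older 1204) makes SQL Server write the full deadlock graph to that Log folder. A sketch:

        -- Log deadlock graphs to the SQL Server error log, server-wide.
        DBCC TRACEON (1222, -1);

        -- To survive restarts, add -T1222 as a startup parameter instead.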

    Read the article

  • Customers and suppliers database design issue

    - by hectorsq
    I am developing a web application in which I will have customers and suppliers. Initially I thought of using a Customers table and a Suppliers table. Then, when I was thinking about bank transactions, I noticed that each transaction needs to refer to either a customer or a supplier, so I thought of using a single table named Business in which I would store both customers and suppliers. If I use separate Customers and Suppliers tables, then when I want to list bank transactions I will have to search both tables to get the company name. If I use a single Businesses table, I will have to use a business-type column and carry the union of the possible fields for all business types. Any suggestions on the design?
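
    A common middle ground is the supertype/subtype pattern: a single Business table holds the shared columns (so transactions have one thing to reference), while Customer and Supplier tables hold the type-specific fields. A sketch with hypothetical column names:

        CREATE TABLE Business (
            BusinessId   INT PRIMARY KEY,
            Name         VARCHAR(100) NOT NULL,
            BusinessType CHAR(1) NOT NULL   -- 'C' customer, 'S' supplier
        );

        CREATE TABLE Customer (
            BusinessId INT PRIMARY KEY REFERENCES Business(BusinessId)
            -- customer-only columns here
        );

        CREATE TABLE Supplier (
            BusinessId INT PRIMARY KEY REFERENCES Business(BusinessId)
            -- supplier-only columns here
        );

        CREATE TABLE BankTransaction (
            TransactionId INT PRIMARY KEY,
            BusinessId    INT NOT NULL REFERENCES Business(BusinessId),
            Amount        DECIMAL(12,2) NOT NULL
        );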

    Read the article

  • MySQL storage engine dilemma

    - by burntblark
    There are two MySQL database features that I want to use in my application: FULL-TEXT SEARCH and TRANSACTIONS. The dilemma is that I cannot get both features in one storage engine. Either I use MyISAM (which has full-text search) or I use InnoDB (which supports transactions); I can't have both. My question is: is there any way I can have both features in my application before I am forced to make a choice between the two storage engines?
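
    The classic workaround is to keep the real data in InnoDB and maintain a MyISAM shadow table of only the searchable text, kept in sync with triggers; a sketch with hypothetical table and column names. (On MySQL 5.6 and later, InnoDB supports FULLTEXT indexes natively, which removes the dilemma entirely.)

        CREATE TABLE article_search (
            article_id INT PRIMARY KEY,
            body       TEXT,
            FULLTEXT (body)
        ) ENGINE = MyISAM;

        CREATE TRIGGER article_ai AFTER INSERT ON article
        FOR EACH ROW
            INSERT INTO article_search (article_id, body)
            VALUES (NEW.id, NEW.body);
        -- matching AFTER UPDATE / AFTER DELETE triggers keep the copy in sync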

    Read the article

  • How should I implement simple caches with concurrency on Redis?

    - by solublefish
    Background

    I have a 2-tier web service - just my app server and an RDBMS. I want to move to a pool of identical app servers behind a load balancer. I currently cache a bunch of objects in-process. I hope to move them to a shared Redis.

    I have a dozen or so caches of simple, small-sized business objects. For example, I have a set of Foos. Each Foo has a unique FooId and an OwnerId. One "owner" may own multiple Foos. In a traditional RDBMS this is just a table with an index on the PK FooId and one on OwnerId. I'm caching this in one process simply:

        Dictionary<int, Foo> _cacheFooById;
        Dictionary<int, HashSet<int>> _indexFooIdsByOwnerId;

    Reads come straight from here, and writes go here and to the RDBMS. I usually have this invariant: "For a given group [say by OwnerId], the whole group is in cache or none of it is." So when I cache-miss on a Foo, I pull that Foo and all the owner's other Foos from the RDBMS. Updates make sure to keep the index up to date and respect the invariant. When an owner calls GetMyFoos I never have to worry that some are cached and some aren't.

    What I did already

    The first/simplest answer seems to be to use plain ol' SET and GET with a composite key and JSON value:

        SET("ServiceCache:Foo:" + theFoo.Id, JsonSerialize(theFoo));

    I later decided I liked:

        HSET("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));

    That lets me get all the values in one cache as HVALS. It also felt right - I'm literally moving hashtables to Redis, so perhaps my top-level items should be hashes. This works to first order. If my high-level code is like:

        UpdateCache(myFoo);
        AddToIndex(myFoo);

    That translates into:

        HSET("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));
        var myFoos = JsonDeserialize(HGET("ServiceCache:FooIndex", theFoo.OwnerId));
        myFoos.Add(theFoo.FooId);
        HSET("ServiceCache:FooIndex", theFoo.OwnerId, JsonSerialize(myFoos));

    However, this is broken in two ways:

      - Two concurrent operations can read/modify/write at the same time. The latter "wins" the final HSET and the former's index update is lost.
      - Another operation could read the index in between the first and second lines. It would miss a Foo that it should find.

    So how do I index properly?

    I think I could use a Redis set instead of a JSON-encoded value for the index. That would solve part of the problem, since the "add-to-index-if-not-already-present" would be atomic. I also read about using MULTI as a "transaction", but it doesn't seem like it does what I want. Am I right that I can't really MULTI; HGET; {update}; HSET; EXEC, since it doesn't even do the HGET before I issue the EXEC?

    I also read about using WATCH and MULTI for optimistic concurrency, then retrying on failure. But WATCH only works on top-level keys. So it's back to SET/GET instead of HSET/HGET. And now I need a new index-like thing to support getting all the values in a given cache. If I understand it right, I can combine all these things to do the job. Something like:

        while (!succeeded)
        {
            WATCH("ServiceCache:Foo:" + theFoo.FooId);
            WATCH("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId);
            WATCH("ServiceCache:FooIndexAll");
            MULTI();
            SET("ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo));
            SADD("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId);
            SADD("ServiceCache:FooIndexAll", theFoo.FooId);
            EXEC();
            // TODO somehow set succeeded properly
        }

    Finally I'd have to translate this pseudocode into real code depending on how my client library uses WATCH/MULTI/EXEC; it looks like they need some sort of context to hook them together.

    All in all this seems like a lot of complexity for what has to be a very common case; I can't help but think there's a better, smarter, Redis-ish way to do things that I'm just not seeing.

    How do I lock properly?

    Even if I had no indexes, there's still a (probably rare) race condition:

        A: HGET - cache miss
        B: HGET - cache miss
        A: SELECT
        B: SELECT
        A: HSET
        C: HGET - cache hit
        C: UPDATE
        C: HSET
        B: HSET   ** this is stale data that's clobbering C's update

    Note that C could just be a really-fast A. Again I think WATCH, MULTI, retry would work, but... ick. I know in some places people use special Redis keys as locks for other objects. Is that a reasonable approach here? Should those be top-level keys like ServiceCache:FooLocks:{Id} or ServiceCache:Locks:Foo:{Id}? Or make a separate hash for them - ServiceCache:Locks with subkeys Foo:{Id}, or ServiceCache:Locks:Foo with subkeys {Id}? How would I work around abandoned locks, say if a transaction (or a whole server) crashes while "holding" the lock?
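
    On the abandoned-lock question specifically: the usual Redis answer is a lock key with a built-in expiry, acquired with SET NX PX (Redis 2.6.12+), so a crashed holder blocks others only until the TTL runs out. A sketch in the question's own pseudocode style; ReleaseIfTokenMatches is a hypothetical stand-in for the compare-token-then-DEL step, usually done atomically with a tiny Lua script:

        // acquire: succeeds only if nobody holds the lock; auto-expires in 5 s
        ok = SET("ServiceCache:Locks:Foo:" + theFoo.FooId, myToken, "NX", "PX", 5000);
        if (ok)
        {
            try { /* HGET, update, HSET */ }
            finally
            {
                // release only if we still hold it: compare myToken, then DEL
                ReleaseIfTokenMatches("ServiceCache:Locks:Foo:" + theFoo.FooId, myToken);
            }
        }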

    Read the article

  • Do partitions allow multiple bulk loads?

    - by ck
    I have a database that contains data for many "clients". Currently, we insert tens of thousands of rows into multiple tables every so often using .NET SqlBulkCopy, which causes the entire tables to be locked and inaccessible for the duration of the transaction. As most of our business processes rely upon accessing data for only one client at a time, we would like to be able to load data for one client while updating data for another client. To make things more fun, all PKs, FKs and clustered indexes are on GUID columns (I am looking at changing this). I'm looking at adding the ClientID into all tables, then partitioning on this. Would this give me the functionality I require?
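
    For reference, a sketch of what partitioning on a ClientID column looks like in SQL Server 2008 (boundary values and object names are hypothetical); with escalation set per partition, a bulk load's locks can stop at one client's partition rather than the whole table:

        CREATE PARTITION FUNCTION pfClient (int)
            AS RANGE LEFT FOR VALUES (100, 200, 300);

        CREATE PARTITION SCHEME psClient
            AS PARTITION pfClient ALL TO ([PRIMARY]);

        CREATE TABLE dbo.ClientData (
            ClientID INT NOT NULL,
            Payload  NVARCHAR(100) NULL
        ) ON psClient (ClientID);

        -- Allow lock escalation to the partition instead of the whole table:
        ALTER TABLE dbo.ClientData SET (LOCK_ESCALATION = AUTO);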

    Read the article

  • Can I stagger UpdatePanel updates in .NET?

    - by cusimar9
    I have a situation in which I select an account and I want to bring back its details. This is a single UpdatePanel round trip and it's quite quick. In addition, I need to bring back some transactional information from a much bigger table, which takes a couple of seconds to query. Ideally, I would like to put this into a second UpdatePanel and update this additional information once it has been received, but after the first UpdatePanel has updated, i.e. the user sees:

      1. Change account
      2. See account details (almost instant)
      3. See transactional info (2 seconds later)

    The only way I can think of doing this is to use JavaScript to cause a SECOND postback, once the account details have been retrieved, to get the transaction information; see the sketch below. Is there a better way?
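
    A sketch of that JavaScript-triggered second postback using the ASP.NET AJAX client API; btnLoadTransactions is a hypothetical hidden button inside the slow panel, and needTransactions would be set to true by the account-change handler:

        <script type="text/javascript">
            var prm = Sys.WebForms.PageRequestManager.getInstance();
            prm.add_endRequest(function (sender, args) {
                // After the quick details panel finishes, fire the slow panel once.
                if (window.needTransactions) {
                    window.needTransactions = false;
                    __doPostBack('<%= btnLoadTransactions.UniqueID %>', '');
                }
            });
        </script>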

    Read the article
