Search Results

Search found 2601 results on 105 pages for 'commit'.

Page 50 of 105

  • Transaction Isolation on select, insert, delete

    - by Bradford
    What could possibly go wrong with the following transaction if executed by concurrent users in the default isolation level of READ COMMITTED?
    BEGIN TRANSACTION
    SELECT * FROM t WHERE pid = 10 and r between 40 and 60
    -- ... this returns tid = 1, 3, 5
    -- ... process returned data ...
    DELETE FROM t WHERE tid in (1, 3, 5)
    INSERT INTO t (tid, pid, r) VALUES (77, 10, 35)
    INSERT INTO t (tid, pid, r) VALUES (78, 10, 37)
    INSERT INTO t (tid, pid, r) VALUES (79, 11, 39)
    COMMIT
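    One way to rule out the classic races here (another session deleting or inserting rows in the pid/r range between the SELECT and the COMMIT) is to run the whole read-process-write cycle under SERIALIZABLE. A minimal C# sketch, assuming a .NET client and the table from the question; the connection string and processing logic are placeholders:

    using System.Data.SqlClient;
    using System.Transactions;

    class RangeUpdateSketch
    {
        static void ProcessRange(string connStr)
        {
            var options = new TransactionOptions
            {
                // SERIALIZABLE takes range locks on the SELECT, so no other
                // transaction can insert or delete rows matching the predicate
                // until this transaction commits.
                IsolationLevel = IsolationLevel.Serializable
            };

            using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();

                using (var select = new SqlCommand(
                    "SELECT tid FROM t WHERE pid = 10 AND r BETWEEN 40 AND 60", conn))
                using (var reader = select.ExecuteReader())
                {
                    while (reader.Read()) { /* ... process returned tids ... */ }
                }

                using (var delete = new SqlCommand("DELETE FROM t WHERE tid IN (1, 3, 5)", conn))
                {
                    delete.ExecuteNonQuery();
                }

                // ... INSERTs as in the question ...

                scope.Complete(); // commits; disposing without Complete() rolls back
            }
        }
    }

    The trade-off is more blocking and a higher chance of deadlocks, so narrower measures (for example UPDLOCK/HOLDLOCK hints on just the SELECT) may be preferable.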

    Read the article

  • What really happens when we do a BEGIN TRAN in SQL Server 2005?

    - by Misnomer
    Hi all, I came across this issue, or maybe something I just didn't realize: I ran a BEGIN TRAN with some code inside it and never issued a COMMIT or ROLLBACK, because I forgot about it. That caused many of the database queries, even a simple SELECT TOP 1000, to just sit there loading. It probably put some locks on the tables, I guess, since it did not let me query them, but I just wanted to know what exactly happened and what practices should be followed here?
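    When the transaction is opened from client code rather than an ad-hoc query window, one common practice is to scope it so that forgetting the commit cannot leave locks behind. A small C# sketch, assuming ADO.NET and a hypothetical table name:

    using System.Data.SqlClient;

    class ScopedTransactionSketch
    {
        static void Update(string connStr)
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();

                // If Commit() is never reached (an exception, an early return,
                // or simply forgetting it), Dispose() at the end of the using
                // block rolls the transaction back, so no locks are left behind
                // blocking other sessions.
                using (var tx = conn.BeginTransaction())
                using (var cmd = new SqlCommand("UPDATE SomeTable SET Col = 1", conn, tx))
                {
                    cmd.ExecuteNonQuery();
                    tx.Commit();
                }
            }
        }
    }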

    Read the article

  • CruiseControl.NET : Sending email using SVN username to ActiveDirectory mapping

    - by matthewpe
    Is it possible to configure CruiseControl.NET to send an email to the user(s) whose modifications broke the build, by mapping their SVN username to the corresponding Active Directory alias (and thereby retrieving the correct, up-to-date email address)? Our SVN server is set up to allow users of a certain Active Directory group to read and commit changes, and I don't want to have to maintain the CruiseControl.NET config every time a user gets added to our Programmers group in Active Directory. Thanks a lot!
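    Independently of how CruiseControl.NET itself is configured, the username-to-address half of the mapping is a plain Active Directory lookup. A C# sketch using System.DirectoryServices; the LDAP path is a placeholder and it assumes the SVN username equals the AD sAMAccountName:

    using System.DirectoryServices;

    class AdEmailLookup
    {
        // Resolves an account name to the user's current "mail" attribute.
        static string GetEmail(string username)
        {
            using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
            using (var searcher = new DirectorySearcher(root))
            {
                searcher.Filter = "(&(objectCategory=person)(sAMAccountName=" + username + "))";
                searcher.PropertiesToLoad.Add("mail");

                SearchResult result = searcher.FindOne();
                if (result != null && result.Properties["mail"].Count > 0)
                    return (string)result.Properties["mail"][0];
                return null;
            }
        }
    }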

    Read the article

  • Good book for learning Bourne shell?

    - by John Isaacks
    I want to learn how to write shell scripts. In particular, I want to write an svn post-commit script to upload files from a test server to a production server, and I am sure I will want to write more as I get more into it. I have very little Linux/Unix knowledge. Can anyone recommend a good book?

    Read the article

  • DataGridView Cell Validating only when 'Enter' is pressed

    - by Eldad
    Hi, I want to validate and commit the value entered in the DataGridViewCell ONLY when the user presses the Enter key. If the user presses any other key or mouse button (arrow keys, clicking a different cell with the mouse, ...), I want the behavior to be similar to the ESC key: move the focus to the new cell and revert the edited cell value to its previous value.
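    One possible approach (a WinForms sketch, not a definitive answer; the flag-based wiring is an assumption) is to note in the editing control whether Enter was pressed, and to cancel the edit from CellValidating otherwise:

    using System.Windows.Forms;

    public class EnterOnlyCommit
    {
        private readonly DataGridView grid;
        private bool enterPressed;

        public EnterOnlyCommit(DataGridView grid)
        {
            this.grid = grid;
            grid.EditingControlShowing += OnEditingControlShowing;
            grid.CellValidating += OnCellValidating;
        }

        private void OnEditingControlShowing(object sender, DataGridViewEditingControlShowingEventArgs e)
        {
            enterPressed = false;
            e.Control.PreviewKeyDown -= OnEditingControlPreviewKeyDown; // the editing control is reused
            e.Control.PreviewKeyDown += OnEditingControlPreviewKeyDown;
        }

        private void OnEditingControlPreviewKeyDown(object sender, PreviewKeyDownEventArgs e)
        {
            if (e.KeyCode == Keys.Enter)
                enterPressed = true;
        }

        private void OnCellValidating(object sender, DataGridViewCellValidatingEventArgs e)
        {
            // Anything other than Enter behaves like ESC: discard the edited value.
            if (grid.IsCurrentCellInEditMode && !enterPressed)
                grid.CancelEdit();
            enterPressed = false;
        }
    }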

    Read the article

  • Transactional NTFS (TxF) on Process.Start()

    - by Ian
    Consider the following code:
    try
    {
        using (var scope = new TransactionScope())
        {
            Process.Start("SQLInstaller.EXE");
            throw new Exception();
            scope.Complete(); // never reached
        }
    }
    catch (Exception ex)
    {
        // Do something here
    }
    Will the changes made by SQLInstaller.exe be rolled back in this scenario? More specifically, will the changes made by an external process launched through Process.Start() be handled by TxF? Thanks!

    Read the article

  • git: ignore everything except subdirectory

    - by Michael Goerz
    I want to ignore all files in my repository except those that occur in the 'bin' subdirectory. I tried adding the following to my .gitignore:
    *
    !bin/*
    This does not have the desired effect, however: I created a new file inside bin/, but 'git status' still reports "nothing to commit (working directory clean)". Any suggestions? Thanks, Michael
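    For reference, one commonly suggested pattern (a sketch; it assumes bin/ sits at the repository root) is to ignore only top-level entries and then un-ignore the directory itself, because git never looks inside a directory that is itself ignored, so the !bin/* rule above never gets a chance to match:

    /*
    !/bin/

    Files that were already tracked before the rule was added are unaffected by .gitignore either way.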

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":
    public class Foo : BusinessObjectBase
    {
        ...
        public virtual string Name { get; set; }
    }
    For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): the following piece of code generates n+1 SQL statements, of which the first fetches only the ids and the remaining n fetch the Name for each record:
    ISession session = ...
    IQuery query = session.CreateQuery(queryString);
    ITransaction tx = session.BeginTransaction();
    List<Foo> result = new List<Foo>();
    foreach (Foo foo in query.Enumerable())
    {
        result.Add(foo);
    }
    tx.Commit();
    session.Close();
    produces:
    NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
    NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
    NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
    NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473
    Similarly, the following code leads to a LazyLoadingException after the session is closed:
    ISession session = ...
    ITransaction tx = session.BeginTransaction();
    Foo result = session.Load<Foo>(id);
    tx.Commit();
    session.Close();
    Console.WriteLine(result.Name);
    According to this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException with NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees. This is what the mapping looks like:
    <class name="Foo" table="V1_FOO">
        ...
        <property name="Name" column="NAME"/>
    </class>
    BTW: the abstract "BusinessObjectBase" base class encapsulates the ID property, which serves as the internal identifier.

    Read the article

  • When does validation happen in Core Data?

    - by dontWatchMyProfile
    From the docs: If you make changes to managed objects associated with a given context, those changes remain local to that context until you commit the changes by sending the context a save: message. At that point—provided that there are no validation errors—the changes are committed to the store. So does that essentially mean that validation happens automatically as soon as I call -save?

    Read the article

  • ASP.Net MVC Moq SetupGet

    - by Nicholas Murray
    Hi, I am starting out with TDD, using Moq to mock an interface that I have:
    public interface IDataService
    {
        void Commit();
        TopListService TopLists { get; }
    }
    From the samples I have seen, I would expect SetupGet (or Setup) to appear in IntelliSense when I type:
    var mockDataService = new Mock<IDataService>();
    mockDataService.
    But it is missing. Could someone suggest why?
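    For comparison, this is what the call normally looks like once Moq is referenced and its namespace imported (a sketch that reuses the IDataService and TopListService types from the question; the returned value is a placeholder). If SetupGet does not show up in IntelliSense, a missing "using Moq;" directive or an outdated Moq assembly reference is the usual suspect:

    using Moq;

    class DataServiceMockExample
    {
        static void Example()
        {
            var mockDataService = new Mock<IDataService>();

            // SetupGet is an instance method on Mock<T>, so it should appear
            // as soon as the Moq namespace is in scope.
            mockDataService.SetupGet(ds => ds.TopLists).Returns((TopListService)null);

            IDataService service = mockDataService.Object;
        }
    }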

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live-updating stock prices.
    I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table, then insert/update to the MDC table' stored procedure (BulkUpsert).
    I've got another process which then reads this data, computes other data, and then saves the results back into the same table, using a similar BulkUpsert stored proc.
    Thirdly, there's a multitude of users running a C# GUI polling the MDC table and reading updates from it.
    Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing.
    Here's all the relevant info. Table definition:
    CREATE TABLE [dbo].[MarketDataCurrent](
        [MDID] [int] NOT NULL,
        [LastUpdate] [datetime] NOT NULL,
        [Value] [float] NOT NULL,
        [Source] [varchar](20) NULL,
        CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
        (
            [MDID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    Stack Overflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question.
    [image: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg]
    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like:
    [image: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg]
    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:
    ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
        @updateTime datetime,
        @source varchar(10)
    as
    begin transaction
    update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
    from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
    where c.lastUpdate < @updateTime
        and c.mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
        and c.value <> t.value
    insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
        and mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
    commit
    And the other one:
    ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
    AS
    begin transaction
    -- Update existing mdid
    UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
    FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;
    -- Insert new MDID
    INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))
    -- Clean up the temp table
    DELETE #TEMPTABLE2
    commit
    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks ten-fold.
    I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
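    Whatever the root cause turns out to be, a client-side retry on error 1205 is a common way to stop sporadic deadlocks from surfacing as exceptions to users (a C# sketch; the retry count and back-off are arbitrary choices, not a fix for the underlying contention):

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    static class DeadlockRetry
    {
        // SQL Server reports the deadlock victim with error number 1205.
        private const int DeadlockErrorNumber = 1205;

        public static void Execute(Action bulkUpsert, int maxAttempts = 3)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    bulkUpsert();
                    return;
                }
                catch (SqlException ex)
                {
                    if (ex.Number != DeadlockErrorNumber || attempt >= maxAttempts)
                        throw;

                    // The other transaction already won; back off briefly and retry.
                    Thread.Sleep(100 * attempt);
                }
            }
        }
    }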

    Read the article

  • inserting into multiple tables of a database at a time

    - by rajkumari
    Can we work with multiple tables of a database using the same database connection object at the same time? Suppose I insert a value into table 1 and, at the same time, also insert a value into table 2 of the database using the same database connection object. I am using an SQLite database in my code and I am getting a "database locked" exception on commit().
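    For what it's worth, writing two tables over one connection inside a single transaction is the normal pattern, and with only one writer it should not trip SQLite's lock. A C# sketch with the System.Data.SQLite provider (the poster's language isn't stated; table and column names are made up):

    using System.Data.SQLite;

    class TwoTableInsert
    {
        static void InsertBoth(string connStr)
        {
            using (var conn = new SQLiteConnection(connStr))
            {
                conn.Open();

                // One connection, one transaction: both inserts share the same
                // writer, so they cannot lock each other out. "Database is locked"
                // usually means a second connection (or an unfinished reader)
                // still holds a lock when the commit happens.
                using (var tx = conn.BeginTransaction())
                {
                    using (var cmd1 = new SQLiteCommand("INSERT INTO table1 (v) VALUES (1)", conn, tx))
                        cmd1.ExecuteNonQuery();

                    using (var cmd2 = new SQLiteCommand("INSERT INTO table2 (v) VALUES (2)", conn, tx))
                        cmd2.ExecuteNonQuery();

                    tx.Commit();
                }
            }
        }
    }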

    Read the article

  • Django: How can I protect against concurrent modification of database entries

    - by Ber
    Is there a way to protect against concurrent modifications of the same database entry by two or more users? It would be acceptable to show an error message to the user performing the second commit/save operation, but data should not be silently overwritten. I think locking the entry is not an option, as a user might use the "Back" button or simply close his browser, leaving the lock in place forever.

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures for implementing a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it.
    My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also make it easier to implement ACID transactions and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing a temporal database possible almost for free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends).
    All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it. Here's my line of thought:
    - Since all writes are done asynchronously, and using a persistent data structure makes it possible not to invalidate the previous (and currently valid) structure, the write time isn't really a bottleneck.
    - There is some literature on structures like this that are designed exactly for disk usage, but it seems to me that these techniques add more read overhead to achieve faster writes, whereas I think exactly the opposite is preferable. Also, many of these techniques really do end up with multi-versioned trees, but they aren't strictly immutable, which is something crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta compression system could also be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations.
    But there are some possible pitfalls along the way:
    - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications most of the time. Also, when real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could also lead to some commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can also occur in a normal RDBMS if two threads are working with the same data, right?
    - The overhead of having an immutable interface like this will grow exponentially, and everything is doomed to fail soon, so this is all a bad idea.
    Any thoughts? Thanks!
    edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure

    Read the article

  • replacing Fragment inside onActivityResult() getting error

    - by ajay
    I am launching the camera using:
    Intent intent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
    startActivityForResult(intent, CAMERA_REQUEST);
    After the picture is taken, we get the callback in onActivityResult():
    onActivityResult() {
        // HERE I AM CALLING ANOTHER FRAGMENT
        FragmentManager fm = getFragmentManager();
        fm.beginTransaction()
            .replace(R.id.tab_upload, new uploadingActivty(), "tabId")
            .setTransition(FragmentTransaction.TRANSIT_FRAGMENT_OPEN)
            .addToBackStack(null)
            .commit();
    }
    Then we get the following error:
    11-21 16:13:44.316: E/AndroidRuntime(30944): Caused by: java.lang.IllegalStateException: Can not perform this action after onSaveInstanceState
    Any idea?

    Read the article

  • On-demand refresh mode for indexed view (=Materialized views) on SQL Server?

    - by MOLAP
    I know Oracle offers several refresh-mode options for its materialized views (on demand, on commit, periodically). Does Microsoft SQL Server offer the same functionality for its indexed views? If not, how else can I use indexed views on SQL Server if my purpose is to export data on a daily and on-demand basis while avoiding performance overhead problems? Does a workaround exist?

    Read the article
