Search Results

Search found 5295 results on 212 pages for 'transaction scope'.

Page 2/212 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • C# Anonymous method variable scope problem with IEnumerable<T>

    - by PaN1C_Showt1Me
    Hi. I'm trying to iterate through all components and, for those that implement ISupportsOpen, allow opening a project. The problem is that when the anonymous method is eventually invoked, the component variable always refers to the same element, because the delegate captures the loop variable itself (coming from the outer scope, the IEnumerable) rather than its value at each iteration:

        foreach (ISupportsOpen component in something.Site.Container.Components.OfType<ISupportsOpen>())
        {
            MyClass m = new MyClass();
            m.Called += new EventHandler(delegate(object sender, EventArgs e)
            {
                if (component.CanOpenProject(..)) component.OpenProject(..);
            });
            itemsList.Add(m);
        }

    How should it be solved, please?
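    The standard fix for this kind of capture problem (a sketch based on the code above; ISupportsOpen, MyClass and the elided "(..)" argument lists are the poster's) is to copy the loop variable into a fresh local inside the loop body, so each delegate captures its own variable:

        foreach (ISupportsOpen component in something.Site.Container.Components.OfType<ISupportsOpen>())
        {
            ISupportsOpen captured = component; // each iteration gets its own variable to capture
            MyClass m = new MyClass();
            m.Called += new EventHandler(delegate(object sender, EventArgs e)
            {
                if (captured.CanOpenProject(..)) captured.OpenProject(..);
            });
            itemsList.Add(m);
        }

    (From C# 5 onward the foreach iteration variable is itself scoped per iteration, so this workaround matters mainly for older compilers and for plain for loops.)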

    Read the article

  • How to structure javascript callback so that function scope is maintained properly

    - by Chetan
    I'm using XMLHttpRequest, and I want to access a local variable in the success callback function. Here is the code:

        function getFileContents(filePath, callbackFn) {
            var xhr = new XMLHttpRequest();
            xhr.onreadystatechange = function() {
                if (xhr.readyState == 4) {
                    callbackFn(xhr.responseText);
                }
            };
            xhr.open("GET", chrome.extension.getURL(filePath), true);
            xhr.send();
        }

    And I want to call it like this:

        var test = "lol";
        getFileContents("hello.js", function(data) {
            alert(test);
        });

    Here, test would be out of the scope of the callback function, since only the enclosing function's variables are accessible inside the callback function. What is the best way to pass test to the callback function so that alert(test); will display test correctly?
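    A note on the underlying mechanics (a minimal standalone sketch, separate from the poster's extension code): a JavaScript function closes over the variables of the scope where it is defined, not where it is invoked, so a local such as test is in fact reachable from the callback with no extra plumbing:

        var test = "lol";

        function runLater(callbackFn) {
            // callbackFn was defined in the caller's scope; invoking it here
            // still sees the variables it closed over there.
            callbackFn("some data");
        }

        runLater(function (data) {
            alert(test + " / " + data); // shows "lol / some data"
        });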

    Read the article

  • Google Analytics reverse transaction not working with sales performance

    - by prasad maganti
    We have a Google Analytics account and are trying to perform a reverse transaction. We created a transaction on one date and the reverse transaction on another date. After the transaction, if we do a reverse transaction, it disappears from the transactions list. Is that the expected behavior or abnormal behavior? Also, if we check the same order data in sales performance, the reverse transaction is not reflected on the date we created the original transaction; it is reflected on the date we made the reversal. It should not work like this: the reverse transaction should affect the same date as the original transaction.

    Read the article

  • C++ function scope

    - by Myx
    I have a main function in A.cpp which has the following two relevant lines of code:

        B definition(input_file);
        definition.Print();

    In B.h I have the following relevant lines of code:

        class B
        {
        public:
            // Constructors
            B(void);
            B(const char *filename);
            ~B(void);

            // File input
            int ParseLSFile(const char *filename);

            // Debugging
            void Print(void);

            // Data
            int var1;
            double var2;
            vector<char* > var3;
            map<char*, vector<char* > > var4;
        };

    In B.cpp, I have the following function definitions (sorry for being redundant):

        B::B(void)
            : var1(-1), var2(numeric_limits<double>::infinity())
        {
        }

        B::B(const char *filename)
        {
            B *def = new B();
            def->ParseLSFile(filename);
        }

        B::~B(void)
        {
            // Free memory for var3 and var4
        }

        int B::ParseLSFile(const char *filename)
        {
            // assign var1, var2, var3, and var4 values
        }

        void B::Print(void)
        {
            // print contents of var1, var2, var3, and var4 to stdout
        }

    So when I call Print() from within B::ParseLSFile(...), the contents of my structures print correctly to stdout. However, when I call definition.Print() from A.cpp, my structures are empty or contain garbage. Can anyone recommend the correct way to initialize/pass my structures so that I can access them outside of the scope of my function definition? Thanks.
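    The likely culprit, sketched from the code shown (untested, and keeping the poster's names): B::B(const char *filename) allocates a second B on the heap, parses the file into that temporary object, and leaks it, leaving the object actually being constructed (definition in A.cpp) with uninitialized members. Parsing into the object under construction fixes it:

        B::B(const char *filename)
            : var1(-1), var2(numeric_limits<double>::infinity())
        {
            // Parse into *this* object, not into a leaked heap copy.
            ParseLSFile(filename);
        }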

    Read the article

  • Transitioning to Transaction Base

    - by Glen McCallum
    I was actually hired at Oracle Health Sciences to work on the HTB application. Long story short, when HL7 version 3 was relatively new ... Canada made an initial sprint at adoption. Since then progress has slowed. I was part of that initial adoption and learned a lot about the Reference Information Model. At that time we worked mostly with CDA R2 Level 3 (fully coded/structured XML) documents. HTB is an HL7 v3 RIM-based repository. Love it or hate it, the product is unique in the marketplace. One of the advantages is the flexibility of the model. You can aggregate information from literally any source system without any HTB data model modification and then use that data in a semantically meaningful way. That's extremely powerful. There is a minor speed bump getting up to speed with HL7 v3, there's no doubt about that. I believe that is why Oracle recruited me from Canada originally - so I could have a running start at HTB. In the near future I'm looking forward to an application deep dive with John Hatem.

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I currently take full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and on Monday I also put the first transaction log backup in the same folder. On Tuesday, before taking the second transaction log backup, I move the first one from folder1 into folder2; then I take the second transaction log backup and put it in folder1. The same thing happens on Wednesday, Thursday, and so on. Basically, folder1 always holds the latest full backup and the latest transaction log backup, while the older transaction log backups are in folder2. My question is: when SQL Server is about to take, let's say, the fourth (Thursday) transaction log backup, does it look for the previous transaction log backups (first, second, and third) so that the new backup only includes the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? I am asking because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup. Can anyone please explain if my assumptions are right? Thanks...
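    Background that may help frame the question (general SQL Server behavior, not specific to this setup): the server never inspects the backup files themselves. It records the log sequence number (LSN) up to which the log has been backed up inside the database, and each new log backup simply continues from there, so moving the files between folders is harmless. The chain can be inspected in msdb; the database name below is a placeholder:

        -- Each log backup's first_lsn should match the previous one's last_lsn.
        SELECT backup_finish_date,
               type,        -- 'D' = full, 'L' = log
               first_lsn,
               last_lsn
        FROM   msdb.dbo.backupset
        WHERE  database_name = 'MyDatabase'
        ORDER  BY backup_finish_date;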

    Read the article

  • MooTools/JavaScript variable scope

    - by 827
    I am trying to make each number displayed clickable. "1" should alert() 80, "2" should produce 60, etc. However, when alert(adjust) is called, it only shows 0, not the correct numbers. However, if the commented-out alert(adjust) is uncommented, it produces the correct number on page load, but not on clicking. I was wondering why the code inside addEvents cannot access the previously defined variable adjust.

        <html>
        <head>
            <script type="text/javascript" charset="utf-8" src="mootools.js"></script>
            <script type="text/javascript" charset="utf-8">
                window.addEvent('domready', function() {
                    var id_numbers = [1,2,3,4,5];
                    for(var i = 0; i < id_numbers.length; i++) {
                        var adjust = (20 * (5 - id_numbers[i]));
                        // alert(adjust);
                        $('i_' + id_numbers[i]).addEvents({
                            'click': function() {
                                alert(adjust);
                            }
                        });
                    }
                });
            </script>
        </head>
        <body>
            <div id="i_1">1</div>
            <div id="i_2">2</div>
            <div id="i_3">3</div>
            <div id="i_4">4</div>
            <div id="i_5">5</div>
        </body>
        </html>

    Thanks.
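    The handlers can access adjust; the catch is that all five of them share the single function-scoped adjust, which by the time any click happens holds the value from the last iteration, 20 * (5 - 5) = 0. A common fix (a sketch reusing the poster's i_ element IDs) is to give each iteration its own scope with an immediately-invoked function, so every click handler captures its own copy:

        for (var i = 0; i < id_numbers.length; i++) {
            (function(adjust) {
                // adjust is now a parameter local to this iteration
                $('i_' + id_numbers[i]).addEvents({
                    'click': function() {
                        alert(adjust);
                    }
                });
            })(20 * (5 - id_numbers[i]));
        }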

    Read the article

  • PHP Scope Resolution Operator Question

    - by anthony
    I'm having trouble with the MyClass::function(); style of calling methods and can't figure out why. Here's an example (I'm using the Kohana framework, btw):

        class Test_Core
        {
            public $var1 = "lots of testing";

            public function output()
            {
                $print_out = $this->var1;
                echo $print_out;
            }
        }

    I try to use the following to call it, but it returns $var1 as undefined:

        Test::output();

    However, this works fine:

        $test = new Test();
        $test->output();

    I generally use this style of calling objects as opposed to the "new Class" style, but I can't figure out why it doesn't want to work.
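    For reference (a minimal sketch; the plain Test class name stands in for Kohana's Test_Core aliasing): Class::method() is a static call, so there is no object instance and no $this from which to read $var1. For the :: style to work, both the method and the property have to be declared static:

        class Test
        {
            public static $var1 = "lots of testing";

            public static function output()
            {
                echo self::$var1;   // static context uses self::, not $this->
            }
        }

        Test::output(); // works without creating an instance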

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We’re all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn’t the same point in time for all databases. This common relative point in time might be now in DB1, now-1 hour in DB2 and yesterday in DB3. And we don’t know the exact times. Let me introduce you to Transaction Marks. When we run a marked transaction using the WITH MARK option, a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don’t even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in the different databases. Let’s see how this works with an example. The code comments say what’s going on.

        USE master
        GO
        CREATE DATABASE TestTxMark1
        GO
        USE TestTxMark1
        GO
        CREATE TABLE TestTable1(ID INT, VALUE UNIQUEIDENTIFIER)

        -- insert some data into the table so we can have a starting point
        INSERT INTO TestTable1
        SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
        FROM master..spt_values
        ORDER BY RN

        SELECT *
        FROM TestTable1
        GO

        -- TAKE A FULL BACKUP of the database
        BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
        GO

        USE master
        GO
        CREATE DATABASE TestTxMark2
        GO
        USE TestTxMark2
        GO
        CREATE TABLE TestTable2(ID INT, VALUE UNIQUEIDENTIFIER)

        -- insert some data into the table so we can have a starting point
        INSERT INTO TestTable2
        SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
        FROM master..spt_values
        ORDER BY RN

        SELECT *
        FROM TestTable2
        GO

        -- TAKE A FULL BACKUP of our database
        BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
        GO

        -- start a marked transaction that modifies both databases
        BEGIN TRAN TxDb WITH MARK
            -- update values from NULL to random value
            UPDATE TestTable1
            SET VALUE = NEWID();
            -- update first 100 values from random value
            -- to NULL in different DB
            UPDATE TestTxMark2.dbo.TestTable2
            SET VALUE = NULL
            WHERE ID <= 100;
        COMMIT
        GO

        -- some time goes by here
        -- with various database activity...

        -- We see two entries for marks in each database.
        -- This is just informational and has no bearing on the restore itself.
        SELECT * FROM msdb..logmarkhistory

        USE master
        GO
        -- create a log backup to restore to mark point
        BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
        GO
        -- drop the database so we can restore it back
        DROP DATABASE TestTxMark1
        GO

        USE master
        GO
        -- create a log backup to restore to mark point
        BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
        GO
        -- drop the database so we can restore it back
        DROP DATABASE TestTxMark2
        GO

        -- RESTORE THE DATABASE BACK TO BEFORE OUR TRANSACTION
        -- restore the full backup
        RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
        -- restore the log backup to the transaction mark
        RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn'
        WITH RECOVERY,
             -- recover to state before the transaction
             STOPBEFOREMARK = 'TxDb';
             -- recover to state after the transaction
             -- STOPATMARK = 'TxDb';
        GO

        -- RESTORE THE DATABASE BACK TO BEFORE OUR TRANSACTION
        -- restore the full backup
        RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
        -- restore the log backup to the transaction mark
        RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn'
        WITH RECOVERY,
             -- recover to state before the transaction
             STOPBEFOREMARK = 'TxDb';
             -- recover to state after the transaction
             -- STOPATMARK = 'TxDb';
        GO

        USE TestTxMark1
        -- we restored to the time before the transaction
        -- so we have NULL values in our table
        SELECT * FROM TestTable1

        USE TestTxMark2
        -- we restored to the time before the transaction
        -- so we DON'T have NULL values in our table
        SELECT * FROM TestTable2

    Transaction marks can be used as a crude sync mechanism for cross-database operations. With them we can mark our databases with a common "restore to" point, so we know we have a valid state between all databases to restore to.

    Read the article

  • opening transaction validation in vb.net

    - by Mark
    Can anyone help me with how to validate a transaction? For example:

        transaction = mySqlConn.BeginTransaction(IsolationLevel.ReadCommitted)

    If there's still an open transaction, then the code above should be skipped. How do I know whether a transaction has not yet been committed before opening a new one, so I can avoid a nested transaction?
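    ADO.NET does not expose an "is a transaction open?" query on the connection, so one common pattern (a sketch; the MySqlTransaction type and field name are assumed from the poster's mySqlConn variable) is to keep the active transaction in a variable and test it before beginning another:

        ' currentTransaction is a hypothetical field that tracks the active transaction
        Private currentTransaction As MySqlTransaction = Nothing

        Private Sub BeginTransactionIfNone()
            If currentTransaction Is Nothing Then
                currentTransaction = mySqlConn.BeginTransaction(IsolationLevel.ReadCommitted)
            End If
        End Sub

        Private Sub CommitCurrentTransaction()
            If currentTransaction IsNot Nothing Then
                currentTransaction.Commit()
                currentTransaction = Nothing   ' mark it closed so a new one can begin
            End If
        End Sub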

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive.  Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase.  Transaction logging works by writing transactions to a log store.  The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place.  The following information is recorded within a transaction log entry: Sequence ID, Username, Start Time, End Time, and Request Type.  A request type can be one of the following categories: calculations, including the default calculation as well as both server and client side calculations; data loads, including data imports as well as data loaded using a load rule; data clears as well as outline resets; and locking and sending data from SmartView and the Spreadsheet Add-In (changes from Planning web forms are also tracked, since a lock and send operation occurs during this process).  You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.

    Enabling Transaction Logging

    Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting.  The following is the TRANSACTIONLOGLOCATION syntax:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file.  For example:

        TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

    The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application.  As a result, transaction logging will be enabled for all applications except the Sample application.  A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance.

    Configuring Transaction Log Replay

    Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions.  The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads.  To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file.  Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

    Replaying the Transaction Log and Transaction Log Security Considerations

    To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar.  Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs.  The default when replaying transactions is to use the security settings of the user who originally performed the transaction.  However, if that user no longer exists or that user's username was changed, the replay operation will fail.  Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation.  REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation.  REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user's security settings cannot be used.

    Removing Transaction Logs and Archived Replay Data Load and Rules Files

    Transaction logs and archived replay data load and rules files are not automatically removed; they can only be removed manually.  Since these files can consume a considerable amount of space, they should be removed on a periodic basis.  The transaction logs should be removed one database at a time instead of all databases simultaneously.  The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest.  In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

    Partitioned Database Considerations

    For partitioned databases, partition commands such as synchronization commands cannot be replayed.  When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order.  If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

    References

    For additional information, please see the Oracle EPM System Backup and Recovery Guide.  For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf

    Read the article

  • How to add a specific method to a particular scope in Visual Studio 2005

    - by pragadheesh
    Hi, in my Visual Studio project (C++), when I copy a method (meth1) of a particular scope, say 'scope1', and paste it in the same code area, it gets pasted in the general scope. That is, I want to add a method into a particular scope, but when I try, it gets added to the general scope instead. How can I solve this? For example, there is an existing method:

        void add(int a, int b)
        {
            ....
        }

    This method is in file scope, i.e. limited to that file. Now I want to add another method, add2, in the same file scope. So I copied the existing add method and pasted it:

        void add2(int a, int b)
        {
            ....
        }

    But this method gets added in the global scope and not in the file scope.
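    If "file scope" here means internal linkage (an inference from the description, so treat this as a sketch): in C++ it is linkage, not where the editor pastes the function, that decides its visibility. A function stays private to one translation unit when it is declared static or placed in an unnamed namespace:

        // internal linkage: visible only within this .cpp file
        static void add2(int a, int b)
        {
            // ...
        }

        // equivalent, and more idiomatic in modern C++
        namespace
        {
            void add3(int a, int b)
            {
                // ...
            }
        }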

    Read the article

  • Should I sacrifice code succintness to ensure the narrowest variable scope? [duplicate]

    - by David Scholefield
    This question already has an answer here: Is the usage of internal scope blocks within a function bad style? 3 answers

    In many languages (e.g. both Perl and Java, the two languages I work with most) it is possible to narrow the scope of local variables by declaring them within a block. Although it adds extra code length (the opening and closing block braces) and possibly reduces readability, should I create blocks purely to narrow the scope of variables to the statements that use them, upholding the principle of narrowest scope? Or does this sacrifice succinctness and readability just to uphold an agreed 'best practice' principle unnecessarily? I usually declare local variables at the start of a function/method to aid readability, but I could instead create blocks throughout the function and declare the variables within those blocks to narrow their scope.
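    For concreteness, the pattern under discussion looks like this (a minimal runnable sketch in Java, one of the two languages the poster mentions):

        public class ScopeDemo {
            public static void main(String[] args) {
                // this block exists solely to keep 'header' away from the rest of the method
                {
                    String header = "HDR-001";
                    System.out.println("validating " + header);
                }
                // 'header' is out of scope here, so it cannot be misused below

                {
                    int total = 40 + 2;
                    System.out.println("total = " + total);
                }
            }
        }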

    Read the article

  • How can I get Transaction Logs to auto-truncate after Backup

    - by Yaakov Ellis
    Setup: SQL Server 2008 R2, databases set up with the Full recovery model. I have set up a maintenance plan that backs up the transaction logs for a number of databases on the server. It is set to create backup files in sub-directories for each database, verify backup integrity is turned on, and backup compression is used. The job is set to run once every 2 hours during business hours (8am-6pm). I have tested the job and it runs fine, creating the log backup files as it should. However, from what I have read, once the transaction log is backed up, it should be OK to truncate the transaction log. I do not see any option for doing this in the SQL Server Maintenance Plan designer. How can I set this up?
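    A point of background that may reframe the question (general SQL Server behavior, not a maintenance plan option): under the full recovery model, every log backup already truncates the inactive portion of the log, i.e. marks that space reusable inside the file. It does not shrink the physical file; reclaiming disk space is a separate, occasional operation, sketched here with a placeholder database and logical file name:

        -- Find the log's logical file name first:
        --   SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
        USE MyDatabase;
        DBCC SHRINKFILE (MyDatabase_Log, 512);  -- target size in MB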

    Read the article

  • SQL Server 7 Transaction Logs Issues

    - by nate
    During the week my database server's transaction log filled up. With our app, people could select from the database but could not update or insert into it. In the past we have just truncated the transaction logs, and after that everything was back to normal. This week I truncated the transaction logs and shrank the database. Now we can select, update, and insert into the database. The only issue is that when we run a big job and do a lot of inserting or updating, we get the following error: Database error: S1008:[Microsft][ODBC]Operation canceled. We never had this issue before; I am assuming it is the same as a timeout error. Has anyone else had this issue before, or know how I can resolve it?

    Read the article

  • How To Check If a Transaction Related to Oracle Asset Tracking Has Been Accounted in SLA

    - by LuciaC-Oracle
    In Oracle Asset Tracking (OAT), we often see situations where a pending transaction has failed to be processed by the OAT programs. Typical situations include: a pending transaction errors with "Unable to derive accounts from sub ledger accounting for the material transaction", or a transaction is not picked up by the OAT programs. The Create Accounting program log file will show error messages and possible corrective actions to solve the error, but as this is usually a scheduled program, any errors that are reported are often missed by users. To help OAT users identify whether a transaction has failed and accounting has not been created, we have now created a SQL script which can be run for any pending transaction: How To Check If a Transaction Related to Oracle Asset Tracking Has Been Accounted in SLA? (Doc ID 1673414.1). Using the script in this note, the user can pass the material transaction ID for the related transaction, and the script will check whether SLA accounting entries have been created for that specific transaction. If the SLA accounting entries have not been created, the script will prompt the user to run the Create Accounting program. After Create Accounting has been run, the user can run the script again to confirm that accounting has been created.

    Read the article

  • Transaction Boundaries and Rollbacks in Oracle SOA Suite

    - by Antonella Giovannetti
    A new eCourse/video is available in the Oracle Learning Library: "Transaction Boundaries and Rollbacks in Oracle SOA Suite". The course covers:

    - Definition of transaction, XA, rollback and transaction boundary.
    - BPEL transaction boundaries from a fault propagation point of view.
    - Parameters bpel.config.transaction and bpel.config.oneWayDeliveryPolicy for the configuration of both synchronous and asynchronous BPEL processes.
    - Transaction behavior in Mediator.
    - Rollback scenarios based on type of faults.
    - Rollback using bpelx:rollback within a <throw> activity.

    The video is accessible here.

    Read the article

  • Transaction Isolation Level of Serializable not working for me

    - by Shahriar
    I have a website which is used by all branches of a store. What it does is record customer purchases into a table called myTransactions. The myTransactions table has a column named SerialNumber. For each purchase I create a record in the transactions table and assign a serial to it. The stored procedure that does this calls a UDF to get a new serial number before inserting the record, like below:

        CREATE PROCEDURE mytransaction_Insert
        AS
        BEGIN
            INSERT INTO myTransactions(column1, column2, column3, ... SerialNumber)
            VALUES(Value1, Value2, Value3, ...., getTransactionNSerialNumber())
        END

        CREATE FUNCTION getTransactionNSerialNumber()
        RETURNS INT
        AS
        BEGIN
            RETURN ISNULL((SELECT TOP (1) SerialNumber
                           FROM myTransactions (READUNCOMMITTED)
                           ORDER BY SerialNumber DESC), 0) + 1
        END

    The website is being used by many users in different stores at the same time, and it is creating many duplicate (identical) SerialNumbers. So I wrapped the work in a SQL transaction with the ReadCommitted level, and I still got duplicate serial numbers. I changed it to SERIALIZABLE in order to lock the resources, and I not only got duplicate serial numbers (HOW?!) but also sporadic deadlocks between the same stored procedure calls. This is what I tried (with omissions of try/catch blocks and rollbacks):

        CREATE PROCEDURE mytransaction_Insert
        AS
        BEGIN
            SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
            BEGIN TRANSACTION ins
                INSERT INTO myTransactions(column1, column2, column3, ... SerialNumber)
                VALUES(Value1, Value2, Value3, ...., getTransactionNSerialNumber())
            COMMIT TRANSACTION ins
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        END

    I even copied the serial number logic directly into the stored procedure instead of calling the UDF, and still got duplicate serial numbers. So, how can a stored procedure create something like a C# lock() {} block? By the way, I have to implement the transaction serial number using the same pattern; I can't change SerialNumber to an identity field or anything else, and for various reasons I need to generate the serial number inside the database and can't move its generation to the application level. Thank you.
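    Two observations, offered as a sketch rather than a definitive diagnosis. First, the READUNCOMMITTED table hint inside the function overrides whatever isolation level the procedure sets, so the SELECT reads dirty data even under SERIALIZABLE. Second, the usual way to hand out serials without SERIALIZABLE (and without its range-lock deadlocks) is a counter row updated atomically; a single UPDATE takes an exclusive lock, so concurrent callers queue instead of reading the same maximum. Table and column names below are illustrative, not the poster's:

        -- one-time setup: a hypothetical counter table seeded with 0
        CREATE TABLE SerialCounter (LastSerial INT NOT NULL);
        INSERT INTO SerialCounter VALUES (0);
        GO

        CREATE PROCEDURE mytransaction_Insert
        AS
        BEGIN
            DECLARE @serial INT;

            BEGIN TRAN;
                -- atomic read-and-increment: the exclusive row lock
                -- serializes concurrent callers by itself
                UPDATE SerialCounter
                SET @serial = LastSerial = LastSerial + 1;

                INSERT INTO myTransactions(/* columns... */ SerialNumber)
                VALUES(/* values... */ @serial);
            COMMIT;
        END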

    Read the article

  • ngGrid reusable filter AngularJS

    - by wootscootinboogie
    I have a business requirement to filter a boolean value in my ngGrid. The filter has three states: only true, only false, and both. Filtering like this seems to be a common enough use case that I should refactor the functionality out of my code for reuse (possibly in a directive/filter?). I'd like to know how I can go about pulling the customFilter function out of my controller, so that I can pass the filter a property name on which to filter and a value for selectedFilterOption. The code currently works, but I feel like this is a good chance to get better at Angular :). So how can I pull out my filtering used here and make it a reusable piece of functionality?

        app.controller('DocumentController', function($scope, DocumentService) {
            $scope.filterOptions = {
                filterText: '',
                useExternalFilter: false
            };
            $scope.totalServerItems = 0;
            $scope.pagingOptions = {
                pageSizes: [5, 10, 100],
                pageSize: 5,
                currentPage: 1
            }
            //filter!
            $scope.dropdownOptions = [{
                name: 'Show all'
            }, {
                name: 'Show active'
            }, {
                name: 'Show trash'
            }];
            //default choice for filtering is 'show active'
            $scope.selectedFilterOption = $scope.dropdownOptions[1];
            //three stage bool filter
            $scope.customFilter = function(data) {
                var tempData = [];
                angular.forEach(data, function(item) {
                    if ($scope.selectedFilterOption.name === 'Show all') {
                        tempData.push(item);
                    } else if ($scope.selectedFilterOption.name === 'Show active' && !item.markedForDelete) {
                        tempData.push(item);
                    } else if ($scope.selectedFilterOption.name === 'Show trash' && item.markedForDelete) {
                        tempData.push(item);
                    }
                });
                return tempData;
            }
            //grabbing data
            $scope.getPagedDataAsync = function(pageSize, page, filterValue, searchText) {
                var data;
                if (searchText) {
                    var ft = searchText.toLowerCase();
                    DocumentService.get('filterableData.json').success(function(largeLoad) {
                        //filter the data when searching
                        data = $scope.customFilter(largeLoad).filter(function(item) {
                            return JSON.stringify(item).toLowerCase().indexOf(ft) != -1;
                        })
                        $scope.setPagingData($scope.customFilter(data), page, pageSize);
                    })
                } else {
                    DocumentService.get('filterableData.json').success(function(largeLoad) {
                        var testLargeLoad = $scope.customFilter(largeLoad);
                        //filter the data on initial page load when no search text has been entered
                        $scope.setPagingData(testLargeLoad, page, pageSize);
                    })
                }
            };
            //paging
            $scope.setPagingData = function(data, page, pageSize) {
                //filter the data for paging
                var pagedData = data.slice((page - 1) * pageSize, page * pageSize);
                $scope.myData = $scope.customFilter(pagedData);
                $scope.myData = pagedData;
                $scope.totalServerItems = data.length;
                if (!$scope.$$phase) {
                    $scope.$apply();
                }
            }
            //watch for filter option change, set the data property of gridOptions to the newly filtered data
            $scope.$watch('selectedFilterOption', function() {
                var data = $scope.customFilter($scope.myData);
                $scope.myData = data;
                $scope.getPagedDataAsync($scope.pagingOptions.pageSize, $scope.pagingOptions.currentPage);
                $scope.setPagingData($scope.myData, $scope.pagingOptions.currentPage, $scope.pagingOptions.pageSize);
            })
            $scope.$watch('pagingOptions', function(newVal, oldVal) {
                if (newVal !== oldVal && newVal.currentPage !== oldVal.currentPage) {
                    $scope.getPagedDataAsync($scope.pagingOptions.pageSize, $scope.pagingOptions.currentPage, $scope.filterOptions.filterText);
                }
            }, true)
            $scope.message = "This is a message";
            $scope.gridOptions = {
                data: 'myData',
                enablePaging: true,
                showFooter: true,
                totalServerItems: 'totalServerItems',
                pagingOptions: $scope.pagingOptions,
                filterOptions: $scope.filterOptions,
                showFilter: true,
                enableCellEdit: true,
                showColumnMenu: true,
                enableColumnReordering: true,
                enablePinning: true,
                showGroupPanel: true,
                groupsCollapsedByDefault: true,
                enableColumnResize: true
            }
            //get the data on page load
            $scope.getPagedDataAsync($scope.pagingOptions.pageSize, $scope.pagingOptions.currentPage);
        });

    HTML
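    One way to make the three-state filter reusable (a sketch using AngularJS's filter registration; the threeStateFlag name is illustrative) is to move the logic into a filter that receives the property name and the selected option as arguments:

        app.filter('threeStateFlag', function() {
            return function(items, property, option) {
                // option: 'Show all' | 'Show active' | 'Show trash'
                if (!items || option === 'Show all') { return items; }
                var wantDeleted = (option === 'Show trash');
                return items.filter(function(item) {
                    return !!item[property] === wantDeleted;
                });
            };
        });

        // In the controller, customFilter then collapses to one line
        // (inject $filter into the controller alongside $scope):
        //   $scope.customFilter = function(data) {
        //       return $filter('threeStateFlag')(data, 'markedForDelete',
        //                                        $scope.selectedFilterOption.name);
        //   };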

    Read the article

  • SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions

    - by pinaldave
    To learn any technology and move to a more advanced level, it is very important to understand the fundamentals of the subject first. Today we will be talking about something that was introduced quite a long time ago but is still not properly explored: the Snapshot isolation level. Snapshot Isolation was introduced in SQL Server 2005. However, the reality is that there are still many software shops using SQL Server 2000, which therefore cannot use Snapshot Isolation. Many software shops have upgraded to a later version of SQL Server, but their developers have not spent enough time to bring themselves up to date with the latest technology. "It works!" is a very common answer from many when they are asked about utilizing the new technology instead of backward-compatibility commands. In one recent consultation project I had the same experience, when developers had "heard about it" but had no idea about snapshot isolation. They were thinking it is the same as Snapshot Replication, which is plain wrong. This is the same demo I created for them, included here. In Snapshot Isolation, the updated row versions for each transaction are maintained in TempDB. Once a transaction has begun, it ignores all the newer rows inserted or updated in the table. Let us examine this example, which shows a simple demonstration. This transaction works on an optimistic concurrency model: since a reading transaction does not block a writing transaction, and a writing transaction does not block a reading transaction, blocking is reduced.

    First, enable the database to work with Snapshot Isolation, and check the existing values in the table HumanResources.Shift:

        ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON
        GO
        SELECT ModifiedDate
        FROM HumanResources.Shift
        GO

    Now we will need two different sessions to prove this example.

    First session: set the transaction isolation level to snapshot and begin the transaction. Update the column ModifiedDate to today's date:

        -- Session 1
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        UPDATE HumanResources.Shift
        SET ModifiedDate = GETDATE()
        GO

    Please note that we have not yet committed the transaction. Now open the second session, run the following SELECT statement, and check the values of the table. Pay attention to setting the isolation level for the second session to Snapshot as well, at the same time as starting the transaction with BEGIN TRAN:

        -- Session 2
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        SELECT ModifiedDate
        FROM HumanResources.Shift
        GO

    You will notice that the values in the table are still the original values; they have not been modified yet. Once again, go back to session 1 and commit the transaction:

        -- Session 1
        COMMIT

    After that, go back to session 2 and see the values of the table:

        -- Session 2
        SELECT ModifiedDate
        FROM HumanResources.Shift
        GO

    You will notice that the values are not yet changed; they are still the same old values that were there at the beginning of the session. Now let us commit the transaction in session 2. Once committed, run the same SELECT statement once more and see what the result is:

        -- Session 2
        COMMIT
        SELECT ModifiedDate
        FROM HumanResources.Shift
        GO

    You will notice that it now reflects the new updated value. I hope this example is clear enough to give you a good idea of how the Snapshot isolation level works.

    There is much more to write about an extra level, READ_COMMITTED_SNAPSHOT, which we will be discussing in another post soon. If you wish to use this transaction isolation level in your production database, I would appreciate your comments about its performance on your servers. I have included here the complete script used in this example for your quick reference.

        ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON
        GO
        SELECT ModifiedDate FROM HumanResources.Shift
        GO
        -- Session 1
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        UPDATE HumanResources.Shift
        SET ModifiedDate = GETDATE()
        GO
        -- Session 2
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        SELECT ModifiedDate FROM HumanResources.Shift
        GO
        -- Session 1
        COMMIT
        -- Session 2
        SELECT ModifiedDate FROM HumanResources.Shift
        GO
        -- Session 2
        COMMIT
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Transaction Isolation

    Read the article

  • SSIS Transaction with Sql Transaction

    - by Mike
    I started with a package to make sure transactions are working correctly. The package-level transaction is set to Required. I have two Execute SQL Tasks: one deletes rows from a table and one does 1/0 to throw an error. Both tasks are set to the Supported transaction level and the Serializable IsolationLevel. That works. Now, when I replace my two SQL tasks with two separate procedure calls, the first one, ChargeInterest, runs successfully, but the second one, PaymentProcess, always fails saying:

        [Execute SQL Task] Error: Executing the query "Exec [proc_xx_NotesReceivable_PaymentProcess] ..." failed with the following error: "Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

    PaymentProcess is the second stored procedure. Both procedures have their own BEGIN, COMMIT and ROLLBACK inside the SP. I believe the transactions are being handled successfully in ChargeInterest, because I can run the following without issues, and without the dreaded "you started with 0 and now have 1 transaction":

        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300

        -- OR

        GO
        BEGIN TRAN
        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        ROLLBACK TRAN

    Now, I have noticed that DTC gets kicked off in both instances; why, I am not sure, because it is using the same connection. In the live example I can see the transaction get started, but it disappears if I put a breakpoint on the PreExecute event of the second stored procedure. What is the correct way to mingle SP transactions with SSIS transactions?

    Read the article

  • What is required for a scope in an injection framework?

    - by johncarl
    Working with libraries like Seam, Guice and Spring, I have become accustomed to dealing with variables within a scope. These libraries give you a handful of scopes and allow you to define your own. This is a very handy pattern for dealing with variable lifecycles and dependency injection. I have been trying to identify where scoping is the proper solution, and where another solution is more appropriate (context variable, singleton, etc). I have found that if the scope lifecycle is not well defined, it is very difficult and often failure-prone to manage injections in this way. I have searched on this topic but have found little discussion of the pattern. Are there some good articles discussing where to use scoping and what the required/suggested prerequisites for scoping are? I am interested in both reference discussion and your view on what is required or suggested for a proper scope implementation. Keep in mind that I am referring to scoping as a general idea; this includes things like globally scoped singletons, request- or session-scoped web variables, conversation scopes, and others.

    Edit: some simple background on custom scopes: Google Guice custom scope.

    Some definitions relevant to the above:

    "scoping" - A set of requirements that define what objects get injected at what time. A simple example of this is thread scope, based on a ThreadLocal. This scope would inject a variable based on what thread instantiated the class. (The poster's example is missing from the excerpt; a sketch of the idea follows this entry.)

    "context variable" - A repository passed from one object to another holding relevant variables. Much like scoping, this is a more brute-force way of accessing variables based on the calling code. Example:

        methodOne(Context context){
            methodTwo(context);
        }

        methodTwo(Context context){
            ... //same context as method one, if called from method one
        }

    "globally scoped singleton" - Following the singleton pattern, there is one object per application instance. This applies to scopes because there is a basic lifecycle to this object: there is only one of these objects instantiated. Here's an example of a JSR-330 singleton-scoped object:

        @Singleton
        public class SingletonExample {
            ...
        }

    Usage:

        public class One {
            @Inject SingletonExample example1;
        }

        public class Two {
            @Inject SingletonExample example2;
        }

    After instantiation:

        one.example1 == two.example2 //true
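    Since the thread-scope example above was elided, here is a minimal sketch of the idea (illustrative only, not Guice's actual Scope implementation): a provider that hands each thread its own instance via a ThreadLocal:

        import java.util.function.Supplier;

        // Minimal thread scope: each thread that asks for T gets its own instance.
        final class ThreadScoped<T> implements Supplier<T> {
            private final ThreadLocal<T> instances;

            ThreadScoped(Supplier<T> unscoped) {
                // withInitial creates the instance lazily, once per thread
                this.instances = ThreadLocal.withInitial(unscoped);
            }

            @Override
            public T get() {
                return instances.get();
            }
        }

        // usage: every call on the same thread returns the same StringBuilder,
        // while a different thread sees a different one
        //   Supplier<StringBuilder> scoped = new ThreadScoped<>(StringBuilder::new);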

    Read the article

  • Which layer implements the transaction mechanism?

    - by didxga
    I know that ORM tools such as Hibernate have their own transaction management mechanism. We can also control transactions by using JDBC directly. And the DBMS has its own transaction facilities as well. I wonder in which layer(s) the transaction is actually implemented in a J2EE application. Regards!
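    For reference, the JDBC-level mechanics the question mentions look like this (a minimal sketch with a placeholder in-memory H2 connection URL). Frameworks like Hibernate ultimately drive this same API, and the DBMS itself performs the actual commit or rollback, so "transactions" exist at every layer, each delegating downward:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class JdbcTxDemo {
            public static void main(String[] args) throws SQLException {
                // placeholder URL: point this at a real database/driver to run it
                try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo")) {
                    try (Statement st = con.createStatement()) {
                        st.executeUpdate("CREATE TABLE t(id INT)"); // DDL, outside the transaction
                        con.setAutoCommit(false);                   // open a transaction boundary
                        try {
                            st.executeUpdate("INSERT INTO t VALUES (1)");
                            st.executeUpdate("INSERT INTO t VALUES (2)");
                            con.commit();      // both inserts become durable together
                        } catch (SQLException e) {
                            con.rollback();    // undo everything since setAutoCommit(false)
                            throw e;
                        }
                    }
                }
            }
        }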

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >