Search Results

Search found 1380 results on 56 pages for 'transactions'.

Page 21/56 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Right-Time Retail Part 2

    - by David Dorf
    This is part two of the three-part series.

    Right-Time Integration

    Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can’t react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word “appropriate.” Not everything needs to be real-time – again, we’re talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you’re wasting effort. Unfortunately, there isn’t an enterprise-wide dial that you can crank up for your estate. You’ll need to improve things piecemeal, with people and processes as limiting factors while choosing the appropriate types of integrations. There are three integration styles we see in the retail industry.

    First is batch. I know, the word “batch” just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you’ll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL). Instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics where we can apply science. Performing analytics on each sale as it occurs doesn’t make any sense, so we batch up a statistically significant amount and submit all at once.

    The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP but immediacy is not necessary. Speed is less important than guaranteed delivery, so we use message-oriented middleware available in both WebLogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help the management and monitoring of fire-and-forget messaging in the enterprise.

    The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important. Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we’ve enhanced its abilities with the Retail Service Backbone (RSB).

    To better understand these integration patterns and where they apply within the retail enterprise, we’re providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we’ll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained. Part 3 looks at marketing.
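
    The fire-and-forget style described above maps directly onto JMS-style persistent queues. A minimal sketch of the idea in plain JMS follows; the JNDI names and the XML payload are placeholders for illustration, not anything taken from the Oracle Retail products:

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.DeliveryMode;
        import javax.jms.MessageProducer;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.naming.InitialContext;

        public class PosTransactionSender {
            // Fire-and-forget: hand the transaction to a persistent queue and move on.
            // If the network or the consumer is down, the message waits in the queue.
            public void send(String transactionXml) throws Exception {
                InitialContext ctx = new InitialContext();
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/PosConnectionFactory"); // placeholder JNDI name
                Queue queue = (Queue) ctx.lookup("jms/CentralOfficeQueue");                        // placeholder JNDI name
                Connection conn = cf.createConnection();
                try {
                    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survive broker restarts
                    producer.send(session.createTextMessage(transactionXml));
                } finally {
                    conn.close();
                }
            }
        }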

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, Identity-based pooling. Then there is one more topic to cover - how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, based on the “use database credentials” setting as described earlier. Using this feature with “use database credentials” enabled seems to be what is proposed in the JDBC standard: basically a heterogeneous pool with users specified by getConnection(user, password).

    The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. The following steps describe how heterogeneous connections are created:

    1. At connection pool initialization, the physical JDBC connections based on the configured or default “initial capacity” are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3a. If “use database credentials” is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn’t have a matching user, the default DBMS credential is used from the data source descriptor.
    3b. If “use database credentials” is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
       - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
       - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection to get a new one is very expensive. If you can use a normal homogeneous pool or one of the light-weight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling.

    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential even if the current thread changes its WebLogic user credential and continues to use the same connection.

    To configure this feature, select Enable Identity Based Connection Pooling.
    See "Enable identity-based connection pooling for a JDBC data source" in Oracle WebLogic Server Administration Console Help: http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html

    You must make the following changes to use Logging Last Resource (LLR) transaction optimization with Identity-based Pooling, to get around the problem that multiple users will be accessing the associated transaction table:
    - You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules. When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance.

    When getting a connection with a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests, regardless of the username/password values specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC.

    For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the username/password values specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.
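
    To make the pooling behavior described above concrete, here is a minimal sketch of an application picking up a WebLogic data source and asking for a specific DBMS identity through the standard javax.sql.DataSource API. The JNDI name and the scott/tiger credentials are placeholders, and the data source is assumed to have identity-based pooling and "use database credentials" enabled:

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class IdentityPoolExample {
            public void queryAsUser() throws Exception {
                InitialContext ctx = new InitialContext();
                DataSource ds = (DataSource) ctx.lookup("jdbc/MyDataSource"); // placeholder JNDI name
                // The credentials passed here become the DBMS identity of the pooled
                // physical connection. The pool first looks for an existing connection
                // with a matching credential; otherwise it creates one (or recycles an
                // LRU victim once the pool is at maximum capacity).
                try (Connection conn = ds.getConnection("scott", "tiger")) {
                    conn.createStatement().execute("SELECT 1 FROM DUAL");
                }
            }
        }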

    Read the article

  • Why is my WCF RIA Services custom object deserializing with an extra list member?

    - by oasasaurus
    I have been developing a Silverlight WCF RIA Services application dealing with mock financial transactions. To more efficiently send summary data to the client without going overboard with serialized entities I have created a summary class that isn’t in my EDM, and figured out how to serialize and send it over the wire to the SL client using DataContract() and DataMember(). Everything seemed to be working out great, until I tried to bind controls to a list inside my custom object. The list seems to always get deserialized with an extra, almost empty entity in it that I don’t know how to get rid of. So, here are some of the pieces. First the relevant bits from the custom object class: <DataContract()> _ Public Class EconomicsSummary Public Sub New() RecentTransactions = New List(Of Transaction) TotalAccountHistory = New List(Of Transaction) End Sub Public Sub New(ByVal enUser As EntityUser) Me.UserId = enUser.UserId Me.UserName = enUser.UserName Me.Accounts = enUser.Accounts Me.Jobs = enUser.Jobs RecentTransactions = New List(Of Transaction) TotalAccountHistory = New List(Of Transaction) End Sub <DataMember()> _ <Key()> _ Public Property UserId As System.Guid <DataMember()> _ Public Property NumTransactions As Integer <DataMember()> _ <Include()> _ <Association("Summary_RecentTransactions", "UserId", "User_UserId")> _ Public Property RecentTransactions As List(Of Transaction) <DataMember()> _ <Include()> _ <Association("Summary_TotalAccountHistory", "UserId", "User_UserId")> _ Public Property TotalAccountHistory As List(Of Transaction) End Class Next, the relevant parts of the function called to return the object: Public Function GetEconomicsSummary(ByVal guidUserId As System.Guid) As EconomicsSummary Dim objOutput As New EconomicsSummary(enUser) For Each objTransaction As Transaction In (From t As Transaction In Me.ObjectContext.Transactions.Include("Account") Where t.Account.aspnet_User_UserId = guidUserId Select t Order By t.TransactionDate Descending Take 10) objTransaction.User_UserId = objOutput.UserId objOutput.RecentTransactions.Add(objTransaction) Next objOutput.NumTransactions = objOutput.RecentTransactions.Count … Return objOutput End Function Notice that I’m collecting the NumTransactions count before serialization. Should be 10 right? It is – BEFORE serialization. 
The DataGrid is bound to the data source as follows: <sdk:DataGrid AutoGenerateColumns="False" Height="100" MaxWidth="{Binding ElementName=aciSummary, Path=ActualWidth}" ItemsSource="{Binding Source={StaticResource EconomicsSummaryRecentTransactionsViewSource}, Mode=OneWay}" Name="gridRecentTransactions" RowDetailsVisibilityMode="VisibleWhenSelected" IsReadOnly="True"> <sdk:DataGrid.Columns> <sdk:DataGridTextColumn x:Name="TransactionDateColumn" Binding="{Binding Path=TransactionDate, StringFormat=\{0:d\}}" Header="Date" Width="SizeToHeader" /> <sdk:DataGridTextColumn x:Name="AccountNameColumn" Binding="{Binding Path=Account.Title}" Header="Account" Width="SizeToCells" /> <sdk:DataGridTextColumn x:Name="CurrencyAmountColumn" Binding="{Binding Path=CurrencyAmount, StringFormat=\{0:c\}}" Header="Amount" Width="SizeToHeader" /> <sdk:DataGridTextColumn x:Name="TitleColumn" Binding="{Binding Path=Title}" Header="Description" Width="SizeToCells" /> <sdk:DataGridTextColumn x:Name="ItemQuantityColumn" Binding="{Binding Path=ItemQuantity}" Header="Qty" Width="SizeToHeader" /> </sdk:DataGrid.Columns> </sdk:DataGrid> You might be wondering where the ItemsSource is coming from, that looks like this: <CollectionViewSource x:Key="EconomicsSummaryRecentTransactionsViewSource" Source="{Binding Path=DataView.RecentTransactions, ElementName=EconomicsSummaryDomainDataSource}" /> When I noticed that the DataGrid had the extra row I tried outputting some data after the data source finishes loading, as follows: Private Sub EconomicsSummaryDomainDataSource_LoadedData(ByVal sender As System.Object, ByVal e As System.Windows.Controls.LoadedDataEventArgs) Handles EconomicsSummaryDomainDataSource.LoadedData If e.HasError Then System.Windows.MessageBox.Show(e.Error.ToString, "Load Error", System.Windows.MessageBoxButton.OK) e.MarkErrorAsHandled() End If Dim objSummary As EconomicsSummary = CType(EconomicsSummaryDomainDataSource.Data(0), EconomicsSummary) Dim sb As New StringBuilder("") sb.AppendLine(String.Format("Num Transactions: {0} ({1})", objSummary.RecentTransactions.Count.ToString(), objSummary.NumTransactions.ToString())) For Each objTransaction As Transaction In objSummary.RecentTransactions sb.AppendLine(String.Format("Recent TransactionId {0} dated {1} CurrencyAmount {2} NewBalance {3}", objTransaction.TransactionId.ToString, objTransaction.TransactionDate.ToString("d"), objTransaction.CurrencyAmount.ToString("c"), objTransaction.NewBalance.ToString("c"))) Next txtDebug.Text = sb.ToString() End Sub Output from that looks like this: Num Transactions: 11 (10) Recent TransactionId 2283 dated 6/1/2010 CurrencyAmount $31.00 NewBalance $392.00 Recent TransactionId 2281 dated 5/31/2010 CurrencyAmount $33.00 NewBalance $361.00 Recent TransactionId 2279 dated 5/28/2010 CurrencyAmount $8.00 NewBalance $328.00 Recent TransactionId 2277 dated 5/26/2010 CurrencyAmount $22.00 NewBalance $320.00 Recent TransactionId 2275 dated 5/24/2010 CurrencyAmount $5.00 NewBalance $298.00 Recent TransactionId 2273 dated 5/21/2010 CurrencyAmount $19.00 NewBalance $293.00 Recent TransactionId 2271 dated 5/20/2010 CurrencyAmount $20.00 NewBalance $274.00 Recent TransactionId 2269 dated 5/19/2010 CurrencyAmount $48.00 NewBalance $254.00 Recent TransactionId 2267 dated 5/18/2010 CurrencyAmount $42.00 NewBalance $206.00 Recent TransactionId 2265 dated 5/14/2010 CurrencyAmount $5.00 NewBalance $164.00 Recent TransactionId 0 dated 6/1/2010 CurrencyAmount $0.00 NewBalance $361.00 So I have a few different questions: -First and foremost, where 
the devil is that extra Transaction entity coming from and how do I get rid of it? Does it have anything to do with the other list of Transaction entities being serialized as part of the EconomicsSummary class (TotalAccountHistory)? Do I need to decorate the EconomicsSummary class members a little more/differently? -Second, where are the peculiar values coming from on that extra entity? PRE-POSTING UPDATE 1: I did a little checking, it looks like that last entry is the first one in the TotalAccountHistory list. Do I need to do something with CollectionDataContract()? PRE-POSTING UPDATE 2: I fixed one bug in TotalAccountHistory, since the objects weren’t coming from the database their keys weren’t unique. So I set the keys on the Transaction entities inside TotalAccountHistory to be unique and guess what? Now, after deserialization RecentTransactions contains all its original items, plus every item in TotalAccountHistory. I’m pretty sure this has to do with the deserializer getting confused by two collections of the same type. But I don’t yet know how to resolve it…

    Read the article

  • SQL Profiler: Read/Write units

    - by Ian Boyd
    I've picked a query out of SQL Server Profiler that says it took 1,497 reads:

        EventClass: SQL:BatchCompleted
        TextData:   SELECT Transactions....
        CPU:        406
        Reads:      1497
        Writes:     0
        Duration:   406

    So I've taken this query into Query Analyzer to try to reduce the number of reads. But when I turn on SET STATISTICS IO ON to see the IO activity for the query, I get nowhere close to one thousand reads:

        Table                 Scan Count   Logical Reads
        ===================   ==========   =============
        FintracTransactions   4            20
        LCDs                  2            4
        LCTs                  2            4
        FintracTransacti...   0            0
        Users                 1            2
        MALs                  0            0
        Patrons               0            0
        Shifts                1            2
        Cages                 1            1
        Windows               1            3
        Logins                1            3
        Sessions              1            6
        Transactions          1            7

    If I do my math right, that is a total of 51 reads, not 1,497. So I assume Reads in SQL Profiler is an arbitrary metric. Does anyone know the conversion of SQL Server Profiler Reads to IO Reads?

    See also: SQL Profiler CPU / duration unit; Query Analyzer VS. Query Profiler Reads, Writes, and Duration Discrepencies

    Read the article

  • Do MSDTC and disaster recovery go together?

    - by DevDelivery
    Our application writes to multiple SQL Server databases within a distributed transaction. The Ops guys are saying that this messes up their disaster recovery plan because, while the transactions on the live tables may commit at the same time, the log shipping on the separate databases happens at slightly different times. So in a disaster recovery situation, there will be a few partial transactions. Is there a method for maintaining separate but synced databases in DR? Or do we have to redesign for relatively independent databases (or a single database)?

    Read the article

  • What does transaction.commit() do when the flushmode is set manual in Hibernate?

    - by wei
    Here is a block of code from the Java Persistence with Hibernate book by Christian and Gavin:

        Session session = getSessionFactory().openSession();
        session.setFlushMode(FlushMode.MANUAL);

        // First step in the conversation
        session.beginTransaction();
        Item item = (Item) session.get(Item.class, new Long(123));
        session.getTransaction().commit();

        // Second step in the conversation
        session.beginTransaction();
        Item newItem = new Item();
        Long newId = (Long) session.save(newItem); // Triggers INSERT!
        session.getTransaction().commit();

        // Roll back the conversation!
        session.close();

    I am confused about why the first step and the second step need to be wrapped in two separate transactions. Since the flush mode is set to manual here, no operations (if we ignore the insert) will hit the database anyway. So why bother with transactions here? Thanks.
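
    For comparison, a minimal sketch of the same conversation pattern with the single flush made explicit, assuming the Hibernate 3.x APIs the book uses (Item stands for the book's example entity):

        import org.hibernate.FlushMode;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;

        public class ManualFlushConversation {
            // In MANUAL flush mode the session does not flush automatically at commit;
            // an explicit flush() marks the point where queued changes are meant to hit
            // the database. Each commit() still ends the database transaction for that
            // step of the conversation and returns the JDBC connection.
            public void run(SessionFactory sessionFactory) {
                Session session = sessionFactory.openSession();
                session.setFlushMode(FlushMode.MANUAL);

                session.beginTransaction();
                Item item = (Item) session.get(Item.class, Long.valueOf(123)); // read-only step
                session.getTransaction().commit();

                session.beginTransaction();
                session.save(new Item());
                session.flush();                       // the one deliberate flush of the conversation
                session.getTransaction().commit();
                session.close();
            }
        }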

    Read the article

  • Dynamically Added CheckBox Column is Disabled in GridView

    - by Mark Maslar
    I'm dynamically adding a Boolean column to a DataSet. The DataSet's table is the DataSource for a GridView, which AutoGenerates the columns. Issue: The checkboxes for this dynamically generated column are all disabled. How can I enable them? ds.Tables["Transactions"].Columns.Add("Retry", typeof(System.Boolean)); ds.Tables["Transactions"].Columns["Retry"].ReadOnly = false; In other words, how can I control how GridView generates the CheckBoxes for a Boolean field? (And why does setting ReadOnly to False have no effect?) Thanks!

    Read the article

  • iPhone In-App Purchase Store Kit error -1003 "Cannot connect to iTunes Store"

    - by Rei
    Hi all- I've been working on adding in-app purchases and was able to create and test in-app purchases using Store Kit (yay!). During testing, I exercised my app in a way which caused the app to crash mid purchase (so I guess the normal cycle of receiving paymentQueue:updatedTransactions and calling finishTransaction was interrupted). Now I am unable to successfully complete any transactions and instead am getting only transactions with transactionState SKPaymentTransactionStateFailed when paymentQueue:updatedTransactions is called. The transaction.error.code is -1003 and the transaction.error.localizedDescription is "Cannot connect to iTunes Store"! I have tried removing all products from iTunesConnect, and rebuilt them using different identifiers but that did not help. I have also tried using the App Store app to really connect to the real App Store and download some apps so I do have connectivity. Finally, I have visited the Settings:Store app to make sure I am signed out of my normal app store account. Any ideas? -Rei

    Read the article

  • Best solution for reporting database

    - by zzyzx
    Here is the situation: there is a transaction-intensive database, used for both routine transactions and reports. I was wondering if I could isolate these two operations into two independent databases, so reports could run off one database and all the transactions could occur in another. This would improve performance for the OLTP SQL database. I have gone over a few options like Mirroring, Log shipping, Replication, Snapshots, and Clustering, but would like to discuss the best possible strategy for the desired result. Please advise on the best solution to implement this strategy, or share any other thoughts/suggestions you may have.

    Read the article

  • Are there any tools to help the user to design a State Machine to be consumed by my application?

    - by kolrie
    When reading this question I remembered there was something I have been researching for a while now, and I thought Stack Overflow could be of help. I have created a framework that handles applications as state machines. Currently all the state business logic and transactions are handled via Java code. I was looking for a UI implementation that would allow the user to draw the state machines and transactions and generate a file that can later be consumed by my framework to "run" the workflow according to one or more defined state machines. Ideally I would like to use an open standard like SCXML. The goal for the UI would be to have something like the plugin IBM has for Rational Software Architect. Do you know of any editor, plugin or library that offers something similar, or that would at least serve as a good starting point?

    Read the article

  • Graphical database monitoring tool for debugging

    - by salle55
    I would love a tool that showed changes in a set of predefined tables in real time and in a graphical way: for example, different colors on fields that have changed value, added records, deleted records, etc. I don't want a list of all transactions (like SQL Server Profiler); instead, I want a cleverly visualized, more graphical approach where you can get a great overview if you are just monitoring a few tables. I realize the visualization would be hard if there are a lot of transactions against the database, but when monitoring a few tables in a single session during debugging it would be possible. Does something like this exist? I think it would be great for debugging! Preferably for SQL Server and/or MySQL.

    Read the article

  • better way to write this

    - by ash34
    Hi, I have to create a hash of the form h[:bill] = ["Billy", "NA", 20, "PROJ_A"], keyed by login, where 20 is the cumulative number of hours reported by that login across all task transactions returned by the query (each login has multiple reported transactions). Did I do this in a bad way, or does this seem alright?

        h = Hash.new
        Task.find_each(:include => [:user], :joins => :user,
                       :conditions => ["from_date >= ? AND from_date <= ? AND category = ?",
                                       Date.today - 30, Date.today + 30, 'PROJ1']) do |t|
          h[t.login.intern] = [t.user.name, 'NA',
                               h[t.login.intern].nil? ? (t.hrs_per_day * t.num_days)
                                                      : h[t.login.intern][2] + (t.hrs_day * t.workdays),
                               t.category]
        end

    Also, if I have to aggregate not just by login but by login and category, how do I accomplish this? thanks, ash

    Read the article

  • Should integration testing of DAOs be done in an application server?

    - by HDave
    I have a three-tier application under development and am creating integration tests for the DAOs in the persistence layer. When the application runs in WebSphere or JBoss, I expect to use the connection pooling and transaction manager of those application servers. When the application runs in Tomcat or Jetty, we'll be using C3P0 for pooling and Atomikos for transactions. Because of these different subsystems, should the DAOs be tested in a fully configured application server environment, or should we handle those concerns when integration testing the service layer? Currently we plan on setting up a simple JDBC data source with non-JTA (i.e. resource-local) transactions for DAO integration testing, so no application server is involved, but this leaves me wondering about environmental problems we won't uncover.
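
    For what it's worth, a minimal sketch of the plain-JDBC, resource-local style of DAO integration test described above. The WidgetDao class and the in-memory H2 URL are made up for illustration; the point is simply that no JTA transaction manager, connection pool, or application server is involved:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import org.junit.Assert;
        import org.junit.Test;

        public class WidgetDaoIntegrationTest {

            // Tiny illustrative DAO; stands in for the real DAOs under test.
            static class WidgetDao {
                private final Connection conn;
                WidgetDao(Connection conn) { this.conn = conn; }
                void insert(int id, String name) throws Exception {
                    try (PreparedStatement ps =
                             conn.prepareStatement("INSERT INTO widget (id, name) VALUES (?, ?)")) {
                        ps.setInt(1, id);
                        ps.setString(2, name);
                        ps.executeUpdate();
                    }
                }
                String findName(int id) throws Exception {
                    try (PreparedStatement ps =
                             conn.prepareStatement("SELECT name FROM widget WHERE id = ?")) {
                        ps.setInt(1, id);
                        try (ResultSet rs = ps.executeQuery()) {
                            return rs.next() ? rs.getString(1) : null;
                        }
                    }
                }
            }

            @Test
            public void insertAndFind() throws Exception {
                // Plain JDBC connection with a resource-local transaction: no JTA, no pool,
                // no application server. The in-memory H2 URL is an assumption.
                try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:daotest", "sa", "")) {
                    conn.setAutoCommit(false);
                    try (Statement st = conn.createStatement()) {
                        st.execute("CREATE TABLE widget (id INT PRIMARY KEY, name VARCHAR(255))");
                    }
                    WidgetDao dao = new WidgetDao(conn);
                    dao.insert(1, "sprocket");
                    Assert.assertEquals("sprocket", dao.findName(1));
                    conn.rollback(); // roll back the insert; the in-memory DB vanishes on close anyway
                }
            }
        }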

    Read the article

  • Consolidate loan, purchase & sale tables into one transaction table.

    - by Frank Computer
    INFORMIX-SE with ISQL 7.3: I have separate tables for Loan, Purchase & Sales transactions. Each table's rows are joined to their respective customer rows by:

        customer.id [serial] = loan.foreign_id [integer]
                             = purchase.foreign_id [integer]
                             = sale.foreign_id [integer]

    I would like to consolidate the three tables into one table called "transaction", where a column transaction.trx_type char(1) {L=Loan, P=Purchase, S=Sale} identifies the transaction type. Each transaction will be assigned a unique transaction number [serial]. Is this a good idea, or is it better to keep them in separate tables? Storage space is not a concern; I think it would be easier, programming- and user-wise, to have all types of transactions under one table whenever possible. This implies denormalization.

    Read the article

  • SQL IDENTITY COLUMN

    - by andreas
    Hey guys, I have a SQL table which is basically a statement. Let's say the records in my table have a date and an identity column which is autonumbered and defines the order in which the transactions are displayed in the front end to the client. The issue is that during an insert some of the data went missing, and some transactions between two dates are missing. I need to insert the data into the table, but I need to insert it between the dates and not at the end of the table. If I do a normal insert now, the data will appear at the end of the table and not at the date I specify, because the identity column is autonumbered and cannot be updated. Thanks

    Read the article

  • Using a "vo" for joined data?

    - by keithjgrant
    I'm building a small financial system. Because of double-entry accounting, transactions always come in batches of two or more, so I've got a batch table and a transaction table. (The transaction table has batch_id, account_id, and amount fields, and shared data like date and description are relegated to the batch table.) I've been using basic vo-type models for each table so far. Because of this table structure, though, transactions will almost always be selected with a join on the batch table. So should I take the selected records and splice them into two separate vo objects, or should I create a "shared" vo that contains both batch and transaction data? There are a few cases in which batch records and/or transaction records are used on their own. Are there possible pitfalls down the road if I have "overlapping" vo classes?
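
    For concreteness, a rough sketch (in Java, with field names guessed from the tables described above) of the "shared" option, where one value object carries a transaction row together with the batch columns pulled in by the join:

        import java.math.BigDecimal;
        import java.util.Date;

        // One object per joined row: the transaction's own columns plus the shared
        // batch columns. The alternative is two classes (BatchVo, TransactionVo) and
        // splicing each joined row into one of each.
        public class TransactionWithBatchVo {
            // transaction table
            private long transactionId;
            private long batchId;
            private long accountId;
            private BigDecimal amount;
            // batch table (same values repeated for every transaction in the batch)
            private Date date;
            private String description;

            // getters and setters omitted for brevity
        }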

    Read the article

  • Process every pair in a sequence

    - by Henry Jackson
    I'm looking for a concise way to process every (unordered) pair of elements in a sequence in .NET. I know I can do it with nested foreach loops, but I was looking for something a little more readable. I was imagining something like a modified Any() extension method: IEnumerable<Transaction> transactions = ... if (transactions.AnyPair( (first, second) => first.UniqueID == second.UniqueID)) throw ... Or maybe a foreach-style one: IEnumerable<JigsawPiece> pieces = ... pieces.ForEachPair( (first, second) => { TryFit(first, second); }); This question has been asked for other languages (e.g. see http://stackoverflow.com/questions/942543/operation-on-every-pair-of-element-in-a-list), but I'm looking for a .NET solution.
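
    Underneath any such helper is just a nested loop where the inner index starts one past the outer one, so each unordered pair is visited exactly once. A sketch of that pattern (written in Java here purely for illustration; a LINQ-style extension method would wrap the same loop):

        import java.util.Arrays;
        import java.util.List;
        import java.util.function.BiConsumer;

        public final class Pairs {
            // Visit every unordered pair exactly once: (0,1), (0,2), ..., (n-2,n-1).
            public static <T> void forEachPair(List<T> items, BiConsumer<T, T> action) {
                for (int i = 0; i < items.size(); i++) {
                    for (int j = i + 1; j < items.size(); j++) {
                        action.accept(items.get(i), items.get(j));
                    }
                }
            }

            public static void main(String[] args) {
                List<String> pieces = Arrays.asList("a", "b", "c");
                forEachPair(pieces, (first, second) ->
                        System.out.println(first + " <-> " + second)); // a-b, a-c, b-c
            }
        }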

    Read the article

  • What about the Sql transaction log

    - by Michel
    Hi, I always thought that the SQL transaction log keeps track of all the transactions done in the database, so it could help recover the database file in case of an unexpected power-down or something like that. So then, in normal usage, when the data is committed and written to disk, the log is cleared because all the data is nice and safe in the mdf file. Seeing the ldf file grow, and from some reading, I understand that this is not the case, and it will keep growing until you shrink the log. Only at that point are all the committed transactions cleared and the log file shrunk. I found some stored procedures that should do this, but also found the claim that you first have to back up the database? That last step doesn't make sense to me, so can anyone tell me if that is correct and, if so, why?

    Read the article

  • rails data aggregation

    - by ash34
    Hi, I have to create a hash of the form h[:bill] = ["Billy", "NA", 20, "PROJ_A"], keyed by login, where 20 is the cumulative number of hours reported by that login across all task transactions returned by the query (each login has multiple reported transactions). Did I do this in a bad way, or does this seem alright?

        h = Hash.new
        Task.find_each(:include => [:user], :joins => :user,
                       :conditions => ["from_date >= ? AND from_date <= ? AND category = ?",
                                       Date.today - 30, Date.today + 30, 'PROJ1']) do |t|
          h[t.login.intern] = [t.user.name, 'NA',
                               h[t.login.intern].nil? ? (t.hrs_per_day * t.num_days)
                                                      : h[t.login.intern][2] + (t.hrs_day * t.workdays),
                               t.category]
        end

    Also, if I have to aggregate this data not just by login but by login and category, how do I accomplish this? thanks, ash

    Read the article

  • Does it make sense to use BOTH mongodb and mysql in the same rails application?

    - by Brian Armstrong
    I have a good reason to use MongoDB for part of my app. But people generally describe it as not a good fit for "transactional" applications like a bank, where transactions have to be exact/consistent, etc. Does it make sense to split the models up in Rails and have some of them use MySQL and others Mongo? Or will this generally cause more problems than it's worth? I'm not building a banking app or anything, but was thinking it might make sense for my users table or transactions table (recording revenue) to do that part in MySQL.

    Read the article

  • Why are mainframes still around?

    - by ThaDon
    It's a question you've probably asked or been asked several times: what's so great about mainframes? The answer you've probably been given is "they are fast" or "normal computers can't process as many 'transactions' per second as they do". Jeez, it's not like Google is running a bunch of mainframes, and look how many transactions/sec they do! The question here really is "why?". When I ask this question of the mainframe devs I know, they can't answer; they simply restate "It's fast". With the advent of cloud computing, I can't imagine mainframes being able to compete cost-wise or mindshare-wise (aren't all the Cobol devs going to retire at some point, or will offshore just pick up the slack?). And yet, I know a few companies that still pump out net-new Cobol/mainframe apps, even for things we could do easily in, say, .NET or Java. Does anyone have a really good answer as to why "the mainframe is faster", or can someone point me to some good articles on the topic?

    Read the article

  • Who owes who money optimisation problem

    - by Francis
    Say you have n people, each of whom may owe the others money. In general it should be possible to reduce the number of transactions that need to take place. For example, if X owes Y £4 and Y owes X £8, then Y only needs to pay X £4 (1 transaction instead of 2). This becomes harder when X owes Y, but Y owes Z, who owes X as well. I can see that you can easily calculate one particular cycle. It helps me to think of it as a fully connected graph, with the nodes being the amount each person owes. The problem seems to be NP-complete, but what kind of optimisation algorithm could I use, nevertheless, to reduce the total number of transactions? It doesn't have to be that efficient, as n is quite small for me.
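
    One simple heuristic, sketched below in Java: first net out each person's overall balance, then repeatedly have the largest debtor pay the largest creditor. This needs at most n-1 transfers and is easy to implement, though (as the question notes) it is not guaranteed to find the true minimum number of transactions:

        import java.util.AbstractMap;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.PriorityQueue;

        public class DebtSettler {
            // Input: net balance per person (positive = is owed money overall, negative = owes).
            // The balances must sum to zero. Greedy: largest debtor pays largest creditor.
            public static List<String> settle(Map<String, Long> net) {
                PriorityQueue<Map.Entry<String, Long>> creditors =
                        new PriorityQueue<>((a, b) -> Long.compare(b.getValue(), a.getValue()));
                PriorityQueue<Map.Entry<String, Long>> debtors =
                        new PriorityQueue<>((a, b) -> Long.compare(a.getValue(), b.getValue()));
                for (Map.Entry<String, Long> e : net.entrySet()) {
                    if (e.getValue() > 0) creditors.add(new AbstractMap.SimpleEntry<>(e));
                    else if (e.getValue() < 0) debtors.add(new AbstractMap.SimpleEntry<>(e));
                }
                List<String> transfers = new ArrayList<>();
                while (!creditors.isEmpty() && !debtors.isEmpty()) {
                    Map.Entry<String, Long> cr = creditors.poll();
                    Map.Entry<String, Long> db = debtors.poll();
                    long amount = Math.min(cr.getValue(), -db.getValue());
                    transfers.add(db.getKey() + " pays " + cr.getKey() + " " + amount);
                    // Whoever still has a non-zero balance goes back into the queue.
                    if (cr.getValue() - amount > 0) { cr.setValue(cr.getValue() - amount); creditors.add(cr); }
                    if (db.getValue() + amount < 0) { db.setValue(db.getValue() + amount); debtors.add(db); }
                }
                return transfers;
            }

            public static void main(String[] args) {
                Map<String, Long> net = new HashMap<>();
                net.put("X", 4L);   // X is owed 4 overall
                net.put("Y", -4L);  // Y owes 4 overall
                net.put("Z", 0L);
                System.out.println(settle(net)); // [Y pays X 4]
            }
        }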

    Read the article

  • Average of a Sum in Mysql query

    - by chupeman
    I am having some problems creating a query that gives me the average of a sum. I read a few examples here on Stack Overflow and couldn't do it. Can anyone help me understand how to do this, please? Basically I need the average transaction value by cashier. I can't run a basic AVG because it will take all rows, but each transaction can have multiple rows. At the end I want to have:

        Cashier | Average
        131     | 44.31   (which comes from the sum divided by 3 transactions, not 5 rows)
        130     | 33.15
        etc.

    This is the query I have to SUM the transactions, but I don't know how or where to include the AVG function:

        SELECT `products`.`Transaction_x0020_Number`,
               Sum(`products`.`Sales_x0020_Value`) AS `SUM of Sales_x0020_Value`,
               `products`.`Cashier`
        FROM `products`
        GROUP BY `products`.`Transaction_x0020_Number`, `products`.`Date`, `products`.`Cashier`
        HAVING (`products`.`Date` = {d'2010-06-04'})

    Any help is appreciated.

    Read the article

  • How should flushing be handled in a doctrine EntityManager instance shared across different services in symfony2?

    - by Jbm
    I have defined several services in Symfony 2 which persist changes to the database. These services have the doctrine instance as one of their dependencies:

        a.given.service:
            class: Acme\TestBundle\Service\AGivenService
            arguments: [@doctrine]

    Suppose I have two different services and both of them persist objects through the EntityManager, which is obtained like this from the doctrine instance:

        $em = $doctrine->getEntityManager();

    Would all services always share the same EntityManager? If so, how should I handle flushing if I wanted to handle all the changes in a single transaction? I have checked http://docs.doctrine-project.org/projects/doctrine-orm/en/2.0.x/reference/transactions-and-concurrency.html and it explains how to handle different transactions in a request, but I want to achieve the opposite, which is having different changes in different services handled as a single transaction. Is there a better approach to handling multiple changes in different services? For now my best bet is having a front-end service in charge of calling the other services and doing the flushing afterwards. Backend services would persist objects but would not do any flushing.

    Read the article

  • Homework - C# - Creating an object instance with a button click

    - by Erica
    I'm new to learning Windows Programming with C#. My current assignment is to create a very simple bank account program: The user enters the accountholder name, account number and beginning balance, then presses a "Continue" button to work with that account by making deposits and withdrawals. I wrote a separate "BankAccount" class with the required data members and methods. I've put the code for the creation of the BankAccount object in the Continue button click event BankAccount currentAccount = new BankAccount(acctName, acctNum, beginningBalance); But that seems to make it local to that method only, and currentAccount is not recognized when I'm programming the click event for the "Record Transactions" (deposits and withdrawals) button. How and where should the creation of the BankAccount object be coded in order for it to be created when the "Continue" button is clicked and also recognized in the "Record Transactions" button click event? Please let me know if any clarification is needed, or if you need to see part or all of my code.

    Read the article
