Search Results

Search found 3996 results on 160 pages for 'operations'.

  • Backing up SQL Azure

    - by Herve Roggero
    That's it!!! After many days and nights... and an amazing set of challenges, I just released the Enzo Backup for SQL Azure BETA product (http://www.bluesyntax.net). Clearly, that was one of the most challenging projects I have done so far. Why??? Because creating a highly redundant system that expects failures at all times, for an operation that could take anywhere from a couple of minutes to a couple of hours, while still making sure that the operation completes at some point, was remarkably challenging. Some routines have more error trapping than actual code... Here are a few things I had to take into account: Exponential Backoff (explained in another post) Dual dynamic determination of number of rows to backup  Dynamic reduction of batch rows used to restore the data Implementation of a flexible BULK Insert API that the tool could use Implementation of a custom Storage REST API to handle automatic retries Automatic data chunking based on blob sizes Compression of data Implementation of the Task Parallel Library at multiple levels including deserialization of Azure Table rows and backup/restore operations Full or Partial Restore operations Implementation of a Ghost class to serialize/deserialize data tables And that's just a partial list... I will explain what some of those mean in future blog posts. A lot of the complexities had to do with implementing a form of retry logic, depending on the resource and the operation.
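    The post mentions exponential backoff and retry logic without showing code, so here is a minimal C# sketch of what backoff-wrapped retry around a transient cloud call might look like; the RunWithRetry helper, its parameters, and the UploadBatch call in the usage comment are illustrative assumptions, not the actual Enzo Backup implementation.

        using System;
        using System.Threading;

        static class TransientRetry
        {
            // Retry a transient operation, doubling the wait after each failure
            // and rethrowing once the attempt budget is exhausted.
            public static T RunWithRetry<T>(Func<T> operation, int maxAttempts, int initialDelayMs)
            {
                int delayMs = initialDelayMs;
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return operation();
                    }
                    catch (Exception)
                    {
                        if (attempt >= maxAttempts) throw;  // out of attempts: surface the error
                        Thread.Sleep(delayMs);              // back off before the next try
                        delayMs *= 2;                       // exponential growth of the delay
                    }
                }
            }
        }

        // Usage (hypothetical): wrap a single throttling-prone batch upload.
        // int rows = TransientRetry.RunWithRetry(() => UploadBatch(batch), 5, 500);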

    Read the article

  • You Need BRM When You have EBS – and Even When You Don’t!

    - by bwalstra
    Here is a list of criteria to test your business-systems (Oracle E-Business Suite, EBS) or otherwise to support your lines of digital business - if you score low, you need Oracle Billing and Revenue Management (BRM). Functions Scalability High Availability (99.999%) Performance Extensibility (e.g. APIs, Tools) Upgradability Maintenance Security Standards Compliance Regulatory Compliance (e.g. SOX) User Experience Implementation Complexity Features Customer Management Real-Time Service Authorization Pricing/Promotions Flexibility Subscriptions Usage Rating and Pricing Real-Time Balance Mgmt. Non-Currency Resources Billing & Invoicing A/R & G/L Payments & Collections Revenue Assurance Integration with Key Enterprise Applications Reporting Business Intelligence Order & Service Mgmt (OSM) Siebel CRM E-Business Suite On-/Off-line Mediation Payment Processing Taxation Royalties & Settlements Operations Management Disaster Recovery Overall Evaluation Implementation Configuration Extensibility Maintenance Upgradability Functional Richness Feature Richness Usability OOB Integrations Operations Management Leveraging Oracle Technology Overall Fit for Purpose You need Oracle BRM: Built for high-volume transaction processing Monetizes any service or event based on any metric Supports high-volume usage rating, pricing and promotions Provides real-time charging, service authorization and balance management Supports any account structure (e.g. corporate hierarchies etc.) Scales from low volumes to extremely high volumes of transactions (e.g. billions of trxn per hour) Exposes every single function via APIs (e.g. Java, C/C++, PERL, COM, Web Services, JCA) Immediate Business Benefits of BRM: Improved business agility and performance Supports the flexibility, innovation, and customer-centricity required for current and future business models Faster time to market for new products and services Supports 360 view of the customer in real-time – products can be launched to targeted customers at a record-breaking pace Streamlined deployment and operation Productized integrations, standards-based APIs, and OOB enablement lower deployment and maintenance costs Extensible and scalable solution Minimizes risk – initial phase deployed rapidly; solution extended and scaled seamlessly per business requirements Key Considerations Productized integration with key Oracle applications Lower integration risks and cost Efficient order-to-cash process Engineered solution – certification on Exa platform Exadata tested at PayPal in the re-platforming project Optimal performance of Oracle assets on Oracle hardware Productized solution in Rapid Offer Design and Order Delivery Fast offer design and implementation Significantly shorter order cycle time Productized integration with Oracle Enterprise Manager Visibility to system operability for optimal up time

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #039

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each one. Let me know which of the following is your favorite article from memory lane. 2007 FQL – Facebook Query Language Facebook lists the following advantages of FQL: Condensed XML reduces bandwidth and parsing costs. More complex requests can reduce the number of requests necessary. Provides a single consistent, unified interface for all of your data. It’s fun! UDF – Get the Day of the Week Function The day of the week can be retrieved in SQL Server by using the DatePart function. The value returned by the function is between 1 (Sunday) and 7 (Saturday). To convert this to a string representing the day of the week, use a CASE statement. UDF – Function to Get Previous And Next Work Day – Exclude Saturday and Sunday While reading Ben Nadel's ColdFusion blog post Getting the Previous Day In ColdFusion, Excluding Saturday And Sunday, I realized that I use a similar function on my SQL Server database. This function excludes the weekends (Saturday and Sunday), and it gets the previous as well as the next work day. Complete Series of SQL Server Interview Questions and Answers Data Warehousing Interview Questions and Answers – Introduction Data Warehousing Interview Questions and Answers – Part 1 Data Warehousing Interview Questions and Answers – Part 2 Data Warehousing Interview Questions and Answers – Part 3 Data Warehousing Interview Questions and Answers Complete List Download 2008 Introduction to Log Viewer In SQL Server, all the Windows event logs can be seen along with the SQL Server logs. The interface for all the logs is the same, and it can be launched from the same place. The logs can be exported and filtered as well. DBCC SHRINKFILE Takes Long Time to Run If you are a DBA involved with database maintenance and file group maintenance, you have probably experienced that DBCC SHRINKFILE operations often take a long time while other database operations are relatively quicker. mssqlsystemresource – Resource Database The purpose of the resource database is to facilitate upgrading to a new version of SQL Server without any hassle. In previous versions, whenever SQL Server was upgraded, all the previous version's system objects needed to be dropped and the new version's system objects created. 2009 Puzzle – Write Script to Generate Primary Key and Foreign Key In SQL Server Management Studio (SSMS), there is no option to script all the keys. If one needs to script the keys, they will have to script each key manually, one at a time. If a database has many tables, generating one key at a time can be a very intricate task. I want to throw a question out to all of you: do any of you have scripts for the same purpose? Maximizing View of SQL Server Management Studio – Full Screen – New Screen I had explained the following two different methods: 1) Open Results in Separate Tab - This is a very interesting method, as the results pane shows up in a different tab instead of splitting the screen horizontally. 2) Open SSMS in Full Screen - This always works well. Not many people are aware of this method; hence, very few people use it to enhance performance. 2010 Find Queries using Parallelism from Cached Plan This T-SQL script gets all the queries and their execution plans where parallelism operations are kicked in.
    Pay attention: TOP 10 is used; if you have lots of transactional operations, I suggest that you change TOP 10 to TOP 50. This is the list of all the articles in the series on computed columns. SQL SERVER – Computed Column – PERSISTED and Storage This article talks about how computed columns are created and why they take more storage space than before. SQL SERVER – Computed Column – PERSISTED and Performance This article talks about how PERSISTED columns give better performance than non-persisted columns. SQL SERVER – Computed Column – PERSISTED and Performance – Part 2 This article talks about how non-persisted columns give better performance than PERSISTED columns. SQL SERVER – Computed Column and Performance – Part 3 This article talks about how an index improves the performance of computed columns. SQL SERVER – Computed Column – PERSISTED and Storage – Part 2 This article talks about how creating an index on a computed column does not grow the row length of the table. SQL SERVER – Computed Columns – Index and Performance This article summarizes all the articles related to computed columns. 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 21 of 31 What is Data Warehousing? What is Business Intelligence (BI)? What is a Dimension Table? What is Dimensional Modeling? What is a Fact Table? What are the Fundamental Stages of Data Warehousing? What are the Different Methods of Loading Dimension Tables? Describe the Foreign Key Columns in Fact Table and Dimension Table. What is Data Mining? What is the Difference between a View and a Materialized View? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 22 of 31 What is OLTP? What is OLAP? What is the Difference between OLTP and OLAP? What is ODS? What is ER Diagram? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 23 of 31 What is ETL? What is VLDB? Is the OLTP Database Design Optimal for a Data Warehouse? If Denormalizing Improves Data Warehouse Processes, Why is the Fact Table in Normal Form? What are Lookup Tables? What are Aggregate Tables? What is Real-Time Data Warehousing? What are Conformed Dimensions? What is a Conformed Fact? How do you Load the Time Dimension? What is the Level of Granularity of a Fact Table? What are Non-Additive Facts? What is a Factless Fact Table? What are Slowly Changing Dimensions (SCD)? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 24 of 31 What is a Hybrid Slowly Changing Dimension? What is a BUS Schema? What is a Star Schema? What is a Snowflake Schema? What are the Differences between the Star and Snowflake Schemas? What is the Difference between ER Modeling and Dimensional Modeling? What is a Degenerate Dimension Table? Why is Data Modeling Important? What is a Surrogate Key? What is a Junk Dimension? What is a Data Mart? What is the Difference between OLAP and Data Warehouse? What is a Cube and Linked Cube with Reference to Data Warehouse? What is a Snapshot with Reference to Data Warehouse? What is Active Data Warehousing? What is the Difference between Data Warehousing and Business Intelligence? What is MDS? Explain the Paradigm of Bill Inmon and Ralph Kimball. SQL SERVER – Azure Interview Questions and Answers – Guest Post by Paras Doshi – Day 25 of 31 Paras Doshi has submitted 21 interesting questions and answers for SQL Azure. 1. What is SQL Azure? 2. What is cloud computing?
    3. How is SQL Azure different from SQL Server? 4. How many replicas are maintained for each SQL Azure database? 5. How can we migrate from SQL Server to SQL Azure? 6. Which tools are available to manage SQL Azure databases and servers? 7. Tell me something about security and SQL Azure. 8. What is SQL Azure Firewall? 9. What is the difference between web edition and business edition? 10. How do we synchronize On-Premise SQL Server with SQL Azure? 11. How do we back up SQL Azure data? 12. What is the current pricing model of SQL Azure? 13. What is the current limitation of the size of a SQL Azure DB? 14. How do you handle datasets larger than 50 GB? 15. What happens when the SQL Azure database reaches Max Size? 16. How many databases can we create in a single server? 17. How many servers can we create in a single subscription? 18. How do you improve the performance of a SQL Azure Database? 19. What is code near application topology? 20. What were the latest updates to the SQL Azure service? 21. When does a workload on SQL Azure get throttled? SQL SERVER – Interview Questions and Answers – Guest Post by Malathi Mahadevan – Day 26 of 31 Malathi had asked a simple question which has several answers. Each answer makes you think and ponder about the reality of the IT world. Look at the simple question – ‘What is the toughest challenge you have faced in your present job and how did you handle it?’ – and its various answers. Each answer has its own story. SQL SERVER – Interview Questions and Answers – Guest Post by Rick Morelan – Day 27 of 31 Rick Morelan of Joes2Pros has written an excellent blog post on the subject of how to find top N values. Most people are fully aware of how the TOP keyword works with a SELECT statement. After years of preparing so many students to pass the SQL Certification, I noticed they were pretty well prepared for job interviews too. Yes, they would do well in the interview, but not great. There seemed to be a few questions that would come up repeatedly for almost everyone. Rick addresses similar questions in his lucid writing style. 2012 Observation of Top with Index and Order of Resultset SQL Server has lots of things to learn and share. It is amazing to see how people evaluate and understand different techniques and styles differently when implementing. The real reason may be absolutely different, but we may blame something totally different for the incorrect results. Read the blog post to learn more. How do I Record Video and Webcast How to Convert Hex to Decimal or INT Earlier I asked a question about how to convert Hex to Decimal. I promised that I would post an answer with due credit to the author, but never got around to writing a blog post about it. Read the original post here: SQL SERVER – Question – How to Convert Hex to Decimal. Query to Get Unique Distinct Data Based on Condition – Eliminate Duplicate Data from Resultset The natural reaction will be to suggest DISTINCT or GROUP BY. However, not all the questions can be solved by DISTINCT or GROUP BY. Let us see the following example, where a user wanted only the latest records to be displayed. Let us see the example to understand further. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Election 2012: Twitter Breaks Records with MySQL

    - by Bertrand Matthelié
    Twitter VP of Infrastructure Operations Engineering Mazen Rawashdeh shared news and numbers yesterday on his blog: "Last night, the world tuned in to Twitter to share the election results as U.S. voters chose a president and settled many other campaigns. Throughout the day, people sent more than 31 million election-related Tweets (which contained certain key terms and relevant hashtags). And as results rolled in, we tracked the surge in election-related Tweets at 327,452 Tweets per minute (TPM). These numbers reflect the largest election-related Twitter conversation during our 6 years of existence, though they don’t capture the total volume of all Tweets yesterday." "Last night, Twitter averaged about 9,965 TPS from 8:11pm to 9:11pm PT, with a one-second peak of 15,107 TPS at 8:20pm PT and a one-minute peak of 874,560 TPM. Seeing a sustained peak over the course of an entire event is a change from the way people have previously turned to Twitter during live events. Now, rather than brief spikes, we are seeing sustained peaks for hours." Congrats to Jeremy Cole, Davi Arnaut and the rest of the team at Twitter for their excellent work! Jeremy recently gave a keynote presentation at MySQL Connect describing how MySQL powers Twitter, and why they chose and continue to rely on MySQL for their operations. You can watch the presentation here. He also went into more detail during another presentation later that day, and you can access the slides here. Below are a couple of tweets from Jeremy after what must have been hectic days... Keep up the good work, guys!

    Read the article

  • Silverlight Cream for May 29, 2010 -- #872

    - by Dave Campbell
    In this Issue: Michael Washington, Chris Koenig, Kunal Chowdhury, SilverLaw, Shayne Burgess, Ian T. Lackey, Alan Beasley, Marlon Grech. Shoutouts: Ozymandias has a post up that's not Silverlight necessarily, but it's pretty cool: Typeface Selection Flowchart Damian Schenkelman posted about the latest: Prism 2.2 Release available. Get it at Codeplex. From SilverlightCream.com: Silverlight 4 OData Paging with RX Extensions Michael Washington continues with this OData and Rx post using the View Model Style. Michael has some good external links, good info, and all the code. WP7 Part 4: Morphing and Mapping Chris Koenig has the 4th in his WP7 series he's doing, and this one is on MVVMLight and BingMaps ... code included. Silverlight 4: Interoperability with Excel using the COM Object Kunal Chowdhury has a post up about Excel Interoperability using the COM object, including opening an Excel Workbook and writing data out, then modifying the data in the spreadsheet and seeing it updated in the app. Creating A Flexible Surface Effect – Silverlight 4 (Part 1) SilverLaw put up a demo of an awesome 'water ripple' SL4 demo a couple days ago, and now he's got part 1 of a great tutorial explaining it all. Service Operations and the WCF Data Services Client Shayne Burgess has a post up about Service Operations and how they can be used by the WCF Data Services client. Role Based Silverlight Behaviors Also from the Open Light Group, Ian T. Lackey has a post up about Behaviors that take a list of roles and update the UI appropriately. How to Toggle (Show/Hide) using Behaviours (Behaviors) between Visual States or Storyboards in Expression Blend for Windows Phone Alan Beasley has a quick post up talking about the solution he found to a problem he was having with state switching in a WP7 app. MEFedMVVM: Testability Marlon Grech has another MEFedMVVM post up and he's discussing Testability all rolled in there with everything else :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How do I measure performance of a virtual server?

    - by Sergey
    I've got a VPS running Ubuntu. Being a virtual server, I understand that it shares resources with an unknown number of other servers, and I'm noticing that it's considerably slower than my desktop machine. Is there some tool to measure the performance of the virtual machine? I'd be curious to see some approximate measure similar to bogomips, possibly for CPU (operations/sec), memory and disk read/write speed. I'd like to be able to compare those numbers to my desktop machine. I'm not interested in the specs of the actual physical machine my VPS is running on - by doing cat /proc/cpuinfo I can see that it's a nice quad-core Xeon machine, but that doesn't matter to me. I'm basically interested in how fast a program would run on my VPS - how many CPU operations it can make in a second, how many bytes it can write to RAM or to disk. I only have ssh access to the machine, so the tool needs to be command-line. I could write a script which, say, does some calculations in a loop for a second and counts how many loops it was able to do, or something similar to measure disk and RAM performance. But I'm sure something like this already exists.

    Read the article

  • Oracle SCM at APICS Denver Oct 14-16

    - by Stephen Slade
    Join us in Denver, October 14–16, 2012, for the 2012 APICS International Conference & Expo. One of the world's largest gatherings of supply chain and operations management professionals, APICS provides an annual interactive learning environment for operations and supply chain professionals to lead and apply best practices. For those of you considering attending APICS next month, be sure to keep Oracle Supply Chain applications on your radar. Oracle will again have a prominent position at the annual global conference. Our product booth will have supply chain demonstrations for manufacturing, value chain planning, value chain execution and Agile product lifecycle management offerings. Stop by our booth to register for one of numerous prizes and awards and chat with one of our supply chain product experts. Oracle customers will be presenting at various sessions throughout the event.  One of the great stories to be shared is the SUN supply chain transformation. For those interested in moving costs down to the bottom line, this is the session you should attend. http://www.apics.org/sites/conference/2012/home

    Read the article

  • Best practice for setting Effect parameters in XNA

    - by hichaeretaqua
    I want to ask if there is a best practice for setting Effect parameters in XNA. Or in other words, what exactly happens when I call pass.Apply()? I can imagine multiple scenarios: Each time Apply is called, all effect parameters are transferred to the GPU, and therefore it has no real influence how often I set a parameter. Each time Apply is called, only the parameters that got set are transferred, so Set operations that don't actually set a new value should be avoided (caching is useful). Each time Apply is called, only the parameters that got changed are transferred, so caching Set operations is useless. Or the whole question is moot because none of the mentioned ways has any noteworthy impact on game performance. So the final question: Is it useful to implement some caching of Set operations like: private Matrix _world; public Matrix World { get { return _world; } set { if (value == _world) return; _effect.Parameters["xWorld"].SetValue(value); _world = value; } } Thanking you in anticipation.
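    Whatever Apply() transfers internally, one habit that is generally safe in XNA is to cache the EffectParameter reference itself (the Parameters["..."] string indexer performs a name lookup on every call) and to skip sets whose value has not changed. Below is a small sketch along those lines, reusing the "xWorld" parameter name from the question; the wrapper class is an illustration, not an official API.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Illustrative wrapper: one-time parameter lookup plus a redundant-set check.
        public class CachedWorldParameter
        {
            private readonly EffectParameter _worldParam;
            private Matrix _world;

            public CachedWorldParameter(Effect effect)
            {
                // Look the parameter up once instead of effect.Parameters["xWorld"] on every set.
                _worldParam = effect.Parameters["xWorld"];
            }

            public Matrix World
            {
                get { return _world; }
                set
                {
                    if (value == _world) return;  // skip sets that would not change anything
                    _worldParam.SetValue(value);
                    _world = value;
                }
            }
        }

    Whether the skipped SetValue saves anything depends on how Apply() batches dirty parameters, but avoiding the per-set string lookup costs nothing either way.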

    Read the article

  • apache2 is making my amazon ec2 unavailable, any ideas?

    - by Tim
    I have a web server running on an EC2 c1.medium instance. The instance is running Ubuntu, with Apache2 and MySQL. The Ubuntu and Apache versions are the following: Ubuntu DISTRIB_ID=Ubuntu DISTRIB_RELEASE=11.04 DISTRIB_CODENAME=natty DISTRIB_DESCRIPTION="Ubuntu 11.04" Apache2 Server version: Apache/2.2.17 (Ubuntu) Server built: Feb 22 2011 18:33:02 Sometimes, randomly, my server "hangs up": I cannot connect to it using normal web access or ssh. If I reboot the instance it reboots fine, and the Amazon system log doesn't show anything weird, but the problem persists. The only way to solve it is stopping the instance and starting it again. I think the problem has something to do with Apache, because the last lines of the error log list: normal errors [Sun Jun 19 06:25:09 2011] [notice] Apache/2.2.17 (Ubuntu) PHP/5.3.5-1ubuntu7.2 with Suhosin-Patch configured -- resuming normal operations nobody can connect... no more errors until I stop and start the instance normal errors [Wed Jun 22 14:21:18 2011] [notice] Apache/2.2.17 (Ubuntu) PHP/5.3.5-1ubuntu7.2 with Suhosin-Patch configured -- resuming normal operations nobody can connect... no more errors until I stop and start the instance Can somebody please help me?

    Read the article

  • Should we write detailed architecture design or just an outline when designing a program?

    - by EpsilonVector
    When I'm doing design for a task, I keep fighting this nagging feeling that, aside from being a general outline, it's going to be more or less ignored in the end. I'll give you an example: I was writing a frontend for a device that has read/write operations. It made perfect sense in the class diagram to give it a read and a write function. Yet when it came down to actually writing them I realized they were literally the same function with just one line of code changed (read vs write function call), so to avoid code duplication I ended up implementing a do_io function with a parameter that distinguishes between operations, as sketched below. Goodbye original design. This is not a terribly disruptive change, but it happens often and can happen in more critical parts of the program as well, so I can't help but wonder if there's a point to designing in more detail than a general outline, at least when it comes to the program's architecture (obviously when you are specifying an API you have to spell everything out). This might just be the result of my inexperience in doing design, but on the other hand we have agile methodologies which sort of say "we give up on planning far ahead, everything is going to change in a few days anyway", which is often how I feel. So, how exactly should I "use" design?
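    A minimal C# sketch of the collapse described above; the IDevice interface, the IoOperation enum and the DoIo signature are hypothetical stand-ins for the poster's device API, not the actual code.

        // Hypothetical device API assumed for illustration.
        public interface IDevice
        {
            int DeviceRead(byte[] buffer, int offset, int count);
            int DeviceWrite(byte[] buffer, int offset, int count);
        }

        public enum IoOperation { Read, Write }

        public class DeviceFrontend
        {
            private readonly IDevice _device;

            public DeviceFrontend(IDevice device) { _device = device; }

            // The separate Read()/Write() from the class diagram, collapsed into one
            // method because they differed by a single line.
            public int DoIo(IoOperation op, byte[] buffer, int offset, int count)
            {
                // ...shared validation, locking, logging would go here...
                int transferred = (op == IoOperation.Read)
                    ? _device.DeviceRead(buffer, offset, count)
                    : _device.DeviceWrite(buffer, offset, count);
                // ...shared error handling and cleanup would go here...
                return transferred;
            }
        }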

    Read the article

  • Manager/Container class vs static class methods

    - by Ben
    Suppose I have a Widget class that is part of a framework used independently by many applications. I create Widget instances in many situations and their lifetimes vary. In addition to Widget's instance-specific methods, I would like to be able to perform the following class-wide operations: Find a single Widget instance based on a unique id Iterate over the list of all Widgets Remove a widget from the set of all widgets In order to support these operations, I have been considering two approaches: Container class - Create some container or manager class, WidgetContainer, which holds a list of all Widget instances, supports iteration and provides methods for Widget addition, removal and lookup. For example in C#: public class WidgetContainer : IEnumerable<Widget> { public void AddWidget(Widget widget); public Widget GetWidget(WidgetId id); public void RemoveWidget(WidgetId id); } Static class methods - Add static class methods to Widget. For example: public class Widget { public Widget(WidgetId id); public static Widget GetWidget(WidgetId id); public static void RemoveWidget(WidgetId id); public static IEnumerable<Widget> AllWidgets(); } Using a container class has the added problem of how to access the container class. Make it a singleton?..yuck! Create some World object that provides access to all such container classes? I have seen many frameworks that use the container class approach, so what is the general consensus?
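    One way to keep the container approach without a singleton is to build the WidgetContainer at the composition root and pass it to whatever needs the class-wide view. The sketch below reuses the question's names with a Dictionary-backed implementation; the stub Widget/WidgetId types and the Id property are assumptions added only so it compiles.

        using System.Collections;
        using System.Collections.Generic;

        public class WidgetId { }

        public class Widget
        {
            public WidgetId Id { get; private set; }   // assumed accessor for the unique id
            public Widget(WidgetId id) { Id = id; }
        }

        public class WidgetContainer : IEnumerable<Widget>
        {
            private readonly Dictionary<WidgetId, Widget> _widgets = new Dictionary<WidgetId, Widget>();

            public void AddWidget(Widget widget) { _widgets[widget.Id] = widget; }
            public Widget GetWidget(WidgetId id) { return _widgets[id]; }
            public void RemoveWidget(WidgetId id) { _widgets.Remove(id); }

            public IEnumerator<Widget> GetEnumerator() { return _widgets.Values.GetEnumerator(); }
            IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
        }

        // Instead of a singleton, the owner constructs the container and hands it
        // to collaborators that need lookup or iteration over all widgets.
        public class WidgetEditor
        {
            private readonly WidgetContainer _widgets;
            public WidgetEditor(WidgetContainer widgets) { _widgets = widgets; }
        }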

    Read the article

  • Banco Espírito Santo Increases Sales Campaign Success Rate with Siebel CRM

    - by Tony Berk
    Banco Espírito Santo (BES), founded in 1869, is the second-largest private financial institution in Portugal with a 20.3% domestic market share, 2.1 million customers, and more than 700 in-country branches. It also has a strong international presence with operations in 23 countries and four continents. With strong growth in its major markets, BES needed a modern, cost-effective, scalable, and reliable customer relationship management (CRM) solution for its retail operations. The bank wanted to optimize client relationship management and integrate all customer touch points and service channels to improve the success of its sales and marketing initiatives. BES implemented the same CRM solution as many other leading banks: Oracle's Siebel CRM. With Siebel CRM 8.1 and other Oracle solutions, BES significantly increased sales of its new financial products across all channels by up to 25%, and it expects to increase revenue by up to US$4 million annually. It also improved the success rate of bank branch sales, marketing, and lead generation campaigns by nearly 10%. “We are very happy with Oracle’s Siebel CRM applications. We already knew that this was the best solution available, but it has surpassed our best expectations,” said João Manaças, Customer Relationship Management Manager, Personal Marketing Department, Banco Espírito Santo. Click here to learn more about BES's use of Siebel CRM.

    Read the article

  • Column order can matter

    - by Dave Ballantyne
    Ordinarily, the column order of a SQL statement does not matter. Select a,b,c from table will produce the same execution plan as Select c,b,a from table. However, sometimes it can make a difference. Consider this statement (maxdop is used to make a simpler plan and has no impact on the main point): select SalesOrderID, CustomerID, OrderDate, ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc, ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc from sales.SalesOrderHeader order by CustomerID, OrderDate option (maxdop 1) If you look at the execution plan, you will see something similar to this: three sorts. One for RownAsc, one for RownDesc and the final one for the ‘Order by’ clause. Sorting is an expensive operation and one that should be avoided if possible. So with this in mind, it may come as some surprise that the optimizer does not re-order operations to group them together when the incoming data is in a similar (if not exactly the same) sorted sequence. A simple change to swap the RownAsc and RownDesc columns to produce this statement: select SalesOrderID, CustomerID, OrderDate, ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc, ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc from Sales.SalesOrderHeader order by CustomerID, OrderDate option (maxdop 1) will result in a different and more efficient query plan with one less sort. The optimizer, although unable to automatically re-order operations, HAS taken advantage of the data ordering if it is as required. This is well worth taking advantage of if you have different sorting requirements in one statement. Try grouping the functions that require the same order together and save yourself a few extra sorts.

    Read the article

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be communication problem, timeout, temporary shutdown one of the systems, the second system cannot be synchronized by some internal reason. There were several potential problems that prevented from enclosing the whole synchronization process in one transaction. Decision was to restart the whole sync process if it was not finished (in case of the error). For this purpose was created an additional service. Let’s call it the Resync service. We still keep the id pairs in both systems, but only for the fast access not for the synchronization process. For the synchronizing these id-s now are kept in one main place, in the Resync service database. The Resync service keeps record as: ·       Id.A ·       Id.B ·       Entity.Type ·       Operation (Create, Update, Delete) ·       IsSyncStarted (true/false) ·       IsSyncFinished (true/false0 The example now looks like: 1.       System A creates id.A. id.A is saved on the A. Id.A is sent to the BizTalk. 2.       BizTalk sends id.A to the Resync and to the B. id.A is saved on the Resync. 3.       System B creates id.B. id.A+id.B are saved on the B. id.A+id.B are sent to the BizTalk. 4.       BizTalk sends id.A+id.B to the Resync and to the A. id.A+id.B are saved on the Resync. 5.       id.A+id.B are saved on the B. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: ·       Save (id.A, Entity.Type, Operation) ·       Save (id.A, id.B, Entity.Type, Operation) ·       Resync () Two Save() are used to save id-s to the service storage. See in the above example, in 2 and 4 steps. What about the Resync()? It is the method that finishes the interrupted synchronization processes. If Save() is started by the trigger event, the Resync() is working as an independent process. It periodically scans the Resync storage to find out “unfinished” records. Then it restarts the synchronization processes. It tries to synchronize them several times then gives up.     One more thing, both systems A and B must tolerate duplicates of one synchronizing process. Say on the step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process that will send the id.A to B second time. In this case system B must just send back again also created id.A+id.B pair without errors. That means “tolerate duplicates”. Fourth draft Next draft was created only because of the aesthetics. As it always happens, aesthetics gave significant performance gain to the whole system. First was the stupid question. Why do we need this additional service with special database? Can we just master the BizTalk to do something like this Resync() does? So the Resync orchestration is doing the same thing as the Resync service. It is started by the Id.A and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message.     Here is a diagram the whole process without errors. It is pretty straightforward. The Resync orchestration is waiting for the Finish message specific period of time then resubmits the Id.A message. It resubmits the Id.A message specific number of times then gives up and gets suspended. It can be resubmitted then it starts the whole process again: waiting [, resubmitting [, get suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with special “Resubmitted” flag. The subscription filter on the Resync orchestration includes predicate as (Resubmit_Flag != “Resubmitted”). 
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestration instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync   Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B got the response the id.A+id.B and this finished the Resync service execution. What is interesting about this, there were submitted several identical id.A messages and only one id.A+id.B message. Because of this, the system B and the Resync must tolerate the duplicate messages. We also told about this requirement for the system B. Now the same requirement is for the Resunc. Let’s assume the system B was very slow in the first response and the Resync service had time to resubmit two id.A messages. System B responded not, as it was in previous case, with one id.A+id.B but with two id.A+id.B messages. First of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where it goes? So, we have to add one more internal requirement. The whole solution must tolerate many identical id.A+id.B messages. It is easy task with the BizTalk. I added the “SinkExtraMessages” subscriber (orchestration with one receive shape), that just get these messages and do nothing. Real design Real architecture is much more complex and interesting. In reality each system can submit several id.A almost simultaneously and completely unordered. There are not only the “Create entity” operation but the Update and Delete operations. And these operations relate each other. Say the Update operation after Delete means not the same as Update after Create. In reality there are entities related each other. Say the Order and Order Items. Change on one of it could start the series of the operations on another. Moreover, the system internals are the “black boxes” and we cannot predict the exact content and order of the operation series. It worth to say, I had to spend a time to manage the zombie message problems. The zombies are still here, but this is not a problem now. And this is another story. What is interesting in the last design? One orchestration works to help another to be more reliable. Why two orchestration design is more reliable, isn’t it something strange? The Synch orchestration takes all the message exchange between systems, here is the area where most of the errors could happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, some other ideas?
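    To make the Resync bookkeeping concrete, here is a small C# sketch of the record and the three methods described in the third draft; the field and method names follow the post, but the types and signatures are illustrative assumptions rather than the project's actual code.

        // One row of the Resync storage (third draft).
        public class ResyncRecord
        {
            public string IdA { get; set; }          // entity id in system A
            public string IdB { get; set; }          // entity id in system B, null until B answers
            public string EntityType { get; set; }   // e.g. Order, Order Item
            public string Operation { get; set; }    // Create, Update, Delete
            public bool IsSyncStarted { get; set; }
            public bool IsSyncFinished { get; set; }
        }

        // The three main methods the post names for the Resync service.
        public interface IResyncService
        {
            void Save(string idA, string entityType, string operation);              // step 2: A's id arrives
            void Save(string idA, string idB, string entityType, string operation);  // step 4: B's id arrives
            void Resync();  // periodic scan that restarts unfinished synchronizations, giving up after N tries
        }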

    Read the article

  • What's New in Database Lifecycle Management in Enterprise Manager 12c Release 3

    - by HariSrinivasan
    Enterprise Manager 12c Release 3 includes improvements and enhancements across every area of the product. This blog provides an overview of the new and enhanced features in the Database Lifecycle Management area. I will dive into specific features in more depth in subsequent posts. "What's New?" In this release, we focused on four things: 1. Lifecycle Management Support for the new Database 12c - Pluggable Databases 2. Management of long running processes, such as a security patch cycle (Change Activity Planner) 3. Management of a large number of systems by · Leveraging new framework capabilities for lifecycle operations, such as the new advanced ‘emcli’ script option · Refining features such as configuration search and compliance 4. Minor improvements and quality fixes to existing features · Rollback support for single instance databases · Improved "OFFLINE" patching experience · Faster collection of ORACLE_HOME configurations Lifecycle Management Support for the new Database 12c - Pluggable Databases Database 12c introduces Pluggable Databases (PDBs), the brand new addition to help you achieve your consolidation goals. Pluggable databases offer unprecedented consolidation at the database level and native lifecycle verbs for creating, plugging and unplugging the databases on a container database (CDB). Enterprise Manager can supplement the capabilities of pluggable databases by offering workflows for migrating, provisioning and cloning them using the software library and the deployment procedures. For example, Enterprise Manager can migrate an existing database to a PDB or clone a PDB by storing a versioned copy in the software library. One can also manage the planned downtime related to patching by migrating the PDBs to a new CDB. While pluggable databases offer these exciting features, they can also pose configuration management and compliance challenges if not managed properly. Enterprise Manager features like inventory management, topology associations and configuration search can mitigate the sprawl of PDBs and also lock them to predefined golden standards using configuration comparison and compliance rules. Learn More ... Management of Long Running Datacenter Processes - Change Activity Planner (CAP) Currently, customers resort to cumbersome methods to create, execute, track and monitor change activities within their data center. Some customers use traditional tools such as spreadsheets, project planners and in-house custom built solutions. Customers often have weekly sync up meetings across stakeholders to collect status and updates. Some of the change activities, for example the quarterly patch set update (PSU) rollouts, are not single tasks but processes with multiple tasks. Some of those tasks are performed within Enterprise Manager Cloud Control (for example, patching) and some are performed outside of Enterprise Manager Cloud Control. These tasks often run for a long period of time and involve multiple people or teams. Enterprise Manager Cloud Control supports core data center operations such as configuration management, compliance management, and automation. Enterprise Manager Cloud Control release 12.1.0.3 leverages these capabilities and introduces the Change Activity Planner (CAP). CAP provides the ability to plan, execute, and track change activities in real time. It covers the typical datacenter activities that are spread over a long period of time, across multiple people and multiple targets (even target types).
    Here are some examples of change activity processes in a datacenter: · Patching large environments (PSU/CPU patching cycles) · Upgrading a large number of database environments · Rolling out compliance rules · Database consolidation to Exadata environments CAP provides user flows for Compliance Officers/Managers (incl. lead administrators) and Operators (DBAs and admins). Managers can create change activity plans for various projects and allocate resources, targets, and groups affected. Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. Administrators (DBAs) can identify their tasks and understand the context, schedules, and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features such as patch plans (or in some cases outside Enterprise Manager). Upon completion, compliance is evaluated for validation and the status of the tasks and the plans is updated. Learn More about CAP ... Improved Configuration & Compliance Management of a Large Number of Systems Improved Configuration Comparison: Get to the configuration comparison results faster for simple ad-hoc comparisons. When performing a 1 to 1 comparison, Enterprise Manager will perform the comparison immediately and take the user directly to the results without having to wait for a job to be submitted and executed. Flattened system comparisons reduce comparison setup time and reduce complexity. In addition to the previously existing topological comparison, users now have an option to compare using a "flattened" methodology. Flattening means removing duplicate target instances within the systems and removing the hierarchy of member targets. The result is that it is much easier to spot differences, particularly for specific use cases like comparing patch levels between complex systems like RAC and Fusion Apps. Improved Configuration Search & Advanced EMCLI Script Option for Mass Automation Enterprise Manager 12c introduces a new framework-level capability to script and stitch together multiple tasks using EMCLI. This powerful capability can be leveraged for lifecycle operations, especially when executing a task over a large number of targets. Specific usages of this include retrieving a qualified list of targets using Configuration Search and then using the result set for automation. Another example would be executing a patching operation and then re-executing it on targets where it may have failed. This is complemented by other enhancements, such as better usability for designing reusable configuration searches. In EM 12c Release 3, a simplified UI makes building ad-hoc searches even easier. Searching for missing patches is a common use of configuration search. This required the use of the advanced options, which are now clearly defined and easy to use. Perform "Configuration Search" using the EMCLI. Users can find and execute Configuration Searches from the EMCLI, which can be extremely useful for building sophisticated automation scripts. For example, run the search named "Oracle Databases on Exadata", which finds all database targets running on top of Exadata, then further filter the results by refining by options like name, host, etc.: emcli get_targets -config_search="Databases on Exadata" -target_name="exa%" Use this in powerful mass automation operations using the new emcli script option. For example, to solve the use case of finding all DBs running on Exadata that house E-Biz, and patching them:
    Create a Python script with emcli functions and invoke it in the new EMCLI script option shell. Invoke the script in the new EMCLI with the script option directly: $<path to emcli>/emcli @myPSU_Patch.py Richer compliance content: Now over 50 Oracle-provided Compliance Standards, including new standards for Pluggable Database, Fusion Applications, Oracle Identity Manager, Oracle VM and Internet Directory. 9 Oracle-provided Real Time Monitoring Standards containing over 900 Compliance Rules across 500 Facets. These new Real Time Compliance Standards cover both Exadata compute nodes and Linux servers. The result is increased Oracle software coverage and faster time to compliance monitoring on Exadata. Enhancements to Patch Management: Overhauled "OFFLINE" Patching experience: Simplified patch upload UI to improve the offline experience of patching. There is now a single-step process to get the patches into the software library. Customers often maintain local repositories of patches, sometimes called software depots, where they host the patches downloaded from My Oracle Support. In the past, you had to move these patches to your desktop and then upload them to Enterprise Manager's software library through the Enterprise Manager Cloud Control user interface. You can now use the following EMCLI command to upload multiple patches directly from a remote location within the data center: $emcli upload_patches -location <Path to Patch directory> -from_host <HOSTNAME> The upload process filters all of the new patches, automatically selects the relevant metadata files from the location, and uploads the patches to the software library. Other Improvements: Patch rollback for single instance databases: a new option in the Patch Plan to roll back the patches added to the patch plans. Upon execution, the procedure rolls back the patch and the SQL applied to the single instance databases. Improved and faster configuration collection of Oracle Home targets can enable more reliable automation at higher level functions like Provisioning, Patching or Database as a Service. Just to recap, here is a list of database lifecycle management features: * Red highlights mark features that are new or enhanced in Release 3. • Discovery, inventory tracking and reporting • Database provisioning including o Migration to Pluggable databases o Plugging and unplugging of pluggable databases o Gold image based cloning o Scaling of RAC nodes • Schema and data change management • End-to-end patch management in online and offline modes, including o Patch advisories in online (connected with My Oracle Support) and offline mode o Patch pre-deployment analysis, deployment and rollback (currently only for single instance databases) o Reporting • Upgrade planning and execution of the upgrade process • Configuration management including • Compliance management with out-of-box content • Change Activity Planner for planning, designing and tracking long running processes For more information on Enterprise Manager’s database lifecycle management capabilities, visit http://www.oracle.com/technetwork/oem/lifecycle-mgmt/index.html

    Read the article

  • The Middle of Every Project

    - by andrew.sparks
    I read a quote somewhere, "The middle of every successful project looks like a mess," or something to that effect. I suppose the projects where the beginning, middle and end are a mess are the ones you need to watch out for. Right now we are in the ramp-up of the maintenance/support teams at a big project in the Nordics. We are facing a year of mixed mode operations, where we have production operations and the phased rollout to new locations in parallel. The support team supports, and the deployment team deploys. As usual, the assumption right up to about a month or so before initial go-live was that the deployment team would carry the support. Not! Consequently we had a last minute scramble over the Christmas/New Year to fire up a support/maintenance team. While it is a bit messy and not perfect, the quality of the mess (I mean scramble) is not so bad. Weekly operational review with the operational delivery managers, written issue lists and assigned actions, candid discussions getting the problems on the table and documented, issues getting solved and moved off the table. So while the middle of a project might look like a mess (even the start), it is the methodical use of project management tools - checklists and scheduled communication points - that is helping us navigate out of the mess and bring it all under control.

    Read the article

  • Announcing Oracle Environmental Accounting and Reporting

    - by Theresa Hickman
    Oracle just launched a new product called Oracle Environmental Accounting and Reporting, designed to help companies track and report their greenhouse emissions at the operational level.  Companies around the world are facing increasing pressure to improve their energy efficiency and reduce waste in their operations. Also, new worldwide greenhouse gas legislation is putting added pressure on companies to report the impact of their emissions and energy usage on the environment. Today, companies undergo extensive and expensive data audits to maintain a ledger of up-to-date emissions factors that compare figures on an annual basis. Existing “ad hoc” approaches utilizing manual or niche solutions have a high operational cost and weak data security and auditability. The ideal solution is to embed environmental usage within the mainstream business operations, such as recording energy usage at the time of invoice entry, and then report on those results. This is precisely what Oracle Environmental Accounting and Reporting is designed to do. You can now capture environmental data either electronically or manually; convert it to greenhouse gas emissions; comply with mandatory and voluntary greenhouse gas reporting schemes; and identify opportunities for CO2 emissions and cost reductions.   Oracle recently acquired the intellectual property for this solution, which works with both Oracle E-Business Suite Release 12 and JD Edwards EnterpriseOne Release 9.0. For more information, visit Oracle Environmental Accounting and Reporting.

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .Net webservice, and in the process I want to add some unit and integration tests so future modifications are less risky. The work the library performs is to marshall the calls to the web service and do a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little more than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests - for example, I don't see any reason to mock away the web service - the work that's performed by the code I'm working on is very light; it's almost passing the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state - the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable. My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example: One test might be createUserReturnsValidUserId() and might go like this: public function createUserReturnsValidUserId() { // we're assuming a global connection to the service $newUserId = $client->CreateUser("user1"); assertNotNull($newUserId); assertNotNull($client->FindUser($newUserId)); $client->DeleteUser($newUserId); } So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test w/r/t the number of users in the system, for example). However this still seems pretty fragile - lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some options or ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.

    Read the article

  • Oracle Supply Chain at Pella Showcase, April 24-25, 2012

    - by Stephen Slade
    Nothing promotes a product like a great customer testimonial! For nearly a decade, Pella has been holding these 'open houses', or Showcases as they are called, to illustrate the use of Oracle products in their operations. Building custom windows and doors is not an easy task. With about a trillion combinations of unique sizes, colors and features available, getting a complex multi-unit custom order wrong is easy to do. I've been to a few of these Showcases, and each time I am impressed by the precision, best practices and lean disciplines enacted at Pella. Operations representatives and users at Pella demonstrate the way in which they use Oracle Supply Chain products to deliver fulfillment excellence. Orders are all custom made and delivered in about a week. Factory tours are conducted, and visitors have a chance to see Oracle in operation on the shop floor, driving informational flow and order accuracy in the 99+% range. It's a must-see for anyone considering expansion of their supply chain footprint. The event is April 24-25 in Pella, Iowa, outside Des Moines. This year, there is a separate track for CIOs and executives. Register at 1.800.820.5592 - ask for event 10281.

    Read the article

  • Multiple vulnerabilities in Firefox

    - by RitwikGhoshal
    The vulnerabilities below affect the Firefox component. Product and resolution for all entries: Solaris 10 (SPARC: 145080-12, X86: 145081-11). Each line gives the CVE, the description, and the CVSSv2 base score.
    CVE-2012-1960 - Information Exposure vulnerability - 5.0
    CVE-2012-1970 - Denial of Service (DoS) vulnerability - 10.0
    CVE-2012-1971 - Denial of Service (DoS) vulnerability - 9.3
    CVE-2012-1972 - Resource Management Errors vulnerability - 10.0
    CVE-2012-1973 - Resource Management Errors vulnerability - 10.0
    CVE-2012-1974 - Resource Management Errors vulnerability - 10.0
    CVE-2012-1975 - Resource Management Errors vulnerability - 10.0
    CVE-2012-1976 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3956 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3957 - Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability - 10.0
    CVE-2012-3958 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3959 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3960 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3961 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3962 - Arbitrary code execution vulnerability - 9.3
    CVE-2012-3963 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3964 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3966 - Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability - 10.0
    CVE-2012-3967 - Arbitrary code execution vulnerability - 6.8
    CVE-2012-3968 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3969 - Numeric Errors vulnerability - 9.3
    CVE-2012-3970 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3972 - Information Exposure vulnerability - 5.0
    CVE-2012-3974 - Resource Management Errors vulnerability - 6.9
    CVE-2012-3976 - Denial of Service (DoS) vulnerability - 5.8
    CVE-2012-3978 - Permissions, Privileges, and Access Controls vulnerability - 6.8
    CVE-2012-3980 - Improper Control of Generation of Code ('Code Injection') vulnerability - 9.3
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Understanding Service Compensation part of Industrial SOA series

    - by JuergenKress
    Some of the most important SOA design patterns that we have successfully applied in projects are described in this article. These include the Compensation pattern, the UI Mediator pattern, the Common Data Format pattern and the Data Access pattern. All of these patterns are included in Thomas Erl's book "SOA Design Patterns", and are presented here in detail, together with our practical experiences. We begin our "best of" SOA pattern collection with the Compensation pattern. Compensation is required in error situations in an SOA, because multiple atomic service operations cannot generally be linked with classic transactions - that would violate the principle of loose coupling. An error situation of this sort will occur particularly if service operations are combined into processes or new services during orchestration, or by applying the Composite pattern, so that the transaction bracket has to be expanded. We need mechanisms to undo the effects of individual services (the status changes in the overall system) and to ensure that a consistent system state is maintained at all times, so as to preserve system integrity. For the Compensation pattern, we would like to address the following questions: Why is compensation important in relation to SOA? How is the topic of compensation linked with the topic of transactions? What are the challenges with regard to compensation... Read the full article in the Service Technology Magazine or at OTN. Share your comments and feedback on the Industrial SOA series by using the hashtag #industrialsoa. Missed an article of the Industrial SOA series? Visit the overview at OTN. SOA & BPM Partner Community - for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: Industrial SOA,SOA,SOA Service Compensation,Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress
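    To make the mechanism concrete, here is a minimal sketch - not taken from the article, written in C# purely for illustration - of what compensation does: each successfully completed step registers an undo action, and when a later step fails the registered actions run in reverse order. The service operation names mentioned afterwards are hypothetical.

    using System;
    using System.Collections.Generic;

    public class CompensationScope
    {
        private readonly Stack<Action> _compensations = new Stack<Action>();

        // Register the undo action for a step that has just completed successfully.
        public void OnSuccess(Action compensation) => _compensations.Push(compensation);

        // Run the registered undo actions in reverse order after a later step fails.
        public void Compensate()
        {
            while (_compensations.Count > 0)
                _compensations.Pop()();   // each compensating call should itself be idempotent
        }
    }

    A caller would, for example, invoke a hypothetical ReserveInventory operation, register the matching CancelReservation call via OnSuccess, and invoke Compensate() if a subsequent service operation throws - restoring a consistent overall system state without a distributed transaction.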

    Read the article

  • Relationship between Repository and Unit of Work

    - by NullOrEmpty
    I am going to implement a repository, and I would like to use the UoW pattern, since the consumer of the repository could perform several operations and I want to commit them at once. After reading several articles on the matter, I still don't get how to relate these two elements; depending on the article, it is done in one way or another. Sometimes the UoW is something internal to the repository:

    public class Repository
    {
        UnitOfWork _uow;

        public Repository()
        {
            _uow = IoC.Get<UnitOfWork>();
        }

        public void Save(Entity e)
        {
            _uow.Track(e);
        }

        public void SubmittChanges()
        {
            SaveInStorage(_uow.GetChanges());
        }
    }

    And sometimes it is external:

    public class Repository
    {
        public void Save(Entity e, UnitOfWork uow)
        {
            uow.Track(e);
        }

        public void SubmittChanges(UnitOfWork uow)
        {
            SaveInStorage(uow.GetChanges());
        }
    }

    Other times, it is the UoW that references the repository:

    public class UnitOfWork
    {
        Repository _repository;

        public UnitOfWork(Repository repository)
        {
            _repository = repository;
        }

        public void Save(Entity e)
        {
            this.Track(e);
        }

        public void SubmittChanges()
        {
            _repository.Save(this.GetChanges());
        }
    }

    How are these two elements related? The UoW tracks the elements that need to be changed, and the repository contains the logic to persist those changes, but... who calls whom? Does the last option make more sense? Also, who manages the connection? If several operations have to be done in the repository, I think using the same connection, and even the same transaction, is more sound - so maybe putting the connection object inside the UoW, and the UoW inside the repository, makes sense as well. Cheers
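    For comparison, one common arrangement - sketched below as an illustration rather than as the definitive answer, with illustrative type names - is to let the Unit of Work own the connection and transaction and do the change tracking, while repositories are created against the UoW and only register changes; the consumer then commits once at the end.

    using System;
    using System.Collections.Generic;
    using System.Data;

    public class UnitOfWork : IDisposable
    {
        private readonly IDbConnection _connection;
        private readonly IDbTransaction _transaction;
        private readonly List<object> _pendingChanges = new List<object>();

        public UnitOfWork(IDbConnection connection)
        {
            _connection = connection;
            _connection.Open();
            _transaction = _connection.BeginTransaction();
        }

        // Repositories call this instead of writing to storage themselves.
        public void Track(object entity) => _pendingChanges.Add(entity);

        public void Commit()
        {
            // A real implementation would write _pendingChanges through
            // _connection/_transaction here before committing.
            _transaction.Commit();
        }

        public void Dispose()
        {
            _transaction.Dispose();
            _connection.Dispose();
        }
    }

    public class UserRepository
    {
        private readonly UnitOfWork _uow;
        public UserRepository(UnitOfWork uow) { _uow = uow; }

        // Domain-level operation; persistence is deferred to the UoW.
        public void Save(object user) => _uow.Track(user);
    }

    With this shape the consumer calls the repositories, the repositories call the UoW, and only the UoW touches the connection - so several repository operations naturally share one connection and one transaction.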

    Read the article

  • Why is Javascript used in MongoDB and CouchDB instead of other languages such as Java, C++?

    - by startup007
    I asked this question on SO but was advised to try here, so here it goes: my understanding of Javascript so far has been that it is a client-side language that captures events and makes a web page dynamic. But on reading the comparison between MongoDB and CouchDB, I noticed that both use Javascript, which makes me wonder about the reason for choosing JavaScript over other conventional languages. I guess I am trying to understand the role of JavaScript and its advantages over other languages. Update: I am not asking about the languages / drivers supported by the two databases. The comparison says: "Both CouchDB and MongoDB make use of Javascript. CouchDB uses Javascript extensively including in the building of views. MongoDB also supports running arbitrary javascript functions server-side and uses javascript for map/reduce operations." My lack of understanding pertains to why JavaScript is being used at all for the back-end work. Why is it preferred for building views in CouchDB, or for map/reduce operations? Why were C/C++ or Java not used? What are the advantages of using JavaScript for such back-end work?

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from get-MSB-set to some more complicated bitwise transformations) and I mean to release it as free software. I'm a bit confused about one design aspect of the library, though. Many of the methods/transformations in the library come in different endianness variants. A simple example is a getBitAt method that regards index 0 as either the least significant bit or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped about how best to package the library. Should methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Or should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could follow in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial-transmission endianness, just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or anything like that.
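    As a point of comparison, here is a hedged sketch of the two API shapes being weighed, using the names from the post; the exact bit-order semantics (index 0 = LSB vs. index 0 = MSB of a 32-bit value) are an assumption made purely for illustration.

    public enum Endianness { Little, Big }

    public static class Bits
    {
        // Option 1: one method, with the bit order passed as an enum parameter.
        public static bool GetBit(uint value, int index, Endianness order) =>
            order == Endianness.Little
                ? ((value >> index) & 1u) == 1u
                : ((value >> (31 - index)) & 1u) == 1u;
    }

    // Option 2: identically named methods on separate static classes.
    public static class LSB
    {
        public static bool GetBit(uint value, int index) => ((value >> index) & 1u) == 1u;
    }

    public static class MSB
    {
        public static bool GetBit(uint value, int index) => ((value >> (31 - index)) & 1u) == 1u;
    }

    The enum form keeps a single entry point and suits call sites that choose the order dynamically (Bits.GetBit(x, 3, Endianness.Big)), while the separate-class form makes the chosen semantics visible in the call itself (MSB.GetBit(x, 3)) and avoids a branch per call.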

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >